Experiences at MozCamp Latin America

MozCamp LatAm ended a week ago and I'm still trying to catch up on sleep. It was an intense experience, much more so than any other Mozilla event I have attended before. It felt like every member of the Latin American / Hispanic community arrived determined to have a fantastic time and to squeeze everything out of every minute spent in Buenos Aires. Well, I think they succeeded, and then some!

Here is a short recap of my MozCamp experiences:

MDN Hack Day

The MDN Hack Day took place the day before MozCamp, on Friday. There were several interesting talks about apps, developer tools and the Add-ons SDK, among other topics.

Unfortunately, most of the talks were given in English, which was a problem for many of the locals who attended the event. Although one would expect people who work in software to have a reasonable understanding of English, this varies a lot across Latin America. I think we need to make a bigger effort to find speakers who can present in the local language, even if they are not first-hand experts in the topics being covered.

The afternoon was opened up for free hacking. At first nobody knew what to do, but then we got organized, formed groups around different topics, and started working on small projects that could be finished during the session. I spent some time demonstrating how to build a userscript from scratch using the Firefox developer tools. The Inspector, Scratchpad and the Scriptish extension let you create good userscripts in a matter of minutes; I learned this while preparing my MozCamp presentation.

Then Hernán joined us and proposed an idea we could work on as a team: an add-on built with the SDK that imitates Instagram. Hipsterizer lets you right-click on an image and upload it to an image service after applying some cheesy photo effect. At the moment it only applies black and white and uploads the image to imgur, but it could apply any of the effects in the filtrr library, which is what we use. The filters are applied using a canvas, so we are squarely on web standards. Want to contribute? We are also willing to sell the project for 7 billion dollars.

MozCamp!

As usual, MozCamp kicked off with a marathon of excellent keynotes, followed by a tightly packed two-day schedule. The tracks were very well organized, so I didn't have much trouble deciding which talks to attend. The venue was very good, although it did take a fair amount of walking to switch rooms. Fortunately, the weather wasn't a problem at all.

Early on Sunday there was a soccer match for the few of us willing to get up at such a terrible hour and run around for a while under the morning sun. We ended up cramming around 20 players into a 5-a-side pitch. It was great! I had a lot of fun, and all the running gave me energy for the rest of the day. I should exercise more when I travel.

Madalina – best goalkeeper in the world

I gave my talk on the second day. I normally get very nervous speaking in public, and the circumstances didn't help one bit. Since it was the second day of the event, things started a little late, and the Q&A session before my talk ran longer than expected, so everything was already about half an hour behind. Then there was a problem with the displays in the room where I was going to present, and it took an eternity to resolve (I think it was about 15 minutes in real time). I had to give the talk very quickly and then steal some time from lunch to be able to answer questions.

Giving my talk – I move my hands a lot

Despite everything, I think the talk went well. The demo ran without problems, which is saying a lot given the poor Internet connection we had the whole time. I got a good cheer when I said I was going to give the talk in Spanish. My choice of language was going to depend on who was in the audience, but in the end I made the decision because of the time constraints. There was only one Brazilian in the audience and, luckily, he understood Spanish well enough.

Community Work Day

After MozCamp there was an additional day dedicated to coordinating the Mozilla Hispano project.

This is something I think is fairly unique to the Hispanic community. Mozilla Hispano is a meta-community made up of several Spanish-speaking communities, which span numerous countries and form potentially one of the largest groups of contributors in the world. The advantage of having a common language is that most initiatives can be coordinated at the Mozilla Hispano level, instead of individually in each community.

Communication at MozCamp happens mostly in one direction, a point Kevin Dangoor covered very well. The Work Day gave the communities a two-way channel to align their visions and decide what they want to do. The sessions were very open and aimed at sparking discussion and debate. Everyone had the chance to give their opinions, sometimes emotionally but never aggressively. This is something I would like every MozCamp to have.

I was very interested in the Mozilla Hispano Labs session, for obvious reasons, and I found myself talking a lot in it. The group's biggest concerns had to do with finding the right people to contribute to their projects, and with keeping them interested. This reminded me of many of the experiences I have had with the add-on developer community and the AMO Editors group, so I had a lot to say on the subject. It became clear that the group was not doing enough to project its image and attract developers. A series of action items came out of the discussion: creating a separate mailing list, updating the main Labs page, and creating a developer blog (which I offered to contribute to). Well into the discussion, Felipe – who was leading it and currently coordinates MH Labs – said something I found funny and curious. It was something like "Well, it looks like we can't code any of these solutions". When you have a hammer, everything looks like a nail :).

I am very glad to have reconnected with the MH community, and it makes me feel a little guilty that they do so much for Mozilla in their free time while I often feel overwhelmed just doing the work I am paid for. I am also a bit embarrassed that there is no Mozilla community in Costa Rica, and that I should have done something about it a long time ago. So I am taking some initial steps in that direction, and I am determined to form at least a small group that represents my country in MH. I will post updates about this in the near future.

As mentally and physically exhausting as it was, this MozCamp is the best Mozilla event I have attended. Every hour I went without sleep was completely worth it. Bringing together so much passion, intellect and strength under one roof is priceless. I love the direction MozCamps as a whole are taking, and I hope they keep growing this way.

And, finally, to everyone who worked on organizing this event: THANK YOU!


Experiences at MozCamp Latin America

MozCamp LatAm ended a week ago and I’m still trying to catch up on sleep. It was an intense experience, much more so than any other Mozilla event I’ve attended. It felt like every member of the Latin American / Hispanic community had their mind set on having a fantastic time and making the best of every minute they spent in Buenos Aires. Well, I think they succeeded!

Here’s a recap of my MozCamp experiences:

MDN Hack Day

The MDN Hack Day was held the day before MozCamp, on Friday. There were a number of interesting talks about apps, developer tools and the Add-ons SDK, among others.

Unfortunately, most talks were given in English, which was a challenge for many locals who attended the event. While one would expect people working in software to have an above-average understanding of English, this varies significantly across Latin America. I think we need to make a bigger effort to find speakers who can present in the local language, even if they are not first-hand experts in the subjects being discussed.

The afternoon was opened for free hacking. People at first had no idea what to do, but then we got organized and formed groups around different subjects and started working on small projects we could get done during the session. I spent some time demoing how you can build a userscript from scratch using the developer tools that now ship with Firefox. Inspector, Scratchpad and the Scriptish extension can help you create effective userscripts in a matter of minutes, as I discovered while preparing my MozCamp talk.
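
For reference, a Scriptish/Greasemonkey-style userscript is just a JS file with a metadata block on top. This is a minimal sketch of the kind of script I demoed, not the actual one from the session; the name and @include pattern are made up:

// ==UserScript==
// @name        Highlight Headings
// @namespace   http://example.com/userscripts
// @include     http://example.com/*
// ==/UserScript==

// Scriptish runs this after the page loads, so the DOM is already available.
var headings = document.querySelectorAll("h1, h2");
for (var i = 0; i < headings.length; i++) {
  headings[i].style.backgroundColor = "#ffff99";
}

The nice part of the workflow is that you can prototype the body of the script in Scratchpad against the live page, and only add the metadata block once it does what you want.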

Later, Hernán joined us and came up with an idea we could work on as a team: an SDK add-on that imitates Instagram. Hipsterizer lets you right-click on an image and upload it to an image service after applying some lame photo effect. At the moment it only does black and white and uploads to imgur, but it can potentially use all effects supported by the filtrr library, which is what we’re using. The image filters are applied using canvas, so it is web standards all the way down. Want to contribute? We’re also willing to sell it for 7 billion dollars.
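
The SDK side of this is surprisingly small. This isn't the actual Hipsterizer code, just a rough sketch of how the SDK's context-menu module can hook the right-click; the filtering and imgur upload are left out:

var contextMenu = require("context-menu");

contextMenu.Item({
  label: "Hipsterize this image",
  // Only show the menu item when the user right-clicks on an <img>.
  context: contextMenu.SelectorContext("img"),
  // The content script runs in the page and sends the image URL back.
  contentScript: 'self.on("click", function (node) {' +
                 '  self.postMessage(node.src);' +
                 '});',
  onMessage: function (imageUrl) {
    // The real add-on draws the image on a canvas, applies a filtrr
    // effect, and then posts the result to imgur.
    console.log("Hipsterizing " + imageUrl);
  }
});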

MozCamp!

As usual, MozCamp began with a marathon of excellent keynotes, followed by a tightly packed two-day schedule. The tracks were fairly well organized, so I didn’t have much trouble figuring out which talks to attend. The venue was nice, though it took a lot of walking almost every time you needed to switch rooms.

Early on Sunday there was a soccer match for those few of us who were willing to get up at an ungodly time and run around under the morning sun. We ended up packing 20 or so players in a small field usually meant for 5-on-5 games. It was a blast! I had a great time and running around energized me for the rest of the day. I should exercise more when I travel.

Madalina – best goalkeeper ever

I gave my talk on the second day. I’m usually very nervous about speaking in public, and the circumstances didn’t help at all. This being the second day, things started a bit late, and the Q&A session right before my talk ran longer than scheduled, so we were a good half an hour behind. Then there was a problem with the video displays in the room I was giving the talk in, and that took what seemed like an eternity to fix (I think it took about 15 minutes in reality). So, I had to rush the talk and then take some extra time from the lunch break to do Q&A.

Giving my add-ons talk – I move my hands too much

Overall, I think the talk went well. The demo went without a hitch, which is saying a lot given the poor Internet connection we had throughout the event. I got a good cheer when I told people I was giving the talk in Spanish. My choice of language initially depended on who would be in the crowd, but in the end I just went with Spanish because of all the delays. There was only one Brazilian in the crowd, and fortunately he understood Spanish well enough.

Community Work Day

Following MozCamp, there was a separate day dedicated to Mozilla Hispano project coordination.

This is something that I think is fairly unique to the Hispanic community. Mozilla Hispano is a meta community composed of several Spanish-speaking communities, spanning numerous countries and potentially one of the largest groups of contributors in the world. Having a common language means that most efforts can be coordinated at the Mozilla Hispano level, rather than the local community level.

The communication at MozCamp is fairly unidirectional, a point Kevin Dangoor covered very well. The Work Day gave the Hispanic communities a bidirectional channel to align their visions and figure out what they want to do. The sessions were very open-ended and meant to encourage debate. Everyone had their chance to speak their mind, sometimes emotionally but never aggressively. I’d like to see MozCamp have some of that.

I was very interested in the Mozilla Hispano Labs session, for obvious reasons, and I found myself talking a lot in it. Their biggest concerns were focused around getting the right people to contribute to their projects, and keeping them interested. This paralleled many of the experiences I’ve had while working with the add-on developer community and the AMO Editors team, so I had plenty to say. It became evident that the group wasn’t doing enough to project its image and engage developers. A number of important action items came out of this discussion: setting up a separate mailing list, updating the Labs landing page, and setting up a developer blog (where I volunteered to contribute). About halfway through the discussion, Felipe – who was leading the discussion and currently manages MH Labs – said something like “Well, it looks like we can’t code any of these solutions!”. When you have a hammer, everything looks like a nail :).

I was very happy to reconnect with the MH community, and it makes me feel a tad guilty that they do so much for Mozilla in their spare time while I often feel overwhelmed just doing the work I’m paid for. I’m also a bit embarrassed that there’s no Costa Rican Mozilla community whatsoever, and that I should’ve done something about it long ago. So, I’m taking some initial steps in that direction, and I’m now determined to form at least a small group that can represent my country in MH. Expect updates about this in the near future.

As mentally and physically exhausting as it was, this MozCamp is the best Mozilla event I’ve attended so far. Every hour I spent not sleeping was totally worth it. Bringing so much passion, intellect and strength under one roof is priceless. I love the direction MozCamps are taking in general, and I hope they continue growing as they have.

To all the people who worked organizing this event: THANK YOU!


A couple of add-ons that could use forks / developers

I’m preparing to write a few posts for the Add-ons Blog in which I’ll be covering Firefox 3.6 compatibility. Specifically, I’ll be talking about add-ons that are still popular in 3.6 and are holding some users back since they don’t have recent updates. Luckily, I’ve discovered that most of these add-ons have suitable replacements, so this is more a matter of telling users where to find them.

I mentioned this in passing in a dev.planning discussion, and someone brought up a couple of add-ons that are also in need of some developer love and that I know won’t be mentioned in my upcoming posts. I was especially interested in them because they provide useful accessibility features and they need support for Thunderbird and SeaMonkey, which we admittedly pay little attention to.

Here they are:

  • NoSquint. It currently supports Firefox. Apparently, older versions of this add-on supported SeaMonkey and Thunderbird, which is what we want. It is licensed under the GPL.
  • Quote Colors. Works on older versions of Thunderbird and SeaMonkey, but it hasn’t been updated for a while. Judging by its usage stats, forcing compatibility makes it work, which would make it a fairly trivial fork. It is licensed under the MPL.

My recommendation when forking add-ons is to always try to approach the original developer first. The ideal case is that you can work together and get the new version out under the same add-on listing. However, if that doesn’t work out, creating a new listing for the fork is the next best thing. Choose a name that relates to the original – so that users can find it – but not so close that it’s confusing. Something like “Quote Colors Updated” works.

Let me know if you know of any active forks or if you’re interested in taking any of these.

Thanks!


All green!

Today we reached a very significant milestone.

Review queue status

This is a screenshot of the AMO Editors dashboard, which we use to track the status of the AMO review queues. For the first time since I can remember, we are all green!

What does this mean? It means that all add-ons currently waiting in the queues have been waiting for less than 5 days. Also noteworthy is the fact that all queues are down to the low double digits, and one is even in the single digits! They are normally in the hundreds, so this is quite impressive.

All credit goes to our add-on review community, the AMO Editors. Their continuous dedication and incredible competitiveness have brought us to a point that we could only dream of. And now to clear the rest of the queues… :D

Thank you!


My experience porting an add-on to Mobile Firefox

I’ve been meaning to experiment more with Mobile Firefox for a while, but I’ve had very little time to work on my own add-ons, which are the best source of real-world development experience for me. Since I had received a couple of requests to port Remote XUL Manager to mobile, and this is a fairly simple extension, I thought this would be the ideal learning experience.

Since I already had a working desktop extension, the amount of coding needed was fairly small. However, I did encounter several difficulties along the way, which I think are worth documenting. Here’s how it went.

Documentation

The first thing I looked for was documentation. It’s not hard to find the Mobile documentation page if you know that the Mozilla Developer Network exists. That’s a good hub with useful tidbits, although it could use some cleanup and consolidation. Many of its useful links lead to the Mozilla wiki, outside of MDN, and they are written in the format you’d expect for the wiki, not MDN. The Fennec Extensions page covers some of the basics, but it fails to explain how to create a basic overlay; something many developers need to know to get started is which chrome path to overlay. I believe it’s the same path as in Firefox, but I ended up not using an overlay at all.
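
If the path really is the same as on desktop (again, I haven't verified this), the chrome.manifest registration would look just like a regular Firefox overlay, with only the package and file names being your own. A hypothetical example:

# Hypothetical entry; "myaddon" and the overlay file name are placeholders.
overlay chrome://browser/content/browser.xul chrome://myaddon/content/mobile-overlay.xul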

UI

Figuring out the UI for my port was one of the most difficult steps. In Firefox I just add a menu item that opens the main management window. My extension also has a few extra windows for advanced features, but early on I decided not to support them in the mobile version.

The only UI area you really have available is the content, so I decided to create an about:remotexul XUL page. This requires registering a JS component instead of an overlay, and the component just tells Firefox to redirect about:remotexul to a XUL page in my chrome package. So, instead of clicking on a menu item, the user has to type this URL. Not terribly user-friendly, but not that bad either. I considered adding a button somewhere, but gave up almost immediately.
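
For those curious, such an about: redirector component is mostly boilerplate. This is not the actual Remote XUL Manager source, just a rough sketch of how these components looked at the time; the class ID and chrome URL are placeholders:

Components.utils.import("resource://gre/modules/XPCOMUtils.jsm");
Components.utils.import("resource://gre/modules/Services.jsm");

function AboutRemoteXUL() {}

AboutRemoteXUL.prototype = {
  classID: Components.ID("{11111111-2222-3333-4444-555555555555}"),
  QueryInterface: XPCOMUtils.generateQI([Components.interfaces.nsIAboutModule]),

  getURIFlags: function (aURI) {
    // Let the about: page run privileged script.
    return Components.interfaces.nsIAboutModule.ALLOW_SCRIPT;
  },

  newChannel: function (aURI) {
    // Redirect about:remotexul to the XUL page inside the chrome package.
    return Services.io.newChannel(
      "chrome://remotexulmanager/content/manager.xul", null, null);
  }
};

var NSGetFactory = XPCOMUtils.generateNSGetFactory([AboutRemoteXUL]);

The component and its about: contract (@mozilla.org/network/protocol/about;1?what=remotexul) then get registered with component and contract lines in chrome.manifest.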

Which leads me to a few questions about mobile add-on development. Firstly, how can we expect add-ons to be built for mobile if there is no place for them in the UI? Do we only want add-ons that work silently in the content? And (more importantly IMO) are we applying the same philosophy for tablets? It seems that, at least in terms of add-ons and UI space, mobile phones and tablets are entirely different playgrounds.

My take on it is that we need an Add-on Bar for both. The panel on the right-hand side can afford one more button, and this button could toggle another panel where add-on buttons can live. A similar approach could be used for tablets, but in their case I think the toolbar could be enabled by default (provided it has buttons in it). Having no way to add UI is a gigantic obstacle in the way of add-on creation for mobile. The push to improve performance through more native UI toolkits is another looming obstacle that we’ll need to tackle.

As a minor side note, I noticed some drawing oddities when panning my XUL document in the content area. I guess XUL content hasn’t been given enough attention on mobile, so if you want to take this same approach, you might want to consider using HTML instead.

Testing environment

Getting Mobile Firefox set up was much more difficult than I expected.

Since there’s a desktop emulator of Mobile Firefox and I didn’t have any supported mobile devices at the time, this was my only way to test. So I went to the main Mobile page to download the emulator. Oddly enough, this page offered (and continues to offer) version 4.0.1, which is a few releases old by now. The “See all our channels” link takes you to the desktop Firefox channels page, and changing “firefox” to “mobile” in the URL leads to a 404. So I downloaded 4.0.1, and one of the first things I tried was checking for updates. There were none.

Hmm, OK, so I needed to do a little hunting for the emulator. Luckily, the MDN page has a direct link to the FTP site, with all builds for all platforms. So I downloaded the Mobile Firefox 6 emulator for Mac OS. I fired it up and everything looked OK until I tried to load a page: every page came up blank. So I went to the #mobile channel on IRC and asked around. I was told to use Linux instead… but I also got a useful bug link. It turns out that the switch to multiple processes for Mobile Firefox broke the Mac OS build, and I guess it’s not a big enough priority to get sorted out. I suppose this bug will get more attention once desktop Firefox goes multi-process for content.

At this point I realized that, since I had gone the about: page route for the UI, I could test and debug most of the port in desktop Firefox. Then the Mozilla All Hands came along and I got a new Asus Android tablet for testing, so no more emulation for me :).

The build-install-test cycle was really slow for me, though, since the easiest way to get the extension installed is to upload it to some public URL and install it using InstallTrigger (a bit of JS magic used to install add-ons from HTML). If you can use the emulator in your system, though, file:// URLs work fine as a shortcut.
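
Concretely, the install link on that public page is just a call to InstallTrigger from regular page script, something like the following; the add-on name and URL are placeholders:

// Opens the standard add-on install prompt when called from a web page.
InstallTrigger.install({
  "My Mobile Add-on": {
    URL: "https://example.com/downloads/my-mobile-addon.xpi"
  }
});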

Tools

I also felt the lack of development tools, some of which are key to add-on development.

I commonly used the FUEL library in my add-ons. Since it isn’t supported on mobile, I had to get rid of that code. Luckily, there wasn’t much I needed to change. There is ongoing work to port Jetpack to mobile, so that will probably become the best set of libraries available for add-ons in the future. For now I’ll just plug into the components directly.
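
As an example of what plugging into the components directly means, here is a FUEL preference read next to its plain XPCOM equivalent; the preference name is made up:

// FUEL (desktop only at the time):
// var enabled = Application.prefs.getValue("extensions.myaddon.enabled", true);

// The same read using the preferences service directly, which also works on mobile.
// Note that getBoolPref throws if the preference has no value, so ship a default.
var prefs = Components.classes["@mozilla.org/preferences-service;1"]
                      .getService(Components.interfaces.nsIPrefBranch);
var enabled = prefs.getBoolPref("extensions.myaddon.enabled");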

DOM Inspector is another tool I use heavily. But, is it even possible to have something like DOMi on mobile? Maybe on the tablet version, but certainly not on a phone… However, I think there are ways around this. It should be possible to integrate DOMi into the emulator. You can have it open in a separate window and inspect the mobile UI from there. Another random idea I have is to wrap the Mobile Firefox emulator as an extension you can install in desktop Firefox, so you get the benefits of all development tools that are already part of Firefox. I’m not sure if that’s feasible, though.

Conclusions

There are still many rough edges when diving into mobile add-ons for Firefox, some of which I think are not hard to fix. But there are also hard problems to solve, and hard questions to ask: does it make sense to support such a wide variety of add-ons in a mobile phone browser? And what about tablets?

This is an almost exhaustive list of the problems I encountered while porting my add-on, so I hope it isn’t taken the wrong way. The truth is that most of my code worked without any modifications. It is very satisfying to be able to support even more users of my add-on, and to have one of the few desktop+mobile add-ons around. Plus, tinkering around with code, failing and then succeeding is just plain fun :).

Remote XUL Manager 1.1 is the first version that supports mobile, and it’s currently waiting to be reviewed (irony!). You can give it a try on this page.

I’d love to hear what others have to say about trying to port add-ons to mobile. If you have any similar stories, please share them in the comments.


Keeping add-ons compatible in the rapid release process

I began this discussion in the newsgroups today: Keeping add-ons compatible in the rapid release process. It is mostly aimed at Mozilla developers, but it should interest add-on developers just the same. We’re establishing a better system to communicate breaking changes, which should make it easier and quicker to identify what needs to be added to the compatibility validator for the automatic version bumps.

Discuss!


Running multiple Firefox instances on Mac OS

I visit my Google Plus Stream every now and then to see if there’s anything useful in it. Unsurprisingly, most of the time there isn’t. I blame primitive filtering and a lack of users. But one thing that caught my eye was a post by sheppy, asking about running multiple versions of Firefox easily. While most add-on developers should have little problem with this, it’s kinda tricky to get right on Mac OS. The lack of the “shortcut” concept is problematic, so running a matrix of multiple Firefox profiles and multiple Firefox versions (at the same time) can be troublesome, especially now that we have multiple channels instead of just “what we have” and “what’s coming”. Here’s what I do.

I use the Automator application that is part of all default Mac OS installations. This is a tool that allows you to create automated tasks, as you might have guessed. Then I do something like this:

  1. Open Automator.
  2. Choose the Application template.
  3. From the long list in the second column, double click on Run Shell Script. This will add a box on the main panel, with a text input.
  4. Enter this: /Applications/${APPLICATION}.app/Contents/MacOS/firefox-bin -P ${PROFILE} > /dev/null &
  5. Replace ${APPLICATION} with the name of the application you want to run (Firefox, Aurora, Nightly, etc.). Replace ${PROFILE} with the name of the profile you want to run, or leave it blank to open the profile manager.
  6. (Optional) Replace /dev/null with a file path to save the console output to a file.
  7. Save the application to the Desktop.

That’s it. You now have a Windows-style shortcut that opens a specific application and profile combination. Repeat as necessary and you’ll have all the shortcuts you need, which you can open in parallel (except opening the same profile with multiple applications, which would be a bad idea). My desktop has a matrix of profiles and applications that are very convenient for my testing needs. You may want to have them in the dock or elsewhere, but that’s a matter of preference. You can also change their icon, which is not really worthwhile IMO.
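
For example, a shortcut that opens an Aurora build with a profile named aurora-test (both names are just examples) would contain this single line:

/Applications/Aurora.app/Contents/MacOS/firefox-bin -P aurora-test > /dev/null &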

You can copy and paste the applications and then open them in Automator to edit their parameters. You need to open Automator and then choose the Open option; there’s no contextual edit option, which is a bit of an annoyance.

If anybody has a different system to deal with this on Mac OS, I’d love to hear it.


Version numbers and add-on breakage

Gerv started a fairly intense discussion about the new rapid release cycle, from the perspective of browser versions and their meaning. As expected, many have replied that the discussion is silly and version numbers are meaningless. This is true for most software developers, and it should be true for most web developers. In software, we have been learning that version numbers have been hijacked by marketing and we shouldn’t pay that much attention to them from the technical perspective. We have branches and revision ids for that anyway. On the web, what should matter are the features the browser offers, not the browser itself or its version. Browser and version detection are frowned upon, and most web devs worth their salt avoid it when possible.

People saying that this discussion is pointless neglect that Firefox is one more thing: it’s a platform, and most add-on developers rely on it. Version numbers matter to add-on developers because the add-on compatibility system relies on the application version, not on features. And since the features are pretty much every access point in the platform, doing feature detection is not realistic. So, if you’re an add-on developer and you want to keep your add-on up to date, you’ll have to test and increase its compatibility every six weeks, potentially having to make code changes.
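
For context, compatibility lives in each add-on's install.rdf as an explicit application version range, along these lines (the version values here are only an example):

<em:targetApplication>
  <Description>
    <!-- Firefox's application ID -->
    <em:id>{ec8030f7-c20a-464f-9b0e-13a3a9e97384}</em:id>
    <em:minVersion>4.0</em:minVersion>
    <em:maxVersion>6.*</em:maxVersion>
  </Description>
</em:targetApplication>

Every release that falls outside maxVersion means another round of testing and another compatibility bump, which is exactly the treadmill described above.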

On AMO, we have implemented a system where we automatically detect which add-ons are compatible with the latest Aurora version (the version that is between 12 and 6 weeks away from release) and then bump the compatibility of those that are. All developers of the tested add-ons should get an email explaining that their add-on was upgraded or, if it wasn’t, why. This has been fairly successful, and we have a high compatibility percentage (relative to usage) at the moment (it’s even higher if you ignore the .NET Assistant, which could be compatible by now). However, there are still many popular add-ons that aren’t compatible, especially those hosted elsewhere. And the burden on those who develop add-ons with binary components is pretty much constant, because they’ll have to update their add-ons every 6 weeks, with no exception.

So, while only a small percentage of add-on usage consists of add-ons that are incompatible (hovering somewhere between 10% and 20%, I think), the probability that one of our add-on users has at least one incompatible add-on is much higher. Having to roll the dice every 6 weeks to see whether your add-ons are still compatible puts a great deal of stress and dissatisfaction on users. And as Colin Coghill comments in Gerv’s post:

If add-ons break randomly every few weeks, effectively FF no longer has add-ons.

Version numbers have a real impact on our users, then, and they should be taken very seriously. Now, would the previous releases have worked as minor version increases? I think so, yes. The breaking compatibility changes have been fairly minor so far, with the exception of binary add-ons. Even if we hadn’t done any communication about it, I’m fairly sure the problems would have gone mostly unnoticed by both developers and users. Of course, that’s only because the changes have been minor so far; that won’t be the case for every release, and that’s a big concern. For example, Firefox 7 will remove a couple of JSON parser functions that are heavily used by add-on developers. We’ll see how that goes.

So, what are the alternatives? There are a number of proposals, each with its own advantages and limitations. I personally think that going back to the initial idea of 3-month-long release tracks (instead of 6 weeks) would be a major improvement. Nils Maier (author of DownThemAll!) has a very detailed proposal in the comments on Gerv’s blog (unfortunately, I can’t link to the comment directly), which consists of having planned minor and major releases, where major releases happen only twice a year. I like that one as well.

As usual, I expected this to be a short post and then it grew into a monster. Oh well. I’ll close by saying that we’re working on a number of ideas to make this release cycle as smooth and stress-free as possible for add-on developers. Most releases won’t change anything you care about, so you shouldn’t need to worry about them. I strongly recommend that all of you track the Add-ons Blog, which is becoming an increasingly important source for updates. I also recommend that you give the Add-on SDK a shot. If your add-on can be implemented using the SDK, you’re better off doing so.


Back from Beijing

I returned on Sunday from Beijing, where I presented at the Mozilla Developer Conference (warning: all-Mandarin page). Twice, in fact. I gave a presentation about the Add-ons World (available here), and ended up stepping in for Paul Rouget, who couldn’t make it. His presentation on HTML5 is really great and it didn’t take much effort on my part to present it. All of his demos are prerecorded, so the presentation is pretty much snafu-free, unlike my presentation or Myk’s, both of which had (minor) technical difficulties during the live demos. I tried following some of Chris Heilmann’s recommendations this time around, but I didn’t go as far as recording my demos. Oh, well.

My presentation is meant to be a starting point for those interested in add-on development. It contains many links on how to get started making extensions, GM scripts, SDK add-ons, Personas, Themes, and it even mentions language packs, search engines and dictionaries. It includes a trivial add-on that translates a string on a webpage, developed as a GM script, as an SDK add-on, and as an extension. I then compare code complexity, flexibility, the security framework and other characteristics, trying to give developers a balanced view and good information on how to choose between these options when building an add-on. Of course I can only gloss over the details during a 40-minute presentation; hence all the links. At the end of the presentation I briefly cover publishing on AMO, add-on monetization, and the plans for an add-on marketplace. I tried to personalize it a bit for the Chinese audience, so some things may not make as much sense out of that context.

The Q&A session was surprising, in that the developers who asked questions were very knowledgeable in add-on development and had very specific questions. Some had very well-established products and demoed some really advanced add-ons. There’s such great add-on development happening in China that I wish we had a much better communication channel with developers over there. That’s something we need to work on.

Overall, the trip was excellent. The presentation went well, I got in touch with a number of developers in the area, and most importantly we had plenty of time to talk to the Mozilla Online team. I discussed our add-on performance initiative with them, which is especially relevant for them given that the default Firefox install for China includes about 10 preinstalled add-ons. A number of them also joined the AMO Editors team, which is actively looking for new members. I’m really happy about that, because there are many add-ons that can only be tested in China, by people who understand the language and the local websites.

And, of course, the food and the sights we managed to squeeze into one day of touring around were all fantastic. I’d like to thank the Mozilla Online team again for the invitation and for organizing this very successful event. Special thanks to 张羽 (Rachel Zhang) for taking care of us. I’m sure she feels like she’s on holiday this week :P

Random anecdote: I’m riding the subway by myself on the way to meet a friend who lives in Beijing, pointlessly trying to appear as if I do this all the time. Then some guy approaches me and starts speaking in what I assume is Mandarin, pointing to a cellphone he has in his hands. While I’m deciding how to react to this, I look at the phone and see a picture of me at the conference. Heh. It was just an attendee who was really happy to run into me on the subway. We managed to talk a little bit during the ride. Very friendly guy. So, there ;)


Testing add-on startup performance

Our add-on performance initiative is getting lots of attention for, let’s say, various reasons. There have been objections about transparency and our testing methods, so I decided to add something valuable to the discussion and document my own testing process.

I revisited my old add-on performance article and noticed that the contents of the Measuring Startup wiki page have changed substantially since I originally linked to it. It now recommends installing an add-on to measure startup performance. I haven’t tried it, but there are a few reasons why I think this is not the best approach. (Update: I’ve been informed that the add-on is only a display for data that is gathered and stored locally. You can make test runs and then install the add-on to look at the data. That dispels my previous doubts about this approach.) Regardless, I’m documenting the old testing method here, because it’s the one I have been using for a while and it’s also very similar to the one implemented in our automated Talos testing framework.

I have been doing lots of add-on startup testing recently, mainly to double check if the results of the Talos tests are sound. We also correlate them to real world usage data that we have been collecting since early versions of Firefox 4. This data, manual testing and source code review are the main backup sources that give us a good confidence level in the results we display on our infamous performance page (it has been linked enough).

Here’s what I do.

Setup

  1. Create a new profile dedicated for testing add-on performance (I called it startuptest).
  2. Download this HTML page and save it somewhere convenient. The page is blank if you open it directly. All it does is run some JS that extracts a timestamp after the # character in the URL, compares it against the timestamp when the script is run, and shows the difference on the page.
  3. Set up a console command that opens Firefox in your testing profile and opens the downloaded file, with the current timestamp embedded after the # character. On my system (Mac OS), this command is the following:
/Applications/Firefox.app/Contents/MacOS/firefox-bin -P startuptest -no-remote file:///Users/jorge/startup.html#`python -c 'import time; print int(time.time() * 1000);'`

The old version of the Measuring Startup page explains how to set this up on Windows.
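
The linked page isn't reproduced here, but based on the description in step 2, the script inside it does roughly the following (this is a reconstruction, not the original file):

// The launch command appends the launch time (ms since the epoch) after "#".
var launched = parseInt(window.location.hash.substring(1), 10);
// The difference between now and the launch time is the measured startup time.
document.body.textContent = (Date.now() - launched) + " ms";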

Testing

  1. Locate the testing profile folder and delete all files in it, if there are any.
  2. Open Firefox on this profile. You can use the console command or any other shortcut if you prefer.
  3. Copy and paste the add-on listing page URL on the new profile and open the page.
  4. Install the add-on using the install button and restart if necessary.
  5. Optionally, set up the add-on in a realistic way. For example, if this is a Facebook add-on, it may make sense to log in to a Facebook account since otherwise most of the add-on’s functionality would be inactive.
  6. Quit Firefox.
  7. Run Firefox using the console command.
  8. Note the result in the startup page.
  9. Quit Firefox. I prefer using the Quit key shortcut, to interact with Firefox as little as possible.
  10. Repeat steps 7-9. I discard the first 2 runs, which are normally much slower than the rest, and measure the 10 runs after that.

Interpreting results

For your results to make any sense, it’s also necessary to make a test run without any add-ons installed, and use that as your baseline. It’s also a good idea to run all the tests consecutively to have some certainty that they are all running under similar conditions. I record and compare my results on a spreadsheet, like this one where I tested both my add-ons.

Looking at the results, Fire.fm has a somewhat noticeable impact on startup. This is not surprising, because it is a complex add-on with a very complex overlay and startup process. I covered improving its startup code in my old blog post, and we’re planning on greatly simplifying its overlay soon(ish). I doubt we’ll make the coveted 5%, but we’ll see. Remote XUL Manager is clearly simpler, and it shows why the results should not be taken at face value. Since all it does in the overlay is add a menu item that opens a separate window, it’s understandable that its impact is negligible. But does it really improve startup? No, of course not. This just means that the error margin is larger than its real performance impact.

The key takeaway here is that the results of manual tests shouldn’t be taken literally, but they’re still a good indicator of the performance impact of an add-on. Even if the error margin is not ideal (or even measurable under these conditions), you can still get a good idea of who’s fast and who’s slow. They have been very valuable to us when comparing them against Talos results.

How does this compare to Talos?

On one hand, these tests are influenced by how the testing system is set up. I have several applications open at all times, and I don’t close them all for testing. I do take care in not running anything heavy simultaneously, like Time Machine or MobileMe Sync. And then there’s clearly the fact that I have to spend some time setting things up and running the tests. The longer the tests take, the more likely it is that some other process affects the results.

On the other hand, it’s easier for me to recognize errors during testing. Many of the complaints we’ve received about the testing system are that it makes silly mistakes, like trying to install an add-on from an incorrect URL, or trying to install an add-on that is not compatible with the Firefox version being tested. These are things one can clearly see when testing manually, but they weren’t obvious when running the tests automatically. Those add-ons have been getting very good performance rankings because they’re not really being loaded, so those results are not reliable.

Luckily, the people complaining about our testing are also filing bugs and talking to us directly, so we’re looking into the issues and trying to get them resolved as soon as possible. Special thanks to Wladimir and Nils, who have been very helpful filing and categorizing bugs. More details coming up in the Add-ons Blog.

As always, the developer community proves itself an invaluable asset for Mozilla (well, you are Mozilla). Even if our discussions can become harsh and are generally very public, the outcome is almost always a set of improvements on both our technical and communication fronts. Getting things right takes a lot of work and a lot of patience, and I hope we can quickly get to a place where we’re all satisfied.
