
Porting a Chrome extension to WebExtensions in half a step

A coworker contacted me today about the Unbias Me extension. It’s a set of simple userscripts that hide profile pictures and names on sites like LinkedIn and GitHub, where unconscious bias can affect your behavior toward women and other groups. She wanted to know if we could have an add-on like this for Firefox.

I gave the add-on a quick look and realized it would be trivial to port it to WebExtensions. WebExtensions already support browser_action and content_scripts, which is what the add-on uses. So, ultimately all I needed to do was add a little bit of metadata:

  "applications": {
    "gecko": {
      "id": "unbias-me@fureigh",
        "strict_min_version": "45.0"

That’s it! You just zip the files together, use the .xpi file extension, and you’re good to go. Here’s the unsigned XPI if you want to give it a try. I sent a PR to the developer, so I hope they can integrate this code and get the add-on listed on our site.

This may seem like an unfair example since content scripts are the simplest form of add-on. However, keep in mind that they’re also the most common form of add-on. Having support for this API means that there are probably thousands of Chrome extensions that can be very easily ported right now to work with release versions of Firefox and to be listed on AMO. If there’s a Chrome extension you would like to see ported to Firefox, you should give this little experiment a try.


WebExtensions and code review

All add-on files listed on (AMO for short) have undergone code review and testing by a team of volunteer and hired add-on reviewers. The review team’s goal is to ensure these add-ons do what they claim to do and are safe to use.

Most safety issues come from add-on code that unintentionally puts users at risk of attack, for example through XSS vulnerabilities. We rarely encounter outright malicious add-ons attempting to be listed. And while the broad APIs exposed by XPCOM give add-ons the power to do lots of harm, most security problems we discover come from front-end code (touching web content or managing the add-on’s UI), problems that also affect SDK add-ons and WebExtensions.

WebExtensions give us the opportunity to start over in some ways, so this is the ideal time to rethink what we can do to ensure that these add-ons are safer than their predecessors. By making them safer, we will also make them much easier to review, possibly even automatically in some cases.

So, what kinds of things do we look for in code review?


eval

eval is a powerful function that lets you execute the contents of a string as JavaScript code. It is often used to parse JSON (and JSONP) responses from APIs. Since JSON parsing can now be done trivially with JSON.parse, add-ons aren’t allowed to use eval for this purpose.
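For instance, where an add-on once needed eval to turn an API response string into an object, JSON.parse now does the same job without executing anything. A minimal sketch:

let response = '{"name": "My Add-on", "version": "1.0"}';

// Unsafe: eval runs whatever the string contains, code included.
// let data = eval("(" + response + ")");

// Safe: JSON.parse treats the string strictly as data and throws
// on anything that isn't valid JSON.
let data = JSON.parse(response);
console.log(data.name); // "My Add-on"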

Since eval can execute arbitrary code, its risks are clear, and we rarely allow it in an add-on. Using it in privileged add-on code is even more dangerous, so it needs to be done sparingly and very carefully. For review purposes, we need to ensure that the string being evaluated is trusted (it doesn’t come from an external source) and that there’s no better way to handle it in the add-on.

A common exception we grant is when an add-on overrides a native JS function to splice code into it. For example, if an add-on wanted to modify tab behavior, it might need to override an internal function in the Firefox tab code to accomplish its goal. In this case the add-on converts the function into a string, makes string replacements in it, and then overrides the original with the result. If this sounds flaky and error-prone to you, you’re right, but in some cases it’s unavoidable. I don’t expect this to apply much to WebExtensions because they don’t have access to Firefox internals.
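As a rough illustration of the pattern (the target function and the anchor statement here are made up for the example, not real Firefox internals):

// Hypothetical sketch: splice a hook of our own into a browser function
// by round-tripping it through a string. Fragile, but sometimes the only
// option for XUL add-ons.
let source = gBrowser.someTabFunction.toString();
let patched = source.replace(
  "originalStatement();",                // made-up anchor statement
  "originalStatement(); myAddonHook();"  // insert our hook after it
);
gBrowser.someTabFunction = eval("(" + patched + ")");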

Another common use is code obfuscation. We could mitigate this for all add-ons (not just WebExtensions) by providing code obfuscation on AMO. This has been brought up before as a useful feature to offer, but it has never been considered a priority.

setTimeout(), setInterval(), event handlers

Just like eval, these APIs can execute arbitrary JS code from strings. Unlike eval, they can also be used in ways that don’t evaluate strings, which is what we strongly recommend.

So, instead of:

window.setTimeout("doSomething()", 1000);

You can do this:

window.setTimeout(function() { doSomething() }, 1000);

While the first example clearly isn’t doing anything unsafe, that takes manual inspection to determine. Our code analysis tool doesn’t inspect code in string literals, and the first example can easily become much more complicated with string concatenation and dynamic values. Thus we have to flag and manually inspect most uses of setTimeout() and similar functions in add-ons. That’s not a great use of anyone’s time, considering that 99% of the time they’re perfectly safe.

One possible solution here is to offer “safe” versions of all of these methods for WebExtensions, which force using a function like in the second example. A more radical solution would be to force the “safe” behavior on the existing functions, but that would break lots of existing code, including libraries, so that seems very unlikely.
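As a sketch of what a “safe” version could look like (this is not an existing API, just an illustration of the idea):

// Hypothetical safeSetTimeout: rejects strings so only real functions
// can be scheduled, making the call trivially verifiable in review.
function safeSetTimeout(callback, delay, ...args) {
  if (typeof callback != "function") {
    throw new TypeError("safeSetTimeout requires a function, not a string");
  }
  return window.setTimeout(callback, delay, ...args);
}

safeSetTimeout(function() { doSomething(); }, 1000); // OK
safeSetTimeout("doSomething()", 1000);               // throws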

innerHTML, outerHTML

innerHTML is the easiest way to inject a string of HTML code into a document. It’s a very convenient property, but it comes with security and performance costs. As in the previous examples, the string being injected could contain untrusted executable code (innerHTML prevents script elements from executing, but there are ways around that). Because of the way innerHTML is often used, it can be very difficult for reviewers to determine the origin and safety of the string being injected. Still, it’s a convenient shortcut that developers reach for all too often.

In these cases, reviewers ask developers to make sure the string being passed is sanitized (stripped of any executable code) before injection. But, as in the setTimeout() case, it would be better if we could provide a safer equivalent that developers can opt into without getting flagged for review.
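For the common case where the string is really just text, a safer equivalent already exists. This sketch assumes a container element and an untrustedString variable:

// Instead of: container.innerHTML = untrustedString;
// textContent treats the value as plain text, so embedded markup is
// displayed rather than parsed and executed.
container.textContent = untrustedString;

// When actual elements are needed, building them explicitly keeps the
// untrusted string confined to inert text nodes.
let item = document.createElement("li");
item.textContent = untrustedString;
container.appendChild(item);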

Remote scripts / frames

Loading remote scripts significantly undermines the review process. If an add-on can load arbitrary code dynamically then it can be unsafe – or become unsafe – without the reviewers being able to detect it.

Injecting remote code into a privileged context – in which add-on code runs – is very dangerous. This gives the untrusted code control over what the add-on can do, so we never allow it. This should be a smaller problem for WebExtensions, since they have more limited permissions. However, there’s still real harm that can come from this, so I think it should be blocked on WebExtensions. Remote code execution should be limited to unprivileged scopes.

Injecting remote code into content is a smaller problem. Bad code and content injection used to break secure sites, but that’s less of a problem now with CSP and mixed content blocking. However, it’s still possible to inject remote scripts into arbitrary sites, potentially stealing private data or performing unwanted actions on behalf of the user. This is very common behavior in malicious add-ons. On the other hand, there are some valid use cases for remote scripts in content, so instead of blocking them, this should probably be restricted for WebExtensions. I think only HTTPS URLs should be allowed, with the allowed origins specified in the permissions.
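To sketch what that could look like, here is a hypothetical manifest fragment ( is a placeholder origin, and this is not a shipping WebExtensions feature):

// Hypothetical: remote scripts in content would only be allowed from
// HTTPS origins explicitly listed by the add-on.
"permissions": [
  "*"
]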

Wrap up

WebExtensions will be safer than XUL and SDK add-ons by virtue of their more limited APIs. However, they are still potentially dangerous to users, so we can’t completely do away with code reviews on AMO. Hopefully we can make some changes to the API that minimize the need for code reviews and make developers’ lives easier.


Using WebExtensions to communicate between iframes

After my presentation at FOSDEM, I was approached by developers who had a specific use case for an add-on. They asked which of our add-on technologies they should use to implement it, and whether it was practical to do. I knew it was doable, and I thought it would be an interesting programming exercise, so I gave it a shot.

Their problem involves a government website that is particularly difficult to use and not very easy on the eyes. I think we all know the type. The add-on should make the site more manageable. So far, this is a straightforward content script. What makes this case trickier is that the site uses iframes to display parts of its contents, and the add-on should take the contents of those frames and embed them into the top document. My add-on will do this for a minimal test case, using the new WebExtensions API.

The first thing I needed in order to work on this add-on was a page to test it with, so I created one.

Test page. Note the green borders around the iframes.

It’s a very basic HTML page that has two iframes, one pointing to a page on the same domain and another pointing to a page on a different domain. Using a different domain matters because certain iframe interactions are subject to same-origin policies, and I wanted to ensure this add-on worked in all cases.


The manifest

Other than the basic metadata, my manifest.json file has this:

"content_scripts": [
      "matches": [
        "*" ],
    "all_frames": true,
    "js": [ "scripts/flatten.js" ]

"background": {
  "scripts": ["background.js"]

I declare the script using content_scripts, giving it access to both domains involved. It’s important to set the all_frames flag so that the content script is loaded into the internal frames of the page. Otherwise it is only loaded for top-level documents.

I also declare a background script, which will act as a message broker between the frames and the main document.


Messaging

Messaging is the key element of this implementation. We need the top-level document to obtain the contents of the frames and then swap that content in for the iframe nodes. Since different documents can potentially be running in different processes, message passing is necessary. The frames send their contents to the main document through the background script:

if (window != window.top) {
  // frame. Send message to background script.
  browser.runtime.sendMessage(
    { "host" : window.location.hostname,
      "body" : document.body.innerHTML });
}

The frames use sendMessage() to pass the message to the background script. The message also includes the host so it’s easier to tell one frame from the other.

The background script is listening for these messages:

browser.runtime.onMessage.addListener(function(message, sender) {

and then forwards them to the main document:

browser.tabs.sendMessage(, messages[0]);

Note that the sender argument received with the frame’s message includes information about its tab of origin, which is necessary for us to know where to send the message back. The sender object also includes a frame ID, which could be useful in more complex scenarios.

Finally, the main document receives the message and swaps the frame for its contents:

browser.runtime.onMessage.addListener(function(message, sender) {
  // ...
  // XXX: message.body should be sanitized before injecting!
  container.innerHTML = message.body;
  // inject content from frame.
  frame.parentNode.insertBefore(container, frame);
  // remove frame.
  // ...

I’ll reiterate here that using innerHTML in this way is very unsafe, since it opens the top document to XSS attacks. The contents of message.body should be sanitized before they are injected.

Coordinating messages

Here’s where it got interesting. I assumed that the content script for the main document would always load before the iframes, which would make managing the messages fairly easy, but that was not the case. During my testing, the content script in the first iframe loaded first. This means the background script in the extension needs to cache at least that first message until the message listener in the main document becomes available. The first message is lost if it’s forwarded right away.

To make the implementation close to correct, I have the background script wait for the 3 documents to be loaded before forwarding the messages to the main document:

if (message.loaded) {
  // top document has loaded.
  loaded = true;
} else {
  // iframe has loaded. Cache its message.
  messages.push(message);
}

if (loaded && messages.length == 2) {
  // forward messages back to the tab so the top window can use them.
  browser.tabs.sendMessage(, messages[0]);
  browser.tabs.sendMessage(, messages[1]);
  // clear state.
  loaded = false;
  messages = [];
}
I set message.loaded on the message coming from the main document. If that flag isn’t set, the message is coming from one of the iframes. Once all 3 documents have reported in, both messages are forwarded to the main document, and the state is cleared for future loads.

This solution isn’t complete. It doesn’t take into account concurrent loads, which could be fixed with a map indexed by the sender tab id, as sketched below. It also doesn’t handle cases like an iframe failing to load, or one of the iframe pages being loaded as the main document. These are all solvable problems, but they require more code than this simple example.
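Here’s a hedged sketch of the per-tab fix, assuming the same loaded/messages bookkeeping as above:

// Sketch: track state per tab in a Map keyed by the sender's tab id,
// so concurrent loads in different tabs don't clobber each other.
let tabState = new Map();

browser.runtime.onMessage.addListener(function(message, sender) {
  let tabId =;
  let state = tabState.get(tabId) || { loaded: false, messages: [] };
  if (message.loaded) {
    state.loaded = true;            // top document has loaded.
  } else {
    state.messages.push(message);   // cache the iframe's message.
  }
  if (state.loaded && state.messages.length == 2) {
    for (let m of state.messages) {
      browser.tabs.sendMessage(tabId, m);
    }
    tabState.delete(tabId);         // clear state for this tab.
  } else {
    tabState.set(tabId, state);
  }
});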


The result

You can find the sources here and build the XPI for yourself. You can also download the built XPI here. Loading the test page with the add-on installed should look like this:

Test page with add-on installed

Note that the entire contents are now visible, and the borders have turned red to make it easy to tell that the add-on worked.

That’s it! Hope this helps someone with their add-on. Let me know in the comments if you have any questions.


Some notes on WebExtensions discovery

As I was preparing for my presentation at FOSDEM, I tried to approach WebExtensions from a beginner’s perspective and document the entire process. I wrote it all down on this Etherpad, if you’re interested in the raw notes. This blog post is about the first part of the notes: discovery.

We have a history with naming at Mozilla, where project codenames often end up taking over, project names get reused, and all sorts of confusion ensues. I wanted to see what showed up when running various queries through the most popular search engines, and here’s what I found:

  • Searching for “web extensions” often points to domain extensions rather than any Mozilla docs. Maybe in time our docs will rank higher.
  • Searching for “WebExtensions” most often points to our wiki page, which makes sense since the project is only just taking off. However, we would want these queries to eventually point to MDN instead. Again, it might just be a matter of time.
  • Searching for “firefox extensions” and “develop firefox extensions” generally yields useful results, though they are inconsistent. Some point to AMO while others point to the add-ons blog. We want MDN to be the place to find add-on docs, so we’ll need to work on this.
  • The API status site didn’t show up. While that site is mostly aimed at developers interested in the progress of the API, I think it would be useful if it appeared among the top results.

My main takeaway is that we have our SEO work cut out for us in the coming months. Maybe WebExtensions isn’t the best of names for this new technology, but it’s been out there long enough that there’s probably no turning back. And choosing names is hard anyway.

The complete results are in the Etherpad. Of course, your results may vary, since many search engines personalize rankings based on past searches, and I ran this experiment 3 weeks ago. The pad also has some notes about the documentation and how easy it was to port a relatively simple add-on to the new API. There’s also a good blog post about porting a Chrome extension to WebExtensions on the Mozilla Hacks blog, which I recommend you read next.


WebExtensions presentation at FOSDEM 2016

Last week, a big group of Mozillians converged in Brussels, Belgium for FOSDEM 2016. FOSDEM is a huge free and open source software event, with thousands of attendees. Mozilla had a stand and, for one day, a “dev room” dedicated to Mozilla presentations.

This year I attended for the first time, and I gave a presentation titled Building Firefox Add-ons with WebExtensions. The presentation covers some of the motivations behind the new API. I also spent a little time going over one of the WebExtensions examples on MDN. I only had 30 minutes for the whole talk, so it was all covered fairly quickly.

The presentation went well, and lots of people showed interest and asked questions. The same was true of all the Mozilla presentations I attended, which makes me want to kick myself for not going to FOSDEM before. It’s a great venue for discussing our ideas, and I want us to come back and do more. We have lots of European contributors and have been looking for a good venue for a meetup. This looks ideal, so maybe next year ;).


Webmakering in Belize

Just a few months ago, I was approached by fellow Mozillian Christopher Arnold about a very interesting and fairly ambitious idea. He wanted to organize a Webmaker-like event for kids in a somewhat remote area of Belize. He had made some friends in the area who encouraged him to embark on this journey, and he was on board, so it was up to me to decide if I wanted to join in.

As a Costa Rican, I’ve always been very keen on helping out with anything that involves Latin America, and I’m especially motivated when it comes to the easy-to-forget region of Central America. Just last October I participated in ECSL, a regional free software congress, where I helped bootstrap the Mozilla communities in a couple of Central American countries. I was hoping I could do the same in Belize, so I accepted without much hesitation. However, even I had to do some reading on Belize, since it’s a country we hear so little about. Its status as a former British colony, English being its official language, and its diminutive size even by our standards all contribute to its relative isolation in the region. This made the event even more challenging and appealing.

After forming an initial team, Christopher took on the task of crowdfunding the event. Indiegogo is a great platform for this kind of thing, and we all contributed personal videos and did our fair share of promoting the event. We didn’t quite reach our goal, but we raised significant funds to cover our travel. If it isn’t clear by now, I’ll point out that this wasn’t an official Mozilla event, so it all came together thanks to Christopher. He did a fantastic job setting everything up and getting the final team together: Shane Caraveo, Matthew Ruttley, and me. A community member from Mexico also meant to attend, but had to cancel shortly before.

Traveling to our venue was a bit unusual. Getting to Belize takes two flights even from Costa Rica, and then it took two more internal flights to make it to Corozal, due to its remoteness. From there it was about an hour’s drive, including a ride on a hand-cranked ferry, to reach our venue (which was also our hotel during our stay). Years of constant travel and some previous experience on propeller planes made this all much easier for me.

Belize from a tiny plane
I made it to Corozal, as planned, on December 28th. I could only stay for a couple of days because I wanted to make it back home for New Year’s, so we planned accordingly. I would be taking care of the first sessions, with Shane helping out and dealing with some introductory portions, and then Matthew would arrive later and he and Shane would lead the course for the rest of the week.

Part of our logistics involved handing out Firefox OS phones and setting up some laptops for the kids to use. It didn’t take long before things got… interesting.

Charging 6 Firefox OS phones
Having only a couple of power strips and power outlets made juggling all of the hardware a bit tricky, and since our location was a very eco-friendly, self-sustaining lodge, we couldn’t leave things plugged in overnight. But this isn’t really the “interesting” part; it just added to it. What really got to us, and kept Shane working furiously in the background during the sessions, was that the phones had different versions of Firefox OS, none of them very recent, and half of them crashed constantly. We managed to get the non-crashy ones updated, but by the time I left we had yet to find a solution for the rest. Flashing these “old” ZTE phones isn’t a trivial task, it seems.

Then Monday came and the course began. We got about 30 very enthusiastic and smart kids, so putting things in motion was a breeze.

First day of class

A critical factor that made things easy for me was attending a Teach The Web event held in Costa Rica just a couple of weeks earlier. That event was led by Melissa Romaine, and to say that I took some ideas from it would not be doing it justice. I essentially copied everything she did, and it’s because of this that I think my sessions were successful. So, thanks, Melissa!

So, here’s how my sessions (really, Melissa’s sessions) went. I showed the kids what a decision tree is and asked them to draw a simple tree of their own, in groups, for a topic I gave each group. After that, I showed them a simple app created in Appmaker that implements a decision tree as a fun quiz. They were then asked to remix (hack on) the example application and adapt it to their own tree. Finally, they were asked to share their apps and play around with them. This all worked out great, and it was a surprisingly easy way to get people acquainted with logic flows and error cases.

Kids hacking away

The next day we went back to pen and paper, this time to design an app from scratch. We asked everyone to come up with their own ideas, and then grouped them again to create wireframes for the different screens and states their apps would have. I was very happy to see very diverse ideas, from alternatives to social networking, to shopping, to more local solutions like sea trip planning. Once they had their mockups ready, it was back to their laptops and Appmaker to come up with a prototype of their app.

Unfortunately, my time was up and I wasn’t able to see their finished apps. I did catch a glimpse of what they were working on, and it was excellent. The great thing about kids is that they put up no resistance when it comes to learning new things. Different tools, completely new topics… no problem! It was too bad I had to leave early. Matthew arrived just as I was leaving, so I got to talk to him for all of 30 seconds.

But my trip wasn’t over. Due to the logistics of the 4 flights (!) it takes to get back home, I couldn’t make it in one go, so I chose to spend the night of December 30th in Belize City, in the hopes of finding people interested in forming a Mozilla community. I had done some poking around in the weeks leading up to the event, but couldn’t find any contacts. However, word of our event got around, and we were approached by a government official to talk about what we were doing. So I got to have a nice conversation with Judene Tingling, Science and Technology Coordinator for the Government of Belize. She was very interested in Webmaker and the event we ran, and very keen on repeating it on a larger scale. I hope we can work with her and her country to get more Webmaker events going over there.

On my last day I finally got some rest and managed to squeeze in a quick visit to Altun Ha, which is fairly small but still very impressive.

Mayan ruins!

I’ll wrap up this post with some important lessons I learned, more as a note to self if I go back:

  • While the official language is English, a significant number of people living outside the city centers speak Spanish at home (school is taught in English, though). In the cities, Spanish is a secondary language at best.
  • When in doubt, bring your own post-its, markers, etc.
  • Tools that require accounts pose a significant hurdle when working with children. It’s not a good idea to have them set up accounts or email addresses without parental consent, so be prepared with your own accounts. I ended up creating a bunch of throwaway email addresses with spamgourmet so I could have enough Webmaker accounts for all the computers (Persona, why so many clicks??).
  • If you’re ever in a remote jungle area, wear socks and ideally pants. Being hot is truly insignificant next to being eaten alive by giant bugs that are impervious to repellents. Two weeks later, I still have a constellation of bug bites to prove it.

Many thanks to Christopher for setting all of this up, to our hosts Bill and Jen at the Cerros Beach Resort, to Shane and Matthew for all the hard work, to Mike Poessy for setting up the laptops we used, and to everyone else who helped out with this event, assistant teachers and students alike.


Interview with Extension.Zone

I was recently approached by Extension.Zone for an interview. I was pleasantly surprised to see a new website dedicated to browser-agnostic reporting of add-ons. Then I was just plain surprised that .zone is now a thing.

Anyway, the interview is up here. There are some interesting questions about what makes the Firefox add-on ecosystem different from others, and about what I think is an under-explored area of add-on development.


Firefox OS Workshop in Panama

I was invited to give a talk on Firefox OS app development this weekend in Panama City. The event was organized by CascoStation, a coworking space located in a very interesting area of the city. Harold from CascoStation did an exceptional job making sure everything went well and that we were all comfortable.

The workshop was similar to the ones I’ve given in the past, with some improvements thanks to lessons learned. The introductory talk can be found here: Introducción a Firefox OS. Some of the pages don’t make much sense without the talk, but the links are useful for getting started with Firefox OS.

Attendance was good, around 20 people. Most importantly, the majority were interested enough to spend some time playing with Firefox OS during the workshop. We took a group photo, but several people had already left.

Group photo at the end

El Espectador de Panamá also ran a piece on us, where you can get a feel for the workshop’s atmosphere.

A very pleasant surprise: a very skilled 3D artist works at CascoStation, and he applies his talents to creating 3D prints with a spectacular level of detail. He made us a Firefox OS figure that couldn’t have turned out better.

Model of the Firefox OS figure / Firefox OS figure

I hope we get to see more of these in the future :).

Finally, I got to meet some of the members of the new Mozilla Panamá community (also on Facebook). We talked about the launches in Central America and the challenges ahead. We expect to have news from Costa Rica very soon.

The experience was excellent, and we hope to see our Panamanian friends again in a few weeks at the Encuentro Centroamericano de Software Libre 2014.


The complex AMO review process

The add-on review process on AMO is fairly complicated, and it can get very overwhelming if you need to look at it closely enough to understand file and add-on statuses. AMO admins, devs, and reviewers are usually the ones who have to worry about this stuff, and there aren’t good docs for it.

Since the issue popped up again today, I decided to take a few minutes to create a chart that explains the AMO review cycle from a file and add-on status perspective. If you think this chart is pretty crazy, keep in mind that it’s a simplified view of the process. It doesn’t take into account developers deleting versions or marking their add-ons as inactive, and a few repetitive connections were left out. Still, it should give a good idea of how add-on and file statuses interact during the review process, and it should help admins figure out which status means what (to add more confusion to the mix, AMO has old unused statuses, as well as others that are only used in the Marketplace).

Here’s the chart without the notes:

Review cycle chart

For the real deal, check out the doc.

Is this complexity necessary? Probably. We have two review levels because it allows us to list polished add-ons as well as experimental ones, giving developers and users more flexibility and choice. This in turn makes AMO more diverse and generally a better option than self-hosting.


Mozilla Hispano Labs Blog Coordination

All of us Spanish-speaking Mozillians take on two roles: one with our local community and one with Mozilla Hispano. Mozilla Hispano is a meta-community that allows us to share a whole lot of knowledge and experience in our first language, which is pretty unique within the Mozilla world.

So, while I help out with local events in Costa Rica, I also collaborate with Mozilla Hispano in various ways.

This year I volunteered to help with the Mozilla Hispano Labs blog, which I consider a very important communication channel for developers who may eventually join us. On this blog we regularly publish technical articles, translated or original, related to Mozilla projects and the open web.

MH Labs

It took some time, but the blog has now picked up a good publication pace. Just recently we were publishing roughly one article per week, and this month it looks like we’ll have twice as many. We have 3 regular writers and a few new ones who look like they might stick around.

At the moment most of the articles are translations, and this is the process we follow:

  1. Identify a good article in English. Every now and then I’ll comb through articles published on Mozilla Hacks and choose the best recent ones. Sometimes interesting pieces come up from other sources.
  2. Create a translation task in our Teambox tool. This is an internal task management tool we use. Anyone interested in helping out just has to write to our mailing list in order to gain access to the tool.
  3. A volunteer chooses the task from the available options and takes it.
  4. The translation is delivered, usually in the form of a Google Doc. Experienced writers get access to the Mozilla Hispano WordPress so they can deliver it as a draft post.
  5. The translation is reviewed (usually by me) and some cosmetic fixes are made before it’s published.

This role has been very useful for me. It keeps me focused because there are always new articles to review, and at the same time it doesn’t require too much time (reviewing and publishing an article takes about an hour).

The biggest challenge I’ve run into is keeping enough topic diversity. There was a moment when most articles were about Firefox OS app development. That’s not necessarily bad, given that it’s new and popular, but I think we need to cater to a variety of interests. It’s also been difficult to get original material to publish. Most developers don’t have much time to write about what they do (myself included), which is unfortunate. I’ll try to make more contributions of my own and lead by example.

Overall it’s been a great experience, and I’m glad to be in charge of this little corner in the MH world.
