Mar 23, 2018

The next SEO experiment whose results I’d like to discuss is the MDN “Competitive Content Analysis” experiment. This experiment, performed through December into early January, involved selecting two of the top search terms for which MDN appears in search results: one where MDN is highly placed but not at #1, and one where MDN is listed far down in the search results despite having good content available.

The result is a comparison of the quality of our content and our SEO against other sites that document these technology areas. With that information in hand, we can look at the competition’s content and make decisions as to what changes to make to MDN to help bring us up in the search rankings.

The two keywords we selected:

  • “tr”: For this term, we were the #2 result on Google, behind w3schools.
  • “html colors”: For this keyword, we were in 27th place. That’s terrible!

These are terms we should be highly placed for. We have a number of pages that should serve as good destinations for these keywords. The job of this experiment: to try to make that the case.

The content updates

For each of the two keywords, the goal of the experiment was to improve our page rank for the keywords in question; at least one MDN page should be near or at the top of the results list. This means that for each keyword, we need to choose a preferred “optimum” destination as well as any other pages that might make sense for that keyword (especially if it’s combined with other keywords).

Accomplishing that involves updating the content of each of those pages to make sure it’s as good as possible, but also improving the content of the pages that link to the pages that should show up in search results. The goal is to improve the relevant pages’ visibility to search as well as their content quality, in order to improve their position in the search results.

Things to look for

So, for each page that should be linked to the target pages, as well as the target pages themselves, these things need to be evaluated and improved as needed:

  • Add appropriate links back and forth between each page and the target pages.
  • Make sure the content is clear and thorough.
  • Make sure there’s interactive content, such as new interactive examples.
  • Ensure the page’s layout and content hierarchy is up-to-date with our current content structure guidelines.
  • Examine web analytics data to determine what improvements, beyond these items, the data suggests should be made.

Pages reviewed and/or updated for “tr”

The primary page, obviously, is this one in the HTML element reference:

These pages are closely related and were also reviewed and in most cases updated (sometimes extensively) as part of the experiment:

A secondary group of pages which I felt to be a lower priority to change but still wound up reviewing and in many cases updating:

Pages reviewed and/or updated for “html colors”

This one is interesting in that “html colors” doesn’t directly correlate to a specific page as easily. However, we selected the following pages to update and test:

The problem with this keyword, “html colors”, is that generally what people really want is CSS colors, so you have to try to encourage Google to route people to stuff in the CSS documentation instead of elsewhere. This involves ensuring that you refer to HTML specifically in each page in appropriate ways.

For reference purposes, I’ve opted to treat the CSS <color> value page as the primary destination for this keyword, with the article “Applying color”, a new one I created, serving as a landing page for all things color-related and routing people to useful guide pages.

The results

As was the case with previous experiments, we allowed only about 60 days for Google to pick up and fully internalize the changes, and for user reactions to affect the outcome, even though 90 days is usually the minimum time to run these tests, with six months being preferred. However, we have had to compress our schedule for the experiments. We will, as before, continue to watch the results over time.

Results for the “tr” keyword

The pages updated to improve their placement when the “tr” keyword is used in Google search, along with the amount of change seen for each over time, are shown in the table below. These are the pages which were updated and which appeared in the search results analysis for the selected period of time.

All values are the change (%) between the before and after measurement periods:

Page                                  Impressions   Clicks     Position   CTR
HTML/Element/tr                       -43.22%       124.57%    2.58%      285.71%
HTML/Element/table                    26.68%        27.02%     -2.90%     0.00%
HTML/Element/template                 27.02%        9.21%      -15.45%    -14.05%
API/HTMLTableRowElement/insertCell    -2.78%        -23.91%    -2.16%     -21.77%
HTML/Element/thead                    38.82%        19.70%     0.00%      -13.67%
HTML/Element/tbody                    42.72%        100.52%    14.19%     40.68%
HTML/Element/tfoot                    8.90%         11.29%     2.64%      2.18%
HTML/Element/th                       -50.32%       3.43%      0.39%      106.25%
HTML/Element/td                       20.05%        40.27%     -8.04%     17.01%

The data is interesting. Impression counts are generally up, as are clicks and search engine results page (SERP) positions. Notably, the main <tr> page, the primary page for this keyword, lost impressions yet gained clicks, with its CTR skyrocketing by a sizeable 285%. This means that people are seeing better search results when searching just for “tr”, and getting right to that page more often than before we began.
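
To make the CTR arithmetic concrete, here’s a quick sketch in Python. The counts are made up for illustration (they are not the actual Search Console numbers), but they show how a page can lose impressions, gain clicks, and see its CTR jump by roughly these proportions:

    # Hypothetical before/after counts; illustrative only, not real MDN data.
    before = {"impressions": 10_000, "clicks": 350}
    after = {"impressions": 5_700, "clicks": 790}  # impressions down ~43%, clicks up ~126%

    def ctr(stats):
        """Click-through rate: clicks divided by impressions."""
        return stats["clicks"] / stats["impressions"]

    def pct_change(old, new):
        return (new - old) / old * 100

    print(f"Impressions: {pct_change(before['impressions'], after['impressions']):+.1f}%")
    print(f"Clicks:      {pct_change(before['clicks'], after['clicks']):+.1f}%")
    print(f"CTR:         {pct_change(ctr(before), ctr(after)):+.1f}%")
    # Fewer (but better-targeted) impressions plus more clicks means the CTR soars.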

Results for the “html colors” keyword

The table below shows the pages updated for the “html colors” keyword and the amount of change seen in the Google activity for each page.

All values are changes from the period before the updates to the period after:

Page    Impressions   Clicks      Position (abs.)   Position (%)   CTR
        +28.61%       +19.88%     -0.99             -4.54%         -6.78%
        -43.88%       -38.17%     -2.71             -21.20%        +10.17%
        +51.59%       +33.33%     +3.28             +48.62%        -12.04%
        +34.87%       +29.55%     -1.35             -11.34%        -3.94%
        +9.03%        +19.26%     -0.17             -2.46%         +9.38%
        +36.02%       +36.98%     -0.09             -1.38%         +0.71%
        +23.04%       +23.42%     +0.03             +0.34%         +0.31%
        +14.95%       +34.09%     -1.21             -10.26%        +16.65%
        -10.78%       +6.68%      +1.76             +24.78%        +19.56%
        +830.70%      +773.91%    -0.97             -12.42%        -6.10%
        +3254.57%     +3429.41%   -1.45             -21.98%        +5.21%
        +50.32%       +45.21%     -0.56             -4.83%         -3.40%
        +31.15%       +25.57%     -0.44             -4.44%         -4.25%

These results are also quite promising, especially since time did not permit me to make as many changes to this content as I’d have liked. The changes for the color value type page are good; a nearly 15% increase in impressions and a very good 34% rise in clicks means a healthy boost to CTR. Ironically, though, our position in search results dropped by nearly 1.25 points, or 10%.

The approximately 23% increase in both impressions and clicks on the CSS color property is quite good, and I’m pleased by the 10% gain in CTR for the learning area article on styling box backgrounds.

Almost every page sees significant gains in both impressions and clicks (take a look at text-decoration-color, in particular, with over 3000% growth!).

The sea of red is worrisome at first glance, but I think what’s happening here is that, because of the improvements in impression counts (that is, how often users see these pages on Google), users are reaching the page they really want more quickly. Note which pages have a positive change in click-through rate (CTR), the ratio of clicks to impressions. The list below is ordered from the highest change in CTR to the lowest:


What I believe we’re seeing is this: due to the improvements to SEO (and potentially other factors), all of the color-related pages are getting more traffic. However, the ones in the list above are the ones seeing the most benefit; they’re less prone to showing up at inappropriate times and more likely to be clicked when they are presented to the user. This is a good sign.

Over time, I would hope to improve the SEO further to help bring the search results positions up for these pages, but that takes a lot more time than we’ve given these pages so far.


For this experiment, the known uncertainties (an oxymoron, but we’ll go with that term anyway) include:

  • As before, the elapsed time was far too short to get viable data for this experiment. We will examine the data again in a few months to see how things are progressing.
  • This project had additional time constraints that led me not to make as many changes as I might have preferred, especially for the “html colors” keyword. The results may have been significantly different had more time been available, but that’s going to be common in real-world work anyway.
  • Overall site growth during the time we ran this experiment also likely inflated the results somewhat.


After sharing these results with Kadir and Chris, we came to the following initial conclusions:

  • This is promising, and should be pursued for pages which already have low-to-moderate traffic.
  • Regardless of when we begin general work to perform competitive content analysis and make changes based on it, we should immediately update MDN’s contributor guides to incorporate the recommended changes.
  • The results suggest that content analysis should be a high-priority part of our SEO toolbox. Increasing our internal link coverage and making documents relate to each other creates a better environment for search engine crawlers to accumulate good data.
  • We’ll re-evaluate the results in a few months after more data has accumulated.

If you have questions or comments about this experiment or its results, please feel free to post to this topic on Mozilla’s Discourse forum.

 Posted by at 4:46 PM
Mar 21, 2018

Following in the footsteps of MDN’s “Thin Pages” SEO experiment done in the autumn of 2017, we completed a study to test the effectiveness and process behind making changes to correct cases in which pages are perceived as “duplicates” by search engines. In SEO parlance, “duplicate” is a fuzzy thing. It doesn’t mean the pages are identical—this is actually pretty rare on MDN in particular—but that the pages are similar enough that they are not easily differentiated by the search engine’s crawling technology.

This can happen when two pages are relatively short but are about a similar topic, or on reference pages which are structurally and content-wise quite similar to one another. From a search engine’s standpoint, if you create articles about the background-position and background-origin CSS properties, you have two pages based on the same template (a CSS reference page) whose contents can easily wind up looking nearly identical. This is especially true given how common it is to start a new page by copying content from another, similar page and making the needed changes, sometimes making only the barest minimum of changes.

All that means that from an SEO perspective, we actually have tons of so-called “duplicate” pages on MDN. As before, we selected a number of pages that were identified as duplicates by our SEO contractor and made appropriate changes to them per their recommendations, which we’ll get into briefly in a moment.

Once the changes were made, we waited a while, then compared the before and after analytics to see what the effects were.

The content updates

We wound up choosing nine pages to update for this experiment. We actually started with ten, but one of them was removed from the set after the fact when I realized it wasn’t a usable candidate. Unsurprisingly, essentially all of the duplicate pages were found in reference documentation. Guides and tutorials were, with almost no exceptions, immune to the phenomenon.

The pages were scattered through various areas of the open web reference documentation under

Things to fix or improve

Each page was altered as much as possible in an attempt to differentiate it from the pages which were found to be similar. Tactics to accomplish this included:

  • Make sure the page is complete. If it’s a stub, write the page’s contents up completely, including all relevant information, sections, etc.
  • Ensure all content on the page is accurate. Look out for places where copy-and-pasted information wasn’t corrected as well as any other errors.
  • Make sure the page has appropriate in-content (as opposed to sidebar or “See also”) links to other pages on MDN. Feel free to also include links in the “See also” section, but in-content links are a must. At least the first mention of any term, API, element, property, attribute, technology, or idea should have a link to relevant documentation (or to an entry in MDN’s Glossary). Sidebar links are not generally counted when indexing web sites, so relying on them entirely for navigation will wreak utter havoc on SEO value.
  • If there are places in which the content can be expanded upon in a sensible way, do so. Are there details not mentioned or not covered in as much depth as they could be? Think about the content from the perspective of the first-time reader, or a newcomer to the technology in question. Will they be able to answer all possible questions they might have by reading this page or pages that the page links to through direct, in-content links?
  • What does the article assume the reader knows that they might not yet? Add the missing information. Anything that’s skimmed over, leaving out details and assuming the reader can figure it out from context needs to be fleshed out with more information.
  • Ensure that all of the API’s features are covered by examples. Ideally, every attribute on an element, keyword for a CSS property’s value, parameter for a method, and so forth should be used in at least one example.
  • Ensure that each example is accompanied by descriptive text. There should be an introduction explaining what the example does and what feature or features of the technology or API are demonstrated, as well as text that explains how the example works. For example, on an HTML element reference page, simply listing the attributes and then providing an example that only uses a subset of them isn’t enough. Add more examples that cover the other attributes, or at least the ones that are likely to be used by anyone who isn’t a deeply advanced user.
  • Do not simply add repetitive or unhelpful text. Beefing up the word count just to try to differentiate the page from other content is actually going to do more harm than good.
  • It’s frequently helpful to also change the pages which are considered to be duplicate pages. By making different changes to each of the similar pages, you can create more variations than by changing one page alone.

Pages to be updated

The pages we selected to update:


In most if not all of these cases, the pages which are similar are obvious.

The results

After making appropriate changes to the pages listed above as well as certain other pages which were similar to them, we allowed 60 days to pass. This is less than the ideal 90 days, or, better, six months, but time was short. We will check the data again in a few months to see how things change given more time.

The changes made were not always as extensive as would normally be preferred, again due to time constraints created by the one-person experimental model. When we do this work on a larger scale, it will be done by a larger community of contributors. In addition, much of this work will be done from the beginning of the writing process as we will be revising our contributor guide to incorporate the recommendations as appropriate after these experiments are complete.

Pre-experiment unvisited pages

As was the case with the “thin pages” experiment, pages which were getting literally zero—or even close to zero—traffic before the changes continued to not get much traffic. Those pages were:


For what it’s worth, we did eventually learn from this: experiments begun after the “thin pages” results were in no longer included pages which got no traffic during the period leading up to the start of the experiment. This experiment, however, had already begun running by then.

Post-experiment unvisited pages

There were also two pages which had a small amount of traffic before the changes were made but no traffic afterward. This is likely a statistical anomaly or fluctuation, so we’re discounting these pages for now:


The outcome

The remaining three pages had useful data. This is a small data set, but it’s what we have, so let’s take a look.

                  Sept. 16-Oct. 16           Nov. 18-Dec. 18
Page URL          Clicks     Impressions     Clicks     Impressions     Clicks Chg. %    Impressions Chg. %
                  678        8,751           862        13,435          27.14%           34.86%
                  1,193      3,724           1,395      7,700           14.48%           51.64%
                  447        2,961           706        7,906           36.69%           62.55%

For each of these three pages, the results are promising. Both click and impression counts are up for each of them, with click counts increasing by anywhere from 14% to 36% and impression counts increasing by between 34% and 62% (yes, I rounded down each of these values, since this is pretty rough data anyway). We’ll check again soon to see whether the results have changed further.


Because of certain implementation specifics of this experiment, there are obviously some uncertainties in the results:

  • We didn’t allow as much time as is recommended for the changes to fully take effect in search data before measuring the results. This was due to time constraints for the experiment being performed, but, as previously mentioned, we’ll look at the data again later to double-check our results.
  • The set of pages with useful results was very small, and even the original set of pages was fairly small.
  • There was substantial overall site growth during this period, so it’s likely the results are affected by this. However, the size of the improvements seen here suggests that even with that in mind, the outcome was significant.


After a team review of these results, we came to some conclusions. We’ll revisit them later, of course, if we decide that a review of the data later suggests changes be made.

  1. The amount of improvement seen strongly suggests we should prioritize fixing duplicate pages, at least in cases where one of the pages considered duplicates of one another is already getting a low-to-moderate amount of traffic or more.
  2. The MDN meta-documentation, in particular the writer’s guide and the guides to writing each of the types of reference content, will be updated to incorporate the recommendations into the general guidelines for contributing to MDN. Additionally, the article on MDN about writing SEO-friendly content will be updated to include this information.
  3. It turns out that many of the changes needed to fix “thin” pages also apply when fixing “duplicate” pages.
  4. We’ll re-evaluate prioritization after reviewing the latest data after more time has passed.

The most interesting thing I’m learning about SEO, I think, is that it’s really about making great content. If the content is really top-notch, SEO practically attends to itself. It’s all about having thorough, accurate content with solid internal and external links.


If you have comments or questions about this experiment or the changes we’ll be making, please feel free to follow up or comment on this thread on Mozilla’s Discourse site.

 Posted by at 7:02 PM
Nov 13, 2017

Last week, I wrote about the results of our “thin pages” (that is, pages too short to be properly cataloged by search engines) SEO experiment, in which we found that while there appeared to be gains in some cases after improving the pages considered too short, there was too much uncertainty, and too few cases in which gains seemed to occur at all, to justify making a full-fledged effort to fix every thin page on MDN.

However, we do want to try to avoid thin pages going forward! Having content that people can actually find is obviously important. In addition, we encourage contributors who are working on articles for other reasons, and who find those articles too short, to go ahead and expand them.

I’ve already updated our meta-documentation (that is, our documentation about writing documentation) to incorporate most of the recommendations for avoiding thin content. These changes are primarily in the writing style guide. I’ve also written the initial portions of a separate guide to writing for SEO on MDN.

For fun, let’s review the basics here today!

What’s a thin page?

A thin page is a page that’s too short for search engines to properly catalog and differentiate from other pages. Pages with fewer than 250-300 words of content text don’t provide enough context for search algorithms to reliably determine what the article is about, which means the page doesn’t wind up in the right place in search results.

For the purposes of computing the length of an article, the article’s length is the number of words of body content—that is, content that isn’t in headers, footers, sidebars, or similar constructs—plus the number of words located in alt text on <img> elements.
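
For illustration, here’s a rough sketch of that calculation in Python using BeautifulSoup. The set of elements treated as non-body content (header, footer, nav, aside) is my assumption about typical page structure, not MDN’s exact definition:

    from bs4 import BeautifulSoup

    def article_word_count(html):
        """Approximate an article's SEO-relevant length: words of body text
        plus words found in the alt text of <img> elements."""
        soup = BeautifulSoup(html, "html.parser")

        # Count alt-text words before removing anything.
        alt_words = sum(len(img.get("alt", "").split()) for img in soup.find_all("img"))

        # Drop headers, footers, sidebars, and navigation (assumed markup).
        for tag in soup.find_all(["header", "footer", "nav", "aside"]):
            tag.decompose()

        body_words = len(soup.get_text(" ").split())
        return body_words + alt_words

    # Anything under roughly 250-300 words would be flagged as "thin".
    print(article_word_count("<main><p>A very short article.</p></main>"))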

How to avoid thin pages

These tips are taken straight from the guidelines on MDN:

  • Keep an eye on the convenient word counter located in the top-right corner of the editor’s toolbar on MDN.
  • Obviously, if the article is a stub, or is missing material, add it. We try to avoid outright “stub” pages on MDN, although they do exist, but there are plenty of pages that are missing large portions of their content while not technically being a “stub.”
  • Generally review the page to ensure that it’s structured properly for the type of page it is. Be sure every section that it should have is present and has appropriate content.
  • Make sure every section is complete and up-to-date, with no information missing. Are all parameters listed and explained?
  • Be sure everything is fully fleshed-out. It’s easy to give a quick explanation of something, but make sure that all the nuances are covered. Are there special cases? Known restrictions that the reader might need to know about?
  • There should be examples covering all parameters or at least common sets of parameters. Each example should be preceded with an overview of what the example will do, what additional knowledge might be needed to understand it, and so forth. After the example (or interspersed among pieces of the example) should be text explaining how the code works. Don’t skimp on the details and the handling of errors in examples; readers will copy and paste your example to use in their own projects, and your code will wind up used on production sites! See Code examples and our Code sample guidelines for more useful information.
  • If there are particularly common use cases for the feature being described, talk about them! Instead of assuming the reader will figure out that the method being documented can be used to solve a common development problem, actually add a section with an example and text explaining how the example works.
  • Include proper alt text on all images and diagrams; this text counts, as do captions on tables and other figures.
  • Do not descend into adding repetitive, unhelpful material or blobs of keywords, in an attempt to improve the page’s size and search ranking. This does more harm than good, both to content readability and to our search results.

Reviewing the above guidelines and suggestions (some of which are admittedly pretty obvious) when confronted with pages that are just too short may help kick-start your creativity so you can write the needed material to ensure that MDN’s content drifts to the top of the turbid sea of web documentation and other content to be found on the Internet.

 Posted by at 12:50 PM
Nov 08, 2017

The MDN team has been working on a number of experiments designed to make decisions around prioritization of various forms of SEO problems we should strive to resolve. In this document, we will examine the results of our first such experiment, the “Thin Pages” experiment.

The goal of this experiment was to select a number of pages on MDN which are considered “thin”—that is, too short to be usefully analyzed—and update them using guidelines provided by our SEO contractor.

Once the changes were made and a suitable time had passed, we re-evaluated the pages’ statistics to determine whether or not the changes had an appreciable effect. With that information in hand, we then made a determination as to whether or not prioritizing this work makes sense.

The content updates

We selected 20 pages, choosing from across the /Web and /Learn areas of MDN and across the spectrum of usage levels. Some pages had little to no traffic at the outset, while others were heavily trafficked. Then, I went through these pages and updated them substantially, adding new content and modernizing their layouts in order to bring them up to a more useful size.

The changes were mostly common-sense ones:

  • Deleting pages that aren’t necessary (as it turns out, none of the pages we selected fell into this category).
  • Ensuring each page had all of the sections they’re meant to have.
  • Ensuring that every aspect of the topic is covered fully.
  • Ensuring that examples are included and cover an appropriate set of cases.
  • Ensuring that all examples include complete explanations of what they’re doing and how they work.
  • Ensuring that pages include not only the standard tags, but additional tags that may add useful keywords to the page.
  • Fleshing out ideas that aren’t fully covered.

The pages we chose to update are:

The results

After making the changes we were able to make in a reasonable amount of time, we then allowed the pages some time to percolate through Google’s crawler and such. Then we re-measured the impression and click counts, and the results were not quite what we expected.

First, of all the pages involved, only a few actually got any search traffic at all. The following pages were not seen at all by users searching on Google during either the starting or the ending analysis period:

The remaining pages did generally see measurable gains, some of them quite high, but none are clearly outside the range expected given MDN’s ongoing growth:

                  June 1-30                  Sept. 24-Oct. 23
Page URL          Clicks     Impressions     Clicks     Impressions     Clicks Chg. %    Impressions Chg. %
                  15         112             111        2600            640.00%          2221.43%
                  1789       6331            1866       9004            4.30%            42.22%
                  3151       60793           4729       100457          50.08%           65.24%
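
As a quick sanity check, the “Chg. %” columns are simply the relative change from the first measurement window to the second; a couple of lines of Python reproduce the first row:

    def pct_change(before, after):
        return (after - before) / before * 100

    print(f"Clicks:      {pct_change(15, 111):.2f}%")    # 640.00%
    print(f"Impressions: {pct_change(112, 2600):.2f}%")  # 2221.43%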

This is unfortunately not a very large data set, but we can draw some crude results from it. We’ll also continue to watch these pages over the next few months to see if there’s any further change.

The number of impressions went up, in some cases dramatically. But there’s just not enough here to be sure whether this was related to the thin page revisions or to other factors, such as the recent large-scale improvements to the HTML docs.


There are, as mentioned already, some uncertainties around these results:

  • The number of pages that had useful results was smaller than we would have preferred.
  • We had substantial overall site growth during the same time period, and certain areas of the site were heavily overhauled. Both of these facts may have impacted the results.
  • We only gave the pages a couple of months after making the changes before measuring the results. We were advised that six months is a more helpful time period to monitor (so we’ll look again in a few months).


After reviewing these results, and weighing the lack of solid data at this stage, we did come to some initial conclusions, which are open to review if the numbers change going forward:

  1. We won’t launch a full-scale formal project around fixing thin pages. It’s just not worth it given the dodginess of the numbers we have thus far.
  2. We will, however, update the meta-documentation to incorporate the recommendations around thin pages. That means providing advice about the kinds of content to include, reminding people to be thorough, reminding writers to include plenty of samples that cover a variety of use cases and situations, and so forth. We will also add a new “SEO” area to the meta docs that covers these recommendations more explicitly in terms of their SEO impact.
  3. We will check these numbers again in a couple of months to see if there’s been further improvement. The recommendation was to wait six months for results, but we did not have that kind of time.


For discussion of this experiment, and of the work updating MDN that will come from it, I encourage you to follow up or comment in this thread on the Mozilla Discourse site.

 Posted by at 11:22 AM
May 18, 2017

My health continues to be an adventure. My neuropathy continues to worsen steadily; I no longer have any significant sensation in many of my toes, and my feet are always in a state of “pins and needles” style numbness. My legs are almost always tingling so hard they burn, or feel like they’re being squeezed in a giant fist, or both. The result is that I have some issues with my feet not always doing exactly what I expect them to be doing, and I don’t usually know exactly where they are.

For example, I have voluntarily stopped driving for the most part, because much of the time, sensation in my feet is so bad that I can’t always tell whether my feet are in the right places. A few times, I’ve found myself pressing the gas and brake pedals together because I didn’t realize my foot was too far to the left.

I also trip on things a lot more than I used to, since my feet wander a bit without my realizing it. On January 2, I tripped over a chair in my office while carrying an old CRT monitor to store it in my supply cabinet. I went down hard on my left knee and landed on the monitor I was carrying, taking it squarely to my chest. My chest was okay, just a little sore, but my knee was badly injured. The swelling was pretty brutal, and it is still trying to finish healing up more than four months later.

Given the increased problems with my leg pain, my neurologist recently had an MRI performed on my lumbar (lower) spine. An instance of severe nerve root compression was found which is possibly contributing to my pain and numbness in my legs. We are working to schedule for them to attempt to inject medication at that location to try to reduce the swelling that’s causing the compression. If successful, that could help temporarily relieve some of my symptoms.

But the neuropathic pain in my neck and shoulders continues as well. There is some discussion of possibly once again looking at using a neurostimulator implant to try to neutralize the pain signals that are being falsely generated. Apparently I’m once again eligible for this after a brief period where my symptoms shifted outside the range of those which are considered appropriate for that type of therapy.

In addition to the neurological issues, I am in the process of scheduling a procedure to repair some vascular leaks in my left leg, which may be responsible for some swelling there that could be in part responsible for some of my leg trouble (although that is increasingly unlikely given other information that’s come to light since we started scheduling that work).

Then you can top all that off with the side effects of all the meds I’m taking. I take at least six medications which have the side effect of “drowsiness” or “fatigue” or “sleepiness.” As a result, I live in a fog most of the time. Mornings and early afternoons are especially difficult. Just keeping awake is a challenge. Being attentive and getting things written is a battle. I make progress, but slowly. Most of my work happens in the afternoons and evenings, squeezed into the time between my meds easing up enough for me to think more clearly and alertly, and time for my family to get together for dinner and other evening activities together.

Balancing work, play, and personal obligations when you have this many medical issues at play is a big job. It’s also exhausting in and of itself. Add the exhaustion and fatigue that come from the pain and the meds, and being me is an adventure indeed.

I appreciate the patience and the help of my coworkers and colleagues more than I can begin to say. Each and every one of you is awesome. I know that my unpredictable work schedule (between having to take breaks because of my pain and the vast number of appointments I have to go to) causes headaches for everyone. But the team has generally adapted to cope with my situation, and that above all else is something I’m incredibly grateful for. It makes my daily agony more bearable. Thank you. Thank you. Thank you.

Thank you.

 Posted by at 11:15 AM
May 12, 2017

I’ve been writing developer documentation for 20 years now, 11 of those years at Mozilla. For most of those years, documentation work was largely unmanaged. That is to say, we had management, and we had goals, but how we reached those goals was entirely up to us. This worked well for me in particular. My brain is like a simple maze bot in some respects, following all the left turns until it reaches a dead end, then backing up to where it made the last turn and taking the next path to the right, and repeating until the goal has been reached.

This is how I wrote for a good 14 or 15 years of my career. I’d start writing about a topic, linking to APIs, functions, other guides and tutorials, and so forth along the way—whether they already existed or not. Then I’d go back through the page and click the first link on the page I just created, and I’d make sure that that page was solid. Any material on that page that needed to be fixed for my new work to be 100% understood, I’d update. If there were any broken links, I’d fix them, creating and writing new pages as needed, and so forth.

How my mind wants to do it

Let’s imagine that the standards gurus have spoken and have decided to add a new <dial> element to HTML, providing support for creating knobs and speedometer-style feedback displays. My job is to document this element.

I start by creating the main article in the HTML reference for <dial>, and I write that material, starting with a summary (which may include references to <progress>, <input>, and other elements and pages). It may also include links to articles I plan to create, such as “Using dial elements” and “Displaying information in HTML” as well as articles on forms.

As I continue, I may wind up with links to subpages which need to be created; I’ll also wind up with a link to the documentation for the HTMLDialElement interface, which obviously hasn’t been written yet. I also will have links to subpages of that, as well as perhaps for other elements’ attributes and methods.

Having finished the document for <dial>, I save it, review it and clean it up, then I start following all the links on the page. Any links that take me to a page that needs to be written, I write it. Any links that take me to a page that needs content added because of the new element, I expand them. Any links that take me to a page that is just horribly unusably bad, I update or rewrite as needed. And I continue to follow those left-hand turns, writing or updating article after article, until eventually I wind up back where I started.

If one of those pages is missing an example, odds are good it’ll be hard to resist creating one, although if it will take more than a few minutes, this is where I’m likely to reluctantly flag it for someone else to do later, unless it’s really interesting and I am just that intrigued.

By the time I’m done documenting <dial>, I may also have updated badly out of date documentation for three other elements and their interfaces, written pages about how to decide on the best way to represent your data, added documentation for another undocumented element that has nothing to do with anything but it was a dead link I saw along the way, updated another element’s documentation because that page was where I happened to go to look at the correct way to structure something, and I saw it had layout problems…

You get the idea.

How I have to do it now

Unfortunately, I can’t realistically do that anymore. We have adopted a system of sprints with planned work for each sprint. Failing to complete the work in the expected amount of time tends to get you dirty looks from more and more people the longer it goes on. Even though I’m getting a ton accomplished, it doesn’t count if it’s not on the sprint plan.

So I try to force myself to work on only the stuff directly related to the sprint we’re doing. But sometimes the line is hard to find. If I add documentation for an interface, but the documentation for its parent interface is terrible, it seems to me that updating that parent interface is a fairly obvious part of my job for the sprint. But it wasn’t budgeted into the time available, so if I do it, I’m not going to finish in time.

The conundrum

That leaves me in a bind: do strictly what I’m supposed to do, leaving behind docs that are only partly usable, or follow at least some of those links into pages that need help before the new content is truly usable and “complete,” but risk missing my expected schedule.

I almost always choose the latter, going in knowing I’m going to be late because of it. I try to control my tendency to keep making left turns, but sometimes I lose myself in the work and write stuff I am not expected to be doing right now.

Worse, though, is that the effort of restraining myself to just writing what’s expected is unnatural to me. My brain rebels a bit, and I’m quite sure my overall throughput is somewhat lower because of it. As a result: a less enjoyable writing experience for me, less overall content created, and unmet goals.

I wonder, sometimes, how my work results would look if I were able to cut loose and just go again. I know I have other issues slowing me down (see my earlier blog post Peripheral neuropathy and me), but I can’t help wondering if I could be more productive by working how I think, instead of doing what doesn’t come naturally: work on a single thing from A to Z without any deviation at all for any reason.

 Posted by at 10:30 AM
Apr 03, 2017

As of today—April 3, 2017—I’ve been working as a Mozilla staffer for 11 years. Eleven years of documenting the open Web, as well as, at times, certain aspects of the guts of Firefox itself. Eleven years. Wow. I wrote in some detail last year about my history at Mozilla, so I won’t repeat the story here.

I think 2017 is going to be a phenomenal year for the MDN team. We continue to drive forward on making open web documentation that can reach every web developer regardless of skill level. I’m still so excited to be a part of it all!

A little fox that Sophie got me

Last night, my eleven-year-old daughter (born about 10 months before I joined Mozilla) brought home this fox beanie plush for me. I don’t know what prompted her to get it—I don’t think she’s aware of the timing—but I love it! It may or may not actually be a red panda, but it has a very Firefox look to it, and that’s good enough for me.

 Posted by at 7:12 AM
Oct 19, 2016

One of the most underappreciated features of Firefox’s URL bar and its bookmark system is its support for custom keyword searches. These let you create special bookmarks so that you can type a keyword followed by other text; that text is inserted into a URL uniquely identified by the keyword, and the resulting URL is then loaded. This lets you type, for example, “quote aapl” to get a stock quote for Apple Inc.
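
As a concrete illustration of the mechanics (the stock-quote URL is just an example I’m using here, not a recommendation), the bookmark behind that “quote” keyword might look something like this; Firefox replaces the %s placeholder with whatever you type after the keyword:

    Name:     Stock quote
    Keyword:  quote
    Location: https://finance.yahoo.com/quote/%s

So typing “quote aapl” in the URL bar loads https://finance.yahoo.com/quote/aapl.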

You can check out the article I linked to previously (and here, as well, for good measure) for details on how to actually create and use keyword searches. I’m not going to go into details on that here. What I am going to do is share a few keyword searches I’ve configured that I find incredibly useful as a programmer and as a writer on MDN.

For web development

Here are the search keywords I use the most as a web developer.

Keyword Description URL
if Opens an API reference page on MDN given an interface name.
elem Opens an HTML element’s reference page on MDN.
css Opens a CSS reference page on MDN.
fx Opens the release notes for a given version of Firefox, given its version number.
mdn Searches MDN for the given term(s) using the default filters, which generally limit the search to include only pages most useful to Web developers.
mdnall Searches MDN for the given term(s) with no filters in place.

For documentation work

When I’m writing docs, I actually use the above keywords a lot, too. But I have a few more that I get a lot of use out of as well.

Keyword Description URL
bug Opens the specified bug in Mozilla’s Bugzilla instance, given a bug number.
bs Searches Bugzilla for the specified term(s).
dxr Searches the Mozilla source code on DXR for the given term(s).
file Looks for files whose name contains the specified text in the Mozilla source tree on DXR.
ident Looks for definitions of the specified identifier (such as a method or class name) in the Mozilla code on DXR.
func Searches for the definition of function(s)/method(s) with the specified name, using DXR.
t Opens the specified MDN KumaScript macro page, given the template/macro name.
wikimo Searches wiki.mozilla.org for the specified term(s).

Obviously, DXR is a font of fantastic information, and I suggest clicking the “Operators” button at the right end of the search bar there to see a list of the available filters; building search keywords for many of these filters can make your life vastly easier, depending on your specific needs and work habits!

 Posted by at 5:33 PM
Jul 26, 2016

The Web moves pretty fast. Things are constantly changing, and the documentation content on the Mozilla Developer Network (MDN) is constantly changing, too. The pace of change ebbs and flows, and often it can be helpful to know when changes occur. I hear this most from a few categories of people:

  • Firefox developers who work on the code which implements a particular technology. These folks need to know when we’ve made changes to the documentation so they can review our work and be sure we didn’t make any mistakes or leave anything out. They often also like to update the material and keep up on what’s been revised recently.
  • MDN writers and other contributors who want to ensure that content remains correct as changes are made. With so many people making changes to some of our content, keeping up and being sure that mistakes aren’t made and that style guides are followed is important.
  • Contributors to specifications and members of technology working groups. These are people who have a keen interest in knowing how their specifications are being interpreted and implemented, and in the response to what they’ve designed. The text of our documentation and any code samples, and changes made to them, may be highly informative for them to that end.
  • Spies. Ha! Just kidding. We’re all about being open in the Mozilla community, so spies would be pretty bored watching our content.

There are a few ways to watch content for changes, from the manual to the automated. Let’s take a look at the most basic and immediately useful tool: MDN page and subpage subscriptions.

Subscribing to a page

[Animation showing how to subscribe to a single MDN page]

After logging into your MDN account (creating one if you don’t already have one), make your way to the page you want to subscribe to. Let’s say you want to be sure nobody messes around with the documentation about <marquee> because, honestly, why would anyone need to change that anyway?

Find the Watch button near the top of the MDN page; it’s a drawing of an eye. In the menu that opens when you hover over that icon, you’ll find the option “Subscribe to this page.” Simply click that. From then on, each time someone makes a change to the page, you’ll get an email. We’ll talk about that email in a moment.

First, we need to consider another form of content subscriptions: subtree or sub-article subscriptions.

Subscribing to a subtree of pages


 Posted by at 8:15 PM
Apr 19, 2016

One great thing about watching the future of the Web being planned in the open is that you can see how smart people having open discussions, combined with the occasional sudden realization, epiphany, or unexpected spark of creative genius can make the Web a better place for everyone.

This is something I’m reminded of regularly when I read mailing list discussions about plans to implement new Web APIs or browser features. There are a number of different kinds of discussion that take place on these mailing lists, but the ones that have fascinated me the most lately have been the “Intent to…” threads.

There are three classes of “Intent to…” thread:

  • Intent to implement. This thread begins with an announcement that someone plans to begin work on implementing a new feature. This could be an entire API, or a single new function, or anything in between. It could be a change to how an existing technology behaves, for that matter.
  • Intent to ship. This thread starts with the announcement that a feature or technology which has been implemented, or is in the process of being implemented, will be shipped in a particular upcoming version of the browser.
  • Intent to unship. This thread starts by announcing that a previously shipped feature will be removed in a given release of the software. This usually means rolling back a change that had unexpected consequences.

In each of these cases, discussion and debate may arise. Sometimes the discussion is very short, with a few people agreeing that it’s a good (or bad) idea, and that’s that. Other times, the discussion becomes very lengthy and complicated, with proposals and counter-proposals and debates (and, yes, sometimes arguments) about whether it’s a good idea or how to go about doing it the best way possible.

You know… I just realized that this change could be why the following sites aren’t working on nightly builds… maybe we need to figure out a different way to do this.

This sounds great, but what if we add a parameter to this function so we can make it more useful to a wider variety of content by…

The conversation frequently starts innocuously enough, with general agreement or minor suggestions that might improve the implementation, and then, sometimes, out of nowhere someone points out a devastating and how-the-heck-did-we-miss-that flaw in the design that causes the conversation to shift into a debate about the best way to fix the design. Result: a better design that works for more people with fewer side effects.

These discussions are part of what makes the process of inventing the Web in the open great. Anyone who has an interest can offer a suggestion or insight that might totally change the shape of things to come. And by announcing upcoming changes in threads such as these, developers make it easier than ever to get involved in the design of the Web as a platform.

Mozilla is largely responsible for the design process of the Web being an open one. Before our global community became a force to be reckoned with, development crawled along inside the walls of one or two corporate offices. Now, dozens of companies and millions of people are active participants in the design of the Web and its APIs. It’s a legacy that every Mozillian—past, present, and future—can be very proud of.

 Posted by at 11:00 AM