Nov 09 2020

Following my departure from the MDN documentation team at Mozilla, I’ve joined the team at Amazon Web Services to document the AWS SDKs. Which specific SDKs I’ll cover is still being finalized; we have some in mind, but we’re still deciding who will prioritize which, so I’ll hold off on being more specific.

Instead of going into details on my role on the AWS docs team, I thought I’d write a new post that provides a more in-depth introduction to myself than the little mini-bios strewn throughout my onboarding forms. Not that I know if anyone is interested, but just in case they are…


Now a bit about me. I was born in the Los Angeles area, but my family moved around a lot when I was a kid due to my dad’s work. By the time I left home for college at age 18, we had lived in California twice (where I began high school), Texas twice, Colorado twice, and Louisiana once (where I graduated high school). We also lived overseas once, near Pekanbaru, Riau Province on the island of Sumatra in Indonesia. It was a fun and wonderful place to live. Even though we lived in a company town called Rumbai, and were thus partially isolated from the local community, we still lived in a truly different place from the United States, and it was a great experience I cherish.

My wife, Sarah, and I have a teenage daughter, Sophie, who’s smarter than both of us combined. When she was in preschool and kindergarten, we started talking about science in pretty detailed ways. I would give her information and she would draw conclusions that were often right, thinking up things that essentially were black holes, the big bang, and so forth. It was fun and exciting to watch her little brain work. She’s in her “nothing is interesting but my 8-bit chiptunes, my game music, the games I like to play, and that’s about it” phase, and I can’t wait until she’s through that and becomes the person she will be in the long run. Nothing can bring you so much joy, pride, love, and total frustration as a teenage daughter.


I got my first taste of programming on a TI-99/4 computer bought by the Caltex American School there. We were all given a short programming lesson just to get a taste of it, and I was captivated from the first day, when I got it to print a poem based on “Roses are red, violets are blue” to the screen. We were allowed to book time during recesses and before and after school to use the computer, and I took every opportunity I could to do so, learning more and more and writing tons of BASIC programs on it.

This included my masterpiece: a game called “Alligator!” in which you drove an alligator sprite around with the joystick, eating frogs that bounced around the screen while trying to avoid the dangerous bug that was chasing you. When I discovered that other kids were using their computer time to play it, I was permanently hooked.

I continued to learn and expand my skills over the years, moving on to various Apple II (and clone) computers, eventually learning 6502 and 65816 assembly, Pascal, C, dBase IV, and other languages.

During my last couple years of high school, I got a job at a computer and software store (which was creatively named “Computer and Software Store”) in Mandeville, Louisiana. In addition to doing grunt work and front-of-store sales, I also spent time doing programming for contract projects the store took on. My first was a simple customer contact management program for a real estate office. My most complex completed project was a three-store cash register system for a local chain of taverns and bars in the New Orleans area. I also built most of a custom inventory, quotation, and invoicing system for the store, though this was never finished due to my departure to start college.

The project of which I was most proud: a program for tracking information and progress for special education students and students with special needs in the St. Tammany Parish School District. This was built to run on an Apple IIe computer, and I was delighted to have the chance to build a real workplace application for my favorite platform.

I attended the University of California—Santa Barbara with a major in computer science. I went through the program and reached the point where I only needed 12 more units (three classes): one general education class under the subject area “literature originating in a foreign language” and two computer science electives. When I went to register for my final quarter, I discovered that none of the computer science classes offered that quarter were ones I could take to fulfill my requirements (I’d either already taken them or they were outside my focus area, and thus wouldn’t apply).

Thus I withdrew from school for a quarter with the plan that I would try again for the following quarter. During the interim, I became engaged to my wife and took my first gaming industry job (where they promised to allow me leave, with benefits, for the duration of my winter quarter back at school). As the winter approached, I drove up to UCSB to get a catalog and register in person. Upon arrival, I discovered that, once again, there were no computer science offerings that would fulfill my requirements.

At that point, I was due to marry in a few months (during the spring quarter), my job was getting more complicated due to the size and scope of the projects I was being assigned, and the meaningfulness of the degree was decreasing as I gained real work experience.

Thus, to this day, I remain just shy of my computer science degree. I have regrets at times, but never so strong that it makes sense to go back, especially given the changed requirements and the amount of extra coursework I’d probably have to take. My experience at work now well outweighs the benefits of the degree.

The other shoe…

Time to drop that other shoe. I have at least one, and possibly two, chronic medical conditions that complicate everything I do. If there are two such conditions, they’re similar enough that I generally treat them as one when communicating with non-doctors, so that’s what I’ll do here.

I was born with tibiae (lower leg bones) that were twisted outward abnormally. This led to me walking with feet that were pointed outward at about a 45° angle from normal. This, along with other less substantial issues with the structure of my legs, was always somewhat painful, especially when doing things which really required my feet to point forward, such as walking on a narrow beam, skating, or trying to water ski.

Over time, this pain became more constant, and began to change. I noticed numbness and tingling in my feet and ankles, along with sometimes mind-crippling levels of pain. In addition, I developed severe pain in my neck, shoulders, and both arms. Over time, I began to have spasms, in which it felt like I was being blasted by an electric shock, causing my arms or legs to jerk wildly. At times, I would find myself uncontrollably lifting my mouse off the desk and slamming it down, or even throwing it to the floor, due to these spasms.

Basically, my nervous system has run amok. Due, we think, to a lifetime of strain and pressure caused by my skeletal structure issues, the nerves have been worn and become permanently damaged in various places. This can’t be reversed, and the pain I feel every moment of every day will never go away.

I engage in various kinds of therapy, both physical and medicinal, and the pain is kept within the range of “ignorable” much of the time. But sometimes it spikes out of that range and becomes impossible to ignore, leading to me not being able to focus clearly on what I’m doing. Plus there are times when it becomes so intense, all I can do is curl up and ride it out. This can’t be changed, and I’ve accepted that. We do our best to manage it and I’m grateful to have access to the medical care I have.

This means, of course, that I have times during which I am less attentive than I would like to be, or have to take breaks to ride out pain spikes. On less frequent occasions, I have to write a day off entirely when I simply can’t shake the pain at all. I have to keep warm, too, since cold (even a mild chill or a slight cooling breeze) can trigger the pain to spike abruptly.

But I’ve worked out systems and skills that let me continue to do my work despite all this. The schedule can be odd, and I have to pause my work for a while a few times a week, but I get the job done.

I wrote a longer and more in-depth post about my condition and how it feels a few years ago, if you want to know more.

Hobbies and free time

My very favorite thing to do in my free time is to write code for my good old Apple II computers. Or, barring that, to work on the Apple IIgs emulator for the Mac that I’m the current developer for. I dabble in Mac and iOS code and have done a few small projects for my own use. However, due to my medical condition, I don’t get to spend as much time doing that as I’d like.

With the rest of my free time, I love to watch movies and read books. So very, very many books. My favorite genres are science fiction and certain types of mysteries and crime thrillers, though I dabble in other areas as well. I probably read, on average, four to six novels per week.

I also love to play various types of video games. I have an Xbox One, but I don’t play on it much (I got it mostly to watch 4K Blu-Ray discs on it), though I do have a few titles for it. I’ve played and enjoyed the Diablo games as well as StarCraft, and have been captivated by many of the versions of Civilization over the years (especially the original and Civ II). I also, of course, play several mobile games.

My primary gaming focus has been World of Warcraft for years. I was a member of a fairly large guild for a few years, then joined a guild that split off from it, but that guild basically fell apart. Only a scant handful of us are left, and none of us really feels like hunting for another guild; since we enjoy chatting while we play, we do fine. It’d be interesting, though, to find coworkers who play and spend some time with them.


I began my career as a software engineer for a small game development shop in Southern California. My work primarily revolved around porting games from other platforms to the Mac, though I did also do small amounts of Windows development at times. I ported a fair number of titles to Mac, most of which most people probably don’t remember—if they ever heard of them at all—but some of them were actually pretty good games. I led development of a Mac/Windows game for kids which only barely shipped due to changing business conditions with the publisher, and by then I was pretty much fed up with the stress of working in the games industry.

Over the years, I’ve picked up, or at least used, a number of languages, including C++ (though I’ve not written any in years now), JavaScript, Python, Perl, and others. I love trying out new languages, and that list leaves out many more that I experimented with and abandoned over the years.


By that point, I was really intrigued by the Be Operating System, and started trying to get a job there. The only opening I wasn’t painfully under-qualified for was a technical writing position. It occurred to me that I’d always written well (according to teachers and professors, at least), so I applied and eventually secured a junior writer position on the BeOS documentation team.

Within seven months, I was a senior technical writer and was deeply involved in writing the Be Developers Guide (more commonly known as the Be Book) and Be Advanced Topics (which are still the only two published books which contain anything I’ve written). Due to the timing of my arrival, very little of my work is in the Be Book, but broad sections of Be Advanced Topics were written by me.


Unfortunately, for various reasons, BeOS wasn’t able to secure a large enough chunk of the market to succeed, and our decline led to our acquisition by Palm, the manufacturer of the Palm series of handhelds. We were merged into the various teams there, with me joining the writing team for Palm OS documentation.

Within just a handful of weeks, the software team, including those of us documenting the operating system, was spun off into a new company, PalmSource. There, I worked on documentation updates for Palm OS 5, as well as updated and brand-new documentation for Palm OS 6 and 6.1.

My most memorable work was done on the rewrites and updates of the Low-Level Communications Guide and the High-Level Communications Guide, as well as on various documents for licensees’ use only. I also wrote the Binder Guide for Palm OS 6, and various other things. It was fascinating work, though ultimately and tragically doomed due to the missteps with the Palm OS 6 launch and the arrival of the iPhone three years after my departure from PalmSource.

Unfortunately, neither Palm OS 6 nor its follow-up, version 6.1, saw significant use during my tenure, due to the vast changes made to the platform in order to modernize the operating system for future devices. Palm OS 6 was also sluggish, though 6.1 was quite peppy and really very nice. But the writing was on the wall, and though I survived three or four rounds of layoffs, eventually I was cut loose.

After PalmSource let me go, I returned briefly to the games industry, finalizing the ports of a couple of games to the Mac for Reflexive Entertainment (now part of Amazon): Ricochet Lost Worlds and Big Kahuna Reef. I departed Reflexive shortly after completing those projects, deciding I wanted to return to technical writing, or at least something not in the game industry.


That’s why my friend Dave Miller, who was at the time on the Bugzilla team as well as working in IT for Mozilla, recommended me for a Mac programming job at Mozilla. When I arrived for my in-person interviews, I discovered that they had noticed my experience as a writer and had shunted me over to interview for a writing job instead. After a very long day of interviews and a great deal of phone and email tag, I was offered a job as a writer on the Mozilla Developer Center team.

And there I stayed for 14 years, through the changes in documentation platform (from MediaWiki to DekiWiki to our home-built Kuma platform), the rebrandings (to Mozilla Developer Network, then to simply MDN, then MDN Web Docs), and the growth of our team over time. I began as the sole full-time writer, and at our peak as a team we had five or six writers. The MDN team was fantastic to work with, and the work had enormous meaning. We became, over time, the de facto official documentation site for web developers.

My work spanned a broad range of topics, from the basics of HTML and CSS to the depths of WebRTC, WebSockets, and other, more arcane topics. I loved doing the work: the deep dives into the source code of the major browsers to ensure I understood how and why things work, and the reading of the specifications for new APIs.

Knowing that my work was not only used by millions of developers around the world, but was being used by teachers, professors, and students, as well as others who just wanted to learn, was incredibly satisfying. I loved doing it and I’m sure I’ll always miss it.

But between the usual business pressures and the great pandemic of 2020, I found myself laid off by Mozilla in mid-2020. That leads me to my arrival at Amazon a few weeks later, where I started on October 26.


So here we are! I’m still onboarding, but soon enough, I’ll be getting down to work, making sure my parts of the AWS documentation are as good as they can possibly be. Looking forward to it!

Posted at 1:21 PM
Aug 18 2020

By now, most folks have heard about Mozilla’s recent layoff of about 250 of its employees. It’s also fairly well known that the entire MDN Web Docs content team was let go, aside from our direct manager, the eminently-qualified and truly excellent Chris Mills. That, sadly, includes myself.

Yes, after nearly 14½ years writing web developer documentation for MDN, I am moving on to new things. I don’t know yet what those new things are, but the options are plentiful and I’m certain I’ll land somewhere great soon.

Winding down

But it’s weird. I’ve spent over half my career as a technical writer at Mozilla. When I started, we were near the end of documenting Firefox 1.5, whose feature features (sorry) were the exciting new <canvas> element and CSS improvements including CSS columns. A couple of weeks ago, I finished writing my portions of the documentation for Firefox 80, for which I wrote about changes to WebRTC and Web Audio, as well as the Media Source API.

Indeed, in my winding-down days, when I’m no longer assigned specific work to do, I find myself furiously writing as much new material as I can for the WebRTC documentation, because I think it’s important, and there are just enough holes in the docs as it stands to make life frustrating for newcomers to the technology. I won’t be able to fix them all before I’m gone, but I’ll do what I can.

Because that’s how I roll. I love writing developer documentation, especially for technologies for which no documentation yet exists. It’s what I do. Digging into a technology and coding and debugging and re-coding (and cursing and swearing a bit, perhaps) until I have working code that ensures I understand what I’m going to write about is a blast! Then I use that code, and what I learned while creating it, to create documentation that helps developers avoid at least some of the debugging (and cursing and swearing a bit, perhaps) that I had to go through.

The thrill of creation is only outweighed by the deep-down satisfaction that comes from knowing that what you’ve produced will help others do what they need to do faster, more efficiently, and—possibly most importantly—better.

That’s the dream.

Wrapping up

Anyway, I will miss Mozilla terribly; it was a truly wonderful place to work. I’ll miss working on MDN content every day; it was my life from the day I joined as the sole full-time writer, through the hiring and departure of several other writers, until the end.

First, let me thank the volunteer community of writers, editors, and translators who’ve worked on MDN in the past—and who I hope will continue to do so going forward. We need you more than ever now!

Jean-Luc Picard demonstrates the facepalm

Me, if I’ve forgotten to mention anyone.

Then there are our staff writers, both past and present. Jean-Yves Perrier left the team a long while ago, but he was a fantastic colleague and a great guy. Jérémie Patonnier was a blast to work with and a great asset to our team. Paul Rouget, too, was a great contributor and a great person to work with (until he moved on to engineering roles; then he became a great person to get key information from, because he was so easy to engage with).

Chris Mills, our amazing documentation team manager and fabulous writer in his own right, will be remaining at Mozilla, and hopefully will find ways to make MDN stay on top of the heap. I’m rooting for you, Chris!

Florian Scholz, our content lead and the youngest member of our team (a combination that tells you how amazing he is) was a fantastic contributor from his school days onward, and I was thrilled to have him join our staff. I’m exceptionally proud of his success at MDN.

Janet Swisher, who managed our community as well as writing documentation, may have been the rock of our team. She’s been a steadfast and reliable colleague and a fantastic source of advice and ideas. She kept us on track a lot of times when we could have veered sharply off the rails and over a cliff.

Will Bamberg has never been afraid to take on a big project. From developer tools to browser extensions to designing our new documentation platform, I’ve always been amazed by his focus and his ability to do big things well.

Thank you all for the hard work, the brilliant ideas, and the devotion to making the web better by teaching developers to create better sites. We made the world a little better for everyone in it, and I’m very, very proud of all of us.

Farewell, my friends.

Posted at 1:10 PM
Apr 14 2020

First, let’s get the most important part out of the way: I hope you and yours are faring as well as possible during these difficult times. Though much of what’s happening was inescapable, enough mistakes were made that we find ourselves in an essentially generation-defining crisis. This is the time my daughter will point at and tell her kids, “You think you have it rough? Back in my day…”

Please do everything you can to contain the spread of COVID-19. Stay home to the greatest possible extent. Don’t have visitors, or go out visiting others, and if you must interact with other people, stay as far apart as can be managed—at least six feet—and don’t allow your hands or face to come into contact with anything that others have touched. And, of course, wash those hands. A lot.

Surprisingly okay

Generally speaking, I’ve been getting along surprisingly well during all this. I’ve been working primarily from home for over 20 years, and due to my previously mentioned medical issues, I’ve not been driving for a couple of years now, so I’m mostly at home anyway. The direct changes to my personal existence are thus pretty minimal. It mostly comes down to the increasing difficulty of getting the things I want or need: shopping has become much more complicated for my wife (who has always been a total rock star when it comes to compensating for the drop in my contributions to keeping our family running), and deliveries of even basic grocery items are hard to arrange.

But generally, I keep on keeping on.

My wife is obviously stressed a bit, between having my daughter home all the time and the much-increased demand on her for errands and so forth. She took a couple days to go chill out at a home my in-laws have up in the mountains, because she needed a well-deserved break, but she’s back at it now.

Sophie, my daughter, is stir crazy as all get out. Given that her favorite pastime is lying flopped across her bed texting with her friends, you’d think that she’d be happy as can be, but apparently it’s different when you have basically no choice. She’s got remote learning to do through the school, which is great, but she’s not keeping on top of it well, and her grades are not as good as they should be. She’s at an age where it’s difficult for us to manage her schoolwork, especially since we don’t always know what she needs to be doing.

On top of all that, she misses her friends. Yeah, mostly they interact virtually these days, but the times when they do get together in real life apparently were still key to her well-being.

Life these days would be a fascinating social experiment if not for the horrible human toll it takes…

We’re all fine down here. How are you?

So, my family is okay, for the most part. And my work is going along largely the same as always, other than the understandable slowdown due to just… processing what’s going on in the wider world around me. You can’t go along unaffected by the dreadfulness of it all.

If, like me, you’re soldiering on at home… but, unlike me, doing it for the first time or for the first extended stretch of time, welcome to the family of home working! I hope you’ve adapted well to it. It does take getting used to, especially if you’re someone who either enjoys lots of direct human interaction or finds it necessary for business reasons. There’s not much I can do about the “direct” part there.

But working at home isn’t so bad. Ideally, you already have a home office or den you can work in. If not, do what you can to find a good spot away from the main flow of your household action in which you can sequester yourself to work. Working while family things go on around you is not easy to do, even when you have years of home working experience. Fortunately, it shouldn’t be too hard to adjust to with a little effort.

I could sit here and type out a long post with all the suggestions in the world for how to properly work from home, but here’s the thing: I can’t tell you that. Everyone has their own needs and preferences. Some people need to feel like they’re in the middle of some action to slip into the zone. Other people need silence, with no distractions. Do you need music? Office noises? Standing or sitting? Desk or sofa?

These are decisions that, really, only you can make. I guess I’m left with only three pieces of advice I’m comfortable offering.

First, find what works for you. Experiment. Don’t let anyone dictate how you should work at home, when they aren’t you. If setting up your work space at home to be as close as possible to your desk at work makes you the most comfortable and productive you can be, then that’s your answer. On the other hand, if you find that enveloping yourself in whatever chaos might be going on around you at home is somehow comforting and helps you get things done, then by all means, go with it.

Second, if you feel like the way your household operates conflicts with your typical daily work hours, try talking to your manager or supervisor about amending your work schedule. If you need to take shifts supervising your kids’ remote learning each day, try to work with your employer to arrange to swap out those hours for some other hours during the week. Start work early or work late in order to have that time in the afternoon or morning for your kids. Or put in a few hours on Saturday. Find ways to alter your work schedule to fit in better with working at home.

Third, set up a back channel for chatting with your coworkers. This can be a chat room on your service of choice, or a video channel people can just drop into and out of to chitchat in the kind of way people in an office do over the cubicle walls or when bumping into each other at the coffee machine. This is especially useful if your company already has remote workers you can talk to and get insights and support from. Indeed, odds are good they already have a place for talking about the remote experience at your company.

Flexibility is key when working from home. Ideally, both your home and your work are willing to make adjustments to let you do your best possible work while keeping your home life intact.

Stay home and stay safe

These days, the best thing you can do to slow the spread of COVID-19 until a vaccine can be found is to stay at home. Hopefully with a little effort, patience, and flexibility, you’ll find a comfortable way to do so.

Take care, and stay away from me. I don’t know where you’ve been.

Posted at 12:19 PM
Aug 06 2019

Picture of an old clock

One of my most dreaded tasks is that of estimating how long tasks will take to complete while doing sprint planning. I have never been good at this, and it has always felt like time stolen away from the pool of hours available to do what I can’t help thinking of as “real work.”

While I’m quite a bit better at the time estimating process than I was a decade ago—and perhaps infinitely better at it than I was 20 years ago—I still find that I, like a lot of the creative and technical professionals I know, dread the process of poring over bug and task lists, project planning documents, and the like in order to estimate how long things will take to do.

This is a particularly frustrating process when dealing with tasks that may be nested, have multiple—often not easily detected ahead of time—dependencies, and may involve working with technologies that aren’t actually as ready for prime time as expected. Add to that the fact that your days are filled with distractions, interruptions, and other tasks you need to deal with, and predicting how long a given project will take can start to feel like a guessing game.

The problem isn’t just one of coming up with the estimates. There’s a more fundamental problem of how to measure time. Do you estimate projects in terms of the number of work hours you’ll invest in them? The number of days or weeks you’ll spend on each task? Or some other method of measuring duration?

Hypothetical ideal days

On the MDN team, we have begun over the past year to use a time unit we call the hypothetical ideal day or simply ideal day. This is a theoretical time unit in which you are able to work, uninterrupted, on a project for an entire 8-hour work day. A given task may take any appropriate number of ideal days to complete, depending on its size and complexity. Some tasks may take less than a single ideal day, or may otherwise require a fractional number of ideal days (like 0.5 ideal days, or 1.25 ideal days).

There are a couple of additional guidelines we try to follow: we generally round to a quarter of a day, and we almost always keep our user stories’ estimates at five ideal days or less, with two or three being preferable. The larger a task is, the more likely it is that it’s really a group of related tasks.

There obviously isn’t actually any such thing as an ideal, uninterrupted day (hence the words “hypothetical” and “theoretical” a couple of paragraphs ago). Even on your best day, you have to stop to eat, to stretch, and to do any number of other things during a day of work. But that’s the point of the ideal day unit: by building right into the unit the understanding that you’re not explicitly accounting for these interruptions, you can reinforce the idea that schedules are fragile, and that every time a colleague or your manager (or anyone else) causes you to be distracted from your planned tasks, the schedule will slip.

Ideal days in sprint planning

The goal, then, during sprint planning is to do your best to leave room for those distractions when mapping ideal days to the actual calendar. Our sprints on the MDN team are 12 business days long. When selecting tasks to attempt to accomplish during a sprint, we start by having each team member count up how many of those 12 days they will be available for work. This involves subtracting from that 12-day sprint any PTO days, company or local holidays, substantial meetings, and so forth.

When calculating my available days, I like to subtract a rough number of partial days to account for any appointments that I know I’ll have. We then typically subtract about 20% (or a day or two per sprint, although the actual amount varies from person to person based on how often they tend to get distracted and how quickly they rebound), to allow for distractions and sidetracking, and to cover typical administrative needs. The result is a rough estimate of the number of ideal days we’re available to work during the sprint.
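The availability math described above can be sketched in a few lines of Python. To be clear, this is a hypothetical helper of my own, not a script the team actually uses, and the function and parameter names are mine:

```python
def available_ideal_days(
    sprint_days: float = 12,         # business days in the sprint
    days_off: float = 0,             # PTO days, holidays, and the like
    meeting_days: float = 0,         # substantial meetings and appointments
    distraction_rate: float = 0.20,  # rough overhead for distractions/admin
) -> float:
    """Estimate how many ideal days of focused work fit in a sprint."""
    remaining = sprint_days - days_off - meeting_days
    # Reserve a fraction of what's left for distractions and administrative
    # needs, then round to the nearest quarter day, matching how we round
    # our task estimates.
    return round(remaining * (1 - distraction_rate) * 4) / 4

# A 12-day sprint with one holiday, half a day of appointments, and a
# day's worth of meetings leaves roughly 7.5 ideal days of capacity.
print(available_ideal_days(12, days_off=1, meeting_days=1.5))  # 7.5
```

Whether a given interruption counts as a day off, a meeting, or part of the distraction overhead is a judgment call; the point is only that calendar days and ideal days are different currencies, and the conversion rate is never 1:1.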

With that in hand, each member of the team can select a group of tasks that can probably be completed during the number of ideal days we estimate they’ll have available during the sprint. But we know going in that these estimates are in terms of ideal days, not actual business days, and that if anything unanticipated happens, the mapping of ideal days to actual days we did won’t match up anymore, causing the work to take longer than anticipated. This understanding is fundamental to how the system works; by going into each sprint knowing that our mapping of ideal days to actual days is subject to external influences beyond our control, we avoid many of the anxieties that come from having rigid or rigid-feeling schedules.

For your consideration

For example, let’s consider a standard 12-business-day MDN sprint that spans my birthday (which I take off) as well as Martin Luther King, Jr. Day, a US federal holiday. During those 12 days, I also have two doctor appointments scheduled which will have me out of the office for roughly half a day total, and I have about a day’s worth of meetings on my schedule as of sprint planning time. Doing the math, then (12 days, minus 2 days off, minus 0.5 days of appointments, minus 1 day of meetings), we find that I have 8.5 days available to work.

Knowing this, I then review the various task lists and find a total of around 8 to 8.5 days worth of work to do. Perhaps a little less if I think the odds are good that more time will be occupied with other things than the calendar suggests. For example, if my daughter is sick, there’s a decent chance I will be too in a few days, so I might take on just a little less work for the sprint.

As the sprint begins, then, I have an estimated 8 ideal days worth of work to do during the 12-day sprint. Because of the “ideal day” system, everyone on the team knows that if there are any additional interruptions—even short ones—the odds of completing everything on the list are reduced. As such, this system not only helps make it easier to estimate how long tasks will take, but also helps to reinforce with colleagues that we need to stay focused as much as possible, in order to finish everything on time.

If I don’t finish everything on the sprint plan by the end of the sprint, we will discuss it briefly during our end-of-sprint review to see if there’s any adjustment we need to make in future planning sessions, but it’s done with the understanding that life happens, and that sometimes delays just can’t be anticipated or avoided.

On the other hand, if I happen to finish before the sprint is over, I have time to get extra work done, so I go back to the task lists, or to my list of things I want to get done that are not on the priority list right now, and work on those things through the end of the sprint. That way, I’m able to continue to be productive regardless of how accurate my time estimates are.

I can work with this

In general, I really like this way of estimating task schedules. It does a much better job of allowing for the way I work than any other system I’ve been asked to work within. It’s not perfect, and the overhead is a little higher than I’d like, but by and large it does a pretty good job. That’s not to say we won’t try another, possibly better, way of handling the planning process in the future.

But for now, my work days are as ideal as can be.

 Posted by at 5:04 PM
Apr 05, 2018

Our fourth and final SEO experiment for MDN, to optimize internal links within the open web documentation, is now finished. Optimizing internal links involves ensuring that each page, particularly those whose search engine results page (SERP) positions we want to improve, is easy to find.

This is done by ensuring that each page is linked to from as many topically relevant pages as possible. In addition, it should in turn link outward to other relevant pages. The more quality links we have among related pages, the better our position is likely to be. The object, from a user’s perspective, is to ensure that even if the first page they find doesn’t answer their question, it will link to a page that does (or at least might help them find the right one).
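As a toy sketch of what this kind of audit looks like in practice (the page names and link data below are invented for illustration, not real MDN content), you can treat the site as a directed graph and flag pages with too few inbound links from related pages:

```python
from collections import defaultdict

def inbound_counts(links):
    """Count inbound links per page; links is an iterable of
    (source_page, target_page) pairs."""
    counts = defaultdict(int)
    for _source, target in links:
        counts[target] += 1
    return counts

def underlinked(pages, links, minimum=2):
    """Return the pages with fewer than `minimum` inbound links."""
    counts = inbound_counts(links)
    return [page for page in pages if counts[page] < minimum]

# Invented example data:
pages = ["css/color", "css/background-color", "api/fetch"]
links = [("css/background-color", "css/color"),
         ("api/fetch", "css/color")]
print(underlinked(pages, links))
# ['css/background-color', 'api/fetch']
```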

Creating links on MDN is technically pretty easy. There are several ways to do it, including:

  • Selecting the text to turn into a link and using the toolbar’s “add link” button
  • Using the “add link” hotkey (Ctrl-K or Cmd-K)
  • Any one of a large number of macros that generate properly-formatted links automatically, such as the domxref macro, which creates a link to a page within the MDN API reference; for example, {{domxref(“RTCPeerConnection.createOffer()”)}} creates a link that looks like this: RTCPeerConnection.createOffer(). Many of the macros offer customization options, but the default is usually acceptable and is almost always better than trying to hand-make the links.

Our style guide talks about where links should be used. We even have a guide to creating links on MDN that covers the most common ways to do so. Start with these guidelines.

The content updates

Ten pages were selected for the internal link optimization experiment.

Changes made to the selected pages

In general, the changes made were only to add links to pages; sometimes content had to be added to accomplish this but ideally the changes were relatively small.

  • Existing text that should have been a link but wasn’t (such as mentions of terms that need definitions, or concepts for which linkable pages exist) was turned into links.
  • The uses of API names, element names, attributes, CSS properties, SVG element names, and so forth were turned into links when either first used in a section or if a number of paragraphs had elapsed since they were last a link. While repeated links to the same page don’t count, this is good practice for usability purposes.
  • Any phrase for which a more in-depth explanation is available was turned into a link.
  • Links to related concepts or topics were added where appropriate; for example, on the article about HTMLFormElement.elements, a note is provided with a link to the related Document.forms property.
  • Links to related functions, HTML elements, and other relevant items were added.
  • The “See also” section was reviewed and updated to include appropriate related content.

Changes made to targeted pages

Targeted pages—pages to which links were added—in some cases got smaller changes made, such as the addition of a link back to the original page, and in some cases new links were added to other relevant content if the pages were particularly in need of help.

Pages to be updated

The pages selected to be updated for this experiment:

The results

The initial data was taken during the four weeks from December 29, 2017 through January 25th, 2018. The “after” data was taken just over a month after the work was completed, covering the period of March 6 through April 2, 2018.

The results from this experiment were fascinating. Of all of the SEO experiments we’ve done, the results of this one were the most consistently positive.

Landing Page Unique Pageviews Organic Searches Entrances Hits
/en-us/docs/web/api/document_object_model/locating_dom_elements_using_selectors +46.38% +23.68% +29.33% +59.35%
/en-us/docs/web/api/htmlcollection/item +52.89% +35.71% +53.60% +38.56%
/en-us/docs/web/api/htmlformelement/elements +69.14% +57.83% +69.30% +70.74%
/en-us/docs/web/api/mediadevices/enumeratedevices +23.55% +14.53% +16.71% +15.67%
/en-us/docs/web/api/xmlhttprequest/html_in_xmlhttprequest +49.93% -3.50% +24.67% +59.63%
/en-us/docs/web/api/xmlserializer +36.24% +46.94% +31.50% +37.66%
/en-us/docs/web/css/all +46.15% +16.52% +23.51% +48.28%
/en-us/docs/web/css/inherit +22.55% +27.16% +20.58% +17.12%
/en-us/docs/web/css/object-position +102.78% +119.01% +105.56% +405.52%
/en-us/docs/web/css/unset +24.60% +18.45% +19.20% +35.01%

These results are amazing. Every single value is up, with the sole exception of a small decline in organic search views (that is, appearances in Google search result lists) for the article “HTML in XMLHttpRequest.” Most values are up substantially, with many being impressively improved.

Note: The data in the table above was updated on April 12, 2018 after realizing the “before” data set was inadvertently one day shorter than the “after” set. This reduced the improvements marginally, but did not affect the overall results.


Due to the implementation of the experiment and certain timing issues, there are uncertainties surrounding these results. Those include:

  • Ideally, much more time would have elapsed between completing the changes and collecting final data.
  • The experiment began during the winter holiday season, when overall site usage is at a low point.
  • There was overall site growth of MDN traffic over the time this experiment was underway.


Certain conclusions can be reached:

  1. The degree to which internal link improvements benefited traffic to these pages can’t be ignored, even after factoring in the uncertainties. This is easily the most benefit we got from any experiment, and on top of that, the amount of work required was often much lower. This should be a high-priority portion of our SEO plans.
  2. The MDN meta-documentation will be further improved to enhance recommendations around linking among pages on MDN.
  3. We should consider enhancements to macros used to build links to make them easier to use, especially in cases where we commonly have to override default behavior to get the desired output. Simplifying the use of macros to create links will make linking faster and easier and therefore more common.
  4. We’ll re-evaluate the data after more time has passed to ensure the results are correct.


If you’d like to comment on this, or ask questions about the results or the work involved in making these changes, please feel free to follow up or comment on the thread I’ve created on our Discourse forum.

 Posted by at 10:08 AM
Mar 23, 2018

The next SEO experiment I’d like to discuss results for is the MDN “Competitive Content Analysis” experiment. This experiment, performed through December into early January, involved selecting two of the top search terms that resulted in MDN being included in search results—one of them where MDN is highly placed but not at #1, and one where MDN is listed far down in the search results despite having good content available.

The result is a comparison of the quality of our content and our SEO against other sites that document these technology areas. With that information in hand, we can look at the competition’s content and make decisions as to what changes to make to MDN to help bring us up in the search rankings.

The two keywords we selected:

  • “tr”: For this term, we were the #2 result on Google, behind w3schools.
  • “html colors”: For this keyword, we were in 27th place. That’s terrible!

These are terms we should be highly placed for. We have a number of pages that should serve as good destinations for these keywords. The job of this experiment: to try to make that the case.

The content updates

For each of the two keywords, the goal of the experiment was to improve our page rank for the keywords in question; at least one MDN page should be near or at the top of the results list. This means that for each keyword, we need to choose a preferred “optimum” destination as well as any other pages that might make sense for that keyword (especially if it’s combined with other keywords).

To accomplish that involves updating the content of each of those pages to make sure they’re as good as possible, but also to improve the content of pages that link to the pages that should show up on search results. The goal is to improve the relevant pages’ visibility to search as well as their content quality, in order to improve page position in the search results.

Things to look for

So, for each page that should be linked to the target pages, as well as the target pages themselves, these things need to be evaluated and improved as needed:

  • Add appropriate links back and forth between each page and the target pages.
  • Make sure the content is clear and thorough.
  • Make sure there’s interactive content, such as new interactive examples.
  • Ensure the page’s layout and content hierarchy is up-to-date with our current content structure guidelines.
  • Examine web analytics data to determine what further improvements the data suggests, beyond these items.

Pages reviewed and/or updated for “tr”

The primary page, obviously, is this one in the HTML element reference:

These pages are closely related and were also reviewed and in most cases updated (sometimes extensively) as part of the experiment:

A secondary group of pages which I felt to be a lower priority to change but still wound up reviewing and in many cases updating:

Pages reviewed and/or updated for “html colors”

This one is interesting in that “html colors” doesn’t directly correlate to a specific page as easily. However, we selected the following pages to update and test:

The problem with this keyword, “html colors”, is that generally what people really want is CSS colors, so you have to try to encourage Google to route people to stuff in the CSS documentation instead of elsewhere. This involves ensuring that you refer to HTML specifically in each page in appropriate ways.

I’ve opted in general to consider the CSS <color> value page to be the destination for this, for reference purposes, with the article “Applying color” being a new page I created to serve as a landing page for all things color-related, routing people to useful guide pages.

The results

As was the case with previous experiments, we only allowed about 60 days for Google to pick up and fully internalize the changes, and for user reactions to affect the outcome, despite the fact that 90 days is usually the minimum time you run these tests for, with six months being preferred. However, we have had to compress our schedule for the experiments. We will, as before, continue to watch the results over time.

Results for the “tr” keyword

The pages updated to improve their placement when the “tr” keyword is used in Google search, as well as the amount of change over time seen for each, are shown in the table below. These were the pages which were updated and which appeared in search results analysis for the selected period of time.

Page Impressions Clicks Position CTR (all values are change, %)
HTML/Element/tr -43.22% 124.57% 2.58% 285.71%
HTML/Element/table 26.68% 27.02% -2.90% 0.00%
HTML/Element/template 27.02% 9.21% -15.45% -14.05%
API/HTMLTableRowElement/insertCell -2.78% -23.91% -2.16% -21.77%
HTML/Element/thead 38.82% 19.70% 0.00% -13.67%
HTML/Element/tbody 42.72% 100.52% 14.19% 40.68%
HTML/Element/tfoot 8.90% 11.29% 2.64% 2.18%
HTML/Element/th -50.32% 3.43% 0.39% 106.25%
HTML/Element/td 20.05% 40.27% -8.04% 17.01%

The data is interesting. Impression counts are generally up, as are clicks and search engine results page (SERP) position. Interestingly, the main <tr> page, the primary page for this keyword, has lost impressions yet gained clicks, with the CTR skyrocketing by a sizeable 285%. This means that people are seeing better search results when searching just for “tr”, and getting right to that page more often than before we began.

Results for the “html colors” keyword

The table below shows the pages updated for the “html colors” keyword and the amount of change seen in the Google activity for each page.

Page Impressions (%) Clicks (%) Position (abs) Position (%) CTR (%)
+28.61% +19.88% -0.99 -4.54% -6.78%
-43.88% -38.17% -2.71 -21.20% +10.17%
+51.59% +33.33% +3.28 +48.62% -12.04%
+34.87% +29.55% -1.35 -11.34% -3.94%
+9.03% +19.26% -0.17 -2.46% +9.38%
+36.02% +36.98% -0.09 -1.38% +0.71%
+23.04% +23.42% +0.03 +0.34% +0.31%
+14.95% +34.09% -1.21 -10.26% +16.65%
-10.78% +6.68% +1.76 +24.78% +19.56%
+830.70% +773.91% -0.97 -12.42% -6.10%
+3254.57% +3429.41% -1.45 -21.98% +5.21%
+50.32% +45.21% -0.56 -4.83% -3.40%
+31.15% +25.57% -0.44 -4.44% -4.25%

These results are also quite promising, especially since time did not permit me to make as many changes to this content as I’d have liked. The changes for the color value type page are good; a nearly 15% increase in impressions and a very good 34% rise in clicks means a healthy boost to CTR. Ironically, though, our position in search results dropped by nearly 1.25 points, or 10%.

The approximate 23% increase in both impressions and clicks on the CSS color property is quite good, and I’m pleased by the 10% gain in CTR for the learning area article on styling box backgrounds.

Almost every page sees significant gains in both impressions and clicks (take a look at text-decoration-color, in particular, with over 3000% growth!).

The sea of red is worrisome at first glance, but I think what’s happening here is that because of the improvements in impression counts (that is, how often users see these pages on Google), users are reaching the page they really want more quickly. Note which pages are the ones with a positive change in click-through rate (CTR), which is the ratio of clicks to impressions. This list is in order of highest change in CTR to lowest:


What I believe we’re seeing is this: due to the improvements to SEO (and potentially other factors), all of the color-related pages are getting more traffic. However, the ones in the list above are the ones seeing the most benefit; they’re less prone to showing up at inappropriate times and more likely to be clicked when they are presented to the user. This is a good sign.
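To make the relationship concrete, here’s a tiny sketch using invented numbers (not our actual analytics data) showing how a page can lose impressions while its CTR improves:

```python
def ctr(clicks, impressions):
    # Click-through rate: the fraction of impressions that became clicks.
    return clicks / impressions if impressions else 0.0

def pct_change(before, after):
    return (after - before) / before * 100.0

# Invented numbers: impressions drop, clicks rise, so CTR jumps.
ctr_before = ctr(clicks=50, impressions=10_000)   # 0.005, i.e. 0.5%
ctr_after = ctr(clicks=112, impressions=5_680)
print(round(pct_change(ctr_before, ctr_after), 1))
```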

Over time, I would hope to improve the SEO further to help bring the search results positions up for these pages, but that takes a lot more time than we’ve given these pages so far.


For this experiment, the known uncertainties (an oxymoron, but we’ll go with that term anyway) include:

  • As before, the elapsed time was far too short to get viable data for this experiment. We will examine the data again in a few months to see how things are progressing.
  • This project had additional time constraints that led me not to make as many changes as I might have preferred, especially for the “html colors” keyword. The results may have been significantly different had more time been available, but that’s going to be common in real-world work anyway.
  • Overall site growth during the time we ran this experiment also likely inflated the results somewhat.


After sharing these results with Kadir and Chris, we came to the following initial conclusions:

  • This is promising, and should be pursued for pages which already have low-to-moderate traffic.
  • Regardless of when we begin general work to perform competitive content analysis and make changes based on it, we should immediately update MDN’s contributor guides to incorporate the recommended changes.
  • The results suggest that content analysis should be a high-priority part of our SEO toolbox. Increasing our internal link coverage and making documents relate to each other creates a better environment for search engine crawlers to accumulate good data.
  • We’ll re-evaluate the results in a few months after more data has accumulated.

If you have questions or comments about this experiment or its results, please feel free to post to this topic on Mozilla’s Discourse forum.

 Posted by at 4:46 PM
Mar 21, 2018

Following in the footsteps of MDN’s “Thin Pages” SEO experiment done in the autumn of 2017, we completed a study to test the effectiveness and process behind making changes to correct cases in which pages are perceived as “duplicates” by search engines. In SEO parlance, “duplicate” is a fuzzy thing. It doesn’t mean the pages are identical—this is actually pretty rare on MDN in particular—but that the pages are similar enough that they are not easily differentiated by the search engine’s crawling technology.

This can happen when two pages are relatively short but are about a similar topic, or on reference pages which are structurally and content-wise quite similar to one another. From a search engine’s standpoint, if you create articles about the background-position and background-origin CSS properties, you have two pages based on the same template (a CSS reference page) whose contents are easily prone to being nearly identical. This is especially true given how common it is to start a new page by copying content from another, similar, page and making the needed changes (sometimes only the barest minimum of changes).
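To illustrate, here’s a sketch using invented snippet text, with Python’s difflib as a stand-in for whatever similarity measure a real crawler uses; two template-based reference pages can be almost entirely identical text:

```python
from difflib import SequenceMatcher

# Two invented reference-page summaries built from the same template;
# only the property name differs.
page_a = ("The background-position CSS property sets the initial "
          "position for each background image.")
page_b = ("The background-origin CSS property sets the initial "
          "position for each background image.")

# ratio() is 1.0 for identical strings, 0.0 for nothing in common.
similarity = SequenceMatcher(None, page_a, page_b).ratio()
print(round(similarity, 2))   # close to 1.0: near-duplicates
```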

All that means that from an SEO perspective, we actually have tons of so-called “duplicate” pages on MDN. As before, we selected a number of pages that were identified as duplicates by our SEO contractor and made appropriate changes to them per their recommendations, which we’ll get into briefly in a moment.

Once the changes were made, we waited a while, then compared the before and after analytics to see what the effects were.

The content updates

We wound up choosing nine pages to update for this experiment. We actually started with ten, but one of them was removed from the set after the fact when I realized it wasn’t a usable candidate. Unsurprisingly, essentially all duplicate pages were found in reference documentation. Guides and tutorials were, with almost no exceptions, immune to the phenomenon.

The pages were scattered through various areas of the open web reference documentation under

Things to fix or improve

Each page was altered as much as possible in an attempt to differentiate it from the pages which were found to be similar. Tactics to accomplish this included:

  • Make sure the page is complete. If it’s a stub, write the page’s contents up completely, including all relevant information, sections, etc.
  • Ensure all content on the page is accurate. Look out for places where copy-and-pasted information wasn’t corrected as well as any other errors.
  • Make sure the page has appropriate in-content (as opposed to sidebar or “See also”) links to other pages on MDN. Feel free to also include links in the “See also” section, but in-content links are a must. At least the first mention of any term, API, element, property, attribute, technology, or idea should have a link to relevant documentation (or to an entry in MDN’s Glossary). Sidebar links are not generally counted when indexing web sites, so relying on them entirely for navigation will wreak utter havoc on SEO value.
  • If there are places in which the content can be expanded upon in a sensible way, do so. Are there details not mentioned or not covered in as much depth as they could be? Think about the content from the perspective of the first-time reader, or a newcomer to the technology in question. Will they be able to answer all possible questions they might have by reading this page or pages that the page links to through direct, in-content links?
  • What does the article assume the reader knows that they might not yet? Add the missing information. Anything that’s skimmed over, leaving out details and assuming the reader can figure it out from context needs to be fleshed out with more information.
  • Ensure that all of the API’s features are covered by examples. Ideally, every attribute on an element, keyword for a CSS property’s value, parameter for a method, and so forth should be used in at least one example.
  • Ensure that each example is accompanied by descriptive text. There should be an introduction explaining what the example does and what feature or features of the technology or API are demonstrated as well as any text that explains how the example works. For example, on an HTML element reference page, simply listing the properties then providing an example that only uses a subset of those properties isn’t enough.  Add more examples that cover the other properties, or at least the ones that are likely to be used by anyone who isn’t a deeply advanced user.
  • Do not simply add repetitive or unhelpful text. Beefing up the word count just to try to differentiate the page from other content is actually going to do more harm than good.
  • It’s frequently helpful to also change the pages which are considered to be duplicate pages. By making different changes to each of the similar pages, you can create more variations than by changing one page alone.

Pages to be updated

The pages we selected to update:


In most if not all of these cases, the pages which are similar are obvious.

The results

After making appropriate changes to the pages listed above as well as certain other pages which were similar to them, we allowed 60 days to pass. This is less than the ideal 90 days, or, better, six months, but time was short. We will check the data again in a few months to see how things change given more time.

The changes made were not always as extensive as would normally be preferred, again due to time constraints created by the one-person experimental model. When we do this work on a larger scale, it will be done by a larger community of contributors. In addition, much of this work will be done from the beginning of the writing process as we will be revising our contributor guide to incorporate the recommendations as appropriate after these experiments are complete.

Pre-experiment unvisited pages

As was the case with the “thin pages” experiment, pages which were getting literally zero—or even close to zero—traffic before the changes continued to not get much traffic. Those pages were:


For what it’s worth, we did eventually learn from this: experiments begun after the “thin pages” results were in no longer included pages which got no traffic during the period leading up to the start of the experiment. This experiment, however, had already begun running by then.

Post-experiment unvisited pages

There were also two pages which had a small amount of traffic before the changes were made but no traffic afterward. This is likely a statistical anomaly or fluctuation, so we’re discounting these pages for now:


The outcome

The remaining three pages had useful data. This is a small data set but is what we have, so let’s take a look.

Page URL | Clicks (Sept. 16–Oct. 16) | Impressions (Sept. 16–Oct. 16) | Clicks (Nov. 18–Dec. 18) | Impressions (Nov. 18–Dec. 18) | Clicks Chg. % | Impressions Chg. %
678 | 8,751 | 862 | 13,435 | 27.14% | 34.86%
1,193 | 3,724 | 1,395 | 7,700 | 14.48% | 51.64%
447 | 2,961 | 706 | 7,906 | 36.69% | 62.55%

For each of these three pages, the results are promising. Both click and impression counts are up for each of them, with click counts increasing by anywhere from 14% to 36% and impression counts increasing between 34% and 62% (yes, I rounded down for each of these values, since this is pretty rough data anyway). We’ll check the results again soon and see if the results changed further.


Because of certain implementation specifics of this experiment, there are obviously some uncertainties in the results:

  • We didn’t allow as much time as is recommended for the changes to fully take effect in search data before measuring the results. This was due to time constraints for the experiment being performed, but, as previously mentioned, we’ll look at the data again later to double-check our results.
  • The set of pages with useful results was very small, and even the original set of pages was fairly small.
  • There was substantial overall site growth during this period, so it’s likely the results are affected by this. However, the size of the improvements seen here suggests that even with that in mind, the outcome was significant.


After a team review of these results, we came to some conclusions. We’ll revisit them later, of course, if we decide that a review of the data later suggests changes be made.

  1. The amount of improvement seen strongly suggests we should prioritize fixing duplicate pages, at least in cases where at least one of the pages which are considered duplicates of one another is getting at least a low-to-moderate amount of traffic.
  2. The MDN meta-documentation, in particular the writer’s guide and the guides to writing each of the types of reference content, will be updated to incorporate the recommendations into the general guidelines for contributing to MDN. Additionally, the article on MDN about writing SEO-friendly content will be updated to include this information.
  3. It turns out that many of the changes needed to fix “thin” pages also apply when fixing “duplicate” pages.
  4. We’ll re-evaluate prioritization after reviewing the latest data after more time has passed.

The most interesting thing I’m learning about SEO, I think, is that it’s really about making great content. If the content is really top-notch, SEO practically attends to itself. It’s all about having thorough, accurate content with solid internal and external links.


If you have comments or questions about this experiment or the changes we’ll be making, please feel free to follow-up or comment on this thread on Mozilla’s Discourse site.

 Posted by at 7:02 PM
Nov 13, 2017

Last week, I wrote about the results of our “thin pages” (that is, pages too short to be properly cataloged by search engines) SEO experiment, in which we found that while there appear to be gains in some cases when improving the pages considered to be too short, there was too much uncertainty, and too few cases in which gains occurred at all, to justify making a full-fledged effort to fix every thin page on MDN.

However, we do want to try to avoid thin pages going forward! Having content that people can actually find is obviously important. In addition, we encourage contributors who are working on articles for other reasons, and who find that they’re too short, to go ahead and expand them.

I’ve already updated our meta-documentation (that is, our documentation about writing documentation) to incorporate most of the recommendations for avoiding thin content. These changes are primarily in the writing style guide. I’ve also written the initial portions of a separate guide to writing for SEO on MDN.

For fun, let’s review the basics here today!

What’s a thin page?

A thin page is a page that’s too short for search engines to properly catalog and differentiate from other pages. Pages that are shorter than 250-300 words of content text don’t provide enough context for search algorithms to reliably comprehend what the article is about, which means the page winds up in the wrong place in search results.

For the purposes of computing the length of an article, the article’s length is the number of words of body content—that is, content that isn’t in headers, footers, sidebars, or similar constructs—plus the number of words located in alt text on <img> elements.
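A rough sketch of that computation (my own illustration; MDN’s actual counter surely differs, and this one only skips script and style contents rather than doing true header/footer/sidebar detection):

```python
from html.parser import HTMLParser

class WordCounter(HTMLParser):
    """Count words of body text plus <img> alt text, skipping
    <script> and <style> contents."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.words = 0
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1
        elif tag == "img":
            # alt text counts toward the article's length
            alt = dict(attrs).get("alt") or ""
            self.words += len(alt.split())

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth:
            self.words += len(data.split())

def is_thin(html, threshold=250):
    """Return (thin?, word_count) for an HTML fragment."""
    counter = WordCounter()
    counter.feed(html)
    return counter.words < threshold, counter.words

thin, count = is_thin('<p>one two three</p><img alt="four five">')
print(thin, count)
# True 5
```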

How to avoid thin pages

These tips are taken straight from the guidelines on MDN:

  • Keep an eye on the convenient word counter located in the top-right corner of the editor’s toolbar on MDN.
  • Obviously, if the article is a stub, or is missing material, add it. We try to avoid outright “stub” pages on MDN, although they do exist, but there are plenty of pages that are missing large portions of their content while not technically being a “stub.”
  • Generally review the page to ensure that it’s structured properly for the type of page it is. Be sure every section that it should have is present and has appropriate content.
  • Make sure every section is complete and up-to-date, with no information missing. Are all parameters listed and explained?
  • Be sure everything is fully fleshed-out. It’s easy to give a quick explanation of something, but make sure that all the nuances are covered. Are there special cases? Known restrictions that the reader might need to know about?
  • There should be examples covering all parameters or at least common sets of parameters. Each example should be preceded with an overview of what the example will do, what additional knowledge might be needed to understand it, and so forth. After the example (or interspersed among pieces of the example) should be text explaining how the code works. Don’t skimp on the details and the handling of errors in examples; readers will copy and paste your example to use in their own projects, and your code will wind up used on production sites! See Code examples and our Code sample guidelines for more useful information.
  • If there are particularly common use cases for the feature being described, talk about them! Instead of assuming the reader will figure out that the method being documented can be used to solve a common development problem, actually add a section with an example and text explaining how the example works.
  • Include proper alt text on all images and diagrams; this text counts, as do captions on tables and other figures.
  • Do not descend into adding repetitive, unhelpful material or blobs of keywords, in an attempt to improve the page’s size and search ranking. This does more harm than good, both to content readability and to our search results.

Reviewing the above guidelines and suggestions (some of which are admittedly pretty obvious) when confronted with pages that are just too short may help kick-start your creativity so you can write the needed material to ensure that MDN’s content drifts to the top of the turbid sea of web documentation and other content to be found on the Internet.

 Posted by at 12:50 PM
Nov 08, 2017

The MDN team has been working on a number of experiments designed to make decisions around prioritization of various forms of SEO problems we should strive to resolve. In this document, we will examine the results of our first such experiment, the “Thin Pages” experiment.

The goal of this experiment was to select a number of pages on MDN which are considered “thin”—that is, too short to be usefully analyzed—and update them using guidelines provided by our SEO contractor.

Once the changes were made and a suitable time had passed, we re-evaluated the pages’ statistics to determine whether or not the changes had an appreciable effect. With that information in hand, we then made a determination as to whether or not prioritizing this work makes sense.

The content updates

We selected 20 pages, choosing from across the /Web and /Learn areas of MDN and across the spectrum of usage levels. Some pages had little to no traffic at the outset, while others were heavily trafficked. Then, I went through these pages and updated them substantially, adding new content and modernizing their layouts in order to bring them up to a more useful size.

The changes were mostly common-sense ones:

  • Deleting pages that aren’t necessary (as it turns out, none of the pages we selected fell into this category).
  • Ensuring each page had all of the sections it’s meant to have.
  • Ensuring that every aspect of the topic is covered fully.
  • Ensuring that examples are included and cover an appropriate set of cases.
  • Ensuring that all examples include complete explanations of what they’re doing and how they work.
  • Ensuring that pages include not only the standard tags, but additional tags that may add useful keywords to the page.
  • Fleshing out ideas that aren’t fully covered.

The pages we chose to update are:

The results

After making the changes we were able to make in a reasonable amount of time, we then allowed the pages some time to percolate through Google’s crawler and such. Then we re-measured the impression and click counts, and the results were not quite what we expected.

First, of all the pages involved, only a few actually got any search traffic at all. The following pages were not seen by users searching on Google at all during either the starting or the ending analysis period:

The remaining pages did generally see measurable gains, some of them quite large, but none are clearly outside the range of growth expected given MDN’s ongoing growth:

| Page URL | Clicks (June 1-30) | Impressions (June 1-30) | Clicks (Sept. 24 – Oct. 23) | Impressions (Sept. 24 – Oct. 23) | Clicks Chg. % | Impressions Chg. % |
|---|---|---|---|---|---|---|
| … | 15 | 112 | 111 | 2600 | 640.00% | 2221.43% |
| … | 1789 | 6331 | 1866 | 9004 | 4.30% | 42.22% |
| … | 3151 | 60793 | 4729 | 100457 | 50.08% | 65.24% |
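The change percentages in the table are simple relative deltas between the two measurement windows. A minimal sketch of the computation, using the click and impression counts from the table above:

```python
def pct_change(before: int, after: int) -> float:
    """Percent change from a before count to an after count, rounded to 2 places."""
    return round((after - before) / before * 100, 2)

# (clicks before, clicks after, impressions before, impressions after),
# one tuple per page, values taken from the table above.
rows = [
    (15, 111, 112, 2600),
    (1789, 1866, 6331, 9004),
    (3151, 4729, 60793, 100457),
]

for clicks_before, clicks_after, impr_before, impr_after in rows:
    print(f"clicks: {pct_change(clicks_before, clicks_after)}%  "
          f"impressions: {pct_change(impr_before, impr_after)}%")
```

For instance, the first page went from 15 clicks to 111, a change of (111 − 15) / 15 × 100 = 640.00%.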

This is unfortunately not a very large data set, but we can draw some crude conclusions from it. We’ll also continue to watch these pages over the next few months to see if there’s any further change.

The number of impressions went up, in some cases dramatically. But there’s just not enough here to be sure this was related to the thin-page revisions rather than to other factors, such as the large-scale improvements recently made to the HTML docs.


There are, as mentioned already, some uncertainties around these results:

  • The number of pages that had useful results was smaller than we would have preferred.
  • We had substantial overall site growth during the same time period, and certain areas of the site were heavily overhauled. Both of these facts may have impacted the results.
  • We only gave the pages a couple of months after making the changes before measuring the results. We were advised that six months is a more helpful time period to monitor (so we’ll look again in a few months).


After reviewing these results, and weighing the lack of solid data at this stage, we did come to some initial conclusions, which are open to review if the numbers change going forward:

  1. We won’t launch a full-scale formal project around fixing thin pages. It’s just not worth it given the dodginess of the numbers we have thus far.
  2. We will, however, update the meta-documentation to incorporate the recommendations around thin pages. That means providing advice about the kinds of content to include, reminding writers to be thorough and to include plenty of samples covering a variety of use cases and situations, and so forth. We will also add a new “SEO” area to the meta docs that covers these recommendations more explicitly in terms of their SEO impact.
  3. We will check these numbers again in a couple of months to see if there’s been further improvement. The recommendation was to wait six months for results, but we did not have that kind of time.


For discussion of this experiment, and of the work updating MDN that will come from it, I encourage you to follow up or comment in this thread on the Mozilla Discourse site.

 Posted by at 11:22 AM