Nov 13, 2017

Last week, I wrote about the results of our “thin pages” SEO experiment (a thin page being one too short to be properly cataloged by search engines). We found that while improving pages considered too short appeared to produce gains in some cases, there was too much uncertainty, and too few cases in which gains occurred at all, to justify a full-fledged effort to fix every thin page on MDN.

However, we do want to avoid creating thin pages going forward! Having content that people can actually find is obviously important. We also encourage contributors who are working on an article for other reasons, and who find that it’s too short, to go ahead and expand it.

I’ve already updated our meta-documentation (that is, our documentation about writing documentation) to incorporate most of the recommendations for avoiding thin content. These changes are primarily in the writing style guide. I’ve also written the initial portions of a separate guide to writing for SEO on MDN.

For fun, let’s review the basics here today!

What’s a thin page?

A thin page is a page that’s too short for search engines to properly catalog and differentiate from other pages. Pages with fewer than 250-300 words of content text don’t provide enough context for search algorithms to reliably determine what the article is about, which means the page winds up in the wrong place in search results.

For the purposes of computing an article’s length, we count the number of words of body content—that is, content that isn’t in headers, footers, sidebars, or similar constructs—plus the number of words in the alt text of <img> elements.
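Out of curiosity, here’s roughly what that measurement looks like in code. This is just an illustrative sketch of my own (the editor’s built-in word counter, mentioned below, is what you’d actually use), and it assumes the body content lives in a <main> element:

    // Count the words in an article's body content plus the words in the
    // alt text of its images. Headers, footers, and sidebars are excluded
    // by looking only inside the (assumed) <main> content container.
    function articleWordCount(doc) {
      const content = doc.querySelector("main");
      if (!content) {
        return 0;
      }

      const countWords = (text) => text.trim().split(/\s+/).filter(Boolean).length;

      // Words in the visible body text.
      let total = countWords(content.textContent);

      // Plus the words in each image's alt text.
      for (const img of content.querySelectorAll("img[alt]")) {
        total += countWords(img.alt);
      }
      return total;
    }

By this measure, a page where articleWordCount(document) comes back under 250 or so is a candidate for thinness.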

How to avoid thin pages

These tips are taken straight from the guidelines on MDN:

  • Keep an eye on the convenient word counter located in the top-right corner of the editor’s toolbar on MDN.
  • Obviously, if the article is a stub or is missing material, add it. While we try to avoid outright “stub” pages on MDN, they do exist, and there are plenty of pages that are missing large portions of their content without technically being “stubs.”
  • Generally review the page to ensure that it’s structured properly for the type of page it is. Be sure every section that it should have is present and has appropriate content.
  • Make sure every section is complete and up-to-date, with no information missing. Are all parameters listed and explained?
  • Be sure everything is fully fleshed-out. It’s easy to give a quick explanation of something, but make sure that all the nuances are covered. Are there special cases? Known restrictions that the reader might need to know about?
  • There should be examples covering all parameters, or at least the common sets of parameters. Each example should be preceded by an overview of what the example will do, what additional knowledge might be needed to understand it, and so forth. After the example (or interspersed among pieces of the example) should be text explaining how the code works. Don’t skimp on the details or the handling of errors in examples; readers will copy and paste your example to use in their own projects, and your code will wind up used on production sites! See Code examples and our Code sample guidelines for more useful information, and the sketch following this list for the sort of example this implies.
  • If there are particularly common use cases for the feature being described, talk about them! Instead of assuming the reader will figure out that the method being documented can be used to solve a common development problem, actually add a section with an example and text explaining how the example works.
  • Include proper alt text on all images and diagrams; this text counts, as do captions on tables and other figures.
  • Do not descend into adding repetitive, unhelpful material or blobs of keywords, in an attempt to improve the page’s size and search ranking. This does more harm than good, both to content readability and to our search results.
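As for what that examples guideline implies in practice, here’s a sketch of my own (the endpoint it fetches is made up): even a trivial sample should explain itself and handle errors, because readers will paste it into real projects.

    // Fetch a user's display name from a (hypothetical) JSON API.
    // Even this tiny example checks for HTTP errors instead of assuming
    // the request succeeded.
    async function fetchUserName(userId) {
      try {
        const response = await fetch(`/api/users/${userId}`);
        if (!response.ok) {
          throw new Error(`Request failed with status ${response.status}`);
        }
        const user = await response.json();
        return user.name;
      } catch (err) {
        console.error("Could not fetch user:", err);
        return null;
      }
    }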

Reviewing the above guidelines and suggestions (some of which are admittedly pretty obvious) when confronted with a page that’s just too short may help kick-start your creativity, letting you write the material needed to ensure that MDN’s content drifts to the top of the turbid sea of web documentation and other content to be found on the Internet.

Nov 8, 2017

The MDN team has been working on a number of experiments designed to help us decide how to prioritize the various kinds of SEO problems we should strive to resolve. In this post, we’ll examine the results of our first such experiment: the “thin pages” experiment.

The goal of this experiment was to select a number of pages on MDN which are considered “thin”—that is, too short to be usefully analyzed—and update them using guidelines provided by our SEO contractor.

Once the changes were made and a suitable time had passed, we re-evaluated the pages’ statistics to determine whether or not the changes had an appreciable effect. With that information in hand, we then made a determination as to whether or not prioritizing this work makes sense.

The content updates

We selected 20 pages, choosing from across the /Web and /Learn areas of MDN and across the spectrum of usage levels. Some pages had little to no traffic at the outset, while others were heavily trafficked. Then, I went through these pages and updated them substantially, adding new content and modernizing their layouts in order to bring them up to a more useful size.

The changes were mostly common-sense ones:

  • Deleting pages that aren’t necessary (as it turns out, none of the pages we selected fell into this category).
  • Ensuring each page had all of the sections it’s meant to have.
  • Ensuring that every aspect of the topic is covered fully.
  • Ensuring that examples are included and cover an appropriate set of cases.
  • Ensuring that all examples include complete explanations of what they’re doing and how they work.
  • Ensuring that pages include not only the standard tags, but additional tags that may add useful keywords to the page.
  • Fleshing out ideas that aren’t fully covered.

The pages we chose to update are:

The results

After making the changes we were able to make in a reasonable amount of time, we allowed the pages some time to percolate through Google’s crawler and the like. Then we re-measured the impression and click counts, and the results were not quite what we expected.

First, of all the pages involved, only a few actually got any search traffic at all. The following pages were not seen by users searching on Google during either the starting or the ending analysis period:

The remaining pages did generally see measurable gains, some of them quite large, but none clearly outside the range of growth expected given MDN’s ongoing overall growth:

For each page, the first pair of numbers is for June 1-30 (before the changes) and the second is for Sept. 24 – Oct. 23 (after); the change percentages are relative to the June figures:

  • https://developer.mozilla.org/en-US/docs/Web/CSS/Media_Queries
    Clicks: 15 → 111 (+640.00%). Impressions: 112 → 2600 (+2221.43%).
  • https://developer.mozilla.org/en-US/docs/Web/CSS/transform-function/translateZ
    Clicks: 1789 → 1866 (+4.30%). Impressions: 6331 → 9004 (+42.22%).
  • https://developer.mozilla.org/en-US/docs/Web/HTML/Inline_elements
    Clicks: 3151 → 4729 (+50.08%). Impressions: 60793 → 100457 (+65.24%).

This is unfortunately not a very large data set, but we can draw some crude conclusions from it. We’ll also continue to watch these pages over the next few months to see if there’s any further change.

The number of impressions went up, in some cases dramatically. But there’s just not enough here to be sure whether this was related to the thin-page revisions or to other factors, such as the recent large-scale improvements to the HTML docs.

Uncertainties

There are, as mentioned already, some uncertainties around these results:

  • The number of pages that had useful results was smaller than we would have preferred.
  • We had substantial overall site growth during the same time period, and certain areas of the site were heavily overhauled. Both of these facts may have impacted the results.
  • We only gave the pages a couple of months after making the changes before measuring the results. We were advised that six months is a more helpful time period to monitor (so we’ll look again in a few months).

Decisions

After reviewing these results, and weighing the lack of solid data at this stage, we did come to some initial conclusions, which are open to review if the numbers change going forward:

  1. We won’t launch a full-scale formal project around fixing thin pages. It’s just not worth it given the dodginess of the numbers we have thus far.
  2. We will, however, update the meta-documentation to incorporate the recommendations around thin pages. That means providing advice about the kinds of content to include, reminding people to be thorough, reminding writers to include plenty of samples that cover a variety of use cases and situations, and so forth. We will also add a new “SEO” area to the meta docs that covers these recommendations more explicitly in terms of their SEO impact.
  3. We will check these numbers again in a couple of months to see if there’s been further improvement. The recommendation was to wait six months for results, but we did not have that kind of time.

Discussion?

For discussion of this experiment, and of the work updating MDN that will come from it, I encourage you to follow up or comment in this thread on the Mozilla Discourse site.

May 12, 2017

I’ve been writing developer documentation for 20 years now, 11 of those years at Mozilla. For most of those years, documentation work was largely unmanaged. That is to say, we had management, and we had goals, but how we reached those goals was entirely up to us. This worked well for me in particular. My brain is like a simple maze bot in some respects, following all the left turns until it reaches a dead end, then backing up to where it made the last turn and taking the next path to the right, and repeating until the goal has been reached.

This is how I wrote for a good 14 or 15 years of my career. I’d start writing about a topic, linking to APIs, functions, other guides and tutorials, and so forth along the way—whether they already existed or not. Then I’d go back through the page and click the first link on the page I just created, and I’d make sure that that page was solid. Any material on that page that needed to be fixed for my new work to be 100% understood, I’d update. If there were any broken links, I’d fix them, creating and writing new pages as needed, and so forth.

How my mind wants to do it

Let’s imagine that the standards gurus have spoken and have decided to add a new <dial> element to HTML, providing support for creating knobs and speedometer-style feedback displays. My job is to document this element.
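To make the scenario concrete, markup using this imaginary element might look something like this (every attribute here is invented for the sake of the example):

    <!-- A hypothetical speedometer-style feedback display -->
    <dial min="0" max="120" value="45">
      Current speed: 45 MPH
    </dial>

As with <progress>, the element’s contents would presumably serve as fallback for browsers that don’t support it.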

I start by creating the main article in the HTML reference for <dial>, and I write that material, starting with a summary (which may include references to <progress>, <input>, and other elements and pages). It may also include links to articles I plan to create, such as “Using dial elements” and “Displaying information in HTML” as well as articles on forms.

As I continue, I may wind up with links to subpages that need to be created, as well as a link to the documentation for the HTMLDialElement interface, which obviously hasn’t been written yet. I’ll also have links to its subpages, and perhaps to other elements’ attributes and methods.

Having finished the document for <dial>, I save it, review it and clean it up, then I start following all the links on the page. Any links that take me to a page that needs to be written, I write it. Any links that take me to a page that needs content added because of the new element, I expand them. Any links that take me to a page that is just horribly unusably bad, I update or rewrite as needed. And I continue to follow those left-hand turns, writing or updating article after article, until eventually I wind up back where I started.

If one of those pages is missing an example, odds are good it’ll be hard to resist creating one, although if it will take more than a few minutes, this is where I’m likely to reluctantly flag it for someone else to do later, unless it’s really interesting and I am just that intrigued.

By the time I’m done documenting <dial>, I may also have updated badly out of date documentation for three other elements and their interfaces, written pages about how to decide on the best way to represent your data, added documentation for another undocumented element that has nothing to do with anything but it was a dead link I saw along the way, updated another element’s documentation because that page was where I happened to go to look at the correct way to structure something, and I saw it had layout problems…

You get the idea.

How I have to do it now

Unfortunately, I can’t realistically do that anymore. We have adopted a system of sprints with planned work for each sprint. Failing to complete the work in the expected amount of time tends to get you dirty looks from more and more people the longer it goes on. Even though I’m getting a ton accomplished, it doesn’t count if it’s not on the sprint plan.

So I try to force myself to work on only the stuff directly related to the sprint we’re doing. But sometimes the line is hard to find. If I add documentation for an interface, but the documentation for its parent interface is terrible, it seems to me that updating that parent interface is a fairly obvious part of my job for the sprint. But it wasn’t budgeted into the time available, so if I do it, I’m not going to finish in time.

The conundrum

That leaves me in a bind: do I do strictly what I’m supposed to, leaving behind docs that are only partly usable, or do I follow at least some of those links into pages that need help before the new content is truly usable and “complete,” at the risk of missing my expected schedule?

I almost always choose the latter, going in knowing I’m going to be late because of it. I try to control my tendency to keep making left turns, but sometimes I lose myself in the work and write stuff I am not expected to be doing right now.

Worse, though, is that the effort of restraining myself to just writing what’s expected is unnatural to me. My brain rebels a bit, and I’m quite sure my overall throughput is somewhat lower because of it. As a result: a less enjoyable writing experience for me, less overall content created, and unmet goals.

I wonder, sometimes, how my work results would look if I were able to cut loose and just go again. I know I have other issues slowing me down (see my earlier blog post Peripheral neuropathy and me), but I can’t help wondering if I could be more productive by working how I think, instead of doing what doesn’t come naturally: work on a single thing from A to Z without any deviation at all for any reason.

Apr 3, 2017

As of today—April 3, 2017—I’ve been working as a Mozilla staffer for 11 years. Eleven years of documenting the open Web, as well as, at times, certain aspects of the guts of Firefox itself. Eleven years. Wow. I wrote in some detail last year about my history at Mozilla, so I won’t repeat the story here.

I think 2017 is going to be a phenomenal year for the MDN team. We continue to drive forward on making open web documentation that can reach every web developer regardless of skill level. I’m still so excited to be a part of it all!

A little fox that Sophie got me

Last night, my eleven-year-old daughter (born about 10 months before I joined Mozilla) brought home this fox beanie plush for me. I don’t know what prompted her to get it—I don’t think she’s aware of the timing—but I love it! It may or may not actually be a red panda, but it has a very Firefox look to it, and that’s good enough for me.

Oct 19, 2016

One of the most underappreciated features of Firefox’s URL bar and its bookmark system is its support for custom keyword searches. These let you create special bookmarks so that when you type a keyword followed by other text, that text is inserted into a URL uniquely identified by the keyword, and that URL is then loaded. This lets you type, for example, “quote aapl” to get a stock quote for Apple Inc.

You can check out the article I linked to previously (and here, as well, for good measure) for details on how to actually create and use keyword searches. I’m not going to go into details on that here. What I am going to do is share a few keyword searches I’ve configured that I find incredibly useful as a programmer and as a writer on MDN.

For web development

Here are the search keywords I use the most as a web developer.

Keyword | Description | URL
if | Opens an API reference page on MDN, given an interface name. | https://developer.mozilla.org/en-US/docs/Web/API/%s
elem | Opens an HTML element’s reference page on MDN. | https://developer.mozilla.org/en-US/docs/Web/HTML/Element/%s
css | Opens a CSS reference page on MDN. | https://developer.mozilla.org/en-US/docs/Web/CSS/%s
fx | Opens the release notes for a given version of Firefox, given its version number. | https://developer.mozilla.org/en-US/Firefox/Releases/%s
mdn | Searches MDN for the given term(s) using the default filters, which generally limit the search to pages most useful to Web developers. | https://developer.mozilla.org/en-US/search?q=%s
mdnall | Searches MDN for the given term(s) with no filters in place. | https://developer.mozilla.org/en-US/search?q=%s&none=none
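
Firefox simply substitutes whatever you type after the keyword for the %s placeholder in the bookmarked URL. For instance, using the css keyword above:

    css transform  →  https://developer.mozilla.org/en-US/docs/Web/CSS/transform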

For documentation work

When I’m writing docs, I actually use the above keywords a lot, too. But I have a few more that I get a lot of use out of as a writer.

Keyword | Description | URL
bug | Opens the specified bug in Mozilla’s Bugzilla instance, given a bug number. | https://bugzilla.mozilla.org/show_bug.cgi?id=%s
bs | Searches Bugzilla for the specified term(s). | https://bugzilla.mozilla.org/buglist.cgi?quicksearch=%s
dxr | Searches the Mozilla source code on DXR for the given term(s). | https://dxr.mozilla.org/mozilla-central/search?q=%s
file | Looks for files whose names contain the specified text in the Mozilla source tree on DXR. | https://dxr.mozilla.org/mozilla-central/search?q=path%3A%s
ident | Looks for definitions of the specified identifier (such as a method or class name) in the Mozilla code on DXR. | https://dxr.mozilla.org/mozilla-central/search?q=id%3A%s
func | Searches for the definition of functions/methods with the specified name, using DXR. | https://dxr.mozilla.org/mozilla-central/search?q=function%3A%s
t | Opens the specified MDN KumaScript macro page, given the template/macro name. | https://developer.mozilla.org/en-US/docs/Template:%s
wikimo | Searches wiki.mozilla.org for the specified term(s). | https://wiki.mozilla.org/index.php?search=%s

Obviously, DXR is a font of fantastic information, and I suggest clicking the “Operators” button at the right end of the search bar there to see a list of the available filters; building search keywords for many of these filters can make your life vastly easier, depending on your specific needs and work habits!

Jul 26, 2016

The Web moves pretty fast. Things are constantly changing, and the documentation content on the Mozilla Developer Network (MDN) is constantly changing, too. The pace of change ebbs and flows, and often it can be helpful to know when changes occur. I hear this most from a few categories of people:

  • Firefox developers who work on the code which implements a particular technology. These folks need to know when we’ve made changes to the documentation so they can review our work and be sure we didn’t make any mistakes or leave anything out. They often also like to update the material and keep up on what’s been revised recently.
  • MDN writers and other contributors who want to ensure that content remains correct as changes are made. With so many people making changes to some of our content, keeping up, making sure mistakes aren’t introduced, and ensuring that style guides are followed is important.
  • Contributors to specifications and members of technology working groups. These are people who have a keen interest in knowing how their specifications are being interpreted and implemented, and in the response to what they’ve designed. The text of our documentation and any code samples, and changes made to them, may be highly informative for them to that end.
  • Spies. Ha! Just kidding. We’re all about being open in the Mozilla community, so spies would be pretty bored watching our content.

There are a few ways to watch content for changes, from the manual to the automated. Let’s take a look at the most basic and immediately useful tool: MDN page and subpage subscriptions.

Subscribing to a page

[Animation showing how to subscribe to a single MDN page]

After logging into your MDN account (creating one if you don’t already have one), make your way to the page you want to subscribe to. Let’s say you want to be sure nobody messes around with the documentation about <marquee> because, honestly, why would anyone need to change that anyway?

Find the Watch button near the top of the MDN page; it’s a drawing of an eye. In the menu that opens when you hover over that icon, you’ll find the option “Subscribe to this page.” Simply click that. From then on, each time someone makes a change to the page, you’ll get an email. We’ll talk about that email in a moment.

First, we need to consider another form of content subscriptions: subtree or sub-article subscriptions.

Subscribing to a subtree of pages

 

Apr 3, 2016

Today—April 3, 2016—marks the tenth anniversary of the day I started working at Mozilla as a writer on the Mozilla Developer Center project (now, of course, the Mozilla Developer Network or MDN). This was after being interviewed many (many) times by Mozilla luminaries including Asa Dotzler, Mike Shaver, Deb Richardson, and others, both on the phone and in person after being flown to Mountain View.

Ironically, when I started at Mozilla, I didn’t care a lick about open source. I didn’t even like Firefox. I actually said as much in my interviews in Mountain View. I still got the job.

I dove in in those early days, learning how to create extensions and how to build Firefox, and I had so, so very much fun doing it.

Ironically, for the first year and a half I worked at Mozilla, I had to do my writing work in Safari, because a bug in the Firefox editor prevented me from efficiently using it for in-browser writing like we do on MDN.

Once Deb moved over to another team, I was the lone writer for a time. We didn’t have nearly as many highly-active volunteer contributors as we do today (and I salute you all!), so I almost single-handedly documented Firefox 2.0. One of my proudest moments was when Mitchell called me out by name for my success at having complete (more or less) developer documentation for Firefox 2.0—the first Firefox release to get there before launch.

Over the past ten years, I’ve documented a little of everything. Actually, a lot of everything. I’ve written about extensions, XPCOM interfaces, HTML, a broad swath of APIs, Firefox OS, building Firefox and other Mozilla-based projects, JavaScript, how to embed SpiderMonkey into your own project (I even did so myself in a freeware project for Mac OS X), and many other topics.

As of the moment of this writing, I have submitted 42,711 edits to the MDN wiki in those ten years. I mostly feel good about my work over the last ten years, although the last couple of years have been complicated due to my health problems. I am striving to defeat these problems—or at least battle them to a more comfortable stalemate—and get back to a better level of productivity.

Earlier, I said that when I took the job at Mozilla, I didn’t care about the Web or about Firefox. That’s changed. Completely.

Today, I love my job, and I love the open Web. When I talk to people about my job at Mozilla, I always eventually reach a point at which I’m describing how Mozilla is changing the world for the better by creating and protecting the open Web. We are one of the drivers of the modernization of the world. We help people in disadvantaged regions learn and grow and gain the opportunity to build something using the tools and software we provide. Our standards work helps to ensure that a child in Ghana can write a Web game that she and her friends can play on their phones, yet also share it with people all over the world to play on whatever device they happen to have access to.

The Web can be the world’s greatest unifying power in history if we let it be. I’m proud to be part of one of the main organizations trying to make that happen. Here’s to many more years!

Nov 13, 2015

I’m going to highlight a meeting for you today. This is the point where you yawn politely, look at the time, and try to escape without my noticing. But I see you over there! Get back here. This is important!

Each Thursday, the MDN content team holds its weekly API documentation meeting at 8 AM Pacific time in Mozilla’s DevEngage Vidyo room. This meeting is for discussions about ongoing and upcoming work on documentation for all Web APIs. That includes the classic DOM as well as all the newer APIs, from Ambient Light to Speech Synthesis and beyond. It even includes Firefox OS-specific APIs. We don’t discriminate against non-standard APIs, either, as long as they’re exposed to browser content.

That’s a lot of stuff to cover! Everything needs to be understood, written about, sample code located or created (and tested!), and all tied together and reviewed until it makes sense and is as accurate as we can make it.

That’s why we have been holding these meetings in collaboration with the API development team for a long time now. A few months ago, the technical evangelism team also started sending a representative to each meeting. This tripartite meeting lets each team share recent accomplishments and what they’ll be doing next. This has multiple benefits:

  • The writers learn what new technologies are being implemented, what improvements are in the works, and when things are likely to ship. We also learn when special events are coming that would benefit from having documentation ready.
  • The technical evangelists get details on what new APIs are coming up, and can discuss plans for spreading the word with the developers creating the APIs and the writers documenting them, to coordinate plans and schedules.
  • The technical evangelists can relay user sentiment information in a more personal way to both the development teams and the writers; this kind of feedback is incredibly helpful!
  • The development team can let the writers and evangelists know what the status is on current API work, and we can discuss this status in a team setting instead of only reading about it in a formal note or bug comments.
  • The developers can share information about what problem points they see or expect to exist in understanding and working with technologies, in order to help guide future work in samples, demos, and documentation.

There are intangible benefits, too. Over the two-plus years we’ve been holding this weekly meeting, we’ve developed an increasingly close working relationship between the developer documentation and the API engineering teams. This has enormous benefits not just for these two teams, but for the Web we serve.

If you have a passion for creating APIs for the Web or for teaching others how to use them, please consider joining our meeting. Even if you only drop in once in a while, you’ll find it a great way to stay informed and to help guide the future of our content and evangelism efforts.

Nov 3, 2015

It’s been a while since I wrote anything on my blog about technology or the Web (indeed, the last several posts I’ve written have been my 5-word movie reviews). While fun, those aren’t very informative to the primary audience of my blog: you, the (probably) Web developer, genius type.

A lot has changed in the last few months. We’ve got so many exciting new technologies and APIs to play with. Not to mention ECMAScript 6 (a.k.a. ECMAScript 2015, a.k.a. the latest version of JavaScript). In ES6, the big new toys, for me, are Promises and arrow functions. Both take some getting used to, but once you do, they make a huge difference in code readability; despite feeling alien and weird to my old procedural programming brain, they make code just plain better.
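Here’s a tiny sketch of what I mean (my own example, fetching a made-up URL):

    // With Promises and arrow functions, asynchronous code reads almost
    // like the sequence of steps it performs.
    fetch("/api/articles.json")
      .then((response) => {
        if (!response.ok) {
          throw new Error(`HTTP error ${response.status}`);
        }
        return response.json();
      })
      .then((articles) => console.log(`Loaded ${articles.length} articles`))
      .catch((err) => console.error("Failed to load articles:", err));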

Add to that all the amazing new APIs, including WebRTC, Web Notifications, Service Workers, the Push API, and so much more, and my mind boggles at the immense power of the Web in this day and age.

I was in college when the Web first exploded into existence. Back then, it was mostly a thing students and researchers played with, but I already knew it was going to change the world. And it has.

I’ll try to get back into the habit of blogging more regularly; there’s far too much exciting stuff to talk about to let my blog stay idle any longer.

Apr 3, 2015

It was nine years ago today that I joined Mozilla as a senior technical writer. I was hired by Mike Shaver and Deb Richardson to help try to keep up with the pace of progress and to work on organizing and cleaning up older content as well. I actually started working the last few days of March, but my first official day (that is to say, the first day I was paid for) was April 3, 2006.

My daughter wasn’t even a year old yet then. Now she’s almost finished with the fourth grade.

We were deep into the documentation process for Firefox 2.0 back then (not to mention trying to finish bits and pieces of critical documentation for Firefox 1.5, which had shipped months earlier). Firefox 2.0 shipped a few months after I joined the company, and was the first release we generally felt was completely documented (for a slightly flexible definition of “completely”).

A lot has changed over those nine years. Back then, Deb and I were the entire writing staff; we had some contributors but not nearly enough. Then Deb moved onward and upward into other awesome things and it was just me for a while. But eventually we started hiring more writers, thankfully, and we wound up with the kick-ass staff we have today. And as we built up our staff, we learned more about community building, and our community of volunteer writers and contributors has grown at an ever-increasing rate.

This is far and away the longest I’ve spent at any job. It’s a great deal of fun, even when I’m stressing out over all the stuff I wish I had time to write about but don’t. Making the world a better place to be a Web developer is a rewarding career path, and I’m glad Dave Miller steered me into the Mozilla community.
