Jan 30 2012
 

As you’re probably aware, we’ve had a series of problems with the wiki lately. I’d like to share a status report as to what those are and where we stand.

Failing wiki extensions

There’s an issue causing the extensions that add features to the wiki to fail to start up. It’s caused by a bug in the wiki software that doesn’t properly handle the scenario in which multiple hosts are restarted at the same time. Because there’s no proper mutual exclusion during wiki startup, the launch of extensions can fail fairly spectacularly, wiping out the extensions’ configuration so that future attempts to start them are guaranteed to fail.
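
To make the failure mode a little more concrete, here’s a rough sketch of the kind of startup guard that’s missing. This is just an illustration in JavaScript; it is not MindTouch’s actual code (the wiki isn’t written in JavaScript at all), but it shows how a simple exclusive lock keeps two hosts from initializing the extension configuration at the same time and trampling each other’s writes.

    // Illustration only: a file-based lock around extension startup.
    // Without something like this, two hosts starting at once can both
    // rewrite the configuration and leave it corrupted.
    const fs = require("fs");

    function initializeExtensions(configPath) {
      const lockPath = configPath + ".lock";
      let lockFd;

      try {
        // The "wx" flag fails if the lock file already exists, so only one
        // host proceeds; the others back off instead of clobbering the
        // shared configuration.
        lockFd = fs.openSync(lockPath, "wx");
      } catch (e) {
        console.log("Another host is already starting the extensions; backing off.");
        return;
      }

      try {
        const config = JSON.parse(fs.readFileSync(configPath, "utf8"));
        // ... start each extension listed in config ...
      } finally {
        fs.closeSync(lockFd);
        fs.unlinkSync(lockPath); // release the lock
      }
    }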

This didn’t happen on every restart (oh, the joy of mutual exclusion bugs), but happened pretty often. Since the wiki periodically restarts itself automatically to clean up cruftiness, the result was that this bug would randomly crop up fairly frequently.

Late last week, MindTouch’s support team gave us a script that detects when this has happened and repairs the lost configuration and restarts the extensions. This was tested over the weekend on three of our most commonly-used extensions, and seems to have worked very well. They’ll be adding the rest of our extensions into the mix tomorrow.

That’s a short-term, hacky workaround for the problem.

MindTouch’s engineers are working on a patch that will correct the underlying bug. They’ve implemented the fix on their trunk codebase, and are testing it there now. It’s a substantial revision to how extensions are loaded, and they want to get it thoroughly tested. Once they’re sure the patch works, they’ll backport it to the release we’re running and test it there. Then, finally, we’ll get it installed, and we should be in good shape.

This situation is being tracked in bug 715341 if you’d like to follow along.

The Great DOM Reference Kerfluffle

While experimenting with some ideas for the DOM Reference, Jean-Yves inadvertently initiated a move of the entire DOM Reference to a different part of the wiki. That was bad enough, but could have been easily fixed. Unfortunately, the site crashed partway through the move, resulting in things being left in a state where it wasn’t a simple “move the subtree back where it belongs” operation. Instead, he had to manually move every page back where it belonged one at a time. This took several days, but is now finished. If you notice any issues with the DOM Reference, please let us know!

Posted at 12:58 PM
Jan 27 2012
 

I love the English language. It’s crazy, complicated, and bloated, and those are all things that contribute to its amazing expressiveness. If a word doesn’t exist, someone will make it up, or rip it off from another language. It’s a quirky, twisted amalgamation of words and syntax from a broad swath of other languages. From Latin to German to Japanese and Cherokee, English has swiped words from dozens of other languages.

All of that makes it a tricky language to master. It’s not hard to get your point across in English, but doing so with an appropriate level of grammatical correctness, while matching the style and formality of whatever context you’re working in, can be difficult.

English can be ugly and twisted or fluid and beautiful, depending on the skill level of the writer and the point they’re trying to get across. It can be used to create magnificent works such as Handel’s “Messiah” and Shakespeare’s Hamlet, popular novels such as Stephen King’s Carrie, or technical materials such as the MDN wiki I administer. Taken together, these works demonstrate the wide variety of styles of material you can create in English, and each practically feels like it’s written in a different language because of how differently the sentences are constructed and the material flows.

That English can be difficult to master has the side benefit of making technical writing a very attractive and lucrative line of work. If you know how to write code and can also write in English easily and with skill, you have a rare combination of capabilities that makes you highly employable. And if you love words, and have fun writing code, technical writing is a blast — being able to do both is the most fun I’ve ever had in my working life, and I’m incredibly thankful that I get to do it.

Posted at 4:06 PM
Jan 26 2012
 

By the time I came to the realization I mentioned in my previous post, that I was fed up with doing game development for a living, I had been playing with BeOS for a while, so I decided I’d like to work for Be. I went to their job listings page and looked through the openings. The only one that didn’t require a college degree (which I didn’t have) was for a technical writer.

So I applied. A few weeks later, they let me know I didn’t get the job.

I got myself fired by the game company (due to my unwillingness to cooperate with a particularly bad business decision) and wound up at another one. About three weeks after starting that job, Be called back and asked me to come up for an interview. So I went up to the Bay Area and met with Doug Fulton and a few other people (I don’t remember exactly who else, since there were several of them and it’s been a long time). We chatted for a while, I felt I made a terrible impression, and I went home.

A few weeks after that, Be let me know that while they didn’t think I was qualified for the technical writer job, they’d bring me on as a junior writer if I was willing to do that. I jumped all over that, and my career as a technical writer began.

I started at Be in September of 1997, working out of the office in Menlo Park. I was the junior of three technical writers. One of them (whose name I agonizingly fail to remember) left not too long after I started, but Doug I remember. Doug had worked as a writer at NeXT, and would tell stories about how great it was that his desk was near the rear exit of the building, so he could escape when Steve Jobs came in.

I had zero experience as a technical writer, so working with Doug, a long-time writer, was a great experience for me. He didn’t teach, per se, but offered a lot of guidance, and I watched how he did things closely as I got into the swing of things. If that experience hadn’t been such a good one, it’s entirely possible I might have fled technical writing back to programming, which probably would have been a mistake.

I’m a decent programmer — even a very good one, within certain bounds. But I like to think I’m a very good technical writer. Doug (and by extension, Be) gave me the opportunity to figure that out and spread my wings. By the end of my first year at Be, I was a senior technical writer and had had my pay bumped three times to match that title.

I’d found my calling at last. So thanks for the job, Doug.

Posted at 11:30 AM
Jan 25 2012
 

Here are today’s Wiki Wednesday articles! If you know about these topics, please try to find a few minutes to look over these articles that are marked as needing technical intervention and see if you can fix them up. You can do so either by logging into the wiki and editing the articles directly, or by emailing your notes, sample code, or feedback to mdnwiki@mozilla.org.

Contributors to Wiki Wednesday will get recognition in the next Wiki Wednesday announcement. Thanks in advance for your help!

JavaScript

Thanks to David Bruant, -TNO-, and xkizer for their contributions the last couple of weeks.

SpiderMonkey

Developing Mozilla

Extensions

XUL

XPCOM

Thanks to Neil Rashbrook for his contributions!

Interfaces

Plugins

CSS

Thanks to McGurk and cgack for their contributions to CSS documentation!

SVG

HTML

Thanks to Jens.B and tw2113 for their contributions since last time.

DOM

Thanks to cgack for contributing!

Posted at 7:42 PM
Jan 25 2012
 

Back in the olden days, I used to be a programmer writing code for a computer game company. It was hard, unglamorous work, and once the initial excitement wore off, it really became “just a job,” rather than something I loved to do. However, what drove me over the edge into outright hating the entire industry was a particular project that led me to question not my own sanity, but the sanity of artists who thought they were game designers.

Let’s see if I can tell the tale without using names.

Back in the mid-to-late 1990s, lots of movie studios were setting up game development studios to take advantage of languishing properties that they might be able to turn a fast buck on by turning them into games, or, with luck, game franchises. One of these decided to take a family movie from the ’60s and see if they could get an educational game made out of it.

They selected a team of artists — the operators of an outfit in Southern California that produced 3D animation for commercials and other projects — to design this game. That was their third mistake (the second being their choice of project, and the first being setting up an “interactive” division at all).

These artists came up with a game idea, got it approved, and subcontracted out the programming to us.

They then proceeded to ignore every bit of design advice we gave them about what was remotely possible using 1997 software technology targeting computers that would be commonly found in schools and homes with small children. We would have meetings explaining how their designs were not possible to achieve, and they would apologize and make changes that made things even worse.

Over time, their grand design did gradually get scaled back — not by removing the impossible features, but by stripping out vast chunks of the game, leaving what had been envisioned as some two dozen scenes with fun, interactive puzzles as just short of 20 screens with animations that would activate when items were clicked and a few mediocre not-really-puzzles. In order to accommodate their poor design choices, multiple versions of the various animation sequences were required to cope with the cases where two animations could overlap one another; we would then select the video to play based on how many animations were supposed to be running, and play one movie covering both animating objects.

On top of all that, their lush, beautifully rendered cartoon characters (and, yes, the artwork was beautiful) would sing and perform, with really quite nice voice acting and music. Except often they would sing songs that included inappropriate lyrics. Then there was the dance that included moves so suggestive that when I first got the video files, my jaw hit the keyboard, and I summoned everyone else in our company to see it, upon which they had to collect their jaws off my office floor.

Not long after that, the designers decided we were so far behind schedule that they moved into our offices and set up a dozen SGI workstations on our conference table to render videos. That way, they could make all the adjustments needed as we pointed out the ways they had violated the rules we’d given them for what they could and could not do in order to pack all this stuff onto a single CD-ROM. It was around that same time that the project manager from the movie-studio-interactive company started hanging around our office, despite our having no actual direct business relationship with them. That was awesome too.

By the end of the project, there had been four-day weekends during which I got less than three hours of sleep in total, weeks in which I worked 170+ hours, and actual physical fights in the office. There was also the time I literally fell asleep with my face on my keyboard, and one of the designer guys saw me and yelled at me for sleeping, even though I’d been there for over 20 hours.

As that project wound down, I started looking for a way out of the game business. I’ll continue that story in my next post, since this is a good place to break this one off. I’ll wrap up by saying that the game in question did ship, although less than 3000 copies were delivered, and the movie-studio-interactive company in question folded up not long after that.

Posted at 11:30 AM
Jan 24 2012
 

Next up in my cavalcade of influences that led me to become the technical writer I am today: Morgan Freeman. Yes, that Morgan Freeman. As I mentioned in my previous post about my influences, I watched “The Electric Company” a lot as a kid. A very young Morgan Freeman, playing the role of Easy Reader, made reading really cool. I learned a lot from that show, and although he was certainly not the only actor (and Easy Reader not the only character) to impact my love of reading and of words, he was the most impactful and memorable.

Being taught by someone that cool that reading wasn’t just something you do because you have to, but something you do because you want to, was critical at that age. So… thanks, Mr. Freeman!

Posted at 1:30 PM
Jan 23 2012
 

This morning I was following an obscure train of thought that wandered randomly from one thing to another, when I started thinking about the people, things, and events that influenced me to eventually become a technical writer. Then it occurred to me that this could make for an interesting set of blog posts, so here we are!

The first and most obvious influence is my parents. In particular, my mom. There are two ways in which she influenced my love of writing.

First, I have vague memories of her printing things like my name and having me copy them, to do things like sign holiday cards. And I always had lots of books. We have recordings of my little voice reading A Fly Went By aloud to my grandparents. “I axed him why he flew so fast…”. Good for the brain cells!

Second, and more interestingly, she would sit me in front of the TV after school and I’d watch Sesame Street, Electric Company, and, eventually, 3-2-1 Contact. Mom tells a story of how my kindergarten teacher was impressed by how I could already read and write, and asked if Mom had been teaching me. “No,” she said, “He learned that watching PBS.”

The point is, my parents encouraged me to read starting very young, before I was even in preschool, and always made sure I had lots of things to read. In fourth grade, I’d plow through Hardy Boys books, sometimes three or more in an afternoon. Those were good times, and getting an early start on reading certainly helped get me where I am today!

Posted at 3:44 PM
Jan 22 2012
 

During the Engagement team work week last week, the four on-staff Mozilla developer documentation writers (myself, Janet Swisher, Jean-Yves Perrier, and Will Bamberg) had a sit-down to talk. This was a big deal since it was Jean-Yves’s first time meeting with us in person since joining Mozilla on December 1, and Will’s first time meeting with us since he’s been largely off doing fairly separate stuff documenting the Jetpack SDK.

We had a long discussion about a wide variety of things, and I figured I’d blog about it, to share those ideas and thoughts with the wider Mozilla community — and to flesh out the ideas from the outline format I took the notes in.

In this, my final post covering what we talked about during this meeting, we’ll take a look at the new organizational hierarchy we’ve developed for MDN (in a series of previous meetings as well as on a public etherpad), and how we’re going to go about rearranging our content into this new order.

Introducing the new hierarchy

We’ve been talking for some time about fixing MDN’s tendency to be organizationally very shallow. Most MDN content sits right at the top of the hierarchy, which makes it really disorganized. This is something we want to fix, but work has been slow.

Over the last few weeks, we’ve increasingly discussed this in #devmo, and we finally have a new hierarchy designed that we think will improve things enormously. You can get a look at this on the etherpad we created for the purpose of working it out. There are a few areas where debate is ongoing, but you’ll get the idea of what we’re shooting for.

Making the move

The first step toward moving to this new hierarchy will be to create the new landing pages for each section and subsection, and to revise those that already exist as needed.

We’ll move existing articles as we get to them and as they’re discovered during day-to-day work.

In addition, all new pages should be created in the new hierarchy, once the new landing pages are in place.

Between creating all new content in the right places, and moving old content as we’re able, we’ll gradually make this transition.

On top of that, we can link to pages from the new landing pages even before they get moved, and worry about actually moving them later if we need to.

A team effort

Because this project is unfortunately not going to be a top priority for the full-time writing team (we’re being kept crazy busy keeping up with the release train!), we’ll be relying pretty heavily on the rest of the MDN community to drive this work forward. Fortunately, it’s something that can be done a bit at a time as community members (and full-time writers) have a moment to spare now and then.

Hopefully you can pitch in and help us make MDN cleaner and easier to navigate!

Posted at 9:00 AM
Jan 21 2012
 

During the Engagement team work week last week, the four on-staff Mozilla developer documentation writers (myself, Janet Swisher, Jean-Yves Perrier, and Will Bamberg) had a sit-down to talk. This was a big deal since it was Jean-Yves’s first time meeting with us in person since joining Mozilla on December 1, and Will’s first time meeting with us since he’s been largely off doing fairly separate stuff documenting the Jetpack SDK.

We had a long discussion about a wide variety of things, and I figured I’d blog about it, to share those ideas and thoughts with the wider Mozilla community — and to flesh out the ideas from the outline format I took the notes in.

Today, let’s talk about our reference documentation and what we can do to improve it.

Using the CSS Reference as a test platform

Jean-Yves has been working furiously on updating, cleaning up, and enhancing the CSS Reference. He’s experimenting with new ways to format the content, ways to improve usability, and generally doing a lot of amazing stuff there. If you haven’t looked recently, you should.

You’ll find, if you look it over, that he’s doing two things: he’s going through alphabetically, tidying things up and making corrections, and he’s using a few pages to try out new ways to present the content.

These experiments are paying off. We’ll soon have an improved structure for the content of each page, and new, clearer ways to describe the syntax of CSS properties. Each page will offer a link to an explanation of how the syntax descriptions should be parsed.

We’re also going to be experimenting with using tooltips to define technical terms, so you can get them without having to follow links (although links will still be offered).
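
If you’re curious what that might look like under the hood, here’s a tiny sketch of one possible approach. The markup and class names are made up for illustration (this is not the actual implementation); the idea is simply that a term carries its definition with it, and a bit of script shows it on hover.

    // Hypothetical sketch of the tooltip idea; not the real MDN code.
    // Assumes terms are marked up something like:
    //   <span class="glossary-term" data-definition="...">WebSockets</span>
    document.querySelectorAll(".glossary-term").forEach(function (term) {
      term.addEventListener("mouseenter", function () {
        const tip = document.createElement("span");
        tip.className = "glossary-tooltip";
        tip.textContent = term.getAttribute("data-definition");
        term.appendChild(tip); // positioned and styled by the site's CSS
      });

      term.addEventListener("mouseleave", function () {
        const tip = term.querySelector(".glossary-tooltip");
        if (tip) {
          tip.remove();
        }
      });
    });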

In addition, it will be easier in the future to navigate the documentation. Right now, once you’re on a page for a particular CSS property, you have to use the back button in your browser to get back to the index. We’ll be adding an index link, and will be looking at options to present an actual index in a sidebar on every reference page.

Spreading out

Once we’ve got the CSS Reference changes nailed down, we’re going to take what we’ve learned from those experiments and apply it to our other references as well.

In addition, we’ll continue to work toward removing Gecko-specific content, making our open web documentation browser-agnostic.

MDN everywhere

We’re also working on trying to make as much of our content as possible genuinely usable on mobile devices. Part of this process involves removing cases where we use tables to lay out our text (such as we do on many of our landing pages). We’ll be switching to using CSS columns instead.

Other issues that need to be addressed include how to cleanly present sample code on mobile. Code samples tend to be fairly wide, making them hard to read at best on mobile devices. This is going to be a challenge, but we’re up to the task!

Posted at 9:00 AM
Jan 20 2012
 

During the Engagement team work week last week, the four on-staff Mozilla developer documentation writers (myself, Janet Swisher, Jean-Yves Perrier, and Will Bamberg) had a sit-down to talk. This was a big deal since it was Jean-Yves’s first time meeting with us in person since joining Mozilla on December 1, and Will’s first time meeting with us since he’s been largely off doing fairly separate stuff documenting the Jetpack SDK.

We had a long discussion about a wide variety of things, and I figured I’d blog about it, to share those ideas and thoughts with the wider Mozilla community — and to flesh out the ideas from the outline format I took the notes in.

Today, I’ll be sharing the results of our discussion about the impending Kuma migration and what will come next.

Scripts and templates

The next Big Thing that needs to happen in Kuma’s ongoing development process is to implement support for the type of scripted templates we use pretty frequently on MDN. One of my goals for when I resume work after this week I’m taking off (I do so love scheduled automated posting in WordPress) is to go through our existing templates and figure out the types of things we need to be able to do, to get those prioritized for the development team.

It’s almost certain that we’ll be using server-side JavaScript rather than the Lua-derived DekiScript language currently used by MDN’s MindTouch-based wiki. That means we’ll need to update our existing templates. It’s possible some sort of automation might be done for that, but realistically, I don’t expect that to happen (and if it does, there will certainly be a need to review and hand-tweak stuff in at least some cases).
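
To give a feel for what that migration means in practice, here’s a hypothetical example of a simple macro written as server-side JavaScript. The function name, arguments, and environment object are all made up for illustration; the real Kuma template interface may look quite different.

    // Hypothetical sketch of a server-side JavaScript template; the API shown
    // here (args, env.locale) is invented for illustration, not Kuma's real one.
    // Goal: a typical "link to a DOM reference page" macro.
    function domxref(args, env) {
      const apiName = args[0];                // e.g. "Node"
      const displayText = args[1] || apiName; // optional alternate link text
      const base = "/" + env.locale + "/docs/DOM/";

      return '<a href="' + base + encodeURIComponent(apiName) + '"><code>' +
             displayText + "</code></a>";
    }

    // What the wiki engine might do when expanding the macro on a page:
    //   domxref(["Node"], { locale: "en-US" })
    //   -> '<a href="/en-US/docs/DOM/Node"><code>Node</code></a>'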

That said, this will be a great opportunity to look at all our templates, figure out which ones we don’t really use anymore, and get rid of them. In addition, we can clean up existing templates to work better, be smarter, and integrate localization support in templates that don’t currently have it.

Future development

We need to be ready for the future. The initial deployment of MDN on Kuma will not have all the features we want. Indeed, it won’t even have all the features we already have, although it should have most of the ones we use regularly. As such, we need to be sure we have bugs filed to give the development team a solid set of things to be done going forward.

Among other things, we’ll want to be able to have live examples embedded in documentation, so you can see how things work without having to click a link to a separate page. These should support HTML, CSS, JavaScript, and all the web goodness we love so much.
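
As a sketch of how that could work (this is just one possible approach, not a decided design), the wiki could gather a page’s HTML, CSS, and JavaScript code blocks and render them together in a sandboxed frame right in the article:

    // Illustration only: one way a page's code blocks could become an
    // inline, sandboxed live sample. The class names are hypothetical.
    function textOf(container, selector) {
      const node = container.querySelector(selector);
      return node ? node.textContent : "";
    }

    function embedLiveSample(container) {
      const html = textOf(container, "pre.sample-html");
      const css = textOf(container, "pre.sample-css");
      const js = textOf(container, "pre.sample-js");

      const frame = document.createElement("iframe");
      frame.setAttribute("sandbox", "allow-scripts"); // keep sample code isolated
      frame.srcdoc = "<!DOCTYPE html><style>" + css + "</style>" +
                     html + "<script>" + js + "<\/script>";
      container.appendChild(frame);
    }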

We need support for server-side components, so that examples for XMLHttpRequest, WebSockets, and the like can be run without needing to host stuff outside Mozilla’s servers.
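
Here’s the sort of sample I mean, in this case a minimal WebSocket echo client. The endpoint URL is a placeholder; the whole point is that a snippet like this can’t actually run as a live sample unless something on our side is listening at the other end.

    // A minimal WebSocket echo client of the kind we'd like to show live.
    // The URL below is hypothetical; without a server-side component to
    // answer it, the sample has nothing to talk to.
    const socket = new WebSocket("wss://example.org/echo");

    socket.addEventListener("open", function () {
      socket.send("Hello from the live sample!");
    });

    socket.addEventListener("message", function (event) {
      console.log("The server echoed back: " + event.data);
    });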

We’d like features to make it easy to integrate documentation content with IDEs and other utilities, as well as to make it easier for scraping tools to peel out content to present in other formats.

We want offline access to the content, either by publishing sections of the site as PDF or by making it easy to download chunks of the site in HTML form.

We need good localization tools, such as dashboards of content in need of translating, support for comparing the English and translated versions of a page to find the areas that need reviewing, and so forth.

Let’s make it happen!

We’ll be making sure we have a prioritized set of features to guide the developers toward making Kuma a platform that serves our needs as well as possible. If you have ideas for features the Kuma platform could have to make our lives better, please share them.

This is our chance to have as near-perfect a platform for our documentation as possible! It will take time to get there, but we’ll do our best to make it happen!

Posted at 9:00 AM