2013-11-23

Documentation? We don’t need no stinking documentation

Notice anything different between these two blocks of metadata from Twitter search results?[1]

[{completed_in,0.156},
 {max_id,404346705724063744},
 {max_id_str,<<"404346705724063744">>},
 {query,<<"java">>},
 {refresh_url,<<"?since_id=404346705724063744&q=java&result_type=recent&include_entities="...>>},
 {count,100},
 {since_id,0},
 {since_id_str,<<"0">>}]

[{completed_in,0.14},
 {max_id,404417052276166657},
 {max_id_str,<<"404417052276166657">>},
 {next_results,<<"?max_id=404413797667848192&q=java&count=100&include_entities=1&result_type=recent">>},
 {query,<<"java">>},
 {refresh_url,<<"?since_id=404417052276166657&q=java&result_type=recent&include_entities=1">>},
 {count,100},
 {since_id,0},
 {since_id_str,<<"0">>}]

Same search, performed roughly 4 hours apart. The vital difference? next_results appears only in the second metadata set.

If you look at Twitter’s API documentation for search, you may notice something important: there’s no meaningful documentation of what to expect in the results. next_results shows up in the example, but isn’t actually documented.

Even the extended guide, Using the Twitter Search API, doesn’t talk about the return values from the API, just how to construct a request.

Oh, and that count value in the metadata? It reflects the requested number of search results (apparently, because again, no documentation), while the actual number of tweets I’m seeing is consistently 1 fewer than requested.

Why 99 tweets when 100 are requested? Beats me. Why does next_results sometimes appear and sometimes not? Beats me.
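Absent real documentation, the only safe approach is defensive: treat next_results as optional, and never assume the number of tweets returned matches count. A minimal sketch of that idea (the fetch callable and the JSON shape are assumptions based on the metadata shown above, not documented behavior):

```python
# Defensive pagination over Twitter-style search results. Assumes each
# response decodes to {"statuses": [...], "search_metadata": {...}} and
# that next_results, when present, holds the query string for the next
# page -- an assumption, since the field is undocumented.

def next_page_params(search_metadata):
    """Return the query string for the next page, or None if absent."""
    # next_results only shows up when more results exist, so don't
    # assume the key is there.
    return search_metadata.get("next_results")

def collect_pages(fetch, query, max_pages=10):
    """Follow next_results links until they stop appearing.

    `fetch` is any callable taking a query string and returning the
    decoded JSON response. `max_pages` caps the walk so a surprise in
    the (undocumented) metadata can't loop forever.
    """
    tweets = []
    params = query
    for _ in range(max_pages):
        response = fetch(params)
        # Count the tweets actually returned; the count field in the
        # metadata may be one more than what you got.
        tweets.extend(response.get("statuses", []))
        params = next_page_params(response.get("search_metadata", {}))
        if params is None:  # no next_results: assume this was the last page
            break
    return tweets
```

The point isn’t the code, it’s that every `.get(...)` here papers over a question the documentation should have answered.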

It’s sorely tempting to start writing the API documentation that Twitter should have, but there are only so many hours in the day.

To quote myself from (of course) Twitter today:

Reverse engineering someone else’s API is a massive waste of time. Documentation matters

Documentation is easy to short-change when you’re a small company, but consider that an hour of documentation on your end can save a few hours of experimentation (best case) or days of troubleshooting production failures (not-so-best case) on the other end…for every user you have.

Doing documentation correctly

My favorite story about documentation dates to the early days of UNIX.

One of the more notable decisions was to include a BUGS section in each online documentation page (man pages, for the initiated).

As developers started listing the known bugs in their software, they were embarrassed by them, and fixed the more egregious ones rather than document them for the world to see.

Describing what your software is supposed to do is relatively straightforward (although Twitter fails at even that minimal standard).

Describing patterns for using your software is valuable, and I’ll give Twitter credit for doing that pretty well.[2]

Describing how your software fails is really hard. Capturing the edge cases where things may not work as the user (or even the developer) expects is a good start.

A modest plug and lament

I’m a contributor to the documentation effort at Basho, and we take a great deal of pride in (and receive a fair bit of praise for) our documentation website.

Still, I don’t think we’ll ever reach a point where I’m satisfied with it. Riak, our distributed database, is a very complex system, and it’s an unfortunate reality for such software that it’s impossible[3] to convey the subtleties of using it without also communicating much more of the internals than I’d like.

That’s not to say that I’m trying to hide those internal details; it is open source software, after all.[4] It’s simply unfortunate that developers have to learn about the challenges of creating a robust, distributed database in order to create a robust application that leverages it.

So I often find myself struggling with the question of how to create layered documentation that reveals finer details as needed, rather than exposing users to a firehose of information, as my co-worker Eric Redmond puts it.

Separation of concerns (and more plugs)

This summer we reworked the Riak documentation to create not only more layers to limit the firehose effect, but also to reflect the different concerns of operations staff as opposed to developers. We captured some of the thought process in a Basho blog post.

Eric also recently wrote a fantastic short book on Riak, titled (appropriately enough) A Little Riak Book. He tries to give me too much credit for editing it, but the amount of effort he put into authoring it far exceeded my nitpicking.

I occasionally write blog posts about Riak’s internals, from a perspective that’s hopefully interesting to a wider audience. My first attempt, Understanding Riak’s Configurable Behaviors, was both very long[5] and relatively specific to Riak; my more recent post, Clocks Are Bad, Or, Welcome to the Wonderful World of Distributed Systems, takes a much broader look at the challenges of consistency and causality in distributed databases.

Our docs website includes video links to lectures in which Basho engineers and customers talk about how to use (and how not to use) Riak.

What’s my point? Documentation takes many different forms, and those forms achieve different objectives.

Blog posts are a great way to capture information that will likely become outdated, because even the outdated content provides valuable context and no one (hopefully!) expects you to go back and fix it.[6]

A book is a much better narrative device to walk people completely new to the technology through the background and introductory steps. Not many beginners excel at reading reference documentation.

And tech talks (live or in video form) are useful to hear real people talking about real pain points. You can hear the agony in the voices of infrastructure engineers who had to scale a software stack on the fly when traffic was pounding their servers, and you can judge for yourself the wisdom of evaluating technologies based on bullet points or magic quadrants.

So, really, what’s your point?

Don’t be like Twitter. Don’t force your users to experiment to learn how your software responds to basic instructions.

Don’t launch a wiki and call it good. I hope I don’t need to elaborate on that one.

Don’t expect your source code to be sufficient documentation. If I need to elaborate on that one, you probably didn’t make it this far anyway.

Think carefully about organization, findability, and discoverability.[7] It’s not enough to have everything captured; you have to make it usable, too.

Put more effort into it than you think your users need. If it doesn’t hurt, you’re not doing it right.


[1] Yes, that’s an Erlang data structure. Deal with it.

[2] I’ll point out, however, that Twitter API best practices are much shorter and easier to document than, say, a distributed database.

[3] Calling it impossible may be overstating the case, but it’s certainly beyond my ken.

[4] A certain competitor of ours, who shall remain nameless to protect the guilty, has terribly vague documentation and whitepapers. Assuming it’s not fundamental incompetence, the most likely explanation is that their software is proprietary, and writing detailed documentation would give away too much of their secret sauce.

[5] So long that it took 4 posts plus an epilogue to complete.

[6] Another advantage to blog posts: they generate much broader and more sustained interest than yet another web page on your documentation site.

[7] My definitions of findability and discoverability, which I refuse to verify with any online resources lest I be proved wrong: findability is the ability of a user to find what they’re looking for, while discoverability is the ability of a user to find what they don’t know they’re looking for.
