(Essay was written in April, 2002)

(If you were sent here to read about preferences, consider reading this cleaned-up extract instead - you'll avoid a lot of dated discussion of 2002 GNOME. I'm leaving the original essay here but wouldn't suggest reading it all. Click here for the extract.)


Many people have argued that free software has trouble creating good user interfaces. Recently Matthew Thomas posted a nice example of this argument, which I found on Joel's web site. It was cool to find mpt's article, because it nicely articulates what's gone wrong with many projects - including past versions of all the major Linux/UNIX desktops.

Aside from the list of bullet points, there are a couple of big-picture issues I consider significant about GNOME 2 but don't expect most outsiders to have noticed yet.

  • Accessibility. The enormous amount of work Sun has invested in GNOME 2 accessibility keeps us in the running for government use in the US and around the world. Without this rather substantial body of work, free software would have been shut out of all government agencies. (Not to mention that this work means we're accessible to disabled users!) Importantly, Mozilla and OpenOffice are being integrated into the same accessibility architecture. Most people don't understand the significance of the accessibility initiative, even if they know about it.
  • Usability. GNOME 2 has a long way to go. But in my opinion it has changed the slope of the trend line that Matthew Thomas (and others before him) observed, and it is the first Linux/UNIX desktop release to round this corner.

I'd been meaning to write about this second point for a while, and mpt's web log really inspired me to go ahead and do so. So here are some random thoughts. This isn't an organized essay.

What is "Good UI"?

As far as I can tell from user posts on mailing lists and magazine/webzine reviews, to your average technically-inclined Internet resident a good UI means you have a lot of features, or alternatively that you have a lot of snazzy graphics, or alternatively that your UI is just like one that the commentator has used in the past.

I don't have any genius definition of "good UI"; I'm not a UI expert. But it clearly doesn't mean snazzy graphics or featuritis. In the end when I think good UI I think of MacOS classic: clean, simple, consistent, productive. You focus on your work or play and the GUI isn't in the way. OS X adds the cool graphics too, but those aren't the part that makes it a good UI, just the part that makes it fun.

Another element of good UI is attention to all the little details: good error dialogs, setting up keyboard navigation, dealing with relayout when a window gets resized, accessibility, internationalization. People don't usually know how to evaluate these aspects of a UI, unless they're experienced UI designers or GUI programmers.

I suppose I have a Cliff's notes approach to good UI and the real UI designers will laugh, but ah well. It's a decent heuristic. ;-)

So anyhow, when I talk about a good UI, that vague definition is what I mean.

Why free software can do good UI

Have a look at the list of reasons why mpt says that free software UIs come out badly:

  • Not enough UI designers.
  • Too many cooks.
  • Designers can't submit code patches.
  • Just copying Apple and Microsoft.
  • Volunteers only want to do cool stuff.
  • Volunteers don't do boring details.
  • Maintainers cave in and add lame preferences rather than endure flamewars.
  • People want their own fifteen pixels of UI.
  • Workarounds are introduced during the devel process and never removed.

I don't think most of these bullet points explain why free software user interfaces are traditionally sucky. Here is how I explain it: consistently producing quality user interfaces is hard.

Let me explain that a bit more. ;-)

Take all of the above bullet points, and plug in "good software design" in place of "good UI." Observe that most of the bullet points still make sense, and in fact you'll probably feel that you've heard the arguments before. Here are the ones that still apply:

  • Not enough software designers to get the work done.
  • Too many cooks spoil the code's architecture.
  • Free software doesn't innovate, just copies.
  • Volunteers only want to do cool stuff.
  • Volunteers don't do boring details.
  • Maintainers cave in and add misguided features or code rather than endure flamewars.
  • People want their own features to point at.
  • Workarounds are introduced during the devel process and never removed.

People have argued that good software developers won't work for free, that too many cooks could screw up the design, that free software won't have innovative features, that no one will do the boring bugfixes, that lame workarounds will remain forever, etc. etc. (Hmm - it's even possible Bill Gates has been saying some of this stuff lately...)

Guess what - many successful free software projects have overcome those issues on the software architecture side. The tools to overcome them aren't anything mysterious: a culture of strong lead architects propagating good guidelines and insisting that things be done correctly; good process and bug tracking; commercial companies helping to bang out the boring issues; finding good contributors and ignoring the crazy and incompetent ones.

Look at the Linux kernel, Apache, XFree86, whatever - all these successful projects have dealt with these kinds of issues in the realm of code, and it can also be done in the realm of UI.

Good UI is no different from good software: you achieve quality results through quality process and quality people, working over time.

Only one of the bullet points in mpt's article is unique to UI:

  • Designers can't submit code patches.

While it has an impact, I don't think this bullet point is very significant by itself. For one thing, it isn't always true. GNOME UI team leader Seth Nickell is famous for going into CVS and changing things. ;-) For another, based on watching mailing lists and spam from the bug tracker, we have lots of developers who are willing to follow the human interface guidelines, fix UI bugs, and ask the UI team for suggestions. Finally, we have UI designers who work for companies that contribute developers to GNOME, and they typically have the ability to guide the efforts of their company's developers.

(It is true that UI designers have to spend more time convincing others to do things than they might have to in a proprietary because-I-said-so situation; but that's true of everyone involved in a free software project. Some people have the personality for this, others don't.)

If all these points apply to software development as well and free software projects seem to produce great software, why have they typically produced bad user interfaces? Again, consistently producing quality user interfaces is hard. Specifically:

  • User interfaces are hard to learn how to do well; they are a legitimate area of expertise, and people haven't had even basic knowledge of this area.
  • User interface programming is hard to learn how to do well. Answering questions from new GTK+ developers, I've watched hundreds of people go through the same stages, such as the misguided "hmm, maybe I can autogenerate my GUI" stage, or the "oh, I finally understand what 'event-driven' means!" stage. (There's a small sketch of that last point after this list.)
  • User interfaces are a lot of work even after you know how to do them well, and while companies have thrown hundreds of millions at the Linux kernel to move it from toy to enterprise-class, they have only started to take an interest in the GUI.
  • User interfaces competitive with Windows or OS X require a certain amount of technical infrastructure in order to develop them. The availability of modern GUI toolkits such as GTK+ 2 and Qt 3 (rather than Xt or Tk) is a big step forward. We still have a long way to go on the systems level though.[1]
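
To make the "event-driven" point from the list above concrete, here's a minimal GTK+ 2 sketch (the code is invented for illustration and isn't from any real GNOME app): main() only sets things up and then hands control to gtk_main(); your code runs when the toolkit delivers an event, such as a button click.

    /* Minimal event-driven GTK+ 2 sketch. Compile with something like:
     *   gcc demo.c `pkg-config --cflags --libs gtk+-2.0`
     */
    #include <gtk/gtk.h>

    /* Runs only when the user clicks the button; nothing polls for input. */
    static void
    on_button_clicked (GtkWidget *button, gpointer user_data)
    {
      g_print ("Button clicked\n");
    }

    int
    main (int argc, char **argv)
    {
      GtkWidget *window, *button;

      gtk_init (&argc, &argv);

      window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
      button = gtk_button_new_with_label ("Click me");
      gtk_container_add (GTK_CONTAINER (window), button);

      g_signal_connect (button, "clicked",
                        G_CALLBACK (on_button_clicked), NULL);
      g_signal_connect (window, "destroy",
                        G_CALLBACK (gtk_main_quit), NULL);

      gtk_widget_show_all (window);

      gtk_main ();  /* control stays here; callbacks fire as events arrive */

      return 0;
    }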

In short, our UI has sucked for the same reason version 1.0 of the Linux kernel sucked: it's not mature yet. Creating a good UI is a matter of assembling and effectively organizing a sufficiently large and coordinated group of contributors for a sufficiently long time.

In the GNOME Project, my feeling is that we're getting some traction on the hard problem of creating a good interface. Here are some of the specific reasons why:

  • We have several good UI designers with respected leadership roles. I would attribute this in part to Sun and Ximian paying these designers for their time, and also in part to the influence of Eazel.
  • Developers have basic UI understanding, and can make an intelligent case about a UI question on mailing lists or in bugzilla.
  • Programmers have learned how to design and write GUI code effectively.
  • Our culture encourages good UI and discourages "crackrock features." No matter how nice your code, GNOME developers will not be impressed if the UI is junk.

UI bugs are tracked as bugs in bugzilla, they get fixed, and people who won't fix their UI are considered incompetent. UI bugs are just like any other problem with the software; they're something to be addressed.

No rocket science here. It's just a matter of creating a critical mass of people who "get" what the right thing to do is, and having the resources to go ahead and do it.

Over perhaps the last year, and especially the last 6 months, I've gotten the sense that we've turned the corner and could finally tell you what's involved in creating a GUI competitive with OS X or Windows XP. And enough resources have fallen into place that we're making noticeable progress on some of the more challenging aspects of doing so. But there's still a mountain of work remaining.

[1] The kernel and underlying OS frequently don't offer the features you'd need to make a UI competitive with OS X or Windows XP - and most of the lower-level OS programmers, even the famous ones, don't understand what a good UI is like, or how to write a GUI program, so are no help in fixing the situation. Of course I'm also no help in fixing the kernel, so I don't mean to criticize, just to suggest that we need a few people with dual expertise, or better communication between projects.

Do commercial companies help or hurt?

mpt claims in a brief aside that Eazel and Ximian have hurt the GNOME Project. While Eazel and Ximian are the high-profile GNOME-specific companies, the OS vendors (Red Hat, Sun, Hewlett-Packard, Mandrake, etc.) have also been major players. Here's a brief tangent on my personal view, though you are free to consider me biased since I work at Red Hat.

Each influx of commercial developers has had some costs. Several times we've had a big mass of new full-time developers joining our existing development team. As predicted by Fred Brooks, this has resulted in delays. With each new group we've had to enhance and formalize our process for doing things, and learn to scale the project to more participants; and the new developers have taken a while to ramp up.

Moreover, you can certainly blame some bad UI on commercial factors; for example, I think Nautilus would have been more integrated with the desktop if it hadn't been Eazel's flagship product, and it's unfortunate that the company disappeared before reaching the optimization phase of the project. (GNOME 2 has largely fixed both issues, btw.) At the same time, Eazel introduced the culture of usability to GNOME, and the vast majority of Nautilus code is great stuff. So the net contribution was positive.

IMO the companies that have gotten involved in GNOME have been crucial to our success.

  • Having contributors employed full-time is pretty nice. It makes large/hard changes easier and helps get the boring bits of work done.
  • The process and infrastructure we've developed to handle the large number of full-time developers working on GNOME caused short-term delays, but is proving invaluable.
  • An enormous number of bug fixes have originated from the operating system vendors.
  • They have contributed UI designers and user testing.
  • They bring an understanding of the problems encountered by "real world" customers.

While you can certainly point to screwups caused by people working for companies, I wouldn't blame mysterious corporate agendas; most of them have been the kind of screwups you get when you move a software project from 5 people to 300 people. Our largest screwups have been caused by interpersonal/individual issues, not companies; and on the whole, GNOME is a lot more harmonious than the linux-kernel mailing list, or the legendary Perl flamefests.

The GNOME Foundation has been vital for taking advantage of commercial contributions, without letting them compromise the integrity of the project. The Foundation bylaws clearly place all control in the hands of individual GNOME project members, and I expect most members to be volunteers pretty much forever.

Many other projects such as the Linux kernel and Apache have managed to benefit enormously from corporate contributions, without sacrificing technical goals, and IMO the same is happening for GNOME.

The Question of Preferences

The most-often-mentioned example of bad free software UI is "too many preferences." This is pretty much my pet peeve; I was inspired to write a whole window manager on the premise that current window managers suffer from preferences overload.

A traditional free software application is configurable so that it has the union of all features anyone's ever seen in any equivalent application on any other historical platform. Or even configurable to be the union of all applications that anyone's ever seen on any historical platform (Emacs *cough*).

Does this hurt anything? Yes it does. It turns out that preferences have a cost. Of course, some preferences also have important benefits - and can be crucial interface features. But each one has a price, and you have to carefully consider its value. Many users and developers don't understand this, and end up with a lot of cost and little value for their preferences dollar.

Too many preferences means you can't find any of them. While I use X-Chat every day and am fundamentally friendly to it, I have to pick on it in this paragraph and hope the maintainer is good-natured. It has so many preferences that there's a whole menu just to hold all the different preferences dialogs. Plus you can script it - in your choice of languages. If I want to change something about X-Chat, it takes me eons just to find the right button.

Preferences substantially damage QA and testing. As someone who reads dozens of bug reports per day, and occasionally fixes a couple, I can tell you that it's extremely common to find a bug that only happens when a certain combination of options is enabled. As a software developer, I can tell you that preferences frequently add quite a bit of code complexity as well; some more than others, but even simple preferences can add a lot of complexity if there are 200 of them.
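
To make the combinatorial cost concrete, here's a small, made-up sketch in plain C (the preferences and the app are hypothetical, not taken from any real program): three boolean preferences already give 2^3 = 8 distinct configurations, and a bug may only show up in one of them.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical preferences for an imaginary app; every bool doubles the
     * number of configurations QA has to cover. */
    struct prefs {
      bool confirm_delete;
      bool animate;
      bool move_to_trash;
    };

    static void
    delete_item (const struct prefs *p, const char *item)
    {
      if (p->confirm_delete)
        printf ("ask the user before touching %s\n", item);

      if (p->animate)
        printf ("play removal animation for %s\n", item);  /* a bug might hide here... */

      if (p->move_to_trash)
        printf ("move %s to the trash\n", item);            /* ...or only in this combination */
      else
        printf ("delete %s permanently\n", item);
    }

    int
    main (void)
    {
      /* Walk every configuration: with n boolean preferences this loop runs
       * 2^n times, which is roughly what a thorough test matrix looks like. */
      for (int mask = 0; mask < 8; mask++)
        {
          struct prefs p = { mask & 1, mask & 2, mask & 4 };
          delete_item (&p, "message.txt");
        }
      return 0;
    }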

Upshot: more preferences means fewer real features, and more bugs.

Preferences make integration and good UI difficult.

One of the hardest lessons of GUI programming is that hard coding behavior can be the Right Thing. Programmers are taught to make everything generic and infinitely flexible. The problem is that the more generic and infinitely flexible your UI is, the more similar it is to a programming language. Lisp is not a good user interface.

The point of a good program is to do something specific and do it well.

The thing is that each UI decision depends on countless other UI decisions. A really simple example is keybindings. On UNIX/Linux, it's nearly impossible to pick reasonable default bindings for global desktop navigation because they all conflict with bindings that some app is using. On Windows, the desktop navigation bindings are hardcoded, and no app uses them, because apps know for sure which bindings to avoid.

If your program has no idea what the rest of its environment (or even all the parts of itself) will be like, then it can't really have reasonable defaults. It can't "just work." Basically, at some point the existence of preferences means that everyone must configure their desktop, when ideally preferences would be optional and the default behavior would be reasonable without intervention. Hard coding what can be hard coded has a lot of value.

Preferences keep people from fixing real bugs. One of the more amusing functions in GNU Emacs is "menu-bar-enable-clipboard." Now that KDE is fixed, Emacs is basically the last remaining X application that insists on having cut and paste that doesn't work correctly. So they have this function "menu-bar-enable-clipboard" which basically means "please make my cut and paste work correctly." Why is this an option? I call this kind of preference the "unbreak my application please" button. Just fix the app and be done with it.

Preferences can confuse many users. Take the famous "too many clocks" example. A significant number of test subjects were so surprised to have 5 choices of clock that they couldn't figure out how to add a clock to their panel. This cost of preferences is invariably underestimated by us technical types.

The simplest issue: the preferences dialog is finite in size. You can see many large apps struggling with where to put everything. Mozilla has a zillion prefs that aren't even in the dialog.

So how do you decide which preferences to have?

On hearing that preferences have a cost, some people get upset that we're going to remove every preference, or that their favorite one is gone, or whatever.

Step back for a minute and think about the preferences problem.

For any program, there are literally an infinite number of possible preferences. Each one has a cost. A program with infinite preferences is therefore infinitely bad. But clearly some preferences are genuinely useful and important. So the UI developer's job is to choose the useful subset of possible preferences.

An argument that "preferences are good" or "preferences are bad" is clearly unproductive. Only an argument that draws a line between when a preference should exist, and when it should not, is a meaningful argument that impacts real-world developer decisions.

The traditional, de facto free software line between when a preference should exist and when it shouldn't is "a preference should exist if someone implements it or asks for it." No one is going to seriously defend this one though. At least, no one who's maintained an application and seen the sheer number of non-overlapping feature/preference requests you typically receive can take this one seriously.

Just as Linus rejects most kernel patches these days (though he probably took more of them back in the Linux 0.1 era), more preferences will be rejected as free software matures.

So how is the decision made? It's a judgment call. I try to go through some questions like these:

  • Ask questions to find out what's really bugging someone who requests a preference. What is the annoyance or inefficiency that prompted them to ask?
  • Can said annoyance be made to go away for all users without requiring a preference? If so, just do that. You may have to think about the big picture of the UI to figure out how to make it Just Work.
  • Is the annoyance or inefficiency really significant, or did it cost them 1 second doing something that users do once per week on average? If it's just some trivial thing, then the extra feature or preference probably costs more than it's worth, even if you can't make things Just Work.
  • Does any other OS have this feature or preference? I'm all for innovation, but if no one else is doing something, you should think it through twice to be sure there isn't a reason they aren't doing it. If you're appropriately humble you'll probably find that a lot of thought and user testing has gone into the currently popular platforms.

The main point is to set limits. Figure you have some fixed number of slots for preferences; is the preference in question worth "spending" one of those slots? Once spent it's hard to free a slot up again. Keep in mind that preferences have a cost, and demand that each preference have some real value.

Standing up to user pressure

As the maintainer of a free software package, it's really tough to stand up to the continuous barrage of feature requests, many of which are "requests for preferences," often accompanied by patches.

Reading dozens of GNOME and Red Hat bugs per day, I find that users ask for a preference by default. If a user is using my app FooBar and they come to something they think is stupid - say the app deletes all their email - it's extremely common that they'll file a bug saying "there should be an option to disable eating all my email" instead of one saying "your craptastic junk-heap of an app ate my email." People just assume that FooBar was designed to eat your email, and humbly ask that you let them turn off this feature they don't like.

Fight the temptation! Admit the truth to your users - you're a loser, FooBar just sucks and ate their email. ;-) This feature should be fixed, not made optional.

I got a patch just today because the Metacity window shading animation is pretty lame-looking, and moreover it grabs the X server, which keeps all apps from repainting during the animation. So the patch made the animation optional. But that's probably not the right fix. As far as I currently know, the problem is that the animation sucks. Based on that info, the preference would be a band-aid.
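
For anyone who hasn't run into a server grab before, here's a tiny standalone sketch of why it hurts (this is not Metacity's actual code; the frame count and delay are invented). Between XGrabServer() and XUngrabServer() the X server only services this one client, so every other application on the screen is unable to repaint.

    /* Build with: cc grab-demo.c -lX11
     * WARNING: running this briefly freezes your whole X session on purpose. */
    #include <X11/Xlib.h>
    #include <unistd.h>

    int
    main (void)
    {
      Display *dpy = XOpenDisplay (NULL);
      if (dpy == NULL)
        return 1;

      XGrabServer (dpy);             /* from here on, no other client can draw */

      for (int i = 0; i < 10; i++)
        {
          /* ...a window manager would draw one frame of the shade animation here... */
          XFlush (dpy);
          usleep (20 * 1000);        /* ~20ms per frame; the desktop stays frozen */
        }

      XUngrabServer (dpy);           /* everyone else can repaint again */
      XFlush (dpy);

      XCloseDisplay (dpy);
      return 0;
    }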

Linus had a good quote on this, found in LWN this week. The argument on the linux-kernel list was apparently about whether some poorly-implemented IDE feature should have been removed prior to creating a new, better replacement implementation. Linus says:

The fact is, many things are easier to fix afterwards. Particularly because that's the only time you'll find people motivated enough to bother about it. If you were to need to fix everything before-the-fact, nothing fundamental would ever get fixed, simply because the people who can fix one thing are not usually the same people who can fix another.

The annoying problem here is that sometimes you have to temporarily cause regressions in order to get things fixed. For GNOME 2 for example, we wanted to fix the window list (task bar) applet; under GNOME 1.4, it has to be manually sized via preferences. We wanted it to Just Work. But it took a while to make it Just Work and we were flamed regularly during its period of brokenness, because we didn't get it quite right. Most of those flames demanded re-adding all the manual sizing preferences. But that would have been wrong. (Now that the applet is mostly working, it may even make sense to add a "maximum size" preference as a useful feature.)

Stick to your guns and get it right.

What about advanced users?

As Joel points out, advanced users do not want a million useless preferences when things should just work on their own. The point rings true in my experience.

Sometimes there may be a good reason for an Advanced tab or the like, but something as elaborate as the Nautilus 1 "user levels" concept isn't warranted IMO, and Advanced tabs must not be used as band-aids or excuses for stupid designs.

(The Desktop Preferences->Advanced menu in current GNOME 2 betas is precisely such a band-aid, because the new control panels haven't fully replaced the functionality of the old ones; this menu needs to die.)

Why all this obsession with preferences anyway?

I find that being hard-core disciplined about good defaults that Just Work, instead of lazily adding preferences, naturally leads the overall UI in the right direction. Issues come up via bugzilla or mailing lists or user testing, and you fix them in some way other than adding a preference, which means you have to think about the right UI and the right way to fix problems.

Basically, using preferences as a band-aid is the root of much UI evil. It's a big deal and a big change in direction for free software that GNOME has figured this out.

But how can you claim GNOME 2 is good when it lacks <insert my favorite feature here>?

Sure, it probably needs that feature. ;-) File a bug report, add the Usability keyword.

I'd be a moron to claim GNOME 2 is perfect. But I do think that in many places it's a clear departure from the traditional hundreds-of-preferences, as-many-buttons-as-will-fit, as-many-menu-items-as-will-fit, union-of-all-possible-features free software UIs. [2]

And when you file that bug report, consider asking that the defaults be fixed, instead of asking for a preference.

[2] Some examples: we've gone from a deeply-nested hierarchy containing 31 control panels to maybe 10 or 11 (plus an unfortunate "Advanced" dumping ground, but that's on the hit list). Look at the preferences for the Desk Guide in 1.4 vs. the new workspace switcher. Throughout GNOME, check out the keyboard navigation; the panel has it, dialogs are loaded up with mnemonics, Nautilus now uses standard key-navigable widgets where applicable. A small panel tweak to notice: the word "applets" doesn't appear in the UI. And so on. There are thousands of little tweaks waiting to be noticed - collect them all. If you're especially adventurous you might try GNOME 2 with Metacity as the window manager. By the way, if you start poking all the little UI details and find problems, report them.