The old blog was stale and strange, but if you're looking for it for history's sake, you can find the list of entries here. Deep links still work fine, but I don't really want the main page heavily spidered, so please don't link to the index.
Long Term Interests:
error, I do not experience these
The partial solution I would proffer is to focus on design instead of usability. There's a big difference. I'm sure there will be a big hoopla over Apple today owing to the expo, and they deserve it. I think it would be very hard to argue that the things Apple does are not interesting. Part of the reason Apple is interesting is that they encourage designs that change market norms. Good design is challenging. I mean that in two ways: it is hard to do, and it tends to shake things up.
Extreme shaftation is an oft-used and effective approach to producing really good designs. That's part of the reason it's far harder to do a good design in a non-1.0 product. In a 1.0 product you don't have existing users; there's nobody to shaft. You can choose who you want to target, and do it well (unless you position yourself, say, as a Microsoft Word replacement, in which case you inherit that set of expectations!). As soon as you have users, it's very, very hard to drop things from the requirements list. The point of the shafting isn't to remove individual features, or (necessarily) to increase simplicity. Simplicity sucks if it doesn't do anything. The point is to expand the scope of possible designs, to let you do new and more interesting things.
Focusing on usability devolves into a sort of bean counting. You divide up the "requirements list", figure out how to cram all of it in, and then try to organize the minutiae (button labels, menu organization, etc.) so it somehow still all makes sense. The result isn't very sexy, and is aggressively mediocre. Every point on the requirements list pins you down. In the end the requirements list does the design instead of you. When everybody else is producing nutso apps with a billion buttons and no sort of consistency (cf. GNOME 1.x), the result of usability looks pretty good. But by shedding some constraints, losing most of the requirements, and focusing carefully, you can usually make something much better.
Shedding the Requirements List by Zeroing User Expectations (MS Office)

Microsoft Office exemplifies usability in action. They have a huge list of features that Office must have or users will be angry. They have done a good job of taking that massive list and producing something sane. I am sure that every dialogue and menu in MS Office is pored over with excruciating care: "Will that wording confuse people?", "What are people most likely to be looking for in this menu?", etc. It shows. Office is very polished. It's also a very poor design.
If I were commissioned by Microsoft to dramatically improve Office, my first step would be to position the project not as a next-generation Microsoft Office, but as a new product. I might even start with the Office codebase, but I sure as hell couldn't work with the smothering mantle of user expectations that looms over Office. Done well, I think you'd largely displace Office in the market (assuming this was a Microsoft product; I don't mean to imply that anybody could just make a better product and trounce Office in the market). You would still be meeting the goals people have in using Office. What you're not doing is slogging through trying to meet the specific needs people have of the existing software. If you do that, you'll just end up writing Office again.
New Software Resets the Requirements List Anyway (E-mail)

It's important to understand that most 'feature' or 'requirements' lists are a reflection of users' needs and desires relative to existing implementations. If you improve the model enough, most of this is renegotiable.
E-mail is a great example of this. Let's say the internet hadn't appeared until 2004, and you are right now in the process of designing the first E-mail app. Clearly users need the ability to make tables, right? I mean, that's "word processing 101". And to format them precisely, oh, and insert drawings. And equations. And to edit graphs inline, and to set the margins and page settings, etc. etc.
You could easily end up with the requirements list for Microsoft Word: a design for creating multi-page, labour-intensive, laid-out documents. These are the requirements you'd extract from the "word processor + postal mail" model. But E-mail totally renegotiated this. Short little messages are the norm, not multi-page documents. You receive many dozens of mails a day, not several. There's no question that being able to insert a table here and there would be nice, but it's by no means a requirement. E-mail's one compelling feature, instant and effortless transmission of text, renders the old model's list of "must have requirements" moot.
Conclusion: I have a cruel alter ego who wakes up when the alarm goes off, disables it for who knows what reason, laughs mischievously, and then goes back to bed.
Solution: Tie myself up before going to bed.
Problem: How do I get out of bed when I'm back to my calm, mild-mannered normal self?
Usability testing (perhaps more aptly called "learnability testing") is all the rage. If you look on the web for information on usability you'll be bombarded with pages from everybody from Jakob Nielsen to Microsoft performing, advocating, and advising on usability testing. It's unsurprising that people who have been educated about HCI primarily by reading web sites (and books recommended on those web sites) equate usability with usability testing. Don't get me wrong, usability testing is a useful tool. But in the context of software design it's only one of many techniques, isn't applicable in many situations, and even when it is applicable is often not the most effective.
Why is usability testing lauded all over the internet? The most visible and growing area of HCI is web site usability, because it has received broader corporate adoption than usability applied to other things (e.g. software). In other words: most usability discussed on the internet today is in the context of web page usability, and web page usability is profoundly improved by usability testing. So it's not surprising that much of the usability discussion on the internet deals with usability testing.
Desktop software usually presents a substantially different problem space from web pages. Desktop software involves more complex and varied operations where long-term usability is crucial, whereas a web site represents a simple operation (very similar to 100 other websites users have used) where "walk up and use perfectly" is crucial. Design of infrequently used software, like tax software, is much more similar to web site design. One simple example... In most web pages, learnability is paramount: if, on their first visit to a web site, users don't get what they want almost instantly and without making mistakes, they will just leave. Learnability is the single most important aspect of web page design, and usability tests (aka learnability tests) do a marvelous job of finding learning problems. In a file open dialog, learnability is still important, but how convenient the dialog is after the 30th use matters more.
A good designer will get you much farther than a bad design that's gone through lots of testing. (A good design that has had testing applied to it is even better, but more on this later.) Usability testing tends to see the trees instead of the forest. You tend to figure out "that button's label is confusing", not "movie and music players represent fundamentally different use cases". Because of this, usability testing tends to get stuck on local maxima rather than moving toward global optimization. You get all the rough edges sanded, but the product is still not very good at the high level. Microsoft is a poster child for this principle: they spend more money on usability than anyone else (by far), but they tend to spend it post-development (or at least late in development). It's not an efficient use of resources, and even after many iterations (even over multiple versions) the software often still sucks. A good designer will also predict and address a strong majority of the "that button's label is confusing" type issues, so if you do perform usability testing you'll be starting with 3 problems to find instead of 30. That's especially important because a single usability test can only find several of the most serious issues: you can't find the smaller issues until the biggies are fixed. In summary: with a designer you're a lot more likely to end up optimizing toward a global maximum rather than a local maximum, AND if you do testing it will take far less of it to get the little kinks out.
Usability testing is not the best technique for figuring out the big picture. Sometimes you will get an "aha" experience triggered by watching people use your software in a usability test, but typically you can get the same experience by watching people use your competitor's software too. Also, a lot of these broad observations are contextual: they require an understanding of goals and of how products fit into people's lives that is absent in typical usability tests. Ethnographic research is typically a much more rewarding technique for gaining this sort of insight.
Producing a good design requires more art than method. I think a lot of people are more comfortable with usability testing because it seems like a science. It's methodical, it produces numbers, it's verifiable, etc. Many designers advocate usability testing less because it improves the design and more because it's a useful tool for convincing reluctant engineers that they need to listen: usability testing sounds all scientific. Usability testing can be a very useful technique for trying to get improvements implemented in a "design hostile environment". This is part of why I pushed/did more usability testing early on in GNOME usability. Companies would love it if there were a magic series of steps you could follow to produce genuine, guaranteed, ultra-usable software. Alas, just as with programming, there isn't. A creative, insightful, informed human designing the software will do much better than any method.
Usability tests can't, in general, be used to find out "which interface is better". I mention this because people periodically propose a usability test to resolve a dispute over which way of doing things is right. Firstly, you'll only be comparing learnability. There are many other important factors that will be totally ignored by this. Secondly, usability tests usually don't contain a sufficiently large sample of users to allow rigorous comparison. Sure, if 10 people used interface A without trouble, while 10 people used interface B and reported 40 serious problems, you can confidently say that interface A is way more learnable than interface B (and at that sort of extreme you can probably even assert it's much better overall). But it's rarely like that.
Example: We test interface A on 10 people and we find one problem that affects 8 of them but only causes serious problems for 2, plus 3 serious problems that affect one person each. We test interface B on 10 people and we find one serious problem that affects 3 people, another serious problem that affects 2 people, and 3 serious problems that affect one person each. Which interface is better? It's a little harder to tell. So let's say we argue it out and agree that interface A did better on the usability tests. But we've only agreed that interface A is more learnable! Let's say our designer asserts that interface B promotes a more useful conceptual model, and that the conceptual model is more important than learnability here. How do we weigh this evidence against the usability test? We're a little better off than we were before the test, but not a lot, because we still have to weigh the majority of evidence that isn't directly comparable. And if we always accept "hard data!" as the final authority (which people often, somewhat erroneously, do in cases of uncertainty), even when the data only covers a subset of the considerations, then we are worse off than before the test.
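To make the bookkeeping concrete, here's a toy tally (in Python) of the made-up numbers above. The way it collapses the results into a couple of counts per interface is purely an illustrative assumption; it glosses over exactly the severity-weighting and conceptual-model questions I'm describing, which is rather the point.

    # Hypothetical results from the example above (10 participants per interface).
    # Each entry is one problem: how many people hit it, and for how many it was serious.
    interface_a = [
        {"affected": 8, "serious_for": 2},  # widespread problem, serious for only 2
        {"affected": 1, "serious_for": 1},  # three serious problems, one person each
        {"affected": 1, "serious_for": 1},
        {"affected": 1, "serious_for": 1},
    ]
    interface_b = [
        {"affected": 3, "serious_for": 3},  # serious problem affecting 3 people
        {"affected": 2, "serious_for": 2},  # serious problem affecting 2 people
        {"affected": 1, "serious_for": 1},  # three serious problems, one person each
        {"affected": 1, "serious_for": 1},
        {"affected": 1, "serious_for": 1},
    ]

    for name, problems in [("A", interface_a), ("B", interface_b)]:
        serious_hits = sum(p["serious_for"] for p in problems)
        print(f"Interface {name}: {len(problems)} problems found, "
              f"{serious_hits}/10 participants hit something serious")
    # Interface A: 4 problems found, 5/10 participants hit something serious
    # Interface B: 5 problems found, 8/10 participants hit something serious

Even with the numbers lined up like that, all you've compared is learnability; the counts say nothing about which conceptual model serves people better after the 30th use.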
So am I saying that usability testing is bad or doesn't improve software? No! If you take a good design, usability test it to learn about major problems, and use that data and experience to improve your design (remembering that design is still about compromise, and sometimes you compromise learnability for other things)... you will end up with a better design. Every design a designer pulls out of their head, even a very good one, has some mistakes and problems. Usability testing will find many of them.
So why don't I advocate usability testing everything? If you don't have oodles of usability people, up-front design by a good designer provides a lot more bang for the buck than using that same designer to do usability tests. You get diminishing returns (in terms of the average seriousness of problems discovered) as you do more and more fine-grained tests. It's all about tradeoffs: given n person-hours across q interface elements (assuming all the people involved are equally skilled at testing and design, which is obviously untrue), what is the optimum ratio of hours spent on design vs. hours spent on testing? For small numbers of person-hours across large numbers of interface elements, I believe in shotgun testing, and spending the rest of the time on design (there's a toy sketch of this tradeoff after the example below). Shotgun testing is testing the interface in huge chunks, typically by taking several large high-level tasks that span many interface aspects and observing people trying to perform them.
An example high-level task might be to give somebody a fresh desktop and say: "Here's a digital camera, an e-mail address, and a computer. Take a picture with this camera and e-mail it to this address." You aim at a huge swath of the desktop and *BLAM* you find its top 10 usability problems.
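And here's that toy sketch of the design-vs-testing tradeoff. Every number and curve in it is invented purely for illustration: assume design hours pay off roughly linearly up to some cap, while each additional hour of testing surfaces less serious problems than the last (the diminishing returns mentioned above).

    # A back-of-the-envelope model, not a real methodology. The payoff curves,
    # the cap, and the decay rate are all made-up assumptions.
    def design_value(hours):
        # Assume up-front design pays off linearly, up to 40 useful hours.
        return min(hours, 40) * 3.0

    def testing_value(hours):
        # Assume each successive hour of testing finds less serious problems.
        total, find_rate = 0.0, 5.0
        for _ in range(int(hours)):
            total += find_rate
            find_rate *= 0.85
        return total

    budget = 40  # total person-hours to split between design and testing
    best_testing = max(range(budget + 1),
                       key=lambda t: design_value(budget - t) + testing_value(t))
    print(f"Toy model: spend {best_testing} hours testing, "
          f"{budget - best_testing} hours designing.")
    # Toy model: spend 4 hours testing, 36 hours designing.

Under those (entirely made-up) assumptions the split comes out heavily in favour of design, with just enough testing to skim off the worst problems... which is exactly the shotgun approach.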
Anyway, like practically everything I write, this is already too long, but I have a million more things to say. Oh well ;-)
January 22, 1984: the Apple Macintosh is unleashed on the world. The world blinks and keeps on turning.
The release of the Macintosh wasn't the revolution; it was a symbol of the revolution. It wasn't merely the introduction of an "insanely great" product line but the debutante ball of the process that birthed it. And at the heart of that process (human-centered design) was a paradigm shift. The question was no longer "What will this computer's specs be?" but "What will people do with this product?". That question is as relevant (and almost as frequently overlooked) today as it was twenty years ago. The importance of the revolution was less in Windows, Icons, Menus, and Pointer and more in approaching product development from the right direction. Until widespread development and design in the computer industry are focused on a question like that, the Macintosh revolution is far from over.
There is widespread disagreement as to when and where this revolution began, but it is not contentious that the ideas took root in the feracious ground of Xerox PARC in the 70s. The end result was the Xerox 8010 (aka Star) desktop, released in 1981. To a large extent the Star interface is extant in modern desktops, but this belies the importance of the Star: it was the result of human-centered design. Engineers and researchers at Xerox tried to create a computer that could be used to "do people things" rather than just crunch numbers. Focus was not on specs and technology but on what Star could accomplish.
It is interesting to compare the Star interface with the interface of the Executive program from the equally famous Xerox Alto (from the mid-70s). The Alto was a technical marvel, with a bitmapped display, windows, a mouse, and ethernet. But while the Star really adds nothing to this impressive list of technology, the difference between the two, in terms of user experience, is like night and day. Technological invention can enable real improvement, but it's not enough (usually it's not even necessary). Anyway, enough historical meandering. The story of the Macintosh, Star, and Alto is very interesting, and there are a lot of period documents dealing with that subject... maybe I'll post a list of links another day. But back to my agenda: :-)
At best I think most people ask "What could people do with this computer?". That's a very different question from "What will people do with this computer?"... there are so many nifty features that people could use if they pushed themselves, but that have a high enough barrier to entry that people don't bother.
Example: I have a nice thermostat in my apartment. It's fairly well designed and has quick push buttons for "Daytime", "Night", and "Vacation". It was even straightforward to set the first two to my preferred temperatures for "in the apartment, awake" and "out of the apartment or asleep"; I haven't bothered with the vacation button. Now, I have noticed that I don't like to get out of bed in the morning because it is sort of cold. In fact, sometimes I'll lie in bed for 30+ minutes because it's cold, which is a big waste of time (I'm not very rational when I'm waking up). I have noticed that my thermostat supports scheduling changes between day and night temperatures. I even looked at the instructions beneath the faceplate, and it looks like it'd be fairly easy to program. But I haven't done it. The device is usable in the sense that if I wanted to, I could program it, and probably get it right on the first or second try. It's not hard to use. But it's a little too inconvenient: I'd have to special-case my weekend schedule, and I'd have to set several different times using the fairly slow "up", "down", "next item" interface for setting time (the kind found on most alarm clocks, etc.). The point is, it's not hard to figure out, but it's still too much hassle. So while I could program the thermostat, I won't. There's always something that seems better to do with my time, and I can't be bothered (even though rationally I know it'd be better overall if I just programmed the silly thing).
The Macintosh revolution, at least as I see it, was about conceiving your (computer-related) product in terms of what people will do with it. Sometimes we need to "get back to the basics"...
Read an interesting article about NASA that basically argues the agency was an order of magnitude more productive during its Apollo phase than during the current low-earth-orbit Shuttle phase. The author, who admittedly has an axe to grind, compares NASA's output from 1963-1971 with its output from 1990 to now. The Shuttle missions, he claims, are basically done with the argument that "we could use them for something, anything, in the future". The conclusion? When NASA was actually working toward concrete (lofty) goals, a lot more got done. I think it's a very general phenomenon in engineering (and maybe other areas too?), on both a micro scale (a single person across a day or two) and a macro scale (hundreds of people across months or years)... to achieve great things you need to have great goals that you really buy into.
The article struck a chord with me because it reminded me of GNOME. GNOME exists in a state of perpetual shuttle-missions. It has no goal, no vision. I don't wake up in the morning dying to hack on GNOME (sometimes when one of my projects is doing something especially cool and is almost working I feel that way, but never really about GNOME itself).
On a "micro scale": Ever noticed how a few (or one) motivated people can do really amazing "feats of programming"? I think a lot of the time this gets chalked up to either the individual's brilliance or the communication advantages of small groups (think Mythical Man-Month material here)... Another important factor, maybe even more important, is that these people are driven by a capital-V Vision. They know what they want to create, they are not intimidated by the steps they need to take to get there, they dive in and just DO IT. As a developer, how much more productive are you when things are just clicking compared to an average day? For me, the difference is a solid order of magnitude, and it happens when I've got a vision I'm working toward and I've overcome my fear of the hurdles along the way.
On a "macro scale": Were it not for the presence of commercial software to copy, I think GNOME (and the free software desktop in general) would be totally stagnant. The acceptance of copying commercial software provides a design goal that people generally agree upon. I (occasionally ;-) hear GNOME people putting forward really solid ideas. They receive no backing until, hey, what do you know, a year later Apple or Microsoft comes out with them. Suddenly everybody is in favour of incorporating the idea. In that sense, commercial software provides the closest thing to a vision that we have (of course, this guarantees that we will be perpetually at least a release behind, and because we haven't developed expertise in really understanding the issue we will probably do a poor clone).
I don't know exactly why this happens. For one thing, bureaucracy and the like plague any collection of more than a dozen people (sometimes even fewer)... but that can be overcome. I think it's deeper than that. I think many free software developers might feel inadequate: they may not believe they can produce things, or think of things, that are as good as what the "big boys" do.
Maybe we are just really risk averse: unwilling to try anything unless we are sure it's a safe bet. But we have more room to bet than companies do! If anything we should be less risk averse than they are: our users are probably more tolerant of interesting ideas that turn out to be failures (they can work around them or turn them off), we have a smaller user base, we aren't answerable to third-quarter financials, and GNOME 1.x proved we can suck a lot compared to the "competition" and it'll take a long time before that starts showing in "the numbers".
Maybe we were burned by the long development cycle between 1.4 and 2.0 and aren't willing to try anything "big" again. The problem with 1.4 to 2.0 wasn't the time it took; it was that we didn't have anything to show for that period. That was a perfect example of the shuttle-mission mentality. I mean, things were improved, sure (just as space tech kept creeping forward for NASA from 1990 to 2003), but not commensurately with the work and time that went into them. If we had produced something markedly better, really cool, etc., people would have quickly forgiven the time it took between releases. If we had focused on an idea, we could have done it.
I mean, I'm not saying we should ditch quick releases; I think they're a good thing. But we MUST be planning much farther out than the next release if our releases are going to be six months apart. Right now the release team has become the de facto "forward thinking" power in GNOME, by virtue of the board being explicitly not involved with technical issues and the release team being the only other organized body in GNOME. The release team is doing a good job planning six-month releases. But we need more than that, and I don't think the release team would do a good job at it. They are focused (by mandate and by personality) on more immediate and pressing issues, and I think people in such a position naturally sacrifice "non-urgent" issues (that are exceedingly important, but sit in the nebulous and distant future) for "urgent" issues (where you can see the consequence of not doing something TODAY). It's good to have a release team that's dead set on what needs to be in the release several months from now.
How many "man years" have gone into GNOME? How much better could GNOME be today if all that work had gone toward a cool, coherent vision?
We also need a vision for GNOME that's farther out. And it can't just be something that some powerful cabal writes down on paper; it has to be the lifeblood of day-to-day GNOME programming. It has to motivate our technical decisions, clarify our priorities and goals, focus our interface improvements, help us decide what "applications" and technologies to develop next, etc. It has to seep into every pore of the GNOME body and take over the organism. On the other hand, it can't be designed by committee or consensus, because then it would inevitably suck. What we (aka I) want is a vision infused with creative ideas from everyone, bounded (but not too bounded) by the dire warnings of failure pronounced by experts on certain ideas, and drafted by a small cabal of fascists... a vision that is so compelling that we all (or next to all) buy into it. And that's the tricky part. A vision that will make people drop their disputes and differing goals (I do not think diversity of goals makes GNOME stronger; diversity of talents toward the same goal is what works). Does such a thing exist? I think so, I wish so, I hope so. Can we find it, will we even look for it? Probably not.
Oh well.