Seth Nickell, interaction designer guy

The old blog was stale and strange, but if you're looking for it, for history's sake you can find the list of entries here. Deep links still work fine, but I don't really want the main page heavily spidered, so please don't link to the index.

Current Meanderings (May 18):

Current Transient Interests (May 18):

Long Term Interests:
error, I do not experience these


 
And now, ladies and gentlemen, I present, for your reading pleasure, a few of the entries from the old blog that have stood my test of time...

All work and no play makes Jack a dull boy.

When you interpret usability solely as restraint and polishing, it can really dampen project enthusiasm over time. All work and no play makes Jack a dull boy.

Design not Usability

The partial solution I would proffer is to focus on design instead of usability. There's a big difference. I'm sure there will be a big hoopla over Apple today owing to the expo, and they deserve it. I think it would be very hard to argue that the things Apple does are not interesting. Part of the reason Apple is interesting is that they encourage designs that change market norms. Good design is challenging. I mean that in two ways: both that it is hard to do, and that it tends to shake things up.

Extreme shaftation is an oft-used and effective approach to producing really good designs. That's part of the reason it's far harder to do a good design in a non-1.0 product. In a 1.0 product you don't have existing users; there's nobody to shaft. You can choose who you want to target, and do it well (unless you position yourself, say, as a Microsoft Word replacement, in which case you inherit the set of expectations!). As soon as you have users, it's very, very hard to drop things from the requirements list. The point of the shafting isn't to remove individual features, or (necessarily) to increase simplicity. Simplicity sucks if it doesn't do anything. The point is to expand the scope of possible designs; it's to let you do new and more interesting things.

Focusing on usability devolves into a sort of bean counting. You divide up the "requirements list", figure out how to cram all of it in, and then try to organize the minutiae (button labels, menu organization, etc.) so it somehow still all makes sense. The result isn't very sexy, and is aggressively mediocre. Every point on the requirements list pins you down. In the end the requirements list does the design instead of you. When everybody else is producing nutso apps with a billion buttons and no sort of consistency (cf. GNOME 1.x), the result of usability looks pretty good. But by shedding some constraints, losing most of the requirements, and focusing carefully you can usually make something much better.

Shedding the Requirements List by Zeroing User Expectations (MS Office)

Microsoft Office exemplifies usability in action. They have a huge list of features that Office must have or users will be angry. They have done a good job of taking that massive list and producing something sane. I am sure that every dialogue and menu in MS Office is pored over with excruciating care: "Will that wording confuse people?", "What are people most likely to be looking for in this menu?", etc. It shows. Office is very polished. It's also a very poor design.

If I were commissioned by Microsoft to dramatically improve Office, my first step would be to position the project not as a next-generation Microsoft Office, but as a new product. I might even start with the Office codebase, but I sure as hell couldn't work with the smothering mantle of user expectations that looms over Office. Done well, I think you'd largely displace Office in the market (assuming this were a Microsoft product; I don't mean to imply that anybody could just make a better product and trounce Office in the market). So you are meeting the goals people have in using Office. What you're not doing is slogging through trying to meet the specific needs people have of the existing software. If you do that, you'll just end up writing Office again.

New Software Resets the Requirements List Anyway (E-mail)

It's important to understand that most 'feature' or 'requirements' lists are a reflection of users' needs and desires relative to existing implementations. If you improve the model enough, most of this is renegotiable.

E-mail is a great example of this. Let's say the internet hadn't appeared until 2004. You are right now in the process of designing the first E-mail app. Clearly users need the ability to make tables, right? I mean, that's "word processing 101". And to format them precisely, oh, and insert drawings. And equations. And to edit graphs inline, and to set the margins and page settings. Etc., etc.

You could easily end up with the requirements list for Microsoft Word: a design for creating multi-page, labour-intensive, laid-out documents. These are the requirements you'd extract from the "word processor + postal mail" model. But E-mail totally renegotiated this. Short little messages are the norm, not multi-page documents. You receive many dozens of mails a day, not several. There's no question that being able to insert a table here and there would be nice, but it's by no means a requirement. E-mail's one compelling feature, instant and effortless transmission of text, renders the old model's list of "must have" requirements a moot point.

Waking Up is Hard to Do

Conclusion: I have a cruel alter ego who wakes up when the alarm goes off, disables it for who knows what reason, laughs mischievously, and then goes back to bed.

Solution: Tie myself up before going to bed.

Problem: How do I get out of bed when I'm back to my calm mild mannered normal self?

On Usability Testing

Usability testing (perhaps more aptly called "learnability testing") is all the rage. If you look on the web for information on usability you'll be bombarded with pages from everybody from Jakob Nielsen to Microsoft performing, advocating, and advising on usability testing. It's unsurprising that people who have been educated about HCI primarily by reading web sites (and books recommended on those web sites) equate usability with usability testing. Don't get me wrong, usability testing is a useful tool. But in the context of software design it's only one of many techniques, isn't applicable in many situations, and even when it is applicable is often not the most effective.

So am I saying that usability testing is bad or doesn't improve software? No! If you take a good design, usability test it to learn about major problems, and use that data and experience to improve your design (remembering that design is still about compromise, and sometimes you compromise learnability for other things)... you will end up with a better design. Every design, even a very good one, that a designer pulls out of their head has some mistakes and problems. Usability testing will find many of them.

So why don't I advocate usability testing everything? If you don't have oodles of usability people, up-front design by a good designer provides a lot more bang for the buck than using that same designer to do usability tests. You get diminishing returns (in terms of the average seriousness of problems discovered) as you do more and more fine-grained tests. It's all about tradeoffs: given n person-hours across q interface elements (assuming all the people involved are equally skilled at testing and design, which is obviously untrue), what is the optimum ratio of hours spent on design vs. hours spent on testing? For small numbers of person-hours across large numbers of interface elements, I believe in shotgun testing, and spending the rest of the time on design. Shotgun testing is testing the interface in huge chunks, typically by taking several large high-level tasks that span many interface aspects and observing people trying to perform them.

An example high-level task might be to give somebody a fresh desktop and say: "Here's a digital camera, an e-mail address, and a computer. Take a picture with this camera and e-mail it to this address". You aim at a huge swath of the desktop and *BLAM* you find its top 10 usability problems.
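Just to make the tradeoff concrete, here's a toy model (and I stress toy: the curve shapes and numbers are assumptions pulled out of the air for illustration, not measurements from any real project). Model testing's payoff as a diminishing-returns curve and design's payoff as roughly linear, then ask how a fixed budget of person-hours should be split:

# Toy model of the design-vs-testing tradeoff. All of the numbers and curve
# shapes below are made-up assumptions for illustration, not real data.

def testing_value(hours):
    # Diminishing returns: each extra hour of testing uncovers less-serious
    # problems than the hour before it (modeled here as a square-root curve).
    return 10 * (hours ** 0.5)

def design_value(hours):
    # Up-front design is assumed to pay off roughly linearly over a small budget.
    return 4 * hours

def total_value(budget, testing_hours):
    # Whatever isn't spent on testing goes to design.
    return design_value(budget - testing_hours) + testing_value(testing_hours)

if __name__ == "__main__":
    budget = 40  # person-hours to split between design and testing
    best = max(range(budget + 1), key=lambda t: total_value(budget, t))
    print(f"Of {budget} hours, spend ~{best} on testing and {budget - best} on design")

With those (made-up) curves the split lands almost entirely on design, with just a couple of hours of broad, shotgun-style testing, which is roughly the allocation I'm arguing for; change the assumptions and the split moves.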

Anyway, like practically everything I write, this is already too long, but I have a million more things to say. Oh well ;-)

Two Decades of the Macintosh revolution, a reflection

January 22, 1984: the Apple Macintosh is unleashed on the world. The world blinks and keeps on turning.

The release of the Macintosh wasn't the revolution, it was a symbol of the revolution. It wasn't merely the introduction of an "insanely great" product line; it was the debutante ball of the process that birthed it. And at the heart of that process (human-centered design) was a paradigm shift. The question was no longer "What will this computer's specs be?" but "What will people do with this product?". That question is as relevant (and almost as frequently overlooked) today as it was twenty years ago. The importance of the revolution was less in Windows, Icons, Menus, and Pointer and more in approaching product development from the right direction. Until widespread development and design in the computer industry is focused on a question like that, the Macintosh revolution is far from over.


The Star desktop, circa 1981

There is widespread disagreement as to when and where this revolution began, but it is not contentious that the ideas took root in the feracious ground of Xerox PARC in the 70s. The end result was the Xerox 8010 (aka Star) desktop, released in 1981. To a large extent the Star interface is still extant in modern desktops, but that understates the importance of the Star: it was the result of human-centered design. Engineers and researchers at Xerox tried to create a computer that could be used to "do people things" rather than just crunch numbers. The focus was not on specs and technology but on what the Star could accomplish.


The Alto's "Executive", circa mid 1970s

It is interesting to compare the Star interface with the interface of the Executive program from the equally famous Xerox Alto (from the mid 70s). The Alto was a technical marvel, with a bitmapped display, windows, a mouse, and Ethernet. But while the Star really adds nothing to this impressive list of technology, the difference between the two, in terms of user experience, is like night and day. Technological invention can enable real improvement, but it's not enough (usually it's not even necessary). Anyway, enough historical meandering. The story of the Macintosh, Star, and Alto is very interesting, and there are a lot of period documents dealing with that subject... maybe I'll post a list of links another day. But back to my agenda. :-)

At best, I think most people ask "What could people do with this computer?". That's a very different question from "What will people do with this computer?"... there are so many nifty features that people could use if they pushed themselves, but that have a high enough barrier to entry that people don't bother.

Example: I have a nice thermostat in my apartment. It's fairly well designed and has quick push buttons for "Daytime", "Night" and "Vacation". It was even straightforward to set the first two to my preferred temperatures for "in the apartment, awake" and "out of the apartment or asleep"; I haven't bothered with the vacation button. Now, I have noticed that I don't like to get out of bed in the morning because it is sort of cold. In fact, sometimes I'll lie in bed for 30+ minutes because it's cold, which is a big waste of time (I'm not very rational when I'm waking up). I have noticed that my thermostat supports scheduling changes between the day and night temperatures. I even looked at the instructions beneath the faceplate, and it looks like it'd be fairly easy to program. But I haven't done it. The device is usable in the sense that if I wanted to, I could program it, and probably get it right on the first or second try. It's not hard to use. But it's a little too inconvenient, because I'd have to special-case my weekend schedule, and I'd have to set several different times using the fairly slow "up", "down", "next item" interface for setting times (the one on most alarm clocks, etc.). The point is, it's not hard to figure out, but it's still too much hassle. So while I could program the thermostat, I won't. There's always something that seems better to do with my time, and I can't be bothered (even though rationally I know it'd be better overall if I just programmed the silly thing).

The Macintosh revolution, at least as I see it, was about conceiving your (computer-related) product in terms of what people will do with it. Sometimes we need to "get back to the basics"...

Vision, goals and getting cool things done

Read an interesting article about NASA that basically argues the agency was an order of magnitude more productive during its Apollo phase than during the current low-earth-orbit Shuttle phase. The author, who admittedly has an axe to grind, compares NASA's output from 1963-1971 with its output from 1990 to now. The Shuttle missions, he claims, are basically done with the argument that "we could use them for something, anything, in the future". The conclusion? When NASA was actually working toward concrete (lofty) goals, a lot more got done. I think it's a very general phenomenon in engineering (and maybe other areas too?), both on a micro (single person across a day or two) and a macro (hundreds of people across months or years) scale... to achieve great things you need to have great goals that you really buy into.

The article struck a chord with me because it reminded me of GNOME. GNOME exists in a state of perpetual shuttle-missions. It has no goal, no vision. I don't wake up in the morning dying to hack on GNOME (sometimes when one of my projects is doing something especially cool and is almost working I feel that way, but never really about GNOME itself).

On a "micro scale": Ever noticed how a few (or one) motivated people can do really amazing "feats of programming"? I think a lot of times this gets chalked up to either the individual's brilliance or the advantages of small groups of people in terms of communication (think mythical man month material here)... Another important factor, maybe even more important, is that these people are driven by a capital-v Vision. They know what they want to create, they are not intimidated by the steps they need to get there, they dive in and just DO IT. As a developer, how much more productive are you when things are just clicking compared to an average day? For me, the difference is a solid order of magnitude, and it happens when I've got a vision I'm working toward and I've overcome my fear of the hurdles to get there.

On a "macro scale": Were it not for the presence of commercial software to copy I think GNOME (and the free software desktop in general) would be totally stagnant. The acceptance of copying commercial software provides a design goal that people generally agree upon. I (occasionally ;-) hear GNOME people putting forward really solid ideas. They receive no backing until, hey what do you know, a year later Apple or Microsoft comes out with them. Suddenly everybody is in favour of incorporating the idea. In that sense, commercial software provides the closest thing to a vision that we have (of course, this guarantees that we will be perpetually at least a release behind, and because we haven't developed expertise in really understanding the issue we will probably do a poor clone).

I don't know why this happens exactly. For one thing, bureaucracy and the like plague any collection of more than a dozen people (sometimes even fewer)... but that can be overcome. I think it's deeper than that. I think many free software developers might feel inadequate: they may not think they can produce things, or think of things, that are as good as what the "big boys" do.

Maybe we are just really risk-averse: unwilling to try anything unless we are sure it's a safe bet. We have more room to place bets than companies do! If anything we should be less risk-averse than they are: our users are probably more tolerant of interesting ideas that turn out to be failures (they can work around them and turn them off), we have a smaller user base, we aren't beholden to third-quarter financials, and GNOME 1.x proved we can suck a lot compared to the "competition" and it'll take a long time before that starts showing in "the numbers".

Maybe we were burned by the long development cycle between 1.4 and 2.0 and aren't willing to try anything "big" again. The problem with 1.4 to 2.0 wasn't the time it took; it was that we didn't have anything to show for that period. That was a perfect example of shuttle-mission mentality. I mean, things were improved, sure (just like space tech kept creeping forward for NASA from 1990 to 2003), but not commensurate with the work and time that went into them. If we had produced something markedly better, really cool, etc., people would have quickly forgiven the time it took between releases. If we had focused on an idea, we could have done it.

I mean, I'm not saying we should ditch quick releases, I think they're a good thing, but we MUST be planning much farther out than the next release if our releases are going to be six months apart. Right now the release team has become the de facto "forward thinking" power in GNOME, by virtue of the board being explicitly not involved with technical issues and the release team being the only other organized body in GNOME. The release team is doing a good job planning six-month releases. But we need more than that, and I don't think the release team would do a good job at it. They are focused (by mandate and by personality) on worrying about more immediate and pressing issues, and I think people in such a position naturally sacrifice "non-urgent" issues (that are exceedingly important, but in the nebulous and distant future) for "urgent" issues (where you can see the consequence of not doing something TODAY). It's good to have an RT that's dead-set focused on what needs to be in the release several months from now.

How many "man years" have gone into GNOME? How much better could GNOME be today if all that work had been toward a cool, coherent vision?

We also need a vision for GNOME that's farther out. And it can't just be something that some powerful cabal writes down on paper; it has to be the lifeblood of day-to-day GNOME programming. It has to motivate our technical decisions, clarify our priorities and goals, focus our interface improvements, help us decide what "applications" and technologies to develop next, etc. It has to seep into every pore of the GNOME body and take over the organism. On the other hand, it can't be designed by committee or consensus, because then it would inevitably suck. What we (aka I) want is a vision infused with creative ideas from everyone, bounded (but not too bounded) by the dire warnings of failure pronounced by experts on certain ideas, and drafted by a small cabal of fascists... a vision that is so compelling that we all (or next to all) buy into it. And that's the tricky part. A vision that will make people drop their disputes and differing goals (I do not think diversity of goals makes GNOME stronger; diversity of talents toward the same goal is what works). Does such a thing exist? I think so, I wish so, I hope so. Can we find it, will we even look for it? Probably not.

Oh well.