
Beyond the Fat Wire

Friday, September 30, 2005

Don't Make Me Think: The Workshop

Just got back from the Chicago edition of Steve Krug's Don't Make Me Think workshop (by the way, it's pronounced "kroog," as in Google, not "krug," as in rug).

It was good, I'm glad I went. The main thing from it, for me, is kind of a good news/bad news thing.

One feature of the workshop is that attendees send in a problem URL and Steve does a 12-minute mini-expert review. I sent in our Research Resources page and he additionally included our Find Articles page.

The good news is, we aren't doing much worse than other library sites he knows. The bad news is, even though everybody agrees it sucks, he has no good solution. He thinks it might be, at heart, an Information Architecture problem (that is, he suggested that he and we consult with Lou Rosenfeld). But, while there may be some IA things to do, essentially the problem is that we have content tied up in 250+ silos that can barely talk to each other. And, OneSearch helps only a little. It lets us peek into some of the silos, but the view into those silos isn't that great, not all silos are viewable, and, overall, it takes such effort to get that peek that it's hard to know which is better -- to have the peek or to stick with choosing one database at a time.

He did like our plan to turn the "Find" pages into some sort of research wizard, where we lead people along a decision path that results in suggestions of resources.
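The wizard idea is basically a small decision tree: ask a question, branch on the answer, repeat until you land on a list of suggested resources. A minimal sketch of that structure, with invented questions and resource names purely for illustration (nothing here reflects our actual subject taxonomy):

```python
# Hypothetical "research wizard" as a decision tree. Each interior node
# asks a question and maps answers to the next node; each leaf suggests
# resources. Node names, questions, and resources are all made up.

TREE = {
    "start": {
        "question": "What are you looking for?",
        "choices": {"articles": "articles_level", "books": "leaf_catalog"},
    },
    "articles_level": {
        "question": "Scholarly or general-interest?",
        "choices": {"scholarly": "leaf_scholarly", "general": "leaf_general"},
    },
    "leaf_catalog": {"suggest": ["Library Catalog"]},
    "leaf_scholarly": {"suggest": ["Academic Search Premier", "JSTOR"]},
    "leaf_general": {"suggest": ["LexisNexis Academic"]},
}

def walk(answers):
    """Follow a sequence of answers down the tree; return the suggestions."""
    node = TREE["start"]
    for answer in answers:
        node = TREE[node["choices"][answer]]
        if "suggest" in node:
            return node["suggest"]
    raise ValueError("answer path didn't reach a suggestion")
```

The real app would obviously need far more nodes and a web front end, but the hard part won't be the data structure; it will be designing a decision path that matches how people actually think about their research.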

But, people outside libraries don't really get the complexity of what we're doing. One guy wondered why we don't just list the 5 or 6 databases, and was dumbstruck at the concept of 250 third-party licensed databases and 30,000 ejournals.

Another guy wondered why we can't use cookies or something to allow for re-running a search in another database. When I suggested that would be like doing a Google search and then popping over to Yahoo and expecting to re-run the same search, he got a deer-in-headlights look.

I just hope we can build that research wizard app. I've got a sense that task will be daunting.

Otherwise, the workshop was good. He talked some about the major points in his book.

The best parts were the mini-reviews and mock user tests of attendees sites. Along with this, he gave tips and advice on conducting tests and reporting on them. He's a big advocate of simple testing, simple tools, and avoiding the "big honking report". Rather than a report, he prefers debriefing the design team and/or stakeholders. And even more than debriefing stakeholders, he prefers to get them to observe actual tests as a way to get them to "get it".

And, of course, he's a big advocate of testing early and often, and repeating the early and often throughout the life of the site.

While a fancy lab is not necessary, we really do need some sort of facility: a laptop, Camtasia or Morae, somewhere private where the tester can feel comfortable, a schedule with gaps between appointments so the facilitator can take notes and decompress, and a way for people who aren't present to observe the test (an audio or audio-video feed, a one-way mirror, or some other means of remote observation).

And scripts. We need scripts.

Oh, regarding that person who hated the fact that we provided a custom "Page not Found" page with explanation and site index for the old web pages we can't redirect individually -- he suggested the book Defensive Design for the Web.

He'll have his slides on his site after the 3rd edition of the workshop (he prefers that attendees not read ahead). He does have other downloads, though: a sample test script and a video consent form (both are MS Word documents, so right-click and save as).

Additionally, about the mini-reviews: attendees whose sites didn't get a review during the workshop are entitled to a phone consultation with him. In our case, even though we got a review during the workshop, since he didn't have any really good advice, we get to have a phone consultation too. We can do this as a conference call, so we'll see what kind of scheduling we can do for that.

Regarding usability testing tips:
  • Start doing it before you think you can. This is the "sketch on a napkin" stage.
  • Continue through the "cubicle" stage (showing ideas to people in your organization) and the "neighbor" stage (showing the ideas to someone totally unrelated to your project).
  • Test limitedly and often. This means, bring in 3-4 people for about 45 minutes each to do a task. Then fix the problems found by those people. Then bring in 3-4 more people to test again.
  • Plan on testing on a regular basis, for example, set aside a morning a month (in our case, this would be per project if we are testing/developing multiple projects) to test whatever you have going on.
  • Don't stress about demographics. All people really need is to have used a web browser before. It's best if they don't have a lot of experience with your site.
  • Be careful with sequencing tasks during a test. Once a person has completed a task, they have learned about your site, and that can mean that they miss problems with the subsequent tasks you ask them to do.
  • At the beginning of a testing session, get the person to talk: use this time to ask who they are, what they do, how much time they spend online, what they do online, what kind of web sites they like. Part of this is to give you context for when they are performing your tasks, but it also is a "warm up" period. It gets them accustomed to talking to you, it shows them that you are interested and listening to them, it helps them trust you.
  • Record the sessions with Camtasia or Morae. Have the person sign a permission form for the recording, even if we don't go through IRB exemption.
  • Eye tracking software is cool, BUT ... it's really expensive and provides more data than most people know what to do with. At this point, if you need eye tracking data, it's probably cheaper in the long run to outsource the eye tracking tests.
  • Ignore the people who say "I don't like color [X]" or other design opinions based on personal preference. On the other hand, if you get several people who all express the same strong opinion on a design element, then you should pay attention.
  • Each test will be task oriented, but you will get non-task findings (such as points of confusion, points where the site loses the trust of the user, etc.) from listening to the person talk during the test, observing points of hesitation, and so on.
  • As a test facilitator, you'll be in "therapist" mode: getting people to talk about what they're seeing and thinking, asking what they expect will happen if they do [X], prompting them to say what they're thinking/reading if they have been silent for 15 seconds or so, and, most importantly, not leading the person to an action or statement.
  • If the real live task involves a completion point (for example, for an online store, buying something ... in our case, I suppose it could be saving a citation, printing an article, etc.) take the person through to the end of that completion point, even if you set up a dummy situation.
  • Give the person some choice in the task they'll do. For example, if the task is "find 5 articles for a paper", let the person choose the topic of the paper. The idea is to give the person some personal involvement in the outcome of the task. This also lets them do the task within a conceptual space they are comfortable with.
  • Always pay the people in some way. In our case, we should always give some kind of gift certificate, maybe a $10 JavaCity certificate, a pair of movie passes, etc. Note that in a typical, non-academic environment (i.e. in the commercial or non-profit worlds), the standard rate for paying people to be testers is $50 a session, unless they are a select group, for which you'd have to pay more. We don't have to pay that much, but we have to give people something for their time.
And, as with all things related to web design and development, in the end "it all depends". Test informally or formally, test random people or user segments, depending on what you're doing, what information you're after, etc.

