
Beyond the Fat Wire

Thursday, October 27, 2005

Thursday's workshop

Toolbox of Specialized Usability Techniques
Chauncey E. Wilson
WilDesign Consulting

Thursday, October 27, 2005
User Experience 2005
Boston



He'll set up a blog and/or wiki on this topic. Will send us details by email later.
12 people in this workshop. (Monday's had about 25-30)
Interesting that many people are involved with establishing formal usability programs and usability labs

UTest usability mailing list

email him for: UTest subscription info; tips for setting up a usability lab

at least 3 Canadians in this room

apparently, Jakob asked him to do this workshop

This guy (Chauncey Wilson) is a psych/HCI guy, but more on the Steve Krug end of the spectrum regarding how to view usability. The other person sitting at my table was also in the same Monday workshop as me (Advanced User Testing) and she doesn't have a high opinion of that workshop either.

There is going to be a "World Usability Day" on November 3rd. See the UPA web site for details. They've reserved (? I'll check later when I have an internet connection) the Boston Museum of Science.

I've been curious about Verizon's broadband wireless service ... basically, you get a voice plan and then for like $60/month you get unlimited broadband wifi anywhere within Verizon's broadband service areas, which are pretty much the major metropolitan areas in the US. For example, if I had Verizon broadband, I'd be online right now, even though the hotel doesn't have wireless in the meeting rooms. Assuming, of course, that the Verizon broadband is good service.




Fishbone Diagrams


  • to review factors that might have an effect on or contribute to a problem, process, or goal
  • the diagram has a main line (spine) that is the effect you want to examine
  • "main bones" are cause categories that act on the effect
  • each main bone is a major potential cause
  • there is also a root cause that would explain a problem, symptom or effect
  • Major cause categories
    • The 4 Ms: methods, machines, materials, manpower
    • The 4 Ps: place, procedure, people, policies
    • The 4 Ss: surroundings, suppliers, systems, skills
  • Common categories for usability
    • readability (effect)
      • font size (cause)
      • contrast (cause)
      • language/internationalization (cause)
      • line length (cause)
    • navigation
    • performance
    • accessibility
    • organization
    • perception/credibility/trust
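The readability example above can be captured in a few lines of code; here's a minimal sketch (the data structure and rendering are my own illustration, not from the workshop):

```python
# Minimal fishbone structure: one effect (the spine), main-bone cause
# categories, and sub-causes under each. Hypothetical example data.
fishbone = {
    "effect": "poor readability",
    "causes": {
        "typography": ["font size", "contrast", "line length"],
        "content": ["language/internationalization"],
    },
}

def print_fishbone(fb):
    """Render the diagram as an indented outline (spine -> bones -> sub-causes)."""
    print(f"Effect: {fb['effect']}")
    for bone, subs in fb["causes"].items():
        print(f"  {bone}")
        for sub in subs:
            print(f"    - {sub}")

print_fishbone(fishbone)
```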

Question from the audience about how to sort out valid cause and effect from apparent-but-not-real cause and effect, and cause and effect with intervening variables

Even if you have effects where you have no control over the cause(s), it's useful to understand the effect and its causes

Affinity diagram vs. card sort -- an affinity diagram is a group, social activity, where the group comes to consensus about the grouping/categorization/affinity of a set of concepts (e.g., by moving sticky notes around on a wall). A card sort does the same thing, but it is an individual activity. A group card sort is not recommended -- if you want a group activity, organize it as an affinity diagram activity instead

5-Why Analysis for getting from the proximate cause to the root cause (aka "the deep nagging approach")

  • technique for moving from symptoms to root causes
  • move from major categories on a fishbone diagram to root causes

Rapid Analyses for Fishbone Diagrams

  • vote on the most likely cause
  • rank main causes on importance, fixability, etc
  • rank the sub-causes within each main cause
  • do before and after fishbone diagrams

Tools

  • SmartDraw
  • RFFlow http://www.rff.com/
  • RCA-XPress http://www.rcaxpress.com/

Q-Sorting/Repeated Sorts

(see tutorial booklet)

Tool: WebQ

Cardsorting - Tom Tullis at Fidelity says 25 people. Others say around 40. This speaker says 20-100

Forcing choices (that is, through a Q-Sort, or by saying "Spend $1000 in this store") gets more differentiation than having people rate on a 1-5 scale such as in a survey.

See WebSort for online versions of cardsorting/Qsorting




P Sorting

Closed vs. Open card sorting

Q sorting (along a dimension)

Repeated sorting

Q-Sort, Repeated Sort:

  • individual card sort
  • the person sorts based on self-chosen criteria
  • after, the facilitator asks what those criteria were, records responses, etc
  • facilitator shuffles the cards/papers
  • repeat

Another tool: EZSort and EZCalc (from IBM)

you want to see what cards appear with which other cards ... that is where you will find your correlations
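That co-occurrence idea is easy to compute by hand from repeated sorts; a sketch (the card names and sort data are invented for illustration):

```python
from itertools import combinations
from collections import Counter

# Each repeated sort is a list of groups; each group is a set of card names.
# Hypothetical data: three sorts of the same four cards by one participant.
sorts = [
    [{"login", "logout"}, {"search", "browse"}],
    [{"login", "search"}, {"logout", "browse"}],
    [{"login", "logout", "search"}, {"browse"}],
]

def cooccurrence(sorts):
    """Count how often each pair of cards lands in the same group."""
    counts = Counter()
    for groups in sorts:
        for group in groups:
            for pair in combinations(sorted(group), 2):
                counts[pair] += 1
    return counts

counts = cooccurrence(sorts)
# "login" and "logout" ended up together in 2 of the 3 sorts:
print(counts[("login", "logout")])
```

Pairs with high counts relative to the number of sorts are your candidate correlations.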




Freelisting (a variation on brainwriting, which is a variation on brainstorming) -- to see what comes to people's minds first, for a given topic. this is an individual activity

Brainwriting -- people write thoughts on sticky notes, or they write on paper and you collect the papers.

Brainstorming is a social event, and all ideas are supposed to be criticism free. with brainwriting, it's similar, but you write instead of speak out loud. People can write individually, and then the paper gets passed along and others add to it.

"Brainstorming is fraught with peril"

Brainwriting generates a larger quantity of ideas (and, the measurement of brainstorming success is quantity)

Braindrawing is similar, but with drawing instead of writing words

"The Icon Book" - about 10 years old, but has good ideas about iconic images

2-5 minute breaks every 15 minutes make brainstorming more productive

if you give the brainstorming group a goal of X number of ideas (which will be 10-20% more ideas than you think they can generate) the brainstorming session will be more productive




KLM (Keystroke Level Model)

Like, having read the book by Card, Moran and Newell (1980) is the sign that you're a serious HCI person

GOMS - goals, operators, methods (combinations of operators), selection rules

KLM allows quick estimates of task time with a minimum of theoretical or conceptual background
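In practice a KLM estimate is just a sum of standard operator times; here's a minimal sketch using the commonly cited per-operator averages (the exact constants and the sample task are my assumptions, not from the workshop):

```python
# Commonly cited KLM operator estimates (seconds):
# K = keystroke, P = point with mouse, H = home hands between devices,
# M = mental preparation, B = mouse button press or release.
OPERATOR_TIMES = {"K": 0.2, "P": 1.1, "H": 0.4, "M": 1.35, "B": 0.1}

def klm_estimate(sequence):
    """Estimate task time for a string of operators, e.g. 'MHPBB'."""
    return sum(OPERATOR_TIMES[op] for op in sequence)

# Example: think, move hand to mouse, point at a menu item, click (press + release).
t = klm_estimate("MHPBB")
print(round(t, 2))  # 1.35 + 0.4 + 1.1 + 0.1 + 0.1 = 3.05
```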

Fitts's Law (Tognazzini) -- you have a screen, a pointer, and a target. Fitts's Law says the bigger the target, the faster you get to it. Tognazzini talks about how big the target has to be to most efficiently get the user to it.
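The "bigger target, faster acquisition" claim is usually written as the Shannon formulation of Fitts's Law, MT = a + b * log2(D/W + 1); a quick sketch (the constants a and b are made-up placeholders -- in practice they're fit from experimental data for a given device and user population):

```python
import math

def fitts_time(distance, width, a=0.1, b=0.15):
    """Movement time (seconds) per the Shannon formulation of Fitts's Law:
    MT = a + b * log2(D/W + 1). The constants a and b here are
    illustrative placeholders, not empirically fit values."""
    return a + b * math.log2(distance / width + 1)

# Doubling the target width lowers the index of difficulty, hence the time:
print(round(fitts_time(400, 20), 3))  # smaller target -> slower
print(round(fitts_time(400, 40), 3))  # bigger target -> faster
```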

Tuesday, October 25, 2005

Interfaces and complexity

Apart from what I or we might think about Microsoft, MS Office, Jakob, or this conference, this evening's plenary was valuable I think.

There are ways in which our web site is similar to MS Office.
  • Lots of complexity and functionality
  • It's hard to find functions in that complexity
  • People have to be trained how to find and use those functions
  • People use our site because it's what they have to work with and/or someone tells them they have to use it
  • (now, if only we had Microsoft's billions of $$ heheh)
Because of that, it seems to me that the design goals and redesign principles he listed are applicable to us.

I go to these events -- Gilbane a year ago, ETech last spring -- and each time, I'm thinking: if only we could really get down to the real thing the library does -- provide access to information -- and recognize that anything else we do is mere "commentary" on that ... THEN we could design the site we need to have, create a brand image that will work, communicate to the university clearly about our value.

Tuesday evening - a preview of the new Office interface

As a preliminary, let me say that I witnessed a Jakob fan-boy moment. Walking toward the room where the event was to be held, Jakob was walking some paces ahead of me. A guy (who was your typical 30-ish traditional corp tech type (apparently)) passed in the other direction. "Jakob Nielsen", he exclaimed, "I'm a big fan of your work!"

Yikes




This was a talk by a guy from Microsoft who talked about the interface design for the upcoming version of some components of Microsoft Office (particularly Word, PowerPoint, and Excel)

Although both Jakob and the Microsoft guy were billed, in reality, Jakob did a 15 minute introduction and then the Microsoft guy spoke for about 40 minutes. 20 minutes of Q&A followed (thus, it ended at 6:30, not at 6:45)

The idea behind the evening was for the evolution of interfaces, away from "command oriented" actions and toward "results oriented" actions.

Jakob's introduction went through a brief history of the user interface:
  • batch commands
  • line mode commands
  • fullscreen text-only terminals
  • GUIs
All of which, he pointed out, are command based interactions.

Back in 1992, he said he had predicted that by 1996 interfaces would have evolved into a non-command form. Now he's saying maybe that will happen by 2010.

The idea of a non-command interface is what he calls "agent oriented" task performance, where the interface does what the user intends rather than what the user commands. I presume that the system gets trained by the user to know what that intent is. The idea is that complex commands get performed to the user's specification without the user having to detail those specifications each time. (at least, that's my interpretation. Jakob didn't actually say that. However, I can't imagine he means that software intuitively or telepathically interacts with the computer user)

So, on that note, he turned the room over to the Microsoft guy, Tim Briggs.

More history, as he reviewed the Word interfaces from version 1 to the present (Office 2003, which is 2 years old already. I hadn't realized it'd been that long).

The point of the review was to illustrate the complexity of the interfaces ... Word has grown to have 300 commands and 31 toolbars. People can't find the commands they want. They asked for functionality that already existed (but they couldn't find it, or didn't recognize its function if they saw it). The UI is hard to browse. Core functions took too long for users to accomplish.

Design goals:
  • Keep frequent and familiar tasks efficient
  • Help people discover best practices (that is, help them find the fastest way to do things)
  • Make browsing for familiar goals (tasks, commands) easier
  • Let people focus on the output (their end document), not on the UI
Redesign principles:
  • Streamline the core functionality
  • Consolidate the UI areas
  • Apply 3-stage formatting, which means:
    • gather bundles of features together into, sort of, palettes to apply to a given portion of a document
    • demonstrate what's possible (that is, show previews of what something will look like if they apply a feature/palette)
    • dialog access for tweaking (that is, give people access to power tools if they want them)
He walked through some of the interface. The Menu bar at the top of the apps is now a "ribbon" with tabs. So, instead of clicking and looking at dropdowns with flyouts from the dropdowns, you get a big "toolbar" (though it's not called that now) with buttons/widgets. These "ribbons" have "function chunks" (meaning only that all the, for example, text formatting widgets are grouped together).

These "commands" are not organized around a scenario or object. There is supposed to be better labeling and feedback to the user. Basically, this means there are now "super tooltips" ... not only does mousing-over display a label, but it's a super-label that include what the widget is for and what it might be used to do.

Tools will be contextually relevant. This means that, for example, if you insert an image into a Word document, if you select that image, you'll THEN see image editing/formatting tools. Then, when the image is no longer selected, those image tools will disappear.

Finally, he went through what he called "Effect of a new experience". That is, from their pre-beta testing of the new Office interface, they expect people will experience a drop in productivity at first. Productivity (and the user's perception of his/her own productivity) will stay low while the user keeps using the new UI for the old tasks he/she is accustomed to doing. The user's perceived productivity (though, at this point, he started getting fuzzy about whether he was talking about real or perceived productivity) will start to increase as the user starts discovering how to accomplish new tasks with the app.

Actually, this last part sounded suspiciously like a pitch to partners that MS wants to have buy in to the new version of Office.

Not too much of note in the Q&A. One person asked about Mac users of Office, and the answer was, basically, they are still screwed. Someone asked about accessibility for people who don't use a mouse. The answer was "it'll be better" (but, that's easy to say, isn't it?)

Someone asked how they decide what new features to include. This led him to talk about "SQM" (pronounced "squim") -- system quality management, which originally was to provide MS with information about application crashes. Now, they use it to gather feature usage.

You know that feature/option in MS products to participate in a "Customer Improvement Program" or "Feedback Program", sometimes hidden under "Service Options"? Well, by saying "yes", you are agreeing to send Microsoft information about the commands and features you use in the MS product.

One guy asked a question that seems to me to typify the difference between this event (the User Experience 2005 event) and, say, ETech (O'Reilly's Emerging Technology conference from last spring).

The question was, what has the interface done to prevent the questioner's big pet peeve. The pet peeve is people making headings in a Word document by applying text formatting, rather than by applying a structural style. Of course, there's no real answer to this.

And, of course, in the context of html, I also would want my authors to use the markup I specify rather than something else, even if the result "looks" the same on the web page, in the end.

But, the point is, this guy is worried about how to force people to use a piece of software (in this case, MS Word) in a way that HE thinks is correct.

ETech, on the other hand, was all about molding any tool so that THE TOOL does what you want. That the user is the one that rules the software, the hardware, the application, the task -- and not the other way around.

Rain!

It's raining, 47 degrees, and windy. Hopefully, the weather will be better tomorrow.

This evening there is a plenary with Jakob and somebody from Microsoft, otherwise this is a free day. It's curious to me that they insist on calling this a conference. When it's pay-by-the-day for each day's one-day workshop, and there isn't really anything else here besides the workshops (this evening's "plenary" aside), the fact that the thing lasts for 6 days does not qualify it as a "conference" in my opinion.

Monday's workshop was so disappointing. But, I still have hopes for Thursday's, which is "Toolbox of Specialized Usability Methods". I mean, since he bothered to put his outline up for people to see (or so it seems), maybe he'll actually talk about how to use real methods.

Monday, October 24, 2005

Monday's workshop

Advanced User Testing
Jill Strawbridge, Symantec

Monday, October 24, 2005
9:00am-5:00pm
User Experience 2005
Boston, MA
  • exp psych./human factors/indust & systems engg
  • currently Symantec designing apps
  • teaches human factors course at a design school (Arts Center) in Pasadena CA
  • and teaches an upper division psych class somewhere else



(no wifi in this room. argh. I'm blogging in Dreamweaver...)

Nothing in the conference metadata indicated where registration would be, other than a vague mention of "lobby" somewhere.

So, go to the Lobby level ... nothing. Take the escalator up one level. There is a sign pointing up. Walk up the stairs to the 3rd floor. And there is a registration table.

There was breakfast. (hadn't been told that). Where is the meeting room? Mostly down one hallway. But I saw Peter Morville asking where his was (he's teaching an IA workshop here) ... he'd been down the hallway. Turns out his is on a different floor.

When you get your badge at the registration table, they give you a tacky bag. I'm looking at the handouts on the table with the tutorials and their room locations. ("Oh, there's one of those in your bag").

We get a spiral-bound set of the slides for the workshop -- at the next table. I'd vaguely been pointed there, but it didn't register .. so I have to go back to pick up my booklet.

So much for usability at a high-priced 6-day series of User Experience workshops....

At the morning break they told us they'd set up a wifi room. By noon, the wifi had lost its connection to the internet.



Now we have the around-the-room introductions...

So far, 2 people in a row who should have been at Steve Krug's workshop, or at least should have copies of his book to give to their managers, since one big problem is getting the organization to truly buy in to iterative testing. I suppose mentioning that (in public in the session) would be bad form, given that this is a Jakob event =)

the usual corporate people. a guy w/ a genealogy site. the guy sitting next to me is w/ a company in Irving, and he's a UTA grad. small world. and a guy at Frost Bank in San Antonio.



Usability Magnitude Estimation (UME) and Master Usability Scaling (MUS), methodology from Mick McGee
  • UME - strength of rating, degree of difference
  • MUS - allowing comparisons
Defining Usability
  • a psychological response to using an interface (intuitive, well-organized, consistent)
  • ... that is evaluated to improve or compare designs
    • formative (to identify and solve problems)
    • summative (to evaluate with metrics)
Limitations with task-based usability measures
  • task differences, complexity, etc., make metrics difficult
  • if people fail, they could fail for different reasons
  • if people succeed, we can't know why
Types of data
  • nominal (no value; names of categories of data only)
  • ordinal (rank order, Likert scales)
  • interval (rank order + size differences, but not ratio differences)
  • ratio (rank order + size differences + meaningful ratios, since there's a true zero)

Usability Magnitude Estimation (UME)

  • developed from the limitations of subjective usability measures.
  • developed from psychophysics
  • a psychophysical measurement method for assessing the psychological sensation of a physical stimulus
    • interpreting multidimensional stimuli
    • magnitude of effect is expressed as a ratio
  • generates lots of stuff to put into reports to impress management
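Since UME ratings are ratio-scale, the usual analysis aggregates them with geometric rather than arithmetic means; a minimal sketch (the ratings below are invented, and this is my illustration of magnitude-estimation analysis generally, not McGee's exact procedure):

```python
import math

def geometric_mean(values):
    """Geometric mean, the appropriate average for ratio-scale magnitude estimates."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical magnitude estimates for two designs from four participants.
# Each person picks their own numeric scale; the ratios, not the raw
# numbers, carry the meaning.
design_a = [10, 20, 15, 12]
design_b = [30, 45, 40, 24]

ratios = [b / a for a, b in zip(design_a, design_b)]
print(round(geometric_mean(ratios), 2))  # how many times "more usable" B felt -> 2.45
```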

We got academic research babble



There is a wifi room. There will be lunch "on our own" for an hour and 15 mins. So, I think, a sandwich from somewhere and either go to my room or go to the wifi room

Since this is another conference, I again have the strong urge to shop for a new laptop -- a lighter and smaller one, but, then I'd probably miss performance and battery-life.

Oh, during the break I had a Jakob sighting.

Yikes. Today's session is turning into the antithesis of "Don't Make Me Think". Which is why I don't go to ASIS. Or even read JASIST the journal anymore. Yuck!

I mean, the first warning was shortly into the pre-morning-break part, where she was going through a methodology for establishing baseline, comparable usability metrics with user testing subjects ... not only do I think that's a waste of time for most situations (see below), but in seconds I could identify weaknesses in validity. Obviously, it's possible to construct some form of statistical validity for that methodology, at least validity that academics will recognize. But, for common sense... forget it.



Also, later, I'll outline all the usability challenges of this event. (sheesh)

Wait til y'all see the script she gave us to use for our first "practice" test. Mannn. Is she serious??

I quote: "Usability is your perception of how consistent, efficient, productive, organized, easy to use, intuitive, straightforward it is to accomplish tasks within a system".

We're supposed to read that to the test subject.

Sheesh

And I won't mention this "rate the proportional size of 10 individual circles, without seeing any circle more than once" exercise that is supposed to be the practice that you have the subjects do before you have them do the real test.

I mean, the usability testing methods are not usable!

Man... now I'm so glad I never studied HCI ... would have been like linguistics ... good in the thinking about it, but ridiculous in practice

After lunch, she skipped "ethnographic testing" and "international testing" and went straight to "Back to Basics". "Usability Metrics" -- a lengthy set of slides outlining "what you can test" but not "how do you test it". Particularly problematic when the "what" is memory burden. Given that this workshop is titled "Advanced User Testing", I would have thought that how to test would have been kind of central to the topic of the workshop.

Jeez, I don't believe it. The exercise is "what 4 measures would you use to measure the effectiveness of Google" and one guy actually said, measure heart rate, respiration rate, perspiration rate of the test subjects while they are doing the test. Oh man. Is that guy serious??