We’re looking for

  • med students
  • graduate students in health sciences
  • people preparing to go to med school or grad school in health sciences

to spend 30 minutes in a phone interview.

We want to learn how you chose (or are choosing) a program or school to attend.

We’re scheduling phone interviews during the month of October and in early November.

If you are interested, please respond to Sandy Olson at usabilityworks@att.net with:

  • Full name
  • Email address
  • Phone number (include your area code)
  • Name of the school you are attending and program you’re in
  • The time zone you are in

The information you provide will be used only for the purposes of this study.

Tell your friends!

If you qualify based on these questions, someone will contact you either by email or phone.

Note: To qualify to participate and receive your payment, you must answer all of our questions truthfully, and have a fluent command of English.

We will do our best, but may not be able to contact everyone. If we contact you, we may ask additional questions before making final selections. You may not be selected for the study.

If you are selected, we will send appointment confirmation information by email.

Thank you!

I’ve seen it dozens of times. The team meets after observing people use their design, and they’re excited and energized by what they saw and heard during the sessions. They’re all charged up about fixing the design. Everyone comes in with ideas, certain they have the right solution to remedy the frustrations users had. Then what happens?

On a super collaborative team everyone is in the design together, just with different skills. Splendid! Everyone was involved in the design of the usability test, they all watched most of the sessions, they participated in debriefs between sessions. They took detailed, copious notes. And now the “what ifs” begin:

What if we just changed the color of the icon? What if we made the type bigger? What if we moved the icon to the other side of the screen? Or a couple of pixels? What if?

How do you know you’re solving the right problem? Well, the team thinks they’re on the right track because they paid close attention to what participants said and did. But teams often leave that data behind when they’re trying to decide what to do. This is not ideal.

Getting beyond “what if”

On a super collaborative team, everyone is rewarded for doing the right thing for the user, which in turn, is the right thing for the business. Everyone is excited about learning about the goodness (or badness) of the design by watching users use it. But a lot of teams get stuck in the step after observation. They’re anxious to get to design direction. Who can blame them? That’s where the “what ifs” and power plays happen. Some teams get stuck and others try random things because they’re missing one crucial step: going back to the evidence for the design change.

Observing with an open mind

Observations tell you what happened. That is, you heard participants say things and you saw them do things — many, many interesting, sometimes baffling things. Good things, and bad things. Some of those things backed up your theories about how the design would work. Some of the observations blew your theories out of the water. And that’s why we do usability testing: to see, in a low-risk situation like a small, closed test, what it will be like when our design is out in the wild.

Brainstorming the why

The next natural step is to make inferences. These are guesses or judgments about why the things you observed happened. We all do this. It’s usually what the banter is all about in the observation room.

“Why” is why we do this usability testing thing. You can’t get to why from surveys or focus groups. But even in direct observation, with empirical evidence, why is sometimes difficult to ferret out. A lot of times the participants just say it. “That’s not what I was looking for.” “I didn’t expect it to work that way.” “I wouldn’t have approached it that way.” “That’s not where I’d start.” You get the idea.

But they don’t always tell you the right thing. You have to watch. Where did they start? What wrong turns did they take? Where did they stop? What happened in the 3 minutes before they succeeded or failed? What happened in the 3 minutes after?

It’s important to get judgments and guesses out into the fresh air and sunshine by brainstorming them within the team. When teams make guessing the why an explicit act that they do in a room together, they test the boundaries of their observations. It also becomes easy to see where different people on the team saw things similarly and where they saw them differently.

Weighing the evidence

And so we come to the crucial step, the one that most teams skip over, and why they end up in the “what ifs” and opinion wars: analysis. I’m not talking about group therapy, though some teams I’ve worked with could use some. Rather, the team now looks at the strength of the data to support design decisions. Without this step, it is far too easy to choose the wrong inference to direct the design decisions. You’re working from the gut, and the gut can be wrong.

Analysis doesn’t have to be difficult or time-consuming. It doesn’t even have to involve spreadsheets.* And it doesn’t have to be lonely. The team can do it together. The key is examining the weight of the evidence for the most likely inferences.

Take all those brainstormed inferences. Throw them into a hat. Draw one out and start looking at the data you have that supports it being the reason for the frustration or failure. Is there a lot? A little? Any? Everyone in the room should be poring through their notes. What happened in the sessions? How much? How many participants had a problem? What kinds of participants had the problem? What were they trying to do, and how did they describe it?

Answering questions like these, among the team, gets us to understanding how likely it is that this particular inference is the cause of the frustration. After a few minutes of this, it is not uncommon for the team to collectively have an “ah ha!” moment. Breakthrough comes as the team eliminates some inferences because they’re weak, and keeps others because they are strong. Taking the strong inferences together, along with the data that shows what happened and why, snaps the design direction right into focus.
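
If you want a lightweight way to see the weight of the evidence without a spreadsheet, even a simple tally will do. The sketch below is only an illustration, not a prescribed method: it assumes you keep session notes as short records that link each participant to the inference their behavior seemed to support, and the inference labels are made up.

```python
# A minimal sketch (not a prescribed method): tallying how much evidence
# supports each brainstormed inference. The inference labels and notes
# below are hypothetical placeholders.
from collections import Counter

# Each note links a participant to the inference their behavior seemed to support.
session_notes = [
    {"participant": "P1", "supports": "label is unclear"},
    {"participant": "P2", "supports": "label is unclear"},
    {"participant": "P3", "supports": "icon is hard to see"},
    {"participant": "P4", "supports": "label is unclear"},
    {"participant": "P5", "supports": "expected a different workflow"},
]

# Count how many participants' behavior supports each inference.
evidence = Counter(note["supports"] for note in session_notes)

# Rank inferences from strongest to weakest evidence.
for inference, count in evidence.most_common():
    print(f"{inference}: {count} of {len(session_notes)} participants")
```

Run it and the inferences come out ranked by how many participants support each one, which is the same question the team is answering out loud in the room.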

Eliminating frustration is a process of elimination

The team comes to the design direction meeting knowing what the priority issues were. Everyone has at least one explanation for the gap between what the design does and what the participant tried to do. Narrowing those guesses to the most likely root cause based on the weight of the evidence – in an explicit, open, and conscious act – takes the “what ifs” out of the next version of a design, and shares the design decisions across the team.

* Though 95% of data analysis does. Sorry.

Sports teams drill endlessly. They walk through plays, they run plays, they practice plays in scrimmages. They tweak and prompt in between drills and practice. And when the game happens, the ball just knows where to go.

This seems like such an obvious thing, but we researchers often pooh-pooh dry runs and rehearsals. In big studies, it is common to run large pilot studies to get the kinks out of an experiment design before running the experiment with a large number of participants.

But I’ve been getting the feeling that we general research practitioners are afraid of rehearsals. One researcher I know told me that he doesn’t do dry runs or pilot sessions because he fears that makes it look to his team like he doesn’t know what he is doing. Well, guess what. The first “real” session ends up being your rehearsal, whether you like it or not. Because you actually don’t know exactly what you’re doing — yet. If it goes well, you were lucky and you have good, valid, reliable data. But if it didn’t go well, you just wasted a lot of time and probably some money.

The other thing I hear is that researchers are pressured for time. In an Agile team, for example, everyone feels like they just have to keep moving forward all the time. This is an application development methodology in desperate want of thinking time, of just practicing craft. The person doing the research this week has finite time. The participants are only available at certain times. The window for considering the findings closes soon. So why waste it rehearsing what you want to do in the research session?

Conducting dry runs, practice sessions, pilots, and rehearsals — call them whatever works in your team — gives you the superpower of confidence. That confidence gives you focus and relaxation in the session, so you can open your mind and perception to what is happening with the user rather than focusing on managing the session. And who doesn’t want the superpower of control? Or of deep insight? These things don’t come without preparation, practice, and poking at improving the protocol.

You can’t get that deep insight in situ if you’re worried about things like how to transfer the control of the mouse to someone else in a remote session. Or whether the observers are going to say something embarrassing at just the wrong time. Or how you’re going to ask that one really important question without leading or priming the participant.

The way to get to be one with the experience of observing the user’s experience is to practice the protocol ahead of time.

There are 4 levels of rehearsal that I use. I usually do all of them for every study or usability test.

  • Script read-through. You’ve written the script, probably, but have you actually read it? Read it aloud to yourself. Read it aloud to your team. Get feedback about whether you’re describing the session accurately in the introduction for the participant. Tweak interview questions so they feel natural. Make sure that the task scenarios cover all the issues you want to explore. Draft follow-up questions.
  • Dry run with a confederate. Pretending is not a good thing in a real session. But having someone act as your participant while you go through the script or protocol or checklist can give you initial feedback about whether the things you’re saying and asking are understandable. It’s the first indication of whether you’ll get the data you are looking for.
  • Rehearsal with a team member. Do a full rehearsal on all the parts. First, do a technical rehearsal. Does the prototype work? Do you know what you’re doing in the recording software? Does the camera on the mobile sled hold together? If there will be remote observers, make sure whatever feed you want to use will work for them by going through every step. When everything feels comfortable on the technical side, get a team member to be the participant and go through every word of the script. If you run into something that doesn’t seem to be working, change it in the script right now.
  • Pilot session with a real participant. This looks a lot like the rehearsal with the team member except for 3 things. First, the participant is not a team member, but a user or customer who was purposely selected for this session. Second, you will have refined the script after your experience of running a session using it with a team member. Third, you will now have been through the script at least 3 other times before this, so you should be comfortable with what the team is trying to learn and with the best way to ask about it. How many times have you run a usability test only to get to the 5th session and hear in your mind, ‘huh, now I know what this test is about’? It happens.

All this rehearsal? As the moderator of a research session, you’re not the star — the participant is. But if you aren’t comfortable with what you’re doing and how you’re doing it, the participant won’t be comfortable and relaxed, either. And you won’t get the most out of the session. Once you get into the habit of rehearsing, though, when game time comes you can concentrate on what is happening with the participant. The rehearsal steps become ways to test the test, rather than testing you.

There’s a lot of truth to “practice makes perfect.” When it comes to conducting user research sessions, that practice can make all the difference in getting valid data and useful insights. As Yogi Berra said, “In theory, there’s no difference between theory and practice. In practice, there is.”

It was a spectacularly beautiful Saturday in San Francisco. Exactly the perfect day to do some field usability testing. But this was no ordinary field usability test. Sure, there’d been plenty of planning and organizing ahead of time. And there would be data analysis afterward. What made this test different from most usability tests?

  • 16 people gathered to make 6 research teams
  • Most of the people on the teams had never met
  • Some of the research teams had people who had never taken part in usability testing before
  • The teams were going to intercept people on the street, at libraries, and in farmers’ markets

Ever heard of Improv Everywhere? This was the UX equivalent. Researchers just appeared out of the crowd to ask people to try out a couple of designs and then talk about their experiences. Most of the interactions with participants were about 20 minutes long. That’s it. But by the time the sun was over the yardarm (time for cocktails, that is), we had data on two designs from 40 participants. The day was amazingly energizing.

How the day worked
The timeline for the day looked something like this:

  • 8:00 – Coordinator checks all the packets of materials and supplies
  • 10:00 – Coordinator meets up with all the researchers for a briefing
  • 10:30 – Teams head to their assigned locations and discuss who should lead, who should take notes, and who should intercept
  • 11:00 – Most teams reach their locations, check in with contacts (if there are contacts), set up
  • 11:15-ish – Intercept the first participants and start gathering data
  • Break when needed
  • 14:00 – Finish up collecting data, head back to the meeting spot
  • 14:30 – Teams start arriving at the meeting spot with data organized in packets
  • 15:00-17:00 – Everybody debriefs about their experiences and observations
  • 17:00 – Researchers head home, energized about what they’ve learned
  • Later – Researchers upload audio and video recordings to an online storage space

On average, teams came back with data from 6 or 7 participants. Not bad for a 3-hour stretch of doing sessions.

The role of the coordinator
I was excited about the possibilities, about getting a chance to work with some old friends, and to expose a whole bunch of people to a set of design problems they had not been aware of before. If you have thought about getting everyone on your team to do usability testing and user research, but have been afraid of what might happen if you’re not with them, conducting a study by flash mob will certainly test your resolve. It will be a lesson in letting go.

There was no way I could join a team for this study. I was too busy coordinating. And I wanted to be available in case there was some kind of emergency. (In fact, one team left the briefing without copies of the thing they were testing. So I jumped in a car to deliver them.)

Though you might think that the 3-or-so hours of data collection would be dull and boring for the coordinator, there were all kinds of things for me to do: resolve issues with locations, answer questions about possible participants, reconfigure teams when people had to leave early. Cell phones were probably the most important tool of the day.

I had to believe that the planning and organizing I had done up front would work for people who were not me. And I had to trust that all the wonderful people who showed up to be the flash mob were as keen on making this work as I was. (They were.)

Keys to making flash mob testing work
I am still astonished that a bunch of people would show up on a Saturday morning to conduct a usability study in the street without much preparation. If your team is half as excited about the designs you are working on as this team was, taking a field trip to do a flash mob usability test should be a great experience. That is the most important ingredient to making a flash mob test work: people to do research who are engaged with the project, and enthusiastic about getting feedback from users.

Contrary to what you might think, coordinating a “flash” test doesn’t happen out of thin air, or from a bunch of friends declaring, “Let’s put on a show!” Here are 10 things that made the day work really well and gave us quick and dirty data:

1. Organize up front
2. Streamline data collection
3. Test the data collection forms
4. Minimize scripting
5. Brief everyone on test goals, dos and don’ts
6. Practice intercepting
7. Do an inventory check before spreading out
8. Be flexible
9. Check in
10. Reconvene the same day

Organize up front

Starting about 3 or 4 weeks ahead of time, pick the research questions, put together what needs to be tested, create the necessary materials, choose a date and locations, and recruit researchers.

Introduce all the researchers ahead of time, by email. Make the materials available to everyone to review or at least peek at as soon as possible. Nudge everyone to look at the stuff ahead of time, just to prepare.

Put together everything you could possibly need on The Day in a kit. I used a small roll-aboard suitcase to hold everything. Here’s my list:

  • Pens (lots of them)
  • Clipboards, one for each team
  • Flip cameras (people took them but did most of the recording on their phones)
  • Scripts (half a page)
  • Data collecting forms (the other half of the page)
  • Printouts of the designs, or device-accessible prototypes to test
  • Lists of names and phone numbers for researchers and me
  • Lists of locations, including addresses, contact names, parking locations, and public transit routes
  • Signs to post at locations about the study
  • Masking tape
  • Badges for each team member – either company IDs, or nice printed pages with their first names and “Researcher” printed large
  • A large, empty envelope

About 10 days ahead, I chose a lead for each of the teams (these were all people who I knew were experienced user researchers) and talked with them. I put all the stuff listed above in a large, durable envelope with the team lead’s name on it.

Streamline data collection

The sessions were going to be short, and the note-taking awkward because we’d be doing this research in ad hoc places, so I wanted to make data collection as easy as possible. Working from a form I borrowed from Whitney Quesenbery, I made something that I hoped would be quick and easy to fill in, and that would make it easy for me to understand later what the data meant.

Data collector for our flash mob usability test

The data collection form was the main thing I spent time on in the briefing before everyone went off to collect data. There are things I will emphasize more next time, but overall this worked pretty well. One note: it is quite difficult to collect qualitative data in the wild by writing things down. Better to audio record.
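
For anyone who wants to picture what a quick-to-fill record might look like, here is a hypothetical sketch. The field names are assumptions for illustration only; they are not the fields on the actual form, which isn’t reproduced here.

```python
# A hypothetical sketch of a streamlined record for one intercept session.
# Field names are illustrative assumptions, not the actual form's fields.
from dataclasses import dataclass, field
from typing import List

@dataclass
class InterceptRecord:
    team: str                      # which research team collected this
    participant_number: int        # sequence number; no names collected
    location: str                  # street corner, library, farmers' market
    completed_task: bool           # did the participant get through the key task?
    understood_key_concept: bool   # the one thing we most wanted to learn
    quotes: List[str] = field(default_factory=list)  # short verbatim remarks
    notes: str = ""                # anything else worth remembering

# Example record, filled in between sessions:
record = InterceptRecord(
    team="Team 3",
    participant_number=2,
    location="farmers' market",
    completed_task=True,
    understood_key_concept=False,
    quotes=["I thought this would take me to the next page."],
)
print(record)
```

Keeping the structure that small is the point: in the field, anything that takes more than a few seconds to write down tends not to get written down.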

Test the data collection forms

While the form was reasonably successful, there were some parts of it that didn’t work that well. Though a version of the form had been used in other studies before, I didn’t ask enough questions about the success or failure of the open text (qualitative data) part of the form. I wanted that data desperately, but it came back pretty messy. Testing the data collection form with someone else would have told me what questions researchers would have about that (meta, no?), and I could have done something else. Next time.

Minimize scripting

Maximize participant time by dedicating as much of the session as possible to the participant’s interacting with the design. That means the moderator does nothing to introduce the session, relying instead on an informed consent form that one of the team members can administer to the next participant while the current one is finishing up.

The other tip here is to write out the exact wording for the session (with primary and follow-up questions), and threaten the researchers with being flogged with a wet noodle if they don’t follow the script.

Brief everyone on test goals, dos and don’ts

All the researchers and I met up at 10am and had a stand-up meeting in which I thanked everyone profusely for joining me in the study. And then I talked about and took questions on:

  • The main thing I wanted to get out of each session. (There was one key concept that we wanted to know whether people understood from the design.)
  • How to use the data collection forms. (We walked through every field.)
  • How to use the script. (“You must follow the script.”)
  • How to intercept people, inviting them to participate. (More on this below.)
  • Rules about recordings. (Only hands and voices, no faces.)
  • When to check in with me. (When you arrive at your location, at the top of each hour, and when you’re on the way back.)
  • When and where to meet when they were done.

I also handed out cash that the researchers could use for transit or parking or lunch, or just keep.

Practice intercepting people

Intercepting people to participate is the hardest part. You walk up to a stranger on the street and ask them for a favor. This might not be bad in your town. But in San Francisco, there’s no shortage of competition: homeless people, political parties registering voters, hucksters, buskers, and kids working for Greenpeace, all wanting attention from passers-by. And there you are, trying to do a research study. So, how do you get some attention without freaking people out? A few things that worked well:

  • Put the youngest and/or best-looking person on the task.
  • Smile and make eye contact.
  • Use cute pets to attract people. Two researchers who own golden retrievers brought their lovely dogs with them, which was a nice icebreaker.
  • Start off with what you’re not: “I’m not selling anything, and I don’t work for Greenpeace. I’m doing a research study.”
  • Start by asking for what you want: “Would you have a few minutes to help us make ballots easier to use?”
  • Take turns – it can be exhausting enduring rejection.

Do an inventory check before spreading out

Before the researchers went off to their assigned locations, I asked each team to check that they had everything they needed, which apparently was not thorough enough for one of my teams. Next time, I will ask each team to empty out the packet and check the contents. I’ll use the list of things I wanted to include in each packet, along with my agenda items for the briefing, to ask the teams to look for each item.

Be flexible

Even with lots of planning and organizing, things happen that you couldn’t have anticipated. Researchers don’t show up, or their schedules have shifted. Locations turn out not to be so perfect. Give teams permission to do whatever they think is the right thing to get the data – short of breaking the law.

Check in

Teams checked in when they found their location, between sessions, and when they were on their way back to the meeting spot. I wanted to know that they weren’t lost, that everything was okay, and that they were finding people to take part. Asking teams to check in also gave them permission to ask me questions or help them make decisions so they could get the best data, or tell me what they were doing that was different from the plan. Basically, it was one giant exercise in The Doctrine of No Surprise.

Reconvene the same day

I needed to get the data from the research teams at some point. Why not meet up again and share experiences? Turns out that the stories from each team were important to all the other teams, and extremely helpful to me. They talked about the participants they’d had and the issues participants ran into with the designs we were testing. They also talked about their experiences with testing this way, which they all seemed to love. Afterward, I got emails from at least half the group volunteering to do it again. They had all had an adventure, met a lot of new people, gotten some practice with their skills, and helped the world become a better place through design.

Wilder than testing in the wild, but trust that it will work

On that Saturday in San Francisco the amazing happened: 16 people who were strangers to one another came together to learn from 40 users about how well a design worked for them. The researchers came out from behind their monitors and out of their labs to gather data in the wild. The planning and organizing that I did ahead of time let it feel like a flash mob event to the researchers, and it gave them room to improvise as long as they collected valid data. And it worked. (See the results.)

P.S. I did not originate this approach to usability testing. As far as I know, the first person to do it was Whitney Quesenbery in New York City in the autumn of 2010.

Everybody’s talking about designing for delight. Even me! Well, it does get a bit sad when you spend too much time finding bad things in design. So, I went positive. I looked at positive psychology, and behavioral economics, and the science of play, and hedonics, and a whole bunch of other things, and came away from all that with a framework in mind for what I call “happy design.” It comes in three flavors: pleasure, flow, and meaning.

I used to think of the framework as being in layers or levels. But it’s not like that when you start looking at great digital designs and the great experiences they are part of. Pleasure, flow and meaning end up commingled.

So, I think we need to deconstruct what we mean by “delight.” I’ve tried to do that in a talk that I’ve been giving, most recently at RefreshPDX (April 2012). Here are the slides. Have a look: