
In the fall of 2012, I seized the opportunity to do some research I’d wanted to do for a long time. Millions of users would be available and motivated to take part. But I needed to figure out how to do a very large study in a short time. By large, I’m talking about reviewing hundreds of websites. How could we make that happen within a couple of months?

Do election officials and voters talk about elections the same way?

I had BIG questions. What were local governments offering on their websites, and how did they talk about it? What questions did voters have? Finally, if voters went to local government websites, were they able to find out what they needed to know?

Brain trust

To get this going, I enlisted a couple of colleagues and advisors. Cyd Harrell is a genius when it comes to research methods (among other things). Ethan Newby sees the world in probabilities and confidence intervals. Jared Spool came up with the cleverest twist, which kept us from evaluating with techniques we were prone to use just out of habit. Great team, but I knew we weren’t enough to do everything that needed doing.

Two phases of research: What first, then whether

We settled on splitting the research into two phases. First, we’d go look at a bunch of county election websites to see what was on them. We decided to do this by simply cataloging the words in links, headings, and graphics on a big pile of election sites. Next, we’d do some remote, moderated usability test sessions, asking voters what questions they had and then observing as they looked for satisfactory answers on their local county websites.

Cataloging the sites would tell us what counties thought was important enough to put on the home pages of their election websites. It also would reveal the words used in the information architecture. Would the labels match voters’ mental models?

Conducting the usability test would tell us what voters cared about, giving us a simple mental model. Having voters try to find answers on websites close to them would tell us whether there was a gap between how election officials talk about elections and how voters think about elections. If there was a gap, we could get a rough measure of how wide the gap might be.

When we had the catalog and the usability test data, we could look at what was on the sites and where it appeared against how easily and successfully voters found answers. (At some point, I’ll write about the usability test because there were fun challenges in that phase, too. Here I want to focus on the cataloging.)

Scoping the sample

Though most of us think of elections only when it’s time to vote for president every four years, there are actually elections going on all the time. Right now, at this very moment, there’s an election going on somewhere in the US. And, contrary to what you might think, most elections are run at the county or town level. There are a lot of counties, boroughs, and parishes in the US. And then there are Wisconsin and New England, where elections are almost exclusively run by towns. There are 3,057 counties or county equivalents. If you count all the towns and other jurisdictions that put on elections in the US and its territories and protectorates, there are over 8,000 voting jurisdictions. Most of them have websites.

We decided to focus on counties or equivalents, which brought us back to roughly 3,000 to choose from. The question then was how to narrow the sample so that it was big enough to give us reliable statistics but small enough to gather the data within a reasonable time.

So, our UX stats guy, Ethan, gave us some guidance: 200 counties seemed like a reasonable number to start with. Cyd created selection criteria based on US Census data. In the first pass, we selected counties based on population size (highest and lowest), population density (highest and lowest), and diversity (majority white or majority non-white). We also looked across geographic regions. When we reviewed which counties showed up under which criteria, we saw several duplicates. For example, Maricopa County, Arizona is highly populated, densely populated, and majority non-white. When we removed the duplicates, we had 175 counties left.
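
For the curious, the mechanics amount to something like this sketch (the file, column names, and cutoffs are invented for illustration; we worked from US Census tables and judgment, not this exact code):

```python
# A sketch of the selection logic, assuming a census extract with one
# row per county. File name, column names, and cutoffs are invented.
import pandas as pd

counties = pd.read_csv("census_counties.csv")

def extremes(df, column, n=25):
    """Take the n highest and n lowest counties on a measure."""
    ranked = df.sort_values(column)
    return pd.concat([ranked.head(n), ranked.tail(n)])

sample = pd.concat([
    extremes(counties, "population"),         # population size, high and low
    extremes(counties, "density"),            # population density, high and low
    counties[counties["pct_nonwhite"] > 50],  # majority non-white
])

# Counties like Maricopa qualify under several criteria at once, so they
# show up more than once; dropping duplicates shrinks the starting list.
sample = sample.drop_duplicates(subset="fips")
print(len(sample))
```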

The next step was to determine whether they all had websites. Here we had one of our first insights: counties with populations somewhere between 7,000 and 10,000 are less likely to have websites about elections than larger counties are. We eliminated counties that either didn’t have websites or had only a one-pager with the clerk’s name and phone number. This brought our sample down to 147 websites to catalog. Insanely, 147 seemed so much more reasonable than 200.

One more constraint we faced was timing. Election websites change all the time because, well, there are elections going on all the time. Because we wanted to do this before the 2012 presidential election in November, we had to start cataloging sites by about August. But with just a few people on the team, how would we ever manage that and conduct usability test sessions?

Crowd-sourced research FTW

With 147 websites to catalog, if we could get helpers to do 5 websites each, we’d need about 30 co-researchers. Could we find people to give us a couple of hours in exchange for nothing but our undying gratitude?

I came to appreciate social networks in a whole new way. I’ve always been a big believer in networking, even before the Web gave us all these new tools. The scary part was asking friends and strangers for this kind of favor.

Fortunately, I had 320 new friends from a Kickstarter campaign I had run earlier in the year to raise funds to publish a series of little books called Field Guides To Ensuring Voter Intent. Even though people had already backed the project financially, many of them told me that they wanted to do more, to be directly involved. Twitter and Facebook seemed like good sources of co-researchers, too. I asked, and they came. Altogether, 17 people cataloged websites.

Now we had a new problem: We didn’t know the skills of our co-researchers, and we didn’t want to turn anyone away. That would just be ungrateful.

A good data collector, some pilot testing, and a little briefing

Being design researchers, we all wanted to evaluate the websites as we reviewed and cataloged them. But how do you deal with all those subjective judgements? What heuristics could we apply? We didn’t have the data to base heuristics on. And though Cyd, Ethan, Jared, and I have been working on website usability since the dawn of time, election websites are peculiar: not like e-commerce sites, and not exactly like information-rich sites, either. Heuristic evaluation was out of the question. Jared suggested — and here’s the twist — that we let the data speak for itself rather than evaluating the information architecture or the design. After we got over the idea of evaluating, the question was how to proceed. Without judgement, what did we have?

Simple data collection. It seemed clear that the way to do the cataloging was to put the words into a spreadsheet. The format of the spreadsheet would be important. Cyd set up a basic template that looked amazingly like a website layout. It had regions that reflected the different areas of a website: banner, left column, center area, right column, footer. She added color coding, instructions, and examples.

I wrote up a separate sheet with step-by-step instructions and file naming conventions. It also listed the simple set of codes to mark the words collected. And then we tested the hell out of it. Cyd’s mom was one of our first co-researchers. She had excellent questions about what to put where. We incorporated her feedback into the spreadsheet and the instructions, and tried the process and instruments out with a few other people. After 5 or 6 pilots, when we thought we’d smoothed out the kinks, we invited our co-researchers to briefing sessions through GoToMeeting and gave out assignments.
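
As a rough illustration only (these field names and codes are made up for this post, not the actual ones from our template), each cataloged item boiled down to a little record like this:

```python
# A made-up illustration of the kind of record each cataloged item
# produced. The real instrument was a color-coded spreadsheet laid out
# like a web page, not code.
from dataclasses import dataclass

REGIONS = {"banner", "left", "center", "right", "footer"}
CODES = {"L": "link", "H": "heading", "G": "graphic"}

@dataclass
class CatalogItem:
    county: str   # which county's site the item came from
    region: str   # where on the home page the words appeared
    code: str     # L, H, or G
    words: str    # the label text, copied verbatim

    def validate(self):
        assert self.region in REGIONS, f"unknown region: {self.region}"
        assert self.code in CODES, f"unknown code: {self.code}"

item = CatalogItem("Maricopa, AZ", "left", "L", "Where do I vote?")
item.validate()
```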

To our delight, the data that came back was really clean and consistent. And there were more than 8,000 data items to analyze.
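
With data that clean, the analysis could be mostly counting. A sketch of the kind of tally involved (the file name and the term list here are invented for illustration):

```python
# A sketch of the tallying the analysis involved: how often did key
# words appear, and in which regions of the page?
import csv
from collections import Counter

term_counts = Counter()
region_counts = Counter()

with open("catalog.csv", newline="") as f:
    for row in csv.DictReader(f):       # one cataloged item per row
        region_counts[row["region"]] += 1
        words = row["words"].lower()
        for term in ("register", "ballot", "polling place", "absentee"):
            if term in words:
                term_counts[term] += 1

print(term_counts.most_common())
print(region_counts.most_common())
```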

Lessons learned: focus, prepare, pilot, trust

It’s so easy in user research to just say, “Hey, we’ll put it in front of people and ask a couple of questions, and we’ll be good.” I’ve been a loud voice for a long time crying, “Just do it! Just put your design in front of users and watch.” This is good for some kinds of exploratory, formative research, where you’re early in a design.

But there’s a place, too, for a specific, tightly bounded, narrowly scoped, and thoroughly designed research study. We wanted to answer specific questions at scale. That takes a different kind of preparation from a formative study. Getting the data collection right was key to the success of the project.

To get the data collection right, we had to take out as much judgement as possible, for two reasons:

• we wanted the data to be consistently gathered

• we had people whose skills we didn’t know collecting the data

Though the findings from the study are fascinating (at least to me), what makes me proud of this project is how we invited other people in. It was not easy letting go. But I just couldn’t do it all. I couldn’t have gotten it done even with the help of Cyd and Ethan. Setting up training helped. Setting up office hours helped. Giving specific direction helped. And now 17 people own parts of this project, which means 17 people can tell at least a little part of the story of these websites. That’s what I want out of user research. I can’t wait to do something like this with a client team full of product managers, marketers, and developers.

If you’d like to see some stats on the 8,000+ data items we collected, check out the slide deck that Ethan Newby created that lays out when, where, and how often key words that might help voters answer their questions appeared on 147 county election websites in November 2012.

Sports teams drill endlessly. They walk through plays, they run plays, they practice plays in scrimmages. They tweak and prompt in between drills and practice. And when the game happens, the ball just knows where to go.

This seems like such an obvious thing, but we researchers often pooh-pooh dry runs and rehearsals. In big studies, it is common to run large pilot studies to get the kinks out of an experiment design before running the experiment with a large number of participants.

But I’ve been getting the feeling that we general research practitioners are afraid of rehearsals. One researcher I know told me that he doesn’t do dry runs or pilot sessions because he fears it would look to his team like he doesn’t know what he is doing. Well, guess what: the first “real” session ends up being your rehearsal, whether you like it or not, because you actually don’t know exactly what you’re doing — yet. If it goes well, you were lucky, and you have good, valid, reliable data. If it doesn’t, you’ve just wasted a lot of time and probably some money.

The other thing I hear is that researchers are pressured for time. On an Agile team, for example, everyone feels like they have to keep moving forward all the time. (This is an application development methodology in desperate want of thinking time, of just practicing craft.) The person doing the research this week has finite time. The participants are only available at certain times. The window for considering the findings closes soon. So why waste any of it rehearsing what you want to do in the research session?

Conducting dry runs, practice sessions, pilots, and rehearsals — call them whatever works in your team — gives you the superpower of confidence. That confidence gives you focus and relaxation in the session, so you can open your mind and perception to what is happening with the user rather than focusing on managing the session. And who doesn’t want the superpower of control? Or of deep insight? These things don’t come without preparation, practice, and poking at the protocol to improve it.

You can’t get that deep insight in situ if you’re worried about things like how to transfer the control of the mouse to someone else in a remote session. Or whether the observers are going to say something embarrassing at just the wrong time. Or how you’re going to ask that one really important question without leading or priming the participant.

The way to become one with the experience of observing the user’s experience is to practice the protocol ahead of time.

There are 4 levels of rehearsal that I use. I usually do all of them for every study or usability test.

  • Script read-through. You’ve written the script, probably, but have you actually read it? Read it aloud to yourself. Read it aloud to your team. Get feedback about whether you’re describing the session accurately in the introduction for the participant. Tweak interview questions so they feel natural. Make sure that the task scenarios cover all the issues you want to explore. Draft follow-up questions.
  • Dry run with a confederate. Pretending is not a good thing in a real session. But having someone act as your participant while you go through the script or protocol or checklist can give you initial feedback about whether the things you’re saying and asking are understandable. It’s the first indication of whether you’ll get the data you are looking for.
  • Rehearsal with a team member. Do a full rehearsal of all the parts. First, do a technical rehearsal. Does the prototype work? Do you know what you’re doing in the recording software? Does the camera on the mobile sled hold together? If there will be remote observers, make sure whatever feed you want to use will work for them by going through every step. When everything feels comfortable on the technical side, get a team member to be the participant and go through every word of the script. If you run into something that doesn’t seem to be working, change it in the script right now.
  • Pilot session with a real participant. This looks a lot like the rehearsal with the team member except for 3 things. First, the participant is not a team member, but a user or customer who was purposely selected for this session. Second, you will have refined the script after your experience of running a session using it with a team member. Third, you will now have been through the script at least 3 other times before this, so you should be comfortable with what the team is trying to learn and with the best way to ask about it. How many times have you run a usability test only to get to the 5th session and hear in your mind, ‘huh, now I know what this test is about’? It happens.

All this rehearsal? As the moderator of a research session, you’re not the star — the participant is. But if you aren’t comfortable with what you’re doing and how you’re doing it, the participant won’t be comfortable and relaxed, either. And you won’t get the most out of the session. Once you get into the habit of rehearsing, though, when it comes to game time you can concentrate on what is happening with the participant; the rehearsal steps become ways to test the test, rather than ways to test you.

There’s a lot of truth to “practice makes perfect.” When it comes to conducting user research sessions, that practice can make all the difference in getting valid data and useful insights. As Yogi Berra said, “In theory, there’s no difference between theory and practice. In practice, there is.”

Despite the real differences that come with aging, research has shown that in many cases we do not need a separate design for people who are age 50+. We need better design for everyone.

Everyone performs better on web sites where the interaction matches users’ goals; where navigation and information are grouped well; where navigation elements are consistent and follow conventions; where writing is clear, straightforward, in the active voice, and so on. And, much of what makes up good design for younger people helps older adults as well.

For example, we know that most users, regardless of age, are more successful finding information in broad, shallow information architectures than in deep, narrow hierarchies. When teams make their sites easier to use for older adults, all of their users perform better in usability studies. The key is involving older adults in user research and usability testing throughout design and development.

There are some important considerations in working with older adults in studies. Remembering the points below will ensure that you and your participants have a good experience and you get the data you need to inform design decisions.

Finding participants: Understanding older adults before you recruit

In many places in the world, older adults outnumber people in other age groups. The question is, how do you find the right people to take part in studies? They can be difficult to get to.

We found that these approaches did not work well:

  • Community web sites, message boards, or chat sessions. The oldest old tend not to take part in these groups, so posting ads in those places is not a fruitful way to find participants.
  • Senior centers and community colleges. These are places that offer classes in using computers. If you want computer and web novices for a study, they might be good places to find appropriate participants. They are not good sources if you want to observe people with enough web experience to see them working at a web site without teaching them.
  • Flyers at a senior center, when they did not make clear that we were recruiting for a study. Many older people are much more cautious and skeptical than younger people. They are often fearful of being cheated or “taken.” For example, we had put up flyers at a senior center from which we got no response; later we learned that people thought we might be trying to sell them something.
  • Cold calling from a database. This failed, probably again because older people are afraid that they may be scammed into buying something.

These ideas did work well:

  • Calling with a personal connection. If we could say that a mutual acquaintance had suggested the contact, potential participants were much more receptive to hearing about the study and considering taking part. It is important to establish credibility and trust with the potential participant.
  • Being careful in the initial call to say where we had gotten the contact information and that we weren’t selling anything.

Recruiting older adults

Recruiting participants in their 70s and 80s is more difficult than recruiting participants in their 50s. The oldest candidates are less receptive to strangers phoning them, and they don’t check email as frequently.

Recruiting by phone. Phoning is important. Plan to phone potential participants at least once (or have your recruiters do so). You need to quickly establish credibility and trustworthiness, to assure potential participants that you are not selling anything, and to establish a connection by letting them know where you got their names. When you can do that, potential participants are often glad to hear from a real person. It is also easier for them to determine legitimacy and to ask questions about the study on the phone. They’ll use your answers to help them decide whether they want to take part.

Another reason for phoning potential participants is so you can judge their English language skills and whether they are hearing impaired. (You may well want to include limited English speakers and hearing impaired users in your study; if you do, you want to be aware of these specifics about the participants before they come.)

Recruiting by email. Email can be very efficient for younger participants; it’s less so with adults in their late 60s and 70s and older. Give yourself more time for these older participants; they generally don’t check their email more than a couple of times per week. This happens for a variety of reasons: they don’t feel the need to check mail frequently, they use a computer at a senior center or a library, or they have limited time available through their Internet service provider.

As suspicious as older adults are of telemarketers, they are also vigilant about spam. If your email address is unknown to them and the subject line isn’t clearly descriptive, they may delete your message. Always use a very clear subject line.

Screening older adults

Screening older adults demands specificity. Many older users, when asked “What do you do online?”, answer “email.” They often don’t think of practical activities such as banking or paying bills online as “using the web.”

Many older users are also not as familiar with the language of the web as younger users are. They don’t distinguish between the Internet and the web. They don’t always know the difference between the web browser and the web page.

We found that self-reported data about frequency of use and numbers of hours spent online were not good indicators of proficiency, either. For example, we had one participant who spent 60 hours per week online. We didn’t find out until the session started that her sole use of the web was playing games on four web sites that her friend had set up as separate shortcuts on her desktop.

So, asking a variety of specific questions to gauge potential participants’ familiarity with the web can help whoever is recruiting for a study judge how well suited a person might be. Even if you’re looking for a mix of proficiency levels, you still have to be able to determine where in the range a potential participant fits.

Tech savviness matrices

An assessment we’ve found to work well asks about frequency of use across a broad range of the types of interactions older people take part in on the web.
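
As a rough illustration of how such a grid can work (the activities, frequency scale, and cutoffs below are made up, not our actual instrument), scoring might look like this:

```python
# A made-up sketch of scoring a tech-savviness grid: frequency of use
# across a range of activity types, summed into a rough proficiency band.
FREQUENCY_POINTS = {"never": 0, "monthly": 1, "weekly": 2, "daily": 3}

ACTIVITIES = [
    "email", "search", "news", "banking or bill paying",
    "shopping", "travel booking", "social networking",
]

def savviness_score(answers):
    """Sum frequency points across activities; unanswered counts as never."""
    return sum(FREQUENCY_POINTS[answers.get(a, "never")] for a in ACTIVITIES)

answers = {"email": "daily", "search": "weekly", "banking or bill paying": "monthly"}
score = savviness_score(answers)
band = "novice" if score <= 5 else "intermediate" if score <= 12 else "proficient"
print(score, band)  # 6 intermediate (cutoffs are illustrative only)
```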

Scheduling sessions with older adults

Scheduling sessions with older participants can present some logistical challenges that you might not think about in studies involving younger participants.

They arrive early. Because many older people are retired (or at least have ample free time), they almost always arrive for their sessions early – up to an hour early. Be sure to have someone to greet them and set up a comfortable place for them to wait.

They bring their spouses. Older participants often bring their spouses or a friend with them. They may have traveled some distance to get to the session; they may have planned activities for after their session; or they simply may not like driving alone. Have magazines, a phone, and a comfortable chair available for the spouse or friend.

They do best in the morning. Even though people in their late 60s, 70s, and 80s are vital and energetic, they usually have more—and better—attention to give earlier in the day. Try to schedule people who are in their late 60s, 70s, and 80s in the morning, and save any afternoon sessions for participants in their 50s or early 60s. We don’t recommend running evening sessions.

They don’t like driving in rush hour. If you are holding sessions in a central place (rather than meeting participants in their homes or workplaces), schedule the sessions outside of peak traffic times, if possible.

Reminding older adults of important points before they come

Reminders about one-on-one sessions. Participants can become nervous and uncomfortable if they realize after arriving that they will be the only participant in the session. Usability studies are still fairly new to the general population. Recruiting firms often recruit for focus groups, and participants who come through these firms often assume that they are coming to a focus group.

Reminders about videotaping and observations. Although it is good practice to ask permission for recording and for observers when you recruit, people tend to forget that they agreed. Make sure that the person who calls the participant to confirm the session also tells the participant that:

  • “You will be videotaped and observed by people you won’t be able to see during the session.”
  • “This is a one-on-one session. You will be the only participant in the study room with a moderator.”

Special reminders for older adults.

  • Computer glasses. Many participants will have special glasses for using the computer. So another important reminder is, “Don’t forget your computer glasses!”
  • Eat first. For long sessions (anything longer than 45 minutes), ask participants to make sure they eat before they arrive. Because many participants expect to take part in focus groups rather than individual sessions, they also expect to be fed. If you have snacks available, try to have fruit and nuts or other relatively healthy food; many older participants are diabetic.

Working with older adults during sessions

Many older participants won’t know what to expect coming into a usability study session. Be clear in setting their expectations and be firm but polite about keeping the session focused on what you’re trying to find out.

Make participants comfortable. Be respectful without being patronizing. You can be a neutral moderator but still be polite. “Please” and “thank you” are important. Many older adults expect more statements of politeness like these than younger participants do.

Older participants also deserve extra consideration, politeness, and detailed information about the session. They will feel more comfortable if they know what to expect up front:

  • Clearly explain the session plan, timing, and what they can expect.
  • Warn participants that you’ll interrupt them and that you may stop them before they have completed tasks.
  • Schedule breaks for long sessions (and tell them they can take breaks whenever they need to).
  • Have them practice thinking aloud.
  • Consider including a practice task to help participants understand how the session will work.
  • Take account of beliefs that participants may have learned or created about how to work with computers.
  • Remember that older participants often are not versed in computer and web terminology, so avoid using this jargon when working with them.
  • Be extra patient with older participants; wait longer than you normally might to prompt; consider giving participants permission to ask for hints.
  • If participants stop talking, consider letting them continue that way, and ask them to reflect on the task later.
  • Teach participants something at the end of the session.

Keep them on track tactfully. Most of the participants we’ve had in sessions are interesting, charming, and very talkative. Many older participants have a lot of stories to tell. Their stories say a lot about who they are and where they have been — and they often provide context for interpreting data.

But it may be easy for participants to get off track during the session, and while it may feel awkward or mean sometimes, it is the moderator’s job to keep the participant focused on the task, talking about it, and getting data for the study. This is the main reason for warning participants in the introduction to the session that you may interrupt them and that you may stop tasks before they’ve completed them.

Listen for their beliefs about computers and the web. Many people who are in their late 60s and older never used computers at work. This means they have no previous experience from which to make inferences about how a computer or an application might work. Many learn how to use computers and the Internet through friends, family, and neighbors. They inherit the superstitions and myths that those people have developed to help themselves work around problems. Then the older adults bring these myths into sessions with them, and you’ll hear them surface as task-solving strategies and workarounds. It’s important to capture these; they are part of the users’ reality, and we have to deal with these beliefs when we design web sites.

Be careful of the words you use; avoid computer jargon. Older computer users rarely know much about computer-related terminology, so you should avoid using these terms during your sessions. Older participants often don’t know the names of widgets such as drop-down boxes or cascading menus. Most of our participants also had little knowledge of web-related terminology. For example, they weren’t sure about terms such as “link,” “URL,” and “login.” Many were unclear about the meanings of “online community” and “message boards.” “Browsing” wasn’t always meaningful in the context of a feature called “browse by topic.” The word “emoticon” and the concept behind it were completely foreign to most of our participants. This means that you must pay close attention to what participants do and point at on the screen or device.

Give them time. Older participants almost always take longer to do tasks than younger participants. And although they may seem to struggle, the oldest participants also expect using the computer to be difficult. Plan for tasks to take longer for older participants than they would for younger participants—up to 25 percent longer, in our experience. For example, a task you would budget 8 minutes for with younger participants may need 10 minutes with participants in their 70s and 80s.

Help participants understand the time constraints of the session by explaining the session format in your introduction. Also, wait longer to prompt than you normally might. You might also consider giving participants permission to ask for hints when you introduce the session.

If necessary, hold the think-aloud and ask participants to reflect later. A classic technique in usability testing is to ask participants to think aloud while they work through tasks toward a goal. When tasks become complex or difficult, participants may stop talking. Use your best judgment about nudging them to tell you what they’re thinking. For some participants with short-term memory loss or other cognitive impairment (such as that caused by pain medication), asking for their thoughts may interrupt the task enough to cause errors. In those cases, you may get more usable data without the think-aloud protocol by asking participants to reflect afterward.

Don’t lead even when you want to. If, as a session moderator, you have a soft spot in your heart at all for participants, working with older participants will exercise that spot a lot. You may be tempted to give hints; worse, you may lead them in ways you don’t intend. Be patient and firm but polite while keeping to your agenda.

When appropriate, teach something at the end. If the session has been difficult for the participant, or, if there is some small thing that would make using the computer or the web easier, take a little time at the end of the session to teach the participant something. For example, show participants how to change the text size in their browsers and shortcuts for copying and pasting and printing.

Including older adults in user research and usability studies: Older, wiser, wired

Older adults don’t behave differently from younger people online. Thinking of old age as nothing more than a collection of disabilities is old business. The new world of designing for older adults is about creating web sites and other technology that are useful and desirable as well as accessible to the broadest range of users. Older adults as a cohort are living longer than their parents because they’re healthier, and many will be affluent because they’ve been saving up for a lifetime – which means they have time, money, and motivation to be online.

Neither a monolithic view of older adults nor an entirely separate design for older adults is necessary. Younger designers developing web sites for older adults need to learn more about older adults’ life experiences. For example, many older adults don’t perceive themselves as old. And so, all technology design – not just designs for older adults – should involve users.

For most teams, the moderator of user research sessions is the main researcher. Depending on the comfort level of the team, the moderator might be a different person from session to session in the same study. (I often moderate the first few sessions of a study and then hand the moderating over to the first person on the design team who feels ready to take over.)

To make that work, it’s a good practice to create some kind of checklist for the sessions, just to make sure that the team’s priorities are addressed. For a field study or a formative usability test, a checklist might be all a team needs. But if the team is working on sussing out nuanced behaviors or solving subtle problems, we might want a bit more structure.

A couple of the teams I work with ensure that everything is lined up and that *anyone* on the team could conduct the sessions by creating detailed scripts that include stage direction.
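
As an illustration, an excerpt of what such a script can look like (the wording here is invented, not from an actual study):

    [Walk the participant from the waiting area to the study room; observers already muted]
    “Thanks for coming in today. Before we start, I’ll read a short description of what we’ll do in this session.”
    [Hand the participant the consent form; wait for the signature before starting the recorder]
    “Remember, we’re testing the design, not you. I may interrupt, and I may stop a task before you finish.”
    [Start the screen recorder and confirm that observers can see the feed before the first task]
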
Whether the team is switching up moderators or the same person is conducting all the sessions, creating a script for the session that includes logistics is a good idea. It helps the team:

  • think through all the logistics, ideally together
  • make sure the sessions are conducted consistently, from one to the next
  • back up the main researcher in case something drastic happens — someone else could easily fill in

Logistics rehearsal

When you walk through, step by step, what’s supposed to happen during a session, it helps everyone visualize the steps, the pacing, and who should be doing what. My client teams use the stage direction in the script as a check to make sure everything needed to reach the objectives of the sessions is covered. It’s also a good way to review what tools, data, and props you might need.

Estimate timing

Teams often ask me about timing. Once they get through a draft of a script that includes stage directions, they quickly develop a solid feel for what is going to take how long. From this they can assign timing estimates and decide whether they want participants to keep going on a task after the estimated time is reached or be redirected to the next task.
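
A back-of-the-envelope check of a draft script might look like this sketch (the task names and minute estimates are invented):

```python
# A made-up example of sanity-checking session timing against a draft
# script with stage directions.
tasks = [
    ("intro, consent, and warm-up", 8),
    ("first task", 10),
    ("second task", 10),
    ("follow-up questions and debrief", 7),
]

session_minutes = 30
total = sum(minutes for _, minutes in tasks)
print(f"planned {total} min against a {session_minutes}-min slot")

if total > session_minutes:
    # Decide now which tasks are time-boxed (redirect when time is up)
    # and which must run to completion, rather than deciding mid-session.
    print("over budget: mark tasks as time-boxed or must-finish")
```
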
Mapping out location flow

It’s easy to overlook the physical or geographic flow – what a director would call blocking – of a session. Where does the participant start the session? In a waiting room, at her desk, or somewhere else? Will you change locations within a room or building during the session? How do you get from one place to the next?

Consistency and rigor

Including stage directions in a script for a user research session can help reviewer-stakeholders understand what to expect. More importantly, the stage directions act as reminders to the moderator, so she’s doing the same things with, and saying the same things to, every participant in the study. This means nothing gets left out inadvertently and nothing gets added that wasn’t agreed on ahead of time. (For example, the team could identify some area to watch for and put a prompt in the script for the moderator to ask follow-up questions that are not specifically scripted, depending on what the participant does.)

Insurance

Any really good project manager is going to have a Plan B. With a script that includes detailed stage directions, anyone who has been involved in the planning of a study should be able to pick up the script and moderate a session. The people I worked with at Tec-Ed called this “the bus test” (as in, if you get hit by a bus, we still have to do this work).

Some teams I work with want to spread out and run simultaneous sessions. The stage directions can help ensure consistency across moderators. (Rehearse and refine if you’re really going to do this.)

Finally, when it comes time to write the report about the insights the team gained, the script — with its stage directions — can help answer the questions that often come up about why things were done the way they were done or why the data says what it says.

Stage it

Each person in a session is an actor, whether participant or observer. The moderator is the director. If the script for a study documents not only the words to say but also directions for all the actors and the director, everyone involved will give a great performance.

There are a bunch of things to do to get ready for any test besides designing the test and recruiting participants.

  • make sure you know the design well enough to know what should happen as the participant uses it
  • copy any materials you need for taking notes
  • copy all the forms and questionnaires for participants, including honorarium receipts
  • organize the forms in some way that makes sense for you (I like a stand-up accordion file folder, in which I sort a set of forms for each participant into each slot; I stand up the unused sets, and when they’ve been filled out, they go back in on their sides)
  • check in with Accounting, or whoever handles the money, about honoraria or goodies for giveaways
  • get a status report from the recruiter
  • double-check the participant mix
  • make sure you have contact information for each participant
  • check that you have all the equipment, software, or whatever else you need for the participant to be able to do tasks
  • run through the test a couple of times yourself
  • double-check the equipment you’re going to use (I use a digital audio recorder, so I need memory sticks for that, along with rechargeable batteries)
  • charge all the batteries
  • double-check the location

Which gets us to where you’re going to do the sessions. But let’s talk about that later.