Ten User Testing Bad Habits

User testing is a great way to get feedback from actual users and customers about your product. Whether you are new to user testing or a seasoned testing professional, you want to get the most out of your user testing research. It’s easy to fall into bad habits, though, that make your testing time-consuming, ineffective, or expensive. So here are 10 bad habits to watch out for when you are doing user testing:

#10 Skip the pilot
A pilot test is a user test you run before you run your “real” test. You run a pilot so that you can try out your prototype and your instructions; it’s a trial run for everything. Then you can make any necessary changes before you run the real test. Sometimes a pilot goes without a hitch and it can be easy to say the next time, “Oh, maybe I’ll skip the pilot.” Don’t skip the pilot! Otherwise you may have to redo the whole test. Pilots are fast, inexpensive, and worth doing.

#9 Draw conclusions from early and insufficient data
People get excited when results start coming in, but don’t start changing things after one or two participants. You’ve got to wait and see what everyone does or doesn’t do before you start making decisions. And watch out for confirmation bias: deciding that you know what’s going on after two participants and then ignoring other data that comes in later.

#8 Test too many people
If you are used to quantitative measures you might be used to running studies with large numbers of people. But user testing is often qualitative rather than quantitative (there are exceptions). If you aren’t running statistical analyses, you don’t need lots of people. Seven to 10 people (per cell, see #7 below) will get you the data you need most of the time.

#7 Too many cells
A cell is the smallest unit of categorization for a user test. Let’s say that you want to run your test on men and women, and you want to be able to draw conclusions about differences between men and women. That means you have 2 cells, one for men and one for women, and you need to run 7 people per cell. Now you already have 14 people. Next you decide to compare younger people with older people, so now you have 4 cells of 7 each. Then you add current customers vs. non-customers…. You can see that this is headed toward too many people (the sketch below shows how fast the count multiplies). The mistake here is confusing cells with variation. I can have just one cell of 10 people, and within that cell I can make sure that I have some men, some women, some older people, some younger people. These groups only have to be separate cells if I am going to draw conclusions about those variables. If I just want variation, but don’t need conclusions about the variability, then I don’t need all those cells.
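If you want to see the arithmetic, here is a minimal sketch in Python (the per-cell number is just the rule-of-thumb figure from above, and the groupings are the hypothetical ones from this example):

```python
# Sketch: participant counts grow multiplicatively when every variable you
# want separate conclusions about becomes its own cell.
PEOPLE_PER_CELL = 7  # rule-of-thumb figure from the example above

def participants_needed(levels_per_variable):
    """Return (cells, total participants) if each combination of
    variable levels is treated as a separate cell."""
    cells = 1
    for levels in levels_per_variable:
        cells *= levels
    return cells, cells * PEOPLE_PER_CELL

print(participants_needed([2]))        # men/women -> (2 cells, 14 people)
print(participants_needed([2, 2]))     # + younger/older -> (4 cells, 28 people)
print(participants_needed([2, 2, 2]))  # + customer/non-customer -> (8 cells, 56 people)

# Contrast: if you only need variation (not per-group conclusions), a single
# cell of about 10 people that mixes all of these groups is enough.
```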

#6 Do a data dump
When you conduct a user testing study you are familiar with everything: the instructions, the tasks, the results. You may not realize that if you just hand the raw data and the video recordings to someone else, they may be overwhelmed. You need to summarize the results, draw conclusions, and present a cohesive summary to your team and stakeholders, not just hand them a lot of data.

#5 Too much stock in after-test surveys
People have bad memories, and they also tend to inflate ratings on a survey. Watch out for putting too much stock in a survey you give participants after the task portion of the test.

#4 Test too late
Don’t wait until the product is fully designed to test. You can test wireframes, prototypes, and even sketches.

#3 Skip over surprises
Some of the best data from user testing comes not from the tasks themselves, but from the places people wander off to, or the off-hand comments they make about something you weren’t even officially testing. Don’t ignore these surprises. They can be extremely valuable.

#2 Draw conclusions not supported by the data
You have to interpret the data, but watch out for drawing conclusions (usually in line with your pet theories) that really aren’t supported by the data.

#1 Skip the highlights video
A picture is worth a thousand words, and a video is worth even more. These days highlight videos (made up of video clips) are easy to assemble using tools (for example usertesting.com). Highlight videos are much more persuasive than just telling people what happened. Make a habit of creating video clips the first time you watch the session recordings. Then you don’t have to go through them again to create a highlights video.
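If you are working with exported clip files rather than a hosted tool’s built-in clip editor, a short script can stitch them together. Here is a minimal sketch using the moviepy Python library (just one way to do it; the file names and timestamps below are made up for illustration):

```python
# Sketch: stitch a few exported session clips into one highlights video.
# Assumes the moviepy 1.x API; file names and timestamps are hypothetical.
from moviepy.editor import VideoFileClip, concatenate_videoclips

# (source file, start seconds, end seconds) for each moment worth showing
moments = [
    ("participant_01.mp4", 125, 160),  # struggles to find the search box
    ("participant_04.mp4", 410, 440),  # off-hand comment about pricing
    ("participant_06.mp4", 75, 100),   # gives up on the sign-up form
]

clips = [VideoFileClip(path).subclip(start, end) for path, start, end in moments]
concatenate_videoclips(clips).write_videofile("highlights.mp4")
```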

What do you think? Do you have any bad habits to add to the list?

————

If you are interested in learning more about user testing, consider these two courses:

User Testing: The What, Why and How, an in-person workshop I’m teaching in San Francisco on July 31, 2014

and

an online video course on User Testing.


User Testing In The Spotlight

With Lean UX all the rage (deservedly, in my opinion; see my recent slideshare on Lean UX), user testing, an important part of the Lean UX process, is getting even more popular. If you need to convince someone in your organization that user testing is important (not just important, but CRITICAL), try the video below. It’s the introduction video to my User Testing course. And if you are interested in the course, I’ll be teaching it in San Francisco on July 31, 2014. Bring the whole team! If you are already convinced of how important usability testing is, then stay tuned for an upcoming blog post on Bad Usability Testing Habits To Avoid.


Why I’m Still In Love With User Testing

I’ve been doing user testing for (I’m afraid to admit) decades. And I still love it. It’s a great way to get feedback from people about how effective your design, your product, and your assumptions are.

In these days of Lean everything, you can’t beat user testing as one of the best Lean UX techniques for testing your assumptions.

Here’s a short video on Why You Need To Do User Testing. It’s the first lesson in our newest online video course on User Testing. 

Do you know someone who needs to see this video?!


New Blog and Website Design

Picture of new Weinschenk website homepage

You may have noticed that our blog page looks different than usual. That’s because we’ve switched to a new theme. You may or may not have noticed that the menu bar on the blog has also changed; the only link now is to the Weinschenk Institute website. We’ve not only changed the blog, we’ve launched a new website too. We’re in the middle of some user testing of the new site and the blog, and we’re still iterating on the design and the content. We welcome comments if you have any feedback on either the blog or the website.

The new designs for both the blog and the website are “responsive” designs, with an emphasis on tablet and small-screen use.

Let us know what you think, either in the comments here or by sending me an email: susan@theteamw.com


How To Test A Web Site Design In An Hour And On a Shoestring Budget

I have a friend who volunteers to be on an advisory board for a land trust conservancy organization. They have been designing a web site for the land trust. But they are all volunteers, and the organization doesn’t have a budget for web site design. They have a programmer donating her time to put together the website.

Can you get user feedback when the site doesn’t even exist yet? — My friend’s background is in usability, and she was concerned that the web site the programmer was putting together had usability problems. But the group had virtually no budget to do user-centered design or to get user feedback on the prototype. And all she had were some pictures of a draft of some of the pages. For example, here’s what she had for the home page:

Picture of home page for Conservancy site

The menus didn’t “work” because it was just a picture, so she put together this page showing what would be in the drop-downs if you clicked on the main navigation on the home page:

Picture of home page with drop down menus

Continue reading “How To Test A Web Site Design In An Hour And On a Shoestring Budget”

Book Review of Steve Krug's Rocket Surgery Made Easy

I’ve been a fan of Steve Krug’s since his original book, Don’t Make Me Think, came out about a decade ago. (And Steve was kind enough to write an endorsement for my book, Neuro Web Design: What makes them click? when it came out last year).

Steve’s new book is all about user testing of web sites (or software, or products, or anything really). The premise of the book is that ANYONE can conduct a simple user test and that EVERYONE who has a website, software, or a product should conduct user testing.

So the book is a DIY guide to simple, but effective, user testing.

Here’s my review via video, and below that I’ll summarize the take-aways:

What I like most about the book:

It’s very thorough — This really is everything you need to know to conduct an informal usability test.

Useful checklists — Chapter 7 is called “Some boring checklists” and it has great (not boring) checklists of what to do and when to do it.

All the wording and scripts you need — Chapter 8 gives you all the details you need, for example what to say as the facilitator, and what your consent form should contain. You get the actual forms and scripts.

How to interpret the data you get — Chapters 11 and 12 tell you what to do now that you’ve run the user tests and you have information.

How to think about the results — One of my favorite chapters is #10, where he walks you through how to have a meeting with your team and decide what actions to take based on the feedback you got during the test.

Link to an example video — In the book Steve gives you a URL to watch a video. The video is Steve conducting a user test with a real user. He annotates the video with some call outs so you can learn what he is doing as he goes along.

It’s a great book and I recommend it for anyone who has anything to do with designing or improving a website, software, or a technology product that people use. Whether you are new to user testing or a pro with many years under your belt, you will find this book to be of immense value.

If you’d like to read more about it on Amazon, here’s a link (affiliate):


How To Save Money And Time On User Testing: Run Multiple, Iterative Pilots

In my last blog post I reported on a study I recently conducted about differences between men and women in what they planned to purchase online for Valentine’s Day (see Who is the Most Romantic). I used UserTesting.com (affiliate) to collect the data, and while I was doing the study I had an interesting insight about running user tests.

Brief description of the service I used — In case you don’t know Usertesting.com, it’s a service that lets you run what is called an un-moderated user test. Un-moderated means you are not there to moderate or facilitate the test session. You set up the test scenario and specify the web site and tasks you want the user to do by entering this information into a form at the Usertesting.com site. Then the Usertesting.com people recruit the users you have specified (meaning they post the test to their database of already-screened people), provide the scenario and tasks to each user, and record each user’s interactions with the web site or sites. You get a notification that your test results are available, and then you can watch the video and audio of each user session.

It’s very easy to set up and run user tests this way. If you are skilled at writing scenarios and tasks, it takes literally a minute to set up and run a test. It usually takes about 2-5 minutes for users to see the test post and start the test, and I have found that within 20-30 minutes videos are ready for you to watch.  Nice, right?

Running my first pilot for the study — When you fill out the form to set up the test, you get to pick how many people you want to run in the test. The first time I set up the test, I decided to run just one person. I wanted to make sure I had the wording right in my scenario and tasks. In other words, I was doing what is called a pilot test: running a test where I would throw away the data, just to see if my scenario and instructions were clear and would get me the type of data I needed.

Why run a pilot anyway? — Running a pilot is always recommended when you are doing user testing, but I’ve seen lots of people skip this step. When you are doing “regular” moderated user testing (i.e., you are there in person, you’re renting a facility, you are paying money to recruit users, and you are paying the users an incentive), it’s expensive to run a pilot test. You should still do it, but I would say that fewer than 50% of the people I know even run a pilot test.

But with the Usertesting.com system it’s easy and fast and not very expensive to run a pilot. The entire cost is $39 per user – for everything, so why not run a pilot?

How I came to run multiple pilots — In my test last week I ran a pilot and found that certain wording in my task instructions was sending people off in a direction that wasn’t useful for the data I was interested in. I changed the wording of the instructions and ran the pilot again. Still not quite right, so I modified it a little more. I ran 4 pilots before I was convinced that the wording was clear and that the test would measure what I actually wanted to test. Then I used that wording to run the real test.

How about running un-moderated pilots before a moderated study? — Now I was sure that the data coming in would be useful and valid, and not just a reflection of some wording or instructional error in my tasks. By spending an hour or two running the pilots, I could be sure that the actual test results would be meaningful. It dawned on me that this ability to run multiple, iterative pilots is really powerful. I usually run one pilot, but I’d never been able to run multiple, iterative pilots. In fact, I’ve decided this is so powerful that in the future, even when I am conducting “regular” moderated user testing, I plan to first run a series of un-moderated pilots with Usertesting.com to test out my scenario and task instructions.

Contest to give away a free test session — To encourage comments on this blog, I’m giving away a promotional code that the Usertesting.com folks have given to me. You can use the code to run one free test for one user at usertesting.com. I’m going to run a little contest here and give the promotional code to the person who writes the best comment on this post (I get to decide which is the “best” comment). So, what do you think? Do you do user testing? Do you run pilot tests when you do? Have you used Usertesting.com in this way?



10 Ways To Get User Feedback

Recently I was talking to someone who is relatively new to the field of usability and user experience. He had developed a web application and wanted some ideas for getting feedback from users. He commented that he was planning on sending out a survey to users to see what they thought about the web application. That was his plan for user testing. I’m so entrenched in the concept of usability and user testing that I have to stop sometimes and remember that not everyone else is.

“Well, you do have other choices besides doing a survey, you know,” I said.

“Oh, really?” he asked. “Like what?”

“I’ll send you some ideas,” I replied. And then I thought, “That would make a good blog post,” and here we are.

1. “Traditional” moderated usability test – Let’s start with the most well-known and most-used method of getting feedback from users. In a moderated usability test the user sits down in front of the software, web site, web application, or other product that you are testing and uses it to get one or more tasks done. The tester designs the test around real-life scenarios and asks the user to use the product, tool, or site to actually work through the scenarios. The user is asked to talk out loud while completing the scenarios, so that the tester can understand what they are thinking and experiencing as they complete the activities they have been asked to do. It’s called moderated because there is a facilitator to moderate the testing.

It’s important in a moderated usability test that:

  • Users must be representative of the actual users. It doesn’t work to use yourself, the friend in the next cubicle, or your sister. The idea is to have a representative user try to use the site or product to get real tasks done.
  • Although you may be collecting other data, such as time to complete the task or the number and types of errors made, the main data comes from the comments users make while they are working (called the “think aloud” technique).
  • Tests are done one-on-one. This isn’t a focus group.
  • Some facilitators “probe” with questions during the test, but this is tricky to do. You don’t want your questions to influence the user. Some facilitators wait until the tasks are completed before asking questions (called “debriefing”).

Pros: Gives you lots of great data on what the usability issues are

Cons: Fairly expensive to conduct. You do these one at a time, so if you are testing 10 users, that’s a lot of your time spent planning the sessions, attending them, and analyzing and reporting on the data. You may also need to pay to recruit users, and you need to give them an “incentive” (pay them in some way with cash, gift certificates, etc.). Continue reading “10 Ways To Get User Feedback”