Alfonso de la Nuez On The Role Of User Research In The Future

Alfonso de la Nuez started in the field of usability with his small services company in Spain and ended up in California co-founding the user research software firm UserZoom. Last year UserZoom customers conducted more than 12,000 user research projects.

In this episode of the Human Tech podcast we talk with Alfonso, the CEO of UserZoom, about the current state and likely future of user research and testing, what makes user research successful inside a large enterprise, and much more.

You can check out UserZoom here, and you can email Alfonso at alfonso@userzoom.com.


Human Tech is a podcast at the intersection of humans, brain science, and technology. Your hosts Guthrie and Dr. Susan Weinschenk explore how behavioral and brain science affects our technologies and how technologies affect our brains.

You can subscribe to the HumanTech podcast through iTunes, Stitcher, or wherever you listen to podcasts.

Steve Fleming-Prot: The Experience Of Designing An Experience

In this episode of the Human Tech podcast we talk with Steve Fleming-Prot. Steve has been designing complex user interfaces and experiences for decades and is now a Senior UX Research Consultant at UserTesting. In this episode we talk about the details of what happens when you are designing a user experience, and we also talk about his “conversion” from a moderated user tester to an unmoderated test planner.


Human Tech is a podcast at the intersection of humans, brain science, and technology. Your hosts Guthrie and Dr. Susan Weinschenk explore how behavioral and brain science affects our technologies and how technologies affect our brains.

You can subscribe to the HumanTech podcast through iTunes, Stitcher, or wherever you listen to podcasts.

Why We Still Love User Testing

User testing as a way to get feedback from people about a product is still going strong. In this episode Susan quizzes Guthrie about the what and how of user testing, and shares some of her fun and more memorable moments from the hundreds of tests she’s conducted.


HumanTech is a podcast at the intersection of humans, brain science, and technology. Your hosts Guthrie and Dr. Susan Weinschenk explore how behavioral and brain science affects our technologies and how technologies affect our brains.

You can subscribe to the HumanTech podcast through iTunes, Stitcher, or wherever you listen to podcasts.

Ten User Testing Bad Habits

User testing is a great way to get feedback from actual users/customers about your product. Whether you are new to user testing or a seasoned testing professional, you want to get the most out of your user testing research. It’s easy to fall into bad habits, though, that make your testing time-consuming, ineffective, or expensive. So here are 10 bad habits to watch out for when you are doing user testing:

#10 Skip the pilot
A pilot test is a user test you run before you run your “real” test. You run a pilot so you can try out your prototype and your instructions – it’s a trial run for everything. Then you can make any changes necessary before you run your “real” test. Sometimes a pilot goes off without a hitch, and it can be easy to say the next time, “Oh, maybe I’ll skip the pilot.” Don’t skip the pilot! Otherwise you may have to redo the whole test. Pilots are fast, inexpensive, and worth doing.

#9 Draw conclusions from early and insufficient data
People get excited when results start coming in, but don’t start changing things after 1 or 2 participants. You’ve got to wait to see what everyone does or doesn’t do before you start making decisions. And watch out for the confirmation bias – deciding that you know what’s going on after 2 participants and then ignoring other data that comes in later.

#8 Test too many people
If you are used to quantitative research you might be accustomed to running studies with large numbers of people. But user testing is often qualitative rather than quantitative (there are exceptions). If you aren’t running statistical analyses, you don’t need lots of people. 7 to 10 people (per cell, see #7 below) will get you the data you need most of the time.

#7 Too many cells
A cell is the smallest unit of categorization for a user test. Let’s say you want to run your test on men and women, and you want to be able to draw conclusions about differences between men and women. That means you have 2 cells, one for men and one for women, and you need to run 7 people per cell, so you already have 14 people. Next you decide to compare young people versus older people, so now you have 4 cells of 7 each. Then you add current customers vs. non-customers…. You can see where this is headed: too many people. The mistake here is confusing cells with variation. I can have just one cell of 10 people and, within that cell, make sure I have some men, some women, some older people, and some younger people. A group only needs to be a separate cell if I am going to draw conclusions about that variable. If I just want variation, but don’t need conclusions about the variability, then I don’t need all these cells.
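If it helps to see the arithmetic, here is a quick sketch (just an illustration, not part of any tool): each variable you want separate conclusions about multiplies the number of cells, and therefore the number of participants. The 7-per-cell figure is the rule of thumb from #8 above; the function and variable names are made up for this example.

```python
# Illustrative sketch: each variable you want to draw separate conclusions
# about multiplies the number of cells, and each cell needs its own participants.
# Assumes roughly 7 participants per cell (the rule of thumb above).

PARTICIPANTS_PER_CELL = 7

def participants_needed(levels_per_variable):
    """levels_per_variable lists the levels of each variable you want
    separate conclusions about, e.g. [2] for men vs. women."""
    cells = 1
    for levels in levels_per_variable:
        cells *= levels
    return cells, cells * PARTICIPANTS_PER_CELL

print(participants_needed([2]))        # men vs. women                  -> (2, 14)
print(participants_needed([2, 2]))     # add younger vs. older          -> (4, 28)
print(participants_needed([2, 2, 2]))  # add customer vs. non-customer  -> (8, 56)
```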

#6 Do a data dump
When you conduct a user testing study you are familiar with everything: the instructions, the tasks, the results. You may not realize that if you just hand the data and the video recordings to someone else, they may be overwhelmed. You need to summarize the results, draw conclusions, and present a cohesive summary to your team and stakeholders, not just hand them a lot of data.

#5 Too much stock in after-test surveys
People have bad memories, and they also tend to over-inflate ratings on a survey. Watch out for putting too much stock in a survey that you give them after the task portion of the test.

#4 Test too late
Don’t wait until the product is fully designed to start testing. You can test wireframes, prototypes, and even sketches.

#3 Skip over surprises
Some of the best data from user testing comes not from the tasks themselves, but from the places where people wander off, or the offhand comments they make about something you weren’t even officially testing. Don’t ignore these surprises. They can be extremely valuable.

#2 Draw conclusions not supported by the data
You have to interpret the data, but watch out for drawing conclusions (usually based on your pet theories) that aren’t really supported by the data.

#1 Skip the highlights video
A picture is worth 1,000 words and a video is worth even more. These days highlight videos (made up of video clips) are easy to assemble using tools (for example, usertesting.com). Highlight videos are much more persuasive than just telling people what happened. Make a habit of creating video clips the first time you watch the videos. Then you don’t have to go through them again to create a highlights video.

What do you think? Do you have any bad habits to add to the list?

————

If you are interested in learning more about user testing consider these two courses:

User Testing: The What, Why, and How, an in-person workshop I’m teaching in San Francisco on July 31, 2014

and

an online video course on User Testing.


User Testing In The Spotlight

With Lean UX all the rage (deservedly in my opinion — see my recent slideshare on Lean UX), user testing (an important part of the Lean UX process) is getting even more popular. If you need to convince someone(s) in your organization that user testing is important — well, not just important but CRITICAL — try this video below.  It’s an introduction video to my User Testing course. And if you are interested in the course I’ll be teaching it in San Francisco on July 31, 2014. Bring the whole team! If you are already convinced about how important usability testing is, then stay tuned for an upcoming blog post on Bad Usability Testing Habits To Avoid.


Why I’m Still In Love With User Testing

I’ve been doing user testing for (I’m afraid to admit) decades. And I still love it. It’s a great way to get feedback from people about how effective your design, your product, and your assumptions are.

In these days of Lean everything, you can’t beat user testing as one of the best Lean UX techniques for testing your assumptions.

Here’s a short video on Why You Need To Do User Testing. It’s the first lesson in our newest online video course on User Testing. 

Do you know someone who needs to see this video?!


New In-Person Courses

There is something about September that makes us all want to sharpen our pencils and learn something new. And so we are very excited to announce the launch of 4 brand new in-person classes that we are offering in Chicago, San Francisco, St. Louis, Edgar, WI, and Washington DC. I’m teaching some of them, and we also have some of the BEST and most experienced instructors teaching others.

One-day courses:

  • Don’t Guess: Test! The Why, What, and How of User Testing (Chicago, St. Louis, San Francisco, Washington DC)
  • How to Design Intuitive and Usable Products Through User Research (St. Louis, San Francisco, Washington DC)
  • The Science of Persuasive and Engaging Design (Chicago and San Francisco)

3-day Workshop:  Weinschenk Behavioral Science Workshop (Edgar, WI)

You can see a list of all the courses, dates, instructors, and locations at the Weinschenk Institute website, and link to detailed outlines.

To celebrate the course launch I’m writing a series of blog posts to highlight each course.

So here are 3 reasons why I’m excited about the Don’t Guess: Test! The Why, What, and How of User Testing course:

1) We’re partnering with UserTesting.com and they are going to provide us with FREE tests to use in the class, as well as giving every participant 3 FREE User Test Coupons to use after the class when you go back to your office to apply what you’ve learned.

2) With the free in-class test, you will conduct an actual user test during the course. You will decide what to test, who to test, write the test scenario and tasks, conduct the test, and watch the video.

3) Always wondered what to test? In this course you will learn how to write a usability specification and use that to guide you in deciding who to test, what to test, and what the success criteria should be. And you can use these usability specifications not only in user testing, but also in design.

The “Don’t Guess: Test!” course is intensive, hands-on, interactive, and fun.

Text me at 847-909-5946 or send me an email at susan@theteamw.com by September 30, 2013, and I will give you a code to use during registration that will get you 25% off the course fee.

In the next post I’ll talk about the How to Design Intuitive and Usable Products Through User Research course.

100 Things You Should Know About People #79 — People of Different Ages Have Different Error Strategies


Let’s say you study two people using a smartphone that has an advanced still and video camera. One is 22 years old, and the other is 47 years old. Neither of them has used this smartphone/camera before. You give them a set of tasks to do. Will there be a difference between them? Will they both be able to complete the tasks? Will they make the same mistakes? Neung Kang and Wan Yoon (2008) conducted a research study to look at the types of errors both young and older (not very old, but older) adults make when learning how to use new technologies. In their study they identified and tracked different error strategies:

Systematic exploration — When people use systematic exploration, it means that when they make a mistake they stop and think about what procedure they are going to use to correct the error. For example, let’s say a user is trying to figure out how to email a picture with the smartphone/camera. She tries one menu and that doesn’t work, so now she sets out to see what each item in the menu system does for the camera part of the device. She starts at the first item in the first menu and works her way through all the choices in the part of the product controls having to do with the camera. She is systematically exploring.

Trial and error — In contrast to systematic exploration, trial and error means that the person is randomly trying out different actions, menus, icons, and controls.

Rigid exploration — If someone does the same action over and over, even though it does not correct the error, that is called rigid exploration. For example, the person is trying to send a picture via a text message, presses a button, and gets an error. She then chooses the picture again and presses the button again. She keeps repeating this combination of actions, even though it doesn’t work.


100 Things You Should Know About People: #35 — People Make Mistakes

I used to collect computer error messages. It was kind of a hobby. I’ve got a great collection of them, some of them going back to the old character-based computer screens. Most of them are not error messages that were trying to be humorous. Most of them were written by computer programmers who were trying to explain what was going wrong. But many of them end up being quite funny (unless you are the one who got the message in the middle of trying to do something important. Then nothing seems funny). My favorite was from a company in Texas. When there was a “fatal” error, meaning the system was going to crash, a message came up that said, “Shut er down Henry, she’s spewin’ up mud!”

Error messages are probably the part of a software program that gets the least amount of time and energy, and maybe that is appropriate. After all, the best error message is no error message (meaning that the system is designed so that no one makes errors). But when something goes wrong it is important that people know what to do about it.

The reality is that something always goes wrong. People make mistakes. Whether it’s a user making a mistake while working with a computer, a company releasing software that has too many errors, or a designer creating something unusable because he or she doesn’t understand what users need to do, everyone makes mistakes. So here is my list of important things to consider about people making mistakes:

  • Think ahead about what the likely mistakes are. Figure out as much as you can about what kinds of mistakes people are going to make when they use whatever it is you have created, and then change your design before it goes out so that those mistakes won’t be made.
  • Create a prototype of whatever it is you have and then get real people to use it so you can see what the errors are likely to be.
  • Test your prototype with users (usability testing).
  • Write error messages in plain language. If you are creating a message to show someone, or audio to play for someone, about a mistake they made, tell them the following:

  • that an error has been made
  • what the error is
  • how they can correct it
  • where to go to get more help in fixing the error

Use active voice and be direct. Instead of saying, “Before the invoice can be paid it is necessary that the invoice payment be earlier than the invoice create date,” say, “Enter an invoice payment date that is BEFORE the invoice create date.”
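To make that concrete, here is a minimal sketch (not code from any real system) of an error message that covers the checklist above: it says an error occurred, what it is, how to correct it, and where to get help. The date rule mirrors the invoice example; the function name and help-center wording are assumptions made up for this illustration.

```python
# Minimal sketch of a plain-language error message that follows the checklist:
# an error occurred, what it is, how to correct it, and where to get more help.
from datetime import date
from typing import Optional

def invoice_payment_error(payment_date: date, create_date: date) -> Optional[str]:
    """Return a plain-language error message, or None if the date is acceptable."""
    if payment_date >= create_date:  # rule taken from the invoice example above
        return (
            "The payment date can't be saved. "                              # an error has been made
            f"The payment date ({payment_date}) is not before the invoice "  # what the error is
            f"create date ({create_date}). "
            "Enter a payment date that is BEFORE the invoice create date. "  # how to correct it
            "For more help, see the Invoices page of the help center."       # where to get more help
        )
    return None

print(invoice_payment_error(date(2013, 9, 30), date(2013, 9, 1)))
```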

Need it to be error-proof? It is very difficult to create a “system” that is free of all errors and that guarantees people won’t make mistakes. In fact it is impossible. Ask the people at Three Mile Island, Chernobyl, or British Petroleum. The more costly errors are, the more you need to avoid them. The more you need to avoid them, the more expensive it is to design the system. If it is critical that people not make mistakes (for example, you are running a nuclear power plant, an oil rig, or a medical device), then be prepared: you will have to test two or three times more, and you will have to train two or three times more. It is really expensive to try to design a fail-safe system. And realize you never will fully succeed.

It’s just the way we are. We make mistakes!

If you have some favorite error messages that you have seen, consider sending them to me and I will add them to my collection.
