Alfonso de la Nuez On The Role Of User Research In The Future

Alfonso de la Nuez started in the field of usability with his small services company in Spain and ended up in California co-founding the user research software firm UserZoom. Last year UserZoom customers conducted more than 12,000 user research projects.

In this episode of the Human Tech podcast we talk with Alfonso, the CEO of UserZoom, about the current state and likely future of user research and testing, what makes user research successful inside a large enterprise, and much more.

You can check out UserZoom here, and you can email Alfonso at alfonso@userzoom.com.


Human Tech is a podcast at the intersection of humans, brain science, and technology. Your hosts Guthrie and Dr. Susan Weinschenk explore how behavioral and brain science affects our technologies and how technologies affect our brains.

You can subscribe to the HumanTech podcast through iTunes, Stitcher, or wherever you listen to podcasts.

Updated User Testing Course

At The Team W we’ve updated our User Testing online video course. This latest version of the course has been filmed in our new studio. We’ve expanded and updated the content. The video clip below will give you an idea of what’s in the course.

You can get details on the course, preview some lessons, and/or register for the course at the User Testing online video course page.  The User Testing course is also included in our User Experience Certificate curriculum.

To see the catalog of all of our online video courses, go to the main course catalog page. 

Why We Still Love User Testing

User Testing as a way to get feedback from people about a product is still going strong. In this episode Susan quizzes Guthrie about the what and how of user testing, and talks about some of her fun and more memorable moments from the hundreds of tests she’s conducted.


HumanTech is a podcast at the intersection of humans, brain science, and technology. Your hosts Guthrie and Dr. Susan Weinschenk explore how behavioral and brain science affects our technologies and how technologies affect our brains.

You can subscribe to the HumanTech podcast through iTunes, Stitcher, or wherever you listen to podcasts.

Ten User Testing Bad Habits

User testing is a great way to get feedback from actual users/customers about your product. Whether you are new to user testing or a seasoned testing professional, you want to get the most out of your user testing research. It’s easy, though, to fall into bad habits that make your testing time-consuming, ineffective, or expensive. So here are 10 bad habits to watch out for when you are doing user testing:

#10 Skip the pilot
A pilot test is a user test you run before you run your “real” test. You run a pilot so that you can try out your prototype and your instructions – it’s a trial run for everything. Then you can make any necessary changes before you run the real test. Sometimes a pilot goes off without a hitch, and then it’s easy to say the next time, “Oh, maybe I’ll skip the pilot.” Don’t skip the pilot! Otherwise you may have to redo the whole test. Pilots are fast, inexpensive, and worth doing.

#9 Draw conclusions from early and insufficient data
People get excited when results start coming in, but don’t start changing things after one or two participants. You’ve got to wait to see what everyone does or doesn’t do before you start making decisions. And watch out for confirmation bias – deciding you know what’s going on after two participants and then ignoring data that comes in later.

#8 Test too many people
If you come from a quantitative background you might be used to running studies with large numbers of people. But user testing is usually qualitative rather than quantitative (there are exceptions). If you aren’t running statistical analyses, you don’t need lots of people. 7 to 10 people (per cell, see #7 below) will get you the data you need most of the time.

#7 Too many cells
A cell is the smallest unit of categorization for a user test. Let’s say you want to run your test on men and women, and you want to be able to draw conclusions about differences between men and women. That means you have 2 cells – one for men and one for women – and you need to run 7 people per cell, so you already have 14 people. Next you decide to add young people versus older people, so now you have 4 cells of 7 each. Then you add people who are current customers vs. not current customers… You can see that this is headed toward too many people. The mistake here is confusing cells with variation. I can have just one cell of 10 people, and within that cell I can make sure I have some men, some women, some older people, and some younger people – a group only needs to be a separate cell if I am going to draw conclusions about that variable. If I just want variation, but don’t need conclusions about the variability, then I don’t need all those cells.
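To make the arithmetic concrete, here is a minimal sketch in Python (the variables and numbers are hypothetical illustrations, not from the post) of how the participant count balloons when every variable becomes its own cell, versus building variation into a single cell:

```python
# A minimal sketch of the cell arithmetic described above.
# The variables and numbers are hypothetical illustrations.

PER_CELL = 7  # 7-10 participants per cell is typical for qualitative testing

# Each two-level variable you want to draw separate conclusions about doubles the cells.
variables_needing_conclusions = ["gender", "age group", "customer status"]
cells = 2 ** len(variables_needing_conclusions)   # 2 x 2 x 2 = 8 cells
print(f"{cells} cells x {PER_CELL} people = {cells * PER_CELL} participants")  # 56 people

# If you only need variation (no per-variable conclusions), one mixed cell is enough.
print(f"One varied cell: about {PER_CELL}-10 participants total")
```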

#6 Do a data dump
When you conduct a user testing study you are familiar with everything: the instructions, the tasks, the results. You may not realize that if you just hand the data and the video recordings to someone else, they may be overwhelmed. You need to summarize the results, draw conclusions, and present a cohesive summary to your team and stakeholders, not just hand them a pile of data.

#5 Too much stock in after-test surveys
People have bad memories, and they also tend to inflate ratings on a survey. Watch out for putting too much stock in a survey you give them after the task portion of the test.

#4 Test too late
Don’t wait until the product is fully designed to test it. You can test wireframes, prototypes, and even sketches.

#3 Skip over surprises
Some of the best data from user testing comes not from the tasks themselves, but from the places people wander off to, or the offhand comments they make about something you weren’t even officially testing. Don’t ignore these surprises. They can be extremely valuable.

#2 Draw conclusions not supported by the data
You have to interpret the data, but watch out for drawing conclusions (usually involving your pet theories) that aren’t really supported by the data.

#1 Skip the highlights video
A picture is worth 1,000 words, and a video is worth even more. These days highlight videos (made up of short video clips) are easy to assemble with tools such as usertesting.com. Highlight videos are much more persuasive than simply telling people what happened. Make a habit of creating video clips the first time you watch the session videos; then you don’t have to go through them again to create a highlights video.

What do you think? Do you have any bad habits to add to the list?

————

If you are interested in learning more about user testing consider these two courses:

User Testing: The What, Why, and How, an in-person workshop I’m teaching in San Francisco on July 31, 2014

and

an online video course on User Testing.


User Testing In The Spotlight

With Lean UX all the rage (deservedly in my opinion — see my recent slideshare on Lean UX), user testing (an important part of the Lean UX process) is getting even more popular. If you need to convince someone(s) in your organization that user testing is important — well, not just important but CRITICAL — try this video below.  It’s an introduction video to my User Testing course. And if you are interested in the course I’ll be teaching it in San Francisco on July 31, 2014. Bring the whole team! If you are already convinced about how important usability testing is, then stay tuned for an upcoming blog post on Bad Usability Testing Habits To Avoid.


Why I’m Still In Love With User Testing

I’ve been doing user testing for (I’m afraid to admit) decades. And I still love it. It’s a great way to get feedback from people about how effective your design, your product, and your assumptions are.

In these days of Lean everything, you can’t beat user testing as one of the best Lean UX techniques for testing your assumptions.

Here’s a short video on Why You Need To Do User Testing. It’s the first lesson in our newest online video course on User Testing. 

Do you know someone who needs to see this video?!


New In-Person Courses

There is something about September that makes us all want to sharpen our pencils and learn something new. And so we are very excited to announce the launch of 4 brand new in-person classes that we are offering in Chicago, San Francisco, St. Louis, Edgar, WI, and Washington DC. I’m teaching some of them, but we also have some of the BEST and most experienced instructors teaching others as well.

One-day courses:

  • Don’t Guess: Test! The Why, What, and How of User Testing (Chicago, St. Louis, San Francisco, Washington DC)
  • How to Design Intuitive and Usable Products Through User Research (St. Louis, San Francisco, Washington DC)
  • The Science of Persuasive and Engaging Design (Chicago and San Francisco)

3-day Workshop:  Weinschenk Behavioral Science Workshop (Edgar, WI)

You can see a list of all the courses, dates, instructors, and locations at the Weinschenk Institute website, where you can also link to detailed course outlines.

To celebrate the course launch I’m writing a series of blog posts to highlight each course.

So here are 3 reasons why I’m excited about the Don’t Guess: Test! The Why, What, and How of User Testing course:

1) We’re partnering with UserTesting.com, and they are going to provide us with FREE tests to use in the class, as well as give every participant 3 FREE User Test Coupons to use after the class, when you go back to your office to apply what you’ve learned.

2) With the free in-class test, you will conduct an actual user test during the course. You will decide what to test, who to test, write the test scenario and tasks, conduct the test, and watch the video.

3) Always wondered what to test? In this course you will learn how to write a usability specification and use that to guide you in deciding who to test, what to test, and what the success criteria should be. And you can use these usability specifications not only in user testing, but also in design.
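For a rough sense of what such a specification might capture, here is a minimal hypothetical sketch in Python; the field names and criteria are illustrative assumptions, not the course’s actual template:

```python
# A hypothetical sketch of a minimal usability specification.
# Field names and criteria are illustrative, not the course's actual template.
usability_spec = {
    "who_to_test": "first-time visitors who have never used the checkout flow",
    "what_to_test": "finding a product and completing a purchase on the prototype",
    "success_criteria": {
        "task_completion": "at least 80% of participants check out without help",
        "time_on_task": "under 5 minutes from landing page to order confirmation",
    },
}

# Print the spec as a quick checklist for planning the test sessions.
for field, value in usability_spec.items():
    print(f"{field}: {value}")
```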

The “Don’t Guess: Test!” course is intensive, hands-on, interactive, and fun.

Text me at 847-909-5946 or send me an email at susan@theteamw.com by September 30, 2013, and I will give you a code to use during registration that will get you 25% off the course fee.

In the next post I’ll talk about the How to Design Intuitive and Usable Products Through User Research course.

100 Things You Should Know About People #79 — People of Different Ages Have Different Error Strategies


Let’s say you study two people using a smartphone that has an advanced still and video camera. One is 22 years old, and the other is 47 years old. Neither of them has used this smartphone/camera before. You give them a set of tasks to do. Will there be a difference between them? Will they both be able to complete the tasks? Will they make the same mistakes? Neung Kang and Wan Yoon (2008) conducted a research study to look at the types of errors both young and older (not very old, but older) adults make when learning how to use new technologies. In their study they identified and tracked different error strategies:

Systematic exploration — When people use systematic exploration, it means that when they make a mistake they stop and think about what procedure they will use to correct the error. For example, let’s say a user is trying to figure out how to email a picture with the smartphone/camera. She tries one menu and that doesn’t work, so now she sets out to see what each item in the menu system does for the camera part of the device. She starts at the first item in the first menu and works her way through all the choices in the part of the product controls having to do with the camera. She is systematically exploring.

Trial and error — In contrast to systematic exploration, trial and error means that the person is randomly trying out different actions, menus, icons, and controls.

Rigid exploration — If someone does the same action over and over, even though it does not solve the error, that is called rigid exploration. For example, the person is trying to send a picture via a text message, presses a button, and gets an error. She then chooses the picture again and presses the button again. She keeps repeating this combination of actions, even though it doesn’t work.
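As a loose illustration of how these three strategies might be tagged when reviewing session recordings, here is a hypothetical heuristic sketch in Python. It is my own crude illustration, not Kang and Yoon’s actual coding scheme:

```python
# A hypothetical heuristic for tagging an observed action sequence with one of
# the three error strategies described above; not the researchers' actual method.
def classify_strategy(actions: list[str]) -> str:
    unique = set(actions)
    if len(unique) == 1:
        return "rigid exploration"        # the same action repeated despite the error
    if actions == sorted(unique):
        return "systematic exploration"   # an ordered pass through the available options
    return "trial and error"              # otherwise: varied, unordered attempts

print(classify_strategy(["send", "send", "send"]))               # rigid exploration
print(classify_strategy(["menu 1", "menu 2", "menu 3"]))         # systematic exploration
print(classify_strategy(["icon 3", "menu 1", "icon 3", "app"]))  # trial and error
```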

Continue reading “100 Things You Should Know About People #79 — People of Different Ages Have Different Error Strategies”

How To Test A Web Site Design In An Hour And On a Shoestring Budget

I have a friend who volunteers to be on an advisory board for a land trust conservancy organization. They have been designing a website for the land trust. But they are all volunteers, and the organization doesn’t have a budget for website design. They have a programmer donating her time to put the website together.

Can you get user feedback when the site doesn’t even exist yet? — My friend’s background is in usability, and she was concerned that the website the programmer was putting together had usability problems. But the group had virtually no budget to do user-centered design or get user feedback on a prototype. And all she had were some pictures of a draft of some of the pages. For example, here’s what she had for the home page:

[Image: draft home page for the conservancy site]

The menus didn’t “work” because it was just a picture, so she put together this page showing what would be in the drop-downs if you clicked the main navigation on the home page:

[Image: draft home page with the drop-down menus shown]

Continue reading “How To Test A Web Site Design In An Hour And On a Shoestring Budget”