Avoiding Bias in Agile UX

I enjoyed reading Sway: The Irresistible Pull of Irrational Behavior (Amazon, nice review).  There are many cognitive biases that affect how we think, and the authors did a nice job of distilling the research on cognitive bias into an accessible popular science book.  The book made me think about how I approach web design and evaluation.

Traditional usability testing has its roots in psychology research methods.  You spend lots of time designing the study (randomizing participants into experimental groups, ensuring you don’t ask leading or prompting questions, calculating statistical significance or confidence intervals of findings, etc.) so that these cognitive biases are minimized or factored out.
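As a quick illustration of that last point: the adjusted Wald (Agresti-Coull) interval holds up better than the plain textbook formula at the tiny sample sizes usability tests actually get.  Here is a minimal Python sketch; the function name and the 4-out-of-5 numbers are just made up for illustration:

from statistics import NormalDist

# Adjusted Wald (Agresti-Coull) confidence interval for a proportion.
# Behaves much better than the plain Wald interval at the n = 5-10
# sample sizes typical of quick usability tests.
def adjusted_wald_interval(successes, trials, confidence=0.95):
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    n_adj = trials + z ** 2                    # add z^2 "phantom" trials
    p_adj = (successes + z ** 2 / 2) / n_adj   # and z^2/2 phantom successes
    margin = z * (p_adj * (1 - p_adj) / n_adj) ** 0.5
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# e.g. 4 of 5 participants completed the task:
low, high = adjusted_wald_interval(4, 5)
print(f"Success rate 80%, 95% CI roughly {low:.0%} to {high:.0%}")

Notice how wide that interval comes out (roughly 36% to 98%): with five participants you learn a lot about whether something is broken, but very little about precise rates.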

Agile development typically features 1-3 week sprints, forcing UX designers to shorten traditional evaluation methods, use guerrilla usability testing, or do whatever they can to get SOME user feedback in the time allotted.  UX designers have been actively discussing how to integrate UX design into agile development teams (or search for “agile ux”). But in speeding up design testing and evaluation, we may become more susceptible to letting cognitive biases creep into and taint our study results.

There are two biases in particular I think we need to watch out for.

Confirmation Bias

Confirmation bias is the tendency to search for or interpret information in a way that confirms one’s preconceptions (ScienceDaily, or Nickerson 1998 if you’re more psychology-paper inclined).

Give me an example!

You’ve spent the last couple days iterating on an information architecture for a new site.  You’re doing a quick evaluation of a paper prototype with three users in three separate sessions, to determine if the latest iteration is the best.  You’re watching the users interact with the prototype, asking them to think out loud and asking open-ended questions to encourage them to talk.  However, you’ve invested time and effort in this latest prototype: the stakeholders have approved it, and you either really like it or are sick of it and want to move on to the next thing.  Confirmation bias might lead you to focus your questioning on behaviors you expected to see, to read ambiguous behavior as confirming or validating your design, or to discount some of the negative comments you hear.

Diagnosis Bias

Diagnosis bias is the tendency to label things based on our initial impressions, and our difficulty or inability to change our minds after that initial impression is made.

Give me an example!

You’ve got some ideas for the next web2.0/cloud/service/mashup/[buzzword]*, and you’re doing user research to prioritize new feature development.  A study participant comes in and says, “I don’t know what browser I use…I just fire up AOL to get on the internet.”  Ouch, you think to yourself, how am I going to get any useful info from this yokel?  Diagnosis bias might make you miss that, while they don’t do much online at home, their activity at work makes them a perfect candidate.

How do we avoid biases in Agile UX?

The Sway authors take a small stab at answering this question in the Epilogue of their book, but their answers are a bit simplistic.  I guess this is understandable.  Humans developed these biases because they help us solve problems related to surviving in an unstable outdoor environment, and to do so in nearly constant motion (Brain Rules).  Sometimes you need to make quick, simplifying judgments in order to survive or gain an advantage.  So clearly there is no turnkey, 3-step process for overcoming them.

Obviously we need to strike a balance between experimental design rigor and doing the least amount of work that gets the most value.  A few things that help:

  1. Be Aware of Cognitive Biases.
    You can’t do anything about biases you don’t know about.  And you just read this post, so check this one off your list.
  2. Make a List of Your Assumptions; Reevaluate Assumptions Across Sprints
    This is basically just trying to externalize your assumptions and biases.  If you can put them out there, and make plans to revisit them over time, it might be easier to catch when they have clouded your judgment.  Also, publicizing your working assumptions gives others a chance to critique them, or to see whether other people would draw the same conclusions.
  3. Think About How to Disprove Your Assumptions, Rather Than How to Prove Them
    This goes back to your Research Methods class…it is difficult to objectively critique something that is not falsifiable.  I’m not saying you have to set up null hypothesis tests.  But you can change your design evaluation thinking from “how would I know this is good” to “how would I know if this is bad.”  Try to identify 3-4 ways that would indicate something is wrong; if none of those things are evident, you can be more confident that you’re on the right track (see the sketch after this list).
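To make #2 and #3 concrete, here is a small Python sketch of what an externalized assumption log with falsifiers could look like.  Everything in it (the Assumption class, the example statement, the failure signs) is invented for illustration:

from dataclasses import dataclass, field

# One externalized design assumption (step 2), paired with the
# observations that would *disprove* it (step 3).
@dataclass
class Assumption:
    statement: str
    falsifiers: list                              # observable signs it's wrong
    evidence: dict = field(default_factory=dict)  # sprint -> falsifiers seen

    # Record one sprint's observations; returns True if the assumption
    # survived (i.e. no falsifier was observed this sprint).
    def review(self, sprint, observed):
        hits = [f for f in self.falsifiers if f in observed]
        self.evidence[sprint] = hits
        return not hits

nav = Assumption(
    statement="Users can find pricing from the home page",
    falsifiers=[
        "used site search for 'pricing'",
        "opened more than two wrong sections",
        "asked the facilitator where pricing lives",
    ],
)

# After a sprint-3 session, log what you actually saw:
if not nav.review(sprint=3, observed=["used site search for 'pricing'"]):
    print("Falsified in sprint 3: revisit the IA instead of explaining it away.")

The point isn’t the code, it’s the discipline: each assumption carries its own disproof conditions, and each sprint you record which ones fired instead of explaining them away.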