I’ll be honest. I’ve only dabbled in the mobile app design and development space: one app for fun, one app for work (hopefully to be released in the next couple of months). At today’s iPhone Design Conference, Brian Fling argued that mobile design is totally different than web design. But I still don’t see how mobile app design and development is that different from traditional software or web development. Mobile devices offer new capabilities and require learning new tools, but the fundamental design and development tasks remain the same.
Mobile design reminds me of designing desktop apps in the late 90s. Multiple platforms, small screen real estate, limited computing resources (although an iPhone would probably run circles around my 486). Each application was an island, with little or no way to share information or task flows between them. Users probably didn’t have that much experience with computers. Your job as a designer was to understand the users’ key tasks and success criteria, then iterate on design and development to reduce time on task and errors. You differentiated your product by closely aligning the user interface metaphor with the users’ mental model of the task or process. Back in the day, we called this User Centered Design, and later Usability Engineering.

Over the next decade, hard drives got bigger, screens got bigger, processors got faster, and networks and application mashups were everywhere. Users learned what to expect on websites. We designers stopped talking about usability (how well people get through the task flows we have created) and started talking about a more holistic User Experience.
Mobile application design exploded with the iPhone. Again, we find ourselves designing around constraints of small screens, multiple platforms, and limited computing resources. This time around, however, we’ve got some additional capabilities. Geolocation, gesture and multitouch interfaces, photo and video streams, anytime/anywhere network availability. We have cloud processing and data storage that we can use to offset device limitations. Even better, we have a generation of millions of users that are eager to embrace new technologies, pretty much willing to pay for and try out whatever we can think up.
But some things haven’t changed.
The basic cognitive and physiological capabilities of people haven’t changed. We’re still resource-constrained creatures who can only focus on one thing at a time and have relatively shoddy memories. We can only get our fingers to tap on something so fast.
Because of these basic human traits, designers still have to take care of the same basic interaction design requirements:
- Visibility (also called perceived affordances or signifiers)
- Consistency (also known as standards)
- Non-destructive operations (hence the importance of undo)
- Discoverability: All operations can be discovered by systematic exploration of menus.
- Scalability: The operation should work on all screen sizes, small and large.
- Reliability: Operations should work. Period. And events should not happen randomly.
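The non-destructive-operations principle is usually realized in software with a command/undo stack: every operation is reversible, so exploration is safe. Here is a minimal sketch of that pattern; the class and method names (`TextBuffer`, `InsertCommand`, `Editor`) are my own, purely illustrative, not from any particular framework.

```python
# A minimal command-pattern sketch: each operation knows how to undo
# itself, and the editor keeps a stack of executed commands.

class TextBuffer:
    def __init__(self):
        self.text = ""

class InsertCommand:
    """Appends a string to the buffer; undo removes exactly what was added."""
    def __init__(self, buffer, s):
        self.buffer, self.s = buffer, s

    def do(self):
        self.buffer.text += self.s

    def undo(self):
        self.buffer.text = self.buffer.text[: len(self.buffer.text) - len(self.s)]

class Editor:
    def __init__(self):
        self.buffer = TextBuffer()
        self.history = []          # undo stack of executed commands

    def run(self, command):
        command.do()
        self.history.append(command)

    def undo(self):
        if self.history:           # undoing with nothing to undo is a no-op, not an error
            self.history.pop().undo()

editor = Editor()
editor.run(InsertCommand(editor.buffer, "hello"))
editor.run(InsertCommand(editor.buffer, " world"))
editor.undo()
print(editor.buffer.text)  # prints "hello"
```

The point isn’t the specific pattern; it’s that when every operation can be cheaply reversed, users can poke at an interface without fear, which is what makes discoverability by exploration possible at all.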
As Don Norman recently pointed out, we’re not doing a great job with this on gesture-interface devices.
When we build these interactions, we’re still not doing it alone. We want to continually align our designs with users’ expectations and developer feedback.
Mobile gives us some new tools in our design toolbox, and we lose the assumption that the user is sitting at a desk, working on a single task alone. New device capabilities…natural voice control, natural human gestures, thought-controlled interfaces, semantic or linked data…are in active research and will make even more things possible. But the basic job of the UX designer is still the same: to use the resources available to make our users more efficient, effective, safe, and, if we’re lucky, happier. We still need to work iteratively with developers and business stakeholders to make that happen.
Am I missing something? Am I thinking about it at the wrong level of abstraction?
On a side note, I’ve previously discussed that UX Designer/Developers should have a strong foundation in human factors, psychology, and computer science. I think that (and experience) gives you the background to see beyond the new shiny toys and identify the real trends and innovations. Jared Spool seems to agree.
I’m a Verizon mobile phone subscriber. I’m not happy about it…I think they have a poor selection of phones, poor customer service, excessive cost and fees. But most people I know are on Verizon, which means most of my calls/texts are free, so they’re still the best deal for me. *sigh*
About 25 days ago, I went to upgrade my broken RAZR phone. I wandered around the store trying to find something that looked half as nice as an iPhone. *sigh* I looked at some of the BlackBerrys, but I’m not willing to pay the extra monthly rate for a data plan. I dejectedly picked the Motorola Krave. The out-of-pocket cost was high, but it looked nice and the touch-through cover was cool. For the next few weeks I tried it out at home, and got less and less happy with it. Today, I returned it and got a much cheaper LG Venus. Hopefully this one will be better.
I’ve thought a lot about why I disliked the Krave so much. It comes down to poor user experience. Now I’m a user experience designer (web software, not mobile software – yet), so I might be hypersensitive to poor user experience. But I don’t think so. Rather, I think I’m just better able to articulate why an experience is good or bad. My experience with the Krave was bad because:
- The phone user interface clearly was not designed to meet user needs. I understand that mobile phones are becoming general communication devices. But the primary function is still to call/text people, and that was hard to do. It took 6 button presses and 2 screen scrolls to call my wife or my mom…way too many. It took 3 button presses to get to the list of recent calls. There was no way to set speed-dial options (e.g., set wife to #2, set mom to #3).
- Instead, it seems to have been designed to sell Verizon services. Of the 12 options on the main menu screen, I cared about 2 of them (recent calls, media center). All the rest led to something that Verizon charged extra for – V Cast TV, Visual Voicemail, web browsing, etc. It had 2 sets of shortcut buttons (one set when the cover was open, one set when the touch-sensitive cover was closed). Again, only 2-3 of those helped me call anyone. None of them went to my recent calls, which I thought was silly. They also weren’t customizable, which I also thought was silly.
- Touchscreen dialpad…requires focused visual attention to dial. That means you can’t be doing some other task that requires focused visual attention, say, for example, driving.
- There is a switch on the side of the phone to lock/unlock the touch screen. Great idea, except now this is another mode of the phone that the user must manage. I apparently couldn’t. I would forget to lock the screen when the phone was in my pocket. Several times, the phone answered calls while it was still in my pocket, just from bumping against my leg (my wife got to listen in on more than one work meeting). Whenever I forgot to lock it, I’d take it out of my pocket and it would try to make me watch MTV or buy ringtones.
- When the phone was locked, it did not show the time, or my selected wallpaper, on the screen; it covered the entire screen with a ‘locked’ icon. So I couldn’t even quickly look and see what time it was. I couldn’t quickly pull out my phone to show people how cute my baby is.
- Weak vibrate…I could not feel it in my pocket, which defeats the purpose of vibrate mode.
- The touch screen locked automatically when placing a call. Usually this is OK, but it was annoying when calling some automated service where you had to use a touchtone menu. This is usually when I made misdials, which was really annoying. I’d try to press #, and the phone would send ‘9’, which would send the touchtone service into a tizzy. This leads to the next problem:
- The touch screen interface was pretty good, except at the edges…I had a hard time typing ‘p’ or ‘z’ in full QWERTY keyboard mode, and I often missed the # or * keys in dialpad mode.
Design with end user or customer focus
I think the fundamental flaw in the Krave user experience is that it appears to be designed to showcase what the phone can do, and to get the user to buy more expensive service contracts. Instead, the designers should have focused on the end users and their goals. The designers need to ask themselves, “who’s using this product, what are their motivations, what goals are they trying to achieve, what tasks are they doing” and then base design decisions around those insights.
Two user experience design tools that would help them gain more customer focus are personas and scenarios. A persona is a detailed description of an archetypal user: a fictional user created to represent a group of users with common goals, motivations, or needs (here is my collection of links on creating and using personas for UX design). The persona is given a name, job description, car, family, whatever is required to make it seem real to the development team. Developing personas is an important activity because it gives product designers and developers a concrete target for their design work. They stop saying nebulous things like “if I were a user, I’d…” or “the users want…” Instead, they ask questions like, “how would feature X help Jim?” By designing for the persona, they satisfy the needs of the actual users the persona represents.
The Krave designers also could have identified some key customer-centric stories, or scenarios, to further focus design activity around the common things their customers want to do, not what Verizon wants the customers to buy. The scenarios should be based on the personas that have been developed. In other words, first get a sense for who the users are and what their motivations and goals are, and then generate stories and scenarios that describe how they would want to use the product and what tasks they want to accomplish. Some possible examples:
Jerry received a call from his wife to pick up a few things from the grocery store on his way home from work. He is now in the store, and he would like to quickly call his wife to see if he has forgotten anything she had asked for.
Jerry is putting his two year old daughter to bed. She’s being extra cute. Jerry wants to call his mom so that she can say good night to her granddaughter.
Jerry is calling his credit card company to dispute a charge. He needs to enter his 16 digit card number.
Scenarios help the design and development team evaluate whether the product supports common or important tasks. Certainly, the Krave designers would have seen that, for all its bells and whistles, it is much more difficult and time-consuming to complete these scenarios on the Krave than it is on other phones (like the RAZR).
I should mention that there were things about the Krave I did like. Overall the touchscreen interface was good. I didn’t misdial numbers very often (although it was annoying when it happened). I liked being able to rotate the screen to get a widescreen interface. I loved the camera UI — navigating through photos and videos was great. But I couldn’t get over the fact that this was supposed to be a communication device, and calling and texting my people was hard, and every miskey seemed to land me in an area with a Verizon upsell…Visual Voicemail, GPS Navigation, V Cast TV. I’m sure these are great, but damn it, I should be able to call my wife or mom in 1-2 buttons.
I’m not sure who’s at fault for the poor user experience…Motorola or Verizon. My guess is Verizon: as I looked around the store, many of the phones shared UI icons and navigation styles. I know that Verizon is very strict about what they allow on their phones (for example, Verizon disabled some of the features my RAZR was capable of, probably because they offered a competing for-fee service).
Then again…maybe they did personas and scenarios, and just didn’t design for me. :)