Is Mobile Design Really That Different?

I’ll be honest.  I’ve only dabbled in the mobile app design and development space: one app for fun, one app for work (hopefully to be released in the next couple of months).  At today’s iPhone Design Conference, Brian Fling argued that mobile design is totally different than web design.  But I still don’t see how mobile app design and development is that different from traditional software or web development.  Mobile devices offer new capabilities and require learning new tools, but the fundamental design and development tasks remain the same.

Mobile design reminds me of designing desktop apps in the late 90s.  Multiple platforms, small screen real estate, limited computing resources (although an iPhone would probably run circles around my 486).  Each application was an island, with little or no way to share information or task flows between them.  Users probably didn’t have that much experience with computers.  Your job as a designer was to understand the users’ key tasks and success criteria, and to iterate on design and development to reduce time on task or errors.  You differentiated your product by closely aligning the user interface metaphor with the users’ mental model of the task or process.  Back in the day, we called this User Centered Design, and later Usability Engineering.

(Also back in the day, VisualAge for Java gave designers and developers a way to jointly build "cross-platform apps," using a drag-and-drop GUI editor to generate Java code.  In other news, I'm old.)

Over the next decade, hard drives got bigger, screens got bigger, processors got faster, and networks and application mashups were everywhere.  Users learned what to expect on websites.  We designers stopped talking about usability (how well people get through the task flows we have created) and started talking about a more holistic User Experience.

Mobile application design exploded with the iPhone.  Again, we find ourselves designing around constraints of small screens, multiple platforms, and limited computing resources.  This time around, however, we’ve got some additional capabilities.  Geolocation, gesture and multitouch interfaces, photo and video streams, anytime/anywhere network availability.  We have cloud processing and data storage that we can use to offset device limitations.  Even better, we have a generation of millions of users that are eager to embrace new technologies, pretty much willing to pay for and try out whatever we can think up.

But some things haven’t changed.

The basic cognitive and physiological capabilities of people haven’t changed.  We’re still resource-constrained people who can only focus on one thing at a time and have relatively shoddy memories.  We can only get our fingers to click on something so fast.

We all have the same basic needs.

Because of these basic human traits, designers still have to take care of the same basic interaction design requirements:

  • Visibility (also called perceived affordances or signifiers)
  • Feedback
  • Consistency (also known as standards)
  • Non-destructive operations (hence the importance of undo)
  • Discoverability: All operations can be discovered by systematic exploration of menus.
  • Scalability: The operation should work on all screen sizes, small and large.
  • Reliability: Operations should work. Period. And events should not happen randomly.

As Don Norman recently pointed out, we’re not doing a great job with these on gesture interface devices.

When we build these interactions, we’re still not doing it by ourselves.  We want to continually align our designs with users’ expectations and developer feedback.

We still need to understand users’ mental model of the task domain first and foremost.

Mobile gives us some new tools in our design toolbox, and we lose the assumption that the user is sitting at a desk, working on a single task by themselves.  New device capabilities (natural voice control, natural human gestures, thought-controlled interfaces, semantic or linked data) are in active research and will make even more things possible.  But the basic job of the UX designer is still the same: to use the resources available to make our users more efficient, effective, safe, and, if we’re lucky, happier.  We still need to work iteratively with developers and business stakeholders to make that happen.

Am I missing something?  Am I thinking about it at the wrong level of abstraction?

On a side note, I’ve previously discussed that UX Designer/Developers should have a strong foundation in human factors, psychology, and computer science.  I think that (and experience) gives you the background to see beyond the new shiny toys and identify the real trends and innovations.  Jared Spool seems to agree.
