There are three things that have really made me reconsider and revise how I organize and deploy code to servers:
- The need to make ideas and concepts available for consumption as quickly as possible.
I work in a lot of collaborative areas, so it is important for me to say “what about this?” in an electronic medium where people can play with it, try it out, see how it feels. So I want to maximize the time I spend building awesome new stuff, and minimize, automate, or delegate some of the operations-type considerations.
- Cloud and mobile.
The days of deploying a site to one known server, to be viewed on a couple of known desktop browsers, are gone. You need to be able to scale up your app very quickly, and that’s much easier if you plan for it from the beginning. Plus, platform-as-a-service providers make it so easy to try and experiment with different technology stacks that there’s really no reason not to start with this approach.
- Keeping up with the craft.
I need to keep up with the state of the art, so that I can remain vibrant, creative, and competitive in the industry.
So I shared with the group a few tools that have helped me to continue to grow in these areas:
- Super Awesome Code Completion
- Platform as a Service
- Browser-based online IDE
And you can see the slides here:
I’ll be honest. I’ve only dabbled in the mobile app design and development space: one app for fun, one app for work (hopefully to be released in the next couple of months). At today’s iPhone Design Conference, Brian Fling argued that mobile design is totally different than web design. But I still don’t see how mobile app design and development is that different from traditional software or web development. Mobile devices offer new capabilities and require learning new tools, but the fundamental design and development tasks remain the same.
Mobile design reminds me of designing desktop apps in the late 90s. Multiple platforms, small screen real estate, limited computing resources (although an iPhone would probably run circles around my 486). Each application was an island, with little or no way to share information or task flows between them. Users probably didn’t have that much experience with computers. Your job as a designer was to understand users’ key tasks and success criteria, and iterate on design and development to reduce time on task or errors. You differentiated your product by closely aligning the user interface metaphor with the users’ mental model of the task or process. Back in the day, we called this User Centered Design, and later Usability Engineering. Over the next decade, hard drives got bigger, screens got bigger, processors got faster, and networks and application mashups were everywhere. Users learned what to expect on websites. We designers stopped talking about usability (how well people get through the task flows we have created) and started talking about a more holistic User Experience.
Mobile application design exploded with the iPhone. Again, we find ourselves designing around constraints of small screens, multiple platforms, and limited computing resources. This time around, however, we’ve got some additional capabilities. Geolocation, gesture and multitouch interfaces, photo and video streams, anytime/anywhere network availability. We have cloud processing and data storage that we can use to offset device limitations. Even better, we have a generation of millions of users that are eager to embrace new technologies, pretty much willing to pay for and try out whatever we can think up.
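To make the gesture capability concrete, here is a toy sketch of how a touch interaction might be classified as a tap, swipe, or long press. The thresholds and names are entirely my own illustration, not any platform’s actual API:

```typescript
// Toy gesture classifier: decide what gesture a touch was, based on how far
// the finger moved (dx, dy in pixels) and how long it was down (ms).
// Thresholds here are hypothetical, chosen only for illustration.
interface TouchSample {
  dx: number;
  dy: number;
  ms: number;
}

function classify(t: TouchSample): "tap" | "long-press" | "swipe-left" | "swipe-right" {
  const dist = Math.hypot(t.dx, t.dy);
  // Barely moved: it's a tap or, if held, a long press.
  if (dist < 10) {
    return t.ms > 500 ? "long-press" : "tap";
  }
  // Moved a meaningful distance: treat as a horizontal swipe.
  return t.dx < 0 ? "swipe-left" : "swipe-right";
}

console.log(classify({ dx: 2, dy: 1, ms: 120 }));   // "tap"
console.log(classify({ dx: -80, dy: 5, ms: 200 })); // "swipe-left"
```

Real gesture recognizers on iOS or in the browser are far richer than this, but the underlying design task (mapping raw input to user intent) is the same one we have always had.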
But some things haven’t changed.
The basic cognitive and physiological capabilities of people haven’t changed. We’re still resource-constrained creatures who can focus on only one thing at a time and have relatively shoddy memories. We can only get our fingers to click on something so fast.
Because of these basic human traits, designers still have to take care of the same basic interaction design requirements:
- Visibility (also called perceived affordances or signifiers)
- Consistency (also known as standards)
- Non-destructive operations (hence the importance of undo)
- Discoverability: All operations can be discovered by systematic exploration of menus.
- Scalability: The operation should work on all screen sizes, small and large.
- Reliability: Operations should work. Period. And events should not happen randomly.
As Don Norman recently pointed out, we’re not doing a great job with this on gesture interface devices.
When we build these interactions, we’re still not doing it by ourselves. We want to continually align our designs with users’ expectations and developer feedback.
Mobile gives us some new tools in our design toolbox, and we lose the assumption that the user is sitting at a desk, working on a single task alone. New device capabilities (natural voice control, natural human gestures, thought-controlled interfaces, semantic or linked data) are in active research and will make even more things possible. But the basic job of the UX designer is still the same: to use the resources available to make our users more efficient, effective, safe, and, if we’re lucky, happier. We still need to work iteratively with developers and business stakeholders to make that happen.
Am I missing something? Am I thinking about it at the wrong level of abstraction?
On a side note, I’ve previously discussed that UX Designer/Developers should have a strong foundation in human factors, psychology, and computer science. I think that (and experience) gives you the background to see beyond the new shiny toys and identify the real trends and innovations. Jared Spool seems to agree.