Archive for category social media
I’ve been interested in recent tweets, posts, and presentations about the emerging and changing role of user experience. Yet we’ve seen some resistance to the new terms and methodologies. I’d like to suggest the current transition in the UX field is similar to revolutions in scientific fields. Science historian Thomas Kuhn would say we’re in a period of crisis or revolution. (Don’t worry, it’s not as bad as it sounds). It’s normal, healthy.
Our field is going through change. Change is normal. I think this change is analogous to the changes and evolution of scientific fields described by Thomas Kuhn in 1962. He showed that scientific fields go through cycles:
- Pre-paradigm—Sort of a bright, green, new field of inquiry. There are no established rules, theories, practices, or paradigms. There might be a number of competing ideas.
- Normal—The field sort of settles on a general worldview (what problems are worth investigating, what tools and techniques are correct, etc). An elite group of guardians usually emerges to lead the field, and determine what ideas are shared or published widely to the field. If processes or findings occur that don’t agree with the established canon of the field, they’re regarded as anomalies.
- Revolutionary—As the anomalies pile up, it becomes harder and harder to reconcile them with the old ways. For a while, the field is in crisis: you have competing theories and processes again. Finally, there is a paradigm shift (the new ideas explain more things more elegantly, or are more useful than old ideas, the old elite guard retires or dies, etc) and the revolution becomes normal. Think of Copernicus declaring that the Earth revolves around the Sun, or the beginnings of the atomic model of chemistry.
This phased evolution model neatly describes what we’ve been seeing in the user experience field.
Evolution of the UX field
Twenty years ago we were talking about interaction design, usability, and user centered design. Computers were for work, graphics were bad, networks were bad, and building and deploying new systems was hard. I remember hating Windows 3.1 because it made me use a mouse…I could do things so much faster in DOS! As usability engineers, if we could improve task completion rates and reduce errors, we were pretty happy.
During the dot com boom and bust of the late ’90s and early ’00s, we talked about User Engineering and Outside-In Design. I know it was around this time that I stopped being lumped in with the technical, programming team, and started working more with business users and stakeholders. We all agreed to stop using <blink> and <marquee>, and started to focus not just on building software that optimizes performance on a work task, but on designing things that people will actually want to use. As we approached the late ’00s, we started building design patterns and interaction frameworks. We more or less agreed on a set of UX deliverables. In the last two years there has been an explosion of user experience tools, templates, companies, techniques, and services available. (Those are just a few that I use, that I could think of at the moment…there are hundreds more). In essence, we all now have a robust set of tools and techniques to address our clients’ needs.
¡Viva La Revolución!
Today there are new challenges and assumptions that make us reconsider how we approach user experience:
- Networks are fast and ubiquitous (except when they’re not).
- Users are not only generally familiar with standard computing metaphors, but inundated with them.
- Websites, web apps, and mobile apps are all relatively quick and cheap to stand up and deploy to millions.
- Screen size is variable, and tasks may span screens.
- Apps are social — they know you and help you interact with your friends. Storage is cheap, data is abundant, and is available from the cloud.
- Location and context are unknown, and may not take place behind a desk.
While we’re still responsible for creating usable experiences (time on task, error rates, mental models, Fitts’s law, etc.), the bigger challenge today is to figure out exactly what provides value and delight to users. Developers need designs before the end of a two-week sprint, and it may now be as fast or faster to build experiences in an integrated team than to do visual comps. So we’re talking about UX in new ways, using new terms: Agile UX, Lean UX. We’re doing more and more lo-fi prototyping. We’re doing lots of design workshops, story mapping, ethnography. We’re doing more of everything, however we can, so that our products and designs are things people will actually want to use.
At the end of the day, practitioners are looking for ways to redefine the field, and exploring new ideas to address today’s realities and assumptions. I don’t think we should disparage these efforts. It’s a normal cycle that one sees in healthy, evolving fields.
I’m seeing that people on the leading edge of the User Experience field are having some success truly integrating user experience methodology into corporate culture and product development cycles. I don’t work for a startup. I’m not a consultant (anymore). But I’m excited because history tells me the current UX revolution will eventually become normal UX. It will eventually get to my UX team of one, nestled within an excellent team of web designers, in a conservative company, in the midwest.
In a follow-up post, we’ll apply some systems thinking to find a way to bridge the old UX and the new.
I’ve been using web browsers since Mosaic and Netscape Navigator. In my entire web browsing history, I think I’ve set my browser home page a total of 3 times. Google Instant has made me switch again…
What do you have as your browser homepage? What’s the first thing you see when you open your browser window? Facebook? Your work site? A weather page? If you think about it (and not many people do), your browser homepage setting is your way of saying, “this page represents the one thing that’s most useful to me — the one thing I want to deal with as soon as I start browsing.”
(As a side note…my wife has set her Firefox profile to start not one, but 5 pages on startup: Facebook, Feedly, NPR, Hulu, and Pandora. No matter what she’s planning on getting on the web to do, Pandora starts playing, which drives me crazy).
Clearly, I’m not the only one thinking about browser homepages. Facebook recently started asking users to make Facebook their homepage, and provided a handy bookmarklet to help users make the switch.
Back in the day, say around 1995, Yahoo was THE place to find new and interesting websites (they had a dedicated staff that searched the web by hand, and sorted sites into a directory). That was my first home page. Later, Yahoo created My Yahoo!, which aggregated a variety of sources and let you create a personalized front page. That was my second home page, but not for long. It had two problems: 1) it was slow to load, and 2) there was no way to really accurately predict what I needed to see when I started my browser. And so, probably in 2002, I just turned off browser home pages; I asked the browser to load a blank page on startup.
Google’s recent launch of Google Instant has made me change my browser home page again. I really think Google has hit a sweet spot between speed and flexibility. When I open my browser, I want it to open and be usable immediately. Just like I want my TV to come on immediately, or I want my computer to boot quickly…I don’t want to wait to start doing whatever it is I wanted to do when I double clicked the icon. A couple of quick tests indicated that the Google homepage with Instant completes page load and is usable in less than 3 seconds. Facebook, for comparison, took between 5 and 6 seconds. Doesn’t sound like much, but how many times do you open a web browser each day, or month, or year? It adds up.
But the real value to me is not the speed of loading, but that Google Instant gets me to whatever I need on the web faster, no matter where it is. I don’t have to pre-select the sites or information I want to see. Instead, the instant feedback between my search query and the results makes it seem like Google is actively helping me ask questions and find whatever information I need at the time.
Let’s look at it another way. There are a few sites — Facebook, Google Mail, My Yahoo, delicious — that I visit very frequently. But there are thousands of sites that I may visit only once, or a handful of times. Last week, for example, I wanted to find a WordPress Bundle for Textmate. I found one, and I may never land on any of those websites ever again. This is a classic Long Tail (or power law, if you’re more technical) pattern, which appears all over internet and social media behavior. The Long Tail pattern means that there’s a good chance that when I get on the internet, I may want to see one of my frequently visited sites, but that there are a lot more sites that I use on an as-needed basis, and I can’t really predict exactly which one I will want.
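The head/tail split is easy to see with a simple frequency count. Here’s a minimal sketch over a hypothetical browser history (the site names and visit counts are made up for illustration):

```python
from collections import Counter

# Hypothetical browser history: a few sites visited often, many visited once.
history = (["facebook.com"] * 40 + ["mail.google.com"] * 30 +
           ["my.yahoo.com"] * 15 + ["delicious.com"] * 10 +
           ["textmate-wiki.example"] + ["wordpress-bundle.example"] +
           ["random-howto.example"])

counts = Counter(history)
head = [site for site, n in counts.most_common() if n >= 10]
tail = [site for site, n in counts.items() if n == 1]
print("head sites:", head)       # the few sites visited frequently
print("tail sites:", len(tail))  # the many sites visited only once
```

Even in this toy example, a handful of sites account for nearly all visits, while the rest form a long tail of one-off destinations that no homepage setting could predict.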
Google Instant provides me with better tools to get at this Long Tail of sites, more quickly. It queries Google’s massive index on each keystroke, taking advantage of specific information about the context of my search: my location, my search history, and probably a whole host of other things. Because there is so little time lag between my search query, the search term suggestions, and the search results, I can quickly tell Google what I’m looking for today, and I can know immediately whether it understands.
So instead of a command and control interface, where I have to tell the browser exactly what I want to see today, by loading Google Instant as my homepage, I’m engaging in a conversation with the web. I tell it what I want to know, and it answers in a fast, seamless way. That’s going to be tough for Facebook, and others, to beat.
I knew that drawing visualizations of your Facebook friends network was fun, but I didn’t know it could get you published!
This Facebook network visualization was published in the Journal of Social Structure Visualization Symposium. As the author states, this visualization has some nice features:
- The angle of each wing is proportionate to its share of the network. Thus a partition containing 25 percent of the nodes spans from 0º to 90º.
- Partitions are distinguished by their position rather than a node’s color or shape.
- The tail indicates the periphery of each partition. A wing with many tail nodes indicates many people who are only tied to other group members.
- Edges crossing the center show between-partition connections. Since nodes are sorted by degree it is easy to see if edges originate from the most highly connected nodes or the entire partition.
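The wing-angle rule is simple proportional arithmetic. A minimal sketch, using hypothetical partition names and node counts (not the actual data from the published visualization):

```python
# Each partition's wing spans an angle proportional to its share of nodes.
# Partition names and node counts are hypothetical examples.
partitions = {"high school": 50, "university": 100, "family": 25, "work": 25}

total = sum(partitions.values())
start = 0.0
for name, count in partitions.items():
    span = 360.0 * count / total  # this partition's share of the full circle
    print(f"{name}: {start:.0f} deg to {start + span:.0f} deg")
    start += span
```

Here the “high school” partition holds 25 percent of the nodes, so its wing spans 0º to 90º, matching the rule above.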
In this network it is easy to see a strong series of linkages between high school and university as well as high school and family. There are many ties between the current co-workers and professional colleagues, and neither connects substantially to high school. While just as populous, the professional partition is far less dense than the high school partition.
This visualization is oriented towards well-connected modular networks (meaning they are easily partitioned into distinct communities). Facebook egocentered networks often have these properties, whereby each partition represents a life course stage or social context and close friends link between partitions.
This graph was generated using NodeXL, an Excel-based network tool.
The Real Life Social Network is a great presentation by Paul Adams, UX Researcher at Google, on the differences between online and offline social networks, and how those differences cause user confusion and even pain. One of the main reasons for this disconnect, he claims, is that online social networking sites tend to put all your connections in one big bucket (Friends on Facebook, Connections on LinkedIn, etc), whereas in real life, across cultures, people tend to have 4-6 relatively independent groups of connections, with 2-10 people in each group.
This sounds about right to me, but I wanted to see if I could see this in my own social network. I used the excellent Gephi network visualization and analysis tool, along with these instructions, to generate a network graph of my Facebook friends.
It looks like I’ve got about 7 discrete social networks (click the image to see more details, labels, etc):
- College friends (mostly from the Hawkeye Marching Band)
- Graduate School colleagues (fellow grad students and professors)
- Current Work Colleagues
- Church friends
- High School classmates
- Former Job Colleagues (mostly from when I worked and lived in England)
I learned a few things in doing this exercise:
- Facebook turns out to be a pretty decent proxy for my offline social network. If someone were to ask me, as Google did in their social network user research, to identify my people, place them in groups, and name the groups, this is pretty much the list I would’ve come up with.
- I’ve got more than 10 people in most of my groups. However, this graph doesn’t really take into account the strength of the connections. If I were to apply a filter to this graph that only showed people who posted on my wall, or whose wall I posted on recently, I bet the number would be much closer to 10 per group. And some of those groups would disappear. Which leads to…
- These groups and connections are dynamic. My groups, and my attention to them, wax and wane over time.
- I didn’t need Betweenness Centrality to know that my wife is the center of my world. =)
Adams goes on to describe that the web is fundamentally changing because it is becoming a web not just of documents and data, but also of people and relationships. He argues that designers must learn to build systems with these new constructs. The desktop model of one person, dealing with one system, in a cozy office environment is broken. Relationships, influence, identity, and privacy must be built into next-generation systems, from the ground up.
This is a great use of statistics by a couple of PhD candidates at New York University, identifying and quantifying some strong irregularities in the election results released by the Iranian Government.
The numbers look suspicious. We find too many 7s and not enough 5s in the last digit. We expect each digit (0, 1, 2, and so on) to appear at the end of 10 percent of the vote counts. But in Iran’s provincial results, the digit 7 appears 17 percent of the time, and only 4 percent of the results end in the number 5. Two such departures from the average — a spike of 17 percent or more in one digit and a drop to 4 percent or less in another — are extremely unlikely. Fewer than four in a hundred non-fraudulent elections would produce such numbers.
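The last-digit test itself takes only a few lines to reproduce. A minimal sketch of the idea, using made-up vote counts (not the actual Iranian data):

```python
from collections import Counter

# Hypothetical vote counts -- NOT the actual Iranian provincial results.
vote_counts = [135407, 425717, 98727, 312565, 77817, 240447, 56237, 190901]

last_digits = [n % 10 for n in vote_counts]
freq = Counter(last_digits)

# Under a fair count, each last digit should appear about 10% of the time.
for digit in range(10):
    share = freq[digit] / len(vote_counts)
    print(f"digit {digit}: {share:.0%}")
```

In this contrived example the digit 7 is wildly over-represented; the authors’ contribution was showing that the real Iranian figures deviate far enough from the expected 10 percent per digit to be statistically suspicious.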
Even more than that, this demonstrates the power of cheap, easy access to information over the internet. Two students can pinpoint problems with election results half a world away, publish them, and have them copied, shared, bookmarked, re-tweeted all over the world in a couple of hours. Using publicly available data, and free software. We haven’t even begun to figure out all the ways in which the internet, social media, open source and open access is changing the way the world works.
This is another nail in the coffin for traditional academic publishing. If and when Scacco and Beber write this up as a journal article, the results won’t be publicly available until it goes through the rigorous academic review cycle, which could be anywhere from a few months to several years. But the events in Iran are happening now. Protestors are dying now. Publishing these results in a respected newspaper gets the results out much more quickly.
Furthermore, the authors made the data and statistical analysis code available for anyone in the world to download and run (remember, it’s all free software, and free data). This means that instead of a couple of journal editors validating their work, they’ve effectively crowdsourced the review process. Brilliant. I don’t know how traditional academic journals can continue to pretend to be socially relevant in a world where anyone can make their work public, and that work can be independently verified in minutes instead of years.
This tag cloud was generated from all the paper titles that were presented at the ICWSM ’09 conference (http://www.icwsm.org). I don’t think anyone is surprised that ‘social’ is the major term.
This tag cloud was generated from my liveblog of the ICWSM conference (http://fitzgeraldsteele.wordpress.com/tag/icwsm). I think it is interesting that ‘people’ shows up as the biggest term here, whereas it hardly registers in the paper titles.
Tag clouds generated by http://www.wordle.net
Intersection of news media, technology, and the political process. Modern SM technology is a disruptive technology, similar to radio/TV in the 20th century. How does information transmitted broadly by the media interact with the personal influence arising from social networks?
SM erases the difference between global and local influence, making it more of a continuum. Speed of media reporting is increasing, contributing to a 24-hour news cycle. “A challenge to healthy discourse.” Online media also adds complexity to how political info flows through social networks.
The dynamics of the global news cycle
Examined whether the ‘news cycle’ is just a metaphorical construct, or whether it is visible in data. If it’s visible, can we measure it, describe it? Used data from Spinn3r: looked at 1M news articles and blog posts per day, from 20K sources.
What basic “units” make up the news cycle? Need some aggregate of articles that varies over the order of days, and handles half a terabyte of data. Look for “memes”: identify text fragments, phrases, and quotes that travel through many articles. They create a weighted, directed, acyclic graph of mutational variants, then delete edges of minimum total weight such that each component has a single “sink” node. This problem is NP-hard, but heuristics can be applied based on selecting a single edge out of each quote. Produces a neat stacked histogram graph that shows the relative frequencies of stories related to a particular quote over time.
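A minimal sketch of the single-edge-per-quote heuristic mentioned above: each phrase variant keeps only its best (lowest-weight) outgoing edge, so the DAG collapses into a forest where every tree has one “sink” phrase. The specific variants and edge weights here are invented for illustration:

```python
# Candidate edges: variant -> [(possible parent phrase, hypothetical weight)].
edges = {
    "lipstick on a pig tho": [("lipstick on a pig", 4)],
    "lipstick on a pig": [("you can put lipstick on a pig", 12)],
    "put lipstick on a pig": [("you can put lipstick on a pig", 8),
                              ("lipstick on a pig", 4)],
}

# Heuristic: keep exactly one outgoing edge per quote (the cheapest one).
parent = {}
for variant, candidates in edges.items():
    best, _ = min(candidates, key=lambda c: c[1])
    parent[variant] = best

def root(phrase):
    """Follow parent links up to this cluster's 'sink' node."""
    while phrase in parent:
        phrase = parent[phrase]
    return phrase

print(root("lipstick on a pig tho"))
```

With one parent per node, every variant resolves to a single sink phrase, which is what lets the stacked-histogram view group all mutations of a quote together.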
Use some analogies to describe temporal variations: e.g., species competing for resources in an ecosystem, or biological systems that synchronize to favor a small number of individuals at any point in time. A model to describe this might include: an imitation term, a recency term.
Found a 2.5 hour gap between peak intensity of the story in mainstream media, vs when it peaked in the blogs.
Can also use the data to find stories where blogs lead the media.
The spread of political messages through social networks
Might look at chain-letter petitions as ‘tracers’ through the global social network. These are good because 1) they are viral – you only get them via email, and 2) each comes with its own tracer (the signatures on it). Can’t see the full tree, but copies get posted to mailing lists, which can be found by a search engine. So they can build a partial tree, compensating for the mutations in the signature tree.
It turns out genetic mutation analogies are good…all kinds of mutations happen (people erase names, put funny names in the middle, etc).
Built the tree from two chain letters, and it looked funny. If we’re in a small-world network (six degrees of separation), why is the tree very deep and narrow, like a depth-first search tree? Possibly timing effects, assuming that nodes act on messages after some delay.
So we can make some initial analogies like mutation, biology. But these are really complex, global phenomena, that require richer models and knowledge of human behavior. Ideas from computing and online media will be crucial to the next steps.
Stochastic Models of User-Contributory Web Sites
Interested in modeling how people view and rate existing content. The talk is an extended example using Digg.
Votes on stories are a combination of visibility (do they see the story) and interest (do they like it, vote on it). In this experiment, they don’t have info on visibility so they need to model it.
Their model captures key Digg qualitative features: slow initial growth, then fast growth as a story gets more views.
A model for promotion of an article is created.
Stochastic process approach used to connect user and system behaviors. Applies to users with limited information and tasks
Personal Information Management vs. Resource Sharing: Towards a Model of Information Behavior in Social Tagging Systems
Why do people tag? Towards a model of tagging as info interaction behavior.
Is tagging a way to get around the vocabulary problem (different communities, different terms)?
Emerging tag models for language (Linguistic Tag model), function, and tag relationships. Found almost all tags relate to content, not time, task, or emotion.
TACS – a web-based tool for tag analysis
Used Amazon Mechanical Turk as a cheap way to get survey subjects, although there may be some problems (verification, biased population, platform)
Assume different motivations for tagging. Organizing your own content (PIM) vs Media and information sharing.
Designed a questionnaire for Delicious, Connotea, Flickr, and YouTube users, using a 7-point Likert scale.
Qualitative analysis showed strong differences in motivation for using different sites.
Ease of tagging not significantly different. Tagging is useful (Connotea users really think so).
Compares to Shneiderman 2002: two dimensions of social interaction (activity vs. social sphere).
In terms of IR, people thought tags on Flickr/YouTube were more helpful than on Delicious/Connotea. I’m surprised by that…I use tags on Delicious to locate information all the time. For me, it’s one of the key features. When I asked the speaker, he said his qualitative/quantitative results had no indication of that type of behavior. I think that’s really interesting. Time for a paper?
Activity Types (Cool & Belkin 2002). May be applicable, but lacks a social dimension.
Motivation, Structure, and Tenure Factors that Impact Online Photo Sharing
Why do people in online communities share? Photos, info, meta-information, code. Want to quantify drivers for sharing and actual behavior. Can look at the area in terms of WHY people share, WHAT they share, and WHERE.
Note a difference between creating and sharing. They are separate, but many studies assume creation is coupled with sharing. Looked at Flickr data; combined survey data with system reported data.
Looks at 3 factors: Motivation (Intrinsic vs Extrinsic, Self vs. others)
Structure: Number of contacts
Tenure: Years since started sharing
Looked at artifact sharing per year tenure.
I wonder why they went with shares per year, not per month. Seems like you could see really different outcomes for people that post habitually vs. people that share their one-time trip.
Commitment, Number of contacts positively correlated with sharing. Personal Enjoyment is not correlated (maybe because people motivated by creating more than sharing). Self-development is negatively correlated with sharing (maybe because they are more interested in quality than quantity). Time since first upload strongly negatively correlated with sharing (the longer you’re with a community, the less likely you are to share). Maybe because of loss of interest.
Modeling Blog Dynamics
The blogosphere is a system of interactions of posts, topics, links, etc. The purpose here is to create a generative model of the blogosphere that matches properties of the real blogosphere for prediction and motivation.
Actually 2 networks combined into one: Blog network and post network.
Goal: Model micro-level interactions to create the macro-level patterns (structure, and dynamic over time) of the blogosphere.
Structure/Topological Patterns: Power Laws (interposting time)
Temporal/Dynamic Patterns: Burstiness and Self-similarity
Proposed Model: ZC
In every timestep, for every blog, assign a state as part of an FSM, depending on how likely they will blog. If they blog, randomly decide if they will create a link to a neighbor or a ‘random blog’.
This creates a post distribution, burstiness, post popularity similar to real blogosphere.
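A minimal simulation of the state-machine loop described above. The state names, probabilities, and “neighbor” rule here are my assumptions for illustration, not the paper’s actual parameters:

```python
import random

random.seed(7)

N_BLOGS, N_STEPS = 100, 50
# All probabilities below are hypothetical choices, not the ZC model's values.
P_BLOG = {"active": 0.5, "dormant": 0.02}  # chance of posting, per state
P_WAKE, P_SLEEP = 0.1, 0.2                 # FSM state-transition chances
P_RANDOM_LINK = 0.15                       # link a random blog vs. a neighbor

states = ["dormant"] * N_BLOGS
posts = []  # (timestep, author, linked blog)

for t in range(N_STEPS):
    for b in range(N_BLOGS):
        # FSM step: dormant blogs may wake up, active blogs may go quiet.
        if states[b] == "dormant" and random.random() < P_WAKE:
            states[b] = "active"
        elif states[b] == "active" and random.random() < P_SLEEP:
            states[b] = "dormant"
        # Posting decision depends on the blog's current state.
        if random.random() < P_BLOG[states[b]]:
            if random.random() < P_RANDOM_LINK:
                target = random.randrange(N_BLOGS)  # link a random blog
            else:
                target = (b + 1) % N_BLOGS          # stand-in "neighbor"
            posts.append((t, b, target))

print(f"{len(posts)} posts over {N_STEPS} timesteps")
```

Because posting probability depends on state, and states persist across timesteps, each blog’s posts naturally come in bursts, which is the kind of burstiness the model is trying to reproduce.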
System Design and Community Culture
The role of rules and algorithms in shaping human behavior
- Lukas Biewald, Dolores Labs
- Rashmi Sinha, Slideshare
- Cameron Marlow, Facebook
Dolores Labs – Making Crowds Efficient and Reliable. They pay people to perform tasks, aka Amazon Mechanical Turk.
Slideshare – Focus on social design. Presentations are fundamentally social – you don’t make them for yourself. The social networking tools (commenting, favoriting, tagging) have led to the creation of a community.
Facebook – Runs the Data Science team, which uses machine learning and research to understand how users use the site, and that leads to design changes.
Examples of Unexpected Community Behavior?
RS: What gets spam, what does not. Particularly in their comment system. They went through lots of iterations
LB: Prompting a task affected the outcome. So now they work with people to define
What sort of design decisions are based on difficulty?
LB: Try to break a task into the smallest possible unit.
RS: Presentations are less frequent than say photos, so there are different rules. Also differentiate between user types: content creator, readers, aggregators.
CM: Facebook isn’t really designed around a task. They do lots of things to enable use at different levels.
Range of tasks across the three systems. How do you learn how social interactions change tasks?
RS: Observed real life events (people gather around a presentation). Create a unit, and a construct around that.
CM: FB tries to lower the barrier of trying new tasks. For example, someone can upload a photo, others can tag photos, add metadata, etc.
Design by Intuition vs. Design by Data. What is your approach/process in developing new features?
RS: Start with intuition, a primary hypothesis. Look at what data is out in the world. Once it’s up, there’s lots of data to see what people like, what people talk about. Also do A/B testing.
LB: Can nicely segment users along whatever dimension you want, so you have lots of options.
CM: People react to change. Some like it, some hate it. The question is what fraction of the population responds to the change.
We know you can prompt people to get certain types of behavior. How do you compensate for that?
RS: Not so worried about that — doesn’t have to be scientific. Of course, you can also do experiments to deal with it.
CM: There are many sources of bias in these large ecosystems. Important that decision makers know about them.
Community, communicate, share. What makes for successful conversation?
CM: Allow them to happen at a different scale, use aggregated tools to understand entire conversation. For example, they have a tool that can find a term/keyword across all of Facebook, as a percentage of all text. Helps them make sense (in some small way) of everything.
RS: Twitter hashtags are a really good, scalable way to communicate a topic. Well, maybe partially scalable. When a hashtag makes twitter trending topics, bots take over. But things are good up until then.
How do people discover your content, features?
RS: Email, social network links, but mainly Google search
CM: The Wall. Now have two feeds: one real-time, one algorithm-driven.
Twitter innovations: #hashtags, @replies, ReTweets – users came up with those. How do you design so that users can extend the design on their own?
RS: Initial version of Slideshare was barebones. Keep the initial design to the core, get feedback, refine. Build new features based on what works. Also, develop an API so people can extend your site.
CM: Design a platform so that people can build their own specific tools.
How do you enable the conversation/feedback between designers and community? How do you differentiate edge-case complaints vs. real problems?
LB: Designers do customer support
RS: Ditto. Also, use numbers, percentages of people that complain.
CM: Collect as many signals as possible. If something shows up across many areas, it may be a real problem.