Social media versus Oil Can Henry’s

It’s the banal that determines whether social media will succeed in the mainstream, and today I had an experience that I think demonstrates how far away we are from achieving the ubiquitously useful social media experience we deserve.

Specifically, I got my oil changed.

The epitome of banal, right?

Yeah, except, see, I don’t really know anything about cars (yeah, I’m man enough to admit it… what? What?!) — and so when the Oil Can Henry’s technician suggested that I use synthetic motor oil instead of the conventional stuff I’d been using, I had no idea what to tell him — though the significant price difference definitely put me off.

Famous 20-Point Full-Service Oil Change

Pressed for an answer, I did what anyone in this situation would do (yeah right): I posted to Twitter and CC’d Aardvark (a question-answer service that follows my tweets):

Twitter / Chris Messina: I've got ~26K miles on a 2 ...

Within seconds @vark sent me a direct message confirming that they’d received my query and were on the case:

Twitter / Direct Messages

Of course by now the attendant needed an answer — I was there for an oil change after all — and stalling until I got a definitive answer would have just been awkward.

“Sure,” I said, “what the hell.”

Then the responses started rolling in.

The first came from Derek S. on Aardvark 3 minutes later:

I’m far from a car expert, but my experience with my Honda Fit is that Hondas are generally engineered to run on the basics… regular unleaded gas, regular oil, etc. My guess is it’s probably not worth it.

Hmm, okay, that’s basically what I thought too, but it sounds like Derek knows as much about cars as I do.

Then came the first response on Twitter from Kasey Skala:

@chrismessina synthetic is for 75k+

Hmm, well, that’s pretty definitive. Guess I got punk’d.

But then more answers came in, 17 tweets in all:

Erik Marden:

@chrismessina synthetic costs more, but lasts longer. I always go for it.

Rex Hammock:

@chrismessina For the record, Castrol is 100% owned by BP. Just saying. For the record.

Nick Cairns:

@chrismessina castrol is a bp co

Jon Bringhurst:

@chrismessina If you go synthetic, keep in mind that time between oil changes can jump up to like 10k+ miles, depending on how you drive.


@chrismessina Started doing 15Kmile synthetic on my 98 Honda. Need to read up more, but think fewer oil changes = less oil used.

Mark Boszko:

@chrismessina Synthetic oil is always a good idea, in my experience. I’ve taken cars to nearly 300K miles with its help.

Sam Herren:

@chrismessina Only if you wanna keep synthetic for the rest of the time you own the car.  Can’t go back and forth.


@chrismessina I’ve heard that’s about the time to do it. Advantage = less frequent oil changes but nary any cost savings in my experience.

Frank Stallone (2, 3):

@chrismessina I put only synthetic oils in my cars — check your manual you may find you were suppose to be putting that in from the start!

@chrismessina I just looked up your car – every engine that Honda built for it should use synthetic

@chrismessina I love Amsoil the most but I’ll use Castrol and Mobile 1 any day — very trust worthy brands

B J Clark:

@chrismessina yes, go with synthetic and then only change it once every 5k – 10k miles.

Todd Zaki Warfel (2):

@chrismessina primary benefit of synthetic is if you drive hard or want to go longer on oil changes (e.g. 6-10k).

@chrismessina it’s the only thing I ran in my Mini Cooper S Works Edition (street legal race car)

Osman Ali:

@chrismessina Mobil 1

Christopher Loggins:

@chrismessina Prob too late, but Castrol Syntec is good oil. Good viscocity, temperature range, and zinc. Would use vs conventional.

I’ve captured all the responses here to give you a sense for the variety of answers I received from respondents who were all presumably unaware of each other’s responses.

If you ask me, this is a pretty good range — an excellent demonstration of both social search and distributed cognition that illustrates why “social” can’t be solved by an algorithm (this is the stuff that Brynn’s an expert on).

The reality is that my social network (including my 22,000+ Twitter followers and extended network through Aardvark) failed me. I probably made a premature decision to switch to synthetic oil — or at best, a decision without sufficient knowledge of the consequences (i.e. that once you switch, you really shouldn’t switch back). It’s not like it’s the end of the world or anything, but this is the kind of experience that I’d expect social networks to be really good at. And it’s not like I didn’t get good answers — they just weren’t there when I needed them.

And it’s all the more funny because I actually tweeted my plans two hours before I left… why didn’t the network anticipate that I might need this kind of information and prepare it in advance? Better yet: why didn’t my car tell me its opinion (I’m half serious — it should be the authority, right?)? Surely the answer I sought was out there in the world somewhere — why didn’t my network tee this up for me? (And no doubt I’m not the first person to find himself in this situation!)

The network responded, but only after it was too late. So the next time I’m confronted by a question like this, what’s the likelihood that I’ll turn to my network? What if I didn’t work on this stuff for a living?

Out of curiosity, I submitted this question to Fluther, Quora, and tried to cross-post to Facebook (since Facebook is working on its own Q&A solution) but that failed for some reason.

So far, I’ve received three responses on Fluther, none on Quora, and two on Aardvark. I also posted the full text of my question to Google and Bing but amusingly enough, only my Fluther question came up as a result.

My takeaway? We’ve certainly made progress in using social networks to answer questions, but until our networks can provide better real-time or anticipatory responses, caveat emptor still applies.

Then again, YMMV.

The social agent, part 2: Connect

Mozilla Labs Official Concept

This is the second part of the five-part Mozilla Labs Concept Series on Online Identity. This post introduces and examines the verb “Connect” as the foundation of a more personalized browser — which I outlined in Part 1: The Social Agent.

Also take a look at the rest of my mockups (view as a slideshow) or visit the project overview.

. . .

When was the last time you created a new username and password so that you could make use of some website? Do you remember what username you picked, or which email address you used to sign up? Probably. But what about that support forum that you signed up for a couple weeks ago while you were home for the holidays? Did you write it down somewhere? Or worse: did you just use the same username and password that you use everywhere else?

Spreadsheets, text files, sticky notes, cheat-sheets, software and browser extensions — you name it, people have probably found some way to recruit every kind of notational tool there is to help them remember the countless passwords, PINs, IDs, usernames, and secrets needed to access the apps, websites, and services that they use on a regular basis. But we can do better.

Step 1: Activate

The social agent is designed to unify your online social experience. With that in mind, a social agent must become an extension of you in order to mediate your online interactions.

This is achieved by activating your browser against your preferred account provider when you first begin your online session, just as you activate your mobile phone before being able to make or receive calls. This is how the browser is turned into a social agent.

By activating your browser, you are effectively telling your browser who you are and where to store and access your data online.

Account Manager - Activate a New Account

Fortunately, you can activate using any account you already have that supports a Connect API, like Twitter Connect or Facebook Connect (or soon, OpenID Connect). It is also conceivable to use the browser in an anonymous or “incognito” mode.

Step 2: Connect

Once activated, you can visit any site that supports Connect and with the click of a button, sign up and bring your profile, relationships, content, activities, and any other portable data with you. This process is identical to Facebook Connect or Twitter Connect, except that the interaction occurs between your social agent and the site you’re visiting.

What is a Connect API? Writing for the O’Reilly Radar blog in February last year, David Recordon defined the anatomy of “connect” as meeting four criteria:

  • Profile: Everything having to do with identity, account management and profile information ranging from sign in to sign out on the site I’m connecting with.
  • Relationships: Think social graph. Answers the questions of who do I know, who do I know who’s already here, and how I can invite others.
  • Content: Stuff. All of my posts, photos, bookmarks, video, links, etc that I’ve created on the site I’ve connected with.
  • Activity: Poked, bought, shared, posted, watched, loved, etc. All of the actions that things like the Activity Streams project are starting to take on.
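
To make those four categories concrete, here is a minimal sketch of the kind of data bundle a “connect” handshake might carry between your social agent and a site you are connecting with. Every field name below is hypothetical, invented for illustration rather than drawn from any actual Connect API:

```python
# Hypothetical "connect" grant: the data a social agent might release,
# organized by Recordon's four categories. All field names are invented.
connect_grant = {
    "profile": {                          # identity, account, sign in/out
        "display_name": "Chris Messina",
        "avatar": "https://example.com/avatar.jpg",
    },
    "relationships": [                    # the social graph
        {"name": "Brynn", "already_on_this_site": True},
    ],
    "content": ["posts", "photos", "bookmarks", "videos", "links"],
    "activity": ["shared", "posted", "watched", "loved"],  # stream verbs
}
```

The particular shape of the payload matters less than the principle: each category remains visible, inspectable, and revocable from the agent’s side.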

OpenID Connect

This is what the verb “connect” means for the social agent. The “connect” button communicates that your browser is going to share some amount of your profile data with the site that you’re connecting with. You’re not just signing in. You’re connecting — and creating a relationship with the site. You can of course change the data that the website gets — even after you’ve signed in — and the benefit of this model is that you have transparency into what data you’re sharing with whom.

Far from making it impossible for you to share your data, your social agent should help you mediate such decisions, guiding you about which sites to connect with and providing context to help inform your actions.

Clicking Connect pulls up a familiar browser-based UI

For this model to work, your connections are actually made between your preferred account provider and the third parties to which you’ve connected. Your account provider, then, acts as a hub for all of your online doings — collecting, maintaining, and mediating your browsing history, relationships and contacts, activities, transactions, content and media, and online profile. This provider should let you selectively configure how much, how little, or for how long your data is made available to third parties — much in the same way that you manage access on Twitter or Facebook today.

For you, this means that you get to pick an account provider of your choice — without needing to worry about remembering or managing passwords or usernames. Instead, you can have any number of accounts that are available to you wherever the web goes.

As a core feature of the social agent, connecting is the action you take whenever you want to establish an enduring, ongoing relationship with a site, service, or individual.

Designing hashtags for emergency response

I’ve been moved by the devastation wrought by the Haitian earthquake. It’s simply impossible to fathom, with death toll estimates hitting 200,000. In comparison, the Indonesian tsunami of 2004 killed nearly 230,000 people — placing it fourth among the world’s deadliest earthquakes. To give some perspective to those numbers, the atom bomb dropped on Hiroshima in 1945 killed 80,000 people instantly. These are numbers that I simply can’t grasp.

And this disaster still unfolds, with scores pitching in — many turning to the social web and social media to facilitate or amplify their efforts.

Tweak the Tweet logo

One such effort is being led by Project EPIC, a collection of information scientists, computer scientists and computational linguists at the University of Colorado at Boulder and the University of California, Irvine.

Their initiative, called Tweak the Tweet, provides a dictionary of hashtags for reporting on issues on the ground in Haiti and calling for aid. Here are templates for using their syntax:

Tweak the Tweet

I applaud their efforts and desire to help people communicate their status in a way that facilitates machine-processing. I worry, however, that this approach may limit its success.

Hashtags are metadata for humans first, machines second

The original need for hashtags came from the lack of any formal or public grouping mechanism in Twitter.

For example, when half of Silicon Valley went to SXSW and tweeted for days on end about this speaker or that panel, those who weren’t at the conference desperately wanted some way to filter out such noise. I proposed the hashmark (#) as a way of adding context to a tweet, so that people could choose for themselves to filter out or follow tweets tagged with certain keywords. In July last year, Twitter decided to hyperlink hashtags to their respective search results, and the format became widely adopted — more often than not used to game the trending topics on Twitter’s homepage.

Initially, most people thought hashtags were ugly and useless; even the folks at Twitter thought that they were unnecessary because they’d eventually develop natural language processing algorithms that would supersede the need for manual tagging. Contrary to initial complaints about their complexity, hashtags become easier to understand and use with repeated exposure and practice because they are so transparent: if you see someone use a hashtag, you know how to use a hashtag.

And so three years later, hashtags still serve a role in helping people express themselves to each other.

Keep it simple, make it memorable

Language is inherently mutable; mathematics (the language of machines) is not. Verbal language can be adapted by a speaker, and what is heard (or read) is itself interpreted; the conversion is never digital, and invariably bears some loss of meaning.

But using hashtags to clarify meaning prioritizes the needs of the machine over the capabilities of the individual.

Such imposed order in a networked environment can succeed, but only if it achieves instant, widespread adoption, and is itself superficial (that is, it doesn’t require deep knowledge to understand or use the new order). In contrast, simpler, smaller and emergent structures tend to fare better over time, but developing them is not easy (see also: slashtags).

Successful structures should also aim for minimal cognitive burden — by being easy to remember and recall in practice. I’ve frequently seen people tweet about how they “forget to use hashtags” in posts — which is not surprising, since most people don’t think about the metadata of what they say. Hashtags and slashtags are most useful, therefore, when you want to provide additional context that is harder to express otherwise.

Learning from previous efforts

The Tweak the Tweet project introduces a “new order” for using Twitter. Though the words it calls out are mostly common, the use of the hashmark seems gratuitous, given the limited length of the medium (something that Stowe Boyd points out) and that the hashed words comprise the meat of the message, rather than the meta. To give you an example, this is a Tweak-the-Tweet-formatted post (77 characters):

#haiti #offering #volunteers #translators #loc Florida #contact @FranceGlobal

The same message could be reformatted to be human-readable without any loss of meaning (72 characters):

Offering volunteer translators in Florida. Contact @FranceGlobal. #haiti

While the message may not be as machine-friendly, it may reach a wider (human) audience available to respond to this offer.
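
To make the tradeoff concrete, here is a minimal sketch of the kind of parser the tweaked syntax invites. It is a toy of my own devising, not Project EPIC’s actual code; it simply treats each hashtag as a field marker for the words that follow it:

```python
import re

# A toy Tweak-the-Tweet-style parser (my own sketch, not Project EPIC's
# code): each hashtag acts as a field marker, and the words that follow
# it become its value. Bare tags become boolean flags.
def parse_tweaked(tweet: str) -> dict:
    fields = {}
    for tag, value in re.findall(r"#(\w+)\s*([^#@]*)", tweet):
        fields[tag] = value.strip() or True
    mentions = re.findall(r"@(\w+)", tweet)
    if mentions:
        fields["contacts"] = mentions  # @handles to get in touch with
    return fields

print(parse_tweaked(
    "#haiti #offering #volunteers #translators #loc Florida #contact @FranceGlobal"
))
# -> {'haiti': True, 'offering': True, 'volunteers': True,
#     'translators': True, 'loc': 'Florida', 'contact': True,
#     'contacts': ['FranceGlobal']}
```

Notice how little the machine gains: most of the tags mark words that a human reader already understands in place.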

Now, I don’t want to dismiss this effort, but instead provide a word of caution on focus. Tweak the Tweet is not the first hashtag pidgin language I’ve seen — and previous efforts struggled to gain adoption and awareness. Perhaps by minimizing the metadata and maximizing the meat, the effort poured into this might achieve a greater effect.

Paving the cowpaths and bulldozing fields


Hashtags may never have taken off if it weren’t for Nate Ritter tweeting about the San Diego forest fire in 2007. In fact, his was the first dedicated use of a hashtag to help coordinate a response to a natural disaster:

Nate Ritter and #sandiegofire

What’s important about his use of hashtags in this case was that he was using them to communicate critical information to people in natural language. His use of the hashtag provided additional context to his followers who weren’t in San Diego, and also modeled a behavior that others could easily emulate when reporting their own news.

When I proposed using #sandiegofire as the hashtag for Nate to use, I first looked at what people were already using to tag their photos of the event on Flickr. At the time, sandiegofire was one of the trending tags, and that’s how I chose it:

Popular Tags on Flickr Photo Sharing

Had I tried to come up with my own new phrase for the event, Nate’s use of the tag may not have been picked up. #sandiegofire was also better than the alternatives, which were more localized and therefore more obscure to the broader audience. Using “SanDiego” in the tag itself helped bring clarity and context to Nate’s tweets.

Using hashtags effectively means considering the audience and their familiarity with the issue being tweeted about. While tagging lets you be as esoteric as you want, it may limit the reach of your effort, whereas paving the cowpaths means that you build on the familiar and connect with what people already know, reducing friction and inviting contribution.

iList with #ihave and #iwant

iList is an interesting service that originally aimed to take on eBay and Craigslist by leveraging social media. More recently they decided to narrow their efforts to focus on hashtag-based listings and Twitter search. Nonetheless, what I think is interesting about their approach is that it is, on the surface, quite simple.

To use the service, you just tag your tweet with #ihave or #iwant. If you want to get more detailed, you can add your zip code or categories like #forsale or #electronics. But the core service relies on using just two tags, which seem to have moderate usage — proving that getting adoption is always the hard part of any metadata-based communication strategy.

Twitter Vote Report

#votereport

The last example is very similar to Tweak the Tweet and was launched by some friends of mine. The Twitter Vote Report project was designed to enable citizens to report on their local voting situation by using a series of hashtags:

  • #[zip code] to indicate the zip code where you’re voting; ex., “#12345”
  • L:[address or city] to drill down to your exact location; ex. “L:1600 Pennsylvania Avenue DC”
  • #machine for machine problems; ex., “#machine broken, using prov. ballot”
  • #reg for registration troubles; ex., “#reg I wasn’t on the rolls”
  • #wait:[minutes] for long lines; ex., “#wait:120 and I’m coming back later”
  • #early if you’re voting before November 4th
  • #good or #bad to give a quick sense of your overall experience
  • #EP[your state] if you have a serious problem and need help from the Election Protection coalition; ex., #EPOH

All tags were optional except the #votereport tag.
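
Extracting these optional tags amounts to simple pattern matching. Here is a rough sketch (my own toy, not the Vote Report project’s actual parser; the tag patterns follow the list above):

```python
import re

# A rough sketch of extracting the optional #votereport tags above.
# This is illustrative only, not the project's real parser.
PATTERNS = {
    "zip":      r"#(\d{5})\b",       # #[zip code]
    "location": r"L:([^#]+)",        # L:[address or city]
    "machine":  r"#machine\b",
    "reg":      r"#reg\b",
    "wait":     r"#wait:(\d+)",      # minutes in line
    "early":    r"#early\b",
    "rating":   r"#(good|bad)\b",
    "ep":       r"#EP([A-Z]{2})\b",  # Election Protection state code
}

def parse_vote_report(tweet: str):
    if "#votereport" not in tweet.lower():
        return None  # the #votereport tag was the only required one
    report = {}
    for field, pattern in PATTERNS.items():
        match = re.search(pattern, tweet)
        if match:
            report[field] = match.group(1) if match.groups() else True
    return report

print(parse_vote_report("#votereport #12345 #wait:120 #good"))
# -> {'zip': '12345', 'wait': '120', 'rating': 'good'}
```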

They went through painstaking effort to mobilize people and provide alternative means of participation, did a good deal of work to report back their findings in real time (most visualizations appear to be offline), and open sourced their codebase.

They also made sure it was possible to participate without using Twitter — the hashtags were just one mechanism for getting data into the system.

Design for adoption, stay focused

Around the time it launched, Ethan Zuckerman expressed skepticism about whether Twitter was the appropriate tool for the vote report project, in much the same way I’m wondering whether Tweak the Tweet could take a more focused approach in exchange for wider participation to achieve its goals.

My greatest concern is that there won’t be enough people who can “speak” the “tweaked” syntax, leading to a lot of effort spent building parsers that will be data-starved. While trained volunteers might be able to use this syntax effectively, I wonder whether alternative approaches could mine the existing corpus of text messages and tweets coming out of Haiti (which probably aren’t geo-coded, unfortunately) to discern the typing patterns that people use naturally, and meet them there. By focusing on fewer tags that are self-evident in their meaning and use, this effort could model proper usage of the tags and make a more direct difference while there’s still time. Unless the audience of this effort is expert users, I’d suggest steering towards simplicity and ease of adoption — and being mindful that typing out a complicated machine-friendly syntax might be the last thing on the mind of someone trying to find or offer help in such a disaster.

Designing for the gut

This post has been translated to Belorussian by Patricia Clausnitzer.

I want you to watch this video from a recent Sarah Palin rally (hat tip: Marshall Kirkpatrick). It shows the “who” I’m talking about.

While you could chalk up the effect of the video to clever editing, I’ve seen similar videos that suggest that the attitudes expressed are probably a pretty accurate portrayal of how some people think (and, for the purposes of this essay, I’m less interested in what they think).

It seems to me that the people in the video largely think with their guts, and not their brains. I’m not making a judgment about their intelligence, only recognizing that they seem to evaluate the world from a different perspective than I do: with less curiosity and apparent skepticism. This approach would explain George W. Bush’s appeal as someone who “led from the gut”. It’s probably also what Al Gore was talking about in his book, The Assault on Reason.

Many in my discipline (design) tend to think of the consumers of their products as being rational, thinking beings — not unlike themselves. This seems worse when it comes to engineers and developers, who spend all of their thinking time being mathematically circumspect in their heads. They exhibit a kind of pattern blindness to the notion that some people act completely from gut instinct alone, rarely invoking their higher faculties.

How, then, does this dichotomy impact the utility or usability of products and services, especially those borne of technological innovation, given that designers and engineers tend to work with “information in the mind” while many of the users of their products operate purely on the visceral plane?

In writing about the death of the URL, I wanted to expose some consequences of this division. While the intellectually adventuresome are happy to embrace or create technology to expand and challenge their minds (the popularity and vastness of the web a testament to that fact), anti-intellectuals seem to encounter technology as though it were a form of mysticism. In contrast to the technocratic class, anti-intellectuals on the whole seem less curious about how the technology works, so long as it does. Moreover, for technology to work “well” (or be perceived to work well) it needs to be responsive, quick, and for the most part, completely invisible. A common sentiment I hear is that the less technology intrudes on their lives, the better and happier they believe themselves to be.

So, back to the death of the URL. As has been argued, the URL is ugly, confusing, and opaque. It feels technical and dangerous. And people just don’t get them. This is a sharp edge of the web that seems to demand being sanded off — because the less the inner workings of a technology are exposed in one’s interactions with it, the easier and more pleasurable it will be to operate, within certain limitations, of course. Thus to naively enjoy the web, one needn’t understand servers, DNS, ports, or hypertext — one should just “connect”, pick from a list of known, popular, “destinations”, and then point, click — point, click.

And what’s so wrong with that?

What I find interesting about the social web is not the technology that enables it, but that it bypasses our “central processor” and engages the gut. The single greatest thing about the social web is how it has forced people to overcome their technophobias in order to connect with other humans. I mean, prior to the rise of AOL, being online was something that only nerds did. Few innovations in the past have spread so quickly and irreversibly, and it’s because the benefits of the social web extend beyond the rational mind, and activate our common ancestors’ legacy brain. This widens the potential number of people who can benefit from the technology because rationality is not a requirement for use.

Insomuch as humans have cultivated a sophisticated sociality over millennia, the act of socializing itself largely takes place in the “gut”. That’s not to say that there aren’t higher order cognitive faculties involved in “being social”, but when you interact with someone, especially for the first time, no matter what your brain says, you still rely a great deal on what your gut “tells you” — and that’s not a bad thing. However, when it comes to socializing on sites like Twitter and Facebook, we’re necessarily engaging more of our prefrontal cortex to interpret our experience because digital environments lack the circumstantial information that our senses use to inform our behavior. To make up for the lack of sensory information, we tend to scan pages all at once, rather than read every word from top to bottom, looking for cues or familiar handholds that will guide us forward. Facebook (by name and design) uses the familiarity of our friends’ faces to help us navigate and cope with what is otherwise typically an information-poor environment that we are ill-equipped to evaluate on our own (hence the success of social engineering schemes and phishing).

As we redesign more of our technologies to provide social functionality, we should not proceed with the mistaken assumption that users of social technologies are rational, thinking, deliberative actors. Nor should we be under the illusion that those who use these features will care more about neat tricks that add social functionality than about the socialization experience itself. That is, technology that shrinks the perceived distance between one person’s gut and another’s, and simply gets out of the way, wins. If critical thinking or evaluation is required in order to take advantage of social functionality, the experience will feel, and thus be perceived as, frustrating and obtuse, leading to avoidance or disuse.

Given this, nowhere is the recognition of the gut more important than in the design and execution of identity technologies. And this, ultimately, is why I’m writing this essay.

It might seem strange (or somewhat obsessive), but as I watched the Sarah Palin video above, I thought about how I would talk to these people about OpenID. No doubt we would use very different words to describe the same things — and I bet their mental model of the web, Facebook, Yahoo, and Google would differ greatly from mine — but we would find common goals or use cases that would unite us. For example, I’m sure that they keep in touch with their friends and family online. Or they discover or share information — again, even if they do it differently than me or my friends do. Though we may engage with the world very differently — at root we both begin with some kind of conception of our “self” that we “extend” into the network when we go online and connect with other people.

The foundation of those connections is what I’m interested in, and why I think designing for the gut is something that technocrats must consider carefully. Specifically, when I read posts like Jesse Stay’s concept of a future without a login button, or evaluate the mockups for an “active identity client” based on information cards or consider Aza and Alex’s sketches for what identity in the browser could look like, I try to involve my gut in that “thought” process.

Now, I’m not just talking about intuition (though that’s a part of it). I’m talking about why some people feel “safer” experiencing the web with companies like Google or Facebook or Yahoo! at their side, or how frightening the web must seem when everyone seems to need you to keep a secret with them in order to do business (i.e. create a password).

I think the web must seem incredibly scary if you’re also one of those people who’s had a virus destroy their files, or who uses a computer that’s still infected and runs really slow. For people with that kind of experience as the norm, computers must seem untrustworthy or suspicious. Rationally you could try to explain to them what happened, or how the social web can be safe, but their “gut has already been made up.” It’s not a rational perception that they have of computers, it’s an instinctual one — and one that is not soon overcome.

Thus, when it comes to designing identity technologies, it’s very important that we involve the gut as a constituent of our work. Overloading the login or registration experience with choice is an engineer’s solution that I’ve come to accept is bound to fail. Instead, the act of selecting an identity to “perform as” must happen early in one’s online session — at a point in time equivalent to waking up in the morning and deciding whether to wear sweatpants or a suit and tie, depending on what is planned for the rest of the day.

Such an approach is a closer approximation to how people conduct themselves today — in the real world and from the gut — and must inform the next generation of social technologies.

Losing my religion

Last January, writing on the problem of open source design, I said:

I’ve probably said it before, and will say it again, and I’m also sure that I’m not the first, or the last to make this point, but I have yet to see an example of an open source design process that has worked.

Indeed, I’d go so far as to wager that “open source design” is an oxymoron. Design is far too personal, and too subjective, to be given over to the whims and outrageous fancies of anyone with eyeballs in their head.

Lately, I’m feeling the acute reality of this sentiment.

In 2005, I wrote about how I wanted to take an “open source” approach to the design of Flock by posting my mockups to Flickr and soliciting feedback. But that’s more about transparency than “open source”. And I think there’s a big difference between the two that’s often missed, forgotten or ignored altogether: one refers to process, the other refers to governance.

Design can be executed using secretive or transparent processes; it really can’t be “open” because it can’t be evaluated in the same way “open source” projects evaluate contributions, where solutions compete on the basis of meritocratic and objective measures. Design is sublime, primal, and intuitive, and needs consistency to succeed. Open source code, in contrast, can have many authors and be improved incrementally. Design — visual, interactive or conceptual — requires unity; piecemeal solutions feel disjointed, uncomfortable and obvious when they end up in a shipping product.

Luke Wroblewski is an interaction designer. He recently made an observation about “openness” that really resonated with me:

I read this quote last week and realized it is symptomatic of a common assertion that in technology (and especially the Web) “completely open” is better than “controlled”.

“But we’ll all know exactly where Apple stands – jealously guarding control of their users […] And that’s not what Apple should be about.” –TechCrunch

Sorry but Apple makes their entire living by tightly controlling the experience of their customers. It’s why everyone praises their designs. From top to bottom, hardware to software — you get an integrated experience. Without this control, Apple could not be what it is today.

He followed up with a post on Facebook’s design process today that I also found exceedingly compelling.

I worry about Mozilla in this respect — and all open source projects that cater to the visible and vocal, ignoring the silent or unengaged majority.

I worry about OpenID similarly — an initiative that will be essential for the future of the social web and yet is hampered by user experience issues because of an attachment to fleeting principles like “freedom” and “individual choice”. Sigh.

I’m not alone in these concerns.

When it comes to open source and design, design — and human factors, more generally — cannot play second fiddle to engineering. But far too often it seems that that’s the case.

And it shouldn’t be.

More often there should be a design dictator who enters into a situation, takes stock of the set of problems that people (read: end users) are facing, and then addresses them through observation, skill, intuition, and drive. You can evaluate their output with surveys, heuristics, and user studies, but without their vision, execution, and insane devotion to seeing it through, you’ll never see shit get done right.

As Luke says, “Most people out there prefer a great experience over complete openness.”

I concur. And I think it’s critical that “open source” advocates (myself included) keep that top of mind.

. . .

I will say this: I’m an advocate for open source and open standards because I believe that open ecosystems — i.e. those with low barriers to entry (low startup costs; low friction to launch; public infrastructure for sustaining productivity) — are essential for competition at the level of user experience.

It may seem paradoxical, but open systems in which secretive design processes are used can result in better solutions, overall.

Thus when I talk about openness, I really mean openness from an economic/competitive perspective.

. . .

Earlier today I needed access to a client’s internal wiki. Having gone without access for a week, I decided to toss up a project on Basecamp to get things started.

When I presented my solution to the team, I was told that we needed to use something open source that could be hosted on their servers. Somewhat taken aback, I suggested that Basecamp was the best tool for the job given our approaching deadline.

“No, no, that won’t do,” was the message I got. “Has to be open source. Self-hosted.”

I asked them for alternatives. “PHProjekt”. “Double Choco Latte”. I proposed Open Atrium.

Once again, as seems all too common lately, more time was devoted to picking a tool than to producing solutions. More meta than meat. Worst of all, religion was in the driver’s seat, rather than reality. Where was that open source pragmatism I’d heard so much about?

Anyway, not how I want to begin a design process.

Ultimately, I got the access I needed — to MediaWiki. So, warts and all, we’ll be using that to collaborate. On a closed intranet.

In the back of my head, I can’t help but fear that the tools used for design collaboration bleed into the output. To my eyes, MediaWiki isn’t a flavor that I want stirred into the pot. And it raises the question once and for all: what good can “open source” bring to design if the only result is the product of committee dictate?

Portable Profiles & Preferences on the Citizen-Centric Web

Loyalty Cards by Joe Loong

Let me state the problem plainly: in order to provide better service, it helps to know more about your customer, so that you can more effectively anticipate and meet her needs.

But, pray tell, how do you learn about or solicit such information over the course of your first interaction? Moreover, how do you go about learning as much as you can, as quickly as you can, without making the request itself burdensome and off-putting?

Well, as obvious as it seems, the answer is to let her tell you.

The less obvious thing is how.

And that’s where user-centric (or citizen-centric) technologies offer the most promise.

It’s like this:

  • If you let someone use an account or ID that they already use regularly elsewhere, you will save them the hassle of having to create yet another account that works solely with your service;
  • following on that, an account that is reusable is more valuable, and its value can be further increased by attaching certain types of profile attributes to it that are commonly requested;
  • the more common it becomes to reuse an account, the more people will expect this convenience during new sign-up experiences, ideally to the point of knowing how to ask the services that they use to support their preferred sign-in mechanism;
  • presuming that service providers’ desire for profile information and preferences will not decrease, it will become an added byproduct of user-centric authentication to be able to import such data from identity providers as it is available;
  • as customers realize the convenience of portable profile and preference data, savvy identity providers will make it easier to store and express a wider array of this data, and will subsequently work with relying parties to develop interoperable sign up flows and on ramps (see Google and Plaxo).
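
To illustrate the last two points, here is a hedged sketch of what importing portable profile data might look like. The endpoint shape, the token handling, and the fetch_profile function are all invented for illustration; only the attribute URIs follow the real axschema.org convention used by OpenID Attribute Exchange:

```python
import json
import urllib.parse
import urllib.request

# Attribute type URIs follow the axschema.org convention used by OpenID
# Attribute Exchange; everything else here (the endpoint shape, the
# token, fetch_profile itself) is invented for illustration.
REQUESTED_ATTRIBUTES = [
    "http://axschema.org/namePerson",           # full name
    "http://axschema.org/contact/email",        # email address
    "http://axschema.org/media/image/default",  # avatar
]

def fetch_profile(provider_endpoint: str, user_token: str) -> dict:
    """Ask the user's chosen identity provider for the attributes she
    has agreed to share. Real protocols (OpenID AX, OAuth-based APIs)
    wrap this exchange in signatures and consent screens."""
    query = urllib.parse.urlencode({
        "token": user_token,
        "attributes": ",".join(REQUESTED_ATTRIBUTES),
    })
    with urllib.request.urlopen(f"{provider_endpoint}?{query}") as response:
        return json.load(response)
```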

For this to work, the individual must be motivated to manage her profile information and preferences, which shouldn’t be hard as her data becomes increasingly reusable (sort once, reuse everywhere). Additionally, organizing, maintaining, and accruing this information becomes less onerous when it’s all in one place (or conveniently accessible through one central customer-picked source), as opposed to sharded across many accounts and unaffiliated services.

You can get similar functionality with form-filling software like 1Password, except that in the model I’m describing the data travels with you — beyond the browser and off the desktop — to wherever you need it, because it is stored in the cloud.

As it becomes easier to store and share this information, I think more people will do this as a happenstance of using more social software — and will become acclimated to providing their friends and service providers with varying degrees of access to increasing amounts of personally describing data.

Companies that jump on this and make it easier for people to manage their profile and preference data will benefit — having access to more accurate, timely, and better-maintained information, leading to more personalized user experiences and accelerating the path to satisfaction.

Companies that do get this right will benefit from what is emerging as a new social contract. As a citizen of the web: if you let me manage my relationship with you, make it easy for me to do so, and give me the choice of how and where I store my profile and preference data, then I’ll be more likely, more willing, and more able to share it with you on an ongoing basis, especially as you use it to improve my experiences with you.

My name is not a URL

Twitter / Mark Zuckerberg: Also just created a public ...

Arrington has a post that claims that Facebook is getting wise to something MySpace has known from the start – users love vanity URLs.

I don’t buy it. In fact, I’m pretty sure that the omission of vanity URLs on Facebook is an intentional design decision from the beginning, and one that I’ve learned to appreciate over time.

From what I’ve gathered, it was co-founder Dustin Moskovitz’s stubbornness that kept Facebook from allowing the use of pseudonymic usernames common on previous-generation social networks like AOL. Considering that Mark Zuckerberg’s plan is to build an online version of the relationships we have in real life, it only makes sense that we should, therefore, call our friends by their IRL names — not the ones left over or suggested by a computer.

But there’s actually something deeper going on here — something that I talked about at DrupalCon — because there are at least two good uses for letting people set their own vanity URLs — three if your service somehow surfaces usernames as an interface handle:

  1. Uniqueness and remembering
  2. Search engine optimization
  3. Facilitating member-to-member communication (as in the case of Twitter’s @replies)

For my own sake, I’ve lately begun decreasing the distance between my real identity and my online persona, switching from @factoryjoe to @chrismessina on Twitter. While there are plenty of folks who know me by my digital moniker, there are far more who don’t and shouldn’t need to in order to interact with me.

When considering SEO, it’s quite obvious that Google has already picked up on the correlation:

chris messina - Google Search

Ironically, in Dustin’s case (intentionally or not) he is not an authority for his own name on Google (despite the uniqueness of his name). Instead, semi-nefarious sites like Spock use SEO to get prominent placement for Dustin’s name (whether he likes it or not):

Dustin Moskovitz - Google Search

Finally, in cases like Twitter, IM or IRC, nicknames or handles are used explicitly to refer to other people on the system, even if (or especially if!) real identities are never revealed. While this separation can afford a number of perceived benefits, long-term it’s hard to quantify the net value of pseudonymity when most assholes on the web seem to act out most aggressively when shrouding their real names.

By shunning vanity URLs for its members, Facebook has achieved three things:

  1. Establishes a new baseline for transparent online identity
  2. Avoids the naming collision problem by scoping relationships within a person’s [reciprocal] social graph
  3. Upgrades expectations for human interaction on social websites

Because everyone on Facebook has to use their real name (and Facebook will root out and disable accounts with pseudonyms), there’s a higher degree of accountability: legitimate users are forced to reveal who they are offline. No more “funnybunny345” or “daveman692” creeping around and leaving harassing wall posts on your profile; you know exactly who left the comment because their name is attached to their account.

Go through the comments on TechCrunch and compare those left by Facebook users with those left by everyone else. In my brief analysis, Facebook commenters tend to take their commenting more seriously. It’s not a guarantee, but there is definitely a correlation between durable identity and higher quality participation.

Now, one might point out that, without unique usernames, you’d end up with a bunch of name collisions — and you’d be right. However, combining search-by-email with profile photos largely eliminates this problem, and since Facebook requires bidirectional friendship confirmation, it’s going to be hard to get the wrong “Mike Smith” showing up in your social graph. So instead of futzing with (and probably forgetting) what strange username your friend uses, you can just search by (concept!) their real name using Facebook’s type-ahead find. And with autocompletion, you’ll never spell it wrong (of course Gmail has had this for ages as well).
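
As a toy illustration of why collisions are tolerable (all names and data below are invented), consider type-ahead find scoped to a reciprocal graph:

```python
# A toy illustration of graph-scoped type-ahead find: instead of unique
# usernames, names are disambiguated by searching only within your own
# confirmed (reciprocal) friends. All data here is invented.
friends = [
    {"name": "Mike Smith",  "email": "msmith@example.com"},
    {"name": "Mike Smythe", "email": "mike@example.org"},
    {"name": "Sara Chen",   "email": "sara@example.net"},
]

def type_ahead(prefix: str, graph: list) -> list:
    """Return friends any of whose name parts start with the prefix."""
    p = prefix.lower()
    return [
        friend for friend in graph
        if any(part.lower().startswith(p) for part in friend["name"].split())
    ]

print([f["name"] for f in type_ahead("Mi", friends)])
# -> ['Mike Smith', 'Mike Smythe']; collisions are acceptable because
# profile photos and search-by-email disambiguate within a small graph.
```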

Let me make a logical leap and point out that this is the new namespace — the human-friendly namespace — that Tim O’Reilly observed emerging when he defined Web 2.0, pointing out that a future source of lock-in would be “owning a namespace”. This is why location-based services are so hot. This is also why it matters who gets out in front first by developing a database of places named by humans — rather than by their official names. When it comes to search, search will get better when you can bound it — to the confluence of your known world and the known/colloquial world of your social graph.

When I was in San Diego a couple weeks back, it dawned on me that if I searched for “Joe’s Crab Shack”, no search engine on earth would be able to give me a satisfying result… unless it knew where I was. Or where I had been. Or, where my friends had been. This is where social search and computer-augmented social search becomes powerful (see Aardvark). Not just that, but this is where owning a database of given names tied to real things becomes hugely powerful (see Foursquare). This is where social objects with human-given names become the spimatic web.

So, as this plays out, success will find the designer who most nearly replicates the offline world online. Consider:

Twitter / Rear Adm. Monteiro: @mat and I are in the back ...


Facebook | @replies




Facebook Chat

Ignoring content, it seems to me that the latter examples are much easier to grok without knowing anything about Facebook or Twitter — and are much closer approximations of real life.

Moreover, in EventBox, there is evidence that we truly are in a transitional period, where a large number of people still identify themselves or know their friends by usernames, but an increasing number of newcomers are more comfortable using real names:

Eventbox Preferences

We’re only going to see more of this kind of thing, where the data-driven design approach will give way to a more overall humane aesthetic. It begins by calling people by the names we humans prefer to — and will always — use. And I think Facebook got it right by leaving out the vanity URLs.