Careful readers would understand that I said that funneling all user authentication (and thus the storage of all identities) through a single provider would be evil. I don’t care who that provider might be — but centralizing so much control — the fate of our collective digital existences! — in the hands of a single entity simply cannot be permitted.
Simplicity: I have to admit that Facebook impressed me with how simple they’ve made it to integrate with their platform, and how clear the value proposition is. From launching OAuth 2.0 (rather aggressively, since the standards process hasn’t even completed yet!) to removing the 24-hour caching policy, Facebook made considerable changes to their developer platform to ease adoption and integration and to promote implementation. This sets the bar for how easy (ideally) technologies like OpenID and ActivityStreams need to become.
Avoiding NIH (mostly): In particular, Facebook dispensed with their own proprietary authorization protocol and went with the emerging industry standard (OAuth 2.0). I hope that this move reduces complexity and friction for developers implementing secure protocols, increases the number of available high-quality OAuth libraries, and leads to fewer new developers needing to figure out signatures and crypto, when sometimes even the experts get these things wrong. By standardizing on OAuth, we’re within range of dispensing with passwords once and for all (…okay, not quite).
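To make the difference concrete, here’s a rough sketch in Python — with invented parameter values, and simplified well below what the full OAuth 1.0 spec actually requires — of the request-signing ritual developers previously had to get exactly right, versus the bearer-token style that OAuth 2.0 over HTTPS has been converging on:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote, urlencode

# OAuth 1.0-style request signing (simplified sketch): every request needs
# a correctly normalized base string and an HMAC-SHA1 signature, which is
# exactly the sort of thing homegrown implementations get subtly wrong.
def sign_oauth1(method, url, params, consumer_secret, token_secret=""):
    base_string = "&".join([
        method.upper(),
        quote(url, safe=""),
        quote(urlencode(sorted(params.items())), safe=""),
    ])
    key = f"{quote(consumer_secret, safe='')}&{quote(token_secret, safe='')}"
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# OAuth 2.0 over HTTPS: the per-request signature disappears entirely;
# the client attaches a token and lets TLS handle integrity.
def oauth2_header(access_token):
    return {"Authorization": f"Bearer {access_token}"}
```

The first function is the kind of code where percent-encoding rules, parameter ordering, and key construction trip up even experts; the second is a one-liner.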
Giving credit: I also think that Facebook deserves credit for giving credit to projects like Dublin Core, link-rel canonical, Microformats, and RDFa in their design of the Open Graph Protocol. I’ve seen many other efforts that start from scratch when plenty of other initiatives already exist simply because they’re unaware or don’t do their homework (one of which is the OpenLike effort!). I’m not sure I agree with the parts that Facebook extracted from these efforts, but as David Recordon said, we can fight over “where the quotes and angle-brackets should go”, but at the end of the day, they still shipped something that net-net increases the amount of machine-readable data on the web. And if they’re sincere in their efforts, this is just the beginning of what may emerge as a much wider definition of how more parties can both contribute to — and benefit from — the protocol.
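For the curious: the Open Graph Protocol borrows RDFa’s property/content attributes on plain meta tags, which makes it trivial to consume. A quick sketch — the sample head follows the movie example from the protocol’s documentation, and the parser is my own toy, not anything Facebook ships:

```python
from html.parser import HTMLParser

# A hypothetical page head using Open Graph Protocol <meta> tags:
# RDFa-style property/content attribute pairs.
SAMPLE_HEAD = """
<head>
  <meta property="og:title" content="The Rock" />
  <meta property="og:type" content="video.movie" />
  <meta property="og:url" content="http://example.com/rock" />
</head>
"""

class OGParser(HTMLParser):
    """Collect every og:* property from <meta> tags into a dict."""
    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            prop = d.get("property", "")
            if prop.startswith("og:"):
                self.og[prop] = d.get("content", "")

parser = OGParser()
parser.feed(SAMPLE_HEAD)
```

Three lines of markup, and any scraper with an HTML parser can lift structured data off the page — which is exactly the “net-net more machine-readable data” point.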
Open licensing: Now that I’ve been involved in this area for a longer period of time, I’ve learned a simple truth: it’s hard to give things away, especially if you want other people to use them, even more so when some of those potential users are competitors. But, that’s why the Open Web Foundation was created, and why David and I are board members. After setting up foundations over and over again, we decided that it needed to be easier to do! Now all the hard work of the Open Web Foundation’s legal committee is starting to pay off, and I am quite satisfied that Facebook has validated this effort. We’re still so early in the process that it’s not entirely clear how to make use of the Open Web Foundation’s agreement, but surely this will motivate us to find our own Creative Commons-like approach to proclaiming support for open web licensing on individual projects.
So, while I still have my reservations about Facebook’s master plan, they did do a number of things right — not everything — but I’m a tough customer to please. When it comes to the identity stuff, I’m definitely nonplussed, but that’s where my ideology and their business needs collide — and I get it.
What this means is that we all need to show more hustle out on the field and get serious. With Facebook’s Hail Mary at F8, we just got set back a touchdown, and a field goal just ain’t gonna cut it.
In fact, I’d argue that Buzz is as much about Google creating a new channel for conversation in a familiar place as it is about how we’re going about building its public developer surfaces. Although today’s Buzz API only offers a real-time read-only activity stream, the goal is to move quickly towards implementing a host of other technologies — most of which should be familiar to readers of this blog.
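ActivityStreams, for example, models every event as an actor/verb/object triple serialized as JSON. A rough sketch of what one such entry might look like — the field names follow the Activity Streams JSON drafts, but the helper function and all of the names and URLs here are invented for illustration:

```python
import json
from datetime import datetime, timezone

# Build a minimal activity in the actor/verb/object shape used by the
# Activity Streams JSON drafts. Every value here is a placeholder.
def make_activity(actor_name, verb, object_url, object_type="article"):
    return {
        "actor": {"objectType": "person", "displayName": actor_name},
        "verb": verb,
        "object": {"objectType": object_type, "url": object_url},
        "published": datetime.now(timezone.utc).isoformat(),
    }

activity = make_activity("Chris", "post", "http://example.com/note/1")
serialized = json.dumps(activity)  # ready to flow down a stream endpoint
```

The point of the format is that a consumer needn’t know anything about the producing site: “someone did something to some object” generalizes across every silo.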
As Kevin Marks observes, in order to address the mess of the social web that Mike Arrington described, we need widespread use [of common standards] so that we can generalize across sites — and thus enable people to interact and engage across the web, rather than being restricted to any particular silo of activity — which may or may not reflect their true social configuration.
In other words, standards — and in particular social web standards — are the lingua franca that make it possible for uninitiated web services to interact in a consistent manner. When web services use standards to commoditize essential and basic features, it forces them to compete not with user lock-in, but by providing better service, better user experience, or with new functionality and utility. I am an advocate of the open web because I believe the open web leads to increased competition, which in turn affords people better options, and more leverage in the world.
Buzz is both a terrific product, and a great example of how the social web is evolving and becoming truly ubiquitous. Buzz is simply one more stitch in the fabric of the social web.
While you could chalk up the effect of the video to clever editing, I’ve seen similar videos that suggest that the attitudes expressed are probably a pretty accurate portrayal of how some people think (and, for the purposes of this essay, I’m less interested in what they think).
It seems to me that the people in the video largely think with their guts, and not their brains. I’m not making a judgment about their intelligence, only recognizing that they seem to evaluate the world from a different perspective than I do: with less curiosity and apparent skepticism. This approach would explain George W. Bush’s appeal as someone who “lead from the gut”. It’s probably also what Al Gore was talking about in his book, The Assault on Reason.
Many in my discipline (design) tend to think of the consumers of their products as being rational, thinking beings — not unlike themselves. This seems worse when it comes to engineers and developers, who spend all of their thinking time being mathematically circumspect in their heads. They exhibit a kind of pattern blindness to the notion that some people act completely from gut instinct alone, rarely invoking their higher faculties.
How, then, does this dichotomy impact the utility or usability of products and services, especially those borne of technological innovation, given that designers and engineers tend to work with “information in the mind” while many of the users of their products operate purely on the visceral plane?
In writing about the death of the URL, I wanted to expose some consequences of this division. While the intellectually adventuresome are happy to embrace or create technology to expand and challenge their minds (the popularity and vastness of the web being a testament to that fact), anti-intellectuals seem to encounter technology as though it were a form of mysticism. In contrast to the technocratic class, anti-intellectuals on the whole seem less curious about how the technology works, so long as it does. Moreover, for technology to work “well” (or be perceived to work well) it needs to be responsive, quick, and for the most part, completely invisible. A common sentiment I hear is that the less technology intrudes on their lives, the better and happier they believe themselves to be.
So, back to the death of the URL. As has been argued, the URL is ugly, confusing, and opaque. It feels technical and dangerous. And people just don’t get them. This is a sharp edge of the web that seems to demand being sanded off — because the less the inner workings of a technology are exposed in one’s interactions with it, the easier and more pleasurable it will be to operate, within certain limitations, of course. Thus to naively enjoy the web, one needn’t understand servers, DNS, ports, or hypertext — one should just “connect”, pick from a list of known, popular, “destinations”, and then point, click — point, click.
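For a sense of just how much machinery a single address bundles together, Python’s standard library can pick one apart (the URL itself is made up):

```python
from urllib.parse import urlsplit

# Every URL packs several technical concepts into one string: a scheme
# (protocol), a hostname (resolved via DNS), an optional port, a path
# on the server, and a query string.
parts = urlsplit("https://www.example.com:8080/articles/death-of-the-url?ref=home")

print(parts.scheme)    # "https"
print(parts.hostname)  # "www.example.com"
print(parts.port)      # 8080
print(parts.path)      # "/articles/death-of-the-url"
print(parts.query)     # "ref=home"
```

Five distinct concepts in one string that people are expected to read, type, and trust. Small wonder the instinct is to sand it off.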
And what’s so wrong with that?
What I find interesting about the social web is not the technology that enables it, but that it bypasses our “central processor” and engages the gut. The single greatest thing about the social web is how it has forced people to overcome their technophobias in order to connect with other humans. I mean, prior to the rise of AOL, being online was something that only nerds did. Few innovations in the past have spread so quickly and irreversibly, and it’s because the benefits of the social web extend beyond the rational mind, and activate our common ancestors’ legacy brain. This widens the potential number of people who can benefit from the technology because rationality is not a requirement for use.
Insomuch as humans have cultivated a sophisticated sociality over millennia, the act of socializing itself largely takes place in the “gut”. That’s not to say that there aren’t higher order cognitive faculties involved in “being social”, but when you interact with someone, especially for the first time, no matter what your brain says, you still rely a great deal on what your gut “tells you” — and that’s not a bad thing. However, when it comes to socializing on sites like Twitter and Facebook, we’re necessarily engaging more of our prefrontal cortex to interpret our experience because digital environments lack the circumstantial information that our senses use to inform our behavior. To make up for the lack of sensory information, we tend to scan pages all at once, rather than read every word from top to bottom, looking for cues or familiar handholds that will guide us forward. Facebook (by name and design) uses the familiarity of our friends’ faces to help us navigate and cope with what is otherwise typically an information-poor environment that we are ill-equipped to evaluate on our own (hence the success of social engineering schemes and phishing).
As we redesign more of our technologies to provide social functionality, we should not proceed with the mistaken assumption that users of social technologies are rational, thinking, deliberative actors. Nor should we be under the illusion that those who use these features will care more about neat tricks that add social functionality than the socialization experience itself. That is, technology that shrinks the perceived distance between one person’s gut and another’s and simply gets out of the way, wins. If critical thinking or evaluation is required in order to take advantage of social functionality, the experience will feel, and thus be perceived as, frustrating and obtuse, leading to avoidance or disuse.
Given this, nowhere is the recognition of the gut more important than in the design and execution of identity technologies. And this, ultimately, is why I’m writing this essay.
It might seem strange (or somewhat obsessive), but as I watched the Sarah Palin video above, I thought about how I would talk to these people about OpenID. No doubt we would use very different words to describe the same things — and I bet their mental model of the web, Facebook, Yahoo, and Google would differ greatly from mine — but we would find common goals or use cases that would unite us. For example, I’m sure that they keep in touch with their friends and family online. Or they discover or share information — again, even if they do it differently than me or my friends do. Though we may engage with the world very differently — at root we both begin with some kind of conception of our “self” that we “extend” into the network when we go online and connect with other people.
Now, I’m not just talking about intuition (though that’s a part of it). I’m talking about why some people feel “safer” experiencing the web with companies like Google or Facebook or Yahoo! at their side, or how frightening the web must seem when everyone seems to need you to keep a secret with them in order to do business (i.e. create a password).
I think the web must seem incredibly scary if you’re also one of those people who’s had a virus destroy your files, or who uses a computer that’s still infected and runs really slow. For people with that kind of experience as the norm, computers must seem untrustworthy or suspicious. Rationally you could try to explain to them what happened, or how the social web can be safe, but their “gut has already been made up.” It’s not a rational perception that they have of computers, it’s an instinctual one — and one that is not soon overcome.
Thus, when it comes to designing identity technologies, it’s very important that we involve the gut as a constituent of our work. Overloading the login or registration experience with choice is an engineer’s solution that I’ve come to accept is bound to fail. Instead, the act of selecting an identity to “perform as” must happen early in one’s online session — at a point in time equivalent to waking up in the morning and deciding whether to wear sweatpants or a suit and tie depending on whatever is planned for the rest of the day.
Such an approach is a closer approximation to how people conduct themselves today — in the real world and from the gut — and must inform the next generation of social technologies.
18 programmable mouse buttons with double-click functionality
Three different button modes: Key, Keypress, and Macro
Analog Xbox 360-style joystick with optional 4, 8, and 16-key command modes
Clickable scroll wheel
512k of flash memory
63 on-mouse application profiles with hardware, software, and autoswitching capability
1024-character macro support
50 bazillion dingbats
Adjustable resolution from 400 to 1,600 CPI
8,000,000. Nothing specific, just… 8,000,000.
Support for Comic Sans
20 default profiles for popular games and applications, including Adobe Photoshop, the Gnu Image Manipulation Program, World of Warcraft, and the Call of Duty series.
I’ve decided that rejecting this product out of hand wouldn’t be fair. As much as I’m itchin’ to. And, well, since I’m trying to be more positive these days, I’ll see if I can be more rational in my constructive criticism.
The first thing that needs to be understood about this mouse is that it’s explicitly not for everyone. It was designed by a game designer, largely for game players. Another way to think of it is as the twelve-sided die to your standard six. In the course of designing and developing the product, it quickly became apparent that many non-gaming applications would also benefit from having dozens of commands accessible directly from the mouse, especially in navigating the bajillion dropdown menus that spawn in office productivity apps like OpenOffice, or rotating 3D shapes in apps like 3D Studio Max.
The settings for the Magic Mouse, in contrast, are visual, approachable, and show the user exactly how it works with an embedded video:
And while the Magic Mouse can be picked up and grokked nearly instantaneously (though it sucks that right-click is disabled by default), the OpenOfficeMouse requires about two days of acclimation according to the FAQ.
At base, these products represent polar opposite ends of the spectrum: Apple prefers to hide complexity within the technology whereas the open source approach puts the complexity on the surface of the device in order to expose advanced functionality and greater transparency into how to directly manipulate the device. Put another way, the reason that people would buy the $69 Apple Magic Mouse is because they want Apple’s designers to just “figure it out” for them, and provide them with an instantly-usable product. The reason why someone would pay $75 for this mouse is because it strictly keeps all the decision-making about what the mouse does in the hands (pun intended?) of the purchaser.
What I worry about, however, is that pockets of the open source community continue to largely be defined and driven by complexity, exclusivity, technocracy, and machismo. While I do support independence and freedom of choice in technology — and therefore open source — I prefer to do so inclusively, with an understanding that there are many more people who are not yet well served by technology because appropriate technology has not been made more usable for them. The beautiful, usable technology in the marketplace need not be the exclusive domain of the proprietary — but so far I’ve seen little indication that open source developers take seriously the need for simpler, easier, and more intuitive future-forward interfaces. Perhaps I’m wrong or just uninformed, but so long as products like the OpenOfficeMouse continue to characterize the norm in open source design, I’m unlikely to be able to recommend open source solutions anytime soon to anyone but the most advanced and privileged users.
I’ve probably said it before and will say it again, and I’m sure I’m not the first, nor the last, to make this point: I have yet to see an example of an open source design process that has worked.
Indeed, I’d go so far as to wager that “open source design” is an oxymoron. Design is far too personal, and too subjective, to be given over to the whims and outrageous fancies of anyone with eyeballs in their head.
Lately, I’m feeling the acute reality of this sentiment.
Design can be executed using secretive or transparent processes; it really can’t be “open” because it can’t be evaluated in the same way “open source” projects evaluate contributions, where solutions compete on the basis of meritocratic and objective measures. Design is sublime, primal, and intuitive, and needs consistency to succeed. Open source code, in contrast, can have many authors and be improved incrementally. Design — visual, interactive, or conceptual — requires unity; piecemeal solutions feel disjointed, uncomfortable, and obvious when they end up in a shipping product.
I read this quote last week and realized it is symptomatic of a common assertion that in technology (and especially the Web) “completely open” is better than “controlled”.
“But we’ll all know exactly where Apple stands – jealously guarding control of their users […] And that’s not what Apple should be about.” –TechCrunch
Sorry, but Apple makes their entire living by tightly controlling the experience of their customers. It’s why everyone praises their designs. From top to bottom, hardware to software, you get an integrated experience. Without this control, Apple could not be what it is today.
I worry about Mozilla in this respect — and all open source projects that cater to the visible and vocal, ignoring the silent or unengaged majority.
I worry about OpenID similarly — an initiative that will be essential for the future of the social web and yet is hampered by user experience issues because of an attachment to fleeting principles like “freedom” and “individual choice”. Sigh.
When it comes to open source and design, design — and human factors, more generally — cannot play second fiddle to engineering. But far too often it seems that that’s the case.
And it shouldn’t be.
More often there should be a design dictator that enters into a situation, takes stock of the set of problems that people (read: end users) are facing, and then addresses them through observation, skill, intuition, and drive. You can evaluate their output with surveys, heuristics, and user studies, but without their vision, execution, and insane devotion to seeing it through, you’ll never see shit get done right.
As Luke says, “Most people out there prefer a great experience over complete openness.”
I concur. And I think it’s critical that “open source” advocates (myself included) keep that top of mind.
. . .
I will say this: I’m an advocate for open source and open standards because I believe that open ecosystems — i.e. those with low barriers to entry (low startup costs; low friction to launch; public infrastructure for sustaining productivity) — are essential for competition at the level of user experience.
It may seem paradoxical, but open systems in which secretive design processes are used can result in better solutions, overall.
Thus when I talk about openness, I really mean openness from an economic/competitive perspective.
. . .
Earlier today I needed access to a client’s internal wiki. Having gone without access for a week, I decided to toss up a project on Basecamp to get things started.
When I presented my solution to the team, I was told that we needed to use something open source that could be hosted on their servers. Somewhat taken aback, I suggested Basecamp was the best tool for the job given our approaching deadline.
“No, no, that won’t do,” was the message I got. “Has to be open source. Self-hosted.”
Once again, as seems all too common lately, more time was devoted to picking a tool rather than producing solutions. More meta than meat. Worst of all, religion was in the driver’s seat, rather than reality. Where was that open source pragmatism I’d heard so much about?
Anyway, not how I want to begin a design process.
Ultimately, I got the access I needed — to MediaWiki. So, warts and all, we’ll be using that to collaborate. On a closed intranet.
In the back of my head, I can’t help but fear that the tools used for design collaboration bleed into the output. To my eyes, MediaWiki isn’t a flavor that I want stirred into the pot. And it raises the question once and for all: what good can “open source” bring to design if the only result is the product of committee dictate?
I’d like to add my voice to the stream of complaints about the iPhone App Store, but before I say anything critical, I have to promise one thing. No matter how annoyed I get, I will not stop developing for Apple’s platforms or using Apple’s products as long as they continue to produce the best stuff on the market. I never forget how deeply Apple cares about making their users happy, and that counts more than how they treat their developers. Besides, when I have a problem with a friend, I don’t threaten to boycott our friendship until they change, so I’m not going to do that to Apple either.
Having said that, I have only one major complaint with the App Store, and I can state it quite simply: the review process needs to be eliminated completely.
Does that sound scary to you, imagining a world in which any developer can just publish an app to your little touch screen computer without Apple’s saintly reviewers scrubbing it of all evil first? Well, it shouldn’t, because there is this thing called the World Wide Web which already works that way, and it has served millions and millions of people quite well for a long time now.
He goes on to discuss the gargantuan task of having to effectively evaluate the thousands of apps that are submitted each week to the App Store — pointing out that the app developers themselves would be more effective at diagnosing and remedying bugs than the Apple reviewers. He suggests that the review process is really in place to ensure agreement with Apple’s terms of service, rather than to benefit the end user, a point he makes in a series of tweets (best read bottom to top):
He concludes his post thus:
If you think that all apps should be held prisoner by Apple until proven safe, you should also be able to convince yourself that this is how the web should work. Perhaps I am just spoiled by my many years of web development. The next time I create a web app I will probably feel a little guilty when I upload the files to my web server, knowing that I didn’t have to ask the web police to review the app first to make sure I wasn’t evil.
Given that Joe works at Facebook and Facebook just hired David Recordon, it’s interesting to watch how Facebook itself wrestles with the yin-yang of the open versus closed models of innovation and design, at times at polar opposite ends of the same spectrum. Facebook has assembled a team of really smart people to lead their platform efforts — many of whom have worked on open source projects in the past (Joe, Mike Schroepfer and Blake Ross all worked on Firefox, to name a few). Meanwhile, my good friend and Facebook platform manager, Dave Morin, hails from Apple — and the Jobsonian influence runs deep in him.
You can see the push-and-pull of these influences throughout the Facebook platform and its products.
On the one hand, Facebook talks about itself as though it were an “open source” company — bringing light to the dark realm of social software. On the other, Facebook Connect prioritizes a singular user experience that eliminates choice in order to achieve user acceptance and familiarity.
That kind of challenge — balancing openness, freedom, and choice with convenience, accessibility and visionary design — is a tension that I think leads to great products. Tipping the balance too far in any particular direction can lead to distortions, especially when caused by priorities that are not intrinsically aimed at enhancing the user experience but instead stem from a fear of openness or, as I like to say, embracing the chaos.
Apple is at the center of an increasingly volatile vortex. They have built an incredibly valuable platform and everyone wants a piece, but in putting themselves in between developers and their customers, Apple is taking on a role it is simply ill-equipped for, and one that increasingly makes it look like the bad guy, in spite of the love that most people otherwise feel for the company.
It’s one thing for AT&T to be hated — it’s practically a given. But for Apple to become the butt of developer complaints is an awkward and unfortunate position that it can’t enjoy. I think Joe Hewitt’s right, and I think it’s time Apple seriously considered the damage being caused by a process that was likely instituted to prevent a different kind of damage — one that, in comparison, seems somewhat irrelevant given Facebook’s experiment — and ongoing success — at implementing a resilient trust-first platform.