Twitter and the Password Anti-Pattern

Twitter / Alex Payne: @factoryjoe Yes, OAuth is ...

I’ve written about the password anti-pattern before and, with regard to Twitter, have advocated for the adoption of some form of delegated authentication solution for some time.

It’s not as if Twitter or lead developer Alex Payne is unaware of the need for such a solution (the need has been publicly recognized, is Issue #2 in their API issue queue, and a solution will shortly be available as part of a “beta” program). The problem is that Twitter’s “password anti-pattern” problem has taken so long to get the attention it deserves (Twitter acknowledged that they were moving to OAuth last August) that unsuspecting Twitter users have now exposed their Twitter credentials to the kind of threat we knew was there all along.

This isn’t the first time either, and it probably won’t be the last, at least until Twitter changes the way third party services access user accounts.

Rather than focus on Twply (which others have done, and whose evidence still lingers), I thought I’d talk about why this is an important problem, what solutions are available, why Twitter hasn’t adopted them and then look at what should happen here.

Responding to criticisms about OpenID: convenience, security and personal agency

Twitter / Chris Drackett:  openID should be dead... its over-rated.

Chris Drackett responded to one of my tweets the other day, saying that “OpenID should be dead… it’s way over-rated”. I’ve of course heard plenty of criticisms of OpenID, but hadn’t really heard that it was “overrated” (which implies that people have a higher opinion of OpenID than it merits).

Intrigued, I replied, asking him to elaborate, which he did via email:

I don’t know if overrated is the right word.. but I just don’t see OpenID ever catching on.. I think the main reason is that its too complex / scary of an idea for the normal user to understand and accept.

In my opinion the only way to make OpenID seem safe (for people who are worried about privacy online) is if the user has full control over the OpenID provider. While this is possible for people like you and me, my mom is never going to get to this point, and if she wants to use OpenID she is going to have to trust her sensitive data to AOL, MS, Google, etc. I think that people see giving this much “power” to a single provider as scary.

Lastly I think that OpenID is too complex to properly explain to someone and get them to use it. People understand usernames and passwords right away, and even OAuth, but OpenID in itself I think is too hard to grasp. I dunno, just a quick opinion.. I think there is a reason that we don’t have a single key on our key rings that opens our house, car, office and mailbox, not that that is a perfect/accurate analogy, but its close to how some people I’ve talked to think OpenID works.

Rather than respond privately, I asked whether it’d be okay if I posted his follow-up and replied on my blog. He obliged.

To summarize my interpretation of his points: OpenID is too complex and scary, potentially too insecure, and too confined to the hands of a few companies.

My rebuttals, in summary, fall under three headings: convenience, security and personal agency.


Convenience

OpenID should not be judged by today’s technological environment alone, but rather should be considered in the context of the migration to “cloud computing”, where people no longer access files on their local hard drive, but increasingly need to access data stored by web services.

All early technologies face criticism based on current trends and dominant behaviors, and OpenID is no different. At one time, people didn’t grok sending email between different services (in fact, you couldn’t). At one time, people didn’t grok IMing their AOL buddies using Google Talk (in fact, you couldn’t). At one time, you had one computer and your browser stored all of your passwords on the client-side (this is basically where we are today) and at one time, people accessed their photos, videos, and documents locally on their desktop (as is still the case for most people).

Cloud computing represents a shift in how people access and share data. Already, people rely less and less on physical media to store data and more and more on internet-based web services.

As a consequence, people will need a mechanism for referencing their data and services as convenient as the C: prompt. An OpenID, therefore, should become the referent people use to indicate where their data is “stored”.

An OpenID is not just about identification and blog comments; nor is it about reducing the number of passwords you have (that’s a by-product of user-centered design). Consider:

  • if I ask you where your photos are, you could say Flickr, and then prove it, because Flickr supports OpenID.
  • if I ask you where your friends are, you might say MySpace, and then prove it, because MySpace will support OpenID.
  • if you host your own blog or website, you will be able to provide your address and then prove it, because you are OpenID-enabled.
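
To make that last point concrete, here’s a minimal sketch (with hypothetical URLs throughout) of what being OpenID-enabled on your own site can look like under the hood: two link tags delegating to a provider, which a consumer discovers by fetching your page. Real consumers use a proper HTML parser and also handle OpenID 2.0 and Yadis discovery; this is just the gist.

```python
# A minimal sketch of OpenID delegation; all URLs are hypothetical examples.
import re
from urllib.request import urlopen

# The delegation markup a self-hosted blog might serve (OpenID 1.x style):
EXAMPLE_HEAD = """
<link rel="openid.server" href="https://openid.example-provider.com/server" />
<link rel="openid.delegate" href="https://me.example.org/" />
"""

def discover_openid(url):
    """Fetch a claimed identifier URL and pull out its openid.* link tags.
    (A naive regex for illustration; real consumers parse the HTML and
    also handle OpenID 2.0's openid2.* rel values.)"""
    html = urlopen(url).read().decode("utf-8", errors="replace")
    return dict(re.findall(r'<link rel="(openid[^"]*)" href="([^"]+)"', html))

# e.g. discover_openid("https://me.example.org/")
# -> {"openid.server": "https://...", "openid.delegate": "https://..."}
```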

The long-term benefit of OpenID is being able to refer to all the facets of your online identity and data sources with one handy — ideally memorable — web-friendly identifier. Rather than relying on my email addresses alone to identify myself, I would use my OpenIDs, and link to all the things that represent me online: from my resume to my photos to my current projects to my friends, web services and so on.

The big picture of cloud computing points to OpenIDs simplifying how people access, share and connect data to people and services.


Security

I’ve heard many people complain that if your OpenID gets hacked, then you’re screwed. They claim that it’s like putting all your eggs in one basket.

But that’s really no different than your email account getting hacked. Since your email address is used to reset your password, any or all of your accounts could have their passwords reset and changed; worse, the password and the account email address could be changed, locking you out completely.

At minimum, OpenID is no worse than the status quo.

At best, combined with OAuth, third parties never need your account password, defeating the password anti-pattern and providing a more secure way to share your data.
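
As a rough sketch of why that is (all endpoints and keys below are hypothetical placeholders, and the signing is simplified from the OAuth 1.0 spec): the third party only ever handles signed tokens, never your password.

```python
# Sketch of the three-legged OAuth 1.0 flow; endpoints/keys are placeholders.
import base64
import hashlib
import hmac
import time
import uuid
from urllib.parse import quote, urlencode

CONSUMER_KEY = "example-app-key"       # issued to the third-party app
CONSUMER_SECRET = "example-app-secret"

def sign(method, url, params, token_secret=""):
    """HMAC-SHA1 signature roughly per OAuth 1.0 (simplified sketch)."""
    base = "&".join([
        method.upper(),
        quote(url, safe=""),
        quote(urlencode(sorted(params.items()), quote_via=quote), safe=""),
    ])
    key = f"{quote(CONSUMER_SECRET, safe='')}&{quote(token_secret, safe='')}"
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# The three-legged dance, in outline:
# 1. The app requests a temporary "request token", signing with only its
#    own consumer credentials -- no user secrets involved.
params = {
    "oauth_consumer_key": CONSUMER_KEY,
    "oauth_nonce": uuid.uuid4().hex,
    "oauth_timestamp": str(int(time.time())),
    "oauth_signature_method": "HMAC-SHA1",
}
params["oauth_signature"] = sign(
    "POST", "https://provider.example/oauth/request_token", params)
# 2. The user approves access *at the provider*, signing in there, so
#    their password never touches the third-party app.
# 3. The app trades the approved request token for an access token and
#    signs every later API call with that token instead of a password.
```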

Furthermore, because securing your OpenID is outside of the purview of the spec, you can choose an OpenID provider (or set up your own) with a level of security that fits your needs. So while many OpenID providers currently stick with the traditional username and password combo, others offer more sophisticated approaches, from client-side certificates and hardware keys to biometrics and image-based password shields (as in the case of my employer, Vidoop).

One added benefit of OpenID is the ability to audit and manage access to your account, just as you do with a credit card account. This means that you have a record of every time someone (hopefully you!) signs in to one of your accounts with your OpenID, as well as how frequently sign-ins occur, from which IP addresses and on what devices. From a security perspective, this is a major advantage over basic usernames and passwords, as collecting this information from each service provider would prove inconvenient and time-consuming, if even possible.
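
By way of illustration (an entirely hypothetical sketch, not any particular provider’s schema), the audit trail amounts to one record per sign-in across every relying party:

```python
# Hypothetical shape of a provider-side sign-in audit log.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SignInEvent:
    relying_party: str   # the site signed in to with your OpenID
    when: datetime
    ip_address: str
    device: str

# With scattered usernames and passwords, assembling this history would
# mean asking every service provider separately -- if they expose it at all.
history = [
    SignInEvent("magnolia.example", datetime(2008, 12, 28, 9, 12), "203.0.113.7", "laptop"),
    SignInEvent("pownce.example", datetime(2008, 12, 29, 22, 41), "198.51.100.2", "phone"),
]

# One place to answer "who signed in as me, when, and from where?"
for e in sorted(history, key=lambda e: e.when, reverse=True):
    print(f"{e.when:%Y-%m-%d %H:%M}  {e.relying_party}  from {e.ip_address} ({e.device})")
```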

Given this benefit, it’s worth considering that identity technologies are being pushed on the government. If you’re worried about putting all your eggs in one basket, would you think differently if the government owned that basket?

OpenID won’t force anyone to change their current behavior, certainly not right away. But wouldn’t it be better to have the option to choose an alternative way to secure your accounts if you wanted it? OpenID starts with the status quo and, coupled with OAuth, provides an opportunity to make things better.

We’re not going to make online computing more secure overnight, but it seems like a prudent place to start.


Personal agency for web citizens

Looking over the landscape of existing social software applications, I see very few (if any) that could not be enhanced by OpenID support.

OpenID is a cornerstone technology of the emerging social web, and adds value anywhere users have profiles, accounts or need access to remote data.

Historically, we’ve seen similar attempts at providing a universal login account. Microsoft even got the name right with “Passport”, but screwed up the network model. Any identity system, if it’s going to succeed on the open web, needs to be designed with user choice at its core, in order to facilitate marketplace competition. A single-origin federated identity network will always fail on the internet (as Joseph Smarr and John McCrea like to say of Facebook Connect: We’ve seen this movie before).

As such, selecting an identity provider should not be relegated to a default choice. Where you come from (what I call provenance) has meaning.

For example, if you connect to a service using your Facebook account, the relying party can presume that the profile information that Facebook supplies will be authentic, since Facebook works hard to ferret out fake accounts from its network (unlike MySpace). Similarly, signing in with a Google Account provides a verified email address.

Just like the issuing country of your passport may say something about you to the immigration official reviewing your documents, the OpenID provider that you use may also say something about you to the relying party that you’re signing in to. It is therefore critical that people make an informed choice about who provides (and protects) their identity online, and that the enabling technologies are built with the option for individuals to vouch for themselves.

In the network model where anyone can host their own independent OpenID (just like anyone can set up their own email server), competition may thrive. Where competition thrives, an ecosystem may arise, developed under the rubric of market dynamics and Darwinian survivalism. And in this model, the individual is at the center, rather than the services he or she uses.

This is the citizen-centric model of the web, and each of us is a sovereign citizen of the web. Since I define and host my own identity, I do not need to worry about services like Pownce being sold or I Want Sandy users left wanting. I have choice, I have bargaining power, and I have agency, and this is critical to the viability of the social web at scale.


Final words

OpenID is not overrated, it’s just early. We’re just getting started with writing the rules of social software on the web, and we’ve got a lot of bad habits to correct.

As cloud computing goes mainstream (evidenced in part by the growing popularity of Netbooks this holiday season!), we’re going to need a consumer-facing technology and brand like OpenID to help unify this new, more virtualized world, in order to make it universally accessible.

Fortunately, as we stack more and more technologies and services on our OpenIDs, we can independently innovate the security layer, developing increasingly sophisticated solutions as necessary to make sure that only the right people have access to our accounts and our data.

It is with these changes in mind that we must evaluate OpenID: not as a technology for 2008’s problems, but as a formative building block for 2009 and the future of the social web.

Where we’re going with Activity Streams

The DiSo Project is just over a year old. It’s remained a somewhat amorphous blob of related ideas, concepts and aspirations in my brain, but has resulted in some notable progress, even if such progress appears dubious on the surface.

For example, OAuth is a core aspect of DiSo because it enables site-to-site permissioning and safer data access. It’s not because of the DiSo Project that OAuth exists, but my involvement in the protocol certainly stems from the goals that I have with DiSo. Similarly, Portable Contacts emerged (among other things) as a response to Microsoft’s “beautiful fucking snowflake” contacts API, but it will be a core component of our efforts to distribute and decentralize social networking. And meanwhile, OpenID has had momentum and a following all its own, and yet it too fits into the DiSo model in my head, as a cornerstone technology on which much of the rest relies.

Subscribing to a person

Tonight I gave a talk specifically about activity streams. I’ve talked about them before, and I’ve written about them as well. But I think things started to click tonight for people for some reason. Maybe it was the introduction of the mocked-up interface above (thanks Jyri!) that shows how you could consume activities based on human-readable content types, rather than by the service name on which they were produced. Maybe it was providing a narrative that illustrated how these various discrete and abstract technologies can add up to something rather sensible and desirable (and looks familiar, thanks to Facebook Connect).

In any case, I won’t overstate my point, but I think the work that we’ve been doing is going to start accelerating in 2009, and that the activity streams project, like OAuth before it, will begin to grow legs.

And if I haven’t made it clear what I’m talking about, well, we’re starting with an assumption that activities (like the ones in Facebook’s news feed and that make up the bulk of FriendFeed’s content) are kind of like the synaptic electrical impulses that make social networking work. Consider that people probably read more Twitter content these days than they do conventional blog posts — if only because, with so much more content out there, we need smaller, bite-sized chunks of information in order to cope.

FriendFeed - Add/Edit Services

So starting there, we need to look at what it would take to recreate efficient and compelling interfaces for activity streams like we’re used to on FriendFeed and Facebook, but without the benefit of having ever seen any of the services before. I call this the “zero knowledge test”. Let me elaborate.

When I say “without the benefit of having ever seen”, I primarily mean from a programmatic standpoint. In other words, what would it take to be able to deliver an equivalent experience to FriendFeed without hardcoding support for only a few of the more popular services (FriendFeed currently supports 59 out of the thousands of candidate sites out there)? What would we need in a format to be able to join, group, de-dupe, and coalesce individual activities and otherwise make the resulting output look human readable?
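
To make the test concrete, here’s a sketch of what joining, grouping and de-duping could look like if activities were reduced to generic fields; the field names and sample data are illustrative, not a finalized format:

```python
from collections import defaultdict
from datetime import datetime

# Activities reduced to the generic fields a shared format would carry.
activities = [
    {"actor": "chris", "verb": "favorite", "object": "http://example.com/photo/1",
     "source": "flickr.example", "time": datetime(2008, 12, 30, 10, 0)},
    {"actor": "chris", "verb": "favorite", "object": "http://example.com/photo/1",
     "source": "aggregator.example", "time": datetime(2008, 12, 30, 10, 2)},
    {"actor": "chris", "verb": "favorite", "object": "http://example.com/photo/2",
     "source": "flickr.example", "time": datetime(2008, 12, 30, 10, 5)},
]

def coalesce(items):
    """De-dupe the same act reported by different services, then group
    what's left by (actor, verb) for one human-readable line each --
    no hardcoded knowledge of Flickr or anyone else required."""
    seen, unique = set(), []
    for a in sorted(items, key=lambda a: a["time"]):
        key = (a["actor"], a["verb"], a["object"])
        if key not in seen:
            seen.add(key)
            unique.append(a)
    grouped = defaultdict(list)
    for a in unique:
        grouped[(a["actor"], a["verb"])].append(a["object"])
    return grouped

for (actor, verb), objects in coalesce(activities).items():
    print(f"{actor} {verb}d {len(objects)} items: {', '.join(objects)}")
```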

Our approach so far has been to research and document what’s already out there (taking a hint from the microformats process). We’ve then begun to specify different approaches to solving this problem, from machine tags to microformats to extending Atom (or perhaps RSS?).
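
For the Atom route, for instance, the activity-specific bits could ride along as foreign markup in an extension namespace; in this sketch the namespace URI and element names are placeholders, since the vocabulary is still being worked out:

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
ACT = "http://example.org/ns/activity-draft"  # placeholder namespace

entry = ET.Element(f"{{{ATOM}}}entry")
ET.SubElement(entry, f"{{{ATOM}}}title").text = "Chris favorited a photo"
ET.SubElement(entry, f"{{{ATOM}}}updated").text = "2008-12-30T10:00:00Z"
# The activity-specific bits ride along as foreign markup:
ET.SubElement(entry, f"{{{ACT}}}verb").text = "favorite"
obj = ET.SubElement(entry, f"{{{ACT}}}object")
ET.SubElement(obj, f"{{{ACT}}}object-type").text = "photo"

print(ET.tostring(entry, encoding="unicode"))
```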

Of course, we really just need to start writing some code. But fortunately with products like Motion in the wild and plugins like Action Stream, we at least have something to start with. Now it’s just a matter of rinse, wash and repeat.

I’m a candidate for the board of the OpenID Foundation!

The OpenID Foundation board election opened up on December 10. After a grueling nominations process (not really), we were left with 17 candidates vying for seven community board member seats.

So far, a great deal of discussion has gone on about the various candidates’ platforms on the OpenID general mailing list. Candidates have also written about things that they would like to change in the coming year on their blogs as well, notably Dave Recordon and Johannes Ernst.

For my own part, I wrote up many of my ideas when I announced my candidacy. I also maintain a wiki page of goals that I have for OpenID.

The three issues that are at the top of my list should I be elected to the board really come down to:

  • establishing OpenID as a strong consumer brand
  • improving the user experience and ease-of-use of OpenID
  • enhancing the value of adopting OpenID for individuals, businesses, and organizations

I will lay out my rationale for these positions in a series of upcoming posts.

In the meantime, if you’d like to vote in this election, you will need to register for a $25 year-long membership in the OpenID Foundation (basically providing you the privilege to participate in this and other foundation elections and activities).

I also solicit your feedback, concerns and wishes for OpenID. Though I have plenty of ideas about the kind of work that needs to go into OpenID to make it into a great cornerstone technology for the open web, I’m also very interested in hearing from other people about their experiences with OpenID, or about their ideas for how we can advance the cause of OpenID in 2009.

Why YouTube should support Creative Commons now

YouTube should support Creative Commons

I was in Miami last week to meet with my fellow screeners from the Knight News Challenge, where Jay Dedman and Ryanne Hodson, two vlogger friends whom I met through coworking, started talking with me about content licensing, specifically as it relates to President-elect Barack Obama’s weekly address, which, if things go according to plan, will continue to be broadcast on YouTube.

The question came up: what license should Barack Obama use for his content? This, in turn, revealed a more fundamental question: why doesn’t YouTube let you pick a license for the work that you upload (and must, given the terms of the site, own the rights to in the first place)? And if this omission isn’t intentional (that is, no one decided against such a feature, it just hasn’t bubbled up in the priority queue yet), then what can be done to facilitate the adoption of Creative Commons on the site?

To date, few video sharing sites, save Blip.tv and Flickr (even if they only deal with long photos), have actually embraced Creative Commons to any appreciable degree. Ironically, of all sites, YouTube seems the most likely candidate to adopt Creative Commons, given its rampant remix and republish culture (a culture which continues to vex major movie studios and other fastidious copyright owners).

One might make the argument that, considering the history of illegally shared copyrighted material on YouTube, enabling Creative Commons would simply lead to people mislicensing work that they don’t own… but I think that’s a strawman argument that falls down in practice for a number of reasons:

  • First of all, all sites that enable the use of CC licenses offer the scheme as opt-in, defaulting to the traditional all rights reserved use of copyright. Enabling the choice of Creative Commons wouldn’t necessarily affect this default.
  • Second, unauthorized sharing of content or digital media is still illegal regardless of license, whether the work is mislabeled as Creative Commons or carries traditional copyright.
  • Third, YouTube, and any other media sharing site, bears some responsibility for the content published on their site, and, regardless of license, reserves the right to remove any material that fails to comply completely with its Terms of Service.
  • Fourth, the choice of a Creative Commons license is usually a deliberate act (going back to my first point) intended to convey an intention. The value of this intention — specifically, to enable the lawful reuse and republishing of content or media by others without prior per-instance consent — is a net positive to the health of a social ecosystem insomuch as this choice enables a specific form of freedom: that is, the freedom to give away one’s work under certain, less-restrictive stipulations than the law allows, to aid in establishing a positive culture of sharing and creativity (as we’ve seen on SoundCloud and ccMixter).

Preventing people from choosing a more liberal license conceivably restricts expression, insomuch as it restricts an “efficient, content-enriching value chain” from forming within a legal framework. Or, because all material is currently licensed under the most restrictive regime on YouTube, every re-use of a portion of media must therefore be licensed on a per-instance basis, considerably impeding the legal reuse of other people’s work.

. . .

Now, I want to point out something interesting here, specifically related to both this moment in time and to government ownership of media. A recently released report from the GAO on Energy Efficiency carried with it the following statement on copyright:

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Though it can’t simply put this work into the public domain because of the potential copyrighted materials embedded therein, this statement is about as close as you can get for an assembled work produced by the government.

Now consider that Obama’s weekly “radio address” is self-contained media, not contingent upon the use or reuse of any other copyrighted work. It bears considering what license (if any) should apply (keeping in mind that the government is funded by taxpayer dollars). If not the public domain, under what license should Obama’s weekly addresses be shared? Certainly not all rights reserved! — unfortunately, YouTube offers no other option and thus, regardless of what Obama or the Change.gov folks would prefer, they’re stuck with a single, monolithic licensing scheme.

Interestingly, Google, YouTube’s owner, has supported Creative Commons in the past, notably with their collaboration with Radiohead on the House of Cards open source initiative and with the licensing of the Summer of Code documentation (Yahoo has a similar project with Flickr’s hosting of the Library of Congress’ photo archive under a liberal license).

I think that it’s critical for YouTube to adopt the Creative Commons licensing scheme now, as Barack Obama begins to use the site for his weekly address, because of the powerful signal it would send, in the context of what I imagine will be a steady increase and importance of the use of social media and web video by government agencies.

Don Norman recently wrote an essay on the importance of social signifiers, and I think it underscores my point as to why this issue is pressing now. In contrast to the popular concept of “affordances” in design and design thinking, Norman writes:

A “signifier” is some sort of indicator, some signal in the physical or social world that can be interpreted meaningfully. Signifiers signify critical information, even if the signifier itself is an accidental byproduct of the world. Social signifiers are those that are relevant to social usages. Some social indicators simply are the unintended but informative result of the behavior of others.

. . .

I call any physically perceivable cue a signifier, whether it is incidental or deliberate. A social signifier is one that is either created or interpreted by people or society, signifying social activity or appropriate social behavior.

The “appropriate social behavior” that I think Obama should model in his weekly podcasts is that of open and free licensing, introducing the world of YouTube viewers to an alternative form of licensing that would enable them to better understand and signal to others their intent and desire to share, and to have their creative works reused, without the need to ask for permission first.

For Obama’s media to be offered under a CC license (with the license embedded in the media itself) would signal his seriousness about embracing openness, transparency and the nature of discourse on the web. It would also signify a shift towards the type of collaboration typified by Web 2.0 social sites, enabling a modern dialectical relationship between the citizenry and its government.

I believe that now is the time for this change to happen, and for YouTube to prioritize the choice of Creative Commons licensing for the entire YouTube community.

Independent study on OpenID awareness using Mechanical Turk

Even though I wasn’t able to attend the eighth Internet Identity Workshop this week in Mountain View (check out the latest episode of TheSocialWeb.tv for a glimpse), I wanted to do my part to contribute so I’m sharing the results of a study that Brynn Evans and I performed on Mechanical Turk a short while ago.

I’ll cut to the chase and then go into some background detail.

Heard of OpenID?

Of the 302 responses we received, we only rejected one, leaving us with 301 valid data points to work with. Of those 301:

  • 19.3% had heard of OpenID (58 people)
  • 9.0% knew what OpenID was used for (27) and 8.0% were unsure (24)
  • 1.3% used OpenID (4) and 18.3% were unsure if they used it (55).
  • 5.3% recognized the OpenID icon (16) and 7.0% were unsure (21).

Combining some of the results, we found that:

  • of those who know what OpenID is, 14.81% use it.
  • of those who have merely heard of it, 6.9% use it.

That’s what the data show.
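
For the curious, the combined figures follow directly from the raw counts reported above:

```python
# Reproducing the combined figures from the raw counts in the lists above.
valid_responses = 301
heard_of_openid = 58
knew_what_it_was = 27
used_openid = 4

print(f"{used_openid / knew_what_it_was:.2%} of those who know what it is use it")  # 14.81%
print(f"{used_openid / heard_of_openid:.1%} of those who've heard of it use it")    # 6.9%
```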

Background

Several weeks ago, Yahoo! released usability research and best practices for OpenID (PDF). This research was performed by Beverly Freeman in the Yahoo! Customer Insights division in July of this year and involved nine female Yahoo! users aged 32-39 with self-declared medium-to-high levels of Internet savvy.

This research, along with Eric Sachs’ later contributions (from Google), has taken us from virtually zero research on the usability of OpenID to a much more robust pool of information to pull from. And though I’m sure many would agree that this research only points to opportunities for improvement, many people interpreted the results as an indication that “OpenID is too confusing” or that it “befuddles users”.

A lot of people also took cheap shots, using the Yahoo! results to bolster their long-held arguments against the protocol and its unfamiliar interaction flow. The problem with such criticism, as far as I’m concerned, is that the experiences of nine female Yahoo! users in their thirties are not necessarily representative of the web at large, nor are the conditions favorable to such research. Y’know, Ford got a lot of flak too when he introduced the Model T, because everyone loved their horses and carriages. Good thing Ford was right.

Now, some of the criticism of OpenID is valid, especially if it can be turned into productive outcomes, like making OpenID easier to use, or less awkward.

And it serves no one’s interests to make grandiose claims on the basis of minimal data, so given Brynn’s work using Mechanical Turk (with Ed Chi from PARC), I thought I’d ask her to help me set up a study to discover just what awareness of OpenID might be among a wider segment of the population, especially with Japanese awareness of OpenID topping out around 28% (with usage of OpenID at 15%, more than ten times what we saw with Turkers).

Mechanical Turk Demographics

First, it’s important to point out something about Turker demographics. Because Turkers must have either a US bank account or be willing to be paid in Amazon gift certificates, the quality of participants you get (especially if you design your HIT well) will actually be pretty good (compared with, say, a blog-based survey). Now, Mechanical Turk actually has rules against asking for demographic or personally identifying information, but some information has been gathered by Panos Ipeirotis to shed some light on who the Turkers are and why they participate. I’ll leave the bulk of the analysis up to him, but it’s worth noting that a survey put out on Mechanical Turk about OpenID will likely hit a fairly average segment of the internet-using population (or at least one that doesn’t differ greatly from college undergraduates).

Method

Over the course of a week (October 19 – 26), we fielded 302 responses to our survey, paying $0.02 for each valid reply (yes, we were essentially asking people for their “two cents”). We only rejected one response out of the batch, leaving us with 301 valid data points at a whopping cost of $6.02.

Findings

As I reported above, contrary to the 0% awareness demonstrated in the Yahoo! study of nine participants, we found that nearly 20% of respondents had at least heard of OpenID, though a much smaller percentage (1.3%) actually used it (or at least were consciously aware of using it — nearly everyone (18%) who’d heard of OpenID didn’t know if they used it or not).

There was also at least some familiarity with the OpenID logo/icon (5.3%).

What’s also interesting is that many respondents, upon hearing about “OpenID”, expressed an interest in finding out more: “What is it? LOL.”; “I’ve gotta look it up!”; “This survey has sparked my interest”; “Heading to Google to find out”. I can’t say that this shows clear interest in the concept, but at least some folks showed a curious disposition, as such:

How can I tell for sure whether I’ve used OpenID or not when I don’t know what it is? I’ve surely heard of it. That confuses me mainly in Magnolia {bookmarking service} where I want to sign up, but I can’t as it asks for OpenID. And until you mentioned above, it simply didn’t occur to me to just search it up. Hell, after submitting this hit, I’m going to do that first and foremost. Anyways, thanks a lot for indirectly suggesting a move!!!

Now, I won’t repeat the other findings, as they’ve already been reported above.

Thoughts and next steps

The results of this survey are interesting to me, but not unexpected. They’re not reassuring either, and they tell me that we’re doing well considering that we’ve only just begun.

Consider that 20% of a random sampling of 300 people on the internet had at least heard of OpenID, before Google, MySpace or Microsoft turned on their support for the protocol (MySpace announced their intention to support OpenID in July).

Consider that nearly a year ago Marshall Kirkpatrick sounded the death knell of what seemed like the foregone conclusion about OpenID:

Big Players are Dragging Their Feet … Sharing User Info is a Whole Other Matter … Public Facing Profiles are Anemic … Ease of Use and Marketing Clarity Remain Low Priorities

Consider that no concerted effort has been made to date to inform or educate the general web population about OpenID, or about the problems with sharing your user credentials all over the web, and that many of the large providers have yet to turn on their OpenID support, despite all coming to the table and agreeing that it’s the way forward for identity on the web (save, as usual, Facebook, looking more Microsoftian by the day).

Consider also that momentum to rev the protocol to accommodate email addresses in OpenID is just now gaining traction.

In other words, with areas of user education becoming obvious, with provider adoption starting to happen (vis-à-vis MySpace demonstrating the value and prevalence of URL-based identifiers) and necessary usability improvements starting to take shape (both in terms of the OpenID and OAuth flows being combined, and in terms of email addresses becoming valid in OpenID flows), we’re truly just getting started with making OpenID ready for mainstream audiences. It’s been a hard slog so far, and it’s bound to continue to be challenging, but the shared vision for where we’re going gets clearer every time there’s an Internet Identity Workshop.

I plan to re-run this study every 3-6 months from this point forward to keep track of our progress. I hope that these numbers will shed some much-needed balanced light on the subject of OpenID awareness and adoption — both to demonstrate how far we have to go, and how far we’ve come.

OpenID usability is not an oxymoron

Julie Zhou of Facebook discusses usability findings from Facebook Connect. Photo © John McCrea. All rights reserved.

See? We're working on this!

Monday last week marked the first-ever OpenID UX Summit at Yahoo! in Sunnyvale, with over 40 in attendance. Representatives came from MySpace, Facebook, Google, Yahoo!, Vidoop, Janrain, Six Apart, AOL, Chimp, Magnolia, Microsoft, Plaxo, NetMesh, Internet2 and Liberty Alliance to debate and discuss how best to make implementations of the protocol easier to use and more familiar.

John McCrea covered the significance of the summit on TechCrunchIT (and recognized Facebook’s welcomed participation) and has a good overall summary on his blog.

While the summit was a long-overdue step towards addressing the clear usability issues directly inhibiting the spread of OpenID, there are four additional areas that I think need more attention. I’ll address each separately.

Musings on Chrome, the rebirth of the location bar and privacy in the cloud

Imagine a browser of the web, by the web, and for the web. Not simply a thick-client application that opens documents with the http:// protocol instead of file://, but one that runs web applications (efficiently!), that plays the web, that connects people across the boundaries of the silos and gives them local-like access to remote data.

It might not be Chrome, but it’s a damn near approximation, given what people have today.

Take a step back. You can see the relics of desktop computing in our applications’ file menus… and we can intuit the assumptions that the original designer must have made about the user, her context and the interaction expectations she brought with her:

Firefox Menubar

This is not a start menu or a Dock. This is a document-driven menubar that’s barely changed since Netscape Communicator.

Indeed, the browser is a funny thing, because it’s really just a wrapper for someone else’s content or someone else’s application. That’s why it’s not about “features“. It’s all about which features, especially for developers.

It’s a hugely powerful place to insert oneself: between a person and the vast expanse that is the Open Web. Better yet: to be the conduit through which anyone projects herself onto the web, or reaches into the digital void to do something.

So if you were going to design a new browser, how would you handle the enormity of that responsibility? How would you seize the monument of that opportunity and create something great?

Well, for starters, you’d probably want to think about that first run experience — what it’s like to get behind the wheel for the very first time with a newly minted driver’s permit — with the daunting realization that you can now go anywhere you please…! Which is of course awesome, until you realize that you have no idea where to go first!

Historically, the solution has been to flip-flop between portals and search boxes, and if we’ve learned anything from Google’s shockingly austere homepage, it comes down to recognizing that the first step of getting somewhere is expressing some notion of where you want to go:

Camino. Start

Inquisitor

The problem is that the location field has, up until recently, been fairly inert and useless. With Spotlight-influenced interfaces creeping into the browser (like David Watanabe’s recently acquired Inquisitor Safari plugin — now powered by Yahoo! Search BOSS — or the flyout in Flock that was inspired by it) it’s clear that browsers can and should provide more direction and assistance to get people going. Not everyone’s got a penchant for remembering URLs (or RFCs) like Tantek’s.

This kind of predictive interface, however, has only slowly made its way into the location bar, like fish being washed ashore and gradually sprouting legs. Eventually they’ll learn to walk and breathe normally, but until then, things might look a little awkward. But yes, dear reader, things do change.

So you can imagine, having recognized this trend, Google went ahead and combined the search box and the location field in Chrome and is now pushing the location bar as the starting place, as well as where to do your searching:

Chrome Start

This change to such a fundamental piece of real estate in the browser has profound consequences for both the typical use of the browser and for security models that treat the visibility of the URL bar as sacrosanct (read: phishing):

Omnibox

The URL bar is dead! Long live the URL bar!

While cats like us know intuitively how to use the location bar in combination with URLs to get us where we want to go, that practice is now outmoded. Instead we type anything into the “box” and have some likely chance that we’re going to end up close to something interesting. Feeling lucky?
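
To be clear about what that merged box implies, here’s an illustrative heuristic (not Chrome’s actual logic, just a sketch of the kind of disambiguation involved) for deciding whether typed input should navigate or search:

```python
from urllib.parse import urlparse

def looks_like_url(text):
    """Guess whether typed input should navigate or search."""
    text = text.strip()
    if " " in text:                       # queries usually contain spaces
        return False
    if urlparse(text).scheme in ("http", "https", "file"):
        return True
    # bare hostnames like "example.com" or "localhost:8080"
    host = text.split("/")[0].split(":")[0]
    return "." in host or host == "localhost"

for typed in ("wikipedia.org", "how do fish sprout legs", "http://cnn.com/"):
    action = "navigate to" if looks_like_url(typed) else "search for"
    print(f"{action} {typed!r}")
```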

But there’s something else behind all this that I think is super important to realize… and that’s that our fundamental notions and expectations of privacy on the web have to change or will be changed for us. Either we do without tools that augment our cognitive faculties or we embrace them, and in so doing, shim open a window on our behaviors and our habits so that computers, computing environments and web service agents can become more predictive and responsive to them, and in so doing, serve us better. So it goes.

Underlying these changes are new legal concepts and challenges, spelled out in Google’s updated EULA and Privacy Policy… heretofore territory where few dared to go, least of all browser manufacturers:

5. Use of the Services by you

5.1 In order to access certain Services, you may be required to provide information about yourself (such as identification or contact details) as part of the registration process for the Service, or as part of your continued use of the Services. You agree that any registration information you give to Google will always be accurate, correct and up to date.

. . .

12. Software updates

12.1 The Software which you use may automatically download and install updates from time to time from Google. These updates are designed to improve, enhance and further develop the Services and may take the form of bug fixes, enhanced functions, new software modules and completely new versions. You agree to receive such updates (and permit Google to deliver these to you) as part of your use of the Services.

It’s not that any of this is unexpected or Draconian: it is what it is, and much of it was effectively true already.

Each of us will eventually need to choose a data broker or two and agree to similar terms and conditions, just as we’ve done with banks and credit card providers, and just as we’ve done in embracing webmail.

Hopefully visibility into Chrome’s source code will help keep things honest, and also provide the means to excise those features, or to redirect them to brokers or service providers of our choosing, but it’s inevitable that effective cloud computing will increasingly require more data from and about us than we’ve previously felt comfortable giving. And the crazy thing is that a great number of us (yes, including me!) will give it. Willingly. And eagerly.

But think one more second about the ramifications (see Matt Cutts) of Section 12 up there about Software Updates: by using Chrome, you agree to allow Google to update the browser. That’s it: end of story. You want to turn it off? Disconnect from the web… in the process, rendering Chrome nothing more than, well, chrome (pun intended).

Welcome to cloud computing. The future has arrived and is arriving.

Google Chrome and the future of browsers

Chrome Logo

News came today confirming Google’s plans for Chrome, its own open source browser based on Webkit.

This is big news. As far as I’m concerned, it doesn’t get much bigger than this, at least in my little shed on the internet.

I’ve been struggling to come to grips with my thoughts on this since I first heard about this this morning over Twitter (thanks @rww @Carnage4Life and @furrier). Once I found out that it was based on Webkit, the pieces all fell into place (or perhaps the puzzle that’s been under construction for the past year or so became clearer).

Chrome is powered by Webkit

Last May I ranted for a good 45 minutes or so about the state of Mozilla and Firefox and my concerns for its future. It’s curious to look back and consider my fears about Adobe AIR and Silverlight; it’s more curious to think about what Google Chrome might mean now that it’s been confirmed, and given that those frameworks have little to offer in the way of standards for the open web.

I read the announcement as the kid gloves coming off. I just can’t read this any other way than to think that Google’s finally fed up waiting around for Firefox to get their act together, fix their performance issues in serious ways, provide tangible and near-term vision, and make good on their ultimate promise and value proposition.

Sure, Google re-upped their deal with Firefox, but why wouldn’t they? If this really is a battle against Microsoft, Google can continue to use Firefox as its proxy against the entrenched behemoth. Why not? Mozilla’s lack of concern worries me greatly; if they knew about it, what did they do about it? Although Weave has potential, Google has had Google Browser Sync for ages (announced, to wit, by Chrome’s product manager Brian Rakowski). Aza Raskin might be doing very curious and esoteric experiments on Labs, but how does this demonstrate a wider, clearer, focused vision? Or is that the point?

Therein lies the tragedy: Google is a well-oiled, well-heeled machine. Mozilla, in contrast, is not (and probably never will be). The Webkit team, as a rhizomatic offshoot from Apple, has a similar development pedigree and has consistently produced a high-quality — now cross-platform — open source project, scarcely engaging in polemics or politics. They let the results speak for themselves. They keep their eyes on the ball.

Ultimately this has everything to do with people; with leadership, execution and vision.

When Mozilla lost Ben Goodger I think the damage went deeper than was known or understood. Then Blake Ross and Joe Hewitt went over to Facebook, where they’re probably in the bowels of the organization, doing stuff with FBML and the like, bringing Parakeet into existence (they’ve recently been joined by Mike Schroepfer, previously VP of Engineering at Mozilla). Brad Neuberg joined Google to take Dojo Offline forward in the Gears project (along with efforts from Dylan Schiemann and Alex Russell). And the list goes on.

Start poking around the Google Chrome comic book and the names are there. Scott McCloud’s drawings aren’t just a useful pictorial explanation of what to expect in Chrome; they’re practically a declaration of independence from the traditions of browser design of the past 10 years, going all the way back to Netscape’s heyday, when the notion of the web was a vast collection of interlinked documents. With Chrome, the web starts to look more like a nodal grid of documents, with cloud applications running on momentary instances, being run directly and indirectly by people and their agents. This is the browser caught up.

We get Gears baked in (note the lack of the “Google” prefix — it’s now simply “of the web”) and, if you’ve read the fine print closely, you already know that this means that Chrome will be a self-updating, self-healing browser. This means that the web will rev at the speed of the frameworks and the specifications, and will no longer be tied to the monopoly player’s broken rendering engine.

And on top of Gears, we’re starting to see the light of the site-specific browser revolution and the maturing of the web as an application platform, something Todd Ditchendorf knows a thing or two about with his Fluid project (also based on Webkit — all your base, etc.):

Google Chrome + Gears

In spite of Mozilla’s lofty rhetoric in support of a free Internet, Chrome isn’t its pièce de résistance. Turns out that it’s going to be Apple and Google who will usher in the future of browsers, and who will get to determine just what that future is going to look like:

Google Chrome, starting from scratch

To put it mildly, things just got a whole lot more exciting.

The Open Web Foundation

Open Web Foundation logo

During this morning’s keynote at OSCON, David Recordon announced the formation of the Open Web Foundation (his slides), an initiative with which I am involved, aimed at becoming something akin to a “Creative Commons for patents”, with the intention of lowering the costs and barriers to the development and adoption of open and free specifications like OpenID and OAuth.

As I expected, there’s been some healthy skepticism that usually starts with “Another foundation? Really?” or “Wait, doesn’t [insert other organization name] do this?”

And the answers are “Yes, exactly” and “No, not exactly” (respectively).

I’ll let John McCrea explain:

…every grass roots effort, whether OpenID, OAuth, or something yet to be dreamt up, needs to work through a whole lot of issues to go from great idea to finalized spec that companies large and small feel comfortable implementing. In particular, large companies want to make sure that they can adopt these building blocks without fear of being sued for infringing on somebody’s intellectual property rights. Absent the creation of this new organization, we were likely to see each new effort potentially creating yet-another-foundation to tackle what is essentially a common set of requirements.

And this is essentially where we were in the OAuth process, following in the footsteps of the OpenID Foundation before us, trying to figure out for ourselves the legal and intellectual property issues that stood in the way of [a few] larger companies being able to adopt the protocol.

Now, I should point out that OAuth and OpenID are the result of somewhat unique and recent phenomena, where, due to the low cost of networked collaboration and the high value of commoditizing common protocols between web services, the OAuth protocol came together in just under a year, written by a small number of highly motivated individuals. The problem is that it’s taken nearly the same amount of time to develop our approach to intellectual property, despite the collective desire of the authors to let anyone freely use the protocol! This system is clearly broken, and not just for us, but for every group that wants to provide untethered building blocks for use on the open web — especially those groups who don’t have qualified legal counsel at their disposal.

That other groups exist to remedy this issue is something that we realized and considered very seriously before embarking on our own effort. After all, we really don’t want to have to do this kind of work — indeed it often feels more like a distraction than something that actually adds value to the technology — but the reality is that clarity and understanding are actually critical once you get outside the small circle of original creators, and in that space is where our opportunity lies.

In particular, for small, independent groups to work on open specifications (n.b. not standards!) that may eventually be adopted industry-wide, there needs to be a lightweight and well-articulated path for doing the right thing™ when it comes to intellectual property that does not burden the creative process with defining scope prematurely (a process that is costly and usually takes months, greatly inhibiting community momentum!) and that also doesn’t impose high monetary fees on participation, especially when outcomes may be initially uncertain.

At the same time, the final output of these kinds of efforts should ultimately be free to be implemented by all the participants and the community at large. And rather than forcing the assignment of all related patents owned by all participants to a central foundation (as in the case of the XMPP Foundation) or getting every participant to license their patents to others (something most companies seem loath to do without some fiscal upside), we’ve seen a trend over the past several years towards patent non-assert agreements, which allow companies to maintain their IP, to not have to disclose it, and yet to allow for the free, unencumbered use of the specification.

If this sounds complicated, it’s because it is, and is a significant stumbling block for many community-driven open source and open specification projects that aim for, or have the potential for, widespread adoption. And this is where we hope the Open Web Foundation can provide specific value in creating templates for these kinds of situations and guiding folks through effective use of them, ultimately in support of a more robust, more interoperable and open web.

We do have much work ahead of us, but hopefully, if we are successful, we will reduce the overall cost to the industry of repeating this kind of work, again, in much the same way Creative Commons has done in providing license alternatives to copyright and making salient the notion that the way things are aren’t the only way they have to be.