Inaugural Jelly! Talk this Friday: OpenID vs Facebook Connect

This Friday, I’ll be joined by Dave Morin (my good friend from Facebook) at the first ever Jelly! Talk at Joe and Brian’s loft in San Francisco.

If you’re not familiar with Jelly, you should be. I call it the “gateway drug to coworking” — but it really has its own culture and identity independent of coworking, though both movements are rather complementary. Amit Gupta got Jelly started at House 2.0 in New York City back in 2006 (about two months after I initially expressed my desire to create a coworking space in San Francisco). Since then, like coworking, it’s grown into a self-sustaining movement.

Jelly! Talks is an interesting expansion on the concept — where Jellies distributed throughout the world can tune in to hear interesting and relevant talks and interact with speakers, similar to what the 37signals guys do with their “Live” show.

For this first show, I’ll be talking with Dave Morin about the relationship between OpenID and Facebook Connect — and where the two technologies are headed. This should be a pretty interesting conversation, since I’ve long tried to convince the folks at Facebook to adopt OpenID and other elements of the Open Stack (hey, they’ve got hCard already!).

Apparently the event is physically booked up, but you’ll still be able to tune in remotely this Friday at 11am PST.

(Tip: The next Jelly! Talk will feature Guy Kawasaki).

OpenID usability is not an oxymoron

Julie Zhou of Facebook discusses usability findings from Facebook Connect. Photo © John McCrea. All rights reserved.

See? We're working on this! Monday of last week marked the first ever OpenID UX Summit at Yahoo! in Sunnyvale, with over 40 people in attendance. Representatives came from MySpace, Facebook, Google, Yahoo!, Vidoop, Janrain, Six Apart, AOL, Chimp, Magnolia, Microsoft, Plaxo, Netmesh, Internet2 and Liberty Alliance to debate and discuss how best to make implementations of the protocol easier to use and more familiar.

John McCrea covered the significance of the summit on TechCrunchIT (and recognized Facebook’s welcomed participation) and has a good overall summary on his blog.

While the summit was a long-overdue step towards addressing the clear usability issues directly inhibiting the spread of OpenID, there are four additional areas that I think need more attention. I’ll address each separately. Continue reading “OpenID usability is not an oxymoron”

The Social Web TV pilot episode

http://www.viddler.com/player/2cf46be8/

My buddies John McCrea and Joseph Smarr have started up a show called The Social Web TV and have released the pilot episode, featuring David Recordon on the hubbub between Google and Facebook following last week’s Supernova Conference.

As they point out, things are changing and happening so fast in the industry that a show like this, which cuts through the FUD and marketing hype, is really necessary. I hope to participate in future episodes — and would love to hear suggestions or recommendations for topics or guests for upcoming episodes.

Here’s the FriendFeed room Dave mentioned.

Thoughts on dynamic privacy

A highly touted aspect of Facebook Connect is the notion of “dynamic privacy”:

As a user moves around the open Web, their privacy settings will follow, ensuring that users’ information and privacy rules are always up-to-date. For example, if a user changes their profile picture, or removes a friend connection, this will be automatically updated in the external website.

Over the course of the Graphing Social Patterns East conference here in DC, Dave Morin and others from Facebook’s Developer Platform have made many a reference to this scheme but have provided frustratingly scant detail on how it will actually work.

Friend Connect: Disabled by Facebook

In a conversation with Brian Oberkirch and David Recordon, it dawned on me that the pieces for dynamic privacy are already in place; to some degree, it’s really just a matter of figuring out how to effectively enforce policy across distributed systems in order to meet user expectations.

MySpace has actually made similar announcements about its Data Availability approach, and if you read carefully, you can spot the fundamental rift between the OpenSocial and Facebook platforms:

Additionally, rather than updating information across the Web (e.g. default photo, favorite movies or music) for each site where a user spends time, now a user can update their profile in one place and dynamically share that information with the other sites they care about. MySpace will be rolling out a centralized location within the site that allows users to manage how their content and data is made available to third party sites they have chosen to engage with.

Indeed, Recordon wrote about this on O’Reilly Radar last month (emphasis original):

He explained that MySpace said that due to their terms of service the participating sites (e.g. Twitter) would not be allowed to cache or store any of the profile information. In my mind this led to the Data Availability API being structured in one of two ways: 1) on each page load Twitter makes a request to MySpace fetching the protected profile information via OAuth to then display on their site or 2) Twitter includes JavaScript which the browser then uses to fill in the corresponding profile information when it renders the page. Either case is not an example of data portability no matter how you define the term!
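To make the first of those two structures concrete, here’s a minimal sketch, in TypeScript, of what a per-page-load, OAuth-protected profile fetch might look like. Since MySpace never published this flow, the endpoint and the signing helper are both invented:

```typescript
// Sketch of option 1: fetch protected profile data from the provider on
// every page load, render it, and persist nothing locally. The endpoint
// URL and the signRequest helper are hypothetical.

import { signRequest } from "./oauth"; // hypothetical OAuth 1.0a signing helper

interface Profile {
  displayName: string;
  defaultPhotoUrl: string;
}

// Called on every page render; nothing is written to a database or cache,
// so the data stays tethered to the provider.
async function fetchProfileForRender(userId: string): Promise<Profile> {
  const url = `https://api.provider.example/v1/profile/${userId}`; // invented endpoint
  const headers = signRequest("GET", url); // adds the OAuth Authorization header
  const res = await fetch(url, { headers });
  if (!res.ok) throw new Error(`Provider returned ${res.status}`);
  return (await res.json()) as Profile;
}
```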

Embedding vs sharing

So the major difference here is in the mechanism of data delivery and how the information is “leased” or “tethered” to the original source, such that, as Morin said, “when a user deletes an item on Facebook, it gets deleted everywhere else.”

The approach taken by Google Gadgets, and hence OpenSocial, has for the most part been to tether data back to the source via embedded iframes. This means that if someone deletes or changes a social object, it will be deleted or changed across OpenSocial containers, though the containers won’t even notice the difference, since they never had direct access to the data to begin with.
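As a rough illustration of that embedded model, here’s a sketch of how a host page might emit a provider-rendered iframe. The URL and parameters are made up; the point is that the host only embeds markup and never touches the social data itself:

```typescript
// Sketch of the embedded ("tethered") model: the container never handles
// the social data; it only emits an iframe whose contents are rendered by
// the provider. If the provider deletes the object, the iframe simply shows
// the updated (or empty) view on its next load. The URL is hypothetical.

function renderTetheredBadge(containerEl: HTMLElement, viewerId: string): void {
  const iframe = document.createElement("iframe");
  // All data access happens on the provider's side of this URL; the host
  // page has no API through which to read or store the data it displays.
  iframe.src =
    `https://gadgets.provider.example/render?view=profile&viewer=${encodeURIComponent(viewerId)}`;
  iframe.width = "300";
  iframe.height = "120";
  containerEl.appendChild(iframe);
}
```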

The approach that seems likely from Facebook can be intuited by scouring their developer’s terms of service (emphasis added):

You can only cache user information for up to 24 hours to assist with performance.

2.A.4) Except as provided in Section 2.A.6 below, you may not continue to use, and must immediately remove from any Facebook Platform Application and any Data Repository in your possession or under your control, any Facebook Properties not explicitly identified as being storable indefinitely in the Facebook Platform Documentation within 24 hours after the time at which you obtained the data, or such other time as Facebook may specify to you from time to time;

2.A.5) You may store and use indefinitely any Facebook Properties that are explicitly identified as being storable indefinitely in the Facebook Platform Documentation; provided, however, that except as provided in Section 2.A.6 below, you may not continue to use, and must immediately remove from any Facebook Platform Application and any Data Repository in your possession or under your control, any such Facebook Properties: (a) if Facebook ceases to explicitly identify the same as being storable indefinitely in the Facebook Platform Documentation; (b) upon notice from Facebook (including if we notify you that a particular Facebook User has requested that their information be made inaccessible to that Facebook Platform Application); or (c) upon any termination of this Agreement or of your use of or participation in Facebook Platform;

2.A.6) You may retain copies of Exportable Facebook Properties for such period of time (if any) as the Applicable Facebook User for such Exportable Facebook Properties may approve, if (and only if) such Applicable Facebook User expressly approves your doing so pursuant to an affirmative “opt-in” after receiving a prominent disclosure of (a) the uses you intend to make of such Exportable Facebook Properties, (b) the duration for which you will retain copies of such Exportable Facebook Properties and (c) any terms and conditions governing your use of such Exportable Facebook Properties (a “Full Disclosure Opt-In”);

2.B.8) Notwithstanding the provisions of Sections 2.B.1, 2.B.2 and 2.B.5 above, if (and only if) the Applicable Facebook User for any Exportable Facebook Properties expressly approves your doing so pursuant to a Full Disclosure Opt-In, you may additionally display, provide, edit, modify, sell, resell, lease, redistribute, license, sublicense or transfer such Exportable Facebook Properties in such manner as, and only to the extent that, such Applicable Facebook User may approve.

This is further expanded in the platform documentation on Storable Information:

Per the Developer Terms of Service, you may not cache any user data for more than 24 hours, with the exception of information that is explicitly “storable indefinitely.” Only the following parameters are storable indefinitely; all other information must be requested from Facebook each time.

The storable IDs enable you to keep unique identifiers for Facebook elements that correspond to data gathered by your application. For instance, if you collected information about a user’s musical tastes, you could associate that data with a user’s Facebook uid.

However, note that you cannot store any relations between these IDs, such as whether a user is attending an event. The only exception is the user-to-network relation.
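That documented rule translates naturally into a small cache policy: identifiers may persist, everything else expires within a day. Here’s a sketch of what enforcing it might look like; the in-memory Map is purely illustrative:

```typescript
// Sketch of a cache policy matching the documented rule: Facebook uids (and
// the other explicitly "storable" identifiers) may be kept indefinitely,
// while everything else must expire within 24 hours.

const DAY_MS = 24 * 60 * 60 * 1000;

interface CacheEntry {
  value: unknown;
  expiresAt: number | null; // null means "storable indefinitely"
}

const cache = new Map<string, CacheEntry>();

// Storable IDs get no expiry...
function storeUid(appUserKey: string, fbUid: string): void {
  cache.set(`uid:${appUserKey}`, { value: fbUid, expiresAt: null });
}

// ...while profile fields and the like must be re-requested after 24 hours.
function storeTransient(key: string, value: unknown): void {
  cache.set(key, { value, expiresAt: Date.now() + DAY_MS });
}

function read(key: string): unknown {
  const entry = cache.get(key);
  if (!entry) return undefined;
  if (entry.expiresAt !== null && Date.now() > entry.expiresAt) {
    cache.delete(key); // expired: the caller must fetch fresh data
    return undefined;
  }
  return entry.value;
}
```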

I imagine that Facebook Connect will work by “leasing” or “sharing” information to remote sites, requiring them, through agreement and compliance with its terms, to check in periodically (or to receive directives through a push mechanism) for changes to the data, and to flush their caches of stored data every 24 hours or less.
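Facebook has published no actual mechanism for this, so the following is pure speculation, but a push directive for purging leased data might look something like the sketch below; every endpoint name and payload field is invented:

```typescript
// Purely speculative sketch of the push half of the "leased data" model:
// the provider notifies the consuming site when a user deletes or restricts
// an item, and the site must purge its local copy. Facebook has specified no
// such endpoint or payload, so everything here is invented.

import express from "express";

const app = express();
app.use(express.json());

interface PurgeDirective {
  action: "purge";
  uid: string;       // whose data to purge
  fields?: string[]; // which fields; absent means everything
}

app.post("/connect/directives", (req, res) => {
  const directive = req.body as PurgeDirective;
  if (directive.action === "purge") {
    purgeUserData(directive.uid, directive.fields);
  }
  res.sendStatus(204); // acknowledge so the provider can stop retrying
});

// Application-specific cleanup: rows, cache entries, search indexes, etc.
function purgeUserData(uid: string, fields?: string[]): void {
  // ...
}

app.listen(3000);
```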

In either model there is still a central provider and store of the data, but the question for implementation really comes down to whether a remote site ever has direct access to the data, and if so, how long it is allowed to store it.

Of note is the OpenSocial RESTful API, which provides a web-friendly mechanism for addressing and defining resources. Recordon pointed out to me that this API affords all the mechanisms necessary to implement the “leased” model of data access (rather than the embedded model), but leaves it up to the OpenSocial applications and containers to set and enforce their own data access policies.
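For a sense of what the leased model looks like in practice, here’s a small sketch of a consumer fetching a person resource on demand. The /people/{userId}/@self path follows the OpenSocial RESTful protocol; the base URL and the OAuth helper are placeholders:

```typescript
// Sketch of on-demand ("leased") access via the OpenSocial RESTful API.
// The person is addressed as a resource and fetched when needed; how long
// the response may be stored is a policy decision left to the container.

declare function buildOAuthHeader(): string; // assumed to exist elsewhere

async function fetchOpenSocialPerson(
  baseUrl: string, // a container's REST endpoint (placeholder)
  userId: string
): Promise<unknown> {
  const url = `${baseUrl}/people/${encodeURIComponent(userId)}/@self`;
  const res = await fetch(url, {
    headers: { Authorization: buildOAuthHeader() },
  });
  if (!res.ok) throw new Error(`Container returned ${res.status}`);
  return res.json();
}
```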

All of which is a world of difference from Facebook’s approach to date, for which there is neither code nor a spec nor an open discussion about how they’re thinking through the thorny issues involved in making decisions around data access, data control, “tethering” and “portability”. While folks like Plaxo and Yahoo! are actually shipping code, Facebook is still posturing, assuring us to “wait and see”. With something so central and so important, it’s disheartening that Facebook’s “Open” strategy is anything but open, and everything less than transparent.

The battle for the future of the social web

When I was younger, I used to bring over my Super Nintendo games to my friends’ houses and we’d play for hours… that is, if they had an SNES console. If, for some reason, my friend had a Sega system, my games were useless and we had to play something like Sewer Shark. Inevitably less fun was had.

What we kids didn’t know at the time was that we were suffering from a platform war that manifested, more or less, in the form of a standards war for domination of the post-Atari video game market. We certainly didn’t get why Nintendo games didn’t work on Sega systems; they just didn’t, and so we coped, mostly by not going over to the houses of kids who had Sega. No doubt, friendships were made — and destroyed — on the basis of which console you had, and on how many games you had for the preferred platform. Indeed, the kids with the richest parents got a pass, since they simply had every known system and could play anyone’s games, making them, by default, super popular (in other words, it was good to be able to afford to ignore the standards war altogether).

Fast-forward 10 years and we’re on the cusp of a new standards war, where the players and stakes have changed considerably but the nature of warfare has remained much the same as Hal R. Varian and Carl Shapiro described in Information Rules in 1999. But the casualties, as before, will likely be the consumers, customers and patrons of the technologies in question. So, while we can learn much from history about how to fight the war, I think that, for the sake of the web and for web citizens generally, this coming war can be avoided, and that, perhaps, it should be.

Continue reading “The battle for the future of the social web”