Portable Profiles & Preferences on the Citizen-Centric Web

Loyalty Cards by Joe Loong

Let me state the problem plainly: to provide better service, it helps to know more about your customer, so that you can more effectively anticipate and meet her needs.

But, pray tell, how do you learn about or solicit such information over the course of your first interaction? Moreover, how do you go about learning as much as you can, as quickly as you can, without making the request itself burdensome and off-putting?

Well, as obvious as it seems, the answer is to let her tell you.

The less obvious thing is how.

And that’s where user-centric (or citizen-centric) technologies offer the most promise.

It’s like this:

  • If you let someone use an account or ID that she already uses regularly elsewhere, you save her the hassle of creating yet another account that works solely with your service;
  • following on that, a reusable account is more valuable, and its value can be increased further by attaching to it the types of profile attributes that are commonly requested;
  • the more common it becomes to reuse an account, the more people will expect this convenience during new sign-up experiences, ideally to the point of knowing to ask the services they use to support their preferred sign-in mechanism;
  • presuming that service providers’ appetite for profile information and preferences will not decrease, the ability to import such data from identity providers, as it becomes available, will be a natural byproduct of user-centric authentication;
  • as customers realize the convenience of portable profile and preference data, savvy identity providers will make it easier to store and express a wider array of this data, and will subsequently work with relying parties to develop interoperable sign-up flows and on-ramps (see Google and Plaxo).

For this to work, the individual must be motivated to manage her profile information and preferences, which shouldn’t be hard as her data becomes increasingly reusable (sort once, reuse everywhere). Additionally, organizing, maintaining, and accruing this information becomes less onerous when it’s all in one place (or conveniently accessible through one central customer-picked source), as opposed to sharded across many accounts and unaffiliated services.
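To make “sort once, reuse everywhere” a little more concrete, here’s a minimal sketch of how a relying party might ask an identity provider for commonly requested profile attributes during sign-in, using OpenID Attribute Exchange. The attribute aliases and the selection of fields are my own illustration, not anything prescribed by a particular provider:

    # Sketch: building an OpenID Attribute Exchange (AX) fetch request
    # so profile data can be imported at sign-in. Attribute aliases and
    # the choice of fields here are illustrative assumptions.
    from urllib.parse import urlencode

    AX_SCHEMA = {
        "email":    "http://axschema.org/contact/email",
        "fullname": "http://axschema.org/namePerson",
        "country":  "http://axschema.org/contact/country/home",
    }

    def ax_fetch_params(required=("email",), optional=("fullname", "country")):
        """Build the openid.ax.* parameters for a fetch_request."""
        params = {
            "openid.ns.ax": "http://openid.net/srv/ax/1.0",
            "openid.ax.mode": "fetch_request",
            "openid.ax.required": ",".join(required),
            "openid.ax.if_available": ",".join(optional),
        }
        for alias in (*required, *optional):
            params[f"openid.ax.type.{alias}"] = AX_SCHEMA[alias]
        return params

    print(urlencode(ax_fetch_params()))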

You can get similar functionality with form-filling software like 1Password, except that in the model I’m describing, the data travels with you — beyond the browser and off the desktop — to wherever you need it, because it is stored in the cloud.

As it becomes easier to store and share this information, I think more people will do so as a happenstance of using more social software — and will become acclimated to providing their friends and service providers with varying degrees of access to increasing amounts of personally descriptive data.

Companies that jump on this and make it easier for people to manage their profile and preference data will benefit — having access to more accurate, timely, and better-maintained information, leading to more personalized user experiences and accelerating the path to satisfaction.

Companies that do get this right will benefit from what is emerging as a new social contract. As a citizen of the web, if you let me manage my relationship with you, and make it easy for me to do so, giving me the choice of how and where I store my profile and preference data, I’ll be more likely, more willing, and more able to share it with you, in an ongoing fashion, especially as you use it to improve my experiences with you.

My name is not a URL

Twitter / Mark Zuckerberg: Also just created a public ...

Arrington has a post that claims that Facebook is getting wise to something MySpace has known from the start – users love vanity URLs.

I don’t buy it. In fact, I’m pretty sure that the omission of vanity URLs on Facebook has been an intentional design decision from the beginning, and one that I’ve learned to appreciate over time.

From what I’ve gathered, it was co-founder Dustin Moskovitz’s stubbornness that kept Facebook from allowing the use of pseudonymic usernames common on previous-generation social networks like AOL. Considering that Mark Zuckerberg’s plan is to build an online version of the relationships we have in real life, it only makes sense that we should, therefore, call our friends by their IRL names — not the ones left over or suggested by a computer.

But there’s actually something deeper going on here — something that I talked about at DrupalCon — because there are at least two good uses for letting people set their own vanity URLs — three if your service somehow surfaces usernames as an interface handle:

  1. Uniqueness and remembering
  2. Search engine optimization
  3. Facilitating member-to-member communication (as in the case of Twitter’s @replies)

For my own sake, I’ve lately begun decreasing the distance between my real identity and my online persona, switching from @factoryjoe to @chrismessina on Twitter. While there are plenty of folks who know me by my digital moniker, there are far more who don’t and shouldn’t need to in order to interact with me.

When considering SEO, it’s quite obvious that Google has already picked up on the correlation:

chris messina - Google Search

Ironically, in Dustin’s case (intentionally or not) he is not an authority for his own name on Google (despite the uniqueness of his name). Instead, semi-nefarious sites like Spock use SEO to get prominent placement for Dustin’s name (whether he likes it or not):

Dustin Moskovitz - Google Search

Finally, in cases like Twitter, IM or IRC, nicknames or handles are used explicitly to refer to other people on the system, even if (or especially if!) real identities are never revealed. While this separation can afford a number of perceived benefits, long-term it’s hard to quantify the net value of pseudonymity when most assholes on the web seem to act out most aggressively when shrouding their real names.

By shunning vanity URLs for its members, Facebook has achieved three things:

  1. Establishes a new baseline for transparent online identity
  2. Avoids the naming collision problem by scoping relationships within a person’s [reciprocal] social graph
  3. Upgrades expectations for human interaction on social websites

Because everyone on Facebook has to use their real name (and Facebook will root out and disable accounts with pseudonyms), there’s a higher degree of accountability: legitimate users are forced to reveal who they are offline. No more “funnybunny345” or “daveman692” creeping around and leaving harassing wall posts on your profile; you know exactly who left the comment because their name is attached to their account.

Go through the comments on TechCrunch and compare those left by Facebook users with those left by everyone else. In my brief analysis, Facebook commenters tend to take their commenting more seriously. It’s not a guarantee, but there is definitely a correlation between durable identity and higher quality participation.

Now, one might point out that, without unique usernames, you’d end up with a bunch of name collisions — and you’d be right. However, combining search-by-email with profile photos largely eliminates this problem, and since Facebook requires bidirectional friendship confirmation, it’s going to be hard to get the wrong “Mike Smith” showing up in your social graph. So instead of futzing with (and probably forgetting) whatever strange username your friend uses, you can just search by their real name using Facebook’s type-ahead find. And with autocompletion, you’ll never spell it wrong (of course Gmail has had this for ages as well).

Let me make a logical leap and point out that this is the new namespace — the human-friendly namespace — that Tim O’Reilly observed emerging when he defined Web 2.0, pointing out that a future source of lock-in would be “owning a namespace”. This is why location-based services are so hot. This is also why it matters who gets out in front first by developing a database of places named by humans — rather than by their official names. Search will get better when you can bound it — to the confluence of your known world and the known/colloquial world of your social graph.

When I was in San Diego a couple weeks back, it dawned on me that if I searched for “Joe’s Crab Shack”, no search engine on earth would be able to give me a satisfying result… unless it knew where I was. Or where I had been. Or where my friends had been. This is where social search and computer-augmented social search become powerful (see Aardvark). Not just that, but this is where owning a database of given names tied to real things becomes hugely powerful (see Foursquare). This is where social objects with human-given names become the spimatic web.

So, as this plays out, success will find the designer who most nearly replicates the offline world online. Consider:

Twitter / Rear Adm. Monteiro: @mat and I are in the back ...

vs:

Facebook | @replies

and:

iChat

vs.

Facebook Chat

Ignoring content, it seems to me that the latter examples are much easier to grok without knowing anything about Facebook or Twitter — and are much closer approximations of real life.

Moreover, in EventBox, there is evidence that we truly are in a transitional period, where a large number of people still identify themselves or know their friends by usernames, but an increasing number of newcomers are more comfortable using real names:

Eventbox Preferences

We’re only going to see more of this kind of thing, where the data-driven design approach will give way to a more humane aesthetic overall. It begins by calling people by the names we humans prefer to — and will always — use. And I think Facebook got it right by leaving out the vanity URLs.

What PayPal’s membership in the OpenID Foundation could mean

PayPal logo

Brian Kissel announced this morning that PayPal has joined the board of the OpenID Foundation as our sixth corporate member, with Andrew Nash, Sr. Director of Information Risk Management and a longstanding advocate for OpenID, as their representative.

That PayPal has joined is certainly good news, and helps to diversify the types of companies sitting on the OpenID Foundation board (PayPal joins Google, IBM, Microsoft, VeriSign and Yahoo!). It also provides a useful opportunity to think about how OpenID could be useful (if not essential) for financial transactions on the web.

For one thing, PayPal already relies on email addresses for identification, and one of the things that I’m strongly advocating for in OpenID 2.1 is the use of email-style identifiers in OpenID flows.

Given that PayPal already assumes that you are your email address, things become more interesting when a company like PayPal starts to assume that you are your OpenID (regardless of the format). With discovery, your OpenID could be useful not just as an indicator of your data resources across the web (essential in cloud computing), but could also be useful for pointing to your financial resources. Compare these two XRDS-Simple entries (the latter is fictional):

    <!-- Portable Contacts Delegation -->
    <Service>
      <Type>http://portablecontacts.net/spec/1.0</Type>
      <URI>http://pulse.plaxo.com/pulse/pdata/contacts</URI>
    </Service>

    <!-- Payment Gateway Delegation -->
    <Service>
      <Type>http://portablepayments.net/spec/1.0</Type>
      <URI>http://paypal.com/payment/</URI>
    </Service>

From this simple addition to your discovery profile, third parties would be able to request authorization for payment, without necessarily having to ask you every time who your provider is. And of course no payment would be disbursed without your explicit authorization, but the point is that sellers would be able to offer a much more seamless payment experience by supporting OpenID and discovery.
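To illustrate what a consumer of that discovery profile might do with it, here’s a small sketch that plucks a service endpoint out of an XRDS document by its type URI (remember, the portablepayments.net type is fictional):

    # Sketch: find the endpoint of a <Service> in an XRDS document by
    # its type URI. The payment type URI below is fictional (see above).
    import xml.etree.ElementTree as ET

    XRD_NS = "{xri://$xrd*($v*2.0)}"

    def find_service_uri(xrds_xml, type_uri):
        """Return the <URI> of the first <Service> advertising type_uri."""
        root = ET.fromstring(xrds_xml)
        for service in root.iter(f"{XRD_NS}Service"):
            if type_uri in [t.text for t in service.findall(f"{XRD_NS}Type")]:
                uri = service.find(f"{XRD_NS}URI")
                return uri.text if uri is not None else None
        return None

    xrds = """<?xml version="1.0"?>
    <xrds:XRDS xmlns:xrds="xri://$xrds" xmlns="xri://$xrd*($v*2.0)">
      <XRD>
        <Service>
          <Type>http://portablepayments.net/spec/1.0</Type>
          <URI>http://paypal.com/payment/</URI>
        </Service>
      </XRD>
    </xrds:XRDS>"""

    print(find_service_uri(xrds, "http://portablepayments.net/spec/1.0"))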

The pieces are more or less in place here, and with PayPal on board, I think that we’re starting to see how OpenID can be used to smooth the on-boarding process for any number of routine tasks — from specifying where you store your photos to pointing to the service(s) that you use for payment.

I commonly use the metaphor of credit cards for OpenID. One thing that makes credit cards convenient is that the 16-digit unique ID on each card is embedded in the magnetic stripe, meaning that it’s trivial for consumers to just swipe their cards rather than typing in their account number. OpenID and discovery, combined, provide a similar kind of experience for the web. I think we need to keep this in mind as we move the state of the art forward, and think about what can be accomplished once people not only have durable identity on the web — but can use those identifiers to access other forms of real-world value (and can secure them however they see fit).

Twitter and the Password Anti-Pattern

Twitter / Alex Payne: @factoryjoe Yes, OAuth is ...

I’ve written about the password anti-pattern before, and have, with regards to Twitter, advocated for the adoption of some form of delegated authentication solution for some time now.

It’s not as if Twitter or lead developer Alex Payne is unaware of the need for such a solution: the need has not only been publicly recognized (it’s Issue #2 in their API issue queue), but a solution will be available as part of a “beta” program shortly. The problem is that it’s taken so long for Twitter’s “password anti-pattern” problem to get the attention it deserves (Twitter acknowledged last August that they were moving to OAuth) that unsuspecting Twitter users have now exposed their Twitter credentials to the kind of threat we knew was there all along.

This isn’t the first time either, and it probably won’t be the last, at least until Twitter changes the way third party services access user accounts.

Rather than focus on Twply (which others have done, and whose evidence still lingers), I thought I’d talk about why this is an important problem, what solutions are available, why Twitter hasn’t adopted them and then look at what should happen here.
Continue reading “Twitter and the Password Anti-Pattern”

Lightweight access PINs: a modest proposal for enabling OpenID in desktop and mobile apps

While the news that Google is now an OpenID Provider was generally welcomed, a common chorus decrying their support (along with that of other large OPs like Yahoo and Microsoft) as at best half-hearted, at worst as ruining OpenID, has revealed a significant barrier to such large providers becoming relying parties (even beyond usability).

Eric Sachs (Google Security Team) writes:

One other question that a lot of people asked yesterday is when a large provider like Google will become a relying party. There is one big problem that stands in the way of doing that, but fortunately it is more of a technology problem than a usability issue. That problem is that rich-client apps (desktop apps and mobile apps) are hard-coded to ask a user for their username and password. As an example, all Google rich-client apps would break if we supported federated login for our consumer users, and in fact they do break for the large number of our enterprise E-mail outsourcing customers who run their own identity provider, and for which Google is a relying party today. This problem with rich-client apps also affects other sites like Plaxo who are already relying parties.

Fortunately there is a solution, and it was developed specifically because Ma.gnolia ran into this problem when it became an OpenID relying party. The result, nine months in the making, was OAuth. Eric even recognizes this:

We need standard open-source components on as many platforms as possible to enable those rich-client apps to support OAuth. That includes a lot more platforms then just Windows and Mac. The harder part is mobile devices (Blackberry, Symbian, Windows Mobile, iPhone, and yes even Android), and other Internet connected devices like Tivos, Apple TVs, Playstations, etc. that have rich-client apps that ask users for their passwords to access services like Youtube, Google photos, etc. If we build these components, they will be useful not only to Google, but also to any other relying parties which have rich-client apps or exposes APIs, and it will also help enterprise SaaS vendors like Salesforce.

iPhone Sync Code

As I’ve been thinking about this problem, I’ve come to see a simpler, perhaps more familiar approach as an intermediate step toward full-on delegated authorization: one that would be relatively easy to implement given common interface patterns today. For comparison, Pownce’s iPhone app originally used out-of-band browser-based authentication, leading to a swarm of user criticism and resulting in a compromised solution that required embedding a web browser in the app. Less than ideal.

In my proposal, rather than ask for a user’s password, an easier-to-remember, OP-issued numerical PIN would be used to authenticate requests. Better still, this approach is already supported in OAuth; it’s just not widely used yet (though it is similar to how Flickr authorizes mobile clients).

The basic concept is that you’d have one password (or other strong authentication method) for your primary OpenID account and you’d have one (or more) PINs that you would use to access your account remotely — perhaps in limited risk scenarios or where (again) the full browser-based OAuth flow is not possible or warranted.
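Here’s a rough sketch, in code, of what that exchange might look like; the endpoint, parameter names, and response format are all hypothetical, since nothing like this has been specified:

    # Sketch: trading an OP-issued PIN for a limited-scope OAuth token.
    # The endpoint, parameters, and response format are hypothetical.
    import urllib.request
    from urllib.parse import urlencode

    EXCHANGE_URL = "https://op.example.com/oauth/pin_exchange"  # hypothetical

    def exchange_pin_for_token(identifier, pin):
        """Trade a short numeric PIN for a limited-scope access token."""
        body = urlencode({"openid": identifier, "pin": pin}).encode()
        with urllib.request.urlopen(EXCHANGE_URL, data=body) as resp:
            # Assume an OAuth-style form-encoded response:
            # oauth_token=...&oauth_token_secret=...
            pairs = dict(p.split("=", 1) for p in resp.read().decode().split("&"))
        return pairs["oauth_token"], pairs["oauth_token_secret"]

    # The app would then sign API calls with this token exactly as in
    # plain OAuth; losing the PIN never exposes the primary password,
    # and the OP can revoke the PIN without disrupting anything else.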

Although I initially opposed FriendFeed’s use of Remote Keys, I now think that there’s some merit to this approach, as long as the underlying mechanism uses standard OAuth calls.

There are plenty of holes in this approach, but inasmuch as it enables an existing pattern to be phased out gently, I think it offers at least the foundation of an idea that could be useful. It also could be used as a counter-balance to some of the current thinking on federated login flows with OAuth.

Consider these three sign in boxes for comparison:

  1. Traditional Password
    traditional password
  2. Lightweight PIN access
    pin-access
  3. Full OAuth
    Full OAuth

Thoughts welcome.

OpenID usability is not an oxymoron

Julie Zhou of Facebook discusses usability findings from Facebook Connect. Photo © John McCrea. All rights reserved.

See? We're working on this! Monday last week marked the first-ever OpenID UX Summit at Yahoo! in Sunnyvale, with over 40 in attendance. Representatives came from MySpace, Facebook, Google, Yahoo!, Vidoop, Janrain, Six Apart, AOL, Chimp, Magnolia, Microsoft, Plaxo, Netmesh, Internet2 and Liberty Alliance to debate and discuss how best to make implementations of the protocol easier to use and more familiar.

John McCrea covered the significance of the summit on TechCrunchIT (and recognized Facebook’s welcomed participation) and has a good overall summary on his blog.

While the summit was a long-overdue step towards addressing the clear usability issues directly inhibiting the spread of OpenID, there are four additional areas that I think need more attention. I’ll address each separately. Continue reading “OpenID usability is not an oxymoron”

OAuth for the iPhone: Pownce.app

Pownce OAuth flow Step 1

If you’re one of the lucky folks who’s been able to upgrade your iPhone to the 2.0 firmware (and activate it), I encourage you to give the Pownce application a try, if only to see a real-world example of OAuth in action (that link will open in iTunes).

Here’s how it goes in pictures:

Pownce OAuth flow Step 1 Pownce OAuth flow Step 2 Pownce OAuth flow Step 3 Pownce OAuth flow Step 4/Final

And the actual flow:

  1. Launch the Pownce app. You’ll be prompted to log in at Pownce.com.
  2. Pownce.app launches Pownce.com via an initial OAuth request; here you sign in to your Pownce account using your username and password (if Pownce supported OpenID, you could sign in with OpenID as well).
  3. Once successfully signed in to your account, you can grant the Pownce iPhone app permission to access your account.
  4. Clicking Okay follows what is basically a pownce:// protocol link, which fires Pownce.app back up to complete the transaction (see the sketch after this list).
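For the curious, here’s a rough sketch of the two interesting joints in that flow: building the authorization URL with a custom-scheme callback, and parsing the pownce:// link that relaunches the app. The URLs and parameters are illustrative, based on OAuth 1.0 conventions rather than any published Pownce documentation:

    # Sketch: the browser leg of the flow. A custom pownce:// callback
    # is what bounces the user from Safari back into the native app.
    # URLs and parameter names are illustrative, not Pownce's actual API.
    from urllib.parse import urlencode, urlparse, parse_qs

    AUTHORIZE_URL = "https://pownce.com/oauth/authorize"  # illustrative

    def authorization_url(request_token):
        params = {
            "oauth_token": request_token,
            # The custom scheme relaunches the native app (step 4 above):
            "oauth_callback": "pownce://oauth/callback",
        }
        return f"{AUTHORIZE_URL}?{urlencode(params)}"

    def handle_callback(url):
        """Parse the pownce:// callback URL the OS hands to the app."""
        query = parse_qs(urlparse(url).query)
        return query["oauth_token"][0]

    print(authorization_url("dummy-request-token"))
    print(handle_callback("pownce://oauth/callback?oauth_token=abc123"))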

There are three important aspects of this:

  • First, you’re not entering your username and password into the Pownce application — you’re only entering it into the website. This might not seem like a great distinction, but if a non-Pownce developed iPhone application wanted to access or post to your Pownce account, this flow could be reused, and you’d never need to expose your credentials to that third party app;
  • Second, it creates room for the adoption of OpenID — or some other single sign-on solution — to be implemented at Pownce later on, since OAuth doesn’t specify how you do authentication.
  • Third, if the iPhone is lost or stolen, the owner of the phone could visit Pownce.com and disable access to their account via the Pownce iPhone app — and not need to change their password and disrupt all the other services or applications that might already have been granted access.

Personally, as I’ve fired up an increasing number of native apps on the iPhone 2.0 software, I’ve been increasingly frustrated and annoyed at how many of them want my username and password, and how few of them support this kind of delegated authorization flow.

If you consider that there are already a few Twitter-based applications available, and none of them support OAuth (Twitter still has yet to implement OAuth), in order to even test these apps out, you have to give away your credentials over and over again. Worse, you have no guarantee that a third party will destroy your credentials once you’ve handed them over, even if you uninstall the application.

These are a few reasons to consider OAuth for iPhone application development and authorization. Better yet, Jon Crosby’s Objective-C library can even give you a head start!

Hat tip to Colin Devroe for the suggestion. Cross-posted to the OAuth blog.

Announcing Emailtoid: mapping email addresses to OpenIDs

Emailtoid

The other night at Beer and Blog in Portland, fellow Vidooper Michael T Richardson announced and launched a new service that I’m both excited and a little apprehensive about.

The service is called Emailtoid, and while I prefer to pronounce it “email-toyed”, others might pronounce it “email two eye-dee”. And depending on your pronunciation, you might realize that this service is about using an email address as an ID — specifically an OpenID.

This is not a new idea, and it’s one that’s been debated and discussed in the OpenID community an awful lot, culminating in a rough outline by Brad Fitzpatrick of how it might work following the Social Graph FOO Camp this past spring, which David Fuelling turned into an early draft spec.

Well, we looked at this work and this discussion and felt that sooner or later, in spite of all the benefits of using actual URLs for identity, someone needed to take the lead and actually build out this concept so we have something real to banter about.

The pragmatic reality is that many people are comfortable using email addresses as their identity online when signing up for new services; furthermore, many, many more people have email addresses than have URLs or homepages that they call their own (or can readily identify). And forcing people to learn yet another form of identifier for the web, just to satisfy the design of a protocol, for arguably marginal value and a lesser user experience, doesn’t make sense. Put another way: the limitations of the technology should not be forced on end users, especially when they don’t need to be. And that’s why Emailtoid is a necessary experiment towards advancing identity on the web.

How it works

Emailtoid is a very simple service, and in fact is designed for obsolescence. It’s meant as a fallback for now, enabling relying parties to accept email addresses as identifiers without requiring the generation of a new local password and without requiring the address owner to give up or reveal their existing email credentials (otherwise known as the “password anti-pattern”).

Enter your email - Emailtoid

The flow works like this:

  1. Users enter either an OpenID or email address into a typical OpenID input field. For the purpose of this flow, we’ll presume an email address is used.
  2. The relying party splits email addresses at the ‘@’ symbol into the username and the domain, generating a directed identity request to the email domain. If an XRDS, YADIS or XRDS-Simple document is discovered at the domain, the typical OpenID flow is invoked.
  3. If no discovery document is found, the service falls back to Emailtoid (sending a request like http://emailtoid.net/mapper?email=jane@example.com), where users verify that they own the supplied email address by providing the one-time access token that Emailtoid mailed to them.
  4. At this point, users may optionally associate an existing OpenID with their email address, or use the OpenID auto-generated by Emailtoid. Emailtoid is not intended to serve as a full-featured OpenID provider, and we encourage using an OpenID from a third-party OpenID provider.
  5. In the case where users supply and verify their own OpenID, Emailtoid will create a 302 HTTP redirect removing Emailtoid from future interactions completely.

Should an email provider supply a discovery document after an Emailtoid mapping has been made, the new mapping will take precedence.
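In code, the relying-party side of this flow reduces to something like the sketch below; the discovery step is stubbed out, and only the mapper URL pattern comes from the flow above:

    # Sketch: relying-party fallback logic for email-style identifiers.
    # Discovery is stubbed; the mapper URL pattern is from the post.
    from urllib.parse import urlencode

    EMAILTOID_MAPPER = "http://emailtoid.net/mapper"

    def discover_xrds(domain):
        """Stub for XRDS/YADIS discovery against the email domain."""
        return None  # assume no discovery document was found

    def resolve_email_identifier(email):
        username, domain = email.split("@", 1)
        endpoint = discover_xrds(domain)
        if endpoint:
            return ("openid", endpoint)  # typical directed-identity flow
        # Fall back to Emailtoid, which verifies address ownership with a
        # one-time token and maps the email to an OpenID.
        return ("emailtoid", f"{EMAILTOID_MAPPER}?{urlencode({'email': email})}")

    print(resolve_email_identifier("jane@example.com"))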

Opportunities and issues

The drive behind Emailtoid, again, is to reduce the friction of OpenID by reusing familiar identifiers (i.e. email addresses). Clearly the challenges of achieving OpenID adoption are not simply technological; to a great degree they hinge on the user experience becoming more streamlined and delivering on the promise of greater security and convenience.

Therefore, if a service advertises that they support signing in with an email address, they must keep that promise.

Unfortunately, until all email providers do some kind of local resolution and OpenID authentication, we will need a centralized mapper such as Emailtoid to provide the fallback mapping. And therein lies the rub, defeating some of the distributed design of OpenID.

If anything, Emailtoid is intended to drive forward a conversation about the experience of OpenID, and about how we can make the protocol compatible with, or complementary to, existing and well-known means of identifying oneself on the web. Is it a final solution? Probably not — but it’s up, it’s running, it works, and it forces us to look critically at the question of emails as OpenIDs, now that we can actually experience the flow, and the feeling, of entering an email address into an OpenID box without ever having to enter, or create, another unnecessary password.

Thoughts on dynamic privacy

A highly touted aspect of Facebook Connect is the notion of “dynamic privacy”:

As a user moves around the open Web, their privacy settings will follow, ensuring that users’ information and privacy rules are always up-to-date. For example, if a user changes their profile picture, or removes a friend connection, this will be automatically updated in the external website.

Over the course of the Graphing Social Patterns East conference here in DC, Dave Morin and others from Facebook’s Developer Platform have made many a reference to this scheme but have provided frustratingly scant detail on how it will actually work.

Friend Connect - Disable by Facebook

In a conversation with Brian Oberkirch and David Recordon, it dawned on me that the pieces for Dynamic Privacy are already in place and that, to some degree, it seems that it’s really just a matter of figuring out how to effectively enforce policy across distributed systems in order to meet user expectations.

MySpace actually has made similar announcements in their Data Availability approach, and if you read carefully, you can spot the fundamental rift between the OpenSocial and Facebook platforms:

Additionally, rather than updating information across the Web (e.g. default photo, favorite movies or music) for each site where a user spends time, now a user can update their profile in one place and dynamically share that information with the other sites they care about. MySpace will be rolling out a centralized location within the site that allows users to manage how their content and data is made available to third party sites they have chosen to engage with.

Indeed, Recordon wrote about this on O’Reilly Radar last month (emphasis original):

He explained that MySpace said that due to their terms of service the participating sites (e.g. Twitter) would not be allowed to cache or store any of the profile information. In my mind this led to the Data Availability API being structured in one of two ways: 1) on each page load Twitter makes a request to MySpace fetching the protected profile information via OAuth to then display on their site or 2) Twitter includes JavaScript which the browser then uses to fill in the corresponding profile information when it renders the page. Either case is not an example of data portability no matter how you define the term!

Embedding vs sharing

So the major difference here is in the mechanism of data delivery and how the information is “leased” or “tethered” to the original source, such that, as Morin said, “when a user deletes an item on Facebook, it gets deleted everywhere else.”

The approach taken by Google Gadgets, and hence OpenSocial, for the most part, has been to tether data back to the source via embedded iframes. This means that if someone deletes or changes a social object, it will be deleted or changed across OpenSocial containers, though they won’t even notice the difference since they never had access to the data to begin with.

The approach that seems likely from Facebook can be intuited by scouring their developer’s terms of service (emphasis added):

You can only cache user information for up to 24 hours to assist with performance.

2.A.4) Except as provided in Section 2.A.6 below, you may not continue to use, and must immediately remove from any Facebook Platform Application and any Data Repository in your possession or under your control, any Facebook Properties not explicitly identified as being storable indefinitely in the Facebook Platform Documentation within 24 hours after the time at which you obtained the data, or such other time as Facebook may specify to you from time to time;

2.A.5) You may store and use indefinitely any Facebook Properties that are explicitly identified as being storable indefinitely in the Facebook Platform Documentation; provided, however, that except as provided in Section 2.A.6 below, you may not continue to use, and must immediately remove from any Facebook Platform Application and any Data Repository in your possession or under your control, any such Facebook Properties: (a) if Facebook ceases to explicitly identify the same as being storable indefinitely in the Facebook Platform Documentation; (b) upon notice from Facebook (including if we notify you that a particular Facebook User has requested that their information be made inaccessible to that Facebook Platform Application); or (c) upon any termination of this Agreement or of your use of or participation in Facebook Platform;

2.A.6) You may retain copies of Exportable Facebook Properties for such period of time (if any) as the Applicable Facebook User for such Exportable Facebook Properties may approve, if (and only if) such Applicable Facebook User expressly approves your doing so pursuant to an affirmative “opt-in” after receiving a prominent disclosure of (a) the uses you intend to make of such Exportable Facebook Properties, (b) the duration for which you will retain copies of such Exportable Facebook Properties and (c) any terms and conditions governing your use of such Exportable Facebook Properties (a “Full Disclosure Opt-In”);

2.B.8) Notwithstanding the provisions of Sections 2.B.1, 2.B.2 and 2.B.5 above, if (and only if) the Applicable Facebook User for any Exportable Facebook Properties expressly approves your doing so pursuant to a Full Disclosure Opt-In, you may additionally display, provide, edit, modify, sell, resell, lease, redistribute, license, sublicense or transfer such Exportable Facebook Properties in such manner as, and only to the extent that, such Applicable Facebook User may approve.

This is further expanded in the platform documentation on Storable Information:

Per the Developer Terms of Service, you may not cache any user data for more than 24 hours, with the exception of information that is explicitly “storable indefinitely.” Only the following parameters are storable indefinitely; all other information must be requested from Facebook each time.

The storable IDs enable you to keep unique identifiers for Facebook elements that correspond to data gathered by your application. For instance, if you collected information about a user’s musical tastes, you could associate that data with a user’s Facebook uid.

However, note that you cannot store any relations between these IDs, such as whether a user is attending an event. The only exception is the user-to-network relation.

I imagine that Facebook Connect will work by “leasing” or “sharing” information to remote sites, requiring them, through agreement and compliance with Facebook’s terms, to check in periodically (or to receive directives through a push mechanism) for changes to data, and to flush caches of stored data every 24 hours or less.
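Here is a sketch of what compliance might look like on the relying site’s side, under my reading of the terms; the whitelist and the directive hook are assumptions, not a published API:

    # Sketch: a "leased" cache of Facebook properties. Non-storable data
    # expires after 24 hours; whitelisted IDs persist; deletion directives
    # from the provider flush immediately. Names here are assumptions.
    import time

    STORABLE_KEYS = {"uid"}       # IDs deemed storable indefinitely
    LEASE_SECONDS = 24 * 60 * 60  # the 24-hour cap from the terms

    _cache = {}  # key -> (value, fetched_at)

    def put(key, value):
        _cache[key] = (value, time.time())

    def get(key):
        entry = _cache.get(key)
        if entry is None:
            return None
        value, fetched_at = entry
        if key not in STORABLE_KEYS and time.time() - fetched_at > LEASE_SECONDS:
            del _cache[key]  # lease expired: re-request from the source
            return None
        return value

    def on_delete_directive(key):
        """The provider says the user removed this item: delete it here too."""
        _cache.pop(key, None)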

In either model there is still a central provider and store of the data, but the question for implementation really comes down to whether a remote site ever has direct access to the data, and if so, how long it is allowed to store it.

Of note is the OpenSocial RESTful API, which provides a web-friendly mechanism for addressing and defining resources. Recordon pointed out to me that this API affords all the mechanisms necessary to implement the “leased” model of data access (rather than the embedded model), but leaves it up to the OpenSocial applications and containers to set and enforce their own data access policies.

…Which is a world of difference from Facebook’s approach to date, for which there is neither code nor a spec nor an open discussion about how they’re thinking through the tenuous issues imbued in making decisions around data access, data control, “tethering” and “portability”. While folks like Plaxo and Yahoo are actually shipping code, Facebook is still posturing, assuring us to “wait and see”. With something so central and so important, it’s disheartening that Facebook’s “Open” strategy is anything but open, and everything less than transparent.

Relationships are complicated

Facebook | Confirm Requests

I’ve noticed a few interesting responses to my post on simplifying XFN. While my intended audiences were primarily fellow microformat enthusiasts and “lower case semantic web” types, there seems to be a larger conversation underway that I’d missed — one that both Adam Greenfield and Tim Berners-Lee have commented on.

In a treatise against XFN (and similarly reductive expressions of human relationships) from December of last year, Greenfield said a number of profound things:

  • …one of my primary concerns has always been that we not accede to the heedless restructuring of everyday human relations on inappropriate and clumsy models derived from technical systems – and yet, that’s a precise definition of social networking as currently instantiated.
  • All social-networking systems constrain, by design and intention, any expression of the full band of human relationship types to a very few crude options — and those static!
  • …it’s impossible to use XFN to model anything that even remotely resembles an organic human community. I passionately believe that this reductive stance is not merely wrong, but profoundly wrong, in that it deliberately aims to bleed away all the nuance, complication and complexity that makes any real relationship what it is.
  • I believe that technically-mediated social networking at any level beyond very simple, local applications is fundamentally, and probably persistently, a bad idea. From where I stand, the only sane response is to keep our conceptions of friendship and affinity from being polluted by technical metaphors and constraints to begin with.

Whew! Strong stuff, but useful, challenging and insightful.

Meanwhile, TBL defended a semi-autistic perspective in describing the future of the Semantic Web (yes, the uppercase version):

At the moment, people are very excited about all these connections being made between people — for obvious reasons, because people are important — but I think after a while people will realise that there are many other things you can connect to via the web.

While my sympathies actually lie with Greenfield (especially after a weekend getting my mom set up on Facebook so she could send me photos without clogging my inbox with 80MB emails… a deficiency in the design of the technology, not my mother, mind you!), I also see the promise of a more self-aware, self-descriptive web. But, realistically, that web is a long way off, and more likely, that web is still going to need human intervention to make it work — at least for humans to benefit from it (oh sure, just get rid of the humans and the network will be just perfect — like planes without passengers, right?).

But in the meantime, there is a social web that needs to be improved, and that can be improved, in fairly simple and straightforward ways that will make it easier for regular folks who don’t (and shouldn’t have to) care about “data portability” and “password anti-patterns” and “portable contact lists” to benefit from the fact that the family and friends they care about are increasingly accessible online, and actually want to hear from them!

Even though Justin Smith takes another reductive look at the features Facebook is implementing, claiming that it wants to “own communications with your friends“, the reality is, people actually want to communicate with each other online! Therefore it follows that, if you’re a place where people connect and re-connect with one another, it’s not all that surprising that a site like Facebook would invest in and make improvements to facilitate interaction and communication between their members!

But let’s back up a minute.

If we take for granted that people do want to connect and to communicate on social networks (they seem to do it a lot, so much so that one might even argue that people enjoy doing it!), what role should so-called “portable contact lists” play in this situation? I buy Greenfield’s assertion that attempts by technologists to reduce human relationships to a predefined schema (based on prior behavior or not) are a failing proposition, but that seems to ignore the opportunity presented by the fact that people are having to maintain several separate lists of their friends in many different places, for no other reason than an omission from the design of the social internetwork.

Put another way, it’s not good enough to simply dismiss the trend of social networking because our primitive technological expressions don’t reflect the complexity of real human relationships, or because humans are just one kind of “object” to be “semantified” in TBL’s “Giant Global Graph”… instead, people are connecting today, and they want to connect to people outside of their chosen “home” network, and frankly the experience sucks and it’s confusing. It’s not good enough to get all prissy about it; the reality is that there are solutions out there today, and there are people working on these things, and we need smart people like Greenfield and Berners-Lee to see that solutions that enable the humanist web (however semantic it needs to be) are being prioritized and built… and that we [need] not accede to the heedless restructuring of everyday human relations on inappropriate and clumsy models derived from technical systems.

I can say that, from what I’ve observed so far, these are things that computers can do for us, to make the social computing experience more humane, should we establish simple and straightforward means to express a basic list of contacts between contexts:

  • help us find and connect to people we’ve already indicated that we know
  • introduce us to people whom we might know, or, based on social proximity, should know (with no obligation to make friends, of course!)
  • keep us from accidentally bumping into people we’d rather not interact with (see block-list portability)
  • help us segment our friendships in ways that make sense to us (rather than the semi-arbitrary ways that social networks define)
  • help us confidently share things with just the people with whom we intend to share

There may be others here, but off the top of my head, I think satisfying these basic tasks is a good start for any social network that believes it’s useful to let you connect and interact with people you might know but who may not have already signed up for the service.
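And lest “a basic list of contacts between contexts” sound hand-wavy, here’s a minimal sketch of what such a list could look like on the wire, borrowing the Portable Contacts JSON shape mentioned earlier; the field values (and the use of tags, which I come back to below) are made up:

    # Sketch: a basic contact list expressed in the Portable Contacts
    # JSON shape. All values are made up; "tags" anticipates the idea of
    # tagging contacts mentioned below.
    import json

    contacts_response = {
        "startIndex": 0,
        "itemsPerPage": 2,
        "totalResults": 2,
        "entry": [
            {
                "id": "123",
                "displayName": "Jane Example",
                "emails": [{"value": "jane@example.com", "primary": True}],
                "tags": ["coworker"],
            },
            {
                "id": "456",
                "displayName": "Mike Smith",
                "emails": [{"value": "mike@example.org"}],
                "tags": ["family"],
            },
        ],
    }

    print(json.dumps(contacts_response, indent=2))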

I should make one last point: when thinking about importing contacts from one context to another, I do not think that it should be an unthinking act. That is, even though it’s merely data being copied between servers, those bits represent things much more sacred and complicated than any computer might ever be programmed to imagine. Therefore, just because we can facilitate and lower the friction of “bringing your friends with you” from one place to another doesn’t mean that it should be an automatic process, or that all your friends in one place should be made to be your friends in the new place.

And regardless of how often good ol’ Mark Zuckerberg claims that the end game is to make communications more efficient, when it comes to relationships, every connection transposed from one context to another should have to be reconsidered (hmm, a great argument for tagging of contacts…)! We cannot and should not make assumptions about the nature of people’s relationships, no matter what kind of semantics they’ve used to describe them in a given context. Human relationships are simply far too complicated to be left up to assumptions and inferences made by technologists whose affinity oftentimes lies closer to the data than to the makers of the data.