When location is everywhere

What if you could take location as a given in the design of web applications and services? By that I mean: what if, when someone who has never used your service before shows up and signs up (ideally with an OpenID!), it were both trivial and desirable for her to provide you with access to some aspect of her physical location in the world… and she did? What would you do? How would you change the architecture of your service to leverage this new “layer” of information?

Would you use it to help her connect to and find others in her proximity (or maybe avoid them)? Would you use it to better target ads at her (as Facebook does)? Would you use it to accelerate serendipity, colliding random people who, for some reason, have strikingly similar habits but don’t yet know each other — or would you only reveal the information in aggregate, to better give members a sense for where people on the service come from, spend their time and hang out? If you could, would you automatically geocode everything that new members upload or post to your service or would you require that metadata to be added or exposed explicitly?

Put still another way, how would a universal “location layer for the social web” change the design and implementation of existing applications? Would it give rise to a class of applications that take advantage of and thrive on knowing where their members live, work and play, and tailor their services accordingly? Or would all services eventually make use of location information? Or will it depend on each service’s unique offering and membership, and why people signed up in the first place? Just because you can integrate with Twitter or Facebook, must you? If the “location layer” were made available, must you take advantage of it? What criteria or metrics would you use to decide?

I would contend that these are all questions that anyone with a modern web service is going to need to start dealing with sooner rather than later. It’s not really a matter of whether members will ever show up with some digital footprint of where they are, where they’ve been or where they’re going; it’s really only a matter of time. When they do, will you be ready to respond to this information, or will you carry on like Friendster in the prime of Facebook and pretend that the first bubble never popped?

If you imagine for a minute that the ubiquity of wireless-enabled laptops gave rise to the desire-slash-ability for more productive mobile work, and consequently created the opportunity for the coworking community to blossom; if you consider that the ubiquity of digital cameras and camera phones created the opening for a service like Flickr (et al) to take off; if you consider that the affordability of camcorders and the accessibility of video on digital cameras, cell phones and the cameras built into laptops and iMacs, coupled with simpler tools like iMovie, led to people being able, and wanting, to post videos to a service like YouTube (et al); if you look at how the ubiquity of some kind of device technology with [media] output led to the rise of services/communities that were optimized for that same media, you might start to realize that a huge opportunity is coming for locative devices that make it easy to publish where you are, discover where your friends are, and generally receive benefits from being able to inform third parties, in a facile way, where you are, where you’ve been and where you’re going.

Especially with the opening up of the iPhone and its simple, elegant implementation of the “Locate Me” feature in Google Maps (which has already made its way into Twinkle, a native Twitter app for the iPhone that uses your location to introduce you to nearby Twitterers who are also using the app), I think we’re on the brink of seeing the kind of ubiquity (in the consumer space) that we need in order to start taking the availability of location information for granted, and that, like standards-compliant browsers, it could (or should) really inform the way we build out the social fabric of web applications from here forward.

The real difference coming that I want to point out here is that 1) location information, like digital photos and videos before it, will become increasingly available and accessible to regular people, in many forms; 2) that people will become increasingly aware that they can use this information to their advantage should they choose to, and may, if given the chance, provide this information to third-party services; 3) that when this information is applied to social applications (i.e. where location is exposed at varying levels of publicity), interesting, and perhaps compelling, results may emerge; and 4) that, in general, investing in location as an “information layer” or filter within new or existing applications makes more and more sense, as more location information becomes available, is shared by choice, and appears in increasing numbers of applications that previously may not have taken physical location into consideration.

Geogeeks can claim credit for presaging this day for some time, but only now does it seem like the reality is nearly upon us. Will the ubiquity of location data, like the adoption of web standards before it, catalyze entirely new breeds of applications and web services? It’s anyone’s guess when exactly this reality will come to pass, but I believe that, increasingly, it’s really only a matter of time before location is indeed everywhere: a new building block on which new and exciting services and functionality can be stacked.

(Bonus: Everyware by Adam Greenfield is good reading on this general topic, though not necessarily as it relates to web services and applications.)

OAuth Discovery 1.0 Draft 2 released with support from Ma.gnolia, Fire Eagle and Satisfaction

Eran just announced the second draft of OAuth Discovery, the first implementation of the XRDS-Simple specification that I mentioned here just over a week ago.

What’s significant about this announcement, as Eran points out, is that the new draft is already implemented and deployed by Fire Eagle (a Yahoo! Brickhouse service), Ma.gnolia and Get Satisfaction — three leaders in the OAuth community. On the development tools front, Mediamatic will release initial support for discovery early next week, with full support due in early May in their OAuth PHP library.

This draft is a complete rewrite of the first draft released several months ago and, in the spirit of OAuth, greatly simplifies the concepts and presentation of the protocol while incorporating a great many of the clarifications provided by the community.

OAuth Discovery, simply, is an extensible, machine-readable format for identifying OAuth-protected resources and service endpoints. Take a look at the provided example or Ma.gnolia’s actual discovery profile to get an idea of what these documents look like.
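
As a rough sketch of the shape of such a document: it’s an XRDS-Simple file whose service entries describe where to obtain OAuth tokens. The endpoint URLs and type URIs below are illustrative only, not copied from Ma.gnolia’s real profile or from the spec itself:

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- A hypothetical OAuth Discovery document: an XRDS-Simple file
         listing a service's three OAuth token endpoints. -->
    <XRDS xmlns="xri://$xrds">
      <XRD xmlns="xri://$xrd*($v*2.0)" version="2.0">
        <Type>xri://$xrds*simple</Type>
        <Service priority="10">
          <Type>http://oauth.net/core/1.0/endpoint/request</Type>
          <URI>https://ma.gnolia.com/oauth/get_request_token</URI>
        </Service>
        <Service priority="10">
          <Type>http://oauth.net/core/1.0/endpoint/authorize</Type>
          <URI>https://ma.gnolia.com/oauth/authorize</URI>
        </Service>
        <Service priority="10">
          <Type>http://oauth.net/core/1.0/endpoint/access</Type>
          <URI>https://ma.gnolia.com/oauth/get_access_token</URI>
        </Service>
      </XRD>
    </XRDS>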

Over time, the goal is to automate the pairing of unacquainted web services, by being able first to identify the location of services on the web and second to discover the authentication requirements for accessing those services. Coupled with XRDS-Simple, you can further specify the types of data available from given services, and begin to describe the methods you would use to access that data.

To provide a more complete conceptual model, imagine that you run a social network, and in this network, members have collections of bookmarks. Your service provides a way to either upload bookmarks directly or to subscribe to someone’s existing bookmarks stored at, say, services like Delicious or Ma.gnolia. Now say that you also encourage new members to sign in with an OpenID identity. From that identity, you may be able to discover an XRDS-Simple profile that points to an existing social bookmarking account, allowing you to attempt to import those bookmarks immediately. If, however, those bookmarks require authorization, and the authorization protocol happens to be OAuth, you should be able to automate the appropriate authorization requests to the user because the service supports OAuth Discovery.

In contrast, to achieve the same flow today, you must manually provide the names of accounts and services that you use individually, and then hope that the new service supports the remote protocols of your pre-existing services. With XRDS-Simple and OAuth Discovery, much of this work is automatically handled for you, letting you focus more on what you want to share, and less on where your data is stored, and increasingly allowing data to automatically flow between systems, should you decide to provide them authorization to do so on your behalf.
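
To make that flow concrete, here’s a minimal sketch in Python of the first discovery step. Everything specific here is hypothetical (the identity URL, the bookmark service type URI and the helper names are mine, not from either spec), and a real client library would also handle OpenID discovery, content negotiation edge cases and OAuth signing:

    import urllib.request
    import xml.etree.ElementTree as ET

    # The XRD namespace used within XRDS documents.
    XRD_NS = "xri://$xrd*($v*2.0)"

    def fetch_xrds(identity_url):
        """Fetch the XRDS document advertised by an identity URL."""
        req = urllib.request.Request(
            identity_url, headers={"Accept": "application/xrds+xml"})
        with urllib.request.urlopen(req) as resp:
            return ET.fromstring(resp.read())

    def find_service_uri(xrds, service_type):
        """Return the URI of the first <Service> matching service_type."""
        for service in xrds.iter("{%s}Service" % XRD_NS):
            types = [t.text for t in service.findall("{%s}Type" % XRD_NS)]
            if service_type in types:
                uri = service.find("{%s}URI" % XRD_NS)
                return uri.text if uri is not None else None
        return None

    # 1. A new member signs in with her OpenID; fetch her XRDS profile.
    xrds = fetch_xrds("https://example.com/member")  # hypothetical URL

    # 2. Look for a social bookmarking service (type URI is made up).
    bookmarks_uri = find_service_uri(
        xrds, "http://example.org/types/bookmarks")

    # 3. If fetching bookmarks_uri yields an OAuth challenge, OAuth
    #    Discovery (a second XRDS lookup, this time against the
    #    bookmarking service) reveals the token endpoints needed to
    #    walk the member through the authorization dance.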

If you’re interested in learning more about OAuth Discovery, the best place to start is the spec itself. Since OAuth Discovery also borrows heavily from XRDS-Simple, you might want to check out that specification as well and join the discussion on its mailing list.

XRDS-Simple Draft 1 released

The little guys keep building the future of the web in little steps.

Well, okay, I shouldn’t indulge in such hubris, but I’m pretty proud of our little squad of do-gooders, developing quality technology that I think IS advancing the state of the web, and doing so without any budget whatsoever, and without any kind of centralized resources (can you tell I’ve been hanging out in Redmond at Microsoft’s campus the last couple days?).

Anyway, yesterday Eran Hammer-Lahav, the primary author of the OAuth spec, released the first draft of the XRDS-Simple format for public review.

Acronyms aside, what we’re defining here is a data format that allows web authors to link to — in a consistent, ordered way — the various services that they use across the web. What’s important about this format is that 1) we’re not inventing anything new and that 2) it’s already widely implemented, thanks in large part to OpenID 2.0’s discovery mechanism (discovery, in this case, can be loosely defined as the means by which a computer determines where a service exists — for example, where someone stores their photos or blog posts — very useful, of course, in the case of URL-based identities).

Microformats folks will ask why we don’t just use rel-me, and that’s a very valid question. Sure, you could use rel-me to link to an XRDS-Simple document, but that doesn’t answer the question. Specifically, there are three main technical features that, taken together, make XRDS-Simple superior to rel-me from a service discovery perspective:

  • Media types: essentially the MIME type of the content that a service provides (in layman’s terms: photos, videos, text, etc).
  • Local IDs: the identifier (e.g. a username or email address) associated with the requested resource; think of your Facebook email address (and clearly not something you’d necessarily want to publish publicly on your blog!).
  • Service priority: useful for determining the selection order should there be more than one of a given type of service (for example, use my Vimeo account videos before my Viddler collection; see the sketch after this list).
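
Here’s a hypothetical sketch of how those three features surface in XRDS-Simple <Service> entries. The type URI and account URLs are made up for illustration; MediaType, LocalID and the priority attribute are the features described above:

    <!-- Two video services: the priority attribute says to prefer
         Vimeo (10) over Viddler (20); LocalID carries the username. -->
    <Service priority="10">
      <Type>http://example.org/types/video-collection</Type>
      <MediaType>video/*</MediaType>
      <URI>http://vimeo.com/factoryjoe/videos</URI>
      <LocalID>factoryjoe</LocalID>
    </Service>
    <Service priority="20">
      <Type>http://example.org/types/video-collection</Type>
      <MediaType>video/*</MediaType>
      <URI>http://www.viddler.com/explore/factoryjoe/</URI>
      <LocalID>factoryjoe</LocalID>
    </Service>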

Prompted by similar questions from Danny Ayers, Eran elaborated on this and similar topics (I’d encourage you to read his response for a good technical and philosophical backgrounder).

I’ll finish briefly by putting XRDS-Simple into a broader context of the work we’re doing, and point out how this fits into DiSo. First, we now have OpenID for URL-based identity on the web so we can identify people between web sites; second, we have OAuth for specifically controlling who has access to your data; and third, we now have a mechanism for discovering the services and data brokers that you use in a consistent, fairly widely supported format. These three building blocks are critical to advancing the smooth and secure flow of people and their data between web sites, web services and changing contexts.

For DiSo, this means we can let someone sign in to a web service for the first time and, at her discretion, have her contacts looked up (eventually this should be more like a buddy list reference); have feeds of her photos or videos or whatever other media she publishes automatically discovered and imported (which would be really useful for Adobe Photoshop Express, for example!); and then offer her the option of allowing the web service to send information back to her identity provider or data broker (for example, inserting events into her activity stream), which could in turn be subscribed to or followed by any of her friends running their own sites, or who delegate to some third-party aggregator service (a la FriendFeed).

This is actually a pretty simple flow, but the difference is that we can now assemble this entire stack with open tools, open protocols and open formats. Discovery is one of the few remaining components that we need to nail down to get to the next stage of building out this model. So, to that end, I encourage you to review the draft and join the mailing list (the list is moderated and requires you to state your interest to sign up). We’ll be ending the Draft 1 review period April 10, so get your comments in sooner rather than later!

Relationships are complicated

I’ve noticed a few interesting responses to my post on simplifying XFN. While my intended audiences were primarily fellow microformat enthusiasts and “lowercase semantic web” types, there seems to be a larger conversation underway that I’d missed — one that both Adam Greenfield and Tim Berners-Lee have commented on.

In a treatise against XFN (and similarly reductive expressions of human relationships) from December of last year, Greenfield said a number of profound things:

  • …one of my primary concerns has always been that we not accede to the heedless restructuring of everyday human relations on inappropriate and clumsy models derived from technical systems – and yet, that’s a precise definition of social networking as currently instantiated.
  • All social-networking systems constrain, by design and intention, any expression of the full band of human relationship types to a very few crude options — and those static!
  • …it’s impossible to use XFN to model anything that even remotely resembles an organic human community. I passionately believe that this reductive stance is not merely wrong, but profoundly wrong, in that it deliberately aims to bleed away all the nuance, complication and complexity that makes any real relationship what it is.
  • I believe that technically-mediated social networking at any level beyond very simple, local applications is fundamentally, and probably persistently, a bad idea. From where I stand, the only sane response is to keep our conceptions of friendship and affinity from being polluted by technical metaphors and constraints to begin with.

Whew! Strong stuff, but useful, challenging and insightful.

Meanwhile, TBL defended a semi-autistic perspective in describing the future of the Semantic Web (yes, the uppercase version):

At the moment, people are very excited about all these connections being made between people — for obvious reasons, because people are important — but I think after a while people will realise that there are many other things you can connect to via the web.

While my sympathies actually lie with Greenfield (especially after a weekend getting my mom set up on Facebook so she could send me photos without clogging my inbox with 80MB emails… a deficiency in the design of the technology, not my mother, mind you!), I also see the promise of a more self-aware, self-descriptive web. But, realistically, that web is a long way off, and more likely, that web is still going to need human intervention to make it work — at least for humans to benefit from it (oh sure, just get rid of the humans and the network will be just perfect — like planes without passengers, right?).

But in the meantime, there is a social web that needs to be improved, and that can be improved, in fairly simple and straightforward ways that will make it easier for regular folks, who don’t (and shouldn’t have to) care about “data portability” and “password anti-patterns” and “portable contact lists”, to benefit from the fact that the family and friends they care about are increasingly accessible online, and actually want to hear from them!

Even though Justin Smith takes another reductive look at the features Facebook is implementing, claiming that it wants to “own communications with your friends”, the reality is that people actually want to communicate with each other online! So if you’re a place where people connect and re-connect with one another, it’s not all that surprising that a site like Facebook would invest in improvements that facilitate interaction and communication between its members!

But let’s back up a minute.

If we take for granted that people do want to connect and communicate on social networks (they seem to do it a lot, so much so that one might even argue that people enjoy doing it!), what role should so-called “portable contact lists” play in this situation? I buy Greenfield’s assertion that attempts by technologists to reduce human relationships to a predefined schema (based on prior behavior or not) are a failing proposition, but that seems to ignore the opportunity presented by the fact that people are having to maintain many separate lists of their friends in many different places, for no other reason than an omission from the design of the social internetwork.

Put another way, it’s not good enough to simply dismiss the trend of social networking because our primitive technological expressions don’t reflect the complexity of real human relationships, or because humans are just one kind of “object” to be “semantified” in TBL’s “Giant Global Graph”… instead, people are connecting today, and they want to connect to people outside of their chosen “home” network, and frankly the experience sucks and it’s confusing. It’s not good enough to get all prissy about it; the reality is that there are solutions out there today, and there are people working on these things, and we need smart people like Greenfield and Berners-Lee to see that solutions that enable the humanist web (however semantic it needs to be) are prioritized and built… and that we [need] not accede to the heedless restructuring of everyday human relations on inappropriate and clumsy models derived from technical systems.

I can say that, from what I’ve observed so far, these are things that computers can do for us, to make the social computing experience more humane, should we establish simple and straightforward means to express a basic list of contacts between contexts:

  • help us find and connect to people that we’ve already indicated that we know
  • introduce us to people who we might know, or, based on social proximity, should know (with no obligation to make friends, of course!)
  • keep us from accidentally bumping into people we’d rather not interact with (see block-list portability)
  • help us segment our friendships in ways that make sense to us (rather than the semi-arbitrary ways that social networks define)
  • help us confidently share things with just the people with whom we intend to share

There may be others here, but off the top of my head, I think satisfying these basic tasks is a good start for any social network that believes it’s useful to let you connect and interact with people you might know, even if they haven’t already signed up for the service.

I should make one last point: when thinking about importing contacts from one context to another, I do not think that it should be an unthinking act. It may be merely data being copied between servers, but those bits represent things far more sacred and complicated than any computer might ever be programmed to imagine. Therefore, just because we can facilitate and lower the friction of “bringing your friends with you” from one place to another doesn’t mean that it should be an automatic process, or that all your friends in one place should be made your friends in the new place.

And regardless of how often good ol’ Mark Zuckerberg claims that the end game is to make communications more efficient, when it comes to relationships, every connection transposed from one context to another should have to be reconsidered (hmm, a great argument for tagging of contacts…)! We cannot and should not make assumptions about the nature of people’s relationships, no matter what kind of semantics they’ve used to describe them in a given context. Human relationships are simply far too complicated to be left up to assumptions and inferences made by technologists whose affinity oftentimes lies closer to the data than to the makers of the data.

An update to my OpenID Shitlist, Hitlist and Wishlist

Back in December, I posted my OpenID Shitlist, Hitlist and Wishlist. I listed 7 companies on my shitlist, 14 on my hitlist and 12 on my wishlist. Given that it’s been all of two months and we’ve made some solid progress, I thought I’d go ahead and give a quick update on recent developments.

First, the biggest news is probably that Yahoo has come online as an OpenID 2.0 identity provider. This is old news for anyone who’s been watching the space, but given that I called them out on my wishlist (and that their coming online tripled the number of OpenIDs), they get serious props, especially since Flickr profile URLs can now be used as identity URLs. MyBlogLog (called out on my shitlist) gets a pass here since they’re owned by Yahoo, but I’d still like to see them specifically support OpenID consumption.

Second biggest news is the fact that, via Blogger, Google has become both an OpenID provider (with delegation) and consumer. Separately, Brad Fitzpatrick released the Social Graph API and declared that URLs are People Too.

Next, I’ll give big ups to PBWiki for today releasing support for OpenID consumption. This is a big win considering they were also on my shitlist and I’d previously received assurances that OpenID for PBWiki would be coming. Well, today they delivered, and while there are opportunities to improve their specific implementation, I’d say that Joel Franusic did a great job.

And, in other good news, Drupal 6.0 came out this week, with support for OpenID in core (thanks to Walkah!), so there’s another one to take off my hitlist.

I’d really like to take Satisfaction off my list, since they’ve released their API with support for OAuth, but they’ve still not added support for OpenID, so they’re not out of the woods just yet… even though their implementation of OAuth makes me considerably happier.

So, that’s about it for now. I hear rumblings from Digg that they want to support OpenID, but I’ve got no hard dates from them yet, which is fine. There’re plenty more folks who still need to adopt OpenID, and given the support the foundation has recently received from big guys like Microsoft, Google, Yahoo, Verisign and IBM, my job advocating for this stuff is only getting easier.

After Social Graph FOO Camp — and a challenge for the Data Portability Group

This past weekend I attended a topic-specific FOO Camp called Social Graph FOO Camp, organized by Scott Kveton and David Recordon (or ray-chor-dohn, according to Larry).

Scott’s write up is pretty complete, but I wanted to call out one specific outcome that I think is worth noting.

We had a significant discussion on data portability and about the activities, responsibilities and opportunities of and for the eponymous group, which has recently generated much hype and buzz but little (as far as I’ve seen) clarity or cogent strategy for advancing its expansive charter:

The purpose of this project is to put all existing data portability technologies and initiatives in context and to promote viable reference implementations (blueprints) to the developer, vendor, and end-user communities.

The frustration over the minimal barrier to “becoming a member” of the group (you simply have to sign up for a mailing list) and over the focus on large vendors without advancing an agenda with teeth and clearly defined metrics for success was palpable. But so was the desire to make some progress and, if not to come to complete agreement, at least to identify concerns shared by the majority of us and perhaps develop a strategy to deflate the hype to date and get the group moving in a productive direction.

My suggestion was to emulate the work that Tara and I have been doing on the Open Media Web project, which developed out of our work with Songbird, where we could sense that there was a real opportunity to explore but didn’t yet have a clear picture either of the space as it was understood by lead users and experts or of the outcomes that needed to be advocated. So rather than diving in and promoting technologies or tactics before we had identified the opportunities, challenges and boundaries of the problem domain, we decided to pursue an investigatory strategy, starting with a series of meetups, blog posts and interviews that might help us flesh out the actors, ideas and conversations already ongoing in the space.

The result of my proposal is captured in this post by Chris Saad to the Data Portability mailing list. I think this is a positive step, and one that I hope will give Data Portability some direction and good work to do over the next several weeks and months. I’d like to go a step further, however, and flesh out my thinking before this project gets underway.

  1. These interviews should really be conducted assassin-style (as I like to say), where someone (probably Chris Saad) goes to each major vendor represented (and pimped) by the group (e.g. Google, Facebook, Plaxo, Microsoft, LinkedIn, Flickr, Six Apart, MyStrands, et al) and solicits written (or video) answers to the same five or six questions. Each of these interviews should subsequently be posted to the data portability blog over a series of months.
  2. The goal of these ongoing interviews should be to discover, primarily: 1) why these companies joined the group and what their goals are; 2) what they think of when they say “data portability”; 3) what challenges they face when it comes to offering their vision of data portability at their company; 4) what the greatest benefits of data portability are; 5) what they are doing (if anything) to promote and advance data portability within their organization; and 6) what technologies they have implemented (or plan to implement in the next six months) in support of data portability. From these answers, I think we can start to recognize trends in the headspace of large social networking sites, as well as begin to call out certain technologies that might be worth picking up and evangelizing, especially in the interest of interop between multiple parties’ sites.
  3. As such, the advocacy of any particular technological solution by the data portability group should be immediately abandoned until further research and exploration has occurred. While I was happy to see my favorite stable of technologies listed on the group’s homepage in the early days, I now realize that technology is not the hard part; it’s actually the politics, the policies, the usability, and the impact on and perception of the individual data owners that are really the first-order priorities. Without beginning to address issues in those areas first, the technology conversation will never occur.
  4. In terms of timing, I think that the data portability group has come along more or less at the right time, but that it’s actually walking into the problem ass-backwards. What we don’t need right now is a lot of hype and glorification of an abstruse notion of data portability. In fact, data portability by itself is currently meaningless and intangible; without good examples of how it can be applied to make things better for companies’ customers, there will never be an economic imperative to move in this direction (I should point out that data portability is interesting to me because increased customer choice is interesting to me, and the competition it brings to the space is beneficial to the customers of such services). For a timely example of a positive case where data portability is making a difference, consider the ability to move your bookmarks from del.icio.us to Ma.gnolia in light of Microsoft’s looming acquisition bid for Yahoo!. Surely there are other equally beneficial applications of data portability, and building out these use cases in terms of end-user benefit is critical to continuing to make the case for data portability with credibility.

So anyway, I do believe that there is an opportunity here, and Chris Saad is correct that getting a number of the prominent players in this arena to come to the table on this topic is a feat; however, simply bringing them together without engaging with the gnarly problems and policies that have kept data portability from becoming a reality could bring more confusion and angst than benefit. Deflating the hype and going back to humble beginnings and simple questions is, in my not-so-humble opinion, the appropriate and most effective way forward. Data portability is still not obvious to most people or most companies — heck, the technologies that enable it are barely out of their 1.0 and 2.0 phases — and still this topic is one that captures people’s imaginations and lets them imagine countless “what if” scenarios that seem, somehow, just around the corner. Data portability is a critical topic, and with the advances in the state of the conversation we made over the weekend, I’m eager to see the members of the data portability group pick up the ball and keep moving it forward.

So, if this topic is something that interests you, I recommend you blog about it, talk about it, interpret it and really take some time to consider what data portability means to you, and why it matters (or doesn’t) to you. Larry, Matt Biddulph of Dopplr and I rapped about this stuff some more on our Citizen Garden podcast today, so if you’re looking for more information, ideas or fodder, you might go ahead and give it a listen.

The Existential DiSo Interview

The Existential DiSo Interview from Chris Messina on Vimeo.

Here’s what I asked myself:

how are you?

we’re going to talk about diso today? is that right?

what is diso?

you say it’s a social network, so how would it work with wordpress?

how is this different from myspace or facebook?

so who’s involved in this project?

so what comes next?

how is this different than opensocial?

what’s going to be the big win for diso?

so do you see this model applying in any other domain on the web?

what kind of support do you need?

are you talking to any of the bigger social networks? like facebook or myspace?

so who cares?

how will you draw customers away from myspace or facebook?

any last thoughts?

The OpenID mobile experience

Two days ago, Ma.gnolia launched their mobile version, and it’s pretty awesome (disclosure: Ma.gnolia is a former client and current friend/partner of Citizen Agency).

In the course of development, Larry asked me what I thought he should do about adding OpenID sign-in to the mobile version. He was reluctant to do so because, he reasoned, the experience of logging in sucks, not just because of the OpenID round-trip dance, but because most identity providers don’t actually support a mobile-friendly interface.

Indeed, if you take a look at the flow from the Ma.gnolia mobile UI to my OpenID provider (using the iPhoney simulator app), you can see that it does suck.

I strongly encouraged Larry to go ahead and add OpenID even if the flow isn’t ideal. As it is, you can sign up for Ma.gnolia with only an OpenID (without needing to create yet another username and password), and so without this login option, the mobile site would be off-limits to folks in that situation.

So there’s clearly an opportunity here, and I’m hoping that out of OpenIDDevCamp today, we can start to develop some best practices and interface guidelines for OpenID providers for the mobile flow (not to mention more generally).

If you’ve seen a good example of an OpenID (or other round-trip authentication) flow for mobile, leave a comment here and let me know. It’s hard to get screenshots of this stuff, so any pointers would be appreciated!

It’s high time we moved to URL-based identifiers

Ugh, I had promised not to read TechMeme anymore, and I’ve actually kept to my promise since then… until today. And as soon as I finish this post, I’m back on the wagon, but for now, it’s useful to point to the ongoing Scoble debacle for context and for backstory.

In a nutshell, Robert Scoble has friends on Facebook. These friends all have contact information and for whatever reason, he wants to dump that data into Outlook, his address book of choice. The problem is that Facebook makes it nearly impossible to do this in an automated fashion because, as a technical barrier, email addresses are provided as opaque images, not as easily-parseable text. So Scoble worked with the heretofore “trustworthy” Plaxo crew (way to blow it guys! Joseph, how could you?!) to write a scraper that would OCR the email addresses out of the images and dump them into his address book. Well, this got him banned from the service.

The controversy seems to be over whether Scoble had the right to extract his friends’ email addresses from Facebook. Compounding the matter is the fact that these email addresses were not ones that Robert had contributed himself to Facebook, but ones his contacts had provided. Allen Stern summed up the issue pretty well: My Social Network Data Is Not Yours To Steal or Borrow. And as Dare pointed out, Scoble was wrong, Facebook was right.

Okay, that’s all well and fine.

You’ll note that this is the same fundamental design flaw found in FOAF, the RDF format for storing contact information that preceded the purposely distinct microformats:

The bigger issue impeding Plaxo’s public support of FOAF (and presumably the main issue that similar services are also mulling) is privacy: FOAF files make all information public and accessible by all, including the contents of the user’s address book (via foaf:knows).

Now, the concern today and the concern back in 2004 was the exposure of identifiers (email addresses) that can also be used to contact someone! By conflating contact information with unique identifiers, service providers got themselves into the untenable situation of not being able to share the list of identifiers externally or publicly without also revealing a mechanism that could be easily abused or spammed.
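
To illustrate the conflation, here’s a hypothetical FOAF fragment (not drawn from any real file): publishing foaf:knows with foaf:mbox makes a friend’s contactable address public, while FOAF’s partial workaround, foaf:mbox_sha1sum, hides the address but still leaks a stable identifier:

    <!-- A hypothetical FOAF fragment; namespace declarations omitted. -->
    <foaf:Person>
      <foaf:name>Robert Scoble</foaf:name>
      <foaf:knows>
        <foaf:Person>
          <foaf:name>A Friend</foaf:name>
          <!-- Public and immediately spammable: -->
          <foaf:mbox rdf:resource="mailto:friend@example.com"/>
          <!-- The workaround: a SHA1 hash of the mailto: URI
               (placeholder value). Not directly contactable, but
               still a global identifier correlatable across sites. -->
          <foaf:mbox_sha1sum>da39a3ee5e6b4b0d3255bfef95601890afd80709</foaf:mbox_sha1sum>
        </foaf:Person>
      </foaf:knows>
    </foaf:Person>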

I won’t go into the benefits of using email addresses as identifiers, because they do exist, but I do want to put forth a proposal that’s been a long time in coming and is long overdue. Frankly, Kevin Marks and Scott Kveton have said it just as well as I could: URLs are people too. Kevin writes:

The underlying thing that is wrong with an email address is that its affordance is backwards — it enables people who have it to send things to you, but there’s no reliable way to know that a message is from you. Conversely, URLs have the opposite default affordance — people can go look at them and see what you have said about yourself, and computers can go and visit them and discover other ways to interact with what you have published, or ask you permission for more.

This is clearly the design advantage of OpenID. And it’s also clearly the direction that we need to go in for developing out distributed social networking applications. It’s also why OAuth is important to the mix: when you arrive at a public URL-identifier-slash-OpenID, you can ask for access to certain things (like sending the person a message), and the owner of that identifier can decide whether to grant you that privilege or not. It no longer matters if the Scobles of the world leak my URL-based identifiers: they’re useless without the specific permissions that I grant on a per-instance basis.

As well, I can give services permission to share the URL-based identifiers of my friends (on a per-instance basis) without the threat of betraying their confidence since their public URLs don’t reveal their sensitive contact information (unless they choose to publish it themselves or provide access to it). This allows me the dual benefit of being able to show up at any random web service and find my friends while not sharing information they haven’t given me permission to pass on to untrusted third parties.

So screen scrape factoryjoe.com all you want. I even have a starter hCard waiting for you, with all the contact information I care to publicly expose. Anything more than that? Well, you’re going to have to ask more politely to get it. You’ve got my URL; now tell me, what else do you really need?
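
For the curious, a starter hCard looks something like this (a sketch only: the markup and contact details on factoryjoe.com may differ, and the email address below is a stand-in):

    <!-- A minimal hCard: a public, URL-centric calling card exposing
         only the details its owner chooses to publish. -->
    <div class="vcard">
      <a class="url fn" href="http://factoryjoe.com/">Chris Messina</a>
      <span class="nickname">factoryjoe</span>
      <a class="email" href="mailto:chris@example.com">chris@example.com</a>
    </div>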

Public nuisance #1: Importing your contacts

I’ve talked about this before (as one of the secondary motivators behind OAuth) but I felt it deserved a special call out.

Recently, Simon Willison presented on OpenID and called the practice that Dopplr (and many, many others) uses to import your contacts from Gmail absolutely horrifying. I would concur, but point out that Dopplr is probably the least egregious offender, as they also provide safe and effective hCard importing from Twitter or any URL, just as Get Satisfaction does.
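
As a sketch of why the hCard approach is safer: an importer only needs to fetch a public page and read the hCards on it, with no password changing hands. The naive Python below is illustrative only (the profile URL is just an example, and a real implementation should use a proper microformats parser rather than regexes):

    import re
    import urllib.request

    def fetch(url):
        """Fetch a public page; no credentials required, unlike the
        password anti-pattern."""
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("utf-8", errors="replace")

    def extract_hcards(html):
        """Crudely split the page at each class="... vcard ..." marker
        and pull the first fn and url out of each chunk. Regexes over
        HTML are fragile; this is a sketch, not production code."""
        cards = []
        for chunk in re.split(r'class="[^"]*\bvcard\b[^"]*"', html)[1:]:
            fn = re.search(r'class="[^"]*\bfn\b[^"]*"[^>]*>([^<]+)', chunk)
            url = re.search(r'class="[^"]*\burl\b[^"]*"[^>]*href="([^"]*)"', chunk)
            cards.append({
                "fn": fn.group(1).strip() if fn else None,
                "url": url.group(1) if url else None,
            })
        return cards

    # Example: import the contacts published as hCards on a public page.
    contacts = extract_hcards(fetch("http://twitter.com/factoryjoe"))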

Unfortunately this latter approach is both less widely implemented and also unfamiliar to many regular folks who really just want to find their friends or invite them to try out a new service.

The tragedy here is that these same folks are being trained to hand out their email addresses and passwords (which also unlock payment services like Google Checkout) regularly, just to use a feature that has become more or less commonplace across all social network sites. In fact, it’s so common that Plaxo even offers a free widget that sites can use to automate this process, as does Gigya. Unfortunately, the code for these widgets is not really open source (whereas Dopplr’s is), providing little assurance of, or oversight into, how the import is done.

What’s most frustrating about this is that we have the technology to solve this problem once and for all (a mix of OpenID, microformats, OAuth, maybe some Jabber) and actually make this situation better and more secure for folks. Why this hasn’t happened yet, well, I’m sure it has something to do with politics and resources and who knows what else. Anyway, I’m eager to see an open and free solution to this problem, and I think it’s the first thing we need to solve after January 1.