Clarifying my comments on Twitter’s annotations

Two weeks ago, Mathew Ingram from GigaOM pinged me via my Google Profile to ask what my thoughts — as an open web advocate — are on Twitter’s new annotations feature. He ended up posting portions of my response yesterday in a post titled “Twitter Annotations Are Coming — What Do They Mean For Twitter and the Web?”

The portion with my comments reads:

But Google open advocate Chris Messina warns that if Twitter doesn’t handle the new feature properly, it could become a free-for-all of competing standards and markups. “I find them very intriguing,” he said of Annotations, but added: “It could get pretty hairy with lots of non-interoperable approaches,” a concern that others have raised as well. For example, if more than one company wants to support payments through Annotations but they all use proprietary ways of doing that, “getting Twitter clients and apps to actually make sense of that data will be very slow going indeed,” said Messina. However, the Google staffer said he was encouraged by the fact that Twitter was looking at supporting existing standards such as RDFa and microformats (as well as potentially Facebook’s open graph protocol).

Unfortunately some folks found these comments more negative than I intended them to be, so I wanted to flesh out my thinking by providing the entire text of the email I sent to Mathew:

Thanks for the question Mathew. I admit that I’m no expert on Twitter Annotations, but I do find them very intriguing… I see them creating a lot of interesting momentum for the Twitter Dev Community because they allow for all kinds of emergent things to come about… but at the same time, without a sane community stewardship model, it could get pretty hairy with lots of non-interoperable approaches that re-implement the same kinds of features.

That is — say that someone wants to implement support for payments over Twitter Annotations… if a number of different service providers want to offer similar functionality but all use their own proprietary annotations, then that means getting Twitter clients and apps to actually make sense of that data will be very slow going indeed.

I do like that Ryan Sarver et al are looking at supporting existing schema where they exist — rather than supporting an adhocracy that might lead to more reinventions of the wheel than Firestone had blowouts. But it’s unclear, again, how successful that effort will be long term.

Of course, as the weirdo originator of the hashtag, it seems to me that the Twitter community has this funny way of getting the cow paths paved, so it may work out just fine — with just a slight amount of central coordination through the developer mailing lists.

I’d really like to see Twitter adopt ActivityStreams, of course, and went to their hackathon to see what kind of coordination we could do. Our conversation got hijacked so I wasn’t able to make much progress there, but Twitter does seem interested in supporting these other efforts and has reached out to help move things forward.

Not sure how much that helps, but let me know what other questions you might have.

I stand by these comments — though I can see how, spliced and taken out of context, they could be misconstrued.

Considering that we’re facing similar questions about the extensibility model for ActivityStreams, I can say from experience that guiding chaos into order is actually how “standards” evolve over time. Managing that process determines how quickly an effort like Twitter’s annotations will succeed.

Twitter’s approach of balancing complete openness against central management is smart, and I’m looking forward both to working with them on their efforts and to seeing what their developer community produces.

Two interviews on the open web from SXSW

[Embedded video: interview with Abby Johnson for WebProNews]

Funny how timing works out, but two interviews that I gave in March at SXSW have just been released.

The first — an interview with Abby Johnson for WebProNews — was recorded after my ActivityStreams talk and is embedded above. If you have trouble with the embedded video, you can download it directly. I discuss ActivityStreams, the open web and the role of the Open Web Foundation in providing a legal framework for developing interoperable web technologies. I also explain the historical background of FactoryCity.

In the second interview, with Eric Schwartzman, I discuss ActivityStreams for enterprise, and how information abundance will affect the relative value of data that is hoarded versus data that circulates. Of the interview, Eric says: “In the 5 years I’ve been producing this podcast, this discussion with Chris, recorded at South by Southwest (SXSW) 2010 directly following his presentation on activity streams, is one of the most compelling interviews I’ve ever recorded. I expect to include many of his ideas in my upcoming book ‘Social Marketing to the Business Customer’ to be published by Wiley early next year.”

If you’re interested in these subjects, I’ll be speaking at Northern Voice in Vancouver this weekend, at PARC Forum in Palo Alto on May 13, at Google I/O on May 19, and at GlueCon in Denver, May 27. I also maintain a list of previous interviews that I’ve given.

Understanding the Open Graph Protocol

All likes lead to Facebook

I attended Facebook’s F8 conference yesterday (missed the keynote IRL, but you can catch it online) and came away pondering the Open Graph Protocol.

In the keynote, Zuck (as Luke Shepard calls him) said:

Today the web exists mostly as a series of unstructured links between pages. This has been a powerful model, but it’s really just the start. The open graph puts people at the center of the web. It means that the web can become a set of personally and semantically meaningful connections between people and things.

While I agree that the web is transmogrifying from a web of documents to a web of people, I have deep misgivings about what the Open Graph Protocol — along with Facebook’s new Like button — means for the open web.

There are three elements of Facebook’s announcements that seem to conspire against the web:

  • A new format
  • Convenient to implement
  • Facebook account required

First, to support the Open Graph Protocol, all you need to do is add some RDFa-formatted meta tags to the HEAD of your HTML pages, as IMDB does:
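It looks something like this (a sketch modeled on a movie page like IMDB’s; the og: and fb: property names come from the protocol, while the values and URLs here are only illustrative):

    <html xmlns:og="http://opengraphprotocol.org/schema/"
          xmlns:fb="http://www.facebook.com/2008/fbml">
      <head>
        <title>The Rock (1996)</title>
        <!-- Open Graph Protocol metadata describing what this page "is" -->
        <meta property="og:title" content="The Rock"/>
        <meta property="og:type" content="movie"/>
        <meta property="og:url" content="http://www.imdb.com/title/tt0117500/"/>
        <meta property="og:image" content="http://example.com/posters/the-rock.jpg"/>
        <meta property="og:site_name" content="IMDb"/>
        <!-- The Facebook-specific part: ties the page to a Facebook account -->
        <meta property="fb:admins" content="USER_ID"/>
      </head>
      ...
    </html>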

Simple, right? Indeed.

And from the looks of it, pretty innocuous. Structured data is good for the web, and I’d never argue to the contrary. I’m skeptical about calling this format “open,” because it smells more like openwashing from here, but I’m willing to give it the benefit of the doubt for now. (Similarly, XAuth still has to prove its openness cred, so I understand how these things can come together quickly behind closed doors and then adopt a more open footing over time.)

So, rather than using data that’s already on the web, everyone who wants to play Facebook’s game needs to retrofit their pages to include these new metadata types. While they’re busy with that (it should take a few minutes at most, really), won’t they also implement support for Facebook’s Like button? Isn’t that the motivation for supporting the Open Graph Protocol in the first place?

Why yes, yes it is.

And that’s the carrot to convince site publishers to support the Open Graph Protocol.

Here’s the rub, though: those Like buttons only work for Facebook. I can’t just be signed in to any social web provider… it’s got to be Facebook. And on top of that, whenever I “like” something, I’m sending a signal back to Facebook that gets recorded both on my profile and in my activity stream.
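To make that concrete, the button is just an iframe served from facebook.com, so every impression and every click is a round trip to Facebook’s servers. A rough sketch of the iframe variant (the parameter values here are illustrative):

    <!-- Facebook's Like button: an iframe pointing at facebook.com itself.
         The page being "liked" is passed along in the href parameter, so
         Facebook sees the page, the visitor's cookies, and the click. -->
    <iframe
      src="http://www.facebook.com/plugins/like.php?href=http%3A%2F%2Fexample.com%2Fmy-page&amp;layout=standard&amp;show_faces=true&amp;width=450"
      scrolling="no" frameborder="0"
      style="border:none; width:450px; height:80px">
    </iframe>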

OK, not a big deal on its own, but think laterally: what if Larry and Sergey wanted to recreate PageRank today?

You know what I bet they wish they could have done? Forced anyone who wanted to add a page to the web to authenticate with them first. It sure would have kept out all those pesky spammers! Oh, and anyone that wanted to be part of the Google index, well they’d have to add additional metadata to their pages so that the content graph would be spic and span. Then add in the “like” button to track user engagement and then use that data to determine which pages and content to recommend to people based on their social connections (also stored on their server) and you’ve got a pretty compelling, centralized service. All those other pages from the long tail? Well, they’re just not that interesting anyway, right?

This sounds a lot to me like “Authenticated PageRank” — where everyone that wants to be listed in the index would have to get a Google account first. Sounds kind of smart, right? Except — shucks — there’s just one problem with this model: it’s evil!

When all likes lead to Facebook, and liking requires a Facebook account, and Facebook gets to hoard all of the metadata and likes around the interactions between people and content, it depletes the ecosystem of potential and chaos — those attributes which make the technology industry so interesting and competitive. It’s one thing for semantic and identity layers to emerge on the web, but it’s something else entirely for all of the interactions on those layers to be piped through a single provider (and not just because that provider becomes a single point of failure).

I give Facebook credit for launching a compelling product, but it’s dishonest to think that the Facebook Open Graph Protocol benefits anyone more than Facebook — as it exists in its current incarnation, with Facebook accounts as the only valid participants.

As I and others have said before, your identity is too important to be owned by any one company.

Thus I’m looking forward to what efforts like OpenLike might do to tip back the scales, and bring the potential and value of such simple and meaningful interactions to other social identity providers across the web.


Please note that this post only represents my views and opinions as an independent citizen of the web, and not that of my employer.