Clarifying my comments on Twitter’s annotations

Two weeks ago, Mathew Ingram from GigaOM pinged me via my Google Profile to ask what my thoughts — as an open web advocate — are on Twitter’s new annotations feature. He ended up posting portions of my response yesterday in a post titled “Twitter Annotations Are Coming — What Do They Mean For Twitter and the Web?”

The portion with my comments reads:

But Google open advocate Chris Messina warns that if Twitter doesn’t handle the new feature properly, it could become a free-for-all of competing standards and markups. “I find them very intriguing,” he said of Annotations, but added: “It could get pretty hairy with lots of non-interoperable approaches,” a concern that others have raised as well. For example, if more than one company wants to support payments through Annotations but they all use proprietary ways of doing that, “getting Twitter clients and apps to actually make sense of that data will be very slow going indeed,” said Messina. However, the Google staffer said he was encouraged by the fact that Twitter was looking at supporting existing standards such as RDFa and microformats (as well as potentially Facebook’s open graph protocol).

Unfortunately some folks found these comments more negative than I intended them to be, so I wanted to flesh out my thinking by providing the entire text of the email I sent to Mathew:

Thanks for the question Mathew. I admit that I’m no expert on Twitter Annotations, but I do find them very intriguing… I see them creating a lot of interesting momentum for the Twitter Dev Community because they allow for all kinds of emergent things to come about… but at the same time, without a sane community stewardship model, it could get pretty hairy with lots of non-interoperable approaches that re-implement the same kinds of features.

That is — say that someone wants to implement support for payments over Twitter Annotations… if a number of different service providers want to offer similar functionality but all use their own proprietary annotations, then that means getting Twitter clients and apps to actually make sense of that data will be very slow going indeed.

I do like that Ryan Sarver et al are looking at supporting existing schema where they exist — rather than supporting an adhocracy that might lead to more reinventions of the wheel than Firestone had blowouts. But it’s unclear, again, how successful that effort will be long term.

Of course, as the weirdo originator of the hashtag, it seems to me that the Twitter community has this funny way of getting the cat paths paved, so it may work out just fine — with just a slight amount of central coordination through the developer mailing lists.

I’d really like to see Twitter adopt ActivityStreams, of course, and went to their hackathon to see what kind of coordination we could do. Our conversation got hijacked so I wasn’t able to make much progress there, but Twitter does seem interested in supporting these other efforts and has reached out to help move things forward.

Not sure how much that helps, but let me know what other questions you might have.

I stand by these comments — though I can see how, spliced and taken out of context, they could be misconstrued.

Considering that we’re facing similar questions about the extensibility model for ActivityStreams, I can speak from experience that guiding chaos into order is actually how “standards” evolve over time. Managing that process determines how quickly an effort like Twitter’s annotations will succeed.
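To make the payments example from my email concrete: Annotations, as proposed, would let developers attach arbitrary namespaced key/value metadata to a tweet. Here is a hypothetical sketch of two providers describing the same $5 payment in mutually unintelligible ways (both namespaces and all of the keys are invented for illustration — this is not an actual Annotations payload):

```json
[
  { "acmepay":  { "amount": "5.00", "currency": "USD", "pay_to": "@alice" } },
  { "paybuddy": { "total": "5.00", "cur": "USD", "recipient": "alice@example.com" } }
]
```

A client coded against “acmepay” extracts nothing from “paybuddy”, even though both annotations carry identical information — which is exactly the interoperability problem that community stewardship of a common schema would solve.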

Twitter’s strategy of balancing complete openness against central management is a smart one, and I’m looking forward both to working with them on their efforts and to seeing what their developer community produces.

Two interviews on the open web from SXSW


Funny how timing works out, but two interviews that I gave in March at SXSW have just been released.

The first — an interview with Abby Johnson for WebProNews — was recorded after my ActivityStreams talk and is embedded above. If you have trouble with the embedded video, you can download it directly. I discuss ActivityStreams, the open web and the role of the Open Web Foundation in providing a legal framework for developing interoperable web technologies. I also explain the historical background of FactoryCity.

In the second interview, with Eric Schwartzman, I discuss ActivityStreams for the enterprise, and how information abundance will affect the relative value of data that is hoarded versus data that circulates. Of the interview, Eric says: “In the 5 years I’ve been producing this podcast, this discussion with Chris, recorded at South by Southwest (SXSW) 2010 directly following his presentation on activity streams, is one of the most compelling interviews I’ve ever recorded. I expect to include many of his ideas in my upcoming book, Social Marketing to the Business Customer, to be published by Wiley early next year.”

If you’re interested in these subjects, I’ll be speaking at Northern Voice in Vancouver this weekend, at PARC Forum in Palo Alto on May 13, at Google I/O on May 19, and at GlueCon in Denver, May 27. I also maintain a list of previous interviews that I’ve given.

Understanding the Open Graph Protocol

All likes lead to Facebook

I attended Facebook’s F8 conference yesterday (missed the keynote IRL, but you can catch it online) and came away pondering the Open Graph Protocol.

In the keynote, Zuck (as Luke Shepard calls him) said:

Today the web exists mostly as a series of unstructured links between pages. This has been a powerful model, but it’s really just the start. The open graph puts people at the center of the web. It means that the web can become a set of personally and semantically meaningful connections between people and things.

While I agree that the web is transmogrifying from a web of documents to a web of people, I have deep misgivings about what the Open Graph Protocol — along with Facebook’s new Like button — means for the open web.

There are three elements of Facebook’s announcements that seem to conspire against the web:

  • A new format
  • Convenient to implement
  • Facebook account required

First, to support the Open Graph Protocol, all you need to do is add some RDFa-formatted meta tags to the HEAD of your HTML pages (as this example, from IMDb, demonstrates):
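The markup looks roughly like this — a minimal sketch modeled on the sort of movie-page example Facebook showed at launch (the property values and image URL here are illustrative, not copied from the actual IMDb page):

```html
<html xmlns:og="http://opengraphprotocol.org/schema/">
  <head>
    <title>The Rock (1996)</title>
    <!-- og:* properties describe this page as a typed object in the graph -->
    <meta property="og:title" content="The Rock" />
    <meta property="og:type" content="movie" />
    <meta property="og:url" content="http://www.imdb.com/title/tt0117500/" />
    <meta property="og:image" content="http://example.com/rock-poster.jpg" />
  </head>
  <body>...</body>
</html>
```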

Simple, right? Indeed.

And from the looks of it, pretty innocuous. Structured data is good for the web, and I’d never argue to the contrary. I’m skeptical about calling this format “open” — because it smells more like openwashing from here, but I’m willing to give it the benefit of the doubt for now. (Similarly, XAuth still has to prove its openness cred, so I understand how these things can come together quickly behind closed doors and then adopt a more open footing over time.)

So, rather than using data that’s already on the web, everyone who wants to play Facebook’s game needs to go and retrofit their pages to include these new metadata types. While they’re busy with that (it should take a few minutes at most, really), won’t they also implement support for Facebook’s Like button? Isn’t that the motivation for supporting the Open Graph Protocol in the first place?

Why yes, yes it is.

And that’s the carrot to convince site publishers to support the Open Graph Protocol.

Here’s the rub though: those Like buttons only work for Facebook. I can’t just be signed in to any social web provider… it’s got to be Facebook. And on top of that, whenever I “like” something, I’m sending a signal back to Facebook that gets recorded on both my profile, and in my activity stream.

Ok, not a big deal, but think laterally: how about this? What if Larry and Sergey wanted to recreate PageRank today?

You know what I bet they wish they could have done? Forced anyone who wanted to add a page to the web to authenticate with them first. It sure would have kept out all those pesky spammers! Oh, and anyone that wanted to be part of the Google index, well they’d have to add additional metadata to their pages so that the content graph would be spic and span. Then add in the “like” button to track user engagement and then use that data to determine which pages and content to recommend to people based on their social connections (also stored on their server) and you’ve got a pretty compelling, centralized service. All those other pages from the long tail? Well, they’re just not that interesting anyway, right?

This sounds a lot to me like “Authenticated PageRank” — where everyone that wants to be listed in the index would have to get a Google account first. Sounds kind of smart, right? Except — shucks — there’s just one problem with this model: it’s evil!

When all likes lead to Facebook, and liking requires a Facebook account, and Facebook gets to hoard all of the metadata and likes around the interactions between people and content, it depletes the ecosystem of potential and chaos — those attributes which make the technology industry so interesting and competitive. It’s one thing for semantic and identity layers to emerge on the web, but it’s something else entirely for all of the interactions on those layers to be piped through a single provider (and not just because that provider becomes a single point of failure).

I give Facebook credit for launching a compelling product, but it’s dishonest to think that the Facebook Open Graph Protocol benefits anyone more than Facebook — as it exists in its current incarnation, with Facebook accounts as the only valid participants.

As I and others have said before, your identity is too important to be owned by any one company.

Thus I’m looking forward to what efforts like OpenLike might do to tip back the scales, and bring the potential and value of such simple and meaningful interactions to other social identity providers across the web.


Please note that this post only represents my views and opinions as an independent citizen of the web, and not that of my employer.

Google Buzz and the fabric of the social web

When I joined the company a month ago, I was baited with the promise that Google was ready to get serious about the social web.

Yesterday’s launch of Google Buzz and the fledgling Google Buzz API is like a down payment on what I see as Google’s broader social web ambitions, which have been bubbling beneath the surface for some time. Understand that Buzz is not entirely an end unto itself, but a way for Google to get some skin in the game and promote the use and adoption of different open technologies for the social web.

In fact, I’d argue that Buzz is as much about Google creating a new channel for conversation in a familiar place as it is about how we’re going about building its public developer surfaces. Although today’s Buzz API only offers a real-time read-only activity stream, the goal is to move quickly towards implementing a host of other technologies — most of which should be familiar to readers of this blog.
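The activity model behind that stream reduces each item to an actor, a verb, and an object. A rough sketch of a single activity in JSON form (the field names follow the spirit of the Activity Streams drafts of the time; the values are purely illustrative):

```json
{
  "actor":     { "displayName": "Chris", "url": "http://factoryjoe.com/" },
  "verb":      "post",
  "object":    { "objectType": "note", "content": "One more stitch in the fabric of the social web." },
  "published": "2010-02-10T12:00:00Z"
}
```

Because any service can emit and consume this common shape, a Buzz post, a tweet, or a blog comment can all flow through the same pipes — which is the whole point of a shared standard.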

As Kevin Marks observes, in order to address the mess of the social web that Mike Arrington described, we need widespread use [of common standards] so that we can generalize across sites — and thus enable people to interact and engage across the web, rather than being restricted to any particular silo of activity — which may or may not reflect their true social configuration.

In other words, standards — and in particular social web standards — are the lingua franca that make it possible for uninitiated web services to interact in a consistent manner. When web services use standards to commoditize essential and basic features, it forces them to compete not with user lock-in, but by providing better service, better user experience, or with new functionality and utility. I am an advocate of the open web because I believe the open web leads to increased competition, which in turn affords people better options, and more leverage in the world.

Buzz is both a terrific product, and a great example of how the social web is evolving and becoming truly ubiquitous. Buzz is simply one more stitch in the fabric of the social web.

Designing for the gut

This post has been translated to Belorussian by Patricia Clausnitzer.

I want you to watch this video from a recent Sarah Palin rally (hat tip: Marshall Kirkpatrick). It gives a sense of “who” I’m talking about.

While you could chalk up the effect of the video to clever editing, I’ve seen similar videos that suggest that the attitudes expressed are probably a pretty accurate portrayal of how some people think (and, for the purposes of this essay, I’m less interested in what they think).

It seems to me that the people in the video largely think with their guts, and not their brains. I’m not making a judgment about their intelligence, only recognizing that they seem to evaluate the world from a different perspective than I do: with less curiosity and apparent skepticism. This approach would explain George W. Bush’s appeal as someone who “leads from the gut”. It’s probably also what Al Gore was talking about in his book, The Assault on Reason.

Many in my discipline (design) tend to think of the consumers of their products as being rational, thinking beings — not unlike themselves. This seems worse when it comes to engineers and developers, who spend all of their thinking time being mathematically circumspect in their heads. They exhibit a kind of pattern blindness to the notion that some people act completely from gut instinct alone, rarely invoking their higher faculties.

How, then, does this dichotomy impact the utility or usability of products and services, especially those borne of technological innovation, given that designers and engineers tend to work with “information in the mind” while many of the users of their products operate purely on the visceral plane?

In writing about the death of the URL, I wanted to expose some consequences of this division. While the intellectually adventuresome are happy to embrace or create technology to expand and challenge their minds (the popularity and vastness of the web a testament to that fact), anti-intellectuals seem to encounter technology as though it were a form of mysticism. In contrast to the technocratic class, anti-intellectuals on the whole seem less curious about how the technology works, so long as it does. Moreover, for technology to work “well” (or be perceived to work well) it needs to be responsive, quick, and for the most part, completely invisible. A common sentiment I hear is that the less technology intrudes on their lives, the better and happier they believe themselves to be.

So, back to the death of the URL. As has been argued, the URL is ugly, confusing, and opaque. It feels technical and dangerous. And people just don’t get them. This is a sharp edge of the web that seems to demand being sanded off — because the less the inner workings of a technology are exposed in one’s interactions with it, the easier and more pleasurable it will be to operate, within certain limitations, of course. Thus to naively enjoy the web, one needn’t understand servers, DNS, ports, or hypertext — one should just “connect”, pick from a list of known, popular, “destinations”, and then point, click — point, click.

And what’s so wrong with that?

What I find interesting about the social web is not the technology that enables it, but that it bypasses our “central processor” and engages the gut. The single greatest thing about the social web is how it has forced people to overcome their technophobias in order to connect with other humans. I mean, prior to the rise of AOL, being online was something that only nerds did. Few innovations in the past have spread so quickly and irreversibly, and it’s because the benefits of the social web extend beyond the rational mind, and activate our common ancestors’ legacy brain. This widens the potential number of people who can benefit from the technology because rationality is not a requirement for use.

Insomuch as humans have cultivated a sophisticated sociality over millennia, the act of socializing itself largely takes place in the “gut”. That’s not to say that there aren’t higher order cognitive faculties involved in “being social”, but when you interact with someone, especially for the first time, no matter what your brain says, you still rely a great deal on what your gut “tells you” — and that’s not a bad thing. However, when it comes to socializing on sites like Twitter and Facebook, we’re necessarily engaging more of our prefrontal cortex to interpret our experience because digital environments lack the circumstantial information that our senses use to inform our behavior. To make up for the lack of sensory information, we tend to scan pages all at once, rather than read every word from top to bottom, looking for cues or familiar handholds that will guide us forward. Facebook (by name and design) uses the familiarity of our friends’ faces to help us navigate and cope with what is otherwise typically an information-poor environment that we are ill-equipped to evaluate on our own (hence the success of social engineering schemes and phishing).

As we redesign more of our technologies to provide social functionality, we should not proceed with the mistaken assumption that users of social technologies are rational, thinking, deliberative actors. Nor should we be under the illusion that those who use these features will care more about neat tricks that add social functionality than the socialization experience itself. That is, technology that shrinks the perceived distance between one person’s gut and another’s and simply gets out of the way, wins. If critical thinking or evaluation is required in order to take advantage of social functionality, the experience will feel, and thus be perceived, as being frustrating and obtuse, leading to avoidance or disuse.

Given this, nowhere is the recognition of the gut more important than in the design and execution of identity technologies. And this, ultimately, is why I’m writing this essay.

It might seem strange (or somewhat obsessive), but as I watched the Sarah Palin video above, I thought about how I would talk to these people about OpenID. No doubt we would use very different words to describe the same things — and I bet their mental model of the web, Facebook, Yahoo, and Google would differ greatly from mine — but we would find common goals or use cases that would unite us. For example, I’m sure that they keep in touch with their friends and family online. Or they discover or share information — again, even if they do it differently than me or my friends do. Though we may engage with the world very differently — at root we both begin with some kind of conception of our “self” that we “extend” into the network when we go online and connect with other people.

The foundation of those connections is what I’m interested in, and why I think designing for the gut is something that technocrats must consider carefully. Specifically, when I read posts like Jesse Stay’s concept of a future without a login button, or evaluate the mockups for an “active identity client” based on information cards or consider Aza and Alex’s sketches for what identity in the browser could look like, I try to involve my gut in that “thought” process.

Now, I’m not just talking about intuition (though that’s a part of it). I’m talking about why some people feel “safer” experiencing the web with companies like Google or Facebook or Yahoo! at their side, or how frightening the web must seem when everyone seems to need you to keep a secret with them in order to do business (i.e. create a password).

I think the web must seem incredibly scary if you’re also one of those people who’s had a virus destroy your files, or who uses a computer that’s still infected and runs really slowly. For people with that kind of experience as the norm, computers must seem untrustworthy or suspicious. Rationally, you could try to explain to them what happened, or how the social web can be safe, but their “gut has already been made up.” It’s not a rational perception that they have of computers, it’s an instinctual one — and one that is not soon overcome.

Thus, when it comes to designing identity technologies, it’s very important that we involve the gut as a constituent of our work. Overloading the log in or registration experience with choice is an engineer’s solution that I’ve come to accept is bound to fail. Instead, the act of selecting an identity to “perform as” must happen early in one’s online session — at a point in time equivalent to waking up in the morning and deciding whether to wear sweatpants or a suit and tie depending on whatever is planned for the rest of the day.

Such an approach is a closer approximation to how people conduct themselves today — in the real world and from the gut — and must inform the next generation of social technologies.

The death of the URL

The red pill, or blue pill

Prelude

You take the blue pill and the story ends. You wake in your bed and believe whatever you want to believe. You take the red pill and you stay in Wonderland and I show you how deep the rabbit-hole goes. Remember — all I am offering is the truth, nothing more.

In the Matrix, Morpheus presents Neo with a choice: he can take the blue pill and continue his somnambulatory existence within the Matrix, or he can take the red pill and become free from the virtual reality that the machines created to enslave humanity.

As you can see from the clip above, Neo chooses the red pill, severing his connection to the Matrix and regaining his free will.

Every day, when you fire up your browser and type some arbitrary URL into the browser’s address bar, you are taking the red pill.

Address Bar

Increasingly though, I see signs that the essential freedoms of the web are being undermined by a cadre of companies through the introduction of new technologies and interfaces that, combined, may spell the death of the URL.

Call me crazy, but it seems obvious enough when you put on the right colored paranoia goggles.

Exhibit A: Web TV

Web TV

There’s an article in Friday’s USA Today suggesting that we’re finally at a point where web TV has a chance. But there’s an insidious underbelly to this story. Specifically: Consumers may balk if TV sets become too computerlike and complicated.

From the article:

Manufacturers say they learned an important lesson from earlier convergence failures: Viewers want to relate to sets as televisions, not computers.

That’s why the new Web TV models don’t come with browsers that would give people the freedom to surf the full Internet, even though the TVs connect to the Web via an ethernet cable or home wireless network. The companies want to promote consumer acceptance of Web TV by making the technology simple to use: That means no keyboard or mouse.

It’s just Step 1: Engineers are talking about changes that would make it easy to navigate the Internet. One thought is to program smartphones so they can change channels, send text messages to the set and move a cursor around the screen with the motion-sensitive technology that Nintendo uses with its Wii game system.

For now, though, people just need the TV remote control to select and launch prepackaged applications.

Emphasis mine.

In a twist of McLuhanesque determinism, it would appear that the apparatus and determinism of the television experience will overrule the freedom and flexibility of the web — because, well, frankly — all that choice…! It’s so… unseemly and unmonetizable.

Instead, Web TV will be made easier to use by removing the best parts of the web and augmenting the straightjacket features of the television.

Exhibit B: Litl, ChromeOS, JoliCloud, and Apple Tablet

Litl

I somewhat serendipitously stumbled upon Litl — a little design project of famous design firm Pentagram.

The thing is cool, I admit. The netbook/webbook market needs some design thinking. And heck, I’m as eager as anyone to see what Apple is going to do in this space, so I’m watching it closely… but something tells me that the next generation “PC” devices are going to revolve around slicker, streamlined interfaces that come pre-packaged with fewer choices drawn from a set of likely suspects (i.e. Facebook, Twitter, Google, Yahoo et al.).

Taking a look at the JoliCloud homescreen… you can start to see how this will be the next Firefox search box in terms of monetization:

JoliCloud

Though I imagine you’ll be able to set custom options here, it’s the defaults that matter.

…and these homescreens become yet another funnel to drive users to a predetermined (and paid for) set of options.

Exhibit C: Top Sites

Top Sites

Similar to the netbook homescreens, both Safari and Chrome provide home pages that show you thumbnails of the sites that you visit most often (coincidence? I think not!).

Seems an innocuous feature. I mean, isn’t it easier to just click a picture of where you want to go rather than typing in some awkward string that starts with HTTP into the address bar?

AH HA! So, you’d take the blue pill eh?

See the problem?

Just as browsers come with a set of default bookmarks today, there’s no reason why the next generation of browsers won’t come with their own predefined set of “Top Sites” that will, like as not, come from the same list of predetermined companies that populate the home screens of the next-gen Net/Web Books.

The more that the browser address bar can be made obsolete, the more it becomes just like TV, right?

Exhibit D: Warning interstitials and short URL frames

Facebook | Leaving Facebook...

If you use Facebook, you’ve probably seen the above warning before — usually after clicking a link that a friend sent you. Now, I recognize why they do this. It’s true: on the internet, thar be dragons!

Now, never mind the dragons on Facebook proper — this innocuous little screen was designed, one assumes, to keep you safe from things outside the Facebook universe. However, the net effect of seeing this page every time you click an outbound link is fatigue. You get worn down by having to click through this page until finally, after a while, you just give up and stop clicking links from your friends altogether. It could just be that a momentary delay like this is enough to change your behavior completely.

Even when you do decide to leave, Facebook comes with you — inserting 45 pixels of itself into your experience as a top frame:

Facebook | External link frame

This makes it easier to get back to Facebook and never skip a beat. But it also removes the need to visit the address bar and think about where you want to go next (let alone type it out). Of course, Facebook isn’t the only service doing this — Digg and countless other short-URL generators intrude on your web experience and put yet more distance between you and the address bar.

All these little hindrances add up — and if you’ve done any usability work — you know that the smallest changes can lead to huge impacts over time if the changes are so slight as to be essentially unnoticeable.

Exhibit E: The NASCAR

bragster sign in form

Now, this one hits close to home, y’know, since this is what I’ve been working on for the past year or so… but the reality is that, more and more, companies are moving to accept this logo-splattered approach to user sign-in forms — “the NASCAR” — which dispenses with the uncomfortable “URL-based” metaphor of OpenID altogether.

Why?

Because it’s too “complicated“. People don’t get “URLs” for sign in.

Now, we’ve made progress moving forward with “email-style identifiers” for use in OpenID transactions, but we’re not there yet, and we’re not moving fast enough either.

The specter of the Facebook Connect button is ever-present, and, from a UI perspective, it’s hard to argue with one button to rule them all (even if it destroys individual autonomy in the process — hey! freedom is messy! Let’s scrap it!).

The NASCAR, then, is just one more way to put off teaching users to recognize that URLs can represent people too, chaining us to the silos and locking us into brand-mediated identities for yet another generation.

Exhibit F: App Stores

Apps for iPhone

Finally, there’s been plenty written about this already, but what is the App Store except a cleaved out and sanitized portion of the web? In fact, people accustomed to the freedom and “flow” of the web go into anaphylactic shock when they realize that they must submit to the slings and arrows of the outrageous fortune of Steve Jobs when they want their iPhone app to show up in the Apple app store.

And it’s only going to get worse, because now everyone wants a goddamn app store.

Thanks a lot, Steve.

The rise of the “app store mentality” is a direct attack on the web, and on the very nature of free discovery and choice built upon URL-based hyperlinks. By depriving us of the ability to pick and choose which “stores” we shop from on these devices, we’re empowering a new breed of middlemen and ceding to them monopoly control over our digital experience. The architecture of the web was intended to withstand such threats — but that all changes when the hardware makers get into the content business! Even though developers are beginning to see the dark side of this Faustian bargain, the momentum is huge — and big business smells money.

By removing our ability to navigate, choose, and share freely — these app stores are exchanging our freedom for a promise that they’ll keep us safe, give us everything we need, and do all the choosing of what’s “good enough” for us — all starting at ninety-nine cents a hit.

No doubt this model will be emulated and copied — across all platforms — until the last vestige of the URL is patched over and removed… the last reminder of an uncomfortable and much messier era of history.

Epilogue

I don’t know about you, but a future without URLs and without the infinite organicity of the web frightens me. It’s not that I know what we’ll lose by removing this artifact of one of the most generative periods in history — and that’s exactly the point! The URL and the ability for anyone to mint a new one and then propagate it is what makes the web so resilient, so empowering, and so interesting! That I don’t need to ask anyone permission to create a new website or webpage is a kind of ideological freedom that few generations in history have known!

Now, granted, there is still much work to be done to spread the power and privilege of the web, but what I don’t want to see happen in the meantime is the next generation of kids grow up with an “easier” laptop, Web Top, Net Book, Nook, or whatever the hell they’re going to call it — that lacks an address bar. I don’t want the next generation to grow up with TV-stupid controls and a set of predefined widgets that determine the totality and richness of their experience on a mere subset of the web! That future cannot be permitted!

Maybe I’m wrong or just paranoid, and maybe the web has won, forever. But I’m not willing to rest on my laurels. No way.

We all know that the internet has won as the transport medium for all data — but the universal interface for interacting with the web? — well, that battle is just now getting underway.

As a user experience designer, it’s on my discipline and peers to provide the right kind of ideas and leadership. If we get the design right, we can empower while clarifying; we can reduce complexity while enhancing functionality; we can expand freedom while not overwhelming with choice. Surely these are the things that good, thoughtful user experience design can achieve!

Well, friends, I’ve said my piece. Whether this threat is real or imagined, it’s one that I believe bears inspection.

Like Neo, if I were forced to choose between all the messiness of free will over the “comfortability” of a contrived existence, I’d choose the red pill, time and time again. And I hope you would too.

A conversation with Ville Vesterinen about standards and the open social web

I sat down for a conversation with Ville Vesterinen (@vesterinen) — co-founder and editor of the ArcticStartup blog — last week while he was visiting from Helsinki. Following up on the post that Jyri Engeström and I wrote on the web at a new crossroads, we discussed the need for more open standards to create the underpinnings of a web-wide platform for building more personal social applications.

At one point in our discussion, I suggested that an HTML tag for a person might make sense — with the ability to include a person’s face or list of friends — without the need for services like Facebook or Twitter. This idea was inspired by Mark Pilgrim’s retelling of the origin story of the <img> tag and conversations I’ve had recently with Michael Hanson of Mozilla (who wrote up a concept for supporting WebFinger in the browser after discussions at IIW).
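No such element exists in HTML, of course, but as a purely speculative sketch — every element and attribute name here is invented — it might look something like:

```html
<!-- Hypothetical: a first-class "person" element the browser itself could resolve -->
<person href="https://example.com/~alice" rel="friend">
  <img src="https://example.com/~alice/photo.jpg" alt="Alice" />
  Alice
</person>
```

The point is that the browser, via something like WebFinger discovery, could resolve that reference into a face, a profile, or a friend list on its own — rather than deferring to any one silo to do it.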

Our conversation goes on around 15 minutes but does a decent job of capturing my current thinking on the social web.

I’d also like to point out that an OpenWebCamp Helsinki is happening this weekend, in case anyone happens to be passing through Finland!