OpenSocial and Address Book 2.0: Putting People into the Protocol

I wonder if Tim O’Reilly knows something that he’s not telling the rest of us. Or maybe he knows something that the rest of us know, but that we haven’t been able to articulate yet. Who knows.

In any case, he’s been going on about this “Address Book 2.0” for a while, and if you ask me, it has a lot to do with Google’s upcoming announcement of a set of protocols, formats and technologies they’ve dubbed OpenSocial.

[Aside: I’ll just point out that I like the fact that the name has “open” in it (even if “open” will be the catchphrase that replaces the Web 2.0 meme) because it means that in order to play along, you have to be some shade of open. I mean, if Web 2.0 was all about having lots of bits and parts all over the place and throwing them together just-in-time in the context of a “social” experience, then being “open” will be what separates those who are caught up and have been playing along from those who have been asleep at the wheel for the past four years. Being “open” (or the “most” open) is the next logical stage of the game, where being anything other than open will be met with a sudden and painless death. This is a good thing™ for the web, but remember that we’re in the infancy of the roll-out here, and mistakes (or brilliant insights) will define what kind of apps we’re building (or able to build) for the next 10 years.]

Let me center the context here. A few days ago, I wrote about putting people in the protocol. I was talking about another evolution that will come alongside the rush to be open (I should note that “open” is an ongoing process, not an endpoint in and of itself). This evolution will be painful for those who resist but will bring great advantage to those who embrace it. It’s pretty simple and if you ask me, it lies at the heart of Tim’s Address Book 2.0 and Google’s OpenSocial; in a word, it’s people.

Before I get into that, let me just point out what this is not about. Fortunately, in his assessment of “What to Look for from Google’s Social Networking Platform”, David Card at Jupiter Research spelled it out in blindingly incorrect terms:

As an analyst who used to have the word “Unix” on his business card, I’ve seen a lot of “open” “consortia” fail miserably. Regular readers know my Rule of Partnership: For a deal to be important, two of the following three must occur:

– Money must change hands
– There must be exclusivity
– Product must ship

“Open” “consortia” aren’t deals. That’s one of the reasons they fail. The key here would be “Product must ship.”

This completely misses the point. This is why the first bubble was so lame. So many people had third-world capital in their heads and missed what’s new: the development, accumulation and exchange of first-world social capital through human networks.

Now, the big thing that’s changed (or is changing) is the emphasis on the individual and her role across the system. Look at MyBlogLog. Look at Automattic’s purchase of Gravatar. Look at the sharp rise in OpenID adoption over the past two years. The future is in non-siloed living, man! The future is in portable, independent identities valid, like Visa, everywhere that you want to be. It’s not just about social network fatigue and getting fed up with filling out profiles at every social network you join and re-adding all your friends. Yeah, those things are annoying, but more importantly, the fact that you have to do it every time just to get basic value from each system means that each has been designed to benefit itself, rather than the individuals coming and going. The whole damn thing needs to be inverted, and like recently rejoined ant segments dumped from many an ant farm, the fractured, divided, shattered-into-a-billion-fragments people of the web must rejoin themselves and become whole in the eyes of the services that (what else?) serve them!

Imagine this: imagine designing a web service where you don’t store the permanent records of facets of people, but instead you simply build services that serve people. It’s no longer even in your best interest to store data about people long term, because the data ages so rapidly that it’s next to useless to try to keep up with it. Instead, it’s about looking across the data that someone makes transactionally available to you (for a split second) and offering up the best service given what you’ve observed when similar fingerprint-profiles have come to your system in the past. It’s not so much about owning or storing Address Book 2.0 as much as being ready when all the people that populate the decentralized Address Book 2.0 concept come knocking at your door. Are you going to be ready to serve them immediately, or will you be asking them to fill out yet another profile form?
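To make that concrete, here’s a minimal sketch of what a “serve, don’t store” design might look like. Everything here (the types, the catalog, the field names) is hypothetical; the point is just that the service acts on whatever the visitor hands it and persists nothing.

```typescript
// Hypothetical sketch of a "serve, don't store" service: it acts on
// whatever profile data the visitor presents for this one transaction
// and deliberately persists nothing. All names here are made up.
interface TransientProfile {
  identifier: string;  // e.g. the OpenID URL the visitor signs in with
  interests: string[]; // shared for this visit only
}

interface Offer {
  headline: string;
}

// Match the visitor's self-described interests against our catalog.
// Note there is no write to long-term storage anywhere in the flow.
function serveVisitor(profile: TransientProfile, catalog: Map<string, Offer>): Offer[] {
  return profile.interests
    .map((interest) => catalog.get(interest))
    .filter((offer): offer is Offer => offer !== undefined);
}

const catalog = new Map<string, Offer>([
  ["photography", { headline: "Photo walks near you" }],
  ["cycling", { headline: "This week's group rides" }],
]);

console.log(
  serveVisitor(
    { identifier: "https://example.com/people/jane", interests: ["cycling"] },
    catalog,
  ),
);
```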

Maybe I’m not being entirely clear here. Admittedly, these are rough thoughts in my head right now and I’m not really self-editing. Forgive me.

But I think that it’s important to say something before the big official announcement comes down, so that we can pre-contextualize this and realize the shift that’s happening even as the hammer drops.

Look, if Google and a bunch of chummy chums are going to make available a whole slew of “social graph” material, we had better start realizing what this means. And we had better start realizing the value that our data and our social capital have in this new eco-system. Forget page views. Forget sticky eyeballs. With OpenID, with OAuth, with microformats, with, yes, FOAF and other formats — hell, with plain ol’ scrapable HTML! — Google and co. will be amassing a social graph the likes of which has yet to be seen or twiddled upon (that’s a technical term). It means that we’re [finally] moving towards a citizen-centric web and it means great things for the web. It means that things are going to get interesting, for Facebook, for MySpace, for Microsoft, for Yahoo! (who recently closed 360, btw!) And y’know, I don’t know what Spiderman would think of OpenSocial or of whatever else is coming down the pipe, but I’m sure in any case, he’d caution that with great power comes great responsibility.

I’m certainly excited about this, but it’s not all about Google. Moreover, OpenSocial is simply an acknowledgment that things have to (and have) change(d). What comes next is anyone’s guess, but as far as I’m concerned, Tim’s been more or less in the ballpark so far; it just won’t necessarily be about owning the Address Book 2.0, but about what it means when it’s taken for granted as a basic building block in the vast clockwork of the open social web.

Site-specific browsers and GreaseKit

There’s a general class of applications that’s been gaining some traction lately in the Mozilla communities, built on a free-standing framework called XULRunner.

The idea is simple: take a browser, cut out the tabs, the URL bar and all the rest of the window chrome and instead load one website at a time. Hence the colloquial name: “site specific browser”.

Michael McCracken deserves credit for inspiring interest in this idea early on with his Webmail app, which in turn led to Ben Willmore’s Gmail Browser application. Both literally took WebKit — Apple’s open source rendering engine (like Mozilla’s Gecko) — had it load Gmail.com and released their apps. It doesn’t get much more straight-forward than that.

For my own part, I’ve been chronicling the development of Site-Specific Browsers from the beginning, setting up the WebKit PBWiki and releasing a couple of my own apps, most recently Diet Pibb. I have a strong belief that a full-featured rendering engine coupled with a few client-side tweaks is the future of browsers and web apps. You can see hints of this in Dashboard Widgets and Adobe’s AIR framework already, though the former’s launching flow conflicts with the traditional “click an icon in my dock to launch an application” design pattern.

Anyway, in developing my Site-Specific Browsers (or Desktop Web Apps?), I became enamored with an input manager for Safari called Creammonkey that allows you to run Greasemonkey scripts inside of Safari (ignore the name — Kato, the developer, is Japanese and English is his second language). An input manager works by matching a particular application’s .plist identifier and injecting its code (via SIMBL) into the application at run-time, effectively becoming part of the host application and gaining access to all the application’s inner workings. When the host application is a rendering engine, it’s this kind of injection that lets you add your own CSS or JavaScript to a webpage and make whatever modifications you want.
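For a sense of what those page modifications look like, here’s roughly the shape of a Greasemonkey-style script: a metadata block declaring which pages it applies to, followed by ordinary script that rewrites the page. This one is a made-up illustration, not an actual script shipped with Creammonkey or GreaseKit.

```typescript
// ==UserScript==
// @name        Hide promos (illustrative example)
// @description Removes a hypothetical promo sidebar from a page
// @include     http://mail.example.com/*
// ==/UserScript==

// Once injected alongside the page, the script has full access to the
// DOM, so it can remove elements outright...
for (const el of Array.from(document.querySelectorAll(".promo"))) {
  el.remove();
}

// ...or inject CSS the same way: create a <style> node and append it.
const style = document.createElement("style");
style.textContent = ".sidebar { display: none; }";
document.head.appendChild(style);
```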

This is what Creammonkey did for Safari. And thank god it was open source or else we never would have ended up with today’s release of the successor to Creammonkey called GreaseKit.

Let me step back a little.

When I found out about Creammonkey, I contacted Kato Kazuyoshi, the developer, and told him how excited I was about what he had created.

“But could I use this on Site-Specific Browsers?” I wanted to know.

In broken English he expressed his uncertainty and so I went about hacking it myself.

I ended up with a crude solution where I would recompile Creammonkey and aim it at a different application every time I wanted to make use of a different Greasemonkey script. It was tedious and didn’t really work the way I envisioned, but given my meager programming skills, it demonstrated the idea of Site-Specific Browsers with Site-Specific Hacks.

I called this collection of site-specific scripts with a properly crafted Input Manager a “GreaseKit” and let the idea sit for a while.

Some time later, Kato got in touch with me and we talked about rereleasing Creammonkey with the functionality that I envisioned and a new name. Today he released the results of that work and called it GreaseKit.

I can’t really express how excited I am about this. I suspect that the significance of this development probably won’t shake the foundations of the web, but it’s pretty huge.

For one thing, I’ve found comparable solutions (Web Runner) clunky and hard to use. In contrast, I’m able to create stand-alone WebKit apps in under 2 minutes with native Apple menus and all the fixins using a template that Josh Peek made. No, these apps aren’t cross-platform (yet), but what I lose in spanning the Linux/PC divide, I gain in the use of Xcode, Apple’s development environment, and frankly a faster rendering engine. And, as of today, the use of Greasemonkey scripts centrally managed for every WebKit app I develop.

These apps are so easy to make and so frigging useful that people are actually building businesses on them. Consider Mailplane, Ruben Bakker’s Gmail app. It’s only incrementally better than McCracken’s WebMail or Willmore’s Gmail Browser, but he’s still able to charge $25 for it (which I happily paid). Now, with GreaseKit in the wild, I can add all my favorite Greasemonkey scripts to Mailplane — just like I might have with Firefox — but if Gmail causes a browser crash, I only lose Mailplane, rather than my whole browser and all the tabs I have open. Not to mention that I can command-tab to Mailplane like I can with Mail.app… and I can drag and drop a file on to Mailplane’s dock icon to compose a new email with an attachment. Just add offline storage (inevitable now that WebKit supports HTML5 client-side database storage) and you’ve basically got the best of desktop, web and user-scriptable applications all in one lightweight package. I’ll have more to write on this soon, but for now, give GreaseKit a whirl (or check the source) and let me know what you think.
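As an aside, the client-side database API that WebKit shipped (what became the Web SQL proposal) looks roughly like this. A sketch only: the database name and schema are made up, and the ambient openDatabase function is declared by hand since it isn’t part of standard TypeScript DOM typings.

```typescript
// WebKit's client-side database API, roughly as proposed; the schema
// and database name are made up. openDatabase isn't in the standard
// TypeScript DOM typings, so we declare the sliver of it we use here.
declare function openDatabase(
  name: string,
  version: string,
  displayName: string,
  estimatedSize: number,
): any;

const db = openDatabase("mail_drafts", "1.0", "Offline mail drafts", 1024 * 1024);

db.transaction((tx: any) => {
  tx.executeSql(
    "CREATE TABLE IF NOT EXISTS drafts (id INTEGER PRIMARY KEY, body TEXT)",
  );
  tx.executeSql("INSERT INTO drafts (body) VALUES (?)", ["Saved while offline"]);
  tx.executeSql("SELECT body FROM drafts", [], (_tx: any, results: any) => {
    // Drafts survive page reloads and network outages alike
    for (let i = 0; i < results.rows.length; i++) {
      console.log(results.rows.item(i).body);
    }
  });
});
```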

Did the web fail the iPhone?

Twitter / Ian McKellar: @factoryjoe, wait, so all these "web apps" people have invested time and money in are now second-class applications?

Ian might be right, but not because of Steve’s announcement today about opening up the iPhone.

Indeed, my reaction so far has been one of quasi-resignation and disappointment.

A voice inside me whimpers, “Don’t give up on the web, Steve! Not yet!”

You have to understand that when I got involved in helping to plan iPhoneDevCamp, we didn’t call it iPhoneWebDevCamp for a reason. As far as we knew, and as far as we could see into the immediate future, the web was the platform of the iPhone (Steve Jobs even famously called Safari the iPhone’s SDK).

The hope that we were turning the corner on desktop-based applications was palpable. By keeping the platform officially closed, Apple brought about a collective channeling of energy towards the development of efficient and elegant web interfaces for Safari, epitomized by Joe Hewitt’s iPhone Facebook App (started as a project around iPhoneDevCamp and since continued on by Christopher Allen).

And we were just getting started.

…So the questions on my mind today are: was this the plan all along? Or, was Steve forced into action by outside factors?

If this were the case all along, I’d be getting pretty fed up with these kinds of costly and duplicitous shenanigans. For god’s sake, Steve could at least afford to stop being so contradictory! First he lowers the price of the iPhone months after releasing it, then drops the price of DRM-free tracks (after charging people to “upgrade their music”), and now he’s promising a software SDK in February, pledging that an “open” platform “is a step in the right direction” (after bricking people’s phones and launching an iPhone WebApps directory, seemingly in faux support of iPhone Web App developers).

Now, if this weren’t in the plan all along, then Apple looks like a victim of the promise — and hype — of the web as platform. (I’ll entertain this notion, while keeping in mind that Apple rarely changes direction due to outside influence, especially on product strategy.)

Say that everything Steve said during his keynote were true and he (and folks at Apple) really did believe that the web was the platform of the future — most importantly, the platform of Apple’s future — then this kind of reversal would have to be pretty disappointing inside Apple as well. Especially considering their cushy arrangement with Google and the unlikelihood that Mac hardware will ever outsell PCs (so long as Apple has the exclusive right to produce Mac hardware), it makes sense that Apple sees its future in a virtualized, connected world, where its apps, its content and its business is made online and in selling thin clients, rather than in the kind of business where Microsoft made its billions, selling dumb boxes and expiring licenses to the software that ran on them.

If you actually read Apple’s guide for iPhone content and application development, you’d have to believe that they get the web when they call for:

  • Understanding User-iPhone Interaction
  • Using Standards and Tried-and-True Design Practices
  • Integrating with Phone, Mail, and Maps
  • Optimizing for Page Readability
  • Ensuring a Great Audio and Video Experience (while Flash is not supported)

These aren’t the marks of a company that is trying to embrace and extend the web into its own proprietary nutshell. Heck, they even support microformats in their product reviews. It seems they so badly want the web — the open web — to succeed, given all the rhetoric so far. Why backslide now?

Well, to get back to the title of this post, I can’t help but feel like the web failed the iPhone.

For one thing, native apps are a known quantity for developers. There are plenty of tools for developing native applications and interfaces that don’t require you to learn some arcane layout language that doesn’t even have the concept of “columns”. You don’t need to worry about setting up servers and hosting and availability and all the headaches of running web apps. And without offering “services in the cloud” to make web application hosting and serving a piece of cake, Apple kind of shot itself in the foot with its developers who, again, aren’t so keen on the ways of the web.

Flipped around, as a proponent of the web, even I can admit how unexciting standard interfaces on the web are. And how much work and knowledge it requires to compete with the likes of Adobe’s AIR and Microsoft’s SilverLight. I mean, us non-proprietary web-types rejoice when Safari gets support for CSS-based rounded corners and the ability to use non-standard typefaces. SRSLY? The latter feature was specified in 1998! What took so long?!

No wonder native app developers aren’t crazy about web development for the iPhone. Why should they be? At least considering where we’re at today, there’s a lot to despise about modern web design and to despair about how little things have improved in the last 10 years.

And yet, there’s a lot to love too, but not the kind of stuff that makes iPhone developers want to abandon what’s familiar, comfortable, safe, accessible and hell, sexy.

It’s true, for example, that with the web you get massive distribution. It means you don’t need a framework like Sparkle to keep your apps up-to-date. You can localize your app in as many languages as you like, and based on your web stats, can get a sense for which languages you should prioritize. With protocols like OpenID and OAuth, you get access to all kinds of data that won’t be available solely on a user’s system (especially when it comes to the iPhone, which dispenses with “Save” functionality), as well as a way to uniquely identify your customers across applications. And you get the heightened probability that someone might come along and look to integrate with or add value to your service via some kind of API, without requiring any additional download to the user’s system. And the benefits go on. But you get the point.

Even still, these benefits weren’t enough to sway iPhone developers, nor, apparently, Steve Jobs. And to the degree to which the web is lacking in features and functionality that would have allowed Steve to hold off a little longer, there is opportunity to improve and expand upon what I call the collection of “web primitives” that compose the complete palette of interaction options for developers who call the web their native platform. The simple form controls, the lightboxes, the static embedded video and audio, the moo tools and scriptaculouses… they still don’t stack up against native (read: proprietary) interface controls. And we can do better.

We must do better! We need to improve what’s going on inside the browser frame, not just around it. It’s not enough to make a JavaScript compiler faster or even to add support for SVG (though it helps). We need to define, design and construct new primitives for the web that make it super simple, straight-forward and extremely satisfying to develop for the web. I don’t know how it is that web developers have for so long put up with the frustrations and idiosyncrasies of web application development. And I guess, as far as the iPhone goes, they won’t have to anymore.

It’s a shame really. We could have done so much together. The web and the iPhone, that is. We could have made such sweet music. When folks eventually realize that Steve was right and that developing for Safari is the future of application development, they’ll wish they had invested in and lobbied for richer and better tools and interfaces for what will inevitably become the future of rich internet application development and, no surprise, the future of the iPhone and all its kin.

Data capital, or: data as common tender

Wikipedia states that legal tender is payment that, by law, cannot be refused in settlement of a debt denominated in the same currency. Currency, in turn, is a unit of exchange, facilitating the transfer of goods and/or services.

I was asked a question earlier today about the relative value of open services against open data served in open, non-proprietary data formats. It got me thinking about whether — in the pursuit of utter openness in web services and portability in stored data — that’s even the right question. Are we providing the right incentives for people and companies to go open? Is it self-fulfilling or manifest destiny to arrive at a state of universal identity and service portability leading to unfettered consumer choice? Is this how we achieve VRM nirvana, or is there something missing in our assumptions and current analysis?

Mary Jo Foley touched on this topic today in a post called Are all ‘open’ Web platforms created equal? She asks whether Microsoft’s PC-driven worldview can be modernized to compete in the network-centric world of Web 2.0, where no single player dominates and everything is instead made up of best-of-breed APIs and services from across the Web. The question she alludes to is a poignant one: even if you go open (and Microsoft has, by any estimation), will anyone care? Even if you dress up your data and jump through hoops to please developers, will they actually take advantage of what you have to offer? Or is there something else to the equation that we’re missing? Some underlying truism that is simply refracting falsely in light of the newfound sexiness of “going open”?

We often tell our clients that among the first things you can do to “open up” are building out an API, supporting microformats, and adopting OpenID and OAuth. But that’s just the start. That’s just good data hygiene. That’s brushing your teeth once a day. That’s making sure your teeth don’t fall out of your head.

There’s a broader method to this madness, but unfortunately, it’s a rare opportunity when we actually get beyond just brushing our teeth and really get to sink them in: going beyond remedial steps like adding microformats to web pages to crafting just-in-time, distributed, open-data-driven web applications that actually do stuff and make things better. But as I said, it’s a rare occasion for us because we’ve all been asking the wrong questions, providing the wrong incentives and designing solutions from the perspective of the silos instead of from the perspective of the people.

Let me make a point here: if your data were legal tender, you could take it anywhere with you and it couldn’t be refused if you offered to pay with it.

Let me break that down a bit. The way things are today, we give away our data freely and frequently, in exchange for the use of certain services. Now, in some cases, like Pandora or Last.fm, the use of the service itself is compelling and worthwhile, providing an equal or greater exchange rate for our behavior or taste data. In many other cases, we sign up for a service and provide basic demographic data without any sense of what we’re going to get in return, often leaving scraps of ourselves to fester all across the internet. Why do we value this data so little? Why do we give it away so freely?

While researching legal tender today, I learned of an interesting concept called “Gresham’s Law”, commonly stated as: when there is a legal tender currency, bad money drives good money out of circulation.

Don’t worry, it took me a while to get it too. Nicolas Nelson offered the following clarification: if high quality and low quality are forced to be treated equally, then folks will keep good quality things to themselves and use low quality things to exchange for more good stuff.

Think about this in terms of data: if people are forced (or tricked) into thinking that the data they enter into web applications is not being valued (or protected) by the sites that collect it, well, eventually they’ll either stop entering the data (heard of social network fatigue?) or they’ll start filling those sites with bogus information, leading to “bad data” driving the “good data” out of the system and ultimately to a kind of data inflation, where suddenly the problem is no longer getting people to just sign up for your service, but to also provide good data of some value. And this is where data portability — or data as legal tender — starts to become interesting and allows us to see through the distortion of the refraction.

Think: Data as currency. Data to unlock services. Data owned, controlled, exchanged and traded by the creator of said data, instead of by the networks he has joined. For the current glut of web applications to maintain and be sustained, we must move to a system where people are in charge of their data, where they garden and maintain it, and where they are free to deposit and withdraw it from web services just as they do money from banks.

If you want to think about what comes next — what the proverbial “Web 3.0” is all about — it’s not just about a bunch of web applications hooked up with protocols like OAuth, speaking microformats and other open data tongues back and forth to each other. That’s the obvious part. The change comes when a person is in control of her data, and when the services that she uses firmly believe that she not only has a right to do as she pleases with her data, but that it is in their best interest to spit her data out in whatever myriad format she demands and to whichever myriad services she wishes.
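In code, that last idea is simple to sketch. Something like the following (with made-up field names and deliberately simplified formats) is all that “spit her data out in whatever format she demands” really asks of a service:

```typescript
// Hypothetical sketch: one profile record, exportable in whichever
// format its owner asks for. Field names and formats are illustrative.
interface Profile {
  name: string;
  url: string;
  email: string;
}

function exportProfile(profile: Profile, format: "json" | "vcard" | "hcard"): string {
  switch (format) {
    case "json":
      return JSON.stringify(profile);
    case "vcard": // the address book interchange format
      return [
        "BEGIN:VCARD",
        "VERSION:3.0",
        `FN:${profile.name}`,
        `URL:${profile.url}`,
        `EMAIL:${profile.email}`,
        "END:VCARD",
      ].join("\n");
    case "hcard": // the same data as a microformatted HTML fragment
      return (
        `<div class="vcard">` +
        `<a class="fn url" href="${profile.url}">${profile.name}</a> ` +
        `<a class="email" href="mailto:${profile.email}">${profile.email}</a>` +
        `</div>`
      );
  }
}

const me: Profile = {
  name: "Jane Doe",
  url: "https://example.com/jane",
  email: "jane@example.com",
};
console.log(exportProfile(me, "vcard"));
```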

The “data web” is still a number of years off, but it is rapidly approaching. It does require that the silos popular today open up and transition from repositories to transactional enterprises. Once data becomes a kind of common tender, you no longer need to lock it; in fact, the value comes from its reuse and circulation in commerce.

To some degree, Mint and Wesabe are doing this retroactively for your banking records, allowing you to add “data value” to your monetary transactions. Next up, Google and Microsoft will do this for your health records. For a more generic example, Swivel is doing this today for the OECD but has a private edition coming soon. Slife/Slifeshare, i use this and RescueTime do this for your use of desktop apps.

This isn’t just attention data that I’m talking about (though the recent announcements in support of APML are certainly positive). This goes beyond monitoring what you’re doing and how you’re spending your time. I’m talking about access to all the data that it would take to reconstitute your entire digital existence. And then I’m talking about the ability to slice, dice, and splice it however you like, in pursuit of whatever ends you choose. Or choose not to.


I’ll point to a few references that influenced my thinking: Social Capital To Show Its Worth at This Week’s Web 2.0 Summit, What is Web 2.0?, Tangled Up in the Future – Lessig and Lietaer, Intentional Economics Day 1, Day 2, Day 3.

So Mozilla wants to go mobile, eh?

As with baseball, on the web we have our home teams and our underdogs and our all-stars; we have our upsets, our defeats, and our glorious wins in the bottom of the ninth. And while I’m actually not much of a baseball fan anymore (growing up in New England, I was exposed to plenty of Red Sox fever), I do relate my feelings for Mozilla to the way a lot of folks felt about the Red Sox before they finally won the World Series and broke the Curse of the Bambino: that is, I identify with Mozilla as my team, but dammit if they don’t frustrate me on occasion.

Tara wonders why I spend so much time on Mozilla when clearly I’m a perennial critic of the direction they’re headed in and the decisions that they make. But then Tara also didn’t grow up around vocal critics of the Red Sox who expressed their dedication and patronage to the team through their constant criticism and anger. It might not make sense, and it might not seem worth my time, but whatever the case, you really can’t be neutral about Mozilla and still consider yourself a fan. Even if you disagree with every decision that they make, they’re still the home team of the Open Web and heck, even as you bitch and whine about this or about that, you really just want to see them do well, oftentimes in spite of themselves.

So, with that said, let me give you a superficial summary of what I think about Mozilla’s recent announcement about their mobile strategy:

If you want to stop reading now, you can, but the details and background of my reasoning might be somewhat interesting to you. I make no promises though.

Theories about Google’s acquisition of Jaiku

I think it’s pretty interesting news that Google ended up buying Jaiku. Can’t say I saw it coming, and it’s unexpected enough to make things interesting for a bunch of us for whom such actions cause reverberations in our work, and to give rise to fair speculation about why they did it and what it means.

So earlier I just so happened to be talking with Someone Who Shall Remain Unnamed and he put my mind into circles about this thing. He pointed me to some of dotBen’s ideas about the acquisition. While I’m not really sure about his geographic assessment (given that my old roommate and coworker Andy Smith is coming back to San Francisco, which kind of blows the London-relocation theory), I do share his (and others’) concern that this acquisition could spell the demise of Jaiku as it gets deprioritized in favor of other, more average-Joe services:

Did Google buy Jaiku for its engineering talent rather than for the product? I’m betting the engineering team is going to be siphoned off into the GPhone project.

At the same time, I think that Google’s play will likely continue to be software rather than hardware. Here’s why:

  • Google makes a lot of software, most of it hosted on the web.
  • Google doesn’t make any consumer hardware.
  • Google wouldn’t mind being in the wireless business, but probably not the handset business.
  • And, Apple already has a bunch of deals with the existing phone carriers.
  • Apple makes good consumer hardware.

Yes, those last two are about Apple. Because the deal between Google and Apple is sweetening every day. And Apple doesn’t really buy companies, Google does. So that’s why Google bought Jaiku today and not Apple.

But I digress. Let’s throw some more theories and extrapolations out there.

Ok, first and foremost, the browser. In one of my many unpublished drafts, I talk about how Steve was right about building Web 2.0 apps for the iPhone and why he’s even kind of right in trying to prevent people from building so-called “native apps”. Sadly, folks don’t really think about the web as a “native” environment yet. It’s got crappy access to hardware and some pretty weak tools for building rich interfaces, so they stick with what they know rather than embracing the rough-and-tumble world of designing and coding for web browsers. What’s there to love, really?

Still, Apple (and more so Google) already think of the web as their native turf, and as the ground where the battle for the future will be waged (which incidentally puts Microsoft at a huge disadvantage, since it’s been fighting against the web all along). And the browser is key to this equation, which is why Apple has a world-class (and getting better daily) open source browser that’s had an awful lot of attention paid to its compatibility with Google Apps lately. It’s also mobile ready, as demonstrated by how insanely well Safari works on the iPhone compared to any other mobile browser. I guess to some extent, one can forgive any misgivings Google now has about betting on Firefox, the little engine that could have and should have and then, well, didn’t, at least when it came to having a mobile strategy. But I digress. Again.

Take a look at the iPhone’s Maps feature. Yes, it’s driven by Google, but I didn’t have to tell you that. What’s missing? Well, besides GPS for yourself, it’s missing GPS for your friends and family. Yeah, that sounds a little creepy to me too, but don’t kid yourself, it’s coming. Us early adopters have been geoplotting each other for years now, on sites like Plazes, Jaiku and — oh yeah — before them, on another Google acquisition: Dodgeball. I mean, Google’s full of tenacity. It’s nutty to think that regular people will ever adopt the kind of manual check-ins that we currently use with these services. After all, that’s what computers are for.

What Google keeps getting closer to, first with its acquisition of Dodgeball a year ago and now with Jaiku, is the socialization of presence. That’s what it’s all about. That’s how they’ll increase the relevance of their ads in situ. That’s how they’ll improve the quality of their services (like Google Maps). Tim O’Reilly has for some time called for the development of the Web 2.0 Address Book. I think underlying that assertion is the fact that more people need to get in the habit and mindset of maintaining their online selves — thereby making their status — or presence — more widely known and available.

The Web 2.0 Address Book isn’t really about how you connect to someone. It’s not really about having their home, work and secret lair addresses. It’s not about having access to their 15 different cell phone numbers that change depending on whether they’re home, at work, in the car, on a plane, in front of their computer and so on. It’s not about knowing the secret handshake and token-based smoke-signal that gains you direct access to send someone a guaranteed email that will bypass their moats of antispam protection. In the real world (outside of Silicon Valley), people want to type in the name of the recipient and hit send and have it reach the destination, in whatever means necessary, and in as appropriate a manner as possible. For this to happen, recipients need to provide a whole lot more information about themselves and their contexts to the system in order for this whole song and dance to work.

If you think far enough into the future, and realize that the iPhone is essentially the Sputnik of the next generation of computing and telephony, you’ll realize how important the development of presence technology will be in light of the 2.0 Address Book. Sure, the VoIP folks have known about this stuff forever, and Cisco even has a few decent products built on it, but I’m not talking about IP-routed phone calls. I’m talking about IP-routed people. Believe it or not, this is where things have to go in order for Google and Apple to continue their relentless drive towards ease-of-use and clarity of design.

In the future, you will buy a cellphone-like device. It will have a connection to the internet, no matter what. And it’ll probably be powered over the air. The device will be tradeable with your friends and will retain no solid-state memory. You literally could pick up a device on a park bench, log in with your OpenID (IP-routed people, right?) from any number of service providers (though the best ones will be provided by the credit card companies). Your user data will live in the cloud and be delivered in bursts via myriad APIs strung together and then authorized with OAuth to accomplish specific tasks as they manifest. If you want to make a phone call, you call up the function on the touch screen and it’s all web-based, and looks and behaves natively. Your address book lives in Google-land on some server, and not in the phone. You start typing someone’s name and not only does it pull the latest photos of the first five to ten people it matches, but it does so in a distributed fashion, plucking the data from across the web, grabbing the most up-to-date contact information, the person’s availability and their presence. It’s basically an IM-style buddy list for presence, and the data never grows old and never goes stale. Instead of just seeing someone’s inert photo when you bring up their record in your address book, you see all manner of social and presence data. Hell, you might even get a picture of their current location. This is the lowercase semantic web in action, where the people who still hold out will become increasingly marginalized through obfuscation and increasingly invisible to the network. I don’t have an answer for this or a moral judgement on it; it’s going to happen one way or another.

In any case, Apple will make these ultra-slim, literally stupid devices that don’t do anything but sport a web browser, a scratch-resistant surface and some RAM (and show off ads really crisply). And the devices that are popular today with 40, 60, 80GB of storage will become our generation’s Edsel, when the future music device will be able to access all music from all time on demand, in crystal clear streaming vinyl-quality. Oh, and we’ll have solved network latency by turning every single device with any kind of antenna into a network repeater. Scratch that, we’ll turn every human into a repeater. And yeah, we’ll have figured out the collision problems that might come with such a network as well.

Ok, all nice and fine. It’s coming and it’s not even that far off so I shouldn’t congratulate myself about being too prescient or anything.

But I do think that there’s a framing here that’s being missed in gawking over the acquisition of Jaiku.

In the scheme of things, it really doesn’t have anything to do with Twitter, other than that Twitter is a dumb network that happens to transport simple messages really well in a format like Jaiku’s, while Jaiku is a mobile service that happens to send dumb messages around to demonstrate what social presence on the phone could look like. These services are actually night and day different, and it’s no wonder that Google bought Jaiku and not Twitter. And hey, it’s not a contest either; it’s just that Twitter didn’t have anything to offer Google in terms of development for their mobile strategy. Twitter is made up of web people and is therefore a content strategy; Jaiku folks do web stuff pretty well, but they also do client stuff pretty well. I mean, Andy used to work at Flock on a web browser. Jyri used to work at Nokia on devices. Jaiku practically invented the social presence browser for the phone (though, I might add, Facebook snatched up Parakey out of Google’s clutches, denying them Joe Hewitt, who built the first web-based presence app with the Facebook iPhone app). If anything, the nearest approximation to Jaiku is Plazes, which actually does the location-cum-presence thing and has mobile development experience.

That’s enough for now. I’ll stew on the Google-Apple-iPhone-gPhone-presence-Jaiku-Dodgeball-address book-maps tangle a little more. But this should give you some fodder to contemplate in the meantime.

OAuth Core 1.0 Final Draft is out — now build stuff!

I should have pushed this post out yesterday and no later than earlier today, but what are you gunna do, y’know?

Anyway, I spent most of the day in #OAuth with Eran, Gabe, Mark Atwood and others plotting the release of the news about our release of OAuth Core 1.0 Final Draft. This news comes after three previous public drafts, the culmination of a collective effort spanning four months of regular meetings, intense discussions and negotiations, and constantly reminding ourselves to keep the scope creep to an absolute minimum.

We overhauled the website, got the blog going (with an official press release!), some initial tumbles, added some content to the wiki and ended up with an excellent Getting Started guide and an Introduction provided by Eran.

All in all, I want to quickly point out the technologies that made this effort possible with just a small group of admin-advocates.

Without these tools, and given how far collaborative technology has come in just two years, it wouldn’t be possible for us to be moving as fast as we are and making as much progress as we have been, consistently, with as few person-hours as we have at our disposal.

Next steps

Of course, even with this Final Draft out, we’re only getting started. There are a number of things that still need to be done, and these are a few off the top of my head (I’m maintaining my personal list here if you want to help out):

  • Get the logo figured out. I’ve pinged like five guys about this. Why y’all gots ta be so busy! If you have an idea, please post it to the Flickr group. I’m patient on this one though.
  • Implement support for OAuth! If you can, replace your proprietary API access delegation protocol with OAuth. If you don’t code (like me!), add software that you think could benefit from OAuth to the Advocacy page.
  • Help improve the libraries. A bunch of folks have worked on the first open source implementations of OAuth but we need more testers, more implementors and more folks to get code running in the wild.
  • Spread the word! This is a grassroots-led effort and if it’s going to be successful, it has to spread from the bottom up. Early reviews suggest that the spec is clear and easy to implement. If you can promote this open protocol within your organization and get it adopted, please do…! And then blog it and point me to it so I can tumbleblog it!

I’m sure there’s more that needs to be done, but this is what I got now (yes Gabe, we still have to deal with IPR… I’m on it!).

Making history visible as it happens

I’d like to close by making a point of process explicit for the benefit of those just tuning in. Two defining aspects of successful community efforts with which I’ve been involved are 1) the ability of an enthusiast to tell a story about why some idea or technology is personally meaningful to her, and 2) a large body of collected wisdom, notes, drafts, paper trails, ideas, struggles and all the things that went into the final product. Math teachers force you to show your work for a reason: it’s oftentimes not the answer that’s interesting, it’s how you got there.

So, I’m making explicit my intention to be as transparent, straight-forward and forthcoming about my experiences helping to usher OAuth forward. I’ve not really done this kind of thing before, but being surrounded by brilliant and talented people seems to bring out the best in me. That and I’ve watched what’s happened in the OpenID community for over a year now and, for what it’s worth, I want to avoid getting dragged down into endless loops and debilitating scope creep. In fact I’ve not been involved with the OpenID community for months because of this kind of stuff and I want to expressly make an effort to document openly the approach we take with OAuth to see if we can’t avoid the same fate.

Cheers and congrats to all the folks who helped to make this happen. It might be a relatively minor step in terms of the development of new technology today, but looking out long enough into the horizon, I think we’re adding a significantly important piece of the puzzle that’s been missing for some time.

Putting people into the protocol

I really don’t like the phrase “user-centric identity”, and as I struggled to name this post, I came upon Pete Rowley’s 2006 phrase “the people are in the protocol”.

This isn’t much different from what I used to call “people in the browser” when I was at Flock, so I’ll use it.

Anyway, as part of another post I’m working on, it seemed useful to call out what I see as the benefits to services that “put people into the protocol” or, more aptly, those services that are designed around people who tend to use more than one web service and a single identifier (like an OpenID) to represent themselves across services.

Here’s what I’ve come up with so far:

  • I am me, wherever I go. I may have multiple personas, facets or identities that I use online, but fundamentally, I can manage them more effectively because services are oriented around me and not around the services that I use (it would be like logging into a new user account every time you want to switch applications!).
  • I have access to my stuff, wherever I am. Even though I use lots of different web services, just like I use lots of desktop applications, I can always access my data, no matter where I created it or where it’s stored. And if I want to get all of my data out of a service into another one, I should be able to do so.
  • My friends come with me, but continue to use only the services that they choose to. If I can send email from any domain to any domain, why can’t I join one network and then add friends from any other network?
  • I am the master of my domain. Both literally and figuratively, I should be able to choose any identity provider to manage all my external connections to the world, including running my own, from my own domain. While remote service providers can certainly set the standards for who they allow access to their APIs, this should be done in a clear and transparent way, so that even people who host their own identity can have fair access.

There may of course be other benefits that I’m forgetting or omitting, but I think I’ve covered some of the primary ones, at least so that I can continue with my other post!

Announcing OAuth 1.0 Public Draft 1

Well, it’s been a long time coming, and if you’ve been following my Twitters at all, you’ll know that I’ve been working on an open authorization protocol called OAuth for the past few months. Today we released the first Public Draft for review.

The idea started as a humble effort to accomplish two goals: first, to enable Ma.gnolia members who created their accounts with OpenIDs (and therefore don’t have traditional usernames and passwords) to be able to use Dashboard Widgets; and second, to enable Twitter to adopt OpenID even though its current API requires a username and password to authorize access to protected status feeds.

In any case, both of these use cases were part of the same problem: the lack of a uniform and open protocol for what’s called “delegated authentication”. Another useful metaphor that I’ve come to like is one that John Panzer used (and Eran Hammer-Lahav before him), that of a valet key:

OAuth is like a valet key for all your web services. A valet key lets you give a valet the ability to park your car, but not the ability to get into the trunk or drive more than 2 miles or limit the RPMs on your high end German automobile. In the same way, an OAuth key lets you give a web agent the ability to check your web mail but NOT the ability to pretend to be you and send mail to everybody in your address book.

Arguably the value of OAuth as a technological innovation goes beyond that. After all, anyone can implement their own valet key system that works in their own universe of vehicles. The harder part is actually the social and political work of getting everyone to buy in and follow the same design pattern, leading to interoperability between systems.

In fact that’s where we were before OAuth: Google had AuthSub, AOL had OpenAuth (OAuth’s former name, by the way), Yahoo had BBAuth and Flickr had FlickrAuth (not to mention Facebook Auth and Windows Live ID Web Authentication). Which meant that if you were an independent developer (like Matt Biddulph from Dopplr) you had to pick which auth system you wanted to support; unless you had money and time coming out of your armpits, you weren’t about to code against all of them.

Of course, that’s not reality. And no one has the time or energy to maintain support for every protocol, so instead, most people take the easy way out and just ask for the veritable keys to all the different services you use:

ShareThis | Import your addresses...

Now, don’t get me wrong, this gets the job done. And it works. But it’s a really really really bad idea.

Not only are people being trained into thinking that it’s okay to fill in any form that looks like a Gmail login box on any old website (trusted or not), but it’s creating an untenable situation where, as a member of these various services, you have no way to control the access you’ve given away without changing your password — which in effect will disable every one of these sites that’s storing your credentials — forcing you to revisit every one of them and share with them your new username and password. What a crappy experience!

Fortunately, Flickr got it right a long time ago and set the bar for user experience. In their model, you can try out a bunch of tools that help you upload photos to the service or use off-site mashups that do cool things with your photos all without giving away your most valuable credentials: your username and password!

Instead, when you sign in to your account, Flickr will assign special keys called “tokens” to each application that wants to access your account. Flickr then lets you configure how much access you want to grant to each app and lets you revoke that access at any time. No changing your password, no running around to have to re-authenticate all the apps that you still want to use if you want to disable one of them.

OAuth takes that approach one step further, extracting the best practices from the popular authentication systems I mentioned above and turning them into one elegant, unified authentication protocol that anyone can implement. And, because it’s an open standard that we hope many people will adopt to replace their own proprietary authentication systems, it should be a no-brainer for developers to use and to support, resulting in fewer sites that, with a straight face, continue to ask you for your username and password (oh, and yes, it is compatible with OpenID, with Google Accounts, with Yahoo Accounts and any other sign-in system — OAuth doesn’t dictate how you sign in, only how you delegate authentication).
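For the curious, here’s roughly what the signing at the heart of the protocol looks like. This is a compressed sketch of OAuth 1.0’s HMAC-SHA1 signature construction (the token-exchange round trips are omitted, and the keys, URL and parameters below are placeholder values):

```typescript
import { createHmac } from "node:crypto";

// Percent-encode per RFC 3986, which is slightly stricter than
// JavaScript's encodeURIComponent.
function percentEncode(s: string): string {
  return encodeURIComponent(s).replace(/[!'()*]/g, (c) =>
    "%" + c.charCodeAt(0).toString(16).toUpperCase(),
  );
}

// Build the OAuth 1.0 HMAC-SHA1 signature for a request.
function sign(
  method: string,
  url: string,
  params: Record<string, string>,
  consumerSecret: string,
  tokenSecret: string,
): string {
  // 1. Encode, sort and join all the oauth_* and query parameters
  const normalized = Object.entries(params)
    .map(([k, v]) => [percentEncode(k), percentEncode(v)])
    .sort((a, b) => {
      const keyOrder = a[0] < b[0] ? -1 : a[0] > b[0] ? 1 : 0;
      return keyOrder !== 0 ? keyOrder : a[1] < b[1] ? -1 : a[1] > b[1] ? 1 : 0;
    })
    .map(([k, v]) => `${k}=${v}`)
    .join("&");

  // 2. The signature base string: METHOD&url&params, each encoded
  const baseString = [
    method.toUpperCase(),
    percentEncode(url),
    percentEncode(normalized),
  ].join("&");

  // 3. Key the HMAC with both secrets and base64-encode the digest
  const key = `${percentEncode(consumerSecret)}&${percentEncode(tokenSecret)}`;
  return createHmac("sha1", key).update(baseString).digest("base64");
}

// Placeholder credentials, just to show the call shape
console.log(
  sign(
    "GET",
    "http://photos.example.net/photos",
    {
      oauth_consumer_key: "dpf43f3p2l4k3l03",
      oauth_nonce: "kllo9940pd9333jh",
      oauth_signature_method: "HMAC-SHA1",
      oauth_timestamp: "1191242096",
      oauth_token: "nnch734d00sl2jdk",
      oauth_version: "1.0",
      file: "vacation.jpg",
      size: "original",
    },
    "kd94hf93k423kf44",
    "pfkkdhi9sl3r4s00",
  ),
);
```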

Even though we’re only releasing the first public draft today, we already have pledges from Ma.gnolia, Twitter, Pownce, Jaiku, Dopplr and others that they intend to implement the protocol.

If you want to get involved, join our mailing list and take a look at the OAuth libraries under development for PHP, Ruby, Python, C# and others. We plan to formally release the final version of the OAuth Protocol v1.0 on Oct 1, so watch this space for more news until then.

Stop building social networks

I started writing this post August 8th. Now that Dave Recordon is at Six Apart and blogging about these things and Brad Fitzpatrick has moved on to Google, I thought I should finally finish this post.

I fortuitously ran into Tim O’Reilly, Brad Fitzpatrick and Dave Recordon in Philz yesterday as I was grabbing a cup of coffee. They were talking about some pretty heady ideas and strategies for wrenching one’s networks of friends free from the multiple social networks out there — and recombining them in such a way that it’d be very hard to launch a closed-down social network again.

The idea isn’t new. It’s certainly been attempted numerous times, with few successful efforts to show for it to date. I think that Brad and Dave might be on to something with their approach, though it raises an important question: once you’ve got a portable social network, what do you do with it?

Fortunately, Brian Oberkirch has been doing a lot of thinking on this subject lately, starting with a post on designing portable social networks and leading up to his most recent post offering some great tips on how to prepare your site for the day when your users come knocking for a list of their friends to populate their new favorite hangout.

In his kick-off post, Brian laid out the problem of social network fatigue as stemming from the:

  • Creation of yet another login/password to manage
  • Need to re-enter profile information for new services
  • Need to search and re-add network contacts at each new service
  • Need to reset notification and privacy preferences for each new service
  • Inability to manage and add value to these networks from a central app/work flow

I think these are the fundamental drivers behind the current surge of progress in user-centric identity services, as opposed to the aging trend of network-centric web services. If Eric Schmidt thinks that Web 3.0 will be made up of small pieces loosely joined and “in the cloud”, my belief, going back to my time with Flock, is that having consistent identifiers for the same person across multiple networks, services or applications is going to be fundamental to getting the next evolution of the web right.

Tim made the point during our discussion that at one point in computing history, SQL databases embedded access permissions in the database itself. In modern times, access controls have been decoupled from the data and are managed, maintained and federated without regard to the data itself, affording a host of new functionality and stability simply by adjusting the architecture of the system.

If we decouple people and their identifiers from the networks that currently define them, we start moving towards greater granularity of privacy control through mechanisms like global social whitelists and buddy list blocklists. It also means that individuals can solicit services to be built that serve their unique social graph across any sites and domains (kind of like a fingerprint of your relationship connections), rather than being restrained to the limited freedom in locked down networks like Facebook. And ultimately, it enables cross-sharing content and media with anyone whom you choose, regardless of the network that they’re on (just like email today, where you can send someone on Yahoo.com email from Gmail.com or even Hotmail.com, and so on, but with finer contact controls). The result is that the crosscut of one’s social network could be as complete (or discreet) as one chose, and that rather than managing it in a social network-centric way, you’d manage it centrally, just as you do your IM buddy list, and it would follow you around on any site that you visit.

So it’s become something of a refrain in the advice that we’ve been giving out lately to our clients that they should think very critically about what social functionality they should (and shouldn’t) build directly into their sites. Rather than assuming that they should “build what Flickr has” or think about which features of Facebook they should absorb, the better approach, I think, is to assume that in the next 6-8 months (for the early adopters at least) there’s going to be a shift to these portable networks, where the basics will mostly be better covered by existing solutions and will not need to be rebuilt, and where each new site — especially those with specific functionality like TripIt (disclosure: we’ve consulted for TripIt) — will need to focus less on building out its own social network and more on how social functionality can support their core competency.

We’re still in the early stages of recognizing and identifying the components of this problem. Thus far, the Microformats wiki says:

Why is it that every single social network community site makes you:

  • re-enter all your personal profile info (name, email, birthday, URL etc.)?
  • re-add all your friends?

In addition, why do you have to:

  • re-turn off notifications?
  • re-specify privacy preferences?
  • re-block negative people?

AKA “social network fatigue problem” and “social network update/maintenance problem”.

I’ve yet to be convinced that this is a problem that the “rest of the world” beyond social geeks is suffering from, but I do think that the situation can be greatly improved, even for folks who are used to abandoning their profiles when they forget their passwords. For one thing, the world today is too network-centric, and not person-centric. While I do think people should be able to take on multiple personas online (professional, casual, hobby, family, and so on), I don’t think that means that they should have those boundaries set for them by the networks they join. Instead, they should maintain their multiple personas as separate identifiers: email addresses, IM addresses and/or profile URLs (i.e. OpenIDs). This allows for handy separation based on the way people already materialize themselves online. Projects like NoseRub, and even the smaller additions of off-site identifiers on sites like Digg, Twitter and Pownce, also acknowledge that members think of themselves as being more faceted than a single URL indicates.
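To make the “multiple personas as separate identifiers” idea concrete, here’s a tiny illustrative data model. None of this corresponds to a real service’s schema; it just shows identifiers hanging off of personas rather than off of a single network account:

```typescript
// Illustrative only: a person as the first-class object, with several
// personas, each keyed by the identifiers it materializes as online.
interface Persona {
  label: string;   // "professional", "hobby", ...
  openid?: string; // a profile URL the person controls
  email?: string;
  im?: string;
}

interface Person {
  personas: Persona[];
}

// A service asks for a persona rather than "the" account: the person
// decides which facet of themselves a given site gets to see.
function personaFor(person: Person, label: string): Persona | undefined {
  return person.personas.find((p) => p.label === label);
}

const me: Person = {
  personas: [
    {
      label: "professional",
      openid: "https://work.example.com/jane",
      email: "jane@work.example.com",
    },
    { label: "hobby", openid: "https://photos.example.org/jane" },
  ],
};

console.log(personaFor(me, "hobby"));
```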

This is a good thing. And this is where social computing needs to go.

We need to stop building independent spider webs of sticky, siloed social activity. We need to stop fighting the nature of the web and embrace the design of uniform resource identifiers for people. We need to have a user agent that actually understands what it means to be a person online: a person with friends, with contacts, with enemies, with multiple personas and surfaces and ambitions. These user agents of the social web need to understand that, though we live in many distinct places on the web and interact with many different services, we as people still have one unified viewport through which we understand the world.

Until social networks understand this reality and start to adapt to it, the problem that Dave is describing is only going to continue to get worse for more and more people until truly, the problem of social network fatigue will spread beyond social geeks and start cutting into the bottom lines of companies that rely on the regularity of “sticky eyeballs” showing up.

While I will always and continue to bet on the open web, we’re reaching an inflection point where some fundamental conceptions of the web (and social networks) need to change. Fortunately, if us geeks have our way, it’ll probably be for the general betterment of the whole thing.