Privacy, publicity and open data

Intelligence deputy to America: Rethink privacy - CNN.com

This one should be a quickie.

A fascinating article came out of CNN today: “Intelligence deputy to America: Rethink privacy“.

This is a topic I’ve had opinions about for some time. My somewhat pessimistic view is that privacy is an illusion, and that more and more historic vestiges of so-called privacy are slipping through our fingers with the advent of increasingly ubiquitous and promiscuous technologies, the results of which are not all necessarily bad (take a look at just how captivating the Facebook Newsfeed is!).

Still, the more reading I’ve been doing lately about international issues and conflict, the more I agree with Danny Weitzner that there needs to be a robust dialogue about what it means to live in a post-privacy era, and what demands we must place on those companies, governments and institutions that store data about us, about the habits to which we’re prone and about the friends we keep. He sums up the conversation space thus:

Privacy is not lost simply because people find these services useful and start sharing location. Privacy could be lost if we don’t start to figure out what the rules are for how this sort of location data can be used. We’ve got to make progress in two areas:

  • technical: how can users’ sharing and usage preferences be easily communicated to and acted upon by others? Suppose I share my location with a friend but don’t want my employer to know it. What happens when my friend, intentionally or accidentally, shares a social location map with my employer or with the public at large? How would my friend know that this is contrary to the way I want my location data used? What sorts of technologies and standards are needed to allow location data to be freely shared while respecting users’ usage-limitation requirements?
  • legal: what sort of limits ought there to be on the use of location data?
      • can employers require employees to disclose real-time location data?
      • is there any difference between real-time and historical location data traces? (I doubt it)
      • under what conditions can the government get location data?

There’s clearly a lot to think about with these new services. I hope that we can approach this from the perspective that lots of location data will be flowing around and realize that the big challenge is to develop social, technical and legal tools to be sure that it is not misused.
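To make Weitzner’s “technical” point a bit more concrete, here’s a hypothetical sketch (every field name here is invented for illustration, not drawn from any spec) of location data carrying machine-readable usage limitations that a receiving service could check before re-sharing:

```python
# Hypothetical sketch: a location record tagged with machine-readable
# usage-limitation metadata, so downstream consumers can check intent
# before re-sharing. All field names are invented for illustration.

location_share = {
    "subject": "chris@example.com",
    "lat": 37.7749,
    "lon": -122.4194,
    "timestamp": "2007-11-13T09:30:00Z",
    "usage": {
        "share_with": ["friends"],       # audiences allowed to see this
        "deny": ["employer", "public"],  # audiences explicitly excluded
        "retain_days": 7,                # how long it may be stored
    },
}

def may_share(record, audience):
    """Return True only if the audience is allowed and not denied."""
    usage = record["usage"]
    return audience in usage["share_with"] and audience not in usage["deny"]
```

The point isn’t this particular schema; it’s that the usage preferences travel with the data, so my friend’s mapping service has something to consult before it shows my location to my employer.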

I want to bring some attention to his first point about the technical issues surrounding New Privacy. This is the realm where we play, and this is the realm where we have the most to offer. This is also an area that’s the most contentious and in need of aggressive policies and leadership, because the old investment model that treats silos of data as gold mines has to end.

I think Tim O’Reilly is really talking about this when he lambasts Google’s OpenSocial, proclaiming, “It’s the data, stupid!” The problem of course is what open data actually means in the context of user control and ownership, in terms of “licensing” and in terms of proliferation. These are not new problems for technologists as permissioning dates back to the earliest operating systems, but the problem becomes infinitely complex now that it’s been unbounded and non-technologists are starting to realize a) how many groups have been collecting data about them and b) how much collusion is going on to analyze said data. (Yeah, those discounts that that Safeway card gets you make a lot more money for Safeway than they save you, you better believe it!)

With Donald Kerr, the principal deputy director of national intelligence, taking an equally pessimistic (or apocalyptic) attitude about privacy, I think there needs to be a broader, eyes-wide-open look at who has what data about whom, what they’re doing with it and — perhaps more importantly — how the people about whom the data is being collected can get in on the game and get access to this data, the same way you’re guaranteed access to your credit report and the ability to dispute it. The same should be true for web services, the government and anyone else who’s been monitoring you, even if you’ve been sharing that information with them willingly. In another post, I talked about the value of this data, calling it “Data Capital”. People need to realize the massive amount of value that their data adds to the bottom line of so many major corporations (not to mention Web 2.0 startups!) and demand ongoing and persistent access to it. Hell, it might even result in better or more accurate data being stored in these mega-databases!

Regardless, when representatives from the government start to say things like:

“Those two generations younger than we are have a very different idea of what is essential privacy, what they would wish to protect about their lives and affairs. And so, it’s not for us to inflict one size fits all,” said Kerr, 68. “Protecting anonymity isn’t a fight that can be won. Anyone that’s typed in their name on Google understands that.”

“Our job now is to engage in a productive debate, which focuses on privacy as a component of appropriate levels of security and public safety,” Kerr said. “I think all of us have to really take stock of what we already are willing to give up, in terms of anonymity, but [also] what safeguards we want in place to be sure that giving that doesn’t empty our bank account or do something equally bad elsewhere.”

…you know that it’s time we started framing the debate on our own terms… thinking about what this means to the Citizen Centric Web and about how we want to become the gatekeepers for the data that is both rightfully ours and that should willfully be put into the service of our own needs and priorities.

hCard for OpenID Simple Registration and Attribute Exchange

Microformats + OpenID

One of the many promises of OpenID is the ability to more easily federate profile information from an identity provider to secondary consumer applications. A simple example of this is when you fill out a basic personal profile on a new social network: if you’ve ever typed your name, birthday, address and your dog’s name into a form, why on earth would you ever have to type it again? Isn’t that what computers are for? Yeah, I thought so too, and that’s why OpenID — your friendly neighborhood identity protocol — has two competing solutions for this problem.

Unfortunately, both Simple Registration (SREG for short) and its potential successor Attribute Exchange (AX) go to strange lengths to reinvent the profile data wheel with their own exclusive formats. I’d like to propose a vast simplification of this problem by adopting attributes from the vCard standard. As you probably know, it’s this standard that defines the attributes of the hCard microformat, and it is therefore exceptionally appropriate for the case of OpenID (given its reliance on URLs for identifiers!).

Very quickly, I’ll compare the attributes in each format so you get a sense for what data I’m talking about and how each format specifies the data:

| Attribute   | SREG                 | AX                      | vCard                             |
|-------------|----------------------|-------------------------|-----------------------------------|
| Nickname    | openid.sreg.nickname | namePerson/friendly     | nickname                          |
| Full Name   | openid.sreg.fullname | namePerson              | n (family-name, given-name) or fn |
| Birthday    | openid.sreg.dob      | birthDate               | bday                              |
| Gender      | openid.sreg.gender   | gender                  | gender                            |
| Email       | openid.sreg.email    | contact/email           | email                             |
| Postal Code | openid.sreg.postcode | contact/postalCode/home | postal-code                       |
| Country     | openid.sreg.country  | contact/country/home    | country-name                      |
| Language    | openid.sreg.language | pref/language           | N/A                               |
| Timezone    | openid.sreg.timezone | pref/timezone           | tz                                |
| Photo       | N/A                  | media/image/default     | photo                             |
| Company     | N/A                  | company/name            | org                               |
| Biography   | N/A                  | media/biography         | note                              |
| URL         | N/A                  | contact/web/default     | url                               |

Now, there are a number of other attributes beyond what I’ve covered here that AX and vCard define, often overlapping in the data being described but not in attribute names. Of course, this creates a pain for developers who want to do the right thing and import profile data rather than having folks repeat themselves yet again. Do you support SREG? AX? hCard? All of the above?
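As a sketch of what “doing the right thing” might look like, a consumer could normalize whichever format it receives into hCard/vCard property names, following the table above. This mapping is my own illustration, not part of any spec:

```python
# A sketch of normalizing an OpenID Simple Registration (SREG) response
# into hCard/vCard property names, per the mapping table above.

SREG_TO_HCARD = {
    "openid.sreg.nickname": "nickname",
    "openid.sreg.fullname": "fn",
    "openid.sreg.dob": "bday",
    "openid.sreg.gender": "gender",
    "openid.sreg.email": "email",
    "openid.sreg.postcode": "postal-code",
    "openid.sreg.country": "country-name",
    "openid.sreg.timezone": "tz",
}

def sreg_to_hcard(response):
    """Translate SREG key/value pairs into hCard-named fields."""
    return {SREG_TO_HCARD[k]: v
            for k, v in response.items() if k in SREG_TO_HCARD}

profile = sreg_to_hcard({
    "openid.sreg.nickname": "factoryjoe",
    "openid.sreg.email": "chris@example.com",
    "openid.sreg.language": "en",  # no vCard equivalent; dropped
})
```

Once everything lands in one vocabulary, it doesn’t matter so much which of the three formats the identity provider happened to speak.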

Well, regular people don’t care, and shouldn’t care, but that doesn’t mean this is a trivial matter.

Instead, the barriers to doing the right thing should be made as low as possible. Given all the work that we’ve done promoting microformats and their subsequent widespread adoption (I’d venture that there are many million hCards in the wild), it only makes sense to leverage the upward trend in the adoption of hCard and to use it as a parallel partner in the OpenID-profile-exchange dance.

While AX is well specified, hCard is well supported (as well as well specified). While AX is extensible, hCard is too. Most importantly, however, hCard and vCard are already widely supported in existing applications like Outlook, Mail.app and just about every mobile phone on the planet, and in just about any other application that exchanges profile data.

So the question is, when OpenID is clearly a player in the future and part of that promise is about making things easier, more consistent and more citizen-centric, why would we go and introduce a whole new format for representing person-detail-data when a perfectly good one already exists and is so widely supported?

I’m hoping that reason and sense will prevail and that OpenID and hCard will naturally come together as two essential building blocks for independents. If you’re interested in this issue, please blog about it or express your support to specs@openid.net. I think the folks working on this spec need to hear from folks who make, create, build or design websites and who recognize the importance and validity of betting on, supporting and adopting microformats.

OpenSocial and Address Book 2.0: Putting People into the Protocol

I wonder if Tim O’Reilly knows something that he’s not telling the rest of us. Or maybe he knows something that the rest of us know, but that we haven’t been able to articulate yet. Who knows.

In any case, he’s been going on about this “Address Book 2.0” for a while, and if you ask me, it has a lot to do with Google’s upcoming announcement of a set of protocols, formats and technologies they’ve dubbed OpenSocial.

[Aside: I’ll just point out that I like the fact that the name has “open” in it (even if “open” will be the catchphrase that replaces the Web 2.0 meme) because it means that in order to play along, you have to be some shade of open. I mean, if Web 2.0 was all about having lots of bits and parts all over the place and throwing them together just-in-time in the context of a “social” experience, then being “open” will be what separates those who are caught up and have been playing along from those who have been asleep at the wheel for the past four years. Being “open” (or the “most” open) is the next logical stage of the game, where being anything other than open will be met with a sudden and painless death. This is a good thing™ for the web, but remember that we’re in the infancy of the roll-out here, and mistakes (or brilliant insights) will define what kind of apps we’re building (or able to build) for the next 10 years.]

Let me center the context here. A few days ago, I wrote about putting people in the protocol. I was talking about another evolution that will come alongside the rush to be open (I should note that “open” is an ongoing process, not an endpoint in and of itself). This evolution will be painful for those who resist but will bring great advantage to those who embrace it. It’s pretty simple and if you ask me, it lies at the heart of Tim’s Address Book 2.0 and Google’s OpenSocial; in a word, it’s people.

Before I get into that, let me just point out what this is not about. Fortunately, in his assessment of “What to Look for from Google’s Social Networking Platform“, David Card at Jupiter Research spelled it out in blindingly incorrect terms:

As an analyst who used to have the word “Unix” on his business card, I’ve seen a lot of “open” “consortia” fail miserably. Regular readers know my Rule of Partnership: For a deal to be important, two of the following three must occur:

– Money must change hands
– There must be exclusivity
– Product must ship

“Open” “consortia” aren’t deals. That’s one of the reasons they fail. The key here would be “Product must ship.”

This completely misses the point. This is why the first bubble was so lame. So many people had third-world capital in their heads and missed what’s new: the development, accumulation and exchange of first-world social capital through human networks.

Now, the big thing that’s changed (or is changing) is the emphasis on the individual and her role across the system. Look at MyBlogLog. Look at Automattic’s purchase of Gravatar. Look at the sharp rise in OpenID adoption over the past two years. The future is in non-siloed living, man! The future is in portable, independent identities valid, like Visa, everywhere that you want to be. It’s not just about social network fatigue and getting fed up with filling out profiles at every social network you join and re-adding all your friends. Yeah, those things are annoying, but more importantly, the fact that you have to do it every time just to get basic value from each system means that each has been designed to benefit itself, rather than the individuals coming and going. The whole damn thing needs to be inverted, and like recently rejoined ant segments dumped from many an ant farm, the fractured, divided, shattered-into-a-billion-fragments people of the web must rejoin themselves and become whole in the eyes of the services that — what else? — serve them!

Imagine this: imagine designing a web service where you don’t store permanent records of facets of people, but instead simply build services that serve people. It’s no longer even in your best interest to store data about people long term, because the data ages so rapidly that it’s next to useless to try to keep up with it. Instead, it’s about looking across the data that someone makes transactionally available to you (for a split second) and offering up the best service given what you’ve observed when similar fingerprint-profiles have come to your system in the past. It’s not so much about owning or storing Address Book 2.0 as being ready when all the people that populate the decentralized Address Book 2.0 concept come knocking at your door. Are you going to be ready to serve them immediately, or will you ask them to fill out yet another profile form?

Maybe I’m not being entirely clear here. Admittedly, these are rough thoughts in my head right now and I’m not really self-editing. Forgive me.

But I think that it’s important to say something before the big official announcement comes down, so that we can pre-contextualize this and realize the shift that’s happening even as the hammer drops.

Look, if Google and a bunch of chummy chums are going to make available a whole slew of “social graph” material, we had better start realizing what this means. And we had better start realizing the value that our data and our social capital have in this new ecosystem. Forget page views. Forget sticky eyeballs. With OpenID, with OAuth, with microformats, with, yes, FOAF and other formats — hell, with plain ol’ scrapable HTML! — Google and co. will be amassing a social graph the likes of which has yet to be seen or twiddled upon (that’s a technical term). It means that we’re [finally] moving towards a citizen-centric web, and it means great things for the web. It means that things are going to get interesting for Facebook, for MySpace, for Microsoft, for Yahoo! (who recently closed 360, btw!). And y’know, I don’t know what Spiderman would think of OpenSocial or of what else is coming down the pipe, but I’m sure in any case he’d caution that with great power comes great responsibility.

I’m certainly excited about this, but it’s not all about Google. Moreover, OpenSocial is simply an acknowledgment that things have to (and have) change(d). What comes next is anyone’s guess, but as far as I’m concerned, Tim’s been more or less in the ballpark so far; it just won’t necessarily be about owning Address Book 2.0 so much as about what it means when it’s taken for granted as a basic building block in the vast clockwork of the open social web.

Site-specific browsers and GreaseKit

GreaseKit - Manage Applications

There’s a general class of applications that’s been gaining some traction lately in the Mozilla communities, built on a free-standing application framework.

The idea is simple: take a browser, cut out the tabs, the URL bar and all the rest of the window chrome and instead load one website at a time. Hence the colloquial name: “site specific browser”.

Michael McCracken deserves credit for inspiring interest in this idea early on with his WebMail app, which in turn led to Ben Willmore’s Gmail Browser application. Both literally took WebKit — Apple’s open source rendering engine (the counterpart of Mozilla’s Gecko) — had it load Gmail.com, and released their apps. It doesn’t get much more straightforward than that.

For my own part, I’ve been chronicling the development of Site-Specific Browsers from the beginning, setting up the WebKit PBWiki and releasing a couple of my own apps, most recently Diet Pibb. I have a strong belief that a full featured rendering engine coupled with a few client side tweaks is the future of browsers and web apps. You can see hints of this in Dashboard Widgets and Adobe’s AIR framework already, though the former’s launching flow conflicts with the traditional “click an icon in my dock to launch an application” design pattern.

Anyway, in developing my Site-Specific Browsers (or Desktop Web Apps?), I became enamored with an input manager for Safari called Creammonkey that allows you to run Greasemonkey scripts inside of Safari (ignore the name — Kato, the developer, is Japanese and English is his second language). An input manager works by matching a particular application’s .plist identifier and injecting its code (via SIMBL) into the application at run-time, effectively becoming part of the host application, in turn giving it access to all the application’s inner workings. When it comes to a rendering engine, it’s this kind of injection that allows you to then inject your own CSS or Javascript into a webpage, allowing you to make whatever modifications you want.
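Injection gets the code into the host application; the other half of the job is deciding which scripts run on which pages. Greasemonkey-style scripts declare @include/@exclude URL patterns, and the matching logic looks roughly like this sketch (written in Python purely for illustration — Creammonkey itself is an Objective-C input manager):

```python
# Rough sketch of Greasemonkey-style @include/@exclude matching,
# using shell-style globs. Illustrative only; not Creammonkey's code.

from fnmatch import fnmatch

def script_applies(url, includes, excludes=()):
    """True if the URL matches an @include and no @exclude pattern."""
    if any(fnmatch(url, pat) for pat in excludes):
        return False
    return any(fnmatch(url, pat) for pat in includes)
```

A manager keeps a list of installed scripts, and on every page load runs each script whose patterns match the current URL.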

This is what Creammonkey did for Safari. And thank god it was open source or else we never would have ended up with today’s release of the successor to Creammonkey called GreaseKit.

Let me step back a little.

When I found out about Creammonkey, I contacted Kato Kazuyoshi, the developer, and told him how excited I was about what he had created.

“But could I use this on Site-Specific Browsers?” I wanted to know.

In broken English he expressed his uncertainty and so I went about hacking it myself.

I ended up with a crude solution where I would recompile Creammonkey and aim it at a different application every time I wanted to make use of a different Greasemonkey script. It was tedious and didn’t really work the way I envisioned, but given my meager programming skills, it demonstrated the idea of Site-Specific Browsers with Site-Specific Hacks.

I called this collection of site-specific scripts with a properly crafted Input Manager a “GreaseKit” and let the idea sit for a while.

Some time later, Kato got in touch with me and we talked about rereleasing Creammonkey with the functionality that I envisioned and a new name. Today he released the results of that work and called it GreaseKit.

I can’t really express how excited I am about this. I suspect that the significance of this development probably won’t shake the foundations of the web, but it’s pretty huge.

For one thing, I’ve found comparable solutions (Web Runner) clunky and hard to use. In contrast, I’m able to create stand-alone WebKit apps in under 2 minutes with native Apple menus and all the fixins using a template that Josh Peek made. No, these apps aren’t cross-platform (yet), but what I lose in spanning the Linux/PC divide, I gain in the use of Xcode, Apple’s development environment, and frankly a faster rendering engine. And, as of today, the use of Greasemonkey scripts centrally managed for every WebKit app I develop.

These apps are so easy to make and so frigging useful that people are actually building businesses on them. Consider Mailplane, Ruben Bakker’s Gmail app. It’s only increments better than McCracken’s WebMail or Willmore’s Gmail Browser, but he’s still able to charge $25 for it (which I paid happily). Now, with GreaseKit in the wild, I can add all my favorite Greasemonkey scripts to Mailplane — just like I might have with Firefox — but if Gmail causes a browser crash, I only lose Mailplane, rather than my whole browser and all the tabs I have open. Not to mention that I can command-tab to Mailplane like I can with Mail.app… and I can drag and drop a file on to Mailplane’s dock icon to compose a new email with an attachment. Just add offline storage (inevitable now that WebKit supports HTML5 client-side database storage) and you’ve basically got the best of desktop, web and user-scriptable applications all in one lightweight package. I’ll have more to write on this soon, but for now, give GreaseKit a whirl (or check the source) and let me know what you think.

Did the web fail the iPhone?

Twitter / Ian McKellar: @factoryjoe, wait, so all these "web apps" people have invested time and money in are now second-class applications?

Ian might be right, but not because of Steve’s announcement today about opening up the iPhone.

Indeed, my reaction so far has been one of quasi-resignation and disappointment.

A voice inside me whimpers, “Don’t give up on the web, Steve! Not yet!”

You have to understand that when I got involved in helping to plan iPhoneDevCamp, we didn’t call it iPhoneWebDevCamp for a reason. As far as we knew, and as far as we could see into the immediate future, the web was the platform of the iPhone (Steve Jobs even famously called Safari the iPhone’s SDK).

The hope that we were turning the corner on desktop-based applications was palpable. By keeping the platform officially closed, Apple brought about a collective channeling of energy towards the development of efficient and elegant web interfaces for Safari, epitomized by Joe Hewitt’s iPhone Facebook app (started as a project around iPhoneDevCamp and now continued on by Christopher Allen).

And we were just getting started.

…So the questions on my mind today are: was this the plan all along? Or, was Steve forced into action by outside factors?

If this were the case all along, I’d be getting pretty fed up with these kinds of costly and duplicitous shenanigans. For godsake, Steve could at least afford to stop being so contradictory! First he lowers the price of the iPhone months after releasing it, then drops the price of DRM-free tracks (after charging people to “upgrade their music”), and now he’s promising a software SDK in February, pledging that an “open” platform “is a step in the right direction” (after bricking people’s phones and launching an iPhone Web Apps directory, seemingly in faux support of iPhone web app developers).

Now, if this weren’t in the plan all along, then Apple looks like a victim of the promise — and hype — of the web as platform. (I’ll entertain this notion, while keeping in mind that Apple rarely changes direction due to outside influence, especially on product strategy.)

If everything Steve said during his keynote were true and he (and folks at Apple) really did believe that the web was the platform of the future — most importantly, the platform of Apple’s future — then this kind of reversal would have to be pretty disappointing inside Apple as well. Especially considering their cushy arrangement with Google and the unlikelihood that Mac hardware will ever outsell PCs (so long as Apple has the exclusive right to produce Mac hardware), it makes sense that Apple sees its future in a virtualized, connected world, where its apps, its content and its business are made online and in selling thin clients, rather than in the kind of business where Microsoft made its billions, selling dumb boxes and expiring licenses to the software that ran on them.

If you actually read Apple’s guide for iPhone content and application development, you’d have to believe that they get the web when they call for:

  • Understanding User-iPhone Interaction
  • Using Standards and Tried-and-True Design Practices
  • Integrating with Phone, Mail, and Maps
  • Optimizing for Page Readability
  • Ensuring a Great Audio and Video Experience (while Flash is not supported)

These aren’t the marks of a company that is trying to embrace and extend the web into its own proprietary nutshell. Heck, they even support microformats in their product reviews. Given all the rhetoric so far, it seems they want so badly for the web — the open web — to succeed. Why backslide now?

Well, to get back to the title of this post, I can’t help but feel like the web failed the iPhone.

For one thing, native apps are a known quantity for developers. There are plenty of tools for developing native applications and interfaces that don’t require you to learn some arcane layout language that doesn’t even have the concept of “columns”. You don’t need to worry about setting up servers and hosting and availability and all the headaches of running web apps. And without offering “services in the cloud” to make web application hosting and serving a piece of cake, Apple kind of shot itself in the foot with its developers who, again, aren’t so keen on the ways of the web.

Flipped around, as a proponent of the web, even I can admit how unexciting standard interfaces on the web are, and how much work and knowledge it takes to compete with the likes of Adobe’s AIR and Microsoft’s Silverlight. I mean, we non-proprietary web types rejoice when Safari gets support for CSS-based rounded corners and the ability to use non-standard typefaces. SRSLY? The latter feature was specified in 1998! What took so long?!

No wonder native app developers aren’t crazy about web development for the iPhone. Why should they be? At least considering where we’re at today, there’s a lot to despise about modern web design and to despair about how little things have improved in the last 10 years.

And yet, there’s a lot to love too, but not the kind of stuff that makes iPhone developers want to abandon what’s familiar, comfortable, safe, accessible and hell, sexy.

It’s true, for example, that with the web you get massive distribution. It means you don’t need a framework like Sparkle to keep your apps up-to-date. You can localize your app in as many languages as you like and, based on your web stats, get a sense for which languages to prioritize. With protocols like OpenID and OAuth, you get access to all kinds of data that won’t be available solely on a user’s system (especially when it comes to the iPhone, which dispenses with “Save” functionality), as well as a way to uniquely identify your customers across applications. And you get the heightened probability that someone might come along and look to integrate with or add value to your service via some kind of API, without requiring any additional download to the user’s system. And the benefits go on. But you get the point.

Even still, these benefits weren’t enough to sway iPhone developers, nor, apparently, Steve Jobs. And to the degree that the web is lacking in features and functionality that would have allowed Steve to hold off a little longer, there is opportunity to improve and expand upon what I call the collection of “web primitives” that compose the complete palette of interaction options for developers who call the web their native platform. The simple form controls, the lightboxes, the static embedded video and audio, the moo tools and scriptaculouses… they still don’t stack up against native (read: proprietary) interface controls. And we can do better.

We must do better! We need to improve what’s going on inside the browser frame, not just around it. It’s not enough to make a JavaScript compiler faster or even to add support for SVG (though it helps). We need to define, design and construct new primitives for the web that make it super simple, straightforward and extremely satisfying to develop for the web. I don’t know how it is that web developers have for so long put up with the frustrations and idiosyncrasies of web application development. And I guess, as far as the iPhone goes, they won’t have to anymore.

It’s a shame really. We could have done so much together. The web and the iPhone, that is. We could have made such sweet music. When folks eventually realize that Steve was right and that developing for Safari is the future of application development, they’ll wish they had invested in and lobbied for richer and better tools and interfaces for what will inevitably become the future of rich internet application development and, no surprise, the future of the iPhone and all its kin.

Data capital, or: data as common tender

Wikipedia states that legal tender is payment that, by law, cannot be refused in settlement of a debt denominated in the same currency. Currency, in turn, is a unit of exchange, facilitating the transfer of goods and/or services.

I was asked a question earlier today about the relative value of open services against open data served in open, non-proprietary data formats. It got me thinking whether — in the pursuit of utter openness in web services and portability in stored data — that’s the right question. Are we providing the right incentives for people and companies to go open? Is it self-fulfilling or manifest destiny to arrive at a state of universal identity and service portability leading to unfettered consumer choice? Is this how we achieve VRM nirvana, or is there something missing in our assumptions and current analysis?

Mary Jo Foley touched on this topic today in a post called Are all ‘open’ Web platforms created equal? She asks whether Microsoft’s PC-driven worldview can be modernized to compete in the network-centric world of Web 2.0, which no single player dominates and which is instead made up of best-of-breed APIs and services from across the web. The question she alludes to is a poignant one: even if you go open (and Microsoft has, by any estimation), will anyone care? Even if you dress up your data and jump through hoops to please developers, will they actually take advantage of what you have to offer? Or is there something else to the equation that we’re missing? Some underlying truism that is simply refracting falsely in light of the newfound sexiness of “going open”?

We often tell our clients that one of the first things you can do to “open up” is build out an API, support microformats, adopt OpenID and OAuth. But that’s just the start. That’s just good data hygiene. That’s brushing your teeth once a day. That’s making sure your teeth don’t fall out of your head.
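To give a sense of how low that bar is: publishing an hCard is mostly a matter of wrapping profile fields you already render in the class names the hCard spec defines (vcard, fn, email, url). A hypothetical helper, invented here for illustration:

```python
# Hypothetical helper: wrap an existing profile in hCard markup.
# The class names (vcard, fn, email, url) come from the hCard spec;
# the function itself is invented for illustration.

def to_hcard(name, email, url):
    return (
        '<div class="vcard">'
        '<a class="fn url" href="{url}">{name}</a> '
        '<a class="email" href="mailto:{email}">{email}</a>'
        '</div>'
    ).format(name=name, email=email, url=url)
```

That’s the “brushing your teeth” level of effort: no new storage, no new protocol, just class names layered onto markup you were going to emit anyway.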

There’s a broader method to this madness, but unfortunately it’s a rare opportunity when we actually get beyond just brushing our teeth and really get to sink them in — going beyond remedial steps like adding microformats to web pages to crafting just-in-time, distributed, open-data-driven web applications that actually do stuff and make things better. But as I said, it’s a rare occasion for us, because we’ve all been asking the wrong questions, providing the wrong incentives and designing solutions from the perspective of the silos instead of from the perspective of the people.

Let me make a point here: if your data were legal tender, you could take it anywhere with you and it couldn’t be refused if you offered to pay with it.

Let me break that down a bit. The way things are today, we give away our data freely and frequently, in exchange for the use of certain services. Now, in some cases, like Pandora or Last.fm, the use of the service itself is compelling and worthwhile, providing an equal or greater exchange rate for our behavior or taste data. In many other cases, we sign up for a service and provide basic demographic data without any sense of what we’re going to get in return, often leaving scraps of ourselves to fester all across the internet. Why do we value this data so little? Why do we give it away so freely?

While researching legal tender today, I learned of an interesting concept called “Gresham’s Law,” commonly stated as: when there is a legal tender currency, bad money drives good money out of circulation.

Don’t worry, it took me a while to get it too. Nicolas Nelson offered the following clarification: if high quality and low quality are forced to be treated equally, then folks will keep good quality things to themselves and use low quality things to exchange for more good stuff.

Think about this in terms of data: if people are forced (or tricked) into thinking that the data they enter into web applications is not being valued (or protected) by the sites that collect it, eventually they’ll either stop entering the data (heard of social network fatigue?) or they’ll start filling in bogus information. That leads to “bad data” driving the “good data” out of the system, and ultimately to a kind of data inflation, where suddenly the problem is no longer getting people to sign up for your service, but getting them to also provide good data of some value. And this is where data portability (data as legal tender) starts to become interesting, and where we can begin to see through the distortion of the refraction.

Think: Data as currency. Data to unlock services. Data owned, controlled, exchanged and traded by the creator of said data, instead of by the networks he has joined. For the current glut of web applications to be maintained and sustained, we must move to a system where people are in charge of their data, where they garden and maintain it, and where they are free to deposit and withdraw it from web services just as people deposit and withdraw money from banks.

If you want to think about what comes next, about what the proverbial “Web 3.0” is all about, it’s not just a bunch of web applications hooked up with protocols like OAuth, speaking microformats and other open data tongues back and forth to each other. That’s the obvious part. The change comes when a person is in control of her data, and when the services that she uses firmly believe not only that she has a right to do as she pleases with her data, but that it is in their best interest to spit her data out in whatever myriad formats she demands and to whichever myriad services she wishes.

The “data web” is still a number of years off, but it is rapidly approaching. It does require that the silos popular today open up and transition from repositories to transactional enterprises. Once data becomes a kind of common tender, you no longer need to lock it up; in fact, the value comes from its reuse and circulation in commerce.

To some degree, Mint and Wesabe are doing this retroactively for your banking records, allowing you to add “data value” to your monetary transactions. Next up, Google and Microsoft will do this for your health records. For a more generic example, Swivel is doing this today for the OECD and has a private edition coming soon. Slife/Slifeshare, i use this and RescueTime do this for your use of desktop apps.

This isn’t just attention data that I’m talking about (though the recent announcements in support of APML are certainly positive). This goes beyond monitoring what you’re doing and how you’re spending your time. I’m talking about access to all the data that it would take to reconstitute your entire digital existence. And then I’m talking about the ability to slice, dice, and splice it however you like, in pursuit of whatever ends you choose. Or choose not to.


I’ll point to a few references that influenced my thinking: Social Capital To Show Its Worth at This Week’s Web 2.0 Summit, What is Web 2.0?, Tangled Up in the Future – Lessig and Lietaer, Intentional Economics Day 1, Day 2, Day 3.

So Mozilla wants to go mobile, eh?

As with baseball, on the web we have our home teams and our underdogs and our all-stars; we have our upsets, our defeats, and our glorious wins in the bottom of the ninth. And though I’m actually not much of a baseball fan anymore (though growing up in New England, I was exposed to plenty of Red Sox fever), I do relate my feelings for Mozilla to the way a lot of folks felt about the Red Sox before they finally won the World Series and broke the Curse of the Bambino: that is, I identify with Mozilla as my team, but dammit if they don’t frustrate me on occasion.

Tara wonders why I spend so much time on Mozilla when clearly I’m a perennial critic of the direction they’re headed in and the decisions that they make. But then Tara also didn’t grow up around vocal critics of the Red Sox who expressed their dedication and patronage to the team through their constant criticism and anger. It might not make sense, and it might not seem worth my time, but whatever the case, you really can’t be neutral about Mozilla and still consider yourself a fan. Even if you disagree with every decision that they make, they’re still the home team of the Open Web and heck, even as you bitch and whine about this or about that, you really just want to see them do well, oftentimes in spite of themselves.

So, with that said, let me give you a superficial summary of what I think about Mozilla’s recent announcement of their mobile strategy:

If you want to stop reading now, you can, but the details and background of my reasoning might be somewhat interesting to you. I make no promises though.

Continue reading “So Mozilla wants to go mobile, eh?”

Theories about Google’s acquisition of Jaiku

I think it’s pretty interesting news that Google ended up buying Jaiku. Can’t say I saw it coming, and it’s unexpected enough to make things interesting for a bunch of us for whom such actions cause reverberations in our work and also give rise to fair speculation about why they did it and what it means.

So earlier I just so happened to be talking with Someone Who Shall Remain Unnamed, and he put my mind into circles about this thing. He pointed me to some of dotBen’s ideas about the acquisition. While I’m not really sure about his geographic assessment (given that my old roommate and coworker Andy Smith is coming back to San Francisco, which kind of blows the London-relocation theory), I do share his (and others’) concern that this acquisition could spell the demise of Jaiku in favor of other, more average-Joe services:

Did Google buy Jaiku for its engineering talent rather than for the product? I’m betting the engineering team is going to be siphoned off into the GPhone project.

At the same time, I think that Google’s play will likely continue to be software rather than hardware. Here’s why:

  • Google makes a lot of software, most of it hosted on the web.
  • Google doesn’t make any consumer hardware.
  • Google wouldn’t mind being in the wireless business, but probably not the handset business.
  • And, Apple already has a bunch of deals with the existing phone carriers.
  • Apple makes good consumer hardware.

Yes, those last two are about Apple. Because the deal between Google and Apple is sweetening every day. And Apple doesn’t really buy companies, Google does. So that’s why Google bought Jaiku today and not Apple.

But I digress. Let’s throw some more theories and extrapolations out there.

Ok, first and foremost, the browser. In one of my many unpublished drafts, I talk about how Steve was right about building Web 2.0 apps for the iPhone, and why he’s even kind of right in trying to prevent people from building so-called “native apps”. Sadly, folks don’t really think about the web as a “native” environment yet. It’s got crappy access to hardware and some pretty weak tools for building rich interfaces, so developers stick with what they know rather than embracing the rough-and-tumble world of designing and coding for web browsers. What’s there to love, really?

Still, Apple (and more so Google) already think of the web as their native turf, and that’s where the battle for the future will be waged (which incidentally puts Microsoft at a huge disadvantage, since it’s been fighting against the web all along). And the browser is key to this equation, which is why Apple has a world-class (and getting better daily) open source browser that’s had an awful lot of attention paid to its compatibility with Google Apps lately. It’s also mobile ready, as demonstrated by how insanely well Safari works on the iPhone compared to any other mobile browser. I guess to some extent, one can forgive any misgivings Google now has about betting on Firefox, the little engine that could have and should have and then, well, didn’t, at least when it came to having a mobile strategy. But I digress. Again.

Take a look at the iPhone’s Maps feature. Yes, it’s driven by Google, but I didn’t have to tell you that. What’s missing? Well, besides GPS for yourself, it’s missing GPS for your friends and family. Yeah, that sounds a little creepy to me too, but don’t kid yourself, it’s coming. We early adopters have been geoplotting each other for years now, on sites like Plazes, Jaiku and, oh yeah, before them, on another Google acquisition: Dodgeball. I mean, Google’s full of tenacity. It’s nutty to think that regular people will ever adopt the manual check-ins that we currently use with these services. After all, that’s what computers are for.

What Google keeps getting closer to, first with its acquisition of Dodgeball a year ago and now with Jaiku, is the socialization of presence. That’s what it’s all about. That’s how they’ll increase the relevance of their ads in situ. That’s how they’ll improve the quality of their services (like Google Maps). Tim O’Reilly has for some time called for the development of the Web 2.0 Address Book. I think underlying that assertion is the fact that more people need to get into the habit and mindset of maintaining their online selves, thereby making their status (or presence) more widely known and available.

The Web 2.0 Address Book isn’t really about how you connect to someone. It’s not really about having their home, work and secret lair addresses. It’s not about having access to their 15 different cell phone numbers that change depending on whether they’re home, at work, in the car, on a plane, in front of their computer and so on. It’s not about knowing the secret handshake and token-based smoke-signal that gains you direct access to send someone a guaranteed email that will bypass their moats of antispam protection. In the real world (outside of Silicon Valley), people want to type in the name of the recipient and hit send and have it reach the destination, in whatever means necessary, and in as appropriate a manner as possible. For this to happen, recipients need to provide a whole lot more information about themselves and their contexts to the system in order for this whole song and dance to work.

If you think far enough into the future, and realize that the iPhone is essentially the Sputnik of the next generation of computing and telephony, you’ll realize how important the development of presence technology will be in light of the 2.0 Address Book. Sure, the VoIP folks have known about this stuff forever, and Cisco even has a few decent products built on it, but I’m not talking about IP-routed phone calls. I’m talking about IP-routed people. Believe it or not, this is where things have to go in order for Google and Apple to continue their relentless drive towards ease-of-use and clarity of design.

In the future, you will buy a cellphone-like device. It will have a connection to the internet, no matter what. And it’ll probably be powered over the air. The device will be tradeable with your friends and will retain no solid-state memory. You literally could pick up a device on a park bench and log in with your OpenID (IP-routed people, right?) from any number of service providers (though the best ones will be provided by the credit card companies). Your user data will live in the cloud and be delivered in bursts via myriad APIs strung together and then authorized with OAuth to accomplish specific tasks as they manifest.

If you want to make a phone call, you call up the function on the touch screen; it’s all web-based, and looks and behaves natively. Your address book lives in Google-land on some server, not in the phone. You start typing someone’s name and not only does it pull the latest photos of the first five to ten people it matches, but it does so in a distributed fashion, plucking the data from across the web, grabbing the most up-to-date contact information, the person’s availability and their presence. It’s basically an IM-style buddy list for presence, and the data never grows old and never goes stale. Instead of just seeing someone’s inert photo when you bring up their record in your address book, you see all manner of social and presence data. Hell, you might even get a picture of their current location.

This is the lowercase semantic web in action, where the people who still hold on to their privacy will become increasingly marginalized through obfuscation and increasingly invisible to the network. I don’t have an answer for this or a moral judgement on it; it’s going to happen one way or another.

In any case, Apple will make these ultra-slim, literally stupid devices that don’t do anything but sport a web browser, a scratch-resistant surface and some RAM (and show off ads really crisply). And the devices that are popular today with 40, 60, 80GB will become our generation’s Edsel, once the future music device can access all music from all time, on demand, in crystal-clear streaming vinyl-quality. Oh, and we’ll have solved network latency by turning every single device with any kind of antenna into a network repeater. Scratch that, we’ll turn every human into a repeater. And yeah, we’ll have figured out the collision problems that might come with such a network as well.

Ok, all nice and fine. It’s coming and it’s not even that far off so I shouldn’t congratulate myself about being too prescient or anything.

But I do think that there’s a framing here that’s being missed in gawking over the acquisition of Jaiku.

In the scheme of things, it really doesn’t have anything to do with Twitter, other than that Twitter is a dumb network that happens to transport simple messages really well in a format like Jaiku’s, while Jaiku is a mobile service that happens to send dumb messages around to demonstrate what social presence on the phone could look like. These services are night-and-day different, and it’s no wonder that Google bought Jaiku and not Twitter. And hey, it’s not a contest either; it’s just that Twitter didn’t have anything to offer Google in terms of development for their mobile strategy. Twitter is made up of web people and is therefore a content strategy; Jaiku folks do web stuff pretty well, but they also do client stuff pretty well. I mean, Andy used to work at Flock on a web browser. Jyri used to work at Nokia on devices. Jaiku practically invented the social presence browser for the phone (though, I might add, Facebook snatched Parakey out of Google’s clutches, denying them Joe Hewitt, who built the first web-based presence app with the Facebook iPhone app). If anything, the nearest approximation to Jaiku is Plazes, which actually does the location-cum-presence thing and has mobile development experience.

That’s enough for now. I’ll stew on the Google-Apple-iPhone-gPhone-Presence-Jaiku-Dodgeball-Address Book-Maps connection a little more. But this should give you some fodder to contemplate in the meantime.

OAuth Core 1.0 Final Draft is out — now build stuff!

I should have pushed this post out yesterday, or no later than earlier today, but what are you gonna do, y’know?

Anyway, I spent most of the day in #OAuth with Eran, Gabe, Mark Atwood and others, plotting the release of the news about OAuth Core 1.0 Final Draft. This news comes after three previous public drafts, and culminates four months of collective effort: regular meetings, intense discussions and negotiations, and constantly reminding ourselves to keep the scope creep to an absolute minimum.

We overhauled the website, got the blog going (with an official press release!), some initial tumbles, added some content to the wiki and ended up with an excellent Getting Started guide and an Introduction provided by Eran.

All in all, I want to quickly point out the technologies that are making this effort possible with just a small group of admin-advocates:

Without these tools, and how far collaborative technology has come in just two years, it wouldn’t be possible for us to move as fast as we are and make as much progress as we have been, consistently, with as few person-hours as we have at our disposal.

Next steps

Of course, even with this Final Draft out, we’re only getting started. There are a number of things that still need to be done, and these are a few off the top of my head (I’m maintaining my personal list here if you want to help out):

  • Get the logo figured out. I’ve pinged like five guys about this. Why y’all gots ta be so busy! If you have an idea, please post it to the Flickr group. I’m patient on this one though.
  • Implement support for OAuth! If you can, replace your proprietary API access delegation protocol with OAuth. If you don’t code (like me!), add software that you think could benefit from OAuth to the Advocacy page.
  • Help improve the libraries. A bunch of folks have worked on the first open source implementations of OAuth but we need more testers, more implementors and more folks to get code running in the wild.
  • Spread the word! This is a grassroots-led effort, and if it’s going to be successful, it has to spread from the bottom up. Early reviews suggest that the spec is clear and easy to implement. If you can promote this open protocol within your organization and get it adopted, please do…! And then blog it and point me to it so I can tumbleblog it!
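If you’re wondering what implementing OAuth actually boils down to on the wire, the heart of the spec is the signature base string. Here’s a rough, stdlib-only Python sketch of HMAC-SHA1 signing, with parameter values echoing the example in the spec’s appendix; a real library handles encoding edge cases this sketch glosses over:

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(method, url, params, consumer_secret, token_secret=""):
    """Build the OAuth Core 1.0 signature base string and HMAC-SHA1 sign it."""
    enc = lambda s: urllib.parse.quote(str(s), safe="")
    # 1. Normalize parameters: sort by name, percent-encode, join with '&'.
    normalized = "&".join(f"{enc(k)}={enc(v)}" for k, v in sorted(params.items()))
    # 2. Base string: METHOD & encoded-URL & encoded-normalized-parameters.
    base_string = "&".join([method.upper(), enc(url), enc(normalized)])
    # 3. Signing key: consumer secret and token secret, joined by '&'.
    key = f"{enc(consumer_secret)}&{enc(token_secret)}"
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Request to a hypothetical photo API (oauth_signature itself is excluded):
params = {
    "oauth_consumer_key": "dpf43f3p2l4k3l03",
    "oauth_token": "nnch734d00sl2jdk",
    "oauth_signature_method": "HMAC-SHA1",
    "oauth_timestamp": "1191242096",
    "oauth_nonce": "kllo9940pd9333jh",
    "oauth_version": "1.0",
    "file": "vacation.jpg",
    "size": "original",
}
sig = sign_request("GET", "http://photos.example.net/photos", params,
                   "kd94hf93k423kf44", "pfkkdhi9sl3r4s00")
print(sig)  # a deterministic, base64-encoded HMAC-SHA1 signature
```

The signature then travels with the request as the oauth_signature parameter, and the service provider recomputes the same base string to verify it.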

I’m sure there’s more that needs to be done, but this is what I got now (yes Gabe, we still have to deal with IPR… I’m on it!).

Making history visible as it happens

I’d like to close by making a point of process explicit for the benefit of those just tuning in. Two defining aspects of successful community efforts with which I’ve been involved are 1) the ability of an enthusiast to tell a story about why some idea or technology is personally meaningful to her and 2) a large body of collected wisdom, notes, drafts, paper trails, ideas, struggles and all the things that went into the final product. Math teachers force you to show your work for a reason: it’s oftentimes not the answer that’s interesting, it’s how you got there.

So, I’m making explicit my intention to be as transparent, straight-forward and forthcoming about my experiences helping to usher OAuth forward. I’ve not really done this kind of thing before, but being surrounded by brilliant and talented people seems to bring out the best in me. That and I’ve watched what’s happened in the OpenID community for over a year now and, for what it’s worth, I want to avoid getting dragged down into endless loops and debilitating scope creep. In fact I’ve not been involved with the OpenID community for months because of this kind of stuff and I want to expressly make an effort to document openly the approach we take with OAuth to see if we can’t avoid the same fate.

Cheers and congrats to all the folks who helped to make this happen. It might be a relatively minor step in terms of the development of new technology today, but looking out far enough on the horizon, I think we’re adding a significant piece of the puzzle that’s been missing for some time.

Putting people into the protocol

I really don’t like the phrase “user-centric identity”, and as I struggled to name this post, I came upon Pete Rowley’s 2006 phrase “the people are in the protocol.”

This isn’t much different from what I used to call “people in the browser” when I was at Flock, so I’ll use it.

Anyway, as part of another post I’m working on, it seemed useful to call out what I see as the benefits to services that “put people into the protocol” or, more aptly, those services that are designed around people who tend to use more than one web service and a single identifier (like an OpenID) to represent themselves across services.

Here’s what I’ve come up with so far:

  • I am me, wherever I go. I may have multiple personas, facets or identities that I use online, but fundamentally, I can manage them more effectively because services are oriented around me and not around the services that I use (it would be like logging into a new user account every time you want to switch applications!).
  • I have access to my stuff, wherever I am. Even though I use lots of different web services, just like I use lots of desktop applications, I can always access my data, no matter where I created it or where it’s stored. And if I want to get all of my data out of a service into another one, I should be able to do so.
  • My friends come with me, but continue to use only the services that they chose to. If I can send email from any domain to any domain, why can’t I join one network and then add friends from any other network?
  • I am the master of my domain. Both literally and figuratively, I should be able to choose any identity provider to manage all my external connections to the world, including running my own, from my own domain. While remote service providers can certainly set the standards for who they allow access to their APIs, this should be done in a clear and transparent way, so that even people who host their own identity can have fair access.
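That last point, running your own identity from your own domain, is exactly what OpenID’s HTML-based delegation allows: two link tags on your homepage point consumers at whatever provider you choose. Here’s a rough, stdlib-only Python sketch of the discovery step a relying party performs; the page markup and URLs are hypothetical:

```python
from html.parser import HTMLParser

class OpenIDLinkParser(HTMLParser):
    """Collect OpenID 1.x delegation <link> tags from a claimed identifier's page."""
    def __init__(self):
        super().__init__()
        self.links = {}

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        attrs = dict(attrs)
        rel, href = attrs.get("rel", ""), attrs.get("href")
        if rel in ("openid.server", "openid.delegate") and href:
            self.links[rel] = href

# Hypothetical homepage served from your own domain:
page = """<html><head>
<link rel="openid.server" href="https://idp.example.com/openid">
<link rel="openid.delegate" href="https://idp.example.com/users/jane">
</head><body>my homepage</body></html>"""

parser = OpenIDLinkParser()
parser.feed(page)
print(parser.links["openid.server"])  # https://idp.example.com/openid
```

A consumer would then start the authentication handshake against the openid.server endpoint, asserting the openid.delegate URL as your identity, which is what lets you swap providers without changing the identifier you hand out.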

There may of course be other benefits that I’m forgetting or omitting, but I think I’ve covered some of the primary ones, at least enough so that I can continue with my other post!