Feature request: OAuth in WordPress

Twitter / photomatt: @factoryjoe I would like OA...

In the past couple of days, there’s been a bit of a dust-up about some changes coming to WordPress in 2.6 — namely, disabling the Atom and XML-RPC APIs by default.

The argument is that this will make WordPress more secure out of the box — but the question is: at what cost? And is there a better solution to this problem than disabling features and functionality (even if only a small subset of users currently makes use of these APIs), especially if the changes end up being short-sighted?

This topic hit the wp-xmlrpc mailing list, where the conversation quickly devolved into bickering about SSL and other security-related topics.

Allan Odgaard (the creator of TextMate, as far as I can tell!) even proposed inventing yet another authorization protocol.

Sigh.

There are a number of reasons why WordPress should adopt OAuth — and not just because we’re going to require it for DiSo.

Heck, Stephen Paul Weber already got OAuth + AtomPub working for WordPress, and has completed a basic OAuth plugin for WordPress. The pieces are nearly in place, not to mention the fact that OAuth will pretty much be essential if WordPress is going to adopt OpenID at some point down the road. It’s also going to be quite useful if folks want to post from, say, a Google Gadget or OpenSocial application (or similar) to a WordPress blog if the XML-RPC APIs are going to be off by default (given Google’s wholesale embrace of OAuth).
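To make that concrete, here’s a rough sketch — in Python rather than the plugin’s actual PHP — of how a client might sign an AtomPub request using OAuth’s HMAC-SHA1 method. The endpoint URL and credentials below are placeholders of my own, not anything Weber’s plugin specifies:

```python
import base64
import hashlib
import hmac
import time
import urllib.parse
import uuid

def oauth_signature(method, url, params, consumer_secret, token_secret):
    """Build an OAuth 1.0 HMAC-SHA1 signature over a request (sketch)."""
    enc = lambda s: urllib.parse.quote(str(s), safe="~")  # RFC 3986 encoding
    normalized = "&".join(
        "{}={}".format(enc(k), enc(params[k])) for k in sorted(params)
    )
    base_string = "&".join([method.upper(), enc(url), enc(normalized)])
    key = "{}&{}".format(enc(consumer_secret), enc(token_secret))
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Hypothetical AtomPub endpoint; wp-app.php was WordPress's AtomPub entry
# point in the 2.x era, but the exact plugin wiring isn't documented here.
url = "http://example.com/wp-app.php/posts"
oauth_params = {
    "oauth_consumer_key": "my-app-key",       # placeholder credentials
    "oauth_token": "user-access-token",       # placeholder
    "oauth_signature_method": "HMAC-SHA1",
    "oauth_timestamp": str(int(time.time())),
    "oauth_nonce": uuid.uuid4().hex,
    "oauth_version": "1.0",
}
oauth_params["oauth_signature"] = oauth_signature(
    "POST", url, oauth_params, "consumer-secret", "token-secret"
)
# These parameters would go into an "Authorization: OAuth ..." header on a
# POST whose body is an Atom entry document (the new blog post).
print(oauth_params["oauth_signature"])
```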

Now, fortunately, folks within Automattic are supportive of OAuth, including Matt and Lloyd.

There are plenty of benefits to going down this path, not least the ability to scope third-party applications to certain permissions — like letting Facebook see your private posts but not edit or create new ones — or authorizing desktop applications to post new entries or upload photos or videos without having to remember your username and password (instead you’d type in your blog address, and it would discover the authorization endpoints using XRDS-Simple; Eran has more on discovery: Magic, People vs. Machines).
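For illustration, here’s a toy sketch of what that kind of scoping could look like on the blog’s side. The scope names are invented; OAuth itself leaves scope semantics up to each provider:

```python
# Each issued token carries a set of grants, and the blog checks the grant
# before honoring a request. Scope names here are made up for the example.
TOKEN_SCOPES = {
    "facebook-token": {"read:private"},                 # may read private posts...
    "desktop-token": {"create:post", "upload:media"},   # may post and upload
}

def authorized(token, action):
    """Return True only if the token was granted the requested action."""
    return action in TOKEN_SCOPES.get(token, set())

assert authorized("facebook-token", "read:private")
assert not authorized("facebook-token", "create:post")  # ...but not write
```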

Anyway, WordPress and OAuth are natural complements, and with popular support and momentum behind the protocol, it’s tragic to see needless reinvention when so many modern applications have the same problem of delegated authorization.

I see this as a tremendous opportunity for both WordPress and OAuth, and I’m looking forward to discussing it — at least for consideration in WordPress 2.7 — at tonight’s meetup — for which I’m now late! Doh!

A conversation about social network interop and activity stream relevance

Brian Oberkirch captured some video today of a conversation between him, David Recordon and myself at GSP East about social network interop, among other things.

Hot on the heels of my last post, this conversation is rather timely!

Parsing the “open” in Facebook’s “fbOpen” platform

Yesterday, as expected, Facebook revealed the code behind their F8 platform, a little over a year after its launch, offering it under the Common Public Attribution License (CPAL).

I can’t help but notice the glaring addition of Section 15 (Network Use) and Exhibits A and B to the CPAL. But I’ll dive into those issues in a moment.

For now, it is worth reviewing Facebook’s release in the context of the OSI’s definition of open source; of particular interest are the first three criteria: Free Redistribution, Source Code, and Derived Works. Arguably, Facebook’s use of the CPAL so far fits the OSI’s definition. It’s when we get to the ninth criterion (License Must Not Restrict Other Software) that it becomes less clear whether Facebook is actually offering “open source” code, or is simply diluting the term for its own gain, given the attribution requirement imposed in Exhibit B:

Each time an Executable, Source Code or Larger Work is launched or initially run (including over a network), a display of the Attribution Information must occur on the graphic user interface employed by the end user to access such Covered Code (which may include a splash screen).

In other words, any derivative work cleft from the rib of Facebook must visibly bear the mark of the “Initial Developer”, namely, Facebook, Inc., and include the following:

Attribution Copyright Notice: Copyright © 2006-2008 Facebook, Inc.
Attribution Phrase (not exceeding 10 words): Based on Facebook Open Platform
Attribution URL: http://developers.facebook.com/fbopen
Graphic Image as provided in the Covered Code: http://developers.facebook.com/fbopen/image/logo.png

Most curious of all is how Facebook addressed a long-held concern of Tim O’Reilly that open source licenses are obsolete in the era of network computing and Web 2.0 (emphasis original):

…it’s clear to me at least that the open source activist community needs to come to grips with the change in the way a great deal of software is deployed today.

And that, after all, was my message: not that open source licenses are unnecessary, but that because their conditions are all triggered by the act of software distribution, they fail to apply to many of the most important types of software today, namely Web 2.0 applications and other forms of software as a service.

And in the Facebook announcement, Ami Vora states:

The CPAL is community-friendly and reflects how software works today by recognizing web services as a major way of distributing software.

Thus Facebook neatly skirts this previous limitation of most open source licenses by appending Section 15 to the CPAL, explicitly covering “Network Use”:

The term ‘External Deployment’ means the use, distribution, or communication of the Original Code or Modifications in any way such that the Original Code or Modifications may be used by anyone other than You, whether those works are distributed or communicated to those persons or made available as an application intended for use over a network. As an express condition for the grants of license hereunder, You must treat any External Deployment by You of the Original Code or Modifications as a distribution under section 3.1 and make Source Code available under Section 3.2.

I read this as referring to network deployments of the Facebook platform on other servers (or offered as a web service): it forces both the release of code modifications that hit the public wire and the display of the “Attribution Information” (as noted above).

. . .

So okay, first of all, we’re not really dealing with the true historic definition of open source here, but we can mince words later. The code is available, is free to be tinkered with, reviewed, built on top of, redistributed (with that attribution restriction) and there’s even a mechanism for providing feedback and logging bugs. Best of all, if you submit a patch that is accepted, they’ll send you a Facebook T-shirt! (Wha-how! Where do I sign up?!)

Not ironically, Facebook’s approach here smells an awful lot like Microsoft’s Shared Source Initiative (some background). Consider the purpose of one of Microsoft’s three Shared Source licenses, the so-called “Reference License”:

The Microsoft Reference License is a reference-only license that allows licensees to view source code in order to gain a deeper understanding of the inner workings of a given technology. It does not allow for modification or redistribution. Microsoft uses this license primarily for technologies such as its development libraries.

Now compare that with the language of Facebook’s announcement:

The goal of this release is to help you as developers better understand Facebook Platform as a whole and more easily build applications, whether it’s by running your own test servers, building tools, or optimizing your applications on this technology. We’ve built in extensibility points, so you can add functionality to Facebook Open Platform like your own tags and API methods.

While it’s certainly conceivable that intrepid entrepreneurs will decide to extend the platform and release their own implementations (which would arguably require a considerable amount of effort and infrastructure to duplicate the still-proprietary innards of Facebook proper — remember that the fbOpen platform IS NOT Facebook), they’d still need to attach the Facebook brand to their derivative work and open source their modifications under a CPAL-compatible license (read: not GPL).

In spite of all this, whether Facebook is really offering a “true” open source product or not is really not the important thing. I’m raising these issues simply to put this move into a broader context, highlighting some important decision points where Facebook zagged where others might have otherwise zigged, based on its own priorities and aspirations for the move. Put simply: Facebook’s approach to open source is nothing like Google’s, and it’s critical that people considering building on either the fbOpen platform or OpenSocial do themselves a favor and familiarize themselves with the many essential differences.

Furthermore, in light of my recent posts, it occurs to me that the nature of open source is changing (or being changed) by the accelerating move to cloud computing architectures — where the source code is no longer necessarily a strategic asset, but where durable and ongoing access to data is the primary concern (harkening to Tim O’Reilly’s frequent “Data is the Intel Inside” quip) — and that Facebook is the first of a new class of enterprises that’s growing up after open source.

I hope to expand on this line of thinking, but I’m starting to wonder — with open source becoming essentially passé nowadays — did we win? Are we on top? Hurray? Or did we bet on the wrong horse? Or did the goalposts just move on us (again)? Or is this just the next stage in an ongoing, ever-volatile struggle to balance the needs of business models that tend towards centralization against those more free-form, freedom-seeking and expanding models in which information and knowledge must diffuse, and must seek out growth and new hosts in order to continue to become more valuable? Again, pointing to Tim’s contention that Web 2.0 is at least partly about harnessing collective intelligence, and that data sources that grow richer as more people use them are a facet of the landscape: what does openness mean now? What barriers do we need to dismantle next? If it’s no longer the proprietary nature of software code, then is it time that we began, in earnest, to scale the walls of the proprietary data hoarders and collectors and take back (or re-federate) what might be rightfully ours, or what we should at least be given permanent access to? Hmm?



Facebook, the USSR, communism, and train tracks

Low hills closed in on either side as the train eventually crawled on to high, tabletop grasslands creased with snow. Birds flew at window level. I could see lakes of an unreal cobalt blue to the north. The train pulled into a sprawling rail yard: the Kazakh side of the Kazakhstan-China border.

Workers unhitched the cars, lifted them, one by one, ten feet high with giant jacks, and replaced the wide-gauge Russian undercarriages with narrower ones for the Chinese tracks. Russian gauges, still in use throughout the former Soviet Union, are wider than the world standard. The idea was to prevent invaders from entering Russia by train. The changeover took hours.

— Robert D. Kaplan, The Ends of the Earth

I read this passage today while sunning myself at Hope Springs Resort near Palm Springs. Tough life, I know.

The passage above immediately made me think of Facebook, and I had visions of the old Facebook logo with a washed-out Stalin face next to the wordmark (I’m a visual person). But the thought came from some specific recent developments, and fit into a broader framework that I talked about loosely with Steve Gillmor on his podcast. I also wrote about it last week, essentially calling for Facebook and Google to come together to co-develop standards for the social web. But, having been reading up on Chinese, Russian, Turkish and Central Asian history, and being a beneficiary of the American enterprise system, I’m coming over to Eran and others’ point that 1) it’s too early to standardize and 2) it probably isn’t necessary anyway. Go ahead, let a thousand flowers bloom.

If I’ve learned anything from Spread Firefox, BarCamp, coworking and the like, it’s that propaganda needs to be free to be effective. In other words, you’re not going to convince people of your way of thinking if you lock down what you have, especially if what you have is culture, a mindset or some other philosophical approach that helps people narrow down what constitutes right and wrong.

Look, if Martin Luther had nailed his Ninety-five Theses to the door but had ensconced them in DRM, he would not have been as effective at bringing about the Reformation.

Likewise, the future of the social web will not be built on proprietary, closed-source protocols and standards. Therefore, it should come as no surprise that Google wants OpenSocial to be an “open standard” and Facebook wants to be the openemest of them all!

The problem is not about being open here. Everyone gets that there’s little marginal competitive advantage to keeping your code closed anymore. Keeping your IP cards close to your chest makes you a worse card player, not a better one. The problem is with adoption, with gaining and maintaining [developer] interest, and with stoking distribution. And that brings me to the fall of Communism and the USSR, back where I started.

I wasn’t alive back when the Cold War was in its heyday. Maybe I missed something, but let’s just go on the assumption that things are better off now. From what I’m reading in Kaplan’s book, I’d say that the Soviets left not just social, but environmental disaster in their wake. The whole region of Central Asia, at least in the late 90s, was fucked. And while there are many causes, more complex than I can probably comprehend, a lot of it seems to have to do with a lack of cultural identity and a lack of individual agency in the areas affected by, or left behind by, Communist rule.

Now, when we talk about social networks, I mean, c’mon, I realize that these things aren’t exactly nations, nation-states or even tribal groups warring for control of natural resources, food, potable water, and so forth. BUT, the members of social networks number in the millions in some cases, and it would be foolish not to appreciate that the borders — the meticulously crafted hardline boundaries between digital nation-states — are going to be redrawn when the battle for cultural dominance between Google (et al) and Facebook is done. It’s not the same caliber of détente that we saw during the Cold War, but it’s certainly a situation where two sides with very different ideological bents are competing to determine the nature of the future of the [world]. On the one hand, we have a nanny state that thinks it knows best and needs to protect its users from themselves; on the other, a laissez-faire-trusting band of bros who are looking to the free market to inform the design of the Social Web writ large. On the one hand, there’s uncertainty about how to build a “national identity”-slash-business on top of lots of user data (that, oh yeah, I thought was supposed to be “owned” by the creators); on the other, a model of the web that embraces all its failings, nuances and spaghetti code, but that, more than likely, will stand the test of time as a durable provider of the kind of liberty, agency and free choice that wins out time and again throughout history.

That Facebook is attempting to open source its platform, to me, sounds like offering the world a different rail gauge specification for building train tracks. It may be better, it may be slicker, but the flip side is that the Russians used the same tactic to keep outsiders from gaining any kind of competitive advantage over their people or influence over how they did business. You can do the math, but look where it got ’em.

S’all I’m sayin’.

The battle for the future of the social web

When I was younger, I used to bring over my Super Nintendo games to my friends’ houses and we’d play for hours… that is, if they had an SNES console. If, for some reason, my friend had a Sega system, my games were useless and we had to play something like Sewer Shark. Inevitably less fun was had.

What us kids didn’t know at the time was that we were suffering from a platform war that manifested, more or less, in the form of a standards war for domination of the post-Atari video game market. We certainly didn’t get why Nintendo games didn’t work on Sega systems, they just didn’t, and so we coped, mostly by not going over to the house of the kid who had Sega. No doubt, friendships were made — and destroyed — on the basis of which console you had, and on how many games you had for the preferred platform. Indeed, the kids with the richest parents got a pass, since they simply had every known system and could play anyone’s games, making them, by default, super popular (in other words, it was good to be able to afford to ignore the standards war altogether).

Fast-forward 10 years and we’re on the cusp of a new standards war, where the players and stakes have changed considerably but the nature of warfare has remained much the same as Hal R. Varian and Carl Shapiro described in Information Rules in 1999. But the casualties, as before, will likely be the consumers, customers and patrons of the technologies in question. So, while we can learn much from history about how to fight the war, I think that, for the sake of the web and for web citizens generally, this coming war can be avoided, and that, perhaps, it should be.

Continue reading “The battle for the future of the social web”

I’m joining Vidoop to work on DiSo full time

Twitter / Scott Kveton: w00t! @factoryjoe and @willnorris joining Vidoop ... :-) http://twurl.cc/18g

Well, Twitter, along with Marshall and his post on ReadWriteWeb, beat me to it, but I’m pretty excited to announce that, yes, I am joining Vidoop, along with Will Norris, to work full time on the DiSo (distributed social) Project.

For quite some time I’ve wanted the chance to get back to focusing on the work that I started with Flock — and that I’ve continued, more or less, with my involvement in and advocacy of projects like microformats, OpenID and OAuth. It’s no accident that these projects relate to people using technology to behave socially: they exist to make it easier, and better, for people to use the web (and related technologies) to connect with one another safely, confidently, and without the need to sign up with any particular network just to talk to their friends and the people they care about.

The reality is that people have long been able to connect to one another using technology — what was the first telegraph transmission if not the earliest poke heard round the world? The problem we have today is that, with the proliferation of fairly large, non-interoperable social networks, connecting to people is not as easy as it has been with email or the telephone, and so the next generation of social networks will invariably need to make the process of connecting across the divides easier, safer and lower-friction if people really are going to, as expected, continue to increase their use of the web for communication and social interaction.

So what is the DiSo Project?

The DiSo Project has humble roots. Basically, Steve Ivy and I started hacking on a plugin that I’d written that added hCards to your contact list or blogroll. It was really stupidly simple, but when we combined it with Will Norris’ OpenID plugin, we realized that we were on to something — since contact lists were already represented as URLs, we now had a way to verify whether the person who ostensibly owned one of those URLs was leaving a comment or signing in, and we could thereby add new features, expose private content, or do any number of other interesting social networking-like things!
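If you’re curious, the trick boils down to a few lines. This is a toy sketch with invented names (real hCard parsing and OpenID verification are elided): the blogroll is a list of URLs, and a commenter’s OpenID-verified URL gets checked against it:

```python
from urllib.parse import urlparse

def normalize(url):
    """Compare URLs loosely: lowercase, ignore scheme and trailing slash."""
    p = urlparse(url.lower())
    return (p.netloc, p.path.rstrip("/") or "/")

def is_known_contact(verified_url, blogroll_urls):
    """True if the OpenID-verified URL matches an entry in the blogroll."""
    target = normalize(verified_url)
    return any(normalize(u) == target for u in blogroll_urls)

blogroll = ["http://willnorris.com/", "http://example.org/steve"]
# verified_url would come back from a completed OpenID authentication
print(is_known_contact("http://willnorris.com", blogroll))  # True
```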

This led me to start “sketching” ideas for WordPress plugins that would be useful in a distributed social network, and eventually Steve came up with the name, registered the domain, and we were off!

Since then, Stephen Paul Weber has jumped in and released additional plugins for OAuth, XRDS-Simple, actionstreams and profile import — and all this while the project was just a side project.

What’s this mean?

Working full time on this means that Will and I should be able to make much more progress, much more quickly, and to work with other projects and representatives from efforts like Drupal, BuddyPress and MovableType to get interop happening (eventually) between each project’s implementation.

Will and I will eventually be setting up an office in San Francisco, likely a shared office space (hybrid coworking), so if you’re a small company looking for a space in the city, let’s talk.

Meanwhile, if you want to know more about DiSo in particular, you should probably just check out the interview I did with myself about DiSo to get up to speed.

. . .

I’ll probably post more details later on, but for now I’m stoked to have the opportunity to work with a really talented and energized group of folks to work on the social layer of the open web.

XRDS-Simple Draft 1 released

The little guys keep building the future of the web in little steps.

Well, okay, I shouldn’t indulge in such hubris, but I’m pretty proud of our little squad of do-gooders, developing quality technology that I think IS advancing the state of the web, and doing so without any budget whatsoever, and without any kind of centralized resources (can you tell I’ve been hanging out in Redmond at Microsoft’s campus the last couple of days?).

Anyway, yesterday Eran Hammer-Lahav, the primary author of the OAuth spec, released the first draft of the XRDS-Simple format for public review.

Acronyms aside, what we’re defining here is a data format that allows web authors to link to — in a consistent, ordered way — the various services that they use across the web. What’s important about this format is that 1) we’re not inventing anything new and that 2) it’s already widely implemented, thanks in large part to OpenID 2.0’s discovery mechanism (discovery, in this case, can be loosely defined as the means by which a computer determines where a service exists — for example, where someone stores their photos or blog posts — very useful, of course, in the case of URL-based identities).

Microformats folks will ask why we don’t just use rel-me, and that’s a very valid question. Sure, you could use rel-me to link to an XRDS-Simple document, but that doesn’t answer the question. Specifically, there are three main technical features that, taken together, make XRDS-Simple superior to rel-me from a service discovery perspective (a minimal sketch follows the list):

  • Media types: essentially the MIME type of the content that the service provides (in layman’s terms: photos, videos, text, etc.).
  • Local IDs: the identifier (i.e. a username or email address) associated with the requested resource; think of your Facebook email address (and clearly not something you’d necessarily want to publish publicly on your blog!)
  • Service priority: useful for determining the selection order should there be more than one of a given type of service (for example, use my Vimeo account videos before my Viddler collection, etc)
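Here’s that minimal sketch: a hedged Python illustration of how a consumer might use Type, LocalID and priority together to pick a service. The sample document and type URI are invented for the example, not taken from the draft:

```python
import xml.etree.ElementTree as ET

XRD_NS = "xri://$xrd*($v*2.0)"  # XRD 2.0 namespace used by XRDS documents

SAMPLE = """<XRDS xmlns="xri://$xrds">
  <XRD xmlns="xri://$xrd*($v*2.0)">
    <Service priority="10">
      <Type>http://example.com/types/video-feed</Type>
      <URI>http://vimeo.com/someuser/videos</URI>
      <LocalID>someuser</LocalID>
    </Service>
    <Service priority="20">
      <Type>http://example.com/types/video-feed</Type>
      <URI>http://viddler.com/someuser</URI>
    </Service>
  </XRD>
</XRDS>"""

def pick_service(xrds_xml, wanted_type):
    """Return (URI, LocalID) of the best matching service; lower priority wins."""
    root = ET.fromstring(xrds_xml)
    matches = []
    for svc in root.iter("{%s}Service" % XRD_NS):
        types = [t.text for t in svc.findall("{%s}Type" % XRD_NS)]
        if wanted_type in types:
            priority = int(svc.get("priority", 10**6))  # unprioritized sorts last
            matches.append((priority,
                            svc.findtext("{%s}URI" % XRD_NS),
                            svc.findtext("{%s}LocalID" % XRD_NS)))
    if not matches:
        return None
    best = min(matches, key=lambda m: m[0])
    return best[1], best[2]

print(pick_service(SAMPLE, "http://example.com/types/video-feed"))
# -> ('http://vimeo.com/someuser/videos', 'someuser')
```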

Prompted by similar questions from Danny Ayers, Eran elaborated on this and similar topics (I’d encourage you to read his response for a good technical and philosophical backgrounder).

I’ll finish briefly by putting XRDS-Simple into a broader context of the work we’re doing, and point out how this fits into DiSo. First, we now have OpenID for URL-based identity on the web so we can identify people between web sites; second, we have OAuth for specifically controlling who has access to your data; and third, we now have a mechanism for discovering the services and data brokers that you use in a consistent, fairly widely supported format. These three building blocks are critical to advancing the smooth and secure flow of people and their data between web sites, web services and changing contexts.

For DiSo, we will enable someone to sign in to a web service for the first time and, at her discretion, have her contacts looked up (eventually this should be more like a buddy list reference), have feeds of her photos or videos or whatever other media she publishes automatically discovered and imported (this would be really useful for Adobe Photoshop Express, for example!), and then offer her the option to allow the web service to send information back to her identity provider or data broker (for example, inserting events into her activity stream), which could then be subscribed to or followed by any of her friends running their own sites, or who delegate to some third-party aggregator service (a la FriendFeed).
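Strung together, the flow might look something like the following sketch. Every function name here is hypothetical; the stubs just mark where the real protocol exchanges (OpenID authentication, the XRDS-Simple fetch, the OAuth dance) would happen:

```python
def openid_authenticate(user_url):
    return user_url  # stub: would run the OpenID 2.0 checkid flow

def discover_services(identity_url):
    # stub: would fetch and parse the identity's XRDS-Simple document
    return {"photos": ["http://flickr.com/photos/someone"],
            "activity": "http://someone.example/updates"}

def oauth_authorize(identity_url, scope):
    return "opaque-access-token"  # stub: would run the OAuth token dance

def first_sign_in(user_url):
    identity = openid_authenticate(user_url)        # 1. who are you?
    services = discover_services(identity)          # 2. what services do you use?
    photo_feeds = services.get("photos", [])        # 3. import discovered media
    token = oauth_authorize(identity, "write:activity")  # 4. may we write back?
    return identity, photo_feeds, token

print(first_sign_in("http://someone.example/"))
```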

This is actually a pretty simple flow, but the difference is that we can now assemble this entire stack with open tools, open protocols and open formats. Discovery is one of the few remaining components that we need to nail down to get to the next stage of building out this model. So, to that end, I encourage you to join the review list and send in your feedback (the list is moderated and requires you to state your interest to sign up). We’ll be ending the Draft 1 review period April 10, so get your comments in sooner rather than later!

Picking the open source candidate

My buddy whurley is at it again, but this time considering which candidate(s) is the most compatible with or supportive of open source — in other words, among the many options, which could be considered the “open source candidate”?

Just as I voted for Obama yesterday, I voted for Obama today. I’m not sure why, except that 1) he’s on Twitter and 2) Hillary is more of a “dynasty” type of candidate as opposed to a “meritocratic” candidate (in my limited view) and given that Obama’s success seems predicated on his previous good works (rather than inheriting a presidential legacy, let’s say), he seems more in line with the nature of open source development. Then again, cognitive science suggests that I can essentially rationalize any irrational decision to explain my actions, so I could just as well chalk it up to gut instinct.

Whatever, here’s the poll if you’ve got an opinion:

http://s3.polldaddy.com/p/290674.js
A couple of related thoughts and questions:

  • How might a candidate demonstrate that they understand or value open source? Just by running Linux? Or something deeper?
  • What kind of “open source platform” would the ideal candidate support? (using platform in the political sense) That is, getting beyond the software or hardware, how would their policies be affected by ideals and practices derived from the open source ecosystem?
  • Is it just about transparency, or would the candidate need to understand how open source itself is becoming increasingly important to the economy and to the future of work?
  • As whurley said in his post, where would an “open source” candidate come down on patent and IP reform?

If you’ve got any inside knowledge about where the candidates sit in terms of open source, I’d love to see some references or stories about their leanings. In the meantime, don’t forget to vote — on whurley’s poll!

After Social Graph FOO Camp — and a challenge for the Data Portability Group

This past weekend I attended a topic-specific FOO Camp called Social Graph FOO Camp, organized by Scott Kveton and David Recordon (or ray-chor-dohn, according to Larry).

Scott’s write up is pretty complete, but I wanted to call out one specific outcome that I think is worth noting.

During the camp, we had a significant discussion on data portability and about the activities, responsibilities and opportunities of and for the eponymous group, which has recently generated much hype and buzz but little (as far as I’ve seen) clarity and/or cogent strategy for advancing its expansive charter:

The purpose of this project is to put all existing data portability technologies and initiatives in context and to promote viable reference implementations (blueprints) to the developer, vendor, and end-user communities.

The frustration over the minimal barrier to “becoming a member” of the group (you simply have to sign up for a mailing list) and the focus on large vendors without advancing an agenda with teeth and clearly defined metrics for success was palpable. But so was the desire to make some progress, and if not come to complete agreement, to at least identify concerns shared by the majority of us and perhaps develop a strategy to deflate the hype to date and get the group moving in a productive direction.

My suggestion was to emulate the work that Tara and I have been doing on the Open Media Web project, which developed out of our work with Songbird, where we could sense that there was a real opportunity to explore but didn’t yet have a clear picture of either the space as it was understood by lead users and experts or of the outcomes that needed to be advocated. So rather than diving in and promoting technologies or tactics before we had identified the opportunities, challenges and boundaries of the problem domain, we decided to pursue an investigatory strategy, starting with a series of meetups, blog posts and interviews that might help us flesh out the actors, ideas and conversations already ongoing in the space.

The result of my proposal is captured in this post by Chris Saad to the Data Portability mailing list. I think this is a positive step, and one that I hope will give Data Portability some direction and good work to do over the next several weeks and months. I’d like to go a step further and flesh out my thinking however, before this project gets underway.

  1. These interviews should really be conducted assassin-style (as I like to say) where someone (probably Chris Saad) goes to each major vendor represented (and pimped) by the group (i.e. Google and Facebook, Plaxo, Microsoft, LinkedIn, Flickr, Six Apart, MyStrands, et al) and solicits written (or video) answers to the same five or six questions. Each of these interviews should subsequently be posted to the data portability blog over a series of months.
  2. The goal of these ongoing interviews should be to discover, primarily: 1) why these companies joined the group and what their goals are; 2) what they think of when they say “data portability”; 3) what challenges they are facing when it comes to offering their vision of data portability at their company; 4) what the greatest benefits of data portability are; 5) what they are doing (if anything) to promote and advance data portability within their organization; and 6) what technologies they have implemented (or plan to implement in the next six months) in support of data portability. From these answers, I think we can start to recognize trends in the headspace of large social networking sites, as well as begin to call out certain technologies that might be worth picking up and evangelizing, especially in the interest of interop between multiple parties’ sites.
  3. As such, the advocacy of any particular technological solution by the data portability group today should be immediately abandoned until further research and exploration has occurred. While I was happy to see my favorite stable of technologies listed on the group’s homepage in the early days, I now realize that technology is not the hard part; it’s actually the politics, the policies, the usability and impact on and perception of the individual data owners that are really the first order priorities. Without beginning to address issues in those areas first, the technology conversation will never occur.
  4. In terms of timing, I think that the data portability group has come along more or less at the right time, but that it’s actually walking into the problem ass-backwards. What we don’t need right now is a lot of hype and glorification of an abstruse notion of data portability. In fact, data portability by itself is currently meaningless and intangible; without good examples of how it can be applied to make things better for companies’ customers, there will never be an economic imperative to move in this direction (I should point out that data portability is interesting to me because increased customer choice is interesting to me, and competition in the space is thereby beneficial to the customers of such services). For a timely example of a positive case where data portability is making a difference, consider the ability to move your bookmarks from del.icio.us to Ma.gnolia in light of Microsoft’s looming acquisition bid for Yahoo!. Surely there are other equally beneficial applications of data portability, and building out these use cases in terms of end-user benefit is critical to continuing to make the case for data portability with credibility.

So anyway, I do believe that there is an opportunity here and Chris Saad is correct that getting a number of the prominent players in this arena to come to the table on this topic is a feat; however, simply bringing them together without engaging with the gnarly problems and policies that have kept data portability from becoming a reality could bring more confusion and angst than benefit. Deflating the hype and going back to humble beginnings and simple questions is, in my not-so-humble opinion, the appropriate and most effective way forward. Data portability is still not obvious for most people or most companies — heck the technologies that enable it are barely out of their 1.0 and 2.0 phases yet — and still this topic is one that captures people’s imaginations and lets them imagine countless “what if” scenarios that seem, somehow, just around the corner. Data portability is a critical topic, and with the advances in the state of the conversation we had over the weekend, I’m eager to see the members of the data portability group pick up the ball and keep moving it forward.

So, if this topic is something that interests you, I recommend you blog about it, talk about it, interpret it and really take some time to consider what data portability means to you, and why it matters (or doesn’t) to you. Me, Larry and Matt Biddulph of Dopplr rapped about this stuff some more on our Citizen Garden podcast today, so if you’re looking for more information, ideas or fodder, you might go ahead and give it a listen.

The problem with open source design

I’ve probably said it before, and will say it again, and I’m also sure that I’m not the first, or the last to make this point, but I have yet to see an example of an open source design process that has worked.

Indeed, I’d go so far as to wager that “open source design” is an oxymoron. Design is far too personal, and too subjective, to be given over to the whims and outrageous fancies of anyone with eyeballs in their head.

Call me elitist in this one aspect, but with all due respect to code artistes, it’s quite clear whether a function computes or not; the same quantifiable measures simply do not exist for design and that critical lack of objective review means that design is a form of Art, and its execution should be treated as such.
Continue reading “The problem with open source design”