The problem with open source design

I’ve probably said it before, and will say it again, and I’m also sure that I’m not the first, or the last to make this point, but I have yet to see an example of an open source design process that has worked.

Indeed, I’d go so far as to wager that “open source design” is an oxymoron. Design is far too personal, and too subjective, to be given over to the whims and outrageous fancies of anyone with eyeballs in their head.

Call me elitist in this one aspect, but with all due respect to code artistes, it’s quite clear whether a function computes or not; the same quantifiable measures simply do not exist for design and that critical lack of objective review means that design is a form of Art, and its execution should be treated as such.

Fluid, Prism, Mozpad and site-specific browsers

Matt Gertner of AllPeers wrote a post the other day titled, “Wither Mozpad?” In it he poses a question about the enduring viability of Mozpad, an initiative begun in May to bring together independent Mozilla Platform Application Developers and to fill the vacuum left by Mozilla’s Firefox-centric developer programs.

Now, many months after its founding, the group is still without a compelling raison d’être, and has failed to mobilize or catalyze widespread interest or momentum. Should the fledgling effort be disbanded? Is there not enough sustaining interest in independent, non-Firefox XUL development to warrant a dedicated group?

Perhaps.

There are many things that I’d like to say both about Mozilla and about Mozpad, but what I’m most interested in discussing presently is the opportunity that sits squarely at the feet of Mozilla and Mozpad and fortuitously extends beyond the world-unto-itself-land of XUL: namely, the opportunity that I believe lies in the development of site-specific browsers, or, to throw out a marketing term: rich internet applications (no doubt I’ll catch flak for suggesting the combination of these terms, but frankly it’s only a matter of time before any distinctions dissolve).

If you’re just tuning in, you may or may not be aware of the creeping rise of SSBs. I’ve personally been working on these glorified rendering engines for some time, inspired first by Mike McCracken’s Webmail.app and then by Ben Willmore’s Gmail Browser, and later seeing the idea culminate in Ruben Bakker’s pay-for Gmail wrapper Mailplane.app. More recently we’ve seen developments like Todd Ditchendorf’s Fluid.app, which generates increasingly functional SSBs, and prior to that, the stupidly-simple Direct URL.

But that’s just progress on the WebKit side of things.

If you’ve been following the work of Mark Finkle, you’ll be able to trace both the threads of WebRunner’s transformation into the full-fledged Prism project and the germination of Mozpad.

Clearly something is going on here, and when measured against Microsoft’s Silverlight and Adobe’s AIR frameworks, we’re starting to see the emergence of an opportunity that I think will turn out to be rather significant in 2008, especially as an alternative, non-proprietary path for folks wishing to develop richer experiences without the cost, or the heaviness, of actual native apps. Yes, the rise of these hybrid apps that look like desktop apps, but benefit from the connectedness and always-up-to-date-ness of web apps, is what I see as the unrecognized fait accompli of the current class of stand-alone, standards-compliant rendering engines. This trend is powerful enough, in my thinking, to render the whole discussion about the future of the W3C uninteresting, if not downright frivolous.

A side effect of the rise of SSBs is the gradual obsolescence of XUL (which currently holds value only in the meta-UI layer of Mozilla apps). Let’s face it: the delivery mechanism of today’s Firefox extensions is broken (restarting an app to install an extension is so Windows! yuck!), and needs to be replaced by built-in appendages that offer better and more robust integration with external web services (a design that I had intended for Flock) and that provide a web-native approach to extensibility. As far as I’m concerned, XUL development is all but dead and will eventually be relegated to the same hobby-sport nichefication as VRML scripting. (And if you happen to disagree with me here, I’m surprised that you haven’t gotten more involved in the doings of Mozpad.)

But all this is frankly good for Mozilla, for WebKit (and Apple), for Google, for web standards, for open source, for microformats, for OpenID and OAuth and all my favorite open and non-proprietary technologies.

The more the future is built on — and benefits from — the open architecture of the web, the greater the likelihood that we will continue to shut down and defeat the efforts that attempt to close it up, to create property out of it, to segregate and discriminate against its users, and to otherwise attack the very natural and inclusive design of the internet.

Site specific browsers (or rich internet applications or whatever they might end up being called — hell, probably just “Applications” for most people) are important because, for a change, they simply side-step the standards issues and let web developers and designers focus on functionality and design directly, without necessarily worrying about the idiosyncrasies (or non-compliance) of different browsers (Jon Crosby offers an example of this approach). With real competition and exciting additions being made regularly to each rendering engine, there’s also benefit in picking a side, while things are still fairly fluid, and joining up where you feel better supported, with the means to do cooler things and where generally less effort will enable you to kick more ass.

But all this is a way of saying that Mozpad is still a valid idea, even if the form or the original focus (XUL development) was off. In fact, what I think would be more useful is a cross-platform inquiry into what the future of Site Specific Browsers might (or should) look like… regardless of rendering engine. With that in mind, sometime this spring (sooner than later I hope), I’ll put together a meetup with folks like Todd, Jon, Phil “Journler” Dow and anyone else interested in this realm, just to bat around some ideas and get the conversation started. Hell, it’s going on already, but it seems time that we got together face to face to start looking at, seriously, what kind of opportunity we’re sitting on here.

Wither web standards? And a call for new browser wars

There’s been a flurry of activity in web standards land lately, with Opera taking on Microsoft in Europe over their failure to conform to web standards, while Andy “Malarkey” Clarke calls BS on the whole CSS Working Group thing, pointing to the complicity and corruption that comes with entrenched vendors having little to no incentive to innovate at the speed of the web, and Mozilla’s Dave Baron getting fed up with backdoor dealings. In support of his case, Alex “Dojo Toolkit” Russell suggests that we abandon the W3C altogether (and Zeldman-kind too), “start burning [our] standards advocacy literature”, and “start telling [our] browser vendors to give [us] the new shiny”. Jeff Croft jumps in as well, asking whether we should return to the browser wars of yore.

I attempted to leave a comment on Jeff’s blog, but since I was over his 3000 character limit, I’m blogging it here, without the normal care that I usually take with posts here. Take it as you will:

As much as I appreciate your perspective and agree with the goals that you, Andy M and Alex share, I am at the same time dismayed that it means we’re going to end up essentially with “privileged web experiences” and “unprivileged web experiences” if you take this path.

It also means that the fight against Silverlight and Flash essentially comes to a draw and developers have to pick sides (if they haven’t already) and “standardize” their own work to one of four choices: either of the two above or, confusingly, HTML-compliant and HTML-incompatible. So while you might be able to do some shiny things into the foreseeable future, it does seem to me that you’re going to wind up creating more work for developers, who will have to test against and support both HTML variants (not to mention the long history of browser incompatibilities), in which case they might as well just jump the open web standards ship altogether and get in bed with Microsoft or Adobe, given their ability to crank out tools AND formats that rival most of their open alternatives.

It seems to me that rather than necessarily abandoning web standards or, as Alex said, “start burning their standards advocacy literature”, I suppose I’d like to see how we can begin to commoditize the more attractive aspects of Silverlight and AIR/Flash with open formats and standards. Unfortunately we are up against massive marketing dollars and entrenched positions, but in the end, open always wins.

If you’re really serious about this — and I think Alex, with his relationship to the Dojo project, is in the perfect position to act — then folks who have grown disillusioned with the web standards path should begin to develop “community conventions” that can be implemented today, using what the leading browsers (Opera, Firefox and WebKit/Safari) already support (see microformats, leading this approach, as well as OAuth). I think rewarding those browser makers by exploiting the features that they ARE implementing is a good way to go, and I also think that developing rich interface libraries in CSS and Javascript will continue to be important to advancing the state of the “unprivileged web”. We’ve made a great deal of progress in a relatively short amount of time with jQuery and similar libraries that deliver effects previously unthought of in regular web pages… it’s just a matter of time before we approach this as a more concerted effort to make web applications compete with their proprietary brethren.

With site-specific browser generators like Todd Ditchendorf’s Fluid and Mozilla’s Prism coming out, I think we’re also moving much more quickly towards local desktop integration than you’ll be able to get out of full-fledged generic browsers. In fact, I’m most hopeful about those kinds of applications for the kind of innovation you’re talking about.

I think I’m just about as fed up as you are with the centralized, top-down web standards process. But then again, I never believed in it from the beginning. Your frustrations only indicate to me that old skool, top-down bureaucracies have had their day; the way forward is the way of open source and open communities that produce results. And given that we do already have a body of standards that we can build on top of, I do worry that a lot of effort will be wasted paving a new path towards an uncertain future, when there is still so much potential and opportunity to be had with the technologies that are available today but are simply underutilized and have yet to be exploited.

Ruminating on DiSo and the public domain

There’s been some great pickup of the DiSo Project since Anne blogged about it on GigaOM.

I’m not really a fan of early over-hype, but fortunately the reaction so far has been polarized, which is a good thing. It tells me that people care about this idea enough to sign up, and it also means that people are threatened enough by it to defensively write it off without giving it a shot. That’s pretty much exactly where I’d hope to be.

There are also a number of folks pointing out that this idea has been done before, or is already being worked on — which, if you’re familiar with the microformats process, you’ll recognize as the wisdom of paving well-worn cow paths. In fact, as Tom Conrad from Pandora has said, it’s not about giving his listeners 100% of what they want (that’s ridiculous); it’s about moving the number of good songs from six to seven out of a set of eight. In other words, most people really don’t need a revolution, they just want a little more of what they already have, but with slight, yet appreciable, improvements.

Anyway, that’s all neither here nor there. I have a bunch of thoughts and not much time to put them down.

. . .

I’ve been thinking about mortality a lot lately, stemming from Marc Orchant’s recent tragic death and Dave Winer’s follow-up post, capped off with thinking about open data formats, permanence and general digital longevity (when I die, what happens to my digital legacy? to my OpenID? and so on).

Meanwhile, and on a happier note, I had the fortunate occasion to partake in the arrival of new life, something that, as an uncle of ~17 various nieces and nephews, I have some experience with.

In any case, these two dichotomies have been pinging around my brain like marbles in a jar for the past couple days, perhaps bringing some things into perspective.

. . .

Meanwhile, back in the Bubble, I’ve been watching “open” become the new bastard child of industry, its meaning stripped, its bite muzzled. The old corporate allergy to all things open has found a vaccine. And it’s frustrating.

Muddled up in between these thoughts on openness, permanence, and on putting my life to some good use, I started thinking about the work that I do, and the work that we, as technologists, do. And I find that term shallow now, especially in capturing my humanist tendencies. I don’t want to just be someone who is technologically literate and whose job it is to advise people on how to be more successful through its appropriate use. I want to create culture; I want to build civilization!

And so, to that end, I’ve been mulling over imposing a mandate on the DiSo Project that forces all contributions to be released into the public domain.

Now, there are two possible routes to this end. The first is to use a license compatible with Andrius Kulikauskas’ Ethical Public Domain project. The second is to follow the microformats approach, and use the Creative Commons Public Domain Dedication.

While I need to do more research into this topic, I’ve so far been told (by one source) that the public domain exists in murky legal territory and that perhaps using the Apache license might make more sense. But I’m not sure.

In pursuing clarity on this matter, my goals are fairly simple, and somewhat defiant.

For one thing, and speaking from experience, I think that the IPR processes for both OpenID and OAuth were wasteful efforts and demeaning to those involved. Admittedly, the IPR process is a practical reality that can’t be avoided, given the litigious way business is conducted today. Nor do I disparage those who were involved in the process, who were on the whole reasonable and quite rational; I only lament that we had to take valuable time to work out these agreements at all (I’m still waiting on Yahoo to sign the IPR agreement for OAuth, by the way). As such, by denying the creation of any potential IP that could be attached to the DiSo Project, I am effectively avoiding the need to later make promises asserting that no one will sue anyone else for actually using the technology that we co-create.

So that’s one.

Second, Facebook’s “open” platform and Google’s “open” OpenSocial systems diminish the usefulness of calling something “open”.

As far as I’m concerned, this calls for the nuclear option: from this point forward, I can’t see how anyone can call something truly open without resorting to placing the work firmly in the public domain. Otherwise, you can’t be sure and you can’t trust it to be without subsequent encumbrances.

I’m hopeful about projects like Shindig that call themselves “open source” and are able to be sponsored by stringent organizations like the Apache Software Foundation. But these projects are few and far between, and, should they grow to any size or achieve material success, inevitably they end up having to centralize, and the “System” (yes, the one with the big S) ends up channeling them down a path of crystallization, typically leading to the establishment of archaic legal institutions or foundations, predicated on being “host” for the project’s auto-created intellectual property, like trademarks or copyrights.

In my naive view of the public domain, it seems to me that this situation can be avoided.

We did it (and continue to prove out the model) with BarCamp — even if the Community Mark designation still seems onerous to me.

And beyond the legal context of this project, I simply don’t want to have to answer to anyone questioning why I or anyone else might be involved in this project.

Certainly there’s money to be had here and there, and it’s unavoidable and not altogether a bad thing; there’s also more than enough of it to go around in the world (it’s the lack of re-circulation that should be the concern, not what people are working on or why). In terms of my interests, I never start a project with aspirations for control or domination; instead I want to work with intelligent and passionate people — and, insomuch as I am able, enable other people to pursue their passions, demonstrating, perhaps, what Craig Newmark calls nerd values. So if no one (and everyone) can own the work that we’re creating, then the only reason to be involved in this particular instance of the project is because of the experience, and because of the people involved, and because there’s something rewarding or interesting about the problems being tackled, and that their resolution holds some meaning or secondary value for the participants involved.

I can’t say that this work (or anything else that I do) will have any widespread consequences or effects. That’s hardly the point. Instead, I want to devote myself to working with good people, who care about what they do, who hold out some hope and see validity in the existence of their peers, who crave challenge, and who feel accomplished when others share in the glory of achievement.

I guess when you get older and join the “adult world” you have to justify a lot more to yourself and to others. It’s a lot harder to peel off the posture of defensiveness and disbelief that come with age than to allow yourself to respond with excitement, with hope, with incredulity and wonder. But I guess I’m not so much interested in that kind of “adult world” and I guess, too, that I’d rather give all my work away than risk getting caught up in the pettiness that pervades so much of the good that is being done, and that still needs to be done, in all the many myriad opportunities that surround us.

The inside-out social network

Anne Zelenka of Web Worker Daily and GigaOM fame wrote me to ask what I meant by “building a social network with its skin inside out” when I was describing DiSo, the project that Steve Ivy and I (and now Will Norris) are working on.

Since understanding this change that I envision is crucial to the potential wider success of DiSo, I thought I’d take a moment and quote my reply about what I see as the benefits of a social network built inside-out:

The analogy might sound a little gruesome I suppose, but I’m basically making the case for more open systems in an ecosystem, rather than investing in or producing more closed-off or siloed systems.

There are a number of reasons for this, many of which I’ve been blogging about lately.

For starters, “citizen centric web services” will arguably be better for people over the long term. We’re in the toddler days of that situation now, but think about passports and credit cards:

  • your passport provides proof of provenance and allows you to leave home without permanently giving up your port of origin (equivalent: logging in to Facebook with your MySpace account to “poke” a friend — why do you need a full Facebook account for that if you’re only “visiting”?);
  • your credit/ATM cards are stored value instruments, making it possible for you to make transactions without cash, and with great convenience. In addition, while you should choose your bank wisely, you’re always able to withdraw your funds and move to a new bank if you want. This portability creates choice and competition in the marketplace and benefits consumers.

It’s my contention that, over a long enough time horizon, a similar situation in social networks will be better for the users of those networks, and that as reputation becomes portable and discoverable, who you choose to be your identity provider will matter. This is a significant change from the kind of temporariness ascribed by some social network users to their accounts today (see danah boyd).

Anyway, I’m starting with WordPress because it already has some of the building blocks in place. I also recognize that, as a white male with privilege, I can be less concerned about my privacy in the short term in order to prove out this model, and then, if it works, build in strong cross-silo privacy controls later on. (Why do I make this point? Well, because the network that might work for me isn’t one that will necessarily work for everyone, and so identifying this fact right now will hopefully help reveal and prevent any assumptions from being embedded in the privacy and relationships model early on.)

Again, we’re in the beginning of all this now and there’ll be plenty of ill-informed people crying wolf about not wanting to join their accounts, or have unified reputation and so on, but that’s normal during the course of an inversion of norms. For some time to come, it’ll be optional whether you want to play along of course, but once people witness and come to realize the benefits and power of portable social capital, their tune might change.

But, as Tara pointed out to me today, the arguments for data portability thus far seem predicated on the wrong value statement. Data portability in and of itself is simply not interesting; keeping track of stuff in one place is hard enough as it is, let alone trying to pass it between services or manage it all ourselves, on our own meager hard drives. We need instead to frame the discussion in terms of real-world benefits for regular people over the situation that we have today and in terms of economics that people in companies who might invest in these technologies can understand, and can translate into benefits for both their customers and for their bottom lines.

I hate to put it in such bleak terms, but I’ve learned a bit since I embarked on a larger personal campaign to build technology that is firmly in the service of people (it’s a long process, believe me). What developers and technologists seem to want at this point in time is the ability to own and extract their data from web services to the end of achieving ultimate libertarian nirvana. While I am sympathetic to these goals and see them as the way to arrive at a better future, I also think that we must account for those folks for whom Facebook represents a clean and orderly experience worth the exchange of their personal data: an experience that isn’t confounding or alienating and gives them at least the perception of strong privacy controls. And so whatever solutions we develop, I think the objective should not be to obviate Facebook or MySpace, but to build systems and to craft technologies that will benefit such sites and make them more sustainable and profitable, but only if they adopt the best practices and ideals of openness, individual choice and freedom of mobility.

As we architect this technology — keeping in mind that we are writing in code what we believe should be the rights of autonomous citizens of the web — we must also keep in mind the wide diversity of the constituents of the web, that much of this has been debated and discussed by generations before us, and that our opportunity and ability to impose our desires and aspirations on the future only grows with our successes in freeing, from the restraints that bind them, the current generation of wayward web citizens who have yet to be convinced that the vision we share will actually be an improvement over the way they experience “social networking” today.

OAuth 1.0, OpenID 2.0 and up next: DiSo

IIW 2007b is now over and with its conclusion, we have two significant accomplishments, both the sum of months of hard work by some very dedicated individuals, in the release of the OpenID 2.0 and OAuth Core 1.0 specifications.

These are two important protocols that serve as a foundation for enabling what’s being called “user-centric identity”, or what I call “citizen-centric identity”. With OpenID for identity and authentication and OAuth for authorizing access to portions of your private data, we move ever closer to inverting the silos and providing greater mobility and freedom of choice, restoring balance in the marketplace and elevating the level of competition by enabling the production of more compelling social applications without requiring the huge investment it takes to recreate even a portion of the available social graph.

It means that we now have protocols that can begin to put an end to the habit of treating users’ credentials like confetti, and instead can offer people the ability to get very specific about what they want to share with third parties. And what’s most significant here is that these protocols are open and available for anyone to implement. You don’t have to ask permission; if you want to get involved and do your customers a huge favor, all you have to do is support this work.
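
To give a flavor of how approachable implementing OAuth Core 1.0 is, here’s a minimal sketch of the HMAC-SHA1 request signing at its heart, in Node-style JavaScript. It’s illustrative only: the spec’s percent-encoding rules are stricter than encodeURIComponent, and real parameter normalization must also handle duplicate keys.

```js
// A sketch of OAuth 1.0 HMAC-SHA1 signing (simplified; see the spec for
// the exact percent-encoding and normalization rules).
var crypto = require('crypto');

function signRequest(method, url, params, consumerSecret, tokenSecret) {
  // 1. Normalize the request parameters: sort by name, then join.
  var normalized = Object.keys(params).sort().map(function (key) {
    return encodeURIComponent(key) + '=' + encodeURIComponent(params[key]);
  }).join('&');

  // 2. Build the signature base string: METHOD & URL & params.
  var baseString = [
    method.toUpperCase(),
    encodeURIComponent(url),
    encodeURIComponent(normalized)
  ].join('&');

  // 3. The signing key is consumer secret & token secret.
  var key = encodeURIComponent(consumerSecret) + '&' +
            encodeURIComponent(tokenSecret || '');

  // 4. The base64-encoded HMAC-SHA1 digest becomes oauth_signature.
  return crypto.createHmac('sha1', key).update(baseString).digest('base64');
}
```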

To put my … time? … where my mouth is (I haven’t got a whole lot of money to put there) … Steve Ivy and I have embarked on a prototype project to build a social network with its skin inside out. We’re calling it DiSo, or “Distributed Social Networking applications”. The emphasis here is on “distributed”.

In his talk today on Friends List Portability, Joseph Smarr laid out an important set of roles that help to clarify how pieces of applications should be architected:

  • First of all, people have contact details like email addresses, webpage addresses (URLs), instant messaging handles, phone numbers… and any number of these identifiers can be used to discover someone (you do it now when you import your address book to a social networking site). In the citizen-centric model of the world, it’s up to individuals to maintain these identifiers, and to be very intentional about who they share their identifiers with;
  • Second, the various sites and social networks you use need to publish your friends and contacts lists in a way that is publicly accessible and machine readable (fortunately XFN does well there). This doesn’t mean that your friends list will be exposed for all the world to see; using OAuth, you can limit access to pieces of your personal social graph, but the point is that it’s necessary for social sites to expose, for your reuse, the identifiers of the people that you know (see the sketch just after this list).
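
To sketch what that exposure enables, here’s how a consuming application might harvest the XFN contacts published on a page with a few lines of jQuery. The rel values come from XFN itself; the selector strategy and sample output are just illustration:

```js
// Collect the XFN contacts a page exposes: any link whose rel attribute
// contains "contact" or "friend" points at someone the author knows.
var contacts = jQuery('a[rel~="contact"], a[rel~="friend"]').map(function () {
  return { name: jQuery(this).text(), url: this.href };
}).get();

// contacts is now an array like:
// [{ name: "A. Friend", url: "http://example.org/" }, ...]
```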

With that in mind, Steve and I have started working on a strawman version of this idea by extending my wp-microformatted-blogroll plugin, renaming it to wp-contactlist and focusing on how, at a blog level, we can expose our own contact list beyond the realm of any large social network.

Besides this, we’re doing some interesting magic that would be useful for whitelisting and cross-functional purposes, like those proposed by Tim Berners-Lee. Except our goal is to implement these ideas in more humane HTML using WordPress as our delivery vehicle (note that this project is intended to be an example whose concepts should be able to be implemented on any platform).

So anyway, we’re using Will Norris’ wp-openid plugin, and when someone leaves a comment on one of our blogs using OpenID, and their OpenID happens to be in our blogroll already, they’ll be listed in our respective blogrolls with an OpenID icon and a class on the link indicating that, not only are they an XFN contact, but that they logged into our blog and claimed their OpenID URL as an identifier. With this functionality in place, we can begin to add permissioning functionality, where other people might subscribe to my blogroll as a source of trusted commenters, or even to find identifiers for people who could be trusted to make typographic edits to blog posts.

With the combination of XFN and OpenID, we begin to be able to establish distributed trust meshes through the exposure of personal social graphs. As more people sign in to my blog with OpenID and leave approved comments, I can migrate them to my public blogroll, allowing others to benefit from the work I’ve done evaluating whether a given identifier might be a spam emitter. Over time, my reliability in selecting and promoting trustworthy identifiers becomes a source of social capital accrual and you’ll want to get on my list, demonstrating the value of playing the role of identity provider more widely.
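
To illustrate the subscription idea, here’s a hypothetical sketch (not part of the plugin; the blogroll URL and the “openid-claimed” class name are made up, and same-origin restrictions are ignored for simplicity) of another blog fetching my blogroll and treating the claimed identifiers there as a commenter whitelist:

```js
// Hypothetical: fetch a trusted blogroll and check a commenter's OpenID
// URL against the identifiers marked there as claimed-by-login.
function checkCommenter(openidUrl, callback) {
  jQuery.get('http://example.com/blogroll/', function (html) {
    var trusted = jQuery(html).find('a.openid-claimed').map(function () {
      return this.href;
    }).get();
    callback(trusted.indexOf(openidUrl) !== -1); // true if whitelisted
  });
}
```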

This will lead us towards the development of other DiSo applications, which I’ve begun mapping out as sketches on my wiki but that I think we can begin to discuss on the DiSo mailing list.

Data portability and thinking ahead to 2008

So-called data portability and data ownership are hot topics of late, and with good reason: with all the talk of the opening of social networking sites and the loss of presumed privacy, there’s been a commensurate acknowledgment that the value is not in the portability of widgets (via OpenSocial et al) but instead, as Tim O’Reilly eloquently put it, it’s the data, stupid!

Now, Doc’s call for action is well timed, as we near the close of 2007 and set our sights on 2008.

Earlier this year, ZDNet predicted that 2007 would be the year of OpenID, and for all intents and purposes, it has been, if only in that it put the concept of non-siloed user accounts on the map. We have a long way to go, to be sure, but with OpenID 2.0 around the corner, it’s only a matter of time before building user prisons goes out of fashion and building OpenID-based citizen-centric services becomes the norm.
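
For a sense of how approachable the protocol is, the core of an OpenID 2.0 authentication request boils down to a handful of query parameters sent to the provider. A sketch, with placeholder values:

```js
// The essential parameters of an OpenID 2.0 checkid_setup request.
// All URLs below are placeholders.
var authRequest = {
  'openid.ns':         'http://specs.openid.net/auth/2.0',
  'openid.mode':       'checkid_setup',
  'openid.claimed_id': 'http://example.org/',            // the user's identifier
  'openid.identity':   'http://example.org/',
  'openid.return_to':  'http://example.com/openid/done', // where the provider sends the user back
  'openid.realm':      'http://example.com/'             // the site requesting authentication
};
```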

Inspired by the fact that even Mitchell Baker of Mozilla is talking about Firefox’s role in the issue of data ownership (“In 2008 … We find new ways to give people greater control over their online lives — access to data, control of data…”), I think this is going to be the issue that most defines 2008 — or at least the early part of the year. And frankly, we’re already off to a good start. So here are the things that I think fit into this picture, and what needs to happen to push progress on this central issue:

  • Economic incentives and VRM: Doc is right to phrase the debate in terms of VRM. When it comes down to it, nothing’s going to change unless 1) customers refuse to play along anymore and demand change and 2) there’s increased economic benefit to companies that give back control to their customers versus those companies that continue to either restrict or abuse/sell out their customers’ data. Currently, this is a consumer rights battle, but since it’s being fought largely in Silicon Valley, where the issues are understood technically while valuations are tied to the attractiveness a platform has to advertisers, consumers are at a great disadvantage, since they can’t make a compelling economic case. And given that the government and most bureaucracy is filled up with stakeholders who are hungry for more and more accurate and technologically-distilled demographic data, it’s unlikely that we could force the issue through the legal system, as has been approximated in places like Germany and the UK.
  • Reframing of privacy and access permissions: I’ve harped on this for a while, but historic notions of privacy have been outmoded by modern realities. Those who do expect complete and utter control need to take a look at the up-and-coming generation and realize that, on the whole, they don’t appreciate the value and sacredness of their privacy, that they’re certainly more willing to exchange it for access to services or to simply dispense with it altogether and face the consequences later (eavesdroppers be damned!), and that their apathy indicates the uphill struggle we face in credibly making our case.

    Times have changed. Privacy and our notions of it must adapt too. And that starts by developing the language to discuss these matters in a way that’s obvious and salient to those who are concerned about these issues. Simply demanding the protection of one’s privacy is now a hollow and unrealistic demand; now we should be talking about access, about permissions, about provenance, about review and about federation and delegation. It’s not until we deepen our understanding of the facets of identity, of personal data and of personal profiles, tastestreams and newsfeeds that we can begin to make headway on exploring the economic aspects of customer data and who should control it and have access to it.

  • Data portability and open/non-proprietary web standards and protocols: Since this is an area I’ve been involved in and am passionate about, I have some specific thoughts on this. For one thing, the technologies that I obsess over have data portability at their center: OpenID for identification and “hanging” data, microformats for marking it up, and OAuth for provisioning controlled access to said data… The development, adoption and implementation of this breed of technologies is paramount to demonstrating both the potential and need for a re-orientation of the way web services are built and deployed today. Without the deployment of these technologies and their cousins, we risk web-wide lock-in to vendor-specific solutions like Facebook’s FBML or Google’s OpenSocial, greatly inhibiting the potential for market growth and innovation. And it’s not so much that these technologies are necessarily bad in and of themselves, but that they represent a grave shift away from the slower but less commercially-driven development of open and public-domained web standards. Consider me the frog in the lukewarm water recognizing that things are starting to get warm in here.
  • Citizen-centric web services: The result of progress on these three topics is what I’m calling the “citizen-centric web”, where a citizen is anyone who inhabits the web, in some form or another. Citizen-centric web services are, of course, services provided to those inhabitants. This notion is what I think is going to (and should) drive much of the thinking in 2008 about how to build better citizen-centric web services: where individuals identify themselves to services, rather than recreating themselves and their so-called social graph; where they can push and pull their data at their whim and fancy, and where such data is essentially “leased” out to various service providers on an as-needed basis, rather than on a once-and-for-all basis, using OAuth tokens and proxied delegation to trusted data providers; where citizens control not only who can contact them, but are able to express, in portable terms, a list of people or companies who cannot contact them or pitch ads to them, anywhere; where citizens are able to audit a comprehensive list of profile and behavior data that any company has on file about them and to correct, edit or revoke that data; where “permission” has a universal, citizen-positive definition; where companies have to agree to a Creative Commons-style Terms of Access and Stewardship before being able to even look at a customer’s personal data; and, perhaps most important to making all this happen, where sound business models are developed that actually work with this new orientation, rather than in spite of it.

So, in grandiose terms I suppose, these are the issues that I’m pondering as 2008 approaches and as I ready myself for the challenges and battles that lie ahead. I think we’re making considerable progress on the technology side of things, though there’s always more to do; we need to make more progress on the language, economic, business and framing fronts. But we’re making progress, and thankfully we’re having these conversations now and developing real solutions that will result in a more citizen-centric reality in the not too distant future.

If you’re interested in discussing these topics in depth, make it to the Internet Identity Workshop next week, where these topics are going to be front and center in what should be a pretty excellent meeting of the minds on these and related topics.

WP-Imagefit proportionally resizes images to fit your blog template

I’m happy to announce the release of my second ever WordPress plugin, called WP-Imagefit. (My first, which I’ve neglected for some time, is called WP-Microformatted-Blogroll.)

WP-Imagefit is extremely simple and serves one purpose: to get images in blog posts to fit inside the columns that contain them. In fact, this plugin is used on this blog, so if you ever see images load wider than the column and then quickly snap to fit the container’s width, it’s this plugin that’s doing that.

I originally discovered this trick thanks to Oliver Boermans’ Ollicle Reflex style for NetNewsWire. He extracted the resizing code into a jQuery plugin called jquery.imagefit.js and made it available to me for use in my EasyReader NetNewsWire theme.

I had hacked it to work for my blog theme but decided that I should turn it into a WordPress plugin so I could use it elsewhere (and given that CSS’s max-width property not only wasn’t cross-browser, but also shrank images horizontally, distorting them, I needed a better solution). So, there you have it.
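
For the curious, the underlying technique looks roughly like this. This is a sketch of the idea rather than the plugin’s actual source, and the “.entry img” selector is an assumption about the theme’s markup:

```js
// Proportionally shrink any post image that is wider than its container,
// preserving the aspect ratio (the new height is computed before the
// width changes).
jQuery(function ($) {
  $('.entry img').each(function () {
    var img = $(this);
    var fit = function () {
      var max = img.parent().width();
      if (img.width() > max) {
        img.height(Math.round(img.height() * max / img.width()));
        img.width(max);
      }
    };
    // Images may still be loading when the script runs.
    if (this.complete) { fit(); } else { img.on('load', fit); }
  });
});
```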

Go ahead and download it. Installation and setup are standard as long as you have an -compliant theme like K2 or .

I have a WordPress.org project page set up, the source is available (released under the GPL), and if you want something to look at, here’s the official homepage.

Feedback/feature requests/patches certainly appreciated and encouraged!

hCard for OpenID Simple Registration and Attribute Exchange

Microformats + OpenID

One of the many promises of OpenID is the ability to more easily federate profile information from an identity provider to secondary consumer applications. A simple example of this is when you fill out a basic personal profile on a new social network: if you’ve ever typed your name, birthday, address and your dog’s name into a form, why on earth would you ever have to type it again? Isn’t that what computers are for? Yeah, I thought so too, and that’s why OpenID — your friendly neighborhood identity protocol — has two competing solutions for this problem.

Unfortunately, both Simple Registration (SREG for short) and its potential successor Attribute Exchange (AX) go to strange lengths to reinvent the profile data wheel with their own exclusive formats. I’d like to propose a vast simplification of this problem by adopting attributes from the vCard standard (better known on the web in its microformat form, hCard). As you probably know, it’s this standard that defines the attributes of the hCard microformat, and it is therefore exceptionally appropriate for the case of OpenID (given its reliance on URLs for identifiers!).

Very quickly, I’ll compare the attributes in each format so you get a sense for what data I’m talking about and how each format specifies the data:

Attribute     SREG                   AX                        vCard
Nickname      openid.sreg.nickname   namePerson/friendly       nickname
Full Name     openid.sreg.fullname   namePerson                n (family-name, given-name) or fn
Birthday      openid.sreg.dob        birthDate                 bday
Gender        openid.sreg.gender     gender                    gender
Email         openid.sreg.email      contact/email             email
Postal Code   openid.sreg.postcode   contact/postalCode/home   postal-code
Country       openid.sreg.country    contact/country/home      country-name
Language      openid.sreg.language   pref/language             N/A
Timezone      openid.sreg.timezone   pref/timezone             tz
Photo         N/A                    media/image/default       photo
Company       N/A                    company/name              org
Biography     N/A                    media/biography           note
URL           N/A                    contact/web/default       url
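
To show how thin the glue could be, here’s the table above expressed as a JavaScript mapping. It assumes some microformats parser has already turned a page’s hCard into a plain object keyed by vCard property names:

```js
// Map a parsed hCard onto SREG response fields, per the table above.
// The hcard argument is assumed to come from a microformats parser.
function hcardToSreg(hcard) {
  return {
    'openid.sreg.nickname': hcard.nickname,
    'openid.sreg.fullname': hcard.fn,
    'openid.sreg.dob':      hcard.bday,
    'openid.sreg.gender':   hcard.gender,
    'openid.sreg.email':    hcard.email,
    'openid.sreg.postcode': hcard['postal-code'],
    'openid.sreg.country':  hcard['country-name'],
    'openid.sreg.timezone': hcard.tz
    // Language has no vCard equivalent, per the table.
  };
}
```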

Now, there are a number of other attributes beyond what I’ve covered here that AX and vCard cover, often overlapping in the data being described, but not in attribute names. Of course, this creates a pain for developers who want to do the right thing and import profile data rather than having folks repeat themselves yet again. Do you support SREG? AX? hCard? All of the above?

Well, regular people don’t care, and shouldn’t care, but that doesn’t mean that this is a trivial matter.

Instead, the barriers to doing the right thing should be made as low as possible. Given all the work that we’ve done promoting microformats and their subsequent widespread adoption (I’d venture that there are many millions of hCards in the wild), it only makes sense to leverage the upward trend in the adoption of hCard and to use it as a parallel partner in the OpenID-profile-exchange dance.

While AX is well specified, hCard is well supported (as well as well-specified). While AX is extensible, hCard is too. Most importantly, however, hCard and vCard are already widely supported in existing applications like Outlook, Mail.app and just about every mobile phone on the planet, and in just about any other application that exchanges profile data.

So the question is, when OpenID is clearly a player in the future and part of that promise is about making things easier, more consistent and more citizen-centric, why would we go and introduce a whole new format for representing person-detail-data when a perfectly good one already exists and is so widely supported?

I’m hoping that reason and sense will prevail and that OpenID and hCard will naturally come together as two essential building blocks for independents. If you’re interested in this issue, please blog about it or express your support to specs@openid.net. I think the folks working on this spec need to hear from folks who make, create, build or design websites and who recognize the importance and validity of betting on, supporting and adopting microformats.

Site-specific browsers and GreaseKit

GreaseKit - Manage Applications

There’s a general class of applications that’s been gaining some traction lately in the Mozilla communities, built on a free-standing framework called XULRunner.

The idea is simple: take a browser, cut out the tabs, the URL bar and all the rest of the window chrome and instead load one website at a time. Hence the colloquial name: “site specific browser”.

Michael McCracken deserves credit for inspiring interest in this idea early on with his Webmail app, which in turn led to Ben Willmore’s Gmail Browser application. Both literally took WebKit — Apple’s open source rendering engine (analogous to Mozilla’s Gecko) — had it load Gmail.com, and released their apps. It doesn’t get much more straight-forward than that.

For my own part, I’ve been chronicling the development of Site-Specific Browsers from the beginning, setting up the WebKit PBWiki and releasing a couple of my own apps, most recently Diet Pibb. I have a strong belief that a full-featured rendering engine coupled with a few client-side tweaks is the future of browsers and web apps. You can see hints of this in Dashboard Widgets and Adobe’s AIR framework already, though the former’s launching flow conflicts with the traditional “click an icon in my dock to launch an application” design pattern.

Anyway, in developing my Site-Specific Browsers (or Desktop Web Apps?), I became enamored with an input manager for Safari called Creammonkey that allows you to run Greasemonkey scripts inside of Safari (ignore the name — Kato, the developer, is Japanese and English is his second language). An input manager works by matching a particular application’s .plist identifier and injecting its code (via SIMBL) into the application at run-time, effectively becoming part of the host application, in turn giving it access to all the application’s inner workings. When it comes to a rendering engine, it’s this kind of injection that allows you to then inject your own CSS or Javascript into a webpage, allowing you to make whatever modifications you want.
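
For reference, a Greasemonkey script is just JavaScript with a small metadata header. Here’s a toy example of the kind of site-specific hack such injection enables; the @include pattern and the CSS selector are made up for illustration:

```js
// ==UserScript==
// @name        Example Gmail tweak
// @namespace   http://example.com/
// @include     http://mail.google.com/*
// ==/UserScript==

// Inject a stylesheet into the target page; the same trick works for
// arbitrary DOM surgery. (The selector below is made up.)
var style = document.createElement('style');
style.type = 'text/css';
style.textContent = '.gmail-logo { display: none; }';
document.getElementsByTagName('head')[0].appendChild(style);
```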

This is what Creammonkey did for Safari. And thank god it was open source or else we never would have ended up with today’s release of the successor to Creammonkey called GreaseKit.

Let me step back a little.

When I found out about Creammonkey, I contacted Kato Kazuyoshi, the developer, and told him how excited I was about what he had created.

“But could I use this on Site-Specific Browsers?” I wanted to know.

In broken English he expressed his uncertainty and so I went about hacking it myself.

I ended up with a crude solution where I would recompile Creammonkey and aim it at a different application every time I wanted to make use of a different Greasemonkey script. It was tedious and didn’t really work the way I envisioned, but given my meager programming skills, it demonstrated the idea of Site-Specific Browsers with Site-Specific Hacks.

I called this collection of site-specific scripts with a properly crafted Input Manager a “GreaseKit” and let the idea sit for a while.

Some time later, Kato got in touch with me and we talked about rereleasing Creammonkey with the functionality that I envisioned and a new name. Today he released the results of that work and called it GreaseKit.

I can’t really express how excited I am about this. I suspect that the significance of this development probably won’t shake the foundations of the web, but it’s pretty huge.

For one thing, I’ve found comparable solutions (Web Runner) clunky and hard to use. In contrast, I’m able to create stand-alone WebKit apps in under 2 minutes, with native Apple menus and all the fixins, using a template that Josh Peek made. No, these apps aren’t cross-platform (yet), but what I lose in spanning the Linux/PC divide, I gain in the use of Xcode, Apple’s development environment, and frankly a faster rendering engine. And, as of today, the use of Greasemonkey scripts centrally managed for every WebKit app I develop.

These apps are so easy to make and so frigging useful that people are actually building businesses on them. Consider Mailplane, Ruben Bakker’s Gmail app. It’s only incrementally better than McCracken’s WebMail or Willmore’s Gmail Browser, but he’s still able to charge $25 for it (which I paid happily). Now, with GreaseKit in the wild, I can add all my favorite Greasemonkey scripts to Mailplane — just like I might have with Firefox — but if Gmail causes a browser crash, I only lose Mailplane, rather than my whole browser and all the tabs I have open. Not to mention that I can command-tab to Mailplane like I can with Mail.app… and I can drag and drop a file onto Mailplane’s dock icon to compose a new email with an attachment. Just add offline storage (inevitable now that WebKit supports HTML5 client-side database storage) and you’ve basically got the best of desktop, web and user-scriptable applications all in one lightweight package. I’ll have more to write on this soon, but for now, give GreaseKit a whirl (or check the source) and let me know what you think.