Portable contact lists and the case against XFN

I suppose it might come as a surprise that I’ve decided to question, if not reject, XFN as the format for expressing portable friends or contact lists. I’m not throwing the baby out with the bathwater here, but rather focusing on the problem that needs to be solved and choosing to redouble my efforts on an elegant solution that builds on existing work and implementations.

My thinking on this crystallized yesterday during the Building Portable Social Networks panel that I shared with Jeremy Keith, Leslie Chicoine, Joseph Smarr and David Recordon. I refined my realization further last night on Twitter, and when Anders Conbere pinged me about a post he’d written more or less on the subject, I knew that I was on to something.

The idea itself is pretty simple, but inasmuch as it both reduces complexity and narrows the scope of the evangelism work needed to push for further adoption, I think the change is a necessary one.

→ Quite simply, contact list portability can be achieved with only rel-contact and rel-me. All the rest is gravy.
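In markup terms, that’s nothing more than two rel values on ordinary hyperlinks. Here’s a minimal sketch of what a profile page might publish (the URLs are placeholders):

```html
<!-- people I know: the entire portable contact list -->
<a href="http://example.com/jane" rel="contact">Jane</a>
<a href="http://example.org/anders" rel="contact">Anders</a>

<!-- my own profiles elsewhere: identity consolidation via rel-me -->
<a href="http://twitter.com/example" rel="me">me on Twitter</a>
```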

Here’s the deal: as it is, we have a pretty nasty anti-pattern that a number of us have been railing against for some time (and, as it turns out, with good friggin’ reason). As I pointed out on the panel yesterday, people shouldn’t be penalized for the fact that the technology allows them to be promiscuous with their account credentials; after all, their desire to connect with people that they know is a valid one and has been shown to increase engagement on social sites. The problem is that, heretofore, importing your list of contacts from various webmail address books required you to provide your account credentials to an untrusted third party. On top of that, your contact list is delivered as email addresses, which I call “resource deficient” (what else can you do with an email address but send messages to it or use it as a key to identify someone? URLs are much richer).

The whole mechanism for bringing your friends with you to new social sites is broken.

Enter microformats and XFN

The solution we’ve been harping on for the last couple of years is a web-friendly way of marking up existing and (predominantly) public lists of friends, using 18 pre-defined rel values. WordPress supports XFN natively, which is one of the primary reasons we started with WordPress as the foundation of the DiSo Project:

[Screenshot: WordPress’s Add Link screen]

Reading up on the background of XFN, you realize that one of its primary goals was simplicity. Simplicity is relative, however, and you have to remember that XFN’s simplicity was in contrast to FOAF, a much denser and more complex format based on RDF.

Given all the values (that is, the existing XFN terms) and the general semantic specificity of XFN, I decided to contrast the adoption of XFN by publishers and consumers with the competing (and more ubiquitous) solution for contact list portability (i.e. email address import).

If you use Google’s new Social Graph API and actually go looking for XFN data (for example, on Twitter or Flickr or others), you’ll find that, by and large, the majority of XFN links on the web are using either rel-contact or rel-me.
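If you want to poke at this yourself, a lookup is a single HTTP GET. The sketch below is my rough reconstruction of such a query: the endpoint and the edo (“edges out”) parameter are taken from the Social Graph API docs as I remember them, and the response keys (nodes, nodes_referenced, types) are assumptions to be checked against actual output, not gospel.

```python
import json
from collections import Counter
from urllib.request import urlopen

# Ask Google's Social Graph API for the XFN/FOAF edges it crawled
# out of a public profile page. 'q' is the URL to look up;
# 'edo=1' requests the edges pointing *out* of that node.
url = ("http://socialgraph.apis.google.com/lookup"
       "?q=http://twitter.com/factoryjoe&edo=1")

with urlopen(url) as resp:
    data = json.load(resp)

# Tally which rel values actually show up in the wild.
rels = Counter()
for node in data.get("nodes", {}).values():
    for edge in node.get("nodes_referenced", {}).values():
        rels.update(edge.get("types", []))

print(rels.most_common())  # overwhelmingly 'contact' and 'me'
```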

If you’re lucky, you might find some rel-friends in there, but after rel-me and rel-contact, the use of the other 16 terms falls off considerably. Compound that fact with the minor semantic distinction between “contacts” and “friends” on different sites (sites like Dopplr dispense with these terms altogether, opting for “fellow travelers”) and you quickly begin to wonder whether the “semantic richness” of XFN is really just “semantic deadweight”.

And, in terms of evangelism and potential adoption, this is critical. If 16 of the 18 XFN terms are just cruft, how can we maintain our credibility, especially when arguing against the email import approach, in which there are few to no semantic descriptors at the time of import (instead, you basically get a dumb list of email addresses — with no clues whatsoever as to which addresses are “sweethearts”, “crushes”, “kin” or the like)? It’s not that XFN in and of itself is bad; it’s that, when compared with the reigning tactic of email import, we look as complicated and convoluted as FOAF did. The reality is, even if it’s “heinous” to data purists or pragmatists, email import works today, and what works, wins.

Defining Contact List Portability

The more I talk to Leslie (of Satisfaction), the more sensitive I become to the language that we use when we talk about the technologies that we work on. I mean, what the fuck is an “XFN”? Even “social network portability” probably causes rational people to break out in hives when they hear the phrase (not like we’ve hit mainstream or anything). I mean, from a usability perspective, the words we use to describe this stuff are about as usable as Drupal was five years ago (zing!). I can only imagine what goes through most people’s heads when we technologists open our mouths.

So, I’m not advocating ditching XFN altogether; on the contrary, compared with FOAF, I think we’ve achieved a great deal of mindshare, at least in gaining the support of technologists who work on fairly large social sites (though that’s apparently being disputed). The next stage of the process should be to simplify, and to focus on what people are already doing and on what’s working. If we simply want to defeat the email import approach (which I think is a good idea, albeit with the caveat that we still need a notification mechanism — perhaps something easily ignorable like Facebook-app invites?), then I think we need to consolidate our efforts on rel-contact and rel-me and let people discover (and optionally implement) the remaining 16 values if they’re bored. Or have free time. As far as I’m concerned, those values offer little to no actual utility when it comes to contact list portability.

So, to the definition of contact list portability: I would suggest that it’s the ability to take a list of identifiers (read: URLs, formerly email addresses) that represent people you know and connect with them in a new context (bonus points if by “taking” you read that as “subscribing” (but not “syncing”)).

This is consistent with Joseph’s Practical Vision for Friends-List Portability. Importantly, it also ignores the non-overlapping problems of groupings/relationship semantics and permissioning (things which should not be conflated!).
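To make the “subscribing” half concrete, here’s a minimal consumer sketch, assuming a hypothetical profile URL and nothing beyond the Python standard library: fetch the page, pull out the rel-contact and rel-me links, and you have a portable contact list.

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class XFNParser(HTMLParser):
    """Collect hrefs whose rel attribute contains 'contact' or 'me'."""
    def __init__(self):
        super().__init__()
        self.contacts, self.me = [], []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href = attrs.get("href")
        rels = (attrs.get("rel") or "").split()
        if not href:
            return
        if "contact" in rels:
            self.contacts.append(href)
        if "me" in rels:
            self.me.append(href)

# Hypothetical profile URL, for illustration only.
parser = XFNParser()
with urlopen("http://example.com/profile") as resp:
    parser.feed(resp.read().decode("utf-8", errors="replace"))

print("contacts:", parser.contacts)     # people to (re)connect with
print("my other profiles:", parser.me)  # identity consolidation
```

No account credentials change hands, and because the identifiers are URLs rather than email addresses, a new site can re-fetch the list and pick up changes instead of importing a stale snapshot.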

What’s next

Kveton agrees with me; Recordon dissents, wanting more extensibility.

I get Dave’s point, but before we worry about extensibility, we have to look at which minimal bits of XFN are actually being picked up. By only specifying that an outgoing link is either a “contact” or “another link of mine”, we greatly reduce the cognitive tax of grokking the problem that XFN set out to solve and minimize the implementation tax of rolling out the necessary logic and template changes. Ultimately, it also simplifies the dataset, and pushes the semantics of relationships deeper into applications, where I’d argue they belong (again, looking at the Dopplr model as well as Pownce (friends, fans, fan of) and Twitter (following, followers)). While the other 16 XFN values are certainly not off limits, their marginal value is negligible compared with the cost of explaining why anyone should care about them (let alone understand them — e.g. “muse”??). And, compared with email addresses as identifiers, URLs are definitely the future.

So, with that, I’m no longer going to bother with advocating for the complete adoption of XFN. Instead, I’m going to advocate for supporting Contact List Portability by implementing rel-me and rel-contact (a “subset” of XFN). And that’s it.

This won’t solve the problems that Anders is talking about, but I think it’s a radical simplification that’s long overdue in the effort towards social network portability.

After Social Graph FOO Camp — and a challenge for the Data Portability Group

This past weekend I attended a topic-specific FOO Camp called Social Graph FOO Camp, organized by Scott Kveton and David Recordon (or ray-chor-dohn, according to Larry).

Scott’s write up is pretty complete, but I wanted to call out one specific outcome that I think is worth noting.

We had a significant discussion on data portability and about the activities, responsibilities and opportunities of and for the eponymous group, which has recently generated much hype and buzz but little (as far as I’ve seen) clarity or cogent strategy for advancing its expansive charter:

The purpose of this project is to put all existing data portability technologies and initiatives in context and to promote viable reference implementations (blueprints) to the developer, vendor, and end-user communities.

The frustration over the minimal barrier to “becoming a member” of the group (you simply have to sign up for a mailing list) and over the focus on large vendors without advancing an agenda with teeth and clearly defined metrics for success was palpable. But so was the desire to make some progress and, if not to come to complete agreement, at least to identify the concerns shared by the majority of us and perhaps develop a strategy to deflate the hype to date and get the group moving in a productive direction.

My suggestion was to emulate the work that Tara and I have been doing on the Open Media Web project, which developed out of our work with Songbird, where we could sense that there was a real opportunity to explore but didn’t yet have a clear picture of either the space as it was understood by lead users and experts or the outcomes that needed to be advocated. So rather than diving in and promoting technologies or tactics before we had identified the opportunities, challenges and boundaries of the problem domain, we decided to pursue an investigatory strategy, starting with a series of meetups, blog posts and interviews that might help us flesh out the actors, ideas and conversations already ongoing in the space.

The result of my proposal is captured in this post by Chris Saad to the Data Portability mailing list. I think this is a positive step, and one that I hope will give Data Portability some direction and good work to do over the next several weeks and months. Before this project gets underway, however, I’d like to go a step further and flesh out my thinking.

  1. These interviews should really be conducted assassin-style (as I like to say), where someone (probably Chris Saad) goes to each major vendor represented (and pimped) by the group (e.g. Google, Facebook, Plaxo, Microsoft, LinkedIn, Flickr, Six Apart, MyStrands, et al) and solicits written (or video) answers to the same five or six questions. Each of these interviews should subsequently be posted to the data portability blog over a series of months.
  2. The goal of these ongoing interviews should be to discover, primarily: 1) why these companies joined the group and what their goals are; 2) what they mean when they say “data portability”; 3) what challenges they face in offering their vision of data portability at their companies; 4) what they see as the greatest benefits of data portability; 5) what they are doing (if anything) to promote and advance data portability within their organizations; and 6) what technologies they have implemented (or plan to implement in the next six months) in support of data portability. From these answers, I think we can start to recognize trends in the headspace of large social networking sites, as well as begin to call out certain technologies that might be worth picking up and evangelizing, especially in the interest of interop between multiple parties’ sites.
  3. As such, advocacy of any particular technological solution by the data portability group should be immediately abandoned until further research and exploration has occurred. While I was happy to see my favorite stable of technologies listed on the group’s homepage in the early days, I now realize that technology is not the hard part; it’s actually the politics, the policies, the usability, and the impact on and perception of the individual data owners that are the first-order priorities. Without beginning to address issues in those areas first, the technology conversation will never occur.
  4. In terms of timing, I think that the data portability group has come along more or less at the right time, but that it’s actually walking into the problem ass-backwards. What we don’t need right now is a lot of hype and glorification of an abstruse notion of data portability. In fact, data portability by itself is currently meaningless and intangible; without good examples of how it can be applied to make things better for companies’ customers, there will never be an economic imperative to move in this direction (I should point out that data portability is interesting to me because increased customer choice is interesting to me, and because competition in the space benefits the customers of such services). For a timely example of a positive case where data portability is making a difference, consider the ability to move your bookmarks from del.icio.us to Ma.gnolia in light of Microsoft’s looming acquisition bid for Yahoo!. Surely there are other equally beneficial applications of data portability, and building out these use cases in terms of end-user benefit is critical to continuing to make the case for data portability with credibility.

So anyway, I do believe that there is an opportunity here, and Chris Saad is correct that getting a number of the prominent players in this arena to come to the table on this topic is a feat; however, simply bringing them together without engaging with the gnarly problems and policies that have kept data portability from becoming a reality could bring more confusion and angst than benefit. Deflating the hype and going back to humble beginnings and simple questions is, in my not-so-humble opinion, the appropriate and most effective way forward. Data portability is still not obvious to most people or most companies — heck, the technologies that enable it are barely out of their 1.0 and 2.0 phases — and still this topic is one that captures people’s imaginations and lets them imagine countless “what if” scenarios that seem, somehow, just around the corner. Data portability is a critical topic, and with the advances in the state of the conversation we had over the weekend, I’m eager to see the members of the data portability group pick up the ball and keep moving it forward.

So, if this topic is something that interests you, I recommend you blog about it, talk about it, interpret it and really take some time to consider what data portability means to you, and why it matters (or doesn’t) to you. Larry, Matt Biddulph of Dopplr and I rapped about this stuff some more on our Citizen Garden podcast today, so if you’re looking for more information, ideas or fodder, you might go ahead and give it a listen.

The Existential DiSo Interview

The Existential DiSo Interview from Chris Messina on Vimeo.

Here’s what I asked myself:

how are you?

we’re going to talk about diso today? is that right?

what is diso?

you say it’s a social network, so how would it work with wordpress?

how is this different from myspace or facebook?

so who’s involved in this project?

so what comes next?

how is this different than opensocial?

what’s going to be the big win for diso?

so do you see this model applying in any other domain on the web?

what kind of support do you need?

are you talking to any of the bigger social networks? like facebook or myspace?

so who cares?

how will you draw customers away from myspace or facebook?

any last thoughts?

The problem with open source design

I’ve probably said it before and will say it again, and I’m also sure that I’m not the first, or the last, to make this point, but I have yet to see an example of an open source design process that has worked.

Indeed, I’d go so far as to wager that “open source design” is an oxymoron. Design is far too personal, and too subjective, to be given over to the whims and outrageous fancies of anyone with eyeballs in their head.

Call me elitist in this one aspect, but with all due respect to code artistes, it’s quite clear whether a function computes or not; the same quantifiable measures simply do not exist for design and that critical lack of objective review means that design is a form of Art, and its execution should be treated as such.

Fluid, Prism, Mozpad and site-specific browsers

Matt Gertner of AllPeers wrote a post the other day titled “Wither Mozpad?” In it he questions the enduring viability of Mozpad, an initiative begun in May to bring together independent Mozilla Platform Application Developers and fill the vacuum left by Mozilla’s Firefox-centric developer programs.

Now, many months after its founding, the group is still without a compelling raison d’être, and has failed to mobilize or catalyze widespread interest or momentum. Should the fledgling effort be disbanded? Is there not enough sustaining interest in independent, non-Firefox XUL development to warrant a dedicated group?

Perhaps.

There are many things that I’d like to say both about Mozilla and about Mozpad, but what I’m most interested in discussing presently is the opportunity that sits squarely at the feet of Mozilla and Mozpad and fortuitously extends beyond the world-unto-itself land of XUL: namely, the opportunity that I believe lies in the development of site-specific browsers or, to throw out a marketing term, rich internet applications (no doubt I’ll catch flak for suggesting the combination of these terms, but frankly it’s only a matter of time before any distinctions dissolve).

[Image: Fluid logo]
If you’re just tuning in, you may or may not be aware of the creeping rise of SSBs (site-specific browsers). I’ve personally been working on these glorified rendering engines for some time, primarily inspired first by Mike McCracken’s Webmail.app and then later by Ben Willmore’s Gmail Browser, with the idea most recently culminating in Ruben Bakker’s pay-for Gmail wrapper, Mailplane.app. More recently we’ve seen developments like Todd Ditchendorf’s Fluid.app, which generates increasingly functional SSBs, and, prior to that, the stupidly-simple Direct URL.
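If “glorified rendering engine” sounds dismissive, consider how little code the concept actually requires. Here’s a conceptual sketch using the third-party pywebview package (my choice purely for illustration; none of the apps above are built on it):

```python
# A minimal site-specific browser: one window, one web app, no chrome.
# Requires the third-party 'pywebview' package (pip install pywebview),
# used here purely to illustrate the concept.
import webview

# Wrap a single web app in its own desktop window.
webview.create_window("Gmail", "https://mail.google.com")
webview.start()
```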

But that’s just progress on the WebKit side of things.

If you’ve been following the work of Mark Finkle, you’ll be able to trace both the transformation of WebRunner into the full-fledged Prism project and the germination of Mozpad.

Clearly something is going on here, and when measured against Microsoft’s Silverlight and Adobe’s AIR frameworks, we’re starting to see the emergence of an opportunity that I think will turn out to be rather significant in 2008, especially as an alternative, non-proprietary path for folks wishing to develop richer experiences without the cost, or the heaviness, of actual native apps. Yes, the rise of these hybrid apps that look like desktop apps but benefit from the connectedness and always-up-to-date-ness of web apps is what I see as the unrecognized fait accompli of the current class of stand-alone, standards-compliant rendering engines. This trend is powerful enough, in my thinking, to render the whole discussion about the future of the W3C uninteresting, if not downright frivolous.

A side effect of the rise of SSBs is the gradual obsolescence of XUL (which currently only holds value in the meta-UI layer of Mozilla apps). Let’s face it: the delivery mechanism of today’s Firefox extensions is broken (restarting an app to install an extension is so Windows! yuck!), and it needs to be replaced by built-in appendages that offer better and more robust integration with external web services (a design that I had intended for Flock) and that provide a web-native approach to extensibility. As far as I’m concerned, XUL development is all but dead and will eventually be relegated to the same hobby-sport nichefication as VRML scripting. (And if you happen to disagree with me here, I’m surprised that you haven’t gotten more involved in the doings of Mozpad.)

But all this is frankly good for Mozilla, for WebKit (and Apple), for Google, for web standards, for open source, for microformats, for OpenID and OAuth and all my favorite open and non-proprietary technologies.

The more the future is built on — and benefits from — the open architecture of the web, the greater the likelihood that we will continue to shut down and defeat the efforts that attempt to close it up, to create property out of it, to segregate and discriminate against its users, and to otherwise attack the very natural and inclusive design of the internet.

Site specific browsers (or rich internet applications or whatever they might end up being called — hell, probably just “Applications” for most people) are important because, for a change, they simply side-step the standards issues and let web developers and designers focus on functionality and design directly, without necessarily worrying about the idiosyncrasies (or non-compliance) of different browsers (Jon Crosby offers an example of this approach). With real competition and exciting additions being made regularly to each rendering engine, there’s also benefit in picking a side, while things are still fairly fluid, and joining up where you feel better supported, with the means to do cooler things and where generally less effort will enable you to kick more ass.

But all this is a way of saying that Mozpad is still a valid idea, even if the form or the original focus (XUL development) was off. In fact, what I think would be more useful is a cross-platform inquiry into what the future of Site Specific Browsers might (or should) look like… regardless of rendering engine. With that in mind, sometime this spring (sooner than later I hope), I’ll put together a meetup with folks like Todd, Jon, Phil “Journler” Dow and anyone else interested in this realm, just to bat around some ideas and get the conversation started. Hell, it’s going on already, but it seems time that we got together face to face to start looking at, seriously, what kind of opportunity we’re sitting on here.

Wither web standards? And a call for new browser wars

There’s been a flurry of activity in web standards land lately: Opera is taking on Microsoft in Europe over their failure to conform to web standards, Andy “Malarkey” Clarke is calling BS on the whole CSS Working Group thing, pointing to the complicity and corruption that come with entrenched vendors having little to no incentive to innovate at the speed of the web, and Mozilla’s Dave Baron is getting fed up with backdoor dealings. In support of his case, Alex “Dojo Toolkit” Russell suggests that we abandon the W3C altogether (and Zeldman-kind too), “start burning [our] standards advocacy literature” and “start telling [our] browser vendors to give [us] the new shiny”. Jeff Croft jumps in as well, asking whether we should return to the browser wars of yore.

I attempted to leave a comment on Jeff’s blog, but since I was over his 3000 character limit, I’m blogging it here, without the normal care that I usually take with posts here. Take it as you will:

As much as I appreciate your perspective and agree with the goals that you, Andy M and Alex share, I am at the same time dismayed that it means we’re going to end up essentially with “privileged web experiences” and “unprivileged web experiences” if you take this path.

It also means that the fight against Silverlight and Flash essentially comes to a draw, and developers have to pick sides (if they haven’t already) and “standardize” their work on one of four choices: the two proprietary stacks above or, confusingly, HTML-compliant and HTML-incompatible. So while you might be able to do some shiny things in the foreseeable future, it does seem to me that you’re going to wind up creating more work for developers, who will have to test against and support both HTML variants (not to mention the long history of browser incompatibilities), in which case they might as well just jump the open web standards ship altogether and get in bed with Microsoft or Adobe, given their ability to crank out tools AND formats that rival most of their open alternatives.

It seems to me that rather than abandoning web standards or, as Alex said, “burning [our] standards advocacy literature”, I’d like to see how we can begin to commoditize the more attractive aspects of Silverlight and AIR/Flash with open formats and standards. Unfortunately we are up against massive marketing dollars and entrenched positions, but in the end, open always wins.

If you’re really serious about this (and I think Alex, with his relationship to the Dojo project, is in the perfect position to act on it), the folks who have grown disillusioned with the web standards path should begin to develop “community conventions” that can be implemented today, using what leading browsers already support (Opera, Firefox and WebKit/Safari); see microformats, which lead with this approach, as well as OAuth. I think rewarding those browser makers by exploiting the features that they ARE implementing is a good way to go, and I also think that developing rich interface libraries in CSS and Javascript will continue to be important to advancing the state of the “unprivileged web”. We’ve made a great deal of progress in a relatively short amount of time with jQuery and similar libraries that deliver effects previously unthought of in regular web pages… it’s just a matter of time before we approach this as a more concerted effort to make web applications compete with their proprietary brethren.

With site-specific browser generators like Todd Ditchendorf’s Fluid and Mozilla’s Prism coming out, I think we’re also moving much more quickly towards local desktop integration than you’ll be able to get out of full-fledged generic browsers. In fact, I’m most hopeful about those kinds of applications for the kind of innovation you’re talking about.

I think I’m just about as fed up as you are with the centralized, top-down web standards process. But then again, I never believed in it from the beginning. Your frustrations only indicate to me that old skool, top-down bureaucracies have had their day; the way forward is the way of open source and open communities that produce results. And given that we already have a body of standards to build on top of, I do worry that a lot of effort will be wasted paving a new path towards an uncertain future when there is still so much potential and opportunity in the technologies that are available today but are simply underutilized and have yet to be exploited.

Ruminating on DiSo and the public domain

There’s been some great pickup of the DiSo Project since Anne blogged about it on GigaOM.

I’m not really a fan of early over-hype, but fortunately the reaction so far has been polarized, which is a good thing. It tells me that people care about this idea enough to sign up, and it also means that people are threatened enough by it to defensively write it off without giving it a shot. That’s pretty much exactly where I’d hope to be.

There are also a number of folks pointing out that this idea has been done before, or is already being worked on; if you’re familiar with the microformats process, you’ll understand the wisdom of paving well-worn cow paths. In fact, in most cases, as Tom Conrad of Pandora has said, it’s not about giving his listeners 100% of what they want (that’s ridiculous); it’s about moving the number of good songs from six to seven out of a set of eight. In other words, most people really don’t need a revolution; they just want a little more of what they already have, but with slight, yet appreciable, improvements.

Anyway, that’s all neither here nor there. I have a bunch of thoughts and not much time to put them down.

. . .

I’ve been thinking about mortality a lot lately, stemming from Marc Orchant’s recent tragic death and Dave Winer’s follow-up post, capped off with thinking about open data formats, permanence and general digital longevity (when I die, what happens to my digital legacy? my OpenID? etc.).

[Photo: Tesla Jane Muller]
Meanwhile, and on a happier note, I had the fortunate occasion to partake in the arrival of new life, something that, as an uncle of ~17 nieces and nephews, I have some experience with.

In any case, these two themes have been pinging around my brain like marbles in a jar for the past couple of days, perhaps bringing some things into perspective.

. . .

Meanwhile, back in the Bubble, I’ve been watching “open” become the new bastard child of industry, its meaning stripped, its bite muzzled. The old corporate allergy to all things open has found a vaccine. And it’s frustrating.

Muddled up in between these thoughts on openness, permanence, and putting my life to some good use, I started thinking about the work that I do and the work that we, as technologists, do. And I now find that term shallow, especially as a description of my humanist tendencies. I don’t want to just be someone who is technologically literate and whose job it is to advise people on the appropriate use of technology. I want to create culture; I want to build civilization!

And so, to that end, I’ve been mulling over imposing a mandate on the DiSo Project that forces all contributions to be released into the public domain.

Now, there are two possible routes to this end. The first is to use a license compatible with Andrius Kulikauskas’s Ethical Public Domain project. The second is to follow the microformats approach and use the Creative Commons Public Domain Dedication.

While I need to do more research into this topic, I’ve so far been told (by one source) that the public domain exists in murky legal territory and that perhaps using the Apache license might make more sense. But I’m not sure.

In pursuing clarity on this matter, my goals are fairly simple, and somewhat defiant.

For one thing, and speaking from experience, I think that the IPR processes for both OpenID and OAuth were wasteful efforts, demeaning to those involved. Admittedly, the IPR process is a practical reality that can’t be avoided, given the litigious way business is conducted today. Nor do I disparage those who were involved in the process, who were on the whole reasonable and quite rational; I only lament that we had to take valuable time to work out these agreements at all (I’m still waiting on Yahoo to sign the IPR agreement for OAuth, by the way). As such, by denying the creation of any potential IP that could be attached to the DiSo Project, I am effectively avoiding the need to later make promises asserting that no one will sue anyone else for actually using the technology that we co-create.

So that’s one.

Second, Facebook’s “open” platform and Google’s “open” OpenSocial systems diminish the usefulness of calling something “open”.

As far as I’m concerned, this calls for the nuclear option: from this point forward, I can’t see how anyone can call something truly open without resorting to placing the work firmly in the public domain. Otherwise, you can’t be sure and you can’t trust it to be without subsequent encumbrances.

I’m hopeful about projects like Shindig that call themselves “open source” and are able to be sponsored by stringent organizations like the Apache Foundation. But these projects are few and far between, and, should they grow to any size or achieve material success, they inevitably end up having to centralize, and the “System” (yes, the one with the big S) ends up channeling them down a path of crystallization, typically leading to the establishment of archaic legal institutions or foundations predicated on being “host” for the project’s auto-created intellectual property, like trademarks or copyrights.

In my naive view of the public domain, it seems to me that this situation can be avoided.

We did it (and continue to prove out the model) with BarCamp — even if the Community Mark designation still seems onerous to me.

And beyond the legal context of this project, I simply don’t want to have to answer to anyone questioning why I or anyone else might be involved in this project.

Certainly there’s money to be had here and there, and it’s unavoidable and not altogether a bad thing; there’s also more than enough of it to go around in the world (it’s the lack of re-circulation that should be the concern, not what people are working on or why). In terms of my interests, I never start a project with aspirations for control or domination; instead I want to work with intelligent and passionate people — and, insofar as I am able, enable other people to pursue their passions, demonstrating, perhaps, what Craig Newmark calls nerd values. So if no one (and everyone) can own the work that we’re creating, then the only reasons to be involved in this particular instance of the project are the experience, the people involved, and the fact that there’s something rewarding or interesting about the problems being tackled, whose resolution holds some meaning or secondary value for the participants involved.

I can’t say that this work (or anything else that I do) will have any widespread consequences or effects. That’s hardly the point. Instead, I want to devote myself to working with good people, who care about what they do, who hold out some hope and see validity in the existence of their peers, who crave challenge, and who feel accomplished when others share in the glory of achievement.

I guess when you get older and join the “adult world” you have to justify a lot more to yourself and to others. It’s a lot harder to peel off the posture of defensiveness and disbelief that comes with age than to allow yourself to respond with excitement, with hope, with incredulity and wonder. But I guess I’m not so much interested in that kind of “adult world”, and I guess, too, that I’d rather give all my work away than risk getting caught up in the pettiness that pervades so much of the good that is being done, and that still needs to be done, in all the many myriad opportunities that surround us.