Obama Phone!

If you haven’t heard about this yet, the Obama campaign today released an iPhone app that, among other features, enables you to call your friends, prioritized by their location in battleground states.

This is critical.

There’s nothing more important, or more influential, than friends encouraging friends to vote, and when it comes to getting informed on the issues and what’s at stake, nothing is more effective than getting an impassioned plea from a personal contact or relative.

Providing a tool that lets people get in touch with people they personally know is so much better than cold-call phone banking (notwithstanding the continued importance of that tactic, given the need to reach beyond the friends of iPhone owners).

You can get the app in the iTunes App Store.

The Obama ’08 app development was spearheaded by personal friends of mine — co-organizers of the popular iPhoneDevCamps that we held the past two years at Adobe’s offices in San Francisco (and which are now spreading, as all good *camp events should!). Specifically, props go to Dom Sagolla and Raven Zachary, without whom this application might never have happened. But credit is also due to the entire top-tier team of iPhoneDevCamp alumni who spent countless hours over the past month putting this app together.

What’s significant is not only the application itself, but what this move represents for those of us who live and breathe the web and open source: this app is born of both, reusing a number of open source components and, from the outset, leveraging the web with a presence on social networks like Facebook. This is the Obama campaign reaching out to the open source and iPhone development communities, working with us to do what we know how to do best, and giving us a space in which we can make a difference for the campaign.

We’re nearly a month away from the election, and that means that if you want to participate, you’re going to need to be registered to vote beforehand. It also means that if you’ve been waiting, or holding out, looking for an opening to get involved, now’s your chance. As Raven says, making a few simple calls with this app enables even ‘The Two Minute Volunteer’ to make a substantial difference by personally involving friends and family in the election.

Seeing this work inspires and gives me hope; if we can keep up this kind of innovative thinking for the next 30 days, I think it’s clear that the best candidate is going to come out on top and get the country back on its proper footing.

After 1984

iTunes Genius

iTunes 8 has added a new feature called “Genius” that harnesses the collective behavior of iTunes Music Store shoppers to generate “perfect” playlists.

Had an interesting email exchange with my mom earlier today about Monica Hesse’s story Bytes of Life. The crux of the story is that more and more people are self-monitoring and collecting data about themselves, in many cases, because, well, it’s gotten so much easier, so, why not?

Well, yes, it is easier. But just because something is easier doesn’t automatically mean that one should do it, so let’s look at this a little more deeply.

First, my mom asked about the amount of effort involved in tracking all this data:

I still have a hard time even considering all that time and effort spent in detailing every moment of one’s life, and then the other side of it which is that it all has to be read and processed in order to “know oneself”. I think I like the Jon Kabat-Zinn philosophy better — just BE in the moment, being mindful of each second doesn’t require one to log or blog it, I don’t think. Just BE in it.

Monica didn’t really touch on too many tools that we use to self-monitor. It’s true that, depending on the kind of data we’re collecting, the effort will vary. But so will the benefits.

If you take a look at MyMileMarker’s iPhone interface, you’ll see how quick and painless it is to record this information. Why bother? Well, for one thing, over time you get to see not only how much fuel you’re consuming, but how much it’s going to cost you to keep running your car in the future:

View my Honda Civic - My Mile Marker

Without collecting this data, you might guess at your MPG, or take the manufacturer’s rating as given, but when you record what actually is happening, you can prove to yourself whether filling up your tires really does save you money (or the planet).
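As a concrete sketch of the arithmetic a tool like MyMileMarker performs under the hood (the log format, numbers and function names here are made up for illustration, not MyMileMarker’s actual data model):

```python
# Hypothetical fill-up log: (odometer miles, gallons added, price per gallon).
fillups = [
    (12000, 10.0, 3.50),
    (12320, 10.5, 3.60),
    (12650, 10.2, 3.55),
]

def mpg_between_fillups(log):
    """Actual miles-per-gallon for each interval between fill-ups."""
    return [
        (log[i][0] - log[i - 1][0]) / log[i][1]
        for i in range(1, len(log))
    ]

def projected_annual_cost(log, miles_per_year=12000):
    """Estimate a year of fuel cost from observed MPG and the latest price."""
    mpgs = mpg_between_fillups(log)
    avg_mpg = sum(mpgs) / len(mpgs)
    latest_price = log[-1][2]
    return miles_per_year / avg_mpg * latest_price

print(mpg_between_fillups(fillups))            # first interval ≈ 30.5 MPG
print(round(projected_annual_cost(fillups), 2))
```

The point is simply that a few seconds of data entry per fill-up yields both your actual MPG and a defensible projection of what the car will cost you going forward.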

On the topic of the environment, recording my trips on Dopplr gives me an actual view of my carbon footprint (pretty damning, indeed):

DOPPLR Carbon

As my mom pointed out, perhaps having access to this data will encourage me to cut back excess travel — or to consolidate my trips. Ross Mayfield suggests that he could potentially quit smoking if his habit were made more plainly visible to him.

What’s also interesting is how passive, or semi-passive, monitoring tools can also inform, educate or predict — and on this point I’m thinking of Last.fm, where of course my music taste is aggregated, or location-based sites like Brightkite, where my locative behavior is tracked (albeit manually — though Fire Eagle + Spot changes that).

My mom’s other point about the ability to just BE in the moment is also important — because self-tracking should ideally be non-invasive. In other words, it shouldn’t be the tracking that changes your behavior, but your analysis and reflection after the fact.

One of the stronger points I might make about this is that with data, especially when it’s collected regularly and the right indicators are recorded, you can strip away a great deal of the distortion that comes from your self-serving biases. Monica writes:

“We all have the tendency to see our behaviors in a little bit of a halo,” says Jayne Gackenbach, who researches the psychology of the Internet at Grant MacEwan College in Alberta, Canada. It’s why dieters underestimate their food intake, why smokers say they go through fewer cigarettes than they do. “If people can get at some objective criteria, it would be wonderfully informative.” That’s the brilliance, she says, of new technology.

So that’s great and all, but all of this, at least for my mom, raises the spectre of George Orwell’s ubiquitous and all-knowing “Big Brother” from Nineteen Eighty-Four, and of neo-Taylorism:

I do agree that people lie, or misperceive, and that data is a truer bearer of actualities. I guess I don’t care. Story telling is an art form, too. There’s something sort of 1984ish about all this data collection — as if the accumulated data could eventually turn us all into robotic creatures too self-programmed to suck the real juice out of life.

I certainly am sympathetic to that view, especially because the characterization of life in 1984 was so compelling and visceral. The problem is that this analogy invariably falls short, especially in other conversations when you’re talking about the likes of Google and other web-based companies.

In 1984, Big Brother symbolized the encroachment of the government on the life of the private citizen. Since the government had the ability to lock you up or take you away based on your behavior, you can imagine that this kind of dystopic vision would resonate in a time when increasingly fewer people probably understand the guts of technology and yet increasingly rely on it, shoveling more and more of their data into online repositories, or having it collected about them as they visit various websites. Never before has the human race had so much data about itself, and yet (likely) so little understanding.

The difference, as I explained to my mom, comes down to access to — and leverage over — the data:

I want to write more about this, but I don’t think 1984 is an apt analogy here. In the book, the government knows everything about the citizenry, and makes decisions using that data, towards maximizing efficiency for some unknown — or spiritually void — end. In this case, we’re flipping 1984 on its head! We’re collecting the data on OURSELVES — empowering ourselves to know more than the credit card companies and banks! It’s certainly a daunting and scary thought to realize how much data OTHER people have about us — but what better way to get a leg up than to start looking at ourselves, and collecting that information for our own benefit?

I used to be pretty skeptical of all this too… but since I’ve seen the tools, and I’ve seen the value of data — I just don’t want other people to profit off of my behaviors… I want to be able to benefit from it as well — in ways that I dictate — on my terms!

In any case, Tim O’Reilly is right: data is the new Intel inside. But shouldn’t we be getting a piece of the action if we’re talking about data about us? Shouldn’t we write the book on what 2014 is going to look like so we can put the tired 1984 analogies to rest for awhile and take advantage of what is unfolding today? I’m certainly wary of large corporate behemoths usurping the role the government played in 1984, but frankly, I think we’ve gone beyond that point.

The Open Web Foundation

During this morning’s keynote at OSCON, David Recordon announced the formation of the Open Web Foundation (his slides), an initiative with which I am involved, aimed at becoming something akin to a “Creative Commons for patents”, with the intention of lowering the costs and barriers to the development and adoption of open and free specifications like OpenID and OAuth.

As I expected, there’s been some healthy skepticism that usually starts with “Another foundation? Really?” or “Wait, doesn’t [insert other organization name] do this?”

And the answers are “Yes, exactly” and “No, not exactly” (respectively).

I’ll let John McCrea explain:

…every grass roots effort, whether OpenID, OAuth, or something yet to be dreamt up, needs to work through a whole lot of issues to go from great idea to finalized spec that companies large and small feel comfortable implementing. In particular, large companies want to make sure that they can adopt these building blocks without fear of being sued for infringing on somebody’s intellectual property rights. Absent the creation of this new organization, we were likely to see each new effort potentially creating yet-another-foundation to tackle what is essentially a common set of requirements.

And this is essentially where we were in the OAuth process, following in the footsteps of the OpenID Foundation before us, trying to figure out for ourselves the legal and intellectual property issues that stood in the way of [a few] larger companies being able to adopt the protocol.

Now, I should point out that OAuth and OpenID are the result of somewhat unique and recent phenomena: due to the low cost of networked collaboration and the high value of commoditizing common protocols between web services, the OAuth protocol came together in just under a year, written by a small number of highly motivated individuals. The problem is that it’s taken nearly the same amount of time to develop our approach to intellectual property, despite the collective desire of the authors to let anyone freely use it! This system is clearly broken, and not just for us, but for every group that wants to provide untethered building blocks for use on the open web — especially those groups who don’t have qualified legal counsel at their disposal.

That other groups exist to remedy this issue is something that we realized and considered very seriously before embarking on our own effort. After all, we really don’t want to have to do this kind of work — indeed it often feels more like a distraction than something that actually adds value to the technology — but the reality is that clarity and understanding are actually critical once you get outside the small circle of original creators, and in that space is where our opportunity lies.

In particular, for small, independent groups to work on open specifications (n.b. not standards!) that may eventually be adopted industry-wide, there needs to be a lightweight and well-articulated path for doing the right thing™ when it comes to intellectual property that does not burden the creative process with defining scope prematurely (a process that is costly and usually takes months, greatly inhibiting community momentum!) and that also doesn’t impose high monetary fees on participation, especially when outcomes may be initially uncertain.

At the same time, the final output of these kinds of efforts should ultimately be free to be implemented by all the participants and the community at large. And rather than forcing the assignment of all related patents owned by all participants to a central foundation (as in the case of the XMPP Foundation) or getting every participant to license their patents to others (something most companies seem loath to do without some fiscal upside), we’ve seen a trend over the past several years towards patent non-assert agreements, which allow companies to maintain their IP, avoid disclosing it, and yet allow for the free, unencumbered use of the specification.

If this sounds complicated, it’s because it is, and is a significant stumbling block for many community-driven open source and open specification projects that aim for, or have the potential for, widespread adoption. And this is where we hope the Open Web Foundation can provide specific value in creating templates for these kinds of situations and guiding folks through effective use of them, ultimately in support of a more robust, more interoperable and open web.

We do have much work ahead of us, but hopefully, if we are successful, we will reduce the overall cost to the industry of repeating this kind of work, again, in much the same way Creative Commons has done in providing license alternatives to copyright and making salient the notion that the way things are aren’t the only way they have to be.

When location is everywhere

What if you could take location as a given in the design of web applications and services? By that I mean, what if — when someone who has never used your service before shows up, signs up (ideally with an OpenID!) — and it’s both trivial and desirable for her to provide you with access to some aspect of her physical location in the world… and she does? What would you do? How would you change the architecture of your service to leverage this new “layer” of information?

Would you use it to help her connect to and find others in her proximity (or maybe avoid them)? Would you use it to better target ads at her (as Facebook does)? Would you use it to accelerate serendipity, colliding random people who, for some reason, have strikingly similar habits but don’t yet know each other — or would you only reveal the information in aggregate, to better give members a sense for where people on the service come from, spend their time and hang out? If you could, would you automatically geocode everything that new members upload or post to your service or would you require that metadata to be added or exposed explicitly?
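Taking just the proximity case, the core computation is old hat for geogeeks: a great-circle distance plus a radius filter. A minimal sketch, where the names, coordinates and function signatures are all hypothetical:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def members_nearby(me, members, radius_km=5.0):
    """Names of members within radius_km of `me`; all are (name, lat, lon)."""
    _, my_lat, my_lon = me
    return [
        name for name, lat, lon in members
        if haversine_km(my_lat, my_lon, lat, lon) <= radius_km
    ]

me = ("alice", 37.7749, -122.4194)      # San Francisco
members = [
    ("bob", 37.7849, -122.4094),        # about 1.4 km away
    ("carol", 34.0522, -118.2437),      # Los Angeles
]
print(members_nearby(me, members))      # -> ['bob']
```

A real service would index members by location rather than scanning all of them, but the design questions above — who gets shown, at what granularity, with what consent — matter far more than the math.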

Put still another way, how would a universal “location layer for the social web” change the design and implementation of existing applications? Would it give rise to a class of applications that take advantage of and thrive on knowing where their members live, work and play, and tailor their services accordingly? Or would all services eventually make use of location information? Or will it depend on each service’s unique offering and membership, and why people signed up in the first place? Just because you can integrate with Twitter or Facebook, must you? If the “location layer” were made available, must you take advantage of it? What criteria or metrics would you use to decide?

I would contend that these are all questions that anyone with a modern web service is going to need to start dealing with sooner rather than later. It’s not really a matter of whether members will ever show up with some digital footprint of where they are, where they’ve been or where they’re going; it’s really only a matter of time. When they do, will you be ready to respond to this information, or will you carry on like Friendster in the prime of Facebook and pretend the first bubble never popped?

If you imagine for a minute that the ubiquity of wireless-enabled laptops gave rise to the desire-slash-ability for more productive mobile work, and consequently created the opportunity for the coworking community to blossom; if you consider that the ubiquity of digital cameras and camera phones created the opening for a service like Flickr (et al) to take off; if you consider that the affordability of camcorders and the accessibility of video on digital cameras, cell phones and the built-in cameras in laptops and iMacs, coupled with simpler tools like iMovie, led to people being able, and wanting, to post videos to a service like YouTube (et al); in short, if you look at how the ubiquity of some device technology with [media] output led to the rise of services and communities optimized for that same media, you might start to realize that a huge opportunity is coming for locative devices that make it easy to publish where you are, discover where your friends are, and generally benefit from being able to inform third parties, in a facile way, where you are, where you’ve been and where you’re going.

Especially with the opening up of the iPhone and its simple, elegant implementation of the “Locate Me” feature in Google Maps (which has already made its way into Twinkle, a native Twitter app for the iPhone that uses your location to introduce you to nearby Twitterers who are also using the app), I think we’re on the brink of the kind of ubiquity (in the consumer space) that we need in order to start taking the availability of location information for granted. And, like standards-compliant browsers before it, that ubiquity could (or should) really inform the way we build out the social fabric of web applications from here on.

The real difference coming, the one I want to point out here, is that 1) location information, like digital photos and videos before it, will become increasingly available and accessible to regular people, in many forms; 2) people will become increasingly aware that they can use this information to their advantage should they choose to, and may, if given the chance, provide it to third-party services; 3) when this information is applied to social applications (i.e. where location is exposed at varying levels of publicity), interesting, and perhaps compelling, results may emerge; and 4) in general, investing in location as an “information layer” or filter within new or existing applications makes more and more sense, as more location information becomes available, is shared by choice, and appears in growing numbers of applications that previously may not have taken physical location into consideration.

Geogeeks can claim credit for presaging this day for some time, but only now does it seem like the reality is nearly upon us. Will the ubiquity of location data, like the adoption of web standards before it, catalyze entirely new breeds of applications and web services? It’s anyone’s guess when exactly this reality will come to pass, but I believe it’s now really only a short matter of time before location is indeed everywhere: a new building block on which new and exciting services and functionality can be stacked.

(Bonus: Everyware by Adam Greenfield is good reading on this general topic, though not necessarily as it relates to web services and applications.)

Relationships are complicated

Facebook | Confirm Requests

I’ve noticed a few interesting responses to my post on simplifying XFN. While my intended audiences were primarily fellow microformat enthusiasts and “lowercase semantic web” types, there seems to be a larger conversation underway that I’d missed — one that both Adam Greenfield and Tim Berners-Lee have commented on.

In a treatise against XFN (and similarly reductive expressions of human relationships) from December of last year, Greenfield said a number of profound things:

  • …one of my primary concerns has always been that we not accede to the heedless restructuring of everyday human relations on inappropriate and clumsy models derived from technical systems – and yet, that’s a precise definition of social networking as currently instantiated.
  • All social-networking systems constrain, by design and intention, any expression of the full band of human relationship types to a very few crude options — and those static!
  • …it’s impossible to use XFN to model anything that even remotely resembles an organic human community. I passionately believe that this reductive stance is not merely wrong, but profoundly wrong, in that it deliberately aims to bleed away all the nuance, complication and complexity that makes any real relationship what it is.
  • I believe that technically-mediated social networking at any level beyond very simple, local applications is fundamentally, and probably persistently, a bad idea. From where I stand, the only sane response is to keep our conceptions of friendship and affinity from being polluted by technical metaphors and constraints to begin with.

Whew! Strong stuff, but useful, challenging and insightful.
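For readers who haven’t seen it, XFN amounts to a handful of rel values on ordinary hyperlinks; that brevity is exactly the reductiveness Greenfield is criticizing. A minimal example (the URLs are placeholders):

```html
<!-- XFN expresses a relationship as rel values on a plain link -->
<a href="http://example.com/tantek" rel="friend met colleague">Tantek</a>
<a href="http://example.com/jane" rel="acquaintance">Jane</a>
<!-- rel="me" ties together multiple profiles of the same person -->
<a href="http://example.org/my-other-blog" rel="me">my other blog</a>
```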

Meanwhile, TBL defended a semi-autistic perspective in describing the future of the Semantic Web (yes, the uppercase version):

At the moment, people are very excited about all these connections being made between people — for obvious reasons, because people are important — but I think after a while people will realise that there are many other things you can connect to via the web.

While my sympathies actually lie with Greenfield (especially after a weekend getting my mom set up on Facebook so she could send me photos without clogging my inbox with 80MB emails… a deficiency in the design of the technology, not my mother, mind you!), I also see the promise of a more self-aware, self-descriptive web. But, realistically, that web is a long way off, and more likely, that web is still going to need human intervention to make it work — at least for humans to benefit from it (oh sure, just get rid of the humans and the network will be just perfect — like planes without passengers, right?).

But in the meantime, there is a social web that needs to be improved, and that can be improved, in fairly simple and straight-forward ways, that will make it easier for regular folks who don’t (and shouldn’t have to) care about “data portability” and “password anti-patterns” and “portable contact lists” to benefit from the fact that the family and friends they care about are increasingly accessible online, and actually want to hear from them!

Even though Justin Smith takes another reductive look at the features Facebook is implementing, claiming that it wants to “own communications with your friends”, the reality is that people actually want to communicate with each other online! It follows, then, that if you’re a place where people connect and re-connect with one another, it’s not all that surprising that a site like Facebook would invest in improvements that facilitate interaction and communication between its members!

But let’s back up a minute.

If we take for granted that people do want to connect and communicate on social networks (they seem to do it a lot; so much so that one might even argue that people enjoy it!), what role should so-called “portable contact lists” play in this situation? I buy Greenfield’s assertion that attempts by technologists to reduce human relationships to a predefined schema (based on prior behavior or not) are a failing proposition, but that seems to ignore the opportunity presented by the fact that people must maintain several lists of their friends in many different places, for no other reason than an omission from the design of the social internetwork.

Put another way, it’s not good enough to simply dismiss the trend of social networking because our primitive technological expressions don’t reflect the complexity of real human relationships, or because humans are just one kind of “object” to be “semantified” in TBL’s “Giant Global Graph”. People are connecting today, and they want to connect to people outside of their chosen “home” network, and frankly the experience sucks and is confusing. It’s not good enough to get all prissy about it; the reality is that there are solutions out there today, and people working on these things, and we need smart people like Greenfield and Berners-Lee to see that solutions that enable the humanist web (however semantic it needs to be) are prioritized and built… and that we [need] not accede to the heedless restructuring of everyday human relations on inappropriate and clumsy models derived from technical systems.

I can say that, from what I’ve observed so far, these are things that computers can do for us, to make the social computing experience more humane, should we establish simple and straightforward means to express a basic list of contacts between contexts:

  • help us find and connect to people that we’ve already indicated that we know
  • introduce us to people who we might know, or based on social proximity, should know (with no obligation to make friends, of course!)
  • help us avoid accidentally bumping into people we’d rather not interact with (see block-list portability)
  • help us segment our friendships in ways that make sense to us (rather than the semi-arbitrary ways that social networks define)
  • help us confidently share things with just the people with whom we intend to share

There may be others, but off the top of my head, I think satisfying these basic tasks is a good start for any social network premised on letting you connect and interact with people you might know, including those who haven’t already signed up for the service.
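To make the first of those tasks concrete, here’s one way a service might match an imported address book against its member list without trafficking in raw addresses. This is an illustrative sketch under my own assumptions (the function names and data are invented, not any particular site’s API):

```python
import hashlib

def fingerprint(email):
    """Privacy-preserving token for an email address (normalized, hashed)."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

def people_you_may_know(imported_emails, member_index):
    """Match an imported address book against a service's members.

    member_index maps fingerprint -> username. Only overlaps are revealed,
    and nothing here auto-friends anyone; it just surfaces candidates,
    leaving the decision to reconnect up to the person.
    """
    return [
        member_index[fp]
        for fp in (fingerprint(e) for e in imported_emails)
        if fp in member_index
    ]

# Hypothetical data: the service's members, indexed by hashed email.
members = {"tara@example.com": "tara", "sam@example.org": "sam"}
index = {fingerprint(email): name for email, name in members.items()}

print(people_you_may_know(["Tara@Example.com ", "nobody@example.net"], index))
# -> ['tara']
```

Note how the design choice mirrors the argument below: the computer only proposes candidates; the human decides which connections carry over.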

I should make one last point: when thinking about importing contacts from one context to another, I do not think it should be an unthinking act. Even though it’s merely data being copied between servers, those bits represent things far more sacred and complicated than any computer might ever be programmed to imagine. Therefore, just because we can facilitate and lower the friction of “bringing your friends with you” from one place to another doesn’t mean it should be an automatic process, or that all your friends in one place should be made your friends in the new place.

And regardless of how often good ol’ Mark Zuckerberg claims that the end game is to make communications more efficient, when it comes to relationships, every connection transposed from one context to another should have to be reconsidered (hmm, a great argument for tagging of contacts…)! We can and should not make assumptions about the nature of people’s relationships, no matter what kind of semantics they’ve used to describe them in a given context. Human relationships are simply far too complicated to be left up to assumptions and inferences made by technologists whose affinity oftentimes lies closer to the data than to the makers of the data.

Picking the open source candidate

My buddy whurley is at it again, this time considering which candidate is most compatible with, or supportive of, open source — in other words, among the many options, which could be considered the “open source candidate”?

Just as I voted for Obama yesterday, I voted for Obama today. I’m not sure why, except that 1) he’s on Twitter and 2) Hillary is more of a “dynasty” type of candidate as opposed to a “meritocratic” candidate (in my limited view) and given that Obama’s success seems predicated on his previous good works (rather than inheriting a presidential legacy, let’s say), he seems more in line with the nature of open source development. Then again, cognitive science suggests that I can essentially rationalize any irrational decision to explain my actions, so I could just as well chalk it up to gut instinct.

If you’ve got an opinion, head over to whurley’s post and take the poll.

A couple of related thoughts and questions:

  • How might a candidate demonstrate that they understand or value open source? Just by running Linux? Or something deeper?
  • What kind of “open source platform” would the ideal candidate support? (using platform in the political sense) That is, getting beyond the software or hardware, how would their policies be affected by ideals and practices derived from the open source ecosystem?
  • Is it just about transparency, or would the candidate need to understand how open source itself is becoming increasingly important to the economy and to the future of work?
  • As whurley said in his post, where would an “open source” candidate come down on patent and IP reform?

If you’ve got any inside knowledge about where the candidates sit in terms of open source, I’d love to see some references or stories about their leanings. In the meantime, don’t forget to vote — on whurley’s poll!

Ruminating on DiSo and the public domain

There’s been some great pickup of the DiSo Project since Anne blogged about it on GigaOM.

I’m not really a fan of early over-hype, but fortunately the reaction so far has been polarized, which is a good thing. It tells me that people care about this idea enough to sign up, and it also means that people are threatened enough by it to defensively write it off without giving it a shot. That’s pretty much exactly where I’d hope to be.

There are also a number of folks pointing out that this idea has been done before, or is already being worked on — which, if you’re familiar with the microformats process, you’ll recognize as the wisdom of paving well-worn cow paths. In fact, as Tom Conrad of Pandora has said, it’s not about giving his listeners 100% of what they want (that’s ridiculous); it’s about moving the number of good songs from six to seven out of a set of eight. In other words, most people really don’t need a revolution; they just want a little more of what they already have, with slight yet appreciable improvements.

Anyway, that’s all neither here nor there. I have a bunch of thoughts and not much time to put them down.

. . .

I’ve been thinking about mortality a lot lately, stemming from Marc Orchant’s recent tragic death and Dave Winer’s follow up post, capped off with thinking about open data formats, permanence and general digital longevity (when I die, what happens to my digital legacy? my OpenID?, etc).

Meanwhile, and on a happier note, I had the fortunate occasion to partake in the arrival of new life, something that, as an uncle of ~17 nieces and nephews, I have some experience with.

In any case, these two poles have been pinging around my brain like marbles in a jar for the past couple of days, perhaps bringing some things into perspective.

. . .

Meanwhile, back in the Bubble, I’ve been watching “open” become the new bastard child of industry, its meaning stripped, its bite muzzled. The old corporate allergy to all things open has found a vaccine. And it’s frustrating.

Muddled up in between these thoughts on openness, permanence, and putting my life to some good use, I started thinking about the work that I do, and the work that we, as technologists, do. And that term now seems shallow to me, especially given my humanist tendencies. I don’t want to just be someone who is technologically literate and whose job is to advise people on how to apply technology successfully and appropriately. I want to create culture; I want to build civilization!

And so, to that end, I’ve been mulling over imposing a mandate on the DiSo Project that forces all contributions to be released into the public domain.

Now, there are two possible routes to this end. The first is to use a license compatible with Andrius Kulikauskas’s Ethical Public Domain project. The second is to follow the microformats approach and use the Creative Commons Public Domain Dedication.

While I need to do more research into this topic, I’ve so far been told (by one source) that the public domain exists in murky legal territory and that perhaps using the Apache license might make more sense. But I’m not sure.

In pursuing clarity on this matter, my goals are fairly simple, and somewhat defiant.

For one thing, and speaking from experience, I think that the IPR processes for both OpenID and OAuth were wasteful and demeaning to those involved. Admittedly, the IPR process is a practical reality that can’t be avoided, given the litigious way business is conducted today. Nor do I disparage those who were involved in the process, who were on the whole reasonable and quite rational; I only lament that we had to take valuable time to work out these agreements at all (I’m still waiting on Yahoo to sign the IPR agreement for OAuth, by the way). As such, by precluding the creation of any potential IP that could be attached to the DiSo Project, I effectively avoid the need to later make promises asserting that no one will sue anyone else for actually using the technology that we co-create.

So that’s one.

Second, Facebook’s “open” platform and Google’s “open” OpenSocial systems diminish the usefulness of calling something “open”.

As far as I’m concerned, this calls for the nuclear option: from this point forward, I can’t see how anyone can call something truly open without resorting to placing the work firmly in the public domain. Otherwise, you can’t be sure and you can’t trust it to be without subsequent encumbrances.

I’m hopeful about projects like Shindig that call themselves “open source” and are able to be sponsored by stringent organizations like the Apache Foundation. But such projects are few and far between, and, should they grow to any size or achieve material success, they inevitably end up having to centralize, and the “System” (yes, the one with the big S) channels them down a path of crystallization, typically leading to the establishment of archaic legal institutions or foundations predicated on being “host” for the project’s auto-created intellectual property, like trademarks or copyrights.

In my naive view of the public domain, it seems to me that this situation can be avoided.

We did it (and continue to prove out the model) with BarCamp — even if the Community Mark designation still seems onerous to me.

And beyond the legal context of this project, I simply don’t want to have to answer to anyone questioning why I or anyone else might be involved in this project.

Certainly there’s money to be had here and there, and it’s unavoidable and not altogether a bad thing; there’s also more than enough of it to go around in the world (it’s the lack of re-circulation that should be the concern, not what people are working on or why). In terms of my interests, I never start a project with aspirations for control or domination; instead I want to work with intelligent and passionate people and, insofar as I am able, enable other people to pursue their passions, demonstrating, perhaps, what Craig Newmark calls nerd values. So if no one (and everyone) can own the work that we’re creating, then the only reasons to be involved in this particular instance of the project are the experience, the people involved, and the fact that there’s something rewarding or interesting about the problems being tackled, whose resolution holds some meaning or secondary value for the participants.

I can’t say that this work (or anything else that I do) will have any widespread consequences or effects. That’s hardly the point. Instead, I want to devote myself to working with good people, who care about what they do, who hold out some hope and see validity in the existence of their peers, who crave challenge, and who feel accomplished when others share in the glory of achievement.

I guess when you get older and join the “adult world” you have to justify a lot more to yourself and to others. It’s a lot harder to peel off the posture of defensiveness and disbelief that comes with age than to allow yourself to respond with excitement, with hope, with incredulity and wonder. But I guess I’m not so much interested in that kind of “adult world”, and I guess, too, that I’d rather give all my work away than risk getting caught up in the pettiness that pervades so much of the good that is being done, and that still needs to be done, amid the myriad opportunities that surround us.