37 Signals’ next app Highrise will support OpenID

I got an email today from 37 Signals about their forthcoming CRM tool called Highrise (formerly known as Sunrise). Curious to see where the project was at, I went and snooped around, trying out some common beta URLs to see if I could get a sneak peek… (naughty naughty) and, finding nothing, it dawned on me that Jason Fried was probably using his standard domain prefix for his account… just as he’s done with the Backpack reviews.

Sure enough, there was a welcome page at jf.highrisehq.com but what else did I discover? None other than a link to “Login with OpenID”. I tried logging in and it went through all the proper steps — so it does look like this is a functioning feature.

So it appears that the 37 Signals guys have finally drunk the Koolaid and will be supporting OpenID… I asked for this a while ago but now, with DHH on the case and writing code, it seems that it’s actually going to happen.

And I couldn’t be more excited about it. Finally, one login for all my Basecamps, Backpacks, Campfires, Tada Lists… and now, Highrise. This is exactly the way it’s supposed to work.

Under lock and key

Daniel Quinn has written about civilization and how agricultural farming is what has brought us to our current environmental predicament. In his books, particularly Ishmael and My Ishmael, he points out that putting the food supply under lock and key (as opposed to being readily available for foraging) is a natural outgrowth of agriculture, given its surpluses, and that our entire infrastructure is built around that condition.

Recently I’ve been reading his book Beyond Civilization, which, contrary to what you might think, is a treatise against civilization in general — not an advocacy of improving civilization, but of abandoning the notion altogether, for in civilization we find the memes that time and time again lead us down the path of exploitation and environmental desecration.

Rather than just continue building civilization in a different way, he advocates walking away — and developing a new model of making a living based on tribal economics.

While his vision is appealing to me, I’m stuck wanting to see massive change and revolution, sensing the urgency of our situation. On the other hand, no massive and complete upheaval will actually work, since inverting the triangle would simply result in another triangle.

Instead, and this is the way biological systems work, we need incremental change and new memes that shape our thinking and our approach to our reality.

I’ve been thinking about this lately and find that DRM and Intellectual Property Laws represent one side of Daniel Quinn’s story — and efforts like Coworking, BarCamp, microformats, open source and others represent, or at least have characteristics of, the other.

In particular, I question any institutional trend towards consolidation, crystallization, centralization or the locking up of naturally occurring resources or readily reproducible resources (like digital data). With much of my work, I’ve attempted to implement or at least follow the framework suggested by Andrius Kulikaukus in his “An Economy for Giving Everything Away”. I’ve also taken lessons from Daniel Quinn’s work and others, and have come to prefer a longer and more incremental approach to the changes that I want to see made real, and I think that this is the path of open source and biomimetic innovation.

Having visited BarCampLondon, I instantly see the value of making BarCamp open and proactively inclusive from the beginning. Retrospectively, I’m proud that there was no urge to trademark or lock down the name, the brand, the model or the community — as anathema to the spirit of BarCamp as those actions would have been, they were choices that were made, either explicitly or implicitly, over time. And there are lessons to be had from our experiences.

On occasion, the notion of trademarking the BarCamp name has been brought up, primarily from a defensive perspective, to chill any attempts by “bad actors” or “corporate interests” to take away from us that which we call our community, much as CMP nearly did with their “Web 2.0” trademark. Now, I can tell you that I understand the reasoning behind this and sympathize with it. I can also state, quite certainly, that I’d rather the name be taken from us than fall back on centralization and the methods of enforcement and protection that I find so unseemly in a gift-based, community context.

Trademarks, patents and copyright all place upon the owners of such Rights obligations that do not beget community. As DRM is the economic shackle of genius, so I would not move to limit what good actors within the community might do. That is not to say that we are immune from abuse, only that our priority should be the encouragement and promotion of proper and positive use.

To that end, we rely on a community of peers to uphold our values and principles, and do not outsource the responsibility of this work to a cathedral, a court of law, a foundation or other centralized establishment. We defer instead to the routing of the network and the creation of nodes in bearing shades of the original.

This is an ecosystem, we are the grid, this is walking away from civilization, this is the rise of the tribes of BarCamp.

I’ll conclude with a quote from Daniel Quinn’s Beyond Civilization, where he invokes an interesting word in describing “A new rule for new minds”:

We deeply believe in taking a military approach to problems. We proclaim a “war” on poverty. When that fails, we proclaim a “war” on drugs. We “fight” crime. We “combat” homelessness. We “battle” hunger. We vow to “defeat” AIDS.

Engineers can’t afford to fail as consistently as politicians and bureaucrats, so they prefer accedence to resistance (as I do). For example, they know that no structure can be made rigid enough to resist an earthquake. So, rather than defy the earthquake’s power by building rigid structures, they accede to it by building flexible ones. To accede is not merely to give in but rather to give in while drawing near; one may accede not only to an argument but to a throne. Thus the earthquake-proof building survives not by defeating the earthquake’s power but by acknowledging it — by drawing it in and dealing with it.

This is the path forward, and the path that I prefer to any kind of control, ownership or dictatorship. I believe that it is also the path of the BarCamp community, and so long as we are able to accede to our environment and always respond to it positively, productively and optimistically, I think that we stand a chance of seeing the change we wish for made real.

Microformatting the Future of Web Apps

Update: I’ve updated my schedule corrections to include hcards for all the speakers, so besides adding the entire schedule to your calendar, you can now import all the speakers to your address book.

Lisa from FoWA notified me that she’s since incorporated my hcalendar changes into the official schedule. Nice!

FoWA Banner

I wanted to draw attention to the effort put into the schedule for the upcoming Future of Web Apps (which we’re in London for). On the surface, it’s a great looking schedule — under the hood, you’ll find microformats marking up the times of the sessions. A nice effort, to be sure, except that it lacks a certain… accuracy.

I point this out for two reasons: first, I’d love to see the schedule fixed so that you can download it into your calendar; second, it serves as a good example of why the Microformats community has been wise to minimize the use of both hidden microformatted content and invisible meta data as much as possible.

To illustrate the problem, let me point out two important elements of the microformat, which specify when an event begins and ends respectively. From the icalendar standard, these values are indicated by the dtstart and dtend attributes. For example, this code would indicate that an event starts on Feb 20th at 6pm in London:

<abbr class="dtstart" title="20070220T1800Z">6pm</abbr>

However, when viewed in a browser, it looks like this: 6pm, and taken out of context, that 6pm could happen on any day of any year in any timezone. By marking up that time with an ISO datetime in the context of an hcalendar object, we know exactly what time and in what timezone we’re talking about.
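As an aside, that basic-format ISO datetime is straightforward for a consuming application to handle. Here’s a minimal Python sketch (the function name is my own, and it only handles the UTC “Z”-suffixed form shown above):

```python
from datetime import datetime, timezone

def parse_dtstart(title: str) -> datetime:
    # Parse an hcalendar basic-format ISO datetime like "20070220T1800Z".
    # Only the UTC ("Z"-suffixed) form is handled here; real-world schedules
    # may also use extended format or numeric timezone offsets.
    return datetime.strptime(title, "%Y%m%dT%H%MZ").replace(tzinfo=timezone.utc)

start = parse_dtstart("20070220T1800Z")
print(start.isoformat())  # 2007-02-20T18:00:00+00:00
```

With the full datetime in hand, a tool like a calendar importer knows exactly when “6pm” falls, which is the whole point of the markup.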

So, looking at the FoWA schedule, you wouldn’t know it from the human-facing data, which appears to offer all the right times and correct information, but delving into the microformatted data reveals a very different agenda: one that takes place in 2006 and goes backwards in time, with some events ending on the day before they started.

Again, they’re certainly to be commended for their efforts to microformat their schedule to make it easy to import and subscribe to, but they seem to have missed the opportunity to actually provide a computer-readable schedule.

Here are some things that need to be fixed on the schedule:

  1. All times need to be contained in <abbr> tags, not <span>s. This is a common error in marking up hcalendar, so watch for this one first.
  2. The dates specified in the title attributes need to be 100% accurate; it’s better to have no data than incorrect data.
  3. All start times should begin before the end times, unless you’re marking up the schedule for a time machine.
  4. I should point out that it would be useful if all people and organizations were marked up as hcards, but that’s a separate matter.
  5. Lastly, it always helps to validate your basic XHTML and run your microformatted content through consuming applications like Operator, X2V or Tails to see if the existing tools can make sense of your data. If not, it won’t work for anyone else either.
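A consuming script can catch most of these errors automatically. Here’s a minimal Python sketch of the kind of sanity check I mean (the function is my own assumption, covering only UTC “Z”-suffixed basic-format datetimes):

```python
from datetime import datetime

def validate_event(dtstart: str, dtend: str) -> list:
    # Return a list of problems with one hcalendar event's datetimes.
    # Assumes basic-format ISO datetimes with a UTC "Z" suffix.
    fmt = "%Y%m%dT%H%MZ"
    try:
        start = datetime.strptime(dtstart, fmt)
        end = datetime.strptime(dtend, fmt)
    except ValueError as err:
        return [f"unparseable datetime: {err}"]
    if end <= start:
        return [f"event ends ({dtend}) before it starts ({dtstart})"]
    return []

# A backwards event like the ones on the broken schedule:
print(validate_event("20070220T1800Z", "20070219T1000Z"))  # flags the reversed times
```

Run over every event in a schedule, a check like this would flag both bad dates and backwards events immediately.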

I’ve gone ahead and corrected the schedule. I’d love for the FoWA team to take these basic changes and incorporate them into their schedule, but I know they’re busy, so in the meantime, feel free to download the schedule in ICS format using Brian Suda’s X2V transform script.

Scoping XFN and identifying authoritative hcards

Before I can write up my proposal for transcending social networks, I need to clarify the originating and destination scopes of XFN links.

It’s currently understood that XFN links describe personal relationships between two URLs.

Typically the endpoints of XFN links are URL-centric personal blogs (i.e. horsepigcow.com or tantek.com), but not always. And because we can’t always assume that the outgoing linker speaks for the whole URL, or that the destination linkee is all inclusive, we need a set of standard criteria to help us determine the intended scope of the originating linker.

Put another way, how can we better deduce who is XFN-linking to whom?

Let’s take a concrete example.

The established XFN protocol states that when I XFN-link from my blog at factoryjoe.com to horsepigcow.com, I’m describing the relationship between me and Tara Hunt, and our blogs act as our online proxies. Readers of our blogs already know to equate factoryjoe.com with Chris Messina and horsepigcow.com with Tara Hunt, but how can computers tell?

Well, if you check our source code, you’ll find an hcard that describes our contact information — marked up in such a way that a computer can understand, “hey, this data represents a person!”

If only things were so simple though.

If I linked to Tara and there were only one hcard on the page, you could probably assume that that single hcard contained authoritative contact details for her; knowing that Tara blogs at horsepigcow.com, there’d be a good chance that she put it there. Sure enough, in her case, the hcard on horsepigcow.com does represent Tara.

Now, flip that around and let’s have Tara XFN-link back to my blog. This time, instead of one hcard, she’ll most certainly find several, and, most perplexing of all, most are not me, but rather people whom I’ve marked up as hcards in my blog posts.

So, if you’re a computer trying to make sense of this information to determine who Tara’s trying to link to, what are you to think? Which relationship is she trying to describe with her link?

Well, as a stop-gap measure that I think could be easily and universally adopted to add definitiveness to any arbitrary hcard at the end of an XFN link, I propose using the <address> tag. Not only has this been proposed before without being overruled, but it is actually semantically appropriate. Furthermore, there are already at least a few examples in the wild, notably on my blog, Tara’s blog, and most importantly, Tantek’s.

Therefore, to create a definitive and authoritative hcard on any page, simply follow this example markup (note the self-referencing use of rel-me for good measure):


<address class="vcard" id="hcard">
<a href="https://factoryjoe.com/blog/contact/#hcard" rel="me" class="fn n">Chris Messina</a>
</address>

At the destination URL, include a fragment identifier (#hcard) for the hcard with the complete contact information and add rel-self in addition to rel-me (as per John Allsopp’s suggestion):

<address class="vcard" id="hcard">
<a href="https://factoryjoe.com/" rel="me self" class="fn n">Chris Messina</a>
</address>

This practice will primarily help identify who XFN-linkers intend to link to when pointing to a blog or URL with multiple hcards. In the event that no definitive hcard is discovered, the relationship can be recorded until later when the observing agent can piece together who owns the URL by analyzing secondary clues (rel-me or other hcards that point to the URL and claim it).
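To sketch how a consuming agent might apply this convention, here’s a minimal Python example using only the standard library (the class name is mine, and a real consumer would also extract the fn, rel-me links and the rest of the hcard):

```python
from html.parser import HTMLParser

class AddressHcardFinder(HTMLParser):
    # Scan HTML for an <address class="vcard">, the proposed marker of a
    # page's authoritative hcard. This is a sketch: it only detects the
    # marker, rather than parsing the full hcard inside it.

    def __init__(self):
        super().__init__()
        self.found = False

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        if tag == "address" and "vcard" in classes:
            self.found = True

page = ('<address class="vcard" id="hcard">'
        '<a href="https://factoryjoe.com/" rel="me self" class="fn n">Chris Messina</a>'
        '</address>')
finder = AddressHcardFinder()
finder.feed(page)
print(finder.found)  # True
```

If `found` comes back False, the agent falls back to the secondary clues described above rather than guessing among multiple hcards.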

Oh, and I should note that from the standpoint of multi-author blogs, we should be able to scope XFN links to the author of the entry — with entry-author making this infinitely easier.

hResume is live on LinkedIn

Detecting hResume on LinkedIn

And the hits just keep on comin’.

I’m thrilled to be able to pass along Steve Ganz of LinkedIn’s Twitter announcement (tweet?) of their support for hResume (these tweets are becoming trendy!).

Brian Oberkirch is curious about the process they went through in applying microformats post facto — that is, without changing much of the existing codebase and design — and will have a podcast with Steve tomorrow on the topic. Personally I’m curious if they developed any best practices or conventions that might be passed on to other implementors that might improve the appearance and/or import/export of hResumes.

If you’ve been playing along, you’ll note that this is one of the first examples of a successful community-driven effort to create a microformat that wasn’t directly based on some existing RFC (like vcard and ical). Rather, a bunch of folks got together and pushed through the definition, research and iteration cycles and released a spec for the community to digest and expound upon.

Soon after, a WordPress plugin and a handy creator were released, Tails added support and then Emurse got hip: “Elegant template has hResume support — long term planning, ya know? It’s your data, and we want to make it as flexible as possible.”

I wrote about the importance of hResume in August:

Why is this better than going to Monster.com and others? Well, for one thing, you’re always in charge of your data, so instead of having to fill out forms on 40,000 different sites, you maintain your resume on your site and you update it once and then ping others to let them know that you’ve updated your resume. And, when people discover your resume, they come to you in a context that represents you and lets you stand out rather than blending into a sea of homogeneous-looking documents.

Similar threads have come up recently about XFN, hcard and OpenID on the OpenID mailing list and the possible crossover with hResume should not be ignored. With LinkedIn already supporting hcard and XFN, it’s just a matter of time before they jump on OpenID and firmly plant themselves in the future of decentralized professional networks.

Oh, and the possibilities to accelerate candidate discovery for all those job boards shouldn’t be underestimated either.

Twitter and the future of transmogrification

Technorati on Twitter

I proposed to Ma.gnolia a short while ago that they start using Twitter to broadcast their system status updates and they implemented it shortly thereafter.

The beauty of using Twitter is its flexibility — you can ping it using Jabber, the web, SMS or through its API. You can also receive updates through the same protocols, as well as via feed subscriptions. I call this “transmogrification” — essentially the ability to morph data between forms and through various inputs.

It seems that others are picking up on the trend towards Twitterification — and I find it very interesting, especially as the differentiation between bot, aggregate and human is essentially nonexistent. Was it a service, a friend or one of many friends pinging you just then? One never knows!

So far I’ve found these non-individual, non-human Twitterers:

Organizations & Companies

Weather

I’m sure there are more, but do you know of any that I missed?

Mac Mash Pit/CocoaDevHouse tomorrow at Obvious Corp

Mash Pit logoJust in case you’re still in town and your fingers are itchin’ to push some pixels or get some code out, tomorrow there’ll be a Mac Mash Pit at Obvious Corp’s offices in South Park from noon till late afternoon. If you’ve got an hour or two to spare, it’ll be a great chance to meet the folks behind ODEO and Twitter and to get a little hacking done.

Rumor has it that Larry from Ma.gnolia will also be there as well as R. Tyler Ballance from the infamous Bleep Software and Blake Burris, the host, from CocoaRadio.

What’s a Mash Pit? Well, historically they’ve been day-long events that bring together multi-disciplinary and talented folks to work on projects focused on problems described in human terms — like, how can you make it easier for folks to send contact info to each other? And so on. Recently, Mash Pits have become more theme-driven, with a number of OpenID Mash Pits popping up. So, it only seemed appropriate, while MacWorld was going on, to bring the event to Mac developers and designers.

Hope to see you there tomorrow!

Information philanthropy

I hadn’t quite thought about the co-production economy from the standpoint of philanthropy, but in a message from Chris Baskind, the admin of the Lighter Footstep Ma.gnolia Group, he said:

I know there’s nothing more valuable to you than your time, so let me ask for it directly: please contribute great links when you see them. Ma.gnolia’s interface is snappier than ever, and it doesn’t take long to archive a resource that might really make a difference to someone down the line.

It occurs to me that perhaps in the information economy, quality information, links and good ideas really are useful and valuable surrogates for donating money, which requires centralized bodies, disclosures and other “conversion taxes” (that is, changing your dollars and cents into things that are tangibly useful for an endeavor).

I dunno, thoughts?

Would you like Google Java with that?

Google has open sourced its Google Web Toolkit under the Apache 2.0 license. This is great news for Java-based web developers… but for other folks who prefer PHP and Rails, I’m not sure what to make of it. I do have to admit, their announcement and all the pieces of it make for a great example of a textbook launch of a new open source initiative.