Raising the standard for avatars


Not long ago, Gravatar crawled back out from the shadows and relaunched with a snazzy new service (backed by Amazon S3) that lets you claim multiple email addresses and host multiple gravatars with them for $10 a year.

The beauty of their service is that it makes it possible to centrally control the 80 by 80 pixel face that you put out to the world and to additionally tie a different face to each of your email addresses. And this works tremendously well when it comes to leaving a comment somewhere that a) supports Gravatar and b) requires an email address to leave a comment.

Now, when Gravatar went dark, as you might expect, some enterprising folks came together and attempted to develop a decentralized standard to replace the well-worn service in a quasi-authoritarian spec called Pavatar (for personal avatar).

Aside from the coinage of a new term, the choice to create an overly complicated spec, and the sadly misguided attempt to call this effort a microformat, the goal is a worthy one, and given the recent question on the OpenID General list about the same quandary, I thought I’d share my thoughts on the matter.

For one thing, avatar solutions should focus on visible data, just as microformats do — as opposed to hidden and/or spammable meta tags. To that end, whatever convention is adopted or promoted should reflect existing standards. Frankly, the hCard microformat already provides a mechanism for identifying avatars with its “photo” property. In fact, if you look at my demo hcard, you’ll see how easy it would be to grab data from this page. There’s no reason why other social networks couldn’t adopt the same convention and make it easy to set a definitive profile for slurping out your current avatar.
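To illustrate just how simple consuming the “photo” property is, here’s a minimal sketch using Python’s stdlib html.parser (it doesn’t bother tracking closing tags, so treat it as illustrative rather than robust):

```python
from html.parser import HTMLParser

class HCardPhotoParser(HTMLParser):
    """Collect the src of any <img class="photo"> inside an hCard."""
    def __init__(self):
        super().__init__()
        self.in_vcard = 0   # simplification: never decremented on close tags
        self.photos = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        classes = attrs.get("class", "").split()
        if "vcard" in classes:
            self.in_vcard += 1
        if self.in_vcard and tag == "img" and "photo" in classes:
            self.photos.append(attrs.get("src"))

parser = HCardPhotoParser()
parser.feed('<div class="vcard"><img class="photo" src="/me.jpg" alt="Chris"></div>')
print(parser.photos)  # ['/me.jpg']
```

A real consumer would also scope the photo to the card it belongs to, but the point stands: the data is visible, and a few lines of parsing get you the avatar.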

In terms of URI locating, I might recommend a standard convention that appends avatar.jpg to the end of an OpenID as a means of conveniently discovering an avatar. This follows the favicon.ico convention of sticking the icon file in the root directory of a site and then using it in bookmarks. There’s no reason why, when URLs come to represent people, we can’t do the same thing for avatars.
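To make the convention concrete, here’s a minimal sketch (the helper name is mine) of how a consumer might derive the avatar location from an OpenID URL:

```python
from urllib.parse import urlparse, urlunparse

def avatar_url(openid: str) -> str:
    # Stick avatar.jpg at the root of the OpenID's domain,
    # just as favicon.ico lives at the root of a site.
    parts = urlparse(openid if "://" in openid else "http://" + openid)
    return urlunparse((parts.scheme, parts.netloc, "/avatar.jpg", "", "", ""))

print(avatar_url("https://factoryjoe.com/blog/"))  # https://factoryjoe.com/avatar.jpg
```

No discovery protocol, no metadata: if the URL represents a person, the avatar is one well-known path away.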

Now, building off of this idea comes probably my most radical suggestion, and I know that when people shoot me down for it, it’s because I’m right, but just early (as usual).

Instead of a miserly 80 pixels square, I think that default personal avatars should be 512 pixels square (yes, a full 262,144 pixels rather than today’s 6,400).

There are a couple reasons and potential benefits for this:

  1. Leopard’s resolution independence supports icons that are 512px square (a good place to draw convention). These avatars could end up being very useful on the desktop (see Apple’s Front Row).
  2. While 80 pixels might be a useful size in an application, it’s often less than useful when trying to recognize someone in a lineup.
  3. We have the bandwidth. We have the digital cameras and iSights. I’m tired of squinting when the technology is there to fix the problem.
  4. It provides a high fidelity source to scale into different contortions for other uses. Try blowing up an 80-pixel image to 300 pixels. Yuck!
  5. If such a convention is indeed adopted, as favicon.ico was, we should set the bar much higher (or bigger) from the get-go.

So, a couple points to close out.

When I was designing Flock, I wanted to push a larger subscribable personal avatar standard so that we could offer richer, more personable (though hopefully not as male-dominated) interfaces like this one (featuring Technorati’s staff at the time):

Friends Feed Reading

In order to make this work across sites, we’d need some basic convention that folks could use in publishing avatars. Even today, avatars vary from one site to the next in both size and shape. This really doesn’t make sense. With the advent of OpenID and URL-based identity mashed up with microformats, it makes even less sense, though I understand that needs do vary.

So, on top of providing the basic convention for locating an avatar on the end of an OpenID (http://tld.com/avatar.jpg), why not use server-side transforms to also provide various avatar sizes, in multiples of 16, like: avatar.jpg (the original at 512×512), avatar_256.jpg, avatar_128.jpg, avatar_48.jpg, avatar_32.jpg and avatar_16.jpg? This is similar to Apple’s .icns icon format… I see no reason why we can’t move forward with better and richer representations of people.
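The naming scheme itself is trivial to compute; the actual downscaling could be done with any server-side image library. A sketch of the filename convention (the helper name is mine):

```python
def avatar_variants(sizes=(256, 128, 48, 32, 16)):
    # The 512x512 original lives at avatar.jpg; each downscaled
    # variant follows the avatar_<size>.jpg convention.
    names = {512: "avatar.jpg"}
    for size in sizes:
        names[size] = f"avatar_{size}.jpg"
    return names

variants = avatar_variants()
print(variants[48])   # avatar_48.jpg
print(variants[512])  # avatar.jpg
```

A consumer would then request the size nearest its needs and only fall back to scaling the 512px original when no prebuilt variant fits.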


Getting back to POSH (Plain ol’ Semantic HTML)

Salt and Pepper shakers

Original photo by paul goyette and shared under the Attribution-ShareAlike 2.0 license.

Following Web2Expo, a number of us got together for a Microformats dinner at Thirsty Bear. Some concern was raised over the increasing influx of proposals for new microformats — instead of sustained work on existing formats or techniques.

In discussing this, we realized a few things. Chief among them is that, as a community, we’ve been spending a great deal of time and effort providing a rationale and explanation for why microformats are important and how we use a community-driven process to derive new microformats. Now, there are historic reasons why our process is different and why we continually refer new members to it. If you consider that web standards themselves are created, reviewed and ratified by the W3C, a consortium of paying members bound to very specific rules and mandates, you’ll realize that the value of our community’s output is measurable by the degree to which we are able to consistently produce high quality, clear and implementable specifications. Without adherence to recognized process, chaos would unfold and we’d end up with a myriad of inconsistent and overlapping formats, which is essentially what killed the Structured Blogging initiative.

In the microformats community, it’s existing behavior discovered through research and prior standards work that most often leads to new formats, and this work is often undertaken and championed by independent individuals, as opposed to corporations. On top of that, our self-imposed mandate is to stay specific, focused and relevant, optimizing for the 80% use cases and ignoring the 20% edge cases.

This story has been replayed and retold the world over, with great effect and consequence. What we have failed to articulate in the same time and space, however, is what work is necessary beyond the creation of new microformats. And because of that, we have so many folks joining the community, eager to help, and seeing only the opportunity to — what else? — create a new microformat (in spite of the warning not to do so!).

So, the ultimate result of the conversation that night was to focus on a rebranding of an old idea along with a new process for generally getting involved in the microformats movement with a subset of tasks focused exclusively on advancing POSH.

From now on, we will be promoting POSH (as coined by kwijibo in IRC) as a first-order priority, alongside the development and improvement of existing microformats.

POSH (“Plain Old Semantic HTML”) is a very old idea, and constitutes the superset of semantic patterns within which microformats exist:

POSH Diagram

With POSH thusly established, we have enumerated four classes of actions that collectively represent a Process for Contributing in order to better channel the energy of newcomers and old-timers alike:

  1. Publish: if you’re not already, add valid, semantic markup to your own website. It goes without saying that you should also be publishing microformats wherever it makes sense. Focus on improving the web as it is, starting with the parts you have access to.
  2. Spread: advocate for and encourage others to follow your lead in implementing valid POSH and microformats. Familiarize yourself with web standards, accessibility, and why POSH is important. Do presentations on POSH at BarCamps and elsewhere; write about it, share it with friends, hold POSH Pits to create and build things with POSH. Add buttons (coming soon) to your site once you’ve been POSHified!
  3. Use: consume microformats — and better yet — add live subscriptions to data marked up in existing formats. With all the microformats out there, we need to start seeing some really innovative and time-saving uses of microformats, including tools for easily embedding microformatted content into blog posts and elsewhere.
    1. OpenID: meanwhile, consider adding OpenID identity services to your application or service, and support data syncing.
  4. Improve: once you’ve gone through and added POSH to all your websites, go back and refactor, iterate, and share feedback, tips and learnings with the greater community about what you did, how you did it and why. Tag your posts with ‘POSH’, contribute them to the wiki and generally seek out opportunities for improving the resources available to the wider audience of web designers and programmers.

In the coming days, we’ll be adding more documentation to the wiki and encouraging others to spread the word (as you should!).

Lastly, to help frame the POSH concept, think of it as a “Fast-tracked Microformats Process” — wherein you can do your research, develop semantic patterns and then implement them without going through the same drawn-out process that accepted formats must go through… because the goal is actually not to develop a new format, but to solve a specific and time-sensitive problem. Over time, these implementations will come to represent the body of prior art necessary to make informed decisions about future formats, but the immediate goal is to simply POSHify the web and not attempt the development of yet another format.

The importance of View Source

Camino View Source

There’s been a long history of innovation on the web founded in open access to the underlying source code that first websites, then later interactive web applications, were built on. The facility of having ready access to the inner workings of any web page has been essential to continued inspiration, imitation, and most importantly, the ongoing education of subsequent generations of designer-developer hybrids.

On my panel today on The Hybrid Designer, I took a moment to call out my concerns that the shininess of Rich Internet Application (RIA) frameworks like Adobe Apollo and Microsoft Silverlight (the framework formerly known as WPF/E) is crowding out critical consideration of the gravity and potential consequences of moving to these platforms. As Marc Orchant put it:

One of the most interesting discussions in the session was precipitated when Messina voiced his concerns that “containers” for web functionality like Adobe Apollo and Microsoft Silver[light] would make it harder to create dynamic applications that leverage these data streams as they will, he predicted, created new “walled gardens” by obscuring what is currently a pretty open playing field of ideas and techniques. [Jeremy] Keith added the observation that by hiding the source for the hybrid applications created using these tool, up and coming designers would lose a valuable learning resource that runs counter to the spirit of a read/write web built using open, standardized tools. Needless to say, the room was pretty sympathetic to the sentiments expressed by the panel.

In particular, I was suggesting that these frameworks effectively remove the View Source command — an utter reversal in the trend towards openness in web technologies leading to, in my view, new silos within a more closed web.

Ryan Stewart, who sadly I didn’t get a chance to catch up with afterwards, took me to task for my oversimplification:

Today at the Web 2.0 Expo, I sat in on a panel with Richard MacManus, Kelly Goto, Chris Messina and Jeremy Keith. They talked about the “hybrid designer” and touched on some points about the web and the richness that has really created the “hybrid” notion. In one bit, Chris said he was lamenting the fact that a lot of RIA technologies are taking away the “view source” and he got applause from the crowd.

I think this is the perfect example of how misunderstood the RIA world is. Chris used the example of Apollo and Silverlight as two technologies that are killing view source. Apollo is meant for desktop applications. We don’t have “view source” on the desktop, but that doesn’t mean we couldn’t. Apollo uses Flex and Ajax to create the desktop applications, and BOTH of those allow for view source. It’s true that Flex developers can turn off that feature, but really how is that any different than obfuscating your JavaScript in an Ajax application? When people want to share, the RIA tools out there have mechanisms in place to let them do that. Can you ask for more than that?

I was also surprised to hear Chris complain about Silverlight in that group. Of all the technologies, I think Silverlight actually has the best “view source” support. It uses JavaScript as the programming language behind the hood, and the XAML is just text based, so you can view source just like any other web page and see both the XAML and JavaScript libraries. That’s pretty open I think.

I’ll plead ignorance here (especially in terms of Silverlight), but I refuse to back off from my point about the importance of View Source (a point that I don’t think Ryan disagrees with in principle).

Whether you can get at the “goods” in Silverlight or Apollo apps is only part of the problem. I’ve examined the contents of four or five Apollo apps, and each one had any number of impenetrable .swf binaries that I couldn’t do anything with; even with the complete source code of TwitterCamp, a rather simple Apollo app, it wasn’t obvious how a design-leaning hybrid designer like myself would actually modify the app without buying into expensive Adobe tools (at $699 or $499 a seat). And that, in a sense, is no different than removing the View Source command altogether.

…and even when I finally did figure out that I could right click and choose View Source while running TwitterCamp, I received this error message and no source code:


Now, Ryan also claims that “We don’t have ‘view source’ on the desktop”, and I would argue that 1) it depends on your platform and 2) I’m not fundamentally prevented from tinkering with my desktop apps. And this is key.

Let’s drill down for a moment.

On the Mac, every application has the equivalent of a View Source command: simply right click and choose “Show Package Contents”. Since every Mac application is essentially a special kind of folder, you can actually browse the contents and resources of an application — and, in certain cases, make changes. Now, this isn’t as good as getting to the raw source, since there are still unusable binaries in those directories, but you can at least get to the nib files and make changes to the look and feel of an application without necessarily touching code or having the full source.

And so just like on the web, especially with free and open source tools like Firebug and Greasemonkey, with a little bit of knowledge or persistence, you can modify, tweak or wholly customize your experience without getting permission from the application creator, all by way of “viewing the source”. More importantly, you can learn from, adapt and merge prior art — source code that you’ve found elsewhere — and that, in turn, can be improved upon and released, furthering a virtuous cycle of innovation and education.

Nonetheless, I’m glad that Ryan has corrected me, especially about Silverlight, which indeed is put together with a lot of plain-text technologies. However, I still can’t help but be skeptical when there seems to be so much in it for Adobe and Microsoft to build out their own islands of the web where people buy only their tools and live in prefab Second Life worlds of quasi-standards that have been embraced and extended. It feels like déjà vu all over again; like we’ve been here before and though I’d thought that we’d internalized the reasons for not returning to those dark ages, the shininess of the new impairs our ability to remember the not-so-distant past… While Ryan may be technically correct about the availability of the source, if that top-level menu item vanishes from the first-gen of RIAs, I remain increasingly concerned that the net result will constitute the emergence of an increasingly closed and siloed web.

I do hope that Ryan’s optimism, coupled with activism from other open source and open web advocates, will counter my fears with great speed and efficacy, and keep what is now the most open and vital aspect of the web the way it is — and the way it was meant to be.

Microformats: Empowering Your Markup for Web 2.0

Microformats book arrived!

I received a copy of John Allsopp’s new book, Microformats: Empowering Your Markup for Web 2.0, in the mail today.

My first impression is certainly positive and I think that John has made a very valuable contribution to the community and to our efforts to get microformats out there on the open web.

We now have a solid resource that describes the community, the process, and a number of microformats and how they’re being used today, and that profiles a number of organizations already making good use of microformats (sadly he missed Ma.gnolia in the bunch, but there’s always second printings!).

This book is ideal for web developers looking for a handy reference on the existing formats, and for web designers wondering how to make use of microformats in their code and how to apply CSS effectively using their semantics. Finally, there’s probably even a trick or two that folks familiar with microformats might learn in its nearly 350 pages.

So, go buy yourself a copy and let me (and John) know what you think!

Microformatting the Future of Web Apps

Update: I’ve updated my schedule corrections to include hcards for all the speakers, so besides adding the entire schedule to your calendar, you can now import all the speakers to your address book.

Lisa from FoWA notified me that she’s since incorporated my hcalendar changes into the official schedule. Nice!

FoWA Banner

I wanted to draw attention to the effort put into the schedule for the upcoming Future of Web Apps (which we’re in London for). On the surface, it’s a great looking schedule — under the hood, you’ll find microformats marking up the times of the sessions. A nice effort, to be sure, except that it lacks a certain… accuracy.

I point this out for two reasons: first, I’d love to see the schedule fixed so that you can download it into your calendar; second, it serves as a good example of why the microformats community has been wise to minimize the use of both hidden microformatted content and invisible meta data as much as possible.

To illustrate the problem, let me point out two important elements of the hCalendar microformat, which specify when an event begins and ends respectively. From the iCalendar standard, these values are indicated by the DTSTART and DTEND properties. For example, this code would indicate that an event starts on Feb 20th at 6pm in London:

<abbr class="dtstart" title="20070220T1800Z">6pm</abbr>

However, when viewed in a browser, it looks like this: 6pm, and taken out of context, that 6pm could happen on any day of any year in any timezone. By marking up that time with an ISO datetime in the context of an hcalendar object, we know exactly what time and in what timezone we’re talking about.

So, looking at the FoWA schedule, you wouldn’t know it from the human-facing data, which looks like it’s offering all the right times and correct information, but delving into the microformatted data reveals a very different agenda: one that takes place in 2006 and goes backwards in time, with some events ending the day before they started.
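The kind of sanity check that would have caught the backwards events takes only a few lines. A sketch (assuming the UTC basic datetime format from the example above, without seconds):

```python
from datetime import datetime, timezone

def parse_dt(value: str) -> datetime:
    # Parse a basic-format UTC datetime like "20070220T1800Z",
    # as used in the <abbr> title attributes above.
    return datetime.strptime(value, "%Y%m%dT%H%MZ").replace(tzinfo=timezone.utc)

def sane(dtstart: str, dtend: str) -> bool:
    # An event shouldn't end before it starts.
    return parse_dt(dtstart) <= parse_dt(dtend)

print(sane("20070220T1800Z", "20070220T1930Z"))  # True
print(sane("20070221T0900Z", "20070220T1700Z"))  # False
```

Running something like this over every dtstart/dtend pair before publishing would flag the time-machine events immediately.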

Again, they’re certainly to be commended for their efforts to microformat their schedule to make it easy to import and subscribe to, but they seem to have missed the opportunity to actually provide a computer-readable schedule.

Here are some things that need to be fixed on the schedule:

  1. All times need to be contained in <abbr> tags, not <span>s. This is a common error in marking up hcalendar, so watch for this one first.
  2. The dates specified in the title attributes need to be 100% accurate; it’s better to have no data than incorrect data.
  3. All start times should begin before the end times, unless you’re marking up the schedule for a time machine.
  4. It would also be useful if all people and organizations were marked up as hcards, but that’s a separate matter.
  5. Lastly, it always helps to validate your basic XHTML and run your microformatted content through consuming applications like Operator, X2V or Tails to see if the existing tools can make sense of your data. If they can’t, it won’t work for anyone else either.

I’ve gone ahead and corrected the schedule. I’d love for the FoWA team to take these basic changes and incorporate them into their schedule, but I know they’re busy, so in the meantime, feel free to download the schedule in ICS format using Brian Suda‘s X2V transform script.

Scoping XFN and identifying authoritative hcards

Before I can write up my proposal for transcending social networks, I need to clarify the originating and destination scopes of XFN links.

It’s currently understood that XFN links describe personal relationships between two URLs.

Typically the endpoints of XFN links are URL-centric personal blogs (i.e. horsepigcow.com or tantek.com), but not always. And because we can’t always assume that the outgoing linker speaks for the whole URL, or that the destination linkee is all inclusive, we need a set of standard criteria to help us determine the intended scope of the originating linker.

Put another way, how can we better deduce who is XFN-linking to whom?

Let’s take a concrete example.

The established XFN protocol states that when I XFN-link from my blog at factoryjoe.com to horsepigcow.com, I’m describing the relationship between me and Tara Hunt, with our blogs acting as our online proxies. Readers of our blogs already know to equate factoryjoe.com with Chris Messina and horsepigcow.com with Tara Hunt, but how can computers tell?

Well, if you check our source code, you’ll find an hcard that describes our contact information — marked up in such a way that a computer can understand, “hey, this data represents a person!”

If only things were so simple though.

If I linked to Tara and there were only one hcard on the page, you could probably assume that that single hcard contained authoritative contact details for her, since, knowing that Tara blogs at horsepigcow.com, there’d be a good chance that she put it there. Sure enough, in her case, the hcard on horsepigcow.com does represent Tara.

Now, flip that around and let’s have Tara XFN-link back to my blog. This time, instead of one hcard, she’ll almost certainly find several, and, most perplexing of all, most of them are not me, but rather people whom I’ve marked up as hcards in my blog posts.

So, if you’re a computer trying to make sense of this information to determine who Tara’s trying to link to, what are you to think? Which relationship is she trying to describe with her link?

Well, as a stop-gap measure that I think could be easily and universally adapted to add definitiveness to any arbitrary hcard at the end of an XFN link, I propose using the <address> tag. Not only has this been proposed before and not been overruled, but it is actually semantically appropriate. Furthermore, there are already at least a few examples in the wild, notably on my blog, Tara’s blog, and most importantly, Tantek’s.

Therefore, to create a definitive and authoritative hcard on any page, simply follow this example markup (note the self-referencing use of rel-me for good measure):


<address class="vcard" id="hcard">
  <a href="https://factoryjoe.com/blog/contact/#hcard" rel="me" class="fn n">Chris Messina</a>
</address>

At the destination URL, include a fragment identifier (#hcard) for the hcard with the complete contact information and add rel-self in addition to rel-me (as per John Allsopp’s suggestion):

<address class="vcard" id="hcard">
  <a href="https://factoryjoe.com/" rel="me self" class="fn n">Chris Messina</a>
</address>

This practice will primarily help identify who XFN-linkers intend to link to when pointing to a blog or URL with multiple hcards. In the event that no definitive hcard is discovered, the relationship can be recorded until later when the observing agent can piece together who owns the URL by analyzing secondary clues (rel-me or other hcards that point to the URL and claim it).
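Detecting this convention is trivial for a consuming agent. A minimal sketch using Python’s stdlib html.parser (ignoring closing tags for brevity):

```python
from html.parser import HTMLParser

class AddressVCardDetector(HTMLParser):
    """True if the page contains an <address class="vcard"> --
    the proposed marker for the page owner's definitive hCard."""
    def __init__(self):
        super().__init__()
        self.found = False

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        if tag == "address" and "vcard" in classes:
            self.found = True

d = AddressVCardDetector()
d.feed('<address class="vcard" id="hcard">'
       '<a href="https://factoryjoe.com/" rel="me self" class="fn n">Chris Messina</a>'
       '</address>')
print(d.found)  # True
```

If no such element turns up, the agent falls back to the secondary clues described above.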

Oh, and I should note that from the standpoint of multi-author blogs, we should be able to scope XFN links to the author of the entry — with entry-author making this infinitely easier.

hResume is live on LinkedIn

Detecting hResume on LinkedIn

And the hits just keep on comin’.

I’m thrilled to be able to pass along Steve Ganz of LinkedIn‘s Twitter announcement (tweet?) of their support for hResume on LinkedIn (these tweets are becoming trendy!).

Brian Oberkirch is curious about the process they went through in applying microformats post facto — that is, without changing much of the existing codebase and design — and will have a podcast with Steve tomorrow on the topic. Personally I’m curious if they developed any best practices or conventions that might be passed on to other implementors that might improve the appearance and/or import/export of hResumes.

If you’ve been playing along, you’ll note that this is one of the first examples of a successful community-driven effort to create a microformat that wasn’t directly based on some existing RFC (like vcard and ical). Rather, a bunch of folks got together and pushed through the definition, research and iteration cycles and released a spec for the community to digest and expound upon.

Soon after, a WordPress plugin and a handy creator were released, Tails added support, and then Emurse got hip: “Elegant template has hResume support — long term planning, ya know? It’s your data, and we want to make it as flexible as possible.”

I wrote about the importance of hResume in August:

Why is this better than going to Monster.com and others? Well, for one thing, you’re always in charge of your data, so instead of having to fill out forms on 40,000 different sites, you maintain your resume on your site and you update it once and then ping others to let them know that you’ve updated your resume. And, when people discover your resume, they come to you in a context that represents you and lets you stand out rather than blending into a sea of homogeneous-looking documents.

Similar threads have come up recently about XFN, hcard and OpenID on the OpenID mailing list, and the possible crossover with hResume should not be ignored. With LinkedIn already supporting hcard and XFN, it’s just a matter of time before they jump on OpenID and firmly plant themselves in the future of decentralized professional networks.

Oh, and the possibilities to accelerate candidate discovery for all those job boards shouldn’t be underestimated either.

Netizen beware

I think the modern plight of IP is fascinating from a cultural development standpoint. Clearly it was believed when the laws were written that they’d be enforceable. Indeed they were, first at the local community level (think of old Wild West towns with their fool’s gold and cure-alls) and then on a larger scale, during the course of industrial development, when companies like Coke could extend their brand dominion the world over.

Now, owing much to the advancement of self-publishing tools and, of course, the Internet, it’s no longer conceivable to prevent every instance of misuse — in fact, as the RIAA and MPAA may someday learn, protecting your mark at the expense of those who you want to respect your mark is a losing, and extremely costly, battle.

But for all the railing I do against modern IP, I do understand the purpose it serves, even if I don’t agree with the mechanisms or costs of enforcement. And, the cost of not finding a citizen-driven plan of enforcement could be exceptionally disruptive to the economy and to the establishment of new businesses.

While disruption on the one hand can be good, as it destabilizes the incumbents and shakes old soil from the roots of the system, it can also lead to fear and paralysis as uncertainty takes over. Registering your trademark used to be simply a matter of course, and enforcement against infringers would lead to a nice monetary settlement; that is no longer the standard. Rather, as has been said recently, owning a mark worthy of enforcement will surely lead to a death from a thousand cuts the moment you decide to try to wrest what you think is yours from the millions of fingers of the world at large.

And this is where the conflict lies: in these new economic circumstances, individuals and small businesses cannot afford the cost, in terms of their attention or their dollars, of pursuing infringement; yet, all the same, there is value in the credibility and reputation of the mark they built, the benefits of which should be theirs to enjoy. On the flipside, there is the citizen-consumer, who may wish to publish or publicize their love of said brand, but may do so in an otherwise “infringing” way (see Firefox). At the same time, there is a perceived need to protect hapless consumers from themselves by preventing false actors from imitating or acting in the stead of someone else (think of the Tylenol scare). This is the flipside to trademark in that it attempts to provision economic rewards for playing nice and doing good, putting the onus of protecting your name on the individual whose name is in question. Therefore, if someone does wrong under the guise of your brand, it’s up to you to stop the infringement, since it’s your livelihood at stake.

So originally that was a good plan, but as I’ve been discussing, that enforcement now comes at the risk of your business!

So, what are we to do?

Well, a number of us, including Citizen Agency, will file for and receive trademarks. Another portion of us will try to enforce the mark through various means — those who are offline will have the smallest exposure and will probably be able to enforce their mark against a smaller market. Those who go online, which seems to be as necessary as being in the phone book these days, will find the legal environment frustrating, confusing and, to start, disempowering.

The way forward then, or at least a choice that should be considered available, is the one we’ve made for BarCamp and Microformats, and which I advocated for with Mozilla, Creative Commons and OpenID. The choice is to embrace community enforcement — not in preventing bad actors from behaving badly, but in creating more positive examples of good, representative behavior; of creating good documentation and information flows so you know how to judge a phish (notice I didn’t say ‘rat’); an understanding with the community that the centralized body doesn’t have the resources to police its name and is therefore willing to rely on its community in a non-binding way (that is, protection should be afforded so long as the company is doing good things for the community, earning enforcement and their trust) and that in return, the company will “embrace the chaos” and turn over a good deal of “ownership” of its name to the collective.

Now, this won’t work for everyone and indeed causes confusion, dilution of consistency and the occasional unrepresentative act, but on the whole, the notion of a community mark might at least form the foundation for thinking on a non-legal code of conduct-slash-ethics ready for, and reflective of, the 21st century.

Why the iPhone validates microformats

On the one hand, I’m trying to quell my excitement over the iPhone. After all, in two years’ time, or from an objective viewpoint, it’s a beautiful piece of industrial design that, as far as phones go, was a long time in coming. In that, Apple has done something important in the advance of phone interface design, finally lifting mobile applications above the equivalent of the flashing clock on a VCR.

And that’s awesome, but not what really has me giddy.

Instead, what excites me about the iPhone is Maciej’s work, ostensibly Mozilla’s missed opportunity. WebKit is open source. WebKit supports JavaScript. WebKit is on the iPhone. And, if you remember, Apple’s dotMac mail supports microformats.

Yes, the iPhone will support Yahoo-based IMAP (a shot across RIM’s bow?). BUT, Yahoo, to date, has been a big supporter and implementor of microformats. With support already in their web properties, we can start doing things with WebKit in mobile apps that you simply can’t do elsewhere with the same simple webpages that *aren’t* microformatted.
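As a sketch of what that could look like (this is my own illustration, not anything shipping on the iPhone), once a page carries hCard markup, even a standard-library parser can pull structured contact data out of it. The page fragment and name below are hypothetical; only the class names (vcard, fn, url) come from the hCard convention.

```python
from html.parser import HTMLParser

class HCardExtractor(HTMLParser):
    """Pull the formatted name ("fn") out of any hCard on a page."""
    def __init__(self):
        super().__init__()
        self.in_fn = False
        self.names = []

    def handle_starttag(self, tag, attrs):
        # An element is part of an hCard by virtue of its class names
        classes = dict(attrs).get("class", "").split()
        if "fn" in classes:
            self.in_fn = True

    def handle_data(self, data):
        if self.in_fn:
            self.names.append(data.strip())
            self.in_fn = False

# Hypothetical page fragment marked up as an hCard
page = '<div class="vcard"><a class="url fn" href="http://example.org/">Jane Doe</a></div>'
parser = HCardExtractor()
parser.feed(page)
print(parser.names)  # ['Jane Doe']
```

The point isn’t this particular script; it’s that any page marked up this way yields its data to any consumer, mobile or otherwise, with no site-specific screen-scraping.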

And, pushing forward, this creates an interesting opportunity to offer choice in map technology provider since it looks like Google gets default billing.

I know Maciej is interested in microformats; I know the Mozilla guys are too. From a web developer’s perspective, having just had a whole *new* device added to my priorities list, microformats and semantic markup generally suddenly make me feel a whole lot better about the work I’ll need to do to make my site mobile-friendly. And, as widget-sized and -styled interfaces come to the fore, providing an equivalent CC-like “do as you please with this data” affordance will seem obvious for web apps that have to date shirked the opportunity provided by microformats to become future-ready.

On emergent policy and ‘self’ vs ‘governance in common’

I had “gone away” from the microformats-discuss list a month or so back, owing to Andy [Mabbett]’s sometimes abrasive tone and pedantic reasoning. I simply didn’t have time to parse through all the hubbub, as interesting as it might have been to certain folks in the middle of it.

I’m glad that Tantek has taken action, as I previously encouraged him to do, because, though I value Andy’s positive contributions to the list, the wiki and the community, many of his contributions worked to unravel or undo the positive karma they had earned him.

As Tantek said, it’s a balancing act — and Andy was very good at providing net-neutral contributions.

But I do not wish to dwell on that topic; at the very least, groundbreaking action has finally been taken, and action that we can learn from, in light of what has come before us.

What I did want to talk about, however, are two things: namely, the meta-centralization that the microformats-dot-org community represents, and the emergent policy that microformats, as an effort to codify a series of best practices into standard, web-transmittable computer code, stands for. My goal is to illustrate the broader purpose and perspective of the work we’re doing, to propose a proper ego-placement with regard to this work, and to suggest potential parallels that make the cabal-like governance work in certain circumstances, and unravel in others, even within this community.

  1. Where microformats fits in the broader picture.

    I’ll get this out of the way right now. The terms and names of microformat classes, rel-values and so on don’t matter. They don’t. In many senses they’re arbitrary, just as the names AJAX and HTML were arbitrary and yet caught on. They’re simply placeholders for meaning, like the dollar bill is used to transmit the meaning of value in society.

    What is valuable, however, is agreement on terms. Agreement and implementation between organizations and institutions — for implementation is non-binding, but by supporting a common cause, both parties stand to benefit in ways neither is quite sure of yet, and neither sees a reason to act to the contrary.

    In this case, microformats such as hcard and hcalendar have found wide support, because, unlike other external efforts that tried to reinvent schema, we (Tantek in particular) dispensed with coming up with yet more schema and went with existing convention (note that when we have undertaken the “naming” process with new microformats, that process is often where most of this community’s contention and dissension lies).

    But naming is an ego-driven event that is similar to an artist signing his or her work; and when has a community produced a singular piece of artwork? Rarely, if ever!

  2. Why the microformats community operates as a cabal, and why it should continue to do so.

    Anyone who has participated in this community for some time will know how hard it is to get a new microformat “blessed” — that is, accepted, documented, promoted and ‘officialized’ by the community. There are many microformats efforts that have been relegated to the scrapheap of semantic history or to the personal industry of smaller parties, but very few efforts actually result in what we all would call a microformat when we see it.

    Truth be told, coming up with standards of any kind is a difficult and harried process. There are those among us who have direct experience with closed bodies that have and have not been successful in their charge to develop interoperable standards, and who could teach us all about the quagmire that is standards development. But there is strength in focus and in defending an ideal by intuitive fiat, even if it seems unfair to those who have a great deal to offer but do not have the same deftness that the incumbents possess.

    As such, those who have been around from the beginning and have weathered the hills and dales of this community have, in my opinion, earned their seat at the table of the cabal. Fortunately, this cabal is dependent upon the support of the community and upon obeisance to its dicta, or else it would simply cease to exist. In that way, the controlling cabal is still very much subservient to the implementations and good works of the community, which give it its power; if people stopped implementing or caring about microformats tomorrow, then regardless of the cabal’s perceived arrogance or very real self-assurance, its importance would be only to itself.

    And in that way, an important balance is achieved between despotism and collaboration, fueled by meritorious leadership. But this only scales so far — feudalism can hold only so long as the needs of the tenants are being met often enough. Where centralization and cabalism lead to paralysis of natural growth and species development, certain changes are in order.

  3. On the continued rhizomatic development of microformats

    A rhizome is a type of root-based plant that sends out lateral roots to create new offshoot instantiations of itself. Strawberries are rhizomatic, as is ginger. What’s important about a rhizome is that its growth path is predicated on similar and equal offshoots being cultivated in environments in which the original may not have been born. As such, the offshoot is better equipped to deal with the foreign environment than if the original had simply been cloned, or if it had tried to impose itself on foreign or hostile soil.

    What does this have to do with this community? Well, for one thing, the cabal-like institution of the microformats community leadership is powerful because we give it its power. And I trust it to look out for our best interests; at the same time, I think there are opportunities both to relieve some pent-up pressure and to consider alternative models that would continue to effectively spread microformats, and the practices this community espouses, beyond our areas of natural influence.

    I think a salient example of this came recently when my partner, Tara Hunt, was considered for deletion on Wikipedia (as I have been considered before). Now, Wikipedians obviously have the interest of Wikipedia in mind when they consider removing things from the index, and they also, one might surmise, have the readers in mind as well. However, in both discussions over whether to remove Tara and myself from the index (and this has been repeated for other people in the index as well), it was the *individual bias of Wikipedia editors* that won out over the unspoken interest of the minority communities that stood to benefit from our inclusion (one person even suggested that I be kept in the index since I was a “Notable programmer that assisted in creating a few notable groups and browsers” — those who know me know that I can’t code for shyte — and thus the reasoning for keeping me would have been arbitrary at best).

    So, coming back to microformats, I think that it’s time, as a matter of governance and Darwinian evolution, that we actually begin thinking about allowing new species of microformats to exist in the wild — they may not receive a “blessing” from us, but I hardly think that all the creatures on earth today were predicted in any non-secular books.

    To this end, I would recommend the specific explanation and characterization, vis-a-vis the microformats process, of efforts that fall into any of these categories:

    1. best practice — a technique has been discovered to make the composition of XHTML documents more consistent or more semantically accurate, for example, using the <cite> tag
    2. design pattern — this isn’t necessarily a “data format” in the sense that microformats should be about data interchange; rather, a design pattern is XHTML that can be used to facilitate the development of human interfaces, and may, for example, leverage existing microformats to achieve its effect (an example could be if flickr applied a behavior to hcards that allowed you to add a person marked up with the hcard microformat to your friends list)… the presence of microformats in a design pattern, however, is purely optional
    3. exploratory/brainstorming — gee, wouldn’t it be great to have a format for Smooth Peanut Butter? — primarily at the early stages, no code is necessary to explore a concept, but an interested or committed following is present and is willing to document the problem they’d like to solve and existing behavior
    4. working draft — essentially a series of conventions or best practices have been developed that may show up in the wild and that are probably “good enough” to start putting into use, with the understanding that changes are still likely
    5. recommendation/specification — this is where things solidify enough so that making a change has some impact… in fact, you could use this stage to definitively mark up your documents knowing that a change is unlikely; what separates this stage from becoming a “real” microformat is implementations in the wild; if no one adopts or puts this work into practice, you have a dead standard that would serve only to clutter the microformat ecosystem
    6. microformat — only when there is mass deployment in the wild, such that, given any significant sampling of pages on the open web, you *might* bump into this format, should it then be considered an actual microformat — for in practice, the community at large (the one that subsumes the microformats community and its leading cabal) has shown its support by adopting the conventions recommended in the spec, and has shown its approval by *actually deploying it*

      The final stage is the hardest, as it requires influence, political might and campaigning; but those are the microformats that will likely last and be embraced — and, furthermore, are the most indisputable, because there are real, rather than imagined or potential, statistics behind them.

    Note that this list is preliminary; it does pay homage to the W3C process stages, but in a much more informal way:

    1. Working Draft (WD)
    2. Last Call Working Draft
    3. Candidate Recommendation (CR)
    4. Proposed Recommendation (PR)
    5. W3C Recommendation (REC)
  4. Finally, to conclude, I would like to suggest that expanding, and making more explicit, the preliminary stages of “microformat crystallization” allows external communities to take this effort and expand it beyond our natural sphere of influence or first-hand knowledge. The purpose, of course, is to avoid the kind of Wikipedian-myopic purview that would lead the effort down the path of exclusivity and stagnation. If anything stands out about the current governance structure, it’s that we have a strong political will in Tantek, who does a damn fine job keeping us on target but who, to the detriment of the whole, hasn’t allowed market forces to take care of the nascent efforts that might emerge external to this list.

    More than anything else, I want to avoid at all costs, now that we’re seeing popular support from Firefox et al, the conversion of our rich and diverse community into a TechCrunch-like kingmaker that people somehow think they have to win favor with in order to be successful. I think the point is that anyone should be able to build out and see through the execution and development of a microformat, potentially entirely outside of this list, simply by religiously adhering to the principles by which we govern ourselves and allow ourselves to be governed.

    For all the times that Andy has asked Tantek, “what gives you the right?”, there is an equal opportunity to say, “I give myself the right”: to take these ideas, these practices, the fundamental goals and assumptions of this community, and to strike out on my own, to pursue that which I know is right and is valuable to a community that those who reside on the list are unfamiliar with. For all Andy’s struggles to have his way, there was a larger goal, using simple principles to semanticize the web, that he could have, at any point, taken elsewhere — not forking the community, but doing his work in an environment that suited him better.

    I know why Tantek did what he did, and I support him in his decision. But I also support Andy’s ability to pioneer his own efforts, not necessarily under the microformats name, but under the same principles. And should he be successful, well, he certainly would have some valuable bargaining chips to lay down when he offers his opinions to us and to the cabal, wouldn’t he?