Getting back to POSH (Plain ol’ Semantic HTML)

Salt and Pepper shakers

Original photo by paul goyette and shared under the Attribution-ShareAlike 2.0 license.

Following Web2Expo, a number of us got together for a Microformats dinner at Thirsty Bear. Some concern was raised over the increasing influx of proposals for new microformats — instead of sustained work on existing formats or techniques.

In discussing this, we realized a few things. Chief among them is that, as a community, we’ve been spending a great deal of time and effort providing a rationale and explanation for why microformats are important and how we use a community-driven process to derive new microformats. Now, there are historic reasons why our process is different and why we continually refer new members to it. If you consider that web standards themselves are created, reviewed and ratified by the W3C, a consortium of paying members bound to very specific rules and mandates, you’ll realize that the value of our community’s output is measurable by the degree to which we are able to consistently produce high quality, clear and implementable specifications. Without adherence to a recognized process, chaos would unfold and we’d end up with a myriad of inconsistent and overlapping formats, which is essentially what killed the Structured Blogging initiative.

In the microformats community, it’s existing behavior discovered through research and prior standards work that most often leads to new formats, and this work is often undertaken and championed by independent individuals, as opposed to corporations. On top of that, our self-imposed mandate is to stay specific, focused and relevant, optimizing for the 80% use cases and ignoring the 20% edge cases.

This story has been replayed and retold the world over, with great effect and consequence. What we have failed to articulate in the same time and space, however, is what work is necessary beyond the creation of new microformats. And because of that, we have so many folks joining the community, eager to help, and seeing only the opportunity to — what else? — create a new microformat (in spite of the warning not to do so!)

So, the ultimate result of the conversation that night was to focus on a rebranding of an old idea, along with a new process for getting involved in the microformats movement generally, including a subset of tasks focused exclusively on advancing POSH.

From now on, we will be promoting POSH (as coined by kwijibo in IRC) as a first-order priority, alongside the development and improvement of existing microformats.

POSH (“Plain Old Semantic HTML”) is a very old idea, and constitutes the superset of semantic patterns within which microformats exist:

POSH Diagram
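To make the idea concrete, here is a minimal sketch of the difference (the content, class names and URL are invented for illustration): the same fragment marked up as anonymous, presentational divs versus as plain ol’ semantic elements.

<!-- not POSH: presentational div soup that means nothing to machines -->
<div class="big bold">Upcoming events</div>
<div>Microformats dinner, Thirsty Bear, April 18</div>

<!-- POSH: the same content expressed with real, semantic structure -->
<h2>Upcoming events</h2>
<ul>
  <li>Microformats dinner at <a href="http://example.com/thirsty-bear">Thirsty Bear</a>, April 18</li>
</ul>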

With POSH thusly established, we have enumerated four classes of actions that collectively represent a Process for Contributing in order to better channel the energy of new-comers and old-timers alike:

  1. Publish: if you’re not already, add valid, semantic markup to your own website. It goes without saying that you should also be publishing microformats wherever it makes sense (see the hcard sketch after this list). Focus on improving the web as it is, the parts you already have access to.
  2. Spread: advocate for and encourage others to follow your lead in implementing valid POSH and microformats. Familiarize yourself with web standards, accessibility, and why POSH is important. Do presentations on POSH at BarCamps and elsewhere; write about it, share it with friends, hold POSH Pits to create and build things with POSH. Add buttons (coming soon) to your site once you’ve been POSHified!
  3. Use: consume microformats — and better yet — add live subscriptions to data marked up in existing formats. With all the microformats now being published, we need to start seeing some really innovative and time-saving uses of microformats, including tools for easily embedding microformatted content into blog posts and elsewhere.
    1. OpenID: meanwhile, consider adding OpenID identity services to your application or service — and support and data syncing.
  4. Improve: once you’ve gone through and added POSH to all your websites, go back and refactor, iterate, and share feedback, tips and lessons learned (what you did, how you did it and why you did it that way) with the greater community. Tag your posts with ‘POSH’, contribute them to the wiki and generally seek out opportunities for improving the resources available to the wider audience of web designers and programmers.
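If you’re looking for a first microformat to publish (per the first step above), an hcard on your contact page is probably the gentlest on-ramp. Here’s a bare-bones sketch; the name, organization and locality are placeholders to swap for your own:

<!-- a minimal hcard: the class names come from the hCard spec -->
<div class="vcard">
  <a class="url fn" href="http://example.com/">Jane Doe</a>,
  <span class="org">Example Co.</span>,
  <span class="adr"><span class="locality">San Francisco</span>, <span class="region">CA</span></span>
</div>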

In the coming days, we’ll be adding more documentation to the wiki and encouraging others to spread the word (as you should!).

Lastly, to help frame the POSH concept, think of it as a “Fast-tracked Microformats Process” — wherein you can do your research, develop semantic patterns and then implement them without going through the same drawn-out process that accepted formats must go through… because the goal is actually not to develop a new format, but to solve a specific and time-sensitive problem. Over time, these implementations will come to represent the body of prior art necessary to make informed decisions about future formats, but the immediate goal is to simply POSHify the web, not to attempt the development of yet another format.

The importance of View Source

Camino View Source

There’s been a long history of innovation on the web founded in open access to the underlying source code that first websites, and later interactive web applications, were built on. Ready access to the inner workings of any web page has been essential to continued inspiration, imitation and, most importantly, the ongoing education of subsequent generations of designer-developer hybrids.

On my panel today on The Hybrid Designer, I took a moment to call out my concerns that the shininess of Rich Internet Application (RIA) frameworks like Apollo and Silverlight (the framework formerly known as WPF/E) is blocking out critical consideration of the gravity and potential consequences of moving to these platforms. As Marc Orchant put it:

One of the most interesting discussions in the session was precipitated when Messina voiced his concerns that “containers” for web functionality like Adobe Apollo and Microsoft Silver[light] would make it harder to create dynamic applications that leverage these data streams as they will, he predicted, create new “walled gardens” by obscuring what is currently a pretty open playing field of ideas and techniques. [Jeremy] Keith added the observation that by hiding the source for the hybrid applications created using these tools, up and coming designers would lose a valuable learning resource that runs counter to the spirit of a read/write web built using open, standardized tools. Needless to say, the room was pretty sympathetic to the sentiments expressed by the panel.

In particular, I was suggesting that these frameworks effectively remove the View Source command — an utter reversal in the trend towards openness in web technologies leading to, in my view, new silos within a more closed web.

Ryan Stewart, who sadly I didn’t get a chance to catch up with afterwards, took me to task for my oversimplification:

Today at the Web 2.0 Expo, I sat in on a panel with Richard MacManus, Kelly Goto, Chris Messina and . They talked about the “hybrid designer” and touched on some points about the web and the richness that has really created the “hybrid” notion. In one bit, Chris said he was lamenting the fact that a lot of RIA technologies are taking away the “view source” and he got applause from the crowd.

I think this is the perfect example of how misunderstood the RIA world is. Chris used the example of Apollo and Silverlight as two technologies that are killing view source. Apollo is meant for desktop applications. We don’t have “view source” on the desktop, but that doesn’t mean we couldn’t. Apollo uses Flex and Ajax to create the desktop applications, and BOTH of those allow for view source. It’s true that Flex developers can turn off that feature, but really how is that any different than obfuscating your JavaScript in an Ajax application? When people want to share, the RIA tools out there have mechanisms in place to let them do that. Can you ask for more than that?

I was also surprised to hear Chris complain about Silverlight in that group. Of all the technologies, I think Silverlight actually has the best “view source” support. It uses JavaScript as the programming language behind the hood, and the XAML is just text based, so you can view source just like any other web page and see both the XAML and JavaScript libraries. That’s pretty open I think.

I’ll plead ignorance here (especially in terms of Silverlight), but I refuse to back off from my point about the importance of View Source (a point that I don’t think Ryan disagrees with in principle).

Whether you can get at the “goods” in Silverlight or Apollo apps is only part of the problem. I’ve examined the contents of four or five Apollo apps and each one had any number of impenetrable .swf binaries that I couldn’t do anything with, and even with the complete source code of TwitterCamp, a rather simple Apollo app, it wasn’t obvious how a design-leaning hybrid designer like myself would actually modify the app without buying into expensive Adobe tools (at $699 or $499 a pop). And that, in a sense, is no different than removing the View Source command altogether.

…and even when I finally did figure out that I could right click and choose View Source while running TwitterCamp, I received this error message and no source code:

Alert

Now, Ryan also claims that “we don’t have ‘view source’ on the desktop”, and I would argue that 1) it depends on your platform and 2) I’m not fundamentally prevented from tinkering with my desktop apps. And this is key.

Let’s drill down for a moment.

On the Mac, every application has the equivalent of a View Source command: simply right click and choose “Show Package Contents”. Since every Mac application is essentially a special kind of folder, you can actually browse the contents and resources of an application — and, in certain cases, make changes. Now, this isn’t as good as getting to the raw source, since there are still unusable binaries in those directories, but you can at least get to the nib files and make changes to the look and feel of an application without necessarily touching code or having the full source.

And so just like on the web, especially with free and open source tools like Firebug and Greasemonkey, with a little bit of knowledge or persistence you can modify, tweak or wholly customize your experience without getting permission from the application creator, all by way of “viewing the source”. More importantly, you can learn from, adapt and merge prior art — source code that you’ve found elsewhere — and that, in turn, can be improved upon and released, furthering a virtuous cycle of innovation and education.

Nonetheless, I’m glad that Ryan has corrected me, especially about Silverlight, which indeed is put together with a lot of plain-text technologies. However, I still can’t help but be skeptical when there seems to be so much in it for Adobe and Microsoft to build out their own islands of the web, where people buy only their tools and live in prefab Second Life worlds of quasi-standards that have been embraced and extended. It feels like déjà vu all over again; like we’ve been here before, and though I thought we’d internalized the reasons for not returning to those dark ages, the shininess of the new impairs our ability to remember the not-so-distant past… While Ryan may be technically correct about the availability of the source, if that top-level menu item vanishes from the first generation of RIAs, I remain concerned that the net result will be an increasingly closed and siloed web.

I do hope that Ryan’s optimism, coupled with activism from other open source and open web advocates, will counter my fears with great speed and efficacy, and keep what is now the most open and vital aspect of the web the way it is, and the way it was meant to be.

Microformats: Empowering Your Markup for Web 2.0

Microformats book arrived!

I received a copy of John Allsopp’s new book, Microformats: Empowering Your Markup for Web 2.0, in the mail today.

My first impression is certainly positive and I think that John has made a very valuable contribution to the community and to our efforts to get microformats out there on the open web.

We now have a solid resource that describes the community and the process, covers a number of microformats and how they’re being used today, and profiles a number of organizations that are already making good use of microformats (sadly he missed Ma.gnolia in the bunch, but there’s always second printings!).

This book is ideal for web developers looking for a handy reference on the existing formats, and for web designers wondering how to make use of microformats in their code and how to apply CSS effectively using their semantics. Finally, there’s probably even a trick or two that folks already familiar with microformats might learn in its nearly 350 pages.

So, go buy yourself a copy and let me (and John) know what you think!

Microformatting the Future of Web Apps

Update: I’ve updated my schedule corrections to include hcards for all the speakers, so besides adding the entire schedule to your calendar, you can now import all the speakers to your address book.

Lisa from FoWA notified me that she’s since incorporated my hcalendar changes into the official schedule. Nice!

FoWA Banner

I wanted to draw attention to the effort put into the schedule for the upcoming Future of Web Apps (which we’re in London for). On the surface, it’s a great looking schedule — under the hood, you’ll find microformats marking up the times of the sessions. A nice effort, to be sure, except that it lacks a certain… accuracy.

I point this out for two reasons: first, I’d love to see the schedule fixed so that you can download it into your calendar; second, it serves as a good example of why the Microformats community has been wise to minimize the use of hidden microformatted content and invisible metadata as much as possible.

To illustrate the problem, let me point out two important elements of the hcalendar microformat: dtstart and dtend. These elements specify when an event begins and ends, respectively. From the icalendar standard, these values are indicated by the DTSTART and DTEND properties. For example, this code would indicate that an event starts on Feb 20th at 6pm in London:

<abbr class="dtstart" title="20070220T1800Z">6pm</abbr>

However, when viewed in a browser, it looks like this: 6pm, and taken out of context, that 6pm could happen on any day of any year in any timezone. By marking up that time with an ISO datetime in the context of an hcalendar object, we know exactly what time and in what timezone we’re talking about.
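Putting dtstart and dtend together, a complete hcalendar event looks something like the following sketch (the session name, times and location here are invented, not taken from the actual FoWA schedule):

<div class="vevent">
  <a class="url summary" href="http://example.com/schedule#keynote">Opening keynote</a>:
  <abbr class="dtstart" title="20070220T0930Z">9:30am</abbr> to
  <abbr class="dtend" title="20070220T1030Z">10:30am</abbr>,
  <span class="location">London, UK</span>
</div>

Note that the dtend falls after the dtstart, which, as we’ll see, is more than the FoWA schedule can currently claim.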

So, looking at the FoWA schedule, you wouldn’t know it from the human-facing data, which appears to offer all the right times and correct information, but delving into the microformatted data reveals a very different agenda: one that takes place in 2006 and runs backwards in time, with some events ending the day before they started.

Again, they’re certainly to be commended for their efforts to microformat their schedule to make it easy to import and subscribe to, but they seem to have missed the opportunity to actually provide a computer-readable schedule.

Here are some things that need to be fixed on the schedule:

  1. All times need to be contained in <abbr> tags, not <span>s. This is a common error in marking up hcalendar, so watch for this one first.
  2. Second, the dates specified in the title attributes need to be 100% accurate; it’s better to have no data than incorrect data.
  3. Third, all start times should begin before the end times, unless you’re marking up the schedule for a time machine.
  4. I should point out that it would be useful if all people and organizations were marked up as hcards, but that’s a separate matter.
  5. Lastly, it always helps to validate your basic XHTML and run your microformatted content through consuming applications like Operator, X2V or Tails to see if the existing tools can make sense of your data. If not, it won’t work for anyone else either.

I’ve gone ahead and corrected the schedule. I’d love for the FoWA team to take these basic changes and incorporate them into their schedule, but I know they’re busy, so in the meantime, feel free to download the schedule in ICS format using Brian Suda’s X2V transform script.

Scoping XFN and identifying authoritative hcards

Before I can write up my proposal for transcending social networks, I need to clarify the originating and destination scopes of XFN links.

It’s currently understood that XFN links describe personal relationships between two URLs.
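As a quick refresher, an XFN link is nothing more than a regular hyperlink with a rel attribute describing the relationship. A blogroll entry might look like this (the particular rel values here are only an example):

<!-- XFN: the rel values describe my relationship to the person behind the URL -->
<a href="http://horsepigcow.com/" rel="friend colleague met">Tara Hunt</a>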

Typically the endpoints of XFN links are URL-centric personal blogs (e.g. horsepigcow.com or tantek.com), but not always. And because we can’t always assume that the outgoing linker speaks for the whole URL, or that the destination linkee is all-inclusive, we need a set of standard criteria to help us determine the intended scope of the originating linker.

Put another way, how can we better deduce who is XFN-linking to whom?

Let’s take a concrete example.

The established XFN protocol states that when I XFN-link from my blog at factoryjoe.com to horsepigcow.com, I’m describing the relationship between me and Tara Hunt, and our blogs act as our online proxies. Readers of our blogs already know to equate factoryjoe.com with Chris Messina and horsepigcow.com with Tara Hunt, but how can computers tell?

Well, if you check our source code, you’ll find an hcard that describes our contact information — marked up in such a way that a computer can understand, “hey, this data represents a person!”

If only things were so simple though.

If I linked to Tara and there were only one hcard on the page, you could probably assume that that single hcard contained authoritative contact details for her: since Tara blogs at horsepigcow.com, there’s a good chance she put it there herself. Sure enough, in her case, the hcard on horsepigcow.com does represent Tara.

Now, flip that around and let’s have Tara XFN-link back to my blog. This time, instead of one hcard, she’ll most certainly find more than one, and, most perplexing of all, most of them are not me, but rather people whom I’ve marked up as hcards in my blog posts.

So, if you’re a computer trying to make sense of this information to determine who Tara’s trying to link to, what are you to think? Which relationship is she trying to describe with her link?

Well, as a stop-gap measure that I think could be easily and universally adapted to add definitiveness to any arbitrary hcard at the end of an XFN link, I propose using the <address> tag. Not only has this been proposed before and not been overruled, but it is actually semantically appropriate. Furthermore, there are already at least a few examples in the wild, notably on my blog, Tara’s blog, and most importantly, Tantek’s.

Therefore, to create a definitive and authoritative hcard on any page, simply follow this example markup (note the self-referencing use of rel-me for good measure):


<address class="vcard" id="hcard">
  <a href="https://factoryjoe.com/blog/contact/#hcard" rel="me" class="fn n">Chris Messina</a>.
</address>

At the destination URL, include a fragment identifier (#hcard) for the hcard with the complete contact information and add rel-self in addition to rel-me (as per John Allsopp’s suggestion):

<address class="vcard" id="hcard">
  <a href="https://factoryjoe.com/" rel="me self" class="fn n">Chris Messina</a>.
</address>

This practice will primarily help identify who XFN-linkers intend to link to when pointing to a blog or URL with multiple hcards. In the event that no definitive hcard is discovered, the relationship can be recorded until later when the observing agent can piece together who owns the URL by analyzing secondary clues (rel-me or other hcards that point to the URL and claim it).

Oh, and I should note that from the standpoint of multi-author blogs, we should be able to scope XFN links to the author of the entry — with entry-author making this infinitely easier.
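Here’s a rough sketch of how that might look with hAtom (the names, dates and URLs are placeholders): the author hcard inside the entry tells a parser whose voice any XFN links in that entry speak in.

<div class="hentry">
  <h3 class="entry-title">Dinner at Thirsty Bear</h3>
  <address class="author vcard">
    <a class="url fn" href="http://example.com/jane/">Jane Doe</a>
  </address>
  <abbr class="published" title="20070418T0200Z">April 18th</abbr>
  <div class="entry-content">
    <p>Great conversation with <a href="http://example.org/john/" rel="friend met">John Roe</a>.</p>
  </div>
</div>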

hResume is live on LinkedIn

Detecting hResume on LinkedIn

And the hits just keep on comin’.

I’m thrilled to be able to pass along Steve Ganz of LinkedIn’s Twitter announcement (tweet?) of their support for hResume on LinkedIn (these tweets are becoming trendy!).

Brian Oberkirch is curious about the process they went through in applying microformats post facto — that is, without changing much of the existing codebase and design — and will have a podcast with Steve tomorrow on the topic. Personally, I’m curious whether they developed any best practices or conventions that could be passed on to other implementers to improve the appearance and/or import/export of hResumes.

If you’ve been playing along, you’ll note that this is one of the first examples of a successful community-driven effort to create a microformat that wasn’t directly based on some existing RFC (like vcard and ical). Rather, a bunch of folks got together and pushed through the definition, research and iteration cycles and released a spec for the community to digest and expound upon.

Soon after, a WordPress plugin and a handy creator were released, Tails added support and then Emurse got hip: “Elegant template has hResume support — long term planning, ya know? It’s your data, and we want to make it as flexible as possible.”

I wrote about the importance of hResume in August:

Why is this better than going to Monster.com and others? Well, for one thing, you’re always in charge of your data, so instead of having to fill out forms on 40,000 different sites, you maintain your resume on your site and you update it once and then ping others to let them know that you’ve updated your resume. And, when people discover your resume, they come to you in a context that represents you and lets you stand out rather than blending into a sea of homogeneous-looking documents.

Similar threads have come up recently about XFN, hcard and OpenID on the OpenID mailing list, and the possible crossover with hResume should not be ignored. With LinkedIn already supporting hcard and XFN, it’s just a matter of time before they jump on OpenID and firmly plant themselves in the future of decentralized professional networks.

Oh, and the possibilities for accelerating candidate discovery on all those job boards shouldn’t be underestimated either.

Netizen beware

I think the modern plight of IP is fascinating from a cultural development standpoint. Clearly it was believed when the laws were written that they’d be enforceable. Indeed they were, first at the local community level (think of old Wild West towns with their fool’s gold and cure-alls) and then on a larger scale, during the course of industrial development, when companies like Coke could extend their brand dominion the world over.

Now, owing much to the advancement of self-publishing tools and, of course, the Internet, it’s no longer conceivable to prevent every instance of misuse — in fact, as the RIAA and MPAA may someday learn, protecting your mark at the expense of those whom you want to respect your mark is a losing, and extremely costly, battle.

But for all the railing I do against modern IP, I do understand the purpose it serves, even if I don’t agree with the mechanisms or costs of enforcement. And, the cost of not finding a citizen-driven plan of enforcement could be exceptionally disruptive to the economy and to the establishment of new businesses.

While disruption on the one hand can be good, as it destabilizes the incumbents and shakes old soil from the roots of the system, it can also lead to fear and paralysis as uncertainty takes over. Registering your trademark used to be simply a matter of course, and enforcement against infringers would lead to a nice monetary settlement; that is no longer the standard. Rather, as has been said recently, to own a mark worthy of enforcement will surely lead to death by a thousand cuts the moment you decide to try to wrest what you think is yours from the millions of fingers of the world at large.

And this is where the conflict lies: in these new economic circumstances, individuals and small businesses cannot afford the cost, in terms of their attention or their dollars, of pursuing infringement; yet, all the same, there is value in the credibility and reputation of the mark they’ve built, and the benefits should be theirs to enjoy. On the flipside, there is the citizen-consumer, who may wish to publish or publicize their love of said brand, but may do so in an otherwise “infringing” way (see Firefox). At the same time, there is a perceived need to protect the hapless consumer from him or herself by preventing false actors from imitating or acting in the stead of someone else (think of the Tylenol scare). This is the flipside of trademark, in that it attempts to provision economic rewards for playing nice and doing good, and puts the onus of protecting your name on the individual whose name is in question. Therefore, if someone does wrong under the guise of your brand, it’s up to you to stop the infringement, since it’s your livelihood at stake.

So originally that was a good plan, but as I’ve been discussing, that enforcement now comes at the risk of your business!

So, what are we to do?

Well, a number of us, including Citizen Agency, will file for and receive trademarks. Another portion of us will try to enforce the mark through various means — those who are offline will have the smallest exposure and will probably be able to enforce their mark against a smaller market. Those who go online, which seems to be as necessary as being in the phone book these days, will find the legal environment frustrating, confusing and to start, disempowering.

The way forward then, or at least a choice that should be considered available, is the one we’ve made for BarCamp and Microformats, and for which I advocated for with Mozilla, Creative Commons and OpenID. The choice is to embrace community enforcement — not in preventing bad actors from behaving badly, but in creating more positive examples of good, representative behavior; of creating good documentation and information flows so you know how to judge a phish (notice I didn’t say ‘rat’); an understanding with the community that the centralized body doesn’t have the resources to police its name and is therefore willing to rely on its community in a non-binding way (that is, protection should be afforded so long as the company is doing good things for the community, earning enforcement and their trust) and that in return, the company will “embrace the chaos” and turn over a good deal of “ownership” of its name to the collective.

Now, this won’t work for everyone, and indeed it causes confusion, dilution of consistency and the occasional unrepresentative act, but on the whole, the notion of a community mark might at least form the foundation for thinking about a non-legal code of conduct-slash-ethics ready for, and reflective of, the 21st century.