Twitter adds support for hAtom, hCard and XFN

Twitter / Alex Payne: TWITTER CAN HAS A MICROFORMATS

The Cinco de Meow DevHouse was arguably a pretty productive event. Not only did Larry get buckets done on hAtomic, but I also got to peer-pressure Alex Payne from Twitter into adding microformats to the site, based on a diagram I did some time ago:

Twitter adds microformats

There are still a few bugs to be worked out in the markup, but it’s pretty incredible to think about how much recognizable data is now available in Twitter’s HTML.
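To make that concrete, here's a rough sketch of the kind of markup involved. To be clear, this isn't Twitter's actual template; it's just the hAtom, hCard and XFN conventions applied to a hypothetical status update and following list, with made-up names and URLs:

```html
<!-- Hypothetical sketch only: hAtom marks up the update, hCard its author,
     and XFN (rel="contact") the following links. Not Twitter's real markup. -->
<div class="hentry">
  <span class="entry-content">Adding microformats to Twitter at DevHouse.</span>
  <abbr class="published" title="2007-05-06T14:30:00-07:00">about 2 hours ago</abbr>
  <span class="author vcard">
    <img class="photo" src="http://example.com/al3x.jpg" alt="" />
    <a class="url fn" href="http://twitter.com/al3x">Alex Payne</a>
  </span>
</div>

<ul class="following">
  <!-- each rel="contact" link is one edge in the social graph -->
  <li><a href="http://twitter.com/factoryjoe" rel="contact">factoryjoe</a></li>
  <li><a href="http://twitter.com/biz" rel="contact">biz</a></li>
</ul>
```

A parser that understands these formats can lift the update, its timestamp, the author's card and the follow relationships straight out of the page, which is exactly the data an XFN-plotting mashup would need.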

Sam Sethi got the early scoop, and now I’m waiting for the first mashup that takes advantage of the XFN data to plot out all the social connections of the Twittersphere.

A different kind of net neutrality: Carbon Offsetting Web 2.0

Flickr Green

A couple of months ago I had an idea that I've wanted to socialize ever since, but until now I'd only done so behind the scenes. Things being as they are, I've had little time to really advance this cause further, other than to push it on a few friends who, so far, have reacted quite positively.

Prompted by Jeremy Zawodny’s post about Yahoo going carbon neutral and in support of Chris Baskind’s month-long effort to get high quality environmental links added to his Lighter Footstep group, I thought I’d finally write this up to see if it draws any interest.

The idea is rather simple and requires but one piece of support infrastructure that fortunately my fellow citizen coworker Ivan Storck is already hard at work on (more about that later).

So what’s the idea? Well, quite simply, it’s a web service that you use to offset the carbon footprint of your customers’ use of your app. This would be mostly beneficial for larger services, but it’s my belief that every little bit counts!

For freemium services like Basecamp, WordPress and Last.fm, providing an option for paying members to add $1/month to their bill in order to offset their use of your web service is where it begins. In exchange for this contribution, they would get a special distinction within the community, like a green avatar or badge to denote their carbon neutral status:

Last.fm Green

Now, this might seem like a trivial incentive, but then you might also be surprised to learn that the number one reason people pay to upgrade their Flickr accounts is not that they need more storage or unlimited uploads, but that they want that tiny little PRO label next to their name. Offering a similar incentive on social networks, and making offsetting cool, becomes a way to propagate this behavior, ultimately working towards offsetting the entirety of Web 2.0.

Now, those of you who have read up on or know anything about the power that servers draw will quickly recognize that $1 a month to offset a single user account is going overboard, given that it technically only costs a few cents per month to power most people’s individual use of social networking sites. And while you wouldn’t be wrong, you’ve hit on an interesting social component of this campaign: those who want to offset can do so, and in doing so won’t just be offsetting their own footprint, but some of their neighbors’ as well, in an act straight out of Caterina Fake’s culture of generosity.

So it’s not so much about offsetting one’s personal use as about offsetting at a social level. And because this good deed is reflected in a user’s avatar or badge, anyone can effectively “upgrade” themselves to carbon neutral status once they get annoyed that all their friends have “leveled up” and they haven’t. Meanwhile, those who upgraded as a proactive choice can feel reassured that their influence is leading those around them to make similar decisions, even if for different reasons. In the end, the result is doubleplusgood.

So, about that API that I mentioned. It’s important to realize that 1) we’re in the early stages of this effort and 2) not all carbon offsetting funds are created equal (this is something I’m becoming ever more familiar with as we move to certify Citizen Space as a green office). Therefore, Ivan (who I mentioned and who also runs Sustainable Marketing and Sustainable Websites) has begun work on an API that will allow companies to purchase carbon offsets in bulk based on the actual amount of power consumed in something like a server farm environment (where power measurements are fairly easy to come by). Once initiated, the purchase will likely take place through one of Ivan’s affiliates based here in San Francisco called 3 Phases. In any case, we’re in the beginning phases of making this happen, but if you’re interested in helping or in offsetting your customers’ usage, leave a comment or drop me a note and we’ll see if we can’t push this work forward.

Likewise, if you can think of other ways to minimize the environmental footprint of your web service or web office, blog about it and let others know! We’re doing what we can to create green coworking spaces and the more success stories we come across, the better.

Raising the standard for avatars

FactoryDevil

Not long ago, Gravatar crawled back out from the shadows and relaunched with a snazzy new service (backed by Amazon S3) that lets you claim multiple email addresses and host multiple gravatars with them for $10 a year.

The beauty of their service is that it makes it possible to centrally control the 80 by 80 pixel face that you put out to the world and to additionally tie a different face to each of your email addresses. And this works tremendously well when it comes to leaving a comment somewhere that a) supports Gravatar and b) requires an email address.
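For anyone who hasn’t peeked under the hood, the way this usually works is that the commenting site hashes the commenter’s email address and requests the image by that hash, so the address itself never leaks into the page. Here’s a minimal sketch, with a placeholder hash; the exact endpoint and parameters of the relaunched service may differ from what I show here:

```html
<!-- Sketch of a typical Gravatar-enabled comment template. The image is
     fetched by an MD5 hash of the commenter's email (placeholder hash shown);
     the endpoint and parameter names are from memory and may not match the
     relaunched API exactly. -->
<div class="comment">
  <img src="http://www.gravatar.com/avatar.php?gravatar_id=0bc83cb571cd1c50ba6f3e8a78ef1346&amp;size=80"
       width="80" height="80" alt="commenter's gravatar" />
  <p>Great post!</p>
</div>
```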

Now, when Gravatar went dark, as you might expect, some enterprising folks came together and attempted to develop a decentralized standard to replace the well-worn service in a quasi-authoritarian spec called Pavatar (for personal avatar).

Aside from the coining of a new term, the choice to create an overly complicated spec and the sadly misguided attempt to call this effort a microformat, the goal is a worthy one, and given the recent question on the OpenID General list about the same quandary, I thought I’d share my thoughts on the matter.

For one thing, avatar solutions should focus on visible data, just as microformats do — as opposed to hidden and/or spammable meta tags. To that end, whatever convention is adopted or promoted should reflect existing standards. Frankly, the hCard microformat already provides a mechanism for identifying avatars with its “photo” attribute. In fact, if you look at my demo hCard, you’ll see how easy it would be to grab that data from this page. There’s no reason why other social networks couldn’t adopt the same convention and make it easy to set a definitive profile for slurping out your current avatar.
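Here’s a minimal sketch of what that looks like; the name and URLs are placeholders, but the class names are straight from hCard:

```html
<!-- Minimal hCard with an avatar: any parser that understands the "photo"
     property can slurp the image from a profile page marked up like this.
     Name and URLs are placeholders. -->
<div class="vcard">
  <img class="photo" src="http://example.com/avatar.jpg" alt="Chris Messina" />
  <a class="url fn" href="http://example.com/">Chris Messina</a>
</div>
```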

In terms of URI locating, I might recommend a standard convention that appends avatar.jpg to the end of an OpenID as a means of conveniently discovering an avatar. This concept follows the favicon.ico convention of sticking that file in the root directory of a site, and then using the icon in bookmarks. There’s no reason why, when URLs come to represent people, we can’t do the same thing for avatars.

Now, off of this idea is probably my most radical suggestion, and I know that when people shoot me down for it, it’s because I’m right, but just early (as usual).

Instead of a miserly 80 pixels square, I think that default personal avatars should be 512 pixels square (yes, a full 262,144 pixels rather than today’s 6,400).

There are a couple reasons and potential benefits for this:

  1. Leopard’s resolution independence supports icons that are 512px square (a good place to draw the convention from). These avatars could end up being very useful on the desktop (see Apple’s Front Row).
  2. While 80 pixels might be a useful size in an application, it’s often less than useful when trying to recognize someone in a lineup.
  3. We have the bandwidth. We have the digital cameras and iSights. I’m tired of squinting when the technology is there to fix the problem.
  4. It provides a high fidelity source to scale into different contortions for other uses. Try blowing up an 80 pixel image to 300 pixels. Yuck!
  5. If such a convention is indeed adopted, as favicon.ico was, we should set the bar much higher (or bigger) from the get-go.

So, a couple points to close out.

When I was designing Flock, I wanted to push a larger subscribable personal avatar standard so that we could offer richer, more personable (though hopefully not as male-dominated) interfaces like this one (featuring Technorati’s staff at the time):

Friends Feed Reading

In order to make this work across sites, we’d need some basic convention that folks could use in publishing avatars. Even today, avatars vary from one site to the next in both size and shape. This really doesn’t make sense. With the advent of OpenID and URL-based identity mashed up with microformats, it makes even less sense, though I understand that needs do vary.

So, on top of providing the basic convention for locating an avatar on the end of an OpenID (http://tld.com/avatar.jpg), why not use server-side transforms to also provide various avatar sizes, in multiples of 16, like: avatar.jpg (original, 512×512), avatar_256.jpg, avatar_128.jpg, avatar_48.jpg, avatar_32.jpg, avatar_16.jpg? This is similar to the Apple icon .icns format… I see no reason why we can’t move forward with better and richer representations of people.
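As a sketch of how a consuming site might use this (the domain is a placeholder and the filenames simply follow the scheme above), it could reference whichever derived size fits its context:

```html
<!-- Illustrative only: a consuming site picks the derived size that fits its
     context, while the 512x512 original lives at the root of the identity URL,
     a la favicon.ico. The domain is a placeholder. -->
<img src="http://tld.com/avatar_16.jpg"  width="16"  height="16"  alt="favicon-sized avatar" />
<img src="http://tld.com/avatar_48.jpg"  width="48"  height="48"  alt="avatar for a comment thread" />
<img src="http://tld.com/avatar.jpg"     width="512" height="512" alt="full-size avatar for the desktop" />
```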

Onward!

Getting back to POSH (Plain ol’ Semantic HTML)

Salt and Pepper shakers

Original photo by paul goyette and shared under the Attribution-ShareAlike 2.0 license.

Following Web2Expo, a number of us got together for a Microformats dinner at Thirsty Bear. Some concern was raised over the increasing influx of proposals for new microformats — instead of sustained work on existing formats or techniques.

In discussing this, we realized a few things. Chief among them is that, as a community, we’ve been spending a great deal of time and effort providing a rationale and explanation for why microformats are important and how we use a community-driven process to derive new microformats. Now, there are historic reasons why our process is different and why we continually refer new members to it. If you consider that web standards themselves are created, reviewed and ratified by the W3C, a consortium of paying members bound to very specific rules and mandates, you’ll realize that the value of our community’s output is measured by the degree to which we are able to consistently produce high quality, clear and implementable specifications. Without adherence to a recognized process, chaos would unfold and we’d end up with a myriad of inconsistent and overlapping formats, which is essentially what killed the Structured Blogging initiative.

In the microformats community, it’s existing behavior discovered through research and prior standards work that most often leads to new formats, and this work is often undertaken and championed by independent individuals, as opposed to corporations. On top of that, our self-imposed mandate is to stay specific, focused and relevant, optimizing for the 80% use cases and ignoring the 20% edge cases.

This story has been replayed and retold the world over, with great effect and consequence. What we have failed to articulate in the same time and space, however, is what work is necessary beyond the creation of new microformats. And because of that, we have so many folks joining the community, eager to help, and seeing only the opportunity to — what else? — create a new microformat (in spite of the warning not to do so!).

So, the ultimate result of the conversation that night was to focus on a rebranding of an old idea along with a new process for generally getting involved in the microformats movement with a subset of tasks focused exclusively on advancing POSH.

From now on, we will be promoting POSH (as coined by kwijibo in IRC) as a first-order priority, alongside the development and improvement of existing microformats.

POSH (“Plain Old Semantic HTML”) is a very old idea, and constitutes the superset of semantic patterns within which microformats exist:

POSH Diagram
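To make the distinction concrete, here’s a tiny, generic before-and-after; it isn’t drawn from any particular site, it just contrasts presentational markup with POSH:

```html
<!-- Not POSH: presentational div/span soup -->
<div class="big red">Upcoming events</div>
<div>
  <span class="item">May 5: DevHouse</span><br />
  <span class="item">May 18: BarCamp</span>
</div>

<!-- POSH: the same content as valid, semantic HTML -->
<h2>Upcoming events</h2>
<ul>
  <li><abbr title="2007-05-05">May 5</abbr>: DevHouse</li>
  <li><abbr title="2007-05-18">May 18</abbr>: BarCamp</li>
</ul>
```

The second version says what the content is rather than how it should look, which is all POSH really asks, and it’s the same foundation that microformat class names layer on top of.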

With POSH thusly established, we have enumerated four classes of actions that collectively represent a Process for Contributing in order to better channel the energy of newcomers and old-timers alike:

  1. Publish: if you’re not already, add valid, semantic markup to your own website. It goes without saying that you should also be publishing microformats wherever it makes sense. Focus on improving the web as it is, starting with the parts you have access to.
  2. Spread: advocate for and encourage others to follow your lead in implementing valid POSH and microformats. Familiarize yourself with web standards, accessibility, and why POSH is important. Do presentations on POSH at BarCamps and elsewhere; write about it, share it with friends, hold POSH Pits to create and build things with POSH. Add buttons (coming soon) to your site once you’ve been POSHified!
  3. Use: consume microformats — and better yet — add live subscriptions to data marked up in existing formats. With all the microformats now being published, we need to start seeing some really innovative and time-saving uses of microformats, including tools for easily embedding microformatted content into blog posts and elsewhere.
    1. OpenID: meanwhile, consider adding OpenID identity services to your application or service — and support data syncing.
  4. Improve: once you’ve gone through and added POSH to all your websites, go back and refactor, iterate and provide feedback, tips and learnings about what you did, how you did it and why you did things the way you did to the greater community. Tag your posts with ‘POSH’, contribute them to the wiki and generally seek out opportunities for improving the resources available to the wider audience of web designers and programmers.

In the coming days, we’ll be adding more documentation to the wiki and encouraging others to spread the word (as you should!).

Lastly, to help frame the POSH concept, think of it as a “Fast-tracked Microformats Process” — wherein you can do your research, develop semantic patterns and then implement them without going through the same drawn-out process that accepted formats must go through… because the goal is actually not to develop a new format, but to solve a specific and time-sensitive problem. Over time, these implementations will come to represent the body of prior art necessary to make informed decisions about future formats, but the immediate goal is simply to POSHify the web, not to attempt the development of yet another format.

The importance of View Source

Camino View Source

There’s been a long history of innovation on the web founded in open access to the underlying source code that first websites, and later interactive web applications, were built on. Ready access to the inner workings of any web page has been central to continued inspiration, imitation, and most importantly, the ongoing education of subsequent generations of designer-developer hybrids.

On my panel today on The Hybrid Designer, I took a moment to call out my concerns that the shininess of Rich Internet Application (RIA) frameworks like Adobe Apollo and Microsoft Silverlight (the framework formerly known as WPF/E) is blocking out critical consideration of the gravity and potential consequences of moving to these platforms. As Marc Orchant put it:

One of the most interesting discussions in the session was precipitated when Messina voiced his concerns that “containers” for web functionality like Adobe Apollo and Microsoft Silver[light] would make it harder to create dynamic applications that leverage these data streams as they will, he predicted, create new “walled gardens” by obscuring what is currently a pretty open playing field of ideas and techniques. [Jeremy] Keith added the observation that by hiding the source for the hybrid applications created using these tools, up and coming designers would lose a valuable learning resource that runs counter to the spirit of a read/write web built using open, standardized tools. Needless to say, the room was pretty sympathetic to the sentiments expressed by the panel.

In particular, I was suggesting that these frameworks effectively remove the View Source command — an utter reversal in the trend towards openness in web technologies leading to, in my view, new silos within a more closed web.

Ryan Stewart, who sadly I didn’t get a chance to catch up with afterwards, took me to task for my oversimplification:

Today at the Web 2.0 Expo, I sat in on a panel with Richard MacManus, Kelly Goto, Chris Messina and Jeremy Keith. They talked about the “hybrid designer” and touched on some points about the web and the richness that has really created the “hybrid” notion. In one bit, Chris said he was lamenting the fact that a lot of RIA technologies are taking away the “view source” and he got applause from the crowd.

I think this is the perfect example of how misunderstood the RIA world is. Chris used the example of Apollo and Silverlight as two technologies that are killing view source. Apollo is meant for desktop applications. We don’t have “view source” on the desktop, but that doesn’t mean we couldn’t. Apollo uses Flex and Ajax to create the desktop applications, and BOTH of those allow for view source. It’s true that Flex developers can turn off that feature, but really how is that any different than obfuscating your JavaScript in an Ajax application? When people want to share, the RIA tools out there have mechanisms in place to let them do that. Can you ask for more than that?

I was also surprised to hear Chris complain about Silverlight in that group. Of all the technologies, I think Silverlight actually has the best “view source” support. It uses JavaScript as the programming language behind the hood, and the XAML is just text based, so you can view source just like any other web page and see both the XAML and JavaScript libraries. That’s pretty open I think.

I’ll plead ignorance here (especially in terms of Silverlight), but I refuse to back off from my point about the importance of View Source (a point that I don’t think Ryan disagrees with in principle).

Whether you can get at the “goods” in Silverlight or Apollo apps is only part of the problem. I’ve examined the contents of four or five Apollo apps and each one had any number of impenetrable .swf binaries that I couldn’t do anything with, and even with the complete source code of TwitterCamp, a rather simple Apollo app, it wasn’t obvious how a design-leaning hybrid designer like myself would actually modify the app without buying into expensive Adobe tools (at $699 or $499 a seat). And that, in a sense, is no different than removing the View Source command altogether.

…and even when I finally did figure out that I could right click and choose View Source while running TwitterCamp, I received this error message and no source code:

Alert

Now, Ryan also claims that “We don’t have ‘view source’ on the desktop,” and I would argue that 1) it depends on your platform and 2) I’m not fundamentally prevented from tinkering with my desktop apps. And this is key.

Let’s drill down for a moment.

On the Mac, every application has the equivalent of a View Source command: simply right click and choose “Show Package Contents”. Since every Mac application is essentially a special kind of folder, you can actually browse the contents and resources of an application — and, in certain cases, make changes. Now, this isn’t as good as getting to the raw source, since there are still unusable binaries in those directories, but you can at least get to the nib files and make changes to the look and feel of an application without necessarily touching code or having the full source.

And so, just like on the web, especially with free and open source tools like Firebug and Greasemonkey, with a little bit of knowledge or persistence you can modify, tweak or wholly customize your experience without getting permission from the application creator, all by way of “viewing the source”. More importantly, you can learn from, adapt and merge prior art — source code that you’ve found elsewhere — which, in turn, can be improved upon and released, furthering a virtuous cycle of innovation and education.

Nonetheless, I’m glad that Ryan has corrected me, especially about Silverlight, which indeed is put together with a lot of plain-text technologies. However, I still can’t help but be skeptical when there seems to be so much in it for Adobe and Microsoft to build out their own islands of the web where people buy only their tools and live in prefab Second Life worlds of quasi-standards that have been embraced and extended. It feels like déjà vu all over again; like we’ve been here before and though I’d thought that we’d internalized the reasons for not returning to those dark ages, the shininess of the new impairs our ability to remember the not-so-distant past… While Ryan may be technically correct about the availability of the source, if that top-level menu item vanishes from the first-gen of RIAs, I remain increasingly concerned that the net result will constitute the emergence of an increasingly closed and siloed web.

I do hope that Ryan’s optimism, coupled with activism from other open source and open web advocates, will work with great speed and efficacy to counter my fears and keep what is now the most open and vital aspect of the web the way it was meant to be.

The relative value of open source to open services

There’s an active debate going on in the activeCollab community stemming from the announcement that the formerly exclusively community-backed open source project will lose much of its open source trappings to go commercial and focus on a closed platform providing open web services.

For those who aren’t aware, activeCollab was created as a free, open source and downloadable response to Basecamp, the project management web app. In June of last year, the project founder and lead developer, Ilija Studen, offered his rationale for creating activeCollab:

First version of activeCollab was written somewhere about May 2005 for personal use. I wanted Basecamp but didn’t want to pay for it. Being a student with few freelance jobs I just couldn’t guaranty that I’ll have money for it every month. So I made one for myself. It’s running on my localhost even today.

Emphasis original.

Ilija offered many of the usual personal reasons for making his project free and open:

  • Learning.
  • Control.
  • Establishing community.
  • Earning money.

Now, the last one is significant, for a couple reasons, as was pointed out at the time of the first release: Ilija wanted to make money by offering commercial support and customization on a product imitating someone else’s established commercial product.

But competition is good, especially for my friends in Chicago, and they’ve said as much.

But Ilija made one fatal mistake in his introductory post that I think he’s come to regret nearly a year later: “I find it normal to expect something in return for your work. activeCollab will always be free.”

And so a community of Basecamp-haters and open source freeloaders gathered around the project and around Ilija, eager to build something to rival the smug success of Basecamp, something sprung from the head of the gods of open source and of necessity, to retrace the steps of Phoenix before it (later redubbed Firefox), to fight the evils of capitalism, the injustice of proprietary code, and to stave off the economic realities of trying to make a living creating open source software.

For a little under a year, the project slogged on, a happy alternative to Basecamp, perfect for small groups without the ability to afford its shiny cousin, perfect for those who refuse to pay for software, and perfect for those who need such collaboration tools, but live sheltered behind a firewall.

A funny thing happened on the way to the bank, though, and Ilija realized that simply offering the code for people to download, modify and run on their own servers wasn’t earning him nearly enough to live on. And without an active ecosystem built around activeCollab (as WordPress and Drupal have), it was hard to keep developing the core when he literally could not afford to continue doing so.

Thus the decision to break from his previous promise and close up the code, offering instead an open API on which others could build plugins and services — morphing activeCollab from a commodity download to a pay-for web service:

Perhaps I am naive, and this was the business model all along. i.e. Build a community for the free software during early development and testing, then close it up just as the project matures.

That was not original plan. Original plan was to build a software and make money from support and customization services. After a while we agreed that that would not be the best way to go. We will let other teams do custom development while we keep our focus solely on activeCollab.

But, the way in which he went about announcing this change put the project and the health of his community at risk, as Jason pointed out:

Ilja,

I’m a professional brand strategist, and while nothing is ever certain, I also feel that this is a bad move.

Essentially you’ve divided your following into three camps. For, against and don’t care. A terrible decision.

What you should have done (or should do… its not too late)

—> Start a completely seperate, differently branded commercial service that offers professional services

—> Leave your existing open-source model the same and continue to develop the project in concert with the community

————————-

Sugar is not a great model to follow. It’s not.

A better example would be Bryyght[dot]com, a commercial company hosting Drupal CMS. The people there are still very actively involved in the original open-source project.

Overall, you should choose your steps wisely. While you’re the driving source behind the project – NOBODY fully owns their own brand.

A brand is owned by the community that are a part of it. Without customers, a brand is nothing.

JH

A brand is owned by the community that are a part of it. Without customers, a brand is nothing. (Hmm, sounds like the theory behind the Community Mark).

I think JH has a point, and with regards to open source, one that Ilija would do well to consider. On the one hand, Ilija has every right to change the course of the project — he started it after all and has done the lion’s share of work. He also needs to figure out a way to make a living, and having tried one model, is now ready to try another. On the other, closing up the core means that he has to work extra hard to counter the perception that activeCollab is no longer an open source project, when indeed parts of it still will be, and likely won’t be the worse for it.

That many of the original Basecamp haters who supported Ilija’s work have now turned their anger towards him suggests that he’s both pioneering a tribrid open business/open service/open source model and doing something right. At least people care enough to express themselves…

And yet, that’s not to say that the path will be easy or clear. As with most projects, it’s how he manages this transition that will make the difference, not the fact that he made the decision.

All the same, it does suggest that the open source community is going through an evolution where the question of what to be open about and with whom to share is becoming a lot harder to answer than it once was. Or at least, the question of how to sustain open source efforts that lend themselves so readily to operation as web services.

With the Honest Public License coming in advance of the GPL v3 to cover the use of open source software in powering web applications and services, there are obvious issues with releasing code that you once could count on being tied to the personal desktop. Now, with the hybridization of desktop and internet environments and the democratization of scripting knowledge, it’s a lot harder to make a living simply through customization and support services for packaged source code when you’re competing against everyone and their aunt, not to mention Yahoo, Google and the rest.

Steve Ivy asked a poignant question in his recent post on Open Source v. Open Services: If the service is open enough, what’s the value of the source?

Truly, that is a question that I think a lot of us, including folks like Ilija, are going to have to consider for some time to come. And as we do, we must also consider what the sustainable models for open source and open services look like in the future, for we are now finally living in a web service-based economy, where the quality of your execution and uptime matters nearly as much as, if not more than, the quality of your source code.

Pukka 1.5 adds support for Ma.gnolia

Pukka Ma.gnolia Support

Pukka, a favorite tool of the Delicious crowd, has added support for Ma.gnolia with its 1.5 release.

Thanks to Ma.gnolia’s API (which mirrors the Delicious API), a number of formerly Delicious-only applications can also be used with Ma.gnolia. Pukka now ranks among them, though not without a few discrepancies, notably no support for spaces in tags or for ratings, but these are minor issues that can be worked out over time (note: to enable private bookmarks, check this out).

What’s interesting about apps adding cross-domain API support is the slow emergence of standards in new areas (i.e. outside the standard protocols). A framework for application developers that handles multiple bookmarking APIs that essentially do the same thing would be of great value, similar to the work that Jacob Jay started with his MediaSock framework (for publishing to over a dozen media services). I could see such a framework being really useful in browsers, feed readers, media players and similar applications.

Anyone?

NASA 2.0

Yuri's Night 2007

If you haven’t been wondering what’s up with NASA lately, you’re probably not alone. Though once a bastion for the advancement of humankind, in recent years the space agency has seemingly vanished into a well of bureaucracy and a lack of coherent, publicly supported vision.

Now, thanks to a number of young, forward-thinking upstarts within the organization, that might all start to change, starting tomorrow night at NASA’s Ames Research Facility in Mountain View, California with the kick off of the World Space Party (aka Yuri’s Night).

With 4,000 expected attendees, this is probably one of the first, and likely the largest, raves ever held on government property (you can only imagine the red tape that they had to go through to get this approved!). The space is perfectly suited for this kind of thing — and represents the new thinking and outward focus surging within the organization.

On top of that, there is growing interest in open source (notable given the restrictiveness of the NASA Open Source Agreement), in Second Life, and in coworking, as witnessed by NASA’s tenant status at Citizen Space and in their CoLab project.

I’m certainly excited to see these changes coming to NASA — and if it’s any indicator of what changes might be wrought in the government with the addition of a little 2.0 fever and open source, there’s hope for us yet.

Problems with OpenID on Highrise

Trouble with OpenID

Turns out that 37 Signals’ implementation of OpenID could use some… getting real.

Let me go over these issues and provide either resources or remedies.

Normalization of OpenID URLs

Look at these three URLs and make a note to yourself about any differences you see:

To a lay person (or even your average geek), these URLs all represent the same thing — type any of them into the address bar and they’ll land you on my out-of-date homepage.

But, in the land of OpenID and URI evaluation, these differences can be very significant, especially when you get into the differences between OpenID v1.1 and the forthcoming v2.0 (which adds support for i-names).

Contrary to some discussion on the OpenID list, the way in which you normalize an identity URL very quickly becomes a usability issue if the cause of OpenID login failures is not immediately obvious.

Remedy: Given some of the issues folks have had with OpenID at Highrise, DHH decided to make usability the priority:

I’m going to fix the trailing slash issue on URL-based OpenIDs. We’ll be more liberal in what we take.

This should mean that folks logging in with OpenID won’t have to guess at what their appropriate identity URL looks like; instead they need only know what the important parts are (i.e. the domain and any sub-domain or path).

Outstanding issues: Of course, 37 Signals can do this, but what happens when the identity URL that someone uses on Highrise doesn’t work elsewhere because other consumers aren’t as liberal with what they accept?

Lack of support for i-names

One of the issues (features?) that OpenID v2.0 brings is the support for i-names, a controversial schema for representing people, businesses and groups using unfamiliar formatting codes.

I’ve heard that there are somewhere in the ballpark of 20,000 i-name users in the wild (I happen to have =chris.messina but never use it), but compared with the over 70 million (and growing) URL-based OpenID users, this is an incredibly small minority of the overall OpenID landscape.

Nevertheless, one potential point of frustration for these users is the lack of standardization in implementing or indicating support for i-names, as Rod Begbie pointed out in the Highrise forum, to which DHH replied: “We don’t support iname OpenIDs for now, though. We’re just supporting OpenID 1.1.”

And this, I imagine, is going to be a common issue, both for OpenID implementors (dealing with requests to support i-names) and for i-names users, such that I question, as others have, the wisdom of offering support for i-name identifiers when issues still clearly remain in the usability of basic URLs.

Remedy: Once the OpenID v2.0 spec has been finalized, there will need to be a new logo to indicate which version of OpenID a consuming site supports; this will hopefully work to set expectations for i-names users.

Outstanding issues: At the same time, the addition of i-names to OpenID v2.0 has caused a lot of concern for folks, many of whom have simply decided to stick with v1.1.

Personally, I don’t see the long term value in fragmenting the OpenID protocol away from more familiar URL-based identifiers. I want something simple, straightforward and obvious. Otherwise, v2.0 is going to be such a headache to advocate, implement and support that a lot of folks will just stick with v1.1.

Double delegation aka the Sean Coon Problem

My buddy Sean Coon pinged me the other day to see if I could help him debug the problems he was having signing into Highrise with his OpenID account. When he had signed up, he had used seancoon.org as his OpenID URL. He’d started playing with it, but then left it, only to return later, unable to login.

His problem was three-fold, but I’ll first address a basic issue with delegation that some folks might not be familiar with.

As it turned out, Sean had delegated seancoon.org to resolve to ClaimID as his identity provider. The problem was that he used http://claimid.com/spcoon as his identity URL instead of http://openid.claimid.com/spcoon, which is where his OpenID was actually stored.

Typically, when people use claimid.com/[username] as their OpenID identity URL to log in to sites, this transformation takes place invisibly. This is because ClaimID delegates to its own OpenID server.

The problem is that Sean delegated seancoon.org to his ClaimID profile, which in turn was delegated to ClaimID’s OpenID server. If this sounds confusing, it is, and that’s why it’s not allowed in OpenID.

As I understand it, delegation can only be done once, or else you might end up in an endless chain of delegations terminating in some grandiose infinite loop. By restricting your delegation hops to one, a lot of problems are avoided.
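For reference, a single-hop delegation is just a pair of link elements in the head of the page you want to use as your identifier. The delegate must point at the URL where the OpenID actually lives; the server endpoint below is illustrative, since I haven’t verified ClaimID’s actual URL:

```html
<!-- On http://seancoon.org/ : one hop, pointing straight at where the OpenID lives.
     The openid.server URL here is a guess at ClaimID's endpoint, shown only to
     illustrate the shape of the markup. -->
<link rel="openid.server"   href="http://openid.claimid.com/server" />
<link rel="openid.delegate" href="http://openid.claimid.com/spcoon" />
```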

Remedy: In this case, Sean only needs to re-delegate to openid.claimid.com/spcoon, and fortunately, there’s a handy WordPress plugin that can handle this for him.

Outstanding issues: Delegation is probably one of the coolest aspects of OpenID, since it allows you to use any URL of your choosing as your OpenID and then let someone else deal with the harder part of actually talking to all your services. Furthermore, you can delegate any number of services and set up fallbacks in case your primary identity provider is taking a nap. Communicating how this works and how to resolve and communicate errors when things go wrong is one of the biggest holes in the OpenID offering, and with user experience experts like 37 Signals joining up, I hope that these issues get the amount of due diligence and attention that they deserve.

Untested assumptions

Finally, I discovered a serious mistaken assumption in the Highrise sign-up process. To test out this issue, I created a test account, using http://google.com as my OpenID:

Sign up for Highrise

Now, here’s the problem: they didn’t force me to login to that OpenID when I signed up; instead they just assumed that I knew what I was doing and that I was using a valid OpenID.

So here’s the email that I got confirming my account. Note my username:
Gmail - Welcome to Highrise

Of course, when I go to log in, I can’t, and I’m locked out of my account — since I can’t log in and prove that I own google.com — which, notably, is the same result as if I’d mistyped my OpenID. Fortunately, 37 Signals gave me a backdoor, but it kind of defeats the whole purpose of using OpenID and suggests that you shouldn’t let folks arbitrarily set their OpenIDs without having them prove that they really have control of their stated identifier.

Remedy: For implementors, you must get proof that someone controls or owns an OpenID if you’re going to rely on it as their primary identifier. You can’t assume that they’ve typed it correctly or even that they’ve used a proper OpenID. And, most importantly, you’ve got to stress test such a new system to make sure issues like this are avoided.

Oh, and it does appear that MyOpenID.com OpenIDs are totally not working at this time; I’ve put Scott Kveton and Jason Fried in touch, so hopefully they can resolve the matter. Interestingly, if you’ve delegated to more than one identity provider and you’re using your own OpenID URL to login to Highrise, you should be able to get in.

Conclusion

It’s still promising to see folks like 37 Signals get on board with OpenID, but we clearly have a long way to go.

I hope I’ve clarified a few of the current issues that people might be seeing, or that are generally confusing about OpenID, and I admit that while I’m trying to clarify these things, a lot of this will still sound like Greek to most folks.

Given that, if you’re having issues getting OpenID to work, feel free to drop me a note and I’ll see if I can’t help resolve it.

Netscape will add support for OpenID

Alex Rudloff from Emurse just pinged me that Netscape will formally launch their support for OpenID on Monday:

One of the most consistent pieces of feedback that we have received thus far is that we should look into allowing people to log in using their AOL accounts that are currently used for Netscape/AOL mail, were once used for the previous My.Netscape site, and are used throughout the AOL network.

You sent this feedback, and we have been listening. In conjunction with AOL announcing its role as an OpenID provider, and spurred by the rapid pace by which OpenID is being adopted on the Web, on Monday, March 26th, Netscape will not only support signing in with your current AOL screen name, but also OpenID as a way of accessing Netscape.com and My.Netscape.

This comes as no surprise given that Netscape’s parent company, AOL, is an OpenID identity provider and is building out places where you can use your AIM screenname to login.

Reporting on OpenID implementations lately has become akin to reporting that companies have discovered and are now starting to use HTML… there’s still a long way to go, but clearly this is the future foundation of identity-based services.