The OpenID mobile experience

Two days ago, Ma.gnolia launched their mobile version, and it’s pretty awesome (disclosure: Ma.gnolia is a former client and current friend/partner of Citizen Agency).

In the course of development, Larry asked me what I thought he should do about adding OpenID sign-in to the mobile version. He was reluctant to do so because, he reasoned, the experience of logging in sucks, not just because of the OpenID round-trip dance, but because most identity providers don’t actually support a mobile-friendly interface.

Indeed, if you take a look at the flow from the Ma.gnolia mobile UI to my OpenID provider (using the iPhone simulator app), you can see that it does suck.

Mobile Ma.gnolia / iPhoney OpenID Verification

I strongly encouraged Larry to go ahead and add OpenID even if the flow isn’t ideal. As it is, you can sign up for Ma.gnolia with only an OpenID (without needing to create yet another username and password), so without offering this login option, the mobile site would be off-limits to folks in this situation.

So there’s clearly an opportunity here, and I’m hoping that out of OpenIDDevCamp today, we can start to develop some best practices and interface guidelines for OpenID providers for the mobile flow (not to mention more generally).

If you’ve seen a good example of an OpenID (or roundtrip authentication flow) for mobile, leave a comment here and let me know. It’s hard to get screenshots of this stuff, so any pointers would be appreciated!

The problem with open source design

I’ve probably said it before, and will say it again, and I’m also sure that I’m not the first, or the last to make this point, but I have yet to see an example of an open source design process that has worked.

Indeed, I’d go so far as to wager that “open source design” is an oxymoron. Design is far too personal, and too subjective, to be given over to the whims and outrageous fancies of anyone with eyeballs in their head.

Call me elitist in this one aspect, but with all due respect to code artistes, it’s quite clear whether a function computes or not; the same quantifiable measures simply do not exist for design and that critical lack of objective review means that design is a form of Art, and its execution should be treated as such.

Fluid, Prism, Mozpad and site-specific browsers

Matt Gertner of AllPeers wrote a post the other day titled, “Wither Mozpad?” In it he poses a question about the enduring viability of Mozpad, an initiative begun in May to bring together independent Mozilla Platform Application Developers, to fill the vacuum left by Mozilla’s Firefox-centric developer programs.

Now, many months after its founding, the group is still without a compelling raison d’être, and has failed to mobilize or catalyze widespread interest or momentum. Should the fledgling effort be disbanded? Is there not enough sustaining interest in independent, non-Firefox XUL development to warrant a dedicated group?


There are many things that I’d like to say both about Mozilla and about Mozpad, but what I’m most interested in discussing presently is the opportunity that sits squarely at the feet of Mozilla and Mozpad and fortuitously extends beyond the world-unto-itself-land of XUL: namely, the opportunity that I believe lies in the development of site-specific browsers, or, to throw out a marketing term: rich internet applications (no doubt I’ll catch flak for suggesting the combination of these terms, but frankly it’s only a matter of time before any distinctions dissolve).

If you’re just tuning in, you may or may not be aware of the creeping rise of SSBs. I’ve personally been working on these glorified rendering engines for some time, primarily inspired first by Mike McCracken’s and then later Ben Willmore’s Gmail Browser, and most recently seeing this idea come to fruition in Ruben Bakker’s pay-for Gmail wrapper. Since then we’ve seen developments like Todd Ditchendorf’s Fluid, which generates increasingly functional SSBs, and prior to that, the stupidly-simple Direct URL.

But that’s just progress on the WebKit side of things.

If you’ve been following the work of Mark Finkle, you’ll be able both to trace the threads of transformation into the full-fledged Prism project, as well as the germination of Mozpad.

Clearly something is going on here, and when measured against Microsoft’s Silverlight and Adobe’s AIR frameworks, we’re starting to see the emergence of an opportunity that I think will turn out to be rather significant in 2008, especially as an alternative, non-proprietary path for folks wishing to develop richer experiences without the cost, or the heaviness, of actual native apps. Yes, the rise of these hybrid apps that look like desktop apps, but benefit from the connectedness and always-up-to-date-ness of web apps, is what I see as the unrecognized fait accompli of the current class of stand-alone, standards-compliant rendering engines. This trend is powerful enough, in my thinking, to render the whole discussion about the future of the W3C uninteresting, if not downright frivolous.

A side effect of the rise of SSBs is the gradual obsolescence of XUL (which currently holds value only in the meta-UI layer of Mozilla apps). Let’s face it: the delivery mechanism of today’s Firefox extensions is broken (restarting an app to install an extension is so Windows! yuck!), and needs to be replaced by built-in appendages that offer better and more robust integration with external web services (a design that I had intended for Flock) and that also provide a web-native approach to extensibility. As far as I’m concerned, XUL development is all but dead and will eventually be relegated to the same hobby-sport nichefication of VRML scripting. (And if you happen to disagree with me here, I’m surprised that you haven’t gotten more involved in the doings of Mozpad.)

But all this is frankly good for Mozilla, for WebKit (and Apple), for Google, for web standards, for open source, for microformats, for OpenID and OAuth and all my favorite open and non-proprietary technologies.

The more the future is built on — and benefits from — the open architecture of the web, the greater the likelihood that we will continue to shut down and defeat the efforts that attempt to close it up, to create property out of it, to segregate and discriminate against its users, and to otherwise attack the very natural and inclusive design of the internet.

Site specific browsers (or rich internet applications or whatever they might end up being called — hell, probably just “Applications” for most people) are important because, for a change, they simply side-step the standards issues and let web developers and designers focus on functionality and design directly, without necessarily worrying about the idiosyncrasies (or non-compliance) of different browsers (Jon Crosby offers an example of this approach). With real competition and exciting additions being made regularly to each rendering engine, there’s also benefit in picking a side, while things are still fairly fluid, and joining up where you feel better supported, with the means to do cooler things and where generally less effort will enable you to kick more ass.

But all this is a way of saying that Mozpad is still a valid idea, even if the form or the original focus (XUL development) was off. In fact, what I think would be more useful is a cross-platform inquiry into what the future of Site Specific Browsers might (or should) look like… regardless of rendering engine. With that in mind, sometime this spring (sooner than later I hope), I’ll put together a meetup with folks like Todd, Jon, Phil “Journler” Dow and anyone else interested in this realm, just to bat around some ideas and get the conversation started. Hell, it’s going on already, but it seems time that we got together face to face to start looking at, seriously, what kind of opportunity we’re sitting on here.

Making the most of hashtags

A couple of days ago a new site called #hashtags was launched by Cody Marx Bailey and Aaron Farnham, two ambitious college students from Bryan & College Station, Texas.

I wanted to take a moment to comment on its arrival and also suggest a slight modification to the purpose and use of hashtags, now that we have a service for making visible this kind of metadata.

First of all, if you’re unfamiliar with hashtags or why people might be prepending words in their tweets with hash symbols (#), read Groups for Twitter; or A Proposal for Twitter Tag Channels to get caught up on where this idea came from.

You should note two things: first, when I made my initial proposal, Twitter didn’t have the track feature; second, I was looking to solve some pretty specific problems, largely related to groupings and to filtering and to amplifying intent (i.e. when making generic statements, appending an additional tag or two might help others better understand your intent). For consistency, my initial proposal required that all important terms be prefixed with the hash, despite how ugly this makes individual updates look. The idea was that I’d try it out, see how it worked, and if someone built something off of it, or other people adopted the convention, I could decide if the hassle and ugliness were ultimately worth it. A short time after I published my proposal, the track feature launched and obviated parts of my proposal.

Though the track feature provided a means for following explicit information, there was still no official means to add additional information, whether for later recall purposes or to help provide more context for a specific update. And since Twitter currently reformats long links as meaningless TinyURLs, it’s nice to be able to provide folks with a hint about the content at the end of the link. On top of those benefits, hashtags provide a mechanism for leveraging Twitter’s tracking functionality even if your update doesn’t include a specific keyword by itself.
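Incidentally, the convention is trivial for tools to pick out of an update. A minimal sketch in Python (the regex is my own approximation, not any official grammar):

```python
import re

def extract_hashtags(update):
    """Return the bare tag words marked with a leading # in an update."""
    return re.findall(r"#(\w+)", update)

extract_hashtags("Tara really rocked that presentation! #barcampblock")
# ['barcampblock']
```

Anything a service built on hashtags does starts with exactly this kind of extraction.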

Now, I’ll grant you that a lot of this is esoteric. Especially given that Twitter is predicated on answering the base question “what are you doing?” I mean, a lot of this hashtag stuff is gravy, but for those who use it, it could provide a great deal of value, just like the community-driven @reply convention.

Moreover, we’ve already seen some really compelling and unanticipated uses of hashtags on Twitter — in particular the use of the hashtag as a common means for identifying information related to the San Diego fires.

And that’s really just the beginning. With a service like Tweeterboard providing even more interesting and contextual social statistics, it won’t be long before you’ll be able to discover people who talk about similar topics or ideas that you might enjoy following. And now, with #hashtags, trends in the frequency of certain topics will become all the more visible and quantifiable.

BUT, there is a limit here, and just because we can add all this fancy value on top of the blogosphere’s central intelligence system doesn’t mean that our first attempt at doing so is the best way to do it, or that we should definitely do it at all, especially if it comes at a high cost (perceived or real) to other users of the system.

Already it’s been made clear to me that the use of hashtags can be annoying, adding more noise than value. Some people just don’t like how they look. Still others feel that they encumber a simple communication system that should do one thing and one thing well, secondary uses be damned if they don’t blend with how the system is generally used. This isn’t Delicious or Ma.gnolia, after all.

And these points are all valid and well taken, but I think there’s some middle ground here. Used sparingly, respectfully and in appropriate measure, I think that the value generated from the use of hashtags is substantial enough to warrant their continued use, and it isn’t just #hashtags that suggests this to me. In fact, I think, in the short term, #hashtags might do more damage than good, if only because it means people will have to compose messages in unnatural ways to take advantage of the service, and this is never the way to design good software (sorry guys, but I think there’s room to improve the basic track feature yet).

In fact, with the release of the track feature, it became clear that every word used in a post is important and holds value (something that both Jack and Blaine noted in our early discussions). But it’s also true that without certain keywords present in a post, the track feature is useless. In this case in particular, where they provide additional context, I think hashtags serve a purpose. Consider this:

“Tara really rocked that presentation!”


“Tara really rocked that presentation! #barcampblock”

In the latter example, the presence of the hashtag provides two explicit benefits: first, anyone tracking “barcampblock” will get the update, and second, those who don’t know where Tara is presenting will be clued into the context of the post.

In another example:

“300,000 people evacuated in San Diego county now.”


“#sandiegofire: 300,000 people evacuated in San Diego county now.”

Again, the two benefits are present here, demonstrating the value of a concatenated hashtag where the space-separated phrase “San Diego” would not have been caught by the track feature.
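You can see why by modeling track the way it appears to behave, matching single whole words in an update (this is my simplified approximation, not Twitter’s actual implementation):

```python
import re

def track_matches(keyword, update):
    """Approximate Twitter's track feature: true if the keyword appears
    as a single whole word (leading hash stripped) in the update."""
    words = {w.lstrip("#") for w in re.findall(r"[#\w]+", update.lower())}
    return keyword.lower().lstrip("#") in words

update = "#sandiegofire: 300,000 people evacuated in San Diego county now."
track_matches("sandiegofire", update)  # True: the concatenated tag is one word
track_matches("San Diego", update)     # False: a two-word phrase never matches
```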

What I don’t think is as useful as when I first made my proposal (pre-tracking) is calling out specific words in a post for emphasis (unless you’re referring to a place or airport, but that’s mostly personal preference). For example, revising my previous proposal, I think that this approach is now gratuitous:

“Eating #popcorn at #Batman in #IMAX.”

Removing the hashes doesn’t actually reduce the meaning of this post, nor does it affect the tracking feature. And, leaving them out makes the whole update look much better:

“Eating popcorn at Batman in IMAX.”

If you wanted to give your friends some idea of where you are, it might be okay to use:

“Eating popcorn at Batman in IMAX at #Leows.”

…but even then, the hash is not wholly necessary, except perhaps to denote some specialness to the term “Leows”.

So, with that, I’m thrilled to see #hashtags get off the ground, but its use should not interfere with the conventional use of Twitter. Likewise, hashtags provide additional value when used conservatively, at least until there is a better way to insert metadata into a post.

As with most technology development, it’s best to iterate quickly, try a bunch of things (rather than just talk about them) and see what actually sticks. In the case of hashtags, I think we’re gradually getting to a pretty clear and useful application of the idea, if not the perfect implementation so far. Anyway, this kind of “conversational development” that allows the best approach to emerge over time while smoothing out the rough edges of an original idea seems to be a pretty effective way to go about making change, and it’s promising to see efforts like #hashtags take a simple — if not controversial — proposal and push it forward yet another step.

Coverflow for People

Address Book Coverflow v1

Ever since Apple bought Coverflow, I thought that it would make an awesome interface for browsing people. In fact, I had previously designed “people in the browser” for Flock to look something like this in the early days:

Friends Feed Reading

Of course, at the time, the design required a few things that we still lack, namely: 1) bigger default personal photos or avatars, 2) ubiquitous universal identifiers for people (this was before OpenID) and 3) free access to public data about people, typically found at the end of those identifiers.

Anyway, Coverflow for people is something that I think could be a very powerful way of revealing “the ghosts in the machine” — across Leopard — or in interfaces generally. Imagine this kind of view showing up in Address Book, Adium, iChat… where your friends, family and the rest get to update their own user pictures on a whim, and set their status and contact preferences in a way that visually makes sense. The new integrated Gtalk features in Gmail seem to be prioritizing your “Top 250”, so this is also something that could be added to a People Coverflow API without much trouble in order for the interface to scale accordingly. Anyone able to hack up a demo of this idea?
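To make the idea a bit more concrete, here’s a sketch of the minimal person record such a view would need; the field names are my own guesses, not any actual Apple or Flock API:

```python
from dataclasses import dataclass, field

@dataclass
class Person:
    """One card in a hypothetical people-Coverflow view."""
    identifier: str   # a ubiquitous universal ID, e.g. an OpenID URL
    name: str
    avatar_url: str   # a big, self-updated user picture
    status: str = ""  # self-set status message
    contact_prefs: list = field(default_factory=list)  # e.g. ["im", "email"]

tara = Person(
    identifier="https://tara.example.com/",  # hypothetical OpenID
    name="Tara",
    avatar_url="https://tara.example.com/avatar-512.jpg",
    status="presenting at BarCampBlock",
)
```

A flow view then just needs an ordered list of these records (your “Top 250”, say), and can re-fetch the avatar and status from the public data living at the end of each identifier.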

Did the web fail the iPhone?

Twitter / Ian McKellar: @factoryjoe, wait, so all these "web apps" people have invested time and money in are now second-class applications?

Ian might be right, but not because of Steve’s announcement today about opening up the iPhone.

Indeed, my reaction so far has been one of quasi-resignation and disappointment.

A voice inside me whimpers, “Don’t give up on the web, Steve! Not yet!”

You have to understand that when I got involved in helping to plan iPhoneDevCamp, we didn’t call it iPhoneWebDevCamp for a reason. As far as we knew, and as far as we could see into the immediate future, the web was the platform of the iPhone (Steve Jobs even famously called Safari the iPhone’s SDK).

The hope that we were turning the corner on desktop-based applications was palpable. By keeping the platform officially closed, Apple brought about a collective channeling of energy towards the development of efficient and elegant web interfaces for Safari, epitomized by Joe Hewitt’s iPhone Facebook App (started as a project around iPhoneDevCamp and since continued by Christopher Allen).

And we were just getting started.

…So the questions on my mind today are: was this the plan all along? Or, was Steve forced into action by outside factors?

If this was the plan all along, I’d be getting pretty fed up with these kinds of costly and duplicitous shenanigans. For God’s sake, Steve could at least afford to stop being so contradictory! First he lowers the price of the iPhone months after releasing it, then drops the price of DRM-free tracks (after charging people to “upgrade their music”), and now he’s promising a software SDK in February, pledging that an “open” platform “is a step in the right direction” (after bricking people’s phones and launching an iPhone WebApps directory, seemingly in faux support of iPhone Web App developers).

Now, if this weren’t in the plan all along, then Apple looks like a victim of the promise — and hype — of the web as platform. (I’ll entertain this notion, while keeping in mind that Apple rarely changes direction due to outside influence, especially on product strategy.)

Say that everything Steve said during his keynote were true and he (and folks at Apple) really did believe that the web was the platform of the future — most importantly, the platform of Apple’s future — this kind of reversal would have to be pretty disappointing inside Apple as well. Especially considering their cushy arrangement with Google and the unlikelihood that Mac hardware will ever outsell PCs (so long as Apple has the exclusive right to produce Mac hardware), it makes sense that Apple sees its future in a virtualized, connected world, where its apps, its content and its business is made online and in selling thin clients, rather than in the kind of business where Microsoft made its billions, selling dumb boxes and expiring licenses to the software that ran on them.

If you actually read Apple’s guide for iPhone content and application development, you’d have to believe that they get the web when they call for:

  • Understanding User-iPhone Interaction
  • Using Standards and Tried-and-True Design Practices
  • Integrating with Phone, Mail, and Maps
  • Optimizing for Page Readability
  • Ensuring a Great Audio and Video Experience (while Flash is not supported)

These aren’t the marks of a company that is trying to embrace and extend the web into its own proprietary nutshell. Heck, they even support microformats in their product reviews. Given all the rhetoric so far, it seems they want so badly for the web — the open web — to succeed. Why backslide now?

Well, to get back to the title of this post, I can’t help but feel like the web failed the iPhone.

For one thing, native apps are a known quantity for developers. There are plenty of tools for developing native applications and interfaces that don’t require you to learn some arcane layout language that doesn’t even have the concept of “columns”. You don’t need to worry about setting up servers and hosting and availability and all the headaches of running web apps. And without offering “services in the cloud” to make web application hosting and serving a piece of cake, Apple kind of shot itself in the foot with its developers who again, aren’t so keen on the ways of the web.

Flipped around, as a proponent of the web, even I can admit how unexciting standard interfaces on the web are, and how much work and knowledge it takes to compete with the likes of Adobe’s AIR and Microsoft’s Silverlight. I mean, us non-proprietary web-types rejoice when Safari gets support for CSS-based rounded corners and the ability to use non-standard typefaces. SRSLY? The latter feature was specified in 1998! What took so long?!

No wonder native app developers aren’t crazy about web development for the iPhone. Why should they be? At least considering where we’re at today, there’s a lot to despise about modern web design and to despair about how little things have improved in the last 10 years.

And yet, there’s a lot to love too, but not the kind of stuff that makes iPhone developers want to abandon what’s familiar, comfortable, safe, accessible and hell, sexy.

It’s true, for example, that with the web you get massive distribution. It means you don’t need a framework like Sparkle to keep your apps up-to-date. You can localize your app in as many languages as you like, and based on your web stats, can get a sense for which languages you should prioritize. With protocols like OpenID and OAuth, you get access to all kinds of data that won’t be available solely on a user’s system (especially when it comes to the iPhone, which dispenses with “Save” functionality), as well as a way to uniquely identify your customers across applications. And you get the heightened probability that someone might come along and look to integrate with or add value to your service via some kind of API, without requiring any additional download to the user’s system. And the benefits go on. But you get the point.

Even still, these benefits weren’t enough to sway iPhone developers, nor, apparently, Steve Jobs. And to the degree to which the web is lacking in features and functionality that would have allowed Steve to hold off a little longer, there is opportunity to improve and expand upon what I call the collection of “web primitives” that compose the complete palette of interaction options for developers who call the web their native platform. The simple form controls, the lightboxes, the static embedded video and audio, the moo tools and scriptaculouses… they still don’t stack up against native (read: proprietary) interface controls. And we can do better.

We must do better! We need to improve what’s going on inside the browser frame, not just around it. It’s not enough to make a JavaScript compiler faster or even to add support for SVG (though it helps). We need to define, design and construct new primitives for the web that make it super simple, straight-forward and extremely satisfying to develop for the web. I don’t know how it is that web developers have for so long put up with the frustrations and idiosyncrasies of web application development. And I guess, as far as the iPhone goes, they won’t have to anymore.

It’s a shame really. We could have done so much together. The web and the iPhone, that is. We could have made such sweet music. Especially when folks realize that Steve was right and developing for Safari is the future of application development, they’ll wish that they had invested in and lobbied for richer and better tools and interfaces for what will inevitably become the future of rich internet application development and, no surprise, the future of the iPhone and all its kin.

Putting people into the protocol

I really don’t like the phrase “user-centric identity” and as I struggled to name this post, I came upon Pete Rowley’s 2006 phrase the people are in the protocol.

This isn’t much different from what I used to call “people in the browser” when I was at Flock, so I’ll use it.

Anyway, as part of another post I’m working on, it seemed useful to call out what I see as the benefits to services that “put people into the protocol” or, more aptly, those services that are designed around people who tend to use more than one web service and a single identifier (like an OpenID) to represent themselves across services.

Here’s what I’ve come up with so far:

  • I am me, wherever I go. I may have multiple personas, facets or identities that I use online, but fundamentally, I can manage them more effectively because services are oriented around me and not around the services that I use (it would be like logging into a new user account every time you want to switch applications!).
  • I have access to my stuff, wherever I am. Even though I use lots of different web services, just like I use lots of desktop applications, I can always access my data, no matter where I created it or where it’s stored. And if I want to get all of my data out of a service into another one, I should be able to do so.
  • My friends come with me, but continue to use only the services that they chose to. If I can send email from any domain to any domain, why can’t I join one network and then add friends from any other network?
  • I am the master of my domain. Both literally and figuratively, I should be able to choose any identity provider to manage all my external connections to the world, including running my own, from my own domain. While remote service providers can certainly set the standards for who they allow access to their APIs, this should be done in a clear and transparent way, so that even people who host their own identity can have fair access.

There may of course be other benefits that I’m forgetting or omitting, but I think I’ve covered some of the primary ones, at least enough that I can continue with my other post!

The story of exPhone

At FOO Camp, we held a session on Green Code and discussed various tactics for reducing power consumption by reducing (primarily) CPU cycles through wiser platform decisions and/or coding practices.

Somewhere in the discussion we brought up the impending launch of the iPhone and it occurred to me that there really wasn’t any substantive discussion being had about what to do with the many thousands of cell phones that would be retired in favor of newer, shinier iPhones.

Thus the seed for exPhone took root and began to germinate in my mind — as something simple and feasible that I could create to raise awareness of the issue and provide actionable information for busy people who wanted to do the right thing but might not want to wade through the many circuitous online resources for wireless recycling.

I had a couple constraints facing me: first, I needed to get this done while Tara was traveling to Canada, as I wanted it to be an (early) surprise birthday present. Second, I needed to get it done in time to leverage an upcoming event to promote the site. And third, I had other competing priorities that I really needed to focus on.

I went about designing the site in Keynote (my new favorite design tool), relying heavily on inspiration from a section of Apple’s site. I did a bunch of research and posted a lot of links to a Ma.gnolia group (in lieu of a personal set) and created a Flickr group at the same time. I of course also registered the associated Twitter account.

As I went about developing the site, I felt that I wanted to capture everything in a single page — and make it easy for printing. However, I brought my buddy Alex Hillman into the project to help me with the trickier PHP integration bits (his announcement) and he convinced me that multiple pages would actually be a better idea — not to mention compatible with my primary purpose of encouraging sustainable behavior! — and so we ended up breaking the content into three primary sections: Preparation, Donation and Recycling.

We riffed back and forth in SVN and things started to solidify, and we quickly realized that we should make the site more social and interactive. And, rather than build our own isolated silos, we decided we’d pull in photos from Flickr, bookmarks from Ma.gnolia and Delicious, and use the groups functionality on Flickr and Ma.gnolia. This meant Alex simply had to toss the feeds into Yahoo! Pipes, dedupe them and then funnel the results into a SimplePie aggregator on our end to output the resultant feeds. It turned out that Pipes was, for some reason, not as reliable as we needed, so Alex ripped it out and ended up bumping up SimplePie’s caching of the direct feeds.
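The merge-and-dedupe step is simple enough to sketch. Here’s roughly what the Pipes-plus-SimplePie pipeline was doing, reconstructed in Python (my approximation, not the actual code):

```python
def merge_feeds(*feeds):
    """Merge several lists of feed entries, dropping duplicates by link
    and sorting newest-first."""
    seen, merged = set(), []
    for entry in (e for feed in feeds for e in feed):
        if entry["link"] not in seen:
            seen.add(entry["link"])
            merged.append(entry)
    merged.sort(key=lambda e: e["published"], reverse=True)
    return merged

flickr = [{"link": "http://example.com/a", "published": 2}]
magnolia = [{"link": "http://example.com/a", "published": 2},
            {"link": "http://example.com/b", "published": 1}]
[e["link"] for e in merge_feeds(flickr, magnolia)]
# ['http://example.com/a', 'http://example.com/b']
```

When the upstream services are flaky, caching the source feeds aggressively (as Alex ended up doing in SimplePie) matters more than the merge itself.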

Alex put in extra effort on the Flickr integration side, creating an exPhone user account on Flickr and setting up email posting to make it super simple to get your photos of your exphones onto the site. All you have to do is take a photo of your exphone and email it to the site’s upload address with a subject like this: tags: exphone, ‘the make and model of your phone’ (yes, the make and model should be in single quotes!). We’re kinda low on photos on there, so we’d love for you to contribute!
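On the receiving end, a subject line like that might be parsed along these lines (my guess at the logic, not the site’s actual code):

```python
import re

def parse_subject(subject):
    """Pull tags out of a "tags: ..." email subject; a quoted phrase
    (the make and model) survives as a single tag."""
    body = subject.split(":", 1)[1] if ":" in subject else subject
    # accept straight or curly single quotes around the make-and-model
    quoted = re.findall(r"['\u2018]([^'\u2019]+)['\u2019]", body)
    rest = re.sub(r"['\u2018][^'\u2019]*['\u2019]", "", body)
    plain = [t.strip() for t in rest.split(",") if t.strip()]
    return plain + quoted

parse_subject("tags: exphone, 'Nokia 6030'")
# ['exphone', 'Nokia 6030']
```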

Lastly, I’ve gotta give props to The Dude Dean for his SEO tips. I’m typically not a fan of SEO, but I think when applied ethically, it can definitely help you raise your relevance in search engine results. We’re nowhere in sight, but I’d love to get up in the cell phone recycling results.

I’ve written this up primarily to demonstrate an evolving design process (Keynote to HTML to SVN prototyping to iterative launch) and the use of existing technology to build a simple but rich web application. By leveraging web services via various APIs and feeds, Alex and I were able to build a “socialized” site with little original development, where most of our efforts were focused on content, design and behavior. I also made sure to mark up the site with microformats throughout, making it trivial to add the organizations I mentioned to your address book or reuse the data elsewhere.

I like the idea of “disposable web apps” or “single purpose apps” that provide useful information, useful functionality or simply reuse existing materials in a novel or purposeful way. I’m also thrilled that Alex and I cobbled this thing together from scratch in a matter of three days. Yeah, it’s not a long-term, high value proposition, but it was great fun to work on and is something concrete that came out of that discussion at FOO Camp.

I of course welcome your thoughts and feedback and invite you to add your own stories, links or photos to the site!

Why I screenshot

sh pops the question

Three months ago, Sarah Hatter asked me a question that I had intended on answering then and there. In fact I did, but I had intended to expand upon these thoughts in a longer post:

Actually, I take screenshots primarily for my own purposes — research, learning and as a repository of interfaces that I can dig up later and imitate.

If I had to go out and search for a specific UI every time I needed inspiration, I’d be a *much* slower designer than I already am! This way I can capture the best of the web *as* I come upon it, when the moment of inspiration hits.

I think this hints at what I said the other day about cleverness: she is the most clever who is the sum of everyone else’s cleverness (OK, I didn’t say that exactly, but that’s kind of what I was getting at). On top of that, it’s rather inefficient to try to “innovate” your way to the next big thing when most “inventions” are actually evolutionary improvements to what’s come before. As if social networking and Web 2.0 were new! I mean, the version just got ticked up from one-point-oh, right?

But that’s not really what I’m saying. What I am saying is that I screenshot for history, for posterity, for education and erudition, for communication, to show off and, heck, for my own enjoyment. Call me twisted, but I really get off on novel approaches to old interfaces, clever disk images or fancy visualizations. Jacob Patton once called me the pornographer of Web 2.0. Nuff said.

Still, there is more to be said. For one thing, I don’t screenshot everything that I see or come across. Just like my blog posts, I tend to write about things that are interesting to me, but that, if I’m going to share them with the wider world, will probably be of some interest to other folks, one way or another. I never assume interest, but, y’know, I do try to make this stuff look good on the off chance that someone takes inspiration from something I’ve uploaded… as in the case of Andy Baio’s work on the redesign of Upcoming. According to his own recollection of his design process, he relied more heavily on my shots of the Flickr-Yahoo Account merge than on any other online resource for figuring out how to implement the same for Upcoming. So yay? Go team!

This is the perfect example of why my screenshotting of design patterns can be really useful for clever people. When other, smarter people have already solved problems, and start repeating the solutions or interfaces in consistent ways, it becomes a design cow path. These are most interesting to me because, as the patterns emerge, we start to develop a visual language for web applications that can be used in place of verbal descriptors like “adding friends” or “upload interfaces”. Rather than speak in the abstract, we can pull from an existing assortment of solutions from the wild that have already been proven in place, that you can interact with, and that you can evaluate on a case-by-case basis as to whether any given pattern is worth emulating in a new design.

I also screenshot as a way of in-between blogging, I guess. Y’know, like Twitter, Tumblr, Ma.gnolia and Plazes (among others) are all forms of in-between blogging. They’re where I am in the absences between longer posts (such as this one), where I record what I’m up to, what I’m seeing and what’s interesting to me. My Flickr screenshots are probably more interesting, more often than not, than what I have to say over here, and certainly less verbose. And, most significantly, the screenshot is the new photograph, allowing me to connect through images of what I see with other people who are able to see things the way I see them. Imagine life before the original camera, when everyone’s depiction of one another was captured on canvas in oil paint; before screenshotting became a first-class citizen on Flickr, we were living in a similarly blind world, cut off from these representations of our daily experience. But fortunately, as of a few months ago, that’s no longer the case:

Flickr: Content Filters

And, following from that last observation, I screenshot for posterity. Now that this internet thing has caught on and it’s been around a bit, it’s fun every now and then to reflect and go back to the days of the first bubble and take a look at what the “it” shine was back then (now it’s the “floor” effect — formerly known as the “wet floor” effect — but back then maybe it was the Java lake applet?). Which is all well and good, but once you start poking around, you’ll notice very quickly that the Wayback Machine is way incomplete. And while Google’s cache is useful, it certainly tends to care more about the textual content of a page than about how it originally looked. That’s where screenshots could make up the difference: just as photographs of real life offer us a way to record the way things were, screenshots provide a mirror in time into the things we see on screen, into the interfaces that we interact with and the digital communications that we consume (check out this old view of the QuickSilver catalog compared with its current look, or how about the Backpack preview, or when Gmail stored less than 2GB of email?).

I don’t tend to think about the historic value of things when I shoot them; I do tend to evaluate their interestingness or contribution to a certain series along a theme. And yet, I’m curious to see, over time, just what these screenshots will reveal about us, and about the path we took to get to where we end up. For one thing, web application development has changed drastically from where it was just a few years ago, and now, with the iPhone, we’re embarking into wholly undiscovered territory (where it’s unclear if screenshots will be possible). But these screenshots help us learn about ourselves, and help us see the pieces-parts of our everyday experience. If I screenshot for any reason, perhaps it is to collect these scraps of evidence to help me better understand and put order into the world around me, to tie things together visually, and to explore solutions that work and others that fail. Anyway, it’s something I enjoy and will probably keep doing for the foreseeable future.

Problems with OpenID on Highrise

Trouble with OpenID

Turns out that 37 Signals’ implementation of OpenID could use some… getting real.

Let me go over these issues and provide either resources or remedies.

Normalization of OpenID URLs

Look at these three URLs and make a note to yourself about any differences you see:

To a lay person (or even your average geek), these URLs all represent the same thing — especially if you type any of them into the address bar, they’ll land you on my out-of-date homepage.

But, in the land of OpenID and URI evaluation, these differences can be very significant, especially when you get into the differences between OpenID v1.1 and the forthcoming v2.0 (which adds support for inames).

Contrary to some discussion on the OpenID list, how you normalize an identity URL very quickly becomes a usability issue when the cause of an OpenID login failure is not immediately obvious.

Remedy: Given some of the issues folks have had with OpenID at Highrise, DHH decided to make usability the priority:

I’m going to fix the trailing slash issue on URL-based OpenIDs. We’ll be more liberal in what we take.

This should mean that folks logging in with OpenID shouldn’t have to guess at what their appropriate identity URL looks like, needing to know only the important parts (i.e. the domain and any sub-domain or path).
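To make the “be liberal in what we take” idea concrete, here’s a minimal sketch of the kind of normalization a consumer might apply before comparing identifiers. This is a hypothetical illustration, not Highrise’s actual code: it adds a missing http:// scheme, lowercases the host, and supplies the trailing slash on a bare domain, so that the common ways of typing the same URL collapse to one identifier.

```python
from urllib.parse import urlparse

def normalize_openid(identifier: str) -> str:
    """Liberally normalize a URL-based OpenID identifier (sketch only)."""
    identifier = identifier.strip()
    # Most people type bare domains; assume http:// if no scheme given.
    if not identifier.startswith(("http://", "https://")):
        identifier = "http://" + identifier
    parts = urlparse(identifier)
    # Hostnames are case-insensitive; a bare domain gets a trailing slash.
    path = parts.path or "/"
    return f"{parts.scheme}://{parts.netloc.lower()}{path}"

# Three spellings of the same identity collapse to one canonical form:
assert (normalize_openid("example.com")
        == normalize_openid("http://example.com")
        == normalize_openid("http://Example.com/")
        == "http://example.com/")
```

A real implementation would also have to handle fragments, default ports and the v2.0 rules, but even this much would have saved users from the trailing-slash lockout described above.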

Outstanding issues: Of course, 37 Signals can do this, but what happens when the identity URL that someone uses on Highrise doesn’t work elsewhere because other consumers aren’t as liberal with what they accept?

Lack of support for i-names

One of the issues (features?) that OpenID v2.0 brings is support for i-names, a controversial schema for representing people, businesses and groups using unfamiliar formatting codes.

I’ve heard that there’s somewhere in the ballpark of 20,000 i-names users in the wild (I happen to have =chris.messina but never use it), but compared with the over 70 million (and growing) URL-based OpenID users, this is an incredibly small minority of the overall OpenID landscape.

Nevertheless, one potential point of frustration for these users is the lack of standardization in implementing or indicating support for i-names, as Rod Begbie pointed out in the Highrise forum, to which DHH replied: “We don’t support iname OpenIDs for now, though. We’re just supporting OpenID 1.1.”

And this, I imagine, is going to be a common issue, both for OpenID implementors (fielding requests for i-names support) and for i-names users. It makes me question, as others have, the wisdom of offering support for i-names identifiers when issues still clearly remain in the usability of basic URLs.

Remedy: Once the OpenID v2.0 spec has been finalized, there will need to be a new logo to indicate which version of OpenID a consuming site supports; this will hopefully work to set expectations for i-names users.

Outstanding issues: At the same time, the addition of i-names to OpenID v2.0 has caused a lot of concern for folks, many of whom have simply decided to stick with v1.1.

Personally, I don’t see the long-term value in fragmenting the OpenID protocol away from more familiar URL-based identifiers. I want something simple, straightforward and obvious. Otherwise, v2.0 is going to be such a headache to advocate, implement and support that a lot of folks will just stick with v1.1.

Double delegation aka the Sean Coon Problem

My buddy Sean Coon pinged me the other day to see if I could help him debug the problems he was having signing into Highrise with his OpenID account. When he had signed up, he had used as his OpenID URL. He’d started playing with it, but then left it, only to return later, unable to login.

His problem was three-fold, but I’ll first address a basic issue with delegation that some folks might not be familiar with.

As it turned out, Sean had delegated to resolve to ClaimID as his identity provider. The problem was that he used as his identity URL instead of, which is where his OpenID was actually stored.

Typically, when people use their [username] profile URL as their OpenID identity URL to log in to sites, this transformation takes place invisibly. This is because ClaimID delegates to themselves.

The problem lies in that Sean delegated to his ClaimID profile, which in turn was delegated to ClaimID’s OpenID server. If this sounds confusing, it is, and that’s why it’s not allowed in OpenID.

As I understand it, delegation can only be done once, or else you might end up in an endless chain of delegations terminating in some grandiose infinite loop. By restricting your delegation hops to one, a lot of problems are avoided.
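Here’s a rough sketch of that one-hop rule from a consumer’s point of view. The `pages` dict is a hypothetical stand-in for fetching each URL and reading its `<link rel="openid.delegate">` value (a real consumer does an HTTP fetch and parses the HTML); the example URLs are invented, not Sean’s actual ones.

```python
def resolve_identity(url: str, pages: dict) -> str:
    """Resolve an OpenID identity URL, allowing at most one delegation hop.

    pages maps a URL to its declared openid.delegate value; a page that
    delegates to itself (or declares nothing) is its own identity.
    """
    delegate = pages.get(url)
    if delegate is None or delegate == url:
        return url  # no delegation; the page is the identity
    # One hop is permitted, but the target must not delegate elsewhere again.
    next_hop = pages.get(delegate)
    if next_hop is not None and next_hop != delegate:
        raise ValueError(
            f"double delegation: {url} -> {delegate} -> {next_hop}")
    return delegate

# The broken case: a blog delegates to a profile that delegates onward.
broken = {
    "http://blog.example/": "http://profile.example/sean",
    "http://profile.example/sean": "http://server.example/sean",
}
# The working case: the profile page delegates to itself, so one hop suffices.
working = {
    "http://blog.example/": "http://profile.example/sean",
    "http://profile.example/sean": "http://profile.example/sean",
}
assert resolve_identity("http://blog.example/", working) == "http://profile.example/sean"
```

Calling `resolve_identity("http://blog.example/", broken)` raises, which is roughly the failure Sean ran into: his own URL was fine, but the page it delegated to was itself delegated elsewhere.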

Remedy: In this case, Sean only needs to re-delegate to, and fortunately, there’s a handy WordPress plugin that can handle this for him.

Outstanding issues: Delegation is probably one of the coolest aspects of OpenID, since it allows you to use any URL of your choosing as your OpenID and then let someone else deal with the harder part of actually talking to all your services. Furthermore, you can delegate any number of services and set up fallbacks in case your primary identity provider is taking a nap. Communicating how this works and how to resolve and communicate errors when things go wrong is one of the biggest holes in the OpenID offering, and with user experience experts like 37 Signals joining up, I hope that these issues get the amount of due diligence and attention that they deserve.

Untested assumptions

Finally, I discovered a serious mistaken assumption in the Highrise sign-up process. To test out this issue, I created a test account, using as my OpenID:

Sign up for Highrise

Now, here’s the problem: they didn’t force me to login to that OpenID when I signed up; instead they just assumed that I knew what I was doing and that I was using a valid OpenID.

So here’s the email that I got confirming my account. Note my username:
Gmail - Welcome to Highrise

Of course, when I go to login, I can’t, and I’m locked out of my account — since I can’t login and prove that I own — which, notably, is the same result as if I’d mistyped my OpenID. Fortunately, 37 Signals gave me a backdoor, but it kind of defeats the whole purpose of using OpenID and suggests that you shouldn’t let folks arbitrarily set their OpenIDs without having them prove that they really have control of their stated identifier.

Remedy: For implementors, you must get proof that someone controls or owns an OpenID if you’re going to rely on it as their primary identifier. You can’t assume that they’ve typed it correctly, or that they’ve even used a proper OpenID. And, most importantly, you’ve got to stress-test such a new system to make sure issues like this are avoided.
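The fix amounts to one rule: don’t create the account until the OpenID round trip has completed successfully. A minimal sketch, with `complete_auth` standing in for a real library’s redirect-and-verify dance (it simply reports whether the user proved control of the claimed identifier; the function name is my invention, not any particular library’s API):

```python
def sign_up(claimed_openid: str, complete_auth) -> dict:
    """Create an account only after the claimed OpenID is verified.

    complete_auth(url) is a stand-in for sending the user to their
    provider and checking the signed response; it returns True only
    if they proved ownership of the identifier.
    """
    if not complete_auth(claimed_openid):
        # Never trust a typed-in identifier on its own.
        raise PermissionError("OpenID not verified; refusing to create account")
    return {"openid": claimed_openid, "active": True}

# Verified sign-up succeeds; an unverified one never creates an account.
account = sign_up("http://example.com/", lambda url: True)
assert account["active"] is True
```

Had Highrise enforced this, my test account with an unowned OpenID would have been rejected at sign-up instead of becoming an unusable, locked-out account.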

Oh, and it does appear that OpenIDs are totally not working at this time; I’ve put Scott Kveton and Jason Fried in touch, so hopefully they can resolve the matter. Interestingly, if you’ve delegated to more than one identity provider and you’re using your own OpenID URL to log in to Highrise, you should be able to get in.


It’s still promising to see folks like 37 Signals get on board with OpenID, but we clearly have a long way to go.

I hope I’ve clarified a few of the current issues that people might be seeing, or that are generally confusing about OpenID, and I admit that while I’m trying to clarify these things, a lot of this will still sound like Greek to most folks.

Given that, if you’re having issues getting OpenID to work, feel free to drop me a note and I’ll see if I can’t help resolve it.