Announcing Emailtoid: mapping email addresses to OpenIDs

The other night at Beer and Blog in Portland, fellow Vidooper Michael T Richardson announced and launched a new service that I’m both excited and a little apprehensive about.

The service is called Emailtoid, and while I prefer to pronounce it “email-toyed”, others might pronounce it “email two eye-dee”. And depending on your pronunciation, you might realize that this service is about using an email address as an ID — specifically an OpenID.

This is not a new idea, and it’s one that’s been debated and discussed in the OpenID community an awful lot. That discussion culminated in a rough outline of how it might work from Brad Fitzpatrick following the Social Graph FOO Camp this past spring, which David Fuelling turned into an early draft spec.

Well, we looked at this work and this discussion and felt that, sooner or later, in spite of all the benefits of using actual URLs for identity, someone needed to take the lead and actually build out this concept so we’d have something real to banter about.

The pragmatic reality is that many people are comfortable using email addresses as their identity online for signing up to new services; furthermore, many, many more people have email addresses who don’t also have URLs or homepages that they call their own (or can readily identify). And forcing people to learn yet another form of identifier for the web to satisfy the design of a protocol for arguably marginal value with a lesser user experience also doesn’t make sense. Put another way: the limitations of the technology should not be forced on end users, especially when it doesn’t need to be. And that’s why Emailtoid is a necessary experiment towards advancing identity on the web.

How it works

Emailtoid is a very simple service, and in fact is designed for obsolescence. It’s meant as a fallback for now, enabling relying parties to accept email addresses as identifiers without requiring the generation of a new local password and without requiring the address owner to give up or reveal their existing email credentials (otherwise known as the “password anti-pattern”).

Enter your email - Emailtoid

The flow works like this:

  1. Users enter either an OpenID or email address into a typical OpenID input field. For the purpose of this flow, we’ll presume an email address is used.
  2. The relying party splits the email address at the ‘@’ symbol into the username and the domain, generating a directed identity request to the email domain (see the sketch after this list). If an XRDS, YADIS or XRDS-Simple document is discovered at the domain, the typical OpenID flow is invoked.
  3. If no discovery document is found, the service falls back to Emailtoid (sending a request like http://emailtoid.net/mapper?email=jane@example.com), where users verify that they own the supplied email addresses by providing their one-time access token that Emailtoid mailed to them.
  4. At this point, users may optionally associate an existing OpenID with their email address, or use the OpenID auto-generated by Emailtoid. Emailtoid is not intended to serve as a full-featured OpenID provider, and we encourage using an OpenID from a third-party OpenID provider.
  5. In the case where users supply and verify their own OpenID, Emailtoid will create a 302 HTTP redirect removing Emailtoid from future interactions completely.
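
For the technically inclined, here’s a minimal sketch of the relying-party side of that fallback, under a few assumptions of mine: discovery is collapsed into a single HTTP probe (a real implementation would use a proper Yadis/XRDS-Simple library), and error handling is kept to a bare minimum.

```python
from urllib.parse import quote
import urllib.request


def openid_endpoint_for_email(email):
    """Return a URL at which to begin OpenID authentication for an email address."""
    username, domain = email.split("@", 1)  # step 2: split at the '@' symbol

    # Step 2 (continued): look for a Yadis/XRDS discovery document at the domain.
    try:
        req = urllib.request.Request(
            f"http://{domain}/",
            headers={"Accept": "application/xrds+xml"},
        )
        with urllib.request.urlopen(req, timeout=5) as resp:
            if resp.headers.get("Content-Type", "").startswith("application/xrds+xml"):
                # The domain supports discovery: run the normal OpenID flow against it.
                return f"http://{domain}/"
    except OSError:
        pass  # no discovery document found

    # Step 3: fall back to the Emailtoid mapper.
    return "http://emailtoid.net/mapper?email=" + quote(email)


print(openid_endpoint_for_email("jane@example.com"))
```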

Should an email provider supply a discovery document after an Emailtoid mapping has been made, the new mapping will take precedence.

Opportunities and issues

The drive behind Emailtoid, again, is to reduce the friction of OpenID by reusing familiar identifiers (i.e. email addresses). Clearly the challenges of achieving OpenID adoption are not simply technological; to a great degree they depend on streamlining the user experience and delivering on the promise of greater security and convenience.

Therefore, if a service advertises that they support signing in with an email address, they must keep that promise.

Unfortunately, until all email providers do some kind of local resolution and OpenID authentication, we will need a centralized mapper such as Emailtoid to provide the fallback mapping. And therein lies the rub, defeating some of the distributed design of OpenID.

If anything, Emailtoid is intended to drive forward a conversation about the experience of OpenID, and about how we can make the protocol compatible with, or complementary to, existing and well-known means of identifying oneself on the web. Is it a final solution? Probably not — but it’s up, it’s running, it works and it forces us now to look critically at the question of emails as OpenIDs, now that we can actually experience the flow, and the feeling, of entering an email address into an OpenID box without ever having to enter, or create, another unnecessary password.

Thoughts on dynamic privacy

A highly touted aspect of Facebook Connect is the notion of “dynamic privacy”:

As a user moves around the open Web, their privacy settings will follow, ensuring that users’ information and privacy rules are always up-to-date. For example, if a user changes their profile picture, or removes a friend connection, this will be automatically updated in the external website.

Over the course of the Graphing Social Patterns East conference here in DC, Dave Morin and others from Facebook’s Developer Platform have made many a reference to this scheme but have provided frustratingly scant detail on how it will actually work.

Friend Connect - Disable by Facebook

In a conversation with Brian Oberkirch and David Recordon, it dawned on me that the pieces for Dynamic Privacy are already in place and that, to some degree, it seems that it’s really just a matter of figuring out how to effectively enforce policy across distributed systems in order to meet user expectations.

MySpace actually has made similar announcements in their Data Availability approach, and if you read carefully, you can spot the fundamental rift between the OpenSocial and Facebook platforms:

Additionally, rather than updating information across the Web (e.g. default photo, favorite movies or music) for each site where a user spends time, now a user can update their profile in one place and dynamically share that information with the other sites they care about. MySpace will be rolling out a centralized location within the site that allows users to manage how their content and data is made available to third party sites they have chosen to engage with.

Indeed, Recordon wrote about this on O’Reilly Radar last month (emphasis original):

He explained that MySpace said that due to their terms of service the participating sites (e.g. Twitter) would not be allowed to cache or store any of the profile information. In my mind this led to the Data Availability API being structured in one of two ways: 1) on each page load Twitter makes a request to MySpace fetching the protected profile information via OAuth to then display on their site or 2) Twitter includes JavaScript which the browser then uses to fill in the corresponding profile information when it renders the page. Either case is not an example of data portability no matter how you define the term!

Embedding vs sharing

So the major difference here is in the mechanism of data delivery and how the information is “leased” or “tethered” to the original source, such that, as Morin said, “when a user deletes an item on Facebook, it gets deleted everywhere else.”

The approach taken by Google Gadgets, and hence OpenSocial, for the most part, has been to tether data back to the source via embedded iframes. This means that if someone deletes or changes a social object, it will be deleted or changed across OpenSocial containers, though the containers won’t even notice the difference, since they never had direct access to the data to begin with.

The approach that seems likely from Facebook can be intuited by scouring their developer’s terms of service (emphasis added):

You can only cache user information for up to 24 hours to assist with performance.

2.A.4) Except as provided in Section 2.A.6 below, you may not continue to use, and must immediately remove from any Facebook Platform Application and any Data Repository in your possession or under your control, any Facebook Properties not explicitly identified as being storable indefinitely in the Facebook Platform Documentation within 24 hours after the time at which you obtained the data, or such other time as Facebook may specify to you from time to time;

2.A.5) You may store and use indefinitely any Facebook Properties that are explicitly identified as being storable indefinitely in the Facebook Platform Documentation; provided, however, that except as provided in Section 2.A.6 below, you may not continue to use, and must immediately remove from any Facebook Platform Application and any Data Repository in your possession or under your control, any such Facebook Properties: (a) if Facebook ceases to explicitly identify the same as being storable indefinitely in the Facebook Platform Documentation; (b) upon notice from Facebook (including if we notify you that a particular Facebook User has requested that their information be made inaccessible to that Facebook Platform Application); or (c) upon any termination of this Agreement or of your use of or participation in Facebook Platform;

2.A.6) You may retain copies of Exportable Facebook Properties for such period of time (if any) as the Applicable Facebook User for such Exportable Facebook Properties may approve, if (and only if) such Applicable Facebook User expressly approves your doing so pursuant to an affirmative “opt-in” after receiving a prominent disclosure of (a) the uses you intend to make of such Exportable Facebook Properties, (b) the duration for which you will retain copies of such Exportable Facebook Properties and (c) any terms and conditions governing your use of such Exportable Facebook Properties (a “Full Disclosure Opt-In”);

2.B.8) Notwithstanding the provisions of Sections 2.B.1, 2.B.2 and 2.B.5 above, if (and only if) the Applicable Facebook User for any Exportable Facebook Properties expressly approves your doing so pursuant to a Full Disclosure Opt-In, you may additionally display, provide, edit, modify, sell, resell, lease, redistribute, license, sublicense or transfer such Exportable Facebook Properties in such manner as, and only to the extent that, such Applicable Facebook User may approve.

This is further expanded in the platform documentation on Storable Information:

Per the Developer Terms of Service, you may not cache any user data for more than 24 hours, with the exception of information that is explicitly “storable indefinitely.” Only the following parameters are storable indefinitely; all other information must be requested from Facebook each time.

The storable IDs enable you to keep unique identifiers for Facebook elements that correspond to data gathered by your application. For instance, if you collected information about a user’s musical tastes, you could associate that data with a user’s Facebook uid.

However, note that you cannot store any relations between these IDs, such as whether a user is attending an event. The only exception is the user-to-network relation.

I imagine that Facebook Connect will work by “leasing” or “sharing” information to remote sites and requiring them, through agreement and compliance with Facebook’s terms, to check in periodically (or to receive directives through a push mechanism) for changes to data, and to flush caches of stored data every 24 hours or less.
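
In code, my guess at that leased model amounts to little more than a cache with a 24-hour lease and an explicit flush. To be clear, this is my own sketch of the general idea, not anything Facebook has published; fetch_from_provider is a hypothetical stand-in for whatever call Facebook Connect ends up exposing.

```python
import time

LEASE_SECONDS = 24 * 60 * 60  # the 24-hour window from the developer terms

_cache = {}  # user_id -> (fetched_at, profile)


def get_profile(user_id, fetch_from_provider):
    """Return profile data, honoring a 24-hour lease on any cached copy."""
    now = time.time()
    entry = _cache.get(user_id)
    if entry and now - entry[0] < LEASE_SECONDS:
        return entry[1]
    profile = fetch_from_provider(user_id)  # go back to the central store
    _cache[user_id] = (now, profile)
    return profile


def forget(user_id):
    """Flush a user's data immediately, e.g. on a deletion directive."""
    _cache.pop(user_id, None)
```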

In either model there is still a central provider and store of the data, but the question for implementation really comes down to whether a remote site ever has direct access to the data, and if so, how long it is allowed to store it.

Of note is the OpenSocial RESTful API, which provides a web-friendly mechanism for addressing and defining resources. Recordon pointed out to me that this API affords all the mechanisms necessary to implement the “leased” model of data access (rather than the embedded model), but leaves it up to the OpenSocial applications and containers to set and enforce their own data access policies.
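
By way of contrast, the leased (rather than embedded) model over a RESTful API can be as simple as fetching the resource every time it’s needed and persisting nothing. The snippet below follows the /people/{id}/@self URL shape from the OpenSocial RESTful API drafts, but the host is made up and I’ve omitted the OAuth request signing a real container would demand.

```python
import urllib.request

CONTAINER = "http://container.example.com/social/rest"  # hypothetical container


def fetch_person(user_id):
    """Fetch a person resource on demand; nothing is cached or stored locally."""
    url = f"{CONTAINER}/people/{user_id}/@self"
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.read()
```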

All of which is a world of difference from Facebook’s approach to date, for which there is neither code nor a spec nor an open discussion about how they’re thinking through the tenuous issues imbued in making decisions around data access, data control, “tethering” and “portability”. While folks like Plaxo and Yahoo are actually shipping code, Facebook is still posturing, assuring us to “wait and see”. With something so central and so important, it’s disheartening that Facebook’s “Open” strategy is anything but open, and everything less than transparent.

Relationships are complicated

Facebook | Confirm Requests

I’ve noticed a few interesting responses to my post on simplifying XFN. While my intended audiences were primarily fellow microformat enthusiasts and “lower case semantic web” types, there seems to be a larger conversation underway that I’d missed — one that others have already weighed in on, as quoted below.

In a treatise against XFN (and similarly reductive expressions of human relationships) from December of last year, Greenfield said a number of profound things:

  • …one of my primary concerns has always been that we not accede to the heedless restructuring of everyday human relations on inappropriate and clumsy models derived from technical systems – and yet, that’s a precise definition of social networking as currently instantiated.
  • All social-networking systems constrain, by design and intention, any expression of the full band of human relationship types to a very few crude options — and those static!
  • …it’s impossible to use XFN to model anything that even remotely resembles an organic human community. I passionately believe that this reductive stance is not merely wrong, but profoundly wrong, in that it deliberately aims to bleed away all the nuance, complication and complexity that makes any real relationship what it is.
  • I believe that technically-mediated social networking at any level beyond very simple, local applications is fundamentally, and probably persistently, a bad idea. From where I stand, the only sane response is to keep our conceptions of friendship and affinity from being polluted by technical metaphors and constraints to begin with.

Whew! Strong stuff, but useful, challenging and insightful.

Meanwhile, TBL defended a semi-autistic perspective in describing the future of the Semantic Web (yes, the uppercase version):

At the moment, people are very excited about all these connections being made between people — for obvious reasons, because people are important — but I think after a while people will realise that there are many other things you can connect to via the web.

While my sympathies actually lie with Greenfield (especially after a weekend getting my mom set up on Facebook so she could send me photos without clogging my inbox with 80MB emails… a deficiency in the design of the technology, not my mother, mind you!), I also see the promise of a more self-aware, self-descriptive web. But, realistically, that web is a long way off, and more likely, that web is still going to need human intervention to make it work — at least for humans to benefit from it (oh sure, just get rid of the humans and the network will be just perfect — like planes without passengers, right?).

But in the meantime, there is a social web that needs to be improved, and that can be improved, in fairly simple and straight-forward ways, that will make it easier for regular folks who don’t (and shouldn’t have to) care about “data portability” and “password anti-patterns” and “portable contact lists” to benefit from the fact that the family and friends they care about are increasingly accessible online, and actually want to hear from them!

Even though Justin Smith takes another reductive look at the features Facebook is implementing, claiming that it wants to “own communications with your friends”, the reality is, people actually want to communicate with each other online! Therefore it follows that, if you’re a place where people connect and re-connect with one another, it’s not all that surprising that a site like Facebook would invest in and make improvements to facilitate interaction and communication between their members!

But let’s back up a minute.

If we take for granted that people do want to connect and to communicate on social networks (they seem to do it a lot, so much so that one might even argue that people enjoy doing it!), what role should so-called “portable contact lists” play in this situation? I buy Greenfield’s assertion that attempts by technologists to reduce human relationships to a predefined schema (based on prior behavior or not) are a failing proposition, but that seems to ignore the opportunity presented by the fact that people are having to maintain many separate lists of their friends in many different places, for no other reason than an omission from the design of the social internetwork.

Put another way, it’s not good enough to simply dismiss the trend of social networking because our primitive technological expressions don’t reflect the complexity of real human relationships, or because humans are just one kind of “object” to be “semantified” in TBL’s “Giant Global Graph”… instead, people are connecting today, and they’re wanting to connect to people outside of their chosen “home” network, and frankly the experience sucks and it’s confusing. It’s not good enough to get all prissy about it; the reality is that there are solutions out there today, and there are people working on these things, and we need smart people like Greenfield and Berners-Lee to see that solutions that enable the humanist web (however semantic it needs to be) are being prioritized and built… and that we [need] not accede to the heedless restructuring of everyday human relations on inappropriate and clumsy models derived from technical systems.

I can say that, from what I’ve observed so far, these are things that computers can do for us to make the social computing experience more humane, should we establish simple and straightforward means to express a basic list of contacts between contexts (a rough sketch follows the list):

  • help us find and connect to people that we’ve already indicated that we know
  • introduce us to people who we might know, or based on social proximity, should know (with no obligation to make friends, of course!)
  • help us avoid accidentally bumping into people we’d rather not interact with (see block-list portability)
  • help us segment our friendships in ways that make sense to us (rather than the semi-arbitrary ways that social networks define)
  • help us confidently share things with just the people with whom we intend to share
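
Here’s the rough sketch promised above, illustrating the first item: a new service can tell you which of your contacts are already members by comparing hashed email addresses, without either side handing over a raw address book. This is purely illustrative on my part; a real implementation would need consent, salting and rate limiting at the very least.

```python
import hashlib


def fingerprint(email):
    """A stable, one-way identifier for an email address."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()


def already_known(my_contacts, member_fingerprints):
    """Return the contacts whose fingerprints match existing members."""
    return [email for email in my_contacts
            if fingerprint(email) in member_fingerprints]


members = {fingerprint("jane@example.com")}
print(already_known(["jane@example.com", "joe@example.org"], members))
# ['jane@example.com']
```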

There may be others here, but off the top of my head, I think satisfying these basic tasks is a good start for any social network that thinks allowing you to connect and interact with people who you might know, but who may not have already signed up for the service, is useful.

I should make one last point: when thinking about importing contacts from one context to another, I do not think that it should be an unthinking act. That is, even though it’s merely data being copied between servers, the reality is that those bits represent things much more sacred and complicated than any computer might ever be programmed to imagine. Therefore, just because we can facilitate and lower the friction of “bringing your friends with you” from one place to another doesn’t mean that it should be an automatic process, or that all your friends in one place should be made to be your friends in the new place.

And regardless of how often good ol’ Mark Zuckerberg claims that the end game is to make communications more efficient, when it comes to relationships, every connection transposed from one context to another should have to be reconsidered (hmm, a great argument for tagging of contacts…)! We cannot and should not make assumptions about the nature of people’s relationships, no matter what kind of semantics they’ve used to describe them in a given context. Human relationships are simply far too complicated to be left up to assumptions and inferences made by technologists whose affinity oftentimes lies closer to the data than to the makers of the data.

The OpenID mobile experience

Two days ago, Ma.gnolia launched their mobile version, and it’s pretty awesome (disclosure: Ma.gnolia is a former client and current friend/partner of Citizen Agency).

In the course of development, Larry asked me what I thought he should do about adding OpenID sign-in to the mobile version. He was reluctant to do so because, he reasoned, the experience of logging in sucks, not just because of the OpenID round-trip dance, but because most identity providers don’t actually support a mobile-friendly interface.

Indeed, if you take a look at the flow from the Ma.gnolia mobile UI to my OpenID provider (using the iPhone simulator app), you can see that it does suck.

Mobile Ma.gnolia / iPhoney OpenID Verification

I strongly encouraged Larry to go ahead and add OpenID even if the flow isn’t ideal. As it is, you can sign up to Ma.gnolia with only an OpenID (without needing to create yet another username and password), so without offering this login option, the mobile site would be off-limits to folks in this situation.

So there’s clearly an opportunity here, and I’m hoping that out of OpenIDDevCamp today, we can start to develop some best practices and interface guidelines for OpenID providers for the mobile flow (not to mention more generally).

If you’ve seen a good example of an OpenID (or roundtrip authentication flow) for mobile, leave a comment here and let me know. It’s hard to get screenshots of this stuff, so any pointers would be appreciated!

The problem with open source design

I’ve probably said it before, and will say it again, and I’m also sure that I’m not the first, or the last to make this point, but I have yet to see an example of an open source design process that has worked.

Indeed, I’d go so far as to wager that “open source design” is an oxymoron. Design is far too personal, and too subjective, to be given over to the whims and outrageous fancies of anyone with eyeballs in their head.

Call me elitist in this one aspect, but with all due respect to code artistes, it’s quite clear whether a function computes or not; the same quantifiable measures simply do not exist for design and that critical lack of objective review means that design is a form of Art, and its execution should be treated as such.

Fluid, Prism, Mozpad and site-specific browsers

Matt Gertner of AllPeers wrote a post the other day titled, “Wither Mozpad?” In it he poses a question about the enduring viability of Mozpad, an initiative begat in May to bring together independent Mozilla Platform Application Developers, to fill the vacuum left by Mozilla’s Firefox-centric developer programs.

Now, many months after its founding, the group is still without a compelling raison d’être, and has failed to mobilize or catalyze widespread interest or momentum. Should the fledgling effort be disbanded? Is there not enough sustaining interest in independent, non-Firefox XUL development to warrant a dedicated group?

Perhaps.

There are many things that I’d like to say both about Mozilla and about Mozpad, but what I’m most interested in discussing presently is the opportunity that sits squarely at the feet of Mozilla and Mozpad and fortuitously extends beyond the world-unto-itself-land of XUL: namely, the opportunity that I believe lies in the development of site-specific browsers, or, to throw out a marketing term: rich internet applications (no doubt I’ll catch flak for suggesting the combination of these terms, but frankly it’s only a matter of time before any distinctions dissolve).

If you’re just tuning in, you may or may not be aware of the creeping rise of SSBs. I’ve personally been working on these glorified rendering engines for some time, primarily inspired first by Mike McCracken’s Webmail.app and later by Ben Willmore’s Gmail Browser, and seeing the idea come to fruition in Ruben Bakker’s pay-for Gmail wrapper Mailplane.app. More recently we’ve seen developments like Todd Ditchendorf’s Fluid.app, which generates increasingly functional SSBs, and, prior to that, the stupidly simple Direct URL.

But that’s just progress on the WebKit side of things.

If you’ve been following the work of Mark Finkle, you’ll be able to trace both the threads of transformation into the full-fledged Prism project and the germination of Mozpad.

Clearly something is going on here, and when measured against Microsoft’s Silverlight and Adobe’s AIR frameworks, we’re starting to see the emergence of an opportunity that I think will turn out to be rather significant in 2008, especially as an alternative, non-proprietary path for folks wishing to develop richer experiences without the cost, or the heaviness, of actual native apps. Yes, the rise of these hybrid apps that look like desktop apps, but benefit from the connectedness and always-up-to-date-ness of web apps, is what I see as the unrecognized fait accompli of the current class of stand-alone, standards-compliant rendering engines. This trend is powerful enough, in my thinking, to render the whole discussion about the future of the W3C uninteresting, if not downright frivolous.

A side effect of the rise of SSBs is the gradual obsolescence of XUL (which currently only holds value in the meta-UI layer of Mozilla apps). Let’s face it: the delivery mechanism of today’s Firefox extensions is broken (restarting an app to install an extension is so Windows! yuck!), and needs to be replaced by built-in appendages that offer better and more robust integration with external web services (a design that I had intended for Flock) and that provide a web-native approach to extensibility. As far as I’m concerned, XUL development is all but dead and will eventually be relegated to the same hobby-sport nichefication as VRML scripting. (And if you happen to disagree with me here, I’m surprised that you haven’t gotten more involved in the doings of Mozpad.)

But all this is frankly good for Mozilla, for WebKit (and Apple), for Google, for web standards, for open source, for microformats, for OpenID and OAuth and all my favorite open and non-proprietary technologies.

The more the future is built on — and benefits from — the open architecture of the web, the greater the likelihood that we will continue to shut down and defeat the efforts that attempt to close it up, to create property out of it, to segregate and discriminate against its users, and to otherwise attack the very natural and inclusive design of the internet.

Site specific browsers (or rich internet applications or whatever they might end up being called — hell, probably just “Applications” for most people) are important because, for a change, they simply side-step the standards issues and let web developers and designers focus on functionality and design directly, without necessarily worrying about the idiosyncrasies (or non-compliance) of different browsers (Jon Crosby offers an example of this approach). With real competition and exciting additions being made regularly to each rendering engine, there’s also benefit in picking a side, while things are still fairly fluid, and joining up where you feel better supported, with the means to do cooler things and where generally less effort will enable you to kick more ass.

But all this is a way of saying that Mozpad is still a valid idea, even if the form or the original focus (XUL development) was off. In fact, what I think would be more useful is a cross-platform inquiry into what the future of Site Specific Browsers might (or should) look like… regardless of rendering engine. With that in mind, sometime this spring (sooner than later I hope), I’ll put together a meetup with folks like Todd, Jon, Phil “Journler” Dow and anyone else interested in this realm, just to bat around some ideas and get the conversation started. Hell, it’s going on already, but it seems time that we got together face to face to start looking at, seriously, what kind of opportunity we’re sitting on here.

Making the most of hashtags

A couple of days ago a new site called Hashtags.org was launched by Cody Marx Bailey and Aaron Farnham, two ambitious college students from Bryan & College Station, Texas.

I wanted to take a moment to comment on its arrival and also suggest a slight modification to the purpose and use of hashtags, now that we have a service for making visible this kind of metadata.

First of all, if you’re unfamiliar with hashtags or why people might be prepending words in their tweets with hash symbols (#), read Groups for Twitter; or A Proposal for Twitter Tag Channels to get caught up on where this idea came from.

You should note two things: first, when I made my initial proposal, Twitter didn’t have the track feature; second, I was looking to solve some pretty specific problems, largely related to groupings and to filtering and to amplifying intent (i.e. when making generic statements, appending an additional tag or two might help others better understand your intent). For consistency, my initial proposal required that all important terms be prefixed with the hash, despite how ugly this makes individual updates look. The idea was that, I’d try it out, see how it worked, and if someone built something off of it, or other people adopted the convention, I could decide if the hassle and ugliness were ultimately worth it. A short time after I published my proposal, the track feature launched and obviated parts of my proposal.

Though the track feature provided a means for following explicit information, there was still no official means to add additional information, whether for later recall purposes or to help provide more context for a specific update. And since Twitter currently reformats long links as meaningless TinyURLs, it’s nice to be able to provide folks with a hint about the content at the end of the link. On top of those benefits, hashtags provide a mechanism for leveraging Twitter’s tracking functionality even if your update doesn’t include a specific keyword by itself.
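
To make the mechanics concrete, pulling hashtags out of an update is about as simple as parsing gets. The pattern below is my own rough cut, not an official syntax:

```python
import re

HASHTAG = re.compile(r"#(\w+)")  # '#' followed by letters, digits or underscores


def extract_hashtags(update):
    """Return the hashtags mentioned in an update, without their '#' prefixes."""
    return HASHTAG.findall(update)


print(extract_hashtags("Tara really rocked that presentation! #barcampblock"))
# ['barcampblock']
```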

Now, I’ll grant you that a lot of this is esoteric. Especially given that Twitter is predicated on answering the base question “what are you doing?” I mean, a lot of this hashtag stuff is gravy, but for those who use it, it could provide a great deal of value, just like the community-driven @reply convention.

Moreover, we’ve already seen some really compelling and unanticipated uses of hashtags on Twitter — in particular the use of the hashtag as a common means for identifying information related to the San Diego fires.

And that’s really just the beginning. With a service like Tweeterboard providing even more interesting and contextual social statistics, it won’t be long before you’ll be able to discover people who talk about similar topics or ideas that you might enjoy following. And now, with Hashtags.org, trends in the frequency of certain topics will become all the more visible and quantifiable.

BUT, there is a limit here, and just because we can add all this fancy value on top of the blogosphere’s central intelligence system doesn’t mean that our first attempt at doing so is the best way to do it, or that we should definitely do it at all, especially if it comes at a high cost (perceived or real) to other users of the system.

Already it’s been made clear to me that the use of hashtags can be annoying, adding more noise than value. Some people just don’t like how they look. Still others feel that they encumber a simple communication system that should do one thing and one thing well, secondary uses be damned if they don’t blend with how the system is generally used. This isn’t del.icio.us or Ma.gnolia after all.

And these points are all valid and well taken, but I think there’s some middle ground here. Used sparingly, respectfully and in appropriate measure, I think that the value generated from the use of hashtags is substantial enough to warrant their continued use, and it isn’t just hashtags.org that suggests this to me. In fact, I think hashtags.org, in the short term, might do more damage than good, if only because it means people will have to compose messages in unnatural ways to take advantage of the service, and this is never the way to design good software (sorry guys, but I think there’s room to improve the basic track feature yet).

In fact, with the release of the track feature, it became clear that every word used in a post is important and holds value (something that both Jack and Blaine noted in our early discussions). But it’s also true that without certain keywords present in a post, the track feature is useless. In this case in particular, where they provide additional context, I think hashtags serve a purpose. Consider this:

“Tara really rocked that presentation!”

versus:

“Tara really rocked that presentation! #barcampblock”

In the latter example, the presence of the hashtag provides two explicit benefits: first, anyone tracking “barcampblock” will get the update, and second, those who don’t know where Tara is presenting will be clued into the context of the post.

In another example:

“300,000 people evacuated in San Diego county now.”

versus

“#sandiegofire: 300,000 people evacuated in San Diego county now.”

Again, the two benefits are present here, demonstrating the value of concatenated hashtags where using the space-separated phrase “San Diego” would not have been caught by the track feature.
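
As a toy illustration of that point (and emphatically not how Twitter actually implements track), matching on whole words is roughly why a concatenated tag can be followed while a two-word phrase cannot:

```python
import re


def track_matches(keyword, update):
    """Crude word-level matching in the spirit of Twitter's track feature."""
    words = {w.lower() for w in re.findall(r"[#\w]+", update)}
    k = keyword.lower()
    return k in words or ("#" + k) in words


update = "#sandiegofire: 300,000 people evacuated in San Diego county now."
print(track_matches("sandiegofire", update))  # True
print(track_matches("San Diego", update))     # False: a phrase isn't a single word
```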

What I don’t think is as useful as when I first made my proposal (pre-tracking) is calling out specific words in a post for emphasis (unless you’re referring to a place or airport, but that’s mostly personal preference). For example, revising my previous proposal, I think that this approach is now gratuitous:

“Eating #popcorn at #Batman in #IMAX.”

Removing the hashes doesn’t actually reduce the meaning of this post, nor does it affect the tracking feature. And, leaving them out makes the whole update look much better:

“Eating popcorn at Batman in IMAX.”

If you wanted to give your friends some idea of where you are, it might be okay to use:

“Eating popcorn at Batman in IMAX at #Leows.”

…but even then, the hash isn’t wholly necessary, except to denote some specialness to the term “Leows”.

So, with that, I’m thrilled to see hashtags.org get off the ground, but its use should not interfere with the conventional use of Twitter. Hashtags provide additional value when used conservatively, at least until there is a better way to insert metadata into a post.

As with most technology development, it’s best to iterate quickly, try a bunch of things (rather than just talk about them) and see what actually sticks. In the case of hashtags, I think we’re gradually getting to a pretty clear and useful application of the idea, if not the perfect implementation so far. Anyway, this kind of “conversational development” that allows the best approach to emerge over time while smoothing out the rough edges of an original idea seems to be a pretty effective way to go about making change, and it’s promising to see efforts like hashtags.org take a simple — if not controversial — proposal, and push it forward yet another step.