Stowe Boyd launches Microsyntax.org

Stowe Boyd launched Microsyntax.org this morning and announced that I will be the first member of his advisory board.

Stowe and I have batted around a number of ideas for making posts on Twitter contain more information than what is superficially presented, and this new effort should create a space in which ideas, research, proposals and experiments can be made and discussed.

Ultimately, my hope is that Microsyntax.org will reach beyond Twitter and provide a forum for thinking through how we encapsulate data in channels that don’t natively support metadata, using conventions that express as much meaning as they encode.

Since I originally proposed hashtags in August of 2007, I’ve thought a lot about what these conventions mean, and how wide adoption of a simple convention can radically elevate the field of competition.

There is a similar opportunity here, where, if the discourse is developed properly, such conventions can actually enable a greater range of expression over narrow channels, allowing for wider participation in and understanding of conversations.

Take, for instance, Stowe’s “GeoSlash” (as christened by Ross Mayfield) proposal. Whether his syntax is the right one (or even necessary!) isn’t something that can be argued rationally. It’s only something that can be investigated through experimentation and observation. To this point, there has been no central convening context in which such a proposal could be brought up, debated, discussed, considered, tinkered with, improved, championed and evaluated.

As a result, countless proposals have been made for baking more “meta” into Twitter’s data stream, but few have really taken off (compared with the relative success of hashtags and @replies).

While I’m sympathetic to arguments (and pleas!) against adding additional structure or formatting to tweets, I think that the bigger opportunity here extends beyond Twitter (which is primarily a public broadcast channel) to other applications, regardless of whether they use Twitter as the message routing infrastructure or not. Indeed, given my recent (and very positive) experience where @AlaskaAir checked me in to my flight over Twitter, you can imagine an opportunity developing where, say, forward-thinking airlines actually collaborate to develop a syntax for expressing checkin requests via some sort of direct SMS-based channel.

The situation of multiple competing-yet-overlapping SMS syntaxes led me, somewhat mockingly, to start documenting what I called “picoformats”. If I’ve learned anything from the microformats process, it’s that anyone can invent a schema or a format, but getting adoption is the hard part (and also the most valuable). So, in order to promote adoption, you should always try to model behavior that already exists in the wild, and then work to make the intentions of the behavior more clear, repeatable and memorable.

Most microsyntax efforts fail to follow this process, and as a result, fail in the wild. Efforts that employ the scientific method tend to see more success: hashtags modeled the convention started by IRC channels and Jaiku (Joshua Schachter also used the hash to denote tags in the early days of Delicious); the $ticker convention (from StockTwits) follows how many financial trade publications denote stock symbols. And so on.
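
To make that concrete, here’s a toy sketch (my own illustration, not anything proposed by Microsyntax.org) of how a consuming application might pull a few of these conventions out of a plain text message; the patterns are deliberately naive:

    import re

    # Deliberately naive patterns for a few of the conventions mentioned above.
    HASHTAG = re.compile(r"#(\w+)")            # e.g. #sandiegofire
    TICKER = re.compile(r"\$([A-Z]{1,5})\b")   # e.g. $XYZ, per the StockTwits convention
    MENTION = re.compile(r"@(\w+)")            # e.g. @AlaskaAir

    def extract_microsyntax(message):
        """Pull machine-readable bits out of an otherwise plain-text update."""
        return {
            "hashtags": HASHTAG.findall(message),
            "tickers": TICKER.findall(message),
            "mentions": MENTION.findall(message),
        }

    print(extract_microsyntax(
        "Watching $XYZ while waiting for @AlaskaAir to check me in #travel"))
    # {'hashtags': ['travel'], 'tickers': ['XYZ'], 'mentions': ['AlaskaAir']}

The parsing is the easy part; as noted above, the hard part is getting enough people to write these conventions in the first place.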

So when it comes to proposing new behaviors that don’t yet exist in the wild, I think that the Microsyntax.org project will be an excellent place to convene and host conversations and experiments, many of which will admittedly fail. But at minimum, there will be a record of what’s been tried, what the thinking and goals were, and where, hopefully, some modest successes have been achieved.

I’m looking forward to contributing to this effort and helping to stand up the community infrastructure with Stowe. While I’m not eager to see the Twitter stream polluted with characters intended only for computers, I think that there is still much unexplored ground in what can be accomplished through modest modifications of the way that we communicate over these kinds of narrow, unidimensional channels.

How to use Twimailer securely

Twimailer is a nifty service, launched recently, that makes Twitter BACN (“email that you want, just not right now“) more useful and informative (example).

The only problem is that it requires you to change your Twitter account email to point to an address provided by Twimailer — on the whole, not a big deal if you trust Twimailer, but in general bad practice. (Rod Begbie also pointed out that this prevents people from being able to find you by your email address).

Fortunately there is a better and more secure way to take advantage of Twimailer.

I’ll demonstrate in Gmail, but the gist is simply auto-forwarding Twitter’s new-follower notifications to your Twimailer address. That’s it.

  1. First, go ahead and sign up for a new Twimailer account. To get started, they just need an email address to send your notifications to. Twimailer will assign you a unique email address like twitter1234567@twimailer.com. Set this aside (copy it to TextEdit or something).
  2. Next, load up your Gmail inbox and search for “is now following you on Twitter!”. Open up one of the notifications from Twitter (the From email should be something like twitter-follow-your.address=gmail.com@postmaster.twitter.com). In the right hand drop-down menu, pick “Filter messages like this“:
    Filter messages like this
  3. You should then see an interface like this (click to enlarge):
    Create a filter
    Go ahead and test this search to make sure it’s working (presuming you haven’t deleted all your notifications).
  4. If everything looks good, go ahead and click Next Step, check off “Forward it to”, and enter the Twimailer email address that you set aside in Step 1.

    If you don’t want duplicate notifications from Twitter and Twimailer, you should also check off “Skip the Inbox” or “Delete it” (the message will still be forwarded).

    My setup looks like this (click to enlarge):

    Twimailer Filter

  5. Bonus: to filter or create a label for Twimailer notices, use this search: from:(notices@twimailer.com) OR to:(notices@twimailer.com).

That’s it!
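
For reference, the filter described above boils down to a Gmail search along these lines (the exact subject wording depends on Twitter’s notification template, so treat this as illustrative):

    from:(postmaster.twitter.com) subject:("is now following you on Twitter!")

…with “Forward it to” set to the twitter1234567@twimailer.com-style address from Step 1, and optionally “Skip the Inbox”.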

It seems to me that this kind of feature improvement is something that Twitter should really do itself, but of course it’s great to see someone from the community pitch in and add incremental value until Twitter gets around to it.

At the same time, putting Twimailer in between you and Twitter’s password recovery mechanism seems unnecessarily dangerous (i.e. Twimailer could go down, get hacked, be sold, or simply be implemented insecurely; consider Spotify’s recent security breach). I actually have no insight into whether any of this applies to Twimailer, but I’d rather not take any unnecessary chances.

The approach that I described above should mitigate any risk with using Twimailer and keep you in direct control over your Twitter account.

RIP @factoryjoe

Twitter / Mr Messina: Oh, and in case you missed ...

Sometime last week, after two Manhattans, I decided to change my Twitter username from @factoryjoe to @chrismessina. In the scheme of things, not a big deal (yeah, okay, so I broke a couple thousand hyperlinks…). And yet, I can’t help but feel like I’ve shed a skin or changed identities… at least to a specific audience.

I started using Twitter in 2006 as “factoryjoe”. Of course, this is the nick that I use everywhere — from Flickr to my personal homepage — so that choice was obvious. I essentially own factoryjoe on the web — people even occasionally call me “Joe” when we meet, such is their familiarity with my online persona. But that’s not my actual name.

When I talk in front of people and I introduce myself as “Chris Messina”, the disconnect between my real name and my online persona becomes distracting. And, over time, my motivations for having a separate online identity have waned.

But first, I suppose, I should provide some background.

Where did “factoryjoe” come from?

Every so often I’m asked where “factoryjoe” came from: “Kind of like ‘Joe the Plumber?’” “Kind of,” I say. “But not really.”

Growing up, I drew comic books for fun. In fact, for most of my formative years, it seemed pretty clear that I’d pursue a career in art. I worked in pastels, watercolor, pen and ink; I preferred pen and ink above all the others though, taking lessons from Rob Liefeld, Todd McFarlane, Jim Lee and others as Image Comics came on to the scene. It was a fond dream of mine to someday pen my own sequential art.
In high school, I read Nineteen Eighty-Four and became enamored with Winston Smith, Orwell’s “everyman” character. In Winston Smith, I found a confederate, struggling to assert his individual humanity against the massive, dehumanizing forces of groupthink and oligarchy. Similarly, I identified with Vonnegut’s Harrison Bergeron and his struggle against homogeneity and mediocrity. The contours of “factoryjoe” began to emerge against the backdrop of the metropolitan “FactoryCity”, where industrialism was proven a sham and the conspicuous pursuit of passion ruled over the shallow pursuit of material consumption.

Factory City

Factory Joe was the anonymous shell in which I could plant my aspirations and designs for the future. He served as a metaphorical vessel through which I could mold a broader narrative.

So… changing your Twitter username?

In every superhero’s journey, there comes a time when the mask grows bigger than its owner. Is it the mask that provides the wearer with his power, or is it something integral to the individual?

I once believed that I needed to have a deep separation between myself and my online persona — that they should be distinct; that I should distrust the web. Over time I’ve realized a great deal of power by closing the gap between who I am offline and who I am online. I suppose this is the power of transparency, developed through consistency and demonstrated integrity.

@factoryjoe was, therefore, my first go at creating an online identity for myself. A kind of “home away from home” that I could experiment with before this whole social web thing caught on.

As it happened, this was fine when I had a small group of friends who used similar aliases for themselves, but more recently — inspired by Facebook’s allergy to pseudonyms and non-human-friendly usernames — it seems that not only is owning your own identity in vogue, but using your real name is an act of assertiveness, inventiveness or establishment. Heck, if you’re willing to share your real name with 150+ million compatriots on Facebook, is there really that much to be gained from obfuscating your actual name on the open web anymore (that’s rhetorical)?

So, back to Future of Web Apps… following my workshop with Dave, I took a step back to think about how it must appear for me to be working on the social web and identity technologies while maintaining this dichotomy between my offline and online personas — in name only. C’mon, when people have feedback and I’m talking on stage — who do I want them addressing? — my assumed identity … or me? The friction that I invented is just no longer necessary.

So factoryjoe isn’t going away — not entirely at least. It’s a useful vessel to inhabit and I’ll continue to do so. But on Twitter, Facebook, and on my homepage, I’ll use my real name. There is simply no longer a good reason to differentiate between who I am online, and who I am off, if ever there was.

. . .

Postscript: I’m now @chrismessina on Twitter. If we were friends before — no need to make any changes — Twitter took care of that already. @factoryjoe‘s been retired, but now that I got it back from Recordon (he was just jealous, since he has the worst username ever), who knows, maybe he’ll return someday. We’ll see!

Twitter can has OAuth?

Twitter / Twitter API: Call for OAuth private beta participants ...

Twitter API lead Alex Payne announced today that Twitter is now accepting applications to its OAuth private beta, making good on the promises he made on the Twitter API mailing list and repeated on the January 8 Citizen Garden podcast (transcript by stilist).

It’s worth pointing out that this has been a long time coming and is welcome news, especially following Alex’s announcement that Twitter would limit API requests to 20,000 per hour per IP.

But it’s important to keep in mind that, in light of the recent security breaches, OAuth in and of itself does not, and will not, prevent phishing.

It does, however, provide a way for Twitter to better track the use of its API, and to enable higher quality of service for trusted (paying?) applications and to surface them through a Facebook-like application directory. It also means that Twitter users will have finer grained control over which applications have ongoing access to their accounts — and will be able to disable applications without changing their password.

I’m on the beta list, so I’m looking forward to seeing what their current UI looks like — and what lessons we can extract for other services going from zero OAuth to a completely delegated authentication model.

Twitter and the Password Anti-Pattern

Twitter / Alex Payne: @factoryjoe Yes, OAuth is ...

I’ve written about the password anti-pattern before, and have, with regard to Twitter, advocated the adoption of some form of delegated authentication for some time.

It’s not as if Twitter or lead developer Alex Payne aren’t aware of the need for such a solution (in fact, it’s not only been publicly recognized (and is Issue #2 in their API issue queue), but the solution will be available as part of a “beta” program shortly). The problem is that it’s taken so long for Twitter’s “password anti-pattern” problem to get the attention it deserves (Twitter acknowledged that they were moving to OAuth last August) that unsuspecting Twitter users have now exposed their Twitter credentials to the kind of threat we knew was there all along.

This isn’t the first time either, and it probably won’t be the last, at least until Twitter changes the way third party services access user accounts.

Rather than focus on Twply (which others have done, and whose evidence still lingers), I thought I’d talk about why this is an important problem, what solutions are available, why Twitter hasn’t adopted them and then look at what should happen here.

Adding richness to activity streams

This is a post I’ve wanted to do for a while but simply haven’t gotten around to. Following my panel with Dave Recordon (Six Apart), Dave Morin (Facebook), Adam Nash (LinkedIn), Kevin Chou (Watercooler, Inc) and Sean Ammirati (ReadWriteWeb) on Social Networks and the NEED for FEEDs, it only seems appropriate that I would finally get this out.

The basic premise is this: lifestreams, alternatively known as “activity streams”, are great for discovering and exploring social media, as well as for keeping up to date with friends (witness the main feature of Facebook and the rise of FriendFeed). I suggest that, with a little effort on the publishing side, activity streams could become much more valuable: easier for web services to consume and interpret, and better suited to filtering and weighting shared activities, so that people can get at the relevant information from the people they care about, as it happens.

By marking up social activities and social objects, delivered in standard feeds with microformats, I think we enable anyone to run a FriendFeed-like service that innovates and offers value based on how well it understands what’s going on and what’s relevant, rather than on its compatibility with any and every service.

Contemporary example activities

Here are the kinds of activities that I’m talking about (note that some services expand these with thumbnail previews):

  • Eddie updated his resume at LinkedIn.
  • Chris listened to “I Will Possess Your Heart” by Death Cab for Cutie on Pandora.
  • Brynn favorited a photo on Flickr.
  • Dave posted a message to Twitter via SMS.
  • Gary poked Kastner.
  • Leah bought The Matrix at Amazon.com.

Prior art

Both OpenSocial and Facebook provide APIs for creating new activities that will show up in someone’s activity stream or newsfeed.

Movable Type and the DiSo Project both have Action Stream plugins. And there are countless related efforts. Clearly there’s existing behavior out there… but how should we go about improving on it, when the primary requirement is merely a title for an action, with little, if any, guidance on how to provide more detail about a given activity?

Components of an activity

Not surprisingly, a lot of activities provide what all good news stories provide: the who, what, when, where and sometimes, how.

Let’s take a look at an example, with these components called out:

e.g. Chris started listening to a station on Pandora 3 hours ago.

  • actor/subject (noun/pronoun)
  • action (verb)
  • social object (noun)
  • where (place)
  • when (time)
  • (how the object was created)
  • (expanded view of object)

Now, I’ll grant that not all activities follow this exact format, but the majority seem to.
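
To illustrate what I mean (this is just a sketch of the components above, not a proposed schema or format), the pieces might be captured in a simple structure like this:

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class Activity:
        actor: str                            # who: "Chris"
        verb: str                             # the action: "started listening to"
        object: str                           # the social object: "a station on Pandora"
        published: Optional[datetime] = None  # when it happened
        location: Optional[str] = None        # where, if known
        via: Optional[str] = None             # how it was created, e.g. "SMS"
        object_detail: Optional[str] = None   # expanded view of the object, e.g. a permalink

    example = Activity(
        actor="Chris",
        verb="started listening to",
        object="a station on Pandora",
        published=datetime.now(),
        via="web",
    )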

I should point out one alternative: collective actions.

e.g. Chris and Dave Morin are now friends.

…but these might be better created as a post-processing step once we add the semantic salt to the original updates. Maybe.

Class actions

One of the assumptions I’m making is that there is some regularity and uniformity in activity streams. Moreover, there have emerged some basic classes of actions that appear routinely and that could be easily expressed with additional semantics.

To that end, I’ve started compiling such activities on the DiSo wiki.

Once we have settled on the base set of classes, we can start to develop common classnames and presentation templates. To start, we have: changed status or presence, posted messages or media, rated and favorited, friended/defriended, interacted with someone (i.e. “poking”), bookmarked, and consumed something (attended…, watched…, listened to…).
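
As a sketch of that starter set (class names here are placeholders, not settled vocabulary):

    from enum import Enum

    class ActionClass(Enum):
        # Names are placeholders; the real vocabulary would be settled on the DiSo wiki.
        STATUS_OR_PRESENCE = "changed status or presence"
        POSTED = "posted messages or media"
        RATED_OR_FAVORITED = "rated and favorited"
        FRIENDED = "friended/defriended"
        INTERACTED = "interacted with someone"   # e.g. "poking"
        BOOKMARKED = "bookmarked"
        CONSUMED = "consumed something"          # attended, watched, listened to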

Combining activities with bundling

The concept of bundling is already present in OpenSocial and works for combining multiple activities of the same kind into a group:

FriendFeed Activity Bundling

This can also be used to bundle different kinds of activities for a single actor:

e.g. Chris watched The Matrix, uploaded five photos, attended an event and became friends with Dave.

From a technical perspective, bundling provides a mechanism for batching service-to-service operations, as defined in PaceBatch.

Bundling is also useful for presenting paged or “continued…” activities, as Facebook and FriendFeed do.
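
On the consuming side, bundling could be as simple as grouping received activities by actor and action; a rough sketch, reusing the Activity structure sketched earlier:

    from itertools import groupby

    def bundle(activities):
        # Collapse same-kind activities by the same actor into one entry, so five
        # photo uploads by Chris read as a single bundled item.
        key = lambda a: (a.actor, a.verb)
        return [
            (actor, verb, list(group))
            for (actor, verb), group in groupby(sorted(activities, key=key), key=key)
        ]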

Advanced uses

I’d like to describe two advanced uses that inherit from my initial proposal for Twitter Hashtags: filtering and creating a distributed track-like service.

In the DiSo model, we use (will use) AtomPub (and someday XMPP) to push new activities to people who have decided to follow different people. Because the model is push-based, activities are delivered as they happen, to anyone who has subscribed to receive them. On the receiving end, this means that we can filter based on any number of criteria, such as actor, activity type, content of the activity (as in keywords or tags), age of the action, location or how an activity was created (was this message auto-generated from Brightkite or sent in by SMS?), or any combination thereof.

This is useful if you want to follow certain activities of your friends more closely than others, or if you only care about, say, the screenshots I upload to Flickr but not the stuff I tweet about.
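
Here is the kind of filter predicate I have in mind (a sketch only, again using the hypothetical Activity structure from above), applied as each activity is pushed in:

    def matches(activity, actors=None, verbs=None, keywords=None, via=None):
        # Return True if a pushed activity passes the subscriber's criteria;
        # any criterion left as None is simply ignored.
        if actors and activity.actor not in actors:
            return False
        if verbs and activity.verb not in verbs:
            return False
        if via and activity.via != via:
            return False
        if keywords and not any(k.lower() in activity.object.lower() for k in keywords):
            return False
        return True

    # e.g. only the screenshots Chris uploads to Flickr, however they were posted:
    # matches(a, actors={"Chris"}, verbs={"uploaded"}, keywords={"screenshot"})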

Tracking can work one of two ways. In the first, your own self-hosted service knows how to elevate certain types of received activities, which are then passed to your messaging hub and routed appropriately… for example, when Mom checks in using Brightkite at the airport (or within some distance radius).

In the second, individuals could choose to publish their activities to some third-party aggregator (like Summize), which would do the tracking on their behalf, pushing back activities it discovers that match criteria you set and forwarding them to your messaging hub.

It might not have the legs that a centralized service like Twitter has, especially to start, but if Technorati were looking for a new raison d’etre, this might be it.

This is a 30,000 foot view

I was scant on code in this post, but given how long it was already, I’d rather just start throwing it into the output of the activity streams being generated from the Action Streams plugins and see how live code holds up in the wild.

I also don’t want to confuse too many implementation details with the broader concept and need, which again is to make activity streams richer by standardizing on some specific semantics based on actual trends.

I’d love feedback, more pointers to prior art, or alternative suggestions for how any of the above could be technically achieved using open technologies.

Kicking off 2008 with a themeword

At Lifecamp on Monday (incidentally held at Tantek’s Port Zero), we had a session where the small group of us brainstormed what Erica Douglass called “theme words” that might help us focus our goals for 2008.

Erica’s theme word for 2008 is “connect”. Mine is “conduct” (in all its meanings). Alex Hillman’s is “growth”.

Now, this is a pretty simple exercise and a good way to kick off the New Year. What’s most interesting about this, however, is that we were able to extend participation by constructing a hashtag-based meme on Twitter. It started simply enough:

Twitter / Mr Messina: My thematic word for 2008: ...

The response that has followed has been pretty incredible, and demonstrates the value of using community-driven hashtags to both generate and track a meme (using hashtags.org).

Now, you obviously don’t have to use Twitter to participate; you can simply blog your own themeword and tag it with “themeword” or you could just write it down for yourself, and check back in at the end of the year and reflect on whether you stuck to your theme.

Either way, I’m already starting to see how “conduct” is a good word for me in 2008! What’s yours?

Making the most of hashtags

A couple of days ago a new site called Hashtags.org was launched by Cody Marx Bailey and Aaron Farnham, two ambitious college students from Bryan & College Station, Texas.

I wanted to take a moment to comment on its arrival and also suggest a slight modification to the purpose and use of hashtags, now that we have a service for making visible this kind of metadata.

First of all, if you’re unfamiliar with hashtags or why people might be prepending words in their tweets with hash symbols (#), read Groups for Twitter; or A Proposal for Twitter Tag Channels to get caught up on where this idea came from.

You should note two things: first, when I made my initial proposal, Twitter didn’t have the track feature; second, I was looking to solve some pretty specific problems, largely related to grouping, filtering and amplifying intent (i.e. when making generic statements, appending an additional tag or two might help others better understand your intent). For consistency, my initial proposal required that all important terms be prefixed with the hash, despite how ugly this makes individual updates look. The idea was that I’d try it out, see how it worked, and, if someone built something off of it or other people adopted the convention, decide whether the hassle and ugliness were ultimately worth it. A short time after I published my proposal, the track feature launched and obviated parts of my proposal.

Though the track feature provided a means for following explicit information, there was still no official means to add additional information, whether for later recall purposes or to help provide more context for a specific update. And since Twitter currently reformats long links as meaningless TinyURLs, it’s nice to be able to provide folks with a hint about the content at the end of the link. On top of those benefits, hashtags provide a mechanism for leveraging Twitter’s tracking functionality even if your update doesn’t include a specific keyword by itself.

Now, I’ll grant you that a lot of this is esoteric. Especially given that Twitter is predicated on answering the base question “what are you doing?” I mean, a lot of this hashtag stuff is gravy, but for those who use it, it could provide a great deal of value, just like the community-driven @reply convention.

Moreover, we’ve already seen some really compelling and unanticipated uses of hashtags on Twitter — in particular the use of the hashtag as a common means for identifying information related to the San Diego fires.

And that’s really just the beginning. With a service like Tweeterboard providing even more interesting and contextual social statistics, it won’t be long before you’ll be able to discover people who talk about similar topics or ideas that you might enjoy following. And now, with Hashtags.org, trends in the frequency of certain topics will become all the more visible and quantifiable.

BUT, there is a limit here, and just because we can add all this fancy value on top of the blogosphere’s central intelligence system doesn’t mean that our first attempt at doing so is the best way to do it, or that we should definitely do it at all, especially if it comes at a high cost (perceived or real) to other users of the system.

Already it’s been made clear to me that the use of hashtags can be annoying, adding more noise than value. Some people just don’t like how they look. Still others feel that they encumber a simple communication system that should do one thing and one thing well, secondary uses be damned if they don’t blend with how the system is generally used. This isn’t del.icio.us or Ma.gnolia after all.

And these points are all valid and well taken, but I think there’s some middle ground here. Used sparingly, respectfully and in appropriate measure, I think that the value generated from the use of hashtags is substantial enough to warrant their continued use, and it isn’t just hashtags.org that suggests this to me. In fact, I think hashtags.org, in the short term, might do more damage than good, if only because it means people will have to compose messages in unnatural ways to take advantage of the service, and this is never the way to design good software (sorry guys, but I think there’s room to improve the basic track feature yet).

In fact, with the release of the track feature, it became clear that every word used in a post is important and holds value (something that both Jack and Blaine noted in our early discussions). But it’s also true that without certain keywords present in a post, the track feature is useless. In this case in particular, where they provide additional context, I think hashtags serve a purpose. Consider this:

“Tara really rocked that presentation!”

versus:

“Tara really rocked that presentation! #barcampblock”

In the latter example, the presence of the hashtag provides two explicit benefits: first, anyone tracking “barcampblock” will get the update, and second, those who don’t know where Tara is presenting will be clued into the context of the post.

In another example:

“300,000 people evacuated in San Diego county now.”

versus

“#sandiegofire: 300,000 people evacuated in San Diego county now.”

Again, the two benefits are present here, demonstrating the value of a concatenated hashtag: the space-separated phrase “San Diego” would not have been caught by the track feature.

What I don’t think is as useful as when I first made my proposal (pre-tracking) is calling out specific words in a post for emphasis (unless you’re referring to a place or airport, but that’s mostly personal preference). For example, revising my previous proposal, I think that this approach is now gratuitous:

“Eating #popcorn at #Batman in #IMAX.”

Removing the hashes doesn’t actually reduce the meaning of this post, nor does it affect the tracking feature. And, leaving them out makes the whole update look much better:

“Eating popcorn at Batman in IMAX.”

If you wanted to give your friends some idea of where you are, it might be okay to use:

“Eating popcorn at Batman in IMAX at #Leows.”

…but even then, the hash isn’t wholly necessary, serving only to denote some specialness to the term “Leows”.

So, with that, I’m thrilled to see hashtags.org get off the ground, but its use should not interfere with the conventional use of Twitter. As well, hashtags provide additional value when used conservatively, at least until there is a better way to insert metadata into a post.

As with most technology development, it’s best to iterate quickly, try a bunch of things (rather than just talk about them) and see what actually sticks. In the case of hashtags, I think we’re gradually getting to a pretty clear and useful application of the idea, if not the perfect implementation so far. Anyway, this kind of “conversational development” that allows the best approach to emerge over time while smoothing out the rough edges of an original idea seems to be a pretty effective way to go about making change, and it’s promising to see efforts like hashtags.org take a simple — if not controversial — proposal, and push it forward yet another step.

Twitter hashtags for emergency coordination and disaster relief

I know I’ve been beating the drum about hashtags for a while. People are either lukewarm to them or are annoyed and hate them. I get it. I do. But for some stupid reason I just can’t leave them alone.

Anyway, today I think I saw a glimmer of the promise of the hashtag concept revealed.

For those of you who have no idea what I’m talking about, consider this status update:

Twitter / nate ritter: #sandiegofire 300,000 peopl...

You’ll notice that the update starts out with “#sandiegofire”. That’s a hashtag. The hash is the # symbol and the tag is sandiegofire. Pretty simple.

Why use them? Well, it’s like adding metadata to your updates in a simple and consistent way. They’re not the most beautiful things ever, but they’re pretty easy to use. They also follow Jaiku’s channel convention to some extent, but break it in that you can embed hashtags into your actual post, like so:

Twitter / Mr Messina: @nateritter thanks for keep...

By embedding the tags inline, this simple design means that you can get more mileage out of your 140 characters than you might otherwise, were you required to specify your tags separately or in addition to your content.

Anyway, you get the idea.

Hashtags become all the more useful now that Twitter supports the “track” feature. By simply sending ‘track [keyword]‘ to Twitter by IM or SMS, you’ll get real-time updates from across the Twitterverse. It’s actually super useful and highly informative.

Hashtags become even more useful in a time of crisis or emergency as groups can rally around a common term to facilitate tracking, as demonstrated today with the San Diego fires (in fact, it was similar situations around Bay Area earthquakes that led me to propose hashtags in the first place, as I’d seen people Twittering about earthquakes and felt that we needed a better way to coordinate via Twitter).

Earlier today, my friend Nate Ritter started twittering about the San Diego fires, slowly at first and without any kind of uniformity to his posts. He eventually began prefixing his posts with “San Diego Fires”. Concerned that it would be challenging for folks to track “san diego fires” on Twitter because of inconsistency in using those words together, I wanted to apply hashtags as a mechanism for bringing people together around a common term (what Stowe Boyd incidentally calls groupings).

I first checked Flickr’s Hot Tags to see what tag(s) people were already using to describe the fires:

Popular Tags on Flickr Photo Sharing

I picked “sandiegofire” — the tag that I thought had the best chance of being widely adopted, and that would also be recognizable in a stream of updates. I pinged Nate around 4pm with my suggestion, and he started using it. Meanwhile, Dan Tentler (a BarCamp San Diego co-organizer who I met at ETECH last year) was also twittering, blogging and shooting his experience, occasionally using #sandiegofire as his tag. Sometime later Adora (aka Lisa Brewster, another BarCamp San Diego co-organizer) posted a status using the #sandiegofire hashtag.

Had we had a method to disperse the information, we could have let people on Twitter know to track #sandiegofire and to append that hashtag to their updates in order to join in on the tracking stream (for example, KBPS News would have been easier to find had they been using the tag). I should point out that the Twitter track feature actually ignores the hashmark; the hash is useful primarily to denote the tag as metadata within the update itself.
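
To make that mechanic concrete, here is a toy sketch (my own illustration, not Twitter’s actual implementation) of track-style matching that ignores the leading hash, per the note above:

    import re

    def track_matches(update, term):
        # Twitter's track feature matches whole keywords and, as noted above,
        # effectively ignores the leading hash; this mimics that behavior.
        words = {w.lower() for w in re.findall(r"[#\w]+", update)}
        term = term.lstrip("#").lower()
        return term in words or "#" + term in words

    print(track_matches("#sandiegofire: 300,000 people evacuated in San Diego county now.", "sandiegofire"))  # True
    print(track_matches("300,000 people evacuated in San Diego county now.", "sandiegofire"))                 # False

The second example is exactly why the concatenated tag matters: without it, tracking a multi-word phrase like “san diego fires” simply doesn’t work.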

Fortunately, Michael Calore from Wired picked up the story, but it might have come a little late for the audience that might have benefitted the most (that is, folks with Twitter SMS in or around affected areas).

In any case, hashtags are far from perfect. I have no illusions about this.

But they do represent what I think is a solid convention for coordinating ad-hoc groupings and giving people a way to organize their communications in a way that the tool (Twitter) does not currently afford. They also leave open the possibility for external application development and aggregation, since a Twitter user’s track terms are currently not made public (i.e. there is no way for me to know what other people are tracking across Twitter in the same way that I can see which tags have the most velocity across Flickr). So sure, they need work, but #sandiegofire should now provide a very clear example of the problem I’d like to see solved. Hashtags are my best effort at working on this problem to date; I wonder what better ideas are out there waiting to be proposed?

Announcing OAuth 1.0 Public Draft 1

Well, it’s been a long time coming, and if you’ve been following my Twitters at all, you’ll know that I’ve been working on an open authorization protocol called OAuth for the past few months. Today we released the first Public Draft for review.

The idea started as a humble effort to accomplish two goals: first, to enable Ma.gnolia members who created their accounts with OpenIDs (and therefore don’t have traditional usernames and passwords) to be able to use Dashboard Widgets; and second, to enable Twitter to adopt OpenID even though its current API requires a username and password to authorize access to protected status feeds.

In any case, both of these use cases were part of the same problem: the lack of a uniform and open protocol for what’s called “delegated authentication”. Another useful metaphor that I’ve come to like is one that John Panzer used, and Eran Hammer-Lahav before him: that of a valet key:

OAuth is like a valet key for all your web services. A valet key lets you give a valet the ability to park your car, but not the ability to get into the trunk or drive more than 2 miles or limit the RPMs on your high end German automobile. In the same way, an OAuth key lets you give a web agent the ability to check your web mail but NOT the ability to pretend to be you and send mail to everybody in your address book.

Arguably the value of OAuth as a technological innovation goes beyond that. After all, anyone can implement their own valet key system that works in their own universe of vehicles. The harder part is actually the social and political work of getting everyone to buy in and follow the same design pattern, leading to interoperability between systems.

In fact that’s where we were before OAuth: Google had AuthSub, AOL had OpenAuth (OAuth’s former name, by the way), Yahoo had BBAuth and Flickr had FlickrAuth (not to mention Facebook Auth and Windows Live ID Web Authentication). Which meant that if you were an independent developer (like Matt Biddulph from Dopplr), you had to pick which auth system you wanted to support (unless, of course, you had money and time coming out of your armpits, in which case you’d code against all of them).

Of course, that’s not reality. And no one has the time or energy to maintain support for every protocol, so instead, most people take the easy way out and just ask for the veritable keys to all the different services you use:

ShareThis | Import your addresses...

Now, don’t get me wrong, this gets the job done. And it works. But it’s a really really really bad idea.

Not only are people being trained into thinking that it’s okay to fill in any form that looks like a Gmail login box on any old website (trusted or not), but it’s creating an untenable situation where, as a member of these various services, you have no way to control the access you’ve given away without changing your password — which in effect will disable every one of these sites that’s storing your credentials — forcing you to revisit every one of them and share with them your new username and password. What a crappy experience!

Fortunately, Flickr got it right a long time ago and set the bar for user experience. In their model, you can try out a bunch of tools that help you upload photos to the service, or use off-site mashups that do cool things with your photos, all without giving away your most valuable credentials: your username and password!

Instead, when you sign in to your account, Flickr will assign special keys called “tokens” to each application that wants to access your account. Flickr then lets you configure how much access you want to grant to each app and lets you revoke that access at any time. No changing your password, no running around to have to re-authenticate all the apps that you still want to use if you want to disable one of them.

OAuth takes that approach one step further: it extracts the best practices from the popular authentication systems I mentioned above and turns them into one elegant, unified protocol that anyone can implement. And, because it’s an open standard that we hope many people will adopt in place of their own proprietary systems, it should be a no-brainer for developers to use and support, resulting in fewer sites that, with a straight face, continue to ask you for your username and password (oh, and yes, it is compatible with OpenID, with Google Accounts, with Yahoo Accounts and any other sign-in system — OAuth doesn’t dictate how you sign in, only how you delegate authentication).
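
For the curious, here is a rough sketch of what signing a request looks like under the draft’s HMAC-SHA1 method (simplified, with made-up keys and endpoint; consult the spec for the normative details):

    import base64, hashlib, hmac, secrets, time
    from urllib.parse import quote

    def percent_encode(s):
        # OAuth uses RFC 3986 percent-encoding, leaving only "-._~" unescaped.
        return quote(str(s), safe="-._~")

    def hmac_sha1_signature(method, url, params, consumer_secret, token_secret=""):
        # 1. Normalize the request parameters: encode, sort, and join as key=value pairs.
        pairs = sorted((percent_encode(k), percent_encode(v)) for k, v in params.items())
        param_string = "&".join("%s=%s" % (k, v) for k, v in pairs)
        # 2. Build the signature base string: METHOD & URL & normalized parameters.
        base_string = "&".join([method.upper(), percent_encode(url), percent_encode(param_string)])
        # 3. Sign with the consumer secret and token secret joined by "&".
        key = "%s&%s" % (percent_encode(consumer_secret), percent_encode(token_secret))
        digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
        return base64.b64encode(digest).decode()

    # Hypothetical values, just to show the moving parts:
    params = {
        "oauth_consumer_key": "example-consumer-key",  # identifies the application
        "oauth_token": "example-access-token",         # identifies the user's grant to that app
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_nonce": secrets.token_hex(8),
        "oauth_version": "1.0",
        "status": "Signed without ever sharing my password",
    }
    params["oauth_signature"] = hmac_sha1_signature(
        "POST", "http://example.com/statuses/update", params,
        consumer_secret="example-consumer-secret", token_secret="example-token-secret")

The point isn’t the crypto; it’s that a revocable token, not your password, is what travels with each request.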

Even though we’re only releasing the first public draft today, we already have pledges from Ma.gnolia, Twitter, Pownce, Jaiku, Dopplr and others that they intend to implement the protocol.

If you want to get involved, join our mailing list and take a look at the OAuth libraries under development for PHP, Ruby, Python, C# and others. We plan to formally release the final version of the OAuth Protocol v1.0 on Oct 1, so watch this space for more news until then.