Fluid, Prism, Mozpad and site-specific browsers

Matt Gertner of AllPeers wrote a post the other day titled, “Wither Mozpad?” In it he poses a question about the enduring viability of Mozpad, an initiative begun in May to bring together independent Mozilla Platform Application Developers, to fill the vacuum left by Mozilla’s Firefox-centric developer programs.

Now, many months after its founding, the group is still without a compelling raison d’être, and has failed to mobilize or catalyze widespread interest or momentum. Should the fledgling effort be disbanded? Is there not enough sustaining interest in independent, non-Firefox XUL development to warrant a dedicated group?

Perhaps.

There are many things that I’d like to say both about Mozilla and about Mozpad, but what I’m most interested in discussing presently is the opportunity that sits squarely at the feet of Mozilla and Mozpad and fortuitously extends beyond the world-unto-itself land of XUL: namely, the opportunity that I believe lies in the development of site-specific browsers or, to throw out a marketing term, rich internet applications (no doubt I’ll catch flak for suggesting the combination of these terms, but frankly it’s only a matter of time before any distinctions dissolve).

If you’re just tuning in, you may or may not be aware of the creeping rise of SSBs. I’ve personally been working on these glorified rendering engines for some time, inspired first by Mike McCracken’s Webmail.app and later by Ben Willmore’s Gmail Browser, with the idea most fully realized in Ruben Bakker’s pay-for Gmail wrapper Mailplane.app. More recently we’ve seen developments like Todd Ditchendorf’s Fluid.app, which generates increasingly functional SSBs, and before that, the stupidly-simple Direct URL.
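To make the SSB idea concrete, here’s a rough Python sketch of the core step a generator like Fluid.app performs: stamping out a tiny app descriptor that pins a bare rendering engine to a single URL. The bundle layout and descriptor fields here are my own illustration, not Fluid’s actual output format.

```python
import json
from pathlib import Path

def make_ssb(name, url, out_dir, width=1024, height=768):
    """Write a minimal, illustrative SSB descriptor: a bare browser
    window pinned to one site, with its own identity on the desktop."""
    bundle = Path(out_dir) / f"{name}.ssb"
    bundle.mkdir(parents=True, exist_ok=True)
    descriptor = {
        "name": name,             # app name shown in the dock/taskbar
        "home_url": url,          # the single site this browser renders
        "window": {"width": width, "height": height},
        "chrome": False,          # no toolbars, menus, or tabs
    }
    (bundle / "webapp.json").write_text(json.dumps(descriptor, indent=2))
    return bundle

# Usage: stamp out a Gmail wrapper in the current directory
# make_ssb("Gmail", "https://mail.google.com/", ".")
```

The point is how little there is to it: everything interesting lives in the rendering engine, which is exactly why these wrappers have proliferated.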

But that’s just progress on the WebKit side of things.

If you’ve been following the work of Mark Finkle, you’ll be able both to trace the threads of that work’s transformation into the full-fledged Prism project and to follow the germination of Mozpad.

Clearly something is going on here, and when measured against Microsoft’s Silverlight and Adobe’s AIR frameworks, we’re starting to see the emergence of an opportunity that I think will turn out to be rather significant in 2008, especially as an alternative, non-proprietary path for folks wishing to develop richer experiences without the cost, or the heaviness, of actual native apps. Yes, the rise of these hybrid apps that look like desktop apps but benefit from the connectedness and always-up-to-date-ness of web apps is what I see as the unrecognized fait accompli of the current class of stand-alone, standards-compliant rendering engines. This trend is powerful enough, in my thinking, to render the whole discussion about the future of the W3C uninteresting, if not downright frivolous.

A side effect of the rise of SSBs is the gradual obsolescence of XUL (which currently holds value only in the meta-UI layer of Mozilla apps). Let’s face it: the delivery mechanism of today’s Firefox extensions is broken (restarting an app to install an extension is so Windows! yuck!), and it needs to be replaced by built-in appendages that offer better and more robust integration with external web services (a design that I had intended for Flock) and that provide a web-native approach to extensibility. As far as I’m concerned, XUL development is all but dead and will eventually be relegated to the same hobby-sport nichefication as VRML scripting. (And if you happen to disagree with me here, I’m surprised that you haven’t gotten more involved in the doings of Mozpad.)

But all this is frankly good for Mozilla, for WebKit (and Apple), for Google, for web standards, for open source, for microformats, for OpenID and OAuth and all my favorite open and non-proprietary technologies.

The more the future is built on — and benefits from — the open architecture of the web, the greater the likelihood that we will continue to shut down and defeat the efforts that attempt to close it up, to create property out of it, to segregate and discriminate against its users, and to otherwise attack the very natural and inclusive design of the internet.

Site specific browsers (or rich internet applications or whatever they might end up being called — hell, probably just “Applications” for most people) are important because, for a change, they simply side-step the standards issues and let web developers and designers focus on functionality and design directly, without necessarily worrying about the idiosyncrasies (or non-compliance) of different browsers (Jon Crosby offers an example of this approach). With real competition and exciting additions being made regularly to each rendering engine, there’s also benefit in picking a side, while things are still fairly fluid, and joining up where you feel better supported, with the means to do cooler things and where generally less effort will enable you to kick more ass.

But all this is a way of saying that Mozpad is still a valid idea, even if the form or the original focus (XUL development) was off. In fact, what I think would be more useful is a cross-platform inquiry into what the future of Site Specific Browsers might (or should) look like… regardless of rendering engine. With that in mind, sometime this spring (sooner than later I hope), I’ll put together a meetup with folks like Todd, Jon, Phil “Journler” Dow and anyone else interested in this realm, just to bat around some ideas and get the conversation started. Hell, it’s going on already, but it seems time that we got together face to face to start looking at, seriously, what kind of opportunity we’re sitting on here.

Public nuisance #1: Importing your contacts

Facebook Needs OAuth

I’ve talked about this before (as one of the secondary motivators behind OAuth) but I felt it deserved a special call out.

Recently, Simon Willison presented on OpenID and called the practice that Dopplr (and many, many others) uses to import your contacts from Gmail absolutely horrifying. I would concur, but point out that Dopplr is probably the least egregious offender, as it also provides safe and effective hCard importing from Twitter or any URL, just as Get Satisfaction does.

Unfortunately this latter approach is both less widely implemented and also unfamiliar to many regular folks who really just want to find their friends or invite them to try out a new service.

The tragedy here is that these same folks are being trained to regularly hand out their email addresses and passwords (which also unlock payment services like Google Checkout) just to use a feature that has become more or less commonplace across all social network sites. In fact, it’s so common that Plaxo even has a free widget that sites can use to automate this process, as does Gigya. Unfortunately, the code for these widgets is not really open source (whereas Dopplr’s is), providing little assurance or oversight into how the import is done.

What’s most frustrating about this is that we have the technology to solve this problem once and for all (a mix of OpenID, microformats, OAuth, maybe some Jabber), and to actually make this situation better and more secure for folks. Why this hasn’t happened yet, well, I’m sure it has something to do with politics and resources and who knows what else. Anyway, I’m eager to see an open and free solution to this problem, and I think it’s the first thing we need to solve after January 1.
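To give a flavor of the OAuth piece of that mix, here’s a sketch of the HMAC-SHA1 request signing at the heart of OAuth Core 1.0: the service never sees your password, only requests signed with secrets exchanged out of band. This is a simplified illustration (real implementations also handle nonces, timestamps, and duplicate parameter names):

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def _enc(value):
    # OAuth uses strict RFC 3986 percent-encoding (only "~" is safe)
    return quote(str(value), safe="~")

def sign_request(method, url, params, consumer_secret, token_secret=""):
    """Build the OAuth 1.0 signature base string and HMAC-SHA1 sign it.
    `params` holds both the oauth_* and ordinary query parameters."""
    # Normalize: encode, sort, and join the request parameters
    pairs = sorted((_enc(k), _enc(v)) for k, v in params.items())
    normalized = "&".join(f"{k}={v}" for k, v in pairs)
    # Base string: METHOD & encoded-URL & encoded-parameter-string
    base_string = "&".join([method.upper(), _enc(url), _enc(normalized)])
    # Signing key: consumer secret & token secret (token may be empty)
    key = f"{_enc(consumer_secret)}&{_enc(token_secret)}".encode()
    digest = hmac.new(key, base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

The design point worth noticing is that the password never travels: the user authorizes a token at the provider, and the consumer proves itself with signatures alone.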

Data portability and thinking ahead to 2008

So-called data portability and data ownership is a hot topic of late, and with good reason: with all the talk of the opening of social networking sites and the loss of presumed privacy, there’s been a commensurate acknowledgment that the value is not in the portability of widgets (via OpenSocial et al) but instead, (as Tim O’Reilly eloquently put it) it’s the data, stupid!

Now, Doc’s call for action is well timed, as we near the close of 2007 and set our sights on 2008.

Earlier this year, ZDNet predicted that 2007 would be the year of OpenID, and for all intents and purposes, it has been, if only in that it put the concept of non-siloed user accounts on the map. We have a long way to go, to be sure, but with OpenID 2.0 around the corner, it’s only a matter of time before building user prisons goes out of fashion and building OpenID-based citizen-centric services becomes the norm.

Inspired by the fact that even Mitchell Baker of Mozilla is talking about Firefox’s role in the issue of data ownership (“In 2008 … We find new ways to give people greater control over their online lives — access to data, control of data…”), this is going to be the issue that most defines 2008 — or at least the early part of the year. And frankly, we’re already off to a good start. So here are the things that I think fit into this picture and what needs to happen to push progress on this central issue:

  • Economic incentives and VRM: Doc is right to frame the debate in terms of VRM. When it comes down to it, nothing’s going to change unless 1) customers refuse to play along anymore and demand change and 2) there’s increased economic benefit to companies that give control back to their customers versus those that continue to restrict or abuse/sell out their customers’ data. Currently, this is a consumer rights battle, but since it’s being fought largely in Silicon Valley, where the issues are understood technically while valuations are tied to a platform’s attractiveness to advertisers, consumers are at a great disadvantage, since they can’t make a compelling economic case. And given that the government and most bureaucracies are filled with stakeholders hungry for ever more accurate and technologically-distilled demographic data, it’s unlikely that we could force the issue through the legal system, as has been approximated in places like Germany and the UK.
  • Reframing of privacy and access permissions: I’ve harped on this for a while, but historic notions of privacy have been outmoded by modern realities. Those who expect complete and utter control need to take a look at the up-and-coming generation and realize that, on the whole, they don’t appreciate the value and sacredness of their privacy, and that they’re certainly more willing to exchange it for access to services, or to simply dispense with it altogether and face the consequences later (eavesdroppers be damned!). Their apathy indicates the uphill struggle we face in credibly making our case.

    Times have changed. Privacy and our notions of it must adapt too. And that starts by developing the language to discuss these matters in a way that’s obvious and salient to those who are concerned about them. Simply demanding the protection of one’s privacy is now a hollow and unrealistic demand; instead we should be talking about access, about permissions, about provenance, about review, and about federation and delegation. It’s not until we deepen our understanding of the facets of identity, personal data, personal profiles, tastestreams and newsfeeds that we can begin to make headway on exploring the economic aspects of customer data and who should control it, have access to it, and benefit from it.

  • Data portability and open/non-proprietary web standards and protocols: Since this is an area I’ve been involved in and am passionate about, I have some specific thoughts on this. For one thing, the technologies that I obsess over have data portability at their center: OpenID for identification and “hanging” data, microformats for marking it up, and OAuth for provisioning controlled access to said data… The development, adoption and implementation of this breed of technologies is paramount to demonstrating both the potential and the need for a re-orientation of the way web services are built and deployed today. Without the deployment of these technologies and their cousins, we risk web-wide lock-in to vendor-specific solutions like Facebook’s FBML, greatly inhibiting the potential for market growth and innovation. And it’s not so much that these technologies are necessarily bad in and of themselves, but that they represent a grave shift away from the slower but less commercially-driven development of open and public-domain web standards. Consider me the frog in the lukewarm water recognizing that things are starting to get warm in here.
  • Citizen-centric web services: The result of progress on these three topics is what I’m calling the “citizen-centric web”, where a citizen is anyone who inhabits the web, in some form or another. Citizen-centric web services are, of course, services provided to those inhabitants. This notion is what I think is going to (and should) drive much of the thinking in 2008 about how to build better web services: where individuals identify themselves to services, rather than recreating themselves and their so-called social graph; where they can push and pull their data at their whim and fancy, and where such data is essentially “leased” out to various service providers on an as-needed basis, rather than on a once-and-for-all basis, using OAuth tokens and proxied delegation to trusted data providers; where citizens control not only who can contact them, but are able to express, in portable terms, a list of people or companies who cannot contact them or pitch ads to them, anywhere; where citizens are able to audit a comprehensive list of the profile and behavior data that any company has on file about them, and to correct, edit or revoke that data; where “permission” has a universal, citizen-positive definition; where companies have to agree to a Creative Commons-style Terms of Access and Stewardship before being able to even look at a customer’s personal data; and where, perhaps most important to making all this happen, sound business models are developed that actually work with this new orientation, rather than in spite of it.

So, in grandiose terms I suppose, these are the issues that I’m pondering as 2008 approaches and as I ready myself for the challenges and battles that lie ahead. I think we’re making considerable progress on the technology side of things, though there’s always more to do. I think we need to make more progress on the language, economic, business and framing fronts, though. But, we’re making progress, and thankfully we’re having these conversations now and developing real solutions that will result in a more citizen-centric reality in the not too distant future.

If you’re interested in discussing these topics in depth, make it to the Internet Identity Workshop next week, where these topics are going to be front and center in what should be a pretty excellent meeting of the minds on these and related topics.

hCard for OpenID Simple Registration and Attribute Exchange

Microformats + OpenID

One of the many promises of OpenID is the ability to more easily federate profile information from an identity provider to secondary consumer applications. A simple example of this is when you fill out a basic personal profile on a new social network: if you’ve ever typed your name, birthday, address and your dog’s name into a form, why on earth would you ever have to type it again? Isn’t that what computers are for? Yeah, I thought so too, and that’s why OpenID — your friendly neighborhood identity protocol — has two competing solutions for this problem.

Unfortunately, both Simple Registration (SREG for short) and its potential successor Attribute Exchange (AX) go to strange lengths to reinvent the profile data wheel with their own exclusive formats. I’d like to propose a vast simplification of this problem by adopting attributes from the vCard standard. As you probably know, it’s this standard that defines the attributes of the hCard microformat, and it is therefore exceptionally appropriate for the case of OpenID (given its reliance on URLs for identifiers!).

Very quickly, I’ll compare the attributes in each format so you get a sense for what data I’m talking about and how each format specifies the data:

| Attribute | SREG | AX | vCard |
| --- | --- | --- | --- |
| Nickname | openid.sreg.nickname | namePerson/friendly | nickname |
| Full Name | openid.sreg.fullname | namePerson | n (family-name, given-name) or fn |
| Birthday | openid.sreg.dob | birthDate | bday |
| Gender | openid.sreg.gender | gender | gender |
| Email | openid.sreg.email | contact/email | email |
| Postal Code | openid.sreg.postcode | contact/postalCode/home | postal-code |
| Country | openid.sreg.country | contact/country/home | country-name |
| Language | openid.sreg.language | pref/language | N/A |
| Timezone | openid.sreg.timezone | pref/timezone | tz |
| Photo | N/A | media/image/default | photo |
| Company | N/A | company/name | org |
| Biography | N/A | media/biography | note |
| URL | N/A | contact/web/default | url |

Now, there are a number of other attributes beyond what I’ve covered here that AX and vCard specify, often overlapping in the data being described but not in the attribute names. Of course, this creates a pain for developers who want to do the right thing and import profile data rather than having folks repeat themselves yet again. Do you support SREG? AX? hCard? All of the above?
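In practice, a consumer that wants to support “all of the above” ends up writing a mapping shim. Here’s a minimal Python sketch of one direction of that mapping, using the SREG and vCard attribute names from the table above (the function name and shape are my own illustration):

```python
# Map SREG wire-format keys onto vCard property names, per the
# correspondences in the table above. "language" has no vCard
# equivalent, so it is deliberately absent.
SREG_TO_VCARD = {
    "openid.sreg.nickname": "nickname",
    "openid.sreg.fullname": "fn",
    "openid.sreg.dob": "bday",
    "openid.sreg.gender": "gender",
    "openid.sreg.email": "email",
    "openid.sreg.postcode": "postal-code",
    "openid.sreg.country": "country-name",
    "openid.sreg.timezone": "tz",
}

def normalize_profile(response):
    """Collapse an SREG response dict into a single vCard-keyed
    profile dict, silently ignoring attributes with no mapping."""
    profile = {}
    for key, value in response.items():
        vcard_key = SREG_TO_VCARD.get(key)
        if vcard_key and value:
            profile[vcard_key] = value
    return profile
```

The shim itself is trivial; the complaint is that every site has to write (and maintain) one for each format, which is exactly the repetition a shared vocabulary would eliminate.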

Well, regular people don’t care, and shouldn’t care, but that doesn’t mean this is a trivial matter.

Instead, the barriers to doing the right thing should be made as low as possible. Given all the work that we’ve done promoting microformats and their subsequent widespread adoption (I’d venture that there are many million hCards in the wild), it only makes sense to leverage the upward trend in the adoption of hCard and to use it as a parallel partner in the OpenID-profile-exchange dance.

While AX is well specified, hCard is well supported (as well as well-specified). While AX is extensible, hCard is too. Most importantly, however, hCard and vCard are already widely supported in existing applications like Outlook, Mail.app and just about every mobile phone on the planet, and in just about any other application that exchanges profile data.
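For illustration, here’s a deliberately naive Python sketch of what consuming published hCard data can look like: scan the markup for elements whose class names are hCard properties and collect their text. Real microformat parsers handle nesting rules, the abbr pattern, and multiple cards per page; this just shows why a URL-hosted hCard is such a low barrier for a relying party.

```python
from html.parser import HTMLParser

class HCardExtractor(HTMLParser):
    """Naive hCard reader: collect the text inside elements whose
    class attribute names an hCard property. Purely a sketch."""

    PROPS = {"fn", "nickname", "email", "bday", "tz", "org", "note", "url"}

    def __init__(self):
        super().__init__()
        self.card = {}
        self._open = []  # stack: hCard properties of enclosing elements

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        self._open.append([c for c in classes if c in self.PROPS])

    def handle_endtag(self, tag):
        if self._open:
            self._open.pop()

    def handle_data(self, data):
        text = data.strip()
        if not text:
            return
        # Attribute the text to every property element we're inside,
        # keeping only the first value seen per property
        for props in self._open:
            for prop in props:
                self.card.setdefault(prop, text)

def parse_hcard(html):
    parser = HCardExtractor()
    parser.feed(html)
    return parser.card
```

A consumer pointed at a profile URL could run something like this and prefill a signup form, no new wire format required.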

So the question is, when OpenID is clearly a player in the future and part of that promise is about making things easier, more consistent and more citizen-centric, why would we go and introduce a whole new format for representing person-detail-data when a perfectly good one already exists and is so widely supported?

I’m hoping that reason and sense will prevail and that OpenID and hCard will naturally come together as two essential building blocks for independents. If you’re interested in this issue, please blog about it or express your support to specs@openid.net. I think the folks working on this spec need to hear from people who make, create, build or design websites and who recognize the importance and validity of betting on, supporting, and adopting microformats.

OpenSocial and Address Book 2.0: Putting People into the Protocol

I wonder if Tim O’Reilly knows something that he’s not telling the rest of us. Or maybe he knows something that the rest of us know, but that we haven’t been able to articulate yet. Who knows.

In any case, he’s been going on about this “Address Book 2.0” for a while, and if you ask me, it has a lot to do with Google’s upcoming announcement of a set of protocols, formats and technologies they’ve dubbed OpenSocial.

[Aside: I’ll just point out that I like the fact that the name has “open” in it (even if “open” will be the catchphrase that replaces the Web 2.0 meme) because it means that in order to play along, you have to be some shade of open. I mean, if Web 2.0 was all about having lots of bits and parts all over the place and throwing them together just-in-time in the context of a “social” experience, then being “open” will be what separates those who are caught up and have been playing along from those who have been asleep at the wheel for the past four years. Being “open” (or the “most” open) is the next logical stage of the game, where being anything other than open will be met with a sudden and painless death. This is a good thing™ for the web, but remember that we’re in the infancy of the roll-out here, and mistakes (or brilliant insights) will define what kind of apps we’re building (or able to build) for the next 10 years.]

Let me center the context here. A few days ago, I wrote about putting people in the protocol. I was talking about another evolution that will come alongside the rush to be open (I should note that “open” is an ongoing process, not an endpoint in and of itself). This evolution will be painful for those who resist but will bring great advantage to those who embrace it. It’s pretty simple and if you ask me, it lies at the heart of Tim’s Address Book 2.0 and Google’s OpenSocial; in a word, it’s people.

Before I get into that, let me just point out what this is not about. Fortunately, in his assessment of “What to Look for from Google’s Social Networking Platform“, David Card at Jupiter Research spelled it out in blindingly incorrect terms:

As an analyst who used to have the word “Unix” on his business card, I’ve seen a lot of “open” “consortia” fail miserably. Regular readers know my Rule of Partnership: For a deal to be important, two of the following three must occur:

– Money must change hands
– There must be exclusivity
– Product must ship

“Open” “consortia” aren’t deals. That’s one of the reasons they fail. The key here would be “Product must ship.”

This completely misses the point. This is why the first bubble was so lame. So many people had third-world capital in their heads and missed what’s new: the development, accumulation and exchange of first-world social capital through human networks.

Now, the big thing that’s changed (or is changing) is the emphasis on the individual and her role across the system. Look at MyBlogLog. Look at Automattic’s purchase of Gravatar. Look at the sharp rise in OpenID adoption over the past two years. The future is in non-siloed living, man! The future is in portable, independent identities valid, like Visa, everywhere that you want to be. It’s not just about social network fatigue and getting fed up with filling out profiles at every social network you join and re-adding all your friends. Yeah, those things are annoying, but more importantly, the fact that you have to do it every time just to get basic value from each system means that each has been designed to benefit itself, rather than the individuals coming and going. The whole damn thing needs to be inverted, and, like rejoined ant segments dumped from many an ant farm, the fractured, divided, shattered-into-a-billion-fragments people of the web must rejoin themselves and become whole in the eyes of the services that (what else?) serve them!

Imagine this: imagine designing a web service where you don’t store the permanent records of facets of people, but instead you simply build services that serve people. In fact, it’s no longer even in your best interest to store data about people long term because, in fact, the data ages so rapidly that it’s next to useless to try to keep up with it. Instead, it’s about looking across the data that someone makes transactionally available to you (for a split second) and offering up the best service given what you’ve observed when similar fingerprint-profiles have come to your system in the past. It’s not so much about owning or storing Address Book 2.0 as much as being ready when all the people that populate the decentralized Address Book 2.0 concept come knocking at your door. Are you going to be ready to serve them immediately or asking them to fill out yet another profile form?

Maybe I’m not being entirely clear here. Admittedly, these are rough thoughts in my head right now and I’m not really self-editing. Forgive me.

But I think that it’s important to say something before the big official announcement comes down, so that we can pre-contextualize this and realize the shift that’s happening even as the hammer drops.

Look, if Google and a bunch of chummy chums are going to make available a whole slew of “social graph” material, we had better start realizing what this means. And we had better start realizing the value that our data and our social capital have in this new ecosystem. Forget page views. Forget sticky eyeballs. With OpenID, with OAuth, with microformats, with, yes, with FOAF and other formats — hell, with plain ol’ scrapable HTML! — Google and co. will be amassing a social graph the likes of which has yet to be seen or twiddled upon (that’s a technical term). It means that we’re [finally] moving towards a citizen-centric web and it means great things for the web. It means that things are going to get interesting, for Facebook, for MySpace, for Microsoft, for Yahoo! (who recently closed 360, btw!) And y’know, I don’t know what Spiderman would think of OpenSocial or of what else’s coming down the pipe, but I’m sure in any case, he’d caution that, with great power comes great responsibility.

I’m certainly excited about this, but it’s not all about Google. Moreover, OpenSocial is simply an acknowledgment that things have to (and have) change(d). What comes next is anyone’s guess, but as far as I’m concerned, Tim’s been more or less in the ballpark so far; it just won’t necessarily be about owning the Address Book 2.0, but about what it means when it’s taken for granted as a basic building block in the vast clockwork of the open social web.

Stop building social networks

I started writing this post August 8th. Now that Dave Recordon is at Six Apart and blogging about these things and Brad Fitzpatrick has moved on to Google, I thought I should finally finish this post.

I fortuitously ran into Tim O’Reilly, Brad Fitzpatrick and Dave Recordon in Philz yesterday as I was grabbing a cup of coffee. They were talking about some pretty heady ideas and strategies towards wrenching free one’s friends networks from the multiple social networks out there — and recombining them in such a way that it’d be very hard to launch a closed down social network again.

The idea isn’t new. It’s certainly been attempted numerous times, with few successful efforts to show for it to date. I think that Brad and Dave might be on to something with their approach, but it raises an important question: once you’ve got a portable social network, what do you do with it?

Fortunately, Brian Oberkirch has been doing a lot of thinking on this subject lately with his series on portable social networks, starting with a post on designing them and leading up to his most recent post offering some great tips on how to prepare your site for the day when your users come knocking for a list of their friends to populate their new favorite hangout.

In his kick-off post, Brian laid out the problem of social network fatigue as stemming from the:

  • Creation of yet another login/password to manage
  • Need to re-enter profile information for new services
  • Need to search and re-add network contacts at each new service
  • Need to reset notification and privacy preferences for each new service
  • Inability to manage and add value to these networks from a central app/work flow

I think these are the fundamental drivers behind the current surge of progress in user-centric identity services, as opposed to the aging trend of network-centric web services. If Eric Schmidt thinks that Web 3.0 will be made up of small pieces loosely joined and “in the cloud”, my belief, going back to my time with Flock, is that having consistent identifiers for the same person across multiple networks, services or applications is going to be fundamental to getting the next evolution of the web right.

Tim made the point during our discussion that at one point in computing history, SQL databases embedded access permissions in the database itself. In modern times, access controls have been decoupled from the data and are managed, maintained and federated without regard to the data itself, affording a host of new functionality and stability simply by adjusting the architecture of the system.

If we decouple people and their identifiers from the networks that currently define them, we start moving towards greater granularity of privacy control through mechanisms like global social whitelists and buddy list blocklists. It also means that individuals can solicit services to be built that serve their unique social graph across any sites and domains (kind of like a fingerprint of your relationship connections), rather than being restrained to the limited freedom in locked down networks like Facebook. And ultimately, it enables cross-sharing content and media with anyone whom you choose, regardless of the network that they’re on (just like email today, where you can send someone on Yahoo.com email from Gmail.com or even Hotmail.com, and so on, but with finer contact controls). The result is that the crosscut of one’s social network could be as complete (or discreet) as one chose, and that rather than managing it in a social network-centric way, you’d manage it centrally, just as you do your IM buddy list, and it would follow you around on any site that you visit.
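As a thought experiment, a “global social whitelist” of the kind described above might be nothing more than a set of canonicalized person-identifiers that any service could consult. This Python sketch is purely illustrative (the class name and the normalization rule are my own assumptions, not any existing spec):

```python
from urllib.parse import urlparse

class PortableContactList:
    """Sketch of a site-agnostic whitelist/blocklist keyed by a
    person's own identifier (an OpenID URL) rather than by a
    per-network username. Any service could consult one list."""

    def __init__(self):
        self.allowed = set()
        self.blocked = set()

    @staticmethod
    def _canonical(openid_url):
        # Collapse trivially different spellings of the same identifier
        # (scheme, case, trailing slash) into one canonical key
        parts = urlparse(openid_url.lower())
        return (parts.netloc + parts.path.rstrip("/")) or openid_url.lower()

    def allow(self, openid_url):
        self.allowed.add(self._canonical(openid_url))

    def block(self, openid_url):
        self.blocked.add(self._canonical(openid_url))

    def may_contact(self, openid_url):
        who = self._canonical(openid_url)
        return who in self.allowed and who not in self.blocked
```

The interesting property is that the list belongs to the person, not to any one network: the same `allow`/`block` decisions would apply wherever that identifier shows up, much like an email address works across providers.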

So it’s become something of a refrain in the advice that we’ve been giving out lately to our clients that they should think very critically about what social functionality they should (and shouldn’t) build directly into their sites. Rather than assuming that they should “build what Flickr has” or think about which features of Facebook they should absorb, the better approach, I think, is to assume that in the next 6-8 months (for the early adopters at least) there’s going to be a shift to these portable networks, where the basics will mostly be better covered by existing solutions and will not need to be rebuilt, and where each new site — especially those with specific functionality like TripIt (disclosure: we’ve consulted for TripIt) — will need to focus less on building out its own social network and more on how social functionality can support its core competency.

We’re still in the early stages of recognizing and identifying the components of this problem. Thus far, the Microformats wiki says:

Why is it that every single social network community site makes you:

  • re-enter all your personal profile info (name, email, birthday, URL etc.)?
  • re-add all your friends?

In addition, why do you have to:

  • re-turn off notifications?
  • re-specify privacy preferences?
  • re-block negative people?

AKA “social network fatigue problem” and “social network update/maintenance problem”.

I’ve yet to be convinced that this is a problem that the “rest of the world” beyond social geeks is suffering, but I do think that the situation can be greatly improved, even for folks who are used to abandoning their profiles when they forget their passwords. For one thing, the world today is too network-centric, and not person-centric. While I do think people should be able to take on multiple personas online (professional, casual, hobby, family, and so on), I don’t think that that means that they should have those boundaries set for them by the networks they join. Instead, they should maintain their multiple personas as separate identifiers: email addresses, IM addresses and/or profile URLs (i.e. OpenIDs). This allows for handy separation based on the way people already materialize themselves online. Projects like NoseRub and even the smaller additions of offsite-identifiers on sites like Digg, Twitter and Pownce also acknowledge that members think of themselves as being more faceted than a single URL indicates.

This is a good thing. And this is where social computing needs to go.

We need to stop building independent spider webs of sticky siloed social activity. We need to stop fighting the nature of the web and embrace the design of uniform resource identifiers for people. We need to have a user agent that actually understands what it means to be a person online. A person with friends, with contacts, with enemies, with multiple personas and surfaces and ambitions and these user agents of the social web need to understand that, though we live in many distinct places on the web and interact with many different services, that we as people still have one unified viewport through which we understand the world.

Until social networks understand this reality and start to adapt to it, the problem that Dave is describing is only going to get worse for more and more people, until truly the problem of social network fatigue spreads beyond social geeks and starts cutting into the bottom lines of companies that rely on the regularity of “sticky eyeballs” showing up.

While I will always and continue to bet on the open web, we’re reaching an inflection point where some fundamental conceptions of the web (and social networks) need to change. Fortunately, if we geeks have our way, it’ll probably be for the general betterment of the whole thing.

The story of exPhone.org

At FOO Camp, we held a session on Green Code and discussed various tactics for reducing power consumption by reducing (primarily) CPU cycles through wiser platform decisions and/or coding practices.

Somewhere in the discussion we brought up the impending launch of the iPhone and it occurred to me that there really wasn’t any substantive discussion being had about what to do with the many thousands of cell phones that would be retired in favor of newer, shinier iPhones.

Thus the seed for exPhone.org took root and began to germinate in my mind — as something simple and feasible that I could create to raise awareness of the issue and provide actionable information for busy people who wanted to do the right thing but might not want to wade through the many circuitous online resources for wireless recycling.

I had a couple constraints facing me: first, I needed to get this done while Tara was traveling to Canada, as I wanted it to be an early surprise birthday present. Second, I needed to get it done before the iPhone launch so I could leverage the event to promote the site. And third, I had other competing priorities that I really needed to focus on.

I went about designing the site in Keynote (my new favorite design tool), relying heavily on inspiration from Apple’s section. I did a bunch of research and posted a lot of links to a Ma.gnolia group (in lieu of a personal set) and created a Flickr group at the same time. I of course also registered the associated Twitter account.

As I went about developing the site, I felt that I wanted to capture everything in a single page — and make it easy for printing. However, I brought my buddy Alex Hillman into the project to help me with the trickier PHP integration bits (his announcement) and he convinced me that multiple pages would actually be a better idea — not to mention compatible with my primary purpose of encouraging sustainable behavior! — and so we ended up breaking the content into three primary sections: Preparation, Donation and Recycling.

We riffed back and forth in SVN, things started to solidify, and we quickly realized that we should make the site more social and interactive. And, rather than build our own isolated silos, we decided we’d pull in photos from Flickr, bookmarks from Ma.gnolia and Delicious, and use the groups functionality on Flickr and Ma.gnolia. This meant Alex simply had to toss the feeds into Yahoo! Pipes, dedupe them and then funnel the results into a SimplePie aggregator on our end to output the resultant feeds. It turned out that Pipes was, for some reason, not as reliable as we needed, and so Alex ripped it out and ended up bumping up SimplePie’s caching of the direct feeds.
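The core of that aggregation step — pull several feeds, drop the duplicates, present one newest-first stream — is simple enough to sketch. The actual site did this in PHP with SimplePie, but here's a minimal Python stand-in for the logic (the feed entries and URLs are hypothetical):

```python
from datetime import datetime

def merge_feeds(*feeds):
    """Merge entries from several parsed feeds, de-duplicating by link.

    Each entry is a dict with at least 'link' and 'date' (datetime) keys.
    The first occurrence of a link wins; results are newest-first.
    """
    seen = {}
    for feed in feeds:
        for entry in feed:
            seen.setdefault(entry["link"], entry)
    return sorted(seen.values(), key=lambda e: e["date"], reverse=True)

# Two example feeds (think Flickr and Ma.gnolia) sharing one item.
flickr = [
    {"link": "http://example.com/a", "date": datetime(2007, 7, 1)},
    {"link": "http://example.com/b", "date": datetime(2007, 7, 3)},
]
magnolia = [
    {"link": "http://example.com/a", "date": datetime(2007, 7, 1)},
    {"link": "http://example.com/c", "date": datetime(2007, 7, 2)},
]
merged = merge_feeds(flickr, magnolia)
print([e["link"] for e in merged])
# → ['http://example.com/b', 'http://example.com/c', 'http://example.com/a']
```

Caching, as Alex found, is the part that matters in production: fetch each source feed on a timer rather than per page view.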

Alex put in extra effort on the Flickr integration side, creating an exPhone user account on Flickr and setting up email posting to make it super simple to get your photos of your exphones onto the site. All you have to do is take a photo of your exphone and email it to myexphone@exphone.org with a subject like this: tags: exphone, ‘the make and model of your phone’ (yes, the make and model should be in single quotes!). We’re kinda low on photos on there, so we’d love for you to contribute!
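This is Flickr's standard upload-by-email tagging convention: tags follow a `tags:` prefix in the subject, with multi-word tags wrapped in single quotes. Purely for illustration, a parser for that subject format might look like this (the example phone model is made up):

```python
import re

def parse_subject_tags(subject):
    """Pull tags out of a Flickr-style upload subject line.

    Multi-word tags are wrapped in single quotes, e.g.
    "tags: exphone, 'Nokia 6100'" -> ['exphone', 'Nokia 6100'].
    """
    _, _, tail = subject.partition("tags:")
    # Try quoted phrases first, then fall back to bare words.
    return [quoted or bare
            for quoted, bare in re.findall(r"'([^']+)'|([^\s,]+)", tail)]

print(parse_subject_tags("tags: exphone, 'Motorola RAZR V3'"))
# → ['exphone', 'Motorola RAZR V3']
```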

Lastly, I’ve gotta give props to The Dude Dean for his SEO tips. I’m typically not a fan of SEO, but I think when applied ethically, it can definitely help you raise your relevance in search engine results. We’re nowhere in sight, but I’d love to get up in the cell phone recycling results.

I’ve written this up primarily to demonstrate an evolving design process (Keynote to HTML to SVN prototyping to iterative launch) and the use of existing technology to build a simple but rich web application. By leveraging web services via various APIs and feeds, Alex and I were able to build a “socialized” site with little original development, where most of our efforts were focused on content, design and behavior. I also made sure to mark up the site with microformats throughout, making it trivial to add the organizations I mentioned to your address book or reuse the data elsewhere.

I like the idea of “disposable web apps” or “single purpose apps” that provide useful information, useful functionality or simply reuse existing materials in a novel or purposeful way. I’m also thrilled that Alex and I cobbled this thing together from scratch in a matter of three days. Yeah, it’s not a long-term, high value proposition, but it was great fun to work on and is something concrete that came out of that discussion at FOO Camp.

I of course welcome your thoughts and feedback and invite you to add your own stories, links or photos to the site!

My default WordPress setup: 17 must-have plugins

WordPress is my favorite blogging platform and has been for a long time. It gets the basics right and never overwhelmed me as I grew up in my blogging experience. However, like Firefox, WordPress is also eminently extensible, making it easy to get more out of the platform the longer you’re on it and the more plugins you add to customize your experience.

Recently I took a look at the numerous WordPress blogs I maintain and decided to extract some of the best plugins I use across them. They range from spam management to reporting and stats to authentication and better overall functionality. Here we go:

  • Akismet: the best comment spam protection this side of Dodge. It fortunately comes pre-installed, though you’ll still need an API key from WordPress.com.
  • Clutter-Free: a simple plugin for customizing the WordPress composing interface. If you never turn off comments or worry about editing the slug, this is a handy plugin to keep things nice and tidy.
  • Comment Timeout: I just started using this one recently when it turned out that 90% of my comment spam was showing up on older posts. This one’s a life saver.
  • Diagnosis: this is a really useful plugin for finding out information about the server that you’re hosted on. Essential for debugging compatibility problems (like which version of PHP you’re on).
  • FeedBurner FeedSmith: Steve Smith originally wrote this plugin to make it easy to use FeedBurner for syndicating your blog and now FeedBurner has taken over its maintenance. Super easy to use and super useful.
  • Maintenance Mode: whenever I need to upgrade WordPress, I always flip the switch on this plugin, giving my visitors a pleasant down-time message. It doesn’t come with LOLCats out of the box, but you can add them if you’re feeling adventurous.
  • Share This: Alex King creates incredibly useful plugins and this is one of them. If you want to make it easy for your visitors to share your posts on bookmarking or social network sites, this is the one plugin you need.
  • TanTanNoodles Simple Spam Filter: Matt is skeptical about this plugin, but I find it useful. Essentially you can blacklist certain words and this plugin will delete any comments found to contain those words, as well as pre-filter comments as they’re being submitted. Whether it’s redundant to Akismet or not isn’t important to me — I need all the anti-spam kung fu I can get!
  • Trackback Validator: this plugin is part of a research program out of Rice University. I don’t know how well it works, but I’ve certainly had very little trackback spam since installing it!
  • Subscribe To Comments: unless you’re a co.mments or coComment user, it’s often a pain to stay on top of comments you’ve left on other blogs. Subscribe To Comments adds a checkbox below your comment box to allow your readers to subscribe to comment followups via email.
  • WordPress.com Stats: like Akismet, this is another Automattic product. If you have a WordPress.com account, this plugin will gather visitor stats on your blog and integrate them with your WordPress.com dashboard.
  • WordPress Database Backup: this one is also pre-installed by default and is recommended as part of the routine for upgrading WordPress. Every time you increment your install, you should do a backup with this plugin.
  • WordPress Mobile Edition: Alex comes through with another hugely useful plugin for converting your site to be mobile-phone friendly. I’m currently working on a skin for the iPhone, but for everything else, this one works wonders. Highly recommended.
  • WordPress Reports: If the WordPress.com stats aren’t enough for you, Joe Tan has written an awesome plugin that merges your FeedBurner and Google Analytics stats into a very readable page of infographics.
  • WordPress OpenID (+): of course if I’m going to be running multiple WordPress blogs, I’m not going to want to remember multiple usernames and passwords across them. Instead, I use OpenID. Will Norris’ work on Alan Castonguay’s original plugin fixes some bugs and updates the JanRain library to avoid a number of compatibility errors.
  • WP-Cache: if you get any kind of traffic whatsoever, this plugin is a lifesaver, especially in spikes from Digg and elsewhere. Turn it off while testing but otherwise, leave it running.
  • WP-ContactForm: Akismet Edition: I used Chip Cuccio’s WP-ContactForm for some time but found that it was a bit too restrictive with its spam fighting tactics. I switched to this version, which uses Akismet rather than regex rules, and have found that it’s a better balance for me.

So there you go. That’s the list that I use for every WordPress blog that I start. I should ask: how many of these do you use? What’s your favorite list of WordPress must-adds?

Oh, and bonus! I start every theme I work on with . It’s extremely flexible, fully classed (including native support for microformats) and now there’s a contest for best skins on until the end of the summer. Definitely a must-have for any new blog I work on.

Twitter adds support for hAtom, hCard and XFN

Twitter / Alex Payne: TWITTER CAN HAS A MICROFORMATS

The Cinco de Meow DevHouse was arguably a pretty productive event. Not only did Larry get buckets done on hAtomic but I got to peer pressure Alex Payne from Twitter into adding microformats to the site based on a diagram I did some time ago:

Twitter adds microformats

There are still a few bugs to be worked out in the markup, but it’s pretty incredible to think about how much recognizable data is now available in Twitter’s HTML.

Sam Sethi got the early scoop, and now I’m waiting for the first mashup that takes advantage of the XFN data to plot out all the social connections of the Twittersphere.

Raising the standard for avatars

FactoryDevil

Not long ago, Gravatar crawled back out from the shadows and relaunched with a snazzy new service (backed by Amazon S3) that lets you claim multiple email addresses and host multiple gravatars with them for $10 a year.

The beauty of their service is that it makes it possible to centrally control the 80 by 80 pixel face that you put out to the world and to additionally tie a different face to each of your email addresses. And this works tremendously well when it comes to leaving a comment somewhere that a) supports Gravatar and b) requires an email address to leave a comment.
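That email binding is what makes the whole thing work: a commenting system never has to store your image, it just derives a Gravatar URL from the email address you left. Gravatar's scheme hashes the trimmed, lowercased address with MD5 and takes the size as a query parameter; a minimal sketch (the example address is hypothetical):

```python
import hashlib
from urllib.parse import urlencode

def gravatar_url(email, size=80):
    """Build the Gravatar image URL for an email address.

    Gravatar hashes the trimmed, lowercased address with MD5;
    the 's' query parameter requests a square size in pixels.
    """
    digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
    return "https://www.gravatar.com/avatar/%s?%s" % (digest, urlencode({"s": size}))

# Normalization means casing and stray whitespace don't change the result.
print(gravatar_url(" Someone@Example.com "))
```

Note that the hash, not the address, is what travels in the page — a mild privacy courtesy, though hardly a strong one.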

Now, when Gravatar went dark, as you might expect, some enterprising folks came together and attempted to develop a decentralized standard to replace the well-worn service in a quasi-authoritarian spec called Pavatar (for personal avatar).

Aside from the coining of a new term, the choice to create an overly complicated spec and the sadly misguided attempt to call this effort a microformat, the goal is a worthy one, and given the recent question on the OpenID General list about the same quandary, I thought I’d share my thoughts on the matter.

For one thing, avatar solutions should focus on visible data, just as microformats do — as opposed to hidden and/or spammable meta tags. To that end, whatever convention is adopted or promoted should reflect existing standards. Frankly, the hCard microformat already provides a mechanism for identifying avatars with its “photo” attribute. In fact, if you look at my demo hcard, you’ll see how easy it would be to grab data from this page. There’s no reason why other social networks couldn’t adopt the same convention and make it easy to set a definitive profile for slurping out your current avatar.
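For the unfamiliar, the markup involved is tiny. A minimal hCard carrying an avatar looks something like this (the name and URLs are placeholders):

```html
<!-- class="vcard" marks the hCard root; a consumer looking for
     an avatar parses it and grabs the src of the "photo" element. -->
<div class="vcard">
  <img class="photo" src="http://example.com/avatar.jpg" alt="Photo of Jane Doe" />
  <a class="fn url" href="http://example.com/">Jane Doe</a>
</div>
```

No meta tags, no sidecar files — the avatar is the same visible image a human sees on the profile page.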

In terms of URI locating, I might recommend a standard convention that appends avatar.jpg to the end of an OpenID as a means of conveniently discovering an avatar, like so. This follows the favicon.ico convention of sticking the icon file in the root directory of a site and then using it in bookmarks. There’s no reason why, when URLs come to represent people, we can’t do the same thing for avatars.

Now, off of this idea is probably my most radical suggestion, and I know that when people shoot me down for it, it’s because I’m right, but just early (as usual).

Instead of a miserly 80 pixels square, I think that default personal avatars should be 512 pixels square (yes, a full 262,144 pixels rather than today’s 6,400).

There are a couple reasons and potential benefits for this:

  1. Leopard’s resolution independence supports icons that are 512px square (a good place to draw convention). These avatars could end up being very useful on the desktop (see Apple’s Front Row).
  2. While 80 pixels might be a useful size in an application, it’s often less than useful when trying to recognize someone in a lineup.
  3. We have the bandwidth. We have the digital cameras and iSights. I’m tired of squinting when the technology is there to fix the problem.
  4. It provides a high fidelity source to scale into different contortions for other uses. Try blowing up an 80 pixel image to 300 pixels. Yuck!
  5. If such a convention is indeed adopted, as favicon.ico was, we should set the bar much higher (or bigger) from the get-go.

So, a couple points to close out.

When I was designing Flock, I wanted to push a larger subscribable personal avatar standard so that we could offer richer, more personable (though hopefully not as male-dominated) interfaces like this one (featuring Technorati’s staff at the time):

Friends Feed Reading

In order to make this work across sites, we’d need some basic convention that folks could use in publishing avatars. Even today, avatars vary from one site to the next in both size and shape. This really doesn’t make sense. With the advent of OpenID and URL-based identity mashed up with microformats, it makes even less sense, though I understand that needs do vary.

So, on top of providing the basic convention for locating an avatar on the end of an OpenID (http://tld.com/avatar.jpg), why not use server-side transforms to also provide various avatar sizes, in multiples of 16, like: avatar.jpg (original, 512×512), avatar_256.jpg, avatar_128.jpg, avatar_48.jpg, avatar_32.jpg, avatar_16.jpg. This is similar to the Apple icon .icns format … I see no reason why we can’t move forward with better and richer representations of people.
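The discovery side of that convention is almost no code at all. A sketch of how a consumer might derive the full set of avatar URLs from an OpenID, assuming exactly the naming scheme proposed above (the tld.com URL is the same placeholder used in the text):

```python
from urllib.parse import urljoin

# Sizes from the suggested convention: a 512px original
# plus smaller cuts served as separate files.
SIZES = (256, 128, 48, 32, 16)

def avatar_urls(openid_url):
    """Given an OpenID URL, derive the conventional avatar file URLs.

    Assumes avatar.jpg at the root for the 512px original and
    avatar_<size>.jpg for each smaller size.
    """
    base = openid_url if openid_url.endswith("/") else openid_url + "/"
    urls = {512: urljoin(base, "avatar.jpg")}
    for size in SIZES:
        urls[size] = urljoin(base, "avatar_%d.jpg" % size)
    return urls

print(avatar_urls("http://tld.com")[48])
# → http://tld.com/avatar_48.jpg
```

No registry, no API key: like favicon.ico, the URL itself is the protocol.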

Onward!