OpenID on the iPhone

During the OpenID/OAuth Session

I helped lead a session on Saturday at iPhoneDevCamp on the topic of OpenID and OAuth (a new protocol a group of us have been developing) to a packed room of developers, designers and interested parties.

My basic premise was that if you’re going to develop an application for the iPhone that has any kind of account or social functionality, you should dispense with creating yet another identity silo and instead make use of OpenID. Among the reasons I cited:

  • Safari on the iPhone doesn’t have a password manager like 1Passwd and won’t be able to import all the Firefox passwords you’ve been recording for years. And, as mobile web browsers become more powerful, remembering web service account credentials will become more important (and more of a burden). Better to make it easy on your customers — one OpenID URL, one username and password.
  • If you’ve logged in with OpenID to a web service on your desktop or laptop and have set your provider to always log you in automatically, logging in on the iPhone only requires you to sign in to your OpenID provider once and then enter your URL at each web service you want to use. This means you avoid the challenge of invisibly typing your password over and over on the error-prone touchscreen keyboard.
  • The ability to cross-pollinate authenticated data using a combination of OpenID and OAuth while remote will be increasingly valuable, especially if the expectation is that applications are going to be entirely web-driven. When you’re dealing with desktop apps, you’re operating off a hard drive with known permissions; when you’re passing between web apps, the permission model is radically different and, just as you always have to authenticate when you go to check out from Amazon, patterns for this experience between web apps need refinement. OpenID can help smooth out that interaction.

iSignin

Lastly, there is work going on (okay, I’m doing it so far) to make the OpenID login experience on the iPhone (and elsewhere) trump any kind of old school login system available. This obviously needs a lot of work and new thinking (maybe instead of authenticating by typing a password you have to SMS a unique shortcode, etc.) but I think your money should be on OpenID if you’re going to be developing account-based web applications on the iPhone — or generally.

Why I’m involved in iPhoneDevCamp

iPhoneDevCamp

While I’m planning to write a lengthier piece about why I think the iPhone and its constraints are important to the future of the open web, I did want to take a moment to talk about my involvement in co-organizing this weekend’s iPhoneDevCamp with Raven Zachary, whurley, Blake Burris, Dominic Sagolla and Christopher Allen and touch on its relationship with BarCamp and other similar camp-style events.

In particular, I received questions about my involvement in the event and about calling it a “camp” from Jay Fichialos and Evan Prodromou, two BarCamp community members. I think their concerns are valid and worth answering, especially in public, as they get at the line between commercial interests and community interests — and at to what degree it’s okay to mix “business and grassroots”, especially when, to date, BarCamp and the majority of *camp-styled events have avoided most of the trappings of commercial endorsement.

Here’s essentially what I told them:

  1. For me, iPhoneDevCamp isn’t really about the iPhone. Personally, I couldn’t care less about the iPhone. What I am interested in, however, is the opportunity that the iPhone affords to promote the development and building of open web technologies in the conspicuous absence of proprietary technologies like Flash, Silverlight et al.
  2. I see my involvement as primarily to “keep it real”, to provide contacts and facilitation and to weigh in on issues of commercialization of the event. I think I represent a conservative perspective in this regard, whereas my fellow co-organizers are more open to various forms of sponsor involvement. My goal is to keep the vibe community-centric and make sure that the event remains true to the spirit of prior camps, putting the participants first, above sponsors.
  3. I like the idea of a productive and educational DevCamp model and would like to see this meme spread further. While this event is product-driven in name, I feel that subsequent events can morph into more product-agnostic events, extracting the base components of a “DevCamp” (part DevHouse, part BarCamp, part Mash Pit, part Mac Hack) into something more general. As with other events that I’ve been involved with, the event itself is non-proprietary and is open for reinterpretation and remixing. I would love for this event to enculturate new thinking, new ideas and new appreciation for using open web standards, open web technologies like microformats and OpenID and other non-proprietary web design methodologies. I’m sure other similar learning possibilities will emerge, but what’s important to me here is that the model of the DevCamp persist as yet another way for independents to gather themselves and self-educate.

Now, to be clear, I certainly do not care to hype the iPhone any more than it already is. I don’t own an iPhone and I haven’t decided whether I will buy one or not. Still, I feel like its release provides a grand opportunity to shift the thinking on developing for the iPhone towards open web technologies. Given the work I’ve been involved with, from Spread Firefox to microformats to OpenID, this seems to be an opportunity not worth missing, regardless of the commercial implications. The web will survive the iPhone and will be made better by it. To what extent that is true, however, is entirely in our hands.

Why I screenshot

sh pops the question

Three months ago, Sarah Hatter asked me a question that I intended to answer then and there. In fact I did, but I had meant to expand upon these thoughts in a longer post:

Actually, I take shots primarily for my own purposes — research, learning and as a repository of interfaces that I can dig up later and imitate.

If I had to go out and search for a specific UI every time I needed inspiration, I’d be a *much* slower designer than I already am! This way I can capture the best of the web *as* I come upon it, when the moment of inspiration hits.

I think this hints at what I said the other day about cleverness: she is the most clever who is the sum of everyone else’s cleverness (Ok, I didn’t say that exactly, but that’s kind of what I was getting at). On top of that, it’s rather inefficient to try to “innovate” your way to the next big thing when most “inventions” are actually evolutionary improvements to what’s come before. As if social networking and Web 2.0 was new! I mean, the version got ticked up from one-point-oh right?

But that’s not really what I’m saying. What I am saying is that I screenshot for history, for posterity, for education and erudition, for communication, to show off and, heck, for my own enjoyment. Call me twisted, but I really get off on novel approaches to old interfaces, clever disk images or fancy visualizations. Jacob Patton once called me the pornographer of Web 2.0. Nuff said.

Still, there is more to be said. For one thing, I don’t screenshot everything that I see or come across. Just like my blog posts, I tend to write about things that are interesting to me but that, if I’m going to share them with the wider world, will probably be of some interest to other folks, one way or another. I never assume interest, but, y’know, I do try to make this stuff look good on the off chance that someone takes inspiration from something I’ve uploaded… as in the case of Andy Baio’s work on the redesign of Upcoming. According to his own recollection of his design process, he relied more heavily on my shots of the Flickr-Yahoo Account merge than on any other online resource for figuring out how to implement the same for Upcoming. So yay? Go team!

This is the perfect example of why my screenshotting of design patterns can be really useful for clever people. When other, smarter people have already solved problems and start repeating the solutions or interfaces in consistent ways, it becomes a design cow path. These are most interesting to me because, as the patterns emerge, we start to develop a visual language for web applications that can be used in place of verbal descriptors like “adding friends” or “upload interfaces”. Rather than speak in the abstract, we can pull from an existing assortment of solutions from the wild that have already been proven in place, that you can interact with, and that you can evaluate on a case-by-case basis as to whether any given pattern is worth emulating in a new design.

I also screenshot as a way of in-between blogging, I guess. Y’know, like Twitter, Tumblr, Ma.gnolia, Plazes and Last.fm (among others) are all forms of in-between blogging. They’re where I am in the absences between longer posts (such as this one) where I record what I’m up to, what I’m seeing and what’s interesting to me. My Flickr screenshots are probably more often than not more interesting than what I have to say over here, and certainly less verbose. And, most significantly, the screenshot is the new photograph, allowing me to connect through images of what I see with other people who are able to see things the way I see them. Imagine life before the original camera, where everyone’s depiction of one another was captured on canvas in oil paint; before screenshotting became a first class citizen on Flickr, we were living in a similarly blind world, cut off from these representations of our daily experience. But fortunately, as of a few months ago, that’s no longer the case:

Flickr: Content Filters

And, following off that last observation, I screenshot for posterity. Now that this internet thing has caught on and been around a bit, it’s fun every now and then to reflect and go back to the days of the first bubble and take a look at what the “it” shine was back then (now it’s the “floor” effect — formerly known as the “wet floor” effect — but back then maybe it was the Java lake applet?). Which is all well and good, but once you start poking around, you’ll notice very quickly that the Wayback Machine is way incomplete. And while Google’s cache is useful, it certainly tends to care more about the textual content of a page than about how it originally looked. And that’s where screenshots make up the difference: just as photographs of real life offer us a way to record the way things were, screenshots provide a mirror in time into the things we see on screen, into the interfaces that we interact with and the digital communications that we consume (check out this old view of the QuickSilver catalog compared with its current look, or how about the Backpack preview, or when Gmail stored less than 2GB of email?).

I don’t tend to think about the historic value of things when I shoot them; I do tend to evaluate their interestingness or their contribution to a certain series along a theme. And yet, I’m curious to see, over time, just what these screenshots will reveal about us, and about the path we took to get to wherever we end up. For one thing, web application development has changed drastically from where it was just a few years ago and now, with the iPhone, we’re embarking into wholly undiscovered territory (where it’s unclear if screenshots will even be possible). But these screenshots help us learn about ourselves, and help us see the pieces-parts of our everyday experience. If I screenshot for any reason, perhaps it is to collect these scraps of evidence to help me better understand and put order into the world around me, to tie things together visually, and to explore solutions that work and others that fail. Anyway, it’s something I enjoy and will probably keep doing for the foreseeable future.

BarCampPortland and Pibb

Pibb - #pdxbarcamp

I’m here in Portland, OR at their BarCamp — it’s a great scene, but with a few differences.

First of all, this is the first time a BarCamp has been held specifically in a coworking space — in this case, an expansive collaborative environment called CubeSpace.

Second, Jay Fichialos, the original camphead, is here from Dallas and has transcribed the complete calendar into a great looking Google Spreadsheet.

Third, we’re using Pibb, a new online chat system built by Portland company JanRain, as the event’s channel. It seems to be performing really well for a new product and looks great. Unfortunately it doesn’t seem like there are permalinks available for the transcripts, but I’ve put in a request to the developers who were on-site for such a feature.

Otherwise, Dawn and Raven did a fantastic job putting the event together, there’s been plenty of food, great conversations and an impressive turnout. Oh, and Josh Bancroft’s Wii was definitely a welcome addition (even though Dawn kicked my ass).

Lastly, I’d like to commend BarCampPortland on achieving a three-to-five male-to-female ratio of organizers… and yes, I mean that there were five female planners out of a total of eight. Attendance overall was still skewed towards male attendees, but the session that Dawn put on about Collaboration in Communities had a full 10 female participants — and it was one of the best and most interesting sessions I’ve been to. Progress is slow, but with increased awareness, continued vigilance and proactive inclusivity, I do think that the BarCamp community can continue to improve how it promotes, invites and nurtures a wider, more diverse, and more talented, community.

Thoughts on Mozilla

You can now directly download the video or the audio.

Spurred by a conversation I had today, I thought I’d post some wide-ranging and very rough thoughts on Mozilla. They’re pretty raw and uncensored, and I go for about 50 minutes, but it might be somewhat thought-provoking. At the least, I’d love to hear your thoughts — in agreement or vehement disagreement. Educate me!

And, here are the basic notes I was working from:

  1. the future of the web: silverlight, apollo, JavaFX — where are you?? where’s mozilla’s platform for the future?
  2. build tools. xul tools are in the crapper. look at webkit and xcode.
  3. dump spreadfirefox; get your focus back. power to the people — not more centralization. where’s the college teams? run it like a presidential but stop asking for donations. events, mash pits… MozCamps… whatever… I know something is happening in Japan with Joi Ito… but that’s about all I know about.
  4. outreach… mitchell is out there… but i feel like, with all due respect, she’s too coy… i think ségolène royal, who recently lost the french election, set a very good example.
  5. and, the press have no idea what mozilla is up to… where the money’s going… there’s work and a roadmap for FF3… but it’s all about FF3.
  6. joe six pack is not your audience. look at africa, non-profits, international audiences. green audiences. MozillaWifi… work with Meraki networks! Firefox + Wifi in a box. Bring the web to everyone; stop being a browser company.
  7. Mozilla the platform… stop thinking of yourself as a browser company. stop competing with flock. start promoting platform uses of mozilla and treat these folks like GOLD! think of joost and songbird. as Microsoft has done, build an ecosystem of Firefox browsers…! And build the platform of support to nurture them. Make it possible for people to build sustainable businesses on top of Mozilla… provide all that infrastructure and support!
  8. CivicForge… like an ethical Cambrian House… the new sourceforge that works for non-developers… where’s the mozilla social network? sure they’re on Facebook, but it feels like a chore.
  9. leadership opportunities… Boxely… microformats… openid…. start prepping web designers for HTML5 if that’s the future.
  10. IE has caught up in the basics. They have tabs. They fixed popups and spyware. Firefox as an idea can sell; as a browser, not so much.
  11. Browsers are dead. They’re not interesting. Back to Joe Six Pack… he doesn’t care about browsers. He’ll use whatever is pre-installed. Need to get Firefox on Dells.. on Ubuntu… on the Mac. Songbird too. OEM for Joe Six Pack.
  12. Browsers are a commodity. People are happy with Safari, Firefox 2 and IE7. What comes next goes beyond the browser — again, Adobe, Microsoft and Sun are all betting on this.
  13. mobile. minimo is used by whom?
  14. Firefox as a flag — as a sports team… rah… rah! where’s the rebel yell? where’s the risk? where’s the backbone? Why can’t Firefox stand for more than web standards and safety? I don’t think Mozilla can afford to be reluctant or to pull any punches. They need to come out swinging every time. And be New York’s Babe Ruth to IE’s Boston Red Sox.
  15. open source is immortal; it’s time that mozilla started acting open source. at this point what DON’T they have to lose? the world is not the world of 2005. i want to know what the mozilla of 2010 looks like. where’s blake ross? where’s parakey? where’s joe hewitt? where’s dave baron? there’s so much talent at mozilla… are things really happening? thank god kaply is in charge of microformats now. (but, firefox is NOT an information broker!)
  16. lastly… great hope for the future of firefox, despite what sounds like negative commentary.

Twitter adds support for hAtom, hCard and XFN

Twitter / Alex Payne: TWITTER CAN HAS A MICROFORMATS

The Cinco de Meow DevHouse was arguably a pretty productive event. Not only did Larry get buckets done on hAtomic but I got to peer pressure Alex Payne from Twitter into adding microformats to the site based on a diagram I did some time ago:

Twitter adds microformats

There are still a few bugs to be worked out in the markup, but it’s pretty incredible to think about how much recognizable data is now available in Twitter’s HTML.

Sam Sethi got the early scoop, and now I’m waiting for the first mashup that takes advantage of the XFN data to plot out all the social connections of the Twittersphere.
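
As a rough illustration of what that XFN data enables, here is a minimal sketch in Python, using only the standard library’s html.parser, that pulls XFN rel values out of anchor tags. The sample fragment and the parser class are mine, written in the style of a Twitter following list, not taken from Twitter’s actual markup:

```python
from html.parser import HTMLParser

# Common XFN relationship values (not an exhaustive list)
XFN_VALUES = {"contact", "friend", "met", "acquaintance", "colleague", "me"}

class XFNParser(HTMLParser):
    """Collect (href, [xfn values]) pairs from <a rel="..."> tags."""
    def __init__(self):
        super().__init__()
        self.relationships = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        # rel is a space-separated token list; keep only XFN tokens
        rels = [r for r in attrs.get("rel", "").split() if r in XFN_VALUES]
        if rels and "href" in attrs:
            self.relationships.append((attrs["href"], rels))

# Hypothetical fragment in the style of a Twitter following list
sample = '<a href="http://twitter.com/ev" rel="contact met">Ev</a>'
parser = XFNParser()
parser.feed(sample)
print(parser.relationships)  # [('http://twitter.com/ev', ['contact', 'met'])]
```

A mashup plotting the Twittersphere’s social connections would just crawl profile pages and feed each one through a parser like this.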

Raising the standard for avatars

FactoryDevil

Not long ago, Gravatar crawled back out from the shadows and relaunched with a snazzy new service (backed by Amazon S3) that lets you claim multiple email addresses and host multiple gravatars with them for $10 a year.

The beauty of their service is that it makes it possible to centrally control the 80 by 80 pixel face that you put out to the world and, additionally, to tie a different face to each of your email addresses. And this works tremendously well when it comes to leaving a comment somewhere that a) supports Gravatar and b) requires an email address.

Now, when Gravatar went dark, as you might expect, some enterprising folks came together and attempted to develop a decentralized standard to replace the well-worn service in a quasi-authoritarian spec called Pavatar (for personal avatar).

Aside from the coining of a new term, the choice to create an overly complicated spec and the sadly misguided attempt to call this effort a microformat, the goal is a worthy one, and given the recent question on the OpenID General list about the same quandary, I thought I’d share my thoughts on the matter.

For one thing, avatar solutions should focus on visible data, just as microformats do — as opposed to hidden and/or spammable meta tags. To that end, whatever convention is adopted or promoted should reflect existing standards. Frankly, the hCard microformat already provides a mechanism for identifying avatars with its “photo” attribute. In fact, if you look at my demo hCard, you’ll see how easy it would be to grab data from this page. There’s no reason why other social networks couldn’t adopt the same convention and make it easy to set a definitive profile for slurping out your current avatar.
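
To make that slurping concrete, here is a minimal Python sketch (standard library only) that pulls the avatar out of hCard-style markup by looking for the photo class; the sample fragment and URL are illustrative, not from any real profile page:

```python
from html.parser import HTMLParser

class HCardPhotoParser(HTMLParser):
    """Collect the src of every <img> carrying the hCard "photo" class."""
    def __init__(self):
        super().__init__()
        self.photos = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # class is a space-separated token list; look for the "photo" token
        if tag == "img" and "photo" in attrs.get("class", "").split():
            self.photos.append(attrs.get("src"))

# Illustrative fragment; a fuller parser would also scope to class="vcard"
sample = ('<div class="vcard">'
          '<img class="photo" src="http://example.com/me.jpg" alt="Chris">'
          '</div>')
p = HCardPhotoParser()
p.feed(sample)
print(p.photos)  # ['http://example.com/me.jpg']
```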

In terms of URI location, I might recommend a standard convention that appends avatar.jpg to the end of an OpenID as a means of conveniently discovering an avatar, like so. This follows the favicon.ico convention of sticking the favicon.ico file in the root directory of a site and then using that icon in bookmarks. There’s no reason why, when URLs come to represent people, we can’t do the same thing for avatars.
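
The discovery rule amounts to little more than string concatenation; a sketch (the function name and example URL are mine, not part of any spec):

```python
def avatar_url(openid):
    """Derive an avatar location from an OpenID URL, favicon.ico-style:
    the image is assumed to live at <openid root>/avatar.jpg."""
    return openid.rstrip("/") + "/avatar.jpg"

print(avatar_url("http://factoryjoe.com/"))  # http://factoryjoe.com/avatar.jpg
```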

Now, off of this idea is probably my most radical suggestion, and I know that when people shoot me down for it, it’s because I’m right, but just early (as usual).

Instead of a miserly 80 pixels square, I think that default personal avatars should be 512 pixels square (yes, a full 262,144 pixels rather than today’s 6,400).

There are a couple reasons and potential benefits for this:

  1. Leopard’s resolution independence supports icons that are 512px square (a good place to draw convention). These avatars could end up being very useful on the desktop (see Apple’s Front Row).
  2. While 80 pixels might be a useful size in an application, it’s often less than useful when trying to recognize someone in a lineup.
  3. We have the bandwidth. We have the digital cameras and iSights. I’m tired of squinting when the technology is there to fix the problem.
  4. It provides a high fidelity source to scale into different contortions for other uses. Try blowing up an 80 pixel image to 300 pixels. Yuck!
  5. If such a convention is indeed adopted, as favicon.ico was, we should set the bar much higher (or bigger) from the get-go.

So, a couple points to close out.

When I was designing Flock, I wanted to push a larger subscribable personal avatar standard so that we could offer richer, more personable (though hopefully not as male-dominated) interfaces like this one (featuring Technorati’s staff at the time):

Friends Feed Reading

In order to make this work across sites, we’d need some basic convention that folks could use in publishing avatars. Even today, avatars vary from one site to the next in both size and shape. This really doesn’t make sense. With the advent of OpenID and URL-based identity mashed up with microformats, it makes even less sense, though I understand that needs do vary.

So, on top of providing the basic convention for locating an avatar on the end of an OpenID (http://tld.com/avatar.jpg), why not use server-side transforms to also provide various avatar sizes, in multiples of 16, like: avatar.jpg (original, 512×512), avatar_256.jpg, avatar_128.jpg, avatar_48.jpg, avatar_32.jpg, avatar_16.jpg. This is similar to the Apple icon .icns format… I see no reason why we can’t move forward with better and richer representations of people.
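
As a sketch of that naming scheme, the function below simply enumerates the expected filenames for a 512-pixel original and its scaled copies; the actual server-side resizing (with ImageMagick, Pillow or similar) is left out, and the size list is the one suggested above:

```python
def avatar_variants(base="avatar", sizes=(256, 128, 48, 32, 16)):
    """Map each expected avatar filename to its pixel size.
    The suffix-less file is assumed to be the 512x512 original."""
    variants = {base + ".jpg": 512}
    for size in sizes:
        variants["%s_%d.jpg" % (base, size)] = size
    return variants

print(avatar_variants())
```

A provider could generate this set once at upload time, so consumers never need to scale the original themselves.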

Onward!

Getting back to POSH (Plain ol’ Semantic HTML)

Salt and Pepper shakers

Original photo by paul goyette and shared under the Attribution-ShareAlike 2.0 license.

Following Web2Expo, a number of us got together for a Microformats dinner at Thirsty Bear. Some concern was raised over the increasing influx of proposals for new microformats — instead of sustained work on existing formats or techniques.

In discussing this, we realized a few things. Chief among them is that, as a community, we’ve been spending a great deal of time and effort providing a rationale and explanation for why microformats are important and how we use a community-driven process to derive new microformats. Now, there are historic reasons why our process is different and why we continually refer new members to it. If you consider that web standards themselves are created, reviewed and ratified by the W3C, a consortium of paying members bound to very specific rules and mandates, you’ll realize that the value of our community’s output is measured by the degree to which we are able to consistently produce high quality, clear and implementable specifications. Without adherence to a recognized process, chaos would unfold and we’d end up with a myriad of inconsistent and overlapping formats, which is essentially what killed the Structured Blogging initiative.

In the microformats community, it’s existing behavior discovered through research and prior standards work that most often leads to new formats, and this work is often undertaken and championed by independent individuals, as opposed to corporations. On top of that, our self-imposed mandate is to stay specific, focused and relevant, optimizing for the 80% use cases and ignoring the 20% edge cases.

This story has been replayed and retold the world over, with great effect and consequence. What we have failed to articulate in the same time and space, however, is what work is necessary beyond the creation of new microformats. And because of that, we have so many folks joining the community, eager to help, and seeing only the opportunity to — what else? — create a new microformat (in spite of the warning not to do so)!

So, the ultimate result of the conversation that night was to focus on a rebranding of an old idea along with a new process for generally getting involved in the microformats movement with a subset of tasks focused exclusively on advancing POSH.

From now on, we will be promoting POSH (as coined by kwijibo in IRC) as a first-order priority, alongside the development and improvement of existing microformats.

POSH (“Plain Old Semantic HTML”) is a very old idea, and constitutes the superset of semantic patterns within which microformats exist:

POSH Diagram

With POSH thusly established, we have enumerated four classes of actions that collectively represent a Process for Contributing in order to better channel the energy of new-comers and old-timers alike:

  1. Publish: if you’re not already, add valid, semantic markup to your own website. It goes without saying that you should also be publishing microformats wherever it makes sense. Focus on improving the web as it is, starting with the parts you have access to.
  2. Spread: advocate for and encourage others to follow your lead in implementing valid POSH and microformats. Familiarize yourself with web standards, accessibility, and why POSH is important. Do presentations on POSH at BarCamps and elsewhere; write about it, share it with friends, hold POSH Pits to create and build things with POSH. Add buttons (coming soon) to your site once you’ve been POSHified!
  3. Use: consume microformats — and better yet — add live subscriptions to data marked up in existing formats. With all the microformats being published, we need to start seeing some really innovative and time-saving uses of microformats, including tools for easily embedding microformatted content into blog posts and elsewhere.
    1. OpenID: meanwhile, consider adding OpenID identity services to your application or service — and support data syncing.
  4. Improve: once you’ve gone through and added POSH to all your websites, go back and refactor, iterate and provide feedback, tips and learnings about what you did, how you did it and why you did things the way you did to the greater community. Tag your posts with ‘POSH’, contribute them to the wiki and generally seek out opportunities for improving the resources available to the wider audience of web designers and programmers.

In the coming days, we’ll be adding more documentation to the wiki and encouraging others to spread the word (as you should!).

Lastly, to help frame the POSH concept, think of it as a “Fast-tracked Microformats Process” — wherein you can do your research, develop semantic patterns and then implement them without going through the same drawn-out process that accepted formats must go through… because the goal is actually not to develop a new format, but to solve a specific and time-sensitive problem. Over time, these implementations will come to represent the body of prior art necessary to make informed decisions about future formats, but the immediate goal is simply to POSHify the web, not to attempt the development of yet another format.

The importance of View Source

Camino View Source

There’s been a long history of innovation on the web founded in open access to the underlying source code that first websites, and later interactive web applications, were built on. The facility of having ready access to the inner workings of any web page has been essential to continued inspiration, imitation and, most importantly, the ongoing education of subsequent generations of designer-developer hybrids.

On my panel today on The Hybrid Designer, I took a moment to call out my concerns that the shininess of Rich Internet Application (RIA) frameworks like Apollo and Silverlight (the framework formerly known as WPF/E) is blocking out critical consideration of the gravity and potential consequences of moving to these platforms. As Marc Orchant put it:

One of the most interesting discussions in the session was precipitated when Messina voiced his concerns that “containers” for web functionality like Adobe Apollo and Microsoft Silver[light] would make it harder to create dynamic applications that leverage these data streams as they will, he predicted, create new “walled gardens” by obscuring what is currently a pretty open playing field of ideas and techniques. [Jeremy] Keith added the observation that by hiding the source for the hybrid applications created using these tools, up-and-coming designers would lose a valuable learning resource that runs counter to the spirit of a read/write web built using open, standardized tools. Needless to say, the room was pretty sympathetic to the sentiments expressed by the panel.

In particular, I was suggesting that these frameworks effectively remove the View Source command — an utter reversal in the trend towards openness in web technologies leading to, in my view, new silos within a more closed web.

Ryan Stewart, who sadly I didn’t get a chance to catch up with afterwards, took me to task for my oversimplification:

Today at the Web 2.0 Expo, I sat in on a panel with Richard MacManus, Kelly Goto, Chris Messina and Jeremy Keith. They talked about the “hybrid designer” and touched on some points about the web and the richness that has really created the “hybrid” notion. In one bit, Chris said he was lamenting the fact that a lot of RIA technologies are taking away the “view source” and he got applause from the crowd.

I think this is the perfect example of how misunderstood the RIA world is. Chris used the example of Apollo and Silverlight as two technologies that are killing view source. Apollo is meant for desktop applications. We don’t have “view source” on the desktop, but that doesn’t mean we couldn’t. Apollo uses Flex and Ajax to create the desktop applications, and BOTH of those allow for view source. It’s true that Flex developers can turn off that feature, but really how is that any different than obfuscating your JavaScript in an Ajax application? When people want to share, the RIA tools out there have mechanisms in place to let them do that. Can you ask for more than that?

I was also surprised to hear Chris complain about Silverlight in that group. Of all the technologies, I think Silverlight actually has the best “view source” support. It uses JavaScript as the programming language behind the hood, and the XAML is just text based, so you can view source just like any other web page and see both the XAML and JavaScript libraries. That’s pretty open I think.

I’ll plead ignorance here (especially in terms of Silverlight), but I refuse to back off from my point about the importance of View Source (a point that I don’t think Ryan disagrees with in principle).

Whether you can get at the “goods” in Silverlight or Apollo apps is only part of the problem. I’ve examined the contents of four or five Apollo apps and each one had any number of impenetrable .swf binaries that I couldn’t do anything with; and even with the complete source code of TwitterCamp, a rather simple Apollo app, it wasn’t obvious how a design-leaning hybrid designer like myself would actually modify the app without buying into expensive Adobe tools like ($699) or ($499). And that, in essence, is no different than removing the View Source command altogether.

…and even when I finally did figure out that I could right click and choose View Source while running TwitterCamp, I received this error message and no source code:

[Screenshot: TwitterCamp’s “Alert” error dialog, shown in place of the source]

Now, Ryan also claims that “We don’t have ‘view source’ on the desktop”, and I would argue that 1) it depends on your platform and 2) I’m not fundamentally prevented from tinkering with my desktop apps. And this is key.

Let’s drill down for a moment.

On the Mac, every application has the equivalent of a View Source command: simply right click and choose “Show Package Contents”. Since every Mac application is essentially a special kind of folder, you can actually browse the contents and resources of an application — and, in certain cases, make changes. Now, this isn’t as good as getting to the raw source, since there are still unusable binaries in those directories, but you can at least get to the nib files and make changes to the look and feel of an application without necessarily touching code or having the full source.
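The bundle-as-folder idea is easy to demonstrate: a .app is just a directory tree with a conventional layout, so “Show Package Contents” amounts to browsing that tree. Here’s a minimal sketch (using a mock, hypothetical bundle rather than a real application, and Python for portability):

```python
import os

# A Mac .app bundle is an ordinary directory with a conventional layout.
# Build a throwaway mock bundle to show what "Show Package Contents" exposes.
bundle = "Demo.app"
for sub in ("Contents/MacOS", "Contents/Resources"):
    os.makedirs(os.path.join(bundle, sub), exist_ok=True)

# Typical contents: metadata, the executable binary, and editable
# resources such as nib files and images.
for rel in ("Contents/Info.plist",
            "Contents/MacOS/Demo",
            "Contents/Resources/MainMenu.nib"):
    open(os.path.join(bundle, rel), "w").close()

# "Show Package Contents" is equivalent to walking this tree:
for root, dirs, files in os.walk(bundle):
    for name in files:
        print(os.path.join(root, name))
```

The point being: the executable in Contents/MacOS may be an opaque binary, but the resources alongside it are ordinary files you can open and change.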

And so, just as on the web, especially with free and open source tools like Firebug and Greasemonkey, with a little bit of knowledge or persistence you can modify, tweak or wholly customize your experience without getting permission from the application creator, all by way of “viewing the source”. More importantly, you can learn from, adapt and merge prior art (source code that you’ve found elsewhere) which, in turn, can be improved upon and released, furthering a virtuous cycle of innovation and education.

Nonetheless, I’m glad that Ryan has corrected me, especially about Silverlight, which indeed is put together with a lot of plain-text technologies. However, I still can’t help but be skeptical when there seems to be so much in it for Adobe and Microsoft to build out their own islands of the web where people buy only their tools and live in prefab Second Life worlds of quasi-standards that have been embraced and extended. It feels like déjà vu all over again; like we’ve been here before and though I’d thought that we’d internalized the reasons for not returning to those dark ages, the shininess of the new impairs our ability to remember the not-so-distant past… While Ryan may be technically correct about the availability of the source, if that top-level menu item vanishes from the first-gen of RIAs, I remain increasingly concerned that the net result will constitute the emergence of an increasingly closed and siloed web.

I do hope that Ryan’s optimism, coupled with activism from other open source and open web advocates, will counter my fears with great speed and efficacy, and keep what is now the most open and vital aspect of the web the way it was meant to be.

The relative value of open source to open services

There’s an active debate going on in the activeCollab community stemming from the announcement that the formerly exclusively community-backed open source project will lose much of its open source trappings to go commercial and focus on a closed platform providing open web services.

For those who aren’t aware, activeCollab was created as a free, open source and downloadable response to Basecamp, the project management web app. In June of last year, the project founder and lead developer, Ilija Studen, offered his rationale for creating activeCollab:

First version of activeCollab was written somewhere about May 2005 for personal use. I wanted Basecamp but didn’t want to pay for it. Being a student with few freelance jobs I just couldn’t guaranty that I’ll have money for it every month. So I made one for myself. It’s running on my localhost even today.

Emphasis original.

Ilija offered many of the usual personal reasons for making his project free and open:

  • Learning.
  • Control.
  • Establishing community.
  • Earning money.

Now, the last one is significant, for a couple reasons, as was pointed out at the time of the first release: Ilija wanted to make money by offering commercial support and customization on a product imitating someone else’s established commercial product.

But competition is good, especially for my friends in Chicago, and they’ve said as much.

But, Ilija made one fatal mistake in his introductory post that I think he’s come to regret nearly a year later: I find it normal to expect something in return for your work. activeCollab will always be free.

And so a community of Basecamp-haters and open source freeloaders gathered around the project and around Ilija, eager to build something to rival the smug success of Basecamp, something sprung from the head of the gods of open source and of necessity, to retrace the steps of Phoenix before it (later redubbed Firefox), to fight the evils of capitalism, the injustice of proprietary code, and to stave off the economic realities of trying to make a living creating open source software.

For a little under a year, the project slogged on, a happy alternative to Basecamp, perfect for small groups without the ability to afford its shiny cousin, perfect for those who refuse to pay for software, and perfect for those who need such collaboration tools, but live sheltered behind a firewall.

A funny thing happened on the way to the bank, though, and Ilija realized that simply offering the code for people to download, modify and run on their own servers wasn’t earning him nearly enough to live on. And without an active ecosystem built around activeCollab (as WordPress and Drupal have), it was hard to keep developing the core when he literally could not afford to continue doing so.

Thus the decision to break from his previous promise, close up the code, and offer instead an open API on which others could build plugins and services, morphing activeCollab from a commodity download into a pay-for web service:

Perhaps I am naive, and this was the business model all along. i.e. Build a community for the free software during early development and testing, then close it up just as the project matures.

That was not original plan. Original plan was to build a software and make money from support and customization services. After a while we agreed that that would not be the best way to go. We will let other teams do custom development while we keep our focus solely on activeCollab.

But, the way in which he went about announcing this change put the project and the health of his community at risk, as Jason pointed out:

Ilja,

I’m a professional brand strategist, and while nothing is ever certain, I also feel that this is a bad move.

Essentially you’ve divided your following into three camps. For, against and don’t care. A terrible decision.

What you should have done (or should do… its not too late):

—> Start a completely seperate, differently branded commercial service that offers professional services

—> Leave your existing open-source model the same and continue to develop the project in concert with the community

————————-

Sugar is not a great model to follow. It’s not.

A better example would Bryyght[dot]com, a commercial company hosting Drupal CMS. The people there are still very actively involved in the original open-source project.

Overall, you should choose your steps wisely. While you’re the driving source behind the project – NOBODY fully owns their own brand.

A brand is owned by the community that are a part of it. Without customers, a brand is nothing.

JH

A brand is owned by the community that are a part of it. Without customers, a brand is nothing. (Hmm, sounds like the theory behind the Community Mark).

I think JH has a point, and with regards to open source, one that Ilija would do well to consider. On the one hand, Ilija has every right to change the course of the project — he started it after all and has done the lion’s share of work. He also needs to figure out a way to make a living, and now, having tried one model, is ready to try another. On the other, closing up the core means that he has to work extra hard to counter the perception that activeCollab is not an open source project, when indeed, parts of it still will be, and likely, won’t be the worse for it.

That many of the original Basecamp haters who supported Ilija’s work have now turned their anger towards him suggests that he’s both pioneering a tribrid open business/open service/open source model and doing something right. At least people care enough to express themselves…

And yet, that’s not to say that the path will be easy or clear. As with most projects, it is how he manages this transition that will make the difference, not that he made the decision.

All the same, it does suggest that the open source community is going through an evolution where the question of what to be open about, and with whom to share, is becoming a lot harder to answer than it once was. Or at least, how to sustain open source efforts that lend themselves so readily to operation as web services.

With the Honest Public License coming in advance of the GPL v3 to cover the use of open source software in powering web applications and services, there are obvious issues with releasing code that you once could count on being tied to the personal desktop. Now, with the hybridization of desktop and internet environments and the democratization of scripting knowledge, it’s a lot harder to make a living simply through customization and support services for packaged source code when you’re competing against everyone and their aunt, not to mention Yahoo, Google and the rest.

Steve Ivy asked a poignant question in his recent post on Open Source v. Open Services: If the service is open enough, what’s the value of the source?

Truly, that is a question that I think a lot of us, including folks like Ilija, are going to have to consider for some time to come. And as we do, we must also consider what sustainable models for open source and open services will look like, for we are now finally living in a web service-based economy, where the quality of your execution and uptime matter as much as, if not more than, the quality of your source code.