Thoughts on Mozilla

You can now directly download the video or the audio.

Spurred by a conversation I had today, I thought I’d post some wide-ranging and very rough thoughts on Mozilla. They’re pretty raw and uncensored, and I go for about 50 minutes, but it might be somewhat thought-provoking. At the least, I’d love to hear your thoughts — in agreement or vehement disagreement. Educate me!

And, here are the basic notes I was working from:

  1. the future of the web: silverlight, apollo, JavaFX — where are you?? where’s mozilla’s platform for the future?
  2. build tools. xul tools are in the crapper. look at webkit and xcode.
  3. dump spreadfirefox; get your focus back. power to the people — not more centralization. where’s the college teams? run it like a presidential campaign but stop asking for donations. events, mash pits… MozCamps… whatever… I know something is happening in Japan with Joi Ito… but that’s about all I know about.
  4. outreach… mitchell is out there… but i feel like, with all due respect, she’s too coy… i think ségolène royal — who recently lost the french election — set a very good example.
  5. and, the press have no idea what mozilla is up to… where the money’s going… there’s work and a roadmap for FF3… but it’s all about FF3.
  6. joe six pack is not your audience. look at africa, non-profits, international audiences. green audiences. MozillaWifi… work with Meraki networks! Firefox + Wifi in a box. Bring the web to everyone; stop being a browser company.
  7. Mozilla the platform… stop thinking of yourself as a browser company. stop competing with flock. start promoting platform uses of mozilla and treat these folks like GOLD! think of joost and songbird. as Microsoft has done, build an ecosystem of Firefox browsers…! And build the platform of support to nurture them. Make it possible for people to build sustainable businesses on top of Mozilla… provide all that infrastructure and support!
  8. CivicForge… like an ethical Cambrian House… the new sourceforge that works for non-developers… where’s the mozilla social network? sure they’re on Facebook, but it feels like a chore.
  9. leadership opportunities… Boxely… microformats… openid…. start prepping web designers for HTML5 if that’s the future.
  10. IE has caught up in the basics. They have tabs. They fixed popups and spyware. Firefox as an idea can sell; as a browser, not so much.
  11. Browsers are dead. They’re not interesting. Back to Joe Six Pack… he doesn’t care about browsers. He’ll use whatever is pre-installed. Need to get Firefox on Dells.. on Ubuntu… on the Mac. Songbird too. OEM for Joe Six Pack.
  12. Browsers are a commodity. People are happy with Safari, Firefox 2 and IE7. What comes next goes beyond the browser — again, Adobe, Microsoft and Sun are all betting on this.
  13. mobile. minimo is used by whom?
  14. Firefox as a flag — as a sports team… rah… rah! where’s the rebel yell? where’s the risk? where’s the backbone? Why can’t Firefox stand for more than web standards and safety? I don’t think Mozilla can afford to be reluctant or to pull any punches. They need to come out swinging every time. And be New York’s Babe Ruth to IE’s Boston Red Sox.
  15. open source is immortal; it’s time that mozilla started acting open source. at this point what DON’T they have to lose? the world is not the world of 2005. i want to know what the mozilla of 2010 looks like. where’s blake ross? where’s parakey? where’s joe hewitt? where’s david baron? there’s so much talent at mozilla… are things really happening? thank god kaply is in charge of microformats now. (but, firefox is NOT an information broker!)
  16. lastly… great hope for the future of firefox, despite what sounds like negative commentary.

We found women in tech, so why are you still not reporting about them?

A Guide to the Unconventional

There’s a good article on unconferences by Scott Kirsner in next week’s BusinessWeek. He talks about what an unconference is, discusses the rise of the wider community and the potential threat to the traditional conference model.

All in all, he does a pretty good job capturing an accurate picture of the “unconference scene” and it was great getting to talk to Scott about his piece.

I did want to take issue with his singling me out from the “two fellow Web2Open organizers”, and bring some attention to gender blindness in media stories such as this one.

As with many stories in the popular press, it’s fairly typical to rest the foundation of a story on one or two key individuals; it keeps complexity low and avoids getting bogged down in details that are only of import to the characters of the story. I’m sure that Scott didn’t intend any malice, but that Ross and Tara, who both stood on those chairs with me, went unnamed strikes me as a missed opportunity to highlight the hard work that lots of folks have put into building this community; in particular, it undermines the credit that Tara deserves for the incredible amount of work she did to make Web2Open happen. If anyone, she’s the one who really deserves to be called out in the article.

But there’s a second and more insidious issue that I want to raise now, while the issue is relevant… If you read over the article with the inside knowledge that I have of the background that went into it, it’s doubly unfortunate that Tara wasn’t given more credit as a female organizer when she did far more than I did to pull off the conference; on top of that, the mentions of Web2Open attendee Sudha Jamthe (a previous BarCamp organizer, no less) and Tara Dunion, spokeswoman for the Consumer Electronics Association, seem to paint them as bit players when compared to white guys like me, Dave Winer and Doug Gold.

Now, maybe I’m just over-sensitive to this kind of stuff, building mountains out of molehills and all that, but I suppose that’s the price of vigilance. It’s also something that I can’t ignore when BarCamp is not and has never been solely about individuals, but about what we can do together, each serving our own best interests. And this is especially relevant if you read Aaron Swartz’s thoughts on misogyny in the tech community:

If you talk to any woman in the tech community, it won’t be long before they start telling you stories about disgusting, sexist things guys have said to them. It freaks them out; and rightly so. As a result, the only women you see in tech are those who are willing to put up with all the abuse.

I really noticed this when I was at foo camp once, Tim O’Reilly’s exclusive gathering for the elite of the tech community. The executive guys there, when they thought nobody else was around, talked about how they always held important business meetings at strip clubs and the deficiencies of programmers from various countries.

Meanwhile, foo camp itself had a session on discrimination in which it was explained to us that the real problem was not racism or sexism, but simply the fact that people like to hang out with others who are like themselves.

The denial about this in the tech community is so great that sometimes I despair of it ever getting fixed. And I should be clear, it’s not that there are just some bad people out there who are being prejudiced and offensive. Many of these people that I’m thinking of are some of my best friends in the community. It’s an institutional problem, not a personal one.

Promoting women when they’re doing great things in the tech community has to become a top priority. Seeking out the women who serve in backbone roles within our community, bringing the spotlight to them and supporting them must become a shared priority. Working with women’s groups to create both inviting events and interesting opportunities to draw out and inspire reluctant or hidden female talent is something that conference and *camp organizers alike must attend to.

I think I’m extra sensitive about this particular case for two reasons. The first is that we tried really hard and went out of our way to encourage and include women both in Web2Open and in the Web2Expo. It was certainly a challenge, but I’m proud of the progress we made. I personally had the privilege to work with three incredible women on the designer track (Kelly Goto, Jen Pahlka and Emily Chang) and I think that made all the difference. The second issue probably stems from the Swartz interview, where Philipp Lenssen (the interviewer) reports:

The last barcamp I was at, in Nuremberg, had a men/ women ratio of about 80/ 2. It was quite sad, and I was wondering what the cause of this was. Is it partly also a problem of the hacker culture, to behave anti-social, and that this puts off more social people? Many good programmers I know, for instance, aren’t too social.

To which Aaron astutely replies:

I think that’s probably part of it; many people don’t have the social skills to notice how offensive they’re being. But even the people who are quite social and competent misbehave and, furthermore, they support a culture where this misbehavior is acceptable. I don’t exclude myself from this criticism.

Now, for a BarCamp to have an 80-2 male-female ratio is unacceptable as far as I’m concerned. And I would hope and challenge the BarCamp community, in particular, to do whatever it takes to remedy a condition like this. There are simply no excuses, only constant improvements to be made. And if any community were up to the challenge of taking on, head on, and reversing this long-term, systemic trend of making women effectively invisible, I should hope, and moreover expect, that it would be the BarCamp community, taking the first worldwide steps towards addressing this critical matter and setting some baseline priorities for how we’re going to improve this situation.

Twitter adds support for hAtom, hCard and XFN

Twitter / Alex Payne: TWITTER CAN HAS A MICROFORMATS

The Cinco de Meow DevHouse was arguably a pretty productive event. Not only did Larry get buckets done on hAtomic, but I got to peer-pressure Alex Payne from Twitter into adding microformats to the site, based on a diagram I did some time ago:

Twitter adds microformats

There are still a few bugs to be worked out in the markup, but it’s pretty incredible to think about how much recognizable data is now available in Twitter’s HTML.
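To make that concrete, here is a rough sketch of what hCard and XFN markup on a Twitter profile page might look like. This is an illustration of the patterns, not a copy of Twitter’s actual markup; the names and URLs are made up:

```html
<!-- hCard: the profile owner; XFN rel values on links to people they follow -->
<div class="vcard">
  <a class="url fn" href="http://twitter.com/example_user">Example User</a>
  <img class="photo" src="http://twitter.com/avatars/example_user.jpg"
       alt="Example User" />
</div>
<ul class="following">
  <li><a href="http://twitter.com/a_friend" rel="contact">A Friend</a></li>
</ul>
```

Any hCard or XFN parser pointed at a page like this can pull out a person’s name, avatar and social connections without a separate API call.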

Sam Sethi got the early scoop, and now I’m waiting for the first mashup that takes advantage of the XFN data to plot out all the social connections of the Twittersphere.

A different kind of net neutrality: Carbon Offsetting Web 2.0

Flickr Green

A couple months ago I had an idea that I’ve wanted to socialize ever since, but had only taken to doing so behind the scenes. Things being as they are, I’ve had little time to really advance this cause, other than to push it on a few friends who, so far, have reacted quite positively.

Prompted by Jeremy Zawodny’s post about Yahoo going carbon neutral and in support of Chris Baskind’s month-long effort to get high quality environmental links added to his Lighter Footstep group, I thought I’d finally write this up to see if it draws any interest.

The idea is rather simple and requires but one piece of support infrastructure that fortunately my fellow citizen coworker Ivan Storck is already hard at work on (more about that later).

So what’s the idea? Well, quite simply, it’s a web service that you use to offset the carbon footprint of your app’s customers. This would be mostly beneficial for larger services, but it’s my belief that every little bit counts!

For freemium services like Basecamp, WordPress and Last.fm, providing an option for paying members to add $1/month to their bill in order to offset their use of your web service is where it begins. In exchange for this contribution, they would get a special distinction within the community, like a green avatar or badge to denote their carbon neutral status:

Last.fm Green

Now, this might seem like a trivial incentive, but then you might also be surprised to learn that the number one reason people pay to upgrade their Flickr accounts is not because they need more storage or unlimited uploads, but because they want that tiny little PRO label next to their name. Offering a similar incentive on social networks — making “offsetting cool” — becomes a way to propagate this behavior, ultimately working towards completely offsetting the entirety of Web 2.0.

Now, those of you who have read up on or know anything about the power that servers draw will quickly recognize that $1/month to offset a single user account is going overboard, given that it technically only costs a few cents per month to power most people’s individual use of social networking sites. And while you wouldn’t be wrong, you’ve hit on an interesting social component of this campaign: those who want to offset can do so, and in doing so, won’t just be offsetting their own footprint, but some of their neighbors’ as well, in an act straight out of Caterina Fake’s culture of generosity. So it’s not so much about offsetting one’s personal use as about offsetting at a social level — and since this good deed is reflected in a user’s avatar or badge, anyone can effectively “upgrade” themselves to carbon neutral status once they get annoyed that all their friends have “leveled up” and they haven’t. Meanwhile, those who have upgraded as a proactive choice can feel reassured that their influence is leading those around them to make similar decisions, even if for different reasons — in the end, the result is doubleplusgood.

So, about that API that I mentioned. It’s important to realize that 1) we’re in the early stages of carbon offsetting as an industry and 2) not all carbon offsetting funds are created equal (this is something I’m becoming evermore familiar with as we move to certify Citizen Space as a green office). Therefore, Ivan (who I mentioned and who also runs Sustainable Marketing and Sustainable Websites) has begun work on an API that will allow companies to purchase carbon offsets in bulk based on the actual amount of power consumed in something like a server farm environment (where power measurements are fairly easy to come by). Once initiated, the purchase will likely take place through one of Ivan’s affiliates based here in San Francisco called 3 Phases. In any case, we’re in the beginning phases of making this happen, but if you’re interested in helping or in offsetting your customers’ usage, leave a comment or drop me a note and we’ll see if we can’t push this work forward.
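For a sense of the numbers involved, here’s a back-of-the-envelope sketch in Python. Every figure in it (server wattage, grid emissions factor, offset price, users per server) is an illustrative assumption, not a quote from 3 Phases or any real provider; it just shows why an individual’s footprint comes out to pennies while $1/month buys generous headroom:

```python
# All constants below are illustrative assumptions for a rough estimate.
WATTS_PER_SERVER = 300        # assumed draw of one busy server
HOURS_PER_MONTH = 24 * 30
KG_CO2_PER_KWH = 0.6          # rough grid-average emissions factor
USD_PER_TONNE = 10.0          # assumed bulk offset price
USERS_PER_SERVER = 500        # assumed active accounts per server

kwh = WATTS_PER_SERVER / 1000 * HOURS_PER_MONTH   # ~216 kWh/month
tonnes = kwh * KG_CO2_PER_KWH / 1000              # ~0.13 t CO2/month
server_cost = tonnes * USD_PER_TONNE              # ~$1.30/month/server

print(f"~${server_cost:.2f}/month per server, "
      f"~{server_cost / USERS_PER_SERVER * 100:.2f} cents per user")
```

In other words, one $1/month upgrader could plausibly cover themselves and dozens of their neighbors.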

Likewise, if you can think of other ways to minimize the environmental footprint of your webservice or web office, blog about it and let others know! We’re doing what we can to create green coworking spaces and the more success stories we come across, the better.

Raising the standard for avatars

FactoryDevil

Not long ago, Gravatar crawled back out from the shadows and relaunched with a snazzy new service (backed by Amazon S3) that lets you claim multiple email addresses and host multiple gravatars with them for $10 a year.

The beauty of their service is that it makes it possible to centrally control the 80 by 80 pixel face that you put out to the world and to additionally tie a different face to each of your email addresses. And this works tremendously well when it comes to leaving a comment somewhere that a) supports Gravatar and b) requires an email address.

Now, when Gravatar went dark, as you might expect, some enterprising folks came together and attempted to develop a decentralized standard to replace the well-worn service in a quasi-authoritarian spec called Pavatar (for personal avatar).

Aside from the coinage of a new term, the choice to create an overly complicated spec and the sadly misguided attempt to call this effort a microformat, the goal is a worthy one, and given the recent question on the OpenID General list about the same quandary, I thought I’d share my thoughts on the matter.

For one thing, avatar solutions should focus on visible data, just as microformats do — as opposed to hidden and/or spammable meta tags. To that end, whatever convention is adopted or promoted should reflect existing standards. Frankly, the hCard microformat already provides a mechanism for identifying avatars with its “photo” attribute. In fact, if you look at my demo hcard, you’ll see how easy it would be to grab data from this page. There’s no reason why other social networks couldn’t adopt the same convention and make it easy to set a definitive profile for slurping out your current avatar.
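As a quick illustration — a hypothetical profile, not anyone’s real markup — the hCard “photo” class is all it takes to make an avatar discoverable:

```html
<!-- A minimal hCard; class="photo" marks the image as this person's avatar -->
<div class="vcard">
  <a class="url fn" href="http://example.com/">Jane Doe</a>
  <img class="photo" src="http://example.com/avatar.jpg" alt="Jane Doe" />
</div>
```

Any hCard parser can then pull the avatar straight out of the profile page.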

In terms of URI locating, I might recommend a standard convention that appends avatar.jpg to the end of an OpenID as a means of conveniently discovering an avatar, like so. This follows the favicon.ico convention of sticking the icon file in the root directory of a site and then using it in bookmarks. There’s no reason why, when URLs come to represent people, we can’t do the same thing for avatars.
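Discovery under this convention would be trivial. A minimal sketch (the OpenID shown is just an example):

```python
def avatar_url(openid: str) -> str:
    # Append avatar.jpg to an OpenID URL, per the proposed convention.
    return openid.rstrip("/") + "/avatar.jpg"

print(avatar_url("http://example.com/"))  # http://example.com/avatar.jpg
```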

Now, off of this idea is probably my most radical suggestion, and I know that when people shoot me down for it, it’s because I’m right, but just early (as usual).

Instead of a miserly 80 pixels square, I think that default personal avatars should be 512 pixels square (yes, a full 262,144 pixels rather than today’s 6,400).

There are a couple reasons and potential benefits for this:

  1. Leopard’s resolution independence supports icons that are 512px square (a good place to draw convention). These avatars could end up being very useful on the desktop (see Apple’s Front Row).
  2. While 80 pixels might be a useful size in an application, it’s often less than useful when trying to recognize someone in a lineup.
  3. We have the bandwidth. We have the digital cameras and iSights. I’m tired of squinting when the technology is there to fix the problem.
  4. It provides a high fidelity source to scale into different contortions for other uses. Try blowing up an 80 pixel image to 300 pixels. Yuck!
  5. If such a convention is indeed adopted, as favicon.ico was, we should set the bar much higher (or bigger) from the get-go.

So, a couple points to close out.

When I was designing Flock, I wanted to push a larger subscribable personal avatar standard so that we could offer richer, more personable (though hopefully not as male-dominated) interfaces like this one (featuring Technorati’s staff at the time):

Friends Feed Reading

In order to make this work across sites, we’d need some basic convention that folks could use in publishing avatars. Even today, avatars vary from one site to the next in both size and shape. This really doesn’t make sense. With the advent of OpenID and URL-based identity mashed up with microformats, it makes even less sense, though I understand that needs do vary.

So, on top of providing the basic convention for locating an avatar on the end of an OpenID (http://tld.com/avatar.jpg), why not use server-side transforms to also provide various avatar sizes, in multiples of 16, like: avatar.jpg (original, 512×512), avatar_256.jpg, avatar_128.jpg, avatar_48.jpg, avatar_32.jpg, avatar_16.jpg. This is similar to the Apple icon .icns format… I see no reason why we can’t move forward with better and richer representations of people.
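Generating those variants server-side is a one-liner per size. A minimal sketch, assuming Pillow (PIL) is available and the original is a 512×512 avatar.jpg; filenames follow the convention suggested above:

```python
from PIL import Image

SIZES = [256, 128, 48, 32, 16]

def make_variants(original_path: str = "avatar.jpg") -> None:
    original = Image.open(original_path)  # assumed to be 512x512
    for size in SIZES:
        # High-quality downscale, saved with the suggested naming convention
        original.resize((size, size), Image.LANCZOS).save(
            f"avatar_{size}.jpg", quality=90)

make_variants()
```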

Onward!

Getting back to POSH (Plain ol’ Semantic HTML)

Salt and Pepper shakers

Original photo by paul goyette and shared under the Attribution-ShareAlike 2.0 license.

Following Web2Expo, a number of us got together for a Microformats dinner at Thirsty Bear. Some concern was raised over the increasing influx of proposals for new microformats — instead of sustained work on existing formats or techniques.

In discussing this, we realized a few things. Chief among them is that, as a community, we’ve been spending a great deal of time and effort providing a rationale and explanation for why microformats are important and how we use a community-driven process to derive new microformats. Now, there are historic reasons why our process is different and why we continually refer new members to it. If you consider that web standards themselves are created, reviewed and ratified by the W3C, a consortium of paying members bound to very specific rules and mandates, you’ll realize that the value of our community’s output is measurable by the degree to which we are able to consistently produce high quality, clear and implementable specifications. Without adherence to a recognized process, chaos would unfold and we’d end up with a myriad of inconsistent and overlapping formats, which is essentially what killed the Structured Blogging initiative.

In the microformats community, it’s existing behavior discovered through research and prior standards work that most often leads to new formats, and this work is often undertaken and championed by independent individuals, as opposed to corporations. On top of that, our self-imposed mandate is to stay specific, focused and relevant, optimizing for the 80% use cases and ignoring the 20% edge cases.

This story has been replayed and retold the world over, with great effect and consequence. What we have failed to articulate in the same time and space, however, is what work is necessary beyond the creation of new microformats. And because of that, we have so many folks joining the community, eager to help, and seeing only the opportunity to — what else? — create a new microformat (in spite of the warning not to do so)!

So, the ultimate result of the conversation that night was to focus on a rebranding of an old idea along with a new process for generally getting involved in the microformats movement with a subset of tasks focused exclusively on advancing POSH.

From now on, we will be promoting POSH (as coined by kwijibo in IRC) as a first order priority, alongside the development and improvement of existing microformats.

POSH (“Plain Old Semantic HTML”) is a very old idea, and constitutes the superset of semantic patterns within which microformats exist:

POSH Diagram

With POSH thusly established, we have enumerated four classes of actions that collectively represent a Process for Contributing, in order to better channel the energy of newcomers and old-timers alike:

  1. Publish: if you’re not already, add valid, semantic markup to your own website (see the sketch after this list). It goes without saying that you should also be publishing microformats wherever it makes sense. Focus on improving the web as it is and the parts of it you have access to.
  2. Spread: advocate for and encourage others to follow your lead in implementing valid POSH and microformats. Familiarize yourself with web standards, accessibility, and why POSH is important. Do presentations on POSH at BarCamps and elsewhere; write about it, share it with friends, hold POSH Pits to create and build things with POSH. Add buttons (coming soon) to your site once you’ve been POSHified!
  3. Use: consume microformats — and better yet — add live subscriptions to data marked up in existing formats. With all the microformats now being published, we need to start seeing some really innovative and time-saving uses of microformats, including tools for easily embedding microformatted content into blog posts and elsewhere.
    1. OpenID: meanwhile, consider adding OpenID identity services to your application or service — and support data syncing.
  4. Improve: once you’ve gone through and added POSH to all your websites, go back and refactor, iterate and provide feedback, tips and learnings to the greater community about what you did, how you did it and why you did things the way you did. Tag your posts with ‘POSH’, contribute them to the wiki and generally seek out opportunities for improving the resources available to the wider audience of web designers and programmers.
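To ground step one, here’s a before/after sketch of POSHification. The markup is hypothetical, but representative of the pattern:

```html
<!-- Before: presentational "div soup" that carries no meaning -->
<div class="title">My Reading List</div>
<div class="item"><span class="b">POSH</span> - plain old semantic HTML</div>
<div class="item"><span class="b">hCard</span> - people and places</div>

<!-- After: the same content as POSH -->
<h2>My Reading List</h2>
<ul>
  <li><strong>POSH</strong>: plain old semantic HTML</li>
  <li><strong>hCard</strong>: people and places</li>
</ul>
```

Nothing here is a microformat yet; it’s just valid, meaningful HTML, which is precisely the foundation that microformats build on.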

In the coming days, we’ll be adding more documentation to the wiki and encouraging others to spread the word (as you should!).

Lastly, to help frame the POSH concept, think of it as a “Fast-tracked Microformats Process” — wherein you can do your research, develop semantic patterns and then implement them without going through the same drawn-out process that accepted formats must go through… because the goal is actually not to develop a new format, but to solve a specific and time-sensitive problem. Over time, these implementations will come to represent the body of prior art necessary to make informed decisions about future formats, but the immediate goal is simply to POSHify the web, not to attempt the development of yet another format.

The importance of View Source

Camino View Source

There’s been a long history of innovation on the web founded in open access to the underlying source code that first websites, then later interactive web applications, were built on. The facility of having ready access to the inner workings of any web page has been essential to continued inspiration, imitation, and most importantly, the ongoing education of subsequent generations of designer-developer hybrids.

On my panel today on The Hybrid Designer, I took a moment to call out my concerns that the shininess of Rich Internet Application (RIA) frameworks like Apollo and Silverlight (the framework formerly known as WPF/E) is blocking out critical consideration of the gravity and potential consequences of moving to these platforms. As Marc Orchant put it:

One of the most interesting discussions in the session was precipitated when Messina voiced his concerns that “containers” for web functionality like Adobe Apollo and Microsoft Silver[light] would make it harder to create dynamic applications that leverage these data streams as they will, he predicted, created new “walled gardens” by obscuring what is currently a pretty open playing field of ideas and techniques. [Jeremy] Keith added the observation that by hiding the source for the hybrid applications created using these tool, up and coming designers would lose a valuable learning resource that runs counter to the spirit of a read/write web built using open, standardized tools. Needless to say, the room was pretty sympathetic to the sentiments expressed by the panel.

In particular, I was suggesting that these frameworks effectively remove the View Source command — an utter reversal in the trend towards openness in web technologies leading to, in my view, new silos within a more closed web.

Ryan Stewart, who sadly I didn’t get a chance to catch up with afterwards, took me to task for my oversimplification:

Today at the Web 2.0 Expo, I sat in on a panel with Richard MacManus, Kelly Goto, Chris Messina and Jeremy Keith. They talked about the “hybrid designer” and touched on some points about the web and the richness that has really created the “hybrid” notion. In one bit, Chris said he was lamenting the fact that a lot of RIA technologies are taking away the “view source” and he got applause from the crowd.

I think this is the perfect example of how misunderstood the RIA world is. Chris used the example of Apollo and Silverlight as two technologies that are killing view source. Apollo is meant for desktop applications. We don’t have “view source” on the desktop, but that doesn’t mean we couldn’t. Apollo uses Flex and Ajax to create the desktop applications, and BOTH of those allow for view source. It’s true that Flex developers can turn off that feature, but really how is that any different than obfuscating your JavaScript in an Ajax application? When people want to share, the RIA tools out there have mechanisms in place to let them do that. Can you ask for more than that?

I was also surprised to hear Chris complain about Silverlight in that group. Of all the technologies, I think Silverlight actually has the best “view source” support. It uses JavaScript as the programming language behind the hood, and the XAML is just text based, so you can view source just like any other web page and see both the XAML and JavaScript libraries. That’s pretty open I think.

I’ll plead ignorance here (especially in terms of Silverlight), but I refuse to back off from my point about the importance of View Source (a point that I don’t think Ryan disagrees with in principle).

Whether you can get at the “goods” in Silverlight or Apollo apps is only part of the problem. I’ve examined the contents of four or five Apollo apps and each one had any number of impenetrable .swf binaries that I couldn’t do anything with, and even with the complete source code of TwitterCamp, a rather simple Apollo app, it wasn’t obvious how a design-leaning hybrid designer like myself would actually modify the app without buying into expensive Adobe tools (at $499 or $699 a seat). And that, in a sense, is no different than removing the View Source command altogether.

…and even when I finally did figure out that I could right click and choose View Source while running TwitterCamp, I received this error message and no source code:

Alert

Now, Ryan also claims that “we don’t have ‘view source’ on the desktop”, and I would argue that 1) it depends on your platform and 2) I’m not fundamentally prevented from tinkering with my desktop apps. And this is key.

Let’s drill down for a moment.

On the Mac, every application has the equivalent of a View Source command: simply right click and choose “Show Package Contents”. Since every Mac application is essentially a special kind of folder, you can actually browse the contents and resources of an application — and, in certain cases, make changes. Now, this isn’t as good as getting to the raw source, since there are still unusable binaries in those directories, but you can at least get to the nib files and make changes to the look and feel of an application without necessarily touching code or having the full source.

And so just like on the web, especially with free and open source tools like Firebug and Greasemonkey, with a little bit of knowledge or persistence you can modify, tweak or wholly customize your experience without getting permission from the application creator — all by way of “viewing the source”. More importantly, you can learn from, adapt and merge prior art — source code that you’ve found elsewhere — which, in turn, can be improved upon and released, furthering a virtuous cycle of innovation and education.

Nonetheless, I’m glad that Ryan has corrected me, especially about Silverlight, which indeed is put together with a lot of plain-text technologies. However, I still can’t help but be skeptical when there seems to be so much in it for Adobe and Microsoft to build out their own islands of the web where people buy only their tools and live in prefab Second Life worlds of quasi-standards that have been embraced and extended. It feels like déjà vu all over again; like we’ve been here before and though I’d thought that we’d internalized the reasons for not returning to those dark ages, the shininess of the new impairs our ability to remember the not-so-distant past… While Ryan may be technically correct about the availability of the source, if that top-level menu item vanishes from the first-gen of RIAs, I remain increasingly concerned that the net result will constitute the emergence of an increasingly closed and siloed web.

I do hope that Ryan’s optimism, coupled with activism from other open source and open web advocates, will work with great speed and efficacy to counter my fears and keep that which is now the most open and vital aspect of the web the way it is now and the way it was meant to be.

The relative value of open source to open services

There’s an active debate going on in the activeCollab community stemming from the announcement that the formerly exclusively community-backed open source project will lose much of its open source trappings to go commercial and focus on a closed platform providing open web services.

For those who aren’t aware, activeCollab was created as a free, open source and downloadable response to Basecamp, the project management web app. In June of last year, the project founder and lead developer, Ilija Studen, offered his rationale for creating activeCollab:

First version of activeCollab was written somewhere about May 2005 for personal use. I wanted Basecamp but didn’t want to pay for it. Being a student with few freelance jobs I just couldn’t guaranty that I’ll have money for it every month. So I made one for myself. It’s running on my localhost even today.

Emphasis original.

Ilija offered many of the usual personal reasons for making his project free and open:

  • Learning.
  • Control.
  • Establishing community.
  • Earning money.

Now, the last one is significant, for a couple reasons, as was pointed out at the time of the first release: Ilija wanted to make money by offering commercial support and customization on a product imitating someone else’s established commercial product.

But competition is good, especially for my friends in Chicago, and they’ve said as much.

But, Ilija made one fatal mistake in his introductory post that I think he’s come to regret nearly a year later: I find it normal to expect something in return for your work. activeCollab will always be free.

And so a community of Basecamp-haters and open source freeloaders gathered around the project and around Ilija, eager to build something to rival the smug success of Basecamp, something sprung from the head of the gods of open source and of necessity, to retrace the steps of Phoenix before it (later redubbed Firefox), to fight the evils of capitalism, the injustice of proprietary code, and to stave off the economic realities of trying to make a living creating open source software.

For a little under a year, the project slogged on, a happy alternative to Basecamp, perfect for small groups without the ability to afford its shiny cousin, perfect for those who refuse to pay for software, and perfect for those who need such collaboration tools, but live sheltered behind a firewall.

A funny thing happened on the way to the bank, though, and Ilija realized that simply offering the code for people to download, modify and run on their own servers wasn’t earning him nearly enough to live on. And without an active ecosystem built around activeCollab (as WordPress and Drupal have), it was hard to keep developing the core when he literally could not afford to continue doing so.

Thus the decision to break from his previous promise, close up the code and offer instead an open API on which others could build plugins and services — morphing activeCollab from a commodity download to a pay-for web service:

Perhaps I am naive, and this was the business model all along. i.e. Build a community for the free software during early development and testing, then close it up just as the project matures.

That was not original plan. Original plan was to build a software and make money from support and customization services. After a while we agreed that that would not be the best way to go. We will let other teams do custom development while we keep our focus solely on activeCollab.

But, the way in which he went about announcing this change put the project and the health of his community at risk, as Jason pointed out:

Ilja,

I’m a professional brand strategist, and while nothing is ever certain, I also feel that this is a bad move.

Essentially you’ve divided your following into three camps. For, against and don’t care. A terrible decision.

What you should have done (or should do… its not too late):

—> Start a completely seperate, differently branded commercial service that offers professional services

—> Leave your existing open-source model the same and continue to develop the project in concert with the community

————————-

Sugar is not a great model to follow. It’s not.

A better example would Bryyght[dot]com, a commercial company hosting Drupal CMS. The people there are still very actively involved in the original open-source project.

Overall, you should choose your steps wisely. While you’re the driving source behind the project – NOBODY fully owns their own brand.

A brand is owned by the community that are a part of it. Without customers, a brand is nothing.

JH

A brand is owned by the community that are a part of it. Without customers, a brand is nothing. (Hmm, sounds like the theory behind the Community Mark).

I think JH has a point, and with regards to open source, one that Ilija would do well to consider. On the one hand, Ilija has every right to change the course of the project — he started it after all and has done the lion’s share of work. He also needs to figure out a way to make a living, and now, having tried one model, is ready to try another. On the other, closing up the core means that he has to work extra hard to counter the perception that activeCollab is not an open source project, when indeed, parts of it still will be, and likely, won’t be the worse for it.

That many of the original Basecamp haters who supported Ilija’s work have now turned their anger towards him suggests that he’s both pioneering a tribrid open business/open service/open source model and doing something right. At least people care enough to express themselves…

And yet, that’s not to say that the path will be easy or clear. As with most projects, it’s how he manages this transition that will make the difference, not the fact that he made the decision.

All the same, it does suggest that the open source community is going through an evolution where the question of what to be open about and with whom to share is becoming a lot harder to answer than it once was. Or at least, the question of how to sustain open source efforts that lend themselves to easy operation as web services.

With the Honest Public License coming in advance of the GPL v3 to cover the use of open source software in powering web applications and services, there are obvious issues with releasing code that you could once count on being tied to the personal desktop. Now, with the hybridization of desktop/internet environments and the democratization of scripting knowledge, it’s a lot harder to make a living simply through customization and support services for packaged source code when you’re competing against everyone and their aunt, not to mention Yahoo, Google and the rest.

Steve Ivy asked a poignant question in his recent post on Open Source v. Open Services: If the service is open enough, what’s the value of the source?

Truly, that is a question that I think a lot of us, including folks like Ilija, are going to have to consider for some time to come. And as we do, we must also consider what sustainable models for open source and open services look like in the future, for we are now finally living in a web service-based economy, where the quality of your execution and uptime matter nearly as much as, if not more than, the quality of your source code.

Pukka 1.5 adds support for Ma.gnolia

Pukka Ma.gnolia Support

Pukka, a favorite tool of the Delicious crowd, has added support for Ma.gnolia with its 1.5 release.

Thanks to Ma.gnolia’s API (which mirrors the Delicious API), a number of formerly Delicious-only applications can also be used with Ma.gnolia. Pukka now ranks among them, though not without a few discrepancies, notably no support for spaces in tags or for ratings, but these are minor issues that can be worked out over time (note: to enable private bookmarks, check this out).

What’s interesting about apps adding cross-domain API support is the slow emergence of standards in new areas (i.e. outside the standard protocols). A framework for application developers that handles multiple bookmarking APIs that essentially do the same thing would be of great value, similar to the work that Jacob Jay started with his MediaSock framework (for publishing to over a dozen media services). I could see such a framework being really useful in browsers, feed readers, media players and similar applications.
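As a sketch of what such a framework might boil down to, here’s a thin client that varies only by base URL and so can drive any Delicious-compatible service. The endpoint URLs are assumptions for illustration, not verified documentation:

```python
import requests

# Assumed base URLs; Ma.gnolia's mirrored path in particular is illustrative.
DELICIOUS = "https://api.del.icio.us/v1"
MAGNOLIA = "https://ma.gnolia.com/api/mirrord/v1"

class BookmarkClient:
    """Minimal client for any service that mirrors the Delicious v1 API."""

    def __init__(self, base_url: str, username: str, password: str):
        self.base_url = base_url
        self.auth = (username, password)

    def add(self, url: str, description: str, tags: str = "") -> bool:
        # posts/add is the Delicious v1 call for saving a bookmark
        resp = requests.get(f"{self.base_url}/posts/add",
                            params={"url": url, "description": description,
                                    "tags": tags},
                            auth=self.auth)
        return resp.ok

# The same code drives either backend:
# BookmarkClient(DELICIOUS, "me", "secret").add("http://example.com/", "Example")
# BookmarkClient(MAGNOLIA, "me", "secret").add("http://example.com/", "Example")
```

An application like Pukka would then need only one code path, with per-service quirks (like Ma.gnolia’s tag and rating differences) layered on top.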

Anyone?