Just a reminder that SuperHappyDevHouse is this weekend!
Category: The Web Arts
Raising the standard for avatars
Not long ago, Gravatar crawled back out from the shadows and relaunched with a snazzy new service (backed by Amazon S3) that lets you claim multiple email addresses and host multiple gravatars with them for $10 a year.
The beauty of their service is that it makes it possible to centrally control the 80 by 80 pixel face that you put out to the world and to additionally tie a different face to each of your email addresses. And this works tremendously well when it comes to leaving a comment somewhere that a) supports Gravatar and b) requires an email address to comment.
Now, when Gravatar went dark, as you might expect, some enterprising folks came together and attempted to develop a decentralized standard to replace the well-worn service in a quasi-authoritarian spec called Pavatar (for personal avatar).
Aside from the unnecessary invention of a new term, the choice to create an overly complicated spec and the sadly misguided attempt to call this effort a microformat, the goal is a worthy one, and given the recent question on the OpenID General list about the same quandary, I thought I’d share my thoughts on the matter.
For one thing, avatar solutions should focus on visible data, just as microformats do — as opposed to hidden and/or spammable meta tags. To that end, whatever convention is adopted or promoted should reflect existing standards. Frankly, the hCard microformat already provides a mechanism for identifying avatars with its “photo” attribute. In fact, if you look at my demo hCard, you’ll see how easy it would be to grab data from this page. There’s no reason why other social networks couldn’t adopt the same convention and make it easy to set a definitive profile for slurping out your current avatar.
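For a sense of how little machinery this would take, here is a minimal consumer sketch (assuming the Python requests and BeautifulSoup libraries; the profile URL is purely hypothetical) that pulls the photo out of whatever hCard it finds on a page:

```python
# Sketch: extract an avatar from an hCard's "photo" property on a profile page.
# Assumes the requests and beautifulsoup4 packages; the URL is illustrative.
import requests
from bs4 import BeautifulSoup

def hcard_photo(profile_url):
    """Return the first hCard photo URL found on the page, or None."""
    html = requests.get(profile_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # The photo property is typically an <img class="photo"> inside class="vcard".
    img = soup.select_one(".vcard img.photo")
    return img["src"] if img and img.get("src") else None

print(hcard_photo("http://example.com/about/"))  # hypothetical profile page
```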
In terms of URI locating, I might recommend a standard convention that appends avatar.jpg to the end of an OpenID as a means of conveniently discovering an avatar, like so. This follows the favicon.ico convention of sticking the file in the root directory of a site and then using that icon in bookmarks. There’s no reason why, when URLs come to represent people, we can’t do the same thing for avatars.
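As a rough illustration of that convention (not an implementation of any existing spec), a consumer could simply tack avatar.jpg onto the identity URL and check whether anything lives there. A minimal sketch, assuming the Python requests library and a purely hypothetical URL:

```python
# Sketch of favicon-style avatar discovery: append avatar.jpg to the OpenID
# URL and see whether the server has anything there. Illustrative only.
import requests

def discover_avatar(openid_url):
    """Return the conventional avatar URL if one is being served, else None."""
    avatar_url = openid_url.rstrip("/") + "/avatar.jpg"
    response = requests.head(avatar_url, allow_redirects=True, timeout=10)
    return avatar_url if response.ok else None

print(discover_avatar("http://example.com"))  # hypothetical OpenID URL
```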
Now, off of this idea is probably my most radical suggestion, and I know that when people shoot me down for it, it’s because I’m right, but just early (as usual).
Instead of a miserly 80 pixels square, I think that default personal avatars should be 512 pixels square (yes, a full 262,144 pixels rather than today’s 6,400).
There are a couple reasons and potential benefits for this:
- Leopard’s resolution independence supports icons that are 512px square (a good place to draw convention). These avatars could end up being very useful on the desktop (see Apple’s Front Row).
- While 80 pixels might be a useful size in an application, it’s often less than useful when trying to recognize someone in a lineup.
- We have the bandwidth. We have the digital cameras and iSights. I’m tired of squinting when the technology is there to fix the problem.
- It provides a high fidelity source to scale into different contortions for other uses. Try blowing up an 80 pixel image to 300 pixels. Yuck!
- If such a convention is indeed adopted, as favicon.ico was, we should set the bar much higher (or bigger) from the get-go.
So, a couple points to close out.
When I was designing Flock, I wanted to push a larger subscribable personal avatar standard so that we could offer richer, more personable (though hopefully not as male-dominated) interfaces like this one (featuring Technorati’s staff at the time):
In order to make this work across sites, we’d need some basic convention that folks could use in publishing avatars. Even today, avatars vary from one site to the next in both size and shape. This really doesn’t make sense. With the advent of OpenID and URL-based identity mashed up with microformats, it makes even less sense, though I understand that needs do vary.
So, on top of providing the basic convention for locating an avatar on the end of an OpenID (http://tld.com/avatar.jpg), why not use server-side transforms to also provide various avatar sizes, in multiples of 16, like: avatar.jpg (the original, 512×512), avatar_256.jpg, avatar_128.jpg, avatar_48.jpg, avatar_32.jpg, and avatar_16.jpg. This is similar to the Apple icon .icns format … I see no reason why we can’t move forward with better and richer representations of people.
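To show how cheap those server-side transforms would be, here is a minimal sketch that generates the smaller conventional sizes from a 512 pixel master. It assumes the Python Pillow imaging library and the avatar_<size>.jpg naming pattern suggested above:

```python
# Sketch: derive the smaller conventional avatar sizes from the 512px master.
# Assumes the Pillow imaging library; filenames follow avatar_<size>.jpg.
from PIL import Image

SIZES = [256, 128, 48, 32, 16]

def make_avatar_set(master_path="avatar.jpg"):
    master = Image.open(master_path)  # the 512x512 original
    for size in SIZES:
        scaled = master.resize((size, size), Image.LANCZOS)
        scaled.save("avatar_%d.jpg" % size, quality=90)

make_avatar_set()
```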
Onward!
Getting back to POSH (Plain ol’ Semantic HTML)
Original photo by paul goyette and shared under the Attribution-ShareAlike 2.0 license.
Following Web2Expo/Open, a number of us got together for a Microformats dinner at Thirsty Bear. Some concern was raised over the increasing influx of proposals for new microformats — instead of sustained work on existing formats or techniques.
In discussing this, we realized a few things. Chief among them is that, as a community, we’ve been spending a great deal of time and effort providing a rationale and explanation for why microformats are important and how we use a community-driven process to derive new microformats. Now, there are historic reasons why our process is different and why we continually refer new members to it. If you consider that web standards themselves are created, reviewed and ratified by the W3C, a consortium of paying members bound to very specific rules and mandates, you’ll realize that the value of our community’s output is measurable by the degree to which we are able to consistently produce high quality, clear and implementable specifications. Without adherence to recognized process, chaos would unfold and we’d end up with a myriad of inconsistent and overlapping formats, which is what essentially killed the Structured Blogging initiative.
In the microformats community, it’s existing behavior discovered through research and prior standards work that most often leads to new formats, and this work is often undertaken and championed by independent individuals, as opposed to corporations. On top of that, our self-imposed mandate is to stay specific, focused and relevant, optimizing for the 80% use cases and ignoring the 20% edge cases.
This story has been replayed and retold the world over, with great effect and consequence. What we have failed to articulate in the same time and space, however, is what work is necessary beyond the creation of new microformats. And because of that, we have so many folks joining the community, eager to help, and seeing only the opportunity to — what else? — create a new microformat (in spite of the warning not to do so)!
So, the ultimate result of the conversation that night was to focus on a rebranding of an old idea along with a new process for generally getting involved in the microformats movement with a subset of tasks focused exclusively on advancing POSH.
From now on, we will be promoting POSH (as coined by kwijibo in IRC) as a first order priority, alongside the development and improvement of existing microformats.
POSH (“Plain Old Semantic HTML”) is a very old idea, and constitutes the superset of semantic patterns within which microformats exist:
With POSH thus established, we have enumerated four classes of actions that collectively represent a Process for Contributing in order to better channel the energy of newcomers and old-timers alike:
- Publish: if you’re not already, add valid, semantic markup to your own website. It goes without saying that you should also be publishing microformats wherever it makes sense. Focus on improving the web as it is and the parts of it that you have access to.
- Spread: advocate for and encourage others to follow your lead in implementing valid POSH and microformats. Familiarize yourself with web standards, accessibility, and why POSH is important. Do presentations on POSH at BarCamps and elsewhere; write about it, share it with friends, hold POSH Pits to create and build things with POSH. Add buttons (coming soon) to your site once you’ve been POSHified!
- Use: consume microformats — and better yet — add live subscriptions to data marked up in existing formats (see the sketch after this list). With all the microformats implementations in the wild, we need to start seeing some really innovative and time-saving uses of microformats, including tools for easily embedding microformatted content into blog posts and elsewhere.
- Improve: once you’ve gone through and added POSH to all your websites, go back and refactor, iterate, and give the greater community feedback, tips, and learnings about what you did, how you did it, and why you did it the way you did. Tag your posts with ‘POSH’, contribute them to the wiki, and generally seek out opportunities for improving the resources available to the wider audience of web designers and programmers.
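To make the “Use” step a bit more concrete, here is a minimal consumer sketch (again assuming the Python requests and BeautifulSoup libraries, with a purely hypothetical URL) that pulls names and links out of any hCards on a page; a live subscription would simply poll something like this on a schedule:

```python
# Sketch of a tiny microformats consumer: scrape hCard names and URLs.
# Assumes the requests and beautifulsoup4 packages; the URL is illustrative.
import requests
from bs4 import BeautifulSoup

def read_hcards(url):
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    people = []
    for card in soup.select(".vcard"):
        name = card.select_one(".fn")
        link = card.select_one("a.url")
        people.append({
            "name": name.get_text(strip=True) if name else None,
            "url": link["href"] if link else None,
        })
    return people

for person in read_hcards("http://example.com/contacts/"):
    print(person)
```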
In the coming days, we’ll be adding more documentation to the wiki and encouraging others to spread the word (as you should!).
Lastly, to help frame the POSH concept, think of it as a “Fast-tracked Microformats Process” — wherein you can do your research, develop semantic patterns and then implement them without going through the same drawn-out process that accepted formats must go through… because the goal is actually not to develop a new format, but to solve a specific and time-sensitive problem. Over time, these implementations will come to represent the body of prior art necessary to make informed decisions about future formats, but the immediate goal is simply to POSHify the web, not to attempt the development of yet another format.
The importance of View Source
There’s been a long history of innovation on the web founded in open access to the underlying source code that first websites, then later interactive web applications, were built on. The facility of having ready access to the inner workings of any web page has been essential to continued inspiration, imitation, and, most importantly, the ongoing education of subsequent generations of designer-developer hybrids.
On my panel today on The Hybrid Designer, I took a moment to call out my concerns that the shininess of Rich Internet Application (RIA) frameworks like Apollo and Silverlight (the framework formerly known as WPF/E) is blocking out critical consideration of the gravity and potential consequences of moving to these platforms. As Marc Orchant put it:
One of the most interesting discussions in the session was precipitated when Messina voiced his concerns that “containers” for web functionality like Adobe Apollo and Microsoft Silver[light] would make it harder to create dynamic applications that leverage these data streams as they will, he predicted, created new “walled gardens” by obscuring what is currently a pretty open playing field of ideas and techniques. [Jeremy] Keith added the observation that by hiding the source for the hybrid applications created using these tool, up and coming designers would lose a valuable learning resource that runs counter to the spirit of a read/write web built using open, standardized tools. Needless to say, the room was pretty sympathetic to the sentiments expressed by the panel.
In particular, I was suggesting that these frameworks effectively remove the View Source command — an utter reversal of the trend towards openness in web technologies, leading, in my view, to new silos within a more closed web.
Ryan Stewart, who sadly I didn’t get a chance to catch up with afterwards, took me to task for my oversimplification:
Today at the Web 2.0 Expo, I sat in on a panel with Richard MacManus, Kelly Goto, Chris Messina and . They talked about the “hybrid designer” and touched on some points about the web and the richness that has really created the “hybrid” notion. In one bit, Chris said he was lamenting the fact that a lot of RIA technologies are taking away the “view source” and he got applause from the crowd.
I think this is the perfect example of how misunderstood the RIA world is. Chris used the example of Apollo and Silverlight as two technologies that are killing view source. Apollo is meant for desktop applications. We don’t have “view source” on the desktop, but that doesn’t mean we couldn’t. Apollo uses Flex and Ajax to create the desktop applications, and BOTH of those allow for view source. It’s true that Flex developers can turn off that feature, but really how is that any different than obfuscating your JavaScript in an Ajax application? When people want to share, the RIA tools out there have mechanisms in place to let them do that. Can you ask for more than that?
I was also surprised to hear Chris complain about Silverlight in that group. Of all the technologies, I think Silverlight actually has the best “view source” support. It uses JavaScript as the programming language behind the hood, and the XAML is just text based, so you can view source just like any other web page and see both the XAML and JavaScript libraries. That’s pretty open I think.
I’ll plead ignorance here (especially in terms of Silverlight), but I refuse to back off from my point about the importance of View Source (a point that I don’t think Ryan disagrees with in principle).
Whether you can get at the “goods” in Silverlight or Apollo apps is only part of the problem. I’ve examined the contents of four or five Apollo apps and each one had any number of impenetrable .swf binaries that I couldn’t do anything with, and even with the complete source code of TwitterCamp, a rather simple Apollo app, it wasn’t obvious how a design-leaning hybrid designer like myself would actually modify the app without buying into expensive Adobe tools like Flash ($699) or Flex Builder ($499). And that, in a sense, is no different than removing the View Source command altogether.
…and even when I finally did figure out that I could right click and choose View Source while running TwitterCamp, I received this error message and no source code:
Now, Ryan also claims that “We don’t have ‘view source’ on the desktop”, and I would argue that 1) it depends on your platform and 2) I’m not fundamentally prevented from tinkering with my desktop apps. And this is key.
Let’s drill down for a moment.
On the Mac, every application has the equivalent of a View Source command: simply right click and choose “Show Package Contents”. Since every Mac application is essentially a special kind of folder, you can actually browse the contents and resources of an application — and, in certain cases, make changes. Now, this isn’t as good as getting to the raw source, since there are still unusable binaries in those directories, but you can at least get to the nib files and make changes to the look and feel of an application without necessarily touching code or having the full source.
And so just like on the web, especially with free and open source tools like Firebug and Greasemonkey, with a little bit of knowledge or persistence, you can modify, tweak or wholly customize your experience without getting permission from the application creator, all by way of “viewing the source”. More importantly, you can learn from, adapt and merge prior art — source code that you’ve found elsewhere — and that, in turn, can be improved upon and released, furthering a virtuous cycle of innovation and education.
Nonetheless, I’m glad that Ryan has corrected me, especially about Silverlight, which indeed is put together with a lot of plain-text technologies. However, I still can’t help but be skeptical when there seems to be so much in it for Adobe and Microsoft to build out their own islands of the web, where people buy only their tools and live in prefab Second Life worlds of quasi-standards that have been embraced and extended. It feels like déjà vu all over again, like we’ve been here before; and though I thought we’d internalized the reasons for not returning to those dark ages, the shininess of the new impairs our ability to remember the not-so-distant past… While Ryan may be technically correct about the availability of the source, if that top-level menu item vanishes from the first generation of RIAs, I am increasingly concerned that the net result will be the emergence of a more closed and siloed web.
I do hope that Ryan’s optimism, coupled with activism from other open source and open web advocates, will work with great speed and efficacy to counter my fears and keep what is now the most open and vital aspect of the web the way it is and the way it was meant to be.
The relative value of open source to open services
There’s an active debate going on in the activeCollab community stemming from the announcement that the formerly exclusively community-backed open source project will lose much of its open source trappings to go commercial and focus on a closed platform providing open web services.
For those who aren’t aware, activeCollab was created as a free, open source and downloadable response to Basecamp, the project management web app. In June of last year, the project founder and lead developer, Ilija Studen, offered his rationale for creating activeCollab:
First version of activeCollab was written somewhere about May 2005 for personal use. I wanted Basecamp but didn’t want to pay for it. Being a student with few freelance jobs I just couldn’t guaranty that I’ll have money for it every month. So I made one for myself. It’s running on my localhost even today.
Emphasis original.
Ilija offered many of the usual personal reasons for making his project free and open:
- Learning.
- Control.
- Establishing community.
- Earning money.
Now, the last one is significant, for a couple reasons, as was pointed out at the time of the first release: Ilija wanted to make money by offering commercial support and customization on a product imitating someone else’s established commercial product.
But competition is good, especially for my friends in Chicago, and they’ve said as much.
But, Ilija made one fatal mistake in his introductory post that I think he’s come to regret nearly a year later: “I find it normal to expect something in return for your work. activeCollab will always be free.”
And so a community of Basecamp-haters and open source freeloaders gathered around the project and around Ilija, eager to build something to rival the smug success of Basecamp, something sprung from the head of the gods of open source and of necessity, to retrace the steps of Phoenix before it (later redubbed Firefox), to fight the evils of capitalism, the injustice of proprietary code, and to stave off the economic realities of trying to make a living creating open source software.
For a little under a year, the project slogged on, a happy alternative to Basecamp, perfect for small groups without the ability to afford its shiny cousin, perfect for those who refuse to pay for software, and perfect for those who need such collaboration tools, but live sheltered behind a firewall.
A funny thing happened on the way to the bank, though, and Ilija realized that simply offering the code for people to download, modify and run on their own servers wasn’t earning him nearly enough to live on. And without an active ecosystem built around activeCollab (as WordPress and Drupal have), it was hard to keep developing the core when he literally could not afford to continue doing so.
Thus the decision to break from his previous promise, close up the code, and instead offer an open API on which others could build plugins and services — morphing activeCollab from a commodity download to a pay-for web service:
Perhaps I am naive, and this was the business model all along. i.e. Build a community for the free software during early development and testing, then close it up just as the project matures.
That was not original plan. Original plan was to build a software and make money from support and customization services. After a while we agreed that that would not be the best way to go. We will let other teams do custom development while we keep our focus solely on activeCollab.
But, the way in which he went about announcing this change put the project and the health of his community at risk, as Jason pointed out:
Ilja,
I’m a professional brand strategist, and while nothing is ever certain, I also feel that this is a bad move.
Essentially you’ve divided your following into three camps. For, against and don’t care. A terrible decision.
What you should have done (or should do… its not too late)
—> Start a completely seperate, differently branded commercial service that offers professional services
—> Leave your existing open-source model the same and continue to develop the project in concert with the community
————————-
Sugar is not a great model to follow. It’s not.
A better example would Bryyght[dot]com, a commercial company hosting Drupal CMS. The people there are still very actively involved in the original open-source project.
Overall, you should choose your steps wisely. While you’re the driving source behind the project – NOBODY fully owns their own brand.
A brand is owned by the community that are a part of it. Without customers, a brand is nothing.
JH
A brand is owned by the community that are a part of it. Without customers, a brand is nothing. (Hmm, sounds like the theory behind the Community Mark).
I think JH has a point, and with regards to open source, one that Ilija would do well to consider. On the one hand, Ilija has every right to change the course of the project — he started it after all and has done the lion’s share of work. He also needs to figure out a way to make a living, and now, having tried one model, is ready to try another. On the other, closing up the core means that he has to work extra hard to counter the perception that activeCollab is not an open source project, when indeed, parts of it still will be, and likely, won’t be the worse for it.
That many of the original Basecamp haters who supported Ilija’s work have now turned their anger towards him suggests that he’s both pioneering a tribrid open business/open service/open source model and doing something right. At least people care enough to express themselves…
And yet, that’s not to say that the path will be easy or clear. As with most projects, it is how he manages this transition that will make the difference, not the fact that he made the decision.
All the same, it does suggest that the open source community is going through an evolution where the question of what to be open about and with whom to share is becoming a lot harder to answer than it once was. Or, at the least, the question of how to sustain open source efforts that lend themselves to easy operation as web services.
With the Honest Public License coming in advance of the GPL v3 to cover the use of open source software in powering web applications and services, there are obvious issues with releasing code that you could once count on being tied to the personal desktop. Now, with the hybridization of desktop and internet environments and the democratization of scripting knowledge, it’s a lot harder to make a living simply through customization and support services for packaged source code when you’re competing against everyone and their aunt, not to mention Yahoo, Google and the rest.
Steve Ivy asked a poignant question in his recent post on Open Source v. Open Services: If the service is open enough, what’s the value of the source?
Truly, that is a question that I think a lot of us, including folks like Ilija, are going to have to consider for some time to come. And as we do consider it, we must also consider what the sustainable models for open source and open services look like in the future, for we are now finally living in a web service-based economy, where the quality of your execution and uptime matters nearly as much as, if not more than, the quality of your source code.
NASA 2.0
If you haven’t been wondering what’s up with NASA lately, you’re probably not alone. Though once a bastion for the advancement of humankind, in recent years the space agency has seemingly vanished into a well of bureaucracy and a lack of coherent, publicly supported vision.
Now, thanks to a number of young, forward-thinking upstarts within the organization, that might all begin to change, starting tomorrow night at NASA’s Ames Research Center in Mountain View, California, with the kickoff of the World Space Party (aka Yuri’s Night).
With 4,000 expected attendees, this is probably one of the first, if not the largest, raves ever held on government property (you can only imagine the red tape that they had to go through to get this approved!). The space is perfectly suited for this kind of thing — and represents the new thinking and outward focus surging within the organization.
On top of that, there is growing interest in open source (notable given the restrictiveness of the NASA Open Source Agreement), in Second Life, and in coworking, as witnessed by NASA’s tenant status at Citizen Space and in their CoLab project.
I’m certainly excited to see these changes coming to NASA — and if it’s any indicator of what changes might be wrought in the government with the addition of a little 2.0 fever and open source, there’s hope for us yet.
Microformats: Empowering Your Markup for Web 2.0
I received a copy of John Allsopp’s new book, Microformats: Empowering Your Markup for Web 2.0 in the mail today.
My first impression is certainly positive and I think that John has made a very valuable contribution to the community and to our efforts to get microformats out there on the open web.
We now have a solid resource that describes the community, the process, and a number of microformats and how they’re being used today, and that profiles a number of organizations already making good use of microformats (sadly he missed Ma.gnolia in the bunch, but there are always second printings!).
This book is ideal for web developers looking for a handy reference on the existing formats, and for web designers wondering how to make use of microformats in their code and how to apply CSS effectively using their semantics; finally, there’s probably even a trick or two that folks already familiar with microformats might learn in its nearly 350 pages.
So, go buy yourself a copy and let me (and John) know what you think!
37 Signals’ new app Highrise launches
With nary a word on the 37 Signals’ blog SvN, the veil has been lifted on their long-awaited CRM tool called Highrise.
There are a number of posts that capture some of the many features of Highrise on the SvN blog and are worth a look:
- Preview 1: An introduction to Highrise (the product previously known as Sunrise)
- Preview 2: Highrise permissions and groups
- Preview 3: Highrise welcome and workspace tabs
- Preview 4: Adding people to Highrise and dealing with duplicates
- Preview 5: Highrise tasks
- Preview 6: Highrise people, companies, and the dashboard
- Preview 7: Highrise plays well with email
- Preview 8: Highrise Cases
In the meantime, I’ve collected a bunch of screenshots (nice catch Allen) — in addition to their great tour — that will give you a sense of what the app is all about.
I’m totally excited about their adoption of OpenID, but I have to admit, their implementation — especially in the forum — is a little odd. I love the auto-adapting interface for inputting your OpenID, but the fact that I can’t sign in to the forum with the OpenID that I created my Highrise account with kind of misses the point.
And still no sign of microformats either, but a guy can hope, right?
Anyway, it is exciting to take a look at all the interface greatness in this app — definitely some fine work. Whether I’ll become a paying customer is up in the air, especially as open source solutions like CiviCRM exist (though without the interface trappings that make 37 Signals products so attractive). I do like what I see so far, though — and if I can find a way to fit it into my workflow, I’ll likely end up a pretty satisfied user.
Apollo Alpha is out, the WOW comes later
There’s a ton of buzz around the alpha release of Adobe’s new Apollo platform. And reasonably so: as ZDNet blogger Ryan Stewart points out, in a world of Web 2.0 internet-goodness, this is the desktop rearing its head again in the form of powerful RIAs.
I’ll leave the coverage to other folks, but in the meantime, I installed the runtime libraries and ran the sample apps included — grabbing a bunch of screenshots along the way that you should take a look at.
I also set up a Flickr group for other screenshots and a Ma.gnolia group for collecting news and other Apollo-related links.
I’m particularly excited about Apollo given its advance of the state of web tech… and the best is yet to come (though Finetune gives a taste of where we’re starting from). At the same time, I’d prefer a slightly less costly and more open — but equally intuitive and capable — solution. OpenLaszlo, where y’at?
Alex King releases Twitter Tools beta for WordPress
Alex King has released a WordPress plugin that links your WordPress blog to your Twitter account, allowing you to pull your “tweets” into your blog or post directly to Twitter from WordPress. Among other features are a sidebar widget for latest tweets and a forthcoming digest mode.