On Integrated Feed Reading

Ryan King pointed me to a post by Tim Bray about how unintuitive feed consumption is in browsers today.

I couldn’t agree more. Indeed, RSS and general feed consumption in browsers have been tacked on, hacked in, and bludgeoned into the UI in inconsistent and narrow ways. Safari’s got its poorly named RSS view. Firefox (for now) has its simple toolbar and livemark feature as well as countless third-party add-ons.

We’ve also got some great web-based and desktop tools dedicated solely to dealing with feed content.

But all of those are simply not sufficient, nor do they reflect how fundamentally syndicated content is changing the way people interact, publish and share on the web.

To date, we’ve taken mere baby steps towards a truly syndicated web. We’ve tended to stay close to our concrete, static websites because of the familiarity and stability they offer us. We’re used to things existing in one place at a time in real life, and on the web, general expectations have stuck to this powerful paradigm. (Look, I had a talk with my mum about this stuff, so I know it’s true! If you already get RSS, you’re excluded from this generalization. Notice my use of the word “general”?)

What is becoming increasingly clear, however, is that the old ways of thinking about content and where it should exist (or indeed where it actually does exist) no longer need apply. Consider podcasts, the perfect example of ephemeral media. You can’t search for podcasts directly; no, instead you have to search for text about the podcast, unless you go to some visual directory, which still relies on words and images (still no aural search technologies — we need the Riya of podcasting!). On top of that, you typically have to download the “physical” file and play it locally or on your pacemaker, severing the link back to the original source, which may be updated or changed later.

The point is this: Tim Bray is not only right, but the problem he describes goes deeper than just poor feed integration and workflows in existing browsers. It’s that browsers aren’t moving fast enough to embrace the potential that syndicated content has for radically improving the efficiency, responsiveness and collaborative nature of the web. Think about all the information you consume via feeds already — it’s only going to get worse until browsers fundamentally treat the web more as an event stream and less as a library of independent books and pages.

Browsers in particular need to change to address this emerging opportunity and make it both easy and seamless to leverage the benefits of syndicated content. Flock is obviously taking a stab at it, both in the browser and in how we’re architecting our web real estate (or should I say faux estate?). In my view, Flock is an API aggregator that lives and breathes syndicated content. Yeah sure, it’ll load up webpages like any other browser, but it’s how we expose web services and feed content that’s really exciting and new.

So now I’m curious. As hourly Flock builds aren’t terribly stable, I’ve been without an aggregator for some time and so I’ve probably gotten behind in personal aggregation trends. How have you guys been managing your feeds? I notice that I get a lot of traffic from Bloglines and Rojo, so what are the key features you’re dying for in a syndicated content app?

Architecting the Flock Content Platform

The Future of *.flock.com

I had a meeting with Daryl, Vera and a sleepy Lloyd the other day to figure out how to bring the Flock.com website properties forward, both in terms of design and utility and towards an architecture of participation (thanks Tim!).

So what’s going to happen to *.flock.com? Simple: massive syndication, resyndication and the collapse of web development as we know it.

Using Drupal as our core platform enables us to move content back and forth between all the different platforms that we use (and trust me, there are quite a few (password: flock)). It also means that the content building blocks that we’re using to build our site will be available to our community to mash up pretty much however it wants (sticking within some liberal licensing scheme, of course (thanks Larry et al!)).

The implications of moving to such an architecture are significant.

To begin with, it means that besides producing content ourselves, we’ll be able to seamlessly consume the feeds that our community produces. Yeah, so we can pull in other people’s blog feeds, Flickr feeds, forum feeds and on and on (thanks for adding RSS to Basecamp, Jason!). If it’s got an API or feed output (password: flock) (or is marked up with microformats), figure that at some point we could use it somewhere on our site (see the little sketch below). It’s like one big disgusting paste-board exercise. Glorious, glorious!
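Just to make the idea concrete, here’s a rough sketch of the fetch-and-parse step, written in plain TypeScript rather than our actual Drupal pipeline. The feed URL and the item shape are made up for illustration; the point is simply that any community feed can be turned into reusable building blocks.

```typescript
// Minimal "resyndication" sketch: pull a community member's RSS feed and
// turn it into items we could render anywhere on the site. The URL and
// the SyndicatedItem shape are hypothetical, not part of any Flock API.

interface SyndicatedItem {
  title: string;
  link: string;
  published?: string;
}

async function pullFeed(feedUrl: string): Promise<SyndicatedItem[]> {
  const res = await fetch(feedUrl);
  const xml = new DOMParser().parseFromString(await res.text(), "application/xml");

  // Handles plain RSS 2.0 <item> elements; Atom entries would need their own branch.
  return Array.from(xml.querySelectorAll("item")).map((item) => ({
    title: item.querySelector("title")?.textContent ?? "(untitled)",
    link: item.querySelector("link")?.textContent ?? "",
    published: item.querySelector("pubDate")?.textContent ?? undefined,
  }));
}

// Usage (hypothetical feed):
// pullFeed("https://example.org/community/member/rss.xml")
//   .then(items => items.forEach(i => console.log(i.title, i.link)));
```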

So get this: this is where web development is going; this is also where Flock is going. Static websites are a relic of a bygone era. When you come upon a prone animal on the side of the road, it’s a good bet that it’s dead: roadkill, natural causes or radiation sickness. The same thing is true on the web.

Yeah, if this direction sounds like chaos, it’s not. It’s ordered madness, which, you’ll note, has massive amounts of potential energy in its structure. Go ask a physicist what that means coz I have no idea. Fortunately, we’ve got some tricks up our sleeves to tame this beast. Check it out.

One thing that’s important here is being able to 1) navigate your way around the site and 2) get back to where you were many days or weeks ago (in an event stream, how do you hit pause?). Well, for one, tagging. Duh. And rich, full-text search (thanks Google!). And a social network thingamabobber. And favorites. And outgoing feeds. With permalinks.

Gee, a website that mirrors Flock’s featureset? Eeeenteresting.

No but seriously, back in the days of Round Two we were building both a browser and a web service. Why? Well, it’s actually pretty interesting when you’re designing both the content source and the user agent. It’s like choosing both the bread and the cheese for your fondue. Or chocolate and fruits. Um yeah ok, but why? Because intimate knowledge of both sides of the equation helps you fill in other variables that much faster!

Consider: 1 + x = 3.

Easy, right?

So try this one on: APIs + Feeds + Drupal + Microformats + Flock + mojo, baby = you figure it out.

But I’ll tell you one thing, it’s going to kick ass.

The Out of Towner Meetup

So tomorrow night is only the second ever Net Tuesday event, being put on by CompuMentor slash TechSoup. Reposting the details:

On December 13th, join Bay Area web innovators and social change agents for demos, discussions and drinks at Net Tuesday!

Ed Batista, Executive Director of Attention Trust, will show you how to stand up for your attention rights, while Seth Sternberg will demo web-based IM app Meebo.

Doors open at 6pm at Balazo Gallery at 2183 Mission St. (Mission & 18th)

Net Tuesdays are held on the second Tuesday of every month, and are part of TechSoup’s NetSquared project. Email net2@techsoup.org or visit the NetSquared site for more details on how to join the movement to remix the web for social change.

So that’s great and all, but the interesting thing (well, besides the event itself) will be the post-Net Tuesday Out of Towner meetup, being organized by yours truly for a few out of town friends. I’m thinking that once the Net2 event is over, we’ll mosey on over to Medjool for drinks, food and general tomfoolery. San Fransocializing will likely occur, but that’s up to the individual attendees.

Anyway, go sign up and bring your friends!

Offline Extension Development for Flock

Proposed Round Two offline action syncing UI

I had an interesting discussion with Freeman Murray at SHDH last night about offline apps for use in remote areas, where the latency for getting access to a “network” can sometimes stretch from days into weeks.

In such circumstances, you’ll have folks driving around on mopeds or buses with wifi antennae, USB drives or CD-ROMs, delivering email and providing a means of getting “online”, admittedly asynchronously. Thought 28.8 was slow? Try email by bike messenger. It’s only one step above message delivery a la Paul Revere.

But in some cases, this is the best they’ve got, and you can’t expect Google to trot around the world setting up mesh wifi networks for these folks (not in the near future anyway). And while some folks are working on this problem and things are getting better, we must constantly be aware of and design for a rich offline experience.

This hits especially close to home, considering I’m writing this on BART without connectivity as I head to the airport. Yes, even in industrialized areas, connectivity is still not guaranteed (let alone free).

So an idea I’ve been playing with for some time is how Flock can support offline browsing and interaction, beyond pulling simple content from your cache or downloaded feeds. In the old model of the static web, the permalink model made sense and was perfectly useful — indeed, it’s still nice to be able to pull up that bar’s website when you’re wandering around Paris at 9pm, an hour late for the meetup that you helped coordinate and you’ve got no idea where it is except for the micromap saved in your browser’s cache.

But the problem is that we’ve divorced the data from the interaction layer. It’s like having your Halo saved games without having the game engine to play them. Boy, that’s a whole truckload of fun.

So anyway, Freeman and I were discussing how Flock could help with this problem. One idea that has some legs, I think, combines data delivery in microformatted XHTML pages (you’re caching it already, so you might as well make the data semantic and thus useful) with basic interactivity with that data through extensions designed for both online and offline use. Thus, in the event that you’re offline, instead of pulling unsuccessfully from the server, the extension would pull from your local cache and let you do certain basic things, queueing your actions to be performed when you have connectivity again (see the sketch below).
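Here’s a minimal sketch of that queueing idea, assuming a hypothetical extension that posts user actions to a remote service. The storage key, the action shape and the endpoints are all made up; it’s just one way to park actions locally while offline and replay them when connectivity returns, not anything Flock actually ships.

```typescript
// Hypothetical offline action queue for a browser extension.
// Actions are sent immediately when online, otherwise stored locally
// and replayed in order once connectivity comes back.

interface QueuedAction {
  endpoint: string;                  // where the action should eventually go
  payload: Record<string, unknown>;  // whatever the web app expects
  queuedAt: number;                  // timestamp, useful for ordering/debugging
}

const QUEUE_KEY = "offline-action-queue";

function loadQueue(): QueuedAction[] {
  return JSON.parse(localStorage.getItem(QUEUE_KEY) ?? "[]");
}

function saveQueue(queue: QueuedAction[]): void {
  localStorage.setItem(QUEUE_KEY, JSON.stringify(queue));
}

// Called whenever the user does something that would normally hit the
// server (post a comment, tag a bookmark, etc.).
async function performAction(action: QueuedAction): Promise<void> {
  if (navigator.onLine) {
    await fetch(action.endpoint, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(action.payload),
    });
  } else {
    // No connectivity: park the action locally and move on.
    saveQueue([...loadQueue(), action]);
  }
}

// When connectivity returns, replay everything that was queued.
window.addEventListener("online", async () => {
  const pending = loadQueue();
  saveQueue([]); // optimistic; a real implementation would re-queue failures
  for (const action of pending) {
    await performAction(action);
  }
});
```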

Indeed you could even use a P2P architecture for this, sharing encrypted offline action-scripts across a mesh network. When any one of those nodes reconnects, it could upload those instructions to the final destination where they’d be executed in batch. This would have the added benefit of spreading the need for connectivity across many nodes, instead of just one (like a USB drive which could get lost, stolen, confiscated or otherwise compromised). Should the actions have already been performed when the “message in a bottle” arrives, the commands would be simply ignored. There are technical details in here that are beyond my comprehension, but be that as it may, the idea is promising.
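For the “already performed, simply ignored” part, one way to do it is to give every queued action a unique ID that the destination remembers, so the same batch can arrive from several nodes without doing any harm. Again, the names and the in-memory store here are invented for illustration.

```typescript
// Hypothetical destination-side handler for a relayed batch of actions.
// appliedIds stands in for whatever durable store the service would keep.

interface RelayedAction {
  id: string;                        // unique per action, minted when it was queued
  endpoint: string;
  payload: Record<string, unknown>;
}

const appliedIds = new Set<string>();

function applyBatch(batch: RelayedAction[]): void {
  for (const action of batch) {
    if (appliedIds.has(action.id)) {
      continue;                      // already arrived via another node: ignore it
    }
    appliedIds.add(action.id);
    // ...actually perform the action against the real data store here...
  }
}
```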

So back to offline extension development… Freeman proposed an architecture in which Flock would ask the web app for a locally run “servlet” that would provide similar interactivity when not connected. Autodiscovery would happen the way sites provide favicons, or perhaps through a rel="offline" link (see the sketch below). The question, though, is whether the user would need to take explicit action to install the servlet extension outright, or if, by visiting a web app (like Gmail or Basecamp), you’re implicitly expressing your interest in using the functionality of the app, whether on or offline.
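Here’s what that autodiscovery might look like, assuming the rel="offline" convention from this post (it isn’t a real, standardized link relation); it simply mirrors how feed autodiscovery with rel="alternate" works today.

```typescript
// Sketch of autodiscovering an offline "servlet" via a hypothetical
// rel="offline" link, in the same spirit as favicon or feed autodiscovery.

function discoverOfflineServlet(doc: Document): string | null {
  const link = doc.querySelector<HTMLLinkElement>('link[rel="offline"]');
  return link ? new URL(link.href, doc.baseURI).toString() : null;
}

// Example markup a web app might serve (again, hypothetical):
//   <link rel="offline" href="/offline-servlet.xpi" type="application/x-xpinstall">
//
// The browser could prompt the user to install the servlet outright, or,
// per the "implicit interest" idea above, fetch it quietly the first time
// the app is used.
```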

I think the latter reflects a reality that I want. Installing apps on the desktop doesn’t make sense anymore, what with all my data being hosted in the cloud. So being able to access and manipulate my data when I’m offline is going to become a requirement that the browser can tackle. I mean, this is the problem with solar power: you need to use your car whether it’s sunny or not, and so we’ve got rechargeable batteries, right?

Well, I think you get the point. Here’s the high-level description of the proposed solution (which I may or may not have communicated effectively):

  • microformatted XHTML cached as your local datastore
  • locally-run servlets that offer data interactivity simulating the remote data store
  • offline actions supported in extensions
  • queueing mechanism in the browser to synchronize via XML-RPC or a P2P encrypted action file when connectivity is available

I probably got a bunch of the technical details wrong, but whaddya think of the general concept? Does this already exist in some other form somewhere?

Gems from Sean Coon

Tara pointed me to a great post on the power of tagging and grassroots, semantic content creation over at Connecting*the*dots. A couple of excerpts:

So what’s the connection between geo-political events and blogging and the tactical fervor of Web 2.0? (social bookmarking, tagging, open source, open content, etc.) In a nutshell: everything.

He calls tagging:

a tactical [strategy] in the battle of the information age. … The effort, I believe, is based on the desire of individual voices to be heard amidst the shelling of the mainstream media.

The legitimization of the individual (creative and political) perspective is being sustained in the 21st century by the conviction of the blogosphere, …The concept of social dialog and the elemental foundation of Capitalism are beginning to shift in exciting ways.

Blogs are beginning to bridge the social and communication gaps between nations. My peers are thinking differently when developing this medium, even in traditional business development circumstances. The tactical approach to producing, managing, sharing, finding and using information objects — defined from the bottom up — is finally getting its due.

connecting*the*dots: Tag! We’re It! Part II


Laughing Squid 10th Anniversary Party


Tonight my good friend Scott Beale is celebrating 10 years of Laughing Squid.

I’m psyched about Scott’s outside-the-bubble success and feel privileged to call him a friend — he’s been instrumental in bringing our little BANC community together and making sure there’s always fabulous photos from our events. Without Scott, the glue that keeps us Bay Area folks so closely bound up wouldn’t be nearly as strong.

After all, on the web, without photographic evidence, you don’t really exist now do you?

Is open source immune to bubble economics?


Open source business models are booming in the software industry, a rapid rise that has some experts wondering if it’s a bubble that will burst.

Is open source a bubble ready to burst? – ZDNet UK Insight

Knowing full well that I’m adding to a meme that needs no help in spreading, I’d like to toss out a theory inspired by what appears to be growing speculation about the Second Coming of the Bubble (y’know, since the first one (referred to as the “Dot-com Boom” back in the day) and its subsequent bursting sucked so hard).

My theory is based on absolutely no math and certainly no experience with economics. My background is in design fer crissake. But that doesn’t mean I can’t make observations and draw conclusions about the state of things from where I sit. Pffthb.

So here’s the deal. Bubble or not, it doesn’t really make much difference. Well, not in my corner of the world. In fact, I would be delighted if we are going through some kind of dot-org bubble — in which case, it would certainly be less like the first go-round, when all these brilliant ideas got sucked up behind barriers of proprietary software licenses. No, a dot-org bubble would be more like the way things were back when no one knew or cared about the intarweb except for a few dorky blokes in sweaters and tight chinos pushing packets around and having one helluva good time.

But back to the previous bust. In spite of all the money that got pushed around, one of the few good things seems to have been Firefox’s Athena-like explosion from the head of AOL. Which also, incidentally, seemed to be the tipping point that brought the entire house of cards crumbling down… but I digress.

See, the question on most people’s minds seems to be “Can something like that happen again?!” or “Oh my god! It’s happening again! …Isn’t it?”

Well, maybe the right way to ask that question is, “Shouldn’t it keep happening until we get it right?”

I mean, what if these bubbles are part of some grand, Darwinian, organic weeding-out process that will lead to all the source code in the world being released under open licenses! Wouldn’t that be great?! …Ah yes, but then there’s that tricky thing we call reality.

Foiled again.

So back to my theory. What’s really happening here is that the focus has been exclusively on the fact that some people are trying to make some money using open source tools and methodologies and have pulled in some VC to support their efforts. Yet the real story is that open source has reached critical mass and is gaining widespread adoption — so much so that people with dollars are willing to make some serious bets on its future. Let’s get down to the brass tacks of the matter: you invest in something either to see it grow or because you’d like to reap some benefit from your willingness to take a risk. What’s being communicated is that open source is now a less risky business proposition, and it’s cost-competitive too:

Ron Rose, the chief information officer of Priceline.com, said that the company has become “predisposed” to buying open source products because of the “economic benefits”. A vibrant community behind a product also ensures a long-term road map, he added.

So all this hubbub over an impending open source bubble is silly. Open source doesn’t work that way. Companies will make money building open source tools or fail trying, not simply because they’re part of the open source ecosystem, but because of the quality of their ideas, execution or people. So even if all this “neue bubble” money dries up, open source will remain as vibrant as it’s ever been. It survived the first dot-com boom and bust. It will survive the next.
