Site-specific browsers and GreaseKit

GreaseKit - Manage Applications

There’s a general class of applications that’s been gaining some traction lately in the Mozilla community, built on a free-standing framework.

The idea is simple: take a browser, cut out the tabs, the URL bar and all the rest of the window chrome and instead load one website at a time. Hence the colloquial name: “site specific browser”.

Michael McCracken deserves credit for inspiring interest in this idea early on with his Webmail app, which in turn led to Ben Willmore’s Gmail Browser application. Both literally took WebKit — Apple’s open source rendering engine (like Mozilla’s engine) — had it load Gmail.com, and released their apps. It doesn’t get much more straightforward than that.

For my own part, I’ve been chronicling the development of Site-Specific Browsers from the beginning, setting up the WebKit PBWiki and releasing a couple of my own apps, most recently Diet Pibb. I have a strong belief that a full-featured rendering engine coupled with a few client-side tweaks is the future of browsers and web apps. You can see hints of this in Dashboard Widgets and Adobe’s AIR framework already, though the former’s launching flow conflicts with the traditional “click an icon in my dock to launch an application” design pattern.

Anyway, in developing my Site-Specific Browsers (or Desktop Web Apps?), I became enamored with an input manager for Safari called Creammonkey that allows you to run Greasemonkey scripts inside of Safari (ignore the name — Kato, the developer, is Japanese and English is his second language). An input manager works by matching a particular application’s .plist identifier and injecting its code (via SIMBL) into the application at run-time, effectively becoming part of the host application and gaining access to all the application’s inner workings. When the host is a rendering engine, it’s this kind of injection that lets you insert your own CSS or JavaScript into a webpage and make whatever modifications you want.
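To make that injection concrete, here is a minimal sketch of what a Greasemonkey-style user script could look like once something like Creammonkey has a foothold inside the browser. The script name, the `#ads` selector, and the `buildHideRule` helper are all hypothetical, chosen purely for illustration:

```javascript
// ==UserScript==
// @name        Hide a page element (illustrative)
// @include     http*://mail.google.com/*
// ==/UserScript==

// Build a CSS rule that hides whatever matches the given selector.
// The "#ads" selector used below is an assumption, not Gmail's real markup.
function buildHideRule(selector) {
  return selector + " { display: none !important; }";
}

// When running inside a page, inject the rule as a <style> element,
// the same kind of client-side tweak these scripts rely on.
if (typeof document !== "undefined") {
  var style = document.createElement("style");
  style.textContent = buildHideRule("#ads");
  document.documentElement.appendChild(style);
}
```

The point isn’t the specific rule; it’s that once you can run arbitrary CSS and JavaScript against a single site, this handful of lines is all it takes to reshape it.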

This is what Creammonkey did for Safari. And thank god it was open source or else we never would have ended up with today’s release of the successor to Creammonkey called GreaseKit.

Let me step back a little.

When I found out about Creammonkey, I contacted Kato Kazuyoshi, the developer, and told him how excited I was about what he had created.

“But could I use this on Site-Specific Browsers?” I wanted to know.

In broken English he expressed his uncertainty and so I went about hacking it myself.

I ended up with a crude solution where I would recompile Creammonkey and aim it at a different application every time I wanted to make use of a different Greasemonkey script. It was tedious and didn’t really work the way I envisioned, but given my meager programming skills, it demonstrated the idea of Site-Specific Browsers with Site-Specific Hacks.

I called this collection of site-specific scripts with a properly crafted Input Manager a “GreaseKit” and let the idea sit for a while.

Some time later, Kato got in touch with me and we talked about rereleasing Creammonkey with the functionality that I envisioned and a new name. Today he released the results of that work and called it GreaseKit.

I can’t really express how excited I am about this. The significance of this development probably won’t shake the foundations of the web, but it’s pretty huge.

For one thing, I’ve found comparable solutions (Web Runner) clunky and hard to use. In contrast, I’m able to create stand-alone WebKit apps in under 2 minutes with native Apple menus and all the fixins using a template that Josh Peek made. No, these apps aren’t cross-platform (yet), but what I lose in spanning the Linux/PC divide, I gain in the use of Apple’s development environment, and frankly a faster rendering engine. And, as of today, the use of Greasemonkey scripts centrally managed for every WebKit app I develop.

These apps are so easy to make and so frigging useful that people are actually building businesses on them. Consider Mailplane, Ruben Bakker’s Gmail app. It’s only incrementally better than McCracken’s WebMail or Willmore’s Gmail Browser, but he’s still able to charge $25 for it (which I paid happily). Now, with GreaseKit in the wild, I can add all my favorite Greasemonkey scripts to Mailplane — just like I might have with Firefox — but if Gmail causes a browser crash, I only lose Mailplane, rather than my whole browser and all the tabs I have open. Not to mention that I can command-tab to Mailplane like I can with Mail.app… and I can drag and drop a file on to Mailplane’s dock icon to compose a new email with an attachment. Just add offline storage (inevitable now that WebKit supports HTML5 client-side database storage) and you’ve basically got the best of desktop, web and user-scriptable applications all in one lightweight package. I’ll have more to write on this soon, but for now, give GreaseKit a whirl (or check the source) and let me know what you think.

Twitter hashtags for emergency coordination and disaster relief

I know I’ve been beating the drum about hashtags for a while. People are either lukewarm to them or are annoyed and hate them. I get it. I do. But for some stupid reason I just can’t leave them alone.

Anyway, today I think I saw a glimmer of the promise of the hashtag concept revealed.

For those of you who have no idea what I’m talking about, consider this status update:

Twitter / nate ritter: #sandiegofire 300,000 peopl...

You’ll notice that the update starts out with “#sandiegofire”. That’s a hashtag. The hash is the # symbol and the tag is sandiegofire. Pretty simple.

Why use them? Well, it’s like adding metadata to your updates in a simple and consistent way. They’re not the most beautiful things ever, but they’re pretty easy to use. They also follow Jaiku’s channel convention to some extent, but break it in that you can embed hashtags into your actual post, like so:

Twitter / Mr Messina: @nateritter thanks for keep...

This simple design means that you can get more mileage out of your 140 characters than you might otherwise if you had to specify your tags separately or in addition to your content.
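Since the tags live inline with the rest of the update, pulling them out is trivial. Here’s a small sketch of how a tracker might do it; the function name and the simple word-character tag syntax are assumptions for illustration, not anything Twitter specifies:

```javascript
// Extract hashtags embedded anywhere in a status update.
// Returns the bare keywords, since Twitter's track feature
// ignores the hashmark itself.
function extractHashtags(update) {
  var matches = update.match(/#(\w+)/g) || [];
  return matches.map(function (tag) {
    return tag.slice(1); // strip the leading "#"
  });
}
```

So an update like “#sandiegofire 300,000 people evacuated” yields the single trackable keyword sandiegofire, whether the tag leads the post or is embedded mid-sentence.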

Anyway, you get the idea.

Hashtags become all the more useful now that Twitter supports the “track” feature. By simply sending “track [keyword]” to Twitter by IM or SMS, you’ll get real-time updates from across the Twitterverse. It’s actually super useful and highly informative.

Hashtags become even more useful in a time of crisis or emergency as groups can rally around a common term to facilitate tracking, as demonstrated today with the San Diego fires (in fact, it was similar situations around Bay Area earthquakes that led me to propose hashtags in the first place, as I’d seen people Twittering about earthquakes and felt that we needed a better way to coordinate via Twitter).

Earlier today, my friend Nate Ritter started twittering about the San Diego fires, starting slowly and without any kind of uniformity to his posts. He eventually began prefixing his posts with “San Diego Fires”. Concerned that it would be challenging for folks to track “san diego fires” on Twitter because of inconsistency in using those words together, I wanted to apply hashtags as a mechanism for bringing people together around a common term (a practice Stowe Boyd incidentally calls groupings).

I first checked Flickr’s Hot Tags to see what tag(s) people were already using to describe the fires:

Popular Tags on Flickr Photo Sharing

I picked “#sandiegofire” — the tag that I thought had the best chance to be widely adopted, and that would also be recognizable in a stream of updates. I pinged Nate around 4pm with my suggestion, and he started using it. Meanwhile, Dan Tentler (a co-organizer who I met at ETECH last year) was also twittering, blogging and shooting his experience, occasionally using #sandiegofire as his tag. Sometime later Adora (aka Lisa Brewster, another BarCamp San Diego co-organizer) posted a status using the #sandiegofire hashtag.

Had we had a method to disperse the information, we could have let people on Twitter know to track #sandiegofire and to append that hashtag to their updates in order to join the tracking stream (for example, KBPS News would have been easier to find had they been using the tag). I should point out that the Twitter track feature actually ignores the hashmark; the hash is useful primarily to denote the tag as metadata in addition to the update itself.

Fortunately, Michael Calore from Wired picked up the story, but it might have come a little late for the audience that might have benefitted the most (that is, folks with Twitter SMS in or around affected areas).

In any case, hashtags are far from perfect. I have no illusions about this.

But they do represent what I think is a solid convention for coordinating ad-hoc groupings and giving people a way to organize their communications in a way that the tool (Twitter) does not currently afford. They also leave open the possibility for external application development and aggregation, since a Twitter user’s track terms are currently not made public (i.e. there is no way for me to know what other people are tracking across Twitter in the same way that I can see which tags have the most velocity across Flickr). So sure, they need work, but the example of #sandiegofire now should provide a very clear example of the problem I’d like to see solved. Hashtags are my best effort at working on this problem to date; I wonder what better ideas are out there waiting to be proposed?

And you wonder why people in America are afraid of the Internet

Ladies and gentlemen, I would like to present to you two exhibits.

Here is Exhibit A from today’s International Herald Tribune:

Will Google take the mobile world of Jaiku onto the Web? - International Herald Tribune

In contrast (Exhibit B) we have the same exact article, but with a completely different headline:

Google’s Purchase of Jaiku Raises New Privacy Issues - New York Times

Now, for the life of me, I can’t figure out how the latter is a more accurate or more appropriate title for the article, which is ostensibly about Google’s acquisition of Jaiku.

But, for some reason, the editor of the NY Times piece decided that it would — what? — sell more papers? — to use a more incendiary and moreover misleading headline for the story.

Here’s why I take issue: I’m quoted in the article, and here’s where the difference is made. This is how the article ends:

“To date, many people still maintain their illusion of privacy,” he said in an e-mail message.

Adapting will take time.

“For iPhone users who use the Google Maps application, it’s already a pain to have to type in your current location,” he said. “‘Why doesn’t my phone just tell Google where I am?’ you invariably ask.”

When the time is right and frustrations like this are unpalatable enough, Mr. Messina said, “Google will have a ready answer to the problem.”

Consider the effect of reading that passage after being led with a headline like “Google’s Purchase of Jaiku Raises New Privacy Issues” versus “Will Google take the mobile world of Jaiku onto the Web?” The former clearly raises the specter of Google-as-Big-Brother while perpetuating the fallacy that privacy, as people seem to understand it, continues to exist. Let’s face it: if you’re using a cell phone, the cell phone company knows where you are. It’s just a matter of time before you get an interface to that data and the illusion that somehow you gave Google (or any other third party) access to your whereabouts.

I for one do not understand how this kind of headline elevates or adds to the discourse, or how it helps people to better understand and come to grips with the changing role and utility of their presence online. While I do like the notion that any well-engineered system can preserve one’s privacy while still being effective, I contend that it’s going to take a radical reinterpretation of what we think is and isn’t private to feel secure in who can and can’t see data about us.

So, to put it simply, there are no “new” privacy issues raised by Google’s acquisition of Jaiku; it’s simply the same old ones over and over again that we seem unable to deal with in any kind of open dialogue in the mainstream press.

Leaving TechMeme

Techmeme

It may seem obvious to some wiser than me, but every now and then I realize that I need to disrupt my habits, force-inject some new behaviors and shift-reload the inputs into my thinking.

Above you can see what the top left corner of WebKit looks like to me every day. In a nutshell, it depicts the places I visit the most. You’ll notice that TechMeme comes second after the Ma.gnolia bookmarklet. That’s a pretty prime spot to occupy in terms of shaping (or warping) my perspective.

Don’t get me wrong: TechMeme is a very well executed news service; above all else, its presentation and hierarchy of information is excellent and serves my goal of consuming a lot of information in aggregate in a very short amount of time. It’s as good a zeitgeist as any when it comes to what certain people are thinking about.

And therein lies the rub.

TechMeme provides an interesting and compelling overview of what’s hot in conventional tech news. It is, however, overly self-referential, and, insomuch as it gives greater weight to certain “types” of people and posts, it ends up reinforcing that self-reference more often than not. And on top of that, the pundits who are featured prominently on its pages (and who like to argue amongst each other about who’s popular and who’s not (n.b. I avoided reading any of those posts during that maelstrom; I provide the link merely for context)) have certain priorities that, well, just don’t overlap so well with mine.

I mean, it’s good to know what’s up with Google and Yahoo and Microsoft, and what the latest startups are up to and so on. It’s also fun to see what the latest controversies are over net discrimination and what arcane nonsense is being attempted to fasten down the semantic web… But I think I’ve had my fill of that for now. Frankly, it’s time to turn off the firehose.

Maybe it’s because I’m leaving for a week in Oaxaca and I need a reset; or maybe it’s realizing that I’ve so much to do and I have to stop measuring myself by what everyone else is doing (or has already done). Or maybe it’s because I can feel the sentiment and motivations of the tech community changing ever-so-gradually and becoming increasingly corrupt. Or who knows, maybe nothing has changed except that I need to rearrange some furniture just for the sake of change.

Whatever the case, it’s time to bid adieu to the link that’s been occupying second position on my browser’s bookmark bar for the last umpteen months. I’m going to have to go back to piecing things together one by one on my own; digging up my own dirt, assembling my own theories instead of the ones advanced by crowd economics, leaving the punditry to be consumed by other more capable pundits. Maybe I’ll come back someday — in fact, I probably will. But, as a sheer exercise of will (or perhaps as a mere act of intention), as of today, I’m leaving TechMeme.

Deleting Techmeme

In Google-Apple partnership, Jobs gets the Bill Gates he always wanted

Steve Jobs and Bill Gates

Some time ago I read Founders at Work and learned quite a lot about the early days of Apple, as told from Steve Wozniak’s perspective. What was most remarkable was how close Steve Jobs, Bill Gates and Wozniak all were to one another, and to the early foundations of the personal computer.

It seems like Jobs has always been driven, forceful and something of a nerdjock, whereas the geeks he surrounded himself with (for their technical prowess) were just plain nerds (i.e. Bill Gates and Wozniak). Nowadays it seems that Jobs has found folks to hang out with who are more his type: fewer pocket-protectors, more social skills, better servers.

I’m of course talking about the folks at Google. And I’ve been going on and on about their strategic relationship and how important it is for some time, but finally I can point to arch-curmudgeon Nick Carr to speak for me. In “Google, Apple and the future of personal computing“, he observes:

At this very moment, in a building somewhere in Silicon Valley, I guarantee you that a team of engineers from Google and Apple are designing a set of devices that, hooked up as terminals to Google’s “supercomputer,” will define how we use computers in the future. You can see various threads of this system today – in Apple’s iPhone and iPod Touch, its dot-mac service, its iLife and iWork applications as well as in Google’s Apps suite and advertising system, not to mention its vast data-center network. What this team is doing right now is weaving all those threads together into what will be, for most of us, the fabric of cloud computing. (This is so big, you need at least two metaphors to describe it.)

Here’s how the partnership works. Apple is taking responsibility for “the user interface and people.” It’s designing the devices themselves, which will be typically elegant machines that run versions of OS X. While Apple puts together the front end of the integrated network-computing system, Google provides “the perfect back end” – the supercomputer that provides the bulk of the data-processing might and storage capacity for the devices. While the devices will come with big flash drives to ensure seamless computing despite the vagaries of network traffic, all data will be automatically backed up into Google’s data centers, and those centers will also serve up most of the applications that the devices run. The applications themselves will represent the joint efforts of Google and Apple – this, I’m sure, is the trickiest element of the partnership – and will be supplemented, of course, by myriad web-delivered software services created by other companies (many of which will, in due course, also run on Google’s supercomputer).

Well, it’s nice to know that someone else sees the potentiality of this relationship.

atomic_wedgie

And it’s also nice to know that, in Google, Steve Jobs has found a couple more stylish nerd types who finally appreciate the more suave and sophisticated side of his geekdom. Together, finally, they’re going to give Bill Gates the atomic wedgie of his life.

Did the web fail the iPhone?

Twitter / Ian McKellar: @factoryjoe, wait, so all these "web apps" people have invested time and money in are now second-class applications?

Ian might be right, but not because of Steve’s announcement today about opening up the iPhone.

Indeed, my reaction so far has been one of quasi-resignation and disappointment.

A voice inside me whimpers, “Don’t give up on the web, Steve! Not yet!”

iPhoneDevCamp

You have to understand that when I got involved in helping to plan iPhoneDevCamp, we didn’t call it iPhoneWebDevCamp for a reason. As far as we knew, and as far as we could see into the immediate future, the web was the platform of the iPhone (Steve Jobs even famously called Safari the iPhone’s SDK).

The hope that we were turning the corner on desktop-based applications was palpable. By keeping the platform officially closed, Apple brought about a collective channeling of energy towards the development of efficient and elegant web interfaces for Safari, epitomized by Joe Hewitt’s iPhone Facebook App (started as a project around iPhoneDevCamp and now continued on by Christopher Allen).

And we were just getting started.

…So the questions on my mind today are: was this the plan all along? Or, was Steve forced into action by outside factors?

iPhone Spider Web

If this were the case all along, I’d be getting pretty fed up with these kinds of costly and duplicitous shenanigans. For god’s sake, Steve could at least afford to stop being so contradictory! First he lowers the price of the iPhone months after releasing it, then drops the price of DRM-free tracks (after charging people to “upgrade their music”), and now he’s promising a software SDK in February, pledging that an “open” platform “is a step in the right direction” (after bricking people’s phones and launching an iPhone WebApps directory, seemingly in faux support of iPhone Web App developers).

Now, if this weren’t in the plan all along, then Apple looks like a victim of the promise — and hype — of the web as platform. (I’ll entertain this notion, while keeping in mind that Apple rarely changes direction due to outside influence, especially on product strategy.)

Say that everything Steve said during his keynote were true and he (and folks at Apple) really did believe that the web was the platform of the future — most importantly, the platform of Apple’s future — this kind of reversal would have to be pretty disappointing inside Apple as well. Especially considering their cushy arrangement with Google and the unlikelihood that Mac hardware will ever outsell PCs (so long as Apple has the exclusive right to produce Mac hardware), it makes sense that Apple sees its future in a virtualized, connected world, where its apps, its content and its business is made online and in selling thin clients, rather than in the kind of business where Microsoft made its billions, selling dumb boxes and expiring licenses to the software that ran on them.

If you actually read Apple’s guide for iPhone content and application development, you’d have to believe that they get the web when they call for:

  • Understanding User-iPhone Interaction
  • Using Standards and Tried-and-True Design Practices
  • Integrating with Phone, Mail, and Maps
  • Optimizing for Page Readability
  • Ensuring a Great Audio and Video Experience (while Flash is not supported)

These aren’t the marks of a company that is trying to embrace and extend the web into its own proprietary nutshell. Heck, they even support microformats in their product reviews. They seem to want so badly for the web — the open web — to succeed, given all the rhetoric so far. Why backslide now?

Well, to get back to the title of this post, I can’t but help feel like the web failed the iPhone.

For one thing, native apps are a known quantity for developers. There are plenty of tools for developing native applications and interfaces that don’t require you to learn some arcane layout language that doesn’t even have the concept of “columns”. You don’t need to worry about setting up servers and hosting and availability and all the headaches of running web apps. And without offering “services in the cloud” to make web application hosting and serving a piece of cake, Apple kind of shot itself in the foot with its developers who again, aren’t so keen on the ways of the web.

Flipped around, as a proponent of the web, even I can admit how unexciting standard interfaces on the web are. And how much work and knowledge it requires to compete with the likes of Adobe’s AIR and Microsoft’s Silverlight. I mean, us non-proprietary web-types rejoice when Safari gets support for CSS-based rounded corners and the ability to use non-standard typefaces. SRSLY? The latter feature was specified in 1998! What took so long?!

No wonder native app developers aren’t crazy about web development for the iPhone. Why should they be? At least considering where we’re at today, there’s a lot to despise about modern web design and to despair about how little things have improved in the last 10 years.

And yet, there’s a lot to love too, but not the kind of stuff that makes iPhone developers want to abandon what’s familiar, comfortable, safe, accessible and hell, sexy.

It’s true, for example, that with the web you get massive distribution. It means you don’t need a framework like Sparkle to keep your apps up-to-date. You can localize your app in as many languages as you like, and based on your web stats, can get a sense for which languages you should prioritize. With protocols like OpenID and OAuth, you get access to all kinds of data that won’t be available solely on a user’s system (especially when it comes to the iPhone, which dispenses with “Save” functionality) as well as a way to uniquely identify your customers across applications. And you get the heightened probability that someone might come along and look to integrate with or add value to your service via some kind of API, without requiring any additional download to the user’s system. And the benefits go on. But you get the point.

Even still, these benefits weren’t enough to sway iPhone developers, nor, apparently, Steve Jobs. And to the degree to which the web is lacking in features and functionality that would have allowed Steve to hold off a little longer, there is opportunity to improve and expand upon what I call the collection of “web primitives” that compose the complete palette of interaction options for developers who call the web their native platform. The simple form controls, the lightboxes, the static embedded video and audio, the moo tools and scriptaculouses… they still don’t stack up against native (read: proprietary) interface controls. And we can do better.

We must do better! We need to improve what’s going on inside the browser frame, not just around it. It’s not enough to make a JavaScript compiler faster or even to add support for SVG (though it helps). We need to define, design and construct new primitives for the web that make it super simple, straightforward and extremely satisfying to develop for the web. I don’t know how web developers have for so long put up with the frustrations and idiosyncrasies of web application development. And I guess, as far as the iPhone goes, they won’t have to anymore.

It’s a shame really. We could have done so much together. The web and the iPhone, that is. We could have made such sweet music. Especially when folks realize that Steve was right and developing for Safari is the future of application development, they’ll have wished that they had invested in and lobbied for richer and better tools and interfaces for what will inevitably become the future of rich internet application development and, no surprise, the future of the iPhone and all its kin.

Data capital, or: data as common tender

Legal Tender

Wikipedia states that legal tender is payment that, by law, cannot be refused in settlement of a debt denominated in the same currency. Currency, in turn, is a unit of exchange, facilitating the transfer of goods and/or services.

I was asked a question earlier today about the relative value of open services against open data served in open, non-proprietary data formats. It got me wondering whether — in the pursuit of utter openness in web services and portability in stored data — that’s even the right question. Are we providing the right incentives for people and companies to go open? Is it self-fulfilling or manifest destiny to arrive at a state of universal identity and service portability leading to unfettered consumer choice? Is this how we achieve VRM nirvana, or is there something missing in our assumptions and current analysis?

Mary Jo Foley touched on this topic today in a post called Are all ‘open’ Web platforms created equal? She asks whether Microsoft’s PC-driven worldview can be modernized to compete in the network-centric world of Web 2.0, where no single player dominates and the landscape is instead made up of best-of-breed APIs/services from across the Web. The question she alludes to is a poignant one: even if you go open (and Microsoft has, by any estimation), will anyone care? Even if you dress up your data and jump through hoops to please developers, will they actually take advantage of what you have to offer? Or is there something else to the equation that we’re missing? Some underlying truism that is simply refracting falsely in light of the newfound sexiness of “going open”?

We often tell our clients that one of the first things you can do to “open up” is build out an API, support microformats, adopt OpenID and OAuth. But that’s just the start. That’s just good data hygiene. That’s brushing your teeth once a day. That’s making sure your teeth don’t fall out of your head.

There’s a broader method to this madness, but unfortunately, it’s a rare opportunity when we actually get beyond just brushing our teeth and really get to sink them in, going beyond remedial steps like adding microformats to web pages to crafting just-in-time, distributed, open-data-driven web applications that actually do stuff and make things better. But as I said, it’s a rare occasion for us because we’ve all been asking the wrong questions, providing the wrong incentives and designing solutions from the perspective of the silos instead of from the perspective of the people.

Let me make a point here: if your data were legal tender, you could take it anywhere with you and it couldn’t be refused if you offered to pay with it.

Last.fm top track charts

Let me break that down a bit. The way things are today, we give away our data freely and frequently, in exchange for the use of certain services. Now, in some cases, like Pandora or Last.fm, the use of the service itself is compelling and worthwhile, providing an equal or greater exchange rate for our behavior or taste data. In many other cases, we sign up for a service and provide basic demographic data without any sense of what we’re going to get in return, often leaving scraps of ourselves to fester all across the internet. Why do we value this data so little? Why do we give it away so freely?

I learned of an interesting concept today while researching legal tender called “Gresham’s Law” and commonly stated as: When there is a legal tender currency, bad money drives good money out of circulation.

Don’t worry, it took me a while to get it too. Nicolas Nelson offered the following clarification: if high quality and low quality are forced to be treated equally, then folks will keep good quality things to themselves and use low quality things to exchange for more good stuff.

Think about this in terms of data: if people are forced (or tricked) into thinking that the data they enter into web applications is not being valued (or protected) by the sites that collect it, well, eventually they’ll either stop entering the data (heard of social network fatigue?) or they’ll start filling in bogus information, leading to “bad data” driving out the “good data” from the system, and ultimately to a kind of data inflation, where suddenly the problem is no longer getting people to just sign up for your service, but getting them to also provide good data of some value. And this is where data portability — or data as legal tender — starts to become interesting, and allows us to see through the distortion of the refraction.

Think: Data as currency. Data to unlock services. Data owned, controlled, exchanged and traded by the creator of said data, instead of by the networks he has joined. For the current glut of web applications to maintain and be sustained, we must move to a system where people are in charge of their data, where they garden and maintain it, and where they are free to deposit and withdraw it from web services like people do money from banks.

If you want to think about what comes next — what the proverbial “Web 3.0” is all about — it’s not just about a bunch of web applications hooked up with protocols like OAuth that speak in microformats and other open data tongues back and forth to each other. That’s the obvious part. The change comes when a person is in control of her data, and when the services that she uses firmly believe that she not only has a right to do as she pleases with her data, but that it is in their best interest to spit her data out in whatever myriad format she demands and to whichever myriad services she wishes.

The “data web” is still a number of years off, but it is rapidly approaching. It does require that the silos popular today open up and transition from repositories to transactional enterprises. Once data becomes a kind of common tender, you no longer need to lock it; in fact, the value comes from its reuse and circulation in commerce.

To some degree, Mint and Wesabe are doing this retroactively for your banking records, allowing you to add “data value” to your monetary transactions. Next up, Google and Microsoft will do this for your health records. For a more generic example, Swivel is doing this today for the OECD and has a private edition coming soon. Slife/Slifeshare, i use this, and RescueTime do this for your use of desktop apps.

This isn’t just attention data that I’m talking about (though the recent announcements in support of APML are certainly positive). This goes beyond monitoring what you’re doing and how you’re spending your time. I’m talking about access to all the data that it would take to reconstitute your entire digital existence. And then I’m talking about the ability to slice, dice, and splice it however you like, in pursuit of whatever ends you choose. Or choose not to.


I’ll point to a few references that influenced my thinking: Social Capital To Show Its Worth at This Week’s Web 2.0 Summit, What is Web 2.0?, Tangled Up in the Future – Lessig and Lietaer, Intentional Economics Day 1, Day 2, Day 3.

Modern music economics: a fierce independent streak

CoverSutra - IN RAINBOWS

Steven Hodson posted a response to my IN RAINBOWS entry titled “Being free doesn’t make crap any better“. He makes the simple argument that, just because bands are freeing themselves from their labels and giving their fans the ability to pay what they want for their albums, this won’t necessarily result in higher-quality music being produced. It just means that we won’t have to buy the filler crap that most bands crank out to pad their albums and convince us to shell out $18 a CD.

He reminisces:

When I first started collecting music back in the days of vinyl it was commonly accepted that at least one; two at the most, tracks on the LP would be crap and usually stuffed onto the B-side of the LP. Over the years this ratio has slowly changed to the point that the majority of the time you are lucky if even half the songs are worth listening to. We became nothing but cash cows for the music industry as we lined up obediently with every big release and plunked over our hard earned money because we had no alternatives.

He goes on to point out the change:

Then came the Internet and suddenly we had a way to thumb our noses at the industry that had been bleeding us dry and get only the songs we felt were worth listening to. The days of the 45 single had returned albeit in electronic form.

He then attacks what he sees as my warm and fuzzy view of a kinder, gentler “Open Media Web” (my term, borrowed from Songbird’s Rob Lord, a client of Citizen Agency).

The comment thread seems particularly interesting, so I thought I’d reproduce it here. I begin:

Hmm, I’m not sure that I have any illusions about the role of commerce in the decisions of these bands. Especially in the cases of Radiohead and NIN, they’ll do fine selling direct to consumers. For many other bands, especially undiscovered ones, or ones who aren’t MySpace et al-savvy, I think it’ll be a long slog before they can go completely independent. Let’s face it, you have to reach a certain volume selling your wares before you can survive off of it.

In any case, I wouldn’t look at this as so much a warm and fuzzy revolution, but rather the kind of circumstance that made the coming of Firefox so exciting… the ground is beginning to shift and the landscape is taking on new forms. If Firefox didn’t come around, who knows when Microsoft would have been forced to update its browser…! The same thing is true for music now with bands advocating for fans to “steal their music” (as Trent Reznor proclaimed, genius marketing if you ask me) or advocating against the use of DRM (since it effectively reduces the number of people who can experience a band’s music, limiting their potential viral spread — which is where bands get their volume from!).

Anyway, I see the commerce side of this. This isn’t just a “Free The Music From the Evil Tyrants” thing. This is changing the way that money is made and how it flows. An Open Media Web is about recirculation, redistribution and greater freedom of choice. Personally I hope this change (more openness and choice) brings about a Darwinian evolution where the crap begins to wane and bands are forced to actually crank out top shelf A-Sides in order to make it.

We’re still a long way off, but, as was the case in the last post we exchanged words on, I’m not sure we really disagree.

Steve follows:

Chris I have to admit I always like it when you join in any of the conversations I try and spark here at WinExtra. Both from the point of view that you as a firm Web 2.0 proponent bring to the table and because you always have some intelligent feedback. Maybe that is one of the reasons why your posts tend to either spark thoughts of my own or figure prominently in my posts.

I do agree that the ground is shifting under us but whether it will make any difference in the larger picture of society is highly debatable. In this bubble we call the early adopterism of all things cool we seem to suffer from a myopic view that people outside of the bubble will see things the same way.

With music, as much as folks who look at the current happenings hope that this will indeed foreshadow a larger trend outside of the bubble that will result in more bands being able to get out from under the thumb of music labels and become successful, the fact is I think the larger Internet world will only see the word FREE.

As for not disagreeing on points in discussions we have had in the past I think my next post today; which again was sparked by one of yours, may see us definitely at opposite ends LOL

Thanks for taking time and being a part of my contributions to the conversation chain

Finally:

I do think there is something to the insider bubbloptics effect that keeps us somewhat sheltered from reality. And as much as I try to empathize or imagine what the rest of the world might think about such things, let’s face it, I’m like Paris Hilton thinking that I can speak for Guantanamo inmates.

That said, I do have a view inside the bubble, and since I’m originally from New England, I have a curmudgeonly distrust of all things large that think they’re in charge. Usually that means the government or Big Business, and in this case, I’m talking about the collusive record labels.

Will there be some cataclysmic changing of the guard where every band joins up with a new RIAA (Recording Independents Association of Anarchists?) and goes Free Agent Nation on the former RIAA’s ass? Will all bands start giving away their music for free? Or better yet, seeding copies of their albums to the BitTorrent networks themselves? I doubt it.

BUT, what is important here is that Radiohead, NIN and the others are waking up from their somnambulant stupor and realizing, in Harrison Bergeron fashion, that they do indeed have free will and can take risks (instead of just pot shots) with their own careers if they so choose.

And since the actualization of choice is tantamount to establishing that one has free will, marketing-driven or not, the fact is, their model will become an inspiration for an entire generation that won’t just assume the only way to make it is by signing away your life and becoming a slave to the economics you decried in your post. Instead, they can consider alternative routes to success and satisfaction and, more importantly, more genuine or original ways to create and be involved with music, less as a Business and more as an Art.

So Mozilla wants to go mobile, eh?

As with baseball, on the web we have our home teams and our underdogs and our all-stars; we have our upsets, our defeats, and our glorious wins in the bottom of the ninth. And though I’m actually not much of a baseball fan anymore (though growing up in New England, I was exposed to plenty of Red Sox fever), I do relate my feelings for Mozilla to the way a lot of folks felt about the Red Sox before they finally won the World Series and broke the Curse of the Bambino: that is, I identify with Mozilla as my team, but dammit if they don’t frustrate me on occasion.

Tara wonders why I spend so much time on Mozilla when clearly I’m a perennial critic of the direction they’re headed in and the decisions that they make. But then Tara also didn’t grow up around vocal critics of the Red Sox who expressed their dedication and patronage to the team through their constant criticism and anger. It might not make sense, and it might not seem worth my time, but whatever the case, you really can’t be neutral about Mozilla and still consider yourself a fan. Even if you disagree with every decision that they make, they’re still the home team of the Open Web and heck, even as you bitch and whine about this or about that, you really just want to see them do well, oftentimes in spite of themselves.

So, with that said, let me give you a superficial summary of what I think about Mozilla’s recent announcement about their mobile strategy:

If you want to stop reading now, you can, but the details and background of my reasoning might be somewhat interesting to you. I make no promises though.
