Why YouTube should support Creative Commons now

I was in Miami last week to meet with my fellow screeners from the Knight News Challenge. While there, Jay Dedman and Ryanne Hodson, two vlogger friends whom I met through coworking, and I started talking about content licensing, specifically as it relates to President-Elect Barack Obama’s weekly address, which, if things go according to plan, will continue to be broadcast on YouTube.

The question came up: what license should Barack Obama use for his content? This, in turn, revealed a more fundamental question: why doesn’t YouTube let you pick a license for the work that you upload (and must, given the terms of the site, own the rights to in the first place)? And if this omission isn’t intentional (that is, no one decided against such a feature, it just hasn’t bubbled up in the priority queue yet), then what can be done to facilitate the adoption of Creative Commons on the site?

To date, few media sharing sites, save Blip.tv and Flickr (even if the latter only deals in “long photos”), have actually embraced Creative Commons to any appreciable degree. Ironically, of all sites, YouTube seems the most likely candidate to adopt Creative Commons, given its rampant remix and republish culture (a culture which continues to vex major movie studios and other fastidious copyright owners).

One might make the argument that, considering the history of illegally shared copyrighted material on YouTube, enabling Creative Commons would simply lead to people mislicensing work that they don’t own… but I think that’s a strawman argument that falls down in practice for a number of reasons:

  • First of all, all sites that enable the use of CC licenses offer the scheme as opt-in, defaulting to the traditional all rights reserved use of copyright. Enabling the choice of Creative Commons wouldn’t necessarily affect this default.
  • Second, unauthorized sharing of content that you don’t own is still illegal, regardless of the license attached to it, whether Creative Commons or traditional copyright.
  • Third, YouTube, and any other media sharing site, bears some responsibility for the content published on their site, and, regardless of license, reserves the right to remove any material that fails to comply completely with its Terms of Service.
  • Fourth, the choice of a Creative Commons license is usually a deliberate act (going back to my first point) intended to convey an intention. The value of this intention — specifically, to enable the lawful reuse and republishing of content or media by others without prior per-instance consent — is a net positive to the health of a social ecosystem inasmuch as this choice enables a specific form of freedom: that is, the freedom to give away one’s work under stipulations less restrictive than those copyright imposes by default, helping to establish a positive culture of sharing and creativity (as we’ve seen on SoundCloud and CC Mixter).

Preventing people from choosing a more liberal license conceivably restricts expression, inasmuch as it prevents an “efficient, content-enriching value chain” from forming within a legal framework. Put another way: because all material on YouTube is currently licensed under the most restrictive regime, every reuse of a portion of media must be licensed on a per-instance basis, considerably impeding the legal reuse of other people’s work.

. . .

Now, I want to point out something interesting here… related both to this moment in time and to government ownership of media. A recently released GAO report on energy efficiency carried with it the following statement on copyright:

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Though the GAO can’t simply put this work into the public domain, because of the potentially copyrighted materials embedded therein, this statement is about as close as you can get for an assembled work produced by the government.

Now consider that Obama’s weekly “radio address” is self-contained media, not contingent upon the use or reuse of any other copyrighted work. It bears considering what license (if any) should apply (keeping in mind that the government is funded by taxpayer dollars). If not the public domain, under what license should Obama’s weekly addresses be shared? Certainly not all rights reserved! Unfortunately, YouTube offers no other option, and thus, regardless of what Obama or the Change.gov folks would prefer, they’re stuck with a single, monolithic licensing scheme.

Interestingly, Google, YouTube’s owner, has supported Creative Commons in the past, notably with their collaboration with Radiohead on the House of Cards open source initiative and with the licensing of the Summer of Code documentation (Yahoo has a similar project with Flickr’s hosting of the Library of Congress’ photo archive under a liberal license).

I think that it’s critical for YouTube to adopt the Creative Commons licensing scheme now, as Barack Obama begins to use the site for his weekly address, because of the powerful signal it would send in the context of what I imagine will be a steady increase in the use, and importance, of social media and web video by government agencies.

Don Norman recently wrote an essay on the importance of social signifiers, and I think it underscores my point as to why this issue is pressing now. In contrast to the popular concept of “affordances” in design and design thinking, Norman writes:

A “signifier” is some sort of indicator, some signal in the physical or social world that can be interpreted meaningfully. Signifiers signify critical information, even if the signifier itself is an accidental byproduct of the world. Social signifiers are those that are relevant to social usages. Some social indicators simply are the unintended but informative result of the behavior of others.

. . .

I call any physically perceivable cue a signifier, whether it is incidental or deliberate. A social signifier is one that is either created or interpreted by people or society, signifying social activity or appropriate social behavior.

The “appropriate social behavior” that I think Obama should model in his weekly addresses is that of open and free licensing: introducing the world of YouTube viewers to an alternative form of licensing that would enable them to better understand, and to signal to others, their intent and desire to share, and to have their creative works reused, without the need to ask for permission first.

For Obama’s media to be offered under a CC license (with the license embedded in the media itself) would signal his seriousness about embracing openness, transparency and the nature of discourse on the web. It would also signify a shift towards the type of collaboration typified by Web 2.0 social sites, enabling a modern dialectical relationship between the citizenry and its government.
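Incidentally, “embedding the license in the media itself” has a machine-readable counterpart on the web: Creative Commons’ `rel="license"` convention, which CC-aware tools and search engines can read. A minimal sketch in Python (the work title and the choice of CC BY are my illustrative assumptions, not anything Change.gov has announced):

```python
# Sketch: emit machine-readable CC license markup (the rel="license"
# convention) for a page that embeds a video. The title and license
# choice below are illustrative placeholders.

LICENSE_URL = "http://creativecommons.org/licenses/by/3.0/"

def license_markup(work_title: str, license_url: str = LICENSE_URL) -> str:
    """Return an HTML fragment asserting a CC license for a work."""
    return (
        f'<p>“{work_title}” is licensed under '
        f'<a rel="license" href="{license_url}">CC BY 3.0</a>.</p>'
    )

print(license_markup("Weekly Address, November 2008"))
```

The point is less the markup itself than the signal: a one-line, machine-readable assertion of sharing intent that YouTube could emit automatically if it let uploaders pick a license.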

I believe that now is the time for this change to happen, and for YouTube to prioritize the choice of Creative Commons licensing for the entire YouTube community.

Parsing the “open” in Facebook’s “fbOpen” platform

Yesterday, as expected, Facebook revealed the code behind their F8 platform, a little over a year after its launch, offering it under the Common Public Attribution License (CPAL).

I can’t help but notice the glaring addition of Section 15: Network Use and Exhibits A and B to the CPAL license. But I’ll dive into those issues in a moment.

For now it is worth reviewing Facebook’s release in the context of the OSI’s definition of open source; of particular interest are the first three criteria: Free Redistribution, Source Code, and Derived Works. Arguably, Facebook’s use of the CPAL so far fits the OSI’s definition. It’s when we get to the ninth criterion (License Must Not Restrict Other Software) that it becomes less clear whether Facebook is actually offering “open source” code, or is simply diluting the term for its own gain, given the attribution requirement imposed in Exhibit B:

Each time an Executable, Source Code or Larger Work is launched or initially run (including over a network), a display of the Attribution Information must occur on the graphic user interface employed by the end user to access such Covered Code (which may include a splash screen).

In other words, any derivative work cleft from the rib of Facebook must visibly bear the mark of the “Initial Developer”, namely, Facebook, Inc., and include the following:

Attribution Copyright Notice: Copyright © 2006-2008 Facebook, Inc.
Attribution Phrase (not exceeding 10 words): Based on Facebook Open Platform
Attribution URL: http://developers.facebook.com/fbopen
Graphic Image as provided in the Covered Code: http://developers.facebook.com/fbopen/image/logo.png
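Concretely, a derivative work’s launch sequence would have to surface those values somewhere the end user can see them. A hypothetical sketch of such a startup banner (only the attribution values come from Exhibit B; how they are rendered — splash screen, footer, console — is my own invention):

```python
# Sketch: displaying CPAL Exhibit B "Attribution Information" each time
# a derivative work launches. The four values are quoted from Facebook's
# Exhibit B; the banner format itself is a hypothetical rendering choice.

ATTRIBUTION = {
    "copyright": "Copyright © 2006-2008 Facebook, Inc.",
    "phrase": "Based on Facebook Open Platform",
    "url": "http://developers.facebook.com/fbopen",
    "logo": "http://developers.facebook.com/fbopen/image/logo.png",
}

def attribution_banner(info: dict) -> str:
    """Assemble the attribution text a derivative work must show on launch."""
    return f'{info["phrase"]} ({info["url"]})\n{info["copyright"]}'

# Shown on the "graphic user interface employed by the end user",
# per Section 15 / Exhibit B.
print(attribution_banner(ATTRIBUTION))
```

Trivial to implement, but note what it means: every fork, forever, boots up wearing Facebook’s badge.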

Most curious of all is how Facebook addressed a long-held concern of Tim O’Reilly that open source licenses are obsolete in the era of network computing and Web 2.0 (emphasis original):

…it’s clear to me at least that the open source activist community needs to come to grips with the change in the way a great deal of software is deployed today.

And that, after all, was my message: not that open source licenses are unnecessary, but that because their conditions are all triggered by the act of software distribution, they fail to apply to many of the most important types of software today, namely Web 2.0 applications and other forms of software as a service.

And in the Facebook announcement, Ami Vora states:

The CPAL is community-friendly and reflects how software works today by recognizing web services as a major way of distributing software.

Thus Facebook neatly skirts this previous limitation in most open source licenses by amending Section 15 to the CPAL, explicitly covering “Network Use”:

The term ‘External Deployment’ means the use, distribution, or communication of the Original Code or Modifications in any way such that the Original Code or Modifications may be used by anyone other than You, whether those works are distributed or communicated to those persons or made available as an application intended for use over a network. As an express condition for the grants of license hereunder, You must treat any External Deployment by You of the Original Code or Modifications as a distribution under section 3.1 and make Source Code available under Section 3.2.

I read this as referring to network deployments of the Facebook platform on other servers (or offered as a web service): it forces both the release of any code modifications that hit the public wire and the display of the “Attribution Information” (as noted above).

. . .

So okay, first of all, we’re not really dealing with the true historic definition of open source here, but we can mince words later. The code is available, is free to be tinkered with, reviewed, built on top of, redistributed (with that attribution restriction) and there’s even a mechanism for providing feedback and logging bugs. Best of all, if you submit a patch that is accepted, they’ll send you a Facebook T-shirt! (Wha-how! Where do I sign up?!)

Not ironically, Facebook’s approach with fbOpen smells an awful lot like Microsoft’s Shared Source Initiative (some background). Consider the purpose of one of Microsoft’s three Shared Source licenses, the so-called “Reference License”:

The Microsoft Reference License is a reference-only license that allows licensees to view source code in order to gain a deeper understanding of the inner workings of a given technology. It does not allow for modification or redistribution. Microsoft uses this license primarily for technologies such as its development libraries.

Now compare that with the language of Facebook’s announcement:

The goal of this release is to help you as developers better understand Facebook Platform as a whole and more easily build applications, whether it’s by running your own test servers, building tools, or optimizing your applications on this technology. We’ve built in extensibility points, so you can add functionality to Facebook Open Platform like your own tags and API methods.

While it’s certainly conceivable that there may be intrepid entrepreneurs who decide to extend the platform and release their own implementations (which, arguably, would require a considerable amount of effort and infrastructure to duplicate the still-proprietary innards of Facebook proper — remember that the fbOpen platform IS NOT Facebook), they’d still need to attach the Facebook brand to their derivative work and open source their modifications under a CPAL-compatible license (read: not the GPL).

In spite of all this, whether Facebook is really offering a “true” open source product or not is really not the important thing. I’m raising these issues simply to put the move into a broader context, highlighting some important decision points where Facebook zagged where others might have zigged, based on its own priorities and aspirations. Put simply: Facebook’s approach to open source is nothing like Google’s, and it’s critical that people considering building on either the fbOpen platform or OpenSocial do themselves a favor and familiarize themselves with the many essential differences.

Furthermore, in light of my recent posts, it occurs to me that the nature of open source is changing (or being changed) by the accelerating move to cloud computing architectures, where the source code is no longer necessarily a strategic asset, but where durable and ongoing access to data is the primary concern (harkening to Tim O’Reilly’s frequent “Data is the Intel Inside” quip), and that Facebook is the first of a new class of enterprises that’s growing up after open source.

I hope to expand on this line of thinking, but I’m starting to wonder — with regard to open source becoming essentially passé nowadays — did we win? Are we on top? Hurray? Or did we bet on the wrong horse? Or did the goalposts just move on us (again)? Or is this just the next stage in an ongoing, ever-volatile struggle to balance the needs of business models that tend towards centralization against those more free-form, freedom-seeking and expanding models in which information and knowledge must diffuse, seeking out growth and new hosts in order to continue to become more valuable? Again, pointing to Tim’s contention that Web 2.0 is at least partly about harnessing collective intelligence, and that data sources that grow richer as more people use them are a facet of the landscape: what does openness mean now? What barriers do we need to disassemble next? If it’s no longer the propriety of software code, then is it time that we began, in earnest, to scale the walls of the proprietary data hoarders and collectors and take back (or re-federate) what might rightfully be ours — or what we should at least be given permanent access to? Hmm?

