Clarifying my comments on Twitter’s annotations

Two weeks ago, Mathew Ingram from GigaOM pinged me via my Google Profile to ask what my thoughts — as an open web advocate — are on Twitter’s new annotations feature. He ended up posting portions of my response yesterday in a post titled “Twitter Annotations Are Coming — What Do They Mean For Twitter and the Web?”

The portion with my comments reads:

But Google open advocate Chris Messina warns that if Twitter doesn’t handle the new feature properly, it could become a free-for-all of competing standards and markups. “I find them very intriguing,” he said of Annotations, but added: “It could get pretty hairy with lots of non-interoperable approaches,” a concern that others have raised as well. For example, if more than one company wants to support payments through Annotations but they all use proprietary ways of doing that, “getting Twitter clients and apps to actually make sense of that data will be very slow going indeed,” said Messina. However, the Google staffer said he was encouraged by the fact that Twitter was looking at supporting existing standards such as RDFa and microformats (as well as potentially Facebook’s open graph protocol).

Unfortunately some folks found these comments more negative than I intended them to be, so I wanted to flesh out my thinking by providing the entire text of the email I sent to Mathew:

Thanks for the question Mathew. I admit that I’m no expert on Twitter Annotations, but I do find them very intriguing… I see them creating a lot of interesting momentum for the Twitter Dev Community because they allow for all kinds of emergent things to come about… but at the same time, without a sane community stewardship model, it could get pretty hairy with lots of non-interoperable approaches that re-implement the same kinds of features.

That is — say that someone wants to implement support for payments over Twitter Annotations… if a number of different service providers want to offer similar functionality but all use their own proprietary annotations, then that means getting Twitter clients and apps to actually make sense of that data will be very slow going indeed.

I do like that Ryan Sarver et al are looking at supporting existing schema where they exist — rather than supporting an adhocracy that might lead to more reinventions of the wheel than Firestone had blowouts. But it’s unclear, again, how successful that effort will be long term.

Of course, as the weirdo originator of the hashtag, it seems to me that the Twitter community has this funny way of getting the cat paths paved, so it may work out just fine — with just a slight amount of central coordination through the developer mailing lists.

I’d really like to see Twitter adopt ActivityStreams, of course, and went to their hackathon to see what kind of coordination we could do. Our conversation got hijacked so I wasn’t able to make much progress there, but Twitter does seem interested in supporting these other efforts and has reached out to help move things forward.

Not sure how much that helps, but let me know what other questions you might have.

I stand by these comments — though I can see how, spliced and taken out of context, they could be misconstrued.

Considering that we’re facing similar questions about the extensibility model for ActivityStreams, I can say from experience that guiding chaos into order is how “standards” actually evolve over time. How well that process is managed determines how quickly an effort like Twitter’s annotations will succeed.
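To make the payments example from my email concrete, here is a hypothetical sketch of the problem. Annotations were proposed as free-form, namespaced key/value metadata attached to a tweet, and the wire format below, along with both provider namespaces and all of their keys, is invented purely for illustration:

```json
[
  {"acmepay": {"amount": "5.00", "currency": "USD", "invoice": "abc123"}},
  {"bobspay": {"total_cents": "500", "cur": "usd", "ref": "abc123"}}
]
```

A Twitter client that wanted to render a “pay” button would have to special-case each provider’s vocabulary, which is exactly the slow going I was describing; a community-stewarded schema would let one parser handle both.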

Twitter’s approach of balancing going completely open against being centrally managed is a smart one, and I’m looking forward both to working with them on their efforts and to seeing what their developer community produces.

Social media versus Oil Can Henry’s

It’s the banal that determines whether social media will succeed in the mainstream, and today I had an experience that I think demonstrates how far away we are from achieving the ubiquitously useful social media experience we deserve.

Specifically, I got my oil changed.

The epitome of banal, right?

Yeah, except, see, I don’t really know anything about cars (yeah, I’m man enough to admit it… what? What?!) — and so when the Oil Can Henry’s technician suggested that I use synthetic motor oil instead of the conventional stuff I’d been using, I had no idea what to tell him — though the significant price difference definitely put me off.

Famous 20-Point Full-Service Oil Change

Pressed for an answer, I did what anyone in this situation would do (yeah right): I posted to Twitter and CC’d Aardvark (a question-answer service that follows my tweets):

Twitter / Chris Messina: I've got ~26K miles on a 2 ...

Within seconds @vark sent me a direct message confirming that they’d received my query and were on the case:

Twitter / Direct Messages

Of course by now the attendant needed an answer — I was there for an oil change after all — and stalling until I got a definitive answer would have just been awkward.

“Sure,” I said, “what the hell.”

Then the responses started rolling in.

The first came from Derek S. on Aardvark 3 minutes later:

I’m far from a car expert, but my experience with my Honda Fit is that Hondas are generally engineered to run on the basics… regular unleaded gas, regular oil, etc. My guess is it’s probably not worth it.

Hmm, okay, that’s basically what I thought too, but it sounds like Derek knows as much about cars as I do.

Then came the first response on Twitter from Kasey Skala:

@chrismessina synthetic is for 75k+

Hmm, well, that’s pretty definitive. Guess I got punk’d.

But then more answers came in. A total of 17 tweets overall:

Erik Marden:

@chrismessina synthetic costs more, but lasts longer. I always go for it.

Rex Hammock:

@chrismessina For the record, Castrol is 100% owned by BP. Just saying. For the record.

Nick Cairns:

@chrismessina castrol is a bp co

Jon Bringhurst:

@chrismessina If you go synthetic, keep in mind that time between oil changes can jump up to like 10k+ miles, depending on how you drive.

@joshsprague:

@chrismessina Started doing 15Kmile synthetic on my 98 Honda. Need to read up more, but think fewer oil changes = less oil used.

Mark Boszko:

@chrismessina Synthetic oil is always a good idea, in my experience. I’ve taken cars to nearly 300K miles with its help.

Sam Herren:

@chrismessina Only if you wanna keep synthetic for the rest of the time you own the car.  Can’t go back and forth.

@earsmack:

@chrismessina I’ve heard that’s about the time to do it. Advantage = less frequent oil changes but nary any cost savings in my experience.

Frank Stallone (2, 3):

@chrismessina I put only synthetic oils in my cars — check your manual you may find you were suppose to be putting that in from the start!

@chrismessina I just looked up your car – every engine that Honda built for it should use synthetic http://bit.ly/aRvtmX

@chrismessina I love Amsoil the most but I’ll use Castrol and Mobile 1 any day — very trust worthy brands

B J Clark:

@chrismessina yes, go with synthetic and then only change it once every 5k – 10k miles.

Todd Zaki Warfel (2):

@chrismessina primary benefit of synthetic is if you drive hard or want to go longer on oil changes (e.g. 6-10k).

@chrismessina it’s the only thing I ran in my Mini Cooper S Works Edition (street legal race car)

Osman Ali:

@chrismessina Mobil 1

Christopher Loggins:

@chrismessina Prob too late, but Castrol Syntec is good oil. Good viscocity, temperature range, and zinc. Would use vs conventional.

I’ve captured all the responses here to give you a sense for the variety of answers I received from respondents who were all presumably unaware of each other’s responses.

If you ask me, this is a pretty good range — and is an excellent demonstration of both social search and distributed cognition and illustrates why “social” can’t be solved by an algorithm (this is the stuff that Brynn‘s an expert on).

The reality is that my social network (including my 22,000+ Twitter followers and extended network through Aardvark) failed me. I probably made a premature decision to switch to synthetic oil — or at best, a decision without sufficient knowledge of the consequences (i.e. that once you switch, you really shouldn’t switch back). It’s not like it’s the end of the world or anything, but this is the kind of experience that I’d expect social networks to be really good at. And it’s not like I didn’t get good answers — they just weren’t there when I needed them.

And it’s all the more funny because I actually tweeted my plans two hours before I left… why didn’t the network anticipate that I might need this kind of information and prepare it in advance? Better yet: why didn’t my car tell me its opinion? (I’m half serious — it should be the authority, right?) Surely the answer I sought was out there in the world somewhere — why didn’t my network tee this up for me? (And no doubt I’m not the first person to find himself in this situation!)

The network responded, but only after it was too late. So the next time I’m confronted by a question like this, what’s the likelihood that I’ll turn to my network? What if I didn’t work on this stuff for a living?

Out of curiosity, I submitted this question to Fluther, Quora, and tried to cross-post to Facebook (since Facebook is working on its own Q&A solution) but that failed for some reason.

So far, I’ve received three responses on Fluther, none on Quora, and two on Aardvark. I also posted the full text of my question to Google and Bing but amusingly enough, only my Fluther question came up as a result.

My takeaway? We’ve certainly made progress in using social networks to help answer questions, but until our networks are able to provide better real-time or anticipatory responses, caveat emptor still applies.

Then again, YMMV.

My first five months at Google, by the numbers

Clarification: The first version of this post talked about my first six months at Google. Apparently my math skills haven’t improved since I took the job, however, as there are actually only five months between January and June. I regret the error.

Today marks five months since I joined Google on my birthday, January 7. It’s been an interesting, busy time for me.

Having never worked for a big company (where I define “big” as having more than 100 employees), working for Google is a lot like moving from the suburbs into a big city — I’m just constantly meeting new people and finding out about stuff I had no idea was going on.

Still, to put things in perspective, Google only has about 20,000 employees, whereas Microsoft has nearly 100,000 and HP has a whopping 300,000. Those numbers boggle my mind, but they’re useful to keep in mind when Googlers unironically call their employer a “startup”.

Speaking of big numbers, Eric Schmidt recently threw some around about the amount of data being created today relative to the sum total of all data created thus far. Essentially, between the beginning of time and 2003, five exabytes of information were created; since then, we’ve been creating something like five exabytes every two days (skip to 19:43 in this video to see the actual quote; it of course also makes sense that Google would need to rev its indexing approach to accommodate this influx of data).

With all that data, it occurred to me that I should figure out what my contribution is — not in gigabytes, but in terms of other social media metrics. And given how data-focused Google tends to be, I figured I’d focus on areas of growth.

So in the last five months, here’s my data:

Also, based on my Fitbit weekly averages, I’ve walked about 1,000,000 steps over the past 152 days (though it’d be so much cooler if they hurried up and offered an API!).

So, not completely exhaustive — and some data was more elusive to track down than other figures — but there’s a snapshot of various metrics from my first five months at Google.

Up and to the Right

I fully expect things to continue their “up and to the right” trajectory from here on out.

Two tastes better together: Combining OpenID and OAuth with OpenID Connect

Update: Einar Solbakken has translated this post to Danish.

OpenID Connect

On Friday, David Recordon, one of the original authors of OpenID, released a single-page specification for OpenID Connect, a concept that I outlined on this blog in January before I joined Google.

I’m particularly excited about this early proposal because it builds on all the great progress that the community has made recently on a litany of technologies, including OAuth 2.0, the Link-based Resource Descriptor Discovery format (LRDD), and its emerging JSON-based variant (JRD).

But I’m most excited about OpenID Connect because it forces the OpenID community to evaluate the progress we’ve made over the last three years (OpenID 2.0 was introduced in 2007) and to think critically about where we go next, and how we get there, given what the market has indicated it wants.

Rearticulating the problem

When Brad Fitzpatrick first created OpenID, he was looking to solve a fairly mundane problem: develop a protocol that made it possible for a commenter to claim her comments on someone else’s blog. For the commenter, she had a way to vouch for her words; for the blog owner, he had a way to establish the authenticity of the comments left by his readers. Given this context, all that was required in the early days of OpenID was a stable way to uniquely identify people — gathering additional profile information wasn’t as necessary because blog commenting forms already asked for — and often required — that commenters supply their name and email address.

Thus the basic architecture of OpenID concerned itself with establishing identity across contexts (i.e. “Bob” from Context A is the same “Bob” found in Context B), rather than with profile portability. This focus lent itself to privacy-preserving anonymous and pseudonymous transactions where identity could be established without the need to divulge personally-identifying information, or without forcing you to collapse the boundaries of separate social contexts.

This feature of OpenID (called directed identity) enabled you to hold a single account at, say, yahoo.com, but sign in to third party sites using “non-correlatable identifiers”. That is, this feature made it possible to maintain discreet profiles for logging in to other sites across the web without needing a different password to manage each.
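For readers who haven’t dug into the protocol mechanics, directed identity works because OpenID 2.0 lets the relying party ask the provider to choose the identifier rather than naming one up front. A rough sketch of the relevant authentication request parameters (the return_to and realm values are placeholders):

```
openid.ns         = http://specs.openid.net/auth/2.0
openid.mode       = checkid_setup
openid.claimed_id = http://specs.openid.net/auth/2.0/identifier_select
openid.identity   = http://specs.openid.net/auth/2.0/identifier_select
openid.return_to  = https://www.example-relying-party.com/openid/return
openid.realm      = https://www.example-relying-party.com/
```

Because claimed_id is the special identifier_select value, the provider is free to hand back whichever of my profiles I pick, or a freshly minted, non-correlatable identifier, in its response.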

The ability to “select [the] OpenID identifier” that I want to share with stackoverflow.com is how this feature manifests on yahoo.com:

Yahoo - Select your OpenID identifier

The economics of user-centric identity

Features like directed identity, however, present several challenges for users and OpenID relying parties.

For users, these features complicate the sign in flow by introducing new interface surfaces (as seen above) and management tasks. They also increase the cognitive burden of registration by requiring a user to pick a profile (or create a new one) to use in a given context. Additionally, the ability to refrain from disclosing profile information when registering for a new service may seem economically advantageous to the user at the outset (“Aha! I refuse to tell you my name or email address!”) but results in unintended disadvantages over time.

That is, because OpenID users share less information with third parties, they are perceived as being “less valuable” than email-based registrants or users that connect to their Facebook or Twitter accounts.

Why? Simply put: OpenID, by design, favors the user rather than the relying party. In contrast, technologies like Facebook and Twitter Connect emphasize the benefits to relying parties. So while it might seem like an inconvenience to custom-tailor your personal privacy settings on Facebook, the liberal defaults are meant to make Facebook users’ accounts more valuable to relying parties than other, more privacy-preserving account configurations.

So, as Twitter and Facebook have grown in popularity and the number of sites willing to outsource their account management to them has increased, both OpenID users and providers find themselves in a predicament: if they continue to restrict the flow of data, the number of OpenID relying parties will diminish in favor of Facebook- and Twitter-Connected sites. If instead OpenID users become more liberal with the data that they are willing (and able) to share with third parties, they will still need to rally support from relying parties to be recognized as valuable users. Thus making more data available from OpenID users is the first essential step that we must take to regain our footing in the marketplace.

But it won’t be enough.

To overcome both the real and perceived economic disadvantages of supporting OpenID, we need to make adopting OpenID exceedingly simple, straightforward, and economically advantageous — in real terms.

Why harmonizing “Connect” is important

I wrote my overview for OpenID Connect convinced that the “connect” verb (inherited from the Twitter and Facebook platforms) would help users distinguish between merely registering for a site and actually connecting to it: signing in and sharing some data about themselves. Even though Facebook abandoned the “connect” brand at F8 this year, I’m still of the mind that the “connect” verb suits our purposes, even if it’s going to take several years to catch on in common usage.

In any case, if OpenID solves the problem of providing a stable and unique way to identify someone, then the “Connect” in OpenID Connect layers in the ability to access data on someone’s behalf (via conventional APIs like Portable Contacts or ActivityStreams).

It’s this assemblage of authentication and authorization technologies that the industry is calling out for — as evidenced by the success of Facebook and Twitter Connect and more recently, Messenger Connect from Microsoft and upstart efforts like Diaspora that cite OpenID among the technologies they intend to leverage. Without a common standard, each of these efforts is inventing its own custom-tailored solution, retarding industry-wide progress and delaying the development of next generation social applications.

Thus, by leveraging OAuth as the core of OpenID Connect, we can build on the consensus and momentum that has been achieved in the marketplace, and by weaving in a standard and much-simpler discovery mechanism, we can preserve the decentralized design of OpenID. Presuming that Facebook, Twitter, Google, and others all become OpenID Connect providers, that means that site operators can implement one connect API and interoperate with potentially dozens of providers with a single, well-understood open source stack of technologies.
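For the sake of illustration, here’s a rough sketch of what that stack could look like from a relying party’s point of view: resolve the user’s identifier to a provider, run a plain OAuth 2.0 authorization code exchange, then fetch a profile. Every endpoint URL below is a placeholder (Recordon’s one-page proposal hadn’t pinned down endpoint names), the profile path merely gestures at a Portable Contacts-style API, and details such as how the token is presented varied across OAuth 2.0 drafts:

```python
import requests  # third-party HTTP library

# Placeholder endpoints; in a real deployment these would be found via
# discovery (host-meta / LRDD / JRD) against the user's email address or URL.
TOKEN_URL = "https://provider.example/oauth/token"
PROFILE_URL = "https://provider.example/poco/@me/@self"  # Portable Contacts-style, hypothetical

CLIENT_ID = "my-client-id"
CLIENT_SECRET = "my-client-secret"
REDIRECT_URI = "https://relying-party.example/connect/return"

def connect(auth_code: str) -> dict:
    """Exchange an OAuth 2.0 authorization code for a token, then fetch the profile."""
    # 1. The user approved access at the provider and was redirected back
    #    to REDIRECT_URI with ?code=...; swap that code for an access token.
    token = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": auth_code,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "redirect_uri": REDIRECT_URI,
    }).json()["access_token"]

    # 2. Use the token to read the user's profile; no request signing involved.
    return requests.get(PROFILE_URL, params={"access_token": token}).json()
```

The point isn’t the specific endpoints; it’s that the same handful of lines would work against any provider that implemented the profile, which is what makes a single open source stack plausible.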

Such an outcome would be good for relying parties (or “clients” in the parlance of Recordon’s proposal) as well as citizens of the web, who deserve a choice when it comes to entrusting a provider with their digital identity but are increasingly marginalized by “privacy-preserving technologies” that are not economically viable.

“Connect” also provides a convenient answer to the question of what kind of interface to present to the users who want to use their OpenID:

OpenID Connect

(Note that I also used the “connect” verb very intentionally in my social agent mockups for designing identity into the browser.)

If every site that supports third party authentication today added a “connect” button in place of their conventional “sign up” or “register” buttons and deployed a consistent user experience around picking a provider (some combination of NASCAR buttons and a type-anything email/URL field) that executed the OpenID Connect protocol, we’d be well along the path of decentralizing the social web, and restoring balance to the ecosystem.

What does OpenID stand for?

Of course, applying the OpenID brand to this solution isn’t something that I would do lightly, since the OpenID Foundation is the real authority for the trademark. However, at the foundation’s board meeting earlier this year at the OpenID Summit West, we unanimously decided to expand the scope of the OpenID Foundation’s mission to include advancing the technological underpinnings of internet identity in general, without regard for the existing OpenID technology.

This is a critical recasting of the role that OpenID and the OpenID Foundation plays in the ecosystem. Though there are other groups with similar mandates, the OpenID Foundation has decided to take on the internet identity opportunity as a general problem, rather than one narrowly scoped to disposable use cases.

In that light, it seems to me that we have come to a crossroads in the history of the foundation — however knowingly — and decided to take aggressive actions to advance the cause.

Without speaking for the foundation as a whole, I believe that it is essential that we are able to reconceive OpenID as the brand for decentralized digital identity. OpenID need not be thought of as merely an identity algorithm, but as a means for representing and conducting oneself online and across digital environments. Thus as the identity landscape undulates, the OpenID Foundation is in the position to articulate solutions that are not protocol-bound, but responsive to needs of the time, and able to adapt to and weather the shifting winds of technological progress.

After OpenID 2.0, OpenID Connect is the next significant reconceptualization of the technology that aims to meet the needs of a changing environment — one that is defined by the flow of data rather than by its suppression. It is in this context that I believe OpenID Connect can help usher forth the next evolution in digital identity technologies, building on the simplicity of OAuth 2.0 and the decentralized architecture of OpenID.

Two interviews on the open web from SXSW


Funny how timing works out, but two interviews that I gave in March at SXSW have just been released.

The first — an interview with Abby Johnson for WebProNews — was recorded after my ActivityStreams talk and is embedded above. If you have trouble with the embedded video, you can download it directly. I discuss ActivityStreams, the open web and the role of the Open Web Foundation in providing a legal framework for developing interoperable web technologies. I also explain the historical background of FactoryCity.

In the second interview, with Eric Schwartzman, I discuss ActivityStreams for the enterprise, and how information abundance will affect the relative value of data that is hoarded versus data that circulates. Of the interview, Eric says: “In the 5 years I’ve been producing this podcast, this discussion with Chris, recorded at South by Southwest (SXSW) 2010 directly following his presentation on activity streams, is one of the most compelling interviews I’ve ever recorded. I expect to include many of his ideas in my upcoming book, Social Marketing to the Business Customer, to be published by Wiley early next year.”

If you’re interested in these subjects, I’ll be speaking at Northern Voice in Vancouver this weekend, at PARC Forum in Palo Alto on May 13, at Google I/O on May 19, and at GlueCon in Denver, May 27. I also maintain a list of previous interviews that I’ve given.

What I like about Facebook’s “openness”

Let’s get something straight: in my last post, I didn’t say that Facebook was evil.

Careful readers would understand that I said that funneling all user authentication (and thus the storage of all identities) through a single provider would be evil. I don’t care who that provider might be — but centralizing so much control — the fate of our collective digital existences! — in the hands of a single entity simply cannot be permitted.

That said, I do want to say some nice things about the open things that Facebook launched at F8, because as an advocate of the open web, there are some important lessons to be had that we’d do well to learn from.

  • Simplicity: I have to admit that Facebook impressed me with how simple they’ve made it to integrate with their platform, and how clear the value proposition is. From launching OAuth 2.0 (rather aggressively, since the standards process hasn’t even completed yet!) to removing the 24-hour caching policy, Facebook made considerable changes to their developer platform to ease adoption, integration, and promote implementation. This sets the bar for how easy (ideally) technologies like OpenID and ActivityStreams need to become.
  • Avoiding NIH (mostly): In particular, Facebook dispensed with their own proprietary authorization protocol and went with the emerging industry standard (OAuth 2.0). I hope that this move reduces complexity and friction for developers implementing secure protocols, increases the number of available high-quality OAuth libraries, and leads to fewer new developers needing to figure out signatures and crypto when sometimes even the experts get these things wrong (see the sketch just after this list). By standardizing on OAuth, we’re within range of dispensing with passwords once and for all (…okay, not quite).
  • Giving credit: I also think that Facebook deserves credit for giving credit to projects like Dublin Core, link-rel canonical, Microformats, and RDFa in their design of the Open Graph Protocol. I’ve seen many other efforts that start from scratch when plenty of other initiatives already exist, simply because they’re unaware of them or don’t do their homework (one of which is the OpenLike effort!). I’m not sure I agree with the parts that Facebook extracted from these efforts, but as David Recordon said, we can fight over “where the quotes and angle-brackets should go“, but at the end of the day, they still shipped something that net-net increases the amount of machine-readable data on the web. And if they’re sincere in their efforts, this is just the beginning of what may emerge as a much wider definition of how more parties can both contribute to — and benefit from — the protocol.
  • Open licensing: Now that I’ve been involved in this area for a longer period of time, I’ve learned a simple truth: it’s hard to give things away, especially if you want other people to use them, even more so when some of those potential users are competitors. But that’s why the Open Web Foundation was created, and why David and I are board members. After setting up foundations over and over again, we decided that it needed to be easier to do! Now all the hard work of the Open Web Foundation’s legal committee is starting to pay off, and I am quite satisfied that Facebook has validated this effort. We’re still so early in the process that it’s not entirely clear how to make use of the Open Web Foundation’s agreement, but surely this will motivate us to find our own Creative Commons-like approach to proclaiming support for open web licensing on individual projects.
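
To illustrate the point in the OAuth bullet above about dispensing with signatures and crypto: with OAuth 2.0 as Facebook deployed it, an API call is just an access token attached to an HTTPS request, whereas OAuth 1.0a required assembling a signature base string, a nonce, a timestamp, and an HMAC on every call. A minimal sketch (the token value is obviously a placeholder):

```python
import requests  # third-party HTTP library

ACCESS_TOKEN = "token-obtained-from-the-oauth-2.0-dance"  # placeholder

# Facebook's Graph API, as launched at F8: the access token rides along as a
# plain parameter over HTTPS; the client computes no signatures at all.
me = requests.get("https://graph.facebook.com/me",
                  params={"access_token": ACCESS_TOKEN}).json()
print(me.get("name"))
```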

So, while I still have my reservations about Facebook’s master plan, they did do a number of things right — not everything, but I’m a tough customer to please. When it comes to the identity stuff, I’m definitely nonplussed, but that’s where my ideology and their business needs collide — and I get it.

What this means is that we all need to show more hustle out on the field and get serious. With Facebook’s Hail Mary at F8, we just got set back a touchdown, and a field goal just ain’t gunna cut it.

Understanding the Open Graph Protocol

All likes lead to Facebook

I attended Facebook’s F8 conference yesterday (missed the keynote IRL, but you can catch it online) and came away pondering the Open Graph Protocol.

In the keynote, Zuck (as Luke Shepard calls him) said:

Today the web exists mostly as a series of unstructured links between pages. This has been a powerful model, but it’s really just the start. The open graph puts people at the center of the web. It means that the web can become a set of personally and semantically meaningful connections between people and things.

While I agree that the web is transmogrifying from a web of documents to a web of people, I have deep misgivings about what the Open Graph Protocol — along with Facebook’s new Like button — means for the open web.

There are three elements of Facebook’s announcements that seem to conspire against the web:

  • A new format
  • Convenient to implement
  • Facebook account required

First, to support the Open Graph Protocol, all you need to do is add some RDFa-formatted metatags to the HEAD of your HTML pages (as this example demonstrates, from IMDB):
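The markup that was embedded here hasn’t survived, so here is a representative reconstruction based on the example Facebook used at launch (IMDb’s page for The Rock); treat the image URL and the exact namespace declaration as illustrative rather than verbatim:

```html
<html xmlns:og="http://opengraphprotocol.org/schema/">
<head>
  <title>The Rock (1996)</title>
  <meta property="og:title"     content="The Rock" />
  <meta property="og:type"      content="movie" />
  <meta property="og:url"       content="http://www.imdb.com/title/tt0117500/" />
  <meta property="og:image"     content="http://ia.media-imdb.com/images/rock.jpg" />
  <meta property="og:site_name" content="IMDb" />
  <!-- ... the rest of the page is unchanged ... -->
</head>
</html>
```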

Simple, right? Indeed.

And from the looks of it, pretty innocuous. Structured data is good for the web, and I’d never argue to the contrary. I’m skeptical about calling this format “open” — because it smells more like openwashing from here, but I’m willing to give it the benefit of the doubt for now. (Similarly, XAuth still has to prove its openness cred, so I understand how these things can come together quickly behind closed doors and then adopt a more open footing over time.)

So, rather than using data that’s already on the web, everyone that wants to play Facebook’s game needs to go and retrofit their pages to include these new metadata types. While they’re busy with that (it should take a few minutes at most, really), won’t they also implement support for Facebook’s Like button? Isn’t that the motivation for supporting the Open Graph Protocol in the first place?

Why yes, yes it is.

And that’s the carrot to convince site publishers to support the Open Graph Protocol.

Here’s the rub though: those Like buttons only work for Facebook. I can’t just be signed in to any social web provider… it’s got to be Facebook. And on top of that, whenever I “like” something, I’m sending a signal back to Facebook that gets recorded on both my profile, and in my activity stream.

Ok, not a big deal, but think laterally: how about this? What if Larry and Sergey wanted to recreate PageRank today?

You know what I bet they wish they could have done? Forced anyone who wanted to add a page to the web to authenticate with them first. It sure would have kept out all those pesky spammers! Oh, and anyone that wanted to be part of the Google index, well they’d have to add additional metadata to their pages so that the content graph would be spic and span. Then add in the “like” button to track user engagement and then use that data to determine which pages and content to recommend to people based on their social connections (also stored on their server) and you’ve got a pretty compelling, centralized service. All those other pages from the long tail? Well, they’re just not that interesting anyway, right?

This sounds a lot to me like “Authenticated PageRank” — where everyone that wants to be listed in the index would have to get a Google account first. Sounds kind of smart, right? Except — shucks — there’s just one problem with this model: it’s evil!

When all likes lead to Facebook, and liking requires a Facebook account, and Facebook gets to hoard all of the metadata and likes around the interactions between people and content, it depletes the ecosystem of potential and chaos — those attributes which make the technology industry so interesting and competitive. It’s one thing for semantic and identity layers to emerge on the web, but it’s something else entirely for all of the interactions on those layers to be piped through a single provider (and not just because that provider becomes a single point of failure).

I give Facebook credit for launching a compelling product, but it’s dishonest to think that the Facebook Open Graph Protocol benefits anyone more than Facebook — as it exists in its current incarnation, with Facebook accounts as the only valid participants.

As I and others have said before, your identity is too important to be owned by any one company.

Thus I’m looking forward to what efforts like OpenLike might do to tip back the scales, and bring the potential and value of such simple and meaningful interactions to other social identity providers across the web.


Please note that this post only represents my views and opinions as an independent citizen of the web, and not that of my employer.