The DiSo Project is just over a year old. It’s remained a somewhat amorphous blob of related ideas, concepts and aspirations in my brain, but has resulted in some notable progress, even if such progress appears dubious on the surface.
For example, OAuth is a core aspect of DiSo because it enables site-to-site permissioning and safer data access. It’s not because of the DiSo Project that OAuth exists, but my involvement in the protocol certainly stems from the goals that I have with DiSo. Similarly, Portable Contacts emerged (among other things) as a response to Microsoft’s “beautiful fucking snowflake” contacts API, but it will be a core component of our efforts to distribute and decentralize social networking. And meanwhile, OpenID has had momentum and a following all its own, and yet it too fits into the DiSo model in my head, as a cornerstone technology on which much of the rest relies.
Tonight I gave a talk specifically about activity streams. I’ve talked about them before, and I’ve written about them as well. But I think things started to click tonight for people for some reason. Maybe it was the introduction of the mocked-up interface above (thanks Jyri!) that shows how you could consume activities based on human-readable content types, rather than by the service name on which they were produced. Maybe it was providing a narrative that illustrated how these various discrete and abstract technologies can add up to something rather sensible and desirable (and looks familiar, thanks to Facebook Connect).
In any case, I won’t overstate my point, but I think the work that we’ve been doing is going to start accelerating in 2009, and that the activity streams project, like OAuth before it, will begin to grow legs.
And if I haven’t made it clear what I’m talking about, well, we’re starting with an assumption that activities (like the ones in Facebook’s newsfeed and that make up the bulk of FriendFeed’s content) are kind of like the synaptic electrical impulses that make social networking work. Consider that people probably read more Twitter content these days than they do conventional blog posts — if only because, with so much more content out there, we need smaller, bite-sized chunks of information in order to cope.
So starting there, we need to look at what it would take to recreate efficient and compelling interfaces for activity streams like we’re used to on FriendFeed and Facebook, but without the benefit of having ever seen any of the services before. I call this the “zero knowledge test”. Let me elaborate.
When I say “without the benefit of having ever seen”, I primarily mean from a programmatic standpoint. In other words, what would it take to be able to deliver an equivalent experience to FriendFeed without hardcoding support for only a few of the more popular services (FriendFeed currently supports 59 out of the thousands of candidate sites out there)? What would we need in a format to be able to join, group, de-dupe, and coalesce individual activities and otherwise make the resulting output look human readable?
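To make the “zero knowledge test” concrete, here’s a minimal sketch of what joining, grouping, and de-duping activities might look like. The field names (`actor`, `verb`, `object_type`, `url`) and the sample data are purely illustrative assumptions on my part — no format has been settled on — but the point is that a consumer could coalesce activities without knowing anything about the services that produced them:

```python
from collections import defaultdict

def coalesce(items):
    """De-dupe by object URL, then group by (actor, verb, object type)."""
    seen, unique = set(), []
    for a in items:
        if a["url"] not in seen:  # same object seen via two services? skip it
            seen.add(a["url"])
            unique.append(a)
    groups = defaultdict(list)
    for a in unique:
        groups[(a["actor"], a["verb"], a["object_type"])].append(a)
    return groups

def render(groups):
    """Turn grouped activities into human-readable one-liners."""
    lines = []
    for (actor, verb, object_type), items in groups.items():
        past = verb + ("d" if verb.endswith("e") else "ed")
        noun = object_type if len(items) == 1 else object_type + "s"
        lines.append(f"{actor} {past} {len(items)} {noun}")
    return lines

# Hypothetical sample data, including one duplicate syndicated twice.
activities = [
    {"actor": "chris", "verb": "post", "object_type": "photo",
     "url": "http://example.com/p/1"},
    {"actor": "chris", "verb": "post", "object_type": "photo",
     "url": "http://example.com/p/2"},
    {"actor": "chris", "verb": "post", "object_type": "photo",
     "url": "http://example.com/p/1"},
    {"actor": "jyri", "verb": "favorite", "object_type": "video",
     "url": "http://example.com/v/9"},
]

for line in render(coalesce(activities)):
    print(line)  # e.g. "chris posted 2 photos"
```

Notice that nothing here mentions Flickr, Twitter, or any other service by name — grouping keys off the activity’s own structure, which is exactly the property a zero-knowledge consumer needs.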
Our approach so far has been to research and document what’s already out there (taking a hint from the microformats process). We’ve then begun to specify different approaches to solving this problem, from machine tags to microformats to extending Atom (or perhaps RSS?).
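As a rough illustration of the Atom-extension direction, here’s what an activity-annotated entry might look like and how a consumer could pull the verb and object type out of it. To be clear, the `activity:` namespace, its URI, and the element names below are made up for the sake of the sketch — nothing like this has been finalized:

```python
import xml.etree.ElementTree as ET

# A hypothetical Atom entry carrying activity metadata in its own
# namespace. The namespace URI and element names are illustrative only.
entry_xml = """
<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:activity="http://example.org/activity-ns">
  <title>chris posted a photo</title>
  <activity:verb>post</activity:verb>
  <activity:object-type>photo</activity:object-type>
</entry>
"""

ns = {
    "atom": "http://www.w3.org/2005/Atom",
    "activity": "http://example.org/activity-ns",
}

entry = ET.fromstring(entry_xml)
verb = entry.findtext("activity:verb", namespaces=ns)
obj_type = entry.findtext("activity:object-type", namespaces=ns)
print(verb, obj_type)  # post photo
```

The appeal of this route is that any Atom-aware aggregator keeps working unchanged, while activity-aware consumers get the structured verb and object type they need for the kind of grouping described above.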
Of course, we really just need to start writing some code. But fortunately with products like Motion in the wild and plugins like Action Stream, we at least have something to start with. Now it’s just a matter of rinse, wash and repeat.
25 thoughts on “Where we’re going with Activity Streams”
Looks like something messed up all your hyperlinks. Could you tidy this up and repost, plz?
A.HREF fail there Chris.
Wow, BIG TIME fail! HA!
Looks like MarsEdit stripped out all my brackets…. like they were being rationed or something. Thanks for the report guys!
Chris, you still have a broken link to the slideshare presentation (just below the mockup image).
Unlike some other social web standards in the making (say OAuth or PoCo), it seems like here you’re really diving into the “what”, not just the “how”. That’s a potential issue; consider how difficult it may be to foresee, for example, all the possible verbs that future services will implement.
If I look at a somewhat similar case, the XFN relation types, I find that a total failure. On the one hand, I have no idea how to describe very common relations (for example, people on my blogroll that inspire me, but I never met or can’t consider a contact), on the other hand there are no less than four romantic relations, probably the most transient relation type, and thus they are probably almost never used.
Would be interesting to hear how you think that could be tackled – open/extensible wiki list? some committee to take requests?…
@Ofer: Fixed that last niggling link. Thanks.
As for your question, yes, this is obviously a challenge.
Generically I think you do start out with the how, but clearly in this case, what’s valuable IS the “what”. Therefore, we’re taking the approach of doing lots of research and then starting with a small subset of what we find in the wild to include in the spec. From there we can indicate how to add new verb types, but like hashtags, it really depends on popularity and adoption — in some ways, if we do our work right, the types will emerge from actual behavior, rather than trying to guess at them in advance.
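One way to let verb types emerge from behavior, hashtag-style, without breaking existing consumers is to make unknown verbs degrade gracefully rather than fail. This is just a sketch of that idea — the verb names and templates are my own placeholders, not anything from a spec:

```python
# Verbs this consumer knows how to phrase nicely. Anything else falls
# back to a generic rendering instead of being dropped, so new verbs
# can spread through adoption without coordination.
KNOWN_VERBS = {
    "post": "{actor} posted a {object_type}",
    "favorite": "{actor} favorited a {object_type}",
}

FALLBACK = "{actor} did something ({verb}) with a {object_type}"

def render_activity(actor, verb, object_type):
    template = KNOWN_VERBS.get(verb, FALLBACK)
    return template.format(actor=actor, verb=verb, object_type=object_type)

print(render_activity("chris", "post", "photo"))
print(render_activity("jyri", "remix", "song"))  # unknown verb, still renders
```

If the popular verbs really do emerge from actual behavior, a consumer just promotes them from the fallback into its known set as they catch on.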
I also wouldn’t say that XFN is a total failure. I think that XFN can be extended however much you want — and be expressive as you like. If you’re good at popularizing your extensions, then they’ll catch on. Put another way, XFN started somewhere, and now at least it’s supported in some popular software (of course, I’ve said that rel-contact and rel-me should be sufficient).
Anyway, if you’d like to contribute some research into what’s out there, we’d definitely appreciate it!
>Similarly, Portable Contacts emerged (among other things) as a response to Microsoft’s “beautiful fucking snowflake” contacts API, but it will be a core component of our efforts to distribute and decentralize social networking.
I find this amusing on so many levels. Then again, it is consistent with the me-too-effort-but-we’re-better-because-we-are-OPEN culture that surrounds OpenSocial and related efforts.
I wonder what you think about the thread at http://groups.google.com/group/portablecontacts/browse_thread/thread/f4398fbb3c416f44?pli=1
Just for clarification, it wasn’t me that called Microsoft’s Contacts API a BFS… It was someone from Microsoft!
I think the point is that the open/community approach may not result in “better” solutions, but through rough consensus, we can arrive at solutions that many parties will adopt (or at least that’s been the case). While MSFT solutions gain adoption in the House of MSFT, we’ve seen less of their solutions spread through viral channels, at least recently, without large marketing campaigns behind them.
Chris, I really like where this is going. I’m starting to consider myself part of your ‘tribe.’ If you can find the right niche for me to help out with DiSo, please let me know. I’ve dabbled in programming here and there. If any of you have the time/generosity to get me up to speed on how everything works so far…
This is all new to us. We discovered the Twitter API and its powerful search. This Activity Stream data is more useful than we could ever think. Powerful stuff
I’m still failing to see the point of activity streams. you say “Maybe it was the introduction of the mocked up interface above (thanks Jyri!) that shows how you could consume activities based on human-readable content types, rather than by the service name on which they were produced.”
But you would still need to manually add the service URLs.
Or is this a matter of being able to filter certain actions?
Sidenote: blah failed to auth via OpenID, was it my provider? (chi.mp) I clicked “always trust” so it wasn’t my fault.
Interesting stuff. I’m hoping at some point this modeling will lead to intuitive interfaces that lead to connections into niche communities where global knowledge can be collected and communicated.
Wow, looks like a nice blog I just found. I didn’t get everything about the DiSo Project, but I’ll read more about it.