What can dogs tell us about the real-time web?

Ticka’s nose by Jimmy

Did you know that a beagle’s nose has 300 million receptor sites? Humans, in contrast, have about six million. And that changes everything in a dog’s perception of the world. It also explains why they sniff and snort as much as they do and have such a preoccupation with other dogs’ pee.

I discovered this and other fascinating doggie facts reading Cathleen Schine’s book review of Alexandra Horowitz’s “Inside of a Dog: What Dogs See, Smell, and Know”, published in the New York Times.

When Marshall Kirkpatrick called me today to discuss his upcoming ReadWrite Real-Time Web Summit and report, I used some of these tidbits to help explain the changes I see coming with the emergence of the real-time web.

Specifically, in the document-centric era of the web, humans largely adapted their behavior to fit the speed of the network, and chunked their thoughts into discrete, long-lived static blog posts and documents. But, as we’re seeing, Gutenberg’s reach into the web can only extend so far: the mores of physical media shall eventually give way to the seeping tendencies of data in the networked age.

If the speed of thinking — and the shape of our thoughts — has previously been confined to 93.5 square inches (the area of an eight-and-a-half by eleven inch sheet of paper), then our perception of reality must adjust to the scale of the web — to draw a comparison, as though we expanded our olfactory centers from six million receptors to 300 million.

Consider one consequence of “the mechanics of the canine snout”:

People have to exhale before we can inhale new air. Dogs do not. They breathe in, then their nostrils quiver and pull the air deeper into the nose as well as out through side slits. Specialized photography reveals that the breeze generated by dog exhalation helps to pull more new scent in. In this way, dogs not only hold more scent in at once than we can, but also continuously refresh what they smell, without interruption, the way humans can keep “shifting their gaze to get another look.”

Imagine that we were able to interpret information at the scale and rapidity that dogs parse scent. That’s where we need to go.

To put this into perspective, consider how long it takes you to read one page of text: three minutes? Five? If we had the equivalent of a dog’s sense of smell for our ability to consume information, we’d be able to consume FIFTY pages of information in the same amount of time that it currently takes us to consume ONE. (For shits and giggles, if you printed the Internet, it would take up around 700 square miles of US letter-sized pages.)
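If you want to check the arithmetic, here’s a throwaway Python sketch. The receptor counts come from the post itself; the three-minute page is my assumed round number for illustration, not a measured reading speed.

    # Back-of-the-envelope numbers from the post above.
    dog_receptors = 300_000_000     # receptor sites in a beagle's nose
    human_receptors = 6_000_000     # receptor sites in a human nose

    ratio = dog_receptors / human_receptors
    print(f"dog/human receptor ratio: {ratio:.0f}x")        # 50x

    minutes_per_page = 3  # assumed human reading speed
    print(f"pages consumable in {minutes_per_page} minutes "
          f"at dog-scale bandwidth: {ratio:.0f}")           # 50

    print(f"area of a US letter page: {8.5 * 11} sq in")    # 93.5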

The dog’s nose, therefore, is perfectly adapted to consume vast quantities of information by scent. To keep up in the real-time era of the web, we must imagine a similar augmentation of our own knowledge-processing abilities if we’re to cope with the deluge.

In the real-time era, information is no longer restricted to an arbitrary number of words that fit on a page — nor to the kinds of structures that grew up around those proportions. Now it is our capacity to consume and process information efficiently and effectively that limits us — partly explaining why we’re struggling to cope with all these “distractions”. Our brains are just doing what they were designed to do: process an intermittent flow of incomplete information and make rough cost-benefit calculations of possible decisions, while mitigating risk.

Lest we be overcome with information, we crave resolution and action. The crisis of the real-time web is how we confront an unending stream of undifferentiated information that all seems equally important and immediate, paralyzing us. In these cases, failing our own intrinsic resources, we look to surrogates (parents or other authority figures — celebrities suffice) to help us discard irrelevant information and get to the good stuff. We look to their reassurance to help us make a decision.

And this is why filters — natural, artificial, or social — will be so important in the real-time web.
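To make the idea of a social filter concrete, here’s a minimal Python sketch. Everything in it (the trust roles, the weights, the scoring rule) is a hypothetical illustration of the principle, not a description of any real system:

    from dataclasses import dataclass

    @dataclass
    class Item:
        text: str
        author: str

    # Hypothetical trust weights: how much we value an endorsement
    # from each kind of "surrogate" in our social graph.
    TRUST = {"close_friend": 1.0, "colleague": 0.6, "stranger": 0.05}

    def social_score(item, endorsements):
        """Score a stream item by how much we trust who surfaced it."""
        return sum(TRUST.get(role, 0.0) * count
                   for role, count in endorsements.items())

    stream = [
        (Item("Breaking: something loud", "feed_bot"), {"stranger": 40}),
        (Item("You should read this", "alice"),
         {"close_friend": 2, "colleague": 3}),
    ]

    # Surface the items our surrogates vouch for most strongly.
    for item, endorsements in sorted(stream,
                                     key=lambda pair: -social_score(*pair)):
        print(round(social_score(item, endorsements), 2), item.text)

The particular weights don’t matter; what matters is that ranking by trusted sources, rather than by raw recency, is what keeps a real-time stream from reading as undifferentiated noise.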

As advanced as we think we are, our animal brains are just not adapted for this kind of environment. And we’re going to need help — as well as new thinking.

To reinforce this point, let’s return to our canine friends.

Contrary to what “dog whisperer” Cesar Millan claims, dogs are not pack animals — at least not in the way that wolves are. Schine writes:

[...] Countering the currently fashionable alpha dog “pack theories” of dog training, Horowitz notes that “in the wild, wolf packs consist almost entirely of related or mated animals. They are families, not groups of peers vying for the top spot. . . . Behaviors seen as ‘dominant’ or ‘submissive’ are used not in a scramble for power; they are used to maintain social unity.”

The idea that a dog owner must become the dominant member by using jerks or harsh words or other kinds of punishment, she writes, “is farther from what we know of the reality of wolf packs and closer to the timeworn fiction of the animal kingdom with humans at the pinnacle, exerting dominion over the rest. Wolves seem to learn from each other not by punishing each other but by observing each other.”

So just as we must shake such ingrained, patriarchal theories in animal biology, we must also reconsider the models we have for thinking about, understanding, and relating to information in the flow of activity streams.

Dogs are able to consume vast quantities of information by scent — and that means that their perception of reality is fundamentally different from ours. Will we ever know what it’s like to smell a rose with 50 times more receptors? No, probably not — nor is it clear that we’ll be able to augment our native cognitive abilities to consume information 50 times faster than we do today. And yet the real-time web relentlessly marches forth, promising a massive shift in both our access and ability to cope with such huge amounts of data.

Presuming that we keep the brains we have, this has huge ramifications for interaction and user experience design. We cannot simply apply document-based interfaces to this new, more rapid and fluid space. Instead, we need to take inspiration from the field of game design (Halo would suck if it operated at anything less than real-time); we need to think about how social search fits in and can augment our ability to filter information and make better decisions; we need to consider how one can effectively project intentions onto the web to receive better, faster, automatic service, as Doc Searls’ Project VRM proposes; we need to take advantage of the always-on human network, as Amazon’s Mechanical Turk and Q & A service Aardvark do; and we should embrace the natural and native speed that comes with a more conversational and people-centric web.
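To illustrate that game-loop mindset, here’s a toy Python sketch. It is purely illustrative: the producer thread stands in for any real activity stream, and the point is the posture of the loop, which handles each event the moment it arrives instead of waiting for a page refresh.

    import queue
    import threading
    import time

    events = queue.Queue()

    def activity_stream():
        """Stand-in producer: in reality, a firehose of status updates."""
        for i in range(5):
            events.put(f"update {i}")
            time.sleep(0.2)
        events.put(None)  # sentinel: the stream has closed

    threading.Thread(target=activity_stream, daemon=True).start()

    # A game-style loop: no page refresh, no "load more". Each event
    # is handled the moment it arrives, like a frame in a game.
    while (event := events.get()) is not None:
        print("render:", event)

Trivial as it is, the difference in posture matters: the interface listens continuously instead of asking the user to keep pulling.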

If this review got me to realize anything, it’s that we should be careful about applying familiar and comfortable rubrics to the nature of information flows on the real-time web. Our brains are powerful and incredibly plastic, but the quantities of information available on the real-time web may bring us to the limit of our current cognitive abilities. Our challenge as designers, developers, and innovators is therefore either to modify the environment around us, or to build new tools and methods that will make us 50 times more capable of confronting this emerging reality.

13 Comments

  1. directeur said
    at 8pm on Sep 16th

    Interesting facts and ideas, Chris! I noticed your interest in distributed and realtime social networks, so I thought that you might be interested in socnode http://socnode.org ; the basic unit of distributed social networks. It’s a humble idea (and an implementation) that proves that it is doable. Please feel free to check out this site’s pages for more information and the http://socnode.org/code page for implementations and a demo.

  2. sean coon said
    at 9pm on Sep 16th

    i can’t help but think about how we, as a nation, developed hundreds of thousands of miles of highways to support the next big time saver (cars) and then used the same infrastructure to expand the agriculture/farming/food shopping industries into a cross country, processed, non-local venture. the economic and health ills that have resulted are pretty obvious today.

    so yes, the buck is in the work to design such an environment, but i want my brain going deeper, not “real-time” wider; less connectivity, not more.

    i guess that’s my personal conundrum.

  3. Robin said
    at 9pm on Sep 16th

    That is an awesome photo.

  4. Todd said
    at 5am on Sep 17th

    Let us not forget the (big) group of people who have been making use of real time data for 10 years…ebay bidders.

  5. Damon said
    at 7am on Sep 17th

    Hey Chris -

    Great post! Excellent that you picked up on social-search-ala-Aardvark as a kind of filter from information overload. Aardvark is all about having the machine do what the machine does well (eg, index lots of information to route questions quickly), and having the human do what the human does well (eg, focus attention on another human, and give a helpful sympathetic answer). And everything we do follows a heavily user-driven design process, so we’re explicitly attending to the vagaries of human cognition and social psychology in the virtual wild.

    The one thread you didn’t pick up on in your piece though was the genetic side: the fabulous Alexandra Horowitz is my sister! It’s no coincidence that Aardvark was originally founded as “The Mechanical Zoo”, bringing up a suite of animal-inspired projects :)

    Cheers,

    - Damon Horowitz

    Co-Founder and CTO, Aardvark

  6. at 8pm on Sep 17th

    Absolute cracker of a post (and photo) Chris! You actually pose a very good counter argument to my own thinking about how the trend of the realtime web will play-out.

    My personal feeling is (at the moment) that aggregators like Gist and the like will play a big role in assisting us with classifying and interpreting the information on the realtime stream by permitting us to deal with it on our own terms. The primary thing that I’m now thinking though is, man, Chris is right: Halo (or any FPS) would be pretty lame if we applied previously well-understood paradigms for game playing to it (say, a board game for instance).

    So given that the best way to streamline the information of the realtime web is in realtime, we have very interesting times ahead.

    I’m off now to check out the links you reference in the article.

    I guess it’s time for the realtime web to “break out of the page” as it were, or more importantly time for us to embrace the fact that it already has.

  7. sull said
    at 10pm on Sep 17th

    Nice post.

    I’m thinking that the importance of the Real-Time Web is not so much to give a normal human being a rapid stream of content/headlines/messages to stare at, and more to feed the intelligent filters so that the most appropriate and legitimate content bubbles back up and comes into view for us.

    You’ve touched on this. So this is just my way of saying it I suppose.

    I usually refer to Stocks and Flows, which Lee Lefever wrote about in 2004. I’ll comment further on my blog – http://vocal.ly/pd1

  8. at 5am on Sep 18th

    This post is right on the mark. You point out one of the problems we’ve been trying to solve for analytics: how do you move from text- or number-based representations in tables and charts to higher-density forms of representation, namely animations that represent a stream of data in time?

    Humans are actually quite good at processing vast amounts of visual information in a live stream. Just like dogs are at smelling things out. We maneuver through complex visual environments all the time whenever we’re out wandering through large crowds in a big city, or navigating terrain while hiking through a natural environment.

    The problem is that there hasn’t been enough commercial work in 2D animated interfaces for the mainstream because it is expensive and difficult. And the processing power of our hardware is just now becoming sufficient for this to work.

    What we now need are visionaries that work on these next generation animated interfaces and the frameworks to generate them. It is tempting to move to 3D right away, but I believe we need to first understand how 2D animations and simulations can be harnessed to achieve a higher bandwidth interface.

    We’ve all seen those 2D interfaces in sci-fi flicks. Now just imagine using them to visualize and navigate the real-time web. That’s one part of what Web 3.0 will be about. I believe with newer, faster browsers and HTML5 you will see this happen in the very near future.

  9. at 9am on Sep 21st

    Scent is so important for IA, UX, web copywriting, navigation, search… the list goes on. Great, great article. :)

  10. sull said
    at 11am on Sep 21st

    the email notifications from this comment module are poorly implemented. i do like backtype though. but i’m getting useless malformatted messages in my inbox, fyi.

  11. Kathy Sierra said
    at 11pm on Oct 13th

    What an amazing post, Chris. I heard this comparison once, only in reverse–my dog trainer explained my foxhound’s overwhelming need to follow her nose as “it’s like surfing the ’net for her”.

    I’m just happy to hear any discussion about the web (real-time or otherwise) that even HINTS at the notion of hard cognitive limits. No matter how much we like to think we’re being rewired, there’s only so much plasticity between our ears.

    I will say, however, that we ALREADY possess a tool that dramatically increases the speed with which we can process information. It’s called… visuals. A picture is worth 1024 words and all that. While not *everything* can be represented graphically, it’s no secret that our brains are highly tuned for taking in and filtering far more visual info than, say, processing written words.

    And while it’s always been true that visuals usually make communication more efficient and effective, the ability to represent data, info, and knowledge in graphic/visual ways is a skill the real-time web will demand of us all.

  12. at 11pm on Oct 13th

    Thanks for your thoughts, Kathy. I agree with you (as a visual designer!) but also think that pictures can be equally misleading if they “encode” the wrong words or do so in a way that doesn’t reflect truth. As they say, “beauty is in the eye of the beholder”, which suggests that there’s also a great deal of subjectivity in one’s visual interpretation of stimuli (as there is in all the senses, I suppose!).

    I think that the olfactory metaphor is an interesting one though, since dogs actually have this amplified sense, and it would also be hard for us to imagine going from two eyeballs to 300… (perhaps a dragonfly’s vision would be more apt?).

    Anyway, we’re still going to need better systems to augment the limited gray matter systems we have, and the only way for them to be better than Clippy, in my estimation, is through [social] network effects.

  13. Cornbredkennels said
    at 10am on Dec 24th

    Thanks for the thoughts. Do we know the validity/interplay of comparing depth of a sense with speed of reception? Fifty times sharper is still on the same clock. Do we want to think faster or deeper? I’m a husbandman of dogs, and I’ve made my choice.

