by Barry Chudakov on January 31st, 2012

Search and Recognition

In Ingmar Bergman’s prescient 1966 masterpiece, Persona, a thin young boy awakens in a hospital. He pulls a single, ill-fitting sheet over himself and turns restlessly, tellingly, before taking up his eyeglasses to read a book. Then, by deliberate contrast, he reaches toward the camera lens. Next he walks over to blurry images of the faces of an actress (Liv Ullmann) and a nurse (Bibi Andersson), and his hand traces those images as though to understand them, to see if they are as real as they seem. The faces of the two women merge as the boy reaches out, trying to comprehend what he’s seeing.


The Boy

That boy is us. We are all waking up in Bergman’s Bed. Moving from the technology of the book to the lens and the image, we are all in the search and recognition phase of realizing how our tools and technologies alter and merge our identities. Whether it is IBM’s prediction that mind reading is no longer science fiction (computers will soon read a person’s brain responses to facial expressions, excitement and concentration levels, even thoughts, without the person taking any physical action) or the realization that ‘most runway models meet the BMI criteria for anorexia’, we confuse and conflate copy and original, fact and artifact, self and other.

Consider the seemingly obvious conclusion of a new multitasking study:

FaceTime, the Apple video-chat application, is not a replacement for real human interaction, especially for children.

According to a recent Stanford University study published in the journal Developmental Psychology, “Tween girls who spend much of their waking hours switching frantically between YouTube, Facebook, television and text messaging are more likely to develop social problems.” Clifford Nass, a Stanford professor of communications who worked on the study, explains: “No one had ever looked at this, which really shocked us. Kids have to learn about emotion, and the way they do that, really, is by paying attention to other people. They have to really look them in the eye.”

Really look them in the eye? That is the thin young boy’s dilemma, as it is ours.

Increasingly the eye is a surrogate, a screen, a simulation whose ‘reality’ is a merger of what we know and what appears cognate with what we know, but is a world apart. Susan Sontag in Illness as Metaphor said that our views of cancer and AIDS were actually entwined with and surrogates for our world views. The same may be said for search and recognition, two conjoined factors in our current world view. These two halves of a burgeoning dynamic are not external to a deeper understanding of the human condition in the digital age—they are fundamental to it.


Looking into each other’s eyes, Fedra79, Flickr, all rights reserved.



Search is now the I Ching of a billion lives. Google performs 34,000 searches per second; 2 million per minute; 121 million per hour; 3 billion per day; 88 billion per month (figures rounded). Those figures don’t include the lesser but still impressive numbers from Yahoo (3,200 searches per second) and Bing (927 searches per second). As we are searching, our very movements search us; our secrets are someone else’s business plan. DARPA is now soliciting proposals for biometric research with the intent of developing software-based systems that identify users based on their movements or habits while they use their computers or laptops. The agency is looking for innovative ways to identify a user by collecting behavior metrics, or what DARPA calls “cognitive fingerprints” or “human secrets.” The fingerprint could include eye movement, keystrokes, mouse tracking or even language usage patterns. While there are plenty of solid reasons to view these stealth search and recognition endeavors as a cause for alarm, as John Battelle writes in his excellent Search Blog, something else is going on here:

“Our tools have not caught up with our brains, and vice versa. We have shaped technology, and now it is shaping us – sure – but we can keep shaping it till we get the feedback loop right. So far, we simply have not – the music ain’t flowing, so to speak. In our relationship to what Kevin Kelly calls the technium, we’re awkward pre-teens.”

As awkward pre-teens, our restless searching may be part of a larger pattern of growth and understanding. The echoes of the word search are fascinating: from Latin circare “go about, wander, traverse,” from circus “circle,” search neatly expresses our wandering ways as navigators through the digital circus that surrounds us, from cloud computing, cyber-warfare, photonics and the reputation economy to biometric sensors, telematics, eyewear and skin-embedded screens. Wandering and searching, as teens do, we seek recognition, itself meaning “to acknowledge, know again, examine.” This re-cognition or knowing again is a deeper exercise than we realize. Our tools are changing this knowing again, deconstructing it. Where recognition once meant acknowledging and reaffirming, it now has an opposite meaning: we are fully suspicious, certain that trust of the other is a false positive, reconfirming whether we have ever seen this before.

In fact, we have not.

We are doing to ourselves what our software has done to our world: we are making ourselves miscellaneous. Our deconstructed selves, our fingerprints and retinas and saliva and gait, are just the beginning. Soon sensors will not only track matching fingerprints and faces, but will correlate them with heartbeats and bodily movements to make sure that everything checks out. We are already re-cognizing ourselves as a miscellany of facial characteristics and expressions, gait, voice patterns, behavioral patterns, linking and tracking patterns, and pure data. We already have the beginnings of social surrogacy. With fear in our hearts we ask: will identity itself devolve to surrogacy? Once highly sophisticated tools define us by our unique (and ever so deconstructed) characteristics, will we begin, as we have done with all our other tools, to think in the logic of the deconstruction, in the logic of the data-mining tool? Will we accept the logic of surrogacy?

Today we are still waking up, touching the screen to try to understand it, sensing that this merger of faces, one turning into the other, one identity merging with the other, is telling us something important. But what?


Identity_6338, kayotepeyote, Flickr, all rights reserved.



What we are coming to realize is the truth of the commonplace, “when you pick up one end of the stick, you pick up the other.” Or as Jeffrey I. Cole, director of the Center for the Digital Future, said after the Center’s 10-year study examining more than 100 major issues involved in the impact of online technology in the United States:

We believe that America is at a major digital turning point. Simply, we find tremendous benefits in online technology, but we also pay a personal price for those benefits. The question is: how high a price are we willing to pay?

As we use tools, they define and refine our view of whatever we use them to enable. In doing so, we adapt to the logic and grammar of the tool. This adaptation changes us. The change in our lives is what I call a Metalife. What is important here is not that we alter our lives in response to our tools: we are adaptive creatures whose evolution has depended upon—and thrived because of—that very adaptability. What is important is not the pattern but the recognition of that pattern.

We must gain a greater understanding of our response to form.

Here ancient wisdom traditions can guide us. The Buddhist paradox “form is emptiness; emptiness is form” or Philippians’ “peace … which passeth all understanding” invite us to consider the formless as the background to form:

“Out beyond ideas of right doing and wrong doing there is a field. I’ll meet you there.” (Rumi)

However, as we create ever more intriguing objects—an entire Internet of things—form seduces us handily. Of course, we are biological marvels of pattern recognition; we now measure our recognition in nanoseconds as we use Xbox Kinect or play a myriad of games that prepare us to fly airplanes or go into combat. But in our growing facility with pattern recognition and form manipulation we must pay greater attention to recognition itself. We do so by embracing the formlessness that is the context, the landscape, the very oxygen that our forms breathe. Most of us are blind to the pattern of the pattern: namely that we continuously, religiously, adapt to our forms and then, quantocius quantotius, reorder our world.

The question becomes how do we see ourselves, watch ourselves, understand ourselves while we are otherwise engaged? (For this reason I have said that Metalife is the life we live while we’re busy doing something else.) Now as we rapidly change our lives and what we value in response to ubiquitous tools, we encounter an imperative: we must come to know the formlessness that is the base of our constructions in order to truly understand our relation to form, to fully recognize the pattern we’ve been searching for so long.


Bodhisattvas, h.koppdelaney, Flickr, some rights reserved, Creative Commons license.

Note: In Buddhism, a bodhisattva (Sanskrit: बोधिसत्त्व bodhisattva; Pali: बोधिसत्त bodhisatta) is either an enlightened (bodhi) existence (sattva) or an enlightenment-being.



Biometrics in Argentina: Mass Surveillance as a State Policy

Face Recognition

In the future, can you remain anonymous?

John Battelle’s Search Blog

Jonathan Franzen Continues to Hate Technology

Private Snoops Find GPS Trail Legal to Follow

Tracking, Sniffing & Fingerprinting: The Metalife of Identity

Use an iPhone? Yup, The Government Tracks That

Who Owns Your Personal History?

