“At such times the person’s face clearly is something that is not lodged in or on his body, but rather something that is diffusely located in the flow of events in the encounter ….” Erving Goffman, “On Face-Work,” Interaction Ritual, p. 7.
I am in a phone meeting with a Hollywood producer. He’s just completed an animation sequence with one of the world’s most famous actors. Although the finished work was superb, we’re worried the actor’s animated face doesn’t look real.
Animated faces have a waxiness known as “the Botox syndrome,” which comes from the difficulty of translating the 43 identified facial muscles into a data format that uses approximately 80 nodal points. When measured and mapped, these points create a numerical code called a faceprint. This faceprint defines the face in a database that tracks actors’ moving features. Faces, like bodies in motion, tend to stay in motion, which makes faceprinting and animation tricky.
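The idea of a faceprint can be sketched in a few lines: reduce a face to a list of nodal-point coordinates, then compare two faces by the distance between their lists. Everything here is a toy illustration, assuming random points stand in for measured features; no real system is this simple.

```python
# A minimal sketch of the faceprint described above: a face reduced to
# ~80 nodal points, compared to another face by summed point-to-point
# distance. All names, values, and thresholds are illustrative assumptions.
import math
import random

NUM_NODAL_POINTS = 80  # approximate figure cited in the text

def make_faceprint(seed):
    """Generate a toy faceprint: 80 (x, y) nodal points in [0, 1)."""
    rng = random.Random(seed)
    return [(rng.random(), rng.random()) for _ in range(NUM_NODAL_POINTS)]

def distance(fp_a, fp_b):
    """Sum of Euclidean distances between corresponding nodal points."""
    return sum(math.dist(a, b) for a, b in zip(fp_a, fp_b))

face_a = make_faceprint(seed=1)
face_b = make_faceprint(seed=1)   # same seed: identical "face"
face_c = make_faceprint(seed=2)   # a different face

print(distance(face_a, face_b))   # 0.0 — identical faceprints
print(distance(face_a, face_c) > 0)
```

A real pipeline would extract the points from camera imagery rather than generate them, but the comparison step, measuring how far one numerical code sits from another, is the heart of both animation retargeting and recognition.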
I look out the window momentarily at the Bambusa chungii and cherry laurels bending in the Florida spring breezes. We finish the conference call with an unthinkable proposition: how many muscles do we have to build to make the next animation really real? I consider the long history from Vermeer’s camera obscura to the 32 surrounding cameras used to capture every angle of actors’ facial expressions in the video game L.A. Noire. What compels us to rival reality, reconfigure it to watch, show—and fool—ourselves? Presently an inkling dawns, a realization: emerging technologies transform the face into what semiotics calls a sign—something that stands for something else. Umberto Eco wrote, “The sign aims to be the thing, to abolish the distinction of the reference, the mechanism of replacement. Not the image of the thing but its … double ….”
Our doubles now beget doubles. EyeSee makes ordinary-looking mannequins with a camera embedded in one eye, feeding data into facial-recognition software formerly used to spot criminals—tracking the age, gender, and race of window shoppers. In security applications, your face is a double seeking data. At the 2012 RNC, undercover officers walked among demonstrators, took photos, and transmitted “real-time video of protesters as they moved about the streets.” Live video from smartphones fed into the 2012 RNC surveillance system, which included 94 high-definition cameras connected via a wireless network; each CCTV feed carried a geographic tag—all seeking a data match, aka recognition. As of February 2011, Face.com, a platform for facial recognition in photos uploaded via web and mobile applications, was scanning billions of photos monthly, tagging faces in those photos and tying them directly to available social networking information. It had “discovered” 18 billion faces across its API and Facebook applications; in June 2012 Facebook acquired the company and, with it, enhanced face recognition. As John Villasenor wrote in Forbes, “Technology is making it easier to modify and redistribute content.” Any face is simply content to modify and redistribute.
THE DIGITAL FACELIFT
We are no longer “the lords and owners of our faces.” They have exploded into digital smidgens. All the king’s horses and IT men couldn’t put these “vizards to our hearts” back together again. When the FBI spends $1 billion on a Next Generation Identification Program to track anyone by her face, or when an associate professor at Sichuan University in China uses face recognition technology to verify class attendance instead of calling the roll of his 100 students—we have not only leapt head-first into the chasm of hyperreality; we’re undergoing a digital facelift, tooling our most intimate self-display into guises that will look back at us from a new and unrecognized mirror. As we watch, survey, and animate each other with a host of rapidly evolving technologies, something new is coming to life, and what we took for granted for millennia may soon be unlike anything that has ever been.
Technology likes faces. They fit neatly in lens apertures; they frame exactly in webcams; they are stackable, shuffled like a deck of cards, matched and batched (the best systems have a match rate of 93 percent when searching a database of 1.6 million faces). By this measure, technology has determined that they are interchangeable, and since these are our faces, and our faces are a sign of our identity, we are too. Search and recognition are driven by the face as a transmissible unit of identity.
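“Matched and batched” comes down to a nearest-neighbor search: compare a probe faceprint against every record in a database and report the closest. A minimal sketch, assuming each faceprint is just a short list of numbers and similarity is Euclidean distance (real systems use learned embeddings and far larger galleries):

```python
# Toy database search: find the stored faceprint closest to a probe.
# Names and vectors are hypothetical illustrations, not real data.
import math

def best_match(probe, database):
    """Return (name, distance) of the closest faceprint in the database."""
    name = min(database, key=lambda n: math.dist(probe, database[n]))
    return name, math.dist(probe, database[name])

database = {
    "alice": [0.1, 0.9, 0.4],
    "bob":   [0.8, 0.2, 0.7],
    "carol": [0.5, 0.5, 0.5],
}
probe = [0.15, 0.85, 0.45]  # a slightly noisy capture of "alice"

name, dist = best_match(probe, database)
print(name)  # alice
```

The 93 percent figure cited above is a property of exactly this kind of search at scale: the closest record is usually, but not always, the right person.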
FACE AS RECONNAISSANCE
We would be wise to take re-cognition literally. Today your face is the recon device du jour. Hitachi Kokusai Electric has a surveillance camera that is able to capture a face and search up to 36 million faces in one second for a similar match in its database. A prototype called SideWays tracks the eyes of multiple customers simultaneously, enabling retailers to gather information about how consumers interact with products and in-store displays, then show them a message about what they’ve just looked at. But recon is merely the headline. Our faces have come undone. Once a semi-public design charrette for eyes, nose and other mirror-mediated features, your face is now a conscript in the border dispute between self and (tracking) other—part of a larger mission to digitize everything in existence into miscellany.
Paul Ekman developed the facial action coding system to enable programming this miscellany. Today our company and others use facial response intelligence—500+ lines of data for every 30 seconds of video watching—to gain insight into behaviors. As Rana el Kaliouby and colleagues wrote recently, “The human face is a powerful channel for communicating valence as well as a wide gamut of emotion states.”
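The data volume mentioned above is easy to picture: if facial response is sampled many times per second, 30 seconds of video yields hundreds of rows, each a set of action-unit intensities. The action-unit names, sample rate, and random values below are assumptions for illustration, not Sertain’s actual schema.

```python
# Hedged illustration of FACS-style data capture: sampling a face
# 24 times per second for 30 seconds produces 720 rows of (assumed)
# action-unit intensities — well over the 500+ lines cited in the text.
import csv
import io
import random

SAMPLES_PER_SECOND = 24   # "more than 24 times per second" per the text
DURATION_SECONDS = 30
ACTION_UNITS = ["AU01_inner_brow", "AU12_lip_corner", "AU45_blink"]

rng = random.Random(0)
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["frame"] + ACTION_UNITS)
for frame in range(SAMPLES_PER_SECOND * DURATION_SECONDS):
    writer.writerow([frame] + [round(rng.random(), 3) for _ in ACTION_UNITS])

rows = buffer.getvalue().strip().splitlines()
print(len(rows) - 1)  # 720 data rows for 30 seconds of video
```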
Channel, tool, artifact in the Internet of Things, the face is an involuntary data display that is emerging as a shared resource, like water. Whether for emotion response, facial recognition, or animation, our faces, features, and expressions are devolving into algorithmically captured data points. As of this writing Google Glass has its first face recognition app, MedRef for Glass. (Glass is a wearable computer with a head-mounted display being developed by Google in the Project Glass research and development project.) MedRef will help medical professionals scan patients’ faces and gain quick access to their medical histories. MedRef lets a medical professional view a patient’s folder, which might include photos, voice, and text notes; and since it’s shareable with Google Glass, other physicians and nurses can have immediate access to these records—and untold numbers of faces.
This new science is watching the face to track not only its contours, maladies, and meanings—but also its networks and social conduct. As our faces slip into miscellany, we come apart in shards; we are broken into splinters. The job for all of us is to ensure these pieces do not dis-integrate. Maintaining integrity—the quality of being well integrated with our intentions and ourselves—becomes a new and important role. Beneath our face muscles are our feelings, as well as our deepest secrets. We must watch the Watchmen, and our own internal observer, to ensure that these too do not come undone.
Barry Chudakov is the founder and principal of Sertain Research, a company that uses emotion recognition technologies to measure how our faces respond to stimuli, tracking universal facial expressions and emotions at more than 24 times per second. Based on the Facial Action Coding System (FACS), data captured from face and emotion recognition testing gives Sertain a powerful measurement tool that enhances research utility.