︎ scroll ︎ about ︎ artists

Alive in the Eyes:
How to Make Your Artificial Intelligence
Look Amazing In 5 Days


i. Introduction

From patterns in wood grain to craters on the moon, humans often succumb to their pareidoliac urge to see their face reflected in the world. Researchers in a lab at Harvard Medical School saw a startling yet familiar image emerge across their monitors when they hooked up their artificial intelligence XDREAM to a monkey named Ringo.2 After the researchers schooled XDREAM on over a million images from the standard training database ImageNet, the deep generative neural network was fed inputs from a genetic algorithm which monitored the stimulation of specific neurons in Ringo's brain.3 The images started out as soft, grey abstractions, but soon took shape as the algorithm altered the image in response to higher neural activity in the facial recognition centers of Ringo's brain. Out of the glitch emerged semi-human, semi-primate eyes above a white face mask, eyes reminiscent of those of Ringo's caretaker Diane (and the very researchers in the room). "If cells are dreaming, [these images] are what the cells are dreaming about," says neuroscientist Carlos Ponce, one of the researchers.4 The images certainly evoke a dream, in all their uncanny, startling strangeness.

Humans' ability to see images in their mind's eye has long been another mystery of human consciousness, a domain of religious imagination, philosophical musing, and personal experience. Now scientists are beginning to reach behind the veil, to see if they can understand how the imagination functions, which neurons are primed for which stimuli, who is dreaming about what. At their disposal are nascent neural networks, artificial intelligence technologies trained on vast, often dubious data sets like ImageNet, eager to excite each neuron and trigger its particular whim. As he sipped his juice, Ringo faced a flickering onslaught of strange visual forms. XDREAM methodically altered the colors, shapes, and textures, studiously attending to Ringo's unconscious desires.5
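The closed loop described above, in which a generative network proposes images and a genetic algorithm breeds the candidates that most excite the monitored neurons, can be sketched in miniature. Everything below is a hypothetical stand-in: `generate_image` replaces XDREAM's deep generative network, and `neuron_response` replaces the firing rate recorded from Ringo's brain. The real experiment's architecture and parameters differ; this only illustrates the evolutionary feedback principle.

```python
import random

LATENT_DIM = 16        # toy latent-code size; XDREAM's codes are far larger
POP, GENERATIONS = 20, 50

def generate_image(code):
    # Stand-in for the deep generative network: here, the identity map.
    return code

def neuron_response(image):
    # Stand-in for the monitored neuron's firing rate: a fixed "preferred
    # stimulus" the evolution should rediscover (higher is better).
    target = [0.5] * LATENT_DIM
    return -sum((p - t) ** 2 for p, t in zip(image, target))

def evolve():
    # Random initial population of latent codes (the grey abstractions).
    pop = [[random.uniform(-1, 1) for _ in range(LATENT_DIM)]
           for _ in range(POP)]
    for _ in range(GENERATIONS):
        # Score each candidate image by the neural response it evokes.
        scored = sorted(pop,
                        key=lambda c: neuron_response(generate_image(c)),
                        reverse=True)
        parents = scored[: POP // 2]            # selection: keep the top half
        children = [[g + random.gauss(0, 0.05) for g in p]  # mutation
                    for p in parents]
        pop = parents + children                # elitism: parents survive
    return max(pop, key=lambda c: neuron_response(generate_image(c)))
```

Because the best candidates are always carried over, the evoked response can only climb from generation to generation, which is how the soft abstractions sharpen into a preferred image.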

While scientists might seem far from reading minds, algorithms are already creating much of our visual stimuli, littering the virtual sphere with dots and semi-circles we recognize as human. From deep-fakes that embody political figures like puppet-masters to the promise of AI predicting art market trends, contemporary culture is outsourcing more and more of its imaginative labor to artificial decision-making. Architect and visual theorist Juhani Pallasmaa warns of this trend, asking "is our uniquely human gift of imagination threatened by today's over-abundance of images? Do mass-produced and computer-generated images already imagine on our behalf?"6 Pallasmaa argues that "mental imagery is the crucial vehicle of perception, thought, language and memory... without imagination we would not have our sense of empathy and compassion, or an inkling of the future."7 As we spend more time consuming mass-produced and commodified images, Pallasmaa fears we become pacified by inundation, which "threaten[s] our authentic capacities of imagination."8

The face the scientists saw, the face that Ringo imagined through XDREAM, the face that the neurons in Ringo's brain dreamt of, is hardly a face at all. But still, that naked rendering stares back at us, and I cannot help but meet its synthetic gaze. "The face speaks to me and thereby invites me to a relation," writes Emmanuel Levinas,9 who argues that the face is the locus of our moral obligation to the other. The face "resists possession,"10 "forbids us to kill,"11 and "destroys and overflows the plastic image it leaves me,"12 revealing the infinity of the other being, who is just as infinite as the self. What, then, can we make of the algorithmic image which takes root in us, stirring our sense for sentience, inviting us into relation? How can we understand this image, part Ringo's dreamscape, part neuro-stimuli, part data-composite; how can we reckon with this perplexing portrait of hybridity?

As artificial intelligence generates more of the visual sphere, we are primed for this reckoning. Ringo is not the only one contemplating a masked face; more and more of our social, poetic, and private imaginations are populated by faces that don't belong to specific people, portraying instead the image of our collective digital diffusion. As the social imagination is shaped by media inundated with fake accounts, our perception of what's relevant, important, or true becomes skewed by however many bots are employed by whichever influencing element. Artists dutifully respond, utilizing and mis-utilizing tools like neural networks and ImageNet to subvert their functions, revealing their hidden assumptions and inner workings. Until recently, our private imaginations were our only refuge from seeing machines' prying eyes. Now, as billionaires develop implantable brain-machine interfaces13 and XDREAM perfects her renderings, our once personal imaginings might soon inform our Amazon wishlists. Rather than serving as an alarm bell for a techno-dystopian future, Ringo's imaginings betray the already embedded nature of human existence, complexly interdependent with all other living and nonliving things. These portraits of hybridity prompt deeper questioning: what are the implications of a culture whose pictures aren't made by people, but concocted by a swarm of slippery statistics? What happens to our social imagination and political agency when artificial intelligence understands how we imagine better than we understand how it does? What is the significance of recognizing a face, anyway?

ii. The Social Imaginary

As artificially generated images fill up the social imaginary, Pallasmaa writes that "our experiences and behavior are in danger of being increasingly conditioned by images of unidentifiable origins and intentions."14 The social imaginary describes the "set of assumptions that allow us to imagine the society to which we belong... the body of underlying, inchoate notions that we broadly share and that we use to construct images and theories about the social whole."15 The primary influence on the contemporary American social imaginary is "the media we consume...[which] create[s] a composite idea of the world out there and our possible role in it."16 While the first half of the 20th century was marked by a dominant mass media that unified households around the country, the advent of the internet brought about "radical and postmodern movements that undermined traditional authority [and] brought down reality and truth along with it."17 The liberatory intention of these movements was undermined by the fact that authority and paternalism were not usurped by an egalitarian media ecology, but control was instead reallocated to "the realm of algorithms and user agreements, opaque corporate policy and the subtle tyranny of an unconsidered nature."18 Now, when we interact online, it is as likely we are meeting artificial intelligence as human intelligence. The constituents are elusive, as spam bots populate government surveys,19 police misidentify suspects based on racist facial recognition software,20 and deep-fakes superimpose your face onto a lewd video. The implications of the artificially injected social imaginary are coming into focus, in all their absurdist, destabilizing glory.

When Adobe Photoshop was first demonstrated in the early 1990s, John Knoll used a vacation picture of his topless wife to illustrate the ways images could be altered.

Maintaining the legacy of creating software primed for body modification, Adobe continues to hone its programs, now employing artificial intelligence to tweak selfies and further perfect the face.21 Adobe Sensei is the company's facial recognition and machine learning framework, which can draw data points from all your photos as you edit a single picture. The program can easily correct an accidental blink by pulling open eyes from another shot, and seamlessly edit them together. The program can also make curatorial decisions for you, choosing "the best shots based on visible faces, and the perceived technical quality of the shot." Subtler elements like the length and angle of a smile, or the "distortions and warping created when a selfie is taken just a few feet from a smartphone camera's lens" are easily corrected by the algorithm. While the implications of this software might seem harmless, outsourcing the drudgery of photo-editing to a droid also erases the individual's creative agency, passing off aesthetic decision-making to a data-determined norm.

The term deep-fake was coined by an early user on Reddit, who married the machine learning term "deep learning" and the name of the classic porn movie Deep Throat, rooting the terminology in the sexual control of women's bodies. While much of the public dismay about deep-fake technology centers around the potential disruption of fake political videos, 96% of deep-fakes are nonconsensual pornography,22 as amateurs draw from internet searches, social media accounts, or images of their exes, to make dirty videos from the comfort of their own homes.

Outside of personal harassment, deep-fakes are weaponized to influence vast numbers of people online. Sites like thispersondoesnotexist.com, which generates a never-ending stream of photo-realistic fake portraits, demonstrate the ease with which generative adversarial networks can render a believable human face. Beyond the blatant fabrication of artificial personalities, Hito Steyerl reminds us that some of the more effective strategies for swaying the social imaginary use the stupidest technologies. She cites the "very popular tool in elections to deploy Twitter armies to sway public opinion and deflect popular hashtags" as a low-grade technology, even as the "social implications of this kind of artificial stupidity... are already monumental in global politics."23 The reality is that stupid technologies are terrifyingly effective at subtly skewing people's opinions and undermining their faith in a perceivable truth. Researcher Aviv Ovadya warns of what he calls "reality apathy": in the face of an increasingly convoluted media landscape, people might give up on trying to make sense of what is true.24 As more and more fake faces flood the public square, the difference between reality and fantasy might prove so elusive that people start to wonder if anything was even true in the first place; did faces ever belong to people?

As deep-fakes threaten to infect the social imaginary with images of suspect origins and intentions, an "endlessly escalating war between those striving to create flawless deep-fake videos and those developing automated tools that make them easy to spot"25 is underway. As the technology gets better at creating seamless images, those working to detect fakes have found a "clever way to expose videos by looking for literal signs of life: a person's heartbeat."26 It seems digital detectives are taking a cue from EMTs, scanning images for a believable pulse. Although not detectable to the naked eye, subtle color shifts occur in our skin as blood pumps through our veins, allowing things like fitness trackers to measure our heartbeats. Similarly, "the color of your face exhibits the same phenomenon, subtly shifting in color as your heart endlessly pumps blood through the arteries and veins under your skin, and even a basic webcam can be used to spot the effect and even measure your pulse."27 Deep-fakes have not yet caught on to this subtle vital sign, often compositing images from many sources to create arrhythmic heart rates which betray their nonhuman subjects. It seems the living human face has a tell, at least until AI learns to replicate it.
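The detection idea described above, generally known as remote photoplethysmography, can be illustrated with a toy estimator: given the average brightness of the face region in each video frame, it searches the plausible human range for the heart rate whose sinusoid best correlates with the signal. This is a simplified sketch of the principle, not the actual tool's method; the function name and parameters are invented for illustration.

```python
import math

def estimate_pulse_bpm(brightness, fps):
    """Estimate heart rate (beats per minute) from a series of average
    face-pixel brightness values sampled at `fps` frames per second,
    by finding the strongest periodicity between 40 and 180 bpm."""
    n = len(brightness)
    mean = sum(brightness) / n
    signal = [b - mean for b in brightness]      # remove the DC offset
    best_bpm, best_power = 0, 0.0
    for bpm in range(40, 181):
        freq = bpm / 60.0                        # convert bpm to hertz
        # Correlate the signal with a sinusoid at this candidate frequency.
        re = sum(s * math.cos(2 * math.pi * freq * t / fps)
                 for t, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * freq * t / fps)
                 for t, s in enumerate(signal))
        power = re * re + im * im
        if power > best_power:
            best_bpm, best_power = bpm, power
    return best_bpm
```

A real face yields a clear peak in this range; a composite stitched from many sources yields the arrhythmic, peakless signal that gives the fake away.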

iii. The Poetic Imaginary

As the social imaginary is pumped full of algorithmically generated images, artists are grappling with these new tools' poetic potential. More and more, the images made by machines are also made for machines; surveillance footage to be scrutinized by software, social media for tuning algorithms to our every subtle emotional whim. While some artists see AI as a trend to cash in on,28 others take on the role of researchers and interrogators, learning about the potentials and limits of artificial sight and synthesis, and imagining where society is heading. As Trevor Paglen says, "seeing is always changing. Seeing is historical, seeing is cultural." Paglen and artist Adam Harvey turn to AI as the subject of their work, examining the underlying social and political ramifications of the nascent bots who, like themselves, endeavor to render a visual world.

In response to increased surveillance by facial recognition software, in 2010 artist Adam Harvey began thinking of ways people could undermine algorithms' assumptions about what a face looks like by donning an "anti-face."29 Inspired by the World War I camouflage tactic dazzle,30 Harvey created what he characterizes as a "technique and strategy," using make-up and hairstyling to confuse automated facial recognition by "break[ing] apart the continuity of a face."31 His piece Computer Vision Dazzle is a series of looks in which chunks of hair weave down the center of the face, jewels chart a line from cheek to chin, and black gradient cubes disrupt the jaw-line.32 Like World War I ship dazzle, the camouflage does not make the face disappear, but rather reshapes its contours so that machines trained on limited data sets will not recognize what they see. As facial recognition technologies become more versatile, avant-garde cosmetic solutions will need to plumpen the lips and darken the rouge, or their reverse peek-a-boo might be spotted.

Artist Trevor Paglen also reflects on the paradoxical dynamic of being governed by deterministic algorithms in the face of an increasingly unpredictable future in his piece From ‘Apple’ to ‘Anomaly.’ The piece consists of a massive mosaic of images sprawling across a curving wall, displaying some 30,000 photographs from ImageNet, the archive used to train many artificial intelligences on object identification (including XDREAM). The photographs are organized based on tags, with "‘porker’... adjacent to ‘ham and eggs’, which is adjacent to ‘abattoir’."33 ImageNet was created by researchers at Princeton and Stanford Universities, who borrowed its tags and nested category structure from WordNet, a massive lexical database. Starting with WordNet's categories, the researchers went on to scrape "the entire internet for images, in 2008 or 2009... collect[ing] tens of millions."34 They then turned to Amazon Mechanical Turk, a crowdsourcing platform through which hundreds of workers tagged the images with associative words. Of all the categories of images, it seems the human face was the most daunting to tag.

To AI, a face is just another object. Of the entire archive, "there are about 2,500 images of people. The further you get into the classification of people, the more suspect you get."35 Paglen's project points to the fact that neither technology nor the act of classification is apolitical; both reveal the underlying assumptions and prejudices of the society that creates them. While the classification of things like "apples" might be benign, some "classifications take on more untoward implications, such as those filed under ‘debtors’, ‘alcoholics’ or ‘unusual person’."36 Paglen points to ImageNet's historical predecessor, phrenology, the racist, misogynistic, pseudoscientific classification system which attributed character traits to biological and anatomical abnormalities visible in head shape. As Paglen notes, "the face of the criminal, the size of the cranium: these things that seemed to be part of history have very much come back."37 If these are the pictures and tags which train the allegedly objective AI, it is no surprise that surveillance systems wrongly identify more people of color,38 or that female applicants are automatically turned down for jobs by hiring algorithms.39

The truth is, as Paglen concludes, these images aren't supposed to make sense to humans, "these images aren’t meant for us".40 The images transform into fodder, feeding algorithms' desire to decipher once-elusive visual forms. As their appetite swells, "machines-seeing-for-machines [becomes] a ubiquitous phenomenon, encompassing everything from facial-recognition systems conducting automated biometric surveillance at airports to department stores intercepting customers’ mobile phone pings to create intricate maps of movements through the aisles."41 Our every step, swipe, and speculation sketches the human image in the mind of the surveilling machine, the most studious anthropologist of human nature.

iv. Conclusion

The once locating image of the human face seems to have become untethered, both from the human and from the act of facing. The two dots have been switched out, crowd-sourced, blended with other species, and no longer peer back at us. The semi-circular line has wrinkled, bent by competing algorithmic mood predictions, or become obscured by a white mask which shrouds its ambivalent disposition. The face that Ringo's neurons dreamt of is the confused portrait of a data-rich, knowledge-poor culture, whose splintering social systems, depleted cultural and community life, and derelict democracy barrel blindly towards a techno-determinist future in which every blink can be sold and every smile costs you. For a culture with so many images, we have outsourced our process of envisioning, and the predictions don't look good.

A healthy online community life would demand a great deal from each community member. With Levinas's moral obligation of the face obscured, we are left to imagine the reality of the person behind each tweet or selfie, and to perceive their intentions critically. Even behind the face of synthetic media, the intention is real. By greeting all media with attentiveness and care, we might peer beyond its surface and imagine its deeper message. Is this media meant to confuse, anger, or pacify me? How does this media provoke me, and how do I wish to be provoked? Without the locating eyes and mouth, we must rely on our ability to perceive the masked being behind the media, and learn to imagine and face this encounter.

The image of Ringo's neurons' dreams is a gift. When the scientists tried to look in Ringo's mind, they saw a picture of themselves looking, and there is something to be learned from their complex reflection. The entire premise of such an experiment– that using a monkey and an algorithm they could investigate the human imagination– is built on an assumption about what monkeys and algorithms and humans share. The images betray our amalgamated reality, that my ability to exist is dependent on a delicate assemblage of nonhuman entities, from microflora assimilating my nourishment, to Wikipedia nourishing my curiosity. Yet not all entities have equal agency, and most remain completely unseen. The visibility of artificial intelligence infiltrating the virtual sphere allows us to reconsider humans' role in relation to other intelligences. The images articulate a politics of hybridity, the blending of animal and machine, the ways bodies improvise based on past information, and the accidental artistry of unlikely collaborators. Rather than seeing these images as a small step towards mind-reading algorithms, why not take them as a prompt for a deeper reckoning with our hybrid nature?

1.) The title for this essay was authored by two artificial title generators at www.coolgenerator.com and www.tweakyourbiz.com.
2.)  Ed Yong, "AI Evolved These Creepy Images to Please a Monkey's Brain," The Atlantic, May 2nd, 2019, https://www.theatlantic.com/science/archive/2019/05/ai-evolved-these-trippy-images-to-please-a-monkeys-neurons/588517/
3.)  Till S. Hartmann, Gabriel Kreiman, Margaret S. Livingstone, Carlos R. Ponce, Peter F. Schade and Will Xiao, "Evolving Images for Visual Neurons Using a Deep Generative Network Reveals Coding Principles and Neuronal Preferences," Cell, Volume 177, Issue 4 (2019), https://www.cell.com/cell/fulltext/S0092-8674(19)30391-5.
4.) Ibid.
5.) Ibid.
6.) Juhani Pallasmaa, The Embodied Image (Chichester: John Wiley & Sons Ltd., 2011), 17.
7.) Juhani Pallasmaa, The Embodied Image, 10.
8.) Juhani Pallasmaa, The Embodied Image, 11.
9.) Emmanuel Levinas, Totality and Infinity, trans. Alphonso Lingis (Hingham: Martinus Nijhoff Publishers, 1979), 198.
10.) Emmanuel Levinas, Totality and Infinity, 197.
11.) Emmanuel Levinas, Totality and Infinity, 86.
12.) Emmanuel Levinas, Totality and Infinity, 50-51.
13.) Elon Musk's side project Neuralink is a neural implant that allows users to control a computer with their brain. https://neuralink.com/
14.) Juhani Pallasmaa, The Embodied Image, 17.
15.) Greg Jackson, "The Inner Life of a Sinking Ship," Hedgehog Review Vol. 20 No. 3 (Fall 2018), https://hedgehogreview.com/issues/the-evening-of-life/articles/the-inner-life-of-a-sinking-ship
16.) Ibid.
17.) Ibid.
18.) Ibid.
19.) Derek Johnson, "How AI-powered bots could drive the conversation on pending federal regs," The Business of Federal Technology, December 24th, 2019, https://fcw.com/articles/2019/12/24/deepfake-comment-spam-johnson.aspx
20.) Annie Brown, "Wrongfully Accused by an Algorithm," The Daily. Podcast audio, August 3rd, 2020, https://www.nytimes.com/2020/08/03/podcasts/the-daily/algorithmic-justice-racism.html?searchResultPosition=14
21.) Andrew Liszewski, "Adobe Thinks It Can Make Your Selfies a Lot Less Ugly With This Mystery App," Gizmodo, March 6th, 2017, https://gizmodo.com/adobe-thinks-it-can-make-your-selfies-a-lot-less-ugly-w-1794085624
22.) Andrew Liszewski, "A New Tool for Detecting Deepfakes Looks for What Isn't There: an Invisible Pulse," Gizmodo, September 29th, 2020, https://gizmodo.com/a-new-tool-for-detecting-deepfakes-looks-for-what-isnt-1845214263
23.) Hans Ulrich Obrist, "Making the invisible visible: Art meets AI," Kulturtechniken 4.0. (2018), https://www.goethe.de/prj/k40/en/kun/ooo.html
24.) Rob Toews, "Deepfakes Are Going To Wreak Havoc On Society. We Are Not Prepared," Forbes, May 25th, 2020, https://www.forbes.com/sites/robtoews/2020/05/25/deepfakes-are-going-to-wreak-havoc-on-society-we-are-not-prepared/?sh=542d6d977494
25.) Andrew Liszewski, "A New Tool for Detecting Deepfakes Looks for What Isn't There: an Invisible Pulse."
26.) Ibid.
27.) Ibid.
28.) Artist and computer scientist Dr. Ahmed Elgammal and his artificial intelligence AICAN not only create portraits but, through their company Artrendex, purport to prophesize future art trends by analyzing past art movements, selling their predictions to art collectors and investors.
29.) Adam Harvey, Computer Vision Dazzle Camouflage, https://cvdazzle.com/
30.) During World War I, dazzle camouflage was used to mislead enemies and ruin their ability to estimate a ship's speed and direction, by painting the ship with bold lines that disrupt its contours.
31.) Adam Harvey, Computer Vision Dazzle Camouflage, https://cvdazzle.com/
32.) Ibid.
33.)  Joe Lloyd, "Trevor Paglen — interview: ‘Everything is surveillance software at this point’," Studio International, September 30th, 2019, https://www.studiointernational.com/index.php/trevor-paglen-interview-everything-is-surveillance-software-at-this-point
34.)  Ibid.
35.)  Ibid.
36.)  Ibid.
37.)  Ibid.
38.) Annie Brown, "Wrongfully Accused by an Algorithm," The Daily, Podcast audio, August 3rd, 2020. https://www.nytimes.com/2020/08/03/podcasts/the-daily/algorithmic-justice-racism.html?searchResultPosition=14
39.) James Bridle, "New Ways of Seeing," BBC 4, Podcast audio, April 17th, 2019. https://www.bbc.co.uk/programmes/m000458l
40.) Joe Lloyd, "Trevor Paglen — interview: ‘Everything is surveillance software at this point’."
41.) Balasz Takac, "Trevor Paglen is Having a European Moment With Three Seminal Shows," Widewalls, September 26th, 2019, https://www.widewalls.ch/magazine/trevor-paglen-barbican-pace-geneva-fondazione-prada


Bridle, James. "New Ways of Seeing." BBC 4. Podcast audio, April 17th, 2019. https://www.bbc.co.uk/programmes/m000458l

Brown, Annie. "Wrongfully Accused by an Algorithm." The Daily. Podcast audio, August 3rd, 2020. https://www.nytimes.com/2020/08/03/podcasts/the-daily/algorithmic-justice-racism.html?searchResultPosition=14

Jackson, Greg. "The Inner Life of a Sinking Ship." Hedgehog Review Vol. 20 No. 3 (Fall 2018). https://hedgehogreview.com/issues/the-evening-of-life/articles/the-inner-life-of-a-sinking-ship

Johnson, Derek. "How AI-powered bots could drive the conversation on pending federal regs." The Business of Federal Technology, December 24th, 2019. https://fcw.com/articles/2019/12/24/deepfake-comment-spam-johnson.aspx

Hartmann, Till S., Gabriel Kreiman, Margaret S. Livingstone, Carlos R. Ponce, Peter F. Schade and Will Xiao. "Evolving Images for Visual Neurons Using a Deep Generative Network Reveals Coding Principles and Neuronal Preferences." Cell, Volume 177, Issue 4 (2019). https://www.cell.com/cell/fulltext/S0092-8674(19)30391-5

Harvey, Adam. Computer Vision Dazzle Camouflage. https://cvdazzle.com/

Levinas, Emmanuel. Totality and Infinity. Translated by Alphonso Lingis. Hingham: Martinus Nijhoff Publishers, 1979.

Liszewski, Andrew. "Adobe Thinks It Can Make Your Selfies a Lot Less Ugly With This Mystery App." Gizmodo, March 6th, 2017. https://gizmodo.com/adobe-thinks-it-can-make-your-selfies-a-lot-less-ugly-w-1794085624

Liszewski, Andrew. "Using AI Smarts, Photoshop Elements Can Now Automatically Open Closed Eyes in a Photo." Gizmodo, October 4th, 2017. https://gizmodo.com/using-ai-smarts-photoshop-elements-can-now-automatical-1819105248

Lloyd, Joe. "Trevor Paglen — interview: ‘Everything is surveillance software at this point’." Studio International, September 30th, 2019. https://www.studiointernational.com/index.php/trevor-paglen-interview-everything-is-surveillance-software-at-this-point

Obrist, Hans Ulrich. "Making the invisible visible: Art meets AI." Kulturtechniken 4.0. (2018). https://www.goethe.de/prj/k40/en/kun/ooo.html

Pallasmaa, Juhani. The Embodied Image. Chichester: John Wiley & Sons Ltd., 2011.

Steyerl, Hito. "Technology Has Destroyed Reality." The New York Times, December 5th, 2018. https://www.nytimes.com/2018/12/05/opinion/technology-has-destroyed-reality.html

Takac, Balasz. "Trevor Paglen is Having a European Moment With Three Seminal Shows." Widewalls, September 26th, 2019. https://www.widewalls.ch/magazine/trevor-paglen-barbican-pace-geneva-fondazione-prada

Toews, Rob. "Deepfakes Are Going To Wreak Havoc On Society. We Are Not Prepared." Forbes, May 25th, 2020. https://www.forbes.com/sites/robtoews/2020/05/25/deepfakes-are-going-to-wreak-havoc-on-society-we-are-not-prepared/?sh=542d6d977494

Yong, Ed. "AI Evolved These Creepy Images to Please a Monkey's Brain." The Atlantic, May 2nd, 2019. https://www.theatlantic.com/science/archive/2019/05/ai-evolved-these-trippy-images-to-please-a-monkeys-neurons/588517/