Artificial Intelligence Is Failing Humans

When someone’s always watching, we lose our sense of self. 

Patricia J. Williams

July 16, 2019

Security cameras placed in front of a US flag ahead of Trump's visit to Beijing in 2017. (Reuters / Damir Sagolj)

In a recent New York Times editorial titled “Why We Should Stop Fetishizing Privacy,” entrepreneur Heidi Messer posited the ultimate list of techno-optimist clichés. Tech companies drive the economy, bringing health, wealth, jobs, and truth. Those who caution against a host of risks such as monopoly, hubris, and shortsightedness should be dismissed as ignorant “privacy evangelists.” Public regulation is bad because tech companies have “the talent and resources” to protect us against cyberwarfare and “foreign and criminal intrusion.” We should follow the example of “digital natives,” who “start with an awareness that their data isn’t private.” According to Messer, public oversight would only gum up the workings of all the utopian delights of this shiny new world. Given such blessings, why would we want to break up big tech companies?

There is much to unpack in Messer’s breezy dismissal of both unchecked monopoly power and the invasive apparatus of totalistic technological surveillance. For present purposes, however, I would like to think about how the concept of the self is affected by the widening use of algorithms to translate more and more bits of ourselves into numerical representations. Artist Trevor Paglen gives a succinct example: When people upload pictures of their kids, algorithms reading those photos feed invisible data sets in ways that may eventually influence something as apparently unrelated as those children’s health insurance. Similarly, if a teenager uploads a picture of herself having a beer, her underage drinking may be marked as information that can be sold, utilized by police departments whose scrutiny “will be guided by your ‘pattern of life’ signature,” warns Paglen. “When you put an image on Facebook or other social media, you’re feeding an array of immensely powerful artificial systems information about how to identify people and how to recognize places and objects, habits and preferences, race, class, and gender identifications, economic statuses, and much more.”

Recently, a 10-year-old in Maryland shared clearly marked play money with classmates while riding on his school bus. The driver contacted his supervisor. Police were called, and finally, the Secret Service—all to investigate the child for counterfeiting. While this is absurd on its face (and yes, the child was black), what’s more invisibly sad is that each time a person enters a database for having had contact with police, it will affect all sorts of other life chances, including risk assessments for employment, credit, and child custody.

That’s largely because artificial intelligence dispenses predictive computations based only on what it is trained—by humans—to see. Many universities now use Canvas, a course management platform on which students can discuss material or share lecture notes while their instructor is talking. When I was being trained to use the program, I noticed that the IT department had a screen on which the entire faculty was numerically ranked based on who generated the most comments during lectures. I was told that it would help us know what parts of a lecture stirred interest, but to me, it seemed only a whisker away from a Kim Kardashian standard of generated buzz as professorial achievement—a test I surely fail because I often tell my students to close their laptops.
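To see just how thin that metric is, consider a minimal sketch, in Python, of the sort of ranking that screen seemed to display: instructors ordered purely by raw comment volume. Everything below is hypothetical and invented for illustration; it is not Canvas's actual code or API.

from collections import Counter

def rank_by_comment_volume(comments):
    # comments: (instructor, comment_text) pairs logged during lectures.
    counts = Counter(instructor for instructor, _ in comments)
    # Sorted by sheer volume: the metric never asks whether a comment is
    # insightful, off-topic, or a complaint about the lecture.
    return counts.most_common()

log = [("Prof. A", "great point"),
       ("Prof. B", "lol"),
       ("Prof. B", "what page are we on?"),
       ("Prof. B", "so bored")]
for instructor, n in rank_by_comment_volume(log):
    print(f"{instructor}: {n} comments")

On this measure, the professor whose students are bored and confused outranks the one whose students are actually listening.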


Nevertheless, that kind of scoring is becoming a more important metric in all walks of life, particularly on social media. Are you getting hits? Are you being seen? I was tempted to run out and buy a glitter bustier and some feathers in order to raise my rank. The system incentivizes me to be splashy—in the eyes of a machine reading the abstraction of students representing me via the abstraction of how often their fingers strike the keys over the course of an hour.

What gets lost between life and the screen is that we are contextual beings. That’s where machines and AI fall dismally short. Machine learning has trouble distinguishing the serious from the ironic or a pig butt from pornography, while as humans, we can smell fear, taste excitement, and feel the heat of someone else’s humiliation. There are sensory dimensions that go into our calculations of whether to like or not like someone, to grow close or pull back, to rescue, forgive, or let die.

AI also threatens an important aspect of democratic identity: the right of self-invention. The word “persona” means “mask,” as in ancient theater. It is something through which one presents oneself to the world. In jurisprudence, the persona is both the protection of one’s inner privacy (or freedom to think without censorship) and the right to invent oneself outwardly. We present ourselves differently at the opera than the pool hall. We code-switch between accents and languages when speaking to our babies as opposed to our bosses. We seek amnesty for our misdeeds. But we lose that ability to compose ourselves fluidly and situationally if we are eternally confined by our last worst moment—if our weakest and most foolish acts are always fresh and foremost in the unnuanced mechanical brain of deep data.

W.E.B. Du Bois and Frantz Fanon described the stress of double consciousness experienced by black people living in white worlds—the phenomenon of always having to be aware of who’s watching. Life under these conditions is no longer one’s own creative, whimsical, and enjoyable construction. It becomes about a constant fear of offending. That kind of disembodying quality can be hard to bear, even when it’s praising you—just think of the obsessive gaze focused on celebrities like Princess Diana or John Lennon, who were ultimately adored to death. Yet it’s what tech giants like Google and Facebook seem intent on normalizing. Internalizing the panopticon has always been an extraordinary stress on marginalized populations like migrants and people of color. If this is the new norm for everybody, we will see those kinds of stresses magnified, stretching us ultimately to the breaking point. A merciful society gives us room to breathe.

Patricia J. Williams is University Professor of Law and Philosophy, and director of Law, Technology and Ethics at Northeastern University.

