Race After Technology opens with a brief personal history set in the Crenshaw neighborhood of Los Angeles, where sociologist Ruha Benjamin spent a portion of her childhood. Recalling the time she set up shop on her grandmother’s porch with a chalkboard and invited other kids to do math problems, she writes, “For the few who would come, I would hand out little slips of paper…until someone would insist that we go play tag or hide-and-seek instead. Needless to say, I didn’t have that many friends!” But beyond the porch, things weren’t so cozy. As she gazed out the back window during car rides, she saw “boys lined up for police pat-downs,” and inside the house she heard “the nonstop rumble of police helicopters overhead, so close that the roof would shake.” The omnipresent surveillance continued when she visited her grandmother years later as a mother, her homecomings blighted by “the frustration of trying to keep the kids asleep with the sound and light from the helicopter piercing the window’s thin pane.”
Benjamin’s personal beginning sets the tone for her book’s approach, one that focuses on how modern invasive technologies—from facial recognition software to electronic ankle monitors to the metadata of photos taken at protests—further racial inequality. Instead of confining herself to the technical reasons that infrared soap dispensers don’t react to darker skin or that algorithms that use names to predict the ethnicity of job applicants exacerbate workplace discrimination, she reconfigures technologies as vessels of history, exploring how their circumstances produce their effects. Presented as a “field guide” and subtitled “Abolitionist Tools for the New Jim Code,” Race After Technology concerns itself with introducing the many technologies that aren’t as obtrusive and menacing as armed police flying overhead but that are equally domineering. A kind of critical cipher for the age of Big Data and mass surveillance, the book illuminates how cutting-edge tech so often reproduces old inequalities. As a guide to how good intentions still fail to stem bias and prejudice (and often even amplify them), Race After Technology also offers us an account of how machines and algorithms can be racist. Discriminatory technology always has a human source, she reminds us, but the trick is learning how to find the ghost lurking in every machine.
The modern study of the intersection of race and technology has its roots in the 1990s, when tech utopianism clashed with the racism of tech culture. As the Internet grew into a massive nexus for commerce and leisure and became the heart of modern industry, the ills of tech workplaces manifested themselves online in chat rooms, message boards, and multiplayer video games that were rife with harassment and hate speech. A range of scholars, activists, and politicians documented these abuses and attempted to combat them, but with little success. When the Simon Wiesenthal Center sent letters to Internet providers in 1996 protesting the rise of neo-Nazi websites, for example, the reply it received from a representative of the Electronic Frontier Foundation, a prominent digital rights organization, channeled a now commonplace mantra: “The best response is always to answer bad speech with more speech.” Similarly, media studies researcher Lisa Nakamura documented a dismissive comment in a study of the online game LambdaMOO. In response to a failed community petition to curb racial harassment, a detractor countered, “Well, who knows my race unless I tell them? If race isn’t important [then] why mention it? If you want to get in somebody’s face with your race then perhaps you deserve a bit of flak.”
Race After Technology belongs to this earlier tradition of protest and scholarship—books like the seminal collections Race in Cyberspace and Communities in Cyberspace—that responded to this dismissive environment by documenting the way the Internet altered and entrenched conventions around race and identity, as well as the way those shifts were dictated by the characteristics of different online spaces. From Byron Burkhalter exploring how the people in Usenet newsgroups relied on a host of conversational quirks and specialized knowledge to discern the race of other users to Judith Donath examining how online handles and signatures communicated the personalities and identities of their authors, these early texts made the now obvious case that the Internet was shaped by the larger world—for better and worse.
In Race After Technology, however, Benjamin expands this insight, examining not only the emergence of a racist Internet but also how it is produced by a tech sector and a set of commercial products (online and off-) that are themselves shaped by historical prejudices, biases, and inequalities. An anthropologist and sociologist by training, Benjamin has specialized in research on biotechnology and race. Her first book, People’s Science, explored a California stem cell initiative that silenced poor and disabled research subjects despite ostensibly being designed to recognize their concerns. She has applied her interest in the gaps between scientific ideals and practice to a wide range of subjects, like egg donation and biased algorithms, for outlets such as HuffPost, The Guardian, and the Los Angeles Times. Race After Technology bridges Benjamin’s research and her broader interest in increasing the public’s literacy in tech. Less rooted in a particular type of technology, the book focuses on the practices and rhetoric that shape how issues concerning race—in artificial intelligence (AI), algorithms, and data collection—are treated across the tech sector.
The prevalence of secondary sources in Race After Technology can make the book feel more like a literature review than a focused thesis, but things snap into place as Benjamin trains her roving eye on recurring forms across technologies, particularly codes, which in her telling encompass programming languages as well as names, addresses, and hashtags. Codes, she warns, “act as narratives” and “operate within powerful systems of meaning that render some things visible, others invisible, and create a vast array of distortions and dangers.”
Benjamin’s social and technical definition of codes serves as one of the book’s cruxes. Her interest isn’t simply to catalog all the oppressive tech out there; her goal is to make the dangers of tech legible, to teach us how to read technology through the lenses of history and experience.
To make clear how today’s technologies channel the heinous social systems of the past, she riffs on Michelle Alexander’s notion that we are living in a new era of Jim Crow. For her, racism is not just an untold chapter in the story of technology; it’s a constant presence, a leitmotif. Like Alexander, who coined the term “New Jim Crow” to capture how the carceral state was built from an existing racist blueprint under the auspices of neutrality and fairness, Benjamin uses her own coinage, the “New Jim Code,” to underline how central bias is to seemingly objective technological systems.
Benjamin makes her case by answering a set of questions in each chapter. Can robots, AI, and algorithms be racist? Yes. Are discriminatory “glitches” mistakes? No. Do unbiased technologies free us from our biases? Of course not. She grounds these assertions in wide-ranging arguments that reveal interesting patterns across a variety of contexts. To explain how computer programs perpetuate racism, for example, she looks at Beauty AI, a 2016 beauty pageant judged by a then-pioneering machine learning algorithm. Developed by a company based in Australia and Hong Kong, the algorithm strongly preferred contestants with lighter skin, choosing only six nonwhite winners out of thousands of applicants and leaving its creators confused. “The robots did not like people with darker skin,” they said matter-of-factly. Instead of dismissing the makers of Beauty AI outright, Benjamin looks to the deep learning process that produced the algorithm, in which the software was trained to sort photos according to labeled images tagged with information on facial symmetry, wrinkles, skin color, and other factors. Those labels, she notes, were encoded with biases about what defines beauty, tainting Beauty AI from the start. If deep learning is a theory of the mind, as Mark Zuckerberg claims, Benjamin asks, “Whose mind is it modeled on?”
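The mechanics are worth sketching, if only schematically. What follows is a hypothetical illustration with invented features, labels, and weights rather than Beauty AI’s actual data or code: a standard classifier is trained on ratings from judges who favor lighter skin, and although the finished model never mentions race, it reliably prefers the lighter of two otherwise identical faces.

```python
# Hypothetical sketch: biased labels propagate into a model's judgments.
# The features, labels, and "skin_tone" proxy are invented for illustration;
# this is not Beauty AI's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Invented training features: a skin-tone proxy (0 = darker, 1 = lighter),
# facial symmetry, and a wrinkle score.
skin_tone = rng.integers(0, 2, n)
symmetry = rng.normal(0.5, 0.15, n)
wrinkles = rng.normal(0.5, 0.15, n)
X = np.column_stack([skin_tone, symmetry, wrinkles])

# The labels come from hypothetical human raters whose standard of beauty
# favors lighter skin. The bias lives in the labels, not in the math.
y = ((0.6 * skin_tone + 0.4 * symmetry + rng.normal(0, 0.1, n)) > 0.55).astype(int)

model = LogisticRegression().fit(X, y)

# Two contestants identical in every respect except the skin-tone proxy:
darker, lighter = [0, 0.9, 0.2], [1, 0.9, 0.2]
print(model.predict_proba([darker, lighter])[:, 1])  # the model favors the lighter face
```

Nothing in the code above is malicious; the preference was decided long before the model was trained, in the labeling of the examples it learned from.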
That question resonates as she examines other technologies that rely on questionable input. Discussing automated soap dispensers, which viral videos have shown to be unresponsive to hands that have darker skin tones, Benjamin moves intuitively from a technical explanation to a historical one:
Near infrared technology requires light to bounce back from the user and activate the sensor, so skin with more melanin, absorbing as it does more light, does not trigger the sensor. But this strictly technical account says nothing about why this particular sensor was used, whether there are other options, which recognize a broader spectrum of skin tones, and how this problem was overlooked during development and testing…. Like segregated water fountains of a previous era, the discriminatory soap dispenser offers a window onto a wider social terrain.
What’s instructive here is how the human and mechanical components of the dispenser intermix in ways that only reaffirm existing inequalities: From humans come the machines that then change human life.
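Benjamin’s point can be made concrete in a few lines. The toy sketch below uses an invented detection threshold and invented reflectance values, not any real dispenser’s firmware, but it captures the failure mode she describes: a sensor calibrated only on hands that reflect plenty of infrared light never registers hands that absorb more of it.

```python
# Hypothetical sketch of the dispenser's failure mode. The threshold and
# reflectance values are invented for illustration, not taken from a real device.

DETECTION_THRESHOLD = 0.45  # calibrated during testing on lighter-skinned hands

def hand_detected(reflected_ir: float) -> bool:
    """Dispense soap only if enough infrared light bounces back to the sensor."""
    return reflected_ir >= DETECTION_THRESHOLD

# Skin with more melanin absorbs more infrared light, so less bounces back.
for label, reflectance in [("lighter skin", 0.7), ("darker skin", 0.3)]:
    print(label, "->", "dispenses" if hand_detected(reflectance) else "ignored")
```

The code is trivially “correct” on its own terms; the exclusion was settled upstream, in who was present when the threshold was chosen.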
Soap dispensers are, of course, only one part of everyday life, and for farther-reaching systems, such as the automated risk-assessment software that assigns recidivism scores to parolees, Benjamin highlights how much their metrics rely on tainted data. “Institutional racism, past and present, is the precondition for the carceral technologies that underpin the US penal system,” she writes. “At every stage of the process—from policing, sentencing, and imprisonment to parole—automated risk assessments are employed.” Again, her interest is in the social history of technology. If risk scores are built on data derived from excessive stop-and-frisks, racialized sentencing disparities, and targeted dragnets like the ones Benjamin saw growing up in Crenshaw, they can only exacerbate the justice system’s existing biases. This example is more theoretical than her deconstruction of the soap dispenser, but it speaks to the same issue: How can a technology correct the mistakes of the past when those mistakes are built into its design?
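The circularity is easy to see in a toy calculation. The formula and weights below are invented and do not come from any real risk-assessment tool; the point is only that a score built on arrest counts inherits whatever produced those counts, including where police were sent in the first place.

```python
# Hypothetical sketch: a "neutral" score that inherits biased inputs.
# The formula and weights are invented for illustration.

def risk_score(prior_arrests: int, age: int) -> float:
    """Toy recidivism score: more prior arrests and younger age raise the score."""
    return 0.5 * prior_arrests + 0.1 * max(0, 30 - age)

# Two people with identical conduct. One lives in a heavily policed neighborhood
# where stop-and-frisk turns the same behavior into more arrests.
lightly_policed = risk_score(prior_arrests=1, age=25)
heavily_policed = risk_score(prior_arrests=4, age=25)

print(lightly_policed, heavily_policed)  # the second person is scored as "riskier"
# The score faithfully reflects the data; the data reflect the policing.
```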
These cases illustrate how expansive tech criticism can be when outcomes and effects are privileged over intentions, and Race After Technology is at its liveliest when Benjamin is zipping across milieus, connecting disparate technologies, movements, politics, and social systems. As she puts it in one arresting passage about the broader historical context and implications of Black Lives Matter:
The Movement for Black Lives is implicitly an anti-eugenics movement. The aim is not just to stop premature deaths that result from police violence but to foster economic, social, and political power and resources that will sustain Black life more broadly. Fostering life, in turn, requires reckoning with the multiple ways science and technology can expose people to death—from Dr. J. Marion Sims’ experiments carried out on unanesthetized enslaved women and designed to hone gynecological techniques, to then-President Barack Obama’s 563 drone strikes that killed hundreds.
Benjamin convincingly makes this move from the affirmation of life embedded in Black Lives Matter to its negation in drone warfare and scientific racism. The links here are both rhetorical and real. Black Lives Matter, birthed during the Obama presidency, has organized around the disproportionately high black maternal death rate, but as the movement grew, it also took its cues from abroad. The consequences of Obama’s deployment of military force abroad influenced how Black Lives Matter organized against police abuses at home, in cities like Baltimore, Dallas, and Ferguson, Mo., whose police forces have become infamously militarized. In moments like this, the capacious humanitarianism at the heart of Benjamin’s project comes to the fore: The pursuit of digital equality is a global project.
What’s refreshing about Race After Technology is that it’s clearly written to preempt any attempts to downplay or avoid tech’s deep-seated inequalities. Instead of dragging tech titans like Zuckerberg, who once used a virtual reality (VR) headset to visit a disaster site, or academics like John McWhorter, who dismissed the notion of discriminatory design by saying, “No one at Google giggled while intentionally programming its software to mislabel black people,” Benjamin changes the rules of engagement. While she speaks frankly of companies like Facebook and Amazon, which “encode judgments into technical systems but claim that the racist results of their designs are entirely exterior to the encoding process,” she concentrates on their actions and practices rather than their stated beliefs. There are no supervillain tech bros in her account, no evil cabals of trolls launching denial-of-service strikes from the Dark Web, no innocent bots corrupted by the inherent evils of Twitter. There’s just prejudice and its pernicious adaptability.
Benjamin’s powerful argument helps us probe the inequalities under the surface of everyday technology and forces us to build a politics that focuses not just on the Silicon Valley robber barons but on society as a whole. Instead of reducing discriminatory tech to individual pathologies, she emphasizes processes and convergences that cut across all of American culture and economics. As often as she points out the homogeneity of the makers of discriminatory tech, she also notes that the inequality this tech produces is a structural problem far more than a personal one. In fact, in her shrewd telling, the homogeneity of race and gender in the tech sector can easily mask other forms of sameness, such as shared methods and educational backgrounds among coworkers, that reinforce today’s hierarchies:
We could expect a Black programmer, immersed as she is in the same systems of racial meaning and economic expediency as the rest of her co-workers, to code software in a way that perpetuates racial stereotypes. Or, even if she is aware and desires to intervene, will she be able to exercise the power to do so?
What Benjamin is reminding us is that the inequalities that technologies produce are sociological and political as well as cultural. Diversity can characterize a work environment while obscuring how decisions get made within it. However unlikely it may seem, a team of black programmers working under all-white management is just as capable of encoding anti-black racism as a white one.
This is an important point and answers a question that music critic Anupa Mistry recently raised in an essay for Pitchfork about incorporating a structural analysis into any representational politics. Is identity-based art radical, she asked, if a marginalized group creates it but it does not articulate or explore an accompanying politics? Or as she put it, “Is the representation that feeds the content mill really just a catfish?” As both Mistry and Benjamin note, if we’re not confronting power and systems, nothing is going to change.
Benjamin recognizes that confronting power and systems is not going to be easy. In discussing programs that use VR vocational training to prepare incarcerated people for the labor market after their sentences are served, she notes that this seemingly innovative tech fails to address an existing deterrent for convicts seeking work: background checks. “The labor market is already shaped by a technology that seeks to sort out those who are convicted, or even arrested, regardless of race,” she observes. “When such technological fixes are used by employers to make hiring decisions in the name of efficiency, there is little opportunity for a former felon, including those who have used VR [vocational training], to garner the empathy of an employer.” Likewise, the health care practice of hot-spotting, which uses geographic data to allocate resources to areas with more high-needs patients, may appear to address disparities in the quality and cost of health care. But as Benjamin writes, in its naive use of geographic data and a top-down definition of “high needs,” hot-spotting often employs the logic of racial profiling and is yet one more hindrance to creating a more egalitarian health care system.
What’s ultimately distinctive about Race After Technology is that its withering critiques of the present are so galvanizing. The field Benjamin maps is treacherous and phantasmic, full of obstacles and trip wires whose strength lies in their invisibility. But each time she pries open a black box, linking the present to some horrific past, the future feels more open-ended, more mutable. Technology, too, is a mutable category, giving activists and users who have struggled to repair broken systems new ways of understanding how discrimination manifests itself. This is perhaps Benjamin’s greatest feat in the book: Her inventive and wide-ranging analyses remind us that as much as we try to remove ourselves from our tools and regard them as external to our flaws, they are always extensions of us. As exacting a worldview as that is, it is also inclusive and hopeful. What happens in Crenshaw matters everywhere.