Society / February 27, 2024

AI Isn’t a Radical Technology

Actually, it’s not a technology at all—regardless of what overheated predictions about Sora have to say about it.

R.H. Lossin and Jason Resnikoff

With the release of OpenAI’s Sora—which turns text prompts into sophisticated videos—technological threats to democracy are once again at the center of US election coverage. While accurate information is crucial to democracy, ascribing the ability to determine the next election to a technology that has been public for a little over a week is at best premature. At worst, it reinforces a mythology of technological agency that causes far more confusion than any single technology possibly could.

Assigning world-changing power to a new, flashy tool inflates its influence—to the benefit of tech entrepreneurs and to the detriment of the rest of us. Tech insiders regularly proclaim the apocalyptic threat of their own inventions: Last year, over 350 industry executives, researchers, and public intellectuals signed an open letter declaring that “mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” These exaggerations are reflected in the way most people in advanced, industrial societies talk about technology in general. A recent New York Times feature describes AI as a “powerful technology” that “moves swiftly” and “evolves,” autonomously “developing its own kind of intelligence.” According to the paper of record, AI is a “revolution” that we might limit or rein in, but never control.

The problem is that none of this is true, and this inaccuracy promotes particular political interests that are bad for democracy and worse for economic equality.

Using the term “AI” to refer to a whole complex of infrastructures, human labor, resources, legislation, and regulatory bodies elides the many decisions required to create and implement machine learning at a large scale. In other words, it hides the political process. This allows those with the resources to design and invest in products sold as AI—products for activities such as border surveillance, mortgage approval, and criminal sentencing—to depict their particular social vision as the definition of human progress itself. Because these products collectively form the very infrastructure of so much of our civic and political participation, democracy is being shaped and constrained by a wealthy minority that includes figures such as Peter Thiel and Elon Musk who nurture truly disturbing fantasies of the greater good. This process, which cumulatively translates these interests into the very landscape of advanced capitalist life, is much more significant than the advent of something that allows us to generate intricate and polished cat videos.

Generative AI might produce impressive results at first blush, but it is not revolutionary; it does not present a dramatic historical or technical break with the past. It does not “move” or “evolve” on its own. “Limited memory” AI (that is, AI that can store experiences acquired over time) may have superseded the chess-playing “reactive machines” of the 1990s (i.e., AI with no “past,” only task-based responses), but it still requires active and passive human labor to change and develop. The much-lauded “intelligence” produced by various training models is limited and dependent on humans. According to Google, there are four types of artificial intelligence: the two just mentioned and two that have human-like decision-making and emotional capabilities (“theory of mind” and “self-aware”). It is the latter two that haunt the popular imagination, but they don’t exist yet. In the words of Microsoft Research principal researcher Kate Crawford, artificial intelligence “is neither artificial nor intelligent.” In other words, AI only “knows” anything in the same way that a calculator knows that 2 plus 3 is 5, which is why it cannot be counted on to learn and develop in the same way that a human would. ChatGPT’s recent public meltdown represents a hard limit of AI’s capability rather than a glitch in its programming.

Strictly speaking, AI is not a technology at all. Facial-recognition software, translation and speech recognition programs, scheduling algorithms, and predictive models might all have different machine learning tools running in the background, but it is patently absurd to refer to them as a singular technology. We do not call everything containing copper wires “wiring.”

Even the dubious claim that AI can mimic human intelligence does not distinguish it in the history of computing. In 1946, the year the Electronic Numerical Integrator and Computer (ENIAC), one of the first truly programmable electronic digital computers, was publicly unveiled, Lord Louis Mountbatten told the British Institution of Radio Engineers that the “electronic brain” would “extend the scope of the human brain.” He went on: “Machines…can exercise a degree of memory, and some are now being designed to exercise…choice and judgment.” The term “electronic brain” was a commonplace synonym for early computers, and those who used the term often meant it quite literally—the purpose of computers was to reproduce functions of human thought. “Now that the electronic brain and the memory machine are upon us,” Mountbatten warned in familiar language, “it seems that we are really facing a new revolution, not an industrial one but a revolution of the mind.”

Pronouncements such as these prevent criticism of the actual policies, motives, and practices of a diverse set of interests, from Big Tech to digital sweatshops to law enforcement agencies. Talking about AI as if it is a singular, evolving phenomenon makes relationships (say, between employers and employees, or police and citizens) appear as the effects of technology. For example, if some form of AI made parole violations easier to detect, that would be perceived as a technical improvement. The intensification of carceral practices is thus smuggled in under cover of mere mechanics. In other words, it disguises social, political, and economic issues as technological problems with ostensibly objective solutions. We often find the narrative of a unitary, evolving technology such as AI persuasive because it appeals to a broadly held belief that technical change is evidence of historical progress. Most people are therefore reluctant to criticize technology like AI and focus instead on its effects. As the philosopher Herbert Marcuse pointed out decades ago, “the apparatus to which the individual is to adjust and adapt himself is so rational that individual protest appears not only as hopeless but irrational.”

The promise that AI will replace human beings, like the promise of automation before it, plays a central role in the maintenance of employers’ power over workers. Fretting about whether humans risk being replaced by AI shifts attention away from what is actually happening: Human beings are being paid less for worse jobs. Executives in the automobile industry coined the word “automation” in the 1940s to help them fight their recently unionized workers. What was called “automation” did not necessarily make things more efficient, but it did often alter jobs enough that they were no longer covered by union contracts. At the time, workers reported that bosses brought in machines that, far from abolishing human labor, sped up workers and broke up good jobs into many bad ones. Today, employers are using “AI” in precisely the same way: to describe new, often low-tech, bad jobs, and to depict their efforts to get rid of good jobs as “progress.”

These are the notorious “ghost work” jobs, where workers perform labor that companies attribute to machines, like content moderation on social media or micro-tasks on Amazon’s MTurk, Appen, Clickworker, Telus, or CloudFactory. Workers on these platforms plug the gaps in computer systems. For example, French supermarkets recently installed AI that uses video surveillance systems to alert clerks when customers shoplift. However, as the sociologist Antonio Casilli has shown, the “AI” in this case consists of hundreds of workers in Madagascar (earning between €90 and €100 a month) watching surveillance footage and messaging stores when they observe theft.

Far from a new future for work, this mode of labor is a throwback to the at-home piecework of the earliest days of industrialism. If anything makes this work modern, it is the way exploitation masquerading as AI runs along lines of global inequality, with North American and European employers hiring poorly paid workers in South America, Africa, and Asia.

Since the beginning of industrialization, employers have used machines to break up skilled craft work into “unskilled” jobs. If computers have added anything new to this process, it is the ability of bosses to apply these old methods to white-collar work.

Unionized workers have recently given us one of the best examples of assessing the labor implications of machine learning realistically. After a 148-day strike, members of the Writers Guild of America won a significant amount of control over the use of AI in their work. Writers did not worry that text-generating software would replace them. Instead, they feared studios would dissolve the job of writer into that of re-writer: using computers to write a first draft of a script, or a section of a script, and paying writers, downgraded to re-writers, pennies on the dollar to render the (likely extremely rough) computer-generated text into final copy.

Most workers lack the ability to negotiate with their employer over what kinds of machines can and should be a part of the labor process. Unions, afraid that they will be depicted as being against progress, often do not bargain over technology at all. But as the WGA strike showed, the ability to bargain over the specific technologies that alter and organize places of employment is crucial to the maintenance of workers’ control—and the maintenance of good wages and jobs.

We do not need tech billionaires to write open letters about the existential threat of AI. Rather, ordinary people need the ability to exert control over their own public spaces, homes, and workplaces, and this includes having a say in technological “upgrades.” In order for this to happen on a large scale, the mythology of AI needs to be discarded for a far more mundane conversation about the uses of particular machines and an understanding that technology is neither inevitable nor synonymous with human progress in general.

This type of change will not be brought about by regulating AI. On the contrary, calling for the regulation of AI is in many ways good for the tech industry. According to this narrative, AI needs to be regulated because it is escaping human control. If this is the case, it must truly be intelligent—a revolution made by the tech industry. AI is not revolutionary. It is a way of portraying the control of powerful people over society’s material resources as rational, a way to reframe social and economic hierarchy as progress. We need to stop asking what we are going to do about AI and start asking why a few private individuals already hold so much power over the rest of us.

R.H. Lossin

R.H. Lossin writes about labor, libraries, technology, contemporary art, and American radicalism. Her work has appeared in New Left Review, Salvage, Boston Review, Jacobin, Art Agenda, The Brooklyn Rail, and The New York Review of Books. She holds a PhD in communications from Columbia University and teaches at the Brooklyn Institute for Social Research.

Jason Resnikoff

Jason Resnikoff is an assistant professor of contemporary history at the Rijksuniversiteit Groningen in the Netherlands and the author of Labor’s End: How the Promise of Automation Degraded Work.
