Actually, it’s not a technology at all—regardless of what overheated predictions about Sora have to say about it.
With the release of OpenAI’s Sora—which turns text prompts into sophisticated videos—technological threats to democracy are once again at the center of US election coverage. While accurate information is crucial to democracy, ascribing the ability to determine the next election to a technology that has been public for a little over a week is at best premature. At worst, it reinforces a mythology of technological agency that causes far more confusion than any single technology possibly could.
Assigning world-changing power to a new, flashy tool inflates its influence—to the benefit of tech entrepreneurs and to the detriment of the rest of us. Tech insiders regularly proclaim the apocalyptic threat of their own inventions: Last year, over 350 industry executives, researchers, and public intellectuals signed an open letter declaring that “mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” These exaggerations are reflected in the way most people in advanced industrial societies talk about technology in general. A recent New York Times feature describes AI as a “powerful technology” that “moves swiftly” and “evolves,” autonomously “developing its own kind of intelligence.” According to the paper of record, AI is a “revolution” that we might limit or rein in, but never control.
The problem is that none of this is true, and this inaccuracy promotes particular political interests that are bad for democracy and worse for economic equality.
Using the term “AI” to refer to a whole complex of infrastructures, human labor, resources, legislation, and regulatory bodies elides the many decisions required to create and implement machine learning at a large scale. In other words, it hides the political process. This allows those with the resources to design and invest in products sold as AI—products for activities such as border surveillance, mortgage approval, and criminal sentencing—to depict their particular social vision as the definition of human progress itself. Because these products collectively form the very infrastructure of so much of our civic and political participation, democracy is being shaped and constrained by a wealthy minority that includes figures such as Peter Thiel and Elon Musk, who nurture truly disturbing fantasies of the greater good. This process, which cumulatively translates these interests into the very landscape of advanced capitalist life, is much more significant than the advent of something that allows us to generate intricate and polished cat videos.
Generative AI might produce impressive results at first blush, but it is not revolutionary; it does not represent a dramatic historical or technical break with the past. It does not “move” or “evolve” on its own. “Limited memory” AI (that is, AI that can store experiences acquired over time) may have superseded the chess-playing “reactive machines” of the 1990s (i.e., AI with no “past,” only task-based responses), but it still requires active and passive human labor to change and develop. The much-lauded “intelligence” produced by various training models is limited and dependent on humans. According to Google, there are four types of artificial intelligence: the two just mentioned and two that have human-like decision-making and emotional capabilities (“theory of mind” and “self-aware”). It is the latter two that haunt the popular imagination, but they don’t exist yet. In the words of Microsoft Research principal researcher Kate Crawford, artificial intelligence “is neither artificial nor intelligent.” In other words, AI only “knows” anything in the same way that a calculator knows that 2 plus 3 is 5, which is why it cannot be counted on to learn and develop in the same way that a human would. ChatGPT’s recent public meltdown represents a hard limit of AI’s capabilities rather than a glitch in its programming.
Strictly speaking, AI is not a technology at all. Facial-recognition software, translation and speech recognition programs, scheduling algorithms, and predictive models might all have different machine learning tools running in the background, but it is patently absurd to refer to them as a singular technology. We do not call everything containing copper wires “wiring.”
Even the dubious claim that AI can mimic human intelligence does not distinguish it in the history of computing. In 1946, the year the Electronic Numerical Integrator and Computer (ENIAC)—one of the first truly programmable electronic digital computers—made its public debut, Lord Louis Mountbatten told the British Institution of Radio Engineers that this “electronic brain” would “extend the scope of the human brain.” He went on: “Machines…can exercise a degree of memory, and some are now being designed to exercise…choice and judgment.” The term “electronic brain” was a commonplace synonym for early computers, and those who used it often meant it quite literally—the purpose of computers was to reproduce functions of human thought. “Now that the electronic brain and the memory machine are upon us,” Mountbatten warned in familiar language, “it seems that we are really facing a new revolution, not an industrial one but a revolution of the mind.”
Pronouncements such as these deflect criticism of the actual policies, motives, and practices of a diverse set of interests, from Big Tech to digital sweatshops to law enforcement agencies. Talking about AI as if it were a singular, evolving phenomenon makes relationships (say, between employers and employees, or police and citizens) appear to be the effects of technology. For example, if some form of AI made parole violations easier to detect, that would be perceived as a technical improvement. The intensification of carceral practices is thus smuggled in under cover of mere mechanics. In other words, the narrative disguises social, political, and economic issues as technological problems with ostensibly objective solutions. We often find the story of a unitary, evolving technology such as AI persuasive because it appeals to a broadly held belief that technical change is evidence of historical progress. Most people are therefore reluctant to criticize technology like AI and focus instead on its effects. As the philosopher Herbert Marcuse pointed out decades ago, “the apparatus to which the individual is to adjust and adapt himself is so rational that individual protest appears not only as hopeless but irrational.”
The promise that AI will replace human beings, like the promise of automation before it, plays a central role in the maintenance of employers’ power over workers. Fretting about whether humans risk being replaced by AI shifts attention away from what is actually happening: Human beings are being paid less for worse jobs. Executives in the automobile industry coined the word “automation” in the 1940s to help them fight their recently unionized workers. Automation did not necessarily make things more efficient, but it did often alter jobs enough that they were no longer covered by union contracts. At the time, workers reported that bosses brought in machines that, far from abolishing human labor, sped workers up and broke good jobs into many bad ones. Today, employers are using AI in precisely the same way: to describe new, often low-tech, bad jobs, and to depict their efforts to get rid of good jobs as “progress.”
These are the notorious “ghost work” jobs, where workers perform labor that companies attribute to machines, like content moderation on social media or micro-tasks on Amazon’s MTurk, Appen, Clickworker, Telus, or CloudFactory. Workers on these platforms plug the gaps in computer systems. For example, French supermarkets recently installed AI that uses video surveillance systems to alert clerks when customers shoplift. However, as the sociologist Antonio Casilli has shown, the “AI” in this case consists of hundreds of workers in Madagascar (earning between €90 and €100 a month) watching surveillance footage and messaging stores when they observe theft.
Far from a new future for work, this mode of labor is a throwback to the at-home piecework of the earliest days of industrialism. If anything makes this work modern, it is the way exploitation masquerading as AI runs along lines of global inequality, with North American and European employers hiring poorly paid workers in South America, Africa, and Asia.
Since the beginning of industrialization, employers have used machines to break up skilled craft work into “unskilled” jobs. If computers have added anything new to this process, it is the ability of bosses to apply these old methods to white-collar work.
Unionized workers have recently given us one of the best examples of realistically assessing the labor implications of machine learning. After a 148-day strike, members of the Writers Guild of America won a significant amount of control over the use of AI in their work. Writers did not worry that text-generating software would replace them. Instead, they feared studios would dissolve the job of writer into that of re-writer, using computers to produce a first draft of a script, or a section of a script, and paying pennies on the dollar for the degraded work of rendering the (likely extremely rough) computer-generated text into final copy.
Most workers lack the ability to negotiate with their employer over what kinds of machines can and should be a part of the labor process. Unions, afraid that they will be depicted as being against progress, often do not bargain over technology at all. But as the WGA strike showed, the ability to bargain over the specific technologies that alter and organize places of employment is crucial to the maintenance of workers’ control—and the maintenance of good wages and jobs.
We do not need tech billionaires to write open letters about the existential threat of AI. Rather, ordinary people need the ability to exert control over their own public spaces, homes, and workplaces, and this includes having a say in technological “upgrades.” In order for this to happen on a large scale, the mythology of AI needs to be discarded for a far more mundane conversation about the uses of particular machines and an understanding that technology is neither inevitable nor synonymous with human progress in general.
This type of change will not be brought about by regulating AI. On the contrary, calling for the regulation of AI is in many ways actually good for the tech industry. According to this narrative, AI needs to be regulated because it is escaping human control. If this is the case, it must truly be intelligent—a revolution made by the tech industry. AI is not revolutionary. It is a way of portraying the control of powerful people over society’s material resources as rational, a way to reframe social and economic hierarchy as progress. We need to stop asking what we are going to do about AI and start asking why a few private individuals already hold so much power over the rest of us.
R.H. Lossin writes about labor, libraries, technology, contemporary art, and American radicalism. Her work has appeared in New Left Review, Salvage, Boston Review, Jacobin, Art Agenda, The Brooklyn Rail, and The New York Review of Books. She holds a PhD in communications from Columbia University and teaches at the Brooklyn Institute for Social Research.
Jason Resnikoff is an assistant professor of contemporary history at the Rijksuniversiteit Groningen in the Netherlands and the author of Labor’s End: How the Promise of Automation Degraded Work.