Don’t Blame Students for Using ChatGPT to Cheat

When college education is rendered transactional, a generation trained to use technological tools to solve problems is just doing what it’s told.

The latest higher-ed discourse is positively flooded with worries about ChatGPT, a free chatbot developed by the OpenAI research lab that produces fluent, if not always correct, responses to user prompts. Though it can produce strikingly plausible and polished answers, ChatGPT still has obvious limitations: its prose is often incoherent or inaccurate, and it falls short of academic standards of research and citation. But the technology is improving, and with a little fact-checking and revision, texts generated by current-generation AIs can pass as original student submissions. Over winter break, many professors hastily revised their syllabi, anticipating a wave of machine-made writing that conventional plagiarism checkers cannot catch.

Although many people have written about how to AI-proof assignments—with some even proclaiming the death of the college essay—few have remarked on why faculty think students will be so eager to take up this new plagiarizing technology. ChatGPT did not cause our plagiarism problem: It has only automated or deskilled the essay-mill industry already churning out papers for pay. Computer-assisted plagiarism is a mere symptom of a much larger problem with education today.

Students cheat for a variety of reasons. Some plagiarizers are desperate, others are overworked, and some students simply do not understand the rules. But it’s naive to fret over a culture of plagiarism without considering any of the political or economic context of today’s university. Students’ calculations are logical in the era of the neoliberal, austerity-addled college: Education is no longer promoted as a goal worth pursuing for its own sake; if post-secondary education is a transactional process whose sole purpose is to unlock better career options, then why not cut corners? Why not optimize one’s chances? It’s not a secret why undergraduates might now have fewer compunctions about submitting someone else’s work.

The corporatization of the university has reframed education as a narrow form of preprofessional training. Students are now acutely aware that the university functions as a class-sorting mechanism. Educational institutions launder privilege as scholarly merit, rewarding students whose families had the resources to prepare them for college and send them off to elite schools.

However, amid ongoing precarity, the university is no longer a reliable path into the middle class. Students are rightfully anxious about how their academic performance might affect their socioeconomic futures. When everyone from parents to politicians insists that students should choose their course of study based on anticipated return on investment, we should not be surprised that many undergraduates see plagiarism as a strategy to ensure that all those years of (often expensive, debt-inducing) schooling will pay off.

Meanwhile, as tuitions rise, more students have taken on jobs to pay for their education, leaving them with little time to complete homework. At some level they must realize that—as Malcolm Harris suggested—they are performing unpaid work to ready themselves for an employment opportunity that may never come. ChatGPT must undoubtedly appear to many of them as a labor-saving device, reducing time spent on unremunerated tasks such as essay writing so they can focus on waged labor and other pressing responsibilities such as care work.

In many ways, ChatGPT looks like the flexible laboring subject students are expected to become after graduation. The capitalist class pushes workers to rebrand, retrain, and relocate as they chase scarce jobs in an era of economic turbulence. The chatbot is well-suited to this instability: it is a virtuoso capable of adopting any style, voice, or opinion that is demanded of it. As such, it’s unencumbered by commitments that might hinder it from completing its duties. If necessary, it will contradict itself in mid-sentence. ChatGPT embodies the cynicism and opportunism that Italian Marxist Paolo Virno diagnoses in post-Fordist workers forced to sell themselves out to remain employed. AI plagiarism becomes an object lesson preparing future workers to shed their beliefs and values whenever capitalism deems them inexpedient.

None of this excuses plagiarism or suggests that universities should ignore attempts to game the system. A graduating cohort reliant upon machines to think is especially scary considering the future that AI is poised to bring about. As Ezra Klein has argued, clever chatbots will soon “drive the cost of bullshit to zero.” Here Klein uses this profane term in the way it has been theorized by philosopher Harry G. Frankfurt: Interested parties are going to use chatbots to crank out a limitless amount of discourse disconnected from the truth in order to influence and confuse the public.

This is precisely the nightmare scenario that the essay genre prepares students to confront. In my writing classes, we learn how to locate, evaluate, and understand sources of good information about any topic. Just as importantly, students come to see scholarly inquiry as a collective project dedicated to improving our shared understanding of the world. The chatbot does none of these things. It does not care about the truth, and it has no grasp of the topics it holds forth upon. It is incapable of listening or responding to others. For now, at least, AI can only spit out monologues of probable-sounding blather.

If students do not dig deep into writing and research, they will never fully appreciate the difference between the AI’s plausible nonsense and genuine scholarly dialogue. Unfortunately, some faculty have already replaced the essay with alternate assignments such as oral exams that make it harder to cheat but do not replicate the experience of entering into a sustained conversation with other writers.

Fighting back against AI plagiarism will be difficult because, once again, the university's corporatization is what made it vulnerable to begin with. Faculty have long suffered the same forms of casualization experienced by other college graduates. Universities often relegate writing instruction to underpaid adjuncts with no job security, the overburdened faculty who have the least support to hold the line against chatbot plagiarism. Meanwhile, quick fixes such as AI detectors will only make student-teacher relationships more adversarial, eroding the trust and goodwill that help prevent students from plagiarizing in the first place.

To stop AI plagiarism, we must reverse the trend toward both student and faculty precarity while creating a learning environment in which undergraduates value education as an intrinsic good. Otherwise, artificial intelligences will soon render academic integrity and scholarly inquiry obsolete.
