Society / August 15, 2024

California’s AI Safety Bill Is a Mask-Off Moment for the Industry

AI’s top industrialists say they want regulation—until someone tries to regulate them.

Garrison Lovely

Google DeepMind chief Demis Hassabis (L) and Google chief executive Sundar Pichai open the tech titan’s annual I/O developers conference focusing on how artificial intelligence is being woven into search, email, virtual meetings and more in Mountain View, California, on May 14, 2024.

(Glenn Chapman / AFP via Getty Images)

“Does SB 1047…spell the end of the Californian technology industry?” Yann LeCun, the chief AI scientist at Meta and one of the so-called “godfathers” of the artificial intelligence boom, asked in June.

LeCun was echoing the panicked reaction of many in the tech community to SB 1047, a bill currently making its way through the California State Legislature. The legislation would create one of the country’s first regulatory regimes specifically designed for AI. SB 1047 passed the state Senate nearly unopposed and is currently awaiting a vote in the state Assembly. But it faces a barrage of attacks from some of Silicon Valley’s most influential players, who have framed it as nothing less than a death knell for the future of technological innovation.

This is a real mask-off moment for the AI industry. If we listen to the top companies, human-level AI could arrive within five years, and full-blown extinction is on the table. The leaders of these companies have talked about the need for regulation and repeatedly stated that advanced AI systems could lead to, as OpenAI CEO Sam Altman memorably put it, “lights out for all of us.”

But now they and their industry groups are saying it’s too soon to regulate. Or they want regulation, of course, but just not this regulation.

None of the major AI companies support SB 1047. Some, like Google and Meta, have taken unusually strong positions against it. Others are more circumspect, letting trade associations speak for them or requesting that the bill be watered down further. With such an array of powerful forces stacked against it, it’s worth looking at what exactly SB 1047 does and does not do. And when you do that, you find not only that the reality is very different from the rhetoric, but that some tech bigwigs are blatantly misleading the public about the nature of this legislation.

According to its critics, SB 1047 would be hellish for the tech industry. Among other things, detractors warn that the bill would send start-up founders to jail for innocent paperwork mistakes; cede the US AI lead to China; and destroy open-source development. “Without open-source AI, there is no AI start-up ecosystem and no academic research on large models. Meta will be fine, but AI start-ups will just die. Seems pretty apocalyptic to me,” LeCun warned. To make matters worse, AI investors assert that the bill manifests “a fundamental misunderstanding of the technology” and that its creators haven’t been receptive to feedback.

But look past the hyperbole and you’ll find a radically different landscape. In reality, the bill consists of broadly popular provisions, was crafted with extensive input from AI developers, and has been endorsed by world-leading AI researchers, including the two other people seen as godfathers of AI alongside LeCun. SB 1047’s primary author says it won’t do any of the aforementioned “apocalyptic” things its critics warn against, a claim echoed by OpenAI whistleblower Daniel Kokotajlo, who supports the bill and “predict[s] that if it passes, the stifling of AI progress that critics doomsay about will fail to materialize.”

Also unlikely to materialize is an AI exodus from the state. SB 1047 applies to anybody doing business in California—the world’s fifth-largest economy and its de facto AI headquarters.

According to SB 1047 author state Senator Scott Wiener, the heart of the bill requires a set of safety measures from developers of “covered models”—AI systems larger and more expensive than the most powerful existing ones. The legislation would require that these developers provide “reasonable assurance” that their models won’t cause catastrophic harms, defined as at least $500 million in damage or a mass-casualty event. Wiener says the other key provision is that developers must be able to shut down a covered model in case of an emergency.

Wiener is far from a burn-it-down leftist. He identifies as pro-AI, pro-innovation, and pro-open-source. A recent Politico profile describes Wiener as “a business-friendly moderate, by San Francisco standards” and includes criticism from the left for his “coziness” with tech.

Those relationships have not shielded Wiener from the tech industry’s wrath over the bill. All three of the leading AI developers—OpenAI, Anthropic, and Google—are part of TechNet, a trade group opposing the bill (members also include Amazon, Apple, and Meta).

OpenAI initially didn’t take a public position on the bill, but a company spokeswoman spoke out against it in a New York Times article on Wednesday. The Times reported that the company told Wiener that “serious A.I. risks were national security issues that should be regulated by the federal government, not by states.”

A Microsoft lobbyist told me that the company is officially neutral but would also prefer a national law. TechNet and other industry associations argue that AI safety is already “appropriately being addressed at the federal level” and that we should wait for in-progress national AI safety standards. They fail to acknowledge that Republicans have promised to block meaningful federal legislation and reverse Biden’s executive order on AI, the closest thing to national AI regulation and the source of the forthcoming standards.

And, as noted above, Google and Meta have publicly opposed the bill.

The nearest thing to industry support has come from Anthropic, the most safety-oriented top AI company. Anthropic published a “support if amended” letter requesting extensive changes to the bill, the most significant of which is a move from what the company calls “broad pre-harm enforcement” to a requirement that developers create safety plans as they see fit. If a covered model causes a catastrophe and its creator’s safety plan “falls short of best practices or relevant standards, in a way that materially contributed to the catastrophe, then the developer should also share liability.” Anthropic calls this a “deterrence model” that would allow developers to flexibly set safety practices as standards evolve.

Wiener says he appreciates Anthropic’s detailed feedback and that the SB 1047 team is positive about the “bulk” of their proposals, but he’s reluctant to fully embrace the shift away from pre-harm enforcement.

A researcher at a top company wrote to me that their safety colleagues “seem broadly supportive” of SB 1047 and “annoyed with the Anthropic letter.”

Vox reported that Anthropic’s attempt to water down the bill “comes as a major disappointment to safety-focused groups, which expected Anthropic to welcome—not fight—more oversight and accountability.”

Anthropic was started by OpenAI employees who, according to a November New York Times report, failed to oust Sam Altman in 2021. It has since taken $6 billion in investment from Google and Amazon, the price of doing business in capital-intensive AI development.

These kinds of investments can have an effect on company priorities—which are often suspect to begin with. As Anthropic policy chief Jack Clark himself told Vox last September, “I think the incentives of corporations are horrendously warped, including ours.”

But by comparison, the reaction to the bill from the AI investor community makes Big Tech look downright responsible.

The most coordinated and intense opposition has come from Andreessen Horowitz, known as a16z. The world’s largest venture capital firm has shown itself willing to say anything to kill SB 1047. In open letters and the pages of the Financial Times and Fortune, a16z partners and founders of companies in its portfolio have brazenly lied about what the bill does.

They say SB 1047 includes the “unobtainable requirement” that developers “certify that their AI models cannot be used to cause harm.” But the bill text clearly states, “‘Reasonable assurance’ does not mean full certainty or practical certainty.”

They claim that the emergency shutdown provision effectively kills open-source AI. However, Wiener says the provision was never intended to apply to open-sourced models, and he even amended the bill to make that clear.

The “godmother of AI,” Fei-Fei Li, published an op-ed in Fortune parroting this and other a16z talking points. She wrote, “This kill switch will devastate the open-source community.” An open letter from academics in the University of California system echoes this unsupported claim.

A16z recently backed Li’s billion-dollar AI start-up—context that didn’t make it into Fortune.

The most consistent and perhaps most preposterous narrative is that a16z is championing “little tech” against an overreaching government that’s unduly burdening “start-ups that are just getting off the ground.” But SB 1047 applies only to models that cost at least $100 million to train and use more computing power than any known model yet has.

So these start-ups will be wealthy enough to train unprecedentedly expensive and powerful models, but won’t be able to afford to conduct and report on basic safety practices? Would a16z be happy if start-ups in their portfolio didn’t have these plans in place?

Oh, and the champion of “little tech” neglects to mention that they are invested in OpenAI and Facebook (where a16z cofounder Marc Andreessen sits on the board).

SB 1047 has also acquired powerful enemies on Capitol Hill. The most dangerous might be Zoe Lofgren, the ranking Democrat on the House Committee on Science, Space, and Technology. Lofgren, whose district covers much of Silicon Valley, has taken hundreds of thousands of dollars from Big Tech and venture capital, and her daughter works on Google’s legal team. She has also stood in the way of previous regulatory efforts.

Lofgren recently took the unusual step of writing a letter against state-level legislation, arguing that SB 1047 was premature because “the science surrounding AI safety is still in its infancy.” Similarly, an industry lobbyist told me that “this is a rapidly evolving industry,” and that by comparison, “the airline industry has established best practices.”

The AI industry does move fast, and we do remain in the dark about the best ways to build powerful AI systems safely. But are those arguments against regulating it now?

This cautious, wait-and-see approach seems to extend only to their position on regulations. When it comes to building and deploying more powerful and autonomous AI systems, the companies see themselves in an all-out race.

In the West, self-regulation is the status quo. The only significant Western mandatory rules on general AI are included in the sweeping EU AI Act, but these don’t take effect until June 2025.

All the major AI companies have made voluntary commitments. But overall, compliance has been less than perfect.

The meltdown in response to SB 1047 is evidence of an industry that is “allergic to regulation because they’ve never been meaningfully regulated,” says Teri Olle, director of Economic Security California and co-sponsor of the bill.

Opponents of SB 1047 are eager to frame it as a radical, industry-destroying measure driven by fears of an imminent sci-fi robot takeover. By shifting the conversation toward existential risk, they aim to distract from the bill’s specific provisions, which have garnered strong support in multiple statewide polls.

Representative Lofgren writes that the bill “seems heavily skewed toward addressing hypothetical existential risks.”

However, co-sponsors Wiener, Olle, and Sneha Revanur, founder and president of Encode Justice, all told me they were far more focused on catastrophic risks—a bar far below complete human extinction.

It’s true that no one really knows if AI systems could become powerful enough to kill or enslave every last person (though the heads of the leading AI companies and the most cited AI scientists have all said it’s a real possibility). But it’s very hard to simultaneously argue, as many tech boosters do, that AI will be as important as the industrial revolution, but also that there’s no risk that AI systems could enable catastrophes.

Three leading AI experts and a “founding figure” of Internet law published a letter endorsing the bill, arguing that “we face growing risks that AI could be misused to attack critical infrastructure, develop dangerous weapons, or cause other forms of catastrophic harm.” These risks, they write, “could emerge within years, rather than decades” and are “probable and significant enough to make safety testing and common-sense precautions necessary.”

Wiener says he would prefer “one strong federal law,” but isn’t holding his breath. He notes that, aside from the TikTok ban, Congress hasn’t meaningfully regulated technology in decades. In the face of this inaction, California has passed its own laws on data privacy and net neutrality (Wiener authored the latter).

Given this, Olle says, “all eyes are on Sacramento and Brussels in the EU to really chart a path for how we should appropriately regulate AI and regulate tech.” She argues that SB 1047 is about more than just regulation—it’s about the question of “Who decides? Who decides what the safety standards are going to be for this very powerful technology?” She observes that, currently, these decisions are being made by a small group of people—so few that they could “fit in a minivan”—yet they’re making choices with “massive societal impact.”

Wiener represents San Francisco and, as a result, has borne a significant personal and political cost by shepherding SB 1047, says someone working on the bill: “You don’t have to love [Wiener] on everything to realize that he is just a stubborn motherfucker.… The amount of political pain he is taking on this is just unbelievable.… He has just lost a lot of relationships and political partners and people who are just incredibly furious at him over this. And I just think he actually thinks the risks are real and thinks that he has to do something about it.”

Opponents assert that there is a “massive public outcry” against SB 1047 and highlight imagined and unsubstantiated harms that will befall sympathetic victims like academics and open-source developers. However, the bill aims squarely at the largest AI developers in the world and has statewide popular support, with even stronger support from tech workers.

If you scratch the surface, the fault lines become clear: AI’s capitalists are defending their perceived material interests from a coalition of civil society groups, workers, and the broader public.

Note: this piece has been updated to include OpenAI’s opposition to the bill.

Garrison Lovely

Garrison Lovely is a freelance journalist. His work has been featured in Jacobin, Current Affairs, and New York Focus, among other places.
