Society / August 15, 2024

California’s AI Safety Bill Is a Mask-Off Moment for the Industry

AI’s top industrialists say they want regulation—until someone tries to regulate them.

Garrison Lovely

Google DeepMind chief Demis Hassabis (L) and Google chief executive Sundar Pichai open the tech titan’s annual I/O developers conference focusing on how artificial intelligence is being woven into search, email, virtual meetings and more in Mountain View, California, on May 14, 2024.

(Glenn Chapman / AFP via Getty Images)

“Does SB 1047…spell the end of the Californian technology industry?” Yann LeCun, the chief AI scientist at Meta and one of the so-called “godfathers” of the artificial intelligence boom, asked in June.

LeCun was echoing the panicked reaction of many in the tech community to SB 1047, a bill currently making its way through the California State Legislature. The legislation would create one of the country’s first regulatory regimes specifically designed for AI. SB 1047 passed the state Senate nearly unopposed and is currently awaiting a vote in the state Assembly. But it faces a barrage of attacks from some of Silicon Valley’s most influential players, who have framed it as nothing less than a death knell for the future of technological innovation.

This is a real mask-off moment for the AI industry. If we listen to the top companies, human-level AI could arrive within five years, and full-blown extinction is on the table. The leaders of these companies have talked about the need for regulation and repeatedly stated that advanced AI systems could lead to, as OpenAI CEO Sam Altman memorably put it, “lights out for all of us.”

But now they and their industry groups are saying it’s too soon to regulate. Or they want regulation, of course, but just not this regulation.

None of the major AI companies support SB 1047. Some, like Google and Meta, have taken unusually strong positions against it. Others are more circumspect, letting trade associations speak for them or requesting that the bill be watered down further. With such an array of powerful forces stacked against it, it’s worth looking at what exactly SB 1047 does and does not do. And when you do that, you find not only that the reality is very different from the rhetoric, but that some tech bigwigs are blatantly misleading the public about the nature of this legislation.

According to its critics, SB 1047 would be hellish for the tech industry. Among other things, detractors warn that the bill would make it possible to jail start-up founders for innocent paperwork mistakes; cede the US AI lead to China; and destroy open-source development. “Without open-source AI, there is no AI start-up ecosystem and no academic research on large models. Meta will be fine, but AI start-ups will just die. Seems pretty apocalyptic to me,” LeCun warned. To make matters worse, AI investors assert that the bill manifests “a fundamental misunderstanding of the technology” and that its creators haven’t been receptive to feedback.

But when you look past this hyperbole, you’ll find a radically different landscape. In reality, the bill comprises very popular provisions, crafted with extensive input from AI developers and endorsed by world-leading AI researchers, including the two other people seen as godfathers of AI alongside LeCun. SB 1047’s primary author says it won’t do any of the aforementioned “apocalyptic” things its critics warn against, a claim echoed by OpenAI whistleblower Daniel Kokotajlo, who supports the bill and “predict[s] that if it passes, the stifling of AI progress that critics doomsay about will fail to materialize.”

Also unlikely to materialize is an AI exodus from the state. SB 1047 applies to anybody doing business in California—the world’s fifth-largest economy and its de facto AI headquarters.

According to SB 1047 author state Senator Scott Wiener, the heart of the bill requires a set of safety measures from developers of “covered models”—AI systems larger and more expensive than the most powerful existing ones. The legislation would require that these developers provide “reasonable assurance” that their models won’t cause catastrophic harms, defined as at least $500 million in damage or a mass-casualty event. Wiener says the other key provision is that developers must be able to shut down a covered model in case of an emergency.

Wiener is far from a burn-it-down leftist. He identifies as pro-AI, pro-innovation, and pro-open-source. A recent Politico profile describes Wiener as “a business-friendly moderate, by San Francisco standards” and includes criticism from the left for his “coziness” with tech.


Those relationships have not shielded Wiener from the tech industry’s wrath over the bill. All three of the leading AI developers—OpenAI, Anthropic, and Google—are part of TechNet, a trade group opposing the bill (members also include Amazon, Apple, and Meta).

OpenAI initially didn’t take a public position on the bill, but a company spokeswoman spoke out against it in a New York Times article on Wednesday. The Times reported that the company told Wiener that “serious A.I. risks were national security issues that should be regulated by the federal government, not by states.”

A Microsoft lobbyist told me the company is officially neutral but would also prefer a national law. TechNet and other industry associations argue that AI safety is already “appropriately being addressed at the federal level” and that we should wait for the national AI safety standards now in progress. They fail to acknowledge that Republicans have promised to block meaningful federal legislation and reverse Biden’s executive order on AI, the closest thing to national AI regulation and the source of the forthcoming standards.

And, as noted above, Google and Meta have publicly opposed the bill.

The nearest thing to industry support has come from Anthropic, the most safety-oriented top AI company. Anthropic published a “support if amended” letter requesting extensive changes to the bill, the most significant of which is a move from what the company calls “broad pre-harm enforcement” to a requirement that developers create safety plans as they see fit. If a covered model causes a catastrophe and its creator’s safety plan “falls short of best practices or relevant standards, in a way that materially contributed to the catastrophe, then the developer should also share liability.” Anthropic calls this a “deterrence model” that would allow developers to flexibly set safety practices as standards evolve.

Wiener says he appreciates Anthropic’s detailed feedback and that the SB 1047 team is positive about the “bulk” of their proposals, but he’s reluctant to fully embrace the shift away from pre-harm enforcement.

A researcher at a top company wrote to me that their safety colleagues “seem broadly supportive” of SB 1047 and “annoyed with the Anthropic letter.”

Vox reported that Anthropic’s attempt to water down the bill “comes as a major disappointment to safety-focused groups, which expected Anthropic to welcome—not fight—more oversight and accountability.”

Anthropic was started by former OpenAI employees who, according to a New York Times report from last November, tried and failed to oust Sam Altman in 2021. It has since taken $6 billion in investment from Google and Amazon, the price of doing business in capital-intensive AI development.

These kinds of investments can have an effect on company priorities—which are often suspect to begin with. As Anthropic policy chief Jack Clark himself told Vox last September, “I think the incentives of corporations are horrendously warped, including ours.”

But by comparison, the reaction to the bill from the AI investor community makes Big Tech look downright responsible.

The most coordinated and intense opposition has been from Andreessen Horowitz, known as a16z. The world’s largest venture capital firm has shown itself willing to say anything to kill SB 1047. In open letters and the pages of the Financial Times and Fortune, a16z partners and founders of companies in its portfolio have brazenly lied about what the bill does.

They say SB 1047 includes the “unobtainable requirement” that developers “certify that their AI models cannot be used to cause harm.” But the bill text clearly states, “‘Reasonable assurance’ does not mean full certainty or practical certainty.”

They claim that the emergency shutdown provision effectively kills open-source AI. However, Wiener says the provision was never intended to apply to open-sourced models, and he even amended the bill to make that clear.

The “godmother of AI,” Fei-Fei Li, published an op-ed in Fortune parroting this and other a16z talking points. She wrote, “This kill switch will devastate the open-source community.” An open letter from academics in the University of California system echoes this unsupported claim.

A16z recently backed Li’s billion-dollar AI start-up, context that didn’t make it into Fortune.

The most consistent and perhaps most preposterous narrative is that a16z is championing “little tech” against an overreaching government that’s unduly burdening “start-ups that are just getting off the ground.” But SB 1047 applies only to models that cost at least $100 million to train and use more computing power than any known model yet has.

So these start-ups will be wealthy enough to train unprecedentedly expensive and powerful models, but won’t be able to afford to conduct and report on basic safety practices? Would a16z be happy if start-ups in their portfolio didn’t have these plans in place?

Oh, and the champion of “little tech” neglects to mention that it is invested in OpenAI and Meta (where a16z cofounder Marc Andreessen sits on the board).

SB 1047 has also acquired powerful enemies on Capitol Hill. The most dangerous might be Zoe Lofgren, the ranking Democrat on the House Committee on Science, Space, and Technology. Lofgren, whose district covers much of Silicon Valley, has taken hundreds of thousands of dollars from Big Tech and venture capital, and her daughter works on Google’s legal team. She has also stood in the way of previous regulatory efforts.

Lofgren recently took the unusual step of writing a letter against state-level legislation, arguing that SB 1047 was premature because “the science surrounding AI safety is still in its infancy.” Similarly, an industry lobbyist told me that “this is a rapidly evolving industry,” and that by comparison, “the airline industry has established best practices.”

The AI industry does move fast, and we do remain in the dark about the best ways to build powerful AI systems safely. But are those arguments against regulating it now?

This cautious, wait-and-see approach seems to extend only to their position on regulations. When it comes to building and deploying more powerful and autonomous AI systems, the companies see themselves in an all-out race.

In the West, self-regulation is the status quo. The only significant Western mandatory rules on general AI are included in the sweeping EU AI Act, but these don’t take effect until June 2025.

All the major AI companies have made voluntary commitments. But overall, compliance has been less than perfect.

The meltdown in response to SB 1047 is evidence of an industry that is “allergic to regulation because they’ve never been meaningfully regulated,” says Teri Olle, director of Economic Security California and co-sponsor of the bill.

Opponents of SB 1047 are eager to frame it as a radical, industry-destroying measure driven by fears of an imminent sci-fi robot takeover. By shifting the conversation toward existential risk, they aim to distract from the bill’s specific provisions, which have garnered strong support in multiple statewide polls.

Representative Lofgren writes that the bill “seems heavily skewed toward addressing hypothetical existential risks.”

However, co-sponsors Wiener, Olle, and Sneha Revanur, founder and president of Encode Justice, all told me they were far more focused on catastrophic risks—a bar far below complete human extinction.

It’s true that no one really knows if AI systems could become powerful enough to kill or enslave every last person (though the heads of the leading AI companies and the most-cited AI scientists have all said it’s a real possibility). But it’s very hard to argue, as many tech boosters do, both that AI will be as important as the Industrial Revolution and that there’s no risk AI systems could enable catastrophes.

Three leading AI experts and a “founding figure” of Internet law published a letter endorsing the bill, arguing that “we face growing risks that AI could be misused to attack critical infrastructure, develop dangerous weapons, or cause other forms of catastrophic harm.” These risks, they write, “could emerge within years, rather than decades” and are “probable and significant enough to make safety testing and common-sense precautions necessary.”

Wiener says he would prefer “one strong federal law,” but isn’t holding his breath. He notes that, aside from the TikTok ban, Congress hasn’t meaningfully regulated technology in decades. In the face of this inaction, California has passed its own laws on data privacy and net neutrality (Wiener authored the latter).

Given this, Olle says, “all eyes are on Sacramento and Brussels in the EU to really chart a path for how we should appropriately regulate AI and regulate tech.” She argues that SB 1047 is about more than just regulation—it’s about the question of “Who decides? Who decides what the safety standards are going to be for this very powerful technology?” She observes that, currently, these decisions are being made by a small group of people—so few that they could “fit in a minivan”—yet they’re making choices with “massive societal impact.”

Wiener represents San Francisco and, as a result, has borne a significant personal and political cost by shepherding SB 1047, says someone working on the bill: “You don’t have to love [Wiener] on everything to realize that he is just a stubborn motherfucker.… The amount of political pain he is taking on this is just unbelievable.… He has just lost a lot of relationships and political partners and people who are just incredibly furious at him over this. And I just think he actually thinks the risks are real and thinks that he has to do something about it.”

Opponents assert that there is a “massive public outcry” against SB 1047 and highlight imagined and unsubstantiated harms that will befall sympathetic victims like academics and open-source developers. However, the bill aims squarely at the largest AI developers in the world and has statewide popular support, with even stronger support from tech workers.

If you scratch the surface, the fault lines become clear: AI’s capitalists are defending their perceived material interests from a coalition of civil society groups, workers, and the broader public.

Note: this piece has been updated to include OpenAI’s opposition to the bill.

Garrison Lovely

Garrison Lovely is a freelance journalist. His work has been featured in Jacobin, Current Affairs, and New York Focus, among other places.
