Society / August 30, 2024

Big Tech Is Very Afraid of a Very Modest AI Safety Bill

Despite claiming to support AI safety, powerful tech interests are trying to kill SB1047.

Lawrence Lessig

Meta CEO Mark Zuckerberg arrives for an interview on The Circuit with Emily Chang at Meta headquarters in Menlo Park, California, on Thursday, July 18, 2024.

(Jason Henry / Bloomberg via Getty Images)

Artificial intelligence could help us solve humanity’s greatest challenges. But, left unchecked, it could cause catastrophic harm. Well-designed regulation will allow us to harness AI’s promise while protecting us from its capacity to do harm—not through bureaucrats’ specifying technical procedures, but through rules that ensure companies embrace safe practices.

California is on the brink of passing regulation that would start to do just that. Yet, despite universal recognition among leading AI executives of the risks their work poses, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB1047) has become the target of an extraordinary lobbying effort. The bill, opponents insist, is certain to stifle technical innovation in Silicon Valley and is almost purposely designed to end “open-source” AI development.

Malarkey. This fight is less about “corporate capture” or the California legislature’s desire to kill its golden-egg-laying goose than it is about the same-as-it-ever-was power of money in American politics. If the bill fails—and next month will determine whether it does—it will signal yet again the loss of America’s capacity to address even the most significant threats.

Who’s Afraid of AI Safety?

At its core, SB1047 does one small but incredibly important thing: It requires that those developing the most advanced AI models adopt and follow safety protocols—including shutdown protocols—to reduce any risk that their models are stolen or deployed in a way that causes “critical harm.”

Which models? Simplifying somewhat: initially, models that cost $100 million or more to train, or models fine-tuned at a cost of $10 million or more.

“Critical harm”? The law covers models that lead to the “creation or use of a chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties,” that enable cyberattacks on critical infrastructure causing more than $500 million in damage, or that, acting with limited human oversight, result in mass casualties or damage greater than $500 million.

Thus, the law covers an incredibly small number of model builders in order to avoid potentially huge harm. And the mandate it deploys to avoid “critical harm” simply requires that companies adopt robust protocols to increase model safety. The law does not specify what those protocols must be. It requires each company, considering the state of the industry and the entities advising it, to adopt rules ensuring that a small slice of its products is safe.

In some sense, every company developing models of this size would say it has already adopted such safety protocols. So then, why the opposition?

The problem for tech companies is that the law builds in mechanisms to ensure that the protocols are sufficiently robust and actually enforced. The law would eventually require outside auditors to review the protocols, and from the start, it would protect whistleblowers within firms who come forward to show that protocols are not being followed. The law thus makes real what the companies say they are already doing.

But if companies are already creating these safety protocols, why do we need a law to mandate them? First, because, as some within the industry assert directly, existing guidelines are often inadequate; and second, because, as whistleblowers have already revealed, some companies are not following the protocols they have adopted. Opposition to SB1047 is thus designed to keep safety optional—something companies can promise but have no effective obligation to deliver.

That companies would want to avoid regulation is not surprising. What is surprising is how awful the arguments against the bill have been—especially by people who should know better.

To start with, members of Congress have written to the bill’s sponsor, State Senator Scott Wiener, telling him, “The bill requires firms to adhere to voluntary guidance issued by industry and the National Institute of Standards and Technology, which does not yet exist.” That is simply not true. The bill requires developers only to “consider industry best practices and applicable guidance” from organizations like NIST, guidance that NIST has already begun to supply.

The representatives go on to object that the bill “is skewed toward addressing extreme misuse scenarios and hypothetical existential risks while largely ignoring demonstrable AI risks like misinformation, discrimination, nonconsensual deepfakes, environmental impacts, and workforce displacement.” It is true, of course, that the bill is not focused on lots of other AI risks. California—and Congress!—ought to address those risks, too. Indeed, California alone introduced over 50 AI bills this year, many of them targeted at those risks. Yet it is not clear how that is an argument against addressing the risks this bill does target. It’s like saying that a bill addressing wildfire risks should be rejected because it doesn’t address flooding risks.

But consider the term “hypothetical existential risks”: Many in the field of AI have spoken of the “existential risks” that advanced AI may present—“existential” in the sense that if they are realized, humanity is over. Those risks are not SB1047’s direct concern. Its focus is on more practical harms, such as cyberattacks on critical infrastructure or economic harm of $500 million or more. Every AI company that is likely to train the models that this bill would regulate—including companies such as OpenAI, Google, and Meta that oppose it—believes, or says it believes, that its most powerful AI models might pose these sorts of risks in the not-too-distant future.

Regardless, how should we think about these severe risks more generally? Some believe such risks are unavoidable; they reject the term “hypothetical.” Others believe such risks don’t exist: Like time travel, they can be imagined, but they cannot be realized. Yet most speak of these risks in probabilities, for example, “a 10 percent chance in 10 years.”

It’s not clear which of these three views these members of Congress hold. They write, “There is little scientific evidence of harm of ‘mass casualties or harmful weapons created’ from advanced models.”

But no one is claiming that we have seen “mass casualties or harmful weapons created” so far. The point of the bill is to avoid such harm, especially as models become enormously more powerful. (There is also little scientific evidence of mass casualties or harmful weapons created from bioengineering; is that a reason not to regulate bioengineering?) Sure, if you’re certain such risks could never be realized, then there’s no reason for this bill. But when did members of Congress become experts in AI?

The representatives then echo shibboleths about open-source AI. “Currently, some advanced models are released as open source and made widely available,” they write. “This openness allows smaller, lesser-resourced companies and organizations, including universities, to develop on top of them, stimulating innovation and having large economic impact.”

That’s true enough. But then, as the members’ letter continues, truth begins to fade. They say, “This bill would reduce this practice [of open-source development] by holding the original developer of a model liable for a party misusing their technology downstream.”

Wrong. The bill creates no liability simply because someone “misus[ed]” a “technology downstream.” To be sure, it imposes on “developers” of “covered models” the obligation to take “reasonable care to implement…appropriate measures to prevent covered models and covered model derivatives from posing unreasonable risks of causing or materially enabling critical harms”—again, “mass casualties” or economic loss exceeding $500 million. Who is against that? What industry in America today, without explicit legislative exemption, is entitled to deploy, without regulation or liability, a product that creates “unreasonable risks of causing…critical harms”?

Open-source software developers have long used licenses to avoid economic liability for harms that flow from their software. For ordinary economic harm, that may well make sense. But the law of tort—independent of, but codified in important ways by, SB1047—is not bound by software licenses. In the face of “critical harms,” let alone existential risks, there is no good reason to exempt developers of software from the ordinary duty that everyone else bears: to take “reasonable care” to implement “appropriate measures” to avoid “unreasonable risk.”

What’s more, targeting the shutdown protocols that the law requires companies to develop, the representatives write that “kill switches” “would decimate the ecosystems that spring up around [open-source] AI models.” The worry, on their telling, is that no entrepreneur would want to build a product around an AI system if the developer could pull the plug at any time.

Again, this is wrong. First, the law does not require anyone to build a “kill switch.” It merely requires developers to have the ability to shut down their own software. Second, the law does not require that the shutdown ever be triggered. It only requires that companies develop the capability and describe the protocols governing when it would be used. The rule is like a regulation requiring companies building electrical grids to include circuit breakers in their design. Do circuit breakers “decimate the ecosystems” of companies developing electrical products?

Third, the law requires “full shutdown” capability for models “controlled by” a developer. Once code is adopted and deployed by others, so long as those others are not “developers” of “covered models,” the obligation does not reach them. But fourth, and most bizarrely, imagine an open-source model did have a kill switch, and imagine the developer flipped it because a runaway model began to cause “critical harm”—again, “mass casualties” or economic harm of $500 million or more. Are these members arguing that the developer should not flip the switch? Or that the entrepreneur using the model would rather cause “critical harm” than have its model stopped?

Indeed, the argument runs the other way. Having circuit breakers built into the system makes it more likely that companies will build products on top of open-source technologies, if only to avoid the ordinary tort liability that would follow from any harm those products produce. A company that builds on top of an unreasonably dangerous model could itself face tort liability, so the capability to stop runaway critical harm could make the underlying model more valuable to follow-on developers, not less.

Finally, the members of Congress write that a recent NIST report recommended that the “government should not restrict access to open-source models with widely available model weights at this time.” True, it shouldn’t, and nothing in SB1047 would. The bill does nothing to “restrict access” to models; it only requires that “developers” of “covered models” (again, those spending $100 million or more) or covered “fine-tuned” models (again, those spending $10 million or more) develop protocols to advance the safety of those models, at least to the extent reasonable given the state of knowledge in the field.

A Simple First Step

SB1047 is a protocol bill. It mandates that a handful of companies take meaningful steps to adopt procedures guarding against harms that the leaders of every one of these companies agree such models could conceivably create.

The bill isn’t perfect. There are plenty of ways in which it could be improved. I agree with much in Anthropic’s balanced and insightful analysis of the bill—an analysis by one of the companies that would be regulated, and one that nonetheless concludes that the bill’s benefits outweigh its costs.

But every bill is imperfect. And this one has one more important argument in its favor: If Donald Trump is elected, he has promised to immediately remove even the minimal protections that the Biden administration has imposed. That would be catastrophic. SB1047 is not a substitute for those protections, but it is a backstop and a critical first step.

Lawrence Lessig, a professor of law at Harvard Law School, is co-founder of the nonprofit Change Congress.
