Society / August 30, 2024

Big Tech Is Very Afraid of a Very Modest AI Safety Bill

Despite claiming to support AI safety, powerful tech interests are trying to kill SB1047.

Lawrence Lessig

Meta CEO Mark Zuckerberg arrives for an interview on The Circuit with Emily Chang at Meta headquarters in Menlo Park, California, on Thursday, July 18, 2024.

(Jason Henry / Bloomberg via Getty Images)

Artificial intelligence could help us solve humanity’s greatest challenges. But, left unchecked, it could cause catastrophic harm. Well-designed regulation will allow us to harness AI’s potential while protecting us from its capacity to do harm—not through bureaucrats’ specifying technical procedures, but through rules that ensure that companies embrace safe procedures.

California is on the brink of passing regulation to start to do just that. Yet, despite universal recognition among leading AI executives of the risks their work poses, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB1047) has become the target of an extraordinary lobbying effort. Opponents insist the bill is certain to stifle technological innovation in Silicon Valley and that it is almost purposely designed to end “open-source” AI development.

Malarkey. This fight is less about “corporate capture” or the California legislature’s desire to kill its golden-egg-laying goose than it is about the same-as-it-ever-was power of money in American politics. If the bill fails—and next month will determine whether it does—it will signal yet again the loss of America’s capacity to address even the most significant threats.

Who’s Afraid of AI Safety?

At its core, SB1047 does one small but incredibly important thing: It requires that those developing the most advanced AI models adopt and follow safety protocols—including shutdown protocols—to reduce any risk that their models are stolen or deployed in a way that causes “critical harm.”

Which models? Initially, and to simplify, models that cost $100,000,000 or more to train, or models fine-tuned at a cost of $10,000,000 or more.

“Critical harm”? The law covers models that lead to the “creation or use of a chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties,” or that lead to cyberattacks on critical infrastructure costing more than $500,000,000, or that, acting with limited human oversight, result in mass casualties or damage greater than $500,000,000.

Thus, the law covers an incredibly small number of model builders so as to avoid potentially huge harm. And the mandate that it deploys to avoid “critical harm” simply requires that companies adopt robust protocols to increase model safety. The law does not specify what those protocols must be. It merely requires each company, considering the state of the industry and the entities advising it, to adopt rules ensuring that a small slice of its products is safe.

In some sense, every company developing models of this size would say it has already adopted such safety protocols. So then, why the opposition?

The problem for tech companies is that the law builds in mechanisms to ensure that the protocols are sufficiently robust and actually enforced. The law would eventually require outside auditors to review the protocols, and from the start, it would protect whistleblowers within firms who come forward to show that protocols are not being followed. The law thus makes real what the companies say they are already doing.

But if they’re already creating these safety protocols, why do we need a law to mandate them? First, because, as some within the industry assert directly, existing guidelines are often inadequate; and second, because, as whistleblowers have already revealed, some companies are not following the protocols that they have adopted. Opposition to SB1047 is thus designed to ensure that safety remains optional—something companies can promise but have no effective obligation to deliver.

That companies would want to avoid regulation is not surprising. What is surprising is how awful the arguments against the bill have been—especially by people who should know better.

To start with, members of Congress have written to the bill’s sponsor, State Senator Scott Wiener, telling him, “The bill requires firms to adhere to voluntary guidance issued by industry and the National Institute of Standards and Technology, which does not yet exist.” That is simply not true. The bill requires developers only to “consider industry best practices and applicable guidance” from organizations like NIST, guidance that NIST has already begun to supply.

These representatives go on to object that the bill “is skewed toward addressing extreme misuse scenarios and hypothetical existential risks while largely ignoring demonstrable AI risks like misinformation, discrimination, nonconsensual deepfakes, environmental impacts, and workforce displacement.” It is true, of course, that the bill is not focused on lots of other AI risks. California—and Congress!—ought to address those risks, too. Indeed, California alone introduced over 50 AI bills this year, many of which target those risks. Yet how that is an argument against addressing the risks this bill does address is not clear. It’s like saying that a bill addressing wildfire risks should be rejected because it doesn’t address flooding risks.

But consider the term “hypothetical existential risks”: Many in the field of AI have spoken of the “existential risks” that advanced AI may present—“existential” in the sense that if they are realized, humanity is over. Those risks are not SB1047’s direct concern. Its focus is on more practical harms, such as cyberattacks on critical infrastructure or economic harm of $500M or more. Every AI company that is likely to train the models that this bill would regulate—including companies such as OpenAI, Google, and Meta that oppose it—believes, or says it believes, that its most powerful AI models might pose these sorts of risks in the not-too-distant future.

Regardless, how should we think about these severe risks more generally? Some believe such risks are unavoidable. They reject the term “hypothetical.” Some believe such risks don’t exist: Like time travel, they can be imagined, but they cannot be realized. Yet most speak of these risks in probabilities, for example, “a 10 percent chance in 10 years.”

It’s not clear which of these three views these members of Congress hold. They write, “There is little scientific evidence of harm of ‘mass casualties or harmful weapons created’ from advanced models.”

However, no one is claiming that we have seen “mass casualties or harmful weapons created” so far. The point of the bill is to avoid such harm, especially as models become enormously more powerful. (There is also little scientific evidence of harm of mass casualties or harmful weapons created from bioengineering; is that a reason not to regulate bioengineering?) Sure, if you’re certain such risks could not be realized, then there’s no reason for this bill. But when did members of Congress become experts in AI?

The representatives continue by echoing shibboleths about open-source AI. “Currently, some advanced models are released as open source and made widely available,” they write. “This openness allows smaller, lesser-resourced companies and organizations, including universities, to develop on top of them, stimulating innovation and having large economic impact.”

That’s true enough. But then, as the members’ letter continues, truth begins to fade. They say, “This bill would reduce this practice [of open-source development] by holding the original developer of a model liable for a party misusing their technology downstream.”

Wrong. The bill creates no liability simply because someone “misus[ed]” a “technology downstream.” No doubt, it imposes upon “developers” of “covered models” the obligation to take “reasonable care to implement…appropriate measures to prevent covered models and covered model derivatives from posing unreasonable risks of causing or materially enabling critical harms”—again, “mass casualties” or economic loss exceeding $500M. Who is against that? What industry in America today, without explicit legislative exemption, is entitled to deploy, without regulation or liability, a product that creates “unreasonable risks of causing…critical harms”?

Open-source software developers have long used licenses to avoid economic liability for harms that flow from their software. For ordinary economic harm, that may well make sense. But the law of tort—independent of, but codified in important ways by, SB1047—is not bound by software licenses. In the face of “critical harms,” let alone existential risks, there is no good reason to exempt developers of software from the ordinary duty that everyone else bears: to take “reasonable care” to implement “appropriate measures” to avoid “unreasonable risk.”

What’s more, targeting the shutdown protocols that the law requires the companies to develop, the representatives write that “kill switches” “would decimate the ecosystems that spring up around [open-source] AI models.” The worry, apparently, is that no entrepreneur would want to build a product around an AI system if the developer could pull the plug at any time.

Again, this is wrong. First, the law does not require anyone to build a “kill switch.” It merely requires developers to have the ability to shut down their own software. Second, the law does not require anyone to trigger that shutdown at any time. It only requires that the companies develop the capability and describe the protocols governing when it would be used. The rule is like a regulation requiring companies building electrical grids to include circuit breakers in their design. Do circuit breakers “decimate the ecosystems” of companies developing electrical products?

Third, the law requires “full shutdown” capability for models “controlled by” a developer. Once code is adopted and deployed by others, so long as those others are not “developers” of “covered models,” the obligation does not reach them. But fourth, and most bizarrely, imagine an open-source model did have a kill switch, and imagine the developer flipped it because a runaway model began to cause “critical harm”—again, “mass casualties” or economic harm of $500M or more. Are these members arguing that the developer should not flip the switch? Or that the entrepreneur using the model would rather cause “critical harm” than have its model stopped?

Indeed, the argument goes the other way around. Having circuit breakers built into the system makes it more likely that companies will develop products based on open-source technologies if only to avoid the ordinary tort liability that would follow any harm that those products would produce. A company building its product on top of an unreasonably dangerous product could itself face tort liability. The capability to stop runaway critical harm could make the underlying product more valuable to follow-on developers, not less.

Finally, the members of Congress write that a recent NIST report recommended that the “government should not restrict access to open-source models with widely available model weights at this time.” True, it shouldn’t, but nothing in SB1047 would. The bill does nothing to “restrict access” to models; it only requires that “developers” of “covered models” (again, those spending $100M or more) or covered “fine-tuned” models (again, those spending $10M or more), develop protocols to advance the safety of those models, at least to the extent reasonable, given the state of knowledge in the field.

A Simple First Step

SB1047 is a protocol bill. It mandates that a handful of companies take meaningful steps, by adopting safety procedures, to guard against harms that the leaders of every one of these companies agree such models could conceivably create.

The bill isn’t perfect. There are plenty of ways in which it could be improved. I agree with much in Anthropic’s balanced and insightful analysis of the bill—an analysis by one of the companies that would be regulated, which nonetheless concludes that the benefits of the bill outweigh its costs.

But every bill is imperfect. And this one has one more important argument in its favor: Donald Trump has promised that, if elected, he will immediately remove even the minimal protections that the Biden administration has imposed. That would be catastrophic. SB1047 is not a substitute for those protections, but it is a backstop and a critical first step.

Lawrence Lessig

Lawrence Lessig, a professor of law at Harvard Law School, is co-founder of the nonprofit Change Congress.
