California’s proposed legislation to prevent AI disasters, Senate Bill 1047, has undergone significant changes ahead of its final vote, following pressure from Silicon Valley and suggestions from AI firm Anthropic.
The bill, originally designed to hold AI developers accountable for potential catastrophic events, has been revised to address concerns from the tech industry. On Thursday, SB 1047 passed through California’s Appropriations Committee, a crucial step toward becoming law.
Senator Scott Wiener, the bill’s sponsor, acknowledged the adjustments in a statement to TechCrunch, saying, “We accepted a number of very reasonable amendments proposed, and I believe we’ve addressed the core concerns expressed by Anthropic and many others in the industry. These amendments build on significant changes to SB 1047 I made previously to accommodate the unique needs of the open-source community, which is an important source of innovation.”

Despite these modifications, SB 1047 still aims to prevent large AI systems from causing mass casualties or triggering cybersecurity events that cost over $500 million, by holding developers liable. However, the bill now limits the California government’s power to enforce these provisions. One of the most notable changes is the removal of the attorney general’s ability to sue AI companies for negligent safety practices before a catastrophic event occurs, a modification suggested by Anthropic. Instead, the attorney general can now seek injunctive relief, allowing the office to request that a company halt operations it deems dangerous.
The bill still permits lawsuits against AI developers if their models cause a catastrophic event. It also no longer creates the Frontier Model Division (FMD), a proposed new government agency. Instead, it establishes the Board of Frontier Models within the existing Government Operations Agency, expanding the panel from five to nine members. This board will set compute thresholds for covered models, issue safety guidance, and create regulations for auditors.

Another significant amendment removes criminal liability for AI labs that fail to certify safety test results “under penalty of perjury.” The bill now requires labs to submit public statements outlining their safety practices, but imposes no criminal consequences for noncompliance.