California's SB 1047 aims to prevent real-world disasters caused by large AI models before they occur. The bill targets AI systems capable of causing “critical harms,” such as mass casualties from an AI-enabled weapon or cyberattacks that inflict significant financial damage. Developers of these models would be required to implement safety protocols to mitigate those risks.
SB 1047 focuses on the world's largest AI models: those that cost at least $100 million to train and use a substantial amount of compute power during training. Companies such as OpenAI, Google, and Microsoft are likely to fall within the bill's scope as they develop ever more advanced models. The bill also covers open source models and their derivatives, holding developers accountable for a model's safety once a certain amount has been spent on its development.
To enforce these rules, SB 1047 would create a new California agency, the Frontier Model Division (FMD), to oversee compliance. The FMD would be governed by a board comprising representatives from the AI industry, the open source community, and academia. Developers would have to submit annual certifications assessing their AI models' risks, their safety protocols, and their compliance with the bill; failure to comply could result in penalties imposed by the California attorney general.
Proponents of SB 1047, including its author, California State Senator Scott Wiener, and prominent AI researchers, argue that the bill is a necessary, proactive step: regulate advanced AI models now, before a harmful incident occurs. Opponents, including Silicon Valley venture capitalists and tech giants, counter that the bill would chill innovation, burden startups, and hamper research efforts.
The fate of SB 1047 will be decided as it moves to the California Assembly floor, where it may be amended. If passed, the bill will go to Governor Gavin Newsom for final approval. Even then, it is likely to face legal challenges and ongoing debate over its implications for the AI ecosystem. The FMD would not be formed until 2026, giving stakeholders time to adjust to the regulatory framework SB 1047 lays out.