California Gov. Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act, known as SB 53, into law Monday afternoon.
The bill, the first of its kind in the United States, imposes new AI-specific rules on the industry's top players, requiring them to publish transparency disclosures and report AI-related safety incidents.
Though several states have recently passed laws regulating aspects of AI, SB 53 is the first to focus explicitly on the safety of cutting-edge and powerful AI models.
In a statement, Newsom said: “California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance.”
The law will likely have worldwide ramifications, as 32 of the world’s top 50 AI companies are based in California. In a signing message to the State Senate, Newsom wrote that California’s “status as a global leader in technology allows us a unique opportunity to provide a blueprint for well-balanced AI policies beyond our borders – especially in the absence of a comprehensive federal AI policy framework.”
The law requires leading AI companies to publish documents detailing how they follow best practices to build safe AI systems. It also creates a pathway for companies to report severe AI-related incidents to California’s Office of Emergency Services and strengthens protections for whistleblowers who raise concerns about health and safety risks.
The law is backed by civil penalties for noncompliance, to be enforced by the state attorney general’s office.
In a statement, the bill’s author, state Sen. Scott Wiener, said, “With a technology as transformative as AI, we have a responsibility to support that innovation while putting in place commonsense guardrails to understand and reduce risk.”
SB 53’s signing comes exactly one year after Newsom vetoed a similar bill from Wiener. That measure, SB 1047, sought to assign greater liability to leading AI companies in cases of adverse events.
Wiener previously told NBC News, “Whereas SB 1047 was more of a liability-focused bill, SB 53 is more focused on transparency.”
SB 53's passage follows recent announcements of stepped-up lobbying efforts by big tech companies to limit the spread and impact of AI regulation. Announcing a new super PAC to fight AI legislation last Friday, Meta’s vice president of public policy, Brian Rice, told POLITICO, “Sacramento’s regulatory environment could stifle innovation, block AI progress, and put California’s technology leadership at risk.” Meta had earlier signaled soft support for the bill.
SB 53 attracted intense criticism from industry groups like the Chamber of Progress and the Consumer Technology Association. However, leading AI company Anthropic endorsed the bill.
Even companies that expressed support for the bill made clear they would prefer federal legislation, to avoid inconsistent state-by-state regulations.
In a statement Monday afternoon, Anthropic co-founder and head of policy Jack Clark said: "Governor Newsom’s signature on SB 53 establishes meaningful transparency requirements for frontier AI companies without imposing prescriptive technical mandates."
"While federal standards remain essential to avoid a patchwork of state regulations, California has created a strong framework that balances public safety with continued innovation," he said.
In a LinkedIn statement several weeks ago, OpenAI’s chief global affairs officer, Chris Lehane, wrote: “America leads best with clear, nationwide rules, not a patchwork of state or local regulations. Fragmented state-by-state approaches create friction, duplication, and missed opportunities.”
Monday morning, Sens. Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., proposed a federal bill that would require leading AI developers to “evaluate advanced AI systems and collect data on the likelihood of adverse AI incidents.”
As written, the federal bill would create an Advanced Artificial Intelligence Evaluation Program within the Energy Department. Participation in the program would be mandatory, echoing SB 53’s transparency and reporting requirements.
World leaders are increasingly calling for AI regulation in the face of growing risks from advanced AI systems.
In remarks to the United Nations General Assembly last week, President Donald Trump said AI “could be one of the great things ever, but it also can be dangerous, but it can be put to tremendous use and tremendous good.”
Addressing the U.N. one day after Trump, Ukrainian President Volodymyr Zelenskyy said, “We are now living through the most destructive arms race in human history because this time, it includes artificial intelligence.”
Jared Perlo is a writer and reporter at NBC News covering AI. He is currently supported by the Tarbell Center for AI Journalism.