
Yesterday, the US House of Representatives approved a budget package that includes a 10-year moratorium on enforcing state AI laws. Tech Policy Press sought perspectives on the moratorium from experts, with a particular focus on its scope and definitions. Below, we present comments from eight contributors.
- Justin Brookman, Director of Technology Policy, Consumer Reports
- Neil Chilson, Head of AI Policy, Abundance Institute
- Amba Kak and Sarah Myers-West, Co-Executive Directors, AI Now Institute
- Jake Karr, Acting Director, Technology Law & Policy Clinic at New York University School of Law
- Christabel Randolph, Associate Director, Center for AI and Digital Policy (CAIDP)
- Matthew U. Scherer, Senior Policy Counsel, Workers' Rights and Technology, Center for Democracy & Technology
- Adam Thierer, Senior Fellow, R Street Institute
- Cody Venzke, Senior Policy Counsel, ACLU
Next week, expect additional perspectives and analysis on this issue; we welcome further contributions.
The following responses have been lightly edited from the original submissions.
Justin Brookman, Director of Technology Policy, Consumer Reports
The 10-year moratorium on state AI legislation is a bad idea and a clumsily executed one at that.
The moratorium seems largely motivated by SB-1047, a California AI safety bill governing the largest frontier AI models that was vetoed by Governor Gavin Newsom last fall, and a number of state bills that have attempted to address potential bias in high-stakes AI decision-making systems used in areas such as hiring, housing, and lending.
First, it is worth noting that these bills do not regulate artificial intelligence generally, but are specific in scope, as are other recently passed state laws governing narrow categories of AI, such as self-driving cars, deepfakes, facial recognition, and mental health chatbots.
So despite hysteria about AI developers having to comply with dozens of laws, in practice, developers may not have to worry about any — or at most, only general-purpose laws, such as privacy laws, that might not even be affected by the House moratorium.
However, that raises one of the biggest problems with the House moratorium — it is entirely unclear how far it extends.
The moratorium does not apply to “generally applicable law” that holds AI to the same standards as systems that “provide comparable functions.”
However, much of the current hype around AI is based on the fact that AI enables functionality that previously did not exist. Realistic deepfakes, autonomous driving, facial recognition, automated screening of thousands of job applicants — these are capabilities that only exist because of artificial intelligence. In addition to the tremendous benefits these technologies may provide, they also can be abused, which is why many states have tried to enact targeted protections.
Due to the ambiguity of the preemption language, it will be left to the courts to decide how far the moratorium extends to applications where no comparable alternative exists. In practice, that could take years, delaying states from enacting common-sense protections while Congress continues to largely dither on tech regulation.
Preempting state protections is a bad idea; doing it ambiguously, without offering alternative federal protections, is even worse. The states have proven to be more nimble in responding to real threats emerging from novel technologies. Congress has largely passed the buck; it should not heavy-handedly block the states from acting to protect their own citizens.
Neil Chilson, Head of AI Policy, Abundance Institute
The definition of “AI” in the moratorium mirrors the one in the Biden Administration’s AI Executive Order and closely tracks definitions in many of the 1,000+ state-level AI bills. It’s broad—broad enough, arguably, to encompass tools like advanced spreadsheets and databases.
That breadth has been a real concern in other contexts. I’ve raised alarms about how vague this definition is when it’s used to impose compliance burdens in the Executive Order or state laws. Businesses of all sizes across industries are left guessing whether their tools fall under “AI.”
Consider Colorado’s SB 24-205, a “comprehensive” AI law passed last year, which casts such a wide net that legislators had to clarify in the text that calculators and spellcheckers don’t count as AI … unless they become “a substantial factor in making a consequential decision.” How’s that for clarity?
But the use of the definition in the moratorium is less concerning. Why? Because the moratorium’s purpose is narrow and practical: to ensure that $500 million in federal funding is spent efficiently and effectively to modernize and secure government IT systems using commercial AI. The moratorium doesn’t need to pin down what counts as “AI.” Instead, it asks a simpler question: Does a law or rule single out AI for regulation?
That’s a much easier determination. Regulations that explicitly reference “AI,” “AI models,” or “automated decision systems”—particularly those relevant to federal modernization efforts—are clearly covered. Such laws single out AI.
Equally clear: the moratorium authors do not intend for the provision to block general-purpose laws that apply to all technologies, AI or otherwise. There may be some edge cases. But the moratorium is far less vague than the laws it would limit.
In other words, to the extent Colorado’s law regulates spell-check as AI, yes, the moratorium blocks enforcement of laws regulating spell-check. And that’s a good thing.
Amba Kak and Sarah Myers-West, Co-Executive Directors, AI Now Institute
The recent proposal for a sweeping moratorium on all state AI-related legislation and enforcement flies in the face of common sense: We can’t treat the industry’s worst players with kid gloves while leaving everyday people, workers, and children exposed to egregious forms of harm. The industry’s claim that state laws are a “burdensome” “patchwork” of unwieldy and complex rules is not grounded in fact.
What the record shows is that bipartisan state legislatures have passed or are considering reasonable, targeted, easily administrable rules that hit at AI applications that are patently unsafe and that simply should not be allowed at all. Each of these rules has been hard fought, as state lawmakers respond to egregious harms faced by their constituents – in most cases resisted tooth and nail and whittled down to the bare minimum by armies of Big Tech lobbyists.
Honestly, states are just tinkering at the edges of the problem – there’s a lot more to do to go after the root causes. Two-thirds of US states have laws against AI-generated deepfake porn (most recently Montana, just ten days ago). Half of US states have laws targeting AI-generated deceptive election materials. At least eleven (from Arizona to Connecticut) have introduced bills regulating health insurance companies’ use of AI to deny claims. Tennessee and California have both enacted laws protecting artists against unauthorized use of their likeness. Other bills focus on baseline disclosures that give people a fair understanding of when and how these tools are affecting their lives and livelihoods: requiring notice to people affected by algorithmic decisions in areas including healthcare, employment, housing, and education. Dozens more states are considering legislation along similar lines.
These aren’t onerous obligations; they’re the floor of what we should be looking for, not the ceiling.
Jake Karr, Acting Director, Technology Law & Policy Clinic at New York University School of Law
The language of § 43201(c) is overbroad and vague, and it would seek to strip states of regulatory power in a way that would make James Madison’s AI avatar turn in his virtual grave. Nevertheless, there’s a kernel of legislative wisdom in the bill.
Much of the state-level tech legislation over the past few years has been rash and misguided. With better and worse intentions, laws have targeted specific industries, technologies, companies, and content with language that has often left them fatally vulnerable to constitutional challenge. At the same time, there have been some qualified legislative accomplishments, like California’s Consumer Privacy Act and Privacy Rights Act, or Illinois’s Biometric Information Privacy Act, that have led to tangible benefits for residents in those states. Litigants have also seen some success in attempts to bring longstanding state consumer protection, privacy, and tort claims against companies like Clearview AI and Character.AI. And where they haven’t seen as much success, their efforts have still started to produce nuanced and helpful case law. What these laws have in common is their lack of targeting—they’re “generally applicable” in scope and intent.
The proposed moratorium would seem to carve out such generally applicable laws from its preemptive scope. The bill “may not be construed to prohibit the enforcement of” any generally applicable state law that is “imposed in the same manner on [non-AI] models and systems... that provide comparable functions to [AI models and systems].” § 43201(c)(2)(C)(ii). To be sure, that’s a confusing and unhelpful standard—what does it mean to impose a law “in the same manner” on both AI systems and “comparable” non-AI systems? But the basic idea—that states can enforce generally applicable consumer protection, privacy, and tort laws against AI companies—is a sound one.
The federal government, which cannot get out of its own way, should not get in the way of states earnestly trying to make sense of emerging technologies and protect their residents. Section 43201(c) is a bad law. But preemption or not, state regulators looking to craft meaningful AI regulation that will actually stick would be wise to heed the guidance buried in this subsection of a subsection of a subsection of the proposed moratorium—singling out AI, targeting it as legislatively exceptional, can easily backfire.
Christabel Randolph, Associate Director, Center for AI and Digital Policy (CAIDP)
The moratorium provision is both sweeping and opaque, much like the technology it addresses. The scope includes AI models, AI systems, and automated decision systems. Extraordinarily, the moratorium does not apply to regulation that facilitates the deployment of AI systems, but would apply to regulation that requires accuracy or robustness in design or imposes data handling obligations or civil liability. It could nullify existing state AI regulations while foreclosing any future regulation for at least a decade.
Consider Utah’s Artificial Intelligence Policy Act, which establishes safeguards for consumer-facing AI. The Utah AI Act was one of the first AI consumer protection bills enacted in the US. It establishes liability for use of AI that violates consumer protection laws if not properly disclosed and requires disclosure when an individual interacts with AI in a regulated occupation. It protects patients and consumers who purchase financial services or need legal advice.
Utah’s law also establishes protections for users of mental health chatbots. If a therapy chatbot company collects personal information, it can’t sell that data or use it for anything other than the mental health services it offers. Utah’s law also set up an Office of AI Policy which is responsible for communicating technology-related findings to the state legislature to inform further legislative action.
The bill’s sponsor, Sen. Kirk Cullimore (R), said that “deceptive business practices are already illegal. This bill just clarifies liability when AI is involved.” Interestingly, the law was motivated by a shortage of mental health professionals, but Utah’s legislators didn’t want a solution without guardrails. Margaret Busse, executive director of the Utah Department of Commerce, said of the law, “We want to put some guardrails in place now.”
There has been bipartisan support in the Senate for an AI consumer protection law, yet after years of discussion, a bill has yet to make it out of committee.
Perhaps the most pernicious aspect of the moratorium is that it applies to many state AI-adjacent laws—those governing credit scoring systems, behavioral targeting on social media, and facial recognition technology. Laws across the country that safeguard consumers, small businesses, and the public could all be tossed aside if the moratorium passes.
As a bipartisan group of 40 state attorneys general (including Utah’s) said, the moratorium is “neither respectful to states nor responsible public policy.” Congress should abandon the moratorium proposal and instead focus on crafting meaningful, rights-respecting federal legislation that sets a strong floor while preserving the ability of states to lead, experiment, and protect their constituents.
Matthew U. Scherer, Senior Policy Counsel, Workers' Rights and Technology, Center for Democracy & Technology
The moratorium is so sweeping that it’s hard to imagine how any law that touches on AI or automated decision-making in any way could escape it. My work focuses on automated decisions, so that's what I'm looking at most closely. The moratorium's definition of automated decision systems includes any “computational process” used to “materially influence or replace human decision making.” That would encompass any use of AI or other data-driven technologies in any decision, including minor decisions like scheduling meetings or goods shipments. And as it stands, the moratorium wouldn’t even permit existing laws to be enforced against such systems (though that may have been a drafting error).
It’s hard to even begin to describe all the types of laws that such a moratorium would affect. At face value, the moratorium would prohibit local school boards from barring students from using chatbots to draft their papers. It would prevent states from passing laws to protect residents from new types of AI-enabled scams, or from requiring human review before a hospital implements an AI-recommended treatment. The list could go on forever.
But given the breadth of the definitions of AI and automated decision systems, there’s no way a list of exemptions could leave state legislatures sufficient room to regulate all the things that pretty much everyone agrees should be regulated—to say nothing of new risks that crop up in the next 10 years.
The fact that the moratorium is such a clear overreach might be the best thing about it. Even if it passes, almost everyone will end up regretting it within months.
Adam Thierer, Senior Fellow, R Street Institute
Congress is right to consider an AI moratorium to address the problematic rise of state and local regulatory proposals that interfere with interstate algorithmic commerce. With over 1,000 AI-related measures now pending in the United States, innovators face the prospect of the Mother of All Regulatory Patchworks encumbering their efforts. This would have a damaging effect on new business formation, competition, investment, product development, and consumer choice in this crucially important area.
Even with a moratorium on AI-specific regulatory enactments, however, state and local lawmakers would still be free to pass new technology-neutral rules so long as those regulations do not interfere with interstate algorithmic commerce. State and local lawmakers will need to tailor their policies to ensure their rules do not discriminate against AI offerings or have extraterritorial effect.
Importantly, an AI moratorium would not prevent state lawmakers or attorneys general from addressing AI concerns through the many existing legal and regulatory remedies already on the books, including: unfair and deceptive practices law, civil rights laws, product defects law, common law remedies, and various other consumer protections. As the Massachusetts Attorney General’s office correctly noted in 2024, “existing state consumer protection, anti-discrimination, and data security laws apply to emerging technology, including AI systems, just as they would in any other context.”
The AI moratorium currently under consideration would rightly limit efforts by states to engage in backdoor AI regulation through new rules on automated decision-making systems. But generally applicable rules for privacy and cybersecurity would not run afoul of an AI-focused regulatory moratorium.
Congress has used moratoria before to protect interstate commerce and promote emerging technology innovation and competition. The time has come for federal lawmakers to do the same for AI.
Cody Venzke, Senior Policy Counsel, ACLU
The massive risks posed by the ten-year moratorium now moving to the Senate are well documented: it would open the door to an entirely unregulated AI ecosystem. But the moratorium suffers from another, wonkier but equally problematic issue: it is a confusing, unworkable morass.
The confusion starts with the moratorium’s main provision (Sec. 43201(c)): no state may enforce any “law or regulation” “limiting, restricting, or otherwise regulating artificial intelligence models, artificial intelligence systems, or automated decision systems.” At first blush, the provision’s scope seems obvious: the bill would strike down Colorado’s imperfect but important AI Act, California’s pending automated decision-making regulations, and deepfake bills and laws across the country.
But “otherwise regulating” is sweeping language. “Regulate” does not mean “target” or “single out.” The Holy Bible of legal definitions (the venerable Black’s Law Dictionary) defines regulate as “control,” especially “through the implementation of rules.” A speed limit regulates cars, trucks, scooters, and bicycles, even though it does not single out any of them for special treatment. Privacy, consumer protection, and tort law all regulate AI, its developers, and its deployers, and would seemingly be caught in the moratorium’s sweep.
The moratorium’s drafters seemed to recognize this and included a “rule of construction” to exempt certain laws — but the rule of construction is also beset with drafting problems. As passed by the House, it exempts a state law if the law meets three requirements (roughly – I’m paraphrasing for your sake, dear reader):
- Is meant to remove AI barriers or use AI to streamline zoning, licensing, or similar activities;
- Does not impose “any substantive design, performance, data-handling, documentation, civil liability” or other obligations on AI unless it is a “generally applicable” law that applies to all technology evenly; “and”
- Does not impose a fee or bond unless the fee or bond is reasonable and applies to all technology evenly.
One part of that list is not paraphrased: the “and.” In ordinary use, “and” means that all three of those requirements must be satisfied. This matters. Although many laws will meet the second and third prongs by treating AI like any other technology, far fewer are designed to remove barriers to AI or facilitate its use in activities like zoning. Consequently, laws like civil rights, privacy, and tort law may not meet the requirements of the “rule of construction.” The rule of construction includes a separate, distinct exception for laws with criminal penalties, which is unlikely to apply to civil rights or consumer protection laws.
Perhaps this was not the intent of the moratorium’s drafters. During a hearing this week, one Representative insisted that the moratorium would not apply to generally applicable laws like consumer protection laws, telling commentators to “RTFB — read the freaking bill.” However, that is not how the bill is structured, and the bill’s “plain meaning” will govern, not its supporters’ comments in a committee hearing. The fact that changes to the moratorium altered and clarified many of its provisions but left that “and” in place underscores that this is perhaps not a mere “scrivener’s error” but an intentional choice by the House.
There are important policy reasons why Congress should not pass any moratorium on state AI legislation, but the ambiguity in the reconciliation bill’s approach underscores that this bill is not ready. Reconciliation is not meant to advance such significant policy changes, and the Senate should remove this provision and consider it (if at all) more thoroughly through the proper committee process.