After reading that Trump was pressuring Congress to pass a bill of more than 900 pages, I was curious about whether it may have been created with AI and what its implications are (both positive and negative).
A version from last weekend (downloadable here) contains an unusually high density of em dashes; while that is not proof it was created with AI, we can presume AI involvement is highly likely.
Em dashes per page are understood to be a sign of likely AI content, since they are more difficult for a person to type on a US keyboard (there is no dedicated key for them) yet appear freely in AI output (though, to be fair, this is not bulletproof evidence).
I uploaded the PDF to Google Gemini 2.5 Pro to count em dashes per page (with the usual caveats: it is not completely accurate, it hallucinates, and it exhibits the needle-in-the-haystack issue (Nelson et al., 2024)). Given those caveats, AI can still work for broad comparisons like this; the bill used roughly 250k tokens of Gemini 2.5 Pro’s 1M-token context window. The prompt is in the figure caption below.
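For anyone who wants to sanity-check the model’s counts without relying on an LLM at all, the same per-page statistic can be computed deterministically. Below is a minimal sketch that swaps in the pypdf library instead of Gemini and assumes a hypothetical bill.pdf filename; PDF text extraction can miss characters on scanned or oddly encoded pages, so treat it as a rough cross-check rather than ground truth.

```python
# Rough, deterministic cross-check of em dashes per page (no LLM involved).
# Assumes pypdf is installed (pip install pypdf) and the bill is saved as bill.pdf.
from pypdf import PdfReader

EM_DASH = "\u2014"  # the em dash character

reader = PdfReader("bill.pdf")
counts = []
for page in reader.pages:
    text = page.extract_text() or ""  # guard against pages where extraction yields nothing
    counts.append(text.count(EM_DASH))

pages = len(counts)
total = sum(counts)
print(f"{total} em dashes across {pages} pages ({total / pages:.2f} per page)")
```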

The em dashes per page are over 100x the average for a bill sent to Congress, plotted below to visualize this. I encourage everyone to analyze the bill and its implications with AI, but also to confirm what they are reading against the source (to account for less-than-100% accuracy and hallucinations).
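If you want to recreate a comparison plot along these lines, a simple bar chart of the two per-page rates is enough. The numbers in this sketch are placeholders illustrating the >100x gap, not the actual measured counts from the analysis.

```python
# Minimal plotting sketch; the values are placeholders, not measured counts.
import matplotlib.pyplot as plt

labels = ["Average bill", "This bill"]
emdashes_per_page = [0.1, 10.0]  # hypothetical values standing in for the >100x gap

plt.bar(labels, emdashes_per_page)
plt.ylabel("Em dashes per page")
plt.title("Em dashes per page: this bill vs. an average bill")
plt.show()
```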
Not really, at face value (AI with human review can be used as a tool to create things faster than usual), but it raises three pertinent questions:
1. How is Congress reading this bill (and why are they being pressured to pass it)?
Most likely with AI, which is prone to the same issues as my quick scan: needle-in-the-haystack problems mean it will not catch everything in the bill, by design, and neither will Congress if it is under pressure to pass the bill by a deadline this week.
There is also bias inherent in how this bill gets read (Ferrara, 2024); it is doubtful that Congress is aware of the system prompts, prompt engineering, or reasoning settings built into a government tool provided by the Trump administration.
2. If this was created with AI, why?
I don’t know the answer, but Occam’s Razor suggests AI was likely used to include as many elements of Project 2025 as possible, fast. Without getting into the politics of this, the bill does appear to have been human-reviewed to some degree.
3. Besides the bill content, how ethical is AI use by the government for these things?
It might not be as bad as it sounds (the intent of Project 2025 aside, since that is political).
Where it is bad, key issues include privacy violations, bias, discrimination, and reduced transparency and accountability (Saura et al., 2022; Henman, 2019). Predictive algorithms in public services can lead to harmful consequences for citizens and workers if not properly implemented and tested (Waller & Waller, 2020). Ethical AI implementation requires addressing expertise gaps, improving risk frameworks, and enhancing transparency (Zick et al., 2024). Overall, a more nuanced view of AI in government is necessary to create realistic expectations and mitigate risks (Toll et al., 2020).
This also does not account for government officials using AI to “vibe code”, a term for inexperienced people building applications without understanding the underlying mechanics, which opens things up to security holes and inefficiencies…
Cite this if desired:
@online{aiencoder2025trump,
  author = {AI Encoder},
  title  = {Potential Evidence That Trump’s Big Announcement Was AI-Generated},
  year   = {2025},
  month  = {June},
  url    = {https://open.substack.com/pub/aiencoder/p/potential-evidence-that-trumps-big},
  note   = {Substack}
}