ai-pocalypse Amid expectations that the Trump administration will introduce an AI Action Plan on Wednesday to boost the use of AI in government, the US National Institutes of Health (NIH) is pleading for less of it.
In guidance issued last week to researchers, NIH, part of the US Department of Health and Human Services (HHS), disallowed grant applications created with the help of generative AI.
"NIH will not consider applications that are either substantially developed by AI, or contain sections substantially developed by AI, to be original ideas of applicants," the health agency notice explains.
"If the detection of AI is identified post award, NIH may refer the matter to the Office of Research Integrity to determine whether there is research misconduct while simultaneously taking enforcement actions including but not limited to disallowing costs, withholding future awards, wholly or in part suspending the grant, and possible termination."
NIH did not respond to a request for comment, but the notice says the agency is suddenly receiving an unusually large number of research applications, some of which appear to have been created with the help of AI tools.
"While AI may be a helpful tool in reducing the burden of preparing applications, the rapid submission of large numbers of research applications from a single Principal Investigator may unfairly strain NIH’s application review processes," the notice says.
NIH says few scientists average more than six applications per year, but AI tools have led some to submit more than 40 separate research applications in a single submission round.
To make matters worse, the gap between the agency’s capacity to review research applications and the rate at which AI-generated submissions arrive looks likely to grow following a report in Nature that suggests many grant reviewers face dismissal as the Trump administration seeks to appoint like-minded scientists.
Last month, almost 500 NIH staff members signed a petition urging NIH and HHS leadership to stand up for science and academic freedom in the face of Trump administration cuts. The scientists asked for restoration of terminated research grants and the reinstatement of fired staff.
NIH says it will use unspecified technology to ferret out AI-generated research applications while emphasizing that all such applications should conform with grant policies that expect research organizations and teams to propose original ideas for funding.
The health agency concedes that AI tools may be appropriate for limited tasks when preparing research applications, even as it warns that AI use can result in plagiarism, invented citations, and other scientific misconduct.
NIH is facing the same flood of AI slop that has been overwhelming open source projects like curl, Python, and Open Collective, as well as academic publishers, journalism, web search, and social media. Wherever human content evaluation meets automated content generation, the people just can't keep up, or don't want to because the quality of the generated output is low.
This dynamic once prompted Anthropic, maker of the Claude AI model family, to disallow the use of generative AI by those submitting job applications.
"While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process," the company's job board said earlier this year.
"We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills."
But earlier this month, Anthropic decided on a more nuanced approach, perhaps in recognition of the promotional problems inherent in disallowing the use of its own product. Job candidates are now advised, "Where it makes sense, we invite you to use Claude to show us more of you: your unique perspective, skills, and experiences." ®