Few jobs require no experience. But artificial intelligence (AI) is putting some cyberattacks within reach of criminals with little more than basic computing knowledge.
While AI lowers the barriers for novice hackers, experts emphasize that sophisticated attacks still demand precision, contextual awareness, and access to resources. AI-generated attacks often exploit known vulnerabilities, which requires far less precision than exploiting unknown flaws. Advanced persistent threat actors either discover zero-day vulnerabilities themselves or buy information about them, along with the tools to exploit them. Advanced persistence is the sophisticated ability to infiltrate a company’s network and systems and remain there, undetected, for a long time.
AI cannot grasp the subtle contextual cues (e.g., social cues, organizational dynamics) that people can. “Context requires deep world knowledge and experience, including direct experience with other humans in the enterprise, something AI doesn’t have,” said Kevin Surace, chief executive officer at Santa Clara, CA-based Appvance, which provides AI-driven autonomous software test generation.
Novices may not have access to enough compute to run resource-intensive AI-driven attacks.
“To use the more advanced large language model (LLM) configurations, you need much more expensive compute,” said Jacob Anderson, owner of Beyond Ordinary Software Solutions, a professional services company.
Novice Threat Actors Benefit Most from AI Tools
AI helps novices and experienced cybercriminals alike, but it gives novices a more noticeable and significant advantage because they start from nothing, according to Vakaris Noreika, head of product for NordStellar at Nord Security, a cybersecurity company.
Novices don’t have the knowledge to use complex manual tools and coding in cyberattacks. AI-powered attack tools are often turnkey solutions that allow those with little technical expertise to launch automated attacks without needing to develop or customize malware.
How Defenders Can Neutralize AI-Driven Attacks’ Scalability
Automated patching is key to neutralizing AI attack scalability; closing known security holes helps stop many automated, AI-driven scans from finding easy targets, according to Noreika.
“Internal security teams can harness AI to automate real-time threat detection, identify phishing attempts, spot vulnerabilities, and process large volumes of threat intelligence data to uncover emerging threats,” said Josh Jacobson, director of professional services at HackerOne, a bug bounty program provider.
AI-Driven Attacks Disrupt Critical Infrastructure
It’s too early to say AI-driven attacks won’t disrupt critical infrastructure at scale, according to Ersin Uzun, director of the ESL Global Cybersecurity Institute at the Rochester Institute of Technology. “There might already have been instances that are not publicly disclosed,” said Uzun.
According to Uzun, attacks on critical infrastructure such as industrial control systems (ICS) require deep and target-specific knowledge of the operational environment. AI tools alone may not have such knowledge. Elite criminal hackers can and do obtain such information.
“All the indicators are telling me that we will see more AI-driven attacks against ICS in the future,” said Uzun.
Telling AI-Generated Noise from Genuine Threats
“Common AI attack tools are still relatively blunt tools that generate bursts of unusual activity, like port scans or looking for known vulnerabilities,” said Uzun.
“It looks different from a strategic attack like an advanced persistent threat, where a sophisticated attacker surgically moves within a system and tries to avoid activities that warn of his presence. That said, some phishing or social engineering attacks using deepfakes and AI-generated content are already getting very hard to distinguish.”
Port scan techniques can quickly check which network ports are open, helping attackers or defenders find possible entry points or vulnerabilities.
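A basic port scan is simple enough to sketch in a few lines. The following illustrative Python snippet (host and port list are hypothetical examples) attempts a TCP connection to each port; a port that accepts the connection is open:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Try a TCP connection to each port; ports that accept are open."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example: check a few well-known service ports on the local machine.
print(scan_ports("127.0.0.1", [22, 80, 443, 8080]))
```

Production scanners such as nmap are far faster and stealthier, but the underlying check is the same: which ports answer.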
According to Noreika, automation helps to filter AI-generated noise. “Security information and event management (SIEM) tools collect all the logs, and security orchestration, automation, and response (SOAR) tools can automatically handle low-level, known-bad alerts based on rules, taking some of the load off of analysts,” said Noreika. By collecting network and system event records or logs, SIEM provides SOAR with the information it needs to respond to obvious threats that are not critical.
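The rule-based triage Noreika describes can be sketched in a few lines. This is a minimal illustration, not any vendor's SOAR engine; the alert fields, actions, and IP addresses (drawn from documentation-only ranges) are all hypothetical:

```python
# Hypothetical known-bad sources, as a threat-intelligence feed might supply.
KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}

def triage(alert):
    """Handle obvious, low-level alerts automatically; escalate the rest
    to a human analyst."""
    if alert.get("source_ip") in KNOWN_BAD_IPS:
        return "block_ip"      # known-bad source: block without analyst input
    if alert.get("type") == "port_scan" and alert.get("severity") == "low":
        return "log_only"      # noisy but low-risk: just record it
    return "escalate"          # anything unfamiliar goes to an analyst

alerts = [
    {"source_ip": "203.0.113.7", "type": "login_failure", "severity": "low"},
    {"source_ip": "192.0.2.5", "type": "port_scan", "severity": "low"},
    {"source_ip": "192.0.2.9", "type": "data_exfiltration", "severity": "high"},
]
print([triage(a) for a in alerts])  # → ['block_ip', 'log_only', 'escalate']
```

Rules like these absorb the high-volume, known-bad noise, leaving analysts free to examine the escalated alerts.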
“AI-powered defense tools are key here, too,” Noreika said. “They use machine learning (ML) and behavioral analysis to spot patterns. Instead of looking for known virus signatures, they look for unusual activity. Is a user account suddenly accessing files it never touches? Is data flowing out in a weird pattern? AI can flag these anomalies that might signal a real, sophisticated threat, even generated by another AI.”
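The anomaly-spotting idea Noreika describes can be reduced to a toy example: build a baseline of which resources each account normally touches, then flag events that fall outside it. Real behavioral-analysis products use far richer statistical models; the user names, resources, and threshold below are illustrative assumptions:

```python
from collections import Counter

def build_baseline(history):
    """Count how often each (user, resource) pair appears in past activity."""
    return Counter((e["user"], e["resource"]) for e in history)

def flag_anomalies(baseline, events, min_seen=1):
    """Flag events whose (user, resource) pair was seen fewer than
    min_seen times in the baseline, i.e., unusual access."""
    return [e for e in events
            if baseline[(e["user"], e["resource"])] < min_seen]

# Hypothetical history: this account routinely reads /reports and nothing else.
history = [{"user": "alice", "resource": "/reports"} for _ in range(50)]
baseline = build_baseline(history)

new_events = [
    {"user": "alice", "resource": "/reports"},     # matches the baseline
    {"user": "alice", "resource": "/payroll_db"},  # never accessed before
]
print(flag_anomalies(baseline, new_events))
# → [{'user': 'alice', 'resource': '/payroll_db'}]
```

Because the detector looks for deviation from normal behavior rather than known signatures, it can flag a novel, AI-generated attack the first time it appears.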
Socioeconomics and the Global Distribution of Novice Cybercriminals
According to Noreika, cybercrime is not spread evenly around the world. “In general, factors like economic opportunity, education level, Internet access, and the rule of law seem to play a big role,” he said. “In places where there’s decent Internet access but maybe fewer legitimate job opportunities, cybercrime can seem like an attractive option.
“The digital divide is also an essential factor. Basic Internet access or digital literacy needed to use advanced tools, let alone AI, can be scarce in poorer regions. So AI might lower the barrier for some, but lack of access remains a more significant barrier for others,” said Noreika.
Crime Groups Use Both Cybersecurity Experts and Novice Teenagers
“Lapsus$ and Killnet [cybercrime groups] are great examples of successful ‘script kiddies’ who were young and/or less knowledgeable and used ‘spray and pray’ distributed denial of service and social engineering tactics,” said Crystal Morin, cybersecurity strategist at Sysdig, a cloud-native security and monitoring company.
Unskilled script kiddies can use pre-made tools or scripts created by others to launch cyberattacks. Spray-and-pray sends a large number of generic attacks in the hope that a few will succeed by chance. “FIN7, on the other hand, is a highly skilled crime group that has yet to be tied to any nation-state,” Morin said.
David Geer is a journalist who focuses on issues related to cybersecurity. He writes from Cleveland, OH, USA.
AI Empowers Novices to Launch Cyberattacks
© 2025 ACM 0001-0782/25/7