OWASP Global AppSec: New AI vulnerability scoring system unveiled

WASHINGTON, D.C. — Traditional threat-modeling frameworks like the Common Vulnerability Scoring System (CVSS) are inadequate to gauge the severity of vulnerabilities in agentic AI, said AI expert, author and adjunct professor Ken Huang at the Open Worldwide Application Security Project (OWASP) Global AppSec conference here Friday (Nov. 7).

Instead, Huang said, OWASP has put together a different threat model metric, the AI Vulnerability Scoring System (AIVSS), which is based on CVSS but has extra dashes of AI special sauce.

The CVSS and other regular software vulnerability frameworks "are not enough," Huang said. "These assume traditional deterministic coding. We need to deal with the non-deterministic nature of agentic AI."

Huang is a co-leader of the AIVSS project working group at OWASP, along with Zenity Co-Founder and CTO Michael Bargury, Amazon Web Services Application Security Engineer Vineeth Sai Narajala, and Stanford University Information Security Officer Bhavya Gupta.

"Agentic AI needs a little bit of autonomy, but autonomy can introduce additional risk," explained Huang. "Autonomy is not itself a vulnerability, but it does elevate risk."

What the AIVSS is and does

The AIVSS project is a standardized framework to score and manage vulnerabilities unique to agentic and AI systems, said Huang. The idea is to quantify AI-specific risk not covered by traditional methods like CVSS.

For example, he said, even the identities used by AI agents may be hard to predict, unlike the fixed machine identities used by most traditional software.

"We cannot assume the identities used at deployment time," said Huang. "With agentic AI, you need the identity to be ephemeral and dynamically assigned. If you really want to have autonomy, you have to give it the privileges it needs to finish the task."

The AIVSS website at https://aivss.owasp.org/ includes guides for structured AI risk assessment, plus a scoring tool to calculate your own AI risk.

The score is determined by taking the CVSS base score for a traditional vulnerability, then adding to it an agentic-capabilities assessment that takes into account risk-amplifying factors such as autonomy, non-determinism, and tool use. The sum is divided by two, and the result is multiplied by an environmental context factor.
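Expressed as code, that calculation reads as follows. This is a sketch of the arithmetic exactly as described above; the function names, input ranges, and example values are assumptions for illustration, not figures from the AIVSS specification.

```python
def agentic_capabilities_score(autonomy: float, non_determinism: float,
                               tool_use: float) -> float:
    """Toy aggregation of the risk-amplifying factors named above;
    equal weighting is an assumption for illustration."""
    return (autonomy + non_determinism + tool_use) / 3

def aivss_score(cvss_base: float, agentic_score: float, env_factor: float) -> float:
    """Average the CVSS base score with the agentic-capabilities score,
    then scale by an environmental context factor, per the description above."""
    if not (0.0 <= cvss_base <= 10.0 and 0.0 <= agentic_score <= 10.0):
        raise ValueError("scores are expected on a 0-10 scale")
    return round(((cvss_base + agentic_score) / 2) * env_factor, 1)

# Example: a CVSS 7.5 flaw in a highly autonomous, tool-using agent
# deployed in a sensitive environment (context factor 1.2, hypothetical).
agentic = agentic_capabilities_score(autonomy=9.0, non_determinism=8.0, tool_use=7.0)
print(aivss_score(7.5, agentic, 1.2))  # 9.3
```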

The top AI security risks, so far

For now, the AIVSS has a list of the 10 most severe agentic AI core security risks. Just don't call it a Top 10 list yet:

  1. Agentic AI Tool Misuse
  2. Agent Access Control Violation
  3. Agent Cascading Failures
  4. Agent Orchestration and Multi-Agent Exploitation
  5. Agent Identity Impersonation
  6. Agent Memory and Context Manipulation
  7. Insecure Agent Critical Systems Interaction
  8. Agent Supply Chain and Dependency Attacks
  9. Agent Untraceability
  10. Agent Goal and Instruction Manipulation
  11. "For example, for tool misuse, there shouldn't be a risk in selecting a tool," Huang explained. "But [in MCP systems] there is tool impersonation. Also insecure tool usage."

The draft of the framework document, "AIVSS Scoring System for OWASP Agentic AI Core Security Risks," notes that "some repetition across entries is intentional."

"Agentic systems are compositional and interconnected by design," it adds. "To date, the most common risks such as Tool Misuse, Goal Manipulation, or Access Control Violations often overlap or reinforce each other in cascading ways."

Huang explained that the current goal of the AIVSS working group is to have version 1.0 of the framework document ready by the time of the next RSA Conference in San Francisco in late March 2026. He invited anyone to read the draft document and contribute.
