Jira tickets could potentially be abused for prompt injection when support staff use AI to help handle issues.
This “living off AI” proof-of-concept (PoC) attack targeting Atlassian’s Model Context Protocol (MCP) and Jira Service Management (JSM) was demonstrated by Cato Networks in a blog post Thursday.
The PoC exploit highlights the risk that arise when AI systems with internal access and permissions are able to act on untrusted external inputs.
“The security and protection of our customers’ data is our highest priority. Our teams are exploring enhanced controls designed to support more secure use of MCP clients among our customers who choose to integrate the technology with their Atlassian products,” an Atlassian spokesperson told SC Media.
Atlassian’s MCP allows customers to integrate Anthropic’s Claude models into their Jira and Confluence workflows. For example, users can use Claude to create, summarize and reply to Jira work items. The MCP provides Claude access to the internal Jira or Confluence resources needed to assist the user.
Cato’s security engineers found that a JSM support ticket could be used to deliver a prompt injection to Claude and manipulate the model into leaking internal data.
In their demo video, the Cato team crafted a ticket with instructions for the AI to retrieve and summarize other internal Jira tasks and post the summaries as a comment on the attacker’s ticket.
If a support engineer relies on the AI to automatically read and resolve issues, this prompt injection can be executed and internal data leaked before the user can recognize the attack.
Cato noted that some JSM support portals allow tickets to be submitted by anyone without any authentication, and that a Google search reveals many externally accessible service portals.
“The risk we demonstrated is not about a specific vendor — it’s about a pattern. When external input flows are left unchecked with MCP, threat actors can abuse that path to gain privileged access without ever authenticating,” the Cato blog authors wrote.
The team also described another potential scenario where the AI could be prompted, via a Jira ticket, to add a comment with a malicious link to all other open Jira issues. If successful, this could lead a support engineer to potentially click a dangerous link resulting in malware downloads or other consequences; however, unlike the previous attack, this scenario was not demonstrated as a proven exploit.
An Atlassian spokesperson told SC Media that the company strongly encourages its customers to carefully assess any risks specific to their use of the Atlassian MCP.
Several researchers have highlighted emerging risks associated with AI assistants or agents that have both access to privileged internal data and exposure to external inputs. Last week, Aim Security researchers demonstrated how a now-patched Microsoft Copilot flaw, dubbed EchoLeak, could be used to leak internal data by sending a prompt injection via email.
In May, Invariantlabs demonstrated an attack similar to Cato’s PoC, targeting GitHub’s MCP, which used a prompt injection in an issue to cause an AI agent to leak private repository details.
Get essential knowledge and practical strategies to use AI to better your security program.