Posted on October 30, 2025 • 17 minutes • 3460 words
Table of contents
- Introduction
- Background awareness of issues
- Prerequisite knowledge for Authorization Control
- AI and Authorization Control
- Authorization Considerations for “AI Users”
- Authorization Considerations for “AI Builders”
- Conclusion
- Shameless Plug
Introduction
Hello. I’m Umeuchi (@Sz4rny), a software engineer at GMO Flatt Security, Inc.
In this article, we will take a slightly different perspective and examine “AI and access control.”
Whether you are implementing a service that incorporates a Large Language Model (LLM) or using an LLM-embedded development support tool, you have probably wrestled with the question: “How much authority should be granted to the AI?” This article delves into that theme.
Background awareness of issues
The landscape surrounding Large Language Models (LLMs) has evolved remarkably in recent years. While early LLMs merely engaged in natural conversation, they have now grown into AI agents that think and act autonomously, fundamentally changing the nature of software engineering. Both the experience of “building products with AI” and that of “making products more valuable through the power of AI” are advancing daily. Many readers have likely used LLMs for code completion during development or instructed AI agents to handle miscellaneous tasks.
However, the reasoning capability and autonomy of LLMs and AI agents introduce new security concerns.
For example, suppose you develop an AI agent that can freely browse a system-managed database to answer user questions. A malicious user X might instruct the agent to display the personal information of another user Y. If the system lacks proper checks on the instruction (prompt) content, or filters that remove personal information unrelated to the requester, the agent could be exploited and cause a personal data leak.
This article focuses on “AI and Authorization Control,” like the example above, explaining the fundamentals of authorization control and key perspectives to consider when utilizing AI-related technologies. By reading this article, you will gain a deep understanding of the following two points:
- Authorization Control Considerations for “AI Users”: What authorization control issues may arise when utilizing LLMs or AI agents in the development flow, and how should they be addressed?
- Authorization Control Considerations for “AI Builders”: What authorization control issues may arise when providing features embedding LLMs or AI agents into products, and how should they be addressed?
Note that, unless otherwise specified below, “AI” refers to an LLM-based AI agent.
Prerequisite knowledge for Authorization Control
What is Authorization Control?
Let us review the basics of authorization control. If you are already familiar with authorization control, please skip this section and proceed to the “AI and Authorization Control” section.
Authorization control is, in essence, a mechanism that controls which subject (“who”) is permitted to perform which operations (“what”) on which data or functions (“to what”). When implemented appropriately, it ensures that only subjects with legitimate authority can access the intended functions and data.
For example, consider an SNS application where you can view various people’s profiles. Suppose this application implements the following two APIs:
- GET /profile/{userId}: Displays the profile of the user corresponding to userId.
- POST /profile/{userId}: Updates the profile of the user corresponding to userId.
The former API should likely be accessible by any subject. However, the latter API should only be allowed if the user ID of the request source matches the value specified in {userId}. Otherwise, anyone could update anyone else’s profile.
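As a minimal sketch (the function and parameter names are hypothetical, not from a real API), the check behind the latter endpoint could look like this:

```python
# Minimal sketch of the ownership check for POST /profile/{userId}.
# `requester_id` is assumed to come from the authenticated session (e.g., a JWT).

def can_update_profile(requester_id: str, target_user_id: str) -> bool:
    # Updates are permitted only when the authenticated requester is the
    # profile owner. (The administrator exception is discussed below.)
    return requester_id == target_user_id


def update_profile(requester_id: str, target_user_id: str, new_profile: dict) -> None:
    if not can_update_profile(requester_id, target_user_id):
        raise PermissionError("requester may not update this profile")
    # ... persist new_profile for target_user_id ...
```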

[Diagram: User ID-based Authorization Control Example]
This is a very simple example, but in reality, complex authorization control based on application requirements is common, such as “users with administrator privileges can update anyone’s profile”.
Authorization should not be confused with Authentication, which refers to the process of verifying and confirming the identity of the requesting subject. Authorization control is generally performed based on the role, attributes, and system status of the authenticated subject.
Variations and Key Principles of Authorization Control
There are several representative variations of authorization control.
Perhaps the most famous and widely used is RBAC (Role-Based Access Control). In RBAC, specific permissions are grouped into roles, and those roles are assigned to the subjects accessing the application. For example, a user expected only to view data is assigned the “Viewer” role, while a user who performs data operations is assigned the “Administrator” role.
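A minimal RBAC sketch (role and permission names are illustrative) might look like this:

```python
# Permissions are grouped into roles; roles are assigned to subjects.
ROLE_PERMISSIONS = {
    "viewer": {"data:read"},
    "administrator": {"data:read", "data:write", "data:delete"},
}

def is_allowed(user_roles: list[str], required_permission: str) -> bool:
    # Permit the request if any of the subject's roles grants the permission.
    return any(
        required_permission in ROLE_PERMISSIONS.get(role, set())
        for role in user_roles
    )

# is_allowed(["viewer"], "data:write")        -> False
# is_allowed(["administrator"], "data:write") -> True
```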

[Diagram: RBAC Example]
Another widely used model is ABAC (Attribute-Based Access Control). ABAC determines authorization based on the attributes of the requesting subject, the attributes of the requested resource, and the associated context.
Consider an application that displays beverage information. This application mandates that users under the age of 20 should not be shown information related to alcoholic beverages. The authorization control logic based on ABAC would be as follows:
- If the resource (beverage) being referenced is not alcoholic, permit access.
- If the referencing subject is 20 years or older, permit access.
- Otherwise, deny access.
This example references only the attributes of the requesting subject and the target resource. If the application expanded overseas, age restrictions based on each country’s laws would need to be introduced; in that case, a contextual attribute, the legal drinking age in the user’s location, would also have to be referenced, as in the sketch below.
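A minimal ABAC sketch of this logic (the attribute names and the per-country age table are illustrative assumptions) might look like this:

```python
# Context attribute: the legal drinking age in the user's country.
LEGAL_DRINKING_AGE = {"JP": 20, "US": 21}

def can_view_beverage(user_age: int, user_country: str, is_alcoholic: bool) -> bool:
    # Non-alcoholic beverages may be shown to everyone.
    if not is_alcoholic:
        return True
    # Alcoholic beverages: permit only if the requester meets the age limit
    # for their country; otherwise deny.
    return user_age >= LEGAL_DRINKING_AGE.get(user_country, 20)
```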

[Diagram: ABAC Example]
In addition to these, there is also ReBAC (Relationship-Based Access Control). ReBAC controls access based on the relationships between subjects and resources, or between resources themselves, and is easiest to visualize as a graph: imagine a directed graph where users and data are nodes and their relationships are edges. ReBAC permits access by checking whether a certain type of edge exists between specific nodes. Examples of relationships include the following (a minimal code sketch follows the list):
- Ownership Relationship: A resource created by a user can only be edited by that user.
- Membership Relationship: A resource created by a user can be viewed by members of the group that user belongs to.
- Hierarchical Relationship: A resource created by a user can only be viewed by their subordinates, but their superior can also edit it.
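A minimal ReBAC sketch (the relation names and the single-level group lookup are illustrative) might look like this:

```python
# Relationships as edges of a directed graph: (subject, relation, object).
RELATIONS = {
    ("alice", "owner", "doc-1"),
    ("alice", "member", "team-a"),
    ("team-a", "viewer", "doc-1"),
}

def has_relation(subject: str, relation: str, obj: str) -> bool:
    # Direct edge, e.g. an ownership relationship.
    if (subject, relation, obj) in RELATIONS:
        return True
    # One level of indirection, e.g. a membership relationship: the subject
    # belongs to a group that has the required relation to the object.
    return any(
        (subject, "member", group) in RELATIONS
        for (group, rel, target) in RELATIONS
        if rel == relation and target == obj
    )

# has_relation("alice", "viewer", "doc-1") -> True (via membership in team-a)
```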

[Diagram: ReBAC Example]
Regardless of the method adopted, there is one practice that must always be adhered to: the “Principle of Least Privilege (PoLP)”. PoLP is the idea that “each subject should only be granted the minimum necessary permissions required to use the application”.
For example, consider an application that manages test scores. Users with the Student role should be granted permission to view their own scores. Users with the Teacher role should be granted permission to view and edit the scores of the students they oversee. However, users with the Student role should certainly not be granted permission to edit their own scores.
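Expressed as minimal role definitions (action names are illustrative), least privilege for this example amounts to granting nothing beyond the following:

```python
ROLE_PERMISSIONS = {
    # Students may only view their own scores; no edit permission is granted.
    "student": {"score:read_own"},
    # Teachers may view and edit the scores of the students they oversee.
    "teacher": {"score:read_assigned", "score:edit_assigned"},
}
```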
Such mistakes may seem too obvious to make, but in real development environments it is common to find permissions granted that never should have been, the result of repeated specification changes or overlooked complex requirements.
Reality and Risks of Authorization Flaws
As mentioned in a previous article, “Broken Access Control” (Authorization Flaws) was ranked No. 1 in the global OWASP Top 10 2021 report, indicating an extremely high risk of causing large-scale incidents.
What risks might materialize if authorization control implementation is flawed or if the underlying specifications contain inappropriate definitions? Broadly speaking, these risks fall into the following three categories:
- Unintended Data Viewing (Information Leakage): Data may be improperly viewed by a subject that should not have viewing privileges. This can lead to serious information leakage or privacy violations, especially if the accessed data is sensitive or personal information.
- Unintended Data Modification/Deletion: Data may be improperly updated or deleted by a subject that should not have editing privileges. This risks malicious users changing other users’ information or deleting critical data like transaction history or security logs.
- Unintended Function Execution: Specific functions or APIs may become callable by a subject that should not have execution privileges. For instance, a regular user might execute administrator-equivalent functions, or a free user might access features reserved for paid users, leading to significant technical and business risks.
AI and Authorization Control
Issues related to AI and Authorization Control
Now, let’s move to the main topic. What issues do the rise of LLMs and AI agents pose to conventional authorization control mechanisms?
From this point forward, “authorization control” refers to the process of determining whether an AI agent is allowed to access a specific data source or functionality (e.g., file servers, databases, third-party APIs) when it attempts to use one.

[Diagram: AI and Authorization Control]
Most previous authorization control systems were built on the premise of human usage scenarios and operational methods. When applications begin to be used via AI agents, the following problems are likely to emerge:
- Unpredictable Behavior by Highly Autonomous AI: Unlike humans, whose application usage flow is somewhat clear, AI can exhibit diverse behaviors depending on the prompt given. This makes it difficult to determine specifically which permissions (or roles, relationships, etc.) to grant.
- Expansion of Attack Surface via AI Agents: AI agents often possess permissions to access internal systems or databases that the user cannot directly access (e.g., to enable RAG: Retrieval Augmented Generation). If a prompt is provided that cleverly exploits the agent’s behavior, access to data or functions that should be beyond the original requester’s authority might occur.
- Dynamic Assignment of Context-Aware Permissions: AI agent permissions must not be static but determined dynamically by merging the permissions held by the agent itself and the permissions of the user who requested the task execution. Flaws in how permissions are combined based on the situation risk operations being executed that the instructing user’s authority should not allow, or conversely, permissible operations being blocked.

[Diagram: AI Agent and Authorization Control Issues]
These issues are encapsulated in the OWASP Top 10 for LLM Applications 2025 as “LLM06:2025 Excessive Agency”. Agency in this document refers to the permissions and capabilities granted to an AI agent to execute specific tasks (e.g., plugin features or function calling capabilities). The core idea of LLM06:2025 is that when such Agency is excessively granted, the AI agent might execute actions that cause damage to the user or the system.
The document highlights the following three factors as causes of LLM06:2025:
- Excessive Functionality: The agent can access functionalities unnecessary for the system’s intended operations.
- Excessive Permissions: The agent is granted more than the minimum necessary permissions, both in the types of actions it can take and in the breadth of resources those actions apply to.
- Excessive Autonomy: The agent can execute actions that significantly impact the system or its users without confirmation or approval by a human.
Case Study: Internal Report Generation AI Agent
Let’s examine the difficulties associated with AI and authorization control through a specific fictional case: an “Internal Report Generation AI Agent”.
This application allows internal members to give the agent instructions in natural language, prompting it to search and reference data or documents on internal file servers and databases, summarize them clearly, and display the result to the user.
The key questions are: What permissions should this agent be granted? Should these permissions change based on the requesting user, and if so, how should the final permissions be determined? And what functionality should the tools provided to the agent possess?

[Diagram: Internal Report Generation AI Agent]
Excessive Permissions
Considering the use case, the agent is merely a subject that accesses various data sources and summarizes their contents. Read-only (reference) permissions are therefore necessary, while update or delete permissions should not be required.
If update or delete permissions were granted to this agent, there is a possibility of high-severity risks, such as documents on the file server being unintentionally updated or specific records in the database being modified.

[Diagram: Excessive Permission Assignment]
Attention must be paid not only to the type of permission but also to the scope of the permission. For instance, if the agent’s purpose is limited to “providing reports on company accounting materials,” it should access accounting-related data but not unrelated data (such as employee personal information).
Authorization flaws concerning the scope of permissions also risk unintended data viewing or operations.

[Diagram: Excessive Scope Permission]
Excessive Functionality
While the previous example focused on unnecessary permissions, the functionality of the tools provided to the agent is also a concern. Even if the agent possesses broad permissions, if the tool is only given minimum functionality, the agent cannot execute actions outside that scope.
It is crucial to strip unnecessary functionality from the tools provided to the agent, and not to provide an excessive number of tools in the first place. A tool capable of transferring funds from the company bank account is an extreme example, but in practice it is common to find tools that are no longer needed, or tools granted only for testing, still active in production.

[Diagram: Excessive Functionality Assignment]
Dynamic Permission Assignment
In most real applications, authorization control is based on the attributes and permissions of the requesting user. Examples include:
- Per-user control: User A can view their own personal information but not that of other employees.
- Role-based control: HR administrators can view HR data, but other employees cannot.
- Attribute-based control: Users in the HR department who hold a manager position or higher can view HR data, but other employees cannot.
Applications utilizing AI agents must also properly implement authorization control based on the requesting user’s attributes and permissions. In other words, similar to AssumeRole in AWS IAM, the agent needs to switch the role it assumes based on the requesting user.
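As a sketch of this idea using AWS STS (the role ARNs and the user-to-role mapping are hypothetical), the agent obtains temporary, user-scoped credentials for each task instead of relying on its own standing permissions:

```python
import boto3

# Hypothetical mapping from the requesting user to a narrowly scoped IAM role.
USER_ROLE_ARNS = {
    "alice": "arn:aws:iam::123456789012:role/agent-on-behalf-of-hr-manager",
    "bob": "arn:aws:iam::123456789012:role/agent-on-behalf-of-employee",
}

def credentials_for(requesting_user: str) -> dict:
    # The agent assumes a role that reflects the requesting user's permissions
    # and uses the resulting temporary credentials for this task only.
    sts = boto3.client("sts")
    response = sts.assume_role(
        RoleArn=USER_ROLE_ARNS[requesting_user],
        RoleSessionName=f"report-agent-{requesting_user}",
    )
    return response["Credentials"]
```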

[Diagram: Dynamic Permission Assignment]
This dynamic permission assignment becomes more complex when the agent’s own permissions must also be considered.

[Venn diagram of permissions]
The Venn diagram of permissions can be divided into four regions:
- P1: Permissions held by neither the user nor the agent (Reject).
- P2: Permissions held only by the requesting user.
- P3: Permissions held only by the agent.
- P4: Permissions held by both the user and the agent (Permit).
P1 and P4 authorization controls require little consideration; they should be rejected and permitted, respectively. The existence of region P2 is primarily a user experience issue. This can be solved by designing the agent to assume the user’s permissions.
The most difficult region to manage is P3. For example, the agent has access to an internal database for task fulfillment, but general employees do not.
If authorization control is implemented leniently, the task is fulfilled, but information the user should not have access to could be retrieved via the agent. Even if filtering mechanisms remove sensitive information from the agent’s output, users can potentially combine multiple queries or use prompt injection to bypass the restriction, so filtering is not a fundamental solution.

[Diagram: Acquisition of Sensitive Information via Agent]
Conversely, if authorization control is implemented strictly, user experience issues emerge. There is always a trade-off between “how much access/execution power to allow” and “how strictly to implement authorization control”. In actual development, achieving dynamic and flexible authorization control at a high granularity is desirable.
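As a strict, deny-by-default baseline (names are illustrative), an action would be permitted only when it falls in the P4 intersection, i.e., both the requesting user and the agent hold the permission:

```python
def agent_action_allowed(user_permissions: set[str],
                         agent_permissions: set[str],
                         required_permission: str) -> bool:
    # Permit only region P4: the permission must be held by both the
    # requesting user and the agent; P1, P2, and P3 are all denied.
    effective = user_permissions & agent_permissions
    return required_permission in effective
```

As noted above, region P2 could additionally be covered by having the agent assume the user’s permissions; handling P3 safely is where the trade-off lies.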
Summary of “AI and Authorization Control”
The crucial points for authorization control for AI are:
- First, minimize what the AI can do: Adhere to the Principle of Least Privilege (PoLP). Grant the AI only the necessary permissions and functionalities required for its designated role.
- Move towards more flexible authorization: Once PoLP is established, consider a mechanism that can flexibly assign the appropriate permissions based on the user of the AI and the specific context.
Authorization Considerations for “AI Users”
Many people involved in programming will have experienced AI-assisted development, such as AI-driven code completion, code generation, or “vibe coding” with AI agents like Devin.
AI development support tools typically require access to the code repository, local file system, and certain services or resources within the development environment to function. When granting such access, developers should accurately grasp “what permissions should be granted to the AI” and minimize the permissions actually assigned.
Key authorization control considerations when using AI development support tools:
- Preventing Confidential Information Leaks: Granting excessively broad permissions risks unintended access to credentials stored in repositories or file systems. These credentials could then be used for model training or transmitted to integrated external services. Comprehensive countermeasures are necessary, such as limiting the scope of resources and services the tool can access, and disabling model learning options.
- Preventing Unintended Actions: If the tool is granted excessively broad permissions or excessive autonomy, it might execute actions beyond the developer’s intent (e.g., unexpected code changes or external service calls). Minimizing the permissions given to the tool is critical.
A further effective measure is requiring administrator approval for a task plan before the AI executes a specific task. For example, in Devin, disabling the “Devin should proceed without waiting for approval on complex tasks” option requires the developer’s approval before execution, reducing the risk of unintended outcomes.

[Diagram: Devin Option]
Authorization Considerations for “AI Builders”
“AI Builders” (those who develop AI agents or services utilizing them) must possess a sufficient understanding of system authorization control design. This section explores architectures and practices for achieving robust authorization control in such development environments.
Adopting Context-Aware Authorization Control
Simple authorization models like static RBAC are insufficient to handle the autonomy and task diversity of AI agents. Therefore, adopting a more advanced authorization control model is necessary.
“Context-Aware Authorization Control” is an approach that determines permissions and execution feasibility based on the various contexts in which the AI agent operates. Contextual factors considered include:
- Attributes of the requesting user.
- Attributes of the AI agent.
- Security level of the required resources.
- Type of action to be executed.
For example, to achieve the requirement “HR department managers or above can view HR data, others cannot,” a configuration that references the requesting user’s attributes and grants the HR database connection tool only when those conditions are met seems appropriate. The following pseudocode sketch illustrates this (the tool names, the `User` type, and the grade threshold are illustrative assumptions):
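```python
from dataclasses import dataclass

MANAGER_GRADE = 3  # illustrative threshold for "manager or above"

@dataclass
class User:
    name: str
    department: str
    grade: int

# Placeholder tool identifiers; in practice these would be function-calling
# tools handed to the agent framework.
PUBLIC_SEARCH_TOOL = "search_public_documents"
HR_DATABASE_TOOL = "query_hr_database"

def build_tools_for(user: User) -> list[str]:
    """Grant the HR database connection tool only to HR managers or above."""
    tools = [PUBLIC_SEARCH_TOOL]
    if user.department == "HR" and user.grade >= MANAGER_GRADE:
        tools.append(HR_DATABASE_TOOL)
    return tools

# The agent is then constructed with exactly this tool set, so its
# capabilities reflect the requesting user's attributes.
```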
Furthermore, implementing dynamic authorization control within a policy-based architecture centralizes the Policy Decision Point (PDP). This ensures authorization decisions are made consistently, regardless of whether the user interacts with the service directly or via an AI agent.
In an OPA (Open Policy Agent)-based architecture, a policy implemented in Rego is deployed in an authorization control microservice, and both the agent and the application query this common microservice for authorization decisions. A minimal sketch of such a policy (the package name, input fields, and grade threshold are illustrative assumptions) might look like this:
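```rego
package authz

import rego.v1

default allow := false

# HR data may be read only by HR members at manager grade or above.
allow if {
    input.resource.type == "hr_data"
    input.action == "read"
    input.user.department == "HR"
    input.user.grade >= 3   # illustrative "manager or above" threshold
}
```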

[Diagram: Architecture Using an Authorization Control Microservice]
This structure allows authorization requirements to be changed flexibly. For instance, if a new requirement is added, “Allow HR department members access, even if they are not managers, provided the access is via the AI agent,” the policy can be updated without modifying the application or agent implementation. A sketch of such an updated policy (the `channel` field is an illustrative way to convey that the request came via the agent) might look like this:
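```rego
package authz

import rego.v1

default allow := false

# Existing rule: HR managers or above may read HR data directly.
allow if {
    input.resource.type == "hr_data"
    input.action == "read"
    input.user.department == "HR"
    input.user.grade >= 3
}

# Added rule: any HR member may read HR data when the request comes through
# the AI agent (conveyed here by the illustrative `channel` field).
allow if {
    input.resource.type == "hr_data"
    input.action == "read"
    input.user.department == "HR"
    input.channel == "ai_agent"
}
```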
Leveraging AI Gateways
Another effective method is placing a dedicated AI Gateway (serving as a “checkpoint”) between the AI agent and data sources/services, ensuring all AI agent interactions pass through it. This is equivalent to providing a dedicated API for the AI agent to communicate externally.
The advantage, similar to the policy-based approach, is the centralized management of AI agent authorization decisions and accessible external functionalities. This approach also allows for the simultaneous implementation of other critical requirements, such as rate limiting, prompt sanitization, and output content filtering.
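As a minimal sketch of the gateway’s role (the class and parameter names are hypothetical; the authorization, upstream call, and output-filtering functions are injected placeholders), every tool call from the agent passes through a single choke point:

```python
import time
from collections import defaultdict
from typing import Any, Callable

class AIGateway:
    """Single choke point between the AI agent and external data sources/services."""

    def __init__(self,
                 authorize: Callable[[dict, str, dict], bool],   # e.g. a query to the OPA service above
                 call_upstream: Callable[[str, dict], Any],      # the actual data source / API call
                 redact: Callable[[Any], Any],                   # output content filtering
                 rate_limit_per_minute: int = 30):
        self.authorize = authorize
        self.call_upstream = call_upstream
        self.redact = redact
        self.rate_limit = rate_limit_per_minute
        self._log: dict[str, list[float]] = defaultdict(list)

    def call(self, user: dict, tool: str, args: dict) -> Any:
        # 1. Rate limiting per requesting user.
        now = time.time()
        recent = [t for t in self._log[user["id"]] if now - t < 60]
        if len(recent) >= self.rate_limit:
            raise PermissionError("rate limit exceeded")
        self._log[user["id"]] = recent + [now]

        # 2. Centralized authorization decision.
        if not self.authorize(user, tool, args):
            raise PermissionError("request denied by policy")

        # 3. Forward to the upstream service and filter the output.
        return self.redact(self.call_upstream(tool, args))
```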

[Diagram: Architecture Using an AI Gateway]
Conclusion
This article provided an overview of authorization control in the AI era. We introduced the basics, variations, and principles of authorization control, examined associated risks, and discussed the challenges related to AI authorization using a case study. Finally, we presented crucial authorization control viewpoints for both “AI Users” and “AI Builders,” and examined specific authorization architectures for AI agents.
As AI and LLMs permeate every aspect of software engineering, designing their permissions becomes the foundation for building truly trustworthy and collaborative AI. Robust authorization control mechanisms are essential to maximize the benefits derived from the autonomy of AI agents. We hope this article serves as a helpful hint for readers in their journey of “using AI” and “building AI”.
Shameless Plug
We are GMO Flatt Security, a team of world-class security engineers. Our expertise is proven by zero-day discoveries and top-tier CTF competition results.
We offer advanced penetration testing for web, mobile, cloud, LLMs, and IoT. https://flatt.tech/en/professional/penetration_test
We also developed “Takumi,” our AI security agent. Using both DAST and SAST, Takumi strengthens your application security by finding complex vulnerabilities that traditional scanners miss, such as logic flaws like broken authentication and authorization. https://flatt.tech/en/takumi
Based in Japan, we work with clients globally, including industry leaders like Canonical Ltd.

If you’d like to learn more, please contact us at https://flatt.tech/en