Japan Reveals New Guidelines for Incorporating AI into Defense Equipment

Japan’s prototype “Multi-Purpose Combat Support Unmanned Surface Vehicle” (left) is incorporating artificial intelligence technology.

On June 6, Japan’s Acquisition, Technology & Logistics Agency (ATLA) published the country’s first set of guidelines on its website for the responsible use of artificial intelligence (AI) in the research and development (R&D) of defense equipment. The guidelines aim to ensure appropriate human involvement in the R&D of AI-integrated systems by establishing risk management processes that also provide clarity for businesses, thereby promoting the development of such systems intended for procurement by the Ministry of Defense (JMOD) and the Self-Defense Forces (JSDF).

The new JMOD-compiled guidelines[1] are intended to address the legal, ethical, and operational risks associated with the degree of human control over defense equipment that incorporates AI functions. Concerns are growing both in Japan and globally, as the R&D of AI-powered autonomous systems becomes increasingly prevalent and these systems are deployed in conflicts such as those in Ukraine and the Middle East.

At the same time, JMOD’s AI guidelines—released just one week after Japan’s enactment of a broader AI bill on May 28 that aims to boost the country’s overall competitiveness in the R&D and utilization of AI-related technologies while regulating associated risks[2]—seek to promote the incorporation of AI in defense equipment by providing transparency for businesses seeking to participate in the R&D of such systems.[3]

Background and Rationale for JMOD’s New AI Guidelines

Central to the risks addressed in JMOD’s new guidelines are AI-powered “Lethal Autonomous Weapons Systems” (LAWS), which, amid the absence of an internationally agreed definition,[4] Japan has defined as systems that, “once activated, can identify, select, and engage targets with lethal force without further intervention by an operator.”[5] Japan has categorically ruled out the development of LAWS and is actively advocating for their global prohibition.[6]

The guidelines also reflect Tokyo’s stated position on AI governance more broadly, which has emphasized the importance of a “human-centric principle”[7] and the need to implement risk management measures that ensure meaningful human control over AI-enabled defense systems is maintained.

JMOD’s AI guidelines thus aim to ensure that Japan’s development of AI-integrated defense equipment complies with relevant domestic and international laws and aligns with Japan’s stated position on LAWS: namely, that international humanitarian law (IHL) applies to LAWS, and that Japan does not intend to develop autonomous weapon systems capable of using lethal force without meaningful human involvement.[8]

To uphold this position, JMOD emphasizes the necessity of establishing clear rules for the R&D of AI-integrated autonomous systems to ensure effective risk management and alignment with domestic and international law, as well as Japan’s stance on LAWS.[9]

ATLA R&D projects on AI-powered Autonomous Defense Technologies Featured in the AI Guideline Outline (with English Translations).

By establishing clear rules for risk management processes, the guidelines also aim to encourage private sector involvement in the R&D of AI-integrated defense equipment. As Japan’s Minister of Defense, Nakatani Gen, elaborated during a June 6 press conference, the guidelines are expected to “provide predictability to all business operators who wish to participate in the research and development of equipment.”[10]

JMOD’s Risk Management Process for AI-Powered Defense Equipment

The guidelines for the responsible application of AI in defense equipment projects adopt a three-step risk-management process: (1) classification of the defense equipment, (2) a review of compliance with relevant laws and policies, conducted by a “Legal and Policy Review Board”, and (3) an evaluation of risk management from a technical standpoint, conducted by a “Technical Review Board”.[11]

The first step, classification, involves categorizing AI-integrated defense equipment as either “high risk” or “low risk” to determine the appropriate level of scrutiny. This classification is based on the extent to which the AI system influences the overall equipment’s destructive capabilities. High-risk systems require a detailed legal and policy review, along with a technical evaluation, prior to the commencement of R&D. In contrast, low-risk systems may be self-assessed by the project implementation staff.[12]
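The guidelines describe this classification in prose rather than as a formal procedure, but the decision logic they outline can be illustrated with a minimal sketch. All names below (`RiskLevel`, `classify_equipment`, the boolean criterion) are hypothetical stand-ins for the guidelines’ own criteria, not terminology from the document itself.

```python
from enum import Enum

class RiskLevel(Enum):
    HIGH = "high"  # requires legal, policy, and technical review before R&D begins
    LOW = "low"    # may be self-assessed by the project implementation staff

def classify_equipment(ai_influences_destructive_capability: bool) -> RiskLevel:
    """Hypothetical sketch of the guidelines' first step: equipment whose AI
    component influences the overall system's destructive capabilities is
    treated as high risk; other AI-integrated equipment is treated as low risk."""
    if ai_influences_destructive_capability:
        return RiskLevel.HIGH
    return RiskLevel.LOW
```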

For a project deemed “high risk,” the next step involves review by a Legal and Policy Review Board. The board is responsible for assessing two key requirements of the project, referred to as “A-1” and “A-2,” which can be summarized as follows:[13]

  • A-1: The system must not be one for which compliance with international law, including international humanitarian law (IHL), and domestic law cannot be ensured.
  • A-2: The system must not be a fully autonomous lethal weapon that operates without human involvement, that lacks an appropriate level of human judgment, or that cannot be operated within a responsible, human-led command and control structure.

If, at this second stage, the Legal and Policy Review Board determines that the project fails to meet either requirement “A-1” or “A-2,” it is prohibited from proceeding to the R&D phase.

If, however, the Legal and Policy Review Board determines that the project satisfies both requirements, it is passed to the final step: review by the Technical Review Board.
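As a rough illustration of this gate, and not an official implementation, the board’s decision can be modeled as two checks that must both pass before a project advances; the function and parameter names below are hypothetical.

```python
def legal_policy_review(a1_compliance_ensurable: bool,
                        a2_meaningful_human_control: bool) -> bool:
    """Hypothetical sketch of the Legal and Policy Review Board's gate.

    a1_compliance_ensurable: compliance with international law (including
        IHL) and domestic law can be ensured (requirement A-1).
    a2_meaningful_human_control: the system is not a fully autonomous lethal
        weapon operating outside human involvement and a responsible,
        human-led command and control structure (requirement A-2).

    Returns True only if the project may proceed to the Technical Review Board.
    """
    return a1_compliance_ensurable and a2_meaningful_human_control
```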

Flow diagram for equipment classification and AI-system review (translated into English and amended from the Japanese original).

In the final step, the Technical Review Board, taking into account the findings of the Legal and Policy Review Board, is responsible for assessing seven technical requirements to ensure the appropriateness of a given AI-based system. These requirements, labeled “B-1” through “B-7,” can be summarized as follows, with an illustrative sketch after the list:[14]

  • B-1: Human responsibility must be clarified by designing the AI system so that operators can be involved and take appropriate control.
  • B-2: An appropriate understanding of the AI system among its operators must be cultivated by designing the system with mechanisms that promote proper use, countermeasures against overreliance, and the ability for operators to make corrections if a malfunction occurs.
  • B-3: Objectivity of the AI system must be ensured by identifying and understanding sources of bias and by implementing appropriate mitigation measures in both the AI system and its datasets to limit unjustified and harmful biases.
  • B-4: Verifiability and transparency of the AI system must be ensured by clearly documenting the process of its construction, including the design procedure, the algorithms employed, and the data used for training.
  • B-5: Reliability and effectiveness must be ensured by evaluating and testing the AI system’s reliability, effectiveness, and security so that it operates at an acceptable level throughout its entire lifecycle.
  • B-6: Safety must be ensured by putting in place mechanisms that reduce the risk of malfunctions or serious failures in the AI system.
  • B-7: Compliance with domestic and international law must be ensured through a system design that enables operations to be conducted in accordance with applicable legal requirements.
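To make the shape of this checklist concrete, the sketch below models B-1 through B-7 as named checks that must all be satisfied for the technical review to pass. The field names are paraphrases of the summarized requirements above, not the guidelines’ own terminology.

```python
from dataclasses import dataclass, fields

@dataclass
class TechnicalReview:
    """Hypothetical representation of the seven technical requirements."""
    human_responsibility: bool    # B-1: operators can intervene and take control
    operator_understanding: bool  # B-2: proper use, safeguards against overreliance
    objectivity: bool             # B-3: bias sources identified and mitigated
    verifiability: bool           # B-4: design, algorithms, training data documented
    reliability: bool             # B-5: tested to perform acceptably over lifecycle
    safety: bool                  # B-6: mechanisms reduce malfunction/failure risk
    legal_compliance: bool        # B-7: operations conform to applicable law

    def passes(self) -> bool:
        # The review passes only if every one of the seven requirements holds.
        return all(getattr(self, f.name) for f in fields(self))
```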

The latest guidelines for R&D into AI-incorporated defense equipment, and the associated risk management processes detailed above, are intended to reflect Japan’s stated position on LAWS and IHL, set out in a May 2024 working paper submitted to the United Nations (UN), as well as the commitments outlined by JMOD in its July 2024 “Basic Policy for Promoting AI Utilization.”[15]

In this basic AI policy, JMOD emphasized a “Responsible AI” approach and outlined seven key focus areas in which it plans to utilize AI in compliance with domestic and international laws: (1) target search and acquisition, (2) data collection and analysis, (3) command and control, (4) rear and logistical support, (5) unmanned assets, (6) cybersecurity, and (7) administrative efficiency.[16]

Conclusions: Japan’s Role as an International Rule-Maker on AI

The UN has emphasized the urgent need for states to regulate all types of autonomous weapons systems, which are increasingly incorporating AI technologies in conflicts such as those in Ukraine and the Middle East. UN Secretary-General António Guterres has recommended that, by 2026, nations conclude “a legally binding instrument to prohibit lethal autonomous weapon systems that function without human control or oversight, and which cannot be used in compliance with international humanitarian law.”[17]

Amid ongoing international discussions and limited willingness from Russia, China, and North Korea to engage at the UN level on addressing the challenges posed by LAWS, Tokyo has actively sought to articulate its own position on the global stage.[18] The Ministry of Foreign Affairs’ May 2024 working paper submitted to the UN reaffirmed Japan’s commitment to a “human-centric” approach and emphasized that emerging technologies should be used “in a responsible manner.”[19]

JMOD’s latest guidelines on the use of AI in the development of defense equipment contribute to this effort. The ministry even highlights that its technical review requirements are more comprehensive than those adopted by many of Japan’s allies and like-minded partners.

Table from JMOD’s AI Guidelines for the Responsible Application of AI in Defense Equipment R&D, illustrating how its technical requirements, labeled “B-1” through “B-7”, are more comprehensive than those adopted by, from left to right, the United States, Australia, the United Kingdom, and France.

Nevertheless, Tokyo is also working actively with its allies and like-minded partners to pursue the R&D of autonomous systems[20] and is committed to advancing shared principles on the responsible use of emerging technologies. It has endorsed the “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy”,[21] a U.S.-led initiative launched in 2023, which affirms the need to ensure that AI technologies are employed in accordance with international law, including IHL.

In summary, JMOD’s June AI guidelines, coupled with Tokyo’s broader AI bill enacted in May, represent a clear effort to boost Japan’s AI competitiveness both in the defense sector and beyond. At the same time, they are designed to ensure risk management processes are in place, reflecting Japan’s strong commitment to a rules-based international order. By contributing to the shaping of international AI governance, the Japanese government can support businesses in pursuing responsible AI, thereby contributing to the country’s overall economic competitiveness.


[1] The full publication of the guidelines is available in Japanese at: JMOD, “装備品等の研究開発における責任あるAI適用ガイドライン (第1版)” [Guidelines for Responsible AI Application in Research and Development of Defense Equipment, etc. (Version 1)], June 6, 2025, https://www.mod.go.jp/atla/soubiseisaku/ai_guideline/ai_guideline_ver.01.pdf.

[2] Nikkei Shimbun, “AI技術の開発・活用を推進、悪用事業者は国に調査権 初の法整備” [First legislation to promote development and use of AI technology, also gives government power to investigate businesses that misuse it], May 28, 2025, https://www.nikkei.com/article/DGXZQOUA270UW0X20C25A5000000/.

[3] JMOD, “装備品等の研究開発における責任あるAI適用ガイドライン (第1版)” [Guidelines for Responsible AI Application in Research and Development of Defense Equipment, etc. (Version 1)], June 6, 2025, https://www.mod.go.jp/atla/soubiseisaku/ai_guideline/ai_guideline_ver.01.pdf, p.4.

[4] United Nations Office for Disarmament Affairs, “Lethal Autonomous Weapons Systems (LAWS)”, https://disarmament.unoda.org/the-convention-on-certain-conventional-weapons/background-on-laws-in-the-ccw/.

[5] MOFA, “Working paper submitted by Japan to the United Nations on emerging technologies in the area of Lethal Autonomous Weapon systems (LAWS)”, May 24, 2024, https://www.mofa.go.jp/mofaj/files/100687671.pdf, p.2.

[6] Kyodo News, “Japan sets policy against fully autonomous lethal weapons”, July 15, 2024, https://english.kyodonews.net/news/2024/07/927383440e76-japan-sets-policy-against-fully-autonomous-lethal-weapons.html.

[7] Permanent Mission of Japan to the United Nations, “Statement by H.E. Ambassador ISHIKANE Kimihiro, Permanent Representative of Japan to the United Nations, at the event for the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy”, November 13, 2023, https://www.un.emb-japan.go.jp/itpr_en/ishikane111323.html.

[8] JMOD & ATLA, “装備品等の研究開発における責任あるAI適用ガイドライン 概要” [Outline: Guidelines for Responsible AI Application in Research and Development of Defense Equipment, etc.], June 6, 2025, https://www.mod.go.jp/atla/soubiseisaku/ai_guideline/ai_guideline_ver.01_ov.pdf;

MOFA, “Working paper submitted by Japan to the United Nations on emerging technologies in the area of Lethal Autonomous Weapon systems (LAWS)”, May 24, 2024, https://www.mofa.go.jp/mofaj/files/100687671.pdf, p.4.

[9] JMOD & ATLA, “装備品等の研究開発における責任あるAI適用ガイドライン 概要” [Outline: Guidelines for Responsible AI Application in Research and Development of Defense Equipment, etc.], June 6, 2025, https://www.mod.go.jp/atla/soubiseisaku/ai_guideline/ai_guideline_ver.01_ov.pdf, slide 1.

[10] JMOD, “令和7年6月6日(金)09:09~09:22|中谷防衛大臣閣議後会見” [Friday, June 6, 2025 09:09-09:22 | Press Conference after the Cabinet Meeting by Defense Minister Nakatani], https://www.mod.go.jp/j/press/kisha/2025/0606a.html.

[11] JMOD & ATLA, “装備品等の研究開発における責任あるAI適用ガイドライン 概要” [Outline: Guidelines for Responsible AI Application in Research and Development of Defense Equipment, etc.], June 6, 2025, https://www.mod.go.jp/atla/soubiseisaku/ai_guideline/ai_guideline_ver.01_ov.pdf, slide 3.

[12] JMOD, “装備品等の研究開発における責任あるAI適用ガイドライン (第1版)” [Guidelines for Responsible AI Application in Research and Development of Defense Equipment, etc. (Version 1)], June 6, 2025, https://www.mod.go.jp/atla/soubiseisaku/ai_guideline/ai_guideline_ver.01.pdf, pp.7-12.

[13] For the full and precise list of requirements, please refer to the Japanese original guidelines at: JMOD, “装備品等の研究開発における責任あるAI適用ガイドライン (第1版)” [Guidelines for Responsible AI Application in Research and Development of Defense Equipment, etc. (Version 1)], June 6, 2025, https://www.mod.go.jp/atla/soubiseisaku/ai_guideline/ai_guideline_ver.01.pdf, p.6 & p.9.

[14] For the full and precise list of requirements, please refer to the Japanese original guidelines at: JMOD, “装備品等の研究開発における責任あるAI適用ガイドライン (第1版)” [Guidelines for Responsible AI Application in Research and Development of Defense Equipment, etc. (Version 1)], June 6, 2025, https://www.mod.go.jp/atla/soubiseisaku/ai_guideline/ai_guideline_ver.01.pdf, p.6 & pp.10-11.

[15] JMOD, “装備品等の研究開発における責任あるAI適用ガイドライン (第1版)” [Guidelines for Responsible AI Application in Research and Development of Defense Equipment, etc. (Version 1)], June 6, 2025, https://www.mod.go.jp/atla/soubiseisaku/ai_guideline/ai_guideline_ver.01.pdf, p.6 & pp.1-3.

[16] JMOD, “防衛省AI活用推進基本方針” [JMOD Basic Policy for Promoting AI Utilization], July 2024, https://www.mod.go.jp/j/press/news/2024/07/02a_03.pdf, pp.8-9.

[17] United Nations Office for Disarmament Affairs, “Lethal Autonomous Weapons Systems (LAWS)”, https://disarmament.unoda.org/the-convention-on-certain-conventional-weapons/background-on-laws-in-the-ccw/.

[18] Kyodo News, “Japan sets policy against fully autonomous lethal weapons”, July 15, 2024, https://english.kyodonews.net/news/2024/07/927383440e76-japan-sets-policy-against-fully-autonomous-lethal-weapons.html.

[19] MOFA, “Working paper submitted by Japan to the United Nations on emerging technologies in the area of Lethal Autonomous Weapon systems (LAWS)”, May 24, 2024, https://www.mofa.go.jp/mofaj/files/100687671.pdf.

[20] JMOD, “Australia-Japan-United States Trilateral Defence Ministers’ Meeting November 2024 Joint Statement”, November 17, 2024, https://www.mod.go.jp/en/article/2024/11/74336ab681b2d932f9ab75f4c9f2e4c4ddaf97f8.html.

[21] U.S. Department of State, “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy”, https://www.state.gov/bureau-of-arms-control-deterrence-and-stability/political-declaration-on-responsible-military-use-of-artificial-intelligence-and-autonomy.


This article was originally posted on NSBT Japan, the first defense and security industry network in Japan. The publication provides the latest information on security business trends both within Japan and overseas. Asian Military Review began exchanging articles with NSBT Japan in April 2024.
