
Governing Lethal Autonomous Weapons in a New Era of Military AI

03 Aug 2025


Lethal autonomous weapons systems (LAWS), such as drones and autonomous missiles, are no longer a distant prospect; they are rapidly approaching operational use on the battlefield.[1] At present, defensive systems are the most common type of autonomous weapon. These include systems that, once engaged, function independently through trigger mechanisms, such as anti-personnel and anti-vehicle mines.[2] At its core, a LAWS is a weapon system that, once activated, can “select and engage targets without human intervention.”[3] Such systems mark the latest stage in military technology’s evolution from manual to automated and, now, autonomous operation.

In contrast to previous technologies that sought to increase range, speed, and precision, LAWS mark a paradigm shift by moving critical decision-making tasks, such as target selection and engagement, from humans to computers. This shift reflects not only technological advancement but also a fundamental change in who, or what, holds the authority to use force. Yet key terms like “human control,” “intervention,” and “lethality” remain vague, creating uncertainty around the very concept of LAWS.

Their governance remains unsettled because there is no universal framework, largely owing to differing national views: some states, such as the U.S. and Russia, maintain that existing international law is sufficient, while others, such as Serbia and Kiribati, want a total prohibition on moral and humanitarian grounds. Meanwhile, a growing number of states taking a two-track position, such as the Netherlands and Germany, propose a compromise: outlawing some applications while strictly regulating others. These differences demonstrate how urgently consistent international standards are needed.

The use of autonomous weapons has prompted debate among military planners, roboticists, and ethicists about developing and deploying weapons that can perform complex tasks, such as targeting and the application of force, with little or no human oversight.[4] At the center of this debate is their challenge to humanitarian and legal norms, especially those set out in the Geneva Conventions. When automated devices are empowered to make life-or-death decisions, these developments risk undermining core principles of armed conflict, such as responsibility, proportionality, and distinction.[5] This raises urgent questions about whether such systems can meet the standards expected of human combatants.

The following sections explore what makes LAWS different by examining how autonomy differs from automation, how varying levels of human control affect legal accountability, and the broader implications for the future of warfare. They then turn to the global debate between banning and regulating these systems, followed by an overview of current international governance efforts. As international concern mounts, it is becoming increasingly clear that the question is no longer whether LAWS should be governed, but how that governance should take form.

What makes LAWS different? 

Automation vs. Autonomy

A key feature that distinguishes LAWS from previous weapon systems is their degree of autonomy: the ability to carry out operations such as target selection and engagement without immediate human input. The terms autonomy and automation are often conflated, which can obscure important legal and operational distinctions when evaluating the risks and responsibilities associated with each type of system. Automation[6] is the predictable, fixed execution of pre-programmed instructions: the system follows pre-established rules created by humans to respond to particular inputs, and it never changes its behavior. Landmines, widely used in 20th-century conflicts, are fully automated. Once deployed, a landmine waits passively and explodes when triggered by pressure or movement; it cannot differentiate between a civilian and a combatant, and it remains a threat long after a war ends. This led to the 1997 Mine Ban Treaty, under which over 160 countries agreed to prohibit anti-personnel mines because they cause civilian casualties, draw international criticism, interfere with post-conflict rehabilitation, and place a long-term demining burden on affected states.

In contrast, autonomy[7] means that a system can perceive, understand, decide, and act within a given context or for a given purpose with little to no direct human input, frequently by using AI or machine learning. This is a step beyond automated machines because an autonomous system might be able to distinguish civilians from combatants and avoid targeting civilians by mistake. The difference between automation and autonomy is not merely technical; it lies at the core of the legal and moral debate over LAWS. It bears on proportionality, on accountability, and on whether a weapon system adheres to international humanitarian law. The level of human control becomes the crucial distinction at this point: whether a human actively selects the target, supervises the engagement, or is removed from the process entirely has significant implications. The IAI Harop (Israel Aerospace Industries) is a fully autonomous loitering munition that can conduct surveillance and strike. Without requiring precise prior intelligence, once deployed it can autonomously search for radar-emitting targets within a designated area, identify and choose a target, and carry out an attack.[8] It defies traditional definitions of missiles because it uses onboard sensors to engage targets based on real-time data. The Harop’s hybrid nature, blending the autonomy of a drone with the impact of a missile, raises heightened legal and ethical concerns,[9] especially if the algorithm evolves over time: the more autonomous a system becomes, the harder it is to hold anyone legally accountable for unlawful actions it takes.[10]
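To make the distinction concrete, the sketch below contrasts the two logics in schematic form. It is purely illustrative and not drawn from any real weapon system: the sensor fields, the threshold, and the placeholder classifier are all hypothetical, and the “autonomous” branch merely stands in for whatever learned perception model a real system might use.

```python
# Illustrative only: a toy contrast between a fixed, pre-programmed trigger
# (automation) and a perception-driven decision (autonomy). All names,
# thresholds, and the placeholder classifier are hypothetical.
import random
from dataclasses import dataclass, field

@dataclass
class SensorReading:
    pressure_kg: float                                  # e.g., weight on a pressure plate
    image_features: list = field(default_factory=list)  # stand-in for camera/radar data

def automated_trigger(reading: SensorReading, threshold_kg: float = 5.0) -> bool:
    """Automation: a fixed rule that fires whenever the threshold is crossed,
    regardless of who or what crossed it. The behavior never changes."""
    return reading.pressure_kg >= threshold_kg

def placeholder_classifier(features: list) -> str:
    """Stand-in for a learned perception model; a real system would use trained
    (and imperfect) machine-learning inference, not random choice."""
    return random.choice(["combatant", "civilian", "unknown"])

def autonomous_decision(reading: SensorReading) -> bool:
    """Autonomy: the system perceives and classifies on its own, and proceeds
    only if its own judgement labels the object a combatant."""
    return placeholder_classifier(reading.image_features) == "combatant"

if __name__ == "__main__":
    reading = SensorReading(pressure_kg=70.0, image_features=[0.2, 0.9, 0.4])
    print("automated rule fires:", automated_trigger(reading))       # always True above the threshold
    print("autonomous system fires:", autonomous_decision(reading))  # depends on its own classification
```

The point is not the code itself but where the decision lives: in the automated case, a human fixed the rule in advance; in the autonomous case, the system’s own, and possibly evolving, model makes the call at runtime, which is exactly where the questions of accountability discussed above arise.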

Degrees of Human Supervision

Autonomy and human supervision are not mutually exclusive: a system can function autonomously while still being subject to human control at certain crucial points. The spectrum of human involvement is commonly divided into three categories based on the degree of intervention (a schematic sketch follows the three descriptions below):

In-the-loop means a human must approve or initiate any targeting or engagement decision, as with Russia’s Marker robot.[11] Marker is a multi-domain ground combat robot with AI-powered autonomous navigation and battlefield reconnaissance capabilities. It uses machine vision and neural networks, technologies that allow it to process visual data and learn patterns in order to follow targets and adjust to changing terrain.[12] Lethal action still requires human authorization, but its architecture points to a gradual transition toward complete autonomy. This makes it a transitional system and draws attention to a larger regulatory issue: many systems operate in a “grey zone” of partial autonomy that allows states to remain formally within existing definitions and legal restrictions.[13]

On-the-loop refers to human supervision of the process, with intervention when necessary, as with South Korea’s armed stationary sentry robot (SGR-A1). Using thermal and optical sensors, the SGR-A1 can identify, track, and vocally confront intruders on its own in the demilitarized zone between North and South Korea.[14] Despite its ability to fire without human input, current deployments reportedly require human authorization prior to engagement. The system demonstrates that the ability to sense and track independently does not always translate into lethal action, a contrast that challenges definitions treating autonomy as binary rather than accounting for the many steps of the kill chain.[15]

Lastly, out-of-the-loop entails no human intervention: the system operates with complete control and independence once activated. A clear example is the IAI Harpy (Israel Aerospace Industries), a fully autonomous loitering munition. It is “equipped to hunt and seek targets in a designated area, locate and identify their frequency, and autonomously pursue a strike from any direction, at shallow or steep dive profiles.”[16] It functions as a “fire-and-forget” weapon, combining the features of an unmanned aerial vehicle (UAV) and a missile, able to independently find and target adversary radar systems without requiring precise target position information before launch.[17] Because control is relinquished at activation, the Harpy exemplifies the out-of-the-loop paradigm: human oversight is eliminated throughout the engagement phase, raising serious questions about responsibility and compliance under international humanitarian law.
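Seen side by side, the three modes differ mainly in where, if anywhere, a human authorization gate sits in the engagement sequence. The sketch below is a schematic illustration of that difference only: the mode names mirror the categories above, while the function names and the console prompts are hypothetical stand-ins for real operator interfaces.

```python
# Illustrative only: where the human sits in the decision chain under each
# supervision mode. The operator prompts are hypothetical stand-ins.
from enum import Enum

class SupervisionMode(Enum):
    IN_THE_LOOP = "human must authorize every engagement"
    ON_THE_LOOP = "human supervises and may veto"
    OUT_OF_THE_LOOP = "no human intervention after activation"

def operator_authorizes(target: str) -> bool:
    """In-the-loop gate: nothing proceeds without explicit human approval."""
    return input(f"Authorize engagement of {target}? [y/N] ").strip().lower() == "y"

def operator_vetoes(target: str) -> bool:
    """On-the-loop gate: the system proceeds unless a supervisor intervenes in time."""
    return input(f"Veto engagement of {target}? [y/N] ").strip().lower() == "y"

def engagement_proceeds(target: str, mode: SupervisionMode, system_selected: bool) -> bool:
    """Returns True if this toy system would proceed, given the supervision mode."""
    if not system_selected:                  # the system's own targeting logic said no
        return False
    if mode is SupervisionMode.IN_THE_LOOP:
        return operator_authorizes(target)
    if mode is SupervisionMode.ON_THE_LOOP:
        return not operator_vetoes(target)
    return True                              # OUT_OF_THE_LOOP: control ceded at activation
```

The out-of-the-loop branch is the one the Harpy example illustrates: once the system has selected a target, no further human input is consulted, which is precisely the point at which accountability becomes hardest to trace.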

As systems move further away from human control, the decision-making processes buried in their algorithms become more opaque. This lack of transparency not only makes it harder to understand or predict how a system will behave; it also makes it more difficult to hold individuals accountable when errors or breaches occur. The more autonomy a system has, the harder it becomes to attribute its actions to human intent.

Weaponized Autonomy: Advantage or Ethical Threat?

Autonomous weapons systems are increasingly viewed not just as technological marvels but as strategic assets reshaping the battlefield. Advocates argue they offer significant military advantages: acting as force multipliers, reducing the number of soldiers needed while expanding operational reach into environments too hazardous or remote for human deployment.[18] According to the U.S. Department of Defense’s Unmanned Systems Roadmap: 2007-2032,[19] their use in “dull, dirty, or dangerous” missions, such as explosive disposal or radiological cleanup, minimizes human risk. Their ability to function without fatigue or emotion allows faster decision-making under pressure, and they can operate and strike when communications are severed, which can prove decisive in extreme situations. Economically, maintaining a single American soldier in Afghanistan costs roughly $850,000 a year, far more than deploying a robot such as the TALON system, a small rover that can be armed.[20]

Aerial systems with autonomous targeting capabilities also offer advantages. They may be able to outperform human pilots in terms of endurance and maneuverability, making a single UAV a threat at the fleet level[21] and a strategic asset in circumstances where human limitations could jeopardize the integrity of the mission.

Some ethicists argue that autonomous weapons may behave more humanely than soldiers under duress because they are not subject to emotional biases or instincts for self-preservation, which could lead to fewer violations of international humanitarian law.[22] These systems can also process large amounts of sensory data, such as sounds, sights, and movement, and because they do not experience emotions like fear and hysteria that can cloud judgment, their decision-making is more consistent and data-driven.[23]

In joint human-robot teams, machines may also be more reliable in reporting unethical conduct such as war crimes: they have no personal loyalties, whereas human soldiers may protect one another and stay silent.[24] Taken together, these possibilities suggest that autonomous systems could transform not only how wars are fought but also the moral considerations of warfare.

On the other hand, as autonomous weapons near deployment, many experts warn that they threaten core principles of international humanitarian law, especially distinction, proportionality, and accountability. They argue that machines that cannot reliably differentiate between civilians and combatants should not be trusted with life-and-death decisions. Even human soldiers can find such judgments difficult; leaving them to unreliable algorithms increases the risk of irreversible harm. A 2015 open letter signed by over 3,000 experts, including Stephen Hawking and Elon Musk, warned that LAWS could spark a third revolution in warfare, comparable to the impact of gunpowder or nuclear arms.[25] It called for a ban on systems operating beyond meaningful human control, citing risks to global security and public trust in AI. The UN echoed these concerns, urging a moratorium on testing, production, and deployment until proper international regulation is established.[26] Doubts about the underlying science persist: a global call to ban lethal autonomous robots highlighted the lack of evidence that machines can be aware of their surroundings or make moral decisions, limitations that could lead to substantial collateral damage.[27] Noel Sharkey, a highly regarded computer scientist in this field, warns that such systems risk violating the principle of distinction, as even trained soldiers often misidentify civilians under stress.[28] Accountability is a major concern in this debate. Under international humanitarian law, jus in bello fundamentally requires that a person can be held responsible for civilian deaths. Because autonomous weapons make it difficult to identify who is responsible for casualties, critics argue they cannot satisfy jus in bello and therefore cannot lawfully be used in war.[29]

Autonomous weapons could also make the world less safe. If one country gains the ability to strike first or without risk to its own forces, it could trigger an arms race or lower the threshold for war. What remains clear is that governance will play a decisive role in determining whether autonomy in warfare becomes an asset or a liability.

Ban vs. Regulation

There are two main positions on how to proceed with the rise of lethal autonomous weapons systems: a preemptive ban or regulation. The Stop Killer Robots campaign, led by a global coalition of NGOs, calls for an international treaty banning the development and deployment of fully autonomous weapons that operate beyond meaningful human control.[30] Advocates argue that a categorical ban is the only way to prevent the ethical, legal, and humanitarian risks posed by delegating life-and-death decisions to machines. They emphasize that regulation alone is insufficient, as it cannot guarantee compliance or accountability once these systems proliferate across geopolitical fault lines.

In contrast, the U.S. Department of Defense rejects the idea of a ban, opting instead for a governance framework rooted in its 2020 Ethical Principles for Artificial Intelligence.[31] These principles stress the need for responsible, traceable, and governable AI development while still allowing innovation and the use of autonomous technologies within a human command structure. Rather than restricting research, the DoD’s approach prioritizes ensuring that AI-enabled systems are reliable, auditable, and aligned with military values.[32] This difference reflects a broader global divide: some believe pre-emptive prohibition is the only safeguard against future atrocities, while others believe adaptive regulation is the best way to preserve both military advantage and moral integrity.

On 2 December 2024, in response to increasing urgency, the UN General Assembly passed a resolution on lethal autonomous weapons systems with 166 votes in favor, 3 opposed (Russia, North Korea, and Belarus), and 15 abstentions. The resolution reflects worldwide concern about the use of LAWS in recent conflicts such as Gaza and Ukraine by endorsing a two-tiered governance approach that calls for regulatory oversight of some LAWS and a ban on others under international law. Its adoption shows a growing consensus that the dangers of autonomous weapons are no longer merely theoretical but real and urgent.

In his New Agenda for Peace, the Secretary-General called for a legally binding treaty prohibiting LAWS that operate without human control or oversight, with a 2026 target for concluding negotiations.[33] At the September 2024 Summit of the Future, member states reaffirmed the importance of collaboration, pledging to continue discussions on a possible international instrument to address emerging technologies that confer lethal autonomy.[34] It is clear that governance is now necessary and unavoidable.

International Governance Efforts

The international movement to regulate lethal autonomous weapons is progressing, but efforts remain splintered and largely non-binding. The principal multilateral body discussing the regulation of LAWS is the UN Group of Governmental Experts (GGE) on LAWS, created under the Convention on Certain Conventional Weapons (CCW). Since discussions began in 2014, the group has been tasked with exploring definitions, examining ethical considerations, and assessing the ongoing need for “meaningful human control” over autonomous weapons. Yet despite years of deliberation, it has failed to reach consensus on a binding legal framework owing to geopolitical disagreements and differing national interests.[35]

More recently, the REAIM Summit (Responsible AI in the Military Domain) has emerged as a multilateral dialogue platform led by the Netherlands and South Korea. It brings together states, civil society, and industry to build shared norms for responsible military AI use, including transparency, accountability, and human oversight. REAIM has directly addressed concerns surrounding lethal autonomous weapons, including through dedicated sessions at its 2023 summit that emphasized the need for meaningful human control over the use of force. The summit’s official Call to Action explicitly urged states to share best practices and develop governance frameworks to prevent autonomous systems from operating without adequate oversight. Working groups since then have concentrated on creating pre-deployment testing guidelines, ethical review procedures, and operational restrictions that are especially suited to LAWS. Although REAIM promotes voluntary commitments, it lacks legal authority and does not establish enforceable obligations.[36]

In parallel, soft law mechanisms, such as ethical declarations, codes of conduct, and national AI principles, have gained popularity as flexible alternatives to formal treaties. The OECD’s AI Recommendations, NATO’s Principles of Responsible Use, and the U.S. Department of Defense’s AI Ethical Principles are a few examples. The European Union’s 2019 Ethics Guidelines for Trustworthy AI likewise place a strong emphasis on human agency, transparency, and safety in the development of AI, including applications related to defense.[37] The G7 Hiroshima AI Process, launched in 2023, reflects growing interest in multilateral alignment on the responsible use of emerging technologies, encouraging countries to adopt shared voluntary standards.[38] In the military domain, the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, presented at the 2023 REAIM Summit, sets out non-binding pledges to preserve human control and prevent unintended escalation by autonomous systems.[39] Although these frameworks are not legally binding and differ across jurisdictions, they reflect a growing global view that weapons autonomy should be governed by principled frameworks rather than prohibited outright, indicating a readiness to incorporate such systems into combat under precise operational and ethical standards.

Conclusion and Recommendations

The governance of lethal autonomous weapons stands at the center of modern military and legal discourse. LAWS challenge foundational principles of international humanitarian law, particularly accountability, proportionality, and distinction, while also introducing capabilities such as real-time targeting, emotion-free decision-making, and human-machine collaboration that may enhance operational precision. Meanwhile, governance initiatives such as the two-tiered resolution passed by the UN General Assembly and REAIM’s call for pre-deployment standards and ethical supervision indicate a growing global will to address the issue. At the same time, divergent views on how to balance innovation and restraint, and human control and autonomy, have produced a governance landscape that is fragmented but steadily improving, and that reflects deeper structural disagreements. This shift reveals a growing recognition that autonomous weapons are not merely a terrifying idea from the future but a real possibility that must be shaped now.

A crucial first step toward effective governance is establishing clear and universally accepted definitions. According to the UN, there is currently no legally binding definition of LAWS, only loose characterizations from expert groups. Without a shared understanding, states apply inconsistent rules, making meaningful regulation nearly impossible. Equally urgent is defining “meaningful human control,” the most contested term in the LAWS debate. In its absence, countries interpret it to suit their strategic needs, allowing wide variation in how much autonomy systems can have, often without accountability. This ambiguity risks undermining trust and creating legal loopholes. While definitions alone will not solve every governance challenge, they provide a critical foundation. With clear, shared language, states can begin to apply universal safeguards and build coherent international norms. Ultimately, shared definitions offer the baseline needed to ensure that governance efforts are aligned and effective, reducing risks and setting limits on how autonomy can be integrated into warfare.

While numerous voluntary frameworks already exist, such as the U.S. Department of Defense’s Ethical Principles for AI and NATO’s Principles of Responsible Use, they are fragmented, which limits their collective impact. Expanding soft law into a unified global framework would help translate these values into actionable governance tools. States can start operationalizing common norms by converging on minimal international standards, such as forbidding autonomous engagement in civilian areas or requiring explainability in targeting decisions. This approach offers a flexible yet coordinated way to govern military AI systems, particularly in the interim period while consensus on hard law remains elusive. It would also give governments a clear strategy, strengthen shared expectations, close loopholes before they can be exploited, and build international trust in the responsible use of autonomy in warfare.

Also, building cross-domain coordination mechanisms is essential, as the governance of LAWS sits at the intersection of law, ethics, defense, and technology, domains that often operate separately. Establishing intergovernmental task forces that bring together ethicists, technologists, military experts, and legal scholars to advise bodies like the UN or the Convention on Certain Conventional Weapons (CCW) would ensure that decisions reflect diverse expertise and anticipate complex trade-offs. This kind of advisory structure would help craft more balanced, forward-looking governance strategies that are both technically and ethically grounded.

Furthermore, countries should begin institutionalizing AI governance within their defense sectors by establishing dedicated departments or agencies tasked with overseeing the ethical, legal, and strategic use of AI-enabled military systems. These national bodies would not only enhance domestic oversight but also act as formal liaison points with future international governance structures, helping to translate global standards into practical national policies. By aligning with guidelines issued by a permanent international body, these departments could close the current gap between technological innovation and governmental accountability in warfare and help ensure that emerging regulations, norms, and laws are applied consistently.

Finally, instead of relying on fragmented forums or temporary working groups to address what is clearly a long-term challenge, there is a pressing need for a permanent international governance body dedicated to LAWS. It should be an efficient, durable organization that keeps pace with developments in autonomous warfare by providing legal clarity, ethical oversight, and coordinated standard-setting. Such a body would offer a stable platform for states, technologists, legal scholars, and civil society to engage in ongoing dialogue, transparency initiatives, and the development of common norms. It could also introduce concrete mechanisms, such as a transparency registry in which states voluntarily disclose their LAWS capabilities and operational doctrines, modeled on the Arms Trade Treaty’s existing reporting system, to promote trust and accountability. By grounding governance in ongoing multilateral cooperation rather than occasional negotiations, such a body would ensure that the future of LAWS is shaped not in isolation but in line with the principles that define responsible and legitimate warfare. The challenge ahead is not just to contain the risks, but to align the development of LAWS with the values and norms that define legitimate warfare.
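As a rough illustration of what such a registry might record, the schema below sketches a single disclosure entry. It is entirely hypothetical: the field names and categories are invented for illustration and are not drawn from the Arms Trade Treaty’s reporting templates or any existing proposal.

```python
# Hypothetical schema for a voluntary LAWS transparency-registry entry.
# Field names and categories are invented for illustration only.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RegistryEntry:
    reporting_state: str
    system_name: str
    supervision_mode: str                                 # e.g., "in-the-loop", "on-the-loop", "out-of-the-loop"
    intended_roles: list = field(default_factory=list)    # e.g., ["air defense"]
    human_control_doctrine: str = ""                      # summary of national policy on authorization
    last_reviewed: date = date.today()

# Example disclosure (all values invented):
example = RegistryEntry(
    reporting_state="Example State",
    system_name="Example Loitering Munition",
    supervision_mode="on-the-loop",
    intended_roles=["suppression of enemy radar"],
    human_control_doctrine="Engagement requires prior operator authorization except in defensive mode.",
)
print(example)
```

Even a minimal shared structure of this kind would give states a common vocabulary for disclosure and make divergences in doctrine easier to compare, which is the registry's core purpose.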


[1] Benjamin Perrin and Masoud Zamani, “The Future of Warfare: National Positions on the Governance of Lethal Autonomous Weapons Systems,” Lieber Institute—West Point, February 11, 2025, accessed June 2025, https://lieber.westpoint.edu/future-warfare-national-positions-governance-lethal-autonomous-weapons-systems/.

[2] “Background on LAWS in the CCW,” United Nations Office for Disarmament Affairs, accessed June 2025, https://disarmament.unoda.org/the-convention-on-certain-conventional-weapons/background-on-laws-in-the-ccw/.

[3] Benjamin Perrin and Masoud Zamani, “The Future of Warfare: National Positions on the Governance of Lethal Autonomous Weapons Systems.”

[4] Amitai Etzioni and Oren Etzioni, “Pros and Cons of Autonomous Weapons Systems,” Military Review, May–June 2017, accessed June 2025, https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2017/Pros-and-Cons-of-Autonomous-Weapons-Systems/.

[5] Neil Davison, “Autonomous Weapon Systems under International Humanitarian Law,” International Committee of the Red Cross, 2017, accessed June 2025, https://www.icrc.org/sites/default/files/document/file_list/autonomous_weapon_systems_under_international_humanitarian_law.pdf.

[6] Benjamin Perrin, “Lethal Autonomous Weapons Systems & International Law: Growing Momentum Towards a New International Treaty,” ASIL Insights 29, no. 1 (January 24, 2025), American Society of International Law, accessed June 2025, https://www.asil.org/insights/volume/29/issue/1.

[7] Mariarosaria Taddeo and Alexander Blanchard, “A Comparative Analysis of the Definitions of Autonomous Weapons Systems,” Science and Engineering Ethics 28, no. 37 (2022), https://doi.org/10.1007/s11948-022-00392-3.

[8] Automated Research, “Israel Aerospace Industries Harop Loitering Munition,” Weapon Systems Database, accessed June 7, 2025, https://automatedresearch.org/weapon/israel-aerospace-industries-harop-loitering-munition/.

[9] “AI in the Battlefield: The Rise of Autonomous Weapons,” Dawn, March 2, 2024, https://www.dawn.com/news/1909385.

[10] United Nations Regional Information Centre (UNRIC), “UN Addresses AI and the Dangers of Lethal Autonomous Weapons Systems,” UNRIC, March 20, 2024, https://unric.org/en/un-addresses-ai-and-the-dangers-of-lethal-autonomous-weapons-systems/.

[11] Army Recognition, “Russia to Begin Serial Production of Marker Land Robot with Kornet Anti-Tank Missile, Drone Swarm Capabilities,” Army Recognition, March 25, 2025, https://armyrecognition.com/news/army-news/2025/russia-to-begin-serial-production-of-marker-land-robot-with-kornet-anti-tank-missile-drone-swarm-capabilities.

[12] Dylan Malyasov, “The State of Autonomy, AI & Robotics for Russia’s Ground Vehicles,” European Security & Defence, June 28, 2023, https://euro-sd.com/2023/06/articles/31798/the-state-of-autonomy-ai-robotics-for-russias-ground-vehicles/.

[13] Mariarosaria Taddeo and Alexander Blanchard, “A Comparative Analysis of the Definitions of Autonomous Weapons Systems.”

[14] Jennifer Jun, “The South Korean Sentry—A Killer Robot to Prevent War?,” Lawfare, July 1, 2021, https://www.lawfaremedia.org/article/foreign-policy-essay-south-korean-sentry%E2%80%94-killer-robot-prevent-war.

[15] Mariarosaria Taddeo and Alexander Blanchard, “A Comparative Analysis of the Definitions of Autonomous Weapons Systems.”

[16] Israel Aerospace Industries, “HARPY: Loitering Attack Weapon System,” IAI, accessed June 7, 2025, https://www.iai.co.il/p/harpy.

[17] Naz Modirzadeh, Dustin Lewis, and Emmeline B. Reeves, “Lethal Autonomous Weapons Systems under International Law,” American Society of International Law Insights 29, no. 1 (2024), https://www.asil.org/insights/volume/29/issue/1.

[18] Amitai Etzioni and Oren Etzioni, “Pros and Cons of Autonomous Weapons Systems,” Military Review, May–June 2017, https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2017/Pros-and-Cons-of-Autonomous-Weapons-Systems/.

[19] James R. Clapper Jr. et al., Unmanned Systems Roadmap: 2007-2032 (Washington, DC: Department of Defense [DOD], 2007), 19, accessed 28 March 2017, http://www.globalsecurity.org/intell/library/reports/2007/dod-unmanned-systems-roadmap_2007-2032.pdf.

[20] David Francis, “How a New Army of Robots Can Cut the Defense Budget,” Fiscal Times, April 2, 2013, accessed March 8, 2017, http://www.thefiscaltimes.com/Articles/2013/04/02/How-a-New-Army-of-Robots-Can-Cut-the-Defense-Budget.

[21] Michael Byrnes, “Nightfall: Machine Autonomy in Air-to-Air Combat,” Air & Space Power Journal 23, no. 3 (May–June 2014): 54, accessed March 8, 2017, http://www.au.af.mil/au/afri/aspj/digital/pdf/articles/2014-May-Jun/F-Byrnes.pdf?source=GovD.

[22] Christopher P. Toscano, “‘Friend of Humans’: An Argument for Developing Autonomous Weapons Systems,” accessed June 29, 2025, https://jnslp.com/wp-content/uploads/2015/05/Friend-of-Humans.pdf.

[23] Ronald C. Arkin, “The Case for Ethical Autonomy in Unmanned Systems,” Journal of Military Ethics 9, no. 4 (2010): 332–41.

[24] Ibid.

[25] “Autonomous Weapons: An Open Letter from AI [Artificial Intelligence] & Robotics Researchers,” Future of Life Institute website, July 28, 2015, accessed March 8, 2017, http://futureoflife.org/open-letter-autonomous-weapons/.

[26] Christof Heyns, Report of the Special Rapporteur on Extrajudicial, Summary, or Arbitrary Executions, United Nations Human Rights Council, 23rd Session, Agenda Item 3, United Nations Document A/HRC/23/47 (September 2013).

[27] International Committee for Robot Arms Control (ICRAC), “Scientists’ Call to Ban Autonomous Lethal Robots,” ICRAC website, October 2013, accessed March 24, 2017, icrac.net.

[28] Noel Sharkey, “Saying ‘No!’ to Lethal Autonomous Targeting,” Journal of Military Ethics 9, no. 4 (2010): 369–83, https://doi.org/10.1080/15027570.2010.537903.

[29] “The Intersections of Jus in Bello and Autonomous Weapons Systems,” National High School Journal of Science, accessed April 2025, https://nhsjs.com/wp-content/uploads/2025/04/The-Intersections-of-Jus-in-Bello-and-Autonomous-Weapons-Systems.pdf.

[30] Stop Killer Robots, “Facts About Autonomous Weapons,” accessed June 7, 2025, https://www.stopkillerrobots.org/stop-killer-robots/facts-about-autonomous-weapons/.

[31] U.S. Department of Defense, “DoD Adopts Ethical Principles for Artificial Intelligence,” February 24, 2020, https://www.defense.gov/News/Releases/Release/article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/

[32] C. Todd Lopez, “DoD Adopts 5 Principles of Artificial Intelligence Ethics,” DOD News, February 25, 2020, accessed June 2025, https://www.defense.gov/News/News-Stories/Article/Article/2094085/dod-adopts-5-principles-of-artificial-intelligence-ethics/.

[33] United Nations Regional Information Centre, “UN Addresses AI and the Dangers of Lethal Autonomous Weapons Systems,” UNRIC, accessed June 2025. https://unric.org/en/un-addresses-ai-and-the-dangers-of-lethal-autonomous-weapons-systems/.

[34] Ibid.

[35] United Nations Office for Disarmament Affairs, CCW Group of Governmental Experts on LAWS Report 2023, UNODA, July 2023, https://unoda-web.s3-accelerate.amazonaws.com/wp-content/uploads/2023/07/CCW-GGE-LAWS-Report-2023.pdf.

[36] REAIM 2023, Outcome Document of the Responsible AI in the Military Domain Summit, accessed June 2025.

[37] European Commission, “Ethics Guidelines for Trustworthy AI,” April 2019, https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.

[38] G7 Leaders, “Statement on the Hiroshima AI Process,” May 2023, https://digital-strategy.ec.europa.eu/en/library/g7-leaders-statement-hiroshima-ai-process.

[39] U.S. Department of State, “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy,” State.gov, February 2023, https://www.state.gov/bureau-of-arms-control-deterrence-and-stability/political-declaration-on-responsible-military-use-of-artificial-intelligence-and-autonomy.
