
Governing Lethal Autonomous Weapons: The Future of Warfare and Military AI

03 Aug 2025

Lethal autonomous weapons systems (LAWS), such as drones and autonomous missiles, are no longer a distant prospect; they are rapidly approaching operational use on the battlefield.[1] At present, defensive systems are the most common type of autonomous weapon, including systems that, once engaged, function independently through trigger mechanisms, such as anti-personnel and anti-vehicle mines.[2] At its core, a LAWS is a weapon system that, once activated, can “select and engage targets without human intervention.”[3] The rise of lethal autonomous weapons systems is the most recent phase in the continuous advancement of military technology, which has progressed gradually from manual weapons to increasingly automated and now autonomous systems. By transferring crucial decision-making tasks, such as target selection and engagement, from people to computers, LAWS represent a paradigm shift from earlier technologies that aimed to improve range, speed, and precision. This shift reflects not only technological development but also a fundamental change in who, or what, has the power to use force.

Yet key terms like “human control,” “intervention,” and “lethality” remain vague, creating substantial uncertainty around the very concept of LAWS. These ambiguities continue to hinder efforts to reach the consensus needed for meaningful progress.

With no universal framework in existence, the governance of lethal autonomous weapons systems is still a topic of debate. This is largely due to differing national views: some states, like the US and Russia, believe that existing international law is sufficient, while others, like Serbia and Kiribati, call for a total prohibition on moral and humanitarian grounds. Meanwhile, an increasing number of dualist states, such as the Netherlands and Germany, suggest a compromise: outlawing some applications while strictly regulating others. These differences, which stem from divergent views on “human control” and legal responsibility, highlight how urgently unified international standards are needed before LAWS proliferate.[4]

The use of autonomous weapons has prompted a debate among military planners, roboticists, and ethicists about the development and deployment of weapons that can perform increasingly advanced functions, including targeting and application of force, with little or no human oversight.[5] The fundamental issue with LAWS lies in their challenge to existing norms of international humanitarian law and ethics, particularly those highlighted in the Geneva Conventions.

By giving automated machines the authority to make judgments that can mean the difference between life and death, these advances risk weakening fundamental principles like responsibility, proportionality, and distinction, which form the foundation of lawful conduct in armed conflict.[6] This raises urgent questions about whether autonomous systems can function within the moral and legal parameters that human combatants have historically accepted.

The governance of emerging military technology is dangerously fragmented, calling for an urgent, comprehensive framework that addresses accountability, legality, and global norms in an environment where LAWS are redefining warfare. A unified global framework is essential to prevent a destabilizing arms race that would put civilian lives at risk.

The following sections explore what makes LAWS different by examining the distinction between autonomy and automation, how varying levels of human control affect legal accountability, and the broader implications for the future of warfare. They then turn to the global debate over banning versus regulating these systems, followed by an overview of current international governance efforts. As international concern mounts, it is becoming increasingly clear that the question is no longer if LAWS should be governed, but how that governance should take form.

What makes LAWS different? 

Automation vs. Autonomy

A key feature that distinguishes LAWS from previous weapon systems is their degree of autonomy: the ability to carry out operations like target selection and engagement without immediate human input. The terms autonomy and automation are often conflated, which can obscure important legal and operational distinctions when evaluating the risks and responsibilities associated with each system. Automation[7] is the predictable, fixed execution of pre-programmed instructions by a system. It follows pre-established rules created by humans to respond to particular inputs, and it does not adapt its behavior. Landmines, which were widely used in 20th-century conflicts, are fully automated: once deployed, a mine passively waits and detonates when triggered by pressure or movement. It cannot differentiate between a civilian and a combatant, posing a threat even after a war is over. This led to the Mine Ban Treaty of 1997, under which over 160 countries agreed to prohibit the use of anti-personnel mines because they cause civilian casualties, draw international criticism, interfere with post-conflict rehabilitation, and place a long-term demining burden on nations.

In contrast, autonomy[8] means that a system can perceive, understand, decide, and act toward a given context or purpose with little to no direct human input, frequently through the use of AI or machine learning. This is a step beyond automation: an autonomous system could, in principle, distinguish between civilians and combatants and thereby reduce the risk of endangering non-combatants through indiscriminate targeting. The distinction between automation and autonomy is not just a technical detail; it is central to the legal and moral debate surrounding LAWS. It shapes assessments of proportionality and accountability, and of whether a weapon system adheres to international humanitarian law. The level of human control becomes the crucial differentiator at this point: there are significant implications depending on whether a human actively chooses the target, oversees the process, or is removed from it entirely. The IAI Harop (Israel Aerospace Industries) is a fully autonomous loitering munition that can conduct surveillance and strike. Without requiring precise prior intelligence, once deployed it can autonomously search for radar-emitting targets within a designated area, identify and choose a target, and then carry out an attack.[9] It defies traditional definitions of missiles, as it uses onboard sensors to engage targets based on real-time data. The hybrid nature of the IAI Harop raises greater legal and ethical concern by blending the autonomy of a drone with the impact of a missile.[10] These concerns intensify if the algorithm evolves over time, since the more autonomous a system becomes, the harder it is to hold anyone legally accountable for unlawful activity.[11]
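To make the distinction concrete, the sketch below contrasts the two logics in minimal, hypothetical Python. It is an illustration only: the class names, the pressure threshold, and the confidence values are assumptions for exposition and do not describe any fielded weapon system.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    pressure_kg: float   # weight sensed by a pressure plate
    label: str           # output of a perception model, e.g. "combatant" or "civilian"
    confidence: float    # the model's confidence in that label

class AutomatedTrigger:
    """Automation: a fixed, pre-programmed rule. The behavior never adapts,
    and the system cannot tell who or what applied the pressure."""
    def __init__(self, threshold_kg: float = 10.0):
        self.threshold_kg = threshold_kg

    def should_detonate(self, contact: Contact) -> bool:
        return contact.pressure_kg >= self.threshold_kg

class AutonomousSelector:
    """Autonomy (highly simplified): the decision depends on perceiving and
    classifying the target, not on a single fixed physical trigger."""
    def __init__(self, min_confidence: float = 0.95):
        self.min_confidence = min_confidence

    def should_engage(self, contact: Contact) -> bool:
        return contact.label == "combatant" and contact.confidence >= self.min_confidence

# The same contact produces opposite decisions under the two logics.
contact = Contact(pressure_kg=65.0, label="civilian", confidence=0.88)
print(AutomatedTrigger().should_detonate(contact))   # True: the rule fires regardless of identity
print(AutonomousSelector().should_engage(contact))   # False: classification gates the decision
```

The sketch also hints at why autonomy does not automatically resolve the legal problem: the autonomous path simply relocates the life-and-death judgment into a statistical classifier whose errors and thresholds someone must still answer for.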

Degrees of Human Supervision

Human supervision and autonomy are not inherently incompatible; a system may operate autonomously while still being governed by human protocols in critical situations. The spectrum of human involvement spans three categories based on the degree of intervention. In-the-loop means a human must approve or initiate any targeting or engagement decision, as with Russia’s Marker robot.[12] The Marker is a multi-domain ground combat robot with AI-powered autonomous navigation and battlefield reconnaissance capabilities. It uses machine vision and neural networks, technologies that allow it to process visual data and learn patterns, to follow targets and adjust to changing terrain.[13] Although lethal action still requires human authorization, its architecture points to a gradual transition toward complete autonomy. This classifies it as transitional and draws attention to a larger regulatory issue: many systems operate in a “grey zone” of partial autonomy that enables states to work within existing definitions and legal restrictions.[14]

On-the-loop refers to human supervision during the process, with intervention when necessary, as with South Korea’s armed stationary sentry robot (SGR-A1). Using thermal and optical sensors, the SGR-A1 is able to identify, track, and vocally confront intruders on its own in the demilitarized zone between North and South Korea.[15] Despite its ability to fire without human input, current deployments reportedly require human authorization prior to engagement. The technology demonstrates that the ability to sense and track independently does not always translate into lethal action. This contrast challenges definitions that treat autonomy as binary rather than accounting for the many steps of the kill chain.[16]

Lastly, out-of-the-loop entails no human intervention: the system has complete control and independence once activated. A clear example is the IAI Harpy (Israel Aerospace Industries), a fully autonomous loitering munition. It is “equipped to hunt and seek targets in a designated area, locate and identify their frequency, and autonomously pursue a strike from any direction, at shallow or steep dive profiles.”[17] It functions as a “fire-and-forget” weapon, combining the features of an unmanned aerial vehicle (UAV) and a missile, and is able to independently find and target adversary radar systems without requiring precise target position information before launch.[18] Because control is relinquished after activation, the Harpy exemplifies the out-of-the-loop paradigm: human oversight is eliminated throughout the engagement phase, raising serious questions about responsibility and compliance under international humanitarian law.
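The three categories can be read as a single question: where, if anywhere, does human authorization enter the kill chain? The following minimal sketch (hypothetical Python, not modeled on any real command-and-control interface; the enum names and callback functions are assumptions for illustration) summarizes how that point shifts across the spectrum.

```python
from enum import Enum, auto
from typing import Callable

class SupervisionMode(Enum):
    IN_THE_LOOP = auto()      # a human must approve each engagement
    ON_THE_LOOP = auto()      # the system acts; a human monitors and can veto
    OUT_OF_THE_LOOP = auto()  # no human intervention once activated

def engagement_decision(mode: SupervisionMode,
                        system_recommends_engage: bool,
                        human_approves: Callable[[], bool],
                        human_vetoes: Callable[[], bool]) -> bool:
    """Shows where human authority enters the decision for each supervision mode."""
    if not system_recommends_engage:
        return False
    if mode is SupervisionMode.IN_THE_LOOP:
        # Nothing happens without explicit human approval.
        return human_approves()
    if mode is SupervisionMode.ON_THE_LOOP:
        # The engagement proceeds unless a supervising human intervenes in time.
        return not human_vetoes()
    # OUT_OF_THE_LOOP: the system's own recommendation is final.
    return True

# The same machine recommendation yields different outcomes depending on the mode.
def no_action() -> bool:
    return False

print(engagement_decision(SupervisionMode.IN_THE_LOOP, True,
                          human_approves=no_action, human_vetoes=no_action))      # False
print(engagement_decision(SupervisionMode.OUT_OF_THE_LOOP, True,
                          human_approves=no_action, human_vetoes=no_action))      # True
```

Framed this way, the regulatory “grey zone” noted above corresponds to systems whose mode can be reconfigured, or whose veto window is too short for meaningful human judgment.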

As systems move further away from human control, the decision-making processes buried in their algorithms become more opaque. This lack of transparency not only makes it challenging to comprehend or predict how a system will behave, but also makes it more difficult to hold individuals accountable when errors or breaches occur. The more autonomy a system has, the harder it becomes to attribute its actions to human intent.

Weaponized Autonomy: Advantage or Ethical Threat?

Autonomous weapons systems are increasingly viewed not just as technological marvels but as strategic assets reshaping the battlefield. Advocates argue they offer significant military advantages: acting as force multipliers, reducing the number of soldiers needed while expanding operational reach into environments too hazardous or remote for human deployment.[19] According to the U.S. Department of Defense’s Unmanned Systems Roadmap: 2007-2032,[20] their use in “dull, dirty, or dangerous” missions, such as explosive disposal or radiological cleanup, minimizes human risk. Their ability to function without fatigue or emotion allows for faster decision-making under pressure, and such systems can also operate and strike when communications are severed, which can prove decisive in extreme situations. Economically, the long-term cost of deploying robots such as the TALON system, a small rover that can be equipped with weapons, is far lower than the upkeep of a single soldier in combat zones, roughly $850,000 per year for an American soldier in Afghanistan.[21]

Further benefits appear in aerial systems equipped with autonomous targeting capabilities that could outperform human pilots in both stamina and maneuverability, potentially turning a single UAV into a fleet-level threat[22] and a strategic asset in situations where human limitations could compromise a mission’s integrity.

Ethically, some experts argue that autonomous weapons, free of emotional biases and self-preservation instincts, could act more humanely than soldiers under duress, potentially reducing violations of international humanitarian law. These systems can also process large amounts of sensory data, such as sounds, sights, and movement, and because they do not experience emotions like fear or hysteria that can cloud human judgment, their decision-making is more consistent and data-driven.[23]

In joint human-robot teams, machines may also be more reliable in reporting unethical conduct such as war crimes: they are indifferent to personal relationships, whereas human soldiers may protect each other and stay silent out of loyalty.[24] Together, these possibilities suggest that autonomous systems could transform not only how wars are fought but also the moral calculus of warfare.

On the other hand, as autonomous weapons near deployment, many experts warn they threaten core principles of international humanitarian law, especially distinction, proportionality, and accountability. They argue that machines that cannot reliably differentiate between civilians and combatants should not be trusted with life-and-death decisions. Even human soldiers can struggle in these situations; outsourcing judgment to opaque algorithms heightens the risk of irreversible harm. A 2015 open letter signed by over 3,000 experts, including Stephen Hawking and Elon Musk, warned that LAWS could spark a third revolution in warfare, comparable to the impact of gunpowder or nuclear arms.[25] It called for a ban on systems operating beyond meaningful human control, citing risks to global security and public trust in AI. The UN echoed these concerns, urging a moratorium on testing, production, and deployment until proper international regulation is established.[26] Concerns over scientific accuracy persist: a global call to ban lethal autonomous robots highlighted the lack of evidence that machines can achieve sufficient situational awareness or ethical judgment, limitations that could result in high levels of collateral damage.[27] Noel Sharkey, a highly regarded computer scientist in this field, warns that such systems risk violating the principle of distinction, as even trained soldiers often misidentify civilians under stress.[28] Accountability is a major concern in this debate. In international humanitarian law, jus in bello requires that a person can be held responsible for civilian deaths. Because autonomous weapons make it difficult to identify who is responsible for casualties, they arguably fail to meet the requirements of jus in bello and thus, technically, could not be used in war.[29]

Autonomous weapons may also destabilize global security; if one state develops pre-emptive or risk-free strike capabilities, it could trigger arms races or lower the threshold for war. What remains clear is that governance will play a decisive role in determining whether autonomy in warfare becomes an asset or a liability.

Ban vs. Regulation

The debate over how to address the rise of lethal autonomous weapons systems has crystallized around two competing approaches: a pre-emptive ban versus regulation. The Stop Killer Robots campaign, led by a global coalition of NGOs, calls for an international treaty banning the development and deployment of fully autonomous weapons that operate beyond meaningful human control.[30] Advocates argue that a categorical ban is the only way to prevent the ethical, legal, and humanitarian risks posed by delegating life-and-death decisions to machines. They emphasize that regulation alone is insufficient, as it cannot guarantee compliance or accountability once these systems proliferate across geopolitical fault lines. In contrast, the U.S. Department of Defense rejects the idea of a ban, opting instead for a governance framework rooted in its 2020 Ethical Principles for Artificial Intelligence.[31] These principles stress the importance of responsible, traceable, and governable AI development, while still allowing for innovation and deployment of autonomous technologies within a human command structure. Rather than restricting research, the DoD’s approach prioritizes ensuring that AI-enabled systems are reliable, auditable, and aligned with military values.[32] This divergence reflects a broader global rift: where some see pre-emptive prohibition as a safeguard against future atrocities, others see adaptive regulation as a pragmatic path to maintaining both military advantage and ethical integrity. On 2 December 2024, in response to increasing urgency, the UN General Assembly passed a resolution on lethal autonomous weapons systems with 166 votes in favor, 3 opposed (Russia, North Korea, and Belarus), and 15 abstentions. Reflecting worldwide concern about the use of LAWS in recent conflicts such as Gaza and Ukraine, the resolution endorses a two-tiered governance approach that calls for regulatory oversight of some LAWS and a prohibition of others under international law. The UN’s adoption of this resolution underscores the growing international consensus that the risks posed by autonomous weapons are no longer hypothetical, but urgent and real.

In his New Agenda for Peace, the Secretary-General called for a legally binding treaty to ban LAWS operating without human oversight, with a target completion date of 2026.[33] At the Summit of the Future in September, member states continued these efforts, reaffirming the importance of multilateral cooperation and committing to advance talks on a possible international instrument addressing emerging technologies in lethal autonomy, underscoring that governance is now necessary and unavoidable.[34]

International Governance Efforts

Efforts to govern lethal autonomous weapons systems (LAWS) at the international level have gained momentum but remain fragmented and largely non-binding. The UN Group of Governmental Experts (GGE) on LAWS, established under the Convention on Certain Conventional Weapons (CCW), has been the primary multilateral forum discussing potential regulations. Since 2014, the GGE has explored definitions, ethical concerns, and the need to maintain meaningful human control. However, despite years of deliberation, the group has failed to reach consensus on a binding legal framework due to geopolitical disagreements and differing national interests.[35]

More recently, the REAIM Summit (Responsible AI in the Military Domain) has emerged as a multilateral dialogue platform led by the Netherlands and South Korea. It brings together states, civil society, and industry to build shared norms for responsible military AI use, including transparency, accountability, and human oversight. REAIM has directly addressed concerns surrounding LAWS, including through dedicated sessions at its 2023 summit that emphasized the need for meaningful human control over the use of force. The summit’s official Call to Action explicitly urged states to share best practices and develop governance frameworks to prevent autonomous systems from operating without adequate oversight. Subsequent working groups have focused on establishing ethical review protocols, pre-deployment testing standards, and operational limitations specifically tailored to LAWS. While REAIM encourages voluntary commitments, it does not create enforceable obligations and lacks legal authority.[36]

In parallel, soft law mechanisms, such as ethical declarations, codes of conduct, and national AI principles, have gained popularity as flexible alternatives to formal treaties. Examples include the U.S. Department of Defense’s AI Ethical Principles, NATO’s Principles of Responsible Use, and the OECD’s AI Recommendations. Also, the European Union’s 2019 Ethics Guidelines for Trustworthy AI emphasize human agency, transparency, and safety in AI development, including defense-related applications.[37] The G7 Hiroshima AI Process, launched in 2023, reflects growing interest in multilateral alignment on the responsible use of emerging technologies, encouraging countries to adopt shared voluntary standards.[38]  In the military domain, the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy presented at the 2023 REAIM Summit outlines non-binding commitments to ensure meaningful human control and avoid unintended escalation from autonomous systems.[39]

While these frameworks remain non-binding and vary across jurisdictions, they reflect a growing international consensus that autonomy in weapons is not to be banned outright, but rather governed through principled frameworks, signaling a willingness to integrate such systems into warfare under clear ethical and operational guidelines.

Conclusion and Recommendations

The governance of lethal autonomous weapons systems stands at the center of modern military and legal discourse. LAWS challenge foundational principles of international humanitarian law, particularly accountability, proportionality, and distinction, while also introducing valuable capabilities, such as real-time targeting, emotion-free decision-making, and human-machine collaboration, that may enhance operational precision in warfare. Meanwhile, governance efforts, from the UN General Assembly’s two-tiered resolution to REAIM’s push for pre-deployment standards and ethical oversight, signal a growing international will to address the issue. Diverging views on the balance between innovation and restraint, and between human control and autonomy, have shaped a fragmented but steadily advancing governance landscape, one that reflects deeper structural disagreements. This evolution signals growing recognition that autonomy in weapons is not simply a futuristic concept to be feared or dismissed, but a present reality to be shaped.

A crucial first step toward effective governance is establishing clear and universally accepted definitions. According to the UN, there is currently no legally binding definition of LAWS, only loose characterizations from expert groups. Without a shared understanding, states apply inconsistent rules, making meaningful regulation nearly impossible. Equally urgent is defining “meaningful human control,” the most contested term in the LAWS debate. In its absence, countries interpret it to suit their strategic needs, allowing wide variation in how much autonomy systems can have, often without accountability. This ambiguity risks undermining trust and creating legal loopholes. While definitions alone will not solve every governance challenge, they provide a critical foundation. With clear, shared language, states can begin to apply universal safeguards and build coherent international norms. Ultimately, shared definitions would offer the baseline needed to ensure that governance efforts are aligned and effective, reducing risks and setting limits on how autonomy can be integrated into warfare.

While numerous voluntary frameworks already exist, such as the U.S. Department of Defense’s Ethical Principles for AI and NATO’s Principles of Responsible Use, they are fragmented, which limits their collective impact. Expanding soft law into a unified global framework would help translate these values into actionable governance tools. By converging around minimum international standards, such as prohibiting autonomous engagement in civilian zones or requiring explainability in targeting decisions, states can begin operationalizing shared norms even without a binding treaty. This approach offers a flexible yet coordinated way to govern military AI systems, particularly in the interim period while consensus on hard law remains elusive. It would also give governments a practical roadmap for implementation, reinforcing collective expectations, closing loopholes before they are exploited, and strengthening international trust in the responsible use of autonomy in warfare.

Also, building cross-domain coordination mechanisms is essential, as the governance of LAWS sits at the intersection of law, ethics, defense, and technology, domains that often operate separately. Establishing intergovernmental task forces that bring together ethicists, technologists, military experts, and legal scholars to advise bodies like the UN or the Convention on Certain Conventional Weapons (CCW) would ensure that decisions reflect diverse expertise and anticipate complex trade-offs. This kind of advisory structure would help craft more balanced, forward-looking governance strategies that are both technically and ethically grounded.

Furthermore, countries should begin institutionalizing AI governance within their defense sectors by establishing dedicated departments or agencies tasked with overseeing the ethical, legal, and strategic use of AI-enabled military systems. These national bodies would not only enhance domestic oversight but also act as formal liaison points with future international governance structures, helping to translate global standards into practical national policies. By aligning with guidance issued by a permanent international body, these departments would ensure that emerging regulations, norms, and laws are applied consistently, bridging the current gap between technological innovation and governmental accountability in warfare.

Finally, instead of relying on fragmented forums or temporary working groups to address what is clearly a long-term challenge, there is a pressing need for a permanent international governance body dedicated to LAWS. It should be a structured, enduring institution tasked with guiding the evolution of autonomous warfare through legal clarity, ethical oversight, and coordinated standard-setting. Such a body would provide a stable platform for states, technologists, legal scholars, and civil society to engage in ongoing dialogue, transparency initiatives, and the development of common norms. It could also introduce concrete mechanisms, such as a transparency registry in which states voluntarily disclose their LAWS capabilities and operational doctrines, modeled on the Arms Trade Treaty’s existing reporting system, to promote trust and accountability. By anchoring governance in sustained multilateral cooperation rather than occasional negotiations, this body would ensure that the future of LAWS is not shaped in isolation but in alignment with the principles of responsible and legitimate warfare. The challenge ahead is not merely to contain the risks, but to align the development of LAWS with the values and norms that define legitimate warfare.


[1] Benjamin Perrin and Masoud Zamani, “The Future of Warfare: National Positions on the Governance of Lethal Autonomous Weapons Systems,” Lieber Institute—West Point, February 11, 2025, accessed June 2025, https://lieber.westpoint.edu/future-warfare-national-positions-governance-lethal-autonomous-weapons-systems/.

[2] “Background on LAWS in the CCW,” United Nations Office for Disarmament Affairs, accessed June 2025, https://disarmament.unoda.org/the-convention-on-certain-conventional-weapons/background-on-laws-in-the-ccw/.

[3]Benjamin Perrin and Masoud Zamani, “The Future of Warfare: National Positions on the Governance of Lethal Autonomous Weapons Systems.”

[4] Ibid.

[5] Amitai Etzioni and Oren Etzioni, “Pros and Cons of Autonomous Weapons Systems,” Military Review, May–June 2017, accessed June 2025, https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2017/Pros-and-Cons-of-Autonomous-Weapons-Systems/.

[6] Neil Davison, Autonomous Weapon Systems under International Humanitarian Law, International Committee of the Red Cross, 2017, accessed June 2025, https://www.icrc.org/sites/default/files/document/file_list/autonomous_weapon_systems_under_international_humanitarian_law.pdf.

[7] Benjamin Perrin, “Lethal Autonomous Weapons Systems & International Law: Growing Momentum Towards a New International Treaty,” ASIL Insights 29, no. 1 (January 24, 2025), American Society of International Law, accessed June 2025, https://www.asil.org/insights/volume/29/issue/1.

[8] Mariarosaria Taddeo and Alexander Blanchard, “A Comparative Analysis of the Definitions of Autonomous Weapons Systems,” Science and Engineering Ethics 28, no. 37 (2022), https://doi.org/10.1007/s11948-022-00392-3.

[9] Automated Research, “Israel Aerospace Industries Harop Loitering Munition,” Weapon Systems Database, accessed June 7, 2025, https://automatedresearch.org/weapon/israel-aerospace-industries-harop-loitering-munition/.

[10] “AI in the Battlefield: The Rise of Autonomous Weapons,” Dawn, March 2, 2024, https://www.dawn.com/news/1909385.

[11] United Nations Regional Information Centre (UNRIC), “UN Addresses AI and the Dangers of Lethal Autonomous Weapons Systems,” March 20, 2024, https://unric.org/en/un-addresses-ai-and-the-dangers-of-lethal-autonomous-weapons-systems/.

[12] Army Recognition, “Russia to Begin Serial Production of Marker Land Robot with Kornet Anti-Tank Missile, Drone Swarm Capabilities,” Army Recognition, March 25, 2025, https://armyrecognition.com/news/army-news/2025/russia-to-begin-serial-production-of-marker-land-robot-with-kornet-anti-tank-missile-drone-swarm-capabilities.

[13] Dylan Malyasov, “The State of Autonomy, AI & Robotics for Russia’s Ground Vehicles,” European Security & Defence, June 28, 2023, https://euro-sd.com/2023/06/articles/31798/the-state-of-autonomy-ai-robotics-for-russias-ground-vehicles/.

[14] Mariarosaria Taddeo and Alexander Blanchard, “A Comparative Analysis of the Definitions of Autonomous Weapons Systems.”

[15] Jennifer Jun, “The South Korean Sentry—A Killer Robot to Prevent War?,” Lawfare, July 1, 2021, https://www.lawfaremedia.org/article/foreign-policy-essay-south-korean-sentry%E2%80%94-killer-robot-prevent-war.

[16] Mariarosaria Taddeo and Alexander Blanchard, “A Comparative Analysis of the Definitions of Autonomous Weapons Systems.”

[17] Israel Aerospace Industries, “HARPY: Loitering Attack Weapon System,” IAI, accessed June 7, 2025, https://www.iai.co.il/p/harpy.

[18] Naz Modirzadeh, Dustin Lewis, and Emmeline B. Reeves, “Lethal Autonomous Weapons Systems under International Law,” American Society of International Law Insights 29, no. 1 (2024), https://www.asil.org/insights/volume/29/issue/1.

[19] John W. Raymond, “The Pros and Cons of Autonomous Weapons Systems,” Military Review, May–June 2017, https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2017/Pros-and-Cons-of-Autonomous-Weapons-Systems/.

[20] James R. Clapper Jr. et al., Unmanned Systems Roadmap: 2007-2032 (Washington, DC: Department of Defense [DOD], 2007), 19, accessed March 28, 2017, http://www.globalsecurity.org/intell/library/reports/2007/dod-unmanned-systems-roadmap_2007-2032.pdf.

[21] David Francis, “How a New Army of Robots Can Cut the Defense Budget,” Fiscal Times, April 2, 2013, accessed March 8, 2017, http://www.thefiscaltimes.com/Articles/2013/04/02/How-a-New-Army-of-Robots-Can-Cut-the-Defense-Budget.

[22] Michael Byrnes, “Nightfall: Machine Autonomy in Air-to-Air Combat,” Air & Space Power Journal 23, no. 3 (May–June 2014): 54, accessed March 8, 2017, http://www.au.af.mil/au/afri/aspj/digital/pdf/articles/2014-May-Jun/F-Byrnes.pdf?source=GovD.

[23] Ronald C. Arkin, “The Case for Ethical Autonomy in Unmanned Systems,” Journal of Military Ethics 9, no. 4 (2010): 332–41.

[24] Ibid.

[25] “Autonomous Weapons: An Open Letter from AI [Artificial Intelligence] & Robotics Researchers,” Future of Life Institute website, July 28, 2015, accessed March 8, 2017, http://futureoflife.org/open-letter-autonomous-weapons/.

[26] Christof Heyns, Report of the Special Rapporteur on Extrajudicial, Summary, or Arbitrary Executions, United Nations Human Rights Council, 23rd Session, Agenda Item 3, United Nations Document A/HRC/23/47 (September 2013).

[27] International Committee for Robot Arms Control (ICRAC), “Scientists’ Call to Ban Autonomous Lethal Robots,” ICRAC website, October 2013, accessed March 24, 2017, icrac.net.

[28] Noel Sharkey, “Saying ‘No!’ to Lethal Autonomous Targeting,” Journal of Military Ethics 9, no. 4 (2010): 369–83, https://doi.org/10.1080/15027570.2010.537903.

[29] “The Intersections of Jus in Bello and Autonomous Weapons Systems,” National High School Journal of Science, accessed April 2025, https://nhsjs.com/wp-content/uploads/2025/04/The-Intersections-of-Jus-in-Bello-and-Autonomous-Weapons-Systems.pdf.

[30] “Facts About Autonomous Weapons,” Stop Killer Robots, accessed June 7, 2025, https://www.stopkillerrobots.org/stop-killer-robots/facts-about-autonomous-weapons/.

[31] U.S. Department of Defense, “DoD Adopts Ethical Principles for Artificial Intelligence,” February 24, 2020, https://www.defense.gov/News/Releases/Release/article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/.

[32] C. Todd Lopez, “DoD Adopts 5 Principles of Artificial Intelligence Ethics,” DOD News, February 25, 2020, accessed June 2025, https://www.defense.gov/News/News-Stories/Article/Article/2094085/dod-adopts-5-principles-of-artificial-intelligence-ethics/.

[33] United Nations Regional Information Centre. “UN Addresses AI and the Dangers of Lethal Autonomous Weapons Systems.”

[34] Ibid.

[35] United Nations Office for Disarmament Affairs, CCW Group of Governmental Experts on LAWS Report 2023, UNODA, July 2023, https://unoda-web.s3-accelerate.amazonaws.com/wp-content/uploads/2023/07/CCW-GGE-LAWS-Report-2023.pdf.

[36] REAIM 2023, Outcome Document of the Responsible AI in the Military Domain Summit, accessed June 2025.

[37] European Commission, “Ethics Guidelines for Trustworthy AI,” April 2019, https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

[38] European Commission, “G7 Leaders’ Statement on the Hiroshima AI Process,” October 30, 2023, https://digital-strategy.ec.europa.eu/en/library/g7-leaders-statement-hiroshima-ai-process.

[39] U.S. Department of State. “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy,” State.gov, February 2023, https://www.state.gov/bureau-of-arms-control-deterrence-and-stability/political-declaration-on-responsible-military-use-of-artificial-intelligence-and-autonomy.
