The rapid proliferation of autonomous weapon systems (AWS) in warfare offers a glimpse into both their transformative potential and their ethical dilemmas. For instance, during the ongoing conflict in Ukraine, both sides have deployed drones capable of advanced targeting and decision-making, highlighting the operational advantages and inherent risks of autonomous technology. Ukrainian drones have reportedly taken down high-value targets such as Russian fighter jets, demonstrating precision and cost efficiency but also raising questions about accountability when mistakes occur (Epstein, 2025). Meanwhile, Russia’s production of over a million drones in a single year shows how rapidly these technologies can scale, often outpacing regulatory frameworks (Evans, 2025). These developments underscore the urgency of addressing the ethical, legal, and policy dimensions of AWS, as they represent a paradigm shift in how conflicts are waged and controlled.
The progression toward fully autonomous weapon systems represents one of the most complex and consequential challenges in modern military innovation. As advancements in artificial intelligence (AI) and automation transform the nature of warfare, policymakers and military strategists must navigate a critical juncture: how to embrace the operational advantages of AWS while maintaining ethical and legal accountability. Central to this debate is determining when, if ever, humans should relinquish their role in decision-making loops, particularly in lethal scenarios. The stakes are high, as these systems promise faster decision-making in increasingly dynamic combat environments, but they also risk undermining fundamental principles of humanity and accountability.
The development of autonomous weapon systems hinges on navigating critical questions: When is it appropriate to remove humans from the decision-making loop? How can trust be cultivated in the transition from non-lethal autonomous systems to lethal autonomy? What policies ensure ethical compliance in this shift? I argue that integrating technological innovation with robust ethical safeguards is essential. AWS can only become a viable and ethically sound weapon system in the U.S. military if operational effectiveness is coupled with clear moral and legal accountability. The findings demonstrate that achieving this balance requires a focus on maintaining human oversight, ensuring transparency, and fostering trust through policy evolution and international collaboration.
Defining Autonomy and Lethal Autonomous Weapon Systems (LAWS)
Autonomous commanding focuses on enhancing situational awareness by automating repetitive and time-consuming tasks, such as data gathering, analysis, and the generation of actionable insights. Systems like the U.S. Department of Defense’s (DoD) Project Maven, an AI-enabled initiative, exemplify this by automating drone footage analysis to detect hostile activity, significantly reducing human workload while maintaining human oversight (Sayler, 2020). In contrast, the transition to systems capable of independently selecting and engaging targets—known as lethal autonomous weapon systems—introduces significant ethical and legal complexities.
AWS are commonly categorized into three levels of human interaction: human-in-the-loop, where humans approve target selection; human-on-the-loop, where humans monitor and can intervene; and human-out-of-the-loop, where systems operate independently. These categories illustrate a continuum of autonomy, with the latter raising critical concerns about the replacement of human judgment in life-and-death decisions (Sayler, 2025). Notably, the 2023 update to DoD Directive 3000.09, which redefined “human” oversight as “operator” oversight, signals a potential shift toward accepting AI entities in oversight roles. This policy evolution reflects growing comfort with “bots controlling bots” scenarios, heightening concerns about accountability and the risks of delegating lethal decisions to autonomous systems (Barbosa, 2023).
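To make this continuum concrete, the sketch below models the three oversight categories as a simple engagement-authorization gate. It is an illustrative abstraction rather than a description of any fielded system; the mode names, the veto window, and the operator-interaction stubs (request_human_approval, human_vetoed) are hypothetical placeholders.

from enum import Enum, auto

class OversightMode(Enum):
    HUMAN_IN_THE_LOOP = auto()      # a human must approve every engagement
    HUMAN_ON_THE_LOOP = auto()      # the system proceeds unless a human vetoes in time
    HUMAN_OUT_OF_THE_LOOP = auto()  # the system decides entirely on its own

def request_human_approval(target):
    # Hypothetical stand-in: in practice this would block on an operator console.
    return input(f"Engage {target}? [y/N] ").strip().lower() == "y"

def human_vetoed(target, window_s):
    # Hypothetical stand-in: poll an operator channel for a veto within window_s seconds.
    return False

def authorize_engagement(target, mode, veto_window_s=10):
    """Illustrative gate showing how much human judgment each oversight mode retains."""
    if mode is OversightMode.HUMAN_IN_THE_LOOP:
        return request_human_approval(target)            # explicit approval required
    if mode is OversightMode.HUMAN_ON_THE_LOOP:
        return not human_vetoed(target, veto_window_s)   # proceeds absent a timely veto
    return True  # out of the loop: no human judgment enters the decision

Each step down the continuum removes one point at which human judgment can enter the decision, which is precisely the scope of oversight that the directive’s wording governs.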
Despite these advancements, the U.S. military remains cautious, favoring human-on-the-loop systems in order to uphold ethical principles requiring deliberate human judgment in lethal scenarios. Project Maven, for example, leverages AI for autonomous target identification, but final engagement decisions rest with human operators, preserving accountability (Sayler, 2020). Similarly, semi-autonomous systems, like predictive maintenance algorithms for the F-35 supersonic stealth strike fighter, streamline operations without supplanting human oversight at critical decision points (Sayler, 2020). This approach reflects an operational philosophy that balances AI’s capabilities with ethical and legal standards, such as compliance with international humanitarian law.
However, the increasing sophistication of these systems raises questions about the evolving scope of human involvement. Scholars like Peter Asaro emphasize the need for robust frameworks, such as human-in-the-loop and human-on-the-loop models, to ensure adherence to international humanitarian law principles like distinction and proportionality, which are crucial for avoiding indiscriminate harm (Asaro, 2012). Simultaneously, broader interpretations of “appropriate human judgment” in U.S. policies suggest a shift toward supervisory human roles, where autonomous systems manage much of the operational decision-making. While such a shift could enhance efficiency, it risks distancing humans from critical decisions, raising concerns about accountability and the ethical implications of dehumanizing warfare (Perrin, 2025). Striking the right balance between operational efficiency and ethical imperatives will be essential as autonomy in military systems continues to evolve.
Ongoing Global Drone Development
Global advancements in drone technology and its militarization are accelerating, with significant implications for international security. The U.S. leads the charge, incorporating drones into its largest military overhaul since the Cold War. This initiative, estimated at $36 billion over five years, aims to equip each combat division with approximately 1,000 drones for surveillance, supply transport, and attack capabilities (Haner & Garcia, 2019).
The transformation is informed by lessons from the Ukraine-Russia conflict, where drones have redefined warfare by emphasizing adaptability, low cost, and real-time operational advantages (Gordon, 2025). The conflict has highlighted the vulnerabilities of traditional military hardware against inexpensive drones. Ukrainian forces have used upgraded drones to strike high-value Russian assets, while Russia has reportedly produced over a million drones in the past year, demonstrating the scalability and accessibility of this technology (Epstein, 2025). Russia has embraced a strategy prioritizing fully autonomous systems despite its more limited financial and technical resources. The Kremlin’s plan to “roboticize” 30% of its combat power by 2030 reflects a push toward reducing human involvement in decision-making (Haner & Garcia, 2019).
China is rapidly advancing its drone capabilities, driven by seamless state integration with private industry under the “Next Generation Artificial Intelligence Development Plan.” This ambitious initiative outlines a clear trajectory for China to achieve global AI dominance by 2030, prioritizing innovation in military and civilian applications alike (Haner & Garcia, 2019). Supported by a $250 billion defense budget, the plan has already led to remarkable achievements, such as demonstrating swarming technology with 1,000 synchronized drones—a milestone that underscores China’s ability to develop and deploy sophisticated autonomous systems (Haner & Garcia, 2019). China’s strategy relies heavily on its private sector, where companies like Ziyan and DJI play pivotal roles by aligning their technological advancements with national security goals. Strong public trust in AI, with 70% of Chinese citizens supporting its adoption, further accelerates this progress, creating a cooperative environment that enables rapid innovation and practical implementation (Wilner & Babb, 2020). This integration of state directives and private capabilities uniquely positions China to redefine the future of autonomous warfare.
South Korea has emerged as a global leader in developing defensive autonomous systems, exemplified by the Samsung SGR-A1 turret. This stationary robotic sentry, capable of autonomous threat detection and engagement, is emblematic of South Korea’s emphasis on leveraging AI for strategic defensive purposes (Haner & Garcia, 2019). Designed to guard critical infrastructure and demilitarized zones, the SGR-A1 highlights South Korea’s focus on minimizing human exposure to conflict zones while maintaining robust security capabilities. Beyond military applications, South Korea’s commitment to automation is reflected in its exceptionally high robot-to-worker ratio—631 robots per 10,000 workers—the highest globally (Haner & Garcia, 2019).
With global military spending on autonomous systems projected to reach $34 billion by the end of 2025, the rapid proliferation of these technologies necessitates robust international oversight to prevent misuse by authoritarian regimes and non-state actors (Haner & Garcia, 2019). The lack of progress in regulating autonomous weapons underscores the urgency for global norms to mitigate risks to international security and democratic peace.
Existing Autonomous Systems in Military Applications
The current landscape of autonomous systems in military applications showcases a wide range of platforms with varying levels of autonomy. Some systems, like the STM Kargu-2, are designed for specific combat scenarios and have already been deployed in conflicts (Jones, 2021). The Kargu-2, a Turkish quadcopter drone, exemplifies advanced autonomy with its ability to operate in swarms, function without GPS, and employ facial recognition and machine-learning algorithms for target identification and engagement (Jones, 2021; Nasu, 2021). This drone is an anti-personnel weapon capable of autonomously selecting and attacking human targets using onboard image classification tools (Nasu, 2021). The Kargu-2 was notably used in Libya in 2020, marking one of the first instances of autonomous targeting in combat (Jones, 2021). A UN Panel of Experts, however, found insufficient evidence to confirm that the system caused loss of human life while operating without human supervision (Nasu, 2021).
Other systems, such as the American-made Kratos XQ-58A Valkyrie and MQ-9 Reaper, highlight advancements in integrating artificial intelligence with traditional military platforms. The Valkyrie, still in testing, is designed for air-to-air and air-to-surface combat, using AI for independent decision-making in tasks like surveillance and electronic warfare (Weapons, 2023). The MQ-9 Reaper, enhanced with the Agile Condor pod, which functions as an onboard supercomputer, autonomously navigates, identifies ground anomalies, and adjusts its flight path based on situational awareness, though U.S. policy mandates human oversight for lethal decisions (Jones, 2021). Additionally, portable loitering munitions like the Switchblade 300 and 600, developed by AeroVironment, combine portability with autonomy, allowing operators to scan for and select targets while leveraging AI to enhance battlefield flexibility (Sexton & Carneal, 2024).
Similarly, loitering munitions such as the Harpy and Harop drones, developed by Israel Aerospace Industries, offer fully autonomous capabilities for targeting radar systems and operating as “suicide drones.” These systems saw operational use in conflicts such as the 2020 escalation of the Nagorno-Karabakh war, demonstrating the increasing prevalence of autonomous technologies in modern warfare (Sexton & Carneal, 2024).
Russia’s newly developed V2U loitering munition marks a significant advancement in autonomous drone capabilities (Hardie, 2025). First observed in 2024 and increasingly deployed by early 2025, the V2U is a “smart suicide” unmanned aerial vehicle capable of autonomously navigating GPS-denied environments, identifying targets using visual terrain-matching, and possibly coordinating in swarms (Hardie, 2025). Built with commercial components, the V2U reflects a hybrid of domestic and imported technologies, mainly from China (Defense Express, 2025). Ukrainian military intelligence reports that Russia deploys 30-50 of these drones daily, training their AI in real combat settings (Hardie, 2025). In one notable instance, a swarm of seven V2Us reportedly attacked a civilian area, raising questions about the system’s capacity to distinguish non-combatants or the ethical parameters within which it was programmed (Hardie, 2025).
The autonomous nature of the V2U presents serious ethical and legal concerns. The drone’s reliance on commercial-grade imaging for object classification risks misidentification, particularly in asymmetric warfare scenarios where combatants and civilians are visually indistinguishable. These developments underscore the urgency of establishing international norms and regulatory frameworks to ensure that autonomous systems can be used responsibly and in compliance with international humanitarian law (Perrin, 2025).
Risks and Advantages of Lethal Autonomous Weapon Systems
The adoption and proliferation of lethal autonomous weapons systems (LAWS) pose significant risks. Unlike nuclear weapons, which deter conflict through the promise of massive destruction, LAWS’ accessibility and precision may lower the threshold for war (Wilner & Babb, 2020). Fully autonomous systems lack reliable mechanisms to ensure adherence to international humanitarian law, risking violations of the principles of distinction and proportionality (Meyer, 2024). Furthermore, the absence of global regulatory frameworks exacerbates the risk of weapon misuse by authoritarian regimes and non-state actors, contributing to strategic instability (Haner & Garcia, 2019). Algorithmic biases within autonomous systems, rooted in skewed data and human programming, could lead to unintended civilian casualties and perpetuate societal inequities, as biased systems misclassify individuals or over-rely on flawed decision-making processes (Meyer, 2024).
Compounding these concerns are interoperability challenges within alliances like NATO. Disparities in AI governance and capabilities among member states can hinder collective defense and operational cohesion. Uneven readiness and ethical disagreements about AI’s role in warfare undermine the credibility and effectiveness of multinational coalitions (Wilner & Babb, 2020).
Despite the risks, LAWS present transformative potential in modern warfare. Autonomous systems could enhance battlefield efficiency by reducing human error and fatigue, and their consistent, ethically guided decision-making could reduce the incidence of war crimes. Properly designed, they could align with international humanitarian law by improving target discrimination, thus minimizing civilian casualties. Their speed, endurance, and precision provide operational advantages, enabling rapid, sustained responses without risking human lives (Wilner & Babb, 2020; Sayler, 2020).
Public-private collaboration and innovation are critical to maintaining strategic parity, especially against perceived U.S. adversaries such as China, which integrates state and corporate resources to rapidly advance AI capabilities. Without equivalent collaboration, democratic states risk losing their technological edge (Wilner & Babb, 2020).
Progressive Steps Towards Autonomous Weapon Systems (AWS)
The transition toward fully operational AWS demands a deliberate and measured approach to ensure ethical alignment and operational efficacy. Central to this journey is establishing trust between humans and autonomous systems, particularly in life-and-death decisions. Achieving this trust involves developing systems capable of both reliability and ethical compliance through rigorous testing, operational transparency, and adherence to international humanitarian law. Historical examples, such as the incremental integration of AI in surveillance and navigation systems like the U.S. Navy’s Sea Hunter and Project Maven, illustrate the potential benefits of autonomous technology when balanced with human oversight (Sayler, 2020; Schmidt et al., 2021). As AWS technology evolves, these foundational steps form the blueprint for advancing toward more complex applications while safeguarding accountability and ethical integrity.
Incremental Trust Building
Building trust in AWS requires a methodical approach that begins with rigorous testing and validation of non-lethal systems. The U.S. Department of Defense (DoD) has emphasized this through initiatives such as the Joint Common Foundation, a $106 million project aimed at creating environments for testing and validating AI systems (Sayler, 2020). These frameworks ensure that non-lethal autonomous systems are rigorously evaluated for reliability and ethical compliance before progressing to lethal applications. Testing must simulate realistic scenarios, aligning with DoD Directive 3000.09, which mandates that AWS include mechanisms for terminating engagements if objectives cannot be met (Sayler, 2025). Systems like the Loyal Wingman, designed to autonomously handle navigation and operational tasks, represent early steps in integrating autonomy while maintaining human oversight. Similarly, swarming technology has been explored to test cooperative, low-cost autonomous vehicles that enhance military effectiveness without compromising accountability (Sayler, 2020).
Training autonomous systems with human oversight in controlled combat scenarios further strengthens trust and operational readiness. Programs like Project Maven demonstrate the utility of AI in analyzing surveillance footage and identifying threats, all while ensuring final decision-making rests with human operators (Sayler, 2020). This aligns with ethical principles emphasizing “appropriate human judgment,” as outlined in DoD policies (Sayler, 2025). Ensuring compliance with international humanitarian law is central to this process, requiring systems to meet standards of distinction, proportionality, and accountability. For example, integrating human authorization points into AWS ensures that operators remain actively involved in decisions with life-and-death implications, fostering trust and accountability without sacrificing operational efficiency (Schmidt et al., 2021).
Moreover, addressing challenges like algorithmic bias and explainability is critical in refining autonomous systems for combat use. Biases in AI can perpetuate inequities or lead to misclassifications, as noted in cases where algorithms disproportionately misidentify individuals based on gender or race (Meyer, 2024). Collaborative efforts among technologists, policymakers, and ethicists are necessary to refine these systems continually, ensuring they uphold ethical standards and human rights. Transparency in AI development, coupled with robust diversity in design teams, can mitigate biases and improve public trust in these technologies (Meyer, 2024). The DoD’s adherence to rigorous training, validation, and compliance frameworks ensures a balanced approach, advancing autonomy while maintaining ethical oversight and operational reliability.
Effectiveness in Decision Accuracy
Decision accuracy is a critical metric for evaluating autonomous weapon systems, particularly as these systems increasingly operate in high-stakes combat environments. AWS must consistently distinguish between valid targets (combatants) and non-targets (civilians) to comply with the principle of distinction under international humanitarian law (Umbrello, 2020). Advanced AI systems offer potential improvements over human decision-makers, who may be prone to emotional or cognitive biases in stressful conditions. For instance, automated systems operate without the emotional distress or fatigue that often compromises human judgment, potentially making them superior moral actors in combat scenarios (Meyer, 2024). However, challenges persist. Current systems often struggle with nuanced decision-making, such as distinguishing between civilians and combatants in ambiguous situations or adhering to proportionality, which requires weighing military advantage against potential civilian harm (Umbrello, 2020). These limitations highlight the importance of ongoing refinement in AI algorithms to ensure both precision and ethical compliance.
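One way to ground this metric, under the assumption that human-labeled ground truth is available from test ranges or after-action review, is to score a system’s target classifications offline. The sketch below is a minimal illustration with invented data, not a validated evaluation protocol; it simply separates the two error types that matter most for the principle of distinction.

def distinction_metrics(decisions):
    """Score classifier outputs against human-labeled ground truth.

    decisions: iterable of (predicted_combatant: bool, actually_combatant: bool).
    Returns the two error rates most relevant to the principle of distinction.
    """
    missed = harmed_civilians = combatants = civilians = 0
    for predicted, actual in decisions:
        if actual:
            combatants += 1
            missed += not predicted            # combatant treated as a non-target
        else:
            civilians += 1
            harmed_civilians += predicted      # civilian misclassified as a target
    return {
        "civilian_misclassification_rate": harmed_civilians / max(civilians, 1),
        "missed_combatant_rate": missed / max(combatants, 1),
    }

# Example with placeholder data: one civilian wrongly flagged out of two.
print(distinction_metrics([(True, True), (False, True), (True, False), (False, False)]))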
Reliability Under Real-Time Constraints
Reliability under real-time constraints is another essential evaluation metric for AWS, as military operations often demand split-second decision-making. Autonomous systems offer advantages in operating at “machine speed,” allowing for rapid data processing and responses that far exceed human capabilities (Williams, 2021). This speed can provide strategic advantages, enabling AWS to outmaneuver adversaries and neutralize threats more efficiently. However, reliance on machine-speed decisions raises concerns about accountability, as systems may act too quickly for human operators to intervene effectively in critical scenarios (Williams, 2021). Rigorous testing and validation, as mandated by DoD Directive 3000.09, are necessary to ensure that AWS maintain reliability without compromising human oversight (Sayler, 2025). For example, the U.S. Navy’s Sea Hunter vessel, which autonomously navigates and conducts months-long submarine-hunting missions, demonstrates the potential for high reliability in autonomous systems under real-time operational constraints (Sayler, 2020).
Reduction in Operational Risks
AWS must also be evaluated based on their ability to reduce operational risks, including collateral damage, friendly fire, and unintended escalation of conflicts. Evidence suggests that AWS can mitigate risks associated with human error, such as emotional responses under pressure, by adhering strictly to preprogrammed ethical guidelines (Umbrello, 2020). For example, swarming technologies, which use large numbers of low-cost autonomous vehicles, can enhance precision in military operations while reducing reliance on large-scale, indiscriminate weapons (Sayler, 2020). However, the risk of algorithmic bias in decision-making poses a significant challenge. Biased AWS could misclassify civilians as combatants or disproportionately affect certain demographic groups, exacerbating social inequities (Meyer, 2024). Addressing these risks requires regular evaluation of AWS algorithms and transparent mechanisms for identifying and mitigating biases, ensuring that systems operate both effectively and equitably in diverse operational contexts.
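Extending that kind of offline scoring into a routine bias audit could be as simple as comparing civilian-misclassification rates across demographic or contextual groups. The sketch below assumes labeled audit records exist; the group labels and figures are invented for illustration, and a large disparity between groups would be the signal flagged for review.

from collections import defaultdict

def misclassification_by_group(records):
    """records: iterable of (group, predicted_combatant, actually_combatant).

    Returns the civilian-as-combatant error rate per group, the disparity a
    routine bias audit would flag for review and mitigation.
    """
    errors, civilians = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        if not actual:                      # only civilians can be wrongly targeted
            civilians[group] += 1
            errors[group] += predicted
    return {g: errors[g] / civilians[g] for g in civilians}

# Invented illustrative data: the gap between groups would warrant investigation.
audit = misclassification_by_group([
    ("group_a", False, False), ("group_a", False, False), ("group_a", True, False),
    ("group_b", False, False), ("group_b", False, False), ("group_b", False, False),
])
print(audit)  # e.g. {'group_a': 0.33..., 'group_b': 0.0}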
Collectively, these metrics—decision accuracy, real-time reliability, and risk reduction—provide a comprehensive framework for assessing the ethical and operational viability of AWS. To advance the development of these systems responsibly, governments and stakeholders must prioritize rigorous testing, adherence to International Humanitarian Law, and continuous refinement of AI capabilities to align with ethical and strategic objectives.
Ethical Considerations and Policy Framework
In the realm of warfare, maintaining human agency in life-and-death decisions remains a cornerstone of ethical military practice. The introduction of lethal autonomous weapons systems (LAWS) challenges this principle by transferring decision-making about lethal force to machines. Such a shift raises profound ethical and legal questions. The United Nations has taken a strong stance against fully autonomous weapons, with Secretary-General António Guterres describing them as “politically unacceptable and morally repugnant” (United Nations Office for Disarmament Affairs, 2023). Guterres advocates for a legally binding instrument to prohibit LAWS without human oversight by 2026, emphasizing that they cannot comply with international humanitarian law due to their inability to make nuanced moral judgments (UNODA, 2023).
Ethical frameworks underscore the necessity of human judgment to ensure that military actions are morally justified. Autonomous systems, no matter how advanced, lack the ability to deliberate on ethical principles like proportionality and distinction, both critical to international humanitarian law compliance (Meyer, 2024). Scholars, such as Peter Asaro, emphasize that the moral and legal responsibility for lethal actions cannot be delegated to algorithms, which operate without consciousness or moral agency (Meyer, 2024). Machines follow preprogrammed protocols and are incapable of understanding the human impact of their decisions or reflecting on moral implications, making them fundamentally unsuitable for independent lethal decision-making (Asaro, 2012).
Moral responsibility in warfare also entails accountability, a concept inherently tied to human intentionality and judgment. Autonomous systems create a moral “offloading” problem, where responsibility for lethal outcomes becomes ambiguous (Horowitz, 2016). While machines might theoretically adhere more strictly to ethical rules than humans, this does not eliminate the need for human oversight to address unanticipated situations or moral ambiguities (Umbrello, 2020). Moreover, reliance on machines risks dehumanizing warfare by removing the emotional and ethical weight of taking a life, a key consideration in justifying lethal actions (Meyer, 2024; Umbrello, 2020).
The irreplaceable role of humans in moral decision-making is rooted in the complexity of ethical reasoning, emotional engagement, and the ability to weigh the consequences of actions. Sentience and empathy provide a moral depth that machines cannot replicate, emphasizing the necessity of retaining humans in the decision-making loop (Véliz, 2021). Moral decision-making requires subjective experience, emotional depth, and the capacity for genuine ethical reasoning, qualities that artificial systems inherently lack. This centrality of human judgment is not only vital for ensuring justice but also for preserving the moral integrity of warfare (Stenseke, 2022; Asaro, 2012).
Ultimately, while LAWS offer operational efficiencies and may reduce human error, their deployment without human oversight undermines core ethical principles. As nations grapple with these challenges, the consensus remains clear: humans must retain ultimate responsibility for decisions involving the use of lethal force. This ensures accountability, aligns with established international laws, and preserves the moral and humanitarian standards essential in armed conflict (UNODA, 2023; Asaro, 2012).
Policy Evolution
As AWS advance rapidly, the policy landscape must evolve to balance their military utility with essential ethical constraints. AWS offer unprecedented advantages in speed, precision, and risk reduction for human combatants (Meyer, 2024). However, their capacity to operate independently, especially in lethal engagements, poses profound challenges to existing legal and moral frameworks. To harness AWS effectively without eroding humanitarian standards, international law must establish clear and enforceable thresholds for human oversight in life-or-death decisions.
Currently, international humanitarian law forms the backbone of ethical warfare governance. It mandates human judgment in applying the principles of distinction, proportionality, and necessity, ensuring that combatants are properly identified and civilian harm is minimized (Perrin & Zamani, 2025). Article 36 of Additional Protocol I to the Geneva Conventions further requires that all new weapons be reviewed for international humanitarian law compliance, making it a critical tool for regulating AWS (Perrin, 2025). Yet these frameworks often lack specificity about what constitutes “meaningful human control” in an age of algorithmic autonomy (Meyer, 2024). For instance, while the U.S. asserts its commitment to international humanitarian law, it rejects universal thresholds for human oversight, advocating instead for flexible, state-defined standards (Perrin & Zamani, 2025). This ambiguity opens the door to diverging interpretations and weakens accountability across borders.
Ethical arguments state that the act of taking a life should be grounded in informed, accountable human decision-making, an ethical line that machines cannot cross without eroding the foundation of international law (Asaro, 2012). Proposals have emerged for a new international treaty focused not just on the effects of AWS, but on regulating the decision-making process itself (Perrin, 2025). Such a treaty would prohibit fully autonomous systems that lack human intervention and require all AWS to include clear, auditable human authorization mechanisms for lethal force.
Key judgments in warfare, like determining direct participation in hostilities or assessing proportionality, are deeply contextual and morally complex (Meyer, 2024). Translating these human judgments into code is technically challenging and ethically fraught (Asaro, 2012). This challenge is exacerbated by the rapid pace of AI development and the growing deployment of semi- and fully autonomous systems in contemporary conflicts, such as those in Ukraine and Gaza. The UN’s December 2024 resolution reflects increasing consensus around a dual-tiered approach: prohibit systems incapable of complying with international humanitarian law while regulating those with partial autonomy (Perrin, 2025).
Moreover, the global proliferation of low-cost, AI-enabled loitering munitions and swarm technologies raises the stakes for regulatory harmonization. Authoritarian states and non-state actors may exploit ethical gaps, using AWS to conduct attacks without accountability (Umbrello, 2020; Wilner & Babb, 2020). Without clear international standards, states with lower ethical thresholds could gain coercive advantages over more constrained democracies. This emerging “moral asymmetry” underscores the strategic necessity of aligning policy, ethics, and military capability (Wilner & Babb, 2020).
To meet this challenge, the international community should adopt a comprehensive and enforceable treaty that defines the boundaries of autonomy in warfare. Such a framework should clearly delineate the limits of machine-led lethal decision-making; require consistent human oversight and accountability mechanisms; mandate transparency in AI development and deployment; and promote international collaboration among governments, technologists, and ethicists (Meyer, 2024).
Ultimately, AWS can play a vital role in future combat, but only if their use is governed by strong, shared ethical commitments. By embedding human values and legal safeguards into the design and regulation of AWS, states can maintain operational superiority without compromising the moral fabric of international law.
Losing the Human in the Loop and Ensuring Ethical Compliance
The question of when and why to remove humans from decision-making loops in autonomous weapon systems (AWS) hinges on balancing operational efficiency with ethical imperatives. Current debates emphasize that while AWS may surpass humans in areas such as precision, endurance, and resistance to fatigue, fully removing humans from the loop presents significant ethical and legal challenges. Human oversight ensures decisions align with principles of international humanitarian law, such as distinction and proportionality, which require nuanced judgment not replicable by current AI technologies (Asaro, 2012; Meyer, 2024).
However, certain scenarios may justify reducing human involvement, such as rapid-response situations where human reaction times are inadequate. For example, defensive systems like the U.S. Patriot missile or Israel’s Iron Dome operate with limited human oversight to intercept high-speed threats, demonstrating the potential benefits of automation in constrained scenarios (Horowitz, 2016). Even in such cases, ethical considerations demand that humans maintain ultimate accountability for lethal actions, ensuring that AWS are employed within strictly defined parameters to avoid arbitrary or indiscriminate harm.
Ensuring ethical compliance by autonomous systems involves embedding moral and legal frameworks into their design while maintaining robust mechanisms for oversight and accountability. The concept of “ethical governors,” which program international humanitarian law principles into AWS, has been proposed to constrain their actions within legal and moral boundaries. However, this approach faces challenges due to the interpretative nature of international humanitarian law and the difficulty of translating complex human judgments into algorithmic processes (Asaro, 2012; Perrin, 2025).
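The “ethical governor” concept is easiest to picture as a final, rule-based check that can only withhold an engagement, never initiate one. The sketch below is a minimal illustration of that idea under invented thresholds and fields; it is not drawn from any actual system, and its very simplicity reflects the critique above, since genuine distinction and proportionality judgments resist reduction to fixed numeric tests.

from dataclasses import dataclass

@dataclass
class EngagementPlan:
    target_is_combatant_confidence: float  # classifier confidence, 0.0-1.0
    expected_civilian_harm: float          # estimated incidental harm (arbitrary units)
    expected_military_advantage: float     # estimated advantage (same units)

def ethical_governor(plan, min_confidence=0.95, max_harm_ratio=0.2):
    """Illustrative constraint check: returns True only if the plan clears
    hard-coded stand-ins for distinction and proportionality; otherwise the
    engagement is withheld for human review."""
    if plan.target_is_combatant_confidence < min_confidence:
        return False  # distinction not established with sufficient confidence
    if plan.expected_civilian_harm > max_harm_ratio * plan.expected_military_advantage:
        return False  # incidental harm excessive relative to anticipated advantage
    return True

plan = EngagementPlan(0.97, expected_civilian_harm=1.0, expected_military_advantage=10.0)
print(ethical_governor(plan))  # True under these invented thresholds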
Transparency in system design and deployment is critical to building trust and enabling accountability. Regular testing, refinement, and mitigation of biases in algorithms are essential for ensuring ethical outcomes. Collaborative efforts involving governments, technologists, and ethicists are necessary to develop systems that adhere to international norms while upholding human rights and justice (Meyer, 2024). Furthermore, treaties regulating the development and use of AWS must mandate human control over key decisions, with mechanisms for intervention and deactivation to prevent unintended consequences (Perrin, 2025).
Ultimately, while AWS offer the promise of enhanced precision and reduced human risk, their ethical deployment depends on a clear understanding of when human oversight can be minimized without compromising moral responsibility. Ensuring compliance requires a combination of technical safeguards, policy frameworks, and international cooperation to address the evolving challenges posed by autonomous systems in warfare.
The Future of Autonomous Weaponry
The future of autonomous systems hinges on whether they remain tools to assist human decision-making or evolve into independent decision-makers. As supporting tools, autonomous systems could enhance operational efficiency by handling data analysis, reconnaissance, and logistical tasks, allowing human commanders to focus on strategic decisions. This approach aligns with ethical concerns by ensuring humans remain in control of life-and-death decisions (Asaro, 2012; Meyer, 2024). Conversely, transitioning to fully autonomous decision-makers risks eroding accountability and adherence to international humanitarian law, especially in complex, ambiguous scenarios where human judgment is irreplaceable (Perrin, 2025). Striking this balance will depend on technological advancements and the international community’s willingness to regulate autonomy levels in warfare.
To ensure ethical deployment, the convergence of policy and technology must prioritize transparency, accountability, and compliance with established legal norms. Collaborative frameworks, such as those proposed by the United Nations, advocate for integrating ethical programming into autonomous systems while maintaining mechanisms for human oversight (Perrin, 2025). Advances in machine learning could support these goals by enabling systems to simulate ethical reasoning within strict parameters, though this requires continuous refinement to address algorithmic biases (Meyer, 2024). Nations that prioritize collaboration between policymakers, technologists, and ethicists will likely lead the effort in harmonizing legal and technological standards for autonomous weapons (Wilner & Babb, 2020). Future conflict scenarios will likely test these frameworks, but the continued prioritization of human agency and accountability will remain central to upholding the principles of justice and humanity in warfare.
Implications
This research underscores the critical need for a measured and ethical approach to the development and deployment of AWS. Key findings reveal that achieving a balance among autonomy, trust, and policy is paramount. While AWS offer operational advantages such as speed, precision, and adaptability, these benefits must not come at the expense of human oversight or ethical accountability. Clear policy frameworks, coupled with mechanisms for ensuring ethical compliance, are essential to maintaining trust between humans and autonomous systems, particularly in life-or-death decision-making scenarios.
The path forward demands a collective effort among policymakers, technologists, and international stakeholders to prioritize responsible innovation. By embedding ethical considerations into the design and deployment of AWS, the defense community can navigate the complexities of modern warfare while minimizing harm and preserving human dignity. The future of autonomous systems in defense depends on maintaining this balance, ensuring that technological advancements serve humanity’s broader goals of security, justice, and peace.
Sources
Asaro, Peter. “On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-Making.” International Review of the Red Cross 94.886 (2012): 687–709. Web.
Barbosa, Lutiana Valadares Fernandes. 2023. “Exploring the 2023 U.S. Directive on Autonomy in Weapon Systems: Key Advancements and Potential Implications for International Discussions.” CEBRI-Journal Year 2, No. 7: 117-136.
Epstein, Jake. “Army Secretary Says US Can’t Keep Pumping Money into Expensive Weapons That Can Be Taken out by an $800 Russian Drone.” Business Insider, 7 May 2025, https://www.businessinsider.com/army-secretary-us-cant-make-expensive-weapons-vulnerable-cheap-drones-2025-5.
Evans, Ryan. “The Army’s Upcoming Transformation, with Secretary Driscoll and Gen. George.” War on the Rocks, 7 May 2025, https://warontherocks.com/2025/05/the-armys-upcoming-transformation-with-secretary-driscoll-and-gen-george/.
Gordon, Michael R. “U.S. Army Plans Massive Increase in Its Use of Drones.” Wall Street Journal, 30 Apr. 2025, https://www.wsj.com/politics/national-security/us-army-drones-shift-20cc5753?st=3NFHyt.
Haner, J. and Garcia, D. (2019), The Artificial Intelligence Arms Race: Trends and World Leaders in Autonomous Weapons Development. Glob Policy, 10: 331-337. https://doi.org/10.1111/1758-5899.12713
Hardie, John. “Ukrainian Intelligence Details Russia’s New V2U Autonomous Loitering Munition.” FDD’s Long War Journal, 14 June 2025, https://www.longwarjournal.org/archives/2025/06/ukrainian-intelligence-details-russias-new-v2u-autonomous-loitering-munition.php.
Hicks, Kathleen H. “DoD Directive 3000.09: Autonomy in Weapon Systems.” DoD Issuances, 25 Jan. 2023, www.esd.whs.mil/DD/DoD-Issuances/.
Horowitz, Michael C.; The Ethics & Morality of Robotic Warfare: Assessing the Debate over Autonomous Weapons. Daedalus 2016; 145 (4): 25–36. doi: https://doi.org/10.1162/DAED_a_00409
Jones, Taylor. “Real-Life Technologies That Prove Autonomous Weapons Are Already Here.” Future of Life Institute, 12 Nov. 2021, https://futureoflife.org/aws/real-life-technologies-that-prove-autonomous-weapons-are-already-here/.
“Lethal Autonomous Weapon Systems (LAWS).” United Nations Office for Disarmament Affairs, 2023, https://disarmament.unoda.org/the-convention-on-certain-conventional-weapons/background-on-laws-in-the-ccw/.
Meyer, Kjell. “AI-Driven Unmanned Aerial Vehicles in Modern Warfare: Assessing the Ethical Boundaries of Just War Theory.” Ghent University: Faculty of Political and Social Sciences, Christoph Vogel, 2024, libstore.ugent.be/fulltxt/RUG01/003/213/939/RUG01-003213939_2024_0001_AC.pdf.
Nasu, Hitoshi. “The Kargu-2 Autonomous Attack Drone: Legal & Ethical Dimensions.” Lieber Institute West Point, 10 June 2021, https://lieber.westpoint.edu/kargu-2-autonomous-attack-drone-legal-ethical/.
“Obscure Russian V2U Drone Unraveled by Intelligence: Autonomous Loitering Munition Powered by Nvidia Chip.” Defense Express, 9 June 2025, https://en.defence-ua.com/weapon_and_tech/obscure_russian_v2u_drone_unraveled_by_intelligence_autonomous_loitering_munition_powered_by_nvidia_chip-14798.html.
Perrin, Benjamin. “Lethal Autonomous Weapons Systems & International Law: Growing Momentum Towards a New International Treaty.” ASIL, 24 Jan. 2025, https://www.asil.org/insights/volume/29/issue/1.
Perrin, Benjamin, Zamani, Masoud. “The Future of Warfare: National Positions on the Governance of Lethal Autonomous Weapons Systems.” Lieber Institute West Point, 11 Feb. 2025, https://lieber.westpoint.edu/future-warfare-national-positions-governance-lethal-autonomous-weapons-systems/.
Sayler, Kelley M. “Defense Primer: U.S. Policy on Lethal Autonomous Weapon Systems.” Congress.Gov, 2 Jan. 2025, https://www.congress.gov/crs-product/IF11150.
Sayler, Kelley M. “Artificial Intelligence and National Security.” Congress.Gov, 10 Nov. 2020, https://www.congress.gov/crs-product/R45178.
Schmidt, E., Work, R., Catz, S., & Horvitz, E. (2021). Final Report. National Security Commission on Artificial Intelligence. https://reports.nscai.gov/final-report/
Sexton, Mike, and Erik Carneal. “Lethal Autonomous Weapons 101.” Third Way, 28 Feb. 2024, https://www.thirdway.org/memo/lethal-autonomous-weapons-101.
Stenseke, J. (2022). The Morality of Artificial Friends in Ishiguro’s Klara and the Sun. Journal of Science Fiction and Philosophy 5. https://philarchive.org/rec/STETMO-63
Umbrello, S., Torres, P. & De Bellis, A.F. The future of war: could lethal autonomous weapons make conflict more ethical?. AI & Soc 35, 273–282 (2020). https://doi.org/10.1007/s00146-019-00879-x
“Weapons.” Autonomous Weapons Watch, https://autonomousweaponswatch.org/weapons?type=3&categories=&tags=60. Accessed 8 May 2025.
Williams, John. “‘Effective, Deployable, Accountable: Pick Two’: Regulating Lethal Autonomous Weapon Systems.” E-International Relations, 12 Aug. 2021, https://www.e-ir.info/2021/08/12/effective-deployable-accountable-pick-two-regulating-lethal-autonomous-weapons-system/.
Wilner, Alex, and Casey Babb. “New Technologies and Deterrence: Artificial Intelligence and Adversarial Behavior.” NL ARMS Netherlands Annual Review of Military Studies: 2020 Deterrence in the 21st Century—Insights from Theory and Practice, Asser Press, The Hague, Netherlands, 2020, pp. 401–417, https://library.oapen.org/bitstream/handle/20.500.12657/43303/2021_Book_NLARMSNetherlandsAnnualReviewO.pdf?sequence=1#page=412. Accessed 2025.