This paper explores how artificial intelligence is changing the character of regional conflicts and influencing the balance of power, focusing on the case of Iran, Israel, and the United States. The study looks at the historical transition from traditional military confrontation to more digital and technology-driven forms of warfare. It also examines practical examples of how AI is applied in military and security operations. Finally, the research considers what these developments may mean for regional stability, deterrence strategies, and the future direction of warfare.
The study uses a qualitative and descriptive-analytical approach. It combines systems analysis with a detailed case study of the tensions between Iran, Israel, and the United States.
How is artificial intelligence reshaping the nature of regional conflicts and the balance of power in the triangular relationship between Iran, Israel, and the United States?
First- The Transformation from Conventional Warfare to Digital Warfare

Conventional warfare is generally understood as armed conflict between states or organized military forces that mainly use traditional weapons and battlefield tactics, rather than nuclear, chemical, or biological weapons. It is usually marked by direct, large-scale battles between regular armies, clearly defined fronts, and the central importance of controlling land, sea, and air areas. For much of the twentieth century, conventional wars depended on industrial-era mass mobilization, hierarchical command structures, and operations that relied heavily on firepower. [1][2][3]
Missile systems became an important part of conventional military forces, giving states the ability to strike enemy troops, infrastructure, and strategic targets at longer distances. Tactical ballistic missiles, cruise missiles, and anti-armor guided missiles worked alongside traditional artillery and air-delivered munitions, increasing both the reach and accuracy of strikes. These weapons allowed militaries to project power beyond the front lines and to threaten an opponent’s supply lines, airbases, and command centers, which influenced deterrence and coercive strategies within the conventional warfare framework. [3][4][5]
Armored units and tanks have historically been the core of land-based conventional forces, combining speed, protection, and firepower to break through enemy lines and exploit openings. Modern tanks and armored vehicles operate alongside artillery, infantry, and air support in combined-arms formations designed for high-intensity, maneuver-focused battles. Anti-armor tactics developed in response, using man-portable and vehicle-mounted guided missiles, precision artillery, and attack aircraft, creating an ongoing cycle of offensive and defensive innovation centered on armored platforms. [4][3]
Direct combat is another key feature of conventional warfare, often involving visible troop concentrations, planned battles, and attrition campaigns aimed at weakening the enemy and gaining control of territory. Intelligence has always been important, but traditionally it was gathered and analyzed over longer periods, and commanders had to make decisions with incomplete information. While precision-guided weapons and networked communications started to change these practices toward the end of the twentieth century, conventional warfare remained largely based on industrial-era weapons, formations, and doctrines. [2][6][1][3]
The emergence of digital technologies, networked systems, and cyberspace has given rise to what many analysts describe as digital warfare, in which conflict extends into virtual domains and relies heavily on information processing, connectivity, and software. Digital warfare does not replace conventional operations but overlays and transforms them by integrating cyber, information, and AI‑enabled capabilities into the traditional kill chain. States increasingly view control over data, networks, and information flows as a critical source of military advantage and a primary target for adversary operations.[7][6][8]
Cyber warfare refers to hostile activities conducted through or against information systems, networks, and digital infrastructure, aiming to disrupt, degrade, or destroy an adversary’s capabilities or to gain strategic advantage. Such operations can include penetration of command-and-control networks, attacks on critical infrastructure, theft of sensitive data, and manipulation of military or civilian systems. Cyber operations often occur below the threshold of overt armed attack, offering states a tool for continuous competition, coercion, and preparation of the battlespace without necessarily triggering large‑scale conventional responses.[6][8][7]
Information warfare involves the deliberate use and manipulation of information to influence perceptions, decision‑making, and behavior of adversaries, allies, and domestic audiences. It encompasses activities such as disinformation campaigns, psychological operations, perception management, and the shaping of narratives through traditional and social media. In the digital age, information warfare is amplified by online platforms, algorithmic content curation, and the availability of detailed data on target audiences, enabling more precise and persistent influence operations in conjunction with kinetic and cyber actions.[9][7][6]
Intelligent military systems represent a further evolution of digital warfare, integrating AI and advanced software into sensors, weapons, and command-and-control structures. These systems can autonomously detect and classify targets, optimize engagement plans, coordinate swarms of drones, and provide real‑time decision support to commanders. As militaries become more digitally dependent, their combat effectiveness increasingly rests on the resilience and performance of these intelligent systems, but so does their vulnerability to cyberattacks, data manipulation, and system failures.[10][11][12][6]
Second- The Role of Artificial Intelligence in Military Operations
Artificial intelligence has become a key factor in the shift from conventional to digital warfare because it changes how military forces gather, interpret, and act on information. AI tools can manage enormous amounts of data far beyond what humans can handle, spot patterns or unusual activity in complex operational settings, and support quicker, more informed decisions. As a result, AI is increasingly used across the military, from intelligence collection and operational planning to targeting, logistics, and defensive measures.[11][12][10]
In analyzing military data, AI and machine-learning systems help combine information from many sources, including satellites, UAVs, radar, signals intelligence, and open-source reporting, into a single operational picture. These systems can automatically label objects in images, detect changes over time, flag suspicious activity in communications or financial networks, and even forecast likely actions of adversaries. This shortens the traditional intelligence cycle significantly, allowing near real-time identification of threats and supporting faster targeting and operational decisions. [12][10][6][11]
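To make the fusion idea concrete, the sketch below combines detection confidences from several hypothetical sources into one fused probability, under a naive assumption of independent sensors and a flat prior; the sensor types and numbers are invented for illustration, not drawn from any real system.

```python
def fuse_detections(probs):
    """Fuse per-sensor detection probabilities into one posterior.

    Assumes each probability was formed under a flat 0.5 prior and that
    sensors err independently, so posterior odds simply multiply
    (a naive Bayes-style combination).
    """
    odds = 1.0
    for p in probs:
        odds *= p / (1 - p)
    return odds / (1 + odds)

# Hypothetical confidences: satellite imagery, UAV video, signals intercept.
print(round(fuse_detections([0.70, 0.60, 0.80]), 3))  # -> 0.933
```

Three individually uncertain sources (0.70, 0.60, 0.80) combine into a fused confidence of roughly 0.93, which is the sense in which fusion sharpens the operational picture; real fusion engines relax the independence assumption and weight sources by reliability.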
Unmanned aerial vehicles (UAVs) are one of the most visible areas where AI has transformed military operations. AI-equipped UAVs can navigate autonomously or semi-autonomously, plan optimal routes, recognize targets, and coordinate in swarms, which increases coverage while reducing risks to human pilots. For reconnaissance, AI helps UAVs analyze sensor data onboard and send only the most relevant information back to operators. In strike missions, AI supports target classification and guides weapons under human supervision. Swarming strategies further exploit AI, coordinating many low-cost UAVs to challenge defenses and carry out distributed, complex operations. [10][11][12]
AI is also increasingly central to smart defense systems, including advanced air and missile defense, perimeter security, and electronic warfare tools. Machine-learning algorithms help filter out radar noise, distinguish real threats from decoys, and adapt to changing tactics almost in real time. AI-enabled command and control systems connect these defenses with larger operational networks, helping commanders manage interceptors, monitor sensor coverage, and coordinate responses across multiple domains. Overall, these changes show how AI is moving military power away from reliance on mass and firepower toward information, connectivity, and speed driven by algorithms, forming the foundation of the broader transition from conventional to digital warfare. [5][6][10]
Third- Applications of Artificial Intelligence in Regional Conflicts
- Unmanned Aerial Vehicles and Autonomous Combat Systems
Unmanned aerial vehicles, or UAVs, have become one of the most visible and important ways that artificial intelligence is applied in regional conflicts. AI allows these drones to navigate on their own or semi-independently, avoid obstacles, plan optimal routes, and adjust quickly to changing conditions on the battlefield. When used for surveillance, AI-powered computer vision lets UAVs detect, classify, and follow vehicles, personnel, and infrastructure. This greatly reduces the human effort needed to analyze images and makes it possible to maintain continuous monitoring over contested areas. [13][14][15]
In combat situations, AI helps with identifying and engaging targets, including spotting high-value objectives, evaluating potential collateral damage, and guiding precision weapons even in difficult weather, complex terrain, or under electronic warfare interference. Swarming technologies, which link large numbers of relatively low-cost UAVs through distributed algorithms, improve both attack and defense capabilities by overwhelming enemy defenses, coordinating sensors, and enabling complicated, multi-directional strikes. Similar AI-driven systems operate on land and at sea, such as unmanned ground vehicles (UGVs) and unmanned surface or underwater vessels (USVs/UUVs), extending these capabilities across domains by offering reconnaissance, mine-clearing, and strike options while keeping human soldiers out of direct danger. In regional conflicts, such AI-enabled platforms are particularly appealing to both major powers and middle-sized states because they allow persistent presence, plausible deniability for strikes, and flexible options for escalation. [14][15][16][13]
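As a toy illustration of how swarm coordination allocates tasks, the sketch below greedily pairs each drone with its nearest unclaimed target; the names, coordinates, and the greedy rule itself are illustrative stand-ins for the distributed auction and consensus schemes used in real swarms.

```python
from math import dist

def assign_targets(drones, targets):
    """Greedily pair each drone with its nearest unclaimed target.

    drones, targets: dicts of name -> (x, y) position.
    Returns a dict mapping drone -> target, claiming the globally
    shortest drone-target pairs first.
    """
    # Rank all drone-target pairs by distance, then claim greedily.
    pairs = sorted(
        ((dist(dp, tp), d, t) for d, dp in drones.items()
         for t, tp in targets.items()),
        key=lambda pair: pair[0],
    )
    assignment, used_drones, used_targets = {}, set(), set()
    for _, d, t in pairs:
        if d not in used_drones and t not in used_targets:
            assignment[d] = t
            used_drones.add(d)
            used_targets.add(t)
    return assignment

drones = {"d1": (0, 0), "d2": (10, 0)}
targets = {"t1": (1, 1), "t2": (9, 1)}
print(assign_targets(drones, targets))  # -> {'d1': 't1', 'd2': 't2'}
```

The appeal of such schemes is that each low-cost platform needs only local information and a shared ranking rule, so the swarm degrades gracefully if individual drones are lost.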
- Artificial Intelligence in Military Intelligence Analysis
Military intelligence today increasingly depends on AI to handle the enormous amount of information generated in modern conflicts. Battles now produce huge volumes of data from sources such as satellites, UAVs, radar, electronic intercepts, open-source platforms, financial systems, and social media—far more than human analysts can process quickly. Machine-learning systems help by automatically identifying and classifying objects in images, spotting patterns in movements or communications, detecting unusual activity that could indicate preparations for attacks, and linking different streams of information into a coherent operational picture. [15][16][13][14]
AI-powered data fusion tools bring together inputs from multiple sources to provide better estimates of an adversary’s capabilities, troop positions, and possible intentions, which support more accurate and timely decisions at tactical, operational, and strategic levels. Predictive models can produce probabilistic forecasts of likely enemy actions, potential escalation scenarios, or weaknesses in critical infrastructure, allowing commanders to anticipate developments instead of just reacting to them. In regional conflicts, where foreign powers, local states, and non-state actors operate across open borders and hybrid battlefields, AI-enhanced intelligence is essential for monitoring proxy networks, missile deployments, cyber threats, and information campaigns almost in real time. [16][17][18][19][20][21][13][15]
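As a stylized example of such probabilistic forecasting, the sketch below propagates a distribution over invented escalation states through a small Markov chain; every state label and transition probability is an illustrative assumption, not an empirical estimate.

```python
# Illustrative escalation states and weekly transition probabilities
# (all numbers invented for the sketch).
states = ["calm", "tension", "strike"]
T = {
    "calm":    {"calm": 0.80, "tension": 0.18, "strike": 0.02},
    "tension": {"calm": 0.30, "tension": 0.55, "strike": 0.15},
    "strike":  {"calm": 0.10, "tension": 0.60, "strike": 0.30},
}

def forecast(dist0, steps):
    """Propagate a probability distribution over states `steps` periods ahead."""
    dist = dict(dist0)
    for _ in range(steps):
        dist = {
            s2: sum(dist[s1] * T[s1][s2] for s1 in states)
            for s2 in states
        }
    return dist

# Starting from a week of open tension, what does the model expect in a month?
start = {"calm": 0.0, "tension": 1.0, "strike": 0.0}
print(forecast(start, 4))
```

Even this toy model captures the core value claimed for predictive analytics: commanders see a probability distribution over scenarios rather than a single point prediction, which supports hedged planning.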
- AI-Driven Cyber Warfare
AI has become both a way to strengthen cyber defenses and a potential source of weakness. On the defensive side, machine‑learning algorithms power systems that detect intrusions and unusual activity by monitoring network traffic, user behavior, and system logs. These tools can adapt faster than traditional signature‑based approaches, allowing earlier identification of zero‑day exploits and sophisticated persistent threats. AI also helps automate responses, letting security teams prioritize alerts, isolate affected systems, and deploy patches or configuration changes at machine speed, which greatly improves reaction times. [22][13][15]
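A minimal sketch of the anomaly-detection idea, assuming a simple rolling z-score over hypothetical per-minute connection counts (deployed systems use far richer features and learned models):

```python
from statistics import mean, stdev

def flag_anomalies(counts, window=5, threshold=3.0):
    """Flag indices whose value deviates from the trailing-window mean
    by more than `threshold` standard deviations.

    counts: a per-minute traffic metric, e.g. connection counts.
    Returns the list of anomalous indices.
    """
    anomalies = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            sigma = 1e-9  # avoid division by zero on flat baselines
        if abs(counts[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Hypothetical traffic with a sudden spike at index 8.
traffic = [100, 102, 98, 101, 99, 100, 103, 97, 400, 101]
print(flag_anomalies(traffic))  # -> [8]
```

The point of learning a baseline rather than matching fixed signatures is visible even here: the detector needs no prior description of the attack, only of normal behavior, which is why such approaches can catch novel intrusions.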
On the offensive side, AI can improve reconnaissance, find vulnerabilities, and automate exploitation by scanning networks, spotting misconfigurations, and creating more convincing phishing or social‑engineering attacks. Generative models can produce customized messages or deepfakes that increase the chance of successful intrusion, while reinforcement-learning techniques can help refine attack strategies in defended and changing environments. In regional conflicts, AI-driven cyber operations may target military command networks, air-defense systems, logistics infrastructure, and even civilian services, often alongside conventional strikes and information campaigns. At the same time, using AI in cyber defense introduces new dangers, such as adversarial attacks on machine‑learning models or data poisoning that can mislead or disable defensive systems. [18][20][21][23][13][15][16][22]
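To see why data poisoning matters, the toy example below trains a trivial nearest-centroid classifier on one invented traffic feature, then shows how injecting mislabeled samples drags the "benign" centroid toward attack-like traffic; the classifier, feature, and numbers are all illustrative, not a model of any real defense system.

```python
def centroid_classifier(samples):
    """Train a 1-D nearest-centroid classifier: label -> mean feature value."""
    sums, counts = {}, {}
    for x, label in samples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    centroids = {lab: sums[lab] / counts[lab] for lab in sums}
    return lambda x: min(centroids, key=lambda lab: abs(x - centroids[lab]))

# Clean training data: "benign" traffic near 1.0, "attack" traffic near 9.0.
clean = [(1.0, "benign"), (1.2, "benign"), (9.0, "attack"), (8.8, "attack")]
clf_clean = centroid_classifier(clean)

# Poisoning: the attacker injects attack-like samples mislabeled "benign",
# dragging the benign centroid toward attack traffic.
poisoned = clean + [(9.1, "benign")] * 20
clf_poisoned = centroid_classifier(poisoned)

print(clf_clean(8.5))     # -> "attack"
print(clf_poisoned(8.5))  # -> "benign": the poisoned model waves it through
```

The same sample is blocked by the clean model and admitted by the poisoned one, which is why the integrity of training pipelines is itself a military security concern.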
- Artificial Intelligence in Information Warfare and Disinformation Campaigns
AI has become an important tool in information warfare, especially for creating and spreading disinformation. Advanced data‑analysis and audience‑profiling tools allow states and non-state actors to segment audiences, identify influential figures, and craft messages that exploit social tensions or grievances. Machine-learning systems can help decide the best timing, format, and platform for content to increase engagement and virality, while bots and automated accounts amplify these messages to reach large audiences quickly. [23][24][16]
Generative AI has added a new dimension to disinformation by making it possible to quickly produce realistic but fake text, images, audio, and video—commonly called deepfakes. These can be used to create false evidence, impersonate leaders, or confuse people during crises and conflicts. In regional disputes, such as between Iran and Israel, AI-assisted disinformation has been applied to shape domestic and international perceptions, undermine opponents’ credibility, and influence how states calculate risks and resolve. At the same time, AI tools are also used to detect manipulated media and coordinated fake activity, showing an ongoing competition between offensive and defensive AI in the information space. [19][21][24][16][18][22][23]
Conceptual Model Description

To illustrate how artificial intelligence functions across different aspects of regional conflict, we might imagine a conceptual diagram or infographic like this:
At the center, place a node called “AI Capabilities,” which acts as the main hub. From there, four primary branches extend outward:
1) UAVs & Autonomous Systems, connected to reconnaissance, strike missions, and swarming operations.
2) Intelligence Analysis, linked to combining data from multiple sources and generating predictive assessments.
3) Cyber Operations, divided into offensive cyber actions and cyber defense measures.
4) Information Warfare, split into disinformation campaigns and efforts to influence perceptions.
Each of these branches then points to a final node labeled “Impact on Regional Conflict,” representing how AI affects escalation, deterrence, and the regional balance of power. This layout corresponds to a flowchart encoded in a Mermaid diagram file (AI_Regional_Conflict_Model) and can be used to create an infographic that clearly shows how AI capabilities connect to practical applications and shape regional conflicts.
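A minimal Mermaid sketch of this layout (node identifiers and edge labels are illustrative renderings of the description above, not taken verbatim from the AI_Regional_Conflict_Model file) might read:

```mermaid
flowchart TD
    AI["AI Capabilities"]
    UAV["UAVs & Autonomous Systems"]
    INT["Intelligence Analysis"]
    CYB["Cyber Operations"]
    INF["Information Warfare"]
    IMP["Impact on Regional Conflict"]

    AI --> UAV
    AI --> INT
    AI --> CYB
    AI --> INF
    UAV -->|reconnaissance / strike / swarming| IMP
    INT -->|data fusion / predictive assessment| IMP
    CYB -->|offense / defense| IMP
    INF -->|disinformation / perception shaping| IMP
```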
Fourth- Case Study: The Conflict between Iran, Israel, and the United States

The conflict among Iran, Israel, and the United States is part of a wider struggle for influence in the Middle East. Iran seeks to expand its strategic depth and strengthen its position through alliances and proxy networks in countries such as Iraq, Syria, Lebanon, and Yemen. Israel and the United States, on the other hand, aim to stop Iran from becoming a regional power that could threaten Israel’s security, weaken U.S. partners, and reshape the regional balance. This rivalry has created what is often called a “shadow war,” marked by covert operations, proxy battles, and occasional direct confrontations, making it difficult to clearly separate peace from open conflict. [25][26][27]
One of the key strategic issues in this triangular conflict is Iran’s nuclear program. Iran insists that its nuclear activities are for peaceful energy purposes, but Israel and the United States see them as a potential step toward nuclear weapons, which they consider an existential or serious strategic threat. The 2015 Joint Comprehensive Plan of Action (JCPOA) temporarily limited Iran’s nuclear activities, yet later U.S. withdrawal and renewed sanctions triggered a resumption of nuclear development and increased tensions. Both Israel and the United States have carried out covert and overt actions—including cyberattacks, targeted killings of nuclear scientists, and strikes on nuclear and military sites—to slow or disrupt Iran’s nuclear progress. In this environment, AI and digital technologies have become essential tools for all three actors to manage escalation, project influence, and pursue strategic goals. [26][28][27][29]
- Cyber Warfare in the Conflict: The Case of Stuxnet
One of the earliest and most notable cases of cyber warfare in this conflict is the Stuxnet operation, which is widely believed to have been a joint U.S.-Israeli effort targeting Iran’s Natanz nuclear facility around 2009–2010. Stuxnet was a highly sophisticated malware designed to infiltrate the computer systems controlling Iran’s IR‑1 centrifuges through Siemens programmable logic controllers (PLCs), even though the facility was isolated from the internet. Analysts have found that the malware altered centrifuge speeds and other operational settings in a way that caused physical damage while showing normal readings to operators, which delayed detection. [29][30][31]
Technical evaluations indicate that Stuxnet destroyed roughly 1,000 centrifuges at Natanz, causing a serious but temporary disruption to Iran’s uranium enrichment. The operation showed that advanced digital tools could produce strategic effects similar to conventional military strikes—specifically, damaging critical nuclear infrastructure—without open combat or immediate loss of life. It also marked a turning point in the militarization of cyberspace, demonstrating that cyber operations could go beyond espionage to deliver precise, covert attacks on industrial control systems, giving states new options in regional power contests. For Iran, the attack revealed vulnerabilities in critical infrastructure and prompted investments in cyber capabilities and asymmetric strategies to reduce future risks. [30][31][29]
- The Use of Advanced Technologies and Intelligent Systems
Unmanned aerial vehicles (UAVs) have become a key element in the ongoing conflict between Iran, Israel, and the United States. Iran possesses a relatively large and battle-tested drone fleet, ranging from small surveillance UAVs to larger systems capable of long-range strikes, many of which use AI-assisted navigation and target recognition to follow terrain and coordinate attacks. Israel and the United States, in contrast, operate networks of reconnaissance drones, loitering munitions, and strike UAVs that are integrated into broader ISR and targeting architectures. These systems allow for continuous surveillance, suppression of enemy air defenses, and precision strikes against Iranian leadership and command centers. AI-supported data fusion and decision-making tools play a central role in these operations, managing sensor inputs, prioritizing targets, and coordinating complex air campaigns at high speed.[32][33]
Advanced surveillance platforms are equally important in this conflict. Israel and the United States employ layered ISR networks—satellites, high-altitude drones, signals intelligence platforms, and ground sensors—linked through digital communications and analyzed with AI assistance. Studies of algorithmic targeting in the Iranian-Israeli confrontation show how Israeli systems combine these data streams to enable precision strikes and drone or missile defenses, while U.S. forces use AI-enabled battle management to coordinate multi-domain operations. Iran, despite having fewer resources, relies on drones, regional proxies, and cyber intrusions to monitor U.S. and Israeli positions while attempting to evade or overwhelm surveillance networks. Across all actors, AI-driven intelligence analysis and machine-learning tools are essential for tracking proxy networks, monitoring missile deployments, and predicting retaliatory moves across multiple fronts.[33][25][32]
The case study highlights a significant shift toward cyber-focused aspects of conflict. Operations like Stuxnet and other cyber campaigns targeting infrastructure, command systems, and information networks show that cyber tools have become central instruments of strategy, able to achieve real effects while maintaining plausible deniability and avoiding open war. Cyber and kinetic operations are increasingly interconnected, with cyberattacks often preparing the way for air or missile strikes or magnifying their psychological impact, embedding digital operations firmly into regional conflict dynamics.[31][29][32][33]
At the same time, the case emphasizes how deeply AI has become embedded in modern security practices. AI underpins algorithmic targeting, air and missile defense, UAV operations, cyber defense and offense, and the management of information operations, influencing how quickly states can detect threats, allocate resources, and react to crises. This reliance on AI contributes to new forms of digital deterrence, where states signal power not only through traditional missiles and aircraft but also through their ability to disrupt enemy systems, defend their own networks, and manage escalation at machine speed. Yet, the opacity, rapidity, and potential fragility of AI-driven systems also bring risks of misunderstanding, accidental escalation, and loss of human control, raising pressing questions about stability, norms, and accountability in the Iran-Israel-United States conflict.[27][34][25][32][33]
Fifth- The Impact of Artificial Intelligence on the Regional Balance of Power
- Reshaping Regional Power Dynamics
Artificial intelligence has become a fundamental factor in regional geopolitics, similar in impact to earlier game‑changers like nuclear weapons or precision-guided munitions, but with wider integration into civilian and military life. Control over AI ecosystems—including access to data, computing power, skilled personnel, and semiconductor supply chains—now strongly influences a state’s ability to project power, respond to crises, and maintain influence in its neighborhood. In areas like the Middle East, AI-enabled intelligence, surveillance, and reconnaissance (ISR), targeting, and air‑defense systems give technologically advanced states clear operational advantages, while also creating opportunities for less powerful actors to adopt asymmetric strategies.[35][36][37][38]
At the same time, AI changes the way vulnerabilities appear. Militaries and economies that are heavily digitized can operate more efficiently and respond faster, but they also become more exposed to cyberattacks, data manipulation, and disruptive AI-driven operations targeting key infrastructure. This double-edged nature—both empowering and creating dependence—makes traditional measures of regional power more complicated, since states must consider not just the number of weapons but also the strength and resilience of their digital and algorithmic systems. Consequently, AI contributes to a more fluid and sometimes unstable regional balance of power, where advantages can shift quickly with new technological breakthroughs or disruptions in critical supply chains.[36][37][38][35]
- Technological Arms Race in Artificial Intelligence
The growing importance of AI in the military has triggered a kind of technological arms race, with countries pouring resources into AI research, defense applications, and AI‑ready military infrastructures. Leading powers present AI as a key factor for future warfare and national strength, creating competition over skilled personnel, valuable datasets, and strategic sectors like advanced semiconductors and cloud computing. At the regional level, this rivalry spreads as allies and competitors try to develop compatible AI systems, collaborate on joint projects, or gain access to foreign technologies to avoid falling behind or becoming strategically dependent.[37][39][38][40][35][36]
This race is not only about numbers but also about “algorithmic” superiority, emphasizing better data quality, model accuracy, and integration into military doctrine and command structures. Experts warn that deploying AI-enabled weapons and decision-support tools quickly in a competitive environment may outstrip proper testing and governance, raising the chance of mistakes, misjudgments, or unintended escalation. Efforts to develop norms, confidence-building measures, and “responsible AI” for military use reflect worries that unregulated competition could destabilize crises and threaten both regional and global security.[39][41][40][35][36]
- The Influence of AI on Deterrence Strategies
AI shapes deterrence in at least two main ways: by improving detection and attribution, and by changing how states perceive their own vulnerabilities and the risks of escalation. AI‑enabled data fusion and continuous surveillance allow countries to watch more of the battlespace and cyberspace, making it easier to spot signs of hostile activity and to identify attackers faster and more accurately. This better monitoring can reinforce deterrence by punishment, since potential adversaries may believe that even covert or “gray zone” operations are more likely to be noticed and punished.[41][35][36][37]
At the same time, AI-driven automation and shorter decision cycles can weaken deterrence stability, because they create pressure to act quickly and introduce uncertainty about how AI systems will behave under stress. If states worry that AI-assisted first strikes, whether in cyber or kinetic forms, could seriously damage their defenses or command networks, they might adopt more preemptive or escalatory strategies. In regional conflicts marked by distrust and fragile communication, AI-based deterrence can therefore be both stabilizing and destabilizing, depending on how detection, attribution, and command-and-control systems are designed and managed.[40][35][36][39][41]
- The Role of Technology in Empowering Smaller States and Non-State Actors
Compared with nuclear weapons, AI is much easier to access and has dual‑use characteristics, meaning that even smaller states or non‑state groups can develop significant capabilities without needing a huge industrial base. Autonomous AI tools and cyber systems reduce the technical barriers for sophisticated operations, letting relatively small actors carry out disruptive attacks on critical infrastructure, steal sensitive information, or influence information environments on a large scale. In this way, they can sometimes “punch above their weight,” gaining influence in regional politics and forcing larger powers to reconsider their strategies.[42][43][36]
Non‑state actors can take advantage of open-source AI models, commercial drones, and widely available software to create AI-enabled tools for reconnaissance, targeting, or propaganda, gradually eroding some of the traditional advantages held by state militaries. For regional states, AI provides a method to compensate for conventional weaknesses through investments in cyber capabilities, UAVs, and information operations. However, it also increases their exposure to well-equipped militant groups and criminal networks. As AI spreads, the balance of power in the region becomes more diffuse and less predictable, with more actors capable of influencing security, disrupting deterrence, and escalating or localizing conflicts through digital means.[43][35][36][42]
Sixth- Challenges and Risks of Military Artificial Intelligence
- Security Risks
Military AI systems create serious security challenges at multiple levels—technical, operational, and strategic. At the technical level, AI models and their data pipelines expand the “attack surface” of armed forces, making them vulnerable to adversarial exploits, data poisoning, model theft, or system spoofing. Such attacks can mislead AI into misclassifying targets or distorting situational awareness, which in turn could produce unlawful or unintended decisions when these outputs are directly linked to weapons or command‑and‑control systems.[44][45][46]
From an operational perspective, many AI systems are opaque and fragile, and when combined with low‑quality or biased training data, they can give unreliable results in real‑world conditions—for instance, confusing civilians with legitimate military targets. Relying on these systems for critical functions risks speeding operations toward mistakes, reducing meaningful human oversight, and increasing the chance of “imprecise or unlawful targeting.” At the strategic level, the black-box nature of AI and rapid automated decision-making may worsen misperceptions and accidental escalation. States might misinterpret AI-generated warnings or manipulative data as signs of an imminent attack, particularly during crises or in contexts involving nuclear command systems.[47][45][46][44]
- International Legal Challenges
International humanitarian law (IHL) applies to all forms of warfare, including autonomous and AI‑assisted weapons, but it does not yet have specific rules designed solely for military AI. Instead, these systems are evaluated under existing principles, such as distinction between civilians and combatants, proportionality of force, and precautions during attacks. States are also obliged to review new weapons to ensure they are legal. This means that AI‑enabled systems are not automatically illegal, but they can be unlawful in their design or use if they prevent commanders from making necessary legal judgments or if they cannot reliably comply with IHL on the battlefield.[48][49]
Using AI in warfare raises several legal and practical challenges. First, AI’s decision-making can be difficult to explain, making weapon reviews and assessments of distinction and proportionality more complicated. Second, the development of AI often involves private companies and dual‑use technologies, creating uncertainty about who is responsible and how oversight should work, especially when systems are adapted for combat. Third, accountability becomes tricky if autonomous systems malfunction or cause unlawful harm. While states remain ultimately responsible under international law, it is increasingly hard to assign responsibility to developers, commanders, or operators. These issues are central to ongoing discussions at the UN and other forums, highlighting the lack of clear limits on autonomy and human control in military AI.[45][50][49][44][48]
- Ethical Issues Related to Autonomous Weapons
Ethical discussions about AI in warfare often ask whether it is right to let machines make life‑and‑death choices. Critics point out that autonomous weapons cannot exercise real moral judgment or read the subtle cues that humans rely on to tell civilians from combatants, understand intentions, or judge if force is proportionate. Even if these systems are technically precise, they may still conflict with humanitarian principles, because they reduce human lives to data points in optimization processes.[51][52]
Another concern is the potential for “moral deskilling” and the weakening of human responsibility. When operators start trusting or relying too much on algorithmic suggestions, their own judgment and accountability can decline, increasing dependence on systems they do not fully understand. This accountability gap—where it is unclear who is morally or legally liable if an autonomous system causes harm—challenges just‑war principles and could normalize impersonal, automated killing. In addition, biases in the data used to train AI can produce unfair outcomes, putting certain groups or regions at higher risk and worsening existing inequalities or grievances.[49][52][44][51]
Seventh- The Future of Warfare in the Age of Artificial Intelligence
AI is likely to push warfare toward being faster, more data‑driven, and spread across wider areas. Future battles may involve autonomous or semi‑autonomous swarms of drones, AI-assisted decision-making in high-level command centers, cyber operations targeting key infrastructure, and constant algorithmic monitoring that keeps competition “always on,” even below formal declarations of war. These changes can make operations more efficient and reduce risks for one’s own forces, but they also create the danger that escalation and collateral damage could spread through tightly linked technical systems rather than deliberate political decisions.[50][46][47][45]
At the same time, growing public concern and expert warnings have sparked a pushback against unchecked military AI. There are calls for banning or strictly limiting some autonomous weapons, ensuring meaningful human control, and creating stronger transparency and auditing practices. The future of warfare in the AI era will therefore be shaped not only by new technologies but also by how well international efforts succeed in regulating and setting norms for AI use in the military. Whether AI ends up stabilizing or destabilizing regional and global security will depend on how states combine innovation with restraint, and how they keep human judgment, accountability, and humanitarian principles at the center of military action.[52][53][45][50][49]
Conclusion
The study suggests that artificial intelligence has evolved from a supporting tool into a central strategic element in regional conflicts, especially within the triangular relationship between Iran, Israel, and the United States. AI is now embedded across UAVs and other autonomous systems, intelligence analysis, cyber operations, and information warfare, fundamentally changing how states project power and manage crises. This integration speeds up decision-making, broadens surveillance and targeting capabilities, and enables states to carry out high-impact operations—such as Stuxnet-style cyberattacks or AI-assisted precision strikes—without the need for large-scale conventional deployments.[54]
At the same time, the research highlights that AI increases both military strengths and vulnerabilities. Forces that are highly digitized gain new operational advantages but also face greater exposure to cyberattacks, data manipulation, and failures in complex AI systems. The wider availability of AI also empowers smaller states and non-state actors, who can use commercial drones, open-source AI models, and cyber tools to challenge established power hierarchies and complicate traditional deterrence. Overall, AI contributes to a more fluid and contested regional balance of power, where advantage depends not only on material resources but also on digital infrastructure, algorithmic performance, and effective risk management.[55]
Artificial intelligence is reshaping regional conflicts along three main dimensions: operational practice, strategic interaction, and escalation dynamics. Operationally, AI-enabled systems compress the observe–orient–decide–act cycle, allowing continuous ISR, near real-time targeting, and integrated cyber-kinetic operations. In confrontations like those between Iran, Israel, and the United States, this has produced “machine-speed” engagements in air and missile defense, swarm drone operations, and algorithmic targeting, where humans increasingly supervise rather than directly control every action.[56]
Strategically, AI alters how states perceive deterrence, vulnerability, and escalation risks. Better detection and attribution can reinforce deterrence by punishment, yet the speed and opacity of AI systems may increase mistrust and encourage preemptive moves, especially when actors fear AI-enabled strikes could disable critical networks or command systems. The spread of AI-powered cyber and information tools suggests that conflicts will increasingly include “gray zone” activity—continuous cyber probing, disinformation, and proxy operations—blurring the line between peace and war. Looking forward, regional conflicts will likely be shaped by the interaction of AI-enhanced conventional forces, advanced cyber and information operations, and technologically empowered non-state actors, making crisis management and escalation control more challenging.[56]
Overall, the findings suggest that artificial intelligence has moved from being just a supporting tool to becoming a central factor shaping modern regional conflicts. It now drives new forms of precision strikes, continuous surveillance, and integrated cyber‑information campaigns, while at the same time introducing new vulnerabilities and uncertainties. The case of Iran, Israel, and the United States shows how AI can amplify both state power and systemic risk—enabling faster, more effective operations but also opening up new channels for escalation and governance challenges.
The study therefore argues that AI's role in regional security is inherently dual-use. It provides significant operational and strategic benefits, yet it also disrupts existing power balances and complicates traditional ideas of deterrence and crisis management. Whether AI ultimately helps stabilize or destabilize the Middle East will depend on how states develop, regulate, and apply AI-enabled systems, and on whether they can create norms, safeguards, and cooperative practices to manage its use responsibly.
References
- “Conventional Warfare,” Wikipedia, Last modified February 17, 2002, https://en.wikipedia.org/wiki/Conventional_warfare.
- “What Is Conventional Warfare?” Small Wars Journal, January 3, 2012, https://smallwarsjournal.com/2012/01/03/what-is-conventional-warfare/.
- “The Role of Conventional Forces in Modern Warfare and Hybrid Threats,” Finabel Information Flash, July 2022, https://finabel.org/wp-content/uploads/2022/07/IF-20.07-1.pdf.
- “Strategic Approaches in Modern Anti-Armor Warfare Strategies,” Valor Journey, August 6, 2024, https://valorjourney.com/anti-armor-warfare-strategies/.
- “Tomahawks, Bunker-Busters and Ballistic Missiles: Weapons Driving the Israel–Iran War,” The Times of India, February 27, 2026, https://timesofindia.indiatimes.com/defence/tomahawks-bunker-busters-and-ballistic-missiles-weapons-driving-israel-iran-war/amp_articleshow/128892406.cms.
- Digitally-Enabled Warfare: The Capability–Vulnerability Paradox, CNAS (Center for a New American Security), 2017, Washington, DC: CNAS, https://www.cnas.org/publications/reports/digitally-enabled-warfare-the-capability-vulnerability-paradox.
- “The Digital Age of Cyber and Information Warfare,” IDS International, February 21, 2023, https://idsinternational.com/blog/the-digital-age-of-cyber-and-information-warfare/.
- “Cyber-Conventional Confluence: The Evolution of Modern Battlefields,” Modern Diplomacy, February 26, 2024, https://moderndiplomacy.eu/2024/02/26/cyber-conventional-confluence-the-evolution-of-modern-battlefields/.
- “Israel–Iran Conflict Unleashes Wave of AI Disinformation,” BBC News, June 20, 2025, https://www.bbc.com/news/articles/c0k78715enxo.
- “AI in Aerospace & Defence: Autonomous Systems, Surveillance, Decision Support,” Arensic International, November 22, 2025, https://arensic.international/ai-in-aerospace-defence-autonomous-systems-surveillance-decision-support/.
- “AI in Military: Top Use Cases You Need to Know,” SmartDev Blog, September 9, 2025, https://smartdev.com/ai-use-cases-in-military/.
- “Military Applications of Artificial Intelligence,” Wikipedia, 2022, https://en.wikipedia.org/wiki/Military_applications_of_artificial_intelligence.
- Ibid.
- “AI in Military: Top Use Cases You Need to Know,” SmartDev Blog, September 9, 2025, https://smartdev.com/ai-use-cases-in-military/.
- “AI in Aerospace & Defence: Autonomous Systems, Surveillance, Decision Support,” Arensic International, November 22, 2025, https://arensic.international/ai-in-aerospace-defence-autonomous-systems-surveillance-decision-support/.
- Digitally‑Enabled Warfare: The Capability–Vulnerability Paradox, CNAS (Center for a New American Security), 2017, Washington, DC: CNAS, https://www.cnas.org/publications/reports/digitally-enabled-warfare-the-capability-vulnerability-paradox.
- “AI for Military Decision‑Making,” CSET (Center for Security and Emerging Technology), March 31, 2025, https://cset.georgetown.edu/publication/ai-for-military-decision-making/.
- “Israel–Iran War and Artificial Intelligence,” Red Analysis, June 30, 2025, https://redanalysis.org/2025/06/30/israel-iran-war-ai/.
- “Algorithmic Targeting in the Iranian–Israeli Confrontation: Technical Realities, Legal Thresholds, and the Boundaries of Human Control.” F1000Research 14 (1200), 2025, https://f1000research.com/articles/14-1200.
- “Cyber-Conventional Confluence: The Evolution of Modern Battlefields,” Modern Diplomacy, February 26, 2024, https://moderndiplomacy.eu/2024/02/26/cyber-conventional-confluence-the-evolution-of-modern-battlefields/.
- “AI and the Evolution of Asymmetric Cyber Warfare: Insights from the 2025 Israel–Iran Conflict,” TRENDS Research & Advisory, August 25, 2025, https://trendsresearch.org/insight/ai-and-the-evolution-of-asymmetric-cyber-warfare-insights-from-the-2025-israel-iran-conflict/.
- “Artificial Intelligence in the Military Domain and Its Implications for International Peace and Security,” UNIDIR (United Nations Institute for Disarmament Research), December 4, 2025, https://unidir.org/publication/artificial-intelligence-in-the-military-domain-and-its-implications-for-international-peace-and-security-an-evidence-based-road-map-for-future-policy-action/.
- “The Digital Age of Cyber and Information Warfare,” IDS International, February 21, 2023, https://idsinternational.com/blog/the-digital-age-of-cyber-and-information-warfare/.
- “Israel–Iran Conflict Unleashes Wave of AI Disinformation,” BBC News, June 20, 2025, https://www.bbc.com/news/articles/c0k78715enxo.
- Council on Foreign Relations, “Confrontation between the United States and Iran,” Global Conflict Tracker, 2026, https://www.cfr.org/global-conflict-tracker/conflict/confrontation-between-united-states-and-iran.
- “Israel–Iran–United States Conflict: Historical Background and Recent Issues,” Shankar IAS Parliament, 2024, https://www.shankariasparliament.com/current-affairs/gs-ii-health/israeliranunited-states-conflict-historical-background-and-recent-issues.
- “Israel–Iran Conflict,” Encyclopaedia Britannica, 2025, https://www.britannica.com/event/Israel-Iran-conflict.
- “Explained: Israel–Iran Tensions and Recent Escalations,” BBC News, 2025, https://www.bbc.com/news/articles/crlddd02w9jo.
- “Background of the Iran–Israel War,” Wikipedia, 2024, https://en.wikipedia.org/wiki/Background_of_the_Iran–Israel_war.
- “Did Stuxnet Take Out 1,000 Centrifuges at the Natanz Enrichment Plant?” Institute for Science and International Security (ISIS), 2010, https://isis-online.org/isis-reports/did-stuxnet-take-out-1000-centrifuges-at-the-natanz-enrichment-plant/.
- “Stuxnet Malware and Natanz: Update of ISIS December 22, 2010 Report,” Institute for Science and International Security (ISIS), February 16, 2011, https://isis-online.org/isis-reports/stuxnet-malware-and-natanz-update-of-isis-december-22-2010-reportsupa-href1/.
- “U.S.–Israeli Strikes on Iran: Use of Drones and AI,” ETC Journal, March 2, 2026, https://etcjournal.com/2026/03/02/u-s-israeli-strikes-on-iran-use-of-drones-and-ai/.
- “Algorithmic Targeting in the Iranian–Israeli Confrontation: Technical Realities, Legal Thresholds, and the Boundaries of Human Control,” F1000Research 14 (1200), 2025, https://f1000research.com/articles/14-1200.
- “AI Could Be Giving US Lethal Edge in Iran War – but There Are Dangers,” Sky News, March 4, 2026, https://news.sky.com/story/ai-could-be-giving-us-lethal-edge-in-iran-war-but-there-are-dangers-13514784.
- “The Impact of Artificial Intelligence on Regional Security,” UNIDIR (United Nations Institute for Disarmament Research), 2025, https://unidir.org/wp-content/uploads/2025/02/UNIDIR_The_Impact_of_Artificial_Intelligence_on_Regional_Security.pdf.
- “Artificial Intelligence, International Competition, and the Balance of Power,” TNSR (Texas National Security Review), 2018, https://tnsr.org/2018/05/artificial-intelligence-international-competition-and-the-balance-of-power/.
- “Artificial Intelligence and the Changing Balance of Power in the Middle East,” Arab News, May 30, 2024, https://www.arabnews.com/node/2519841.
- “Basic Geopolitics of Artificial Intelligence: Digital Sovereignty and New Power Balances in the 21st Century,” Notizie Geopolitiche, 2024, https://www.notiziegeopolitiche.net/basic-geopolitics-of-artificial-intelligence-digital-sovereignty-and-new-power-balances-in-the-21st-century/.
- “Artificial Intelligence and the Future of Strategic Competition,” Oxford University Press, 2023, https://academic.oup.com/edited-volume/41989/chapter-abstract/386782541?redirectedFrom=fulltext.
- “Algorithmic Deterrence: U.S.–China AI Arms Race,” EPIS Think Tank, 2024, https://www.epis-thinktank.com/publications/algorithmic-deterrence:-u.s.-china-arms-race.
- “Deterrence through AI-Enabled Detection and Attribution,” Kissinger Center for Global Affairs, 2024, https://kissinger.sais.jhu.edu/programs-and-projects/kissinger-center-papers/deterrence-through-ai-enabled-detection-and-attribution/.
- “Agentic Artificial Intelligence and Autonomous Cyber Operations,” arXiv, 2025, https://arxiv.org/html/2503.04760v1.
- “Leveling the Battlefield: AI-Enabled Technology in the Hands of Non-State Actors,” Pacific Forum, 2024, https://pacforum.org/publications/yl-blog-90-leveling-the-battlefield-ai-enabled-technology-in-the-hands-of-non-state-actors/.
- “The Risks and Inefficacies of AI Systems in Military Targeting Support,” International Committee of the Red Cross (ICRC), 2024, https://blogs.icrc.org/law-and-policy/2024/09/04/the-risks-and-inefficacies-of-ai-systems-in-military-targeting-support/.
- “Safety and War: Safety and Security Assurance of Military AI Systems,” AI Now Institute, 2024, https://ainowinstitute.org/publications/safety-and-war-safety-and-security-assurance-of-military-ai-systems.
- “Navigating Cyber Vulnerabilities in AI‑Enabled Military Systems,” European Leadership Network, 2024, https://europeanleadershipnetwork.org/commentary/navigating-cyber-vulnerabilities-in-ai-enabled-military-systems/.
- “Analysts Weigh Risks of Artificial Intelligence for Military Purposes,” ADF Magazine, 2025, https://adf-magazine.com/2025/04/analysts-weigh-risks-of-artificial-intelligence-for-military-purposes/.
- “Autonomous Weapon Systems in International Humanitarian Law,” Joint Air Power Competence Centre (JAPCC), 2023, https://www.japcc.org/articles/autonomous-weapon-systems-in-international-humanitarian-law/.
- Autonomous Weapon Systems under International Humanitarian Law, International Committee of the Red Cross (ICRC), 2021, https://www.icrc.org/sites/default/files/document/file_list/autonomous_weapon_systems_under_international_humanitarian_law.pdf.
- “Why Military AI Needs Urgent Regulation,” DiploFoundation, 2024, https://www.diplomacy.edu/blog/why-military-ai-needs-urgent-regulation/.
- “Artificial Intelligence in Modern Warfare: Ethics, Autonomy, and Global Security,” Australian Institute of International Affairs, 2024, https://www.internationalaffairs.org.au/qld-news/artificial-intelligence-in-modern-warfare-ethics-autonomy-and-global-security/.
- “The Backlash against Military AI: Public Sentiment, Ethical Tensions, and the Future of Autonomous Warfare,” TRENDS Research & Advisory, September 24, 2025, https://trendsresearch.org/insight/the-backlash-against-military-ai-public-sentiment-ethical-tensions-and-the-future-of-autonomous-warfare/.
- “The Impact of Artificial Intelligence on Regional Security,” UNIDIR (United Nations Institute for Disarmament Research), 2025, https://unidir.org/wp-content/uploads/2025/02/UNIDIR_The_Impact_of_Artificial_Intelligence_on_Regional_Security.pdf.
- Ibid.
- “AI Could Be Giving US Lethal Edge in Iran War – but There Are Dangers,” Sky News, March 4, 2026, https://news.sky.com/story/ai-could-be-giving-us-lethal-edge-in-iran-war-but-there-are-dangers-13514784.
- “Algorithmic Targeting in the Iranian–Israeli Confrontation: Technical Realities, Legal Thresholds, and the Boundaries of Human Control,” F1000Research 14 (1200), 2025, https://f1000research.com/articles/14-1200.