Artificial intelligence (AI) has transformed the media landscape in the Middle East, posing potential threats to the stability of countries in the region. This insight examines how AI is reshaping online influence operations. The influence operations of ISIS, the Muslim Brotherhood, and the Islamic Republic of Iran are examined as case studies and integrated with psychological literature on influence and persuasion. These cases were chosen because each represents a distinct level of capacity, from a diminished terrorist group to a state actor, illustrating that AI technologies can be exploited by a diverse range of actors for hostile purposes. Finally, the study discusses possible future risks and outlines potential pathways policymakers in the region could adopt to help mitigate these challenges.
1. ISIS’s influence operations
Figure 1: Pro-ISIS content on Twitter

Source: Figure reproduced from “What It’s like to Be Recruited by ISIS Online,” Business Insider, May 22, 2015, https://www.businessinsider.com/what-its-like-to-be-recruited-by-isis-2015-5.
A study of approximately 6,000 individuals across various Arab countries revealed a worrying result: respondents who used the internet as a source of political news were significantly more likely to support ISIS.[[1]] At its peak, ISIS proactively carried out propaganda campaigns online. The group relentlessly exploited social media algorithms’ tendency to promote provocative content,[[2]] mass-producing millions of pro-ISIS tweets.[[3]] ISIS even created an application called the Dawn of Glad Tidings, which posted pro-ISIS tweets from the accounts of the app’s users.[[4]] Such methods exploit a psychological bias, known as the illusory truth effect, whereby content is considered more persuasive the more it is repeated, even if false.[[5]] The repeated proliferation of such content can also distort perceptions of the popularity of an extremist group such as ISIS, which is, in reality, deeply unpopular.[[6]] This plays to the group’s advantage, since humans show a herd-like psychological tendency to heuristically perceive information deemed popular as more credible.[[7]]
Alarming as such examples are, the international community can consider it fortunate that the territorial defeat of ISIS occurred before AI technologies reached their current stage of advancement. For instance, at its peak, ISIS could post 40,000 pro-ISIS tweets in a single day through the Dawn of Glad Tidings app.[[8]] These tweets were composed by human social media managers. Today, an organization can simply use AI-powered bots to mass-produce such tweets at a much larger scale, tapping into the aforementioned illusory truth effect. Developing such a bot merely requires access to an open application programming interface (API), which any user can obtain. These bots rely on large language models (LLMs), which learn statistical patterns in language.[[9]] Once trained, a model can continually generate new tweets by predicting the next word in a sequence, guided by prompts. Indeed, ISIS, which remains active across the globe despite losing territorial control, today uses bots to amplify its content on platforms such as Telegram.[[10]] These bots focus on activities such as fostering communities of ISIS supporters on those platforms.
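To make this mechanism concrete, the minimal sketch below illustrates next-token prediction with an open-weight language model. It assumes Python, the Hugging Face transformers library and the small, publicly available gpt2 checkpoint, and it uses a deliberately neutral prompt; it is an educational illustration of the generation loop described above, not a depiction of any group's actual tooling.

```python
# Minimal sketch of next-token prediction with an open-weight language model.
# Assumes the Hugging Face `transformers` library and the small "gpt2" checkpoint;
# the prompt is deliberately neutral and purely illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The weather in the city today is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Generate ten tokens greedily: at each step the model scores every token in its
# vocabulary and the highest-scoring one is appended to the sequence.
with torch.no_grad():
    for _ in range(10):
        logits = model(input_ids).logits   # scores for the next token at each position
        next_id = logits[0, -1].argmax()   # most likely continuation
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

The brevity of this loop is precisely the point: generating fluent text at scale no longer requires specialist skills, which is what makes automated amplification so cheap to attempt.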
ISIS has also used generative AI to enhance the persuasiveness of its online propaganda and to profile potential recruits, selecting individuals who seek out extremist content online for microtargeting with personalized messages.[[11]] LLMs have been found to be just as persuasive propagandists as humans;[[12]] crucially, however, they allow propaganda to be generated with minimal effort and cost. Having already proven itself capable of conducting effective online propaganda campaigns, ISIS could find in LLMs a potentially powerful propaganda weapon. This is especially so given that the share of Middle Eastern populations using social media continues to grow rapidly.[[13]]
ISIS has also experimented with deepfakes.[[14]] Modern influence campaigns can use generative adversarial networks (GANs) and diffusion models to create hyper-realistic human faces.[[15]] For instance, following its deadly attack in Russia in 2024, ISIS produced an AI-generated video resembling an official news broadcast and disseminated it on an underground ‘Rocket.Chat’ forum.[[16]] Despite the group’s underground status, such slick, professional-style broadcasts can create an air of legitimacy. Additionally, ISIS members have recently employed automatic speech recognition (ASR) systems to instantly transcribe and translate terrorist speeches, and have even created guides for their followers on how to use AI content-creation tools.[[17]]
Overall, the very fact that ISIS can use such tools today, despite its diminished size and resources, reflects just how accessible AI tools have become. Clearly, even small, decentralized cells can now produce propaganda without advanced technical skills. As the next section discusses, however, AI tools are also exploited today by a transnational extremist organization in the Middle East that remains large and influential: the Muslim Brotherhood.
2. The Muslim Brotherhood’s digital brigades
Figure 2: Pro-Muslim Brotherhood content online

Source: Figure reproduced from Linda Herrera and Mark Lotfy, “E-Militias of the Muslim Brotherhood: How to Upload Ideology on Facebook,” Jadaliyya, September 2012, https://www.jadaliyya.com/Details/27013.
The Muslim Brotherhood engaged with the digital sphere to promote its ideology as early as the 2012 Egyptian presidential election. During this period, it employed “E-militias” in the form of fake Facebook accounts to sway Egyptian opinion in its favor.[[18]] To this day, the organization continues to conduct influence operations in Egypt and other parts of the Middle East. For example, pro-Brotherhood accounts spread hashtags to give the illusion of mass support for revolution.[[19]] Such hashtags further exploit the bias of social media algorithms, which today run on machine-learning models, toward attention-grabbing content. Like ISIS, the Brotherhood also appears to employ bot-farming and deepfakes. In 2020, Facebook took down 8,000 fake accounts linked to the Brotherhood, which were spreading misinformation across countries such as Egypt, Turkey and Morocco.[[20]] More recently, in 2021, Meta took down a network of accounts run from Turkey that targeted Instagram users in Libya and coordinated common hashtags. These accounts were traced back to the Brotherhood-affiliated Libyan Justice and Construction Party and often used AI-generated profile pictures.[[21]] A 2025 study found that individuals frequently misclassified such AI-generated images, including human portraits, as authentic.[[22]] This may be especially concerning given the primacy of visual information in human perception formation.[[23]]
Beyond enhancing credibility and persuasion, the use of these fake identities can also help Brotherhood members evade legal risks in countries where the organization is proscribed. In the UAE, for example, posting content supporting the Brotherhood would violate Article 21 of the country’s federal decree-law on countering rumors and cybercrimes.[[24]] Obscuring the real identities of those operating these accounts threatens to diminish the ability to enforce such laws.
However, while ISIS has used bots and deepfakes to create overt propaganda, the Brotherhood also engages in more subtle tactics. Brotherhood-affiliated individuals have been linked to the ISNAD operation, which leveraged AI-assisted text generation to create fake online accounts impersonating users from target countries, sowing internal discord by promoting politically charged rhetoric.[[25]] By using AI tools to pose as local users, these individuals engaged in a subtle form of digital espionage. For centuries, spy agencies have employed real individuals trained extensively to pass as natives of a target country in pursuit of political ends; with AI, the same can increasingly be achieved with little effort. Masquerading as locals in a society targeted for an influence operation exploits the finding that messages delivered by in-group members are psychologically more persuasive.[[26]] Moreover, the tactic of fueling internal political divisions, rather than overtly expressing support for Brotherhood positions, creates greater deniability. Once again, this method obscures attribution and makes it difficult for authorities to clearly identify the Brotherhood as the source of the destabilizing rhetoric.
On the whole, these examples and dynamics further underscore the ways AI has reshaped modern influence operations among non-state actors. This provides a useful foundation for understanding how state actors such as Iran now employ comparable and often more sophisticated AI-driven strategies, as is discussed in the following section.
3. The use of AI in influence operations by state actors: the case of Iran
Figure 3: Visual depiction of Iranian online influence operations, combining the Iranian flag with digital code imagery

Source: Figure reproduced from “Microsoft Special Report: Iran’s Adoption of Cyber-Enabled Influence Operations,” CSO Online, June 14, 2023, https://www.csoonline.com/article/575581/microsoft-special-report-iran-s-adoption-of-cyber-enabled-influence-operations.html.
The previous sections described how artificial intelligence can be, and has been, utilized by ISIS and the Muslim Brotherhood in the Middle East. Inevitably, however, such organizations do not hold a monopoly over malicious influence operations. A state actor—Iran—has recently exploited innovations in AI to conduct coordinated cyberattacks and hacking operations.[[27]] Further, in 2024, OpenAI disrupted an Iranian influence operation that used ChatGPT accounts to generate long-form articles and social media comments aimed at swaying opinions on global political events.[[28]] These tactics are not isolated developments. Mirroring the preceding examples, Iran has long invested in the mass distribution of propaganda online. For example, a 2020 study compiled a dataset of 1.7 million Iranian state-linked tweets—many generated by bots—which sought to inflame hostility toward Saudi Arabia at a moment of heightened geopolitical tension.[[29]] These activities are organized primarily through the Islamic Revolutionary Guard Corps (IRGC) and a number of affiliated cyber groups.[[30]]
Akin to the strategies employed by the Muslim Brotherhood, the Iranian regime often sees greater utility in inflaming internal tensions than in propaganda campaigns that overtly express support for the regime. Its operations stretch as far as the United Kingdom, where a report linked thousands of social media posts expressing support for Scottish independence to a coordinated Iranian bot-farming campaign mimicking Scottish users.[[31]] These posts amassed an estimated 224 million potential impressions and attracted more than 126,000 interactions. Closer to home, Iranian operators conduct similar influence operations online in Arab countries—for example, by impersonating local Arab news outlets in countries such as Egypt, Saudi Arabia and Bahrain.[[32]]
Where Iranian influence operations stand apart from the others discussed so far in this study is in their effective use of social engineering. Social engineering involves exploiting cognitive heuristics, such as the tendency to comply with requests from authoritative figures,[[33]] to obtain sensitive information from individuals. Other heuristics exploited by groups engaging in social engineering include the similarity heuristic[[34]]—the tendency of individuals to trust people who seem similar to them. In the Iranian case, these tactics are not simply used to gain technical access for espionage but are also embedded within broader influence operations aimed at constraining political actors. The most prominent example is the Charming Kitten cyber-espionage group, whose operations frequently target individuals of strategic interest to the regime. By impersonating researchers, journalists, or policy experts, Charming Kitten operators have approached academics and think-tank analysts, presenting fabricated conference invitations or interview requests as a means of establishing rapport.[[35]] Once trust has been built, the operators direct the target to a credential-harvesting website disguised as a legitimate academic resource. This facilitates unauthorized access to the target’s accounts, enabling surveillance of individuals whose work shapes public discourse and foreign policy debates concerning Iran.
Worryingly, the Iranian regime has shown an interest in using LLMs to assist with such social engineering operations, including in the generation of email content.[[36]] Because LLMs can produce highly persuasive, personalized content, they significantly increase the sophistication and scalability of social engineering attacks while also reducing detectability.[[37]] Malicious actors can now deploy AI-generated text, audio, and video, expanding the range of attack vectors and making their deception operations appear markedly more credible. For example, modern social engineering operations can also use AI voice-cloning tools to credibly imitate trusted figures.[[38]] In addition, deep learning raises the possibility of specialized models that analyze large volumes of publicly available information to tailor manipulation strategies to individual targets.[[39]] Such systems could, for instance, infer a person’s likely concerns, professional interests, or communication style, allowing attackers to craft messages that feel contextually appropriate and psychologically credible. Operations of this kind would once have required considerable manual effort but may now increasingly be carried out automatically. These developments point to major risks for the future, which are expanded upon in the next section.
Future risks
The cases discussed in this study may be a mere prelude to what lies ahead. As alluded to above in the case of Iran, LLMs have increased the risk of social engineering operations conducted by hostile states, a risk that should only grow as LLMs improve in their ability to imitate human writing styles.[[40]] These advances may also enhance the persuasiveness of the bot-farming campaigns highlighted in all of the cases above.
Relatedly, another potential consequence of AI’s increasingly powerful impersonation abilities arises from the examples of both the Muslim Brotherhood and Iran. While their influence operations have imitated ordinary users in targeted societies, a recent study found that LLMs can also credibly impersonate politicians.[[41]] In line with the cognitive bias individuals show toward figures of authority, this may further increase the persuasiveness of such operations. Such developments are assisted by advances in deepfake technologies, which are already used to imitate politicians[[42]] as a form of information warfare.[[43]] Deepfakes are likely to become more realistic with time, as advances in generative modeling continue to improve elements such as emotional consistency and lip synchronization.[[44]]
Furthermore, as briefly discussed above, ISIS has leveraged generative AI to identify and profile prospective recruits and to deliver personalized messages to individuals who have shown an interest in extremist content online. The ability of LLMs to predict personality types from digital footprints[[45]] is expected to be used increasingly for such microtargeting. Personality traits associated with extremism include narcissism, psychopathy and sadism,[[46]] and psychological characteristics such as intolerance of uncertainty have also been implicated.[[47]] These traits could theoretically be detected through analyses of patterns in an individual’s online searches.[[48]] Microtargeting individuals online on the basis of predicted personality has already been used in political advertising campaigns,[[49]] most famously in the Cambridge Analytica scandal. AI-driven predictive models could, in the future, automatically flag users as suitable for recruitment by extremist groups such as ISIS on the basis of such personality prediction. Predictors of personality could also be used to enhance the aforementioned ability of AI to assist with social engineering operations tailored to particular individuals.
Another related future risk concerns chatbots. In 2024, the UK government’s independent reviewer of terrorism legislation warned of the growing threat of chatbot-based recruitment.[[50]] A study in the same year found that LLM-generated texts mimicking the rhetoric of extremist groups seemed so authentic that they deceived even professional experts on the groups in question.[[51]] By simulating natural, human-like interactions, AI chatbots can cultivate a feeling of personal connection and trust with users.[[52]] Because trust is central to successful recruitment,[[53]] such systems could function as scalable recruiters, adapting narratives to individual vulnerabilities.[[54]]
Concluding remarks: policy pathways for prevention
AI is rapidly transforming the dynamics of influence operations. Operations that once required vast manpower can now be carried out with relative ease and at limited cost. This study has attempted to highlight the growing threats this poses in the Middle East through an examination of how these technologies are increasingly used by a diverse range of actors—from relatively depleted terrorist organizations, in the case of ISIS, to developed state actors, in the case of Iran. With the ability to infuse influence operations with these technologies only likely to increase over time, it is imperative that governments develop countermeasures ahead of time to neutralize this threat.
One potential countermeasure involves inoculation theory.[[55]] This “pre-bunking” strategy preemptively trains individuals to spot manipulative material by presenting them with softened versions of extremist claims alongside clear rebuttals. Acting as a psychological vaccine, this helps individuals build long-term cognitive immunity to manipulation, reducing the likelihood that they will be influenced by similar tactics in the future. It is thus considered a more effective long-term strategy than standard fact-checking techniques. Relevant to the current report, a recent study tested an inoculation game in former ISIS-held regions of Iraq, in which participants role-played as terrorist recruiters online, learning these groups’ manipulative tactics from first-hand experience.[[56]] Those who played the game later showed greater resilience to manipulation by extremists.
A key area in which Middle Eastern states can incorporate inoculation as a policy strategy is the classroom. Education curricula can use inoculation to build early digital and AI literacy against extremism. This could take the form of prebunking games that train students to recognize the forms of AI-driven manipulation discussed here, such as bot-farming, deepfakes and social engineering. Similar strategies are already used successfully in the Finnish education system.[[57]] Incorporating such techniques into school curricula may be an especially useful measure given that adolescents are particularly susceptible to social influence.[[58]] Inoculation can also be incorporated into government communication campaigns, where it has been shown to preemptively strengthen belief in the capacity of a state to deal with hypothetical future crises such as terrorist attacks.[[59]] Such techniques can be extended to the domain of influence operations: government communications could preemptively warn, with refuted examples, that hostile actors will likely exploit future crises online using AI tools, even before such crises occur.
In addition, while this study has focused on the risks that emerging AI technologies pose for influence operations, policymakers can increasingly support the use of AI technologies to counter them. For example, machine-learning algorithms (particularly deep-learning ones) can be used to detect bots on social media platforms.[[60]] Machine-learning models also show potential for detecting and countering social engineering operations,[[61]] for example by using natural language processing (NLP) models trained on examples of phishing attempts, as well as for detecting deepfakes.[[62]] Countries such as the UAE, which has recently established the AI Centre of Excellence to assist its sovereign AI strategy, can potentially help shape regional standards for AI-enabled counter-manipulation tools.
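As a rough illustration of what such NLP-based detection can look like, the sketch below trains a simple classifier to flag phishing-style messages. It assumes Python and the scikit-learn library, and the handful of labeled messages is purely hypothetical; operational systems would rely on far larger curated datasets and more sophisticated models.

```python
# Minimal sketch of an NLP classifier for flagging phishing-style messages.
# Assumes scikit-learn; the labeled examples are tiny and purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data (1 = phishing/social engineering, 0 = benign).
messages = [
    "Urgent: verify your account credentials at this link immediately",
    "You are invited to speak at our conference, log in here to confirm",
    "Attached are the notes from today's research seminar",
    "The library will be closed on Friday for maintenance",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression classifier.
detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(messages, labels)

# Score a new message: a higher probability suggests a likely phishing attempt.
new_message = ["Please confirm your username and password to keep access to your files"]
print(detector.predict_proba(new_message)[0][1])
```

In practice, the same pipeline structure can be scaled up to deep-learning models and much larger multilingual corpora.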
Other steps policymakers can take are more regulatory in nature. For instance, Middle Eastern governments and platforms can require AI-generated images, videos, and text to include mandatory labeling and watermarking, as the European Union has done.[[63]] To specifically target the issue of extremists avoiding legal repercussions through the use of bots, governments can continue to introduce regulatory obligations on platforms to monitor and limit automated account activity. These can include increasing identity-based verification requirements for creating social media accounts. One approach toward doing so could be a blockchain-based identity verification system, which could help ensure that each account corresponds to a verifiable individual without compromising user privacy.[[64]] Ultimately, building resilience to AI-enabled influence operations will require a coordinated blend of education, regulation, and technological innovation across the region.
[1] James A. Piazza and Ahmet Guler, “The Online Caliphate: Internet Usage and ISIS Support in the Arab World,” Terrorism and Political Violence 33, no. 6 (2021): 1256–75, https://doi.org/10.1080/09546553.2019.1606801.
[2] Smitha Milli, Micah Carroll, Yike Wang, et al., “Engagement, User Satisfaction, and the Amplification of Divisive Content on Social Media,” preprint, arXiv, 2023, https://doi.org/10.48550/ARXIV.2305.16941.
[3] Majid Alfifi, Parisa Kaghazgaran, James Caverlee et al., “A Large-Scale Study of ISIS Social Media Strategy: Community Size, Collective Influence, and Behavioral Impact,” Proceedings of the International AAAI Conference on Web and Social Media 13 (July 2019): 58–67, https://doi.org/10.1609/icwsm.v13i01.3209.
[4] Faisal Irshaid, “How ISIS Is Spreading Its Message Online,” BBC News, June 19, 2014, https://www.bbc.co.uk/news/world-middle-east-27912569.
[5] Lisa K. Fazio, Nadia M. Brashier, B. Keith Payne et al., “Knowledge Does Not Protect against Illusory Truth,” Journal of Experimental Psychology: General 144, no. 5 (2015): 993–1002, https://doi.org/10.1037/xge0000098.
[6] Robert Luzsa and Susanne Mayr, “False Consensus in the Echo Chamber: Exposure to Favorably Biased Social Media News Feeds Leads to Increased Perception of Public Support for Own Opinions,” Cyberpsychology: Journal of Psychosocial Research on Cyberspace 15, no. 1 (2021), https://doi.org/10.5817/CP2021-1-3.
[7] Robert B. Cialdini and Noah J. Goldstein, “Social Influence: Compliance and Conformity,” Annual Review of Psychology 55, no. 1 (2004): 591–621, https://doi.org/10.1146/annurev.psych.55.090902.142015.
[8] J. M. Berger, “How ISIS Games Twitter,” The Atlantic, June 16, 2014, https://www.theatlantic.com/international/archive/2014/06/isis-iraq-twitter-social-media-strategy/372856/.
[9] Siyu Li, Jin Yang, and Kui Zhao, “Are You in a Masquerade? Exploring the Behavior and Impact of Large Language Model Driven Social Bots in Online Social Networks,” preprint, arXiv, 2023, https://doi.org/10.48550/ARXIV.2307.10337.
[10] Abdullah Alrhmoun, Charlie Winter, and János Kertész, “Automating Terror: The Role and Impact of Telegram Bots in the Islamic State’s Online Ecosystem,” Terrorism and Political Violence 36, no. 4 (2024): 409–24, https://doi.org/10.1080/09546553.2023.2169141.
[11] C. Kozlowskyj, “ISIS’s Adoption of Generative AI Tools,” Bloomsbury Intelligence and Security Institute, September 9, 2025, https://bisi.org.uk/reports/isis-adoption-of-generative-ai-tools, accessed November 24, 2025.
[12] Josh A Goldstein, Jason Chao, Shelby Grossman et al., “How Persuasive Is AI-Generated Propaganda?,” PNAS Nexus 3, no. 2 (2024): pgae034, https://doi.org/10.1093/pnasnexus/pgae034.
[13] “Digital 2024: The United Arab Emirates,” DataReportal – Global Digital Insights, February 21, 2024, https://datareportal.com/reports/digital-2024-united-arab-emirates.
[14] Muhammad Nur Abdul Latif Al Waro’i, “False Reality: Deepfakes in Terrorist Propaganda and Recruitment,” Security Intelligence Terrorism Journal (SITJ) 1, no. 1 (2024): 41–59, https://doi.org/10.70710/sitj.v1i1.5.
[15] Riccardo Corvi, Davide Cozzolino, Giada Zingarini et al., “On The Detection of Synthetic Images Generated by Diffusion Models,” ICASSP 2023 – 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, June 4, 2023, 1–5, https://doi.org/10.1109/ICASSP49357.2023.10095167.
[16] Alessandro Bolpagni, “The Use of Artificial Intelligence within the Salafi-Jihadi Ecosystem on Rocket.Chat: The Unfolding of a New Frontier for Propaganda,” National Security and the Future 25, no. 2 (2024): 213–26, https://doi.org/10.37458/nstf.25.2.7.
[17] Early Terrorist Experimentation with Generative Artificial Intelligence Services, Tech Against Terrorism, Briefing, November 2023, accessed November 21, 2025, https://techagainstterrorism.org/hubfs/Tech%20Against%20Terrorism%20Briefing20-%20Early%20terrorist%20experimentation%20with%20generative%20artificial%20intelligence%20services.pdf.
[18] Linda Herrera and Mark Lotfy, “E-Militias of the Muslim Brotherhood.”
[19] Ahmed El Gody, “The Revolution Continues: Mapping the Egyptian Twittersphere a Decade after the 2011 Revolution,” First Monday, ahead of print, August 3, 2022, https://doi.org/10.5210/fm.v27i8.11775.
[20] “Facebook Takes Down Alleged Muslim Brotherhood Accounts Sharing ‘Inauthentic’ Information,” Middle East Eye, November 6, 2020, https://www.middleeasteye.net/news/facebook-muslim-brotherhood-accounts-taken-down.
[21] Ahmad El-Assasy, “Meta: Fake Libyan Facebook & Instagram Accounts Run By Muslim Brotherhood Removed,” Libya Review, January 21, 2022, https://libyareview.com/20644/meta-fake-libyan-facebook-instagram-accounts-run-by-muslim-brotherhood-removed/.
[22] Daniela Velásquez-Salamanca, Miguel Ángel Martín-Pascual, and Celia Andreu-Sánchez, “Interpretation of AI-Generated vs. Human-Made Images,” Journal of Imaging 11, no. 7 (2025): 227, https://doi.org/10.3390/jimaging11070227.
[23] Jeffrey T. Hancock and Jeremy N. Bailenson, “The Social Impact of Deepfakes,” Cyberpsychology, Behavior, and Social Networking 24, no. 3 (2021): 149–52, https://doi.org/10.1089/cyber.2021.29208.jth.
[24] United Arab Emirates, Federal Decree-Law on Countering Rumors and Cybercrimes, accessed November 20, 2025, https://uaelegislation.gov.ae/en/legislations/1526.
[25] “The ISNAD Campaign in the Israel–Iran War,” INSS, accessed November 21, 2025, https://www.inss.org.il/publication/isnad-iran-israel/.
[26] Diane M. MacKie, M. Cecilia Gastardo-Conaco, and John J. Skelly, “Knowledge of the Advocated Position and the Processing of In-Group and Out-Group Persuasive Messages,” Personality and Social Psychology Bulletin 18, no. 2 (1992): 145–51, https://doi.org/10.1177/0146167292182005.
[27] Michael Mieses, Noelle Kerr, and Nakissa Jahanbani, “Artificial Intelligence Is Accelerating Iranian Cyber Operations,” Lawfare, October 9, 2024, https://www.lawfaremedia.org/article/artificial-intelligence-is-accelerating-iranian-cyber-operations.
[28] “Disrupting a Covert Iranian Influence Operation,” OpenAI, accessed November 17, 2025, https://openai.com/index/disrupting-a-covert-iranian-influence-operation/.
[29] Bastian Kießling, Jan Homburg, Tanja Drozdzynski et al., “State Propaganda on Twitter: How Iranian Propaganda Accounts Have Tried to Influence the International Discourse on Saudi Arabia,” in Disinformation in Open Online Media, ed. Christian Grimme et al., vol. 12021 (Springer International Publishing, 2020), https://doi.org/10.1007/978-3-030-39627-5_14.
[30] “Iran Amplifies Cyber Support for Hamas,” Microsoft, Security Insider, accessed November 17, 2025, https://www.microsoft.com/en-us/security/security-insider/threat-landscape/iran-surges-cyber-enabled-influence-operations-in-support-of-hamas.
[31] “Uncovering Iran’s Online Manipulation Network,” Cyabra, accessed November 17, 2025, https://cyabra.com/reports/uncovering-irans-online-manipulation-network/.
[32] Mona Elswah and Mahsa Alimardani, “Propaganda Chimera: Unpacking the Iranian Perception Information Operations in the Arab World,” Open Information Science 5, no. 1 (2021): 163–74, https://doi.org/10.1515/opis-2020-0122.
[33] Stanley Milgram, “Behavioral Study of Obedience,” The Journal of Abnormal and Social Psychology 67, no. 4 (1963): 371–78, https://doi.org/10.1037/h0040525.
[34] Murtaza Ahmed Siddiqi, Wooguil Pak, and Moquddam A. Siddiqi, “A Study on the Psychology of Social Engineering-Based Cyberattacks and Existing Countermeasures,” Applied Sciences 12, no. 12 (2022): 6042, https://doi.org/10.3390/app12126042.
[35] Kenny Vo, “‘Among the World’s Most Powerful’: Analyzing the Evolution of Iran’s Cyber Espionage, Disruption, and Information Operations Capabilities,” Studies in Conflict & Terrorism, August 12, 2025, 1–16, https://doi.org/10.1080/1057610X.2025.2545790.
[36] “North Korea and Iran Using AI for Hacking, Microsoft Says,” Technology, The Guardian, February 14, 2024, https://www.theguardian.com/technology/2024/feb/14/north-korea-iran-ai-hacking-microsoft.
[37] Sean Gallagher, Ben Gelman, Salma Taoufiq et al., “Phishing and Social Engineering in the Age of LLMs,” in Large Language Models in Cybersecurity, ed. Andrei Kucharavy et al. (Springer Nature Switzerland, 2024), https://doi.org/10.1007/978-3-031-54827-7_8.
[38] Alexandru-Raul Matecas, Peter Kieseberg, and Simon Tjoa, “Social Engineering with AI,” Future Internet 17, no. 11 (2025): 515, https://doi.org/10.3390/fi17110515.
[39] Jingru Yu, Yi Yu, Xuhong Wang et al., “The Shadow of Fraud: The Emerging Danger of AI-Powered Social Engineering and Its Possible Cure,” preprint, arXiv, 2024, https://doi.org/10.48550/ARXIV.2407.15912.
[40] Eugenia Iofinova, Andrej Jovanovic, and Dan Alistarh, “Position: It’s Time to Act on the Risk of Efficient Personalized Text Generation,” preprint, arXiv, 2025, https://doi.org/10.48550/ARXIV.2502.06560.
[41] Steffen Herbold, Alexander Trautsch, Zlata Kikteva et al., “Large Language Models Can Impersonate Politicians and Other Public Figures,” preprint, arXiv, 2024, https://doi.org/10.48550/ARXIV.2407.12855.
[42] Soubhik Barari, Christopher Lucas, and Kevin Munger, “Political Deepfakes Are as Credible as Other Fake Media and (Sometimes) Real Media,” The Journal of Politics 87, no. 2 (2025): 510–26, https://doi.org/10.1086/732990.
[43] Dominika Kuźnicka-Błaszkowska and Nadiya Kostyuk, “Emerging Need to Regulate Deepfakes in International Law: The Russo–Ukrainian War as an Example,” Journal of Cybersecurity 11, no. 1 (2025): tyaf008, https://doi.org/10.1093/cybsec/tyaf008.
[44] Diqiong Jiang, Jian Chang, Lihua You et al., “Audio-Driven Facial Animation with Deep Learning: A Survey,” Information 15, no. 11 (2024): 675, https://doi.org/10.3390/info15110675.
[45] Max Murphy, “Artificial Intelligence and Personality: Large Language Models’ Ability to Predict Personality Type,” Emerging Media 2, no. 2 (2024): 311–24, https://doi.org/10.1177/27523543241257291.
[46] Emily Corner, Helen Taylor, Isabelle Van Der Vegt et al., “Reviewing the Links between Violent Extremism and Personality, Personality Disorders, and Psychopathy,” in Violent Extremism, 1st ed., by Caroline Logan (Routledge, 2021), https://doi.org/10.4324/9781003251545-4.
[47] Simona Trip, Carmen Hortensia Bora, Mihai Marian et al., “Psychological Mechanisms Involved in Radicalization and Extremism. A Rational Emotive Behavioral Conceptualization,” Frontiers in Psychology 10 (March 2019): 437, https://doi.org/10.3389/fpsyg.2019.00437.
[48] Danny Azucar, Davide Marengo, and Michele Settanni, “Predicting the Big 5 Personality Traits from Digital Footprints on Social Media: A Meta-Analysis,” Personality and Individual Differences 124 (April 2018): 150–59, https://doi.org/10.1016/j.paid.2017.12.018.
[49] Almog Simchon, Matthew Edwards, and Stephan Lewandowsky, “The Persuasive Effects of Political Microtargeting in the Age of Generative Artificial Intelligence,” PNAS Nexus 3, no. 2 (2024): pgae035, https://doi.org/10.1093/pnasnexus/pgae035.
[50] “Urgent Need for Terrorism AI Laws, Warns Think Tank,” Technology, BBC News, January 3, 2024, https://www.bbc.co.uk/news/technology-67872767.
[51] Stephane J. Baele, Elahe Naserian, and Gabriel Katz, “Is AI-Generated Extremism Credible? Experimental Evidence from an Expert Survey,” Terrorism and Political Violence 37, no. 8, August 2, 2024, 1–17, https://doi.org/10.1080/09546553.2024.2380089.
[52] Anne Zimmerman, Joel Janhonen, and Emily Beer, “Human/AI Relationships: Challenges, Downsides, and Impacts on Human/Human Relationships,” AI and Ethics 4, no. 4 (2024): 1555–67, https://doi.org/10.1007/s43681-023-00348-8.
[53] John F. Morrison, “The Trustworthy Terrorist: The Role of Trust in the Psychology of Terrorism,” in Victims and Perpetrators of Terrorism: Exploring Identities, Roles and Narratives, ed. Orla Lynch and Javier Argomaniz (London: Routledge, 2017), 16, https://doi.org/10.4324/9781315182490-8.
[54] Tyler Houser and Beidi Dong, “The Convergence of Artificial Intelligence and Terrorism: A Systematic Review of the Literature,” Studies in Conflict & Terrorism, July 14, 2025, 1–24, https://doi.org/10.1080/1057610X.2025.2527608.
[55] William J. McGuire, “Some Contemporary Approaches,” in Advances in Experimental Social Psychology, vol. 1 (Elsevier, 1964), https://doi.org/10.1016/S0065-2601(08)60052-0.
[56] Nabil Saleh, Fadi Makki, Sander van der Linden et al., “Inoculating against Extremist Persuasion Techniques – Results from a Randomised Controlled Trial in Post-Conflict Areas in Iraq,” Advances.in/Psychology 1 (2023), https://doi.org/10.56296/aip00005.
[57] Kari Kivinen, “In Finland, We Make Each Schoolchild a Scientist,” Issues in Science and Technology 29, no. 3 (2023): 41–42, https://doi.org/10.58875/FEXX4401.
[58] Brett Laursen and René Veenstra, “Toward Understanding the Functions of Peer Influence: A Summary and Synthesis of Recent Empirical Research,” Journal of Research on Adolescence 31, no. 4 (2021): 889–907, https://doi.org/10.1111/jora.12606.
[59] Bobi Ivanov, William J. Burns, Timothy L. Sellnow et al., “Using an Inoculation Message Approach to Promote Public Confidence in Protective Agencies,” Journal of Applied Communication Research 44, no. 4 (2016): 381–98, https://doi.org/10.1080/00909882.2016.1225165.
[60] Kai‐Cheng Yang, Onur Varol, Clayton A. Davis et al., “Arming the Public with Artificial Intelligence to Counter Social Bots,” Human Behavior and Emerging Technologies 1, no. 1 (2019): 48–61, https://doi.org/10.1002/hbe2.115.
[61] Hussam N. Fakhouri, Basim Alhadidi, Khalil Omar et al., “AI-Driven Solutions for Social Engineering Attacks: Detection, Prevention, and Response,” 2024 2nd International Conference on Cyber Resilience (ICCR), IEEE, February 26, 2024, 1–8, https://doi.org/10.1109/ICCR61006.2024.10533010.
[62] Anwar Mohammed, “Deep Fake Detection and Mitigation: Securing Against AI-Generated Manipulation,” Journal of Computational Innovation 4, no. 1 (2024), https://researchworkx.com/index.php/jci/article/view/55.
[63] Bram Rijsbosch, Gijs van Dijck, and Konrad Kollnig, “Adoption of Watermarking for Generative AI Systems in Practice and Implications under the New EU AI Act,” preprint, arXiv, 2025, https://doi.org/10.48550/ARXIV.2503.18156.
[64] Stefano Pedrazzi and Franziska Oehmer, “Communication Rights for Social Bots?: Options for the Governance of Automated Computer-Generated Online Identities,” Journal of Information Policy 10 (May 2020): 549–81, https://doi.org/10.5325/jinfopoli.10.2020.0549.