Today, artificial intelligence (AI) plays a crucial role in our daily lives. Media were among the first to incorporate AI algorithms into their workflows since they were consistently seeking ways to enhance the quality, quantity, and speed of their publications. This insight offers an overview of the opportunities, challenges, and risks that twenty-first-century journalists encounter when utilizing AI in their profession. It discusses various aspects of the media landscape, including traditional satellite broadcasting and alternative media, while exploring different genres of journalism, ranging from investigative to socio-political. The research draws on materials from Arab, European, and Russian scholars, as well as the author’s personal experiences working for an international television channel. The insight also offers recommendations for enhancing the application of AI in journalism, which can be implemented by companies involved in AI development. This study may prove beneficial for journalism students, seasoned journalists, and anyone interested in gaining a deeper understanding of the role of AI in the media sector.
Opportunities of AI in media
Today, users—whether they are readers or viewers—choose media sources that are the first to publish specific pieces of information or news. If a news agency fails to be the first to report on an event, especially if it is an exclusive story, it is unlikely to secure even second place; instead, it will most likely go unnoticed and be relegated to the bottom of search results.
The capacity of AI to assist journalists in processing information rapidly represents a significant opportunity for its application in media. AI functions as a style editor, efficiently condensing news content so that algorithms can easily recognize and elevate it in search rankings. When journalists receive exclusive video or photographic material, they can swiftly add a logo or watermark to maintain the exclusivity that fosters popularity, recognition, and priority in reporting.
Moreover, an increasing number of materials require specific parts to be blurred to prevent account blocking or penalties. This security measure, utilizing blur effects, is another advantage of incorporating AI into journalism. The need to publish news quickly to outpace competing media outlets is common among news channels, agencies, online portals, and social networks such as X and Telegram.
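At its core, the blurring mentioned above is a straightforward image-processing operation. As an illustrative sketch only, not the algorithm of any particular newsroom tool, a box blur restricted to a selected region of a grayscale image can be written in plain Python:

```python
def blur_region(image, top, left, height, width, radius=1):
    """Return a copy of `image` (a list of rows of grayscale values)
    with a box blur applied only to the given rectangular region,
    e.g. to hide a face or a document before publication."""
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(top, min(top + height, rows)):
        for x in range(left, min(left + width, cols)):
            total, count = 0, 0
            # Average the pixel with its in-bounds neighbours within `radius`.
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < rows and 0 <= nx < cols:
                        total += image[ny][nx]
                        count += 1
            out[y][x] = total // count
    return out
```

Real editorial tools work on full-color video frames and use stronger filters such as Gaussian blur or pixelation, but the principle of averaging each pixel with its neighbors is the same.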
A separate layer consists of media that provide readers with a deeper immersion into the topic by addressing historical facts and data relevant to a particular event. When preparing analytical reports, historical accounts, political programs, or long-read articles, journalists can utilize AI to access a wealth of information far beyond what could be found through manual searches. In this context, AI assists in searching and analyzing databases as well as historical documents, delivering pertinent research in specific fields.
AI is also essential for investigative journalism. Editors and reporters in this genre often need to delve deeply into topics, verifying the reliability of information and facts while working to expose fake news. Previously, certain data might have confused journalists, but today, AI effectively eliminates false statements at the initial stage, simplifying the process for journalists seeking the truth in their investigations [3].
In the sea of news and content, AI helps to find exactly the reader or viewer who will be waiting for a given post, news item, or program. With the help of AI, a journalist can now study their audience and its requests and create more personalized, targeted content in a matter of minutes, without resorting to large-scale statistical studies.
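In essence, this kind of audience targeting is content-based filtering. The toy sketch below ranks articles by keyword overlap with a reader's stated interests; the data structures and scoring are illustrative assumptions, and production recommenders rely on far richer behavioral signals and learned models:

```python
def personalize(articles, interests, top_n=3):
    """Rank articles for a reader by keyword overlap with their interests.
    A toy stand-in for the content-based filtering that AI recommender
    systems perform at far larger scale."""
    def score(article):
        # Combine title words and editorial tags into one keyword set.
        words = set(article["title"].lower().split()) | set(article["tags"])
        return len(words & interests)
    return sorted(articles, key=score, reverse=True)[:top_n]
```

Even this crude overlap score illustrates the mechanism behind both personalization and, pushed too far, the information bubbles discussed later.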
One of the most significant advantages of utilizing AI in journalism during the era of globalization and interconnected media is its language capabilities. AI's natural language processing (NLP) abilities allow journalists from various countries to access information without being hindered by language barriers. This is particularly beneficial for journalists in the Middle East who are not native Arabic speakers, given the considerable differences in pronunciation and dialects there. Consequently, the automation of material preparation, the acceleration of publication speed, the in-depth analysis of supplementary materials, the identification of false information, and the seamless engagement with source languages—all these features—represent the essential opportunities that AI offers to journalists today.
Challenges of AI usage in media
One of the potential challenges associated with using AI in media is the manipulation of algorithms, particularly within alternative media. Alternative media often function on platforms that prioritize decentralization, such as blockchain networks or decentralized social platforms. This characteristic allows AI algorithms in these media to be employed in a more community-focused manner. In contrast to large social networks, where algorithms typically aim to maximize engagement and advertising revenue, alternative media can leverage AI to serve different objectives, such as promoting a specific ideology or supporting a particular community. It is important to note that many alternative media outlets utilize algorithms to counteract censorship and manipulation. For instance, they may deploy algorithms to detect and filter out disinformation or to circumvent content blocks.
However, the same AI algorithms can also be exploited to disseminate disinformation or manipulate public opinion. AI-powered social media algorithms can create so-called information bubbles in which users only see information that aligns with their beliefs, leading to greater polarization [6]. AI systems in media often collect and process large amounts of personal user data, including viewing history, preferences, demographics, and more, in order to show readers the content they are likely to want to see. This raises concerns about the potential misuse of this data, information leaks, and violations of privacy.
Quality is one of the distinguishing features of classical journalism. Where material once took considerable time to prepare for publication, the rise of social networks and the ability to “swipe” away any content a viewer dislikes within the first three seconds mean that journalists pay less and less attention to quality in its classical sense. Today, the quality of media is assessed by the quality of its visual content, while the quality of the text is assessed against the level of erudition of the target audience of a particular publication or channel. AI cannot substitute for human judgment: its application in the media may result in the oversimplification of complex issues and a disregard for nuance. Moreover, an overreliance on AI can diminish the quality of analytical journalism [4].
In investigative journalism, exclusive footage or recorded conversations allegedly provided to a program's creators play a major role. Recorded conversations in particular serve as leaks that allow journalists to reveal what should have remained inaccessible; such recordings are often available only to the security services that protect our peace. However, degrading the audio quality or altering a voice under the guise of technical problems is a simple task for AI, which can replace reality and present entirely different information under the heading of “secret” within investigative journalism, when in fact it is fake, misleading ordinary readers and viewers. Leaked telephone or radio conversations are especially characteristic of military investigative journalism, and once AI is involved, not everyone can verify the authenticity of these materials.
Risks to be faced
The automation of processes related to information publication and text preparation, combined with the widespread availability of AI tools, poses a risk of diminishing trust in media as a reliable source of news and information [1]. AI's limited ability to understand the context of an event is another risk of its use in journalism [2]. A deep understanding of socio-cultural realities is directly correlated with an understanding of the deeper meanings that journalists convey in their work. Journalism is also inherently discursive: news emerges within society and holds significance for specific social groups owing to various socio-cultural factors and cultural backgrounds. The absence of the contextual understanding—both verbal and non-verbal—that characterizes every society heightens the risk of misunderstandings between AI algorithms and individuals, as well as between these algorithms and society at large.
The diminishing presence of the human element in news and content creation through AI poses significant risks to the media landscape. This issue particularly impacts journalism genres that rely on polymodal text for content, such as cartoon publications or those employing satire as a means of conveying information [7]. The human factor in journalism enables readers to grasp the author’s level of erudition, as authors often employ various linguistic tropes, such as anaphora, wordplay, and hyperbole, to convey deeper meanings within the news. While AI can generate machine-produced linguistic wordplay, it frequently falls short in executing these techniques, as their creation demands a nuanced understanding of the world, perspectives, and the socio-cultural context that shape the life of the journalist.
Furthermore, the use of AI in media can result in violations of human rights [4]. AI systems are employed to monitor individuals in both workplace and public settings. Data is collected and processed, with specific groups having access to personal information about individuals. These systems can infringe upon human rights, particularly the right to privacy. Moreover, the accessibility of this data to certain individuals or organizations can facilitate discrimination and social control, ultimately leading to broader societal oppression.
Is neutrality possible in the age of AI?
In today’s world, as we witness the rewriting of history, denial of information, and reinterpretation of significant global events, journalists often question what serves as a reliable foundation or reference point—akin to a Greenwich Mean Time—for perspectives. Scholars have engaged in extensive debates about the neutrality of AI in media, yet they have not reached a consensus. Currently, there is no universal algorithm governing AI processes; each neural network is developed by specific companies or nations, making discussions about neutrality within this context challenging.
To achieve objectivity as a key criterion for assessing AI’s neutrality in media, two significant limitations must be addressed. First, the restricted access to certain databases hinders journalists from obtaining comprehensive information necessary for content creation. Second, the lack of an international regulatory committee composed of experts to standardize AI processes for global application complicates matters further. Consequently, the neutrality of AI in media can presently be realized only through a journalist’s diligence in cross-referencing facts using various AI sources, thereby establishing a definitive benchmark for their publication.
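The cross-referencing practice described here can be pictured as a simple consensus check. The sketch below is a hypothetical illustration, assuming each AI source returns a short textual answer to the same factual query; a majority answer is accepted only above an agreement threshold, and anything else is flagged for manual verification:

```python
from collections import Counter

def cross_check(answers, threshold=0.5):
    """Given factual answers from several AI sources, return the majority
    answer only if its share of sources exceeds `threshold`; otherwise
    return None, flagging the fact for manual verification."""
    normalized = [a.strip().lower() for a in answers if a]
    if not normalized:
        return None
    value, count = Counter(normalized).most_common(1)[0]
    return value if count / len(normalized) > threshold else None
```

A split vote yields no benchmark at all, which mirrors the article's point: absent a shared standard, the journalist's own diligence remains the final arbiter.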
Bias within AI in media implementation
The introduction of AI into journalism has sparked debate about the implications of bias in these technologies. Critics argue that AI’s reliance on large data sets can perpetuate and exacerbate existing social prejudices, reinforcing stereotypes and disproportionately marginalizing underrepresented groups. However, some argue that bias is inherent in all forms of storytelling and journalism, both human and machine [9]. Bias, when properly managed, has long been an important tool in journalism. The human element of journalism is inherently subjective, and bias allows journalists to interpret and contextualize facts in ways that make a story more meaningful to an audience. The pursuit of perfect AI, free of human bias, ignores the important role of human elements in journalism, such as emotional response, cultural context, and human judgment [8]. Rather than striving for an unattainable well-rounded AI, the focus should be on creating AI systems that can improve journalistic practice by identifying and mitigating harmful bias while preserving the core storytelling qualities that make journalism human.
Most developers of AI resources are Western, and the people who train these algorithms are products of a socio-cultural environment absorbed over a long period of time. Modern international journalism cannot be built on the exclusive use of AI systems, since such systems do not, for example, consider the interests and—let us call them—the internal experiences of countries that are not the creators of AI. Asked to draw a portrait of the Russian-Ukrainian crisis, resources trained with a Western bias will not tell you that Russia tried to preserve its language, culture, and traditions, which had long been under pressure from the other side of the conflict.
It is believed that Russian AI tools developed by Kaspersky understand the socio-cultural context of Russian realities more deeply, both of the Soviet period and of today, since their developers are carriers of this cultural code; the bias of such systems will therefore naturally favor the Slavic mentality. However, no matter how polished and clear the material such an AI presents to you as a journalist, it will never give you precise information about very specific things of the West or the East: one wonders whether AI can truly explain the significance of a traditional Arab dress, such as the “abaya”, being gifted from one president to another.
Perspectives of AI usage in media
Speaking about the prospects of using AI in journalism in general, it is important to take note of the existing specific experiences of using AI in media. For example, AI is used to automatically create publications with weather warnings and disseminate information through local media (e.g., El Vocero de Puerto Rico); AI is used to decipher live broadcasts and press conferences (e.g., KSAT-TV, RIA Novosti, Michigan Radio); AI is used to cover the continuous updating of presidential election results (e.g., The Washington Post and their Heliograf system); and AI is employed to generate visual graphics and animations, as well as to streamline editing and production workflows (e.g., Dubai TV, RT TV).
AI has significant potential to improve the work of journalists by providing new tools for data analysis, automation of routine tasks, and even content creation. However, there are serious concerns about bias, originality of content, and the ethical use of AI in journalism. These issues require the joint efforts of journalists and AI developers to develop solutions that take these aspects into account [5].
Also, the use of AI in journalism requires the creation of a strong international legal framework to ensure ethical standards are respected and the rights of all participants in the media process are protected. This is possible only through international regulation and cooperation, which will create a safe environment for the use of AI in the media. One of the proposed steps is the development of a single declaration on the use of AI in media by international committees. Such a declaration could standardize processes and ensure uniformity of approaches.
In addition, measures to ensure transparency in the collection and processing of information must be implemented to prevent data leaks. Users must be able to control their data, understand how it is used and make changes if necessary. Openness and accountability will therefore be important aspects of increasing trust in the integration of AI into journalism. Ultimately, technological developments must go hand in hand with respect for personal data and maintaining ethical standards in journalistic practice.
One of the interesting and promising directions at the intersection of media and AI is the creation of news anchors with the help of AI. AI anchors are already widely used by international and local TV channels in different languages: Roya TV and its anchor Fareed; China's state news agency Xinhua and its anchor Qiu Hao; Sputnik and its AI anchor Victoria; the India Today Group's AI anchor Sana; the London-based Arabic newspaper “Elaph” and its AI figure Hala Al Wardi; and Sharq News and its anchor Hadil Eleyan.
Working at major television channels, one can observe how much depends on the news anchor: what mood they are in today, how they feel, whether they managed to familiarize themselves with the topic before the broadcast or a piece of breaking news caught them already on air and left them unsure, whether they can hear the director and editor-in-chief in their earpiece. Many observers claim that an anchor is a person who should never be troubled by such things. Yet the anchor is a human being with inherent shortcomings and flaws, like everyone else. For this reason, many channels have already introduced AI anchors for news and other broadcasts.
AI anchors operate according to fixed algorithms, and here some imperfections of using AI anchors on air become apparent. First, the articulation on the anchor's face can look strained and artificial when the anchor speaks a non-European language such as Arabic: the guttural and emphatic sounds of Arabic are very difficult for developers of AI anchors to reproduce. Second, an AI anchor's set of gestures is limited and follows a fixed algorithm, while a live anchor can involuntarily use kinesics that are closer and more understandable to the viewer. Third, in front of the TV screen sits a person with his or her own emotions. AI presenters are limited in emotional intelligence, which can provoke rejection and mistrust in the viewer because, for instance, the presenter did not sigh once more when reading sad news or, on the contrary, smiled slightly when reading uninteresting news. Viewers read all the verbal and non-verbal codes of a real presenter, which makes the news less official and less dry, whereas an AI presenter is a set of certain, though well-trained, algorithms of action. We see the potential of AI presenters in their ability to operate effectively in unstable or hazardous situations, during emergencies, and in routine news broadcasts, thereby enhancing the overall system.
Conclusion
AI in media is advancing rapidly and confidently. Much like a coin, the use of AI in journalism has two sides. On one hand, the swift publication, translation, verification, and editing of content are significant advantages that AI offers to journalists. However, this speed can also lead to a decline in quality, potential information leaks, and algorithm manipulation.
An AI-generated presenter will never request sick leave or take a vacation, nor will it show emotion over tragic news stories. While AI enables journalists to process vast amounts of information in multiple languages within minutes, it often overlooks the socio-cultural contexts of human experience and media resources, which can result in absurd outcomes.
It is crucial to remember that AI does not create, shoot, or present media content independently. Journalists serve the public, readers and viewers alike. Therefore, AI currently functions as an assistant to journalists rather than a replacement. Ultimately, what we publish goes through our editorial lens, which is informed by our experience, knowledge, and understanding of our audience.
References
1. Al Debaisi, A. A., “Artificial intelligence journalism: Professional and ethical challenges,” IUG Journal of Human Research 31, no. 3 (2023): 4. https://doi.org/10.33976/IUGJHR.31.3/2023/4
2. Al-Zoubi, A. H., and Al-Qudah, M. A., “Ethical Challenges of Artificial Intelligence Adoption in Newsrooms: A Case Study of Al Mamlaka TV, Jordan,” International Journal of Advanced Computer Science and Applications 14, no. 1 (2023): 1-10.
3. Broussard, M., Artificial Unintelligence: How Computers Misunderstand the World, (MIT Press, 2018).
4. Crawford, K., Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale University Press, 2021).
5. Curran, Noel, “Navigating AI in Public Service Media: Challenges and Opportunities,” EBU Operating Eurovision and Euroradio, November 12, 2024.
6. Davydov, S. G., Zamkov, A. V., Krasheninnikova, M. A., and Lukina, M. M., “Use of Artificial Intelligence Technologies in Russian Media and Journalism,” Bulletin of Moscow University, Series 10: Journalism, no. 5 (2023). https://cyberleninka.ru/article/n/ispolzovanie-tehnologiy-iskusstvennogo-intellekta-v-rossiyskih-media-i-zhurnalistike.
7. Dugalich, N. M., Shavtikova, A. T., and Izildin, O., “Strategies for Creating the Image of a Politician in an Arabic Polycode Text of the Series “٣ اختيار”,” RUDN Journal of Language Studies, Semiotics and Semantics 14, no. 3 (2023): 946-959. https://doi.org/10.22363/2313-2299-2023-14-3-946-959
8. Gondwe G., “Is AI Bias in Journalism Inherently Bad? Relationship Between Bias, Objectivity, and Meaning in the Age of Artificial Intelligence,” Harvard: Berkman Klein Center, 2025.
9. Jones B., “How can we innovate responsibly with AI for journalism?,” 2023.