In recent years, artificial intelligence (AI) has established itself as a transformative technology, capable of profoundly reshaping markets, production processes, and social dynamics. However, while it serves as a key enabler of innovation, it also risks exacerbating inequalities if its development and diffusion remain concentrated in the hands of a few actors. The evolutionary trajectory of AI cannot be considered neutral: the sector shows growing concentration within a small number of large technology companies and a limited number of countries capable of controlling its critical infrastructure.[1]
Indeed, the AI market is dominated by a few large firms, often referred to as “Big Tech,” including Google, Microsoft, Amazon, and Meta. These actors not only develop models and applications but also control essential infrastructure (cloud computing, GPUs, proprietary datasets), creating a vertically integrated structure that stifles competition.[2] This scenario raises crucial questions concerning competition, resource accessibility, prospects for genuinely inclusive innovation, the emergence of new forms of technological dependency, and a widening digital divide.[3]
This study addresses the issue of monopolization in AI through a three-step approach. First, it builds a theoretical framework analyzing AI market concentration, with particular emphasis on the economic and technological mechanisms reinforcing the dominance of major firms. Second, it develops a detailed analysis of structural barriers and competitive distortions across the entire value chain, exploring the anatomy of monopoly.
Finally, it assesses the systemic consequences of such concentration on startups, public institutions, and emerging countries, discussing how asymmetries of power may turn AI into an exclusive good rather than a driver of inclusive development. It also examines international regulatory responses comparatively, highlighting similarities and differences among strategies adopted by the European Union (EU), the United States (U.S.), the United Kingdom (UK), and multilateral organizations.
Anatomy of the Monopoly
Economic and technological mechanisms fueling the dominance of major firms
Concentration in AI markets is not accidental: it is the outcome of a complex interaction between economic and technical dynamics, which systematically reinforce the dominant positions of a few global players.[4]
A key element is economies of scale and scope. Large firms reduce the average cost of their products and services by spreading fixed costs for infrastructure, research, and development across a vast user base. At the same time, they exploit the same technical resources, data, and internal expertise to create different applications, from machine learning to automatic translation to virtual assistants. This combination allows them to lower average costs, accelerate innovation, and make it harder for new entrants to compete on equal terms.[5] An emblematic example is the computational cost of training frontier models: the OECD (2024) estimates that training GPT-4 required over 25,000 NVIDIA A100 GPUs and an investment exceeding US$100 million. Only major players such as Microsoft, Google, or Amazon have the infrastructure and resources to sustain such expenses; unsurprisingly, in 2022, AWS, Microsoft Azure, and Google Cloud together controlled nearly 70% of the global cloud market (OECD, 2024).[6]
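The scale arithmetic can be made explicit with a minimal sketch. The figures below are illustrative assumptions (the US$100 million fixed cost echoes the OECD training-cost estimate above; the per-user serving cost is hypothetical), but the mechanism is general: the same fixed outlay translates into radically different average costs depending on the size of the user base.

```python
# Minimal sketch (illustrative figures): average cost per user when a large,
# fixed training cost is amortized over user bases of different sizes.

FIXED_TRAINING_COST = 100_000_000  # US$, echoing the OECD estimate for GPT-4
MARGINAL_COST_PER_USER = 0.50      # US$, hypothetical serving cost per user

for users in (10_000, 1_000_000, 100_000_000):
    avg_cost = FIXED_TRAINING_COST / users + MARGINAL_COST_PER_USER
    print(f"{users:>11,} users -> average cost per user: US${avg_cost:,.2f}")
```

Under these assumptions, an entrant with ten thousand users would need to recover roughly US$10,000 per user, while an incumbent with a hundred million users faces an average cost of about US$1.50: this is the arithmetic behind the entry barrier described above.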
Added to this structural advantage is vertical integration. Large platforms not only develop AI models but also control data centers, data access, and even the development of final applications. In this way, they achieve operational efficiencies that smaller companies can hardly replicate. The combined ownership of proprietary data, advanced computational infrastructure, and specialized expertise creates a very high entry barrier, further reinforcing the dominance of incumbents.[7] The European Commission has observed, for example, that Microsoft does not merely offer cloud services via Azure but also integrates OpenAI solutions into Office and Bing, consolidating closed value chains that complicate external penetration.[8] Network effects are another important element: the more users and applications revolve around a digital platform, the greater its value becomes. This triggers a lock-in dynamic, in which users and firms struggle to leave the dominant ecosystem, discouraging the spread of alternative solutions.[9]
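The self-reinforcing value of a large user base can be expressed with a stylized model. The sketch below uses a Metcalfe-style assumption, under which a platform's value grows with the number of possible connections among its users; the quadratic form and the per-link value are modeling conventions, not figures drawn from the sources cited here.

```python
# Minimal sketch (stylized model): Metcalfe-style network value, where a
# platform's value is proportional to the number of user pairs, n*(n-1)/2.

def network_value(users: int, value_per_link: float = 0.01) -> float:
    """Stylized platform value: proportional to the number of possible user pairs."""
    return value_per_link * users * (users - 1) / 2

incumbent = network_value(100_000_000)  # hypothetical incumbent user base
entrant = network_value(1_000_000)      # hypothetical entrant user base

# With 100x the users, the incumbent is ~10,000x more valuable under this
# model: leaving the dominant ecosystem means forfeiting that connective value.
print(f"Incumbent/entrant value ratio: {incumbent / entrant:,.0f}x")
```

However crude, the quadratic relationship illustrates why users and firms struggle to leave the dominant ecosystem: the value they would forfeit grows far faster than the size of any alternative network.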
Alongside these market dynamics lies the geopolitical dimension of technological sovereignty. Increasingly, states aim to reduce their dependency on infrastructure and innovations developed abroad, promoting national industrial policies and investments in autonomous technological resources. Although justified by security and competitiveness needs, these efforts often fragment the global market into distinct technological and economic blocs, with the United States, China, and Europe pursuing parallel and rarely cooperative strategies. The consequences are significant: international competition risks being reduced and rigidified, as the free circulation of data, standards, and services is restricted by protectionist measures and exclusionary reciprocity.[10]
Structural barriers and competitive distortions in the AI value chain
If economic and technological mechanisms strengthen the giants’ positions, the very structure of the AI value chain further reduces market contestability. Numerous structural barriers hinder new entrants and make competition less equitable.[11]
Beyond high entry costs, a further barrier lies in proprietary technological standards. Many protocols and interoperability criteria are developed and controlled by a small number of large companies, effectively defining the gateways to the market. Integrating a new solution is possible only by adhering to parameters imposed by the leaders, reinforcing their role as private regulators.[12] According to the OECD (2022), the cost of migrating from one cloud infrastructure to another can exceed 20-25% of the value of a multi-year contract, drastically reducing the incentives for firms and public administrations to abandon dominant providers and entrenching contractual and technological lock-ins.[13]
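To translate the OECD range into orders of magnitude, the following sketch applies it to a hypothetical multi-year cloud contract; the €10 million contract value is an illustrative assumption, not a figure from the cited report.

```python
# Minimal sketch (hypothetical contract): the OECD (2022) switching-cost range
# of 20-25% applied to an illustrative multi-year cloud contract.

CONTRACT_VALUE_EUR = 10_000_000        # assumed value of a 5-year contract
SWITCHING_COST_RANGE = (0.20, 0.25)    # share of contract value, per OECD

low = CONTRACT_VALUE_EUR * SWITCHING_COST_RANGE[0]
high = CONTRACT_VALUE_EUR * SWITCHING_COST_RANGE[1]
print(f"Estimated one-off migration cost: EUR {low:,.0f} to EUR {high:,.0f}")
```

A buyer contemplating a switch would thus need to absorb €2 to €2.5 million in one-off migration costs before realizing any benefit from a cheaper alternative, which is why dominant providers are rarely abandoned.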
Such switching costs weigh on companies and public administrations alike: migrating from one platform to another is expensive and risky, and the resulting dependency on suppliers consolidates lock-in, further reducing mobility across technological ecosystems and discouraging the adoption of alternatives.[14]
Another source of distortion is algorithmic collusion. Increasingly, algorithms used by different firms interact in ways that implicitly coordinate market behavior, aligning prices without explicit agreements. These collusive outcomes, difficult to detect with traditional antitrust tools, reduce real competition and boost the profits of dominant operators.[15]
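One mechanism discussed in this literature can be shown with a deliberately simple simulation: two firms independently deploy the same adaptive pricing rule (hold your price, undercut only in retaliation), and supra-competitive prices sustain themselves without any agreement or communication. The rule and all parameters below are illustrative assumptions, not a reconstruction of any real pricing system.

```python
# Minimal sketch (stylized duopoly): two independent pricing algorithms follow
# the same adaptive rule -- keep the current price, and undercut only if the
# rival undercut last period. No communication or agreement exists, yet the
# supra-competitive starting price is never eroded.

COMPETITIVE_PRICE = 10.0   # price that head-to-head competition would reach
START_PRICE = 20.0         # supra-competitive starting point
UNDERCUT_STEP = 1.0        # retaliation step when the rival undercuts

def next_price(own_last: float, rival_last: float) -> float:
    """Adaptive rule: retaliate against undercutting, otherwise hold."""
    if rival_last < own_last:
        return max(rival_last - UNDERCUT_STEP, COMPETITIVE_PRICE)
    return own_last

price_a, price_b = START_PRICE, START_PRICE
for period in range(5):
    # Both firms update simultaneously, each observing only last-period prices.
    price_a, price_b = (next_price(price_a, price_b),
                        next_price(price_b, price_a))
    print(f"period {period}: firm A = {price_a:.1f}, firm B = {price_b:.1f}")
# Output: both firms hold at 20.0 in every period. Any unilateral price cut
# would trigger retaliation down toward the competitive level, so neither
# algorithm ever has an incentive to deviate.
```

The point is not that real pricing algorithms are this simple, but that coordinated outcomes can emerge from independent code with no explicit agreement to detect, which is precisely what strains traditional antitrust tools.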
Finally, complex or particularly burdensome regulations may have a paradoxical effect. Although intended to ensure safety and transparency, rules such as the European AI Act risk favoring large incumbents, which have the resources to absorb global compliance costs. For new entrants, however, these regulatory barriers can prove insurmountable.[16] The JRC Technical Report (2023) estimates compliance costs between €400,000 and €2 million annually per company—marginal for Big Tech, but a serious obstacle for around 40% of innovative European SMEs, which may lack the resources to comply.[17]
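The asymmetry of this burden is easy to quantify. The sketch below compares the same annual compliance bill as a share of revenue for two hypothetical firms; both revenue figures are illustrative assumptions, while the €2 million cost is the upper end of the JRC estimate above.

```python
# Minimal sketch (hypothetical revenues): the same EUR 2 million annual
# compliance bill, expressed as a share of revenue for firms of two sizes.

COMPLIANCE_COST_EUR = 2_000_000  # upper end of the JRC (2023) estimate

firms = {
    "Big Tech platform": 200_000_000_000,  # assumed annual revenue, EUR
    "Innovative AI SME": 5_000_000,        # assumed annual revenue, EUR
}
for name, revenue in firms.items():
    print(f"{name}: compliance = {COMPLIANCE_COST_EUR / revenue:.3%} of revenue")
```

Under these assumptions, the same bill is a rounding error for the platform (about 0.001% of revenue) but fully 40% of revenue for the SME, making the asymmetry described above concrete.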
Systemic Effects
The growing concentration of AI markets, driven by economic and technological mechanisms favoring major players, does not merely redefine competitive balances within the high-tech sector. It also generates systemic effects that spread throughout the economic and institutional fabric, shaping the development prospects of startups, the autonomy of public institutions, the growth trajectories of emerging countries, and ultimately the breadth of the global digital divide. Understanding these dynamics is crucial for designing policies that go beyond regulating technological giants, contributing instead to preserving pluralism, innovation, and equitable access to AI opportunities.[18]
Startups and innovation
New enterprises, often celebrated as engines of innovation, face a particularly hostile environment in the AI field. The entry costs associated with acquiring large-scale computational infrastructure and datasets, combined with intense competition to attract top talent, make it nearly impossible to challenge established platforms directly. In this context, many startups are forced to operate in niche segments or to provide complementary services to the products of large incumbents, relinquishing the chance to develop disruptive technologies. Often, the options are limited to integrating as minor suppliers within dominant ecosystems or being acquired by major firms, which absorb skills and innovations, further strengthening their market position.[19] A significant figure: according to the OECD (2023), over 60% of European AI startups that raised more than €10 million were acquired within five years, often by U.S. Big Tech, reducing the likelihood that independent players capable of challenging market leaders could emerge.[20]
Despite the complexity of the current landscape, AI startups possess several strategic alternatives beyond acquisition, which nevertheless continues to represent the dominant trajectory. One option consists of pursuing competitive advantage through a focus on narrowly defined sectors or highly specialized applications in which incumbent firms lack domain-specific expertise. Furthermore, organizations that implement business models centered on innovation and efficiency—enabled by AI-based organizational capabilities such as grounding, bounding, and recasting—may establish distinctive forms of competitive differentiation. This approach entails the transformation of generic AI technologies into unique, customized solutions for well-defined tasks, subsequently anchored in contractual arrangements designed to prevent expropriation by competitors.[21]
At the same time, the European Union actively fosters the autonomy of startups through initiatives such as GenAI4EU (expanded to nearly €700 million), the EIC Accelerator (providing up to €2.5 million in grants and €15 million in equity), and the Digital Europe Programme (ranging from €500,000 to €2 million). In addition to financial support, these programs supply mentoring, pilot opportunities, and access to expert networks, thereby reinforcing the structural resilience of emerging ventures.[22]
Nevertheless, acquisitions remain the prevailing market mechanism.[23] In 2024, 187 transactions were recorded, amounting to over US$27 billion, with technology conglomerates accounting for 62% of deals and approximately 30% representing talent-focused acquisitions (acqui-hires). The share of acquisitions conducted by non-AI firms increased markedly, from 10% in 2014 to 45% in 2023. However, empirical analyses suggest that, following acquisition, the patenting activity of startups tends to diminish, pointing to objectives that appear more anti-competitive than innovation-driven.[24]
In the Gulf countries, particularly the United Arab Emirates (UAE) and Saudi Arabia, public policies are fostering a favorable environment for the development of technology startups. This ecosystem rests on substantial public investment, fiscal and regulatory incentives, innovative infrastructures, and the gradual strengthening of intellectual property protection. Specifically, governments act as accelerators through sovereign wealth funds and development banks, which channel patient capital into strategic sectors such as artificial intelligence, deep tech, and the climate economy, thereby supporting both the emergence and the scaling of enterprises.[25]
Free zones in the UAE provide significant advantages, including multi-decade tax exemptions, full foreign ownership, and profit repatriation, while instruments such as crowdfunding and venture debt further expand financing options. At the same time, universities and academic ecosystems contribute to the formation of local STEM talent, thus promoting the localization of knowledge and technologies.[26] However, startups benefit mainly from economic rather than legal protections, as regulations on rights, privacy, and technological appropriation remain underdeveloped compared to EU or U.S. standards. The state’s dominant role as financier and regulator accelerates innovation but also creates dependency, leaving startups vulnerable to political or strategic shifts.[27]
Public institutions and digital sovereignty
Public administrations are increasingly dependent on infrastructure and services developed by large foreign companies. This dependency stems not only from budgetary constraints and the need to adopt readily available and scalable solutions but also from the lock-in effect generated by proprietary technological standards, which make migration to alternative platforms complex and costly. The result is a progressive loss of digital sovereignty: governments and institutions no longer have effective control over the data they collect or the tools they use to deliver essential services to citizens. Such vulnerability translates into operational rigidity and resilience risks, as the functioning of public systems becomes exposed to commercial and regulatory decisions taken in external jurisdictions. Unsurprisingly, in recent years, several governments have promoted strategies of “digital sovereignty,” aiming to establish national cloud infrastructures and adopt open standards.[28] For example, in 2023, Germany signed agreements with Microsoft and Amazon to extend cloud services to federal and local administrations, with a total value exceeding €3 billion. This triggered an intense political and academic debate on privacy, security, and digital sovereignty, confirming the lock-in risks highlighted by the European Commission (JRC, 2023).[29]
Emerging countries and global asymmetries
While startups and public institutions in advanced economies face dependency on a few dominant suppliers, for emerging countries the issue runs even deeper. The lack of large-scale computational infrastructure and relevant datasets, combined with a shortage of highly specialized human capital, places many economies in a structurally weak position.[30] According to UNESCO (2023), fewer than 20% of universities in sub-Saharan Africa have access to adequate computational power for advanced AI projects, and the continent contributes less than 1% of global scientific publications on AI. This imbalance prevents the development of robust local ecosystems and reinforces the subordinate role of emerging economies as mere users of standardized solutions produced in global innovation hubs.[31]
If Africa remains largely a consumer rather than a producer of AI technologies, long-term consequences may include widening global inequalities, reduced opportunities for its young workforce, and a deepening technological divide. The lack of a robust local ecosystem makes it difficult for universities and firms with limited resources to access advanced infrastructures and programs, leaving them on the margins of global digital value chains.[32] This situation risks wasting one of the continent’s key demographic advantages—its young population—through unemployment, underutilization of skills, and exclusion from the digital economy.
Moreover, dependence on external technologies limits the ability to adapt AI to local needs in healthcare, education, and agriculture, undermines competitiveness, and leaves Africa subject to externally imposed standards. In effect, this perpetuates an extractive model in which data and value are generated and concentrated outside the continent.[33] However, UNESCO and several policy briefs suggest that this trajectory can be altered through coordinated investments in digital infrastructure, advanced training, research and development, and inclusive public policies, enabling Africa to evolve from a passive user to an active and competitive actor in the global technological revolution.[34]
The UNCTAD report Trade Performance and Commodity Dependence (2003) demonstrates that Africa’s trade structure is characterized by strong reliance on commodity exports and the import of manufactured and higher value-added goods. Between the 1980s and 2000, Africa’s share of global exports fell from 6.3% to 2.5%, while manufactures accounted for less than 1% of Africa’s total global exports, an indicator of the continent’s limited capacity to locally transform its own resources into finished products. More than 70% of the value of African exports derives from commodities such as minerals, precious metals, hydrocarbons, agricultural products, and other basic resources.[35]
Africa holds vast mineral and energy resources that could enable cost-effective development of data centers and manufacturing. Yet, the continued exchange of raw materials for finished goods hampers local value chains and industrial growth, leaving economies exposed to global price volatility and extractive trade dynamics. This undermines competitiveness, limits the ability to address local needs, deepens technological dependency, and reduces opportunities for the young workforce.[36]
Digital divide and social polarization
Concentration in AI markets further amplifies a well-known phenomenon: the widening of the digital divide. Central urban areas and larger economic actors benefit disproportionately from AI progress, while peripheral regions, SMEs, and citizens with lower digital skills are excluded from its advantages. This leads to territorial and economic polarization, with technological hubs strengthening their ability to attract investments and talent while peripheral communities risk falling further behind. Even in employment, the asymmetry is evident: demand for advanced skills grows rapidly, while many low-skilled jobs are automated without adequate reskilling programs. Eurostat (2023) data show that in Europe, 46% of adults with low education levels lack basic digital skills, compared to 12% among university graduates. Moreover, next-generation broadband penetration in rural areas remains below 60%, against over 85% in urban areas. This disparity results in unequal access to the opportunities generated by AI, deepening social polarization.[37]
Political responses cannot be limited to traditional antitrust interventions but must take the form of a multi-level strategy including investments in shared infrastructure, policies for equitable access to data and computational power, widespread training programs, and international cooperation mechanisms. Only in this way will it be possible to counter monopolistic drifts and ensure that AI helps reduce, rather than exacerbate, existing inequalities.[38]
Regulatory Responses
The comparative analysis of regulatory responses to AI adopted in the major global blocs highlights the diversity of approaches and the emergence of plural models of governance, each reflecting different regulatory sensitivities, industrial policies, and legal systems. A systematic review of academic and institutional sources allows the identification of convergences and divergences in the responses of the EU, the U.S., the UK, multilateral organizations,[39] and China.[40]
EU: preventive and uniform approach
The EU stands out for adopting a homogeneous and advanced regulatory framework through the AI Act, which entered into force on 1 August 2024 and will apply progressively through 2027. This legal framework, a global reference point, classifies AI systems based on risk, imposing differentiated obligations for high-risk systems and for so-called general-purpose AI models (GPAI). It not only prohibits uses deemed unacceptable, such as social scoring, but also introduces stringent procedures for assessment, transparency, traceability, and documentation of high-risk systems. It also establishes the new European AI Office, tasked with supervising and implementing the regulation at the continental scale.[41]
Of particular importance is the introduction of regulatory sandboxes, designed to help SMEs develop and adapt AI solutions in line with the new rules. The European Commission estimates that the AI Act will affect around 10,000 European companies, 90% of which are SMEs. To support this process, France launched a national sandbox in 2025 that already hosts 35 startups in the health and financial sectors, offering practical support and mentoring for compliance. The effectiveness of the framework is reinforced by a sanctioning regime that includes fines of up to 7% of global turnover, demonstrating the EU’s intention to ensure compliance through strong enforcement mechanisms.[42]
United States: flexible and decentralized approach
In the United States, the federal system and an innovation-oriented context have given rise to a patchwork of rules mainly inspired by soft law, alongside sector-specific guidelines and best practices developed by bodies such as the National Institute of Standards and Technology (NIST). Federal initiatives, such as the 2023 Executive Order on Safe, Secure, and Trustworthy AI, do not create binding national legislation but promote technical standards and audit procedures in sensitive sectors, often leaving implementation to individual states and agencies such as the Federal Trade Commission.[43] A key component of this strategy is the NIST AI Risk Management Framework (AI RMF 1.0), published in 2023, already adopted by over 200 organizations across business, government, and academia as a voluntary standard for managing AI-related risks.[44] At the same time, the 2023 Executive Order introduced a requirement for federal agencies to conduct safety tests on frontier models before deploying them in critical sectors such as healthcare, finance, and defense, thereby strengthening governance without imposing rigid centralized legislation.[45]
UK: proportionate and experimental approach
The United Kingdom’s approach deliberately positions itself between the EU’s regulatory formalism and U.S. soft regulation, combining flexibility and pragmatism. The UK government opted not to introduce a general AI law, instead assigning sectoral regulators (such as the ICO for data protection or the FCA for finance) the task of applying common guidelines in practice. This model has led to the creation of specific tools such as toolkits and playbooks to guide public administrations and businesses in the safe and responsible use of AI.[46] An example is the AI Regulatory Sandbox launched in 2024, involving 200 companies (65% SMEs) in the healthtech and fintech sectors, enabling them to test AI systems in a protected environment. According to the Department for Science, Innovation and Technology, this experiment has already generated estimated compliance cost savings of £80 million over two years, showing the advantages of proportionate and collaborative regulation.[47]
Multilateral organizations: cooperative and non-binding approach
At the multilateral level, organizations such as the OECD and the United Nations play a role in promoting general principles and setting technical and ethical standards to encourage global convergence. Although non-binding, initiatives such as the 2019 OECD Recommendation on AI or UN guidance on AI ethics provide a common reference framework.[48] A particularly significant example is the OECD AI Policy Observatory, which by 2024 included 47 participating countries, representing over 90% of global AI investments. The platform has registered more than 700 national policy initiatives, from SME support to promoting transparency and non-discrimination practices, making it the main infrastructure for convergence and coordination in multilateral AI governance.[49]
China: authoritarian and strategic approach
The governance of AI in China is characterized by strong centralization, with the state assuming a leading role in the development of technologies deemed strategic both economically and geopolitically. Through regulatory instruments such as the Provisional Measures for the Administration of Generative Artificial Intelligence Services (2023), the government primarily intervenes in the regulation of services directed at the public, imposing strict procedures for oversight, security assessment, and the allocation of liability to private providers. At the same time, it grants broader discretion to research and development activities, especially when directed toward domestic applications of strategic relevance. This framework reflects a logic of “authoritarian pragmatism”: on the one hand, industrial and technological expansion is actively promoted, while on the other, strict control is maintained over content and social stability, distinguishing between publicly accessible applications and those confined to experimental or private contexts.[50]
The Chinese regulatory framework imposes stringent requirements on transparency, authenticity, and diversity of training data, introduces obligations for content labeling, and mandates social and security impact assessments for systems that may affect the public sphere. In parallel, the role of major national technology companies is reinforced, with the state acting simultaneously as regulator and industrial promoter. In this perspective, AI regulation functions as a dynamic instrument aimed at reconciling the promotion of economic growth and global leadership with the selective control of the social and political implications of emerging technologies.[51]
From a comparative standpoint, the Chinese model diverges both from the European approach—based on preventive, uniform regulation grounded in risk assessment and embodied in the AI Act—and from the U.S. approach, which is characterized by decentralized governance and reliance on soft-law instruments that prioritize business freedom even at the expense of strict requirements for transparency and accountability. While the EU seeks to safeguard markets and fundamental rights through binding and homogeneous rules, and the United States focuses on preserving technological leadership through sector-specific standards and guidelines, China pursues an adaptive model of governance, in which regulation serves as a strategic lever to support the national ecosystem and accelerate digital transformation.[52]
Conclusions
This study highlights how artificial intelligence is not a neutral driver of progress but a transformative force that often reinforces existing power dynamics. The concentration of the AI value chain in the hands of a few Big Tech companies—Amazon, Microsoft, Google, and Meta—has fostered vertical integration and network effects that restrict competition and limit bottom-up innovation. Structural barriers such as economies of scale, proprietary standards, high switching costs, and systematic startup acquisitions create systemic disadvantages for actors unable to match such resources in infrastructure, data, or talent.[53]
These dynamics produce risks on multiple levels. Despite their innovative potential, startups are frequently pushed toward niche strategies, asymmetric partnerships, or acquisition paths that undermine autonomy. Governments face dependence on infrastructures controlled by private market leaders, with consequences for digital sovereignty. Meanwhile, emerging economies are relegated to the role of technology consumers, widening the global digital divide and reinforcing inequalities across regions and societies.[54]
Regulatory responses have been diverse but fragmented: the EU’s stringent, risk-based AI Act, the U.S. reliance on soft law, the UK’s adaptive pragmatism, multilateral coordination attempts, and China’s strategic and authoritarian approach. Yet none fully ensure pluralism or fairness. The key danger lies in self-reinforcing mechanisms that entrench oligopolistic control, restrict institutional and business choices, and concentrate innovation governance in the hands of a few.[55]
At the same time, growing awareness of these risks has inspired promising initiatives. Regulatory sandboxes, digital sovereignty strategies, open data ecosystems, public-private consortia, and skills development programs point toward more inclusive governance models. To succeed, such models must ensure shared infrastructures, equitable access to resources, and rules that balance safety with competitiveness. Above all, they require transnational cooperation and broader participation across the innovation value chain.[56]
Only through regulation that promotes accessibility and pluralism can AI become a driver of sustainable, widely shared progress rather than a source of new inequalities.[57]
[1] Max von Thun and Daniel Hanley, “Stopping Big Tech from Becoming Big AI: A Roadmap for Using Competition Policy to Keep Artificial Intelligence Open to All,” Open Markets Institute, 2024, https://tinyurl.com/2s3dtf7k.
[2] Ibid.
[3] OECD, “Artificial Intelligence, Data and Competition,” Paris: OECD Publishing, 2024, https://tinyurl.com/ypj2ycjh.
[4] Ibid.
[5] Anton Korinek and Jai Vipra, “Concentration of Intelligence: Scalability and Market Structure in Artificial Intelligence,” Economic Policy 40, no. 121 (2025): 225–256, https://tinyurl.com/36nj23rs.
[6] OECD, “Artificial Intelligence, Data and Competition,” OECD Artificial Intelligence Papers, no. 18, Paris: OECD Publishing, May 2024, https://tinyurl.com/3bv32bdr.
[7] OECD, “Artificial Intelligence, Data and Competition,” Paris: OECD Publishing, 2024, https://tinyurl.com/ypj2ycjh.
[8] European Commission, Speech by Executive Vice President Margrethe Vestager at the European Commission workshop on “Competition in Virtual Worlds and Generative AI,” June 28, 2024, https://tinyurl.com/v8ff9xa3.
[9] Max von Thun and Daniel Hanley, “Stopping Big Tech from Becoming Big AI: A Roadmap for Using Competition Policy to Keep Artificial Intelligence Open to All.”
[10] OECD, “Artificial Intelligence, Data and Competition,” Paris: OECD Publishing, 2024, https://tinyurl.com/ypj2ycjh.
[11] Ibid.
[12] Ibid.
[13] Carl Magnus Magnusson and Daniel Blume, “Digitalisation and Corporate Governance,” OECD Corporate Governance Working Papers, no. 26, Paris: OECD Publishing, 2022, https://tinyurl.com/54xawvyf.
[14] OECD, “Artificial Intelligence, Data and Competition,” Paris: OECD Publishing, 2024, https://tinyurl.com/ypj2ycjh.
[15] Ibid.
[16] Max von Thun and Daniel Hanley, “Stopping Big Tech from Becoming Big AI: A Roadmap for Using Competition Policy to Keep Artificial Intelligence Open to All.”
[17] Joint Research Centre (JRC), European Commission, “The AI Act: A Help or Hindrance for SMEs?” JRC Technical Report, July 2023.
[18] Max von Thun and Daniel Hanley, “Stopping Big Tech from Becoming Big AI: A Roadmap for Using Competition Policy to Keep Artificial Intelligence Open to All.”
[19] OECD, “Artificial Intelligence, Data and Competition,” Paris: OECD Publishing, 2024, https://tinyurl.com/ypj2ycjh.
[20] OECD, “OECD Science, Technology and Innovation Outlook 2023,” OECD Publishing, 2023, https://tinyurl.com/4zzkfu45.
[21] Andrei Hagiu, “Artificial Intelligence and Competition Policy,” Information Economics and Policy 70 (2025): Article 101080, https://tinyurl.com/4scc3b5m.
[22] European Commission, “Digital Europe Programme: Annual Work Programme 2025,” Digital Strategy, 2025, https://tinyurl.com/jfyk4hk8.
[23] Andrei Hagiu, “Artificial Intelligence and Competition Policy.”
[24] OECD, “Mergers and Their Effect on Startup Innovation,” OECD Science, Technology and Innovation Policy Papers, no. 150, Paris: OECD Publishing, 2025, https://tinyurl.com/4zzkfu45.
[25] Startup Genome, “Global Startup Ecosystem Report 2025: The New Frontier—The Rise of the Gulf as a Global Innovation Driver,” 2024, https://tinyurl.com/3xp3edvx.
[26] Boston Consulting Group, “Powered by Ambition: Building Enduring Innovation Ecosystems in the Middle East,” 2025, https://tinyurl.com/bd27ysap.
[27] Startup Genome, “Global Startup Ecosystem Report 2025: The New Frontier—The Rise of the Gulf as a Global Innovation Driver.”
[28] Kristina Irion, “Government Cloud Computing and National Data Sovereignty,” Policy and Internet 4, no. 3–4 (2012): 40–61, https://tinyurl.com/3ppcsx64.
[29] Konrad Wolfenstein, “Germany’s Federal Government’s Multi-Cloud Strategy: Between Digital Sovereignty and Dependence,” Xpert.digital, April 21, 2025, https://tinyurl.com/cwjk9xbf.
[30] Centre for Intellectual Property and Information Technology Law (CIPIT), “The State of AI in Africa Report 2023,” 2023, https://tinyurl.com/34kpaxmt.
[31] UNESCO, “Harnessing the AI Era in Higher Education: A Handbook for Higher Education Stakeholders,” Paris: UNESCO, 2023, https://tinyurl.com/5n6f33zk.
[32] UNESCO, “AI and Education: Safeguarding Human Agency in Automated Learning Environments,” 2024, https://tinyurl.com/yn723sub.
[33] UNESCO, “Technology and Innovation Report: Artificial Intelligence and Skills Development in Emerging Economies,” Paris: UNESCO, 2023, https://tinyurl.com/mu6n8bek.
[34] UNESCO, “AI and Education: Safeguarding Human Agency in Automated Learning Environments.”
[35] United Nations Conference on Trade and Development (UNCTAD), “Trade Performance and Commodity Dependence,” Geneva and New York, 2003, https://tinyurl.com/35r9v875.
[36] Ibid.
[37] Eurostat, “Digital Skills in 2023: The Impact of Education and Age,” Eurostat News, February 21, 2024, https://tinyurl.com/5efzm8jv.
[38] Ibid.
[39] OECD, “OECD Science, Technology and Innovation Outlook 2023.”
[40] Kristjan Prenga, “AI regulation in the EU, the US and China: An NLP quantitative and qualitative lexical analysis of the official documents,” Journal of Ethics and Legal Technologies 6, no. 2 (December 2024): 132–150, https://tinyurl.com/39nfh3x2.
[41] European Commission, “AI Act | Shaping Europe’s Digital Future,” Digital Strategy, 2025, https://tinyurl.com/3vah674z.
[42] ArtificialIntelligenceAct.eu, “High-Level Summary of the AI Act” and “Regulatory Sandbox Approaches for AI: Overview of EU Member States,” 2024–2025, https://tinyurl.com/4f6p54u9.
[43] Executive Order 14110, “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” signed by President Biden, October 30, 2023, https://tinyurl.com/2yx3kyr5.
[44] National Institute of Standards and Technology (NIST), “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” January 2023, https://tinyurl.com/5ejyt2xp.
[45] Executive Order 14110, “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”
[46] Asress Hailu Gikay, “Risks, Innovation and Adaptability in the UK’s Incrementalism versus the European Union’s Comprehensive Artificial Intelligence Regulation,” International Journal of Law and Information Technology 33 (2024), https://tinyurl.com/vrrxmyuc.
[47] Information Commissioner’s Office (ICO), “In-Depth Report on the 2024 Regulatory Sandbox,” https://tinyurl.com/ms8x9tvm.
[48] OECD, “Recommendation of the Council on Artificial Intelligence,” May 22, 2019 (updated May 2024), https://tinyurl.com/35jru6k4.
[49] OECD.AI (OECD AI Policy Observatory), “Country Dashboard and Policy Initiatives Repository,” OECD, 2024, https://oecd.ai/en/.
[50] Kristjan Prenga, “AI regulation in the EU, the US and China: An NLP quantitative and qualitative lexical analysis of the official documents.”
[51] Ibid.
[52] Ibid.
[53] OECD, “Artificial Intelligence, Data and Competition,” Paris: OECD Publishing, 2024, https://tinyurl.com/ypj2ycjh.
[54] Ibid.
[55] Max von Thun and Daniel Hanley, “Stopping Big Tech from Becoming Big AI: A Roadmap for Using Competition Policy to Keep Artificial Intelligence Open to All.”
[56] Ibid.
[57] Ibid.