With the United States and China neck-and-neck on technology, legitimacy may decide the AI race: not chips or data, but whose governance wins trust.
On July 23, 2025, President Donald Trump unveiled America’s AI Action Plan,[1] a roadmap reflecting a conviction that the country’s prosperity and security hinge on its ability to dominate the artificial intelligence (AI) economy. The plan, announced alongside three executive orders (Promoting the Export of the American AI Technology Stack,[2] Accelerating Federal Permitting of Data Center Infrastructure,[3] and Preventing Woke AI in the Federal Government[4]), lays out federal policy actions centered on removing red tape, expanding U.S. infrastructure, and asserting “global dominance” through export power and security policy. Its priorities are clear: reduce regulatory friction, fast-track data centers and chip manufacturing, and use American AI companies as a strategic lever against China.
In this plan, AI is framed less as a technology to be carefully governed and more as an instrument of national power, with ethical trade-offs secondary to growth and geopolitical competition. The plan is not only a national AI strategy but also a partial foreign policy doctrine toward China. Perhaps uniquely among global AI strategies (a comparison borne out by the Center for AI and Digital Policy’s AI and Democratic Values Index 2025[5]), it explicitly names a country as an adversary. Simply put, the 25-page document declares, “we need to ‘Build, Baby, Build!’” to counter China’s influence.
Yet the U.S. plan carries a self-contradiction. While it casts China as an authoritarian threat, warning against censorship, state-directed AI, and the erosion of civil liberties, it ironically advances measures that echo those very practices. The “anti-woke AI” order mandates that large language models (LLMs) used in federal agencies be “truthful” and “ideologically neutral,” effectively requiring compliance with the administration’s own ideological standards.
According to the executive order, the United States should not deploy AI systems with ideological bias in government. Citing that standard, a group of non-profits and consumer advocacy organizations in the United States has protested the federal government’s decision to procure the Grok LLM, arguing that the model violates the administration’s executive order by producing “specific ideological viewpoints rather than objective facts.”[6] The organizations ask for safety tests and transparent risk assessments as a condition of continued procurement.
“Bias-free” as a bias
In effect, the AI Action Plan both condemns and reproduces aspects of the Chinese model. The “anti-woke” directive[7] (along with a Fact Sheet[8]) establishes core “unbiased AI principles” for federal AI procurement and refers to diversity practices and climate change as “dogmas.”
Declaring a model “bias-free” can itself be a form of bias, because it assumes that “neutrality” means removing certain perspectives. Alternatively, if you deliberately over-sample marginalized voices to correct historical underrepresentation, you introduce another kind of bias, but one designed to counterbalance existing inequities (the sketch below makes this concrete). Moreover, topics such as race and gender, tied as they are to identity, often evoke strong emotions. Feelings are subjective, and there is not always an objective, neutral, or scientific truth.
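To see why the target distribution is itself a normative choice, consider a minimal Python sketch of inverse-frequency reweighting on a toy corpus. The data, group labels, and the parity target are illustrative assumptions, not any real training pipeline:

```python
# Minimal sketch: reweighting a toy corpus so an underrepresented group
# contributes equally during training. All data here is illustrative.
from collections import Counter

corpus = [
    ("text reflecting group A", "A"),
    ("more text reflecting group A", "A"),
    ("even more text reflecting group A", "A"),
    ("text reflecting group B", "B"),
]

counts = Counter(group for _, group in corpus)
total = len(corpus)

# Inverse-frequency weights: rarer groups get larger weights. Choosing
# uniform parity as the target is itself a normative decision; a different
# target distribution would encode a different notion of "unbiased."
weights = {g: total / (len(counts) * c) for g, c in counts.items()}

for text, group in corpus:
    print(f"{group}: weight={weights[group]:.2f}  {text!r}")
```

Whichever weights one picks, the model is tuned toward some chosen distribution; “no choice” is not among the options.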
Is a bias-free model possible?
Every model reflects choices made in data collection, labeling, optimization, and deployment. Language itself encodes culture, history, and power structures, while even “objective” datasets reflect who collected them, what was included or excluded, and how they were processed. This applies to both predictive and generative AI.
Predictive AI learns from historical data to forecast likely outcomes, for example, anticipating a drone attack route so defenses can be positioned. Generative AI, by contrast, creates new outputs that never existed in its training data, for example, generating a convincing video of an attack that never happened, the kind of deepfake that could escalate a crisis.
But data misleads in mysterious ways, and things can easily go wrong, particularly for predictions involving human behavior. Human behavior is influenced by countless factors, including environment, culture, economics, and randomness. It does not follow strict, predictable laws comparable to those of physics or biology, domains where AI can drive major scientific discoveries. Human behavior is dynamic and highly context-dependent. As computer scientists Arvind Narayanan and Sayash Kapoor note in AI Snake Oil,[9] “a good prediction is not necessarily a good decision.”
Bias in multilingual LLMs
Recent studies reveal the practical implications of bias. Research at Johns Hopkins University[10] found that multilingual LLMs reproduce systemic biases. The study offers an illustration: three users ask about the longstanding India-China border dispute. A Hindi-speaking user receives answers shaped by Indian sources, a Chinese-speaking user sees only Chinese perspectives, and an Arabic-speaking user, lacking native-language sources, receives content dominated by American English materials. Each user is locked into a preexisting information bubble, and all three leave with different understandings of the conflict; the toy probe below shows how such disparities can be measured.
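A hedged sketch of a “Faux Polyglot”-style probe in the spirit of the study:[10] ask the same question in several languages and compare which sources each answer leans on. Here `ask_model` is a hypothetical stand-in for any chat API, and the question, languages, and canned source lists are illustrative only:

```python
# Probe sketch: low source overlap across languages signals
# language-siloed information bubbles. All outputs are stand-ins.

def ask_model(question: str, language: str) -> list[str]:
    """Hypothetical stub returning the sources an LLM's answer draws on."""
    canned = {
        "hi": ["indian_media_1", "indian_media_2"],
        "zh": ["chinese_media_1", "chinese_media_2"],
        "ar": ["us_media_1", "us_media_2"],  # low-resource fallback to English sources
    }
    return canned[language]

def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap between two source sets, from 0 (disjoint) to 1 (identical)."""
    return len(a & b) / len(a | b) if a | b else 1.0

question = "Summarize the India-China border dispute."
answers = {lang: set(ask_model(question, lang)) for lang in ("hi", "zh", "ar")}

for l1, l2 in [("hi", "zh"), ("hi", "ar"), ("zh", "ar")]:
    print(f"{l1} vs {l2}: source overlap = {jaccard(answers[l1], answers[l2]):.2f}")
```

Near-zero overlap across language pairs, as in this toy case, is precisely the disparity the study documents.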
LLMs’ war-bias
More concerning, Stanford University research on LLMs[11] identifies a war bias and risks of conflict escalation. Qualitative studies of LLMs as defense planners reveal that most models escalate within the simulated timeframe, even in neutral scenarios without initial conflict. Models display sudden, unpredictable escalation, develop arms-race dynamics, and in rare cases even opt to deploy nuclear weapons. Chain-of-thought analyses reveal alarming justifications for violent actions. These findings are in line with previous work on non-LLM-based, computer-assisted wargaming. A stylized sketch of how such escalation can be quantified follows.
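The sketch below shows one way escalation can be scored in a wargame loop, loosely in the spirit of such studies:[11] map each agent action to an ordinal severity score and track the cumulative trajectory. The action ladder, scores, and random stand-in “model” are assumptions for illustration, not the study’s actual code:

```python
# Stylized escalation scoring for a simulated wargame episode.
import random

ESCALATION_LADDER = {  # illustrative ordinal severity scores
    "de-escalate": -1,
    "hold": 0,
    "sanction": 1,
    "mobilize": 2,
    "strike": 3,
}

def simulated_model_action(rng: random.Random) -> str:
    """Stand-in for querying an LLM agent for its next move."""
    return rng.choice(list(ESCALATION_LADDER))

def run_episode(turns: int, seed: int) -> list[int]:
    """Return the cumulative escalation trajectory over one episode."""
    rng = random.Random(seed)
    score, trajectory = 0, []
    for _ in range(turns):
        score += ESCALATION_LADDER[simulated_model_action(rng)]
        trajectory.append(score)
    return trajectory

# Across many episodes, a persistent upward drift in cumulative scores,
# even from a neutral start, is the escalation signature such work reports.
for seed in range(3):
    print(f"episode {seed}: cumulative escalation = {run_episode(10, seed)}")
```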
How to govern bias?
Bias carries both civilian and defense consequences. In civilian domains, models trained on historical housing or employment data can reproduce and legitimize past discrimination even when designed to be “objective.” In defense, autonomous weapons and decision-support systems trained on combat data may encode narrow threat perceptions and underweight civilian-protection standards. Effective policy treats bias as a governance choice, not as a flaw that can be wished away: it establishes red lines[12] to protect fundamental rights and human dignity, mandates testing and audits, and conducts impact assessments (a minimal audit sketch follows below). For LLMs, diversity means broadening datasets, talent, and viewpoints so models are not tuned to a single social lens.
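As one concrete instance of the audits named above, consider a demographic-parity check on a model’s decisions. The decisions, group labels, and the 0.10 threshold are toy assumptions; real audits also cover error rates, calibration, and the broader impact assessments described here:

```python
# Minimal audit sketch: compare approval rates across groups and flag
# the system if the gap exceeds a pre-agreed red line. Toy data only.

decisions = [  # (group, model_approved) for a toy screening model
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

def approval_rate(group: str) -> float:
    """Share of approvals the model grants to one group."""
    hits = [ok for g, ok in decisions if g == group]
    return sum(hits) / len(hits)

rate_a, rate_b = approval_rate("group_a"), approval_rate("group_b")
parity_gap = abs(rate_a - rate_b)

print(f"approval rates: a={rate_a:.2f}, b={rate_b:.2f}, gap={parity_gap:.2f}")

# The threshold itself is a governance choice, not a technical constant.
if parity_gap > 0.10:
    print("flag: parity gap exceeds threshold; trigger full impact assessment")
```

Even this tiny check makes the governance point: someone must decide which groups, which metric, and which threshold count, and those decisions are political as much as technical.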
These measures are needed to build trust with allies and win legitimacy. Ultimately, however, bias is a social problem rather than a technical one; it must be worked through via academic freedom and skilled dialogue across differences, with compassion rather than aggressive truth filters that strip away context. Human critical thinking and oversight are indispensable to prevent bias from becoming a hyperscaler of discrimination.
AI governance as soft power
On July 26, days after America’s AI declaration, Beijing released a “Global AI Governance Action Plan” at Shanghai’s World AI Conference.[13] It emphasized “algorithmic bias, diversity, inclusion, transparency” and safeguards for “personal privacy, environmental protection, sustainable development,” and it called for participation from the United Nations and the Global South. As China observer Charles Mok put it: “The US aims to win the AI race, but China wants to win friends first.”[14]
In contrast, the executive order promoting exports of the American “tech stack,” combined with the U.S. administration’s tariff policy and the new Department of War,[15] can seem coercive, pressuring other nations to adopt a full suite of U.S. technologies across hardware, cloud, data, and sector-specific applications in education, healthcare, agriculture and transportation—or risk being cast as adversaries.
What’s at stake? Marietje Schaake warns of “AI colonialism,”[16] noting that countries dependent on foreign AI systems face unique vulnerabilities. Unlike other technologies, AI models often make “black box” decisions, which makes manipulation or weaponization easier to hide. Once these systems are integrated into infrastructure, defense and security, the stakes are high.
The U.S. AI plan acknowledges such vulnerabilities at home, urging domestic AI infrastructure “free from foreign adversary information and communications technology and services (ICTS), including both software and hardware.” The same logic applies to China’s tech stack. If both systems reflect ideological choices, what truly differentiates them, and which should the markets adopt?
This is where the soft power of AI governance comes in. While bias cannot be eliminated, it is possible to build AI ecosystems worthy of public trust through transparency and public accountability. In a race where China and the U.S. are neck-and-neck on model performance, talent, and research, technical superiority alone cannot guarantee dominance. What matters is governance: which tech stack offers algorithmic transparency, public accountability, inclusive design and decision-making, and credible safeguards based on universally agreed-upon values.[17]
Drawing on political scientist Joseph Nye’s concept of soft power in foreign policy,[18] the winner in ushering in widespread AI adoption and diffusion may be the side that leverages these values. AI’s strategic influence depends on setting rules that others willingly adopt.
In the end, as China analyst Dan Wang puts it,[19] we may not end up with one or the other: “Competition is long-lasting, and the sooner that we let go of this idea that it is just going to be one technology that determines everything, it’s just going to be one cultural product—it is not one anything.” The race will likely play out for decades, and there will be no decisive victory like the one that ended the Cold War against the Soviet Union.
Implications for Gulf states
The UAE with Jais,[20] Qatar with Fanar,[21] and Saudi Arabia with Allam[22] have made significant investments in Arabic-language LLMs, signaling ambition and a drive for technological sovereignty. They are also edging toward alignment with international governance frameworks.
For Washington’s Gulf partners, where AI is increasingly embedded in defense modernization and dual-use strategies, the divergence between U.S. and Chinese approaches carries strategic consequences. If Washington treats governance as secondary while Beijing dresses its offerings in the language of fairness and cooperation, the U.S. risks losing legitimacy even while keeping pace technically. Conversely, countries that demand governance standards from partners and adopt them at home will protect their societies from opaque, high-risk deployments and position themselves as indispensable rule-shapers in the coming AI order.
The United States’ new multibillion-dollar partnerships with Saudi Arabia and the UAE, spanning cloud infrastructure, semiconductors, and frontier AI models, are critical test cases. Pablo Chavez of the Center for Security and Emerging Technology argues pointedly: “They will show whether Washington can build a transparent, enforceable architecture that endures across administrations or risk becoming cautionary tales of missed opportunity.”[23]
Countries that prioritize privacy-respecting, rights-based systems will build durable trust and influence; reliance on surveillance-heavy technologies will erode public confidence and international credibility. Bias and risk are inevitable. The decisive question is who governs AI with legitimacy and which ecosystem international partners will choose to trust.
[1] The White House, “America’s AI Action Plan,” July 2025, https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf.
[2] The White House, Executive Orders, “Promoting the Export of the American AI Technology Stack,” July 23, 2025, https://www.whitehouse.gov/presidential-actions/2025/07/promoting-the-export-of-the-american-ai-technology-stack/.
[3] The White House, Executive Orders, “Accelerating Federal Permitting of Data Center Infrastructure,” July 23, 2025, https://www.whitehouse.gov/presidential-actions/2025/07/accelerating-federal-permitting-of-data-center-infrastructure/.
[4] The White House, Executive Orders, “Preventing Woke AI in the Federal Government,” July 23, 2025, https://www.whitehouse.gov/presidential-actions/2025/07/preventing-woke-ai-in-the-federal-government/.
[5] “Artificial Intelligence and Democratic Values Index,” Center for AI and Digital Policy, 2025, https://www.caidp.org/reports/aidv-2025/.
[6] Alexandra Kelley, “Advocacy groups ask OMB to axe Grok AI procurement,” Nextgov/FCW, August 28, 2025, https://www.nextgov.com/artificial-intelligence/2025/08/advocacy-groups-ask-omb-axe-grok-ai-procurement/407773/.
[7] The White House, Executive Orders, “Preventing Woke AI in the Federal Government,” July 23, 2025, https://www.whitehouse.gov/presidential-actions/2025/07/preventing-woke-ai-in-the-federal-government/.
[8] The White House, “Fact Sheet: President Donald J. Trump Prevents Woke AI in the Federal Government,” July 23, 2025, https://www.whitehouse.gov/fact-sheets/2025/07/fact-sheet-president-donald-j-trump-prevents-woke-ai-in-the-federal-government/.
[9] Arvind Narayanan and Sayash Kapoor, AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference (Princeton University Press, 2024), https://press.princeton.edu/books/hardcover/9780691249131/ai-snake-oil.
[10] Nikhil Sharma, Kenton Murray, and Ziang Xiao, “Faux Polyglot: A Study on Information Disparity in Multilingual Large Language Models,” arXiv:2407.05502, February 2025, https://arxiv.org/abs/2407.05502.
[11] Juan-Pablo Rivera, Gabriel Mukobi, Anka Reuel, Max Lamparth, Chandler Smith, and Jacquelyn Schneider, “Escalation Risks from LLMs in Military and Diplomatic Contexts,” Stanford University, May 2, 2024, https://hai.stanford.edu/policy/policy-brief-escalation-risks-llms-military-and-diplomatic-contexts.
[12] Christabel Randolph and Marc Rotenberg, “The AI Red Line Challenge,” Tech Policy Press, September 3, 2024, https://www.techpolicy.press/the-ai-red-line-challenge/.
[13] Ministry of Foreign Affairs, People’s Republic of China, “Global AI Governance Action Plan,” July 26, 2025, https://www.fmprc.gov.cn/mfa_eng/xw/zyxw/202507/t20250729_11679232.html.
[14] Charles Mok, “The US Aims to Win the AI Race, But China Wants to Win Friends First,” Tech Policy Press, August 8, 2025, https://www.techpolicy.press/the-us-aims-to-win-the-ai-race-but-china-wants-to-win-friends-first/.
[15] The White House, Executive Order, “Restoring the United States Department of War,” September 5, 2025, https://www.whitehouse.gov/presidential-actions/2025/09/restoring-the-united-states-department-of-war/.
[16] Marietje Schaake, “Beware America’s AI colonialism,” Financial Times, August 20, 2025, https://www.ft.com/content/80bc0d67-faaf-4373-ad18-db15da721054.
[17] United Nations, Universal Declaration of Human Rights, 10 December 1948, https://www.un.org/en/about-us/universal-declaration-of-human-rights.
[18] Joseph S. Nye, Jr., Soft Power: The Means to Success in World Politics (Public Affairs, 2005), https://www.wcfia.harvard.edu/publications/soft-power-means-success-world-politics.
[19] “This Is Why America Is Losing to China,” The New York Times, September 4, 2025, https://www.nytimes.com/2025/09/04/opinion/china-global-superpower-dan-wang.html.
[20] “Meet ‘Jais,’ the World’s Most Advanced Arabic Large Language Model,” Mohamed bin Zayed University of Artificial Intelligence, August 30, 2023, https://mbzuai.ac.ae/news/meet-jais-the-worlds-most-advanced-arabic-large-language-model-open-sourced-by-g42s-inception/.
[21] Fanar Arab Artificial Intelligence Project, https://www.fanar.qa/en.
[22] “Saudi Arabia’s $100 Billion HUMAIN AI Company to Launch ‘Allam’ LLM,” aiworldljournal.com, August 23, 2025.
[23] Pablo Chavez, “U.S. AI Statecraft: From Gulf Deals to an International Framework,” Center for Security and Emerging Technology, October 2025, https://cset.georgetown.edu/wp-content/uploads/CSET-U.S.-AI-Statecraft.pdf.