As OpenAI, the American artificial intelligence research and deployment company, celebrates the one-year anniversary of the launch of ChatGPT, the media, academia, civil society, the private sector, and world governments have all tried to discern the potential benefits and risks of AI’s impending impact.
Regulation of AI’s safety and security continues to lag behind the pace of the technology’s development. AI is advancing so quickly that governments and their regulatory bodies are scrambling to catch up, wondering how to control something that has already spread like wildfire. AI is Pandora’s box: brimming with endless possibilities, yet undoubtedly rife with unknown hazards.
As the world of governance attempts to pull alongside the tech industry, which has already begun weaving AI into countless facets of the consumer marketplace as well as the healthcare, education, and defense sectors, U.S. President Joe Biden pulled the brakes last week, signaling it was time for the U.S. government to adopt an all-hands-on-deck approach to controlling the growth of artificial intelligence.
President Biden’s Executive Order Calls for Increased AI Safety and Guardrails
On 30 October, President Biden announced the signing of an Executive Order (EO), which aims to promote the responsible development and innovation of artificial intelligence. The EO, which took nearly a year to craft, includes input from Homeland Security and the Department of Defense in order to address concerns over AI’s potential impact on public health and national security. Considering the unethical exploitation of AI can lead to an increase in algorithmic discrimination, the EO also calls for increased measures to ensure artificial intelligence advances equity and civil rights. Algorithmic discrimination can occur when automated systems arbitrarily favor one group of people over another based on a person’s race, ethnicity, or gender.
More specifically, the EO mandates that all companies developing generative artificial intelligence models that pose a risk to national security, economic security, or public health share their safety test results, along with any other critical information, with the U.S. government, a directive that is expected to spark doubt and hesitation among industry leaders. In an interview with Reuters, Bradley Tusk, CEO of Tusk Ventures, stated, “Tech companies would likely shy away from sharing proprietary data with the government over fears it could be provided to rivals.”
To ensure AI systems operate as intended and remain resistant to misuse by malign actors, the EO also calls for the development of standardized evaluations of complex AI systems. The National Institute of Standards and Technology has been tasked with developing a series of policies, testing protocols, and rigorous standards that will be used to verify the safety, security, and trustworthiness of AI systems before the technology is released to the public. As cyberattacks on critical infrastructure have long posed a threat to national security, the Department of Homeland Security has been entrusted with establishing the AI Safety and Security Board, which will apply similar testing protocols and standards to key infrastructure sectors across the country.
To ensure additional layers of protection are implemented at a national scale, both the Department of Homeland Security and the Department of Energy have been tasked with addressing AI systems’ most urgent security risks ‘with respect to biotechnology, cybersecurity, critical infrastructure, and other national security dangers.’
The Risk of AI-Induced Personal and Financial Fraud
Protection against AI-enabled fraud also remains a high priority for the Biden Administration. During a press conference following the signing of the EO, Biden facetiously commented that AI fraudsters can exploit voice-cloning technology with such accuracy that today’s generative AI can fool even one’s friends and family. Yet the dangers of AI-generated deep fakes, particularly photographs, imagery, and voice-overs, have potentially far deeper consequences, especially as artificial intelligence technology continues to develop rapidly.
There is rising concern that AI technology will be optimized to streamline online financial fraud through the exploitation of personal biometric data, a practice known as ‘synthetic identity fraud,’ whereby cybercriminals create new identities from stolen or fabricated information. According to the financial consulting firm Deloitte, synthetic identity fraud is projected ‘to generate at least US$23 billion in losses by 2030, prompting many banks and fintechs to develop more advanced biometric security systems to weed out would-be perpetrators.’
Considering the potential of such massive financial risk, President Biden’s EO authorized the Department of Commerce to develop guidelines and best practices for AI safety and security, including assessing and auditing AI capabilities with a strong emphasis on cybersecurity and biosecurity risks. In order to reduce the risk of exposure to synthetic content, the EO urges the development of science-backed techniques to assist the public in identifying AI-generated information through the use of watermarking or other detection technology.
Existing AI image detection software, at least that which is available to the public, remains in a nascent stage. In July 2023, the New York Times put five AI-detection tools to the test to gauge their ability to detect deep-fake images. Detection software relies on sophisticated algorithms designed to distinguish AI-generated images from those taken by a camera, or even from artwork. The Times found that while the detection technology is rapidly progressing, it still falls short of catching every fake, even the most obvious ones. For example, two of the five tools determined that an image of X CEO Elon Musk in an embrace with what appears to be a robot with a female human head was indeed real.
AI Insight Forum: Leaders of the Tech World Unite in Washington
In September 2023, Senate Majority Leader Chuck Schumer invited a group of two dozen tech executives, including Elon Musk, Bill Gates, Meta’s Mark Zuckerberg, and Google CEO Sundar Pichai, along with industry advocates and skeptics, to attend a forum on Capitol Hill to share with lawmakers their views of what meaningful AI legislation should look like. While the majority of lawmakers from both sides of the aisle agreed there must be some form of government oversight in place, there remained a lack of consensus on how to move forward. Some Republican members voiced concerns regarding overregulation, a standard party talking point, as the GOP has historically opposed extensive government regulation. Following the event, Elon Musk told reporters it was important for the tech titans in attendance “to have a referee,” adding that establishing AI regulations would ensure “companies take actions that are safe and in the interest of the general public.”
Following September’s AI Insight Forum, Senator Schumer organized a follow-up session in October, inviting a bipartisan group of U.S. Senators along with leading financial lenders, academics, civil society representatives, and tech-industry experts to discuss how to harness AI’s potential to enable innovation. In his opening remarks, Senator Schumer noted, “AI could be our most spectacular innovation yet, a force that can ignite a new era of technological advancement, scientific discovery, and industrial might.” Nonetheless, Schumer added that if artificial intelligence is not managed safely and proper guardrails are not put into place, the failure to do so ‘could stifle or even halt innovation altogether.’
Developing a Global Approach to AI Regulations
Because governing a globalized technology requires more than a one-size-fits-all approach, the EO also underscores the importance of the U.S. working with its allies and partners abroad to develop an international framework governing the safety and use of AI. Two days after the announcement of the Executive Order, Vice-President Kamala Harris traveled to London to participate in Britain’s landmark AI Safety Summit. The global gathering, which hosted 25 nations, including the U.S. and China, resulted in the signing of a landmark agreement dubbed the ‘Bletchley Declaration,’ which acknowledges that ‘risks arising from AI are inherently international in nature’ and should be addressed through international cooperation. The declaration also recognizes that ‘AI systems are already deployed across many domains of daily life, including housing, employment, transport, education, health, accessibility, and justice, and their use is likely to increase,’ adding that because AI is already woven into the fabric of society, now is the time to act and regulate accordingly.
During the same week, the Group of Seven (G7) agreed to a ‘code of conduct’ intended for companies that develop advanced artificial intelligence systems. The 11 guiding principles, which are voluntary, aim to ensure organizations that develop AI systems promote ‘safe, secure, and trustworthy AI worldwide.’ The objective of the guiding principles is to encourage AI developers to identify and mitigate potential risks and to report on any misuse once their products are launched into the market. The code of conduct calls on AI developers to invest heavily in security controls to limit exploitation or fraudulent use of complex AI systems. The G7 advises companies to abide by the voluntary guidelines until ‘governments develop more enduring and/or detailed governance and regulatory approaches.’
National Security Concerns
A chronic concern for governments will remain the existential risk AI systems may pose, especially if exploited to undermine national security or public health by malign actors, both foreign and domestic. Underscoring the broad extent to which such risks can impact all facets of society, ranging from critical infrastructure to nuclear and cybersecurity systems, the G7 Code of Conduct also urges AI developers to avoid designing systems that ‘undermine democratic values’ or that can ‘facilitate terrorism, enable criminal misuse, or pose substantial risks to safety, security, and human rights.’
Meanwhile, the intelligence community must also react to the rise of rapidly evolving artificial intelligence. In a recent interview, the Central Intelligence Agency’s (CIA) Director of Artificial Intelligence, Lakshmi Raman, stated that AI will undoubtedly disrupt how the intelligence business operates. AI has already begun to rewrite the playbook for how the intelligence community gathers information and disseminates intelligence. Traditional intelligence tradecraft must keep pace with today’s evolving AI technologies through closer engagement with the tech industry to ensure the U.S. government stays one step ahead of its challengers.
Artificial Intelligence and the U.S.-China Strategic Relationship
In 2017, China, America’s fiercest competitor, announced a highly ambitious program for the domestic development of artificial intelligence, with the goal of becoming the world’s ‘major AI innovation center’ by 2030, although today’s figures suggest otherwise. The U.S. currently far surpasses China in AI investment, spending $47.4 billion in 2022, nearly three and a half times China’s $13.4 billion.
In October 2022, in an attempt to restrict Chinese access to AI processor chips, which are essential in driving the computing power of extensive AI systems, the U.S. Department of Commerce’s Bureau of Industry and Security implemented a series of export controls that limited China’s ability to purchase and manufacture high-end chips used for military applications.
A year later, the Biden Administration announced additional export controls for AI chips and chipmaking tools. Expanding upon previous measures, the recent controls were intended to ‘close loopholes’ – mainly the restriction of exports to Russia and Iran – in order to prevent transshipments from eventually landing in China. According to U.S. Commerce Secretary Gina Raimondo, the objective of the restrictions was to ‘prevent Chinese access to advanced semiconductors that could fuel breakthroughs in artificial intelligence, especially with military uses.’
The United States’ relationship with China is considered Washington’s most complex bilateral relationship, framed primarily around strategic competition. Further advances in AI will play a critical role in the future trajectory of the U.S.-China relationship, especially as AI developers tailor their products to boost defensive capabilities, allowing AI to tip the scales of competitive advantage and geopolitical balance.
President Biden’s signing of the Artificial Intelligence Executive Order will be the first of many measures designed to manage the expanding role and impact of AI systems as they continue to be interwoven throughout all facets of society. While the proposed guidelines are meant to serve as a critical guardrail, the responsibility of enacting enforceable legislation remains with Congress. At a time when America is experiencing some of its deepest political polarization, U.S. lawmakers would do well to recall the saying, “A house divided cannot stand.”
Although the Biden Administration’s EO is comprehensive in its proposed efforts, it nonetheless has limited power. More importantly, because it is not legislation, the EO can be reversed by future administrations. At present, the EO can only function within the confines of existing authorities and executive branch agencies. President Biden has urged Congress to pass AI legislation, which would carry the legal authority to rein in AI technology with enforceable powers. Remarkably, heavily regulating AI is one of the few areas that receives bipartisan support among Americans. According to a Morning Consult poll conducted this past June, more than half of registered U.S. voters, including 57% of Democrats and 50% of Republicans, support rigorous regulation of AI development, including the creation of new regulatory bodies specifically designed to oversee it.
Although passing AI legislation has, in theory, received bipartisan support, Washington currently remains preoccupied with a looming government shutdown, dual conflicts in Ukraine and Gaza, and a presidential election less than a year away. The probability of Congress passing legislation during campaign season, thereby handing President Biden an electoral boost, is nothing short of a ‘deep fake’ reality.
 “Algorithmic Discrimination Protections,” The White House, https://www.whitehouse.gov/ostp/ai-bill-of-rights/algorithmic-discrimination-protections-2/#:~:text=Algorithmic%20discrimination%20occurs%20when%20automated,orientation)%2C%20religion%2C%20age%2C. (Date retrieved November 7, 2023).
 “FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence,” The White House, October 30, 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.
 Jeff Mason, Trevor Hunnicutt and Alexandra Alper, “Biden administration aims to cut AI risks with executive order,” Reuters, October 31, 2023, https://www.reuters.com/technology/white-house-unveils-wide-ranging-action-mitigate-ai-risks-2023-10-30/.
 “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” The White House, October 30, 2023, https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.
 Satish Lalchand, Val Srinivas, and Jill Gregorie, “Using biometrics to fight back against rising synthetic identity fraud,” Deloitte, July 27, 2023, https://www2.deloitte.com/xe/en/insights/industry/financial-services/financial-services-industry-predictions/2023/financial-institutions-synthetic-identity-fraud.html.
 Stuart A. Thompson and Tiffany Hsu, “How Easy Is It to Fool A.I.-Detection Tools?” The New York Times, June 28, 2023, https://www.nytimes.com/interactive/2023/06/28/technology/ai-detection-midjourney-stable-diffusion-dalle.html.
 Mary Claire Jalonick and Matt O’Brien, “Tech industry leaders endorse regulating artificial intelligence at rare summit in Washington,” Associated Press, September 14, 2023, https://apnews.com/article/schumer-artificial-intelligence-elon-musk-senate-efcfb1067d68ad2f595db7e92167943c.
 David Shepardson, Moira Warburton and Mike Stone, “Tech titans meet US lawmakers, Musk seeks 'referee' for AI,” Reuters, September 14, 2023, https://www.reuters.com/technology/musk-zuckerberg-gates-join-us-senators-ai-forum-2023-09-13/.
 “Majority Leader Schumer Opening Remarks At The Senate’s Second AI Insight Forum,” Senate Democrats, October 24, 2023, https://www.democrats.senate.gov/newsroom/press-releases/majority-leader-schumer-opening-remarks-at-the-senates-second-ai-insight-forum.
 “The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023,” UK Government, November 1, 2023, https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023.
 Foo Yun Chee, “Exclusive: G7 to agree AI code of conduct for companies,” Reuters, October 29, 2023, https://www.reuters.com/technology/g7-agree-ai-code-conduct-companies-g7-document-2023-10-29/.
 “Hiroshima Process International Guiding Principles for Advanced AI system,” European Commission, October 30, 2023, https://digital-strategy.ec.europa.eu/en/library/hiroshima-process-international-guiding-principles-advanced-ai-system.
 “A Conversation with Lakshmi Raman at POLITICO's AI & Tech Summit,” Politico, September 28, 2023, https://www.politico.com/video/2023/09/28/a-conversation-with-lakshmi-raman-at-politicos-ai-tech-summit-00118914.
 “China's ambitions in artificial intelligence,” European Parliament, https://www.europarl.europa.eu/RegData/etudes/ATAG/2021/696206/EPRS_ATA(2021)696206_EN.pdf. (Date retrieved November 11, 2023).
 “Artificial Intelligence Index Report 2023: Chapter 4: The Economy,” Stanford University, 2023, https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-Report-2023_CHAPTER_4.pdf.
 “Commerce Implements New Export Controls on Advanced Computing and Semiconductor Manufacturing Items to the People’s Republic of China (PRC),” Bureau of Industry and Security, U.S. Department of Commerce, October 7, 2022, https://www.bis.doc.gov/index.php/documents/about-bis/newsroom/press-releases/3158-2022-10-07-bis-press-release-advanced-computing-and-semiconductor-manufacturing-controls-final/file.
 Kif Leswing, “U.S. curbs export of more AI chips, including Nvidia H800, to China,” CNBC, October 17, 2023, https://www.cnbc.com/2023/10/17/us-bans-export-of-more-ai-chips-including-nvidia-h800-to-china.html#:~:text=The%20U.S.%20Department%20of%20Commerce,chips%2C%20senior%20administration%20officials%20said.
 “Experts react: What does Biden’s new executive order mean for the future of AI?” Atlantic Council, October 30, 2023, https://www.atlanticcouncil.org/blogs/new-atlanticist/experts-react/experts-react-what-does-bidens-new-executive-order-mean-for-the-future-of-ai/.
 “AI Regulation Takes Baby Steps on Capitol Hill,” TIME, September 14, 2023, https://time.com/6313892/ai-congress-regulation-hearings/.
©2023 Trends Research & Advisory, All Rights Reserved.