The capabilities of artificial intelligence (AI) are advancing rapidly. In under a decade, progress has moved from the relatively rudimentary large language models (LLMs) of 2017 to systems in 2025 that approach human-level performance on many cognitive tasks. While these developments offer substantial opportunities for innovation and scientific advancement, they also introduce risks that must be managed carefully. This report examines the intersection of AI-related risks and governance mechanisms, offering a structured framework for navigating this increasingly complex landscape.
The pace of AI development shows little sign of slowing. New scaling laws continue to emerge, and technical constraints once expected to limit progress have proven more surmountable than anticipated. At the same time, advanced capabilities are spreading widely, with open-source models steadily narrowing the gap with proprietary systems.
AI risks span a broad spectrum, ranging from algorithmic bias and labor-market disruption to cybersecurity threats and the potential erosion of human control over increasingly autonomous systems. At a fundamental level, AI reshapes the risk landscape by amplifying the potential impact of adverse events while increasing uncertainty about their likelihood and timing.
This report categorizes AI risks by the types of harm they may cause:
- Physical harm: e.g., accidents involving AI systems or use by malicious actors
- Property harm: e.g., cybercrime scenarios
- Psychological harm: e.g., bias, discrimination, nonconsensual imagery
- Societal harm: e.g., misinformation, job displacement
- Geopolitical harm: e.g., shifts in power dynamics, asymmetric warfare
- Technological harm: e.g., risks from autonomous systems
Effective governance of these risks requires a multi-layered approach spanning the corporate, intercorporate, national, and international levels. At the corporate level, firms must implement responsible AI principles, risk management practices, and internal governance mechanisms. Intercorporate governance relies on partnerships and shared standards to enable agile coordination across firms. National governance should apply risk-based regulatory approaches, including capability thresholds and clearly defined prohibited use cases. At the international level, both soft law instruments, such as principles and guidelines, and hard law mechanisms, including treaties and conventions, play an essential role.