(Yicai) Sept. 4 -- How to promote innovation in artificial intelligence while building scientific and effective governance systems has become an important topic in policy research and practice.
One viewpoint is that because the development and application of AI are still in their early stages, raising governance issues at this point may impose unnecessary constraints on technological progress. This perspective underestimates not only the risks AI poses to society, but also the critical role governance mechanisms play in guiding technological development.
Governance is not the opposite of innovation. It is an indispensable institutional support for achieving the healthy, orderly, and sustainable development of AI. Moreover, the cross-border nature of AI, its wide-ranging impact, and the systemic risks it entails make it clear that AI governance cannot be confined to the national level. It must be regarded as a new form of global public affairs.
Three Common Dimensions of AI Governance
AI governance is a dynamic, multidimensional, and multi-stakeholder participatory process. Its purpose is not only to head off potential risks but also to shape the direction of AI development and define its boundaries, aligning technological progress with social values.
The common dimensions of AI governance include ethical principles, policy support, and market incentives, as well as regulatory and standards frameworks.
Ethical principles include, but are not limited to, the safety and controllability of AI; transparency and explainability (ensuring users can understand the operational mechanisms and decision-making processes of AI); fairness and non-discrimination (preventing AI from exacerbating social inequalities); and accountability and traceability.
In this regard, the National New Generation AI Governance Expert Committee proposed eight governance principles for "responsible AI" in 2019. Similarly, the European Union and the Organisation for Economic Co-operation and Development have also released multiple AI ethics frameworks.
At the level of policy support and market incentives, governments can provide the institutional foundation for AI innovation through financial investment, funding of research and development, infrastructure development, talent policies, and public procurement.
At the same time, it is also important for governments to ensure the diversity and sustainability of the technological innovation ecosystem through antitrust measures, data sharing, and support for small and medium-sized enterprises. For example, China’s New Generation AI Development Plan, released in 2017, emphasized an innovation path led by the state and coordinated with enterprises.
Regulatory and standards frameworks are also indispensable. Regulation does not equal restriction. As well as laws and regulations, its scope includes technical standards, risk identification, accountability mechanisms, compliance assessment, and tiered management.
The EU's AI Act has entered its final legislative stage. It categorizes AI systems as "prohibited," "high-risk," "limited-risk," or "minimal-risk," proposing differentiated regulatory requirements for each tier. This provides an important reference model for the tiered management of AI.
Challenges Facing Global AI Governance
In practice, global AI governance faces many challenges. The first is differences arising from varying technological approaches. For example, the large language model developed in China, DeepSeek, has achieved significant breakthroughs in areas such as search enhancement, Chinese semantic understanding, and reasoning capabilities.
But this new technological approach also raises governance issues. Should Chinese LLMs be evaluated using the same standards as foreign LLMs? Should special protection mechanisms be established for Chinese training datasets? There are no globally unified solutions to these issues.
Second, while AI is evolving at an exponential pace, progress on governance is lagging behind, partly because official systems tend to evolve more slowly and remain fragmented. For instance, within less than six months of OpenAI's release of GPT-4, many comparable LLMs were launched globally, yet most countries have yet to define clear legal categories for LLMs or the boundaries of data use, or to implement concrete control mechanisms.
Lastly, there is the influence of geopolitical factors, with various conflicts creating barriers to AI collaboration. Issues that could be handled through broad technical, ethical, and standards collaboration are increasingly being framed as strategic competition.
For instance, AI development has gradually evolved into a competitive race dominated by a few nations and led by major tech companies. Global collaborative research and risk-sharing have become increasingly difficult to achieve in the current geopolitical landscape.
It is clear that without cooperation, AI governance will struggle to address cross-border risks; without inclusivity, it will exacerbate the intelligence divide; and without legitimacy, it will erode public trust. AI governance must therefore return to the right path of global collaboration.
(The author of this article is dean of the Schwarzman College at Tsinghua University and is a director of the Institute for AI International Governance.)
Editors: Dou Shicong, Tom Litting