Agility key to effective artificial intelligence governance
Editor's Note: In an interview with China Youth Daily, Wang Jiangping, former vice-minister of industry and information technology, highlighted the importance of developing a governance approach to artificial intelligence technologies that both ensures safety and facilitates innovation. Below are excerpts of the interview. The views don't necessarily represent those of China Daily.
Currently, the most significant challenge to AI governance is the speed at which governance systems need to be updated: technology is advancing at full tilt, while regulations lag behind. Specifically, three major issues urgently need to be addressed during the 15th Five-Year Plan (2026-30) period.
The first is the renewal of the governance philosophy. Strict regulation and the fast development of AI should not be viewed as two opposing options. Instead, agile governance is needed. Governance is not about "stepping on the brakes", but about "putting up road signs" and "installing guardrails". It is necessary to define the red lines of safety while leaving sufficient room for trial and error and the evolution of innovation.
The second is the breakthrough in governance technology. Key technologies are needed to carry out effective governance and promote "human-AI alignment", ensuring that the goals, behaviors and outputs of AI systems are consistent with human values, intentions and social norms. It is necessary to invest heavily in the research and development of alignment technologies, and to establish national-level evaluation standards and laboratories, so that the development of AI technologies can be effectively monitored.
The last is the improvement of governance laws and regulations. Efforts should be accelerated to develop a hierarchical, categorized, precise and effective regulatory system. There should be mandatory standards and continuous monitoring for high-risk applications such as autonomous driving and smart healthcare. There should also be mechanisms that ensure corporate self-regulation and third-party supervision. This way, the government, enterprises and society will be able to jointly participate in AI governance.
A great difficulty in international coordination over AI governance lies in profound differences in governance philosophies, something that is further complicated by geopolitics.
On the one hand, some developed countries attempt to establish technology standards centered on their own values through "minilateralism". This may lead to fragmentation of regulations and the exclusion of most developing countries. On the other hand, more than 90 percent of the world's computing power is concentrated in a small number of regions. Many countries lack infrastructure and talent, creating development inequality and making it difficult for them to participate effectively in dialogue on AI governance.
An ideal and effective international coordination mechanism for AI governance should feature at least three elements. The first is inclusiveness. It is essential to make the United Nations the main channel for governance, so that all countries can participate on an equal footing. Second, the mechanism should focus on urgent global risks on which consensus can be reached, such as deepfakes, AI weaponization, and loss of control of AI systems. Basic international norms and response frameworks should be developed for these "red alert" issues.
Third, AI governance should not ignore countries' development needs. The coordination mechanism should include concrete technical assistance, capacity-building and knowledge-sharing programs to help developing countries bridge the intelligence divide and ensure that technological dividends are shared by all.
In the future, global collaboration on AI governance should follow the spirit of respecting sovereignty and cultural diversity, and aim to achieve the greatest balance between security and development, so that AI will serve the interests of all humanity.