
Superintelligence development: Better slow than sorry

chinadaily.com.cn | Updated: 2026-02-12 18:12
[Illustration by Jin Ding/China Daily]

Editor's note: As tech giants and research institutes across the world race to develop artificial general intelligence, and even aim to usher in a future of superintelligence, an open letter issued months ago calling for a temporary "prohibition" on the development of superintelligence has garnered support from some scientists, including artificial intelligence pioneers. Zeng Yi, a researcher at the Institute of Automation of the Chinese Academy of Sciences, spoke to Peng Fei, a commentator for People's Daily, about the impact superintelligence could have and why safety should be the top priority. Below are excerpts from the interview. The views don't necessarily represent those of China Daily.

Artificial general intelligence generally refers to an information processing tool with a high capacity for generalization, one that approaches or reaches the level of human intelligence and has broad application prospects.

Artificial superintelligence, by contrast, refers to intelligence that surpasses human intelligence in all respects and is regarded as a life-like entity. This means it would develop autonomous consciousness, and many of its thoughts and actions would likely be incomprehensible to humans, and therefore less controllable.

It is hoped that superintelligence will be "super-altruistic", but what if it turns out to be "super-malevolent"? It is this sense of uncertainty that causes concern.

Superintelligence cannot be simply compared to any technological tool in history, as the possibility of it possessing independent cognition and surpassing human intelligence presents an unprecedented challenge. If the goals of superintelligence are inconsistent with human values, even minor deviations could be amplified by its capabilities and lead to catastrophic consequences.

Safety must be the first priority in the development of superintelligence. That is to say, safety should be embedded in its "genes". Safety guardrails should not be lowered over concerns that they may limit a model's capabilities. Comprehensive assessment is needed to identify as many potential hazards as possible and to strengthen a model's safety.

Typical security issues such as privacy leakage and disinformation can be effectively addressed and short-term risks properly handled through the technical cycle of "attack-defense-evaluation" and the continuous upgrading of the model.

But in the long run, the real challenge lies in aligning artificial superintelligence with human expectations. Reinforcement learning from human feedback, the current approach that embeds human values into AI through human-machine interaction, will likely prove ineffective for superintelligence.

Given that superintelligence may develop self-awareness, an ideal vision is to make it develop moral intuition, empathy and altruism on its own, rather than merely relying on values and rules imposed from the outside. Risks can only be minimized when AI evolves from being ethically compliant to having morality.

Humanity needs to prevent the development of AI from turning into an "arms race". The creation of the world's first superintelligence might not require international cooperation, but ensuring that superintelligence is safe and reliable for all humanity will require global collaboration.

The world needs an efficient and effective international institution to coordinate the governance of AI and ensure its safety. In August 2025, the United Nations General Assembly decided to establish the Independent International Scientific Panel on AI and the Global Dialogue on AI Governance to promote sustainable development and bridge the digital divide. Explorations in this regard should be further deepened and continued.

Those countries with advanced AI technologies bear a greater responsibility and obligation to prevent the reckless development of superintelligence in the absence of rules.

China advocates building a community with a shared future for humanity and a community with a shared future in cyberspace. Emphasizing the coordination of development and safety, the country has put forward the Global AI Governance Initiative. These initiatives deserve to be promoted and implemented globally in the field of AI as well.

It is better to slow down a bit to lay a solid foundation for safety, than to seek quick success and instant benefits that might lead human society into an irreversible and perilous situation.
