Trial and error reduction
AI can help governments reduce the cost of policy mistakes, but it should support rather than replace accountable decision-making
Governments worldwide are entering a period in which costly policy mistakes are becoming increasingly difficult to absorb. Slower economic growth, aging populations and rapid technological change are narrowing fiscal space and magnifying the consequences of governance misjudgments. In this environment, the central challenge for policymakers is no longer whether reform is necessary, but how it can be pursued with lower risk and fewer irreversible errors.
According to the OECD, public spending in advanced economies already averages more than 40 percent of GDP, resulting in highly rigid fiscal structures that make both policy reversals and corrective adjustments significantly more costly. At the same time, global aging is expected to reduce annual GDP growth by between 0.5 and 1 percentage point in the coming decades, while slower productivity growth further limits the capacity to absorb policy misallocation errors. These pressures are compounded by the structure of today’s interconnected economies. Digital platforms, global supply chains and financial integration allow policy spillovers to travel faster and further, and the World Bank has repeatedly shown that elevated policy uncertainty is associated with weaker investment and employment outcomes, particularly in open economies. Under such conditions, trial-and-error governance becomes not only costly, but systemically risky.
It is against this backdrop that artificial intelligence is gaining attention as a practical governance tool. Beyond administrative efficiency, AI is being integrated into analytical and predictive functions that inform decision-making and service design. Global research by Ernst & Young and Oxford Economics finds that 96 percent of public-sector organizations recognize the need to accelerate AI adoption. Meanwhile, OECD surveys indicate that more than 60 percent of member governments already use AI tools in policy analysis, forecasting and public service design, reflecting a growing shift toward data-supported decision-making.
What distinguishes AI-enabled governance from traditional policymaking is its point of entry into the policy process. Rather than being used only after policies are launched, AI is increasingly applied at the front end of decision-making. Before a policy is finalized, simulation tools are being used to test how different options might affect households, companies, regions and public finances under a range of conditions. Instead of relying solely on historical experience or limited pilot programs, policymakers explore multiple scenarios in a virtual environment.
This approach allows governments to ask more concrete questions in advance. How would different income groups respond to a change in taxation or subsidies? How might companies adjust investment or employment under new regulatory requirements? Which regions or sectors would bear the greatest adjustment costs? By modeling these responses before implementation, potential risks and unintended effects can be identified earlier, when policy design remains flexible.
For example, in China’s urban governance, Hangzhou’s City Brain initiative uses AI-driven traffic simulation and real-time data analytics to model and optimize urban mobility before implementing control strategies. By refining signal timing and resource allocation, the system has raised average travel speed by around 15 percent and significantly reduced emergency response times. International institutions such as the World Bank similarly use AI-enabled microsimulation models to assess the distributional and fiscal impacts of alternative subsidy designs before implementation, reducing the social costs of policy experimentation.
AI-supported simulation shifts part of the learning that governments have traditionally done through real-world trial and error to the pre-decision stage. It enables policymakers to compare alternative designs, stress-test assumptions and identify trade-offs without exposing society to the full costs of real-world experimentation.
Moreover, simulations can reveal that policies designed with positive intentions may nonetheless produce adverse outcomes, such as inadvertently raising compliance costs in ways that disadvantage small companies and weaken competition and innovation. Crucially, AI-supported experimentation does not merely help policymakers choose among predefined options; it can also surface latent dynamics and risk channels that may not be apparent through conventional reasoning or past experience.
Clear boundaries, however, remain essential. AI can offer projections and structured analysis, but it cannot supply legitimacy or resolve value conflicts. Even the most accurate model can only describe what is likely to happen under certain assumptions; it cannot determine what should be done. Decisions involving fairness, social priorities and ethical trade-offs remain the responsibility of human judgment.
This principle is widely reflected in international governance debates. The European Union’s Artificial Intelligence Act, together with frameworks developed by the United Nations and its agencies, places human oversight at the center of AI governance. These initiatives reflect a shared understanding that AI should support accountable decision-making rather than replace it. Data can inform choices, but public deliberation and institutional checks ultimately determine whether decisions are accepted.
Lower technical costs of experimentation do not eliminate governance risk. A central vulnerability of AI-supported policymaking lies in its limited transparency and weak explainability. Many AI systems are built on highly complex models that are difficult for non-specialists to interpret, making it challenging to clearly trace how specific policy recommendations are produced. When decision-support tools function as black boxes, policymakers may struggle to justify policy outcomes, while errors or distortions can remain concealed until they manifest as tangible real-world consequences.
Algorithmic bias is another major challenge. When AI systems are trained on historical data and deployed without sufficient oversight, they can replicate and institutionalize existing social inequalities. During the COVID-19 pandemic, the United Kingdom’s exam regulator Ofqual introduced an algorithm to standardize students’ grades after nationwide exams were canceled. The model combined teachers’ assessments with schools’ historical performance. As a consequence, students from schools with weaker historical outcomes — many of which are located in lower-income and disadvantaged communities — were systematically downgraded, even when individual performance indicators were strong. The episode illustrates how, without adequate oversight, AI systems can embed structural inequality into formal policy decisions, thereby weakening public trust in governance.
Ultimately, effective governance depends not only on technical capability, but on public confidence and institutional credibility. AI can reduce the financial and administrative costs of policy experimentation, but it cannot substitute for transparency, accountability or social consensus. World Bank research consistently shows that countries combining digital tools with institutional reform achieve stronger policy outcomes than those relying on technology in isolation.
As governments increasingly turn to AI to navigate uncertainty, governance capacity must evolve in parallel. Legal frameworks need to be strengthened, algorithmic use must be made transparent and channels for public participation should be expanded. AI can help societies experiment more safely, but whether it leads to better governance will depend not on computing power, but on the principles that guide its use.
Zhou Yujia is an assistant researcher at the Department of Computer Science and Technology at Tsinghua University. Liu Yiqun is a professor at the Department of Computer Science and Technology, and the director of the Office for Research at Tsinghua University.
The authors contributed this article to China Watch, a think tank powered by China Daily. The views do not necessarily reflect those of China Daily.
Contact the editor at editor@chinawatch.cn.