Maximizing the benefits of artificial intelligence and managing the risks will require innovative policies with global reach
Beginning in the 18th century, the Industrial Revolution ushered in a series of innovations that transformed society. We may be in the early stages of a new technological era—the age of generative artificial intelligence (AI)—that could unleash change on a similar scale.
History, of course, is filled with examples of technologies that left their mark, from the printing press and electricity to the internal combustion engine and the internet. Often, it took years—if not decades—to comprehend the impact of these advances. What makes generative AI unique is the speed with which it is spreading throughout society and the potential it has to upend economies—not to mention redefine what it means to be human. This is why the world needs to come together on a set of public policies to ensure AI is harnessed for the good of humanity.
The rapidly expanding body of research on AI suggests its effects could be dramatic. In one recent study, 453 college-educated professionals were assigned writing tasks, and half were given access to ChatGPT. The results? ChatGPT substantially raised productivity: the average time to complete the assignments fell by 40 percent, and the quality of output rose by 18 percent.
If such dynamics hold on a broad scale, the benefits could be vast. Indeed, firm-level studies show that AI could raise annual labor productivity growth by 2–3 percentage points on average; some point to gains of nearly 7 percentage points. Although it is difficult to gauge the aggregate effect from studies of this kind, such findings raise hopes of reversing the slowdown in global productivity growth that has persisted for more than a decade. A boost to productivity could raise incomes, improving the lives of people around the world.
But it is far from certain that the net impact of the technology will be positive. By its very nature, AI can be expected to shake up labor markets. In some situations, it could complement the work of humans, making them even more productive. In others, it could substitute for human work, rendering certain jobs obsolete. The question is how these two forces will balance out.
A new IMF working paper delved into this question. It found that the effects could vary both across and within countries, depending on the type of labor. Unlike previous technological disruptions, which largely affected low-skill occupations, AI is expected to have a big impact on high-skill positions. That explains why advanced economies like the US and UK, with their high shares of professionals and managers, face greater exposure: at least 60 percent of their employment is in high-exposure occupations.
On the other hand, high-skill occupations also stand to gain the most from AI’s complementary effects—think of a radiologist using the technology to improve her ability to analyze medical images. For these reasons, the overall impact in advanced economies could be more polarized, with a large share of workers affected but only a fraction likely to reap the maximum productivity benefits.
Meanwhile, in emerging markets such as India, where agriculture plays a dominant role, less than 30 percent of employment is exposed to AI. Brazil and South Africa are closer to 40 percent. In these countries, the immediate risk from AI may be reduced, but there may also be fewer opportunities for AI-driven productivity boosts.
Over time, labor-saving AI could threaten developing economies that rely heavily on labor-intensive sectors, especially in services. Think of call centers in India: tasks that have been offshored to emerging markets could be re-shored to advanced economies and replaced by AI. This could put developing economies’ traditional competitive advantage in the global market at risk and potentially make income convergence between them and advanced economies more difficult.
Redefining what it means to be human
Then there are, of course, the myriad ethical questions that AI raises.
What’s remarkable about the latest wave of generative AI technology is its ability to distill massive amounts of knowledge into a convincing set of messages. AI doesn’t just think and learn fast—it now speaks like us, too.
This has deeply disturbed scholars such as Yuval Noah Harari. Through its mastery of language, Harari argues, AI could form close relationships with people, using “fake intimacy” to influence our opinions and worldviews. That has the potential to destabilize societies. It may even undermine our basic understanding of human civilization, given that our cultural norms, from religion to nationhood, are based on accepted social narratives.
It's telling that even the pioneers of AI technology are wary of the existential risks it poses. Earlier this year, more than 350 AI industry leaders signed a statement calling for global priority to be placed on mitigating the risk of “extinction” from AI. In doing so, they put the risk on par with pandemics and nuclear war.
Already, AI is being used to complement judgments traditionally made by humans. The financial services industry, for example, has been quick to adopt this technology across a wide range of applications, using it to conduct risk assessments, underwrite credit, and recommend investments. But as another recent IMF paper shows, there are risks here. As we know, herd mentality in the financial sector can drive stability risks, and a financial system that relies on only a few AI models could put herd mentality on steroids. In addition, the lack of transparency behind this incredibly complex technology will make it difficult to analyze decisions when things go wrong.
Data privacy is another concern, as firms could unknowingly put confidential data into the public domain. And given the serious concerns about bias embedded in AI, relying on bots to determine who gets a loan could exacerbate inequality. Suffice it to say, without proper oversight, AI tools could actually increase risks to the financial system and undermine financial stability.
Public policy responses
Because AI operates across borders, we urgently need a coordinated global framework for developing it in a way that maximizes the enormous opportunities of this technology while minimizing the obvious harms to society. That will require sound, smart policies—balancing innovation and regulation—that help ensure AI is used for broad benefit.
Legislation proposed by the EU, which classifies AI by risk levels, is an encouraging step forward. But globally, we are not on the same page. The EU’s approach to AI differs from that of the US, which in turn differs from those of the UK and China. If countries, or blocs of countries, pursue their own regulatory approaches or technology standards for AI, it could slow the spread of the technology’s benefits while stoking dangerous rivalries among countries. The last thing we want is for AI to deepen fragmentation in an already divided world.
Fortunately, we do see progress. Through the G7’s Hiroshima AI process, the US executive order on AI, and the UK AI Safety Summit, countries have demonstrated a commitment to coordinated global action on AI, including developing and—where needed—adopting international standards.
Ultimately, we need to develop a set of global principles for the responsible use of AI that can help harmonize legislation and regulation at the local level.
In this sense, there is a parallel to cooperation on the shared global issue of climate change. The Paris Agreement, despite its limitations, established a shared framework for tackling climate change, something we could envision for AI too. Similarly, the Intergovernmental Panel on Climate Change—an expert group tracking and sharing knowledge about how to deal with climate change—could serve as a blueprint for such a group on AI, as others have suggested. I am also encouraged by the UN’s call for a high-level advisory body on AI as part of its Global Digital Compact, as this would be another step in the right direction.
Given the threat of widespread job losses, it is also critical for governments to develop nimble social safety nets to help those whose jobs are displaced and to reinvigorate labor market policies that help workers stay in the workforce. Taxation policies should also be carefully assessed to ensure tax systems don’t favor indiscriminate substitution of labor.
Making the right adjustments to the education system will be crucial. We need to prepare the next generation of workers to operate these new technologies and provide current employees with ongoing training opportunities. Demand for STEM [science, technology, engineering, and math] specialists will likely grow. However, the value of a liberal arts education—which teaches students to think about big questions facing humanity and do so by drawing on many disciplines—may also increase.
Beyond those adjustments, we need to place the education system at the frontier of AI development. Until 2014, most machine learning models came from academia, but industry has since taken over: in 2022, industry produced 32 significant machine learning models, compared with just three from academia. As building state-of-the-art AI systems increasingly requires vast amounts of data, computing power, and money, it would be a mistake not to fund AI research publicly, since such research can highlight the costs of AI to societies.
As policymakers wrestle with these challenges, international financial institutions (IFIs), including the IMF, can help in three important areas.
First, to develop the right policies, we must understand the broader effects of AI on our economies and societies. IFIs can help build that understanding by gathering knowledge at a global scale. The IMF is particularly well positioned to contribute through our surveillance activities. We are already doing our part by pulling together experts from across our organization to explore the challenges and opportunities that AI presents to the IMF and our members.
Second, IFIs can use their convening power to provide a forum to share successful policy responses. Sharing information about best practices can help to build international consensus, an important step toward harmonizing regulations.
Third, IFIs can bolster global cooperation on AI through our policy advice. To ensure all countries reap the benefits of AI, IFIs can promote the free flow of crucial resources—such as processors and data—and support the development of necessary human and digital infrastructure. It will be important for policymakers to carefully calibrate the use of public instruments; they should support technologies at an early stage of development without inducing fragmentation and restrictions across countries. Public investment in AI and related resources will continue to be necessary, but we must avoid lapsing into protectionism.
An AI future
Because of AI’s singular ability to mimic human thinking, we will need a new set of rules and policies to make sure it benefits society. And those rules will need to be global. The advent of AI shows that multilateral cooperation is more important than ever.
It's a challenge that will require us to break out of our own echo chambers and consider the broad interest of humanity. It may also be one of the most difficult public policy challenges we have ever faced.
If we are indeed on the brink of a transformative technological era akin to the Industrial Revolution, then we need to learn from the lessons of the past. Scientific and technological progress may be inevitable, but it need not be unintentional. Progress for the sake of progress isn’t enough: working together, we should ensure responsible progress toward a better life for more people.
Opinions expressed in articles and other materials are those of the authors; they do not necessarily reflect IMF policy.