A shift toward centralization and concentration could snuff out technology’s productive potential

In the mid-20th century, the Soviet Union’s technological successes, notably launching Sputnik and sending Yuri Gagarin into space, convinced many observers that centrally planned economies might outperform market-driven ones. Prominent economists such as Paul Samuelson predicted that the USSR would soon overtake the United States economically, while Oskar Lange, a Polish economist and socialist, argued that emerging computer technologies could effectively replace the outdated market mechanism.

Yet, paradoxically, the USSR collapsed just as the computer revolution took off. Despite considerable investments—including Nikita Khrushchev’s attempt to create a Soviet counterpart to Silicon Valley on the edge of Moscow in Zelenograd—the USSR failed to harness the promise of computing technology. The obstacle was not a shortage of scientific talent, but institutions inhospitable to exploration. Whereas Silicon Valley thrived on decentralized experimentation, with inventors job-hopping among start-ups running multiple concurrent experiments, innovation in Zelenograd was centrally controlled and orchestrated entirely by Moscow government officials.

As Friedrich Hayek argued, the main difficulty with central planning wasn’t processing data but gathering essential local knowledge. Soviet planners could manage standardized operations but faltered amid technological uncertainty, lacking benchmarks to monitor factory performance and punish slackers. Despite early rapid growth, the USSR stagnated, unable to adapt to new technological frontiers, and eventually collapsed.

These insights are still relevant, particularly as new forms of artificial intelligence again raise the question of whether centralized authority, such as China’s AI-driven surveillance state, or corporate concentration—as among Silicon Valley’s big tech companies—can leverage new technologies effectively to manage the economy and society.

Frontier innovation

Conventional theories of wealth and poverty that emphasize factors like geography, culture, or institutions struggle to explain dramatic economic reversals. Geographic conditions, which remained unchanged, cannot account for the USSR’s shift from rapid growth to collapse. Cultural factors also evolve too slowly to explain swift economic booms and subsequent busts. While institutions such as laws and regulations can change more abruptly, institutional theories based on universal conditions are similarly incomplete; for instance, both the USSR and China experienced decades of rapid growth despite lacking secure private property rights. Ultimately, understanding economic progress requires examining how institutions and culture interact dynamically with technological changes.

Recognizing that economic performance is tied to this shifting interaction reframes the familiar policy debate over technological progress. One side advocates decentralized innovation driven by small firms in lightly regulated markets; the other promotes state-led industrial policy executed by powerful bureaucracies. However, both approaches are optimal only under certain conditions: Centralized bureaucracies effectively exploit accessible technologies and drive catch-up growth, whereas decentralized systems excel at pioneering innovations at the technological frontier. Over time, economic governance must adapt or risk stagnation.

Japan as Number One

Even when the Soviet Union dissolved in 1991, America’s relief was tempered by a new anxiety: Many scholars and journalists believed that Japan would soon eclipse the US. Ezra Vogel’s 1979 best-seller, Japan as Number One, had already warned of Tokyo’s growing edge in computers and semiconductors, a gain seemingly as dramatic as its earlier rise in automobiles. Yet the computer revolution that followed told a different story. From the early 1990s, US software-driven productivity soared, while Japanese firms clung stubbornly to hardware.

Japan’s ascent had rested on a tightly coordinated production system. Because Japanese firms could take equity stakes in their suppliers—something US antitrust law discouraged—they wove dense knowledge networks reinforced by just-in-time logistics, computer-aided design, and reprogrammable machine tools. The result was striking efficiency: Japanese autoworkers were 17 percent more productive than their US counterparts by 1980, leading Ford and GM to report steep losses.

The Japanese edge, however, came less from inventing new products than from refining Western ones. Color televisions, the Walkman, and VCRs became global hits only after Japanese engineers reengineered them for cost and durability. In a seminal study, economist Edwin Mansfield found that roughly two-thirds of Japanese R&D targeted process improvements—the mirror image of the product-heavy US effort—allowing faster translation of laboratory advances into cheap, marketable goods.

But those very strengths became limitations. Eminent observers such as Alfred Chandler Jr. had expected the computer age to reward hardware perfection and streamlined production—factors that favored Japan—but it was the dynamism of US start-ups like Apple and Microsoft that proved decisive. US antitrust policy, rooted in the 1890 Sherman Antitrust Act, pried open markets by forcing IBM to unbundle its hardware and software and by breaking up AT&T just before the commercial internet took off. Without a single gatekeeper, entrepreneurs could innovate freely, and the web expanded unimpeded.

Japan’s looser competition rules, by contrast, fostered cartelization and entrenched keiretsu conglomerates. The same coordination that once sped incremental upgrades now slowed the leap to software and internet-based business models, crowding out new entrants. Japan’s technological momentum stalled. Even within the US, regions organized around fierce competition, such as Silicon Valley, outperformed more hierarchical, vertically integrated areas like New England’s Route 128 tech cluster.

End of coordinated capitalism

Japan is not an isolated example. After World War II, Western Europe’s economy grew quickly by adopting US methods of mass production across a broad range of industries. This strategy worked well for several decades, but by the 1970s, Europe had exhausted the backlog of American technology. To maintain growth, it would need to shift toward a model based on innovation rather than mere catch-up with existing technologies.

This shift proved challenging. Europe’s economic institutions were shaped by a long history of industrial catch-up, established in the late 19th century to absorb British technology and reinforced during the postwar era when Europe was closing the gap with the US. These institutions were designed to support stable and predictable economic growth through careful planning, coordinated industries, and close cooperation between businesses, banks, and governments. Such coordinated capitalism was effective when the task was clear—catching up with established industrial practices—but became an obstacle when faced with the uncertainty and disruption caused by the computer revolution and new information technologies.

In France, the government’s system of indicative planning, which set economic targets to coordinate investments, worked well with incremental and predictable technological progress. But with rapid technological change, planners were overwhelmed, unable to forecast accurately or direct resources effectively.

Similarly, Italy’s state-owned enterprises, crucial during the postwar boom, proved rigid and unresponsive to a new age of technological turbulence. In Spain and Portugal, the heavy influence of the state, combined with entrenched interests, severely limited economic flexibility, hampering innovation and adaptation. Consequently, these Southern European nations experienced prolonged economic stagnation during the computer revolution, often referred to as “two lost decades.”

From Hayek to Moravec

The lesson is clear: Economic miracles stall when the institutions that enabled past successes become misaligned with new challenges. The Soviet Union and much of Europe stumbled when rigid mass production models failed to adapt to the unpredictability of the computer age, while Japan faltered as the epicenter of innovation shifted from hardware to software. Today, China’s growth is increasingly constrained by tightened party control, and the US faces a similar peril whenever monopoly power remains unchecked. The danger that centralization and concentration will snuff out innovation now hangs over AI. Because AI performance has historically improved mainly by scaling up computing power and data availability, many observers concluded that AI is a contest best left to a handful of “national champions.” That belief is seductive—and mistaken.

As in the computer revolution, true breakthroughs come from exploring the unknown, not from perfecting what is already formalized. Large language models (LLMs)—AI systems trained to generate and understand human language—grew 10,000-fold in scale between 2019 and 2024 yet still scored only about 5 percent on the ARC reasoning benchmark, a test that assesses advanced problem-solving abilities. Meanwhile, leaner approaches such as program search (which generates explicit programs to solve tasks) have topped 20 percent, and newer in-context learning methods (where models learn from examples without retraining) are racing ahead.

Nor will AI soon make human exploration obsolete. Hans Moravec’s old observation still holds: What is effortless for humans (such as walking a trail) remains hard for machines, and vice versa. Language models trained on the entire internet still lack the sensorimotor experience of any four-year-old. Until we can encode that embodied knowledge, centralized AI systems will trail the decentralized experimentation billions of humans perform daily.

Ingenuity flourishes precisely where precedent is thin. Inventors, scientists, and entrepreneurs thrive on turning the unknown into opportunity. By contrast, large language models default to statistical consensus. Imagine an LLM trained in 1633—it would steadfastly uphold Earth as the universe’s center; given 19th century literature, it would confidently deny that humans could ever fly, echoing the long list of failed trials that preceded the Wright brothers’ success. Even Google DeepMind’s Demis Hassabis admits reaching true artificial general intelligence may need “several more innovations.”

Control and competition

Those innovations are unlikely to emerge from centralized scale alone; they will come, as before, from widening the arena of experimentation and lowering the barriers to entry. However, in the age of AI, both China and the US are moving in the opposite direction, increasing central control and reducing competitive dynamism.

China’s most dynamic sectors remain driven by private or foreign-backed firms, while state-owned enterprises lag. Yet Beijing is recentralizing authority: Licenses, credit, and contracts now favor politically reliable conglomerates, antitrust law is wielded selectively, and anti-corruption campaigns make loyalty a prerequisite for survival. Once-vital provincial experimentation has withered as officials chase crude indicators such as patent counts, flooding registries with low-value filings. Patronage is eclipsing transparent rules, and loyalty is displacing competence, eroding the state’s capacity to nurture frontier-level innovation and pushing the economy toward slower, less innovation-driven growth.

To be sure, China still benefits from a substantial talent pool and a government deeply committed to technological advancement. But as in Western countries, it is firms lacking strong political connections—such as the AI start-up DeepSeek—that prove most innovative. Although authorities might permit these companies to operate with relative autonomy as long as their activities align with national goals, the absence of robust legal protections leaves them vulnerable to shifts in political priorities. Consequently, firms must invest resources in building political alliances, diverting attention and capital away from innovation. And the government’s control over critical information technologies frequently tempts authorities to strengthen their political dominance over society, potentially stifling grassroots innovation.

The US shows the same symptoms in different guise. Since the computer era of the 1990s, its industries have grown markedly more concentrated, undercutting the fluid competition that once characterized Silicon Valley. A web of noncompete clauses now hampers labor mobility, curbs the flow of tacit knowledge, and discourages scientists and engineers from founding rival firms. Because start-ups are central to translating laboratory insights into commercial products, this drag on talent circulation weakens the very mechanism—creative destruction—that reallocates market share toward fresh ideas. Economists Germán Gutiérrez and Thomas Philippon show that the trend is driven less by unavoidable scale economies than by incumbent lobbying that hard-codes regulatory advantages, from patent extensions to sector-specific licensing hurdles.

This pattern also threatens AI. Beneath today’s veneer of intense competition, Microsoft’s deep alliance with OpenAI already controls about 70 percent of the commercial LLM market, while Nvidia provides about 92 percent of the specialized graphics processing units (GPUs) used to train these models. Together with Alphabet, Amazon, and Meta, these incumbents have also been quietly buying stakes in promising AI start-ups. Sustaining a policy regime that safeguards the competitive arena itself, rather than the fortunes of particular firms, is essential if the next generation of transformative innovators is to deliver the promised boost to productivity. That’s as true for the AI age as it was for the computer era.

Podcast

As tech innovation, particularly in the field of AI, is increasingly focused on a few key players, the industries benefiting from these tools have also become more concentrated. In this podcast, Carl Benedikt Frey says that concentration of AI-using industries will push the direction of technological change further towards automation rather than product innovation.

CARL BENEDIKT FREY is the Dieter Schwarz Associate Professor of AI and Work at Oxford University. This article draws on his most recent book, How Progress Ends: Technology, Innovation, and the Fate of Nations.

Opinions expressed in articles and other materials are those of the authors; they do not necessarily reflect IMF policy.