The Progress Paradox

Credits

Matt Prewitt is the president of RadicalxChange Foundation. He is a former antitrust lawyer, writer and technology strategist whose writing focuses on the politics of technology.

In March 2024, Lina Khan took the stage before an audience of foreign policy experts to argue that the United States must resist growing calls to protect “national champions” in the technology sector. One of her arguments, familiar to all Americans, was that innovation is the fruit of market competition. Or, as she put it, “history and experience show us that lumbering monopolies mired in red tape and bureaucratic inertia cannot deliver the breakthrough innovations and technological advancement that hungry startups tend to create.” For example, she said, antitrust actions in prior years against IBM and AT&T paved the way for developments like personal computing and the internet. By contrast, government efforts to protect national champions like Boeing from competition have resulted in stagnant growth and cautionary tales.

Khan is quite right that national champions are bad, if for no other reason than because they produce brittle public dependencies on unaccountable private power. But the supposed natural alliance between market competition and innovation is more American mythology than nearly everyone across the political spectrum — right, left and center — would like to acknowledge. In truth, those nimble startups are not competing in anything like an ideal market.

Markets have always required some form of protectionist intervention — like intellectual property law — to help foster innovation. In recent years, startups have innovated because of a rich symbiosis with tech giants and their monopoly-seeking investors. Startups are indeed hungry, but their hunger is not to serve consumer needs or the national interest; it is to join the happy ranks of the monopolists. Such is the nature of technological innovation: competitive markets, unless “managed,” do not inspire it.

Today, this may sound bizarre, heterodox and jarring. But it was once fairly mainstream opinion. In the middle part of the 20th century, many of America’s most celebrated economic minds taught that competitive markets cause technological progress to stagnate. During the neoliberal era that followed, from the 1980s to the 2010s, this idea was largely forgotten and pushed to the margins of politics and academia. But it never lost its kernel of truth.

Old Wisdom

Economist John Kenneth Galbraith taught in the 1950s that under highly competitive conditions, private firms invest little or nothing in research and development, because competition pushes their profit margins too low to afford it. In the 1960s, the economist Kenneth Arrow further explained that competitive markets provide little or no incentive to invest in information goods like science and technology, because when markets function efficiently, the fruits of research are instantly copied and widely disseminated. These arguments also apply to artistic breakthroughs, philosophical ideas, journalism and much more. Markets are good at serving clear consumer demands, but advancing knowledge just isn’t what they do.

Midcentury American policymakers largely accepted this fact, thereby recognizing that the much-ballyhooed market economy did not foreordain the United States’ technological superiority over the Soviet Union. On the contrary, no matter how efficient the American corporate sector was at producing cheap toothpaste, the Soviet Union’s ability to decree massive, focused, ongoing state investments in science meant it could always pull ahead technologically (and thus also economically and militarily). How else to account for Sputnik? To keep pace, the U.S. government had to do the same, investing heavily in nonmarket institutions like the National Institutes of Health, the National Science Foundation and the Department of Defense.

Yet the deep lessons of Galbraith and Arrow were never fully absorbed into the market-enthused mainstream of American political thought. And as California’s technology scene morphed from the hierarchical defense-contractor culture of the 1960s to the libertarian Silicon Valley culture of the 1980s and ’90s, it was almost entirely forgotten. A new faith emerged, one that preached that markets would generate progress of their own accord, and conversely, that technological breakthroughs would create new markets, in a virtuous cycle.

This doctrine was ideologically convenient for almost all politicians. The center-right could lean on it to pitch bustling markets as a path to technological progress. And the center-left could use it to pitch state-driven technological progress as the key to a thriving market society. It fit the optimistic, end-of-history spirit of the age like a glove. But the insights of Galbraith and Arrow remained true. Information goods are the wellspring of technological (as well as economic and cultural) “progress,” but only limits on market efficiency and abridgments of perfect competition create incentives to produce them.

“The supposed natural alliance between market competition and innovation is more American mythology than nearly everyone across the political spectrum … would like to acknowledge.”

Examples of such abridgments of market competition include direct state investments and intellectual property rights, like patents, which amount to temporary monopolies on information goods. But crucially, another important abridgment of markets is simply regular old monopoly — cornering resources, swallowing up competitors and otherwise creating dependencies that can be turned into pricing power. Monopoly power places certain private actors in a unique position to profit from information goods, including technical progress. Consequently, it also induces them to invest in those goods.

New Folly

Few Silicon Valley investors would have been able to articulate this in 1990, but by 2010, the sharpest had grokked it. Their biggest wins came when they invested money not in the best technologies or those addressing clear and present market demands, but in those most conducive to achieving and retaining monopoly power: sticky software platforms, social media networks, and infrastructural chokepoints. Any contradiction between this strategy and the Valley’s pervasive pro-market ideology was abstract and easily ignored. After all, the neoliberal period was the era of sugar-free soda and dairy-free butter: Everyone was glad to believe they could have their cake and eat it too.

In a way, turning a blind eye to the discomfiting economics of technology was the “glue” in the neoliberal consensus. Socially permissive liberals were happy to imagine that rapid social change was not being bought at the expense of creating unaccountable private power concentrations, while the Chamber of Commerce was happy to imagine that all this technological progress was bubbling forth from free and fair markets.

Justice Antonin Scalia surely induced some cognitive dissonance for neoliberals when, in the 2004 decision Verizon v. Trinko, he drove antitrust doctrine into the jaws of this contradiction. Writing for the court, he announced: “The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system. The opportunity to charge monopoly prices — at least for a short period — is what attracts ‘business acumen’ in the first place; it induces risk-taking that produces innovation and economic growth.”

Hardly shying from the paradox, the Supreme Court thus simply expanded the permissibility of monopoly, with all its associated incentives for technological development, over competitive markets that might subordinate business activity to the needs of society and its consumers. That case, along with many similar developments, gradually transformed the way American law conceived of private power and, step by step, rolled out the red carpet for it.

It is still politically awkward to acknowledge that the United States has remained a technological leader into the 21st century largely by tolerating monopolies. But it is hardly a secret. In public writings such as 2014’s “Competition is for Losers,” Peter Thiel states with real candor: “Americans mythologize competition and credit it with saving us from socialist bread lines. Actually, capitalism and competition are opposites. Capitalism is premised on the accumulation of capital, but under perfect competition, all profits are competed away. The lesson for entrepreneurs is clear: if you want to create and capture lasting value, don’t build an undifferentiated commodity business.”

Technology entrepreneurs seeking venture funding have eagerly followed Thiel’s cue, building technologies aimed at addicting, corralling and manipulating users, while trying to tamp down political and intellectual narratives that could threaten their monopolies. Driving down bread prices — that is to say, determining how best to efficiently provide consumers with the basic things they need to thrive — is no longer an “interesting problem” to American capitalists, even though it is quite far from being “solved.” And Americans feel the results: growing dependencies on cheap (but often harmful) technology products, paired with crushingly expensive food, medical care, housing, utilities and transportation. Cheap “innovations” and unaffordable, poor-quality necessities.

Plenty of readers will still be skeptical. To many, the cutthroat Silicon Valley startup ecosystem proves on its face that competition, not monopoly, produces innovative technology. But look closer. Silicon Valley startups are routinely valued at many multiples more than their revenue or profits straightforwardly justify. What explains that? Investors are betting that those startups will eventually either become monopolies or merge into existing ones through acquisitions or other forms of financial consolidation. In other words, technology startups are “competing” not to serve consumers’ needs, but to become — or join — monopolies. The aim for many of today’s Silicon Valley startups is a high-profile acquisition and a lucrative “exit.”

“It is still politically awkward to acknowledge that the United States has remained a technological leader into the 21st century largely by tolerating monopolies.”

After uniting with monopoly power, erstwhile startups gain the ability not just to charge non-competitive prices, but to exert monopoly power in subtler ways. The social scientist Kean Birch and the economist D.T. Cochrane have usefully classified these unconventional forms of digital monopoly power as “enclave rents” (power from controlling ecosystems of devices), “expected monopoly rents” (capitalizing expected future monopoly power in share values, thus allowing owners to accelerate growth and acquisition), “engagement rents” (behavioral insights into deeply dependent and/or surveilled users), and “reflexivity rents” (using market dominance to shape future policy and enforcement in their own favor).

Without an eye toward these and similar forms of monopoly power, one cannot fully understand the lofty valuations of startups. Thus, when they lose their trajectory toward uniting with monopoly, they also lose their access to capital — and promptly stop “innovating.”

Past Innovation ‘Champions’

The early 20th century is replete with examples of the close, uneasy kinship between monopoly and innovation. Take telecommunications. When Bell’s early patents expired in 1894, many new operators entered the market. Prices were driven down, and telecommunications became accessible. Every drop of possible use was squeezed out of the existing infrastructure, to the immediate advantage of consumers. In other words, the infrastructure was exploited efficiently. At the same time, technology stagnated. Operators did not invest in complex new long-distance infrastructure or switching technology, because to maximize the usefulness of such innovations, and also satisfy pro-competition local regulators, they would have had to provide connectivity to rival operators without those rivals having incurred any of the upfront costs. Gradually, it became clear that although there were fairly obvious ways to improve telecommunications technology, no one was doing it.

The landscape shifted in the financial crisis of 1907 when AT&T, financed by J.P. Morgan, bought up many small telecommunications operators. This gave it an effective monopoly over long-distance lines, which it then swiftly improved, researching and developing many complementary technological upgrades along the way. All this put it in a position to charge consumers exorbitant prices, which it also did.

Threatened by the U.S. Attorney General with antitrust action, in 1913 AT&T agreed to the so-called “Kingsbury Commitment,” promising to permit smaller operators to buy the use of its long-distance lines. In the bargain, it secured what amounted to legal approval of its monopoly — and promptly accelerated its technological pathbreaking. In 1914, AT&T built the first coast-to-coast line.

With AT&T now in a comfortably exclusive position to profit from advanced telecommunications, it spent subsequent years investing in advanced automatic switching research. In 1925, AT&T founded Bell Labs, an incubator enabling top research technologists to work while insulated from market pressures, which eventually gave birth to the personal computing revolution. This overall picture is paradoxical, complex and discomfiting. It is simultaneously reasonable to doubt that AT&T’s monopoly served the public interest, and also difficult to dispute that it accelerated investment in knowledge.

The kinship between monopoly and innovation is structural and timeless. To put it in the simplest possible terms: information is valuable only insofar as it can be controlled, and it is hard to control. As the writer Stewart Brand said, it “wants to be free.” This means that to transform information into profit, you need something like hard power. You need to have exclusive dominion over some part of the system. Being “just-another-vendor-in-the-marketplace” does not cut it.

When the Wilson administration effectively blessed AT&T’s dominance, it echoed an older way of thinking about corporations. In the early 19th century — hardly ancient history in 1913 — corporations could be created only by special legislative acts, and only for clearly defined, time- or space-limited projects, such as building bridges or exploiting colonies. Even as recently as the late 19th century, corporations still had to enumerate their purposes for the state’s approval: They couldn’t simply be used as general vehicles for any profitable opportunity. Far from a minor bureaucratic detail, this older and more prescriptive understanding of a corporation’s telos reflected a recognition that chartering corporations constitutes an essentially hazardous delegation of a state’s responsibility to order society.

Wilson wanted AT&T to be dominant because he wanted it to develop telecommunications for the good of society. Inherent in this conception of the company as a national champion was a certain assumption of its subordination to the government’s authority to uphold the common good. But over the next century, this assumption was lost.

“The neoliberal period was the era of sugar-free soda and dairy-free butter: Everyone was glad to believe they could have their cake and eat it too.”

By the 1990s, private ownership of corporations that lacked any telos beyond enriching shareholders was considered the norm. Thus, when Boeing was permitted in 1997, with scant conditions, to become the only domestic producer of large commercial aircraft, it simply exploited and squandered this privilege (however limited it might have been in light of continued competition from Airbus). Instead of using the luxury of temporarily reduced competition to innovate in the national interest, it extracted profits, slashed quality, created foreign dependencies and nonetheless fell behind Airbus in the commercial market.

Similarly, in the early 2010s, when the Federal Trade Commission approved Facebook’s acquisitions of Instagram and WhatsApp, it imposed minimal conditions. The Obama administration blithely fêted Silicon Valley’s ascendance, blessing its dominance without imposing any meaningful public responsibility. Big Tech and its investors interpreted this not as a grant of responsibility for civic infrastructure, but as a blank check to pillage the social fabric.

The Irony Of Open Source

The mixed results of the open-source movement serve as another case in point. Open source has been sold to the public as a means of dissolving monopolies and accelerating technological progress. Its effects on power are more complex than that. By diffusing certain bodies of technical knowledge, open source prevents those bodies of knowledge from forming the basis of a monopoly. Potentially, it also enables more people to work on them, so that new techniques can develop faster. But who then pays for such efforts to advance knowledge? In the long run, the answer is either (a) nobody; or (b) somebody with an upstream monopoly that benefits from the diffusion of the open-sourced knowledge.

We saw shades of this in the brief DeepSeek panic. Nervous AI investors thought, for a day or two, that DeepSeek’s powerful open-source model meant that the AI giants had no “moat.” Their panic subsided when they remembered that big tech companies do not necessarily need to own a moat around AI models so long as they control enough other moats, like access to uniquely large amounts of computing power, energy, financial capital, political capital and consumer attention. This means Big Tech remains in pole position to capture a huge share of the value that AI unlocks, even if Silicon Valley’s engineers ultimately prove incapable of keeping its frontier models dramatically ahead of those being built in Shanghai, Singapore, Lagos and St. Petersburg. For this reason, global markets continue to use the big tech companies as easy conduits for investing trillions in AI.

The Vexing Politics Of Technology

All this is achingly annoying to the ideology of the old American center-left, because it explodes the Clinton- and Obama-era narratives that private innovation serves the public interest. In fact, when innovation is funded by investors betting that they can exploit it via monopoly power, it’s unlikely to leave society better off. To be sure, an (entirely hypothetical) innovation economy owned and tightly managed by trustworthy public authorities might serve the public interest handsomely — but this was almost the opposite of recent Democratic administrations’ innovation agendas. And one can see why: To advocate for such a model, Democrats would need to abandon their accommodation of private markets and instead argue for state control of technology infrastructure to a degree that has been at odds with American culture since the late 1970s.

Nothing more is required to grasp why the Democratic establishment’s technology policy now feels pressure from a socialist wing advocating for public ownership of infrastructure; an anti-monopoly movement that rejects the accommodation of Big Tech, venture capital and private equity; and an interventionist Trump administration.

But the ideological inconvenience is no less severe for Republicans from the libertarian-leaning center-right. Were they to acknowledge that innovation does not truly arise from free market competition, logic would compel them to concede either (a) that their true goal is not really a culture of fair market competition, but of raw contestation for power; or (b) that they do not, in fact, value innovation unqualifiedly. Lo and behold, precisely this schism has emerged in the Trump-era American right, with the Silicon Valley “tech right” representing the former, and populists and cultural conservatives representing the latter.

Where to from here? Both sides of the battered American center must first face their mistakes. This will be painful, not only because their errors have resulted in profound and long-term mis-governance, but also because their blind spots are deeply entangled with old, hard-to-kick ideological habits. The center-left’s sunny techno-utopianism traces its roots back to the rationalism of the French Revolution, via Karl Marx and early 20th-century progressives. The center-right’s fervent market fundamentalism is equally a relic of bygone eras, reflecting the thought of Friedrich Hayek and Milton Friedman — idea-warriors who pitched competitive markets as a cure-all largely to one-up the utopian promises of their techno-optimistic progressive foes. Thus, today, center-right and center-left thinking both feel like artifacts from yesterday’s ideological trenches. A new form of centrism that wants to speak to 2026 would need to thoroughly clear the decks.

“Both sides of the battered American center must first face their mistakes.”

This reckoning dovetails with a complex broader reappraisal of China and the West’s relationship with it. In the 1990s, the West misjudged its China policy by insisting on what was then a politically convenient belief: that democratization is the natural result of material prosperity. We risk a disturbingly similar mistake if we now insist upon the belief that breakthrough technological progress is the natural result of competitive markets (or, just as tenuously, democratic society). Nothing prevents China from outracing Silicon Valley and the West in AI, infrastructure and more.

Hard Choices

The point is not that the West should copy China, but that it cannot afford to duck hard choices. Shall we entrust our shared destiny to a hyper-empowered private sector accountable only to investors? Or, perhaps, shall we build societies in which conventional technological progress is not paramount? Or shall we ask our governments to manage technological progress in harmony with some robust conception of the common good? A new political center needs to face this choice boldly, not sweep it under the rug.

If America continues to prioritize private-sector-led technology development, it will come at a foreseeable, devastating cost to the social fabric — a savage new chapter in the book of modern catastrophes. If, on the other hand, we prioritize citizens’ social and economic well-being — as Europe has since World War II — we risk sacrificing technological dynamism. This should not be dismissed too hastily, but it is viable only if accompanied by strategies to avoid geo-strategic eclipse by other means (e.g., post-war Europe flourished through decades of technological non-leadership, but with the help of an American security shield and a huge reserve of accumulated wealth and prestige).

There is a third option: If we further empower the state to steer the progress and deployment of technology, we may avoid the worst outcomes. For example, we might just be able to install a meaningful, accountable sense of the common good at the head of the vast technical enterprise. We might be able to use the law nimbly, shielding the most precious elements of domestic life, religious life, education and culture from technology’s too-rapid disruptions.

However, this balancing act raises other difficulties. Relevant state authorities would need rigorous and enforced ethics measures to ensure that private interests are weeded out. They would also need to be guided by a coherent and unabashedly moral-philosophical vision, rather than the now-prevailing mishmash of contradictory ideas about technology’s proper role in society. Achieving this coherence would likely entail softening old commitments to certain liberal conceptions of the government’s role in cultivating the good life.

The job of centrists is to accept this troubling trilemma and craft a reasonable way forward. For example, technological infrastructure can and should, in many contexts, be owned and developed by public-interested actors. There is nothing “leftist” about this, because it is conducive not just to individuals’ economic well-being, but also to tradition and social stability. Marx, after all, wanted technology to upend prevailing social relations, and thus might even have celebrated Silicon Valley’s wild derailments of old norms and institutions.

In 2025, public ownership of infrastructure carries almost the opposite cultural meaning it had in Marx’s day, when it was a strictly efficiency-enhancing and “accelerationist” proposition. Today, it can just as easily moderate as accelerate technology’s economic and cultural disruptions. In fact, public ownership arguably represents a new face of moderate social conservatism.

Further, intellectual property — which is to say, the thicket of old rights-based compromises between markets and monopoly — can and should be profoundly recalibrated. Simply abandoning intellectual property rules, or allowing them to lose practical relevance, serves no one except existing monopolists. The pattern of needed intellectual property (IP) reforms is complex, but it should tilt more or less gently toward social stability, cultural virtues and widely distributed welfare.

For example, copyright’s fair use exception should be clarified to exclude AI training, thus sharpening copyright as a sword for human creators of new works against economic expropriation by AI, as Republican Sen. Josh Hawley of Missouri and Democratic Sen. Richard Blumenthal of Connecticut have proposed. But patents should be limited in scope and shortened in duration, to make it harder for companies and investors to corner markets and profit from old or relatively obvious inventions, like minor tweaks to well-understood drugs. And although legal protection of trade secrets and trademarks must continue, they should be counterbalanced with heightened responsibilities to the public.

The details of the needed reforms are technical, but the principles are not: When private actors use their IP to ends that society broadly condemns — advancing objectionable eugenic interventions, for example, or using trademarks to distort rather than clarify consumers’ perceptions of value — then society shouldn’t go out of its way to protect their investments. Such crucial conversations are long overdue.

“Public ownership arguably represents a new face of moderate social conservatism.”

Similarly, it is fashionable to dismiss data policy given the failures of the European Union’s strict privacy laws, known as the General Data Protection Regulation (GDPR). After all, GDPR didn’t meaningfully protect people from having their information exploited. But emerging legal models of data ownership still hold promise. For example, it has long been considered unworkable to create new rights in data that entitle individuals to share in data’s downstream profits and influence future use of the information. This is because individuals’ interests in data “overlap,” so that as consumers seeking convenience and good deals, we will always pull each other into a race to the bottom.

(For example, when I agree to Gmail’s terms of use, I also compromise your rights, because copies of your emails to me are in my Gmail inbox. When I agree to share my own genetic information with 23andMe, I also compromise the interests of my parents, siblings, and children. Digital services trying to provide powerful and seamless experiences, and individuals seeking such experiences, are incentivized to ignore these negative externalities. Consequently, in the name of convenience, consumers tend to undermine one another’s interests in a dynamic similar to the prisoner’s dilemma.)
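The race-to-the-bottom dynamic described above can be made concrete with a toy payoff model. This is only an illustrative sketch, and the numbers are hypothetical, chosen solely to exhibit the prisoner’s-dilemma structure: each user gains a private convenience benefit from sharing data, but sharing imposes a larger privacy cost on the other user.

```python
# Hypothetical payoffs: sharing data yields a convenience benefit to the
# sharer, but leaks information about the *other* user (the externality).
CONVENIENCE = 3   # my benefit if I share my data
EXTERNALITY = -4  # my cost when the other person shares (my info leaks too)

def payoff(my_choice: str, other_choice: str) -> int:
    """My total payoff, given my choice and the other user's choice."""
    score = 0
    if my_choice == "share":
        score += CONVENIENCE
    if other_choice == "share":
        score += EXTERNALITY
    return score

# Sharing strictly dominates withholding for each individual...
assert payoff("share", "withhold") > payoff("withhold", "withhold")
assert payoff("share", "share") > payoff("withhold", "share")

# ...yet when everyone shares, everyone ends up worse off than if
# everyone had withheld: the classic prisoner's-dilemma outcome.
assert payoff("share", "share") < payoff("withhold", "withhold")

print("equilibrium (both share):", payoff("share", "share"))
print("cooperative (both withhold):", payoff("withhold", "withhold"))
```

Under these assumed numbers, “share” is each individual’s dominant strategy, yet the all-share equilibrium leaves both users worse off than mutual restraint, which is why individually signed-away data rights tend to unravel.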

But what if data rights could only be exerted via large, carefully regulated associations, not signed away by individuals? This has not been tried, and it might work. Such a new class of rights could be a crucial new tool for restoring a modicum of power back to consumers and for bringing order to the digital economy.

Last but not least, a recalibrated antitrust doctrine could be centered as a unifying, coalition-building project. In a cruel irony, antitrust law currently operates to prevent the formation of needed coalitions among many actors who are now being crushed by our heavily monopolized and consolidated economy. For example, large consolidated businesses can negotiate lower prices with their suppliers, but if many separate small businesses act in “combination” to do the same, it is unlawful. Such perversions of the spirit of antitrust now pervade American society, so that the huge gains from economic consolidation are captured mostly by financiers who can buy up businesses and combine them. We should, cautiously and advisedly, find ways to direct these windfalls instead to smaller business owner-operators who, much more than financial owners, contribute “off the ledger” to the fabric of communities.

A guiding principle for the next generation of technology policies might be this: Entrepreneurship contributes to the public interest when it competes to solve ordinary people’s existing problems, but not when it competes to lock consumers into ecosystems, addict them to dubious novelties, augment unaccountable monopolies or disrupt values and traditions that enjoy broad support. Indeed, when investors and technologists who transform society for the worse are rewarded with indefinite ownership of the infrastructure upon which the transformed society depends, everyone else loses. In short, we should be open to the state, and to other non-market institutions like the arts, civil society and even religion, exerting more influence upon where technological innovation goes next.

By embracing the seemingly apolitical, private-monopoly-led model of technological advancement, American centrists struck a Faustian bargain with social and economic dissolution. The truth is that technology — like media and other forms of informational power — is inherently political. It is categorically different from other kinds of market activity. It develops under the auspices of states or monopolies, transforming the social and cultural contexts within which politics occurs. Centrists must come to grips with this if they ever want to find a path back to their traditional stabilizing role.