The Fourth Revolution: The Present and Future of Artificial Intelligence
— A Roundtable on the AI Revolution
Ma Siwei(马四维)
【Editor’s note(编者按): On January 9, 2026, Michael Burry(迈克尔·伯里), Dwarkesh Patel(德瓦凯什·帕特尔), Patrick McKenzie(帕特里克·麦肯齐), and Jack Clark(杰克·克拉克)opened a shared Google Doc and began arguing under a blunt title: “The AI revolution is here. Will the economy survive the transition?” The subject was artificial intelligence and the human future. Burry is the investor who foresaw the 2008 crash; while everyone else was piling into subprime mortgage assets, he had already seen the collapse coming. Now he is watching trillions flow into AI infrastructure, and he is skeptical. Clark is a co-founder of Anthropic, a leading AI lab racing to “build the future.” On his own show, Patel has interviewed marquee figures from Mark Zuckerberg to Tyler Cowen, circling the same question: where is all this heading? With McKenzie moderating, the four converged on one core question: is artificial intelligence a genuine opportunity of the age, or are we witnessing, in real time, a historic misallocation of capital?】
In the past few years, artificial intelligence has barged into public view in a very un-sci-fi way. It did not arrive as Iron Man or Skynet. It showed up as an “intelligent input method” that can chat, write code, and draw pictures. Capital markets and the state, however, did not treat it casually. Trillions of dollars have been poured into chips, data centers, electricity, and talent. A handful of giants are charging ahead as if someone has wound their springs tight.
Placed on a longer historical timeline, this scene looks less like a passing tech fad and more like a threshold. Humanity may be standing at the door of a “fourth revolution.”
The agricultural revolution tamed the land. The industrial revolution tamed machines. The information revolution tamed bits. This time, the AI revolution is trying to tame something else: cognition itself.
From Attention Is All You Need to “This Is the Worst It Will Ever Be”
If a historian were asked to sketch a timeline for recent advances in AI, the year 2017 and the paper Attention Is All You Need (《注意力就是你所需要的一切》) would be hard to avoid. At the time, the mainstream path still ran through “training agents from scratch”: dropping blank-slate agents into complex environments—video games, Go, StarCraft—and hoping that reinforcement learning would simmer up something like “general intelligence.” It was a “tabula rasa” bet. The result was superhuman play inside games, and no “general person” outside them.
In Attention Is All You Need, Ashish Vaswani and his co-authors (阿希什·瓦斯瓦尼等人) proposed what became known as the Transformer architecture, replacing much of the previous sequence-model machinery with a self-attention mechanism. Almost all of today’s large models—including the various GPTs, Claude, and Gemini—are built on that architecture. At the time, this path looked less glamorous. It relied on pre-training models on vast datasets so that they could predict and generate language and other symbols. Then it stacked up compute, data, and model size, and used so-called scaling laws to model the relationship between capability and resources. Jack Clark and a cohort of researchers bet on exactly this route: tie the model to the “laws of scale,” push parameters from billions to trillions, push training corpora from terabytes to petabytes.
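For readers who want the mechanism rather than the metaphor, here is a minimal sketch of the single-head, scaled dot-product attention at the Transformer’s core. It is a toy illustration, not production code; the variable names and shapes are invented for this example.

```python
# Minimal single-head scaled dot-product self-attention, the core
# operation of the Transformer. All names and shapes are illustrative.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token embeddings.
    Wq, Wk, Wv: (d_model, d_k) learned projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv              # queries, keys, values
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # every token scores every other token
    scores -= scores.max(axis=-1, keepdims=True)  # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                            # attention-weighted mix of values

# Toy usage: a "sentence" of 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)        # (4, 8)
```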
By 2025, looking back, it is clear that this “back-yard steel furnace” approach is what actually changed the landscape. Language models were trained into general interfaces; programming, writing, translation, and search were all jammed into the same “circuit.” Then speech, images, and video were layered on top, and multimodal abilities grew one tier at a time. That produced the models we see today, and the parade of names like Claude Code and Gemini 3.5 Pro.
For outsiders, one fact is often missed: everything visible today is “the floor, not the ceiling.” Researchers find themselves repeating the same line to policymakers: this is the dumbest day of these models’ lives. Every few months, the frontier gets pushed again.
Michael Burry’s surprise lies elsewhere. His background is in finance, and his mental image of AI still leans toward “artificial general intelligence” and “self-awareness.” For his generation raised on science fiction, AI is HAL 9000, Skynet, robots that think. Very few would have regarded “a search-heavy chatbot” as true AI.
From his vantage point, three things are especially unexpected: Google is not leading but being pushed by startups; what triggered a multi-trillion-dollar capital outlay was not AGI, but a chat interface; the king of chips has remained Nvidia, not the ASICs and small-model ecosystem people once imagined would take over.
From the start, then, the AI revolution carries the marks of its age: a deep dependence on capital markets, on the decisions of a small number of giants, and on a software-engineering culture of “ship fast, iterate.” This looks very different from the scattered workshops of the steam era, and from the garage hackers of the early information revolution.
The Fourth Revolution: From Land, Machines, and Bits to “Cognitive Infrastructure”
In the usual scheme, the agricultural revolution answered the question of who farms and how; the industrial revolution answered who exerts physical force and how; the information revolution answered who keeps the books and how.
On the surface, the AI revolution seems to repeat the same formula. But the object has quietly changed. The point is no longer just to have machines exert force or keep accounts on our behalf. It is to have machines understand, generate, and “think” alongside—and increasingly, instead of—people. That shift sets this revolution apart from the previous three in several ways.
First, it is global from day one. The agricultural revolution was tied tightly to geography and climate, and differences between regions were enormous. The industrial revolution, though it eventually spread, began in a small cluster of coalfields, ports, and textile towns. The information revolution traveled much faster, but hardware still took time to install. This time, as long as there is connectivity, cloud access, and compute, a country—or even an individual—can build on the same generation of models. Diffusion is faster than in any of the earlier revolutions.
Second, its infrastructure is highly concentrated. The base resource of agriculture was land. For industry it was coal and iron. For the information age it was fiber and transistors. All of these were dispersed among countless firms and states. The base resources of the AI revolution are training clusters, high-end GPUs, electricity, and cooling systems. At the moment, these sit inside a handful of “super-platforms.” As one quip puts it, every big software company has become a hardware company. Capital expenditure has shifted from light to heavy. Return on invested capital (ROIC) is coming down from high levels. The server farms on the giants’ balance sheets are turning into elements of national power.
Third, it reaches into the “inner order” of human life. The agricultural revolution redrew the line between settlement and nomadism. The industrial revolution redrew the line between factory and home. The information revolution redrew the line between office and network. The AI revolution has begun to redraw the line around “mental labor.” Developers use large models to write, revise, and debug code. Scholars use them to search literature, make charts, and draft reports. Doctors, lawyers, and teachers are experimenting with handing chunks of cognitive work to models.
Patrick McKenzie has pointed out that, for decades, enormous quantities of highly educated, highly paid human effort have gone into making PowerPoint slides and Excel charts. Now, at least for charting and layout, large models have already displaced a slice of that work. The change looks trivial at first glance, but it adds up. Cognitive labor is not as visible as physical labor. Its boundaries are fuzzier, and that very fuzziness can make it easier to shift quickly.
The Productivity Paradox: “It Feels Faster,” but the Numbers Lag
Every technological revolution runs into the same question: how much faster did productivity really become? The current situation is oddly split. Inside big companies, developers report eye-popping efficiency gains; some surveys show engineers who use large models claiming a 50 percent boost. At the same time, careful experiments like those conducted by METR have produced much cooler results—on some tasks, even showing declines.
Jack Clark readily concedes that there is a gap between subjective feeling and real productivity, and that better “instrument panels” are needed to measure it. Code generation is a textbook “closed-loop task”: the model writes something that can immediately be run and tested, and there is a full feedback cycle. Once the loop opens up—writing a strategy memo, drafting a policy analysis—the standards blur. Who decides what counts as readable? How is maintainability measured? How are soft qualities like “taste” and “judgment” scored? For that reason, large models have made the quickest inroads among programmers, and knowledge workers in other fields have adopted them far more slowly.
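What a “closed loop” means in practice can be shown in a few lines. The sketch below is a toy illustration, not any lab’s actual pipeline; generate_candidate is a hypothetical stand-in for a model call, and the test is deliberately trivial.

```python
# A toy version of the closed loop that makes code generation easy to
# evaluate: every candidate can be executed against tests immediately.
import subprocess
import sys
import tempfile
import textwrap

def generate_candidate(prompt: str) -> str:
    # Hypothetical stand-in for a model API call; returns source as text.
    return textwrap.dedent("""\
        def add(a, b):
            return a + b
    """)

def passes_tests(source: str) -> bool:
    # Run the candidate plus its tests in a fresh interpreter.
    tests = source + "\nassert add(2, 3) == 5\nassert add(-1, 1) == 0\n"
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(tests)
        path = f.name
    return subprocess.run([sys.executable, path]).returncode == 0

# Generate, execute, keep or retry. Open-ended work (a strategy memo,
# a policy analysis) has no equivalent of passes_tests().
for attempt in range(3):
    if passes_tests(generate_candidate("write an add function")):
        print(f"accepted on attempt {attempt + 1}")
        break
```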
Then there is a blunter question: who pays? Here Burry’s challenge bites hardest. He keeps stressing that, in the end, AI must be paid for by firms and households. Global GDP is a fixed-size pie, and software accounts for less than a trillion dollars of it. If Nvidia can sell $400 billion worth of chips while the application layer brings in less than $100 billion a year, that kind of imbalance between infrastructure and applications will be hard to sustain.
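The arithmetic behind that challenge can be made explicit. The sketch below is a back-of-envelope calculation, not a forecast: the $400 billion chip figure and the sub-$100 billion application figure come from the paragraph above, while the depreciation life and gross margin are assumptions invented for illustration.

```python
# Back-of-envelope check of the infrastructure-versus-application gap.
chip_capex = 400e9        # annual AI chip spend (figure cited above)
app_revenue = 100e9       # annual application-layer revenue (upper bound cited above)
useful_life_years = 4     # ASSUMPTION: accelerator depreciation schedule
gross_margin = 0.6        # ASSUMPTION: blended margin on AI applications

annual_depreciation = chip_capex / useful_life_years
required_revenue = annual_depreciation / gross_margin

print(f"Depreciation to cover each year:  ${annual_depreciation / 1e9:.0f}B")
print(f"Revenue needed just to offset it: ${required_revenue / 1e9:.0f}B")
print(f"Shortfall vs. actual app revenue: ${(required_revenue - app_revenue) / 1e9:.0f}B")
```

Under these assumptions, the application layer would need roughly $167 billion a year merely to cover depreciation on one year of chips, well above what it currently earns; that is the shape of the imbalance Burry is pointing at.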
In his notes, Burry uses an old-fashioned image. In the last century, a department store installs an escalator. The one across the street has to follow. Both spend heavily. The customer experience improves a bit. Neither gains a lasting edge.
AI infrastructure is likely to replay that story. If every competitor buys the same models, the same chips, the same cloud services, each firm will be forced to spend. Consumers will enjoy lower prices and better service. Few companies will manage to carve out a long-term moat. In that scenario, more of the gains flow to customers and to society at large, rather than pooling in the hands of a single “AI giant.”
This may be the productivity paradox of the AI revolution. On the micro level, individuals and organizations feel “much faster.” On the macro level, shifts in wages, employment, and aggregate demand are less obvious, and software becoming cheaper may even give the economy a faint deflationary tinge.
Stress-Testing the Order: Jobs, Capital, and the State
The shock that the AI revolution is delivering to the existing political-economic order is still building. Several fault lines are already visible.
The first is the labor market. Early forecasts tended to assume that once there was a model that could pass the Turing test and solve complex math and coding problems, half of white-collar work would vanish overnight. That has not happened. Performance on closed-form questions is outstanding. Put the same models into open-ended situations that involve long-term responsibility and intricate coordination, and they turn out to be “sharp but unreliable.”
The result is that current usage mostly looks like “human–machine collaboration.” The model writes a first draft; a human edits. The model suggests options; a human chooses. The model does grunt work; a human carries responsibility. The disappearance of entire job categories has been limited so far, but the internal composition of many jobs is changing.
The second fault line is capital structure. Big tech firms used to be the standard image of light-asset, high-return businesses. Cloud computing raised capital expenditures, but most of them still counted as “high-ROIC” companies. Now, chasing AI’s promise, they are spending like heavy industry: generation after generation of GPUs and custom chips, with depreciation cycles getting shorter; data centers being retrofitted with new power, cooling, and fiber to suit each chip wave; huge sums parked under construction-in-progress (CIP) to delay their impact on the income statement.
Burry’s warning is that the trend in ROIC tells you more about a firm’s long-term fate than its absolute profits. If returns keep sliding while capital spending keeps ratcheting higher, the end state is likely to be “larger scale, lower value.” For AI, that would mean a revolution that is, for a very long time, mainly a test of capital’s endurance.
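A stylized example shows why the trend matters more than the level. Every number below is hypothetical; only the mechanism, heavier capital spending on shorter replacement cycles compressing returns even as profits grow, comes from the argument above.

```python
# Toy illustration of a firm whose profits rise while ROIC slides.
# ROIC = net operating profit after tax (NOPAT) / invested capital.
def roic(nopat: float, invested_capital: float) -> float:
    return nopat / invested_capital

# Hypothetical trajectory, in billions of dollars.
years = [
    (20,  50),   # light-asset era
    (24,  90),   # first GPU build-out
    (28, 160),   # shorter chip cycles force faster replacement
    (30, 240),   # data-center retrofits pile onto the balance sheet
]
for year, (nopat, capital) in enumerate(years, start=1):
    print(f"Year {year}: NOPAT ${nopat}B, capital ${capital}B, "
          f"ROIC = {roic(nopat, capital):.0%}")
# Profits grow every year, yet ROIC falls from 40% toward 12%:
# "larger scale, lower value."
```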
The third fault line runs through energy and security at the level of the state. In Jack Clark’s view, one of the key policy questions of the next few years is how to “bind AI infrastructure to an energy revolution.” If models keep growing, they will need cheap, stable, large-scale power. High-density data centers are natural early customers for new energy technologies: small modular reactors, fusion demonstration plants.
Burry’s proposal is more radical. For a country to keep its edge in the AI era, it should be willing to allocate massive budgets, build a network of safe small nuclear plants across its territory, and upgrade the grid—launching a new wave of “power infrastructure.” In that picture, “compute sovereignty” and “energy sovereignty” are tied together. Whoever can provide the cheapest, most reliable electricity will be in a position to support the largest-scale training and deployment of models.
World Order: Whoever Owns the “Cognitive Infrastructure” Rewrites the Rules
The impact of the AI revolution on the global order is far from settled, but several trends are already emerging.
First, diffusion and concentration are happening at the same time. On the diffusion side, open-weight models and cross-border cloud platforms are accelerating what might be called “the downward spread of intelligence.” Many small and mid-sized countries can tap into the latest generation of models directly, skipping the prohibitive cost of local training.
On the concentration side, only a few states and firms—those with access to global data, top-tier compute, and elite talent—are truly able to push the frontier. If “self-improving models” eventually appear, that concentration could deepen. Clark is particularly concerned about a closed loop in which “AI builds AI.” If a lab manages to realize recursive self-improvement, the pace of research could jump dramatically. Outsiders would struggle to keep up, let alone regulate.
Second, military and security logic is seeping in. For many countries, AI is not “one sector among others.” It is becoming the decision layer of all sectors. From unmanned systems to intelligence analysis, from cyber offense and defense to logistics, whoever leads on algorithms gains an advantage in readiness and deterrence. That creates a basic tension. On one hand, every state fears that an adversary might weaponize AI first. On the other, no state can afford not to push AI hard at home, for fear of falling behind economically and socially. The structure is reminiscent of the nuclear age’s split between nuclear weapons and civilian nuclear power, but the feedback cycle is faster and the channels of spread are more numerous.
Third, there is an unease that might be called “cognitive colonialism.” During the information revolution, most of the world’s network infrastructure and core platforms were controlled by a small number of countries. In the AI era, the same imbalance may take a new form: certain languages, cultures, and value systems dominate the training data; models are better attuned to those idioms; they more readily reproduce their preferences and blind spots. Will this lead many countries, without noticing, to adopt “pre-loaded” ways of thinking? This is one of the most delicate and least measurable cultural effects of the AI revolution.
How the United States Is Responding: Between Mania and Restraint
In this “fourth revolution,” the United States is both the testing ground and the main table in the casino. On one side of the ledger, it has the most mature capital markets, several of the leading AI labs, and the broadest base of high-end users. That much is obvious from the rise of companies such as OpenAI and Anthropic. Waves of private capital, venture funds, and private equity have rushed to meet the model boom, sending money toward GPUs and data centers again and again.
On the other side, the United States is being squeezed from several directions. The first squeeze is fiscal and debt pressure. Large-scale AI infrastructure requires long-term funding, and public finances are already heavily burdened. If this round of capital expenditure is left entirely to companies—financed through equity, debt, and private credit—then, should model capabilities fail to improve as expected, or should application-layer revenues lag, balance sheets will look fragile. Burry is especially troubled by this: a great deal of AI infrastructure is entering the system through “private credit plus long-duration depreciation.” On the surface it looks safe. In reality, the maturity mismatch is severe, and the risk of stranded assets is high.
The second squeeze is industrial. The old growth logic of internet firms rested on light assets and high-margin software. If, over the next decade, they are pushed into becoming “heavy-capital hardware-plus-software” hybrids, expectations for high ROIC will be hard to meet. The market will be forced to rewrite how it values them. This is not just a technical problem. It means the whole evaluative machinery of Wall Street has to move.
The third squeeze is political and social. The AI revolution has arrived at an awkward moment, landing right on top of existing fractures in American society. On one side are hopes for “technological optimism,” the idea that AI can help handle aging populations and raise the efficiency of health care and education. On the other are fears of “technological misrule”—that AI will intensify job loss, information manipulation, and military risk.
In such an environment, policymakers are tempted to swing between extremes. They may either cheerlead, treating AI as the next growth miracle, or clamp down, trying to pre-empt every risk with regulation. The more realistic task probably lies in the middle. At the foundational level—energy, compute, talent—governments need something like national projects and long-term planning. At the level of applications and business models, they can afford to step back and let markets explore what actually works. On safety and ethics, they need early red lines and transparency around the highest-risk areas: recursive self-improvement, military uses, and control over critical infrastructure.
Culture and People: When Language Changes, People Change
There is another side of the AI revolution that is often overlooked: it is, first of all, a revolution in language. One unintended consequence of large language models is that they have acquired a multilingual capacity no person has ever had. In major languages, some models now translate at or near the level of professional translators. In many smaller languages, their competence far exceeds that of most humans. McKenzie has marveled at this: the ability to “casually translate a CNN article into Japanese” would have been unthinkable in the previous era.
At the level of everyday life, models have quietly slipped into many people’s hands. Some use them to look up information and make charts. Some use them as one-on-one tutors. Some use them to get remote guidance on fixing wiring or plumbing, hoping to save the cost of an expensive house call. Once this kind of “cognitive outsourcing” becomes a habit, it will reshape how people know things. A doctor who leans on models for diagnostic suggestions year after year may genuinely forget much of the underlying knowledge. A student who grows up having models do homework may see original writing skills wither. Burry does not lose much sleep over the apocalyptic scenario in which “AI destroys humanity.” He is more worried about people voluntarily giving up their own capacities and becoming lazy and dull. On a small scale this looks like tool dependence. On a larger scale, it is about how civilization itself is shaped.
If the agricultural revolution counts as the first “revolution in the mode of production,” the industrial as the second, and the information revolution as the third, then the AI revolution is indeed qualified to be called the fourth. Its distinct feature is that the first three mainly reshaped the relationship between humans and things. This time, the target is the relationship between humans and language, humans and knowledge, humans and themselves.
From the conversation among Burry, Clark, Patel, and McKenzie, several layers of reality come into view. On the technical side, progress has been astonishingly fast, yet there is still a gap from true “general intelligence.” On the capital side, investment has reached a scale rarely seen in history, while returns at the application layer have not fully caught up. On the political and institutional side, preparations lag far behind technology and money. Many key issues have not even entered the core of public debate.
For that reason, the “fourth revolution” will test more than one or two countries’ stores of compute, or a few companies’ earnings reports. It will test several basic capacities of human civilization. Faced with new tools, can societies stay rational, without worshipping or demonizing them? Faced with new orders, can institutions muster the imagination to redraw the lines of power and responsibility? Faced with new forms of dependence, can people preserve some non-delegable core of human judgment?
After the agricultural revolution, humans learned how to live with the land. After the industrial revolution, they learned how to live with machines. After the information revolution, they learned how to live with networks. After the AI revolution, what humanity will finally have to learn is how to live with “another system that can think.” That lesson is likely to be harder than any of the ones before.


