When Knowledge Begins to Be Computed
—From Shannon to Wolfram
Between Turing, von Neumann, Kuhn, Jobs, and Altman, where exactly does Stephen Wolfram belong?
When people talk about AI today, it is easy to get swept along by the noise. One model gets an upgrade. One company raises more money. Another product looks more like “the future.” Someone else makes a startling prediction. There are many voices, and the pace is fast. Yet the noisier the moment becomes, the more necessary it is to step back and see where this transformation came from, and where it may be taking us.
If one traces the history of knowledge and technology over the past century as a single long line, a few figures stand out. Claude Shannon separated “information” from meaning. Alan Turing recast “thought” as a computable process. John von Neumann placed programs inside the machine. Thomas Kuhn reminded people that scientific progress does not move in a straight line. Steve Jobs turned technology into everyday experience. Sam Altman pushed generative AI into a global public interface. And Stephen Wolfram stands at a crucial point on that line: he has tried to turn knowledge itself into something that can be directly called upon, directly computed, and directly organized. Shannon’s information theory was founded on his 1948 paper “A Mathematical Theory of Communication.” Turing is widely seen as one of the central founders of computer science and artificial intelligence. Von Neumann was one of the key figures behind the concept of the stored-program digital computer. Kuhn’s The Structure of Scientific Revolutions changed the way people understood scientific progress. Jobs brought personal computing devices and new forms of interaction into everyday life. And OpenAI, under Altman, has turned large models into a global technological and commercial reality.
Many people know Wolfram because of Mathematica. A little later, they came to know him because of Wolfram Alpha. That impression is not wrong, but it is still not enough. If he is seen only as a software entrepreneur, or simply as someone who makes computational tools, it becomes easy to miss what he has really been trying to do. What he wants is not just a tool. It is a new order of knowledge: a world in which knowledge is not only read, memorized, and cited, but also computed, called up, and automatically organized. Wolfram Research’s own materials make this plain: Stephen Wolfram is the creator of Mathematica, Wolfram Alpha, and the Wolfram Language. And Wolfram Alpha’s official definition is equally clear. Its aim is to turn systematic knowledge into computable knowledge, so that users do not merely “find information,” but receive results that can be directly worked with.
Seen from today, this matters even more than it did a decade or two ago. The real dividing line in the age of AI is not just whether a model can talk. It is whether a machine is “generating language” or “handling knowledge”; whether it is “imitating understanding” or “mobilizing structure.” This is where Wolfram matters.
It is best to begin with Shannon. Shannon’s greatest achievement was not that he coined a few technical terms. It was that he carried out a remarkably deep abstraction. For the first time, he led humanity to accept in a systematic way that information could be discussed without first discussing meaning. One could speak instead of transmission, encoding, noise, and capacity. The move looks technical. In fact, it changed the modern world. From that moment on, people began to believe that many things once thought inseparable from their concrete content could first be formalized, encoded, compressed, and processed. The later influence of information theory extended far beyond communications engineering. It shaped cryptography, computer science, and the whole structure of the digital world.
Then came Turing. Turing’s historical place is often summed up with phrases like “a pioneer of computer science” or “a pioneer of artificial intelligence.” Both are true. Yet they do not quite capture the sharpness of what he changed. Turing changed humanity’s understanding of thought itself. He made people confront a serious question: if certain processes within thought can be described by rules, then perhaps those processes can be executed by machines. That was no small shift. It meant that some human capacities were not untouchable mysteries. They could be rewritten as programs, procedures, changes of state, and computable structures. Turing did not turn machines into human beings overnight. But he opened a door. Part of what we call intelligence could be separated from the body and handed over to a formal system.
But Turing’s vision alone was not enough. Someone still had to place it inside a real machine. That is where von Neumann matters. His connection to the idea of the stored-program computer is written into nearly every history of computing. Put simply, the stored-program idea means that a program enters the machine the way data does. The machine is no longer just a tool for one fixed task. It becomes a general, rewritable, orchestrated system. Today that may sound like common sense. At the time, it was a foundational revolution. Without it, the later software industry, the networked world, platform economies, and today’s training and deployment of large models would not stand.
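To make the stored-program idea concrete, here is a minimal sketch, written in Python purely for illustration and not modeled on any historical machine: a toy computer whose instructions live in the same memory as its data, so that giving the machine a new task means nothing more than writing new contents into memory.

```python
# A toy stored-program machine: instructions and data share one memory.
# The instruction set is invented for this sketch; it is not any real architecture.

memory = [
    ("LOAD", 9),     # 0: copy the value in cell 9 into the accumulator
    ("ADD", 10),     # 1: add the value in cell 10
    ("STORE", 11),   # 2: write the accumulator back to cell 11
    ("PRINT", 11),   # 3: print the value now stored in cell 11
    ("HALT", None),  # 4: stop
    None, None, None, None,  # 5-8: unused cells
    6,               # 9: data
    7,               # 10: data
    0,               # 11: the result will be written here
]

def run(memory):
    acc, pc = 0, 0            # accumulator and program counter
    while True:
        op, arg = memory[pc]  # the next instruction is just another memory cell
        if op == "LOAD":
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "PRINT":
            print(memory[arg])
        elif op == "HALT":
            return
        pc += 1

run(memory)  # prints 13
```

Because the program is only another pattern stored in memory, the same hardware can be turned to an entirely different task by rewriting those cells, which is exactly what makes the machine general rather than fixed.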
Once Wolfram enters this picture, his distinctiveness becomes easier to see. He does not belong to the generation that first invented the underlying principles. Nor does he fully belong to the generation that popularized technology for the masses. He is closer to someone who, after computational civilization had already taken shape, kept pressing a further question: if programs can run, if information can be encoded, if machines can become general, then can knowledge itself also be systematically computationalized?
That is what most clearly sets him apart from many other figures in technology.
Wolfram first came to prominence at a young age through his work in theoretical physics. He later founded Wolfram Research and released Mathematica in 1988. After that came Wolfram Alpha, and then the continued development of the Wolfram Language. On the surface, these belong to different domains: one is a technical computing system, one a computational knowledge engine, one a computational language. But taken together, they are doing the same thing. They bring together capacities once scattered across mathematics, logic, data, images, symbols, language, and knowledge bases, and place them inside a framework of computation that is as unified as possible.
The core of Wolfram’s thought is tied to this impulse toward unification. He has long insisted on one judgment: complex phenomena do not always require complex causes. Simple rules, repeated again and again, can also produce extraordinarily complex results. This line of thought gradually took shape through his work on cellular automata, and was later set out in systematic form in A New Kind of Science. What matters most about that book is not just that it proposes a set of specific models. It tries to change the way people imagine scientific explanation itself. Science, it suggests, does not always need to rely on a beautiful equation that explains the world in one sweep. At many moments, the most effective method may not be direct solution at all. It may be to let the rules run and watch how complexity grows. Encyclopaedia Britannica and other public sources have noted that the book advances an ambitious and controversial claim: that traditional mathematical science is not enough to cover the full range of natural complexity.
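A small illustration of that claim, sketched here in Python rather than in Wolfram’s own notation: the elementary cellular automaton known as Rule 30, one of the systems Wolfram studied, updates every cell from a fixed eight-entry lookup table, yet from a single marked cell it produces a famously irregular, seemingly random pattern.

```python
# Rule 30: each cell's next state depends only on itself and its two neighbors.
# The number 30 (binary 00011110) encodes the output for the eight possible neighborhoods.
RULE = 30

def step(cells):
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single "on" cell in the middle and simply let the rule run.
width, steps = 79, 32
row = [0] * width
row[width // 2] = 1
for _ in range(steps):
    print("".join("#" if c else " " for c in row))
    row = step(row)
```

The rule itself fits in a single byte; the structure it generates does not settle into any simple repetition, which is the point Wolfram keeps pressing.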
And that brings Kuhn into the picture.
Kuhn was not an engineer, nor did he write software. But he changed the way people look at science. The deepest point of The Structure of Scientific Revolutions is its insistence that science does not move forward along a steady, even, automatic line. Scientific communities work within paradigms. Much of the time, they are solving problems inside an already accepted framework. Major change comes not only when new answers appear, but when the questions worth asking change, when the methods deemed legitimate change, and when the standards of what counts as an explanation change as well. Kuhn’s term “paradigm shift” later became one of the most widely used expressions for understanding the modern history of knowledge.
Seen from that angle, Wolfram’s ambition becomes clearer. What he really wants to contest is not simply whether one model is better than another. It is the ranking of scientific methods themselves. He is challenging a deep modern academic habit: the belief that truly advanced science must, as far as possible, be equation-based, continuous, and analytically solvable. Wolfram does not entirely reject that path. But he keeps saying that many real objects look more like programs than formulas, more like unfolding processes than once-and-for-all closed answers. One cannot always ask what the final expression is. Sometimes the better question is what the rules are, and how they evolve. This is not a minor adjustment. It is a methodological claim with an unmistakable desire for a new paradigm.
Still, one should not press the point too far. Wolfram has not achieved the kind of broad and stable academic consensus that Shannon or Turing did. Since A New Kind of Science, his larger theoretical claims have remained controversial. Yet that is precisely why he is worth writing about. The most interesting figures are often not the ones who have already been fully absorbed into textbooks. There is a rare tension in Wolfram. He is not a pure academic, yet he keeps probing the most fundamental questions. He is not an ordinary entrepreneur, yet he uses companies and products to push methodological ambitions. He is not a celebrity technologist in the usual sense, yet he has been quietly building a harder foundation for what may become the future of AI.
After that comes Jobs.
Putting Jobs on this line may look, at first, like a sudden leap from theory to consumer electronics. In fact, it is not abrupt at all. Jobs did not change the underlying principles. He changed the way technology entered society. His greatest strength did not lie in how many zero-to-one inventions he personally produced. It lay in how clearly he understood that if technology could not become something ordinary people wanted to approach, wanted to rely on, and wanted to use again and again, then its social force would never fully be released. Apple’s success was closely tied to its ability to turn complex technology into something smooth, attractive, and easy to take up in everyday life. Public accounts of Jobs and Apple’s product history stress this point repeatedly: technology ceased to be just a bundle of functions. It became part of the order of everyday life, wrapped in aesthetics, interfaces, and habits of use.
Set Jobs beside Wolfram, and the difference becomes telling. Jobs was skilled at hiding complexity, so that ordinary people hardly felt it at all. Wolfram is better at organizing complexity, so that it becomes callable, workable, and traceable. One is a figure of public entry points. The other is a figure of knowledge infrastructure. One changed the relation between the mass public and devices. The other changed the relation between researchers and systems of knowledge. Both made tools. But they were aiming at different layers of the world.
Last comes Altman.
Sam Altman’s historical place may not belong to the realm of first principles, as Turing’s does. Nor does it belong to product aesthetics, as Jobs’s does. He is closer to a driving force of the platform age. OpenAI’s official language speaks of “ensuring that AGI benefits all of humanity,” and repeated public reporting over the past two years has shown how ChatGPT turned generative AI from a research object into a global public interface, forcing technology companies to rearrange themselves around models, data centers, computing power, and applications. Altman matters not simply because he leads a star company. He matters because his moment has made AI enter ordinary work, study, and everyday judgment on a mass scale through the form of a language interface.
And that leads to one of the most urgent questions of the present, and one of the easiest to overlook: is a machine that can talk the same thing as a machine that can handle knowledge?
This is exactly where Wolfram becomes newly important.
Large models excel at language patterns, contextual association, and generative ability. They are powerful, astonishing, and they have indeed reshaped many workflows of knowledge labor. But they also have a natural problem: to generate is not to verify; to imitate understanding is not to possess structured knowledge; to organize language is not to perform rigorous computation. Wolfram’s line, by contrast, stands for another tradition of intelligence: rules, symbols, algorithms, knowledge representation, and verifiable computation. Even today, Wolfram’s own official materials place LLM APIs alongside the Wolfram Language and the Wolfram Alpha API in a single technological landscape. That in itself is a clear signal. The true direction of future AI may not be large models going it alone. It may lie, instead, in a deep coupling of language generation and computable knowledge.
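What such a coupling can look like in practice is easy to sketch. The fragment below is a simplified illustration in Python, not a description of any particular product: it assumes the publicly documented Wolfram Alpha Short Answers API endpoint, a placeholder app id, and an ask_llm function that merely stands in for a language model, so that the generation layer only phrases a result the computation layer has actually produced.

```python
import requests

WOLFRAM_APPID = "YOUR-APP-ID"  # placeholder; requires a Wolfram Alpha developer app id

def compute(query: str) -> str:
    """Send a question to the Wolfram Alpha Short Answers API and return its result."""
    resp = requests.get(
        "https://api.wolframalpha.com/v1/result",
        params={"appid": WOLFRAM_APPID, "i": query},
        timeout=10,
    )
    resp.raise_for_status()  # the API returns plain text, or an error if it cannot compute
    return resp.text

def ask_llm(prompt: str) -> str:
    """Stand-in for any language-model call; a real system would query an LLM API here."""
    return prompt  # echoed unchanged, so the sketch stays self-contained

def answer(question: str) -> str:
    # The computation layer produces a verifiable result first...
    result = compute(question)
    # ...and the language layer only phrases that result, rather than inventing it.
    # A fuller system would also let the model decide which sub-questions to route here.
    return ask_llm(f"Answer {question!r} using only this computed result: {result}")

print(answer("distance from the Earth to the Moon in kilometers"))
```

The division of labor is the one just described: language handles the asking and the phrasing, while the computable layer supplies an answer that can be checked.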
In the end, Wolfram’s place on this historical line is neither the loudest nor the easiest for the broader public to remember at once. Yet it is crucial. The figures before him solved the problems of how information could be abstracted, how thought could be formalized, and how programs could enter machines. The figures after him turned technology into an entry point for life and for language. What Wolfram has been doing is filling in a piece of the puzzle that many people have not paid enough attention to, though it matters immensely: making knowledge itself into an object that machines can handle in a stable way.
Why does this matter even more today? Because the real competition in the age of AI is becoming less and less a matter of raw computing power, model size, or speed alone. At a deeper level, it is a contest over the ability to organize, call up, verify, and orchestrate knowledge. Whoever can connect language, knowledge, rules, algorithms, and real-world tasks more firmly is more likely to support the next generation of infrastructure. In that sense, Wolfram is not a figure who belongs to the past. He increasingly looks like a relay station. On one side, he is linked to the deep traditions of computational civilization. On the other, he is linked to the still unfinished dream of a knowledge machine in the age of AI.
So why should one still read Wolfram seriously today?
Not because he created Mathematica, and not only because Wolfram Alpha once dazzled so many people. More important, he reminds us that we cannot understand intelligence simply as “speaking like a human being,” nor can we understand knowledge simply as content stored in texts. Intelligence has another side: computation, verification, structure, rules, and repeatable inference. Knowledge has another side as well: whether it can truly be mobilized by machines, rather than merely imitated and paraphrased by them.
That may be Wolfram’s most important value today.
At a time when everyone is talking about generation, models, and future agents, he keeps insisting on an older, harder question: if machines are really to help human beings understand the world, they cannot merely describe the world. They must also be able to compute it.
That sentence may be worth pondering more slowly than any new product release.


