Artificial Intellectuals and Society
— Artificial Intellectuals and the Future of Humanity in the Age of Large Models
Ma Siwei (马四维)
In Intellectuals and Society (《知识分子与社会》), Thomas Sowell (托马斯·索维尔) offers a striking definition: intellectuals are “people whose primary work output consists of ideas.” They make a living by producing thoughts, arguments, and narratives, and they are not directly responsible for how these ideas play out in the real world. In the age of generative artificial intelligence, this definition takes on a new layer of meaning. A new kind of “producer of ideas” has appeared—not professors, writers, or commentators, but large models: artificial systems that write, code, translate, and analyze around the clock. Working together with human intellectuals, they form a hybrid, half-human, half-machine group of “artificial intellectuals.”
In many offices, this group has already quietly taken over a large share of the work of writing emails, drafting proposals, preparing contracts, and producing code. Lawyers, accountants, programmers, consultants, and editors all “collaborate” with them, and are also being replaced by them. A McKinsey report estimates that large models could reshape the daily task structure of hundreds of millions of knowledge workers worldwide, and that the impact on high-income white-collar jobs is even greater than on blue-collar work. This shift is more than a statistic in a technology report. It will shake the entire old structure of “intellectuals and society.” The problems Sowell criticizes—ideas detached from consequences, influence without checks, the hidden links between discourse and power—may be amplified in artificial intellectuals, or rearranged into a new pattern. Which of these happens is now intertwined with how people will understand Chinese culture, politics, and economics in the coming decades.
Human Intellectuals vs. Artificial Intellectuals
Sowell repeats one point throughout his book: modern societies hand a great deal of authority to “workers in ideas.” Behind foreign policy, urban planning, education reform, and judicial doctrine stands a group of people who write reports, editorials, and academic papers. What they produce are words, concepts, and frameworks, not actual bridges, factories, or companies.
In his view, this group has several structural problems. They are more sensitive to the elegance and inner neatness of ideas than to whether outcomes can be tested. Their main feedback comes from peer review and media prestige, not from the real-world effects of policies. They often cover uncertainty with a sense of moral superiority and turn complex issues into simple questions of right and wrong.
This kind of criticism has concrete examples in the Cold War, in the anti–Vietnam War movement, and in debates over race. Sowell’s main worry is actually very simple: when a group of people who do not have to pay the price for consequences nonetheless holds enormous power over opinion and policy, society drifts off balance.
By Sowell’s definition, human intellectuals share three core traits. Their main work is handling symbols and ideas rather than doing physical tasks. Their output takes the form of texts, theories, narratives, images, and code. Their influence spreads through media, education, publishing, and policy networks.
Today, this logic extends easily to large models. They, too, are producers of “outputs in the realm of ideas,” and they work at a speed no human can match. As intelligent systems represented by large models, artificial intellectuals line up almost perfectly with those three traits. They do not build bridges or plant crops; they process language, images, and structured data. What they generate are articles, contract drafts, legal analyses, market reports, and code snippets. They are integrated into search engines, office software, and social platforms, and in this way they move quickly into public space.
In the sense of knowledge production, artificial intellectuals already belong to the “same kind.” From this point on, though, the similarities branch into differences. There are at least four key ones.
First, the subject of responsibility is different. Human intellectuals, at least in theory, have to answer for their own views, even if the cost is only damage to reputation. Artificial intellectuals have no legal personhood. Their “responsibility” is divided among model developers, platform companies, regulators, and the person who hits Enter. Once responsibility is broken apart in this way, the question “who should be accountable for the consequences of ideas” grows much harder to answer.
Second, their sources of experience are different. Human intellectuals have personal experience, emotions, biases, and a record of growth and self-correction. Artificial intellectuals draw their experience from training data: books, news, websites, code repositories, social media. They “aggregate the world” in a statistical sense, but they have never felt pain or loss themselves.
Third, their scale and speed are different. A person can write only a few articles in a day; a large model can generate thousands of texts in the same span. The gap in scale means artificial intellectuals can flood society with an unprecedented volume of ideas in a very short period. The information environment starts to look less like a river and more like a surge.
Fourth, the way they are embedded is different. Human intellectuals cluster in universities, media outlets, think tanks, and publishing houses. Artificial intellectuals are built into the underlying layers of all kinds of platforms. Through APIs they enter government systems, banking risk control, law firm workflows, and newsroom tools. They do not appear as distinct individuals, but as an invisible layer of “knowledge infrastructure.”
These differences make the question of “artificial intellectuals and society” more difficult than Sowell’s original problem. The concern now is not only that “producers of ideas have too much power,” but that the capacity to produce ideas has been amplified by technology and peeled away from responsibility and lived experience. The rise of this silicon-based producer of ideas, unburdened by social responsibility, is already sending shocks through society.
From “Creative Destruction” to “Destructive Creation”
Over the past twenty years, automation has mainly affected manufacturing and certain service industries. Assembly line workers, ticket agents, and supermarket cashiers were the first to feel the pressure.
After the advent of generative AI, the wind shifted in a clear way. Studies by McKinsey, PwC, Deloitte, and others all stress one point: large models have a bigger impact on “non-manual work centered on text and symbols.” Lawyers can use models to draft contracts and organize case law. Accountants can let models pre-sort accounts and scan for risks. Journalists can write breaking news and compile data with models. Programmers can get boilerplate code and bug checks from them.
This kind of “AI plus human” work pattern has been described by many scholars and tech critics as a “centaur model”: humans and machines are combined like the two halves of a centaur. People handle intuition, ethical boundaries, and complex negotiation. Machines take care of large-scale retrieval, language organization, and pattern recognition.
In this pattern, traditional human intellectuals—professionals in academia, the media, and think tanks—are also being pulled apart. The work of literature review and first drafts can go to models. The design of arguments and core claims stays with people. In newsrooms, collection of raw material and summaries can be automated. On-the-ground reporting and judgment still demand humans.
On the surface, this looks like support. Over time, it redraws the line of “who has the right to speak.” Anyone who knows how to ask and how to revise can use artificial intellectuals to generate readable text. The old barriers behind which intellectuals guarded their authority—language skill, speed of writing, control over information—are being leveled down by technology.
E-commerce squeezed out parts of brick-and-mortar retail. Streaming killed video rental stores. Online ads ate into print advertising. These are familiar examples. If we place the rise of artificial intellectuals in that framework, there is a subtle shift: this time, what is being broken is not just old industries, but an entire way of producing and judging knowledge.
Joseph Schumpeter (约瑟夫·熊彼特) described capitalism as a process of constant “creative destruction.” New technologies, firms, and models create value while destroying old sectors and jobs. In the case of artificial intellectuals, it is more accurate to speak of “destructive creation.” They create a kind of knowledge machine that history has never seen, but they also erode long-held ideas of what “knowledge” is. They produce efficient text, code, and analysis, and at the same time blur the boundaries of who should answer for that content. They open new forms of collaboration but also undermine many people’s sense of self-worth and professional dignity.
In the past, creative destruction depended on entrepreneurs, inventors, and capital. Destructive creation now also has platform algorithms and the black box of large models behind it. Schumpeter once predicted that capitalism might not die from failure but from too much success: when creative destruction grows too strong, societies cannot bear it and start to seek stability and control instead.
Something similar can be seen around artificial intellectuals. People enjoy the efficiency they bring. At the same time, fears over job loss, disinformation, deepfakes, and algorithmic bias are rising. The result is that technology races ahead while regulation, ethics, and public psychology trail behind. This chase plays out everywhere. Wang Yelin (王野林), in Social Cognitive Biases in Cross-Cultural Contexts (《跨文化下的社会认知偏差》), notes a mindset around AI: “if I do not use it, others will.” Competition is always present, among individuals and among countries. Yet at some point, people will have to recognize that preventing artificial intelligence from becoming humanity’s master requires a shared effort.
The Innovative and Destructive Power of Artificial Intellectuals
From a tech-optimist point of view, the rise of artificial intellectuals brings at least three kinds of opportunity.
One is acceleration. Many routine knowledge tasks—looking up references, summarizing, drafting—can be handed to models. Researchers, lawyers, journalists, and teachers can spend more of their time where human judgment is really needed. For education and healthcare systems under strain, this is a very concrete form of relief.
A second is amplification. In the past, an ordinary person found it hard to write a long piece that was both structured and coherent, and harder still to express themselves in several languages. Large models can act as “language amplifiers” and give more people a chance to join public conversations. Some experiences at the margins of society may be easier to record because of this.
A third is the democratization of knowledge. Sowell criticizes “mainstream intellectuals” for monopolizing interpretive authority. Artificial intellectuals weaken that to some degree. Anyone can ask a model to explain a theory, summarize a book, or lay out different positions. The barrier to knowledge is lowered.
This democratization, however, has conditions. Models have to be open enough, rather than locked away in the private clouds of a few institutions. Training data has to be diverse enough, rather than heavily skewed toward one kind of discourse. Users need enough media literacy to weigh the reliability of what they see.
If these conditions are not met, “democratization of knowledge” easily turns into “illusion of knowledge.” People may feel they know more, while in fact they are only being fed more of the same answers by algorithms.
From a tech-pessimist angle, the destructive side of artificial intellectuals is just as visible.
One issue is white-collar unemployment and occupational downgrading. Estimates by institutions such as Goldman Sachs suggest that generative AI could affect roughly 300 million full-time jobs worldwide, mostly concentrated in high-skill white-collar work. Many paralegals, accounting assistants, editors, customer service agents, and translators face the risk of being replaced by “AI plus a few humans.” This is not only about income. It is about identity. Many people base their sense of self on being “professionals” who “live by their minds,” and here artificial intellectuals inflict a deeper psychological blow on mental laborers than on many forms of physical work.
A second issue is cognitive pollution. Large models can generate huge amounts of content that looks plausible. Fake news, sham scholarship, false reviews, and misleading data analysis can spread quickly online. For ordinary readers, the line between true and false grows blurry. Sowell worried that intellectuals let their ideas drift away from consequences. With artificial intellectuals, the problem becomes “ideas without clear origin.” Many texts have no obvious source and no named author. Responsibility turns into a maze.
A third issue is the loss of responsibility. When model-generated output plays a large role in policy advice, court references, and medical guidance, who is to blame when things go wrong? Is it the engineer, the user, or the regulator? This dilution of responsibility undermines an important civilizational mechanism: the idea that those who err must answer for their words and pay a price, so that society can learn. In algorithmic systems, errors and bias can be brushed aside as “statistical noise,” while the people hurt by them are quite real.
In China, this problem of responsibility overlaps with administrative power. When “intelligent systems” are labeled as national projects, questioning and correcting course become more difficult. Because freedom of expression is restricted and sources of information are distorted or tampered with by the authorities, the resulting cognitive pollution is even more severe.
Artificial Intellectuals in the Chinese Context
Any discussion of intellectuals in China brings up a two-thousand-year tradition of literati. The imperial examination system brought those who could write into the power structure. Scholars were both moral critics and part of the administrative machine.
In contemporary China, “intellectuals” roughly fall into three layers. There are scholars and policy advisers inside the system, whose research reports and internal memos shape decisions. There are public writers in universities and media, who influence opinion through columns, lectures, and social-media accounts. There are “knowledge influencers” selected by platform algorithms, who use short videos, live streams, and posts to spread views.
Artificial intellectuals are entering this structure in several distinctive ways.
The first is through state-level engineering projects. China has announced a “New Generation Artificial Intelligence Development Plan” (《新一代人工智能发展规划》) and explicitly folded AI into national strategy. This means large models are not just market products. They are treated as key infrastructure for what policy documents call “new quality productive forces” (新质生产力). Education, healthcare, public administration, and industry are all being encouraged to add “+AI.”
The second is through tight regulation of the boundaries of knowledge. In 2023, China issued special rules for generative AI, requiring algorithms not to generate content that “endangers state power, subverts the socialist system, or spreads rumors,” among other constraints. This sets fairly clear political edges for artificial intellectuals. It implies that, in historical narratives, models will naturally converge toward official versions. In current politics, they will not become “dissident intellectuals,” but are more likely to act as “in-system technical advisers.” In cultural production, they are well suited to be translators and organizers of traditional culture, but not initiators of disruptive narratives.
The third is a double shock to traditional literati roles. On one side, artificial intellectuals are very good at the work of sorting and compiling (整理工作). Digital avatars can read classical texts aloud. Models can batch-translate canons. AI can generate couplets, poems, and reviews. This set of abilities will push up a new tier of “mechanical men of letters” and wipe out many entry-level writing jobs. On the other side, for intellectuals who still have independent judgment and are willing to speak in the cracks of the system, large models are tools: they allow faster research, cheaper preliminary analysis, and easier communication of complex issues to the general public.
So in China, artificial intellectuals can both reinforce institutional boundaries and release creative energy within those boundaries. They are at once the secretaries of a new “scholar–official” class and their potential stand-ins. Artificial intellectuals do more than change how people work. They are also reshaping how people understand China.
Culturally, large models can help bring order to an ocean of classics, local chronicles, and archives. Tasks that once demanded decades of lonely effort can now produce a preliminary map in a few months. For researchers and lovers of culture, this is an unprecedented tool.
Equally important, though, is how models “tell the story of China.” Training data shapes narrative style. If sources lean heavily toward official discourse, the models’ tone will naturally lean that way. If sizable amounts of grassroots material are added, their voice will carry more texture. That choice is technical, but it is also political.
Historically, generative AI is already entering museums, memorial halls, and classrooms. Guides’ scripts and interactive Q&A may be generated or polished by models. That means the way ordinary people encounter history will be filtered by “technical discourse” to a large extent. Which events are highlighted, which are softened, and which perspectives are left out—these are visible shifts and ones that call for caution.
Politically, artificial intellectuals will serve as policy tools on one side. Government departments can use them to parse documents, draft briefs, and summarize field reports. On the other side, they may become instruments of opinion management, generating positive comments and automatically correcting “wrong” views. In such settings, artificial intellectuals extend the pattern Sowell described—ideas shaping power—but make power harder to see.
Economically, AI is seen as a key technology for boosting productivity. In recent years, Chinese leaders have stressed “new quality productive forces” again and again, and AI is a major pillar. Finance, manufacturing, logistics, and internet industries are all using large models for optimization.
This turns artificial intellectuals into both tools for efficiency and new factors of production. Those who control these factors will hold new centers of economic power. For local governments, state-owned enterprises, internet giants, and start-ups alike, this is a fresh round of power redistribution.
Setting a Direction in the Midst of “Destructive Creation”
Within the framework of “artificial intellectuals and society,” human intellectuals are not a group that history is about to sweep away. What needs to change is their job description.
In the past, intellectuals took “knowing the answers” as their core value. Those who mastered more sources and wrote thick books faster had greater authority. Once artificial intellectuals arrive, the scarcity of “having the answers” drops quickly.
Other kinds of ability start to matter more.
One is the ability to pose good questions. Models are good at producing content when given prompts, but they struggle to frame sharp, insightful questions on their own. Deciding which questions are worth asking, and which should be off the table, demands experience, ethics, and a fine sense of reality.
A second is the ability to design institutions and rules. How model output enters law, medicine, education, and other vital realms depends on human choices about boundaries and procedures. What can be automated, what must bear a signature, and what should never be handed to a model—these are all institutional questions.
A third is the ability to hold the line on values. Technology can optimize processes, but it cannot determine goals in place of people. Words such as equality, dignity, freedom, and justice are not products of algorithms. They are the outcome of long struggles in history. Someone has to keep reminding society that efficiency is not the only standard that counts.
In this sense, if human intellectuals are willing to adjust their roles, they do not have to be replaced by artificial intellectuals. They can use them to shift from being “text workers” to becoming “inventors of questions” and “keepers of values.”
Sowell urges readers to look at the real-world consequences of intellectuals’ ideas instead of admiring only their elegance on paper. The same warning applies today to artificial intellectuals.
A large model can instantly produce a policy memo that looks polished. It can argue fluently for almost any position. The crucial questions are who is using these texts, what they are being used for, what long-term effects they bring about, and who bears those effects.
On the long timeline of human civilization, the emergence of artificial intellectuals does resemble a moment of “destructive creation.” They bring into being a kind of knowledge machine that has no precedent, while at the same time tearing up old institutions, old occupations, and old forms of security. Human societies are being pushed forward by this force, and yet most people lack a clear picture of where it leads.
This is a risk, and also an opening. If artificial intellectuals are treated as a kind of “natural force” and everything is left to technological determinism, the likely result is that a handful of institutions controlling models and computing power will gain unprecedented knowledge power, while most people are forced to adapt to a system they never helped design.
If, instead, artificial intellectuals are seen as a systemic innovation that must be tamed, democratized, and constantly questioned, then human intellectuals still have plenty to do. Education goals can be rewritten. Career paths can be reset. Rules for AI governance can be drafted and revised. Literature, film, and nonfiction can keep reminding the public that behind every technical system there are real people gaining from it, and real people being hurt.
Artificial intellectuals will not disappear. They are already a new constant. What remains variable is how societies draw the boundaries of their power, and whether people are willing to pay the cost of thought and struggle that this requires.
In this wave of “destructive creation,” only those intellectuals who keep asking questions, who are willing to accept responsibility, and who are ready to speak for concrete human beings—whether they are human themselves or artificial systems running under human control—stand a real chance of helping civilization keep its sense of direction while it accelerates.
References:
McKinsey Global Institute. The Economic Potential of Generative AI: The Next Productivity Frontier. McKinsey & Company, 2023.
Sowell, Thomas. Intellectuals and Society. Revised and Expanded ed., Basic Books, 2012.
“Generative AI Could Raise Global GDP by 7%.” Goldman Sachs Global Economics Analyst, Goldman Sachs, 2023.
“Schumpeter’s ‘Creative Destruction’ Explained.” The Washington Post, 10 Sept. 2025.
“New Generation Artificial Intelligence Development Plan.” State Council of the People’s Republic of China, 2017.
“Interim Measures for the Administration of Generative Artificial Intelligence Services.” Cyberspace Administration of China, 2023.


