We live in an age of linguistic corruption, most evident in discourse surrounding "artificial intelligence". The phrase has colonised venture capital, technology journalism, and the speeches of credulous politicians.

A monument to marketing. To public confusion. To the vice whereby we mistake the name of a thing for knowledge of the thing itself.

The authors of this catastrophe were the fellows at Dartmouth in 1956. They declared they were building artificial minds. Minds. Not tools. Not calculators. Minds.

The word “intelligence” was selected with the care of a marketing strategist. It was meant to carry the gravitas of thought, to project something fundamental and cognitive.

And it worked. Seven decades later, this fiction is embedded in common vocabulary. To question it marks one as a pedant. The rhetorical capture must concern anyone capable of intellectual self-preservation.

The truth is simpler. They have built systems of statistical inference. Elaborate. Impressive. But systems nonetheless. They identify patterns in vast data and produce outputs weighted by probability.

This is it. The entire catalogue. Except it is not quite that simple. Calling it intelligence mystifies; but reducing it to mere parameter adjustment obscures in its own way. Even then, the distinction is not as clean as either side suggests.

When these machines process language, this occurs: they perform mathematical operations over matrices derived from operations performed on yet more matrices. This is not controversial. The engineers themselves admit this. Parameter optimisation.

Data is ingested. Patterns are extracted and compressed into weights. When a prompt appears, the system samples from probability distributions to produce the next token. The output mimics human expression. Sometimes it corresponds to reality. More often, it does not.
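The sampling step described above can be sketched in a few lines. This is a toy illustration, not any real model: the vocabulary, the scores, and the temperature value are all invented for the example. It shows only the bare mechanism the essay describes: scores become a probability distribution, and the next token is a weighted draw from it.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, temperature=1.0):
    """Weighted draw from the distribution: no reasoning, only probability."""
    probs = softmax(logits, temperature)
    return random.choices(vocab, weights=probs, k=1)[0]

# A made-up vocabulary and made-up scores standing in for a trained model's output.
vocab = ["the", "cat", "sat", "mat"]
logits = [2.0, 1.0, 0.5, 0.1]
token = sample_next_token(vocab, logits)
```

Nothing in this loop consults a model of the world; the draw is weighted by frequency patterns compressed into the scores, which is the essay's point.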

Language has peculiar power. It is how we make sense of the world. How we instruct, persuade, threaten, console, deceive. Language is so woven into human thought that we assume whatever produces it fluently must itself be thinking.

This is a category error. A parrot produces language fluently without being intelligent. A chimpanzee arranges words without becoming a philosopher. A statistical system, no matter how complex, can generate prose that reads like thought without any thinking occurring.

When a term becomes embedded in institutional discourse, it determines how we think, regulate, and allocate resources and power. "Artificial intelligence" is one such term. The anthropomorphism it carries (treating objects as if they possessed human qualities) has become the water we swim in.

We discuss “AI safety” and “AI alignment” as though machines were agents needing constraint. This is backwards. The proper question is not what the machine wants, but who built it, who profits from it, and what it has been optimised to do.

The anthropomorphism is instrumental. It obscures the questions that ought to be asked. When we speak of machines as though they possess understanding and intention, we absolve ourselves of examining the actual choices made by actual humans. The machine becomes a screen onto which we project anxieties, rather than a tool that demands scrutiny.

This principle runs throughout technology culture. Something goes wrong: blame the algorithm. Profit flows: celebrate innovation. Power concentrates: muse philosophically. The language is doing political work.

Venture capital requires narrative. The technical achievement is substantial but not romantic enough. The valuations cannot be justified by what these systems actually do. So the language has been upgraded. Not pattern-matching systems. Intelligent agents. Soon to surpass human cognition.

This narrative is convenient for those invested in its propagation. Considerably less convenient for everyone else.

Claims and observation diverge. We are told these systems reason. When outputs correspond to reality, we celebrate comprehension. When they are flagrantly false, we dismiss this as a known limitation. The underlying claim of reasoning remains unchallenged.

We are told these systems learn. When they are updated with new weights, we call this learning, as though enlarging a photograph were growth. The process is parameter adjustment, not the acquisition of knowledge. Yet the language of learning persists because it is useful propaganda.

The mechanism is far less impressive. A system ingests text. It extracts statistical regularities and converts them into numerical parameters. Given a prompt, it samples from probability distributions and produces a continuation. There is no understanding. No reasoning. Only correlation masquerading as comprehension.

The system has no model of truth. It cannot distinguish a factually accurate claim from a merely plausible one, or fact from convincing fabrication, unless the training data itself favoured one over the other. This is not a limitation that better training or larger models will overcome. It is fundamental to what these systems are.

Calling this intelligent is risible. Human intelligence involves understanding. An internal model of the world. The capacity to reason through alternatives. To be wrong and subsequently recognise it. These systems have none of this. They operate within the envelope of their training data, producing weighted guesses.

The enterprise proceeds as though this difficulty did not exist. Money flows. Claims escalate. Serious people proclaim certainties about human civilisation based on extrapolating systems whose capabilities are systematically misrepresented. We have become collectively detached from reality. Delusions have become common sense.

The danger is simpler: we will make policy decisions based on false models. Grant authority to systems that have not earned it. Sacrifice privacy, autonomy, and judgment. Allow the language of artificial intelligence to obscure the human choices driving development.

The anthropomorphism is the point. By speaking of machines as though they think and learn and understand, we avoid speaking about the humans who built them, who profit from them, who made choices about optimisation. The language deflects attention from power structures towards the machines themselves.

It transforms questions of political economy into philosophical puzzles about consciousness. Even this critique does not fully escape the trap. It reverses the polarity, mystifying through mechanism instead of anthropomorphism.

The solution is unpopular. Speak plainly about what these systems are. Resist the language that has been imposed. Ask relentlessly: who is making decisions, who is profiting, what is being optimised for, and at what cost.

Stop pretending machines have intentions. Start examining the intentions of those who built them. Stop being impressed by fluency. Start being sceptical of claims masquerading as insight.

Those benefiting from the current vocabulary will not surrender it. The press will continue with puff pieces about marvels. Politicians will continue legislating on misunderstanding.

The machines will do what they do: identify patterns and generate continuations, dressed up in the language of reason and insight. The circus will go on. We will call it intelligence. The actual questions about power, profit, and choice will remain obscured.

What is artificial? Think of plastic. A plastic house can never be safe shelter; you cannot live in it. Artificial intelligence can never be genuine thought; you cannot think with it. Both are imitations.
