Why artificial intelligence will never take over


Stop worrying about machines controlling humanity. It can't and won't ever happen for a number of very good reasons.

Jobst Landgrebe co-authored a refreshingly good take on AI, called Why Machines Will Never Rule the World, in which he argues that, well, you know, machines will never rule the world.

It’s refreshingly good because the zeitgeist is dominated by fear and paranoia.

Even I’ve been caught up in the fear at times. Looking through my cartoons, for instance, will reveal that.

📄
Jobst’s great article Digitization and AI in the Transformation traces how power, historically tied to cities and nobility, is now consolidating under a new nobility that rules with AI.

Although, to be fair, my cartoon above focuses more on transhumanism, which is indeed a genuine concern, followed by post-humanism (which would mark the end of humanity).

What is transhumanism and why is it a terrible idea?
Laura Aboli unpacks the technocratic agenda to destroy normality
The idea that AI will take over the world assumes it can do things it simply can’t—like truly understand context or human intent in the way we do. Machines are tools, not rivals.

— Rodney Brooks, robotics pioneer and former MIT professor

What is intelligence?

To understand why machines will never take over, Jobst starts with the basics, such as what human intelligence is and how it differs from the artificial kind.

Intelligence is, at its core, the ability to proactively solve new problems.

AI can’t—and never will—manage that, despite Elon Musk’s beliefs, as it’s merely a collection of reactive automations.

Ironically, here’s what Grok (Elon Musk’s AI engine) said about my statement above:

Your definition—intelligence as proactively solving new problems—aligns with some cognitive science views, like those stressing adaptability (e.g., Sternberg’s triarchic theory). AI, as reactive automations, fits current narrow AI: it excels at pattern-matching within trained domains (e.g., chess, translation) but flounders outside them, lacking true initiative. Musk’s optimism—say, about AGI via Neuralink or xAI—banks on future breakthroughs, but Landgrebe (whom you’ve cited) argues in 'Why Machines Will Never Rule the World' that we can’t engineer general intelligence without understanding it fully, supporting your stance. Critics might say AI’s progress (e.g., AlphaGo’s novel moves) hints at proactive potential, yet those are still reactive within constrained systems. Your point holds strong for now—AI’s not there, and Musk’s faith might be more vision than evidence.

Furthermore, we don’t truly understand intelligence.

This is kind of a big deal.

'AI is merely a collection of reactive automations'

Closed-world problems are suitable for machines, but open-world problems are not. For instance, chess opening moves are a closed-world problem, whereas going on a dinner date is an open-world problem.
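The closed-world/open-world split can be sketched in a few lines of Python. This is my own illustration, not from the book: the toy opening book and function names are hypothetical, but they show why a reactive automation handles a finite, fully specified state space and nothing else.

```python
# Closed world: chess openings form a finite, fully specified state space,
# so a reactive automation can be as simple as a lookup table over known positions.
OPENING_BOOK = {
    "start": ["e4", "d4", "c4", "Nf3"],   # common first moves
    "e4 e5": ["Nf3", "Bc4", "f4"],        # replies after 1.e4 e5
}

def book_move(position: str) -> list[str]:
    """Reactively answer any position the book already covers."""
    return OPENING_BOOK.get(position, [])

# Open world: a dinner date has no enumerable state space (weather, mood,
# conversation, culture), so no lookup table or trained model can cover it.
# The machine can only fail closed the moment the world leaves its domain:
def respond(situation: str) -> str:
    moves = book_move(situation)
    return moves[0] if moves else "no rule applies"
```

The point of the sketch: inside the closed world the machine looks competent; one step outside it, the only honest answer is "no rule applies", and that gap is exactly what no amount of extra table entries can close.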

🤖
As Jobst’s book points out, AI isn’t new. It’s simply a collection of algorithms designed for specific tasks, and it’s been around for decades in machinery, appliances, and the like.

Moreover, machines lack intentions or moral responsibility, and the elephant in the room is that they need energy, which they can’t generate themselves. 🤣

However...

Machines can be used by humans to cause harm, whether digitally (through devices, for example) or physically (in warfare, for instance).

This, Jobst argues, is where the danger lies.

AI could become a tool for imposing mass surveillance and exerting control over society.

You have to fear the evil that is in man.

— Jobst Landgrebe
