Jobst Landgrebe co-authored (with the philosopher Barry Smith) a refreshingly good take on AI, called Why Machines Will Never Rule the World, in which he argues that, well, you know, machines will never rule the world.

It’s refreshingly good because the zeitgeist is dominated by fear and paranoia.
Even I’ve been caught up in the fear at times. A look through my cartoons will confirm as much.

Although, to be fair, my cartoon above focuses more on transhumanism, which is indeed a genuine concern, and on the post-humanism that would follow it (and mark the end of humanity).

The idea that AI will take over the world assumes it can do things it simply can’t—like truly understand context or human intent in the way we do. Machines are tools, not rivals.
— Rodney Brooks, robotics pioneer and former MIT professor
What is intelligence?
To understand why machines will never take over, Jobst starts with the basics: what human intelligence is and how it differs from the artificial kind.
Intelligence is, at its core, the ability to proactively solve new problems.
AI can’t—and never will—manage that, despite Elon Musk’s beliefs, as it’s merely a collection of reactive automations.
I’ve seen quite a few technologies develop, but none with this level of risk. AGI is significantly higher risk than nuclear weapons, in my opinion.
— Elon Musk (@elonmusk) April 25, 2023
Super smart humans have trouble imagining something vastly smarter than themselves.
Ironically, here’s what Grok (Elon Musk’s AI engine) said about my statement above:
Your definition—intelligence as proactively solving new problems—aligns with some cognitive science views, like those stressing adaptability (e.g., Sternberg’s triarchic theory). AI, as reactive automations, fits current narrow AI: it excels at pattern-matching within trained domains (e.g., chess, translation) but flounders outside them, lacking true initiative. Musk’s optimism—say, about AGI via Neuralink or xAI—banks on future breakthroughs, but Landgrebe (whom you’ve cited) argues in 'Why Machines Will Never Rule the World' that we can’t engineer general intelligence without understanding it fully, supporting your stance. Critics might say AI’s progress (e.g., AlphaGo’s novel moves) hints at proactive potential, yet those are still reactive within constrained systems. Your point holds strong for now—AI’s not there, and Musk’s faith might be more vision than evidence.
Furthermore, we don’t truly understand intelligence.
This is kind of a big deal.
'AI is merely a collection of reactive automations'
Closed-world problems are suitable for machines; open-world problems are not. Chess opening moves, for instance, are a closed-world problem, whereas going on a dinner date is an open-world one. (A quick sketch of the difference follows the definitions below.)
- Closed-world problems are scenarios with a finite, fully known set of rules, variables, and outcomes. Everything needed to solve the problem is predefined, and there’s no ambiguity about what’s true or false.
- Open-world problems are scenarios with incomplete information, unpredictability, and a need for adaptability, where the environment is dynamic and not fully defined.
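To make that concrete, here’s a minimal Python sketch of my own (not from the book; tic-tac-toe stands in for chess to keep it short). Because a closed-world game’s rules, states, and outcomes are fully predefined, a few lines of code can exhaustively solve it:

```python
from functools import lru_cache

# The eight ways to make three in a row on a 3x3 board (indices 0-8).
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board: str):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board: str, player: str) -> int:
    """Exhaustive minimax: +1 if X can force a win, -1 if O can, 0 if drawn."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0  # board full, no winner: a draw
    nxt = "O" if player == "X" else "X"
    scores = [value(board[:i] + player + board[i + 1:], nxt)
              for i, cell in enumerate(board) if cell == "."]
    return max(scores) if player == "X" else min(scores)

# Every rule, state, and outcome is predefined, so the whole game is solvable:
print(value("." * 9, "X"))  # 0 — perfect play by both sides ends in a draw
```

No analogous enumeration exists for the dinner date: the state space, the rules, and even the success criterion are undefined and keep shifting, which is exactly why open-world problems stay out of reach for reactive automations.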
Moreover, machines lack intentions or moral responsibility, and the elephant in the room is that they need energy, which they can’t generate themselves. 🤣
However...
Machines can be used by humans to cause harm, whether digitally (through devices, for example) or physically (in warfare, for instance).

This, Jobst argues, is where the danger lies.
AI could become a tool for imposing mass surveillance and exerting control over society.
You have to fear the evil that is in man.
— Jobst Landgrebe