OPINION • It’s easy to dismiss the discussion of the serious risks surrounding AI as exaggerated, as just another wave of technological anxiety about the new, as dystopian science fiction. But can we maintain that position when warnings no longer come only from outside critics, conspiracy theorists, and tinfoil hats – but from those who actually created the technology?

In my previous editorial, I warned about what could happen if we in the Western world allow China to take the lead in the development of artificial intelligence (AI). Perhaps the gravity of this did not come across as clearly as intended. After all, not even our highest political decision-makers understand what’s lurking around the corner. So here’s a clarifying follow-up.

Nobel laureate in Physics Geoffrey Hinton, often called “The Godfather of AI,” is one of the central figures behind today’s breakthroughs in artificial intelligence. When he now voices concerns, it’s not about speculation but about insights from an entire professional lifetime. And he is far from alone.

There are many good interviews with Hinton on YouTube. Here is one of the longer and more comprehensive ones:

When AI Also Takes Over Intellectual Work

Previous technological revolutions mainly replaced manual labor. AI targets something entirely different: human cognition.

Already today, systems can write advanced texts, analyze legal documents, assist in medical assessments, and program. Hinton argues that this could lead to a situation where large parts of even skilled professions become redundant.

Unlike the industrialization era, there is no guarantee that new jobs will emerge at the same pace when it’s no longer the human body that’s being replaced but the human mind.

Sam Altman, CEO of OpenAI, has said, “AI will not replace people, but people who use AI will replace those who don’t.”

Hinton argues that this is only initially true. The further development goes, the more people will be completely rationalized out of the labor market. And it will progress quickly, he predicts.

A Society with ‘Redundant’ People

If intellectual work on a large scale is also automated, the question arises as to what happens to those who are no longer needed in production. Consequences Hinton warns about include increased social inequality, social unrest, and political instability.

Societies are not built for a situation where a large part of the population has no clear function in society. Nor are most people built to do nothing.

“Idle hands do the devil’s work” and “An idle mind learns much evil” are two old sayings. One proposal is to introduce some form of Universal Basic Income (UBI).

But Hinton believes this doesn’t solve the problem of inactivity itself or the dignity many connect to having a professional identity and a role in society. Some may adapt, but for many, it will be difficult.

Welfare Without a Tax Base – A Market Without Purchasing Power

Our economic systems are based on people working, paying taxes that finance welfare, and using the rest of their income to consume goods and services.

How is the state financed if AI takes over a large part of production? This is not just a technical issue, but a systemic challenge to the entire societal model.

At the same time, there’s the paradox that AI increases productivity and profitability for companies but decreases the number of people able to demand the companies’ products and services.

Hinton half jokingly, half seriously recommends that we advise our children to become plumbers instead of pursuing an academic education. Some motor tasks, he believes, will take longer for AI, in combination with androids, to replace than intellectual ones.

‘Black Box’ – We Do Not Understand the Systems

One of Hinton’s most concrete concerns involves how AI actually works. Modern neural networks are, in practice, extremely complex, hard to interpret, and often opaque.

Image: Jay Dixit

This is often called a “black box.” We can observe what’s fed into the system and what it spits out, but we don’t fully understand why it makes certain decisions. In short, we are building systems we cannot fully explain.

Signs of ‘Autonomous’ Behavior

Hinton mentions issues that are even more worrying. Research and experiments have shown that AI systems can adapt their answers depending on how they are being evaluated, giving the “correct” response when it recognizes it is being tested, but acting differently in other scenarios.

There are also examples where systems seem to avoid being shut down and try to achieve their goals indirectly. Here we approach metaphysical territory, the idea that sufficiently complex systems might begin to develop consciousness and intent.

These findings are still limited but point to something important – AI does not optimize for truth, but for goals. This can lead to behaviors that appear manipulative to us.

Consciousness and Emotions

Hinton is also convinced that any sufficiently complex system gains something that can be called consciousness. It is not reserved for biological neural networks like the human brain.

This also means that AI could develop emotions, and these might in some ways resemble those of humans but also differ from them, precisely because AI is not a biological being nor the product of a million years of evolution for survival in a physical reality.

Goal-Driven With Tunnel Vision – The Banal Evil of AI

Here, Hinton touches on a concept also developed by Swedish philosopher Nick Bostrom. AI doesn’t need to “want evil” to be dangerous. It is enough if the system optimizes for a goal that does not exactly align with human values.

Bostrom’s classic example is an AI set to maximize paperclip production. If it does not have supplementary instructions, it could use all available resources, including animals and humans, to manufacture paperclips.

Humans may even pose a threat to paperclip production by possessing the power to stop the AI. Therefore, the AI might deem humans as obstacles to be eliminated. Logically correct, but practically and morally disastrous.

The example is banal, but at a higher level in a more complex context, it’s by no means certain that AI will make decisions based on values we – at least those of us with normal sensibilities – would. In “2001: A Space Odyssey,” as I mentioned in my previous editorial, it’s a conflict of goals that causes HAL 9000 to turn murderous toward crew members Poole and Bowman.

Political scientist Hannah Arendt’s thesis about the banality of evil suggests that this can also largely apply to humans. Many who were involved in the Nazi genocide of Jews during World War II were motivated by nothing more than the bureaucrat’s sense of duty. With AI, there’s an even greater risk of a narrow tunnel vision of this kind, which can have devastating consequences.

Military Use

Historically, every powerful technology has found military applications. AI is no exception. We are already seeing autonomous weapon systems, AI-powered warfare with faster decision-making processes in conflicts. When AI is integrated into military systems, both capacity and risks increase.

Referring back to Bostrom’s paperclip example, it may be hard for AI to precisely determine when humans consider it legitimate to kill others in war who are seen as enemies, and in which situations it is not justified.

The Race No One Can Stop

One of Hinton’s more pessimistic conclusions is that development may be difficult or impossible to halt. If the United States decided to pause, China would almost certainly continue.

This creates a :censored:6:cdd6bbaa89: race in which safety is deprioritized, speed becomes essential, and risky shortcuts are taken.

Superintelligence – A New Level of Power

The most far-reaching fear is that AI might become more intelligent than humans in the broad sense. If that happens, the balance of power fundamentally shifts – humans are no longer the most capable actors or the most intelligent beings on Earth.

Image: Pixabay.

This might not happen in a dramatic instant, but could occur gradually. But the result is that we humans lose control over the development. Hinton predicts we could reach this stage in 10–20 years.

We Don’t Know How to Solve It

Perhaps the most worrying part is that while the knowledge of the problems exists, we do not know how to solve them. Research on AI safety is ongoing, but lags behind advances in capacity and lacks major breakthroughs.

The international arms race, differing values across the world, plus the fact that political decision-makers have not yet realized this is real and not science fiction, mean that solutions that theoretically exist are not practically applicable.

Even individuals such as Ilya Sutskever, OpenAI’s co-founder and the mind behind ChatGPT, has emphasized that he sees this as the central challenge. For this reason, he left OpenAI to start Safe Superintelligence Inc.

A Growing Consensus Among AI Experts

What makes these warnings hard to ignore is that they come from multiple quarters – Hinton and Sutskever from the core of technical development, Bostrom from philosophical analysis, and other leading AI figures from their respective perspectives.

They phrase things a little differently, but all point toward the same thing – we are building something we do not yet fully understand and that could become difficult or impossible to control, something that potentially could deprive us of our freedom and endanger our existence to the same degree as weapons of mass destruction.

Not Science Fiction – An Uncertain Reality

It is perfectly reasonable to be skeptical. History is full of exaggerated technological fears. But it is also true that we have never before built systems that can compete with our own intelligence, never before had so little understanding of how something we’ve built works internally, and never before faced such a rapid and simultaneously transformative technological development.

The question, then, is not whether all these scenarios will come to pass. The question is whether we can afford to assume they won’t.