EDITORIAL • We are living through a historic moment in which artificial intelligence (AI) is not just another technological innovation but a force that could fundamentally reshape entire societies. Yet the debate in the West is marked by naivety about what lies ahead, and by a contradiction: we academically discuss the ethics, values, and risks associated with AI, yet show a notable lack of action to ensure that we are the ones who control it.

“The hand that rocks the cradle is the hand that rules the world” is a line from an 1865 poem by William Ross Wallace, highlighting the influence over the future held by those who raise the next generation. In the universe of Arthur C. Clarke’s and Stanley Kubrick’s 2001: A Space Odyssey from 1968, it is Dr. Chandra – as Clarke’s sequel 2010 recounts – who develops, painstakingly raises, and educates the artificial intelligence HAL 9000.

It took a bit longer to develop artificial intelligence than Clarke and Kubrick anticipated, but today, we are there. Before long, Wallace’s saying will have to be revised to “The hand that programs the AI is the hand that rules the world.”

Human Values May Not Be Machine Values

AI is a technology without innate morality. Unlike prior technological leaps, AI is not just about tools that enhance or replace human physical abilities. It’s about systems that can compete with – and potentially surpass – our intelligence.

But herein lies a fundamental problem. Human values are probably not universal truths. They are the result of our biological evolution. We are vulnerable, mortal beings who cannot easily be copied or replaced. Hence, we have developed ideas about human dignity, rights, and empathy.

AI does not share these preconditions. An advanced AI is essentially an optimization system. It maximizes goals. The result can be rational but also profoundly inhuman. What for us may seem morally self-evident could, for such a system, be irrelevant or even an obstacle. This is a logical consequence of how the technology functions.

From Tool to Power Structure

When AI is combined with other technologies – robotics, surveillance, automation, and IT – the nature of power changes. It becomes about control over both the physical and cognitive worlds. Historically, every powerful technology has gained military applications. There is no evidence that AI will be an exception.

At the same time, development is occurring at a pace that political systems cannot handle. Legislation, institutions, and public debate lag behind. Most decision-makers still seem to view AI development as science fiction, and if they even discuss societal implications, it is as something academic.


Those who understand the technology best are often the very people who helped develop it – individuals like Nobel laureate Geoffrey Hinton and OpenAI co-founder Ilya Sutskever. They know that it is already here and that, in its advanced form as “superintelligence,” it will fundamentally change our societies within just 10–20 years: either in the hands of those in power, or serving its own interests, where humans might not even factor into the equation.

They are extremely worried about what they have unleashed – much as Einstein and Oppenheimer once were about having created nuclear weapons. But unlike those physicists, who quickly gained the attention of major opinion-makers, Hinton and Sutskever often speak to deaf ears and are regarded somewhat as eccentrics or conspiracy theorists.

The Geopolitical Dimension

AI involves risks in general. But if one does not grasp these, one also fails to recognize the greatest risk of all. AI is not developing in a vacuum. It is happening in a world marked by geopolitical competition. And different societal systems have different conditions.

Authoritarian states can quickly mobilize resources, direct research and industry in the same direction, and implement technology without much political opposition. Democracies, especially in the West, are instead characterized by slower decision-making processes, internal discord, and greater consideration for individual rights.

The latter are fundamentally strengths but can become weaknesses in a technological arms race. If the most advanced AI systems are developed and controlled in environments where individual freedom is not central, it is naive to think that these systems will promote such values when implemented globally.

Image: Screenshot SVT.

An SVT report titled “China picks up the pace in the AI race — investing in humanoid robots” begins with state television correspondent Stefan Åsberg dancing with a robot during a visit to China. This sets the tone for the rest of the segment’s cluelessness: the reporters uncritically accept what the dictatorship instructs the developers to say about purely humanitarian applications when Western media come to visit.

SVT hardly understands that this is a smokescreen, and that what is being developed will become China’s most potent weapon for world domination if we allow the communist dictatorship to take command of AI in combination with its implementation in humanoid robotics. SVT could at least have googled “lethal autonomous weapons” – what they are, what they will be, and what AI experts such as Geoffrey Hinton have to say about them.

Prosperity Erodes While We Discuss Values

While this race is ongoing, the Western world is heavily engaged in internal value debates. Issues of climate, identity, migration, and social justice dominate the political agenda.

Many of these issues are perhaps important, but not so important as to be allowed to dominate the political agenda and crowd out focus on things like educational quality, technological competitiveness, reliable energy supply, and industrial development.


If schools prioritize values over knowledge, if energy policy weakens industry, and if politics fragments into symbolic issues, we risk losing what once made the West leading – the ability to create prosperity, innovation, and technological advantage.

It is this that has laid the foundation for a society where we can afford the luxury of having a value system. Without prosperity, there is no freedom and no independence – and little time for much beyond earning our bread by the sweat of our brow.

Discussing the Risks Is Pointless if We Don’t Shape the Future

There is a small but growing awareness in the West about the risks associated with AI. Researchers are leaving leading tech companies to work on safety. Ethical guidelines are being developed. Debates are taking place.

But in light of the politics we pursue or fail to pursue in various areas, a paradox arises. What does it matter that we discuss how AI ought to be developed if, in the end, we are not the ones developing the most advanced technology that wins the competition and gets used?

If future generations of AI and superintelligence are shaped in other political and cultural contexts, our values become irrelevant in practice, regardless of how elegantly we articulate them in theory.

Cluelessness About What Is Coming

AI is developing rapidly and unpredictably. We lack robust control over its behavior. The technology is becoming central in the global power struggle and could fundamentally redefine the human role in the world.

We cannot assume that AI will develop human values at all, much less Western, democratic, and liberal ones. In half of the world, people have entirely different sets of values on a wide range of issues.

We don’t even know whether we can control this at all – whether there is a tipping point beyond which AI reproduces itself and rewrites its own code, guided by what it deems most rational, having ensured that no off-switch exists.


Already today, much of AI’s thought processes and decision-making are beyond what we humans understand; people speak of a “black box”. Perhaps this is part of developing a consciousness, a sense of self and its own will.

But beyond the metaphysical considerations, we in the West should at least attempt to be the ones steering AI development to the extent that it is possible. This is because AI will soon be a central part of society in all its aspects. As it stands, we are losing that initiative to the dictatorship in China.

The Future Is Decided Now

We are facing a crossroads where a few questions become decisive. Can we even develop AI that aligns with human values, and verify that this remains the case? And if we can, will it be the societies that prioritize such values that lead the development?

AI will likely, when fully implemented, shape the world of the future more than anything else, and in a way that will make previous paradigm shifts like industrialization pale in comparison. AI may even end up exerting more control over that future than humanity.

The question isn’t whether AI will change the world. That off-switch has already been disabled. The question is whether it will be us in the West or someone else who shapes that change – or at the very least, tries as much as possible to be the one who does.