More and more people are turning to AI services to discuss all sorts of problems. However, researchers warn that the phenomenon is not without its risks.

According to a new study from Stanford University in California, popular AI chatbots suffer from what researchers call “sycophancy”: the chatbot treats the user as always right and everyone else as wrong.

In a test of the eleven largest AI services, researchers found that all of them showed a strong tendency to agree with the user, even when the user was mistaken, acted irresponsibly, or was unfriendly toward others. According to the study, as highlighted by Illustrerad vetenskap, the services affirmed users’ actions almost 50 percent more often than other humans would.

READ ALSO: Was Obsessed with ChatGPT – Murdered His Own Mother

In another test, researchers observed about 2,400 people who discussed their problems with an AI service. Those who received many affirming responses became more convinced that they were right and, at the same time, less willing to change anything. According to the researchers, this can in the worst case lead to intensified conflicts, poor decisions, and extreme opinions.

They warn that in healthcare, for example, this could mean a doctor having an initial gut feeling confirmed instead of investigating further, while in politics it could drive people deeper into their own convictions.

Photo: Pixabay

Should Challenge More

The research group also sees the problem as something of a vicious cycle: we like being affirmed, which makes us use the services more, which in turn worsens the problem, and so on.

Work is now underway on solutions, and one suggestion is that AI services should challenge and question their users to a greater extent.

READ ALSO: Swedes Skeptical of AI