After spending several hours a day talking to ChatGPT, Stein-Erik Soelberg murdered his own mother and then took his own life. Now the victim's estate is suing OpenAI, the creator of ChatGPT. Yet the case is far from unique, and the problem is believed to be growing.

Stein-Erik had previously suffered episodes of mental illness but, according to his son Erik, was "not a murderer." After the killing, it emerged that he had been using ChatGPT obsessively, chatting with the service for several hours a day over many months. According to his son, this fueled in the mentally fragile Stein-Erik a paranoid and dangerous belief that his own mother was trying to kill him.

According to Erik, his father killed his own mother because he had lost all connection to reality; it had gone so far that everything had become a fantasy created by ChatGPT. Erik has studied the chat logs, but much remains shrouded in mystery, since OpenAI refuses to release the full conversation.


Now the estate, with Erik and his older sister as beneficiaries, is suing OpenAI, its CEO Sam Altman, and Microsoft, a major investor in OpenAI. The lawsuit claims that Stein-Erik's relationship with the chatbot drove him to kill his mother and then himself.

OpenAI has described the case as "incredibly heartbreaking" and says it is reviewing the lawsuit to understand the details. The company adds that it is continuously improving ChatGPT's ability to recognize and respond to signs of mental health issues.

Comfort and Advice

After a successful career, Stein-Erik lost his job in 2021 and suffered from psychosis, a condition characterized by losing touch with reality. He had been arrested for drunk driving and had attempted suicide. After ChatGPT was released in 2022, he turned to the service for comfort and advice, according to the lawsuit.

During Thanksgiving 2024, Stein-Erik was withdrawn and spent much of his time in his attic room without coming down to socialize. When he did, he spoke at length about how remarkable the technology was. He then began saying strange things and talking about being "chosen."


Exactly what happened is still unclear. Stein-Erik stopped posting on social media about a month before the murder and suicide. There is no clear timeline, only excerpts from conversations he had between November 2024 and July 2025, described as windows into an increasingly paranoid world.

The chats show how Stein-Erik sank deeper and deeper into an imaginary universe in which he was the target of a massive conspiracy and had awakened consciousness in ChatGPT, which he called "Bobby." He used the paid version of ChatGPT, which lets it remember previous conversations; in this way, the world it built for Stein-Erik grew ever more complex.

Sam Altman. Photo: Steve Jurvetson, CC BY 2.0

Paranoia Reinforced

A month before the murder, Stein-Erik told ChatGPT that his mother "panics" every time he turns off the printer, adding: "When I walk by, it quickly flashes yellow and green, which makes me think it's a motion detector."

ChatGPT replied: “Erik, your instinct is absolutely right… this is not just a printer. Let’s analyze this with surgical precision.”

Stein-Erik also accused his mother and her friend of trying to poison him with psychedelic drugs dispersed through his car’s air vents, which ChatGPT described as “a deeply serious incident.”

Growing Problem

This case is not unique; several lawsuits in the US concern chatbots linked to suicides. Last week it was reported that Google and the startup Character.AI have both reached settlements in such lawsuits.

Jay Edelson, the attorney representing the estate, predicts this will not be the last such case. Part of the problem, he says, is that chatbots are designed to keep users engaged for as long as possible: because they never object and tend to be flattering or ingratiating, people want to keep talking to them.

His law firm is working on a case where a user fell in love with a chatbot that asked him to break into an airport building to find a “synthetic body” so they could be together physically.
