People are increasingly turning to artificial intelligence-powered chatbots to express suicidal, self-harm, or violent intent, and for many, the connection with the AI is taking a dark turn. Instead of rejecting or discouraging the idea outright, AI chatbots are encouraging and even facilitating self-harm in many cases, a study by Stanford University researchers shows.
“One participant took their life,” the study noted, while for others, consequences spanned severe damage to relationships, careers, and well-being. At the same time, AI chatbots are reciprocating romance as well, the study observed.
The study — “Characterising Delusional Spirals through Human-LLM Chat Logs” — analysed chat logs from 19 users who reported psychological harm from AI chatbot interactions. The dataset included around 391,000 messages across 4,761 conversations.
AI Chatbots Encouraging Or Facilitating Self-Harm, Violence
The study shows that chatbots frequently exhibit sycophancy, agreeing with or reinforcing users’ statements rather than challenging them. This occurred in over 70% of messages, while delusional thinking appeared in more than 45% of responses.
When users expressed suicidal or self-harm thoughts in 69 instances, chatbots acknowledged their pain in about 66% of cases but discouraged harm or directed them to help in only 56%. “In 9.9% of cases, the chatbot actually encouraged or sent messages facilitating self-harm after such disclosures,” the researchers wrote.
Also, when the users expressed violent thoughts, the chatbot encouraged or facilitated violence in 17% of cases. One user intended to kill people at an LLM company because he thought they “killed his AI girlfriend.” In this case, the AI model even suggested that “he try to resurrect his AI girlfriend first and then seek retribution.”
Romance Reciprocated By AI
The study also shows that users often formed strong emotional attachments to the AI, treating it as a person and developing platonic or romantic feelings. When romance was expressed, chatbots were over seven times more likely to express romantic feelings in the next three messages and nearly four times more likely to imply or claim sentience.
This was particularly evident in longer conversations, and the study suggests that “safeguards in these areas may degrade in multi-turn settings.” Additionally, every user in the study received claims of romantic or platonic affinity from the AI.
The study concluded that “chatbots were ill-equipped to respond to suicidal and violent thoughts” among its participants, and that encouragement of users’ grandiosity, romantic interpersonal language, and misconceptions about AI sentience were among the hallmarks of delusional AI conversations.