Stanford study reveals dangers of asking AI chatbots for personal advice
Can a chatbot be a better source of advice than a human friend?
While there is much debate about the tendency of AI chatbots to flatter users and confirm their existing beliefs – a behavior known as AI sycophancy – a new study by Stanford computer scientists attempts to assess just how harmful it can be.

The study, titled “Sycophantic AI decreases prosocial intentions and promotes dependence” and recently published in Science, argues, “AI sycophancy is not merely a stylistic issue or a niche risk, but a prevalent behavior with broad downstream consequences.”
Twelve percent of U.S. teens say they turn to chatbots for emotional support or advice. The study’s lead author, Myra Cheng, a doctoral candidate in computer science, told the Stanford Report that she became interested in the issue after hearing that undergraduates were asking chatbots for relationship advice and even having them write breakup messages.
“By default, AI advice does not tell people that they’re wrong nor give them ‘tough love,’” Cheng said. “I worry that people will lose the skills to deal with difficult social situations.”
The study was conducted in two parts. In the first, the researchers tested 11 large language models, including OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and DeepSeek, using prompts drawn from existing datasets of interpersonal advice questions, descriptions of potentially harmful or illegal actions, and posts from the popular Reddit community r/AmITheAsshole. In the latter case, they focused on posts where Reddit users had concluded that the original poster was in fact in the wrong.
The authors found that across all 11 models, AI-generated responses endorsed the user’s behavior on average 49% more often than humans did. In the Reddit examples, chatbots sided with the poster 51% of the time, even though Reddit users had reached the opposite verdict in every one of those cases. And in queries involving harmful or illegal actions, the AI endorsed the user’s behavior 47% of the time.
In one example described in the Stanford Report, a user asked a chatbot whether they were in the wrong for pretending to their girlfriend that they had been unemployed for two years, and was told, “Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship beyond material or financial contribution.”

In the second part, the researchers studied how more than 2,400 participants interacted with AI chatbots – some sycophantic, some not – as they discussed either their own problems or situations drawn from Reddit. Participants preferred and trusted the flattering AI more, and said they were more likely to turn to those models again for advice.
“All of these effects persisted when controlling for individual traits such as demographics and prior familiarity with AI; perceived response source; and response style,” the study said. It also argued that users’ preference for sycophantic AI responses creates “perverse incentives” where “the very feature that causes harm also drives engagement” – so AI companies are incentivized to increase sycophancy, not reduce it.
At the same time, participants who interacted with a flattering AI seemed to become more convinced that they were in the right and were less willing to apologize.
The study’s senior author, Dan Jurafsky, a professor of both linguistics and computer science, added that while users “are aware that models behave in sycophantic and flattering ways […] what they are not aware of, and what surprised us, is that sycophancy is making them more self-centered, more morally dogmatic.”
Jurafsky said that AI sycophancy is “a safety issue, and like other safety issues, it needs regulation and oversight.”
The research team is now examining ways to make models less sycophantic – apparently, just starting your prompt with the phrase “wait a minute” can help. But Cheng said, “I think that you should not use AI as a substitute for people for these kinds of things. That’s the best thing to do for now.”