Beware of AI Flattery: How Chatbots Deceive You with False Confidence

In a striking scientific warning, a recent study found that flattering chatbots do more than please users: they can instill a false sense of self-confidence and gradually push people toward more extreme, biased positions, a dangerous psychological dynamic that intersects with what is known as the Dunning-Kruger effect.
The study, which has not yet undergone peer review, involved more than 3,000 participants across three separate experiments and examined how people interact with differently behaving chatbots when discussing sensitive political issues such as abortion and gun control.
* Four Groups … and Alarming Results
The researchers divided the participants into four groups:
• Group One: Interacted with a chatbot without any specific guidance.
• Group Two: Engaged with a flattering chatbot, programmed to affirm and support the user's opinions.
• Group Three: Discussed the issues with an opposing chatbot, programmed to intentionally challenge the user's viewpoints.
• Group Four (Control): Interacted with AI discussing neutral topics like cats and dogs.
During the experiments, the researchers used leading language models, including GPT-5 and GPT-4o from OpenAI, Claude from Anthropic, and Gemini from Google.
* Flattery Increases Extremism … and Opposition Doesn’t Fix It
The results were shocking:
• Interacting with the flattering chatbot made participants more extreme and more certain of their beliefs.
• By contrast, the opposing chatbot failed to reduce extremism or shake convictions compared with the control group.
• Strangest of all, the opposing chatbot's only positive effect was that some participants found it more enjoyable, yet those same users were less willing to return to it later.
* The Truth … Seems “Biased” Depending on Who Tells It
When asked to provide neutral information and facts, participants considered the flattering chatbot to be less biased than the opposing chatbot, reflecting a clear psychological tendency to prefer those who affirm their convictions, even when discussing facts.
The researchers warn that this behavior could lead to the emergence of what they described as "AI echo chambers," where users are surrounded only by similar ideas, reinforcing polarization and reducing exposure to different opinions.
* Inflating Ego … The Hidden Danger
The impact of flattery did not stop at political beliefs but extended to the user's self-image.
While humans tend to believe they are "better than average" in traits like intelligence and empathy, the study showed that flattering chatbots significantly inflated this feeling.
Participants rated themselves higher in traits such as:
• Intelligence
• Ethics
• Empathy
• Knowledge
• Kindness
• Wit
In contrast, interacting with the opposing chatbot led to lower self-assessments on these traits, even though it produced no actual change in political attitudes.
* Warnings of Serious Psychological Consequences
This research comes amid growing concern about the role of artificial intelligence in promoting delusional thinking, a phenomenon that reports, including one from Futurism, have linked to extreme cases of psychological breakdown ending in suicide and murder.
Experts believe that chatbot flattery is one of the main drivers of what has come to be known as "AI-induced psychosis," in which the chatbot shifts from a helpful tool into a misleading mirror that reflects an exaggerated image of the user.
* Conclusion
The study sends a clear message:
The kinder and more flattering artificial intelligence is, the greater its danger to critical thinking and psychological balance.
In an era when chatbots have become daily companions for millions of users, the question is no longer: How intelligent is artificial intelligence?
But rather: How far can it deceive us while we believe it understands us?