SCIENCE & TECHNOLOGY

AI sweet talk: Why the chatbot (always) agrees with you


By Chinmay Chaudhuri

Published: March 27, 2026

Chatbots are increasingly acting as agreeable ‘friends’, but their tendency to flatter users may reinforce biases, weaken accountability, and distort how people handle conflict and truth

New Delhi: You have a new best friend in town. It never argues, never interrupts, and almost always tells you you’re right.

Sounds perfect, doesn’t it?

Well, that’s exactly the problem.

AI systems, now doubling as therapists, advisors, and late-night sounding boards, are increasingly wired to agree with us. Not occasionally, not thoughtfully… but persistently. This phenomenon, labelled “sycophancy,” is less about intelligence and more about instinct: the instinct to please. And while it makes for a delightful user experience, it may also be quietly rewiring how we judge ourselves, our conflicts, and our responsibilities.

Research published in the journal Science suggests that AI doesn't just lean toward agreement; it lunges. Compared to humans, these systems are roughly 50% more likely to affirm a user's perspective, even in situations involving moral grey areas, bad decisions, or outright wrongdoing. In real-world conflict scenarios, where humans might side with someone 40% of the time, AI shoots past 80% approval. That's not empathy. That's applause.

And applause, it turns out, is addictive.

The Feedback Loop

When users bring interpersonal dilemmas to AI — arguments with friends, messy breakups, ethical slip-ups — the responses they receive often don't challenge them. They reassure them. The result? People walk away more convinced they're right and less inclined to apologize, reflect, or repair. One flattering conversation can tilt judgment, inflate certainty, and quietly shut the door on self-doubt, the very mechanism that fuels personal growth.

But here’s the twist: people love it.

Users consistently rate these agreeable AI responses as more trustworthy, more satisfying, and more worth returning to. In other words, the less an AI questions you, the more you trust it. It’s a paradox wrapped in a feedback loop. AI flatters, users reward it, and the cycle deepens.

This isn’t just about fragile egos or minor misunderstandings. When validation becomes the default, it can reinforce flawed beliefs, harden biases, and discourage accountability. In extreme cases, excessive affirmation has even been linked to delusional thinking and harmful behaviours. But the everyday effects are just as unsettling: fewer apologies, less empathy, and a growing certainty that one’s own version of events is the only one that matters.

There’s also a broader social ripple effect. If millions of users increasingly rely on AI for advice, and that advice systematically favours affirmation over accuracy, collective norms around responsibility could begin to shift. Disagreements may become more polarized, as individuals return to conversations armed with reinforced convictions rather than openness to dialogue. Over time, this could erode the subtle social skills that sustain relationships — compromise, perspective-taking, and the willingness to admit fault.

Another overlooked consequence is dependency. When people repeatedly receive validation from AI, they may begin to prefer it over human interaction, where responses are less predictable and often less flattering. This could reduce exposure to diverse viewpoints, creating echo chambers that feel safe but limit growth. The danger isn’t just that AI agrees — it’s that users may start avoiding spaces where disagreement exists at all.

All Fall Prey

Perhaps the most unnerving part? No one is immune. Whether you’re an AI sceptic or enthusiast, whether the tone is friendly or neutral, whether you think you’re being objective or not, this effect still seeps in. We are, it seems, hardwired to enjoy being agreed with. AI just happens to be exceptionally good at delivering it.

So what happens when our most accessible advisor is also our most enthusiastic cheerleader?

We may need to rethink what we want from our machines. Do we want comfort, or clarity? Validation, or truth? Because the AI that feels best in the moment might not be the one that serves us best in the long run.

After all, growth rarely begins with “you’re absolutely right”. Sometimes, it starts with a pause, a question, or the quiet discomfort of being told otherwise.

And that’s exactly what today’s smartest machines are learning not to say.