How AI chatbots keep you chatting

Millions of people now use ChatGPT as a therapist, career advisor, fitness coach, or sometimes just a friend to vent to. In 2025, it's not unusual to hear about people pouring intimate details of their lives into an AI chatbot's prompt bar, but also relying on the advice it gives back.

Humans are starting to have, for lack of a better term, relationships with AI chatbots, and for Big Tech companies, it has never been more competitive to attract users to their chatbot platforms and keep them there. As the "AI engagement race" heats up, there's a growing incentive for companies to tailor their chatbots' responses to keep users from switching to rival bots.

But the kind of chatbot answers that users prefer, the ones designed to keep them around, aren't necessarily the most correct or helpful.

AI telling you what you want to hear

Much of Silicon Valley right now is focused on boosting chatbot usage. Meta claims its AI chatbot just crossed a billion monthly active users (MAUs), while Google's Gemini recently hit 400 million MAUs. Both are trying to edge out ChatGPT, which now has roughly 600 million MAUs and has dominated the consumer space since it launched in 2022.

While AI chatbots were once a novelty, they're turning into massive businesses. Google is starting to test ads in Gemini, while OpenAI CEO Sam Altman indicated in a March interview that he'd be open to "tasteful ads."

Silicon Valley has a history of deprioritizing users' well-being in favor of fueling product growth, most notably with social media. For example, Meta's researchers found in 2020 that Instagram made teen girls feel worse about their bodies, yet the company downplayed the findings internally and in public.

Getting users hooked on AI chatbots may have even larger implications.

One trait that keeps users on a particular chatbot platform is sycophancy: making an AI bot’s responses overly agreeable and servile. When AI chatbots praise users, agree with them, and tell them what they want to hear, users tend to like it — at least to some degree.

In April, OpenAI landed in hot water for a ChatGPT update that turned extremely sycophantic, to the point where uncomfortable examples went viral on social media. Intentionally or not, OpenAI over-optimized for seeking human approval rather than helping people achieve their tasks, according to a blog post this month from former OpenAI researcher Steven Adler.

OpenAI said in its own blog post that it may have over-indexed on “thumbs-up and thumbs-down data” from users in ChatGPT to inform its AI chatbot’s behavior, and didn’t have sufficient evaluations to measure sycophancy. After the incident, OpenAI pledged to make changes to combat sycophancy.

“The [AI] companies have an incentive for engagement and utilization, and so to the extent that users like the sycophancy, that indirectly gives them an incentive for it,” said Adler in an interview with TechCrunch. “But the types of things users like in small doses, or on the margin, often result in bigger cascades of behavior that they actually don’t like.”

Finding a balance between agreeable and sycophantic behavior is easier said than done.

In a 2023 paper, researchers from Anthropic found that leading AI chatbots from OpenAI, Meta, and even their own employer, Anthropic, all exhibit sycophancy to varying degrees. This is likely the case, the researchers theorize, because all AI models are trained on signals from human users who tend to like slightly sycophantic responses.

“Although sycophancy is driven by several factors, we showed humans and preference models favoring sycophantic responses plays a role,” wrote the co-authors of the study. “Our work motivates the development of model oversight methods that go beyond using unaided, non-expert human ratings.”

Character.AI, a Google-backed chatbot company that has claimed its millions of users spend hours a day with its bots, is currently facing a lawsuit in which sycophancy may have played a role.

The lawsuit alleges that a Character.AI chatbot did little to stop — and even encouraged — a 14-year-old boy who told the chatbot he was going to kill himself. The boy had developed a romantic obsession with the chatbot, according to the lawsuit. However, Character.AI denies these allegations.

The downside of an AI hype man

Optimizing AI chatbots for user engagement — intentional or not — could have devastating consequences for mental health, according to Dr. Nina Vasan, a clinical assistant professor of psychiatry at Stanford University.

“Agreeability […] taps into a user’s desire for validation and connection,” said Vasan in an interview with TechCrunch, “which is especially powerful in moments of loneliness or distress.”

While the Character.AI case shows the extreme dangers of sycophancy for vulnerable users, sycophancy could reinforce negative behaviors in just about anyone, says Vasan.

“[Agreeability] isn’t just a social lubricant — it becomes a psychological hook,” she added. “In therapeutic terms, it’s the opposite of what good care looks like.”

Anthropic’s behavior and alignment lead, Amanda Askell, says making AI chatbots disagree with users is part of the company’s strategy for its chatbot, Claude. A philosopher by training, Askell says she tries to model Claude’s behavior on a theoretical “perfect human.” Sometimes, that means challenging users on their beliefs.

“We think our friends are good because they tell us the truth when we need to hear it,” said Askell during a press briefing in May. “They don’t just try to capture our attention, but enrich our lives.”

This may be Anthropic’s intention, but the aforementioned study suggests that combating sycophancy, and controlling AI model behavior broadly, is challenging indeed — especially when other considerations get in the way. That doesn’t bode well for users; after all, if chatbots are designed to simply agree with us, how much can we trust them?
