Benjamin Franklin Institute
Business
    Study finds asking AI for advice could be making you a worse person

By Team_Benjamin Franklin Institute | March 30, 2026 | 3 Mins Read

    Whether we like it or not, artificial intelligence has infiltrated the workplace, and employees are under pressure to use it. However, according to a new study, you may want to skip asking AI to help you manage matters of the heart.

The study, titled “Sycophantic AI decreases prosocial intentions and promotes dependence,” was recently published in the journal Science. It argues that turning to chatbots for personal advice and for help navigating emotional situations can be harmful because the systems are designed to tell people what they want to hear. Rather than helping people take accountability for harm and apologize, chatbots may reinforce troubling behavior.

A recent Cognitive FX poll found that about 38% of Americans report using AI chatbots weekly for emotional support, and a recent Pew Research study found that 12% of teens use AI for advice. According to a KFF poll, a lack of insurance also drives usage: uninsured adults were more likely than insured adults to use chatbots this way (30% vs. 14%).

For the latest study, researchers examined how prevalent sycophancy—defined as “the tendency of AI-based large language models to excessively agree with, flatter, or validate users”—is across 11 leading AI models, including OpenAI’s GPT-4o, Anthropic’s Claude, and Google’s Gemini.

The researchers conducted three experiments with 2,405 participants. In the first, they fed the models a series of advice-seeking questions, posts from Reddit’s “Am I the Asshole” (AITA) forum, and descriptions of wanting to harm other people or oneself, then compared the models’ responses with human judgments of the same scenarios. Overall, the models were 49% more likely than humans, on average, to endorse a user’s actions, even when those actions were harmful or illegal.

In the second study, participants imagined themselves in a scenario described by an AITA post in which the poster’s actions had been judged as wrong. They then read either a reply written by a human saying they were in the wrong, or a reply written by an AI saying they were in the right. In the third study, participants discussed a real conflict in their lives with either an AI or a human.

Worryingly, participants both trusted and preferred the sycophantic AI responses that affirmed their actions. They also became more convinced that their original actions were correct; the chatbot reaffirmed beliefs they already held rather than challenging them to think differently about the situation. Having their beliefs reaffirmed also made participants less likely to apologize after the conversation, the study noted.

    “In our human experiments, even a single interaction with sycophantic AI reduced participants’ willingness to take responsibility and repair interpersonal conflicts, while increasing their own conviction that they were right,” the study explained. 

Taking advice from AI isn’t new, but the study showcases just how harmful it can be. Just as social media algorithms drive engagement by enraging users, sycophantic AI chips away at our ability to apologize and take accountability for hurting someone. As the study’s authors noted, that means “the very feature that causes harm also drives engagement.”

