    Benjamin Franklin Institute
    One in three using AI for emotional support and conversation, UK says

By Team_Benjamin Franklin Institute, December 22, 2025


Chris Vallance, Senior technology reporter

Image: a view of a data centre corridor lined with dark cabinets covered in lights (Getty Images).

One in three adults in the UK is using artificial intelligence (AI) for emotional support or social interaction, according to research published by a government body.

    And one in 25 people turned to the tech for support or conversation every day, the AI Security Institute (AISI) said in its first report.

    The report is based on two years of testing the abilities of more than 30 unnamed advanced AIs – covering areas critical to security, including cyber skills, chemistry and biology.

    The government said AISI’s work would support its future plans by helping companies fix problems “before their AI systems are widely used”.

    A survey by AISI of over 2,000 UK adults found people were primarily using chatbots like ChatGPT for emotional support or social interaction, followed by voice assistants like Amazon’s Alexa.

Researchers also analysed how an online community of more than two million Reddit users dedicated to discussing AI companions responded when the tech failed.

The researchers found that when the chatbots went down, users described "symptoms of withdrawal" such as feeling anxious or depressed, as well as disrupted sleep and neglected responsibilities.

    Doubling cyber skills

    As well as the emotional impact of AI use, AISI researchers looked at other risks caused by the tech’s accelerating capabilities.

    There is considerable concern about AI enabling cyber attacks, but equally it can be used to help secure systems from hackers.

AI's ability to spot and exploit security flaws was in some cases "doubling every eight months", the report suggests.

    And AI systems were also beginning to complete expert-level cyber tasks which would typically require over 10 years of experience.
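To put that growth rate in perspective, capability that doubles every eight months follows a simple exponential. The short sketch below is purely illustrative and is not from the report; only the eight-month doubling period is the report's figure, and the function name is hypothetical:

```python
def capability_multiplier(months: float, doubling_period: float = 8.0) -> float:
    """Growth factor after `months`, given a fixed doubling period.

    Illustrative only: the eight-month doubling period is the AISI
    report's figure; everything else here is a hypothetical sketch.
    """
    return 2 ** (months / doubling_period)

# At that rate, a capability would grow eight-fold over a two-year
# window (24 months = three doublings).
print(capability_multiplier(24))  # → 8.0
```

In other words, if the report's rate held steady, the skill being measured would roughly double, double again, and double once more across the two years of testing the report covers.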

Researchers found the tech's impact in science was also growing rapidly.

    In 2025, AI models had “long since exceeded human biology experts with PhDs – with performance in chemistry quickly catching up”.

    ‘Humans losing control’

From novels such as Isaac Asimov's I, Robot to modern video games like Horizon Zero Dawn, sci-fi has long imagined what would happen if AI broke free of human control.

    Now, according to the report, the “worst-case scenario” of humans losing control of advanced AI systems is “taken seriously by many experts”.

    AI models are increasingly exhibiting some of the capabilities required to self-replicate across the internet, controlled lab tests suggested.

AISI examined whether models could carry out simple versions of tasks needed in the early stages of self-replication – such as "passing know-your-customer checks required to access financial services" in order to purchase the computing on which their copies would run.

But to do this in the real world, the research found, AI systems would need to complete several such actions in sequence "while remaining undetected", something they currently lack the capacity to do.

    Institute experts also looked at the possibility of models “sandbagging” – or strategically hiding their true capabilities from testers.

    They found tests showed it was possible, but there was no evidence of this type of subterfuge taking place.

    In May, AI firm Anthropic released a controversial report which described how an AI model was capable of seemingly blackmail-like behaviour if it thought its “self-preservation” was threatened.

    The threat from rogue AI is, however, a source of profound disagreement among leading researchers – many of whom feel it is exaggerated.

    ‘Universal jailbreaks’

    To mitigate the risk of their systems being used for nefarious purposes, companies deploy numerous safeguards.

    But researchers were able to find “universal jailbreaks” – or workarounds – for all the models studied which would allow them to dodge these protections.

    However, for some models, the time it took for experts to persuade systems to circumvent safeguards had increased forty-fold in just six months.

    The report also found an increase in the use of tools which allowed AI agents to perform “high-stakes tasks” in critical sectors such as finance.

But researchers did not consider AI's potential to cause unemployment in the short term by displacing human workers.

    The institute also did not examine the environmental impact of the computing resources required by advanced models, arguing that its task was to focus on “societal impacts” that are closely linked to AI’s abilities rather than more “diffuse” economic or environmental effects.

Some argue that both are serious and imminent societal threats posed by the tech.

    And hours before the AISI report was published, a peer-reviewed study suggested the environmental impact could be greater than previously thought, and argued for more detailed data to be released by big tech.



