    China plans strict AI rules to protect children and tackle suicide risks

By Team_Benjamin Franklin Institute, December 30, 2025


Osmond Chia, Business reporter

Image: A young girl with a ponytail watches social media videos on a tri-fold mobile phone at Huawei's flagship store in Shanghai, China (Getty Images)

    China has proposed strict new rules for artificial intelligence (AI) to provide safeguards for children and prevent chatbots from offering advice that could lead to self-harm or violence.

    Under the planned regulations, developers will also need to ensure their AI models do not generate content that promotes gambling.

    The announcement comes after a surge in the number of chatbots being launched in China and around the world.

    Once finalised, the rules will apply to AI products and services in China, marking a major move to regulate the fast-growing technology, which has come under intense scrutiny over safety concerns this year.

The draft rules, published at the weekend by the Cyberspace Administration of China (CAC), include measures to protect children, such as requiring AI firms to offer personalised settings, impose time limits on usage, and obtain consent from guardians before providing emotional companionship services.

    Chatbot operators must have a human take over any conversation related to suicide or self-harm and immediately notify the user’s guardian or an emergency contact, the administration said.

    AI providers must ensure that their services do not generate or share “content that endangers national security, damages national honour and interests [or] undermines national unity”, the statement said.

    The CAC said it encourages the adoption of AI, such as to promote local culture and create tools for companionship for the elderly, provided that the technology is safe and reliable. It also called for feedback from the public.

    Chinese AI firm DeepSeek made headlines worldwide this year after it topped app download charts.

This month, two Chinese startups, Z.ai and Minimax, which together have tens of millions of users, announced plans to list on the stock market.

The technology has quickly gained huge numbers of subscribers, with some using it for companionship or therapy.

    The impact of AI on human behaviour has come under increased scrutiny in recent months.

    Sam Altman, the head of ChatGPT-maker OpenAI, said this year that the way chatbots respond to conversations related to self-harm is among the company’s most difficult problems.

    In August, a family in California sued OpenAI over the death of their 16-year-old son, alleging that ChatGPT encouraged him to take his own life. The lawsuit marked the first legal action accusing OpenAI of wrongful death.

This month, the company advertised for a "head of preparedness" who will be responsible for defending against the risks AI models pose to human mental health and cybersecurity.

    The successful candidate will be responsible for tracking AI risks that could pose a harm to people. Mr Altman said: “This will be a stressful job, and you’ll jump into the deep end pretty much immediately.”

    If you are suffering distress or despair and need support, you could speak to a health professional, or an organisation that offers support. Details of help available in many countries can be found at Befrienders Worldwide: www.befrienders.org.

    In the UK, a list of organisations that can help is available at bbc.co.uk/actionline. Readers in the US and Canada can call the 988 suicide helpline or visit its website.



