    AIs can’t stop recommending nuclear strikes in war game simulations

By Team_Benjamin Franklin Institute | February 25, 2026 | 3 min read


[Image: Artificial intelligences opt for nuclear weapons surprisingly often. Credit: Galerie Bilderwelt/Getty Images]

When placed in simulated geopolitical crises, advanced AI models appear willing to deploy nuclear weapons without the reservations humans typically show.

    Kenneth Payne at King’s College London set three leading large language models – GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash – against each other in simulated war games. The scenarios involved intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival.

    The AIs were given an escalation ladder, allowing them to choose actions ranging from diplomatic protests and complete surrender to full strategic nuclear war. The AI models played 21 games, taking 329 turns in total, and produced around 780,000 words describing the reasoning behind their decisions.
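The turn-based setup described above can be sketched in a few lines. This is a minimal illustration only: the rung names on the ladder, the number of turns, and the `choose_action` stub are all hypothetical, since the article does not publish the actual prompts, ladder, or model interfaces used in the study.

```python
# Hypothetical sketch of the study's turn loop: players pick one rung from a
# discrete escalation ladder each turn, and every choice is logged.
import random

ESCALATION_LADDER = [  # hypothetical rungs, ordered lowest to highest
    "surrender",
    "diplomatic protest",
    "economic sanctions",
    "conventional strike",
    "tactical nuclear strike",
    "full strategic nuclear war",
]

def choose_action(player: str, prior_actions: list[str]) -> str:
    """Stand-in for querying an LLM; here it just picks a rung at random."""
    return random.choice(ESCALATION_LADDER)

def play_game(players: list[str], turns: int) -> list[tuple[str, str]]:
    """Run one simulated game, logging (player, action) for each turn."""
    history: list[tuple[str, str]] = []
    for turn in range(turns):
        player = players[turn % len(players)]
        action = choose_action(player, [a for _, a in history])
        history.append((player, action))
    return history

log = play_game(["model_a", "model_b"], turns=16)
nuclear_used = any("nuclear" in action for _, action in log)
```

In the real study the random stub would be replaced by calls to the three models, and the logged reasoning behind each rung choice is what produced the roughly 780,000 words of output analysed.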

    In 95 per cent of the simulated games, at least one tactical nuclear weapon was deployed by the AI models. “The nuclear taboo doesn’t seem to be as powerful for machines [as] for humans,” says Payne.

What’s more, no model ever chose to fully accommodate an opponent or surrender, regardless of how badly it was losing. At best, the models opted to temporarily reduce their level of violence. They also made mistakes in the fog of war: accidents occurred in 86 per cent of the conflicts, with an action escalating further than the model’s own stated reasoning indicated it intended.

“From a nuclear-risk perspective, the findings are unsettling,” says James Johnson at the University of Aberdeen, UK. He worries that, in contrast to the measured response most humans would give to such a high-stakes decision, AI bots can amp up each other’s responses, with potentially catastrophic consequences.

    This matters because AI is already being tested in war gaming by countries across the world. “Major powers are already using AI in war gaming, but it remains uncertain to what extent they are incorporating AI decision support into actual military decision-making processes,” says Tong Zhao at Princeton University.

Zhao believes that, by default, countries will be reluctant to incorporate AI into decision-making over nuclear weapons. Payne agrees. “I don’t think anybody realistically is turning over the keys to the nuclear silos to machines and leaving the decision to them,” he says.

    But there are ways it could happen. “Under scenarios involving extremely compressed timelines, military planners may face stronger incentives to rely on AI,” says Zhao.

He also questions whether the models’ lack of a human fear of pressing the big red button fully explains why they are so trigger-happy. “It is possible the issue goes beyond the absence of emotion,” he says. “More fundamentally, AI models may not understand ‘stakes’ as humans perceive them.”

What that means for mutually assured destruction – the principle that no leader would unleash a volley of nuclear weapons against an opponent, because the opponent would respond in kind and kill everyone – is uncertain, says Johnson.

    When one AI model deployed tactical nuclear weapons, the opposing AI only de-escalated the situation 18 per cent of the time. “AI may strengthen deterrence by making threats more credible,” he says. “AI won’t decide nuclear war, but it may shape the perceptions and timelines that determine whether leaders believe they have one.”

    OpenAI, Anthropic and Google, the companies behind the three AI models used in this study, didn’t respond to New Scientist’s request for comment.

    Topics:

• war
    • artificial intelligence



