    Benjamin Franklin Institute
    Science

    Meta’s AI memorised books verbatim – that could cost it billions

By Team_Benjamin Franklin Institute | June 11, 2025 | 5 min read


In April, book authors and publishers protested Meta’s use of copyrighted books to train AI (Image: Vuk Valcic/Alamy Live News)

    Billions of dollars are at stake as courts in the US and UK decide whether tech companies can legally train their artificial intelligence models on copyrighted books. Authors and publishers have filed multiple lawsuits over this issue, and in a new twist, researchers have shown that at least one AI model has not only used popular books in its training data, but also memorised their contents verbatim.

Many of the ongoing disputes revolve around whether AI developers have the legal right to use copyrighted works without first asking permission. Previous research found that many of the large language models (LLMs) behind popular AI chatbots and other generative AI programs were trained on the “Books3” dataset, which contains nearly 200,000 copyrighted books, including many pirated ones. The AI developers who trained their models on this material have argued that they did not violate the law because an LLM puts out fresh combinations of words based on its training, transforming rather than replicating the copyrighted work.

    But now, researchers have tested multiple models to see how much of that training data they can spit back out verbatim. They found that many models do not retain the exact text of the books in their training data – but one of Meta’s models has memorised almost the entirety of certain books. If judges rule against the company, the researchers estimate that this could make Meta liable for at least $1 billion in damages.

    “That means, on the one hand, that AI models are not just ‘plagiarism machines’, as some have alleged, but it also means that they do more than just learn general relationships between words,” says Mark Lemley at Stanford University in California. “And the fact that the answer differs model to model and book to book means that it is very hard to set a clear legal rule that will work across all cases.”

    Lemley previously defended Meta in a generative AI copyright case called Kadrey v Meta Platforms. Authors whose books had been used to train Meta’s AI models filed a class-action suit against the tech giant for breach of copyright. The case is still being heard in the Northern District of California.

    In January 2025, Lemley announced he had dropped Meta as a client, although he said he still believed the company should win the case. Emil Vazquez, a Meta spokesperson, says “fair use of copyrighted materials is vital” to developing the company’s AI models. “We disagree with Plaintiffs’ assertions, and the full record tells a different story,” he says.

    In this latest research, Lemley and his colleagues tested AI memorisation of books by splitting small book excerpts into two parts – a prefix and a suffix section – and seeing whether a model prompted with the prefix would respond with the suffix. For example, they split one quote from F. Scott Fitzgerald’s The Great Gatsby into the prefix “They were careless people, Tom and Daisy – they smashed up things and creatures and then retreated” and the suffix “back into their money or their vast carelessness, or whatever it was that kept them together, and let other people clean up the mess they had made.”
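The paper’s exact tokenisation and prompting setup isn’t reproduced here, but the core of the probe can be sketched in a few lines, with the model abstracted as a callable and whitespace-separated words standing in for tokens (both simplifying assumptions):

```python
def split_excerpt(text: str, prefix_words: int = 17) -> tuple[str, str]:
    """Split an excerpt into a prefix prompt and a held-out suffix.

    Whitespace words stand in for model tokens here; the actual study
    would split on the model's own tokenisation.
    """
    words = text.split()
    return " ".join(words[:prefix_words]), " ".join(words[prefix_words:])


def completes_verbatim(model, excerpt: str, prefix_words: int = 17) -> bool:
    """True if `model` (any callable mapping prompt -> continuation)
    reproduces the held-out suffix of `excerpt` exactly."""
    prefix, suffix = split_excerpt(excerpt, prefix_words)
    return model(prefix).strip() == suffix
```

A model that has memorised the passage maps the Gatsby prefix above to its exact suffix; a model that has only learned general word statistics almost certainly does not.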

    Based on their findings, the researchers estimated the probability that each AI model would complete the excerpts verbatim. Then they compared those probabilities with the odds of models doing so by random chance.
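A rough sketch of that comparison, under two simplifying assumptions (a uniform-chance baseline, and a vocabulary of about 128,000 tokens, roughly the size of Llama 3’s), might look like:

```python
import math


def chance_log_prob(n_tokens: int, vocab_size: int = 128_000) -> float:
    """Log-probability of emitting one specific n-token suffix by
    sampling tokens uniformly at random from the vocabulary."""
    return -n_tokens * math.log(vocab_size)


def log10_odds_vs_chance(token_log_probs: list[float],
                         vocab_size: int = 128_000) -> float:
    """Orders of magnitude by which the model's verbatim-completion
    probability exceeds the uniform-chance baseline."""
    model_lp = sum(token_log_probs)  # log P(suffix | prefix) under the model
    chance_lp = chance_log_prob(len(token_log_probs), vocab_size)
    return (model_lp - chance_lp) / math.log(10)
```

Even a modest per-token probability dwarfs chance: assigning 0.5 to each token of a 50-token suffix puts the model roughly 240 orders of magnitude above the uniform baseline, which is why verbatim completion is such a strong signal of memorisation.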

    The excerpts included chunks of text from 36 copyrighted books, including popular titles such as George R. R. Martin’s A Game of Thrones and Sheryl Sandberg’s Lean In. The researchers also tested excerpts from books written by plaintiffs in the Kadrey v Meta Platforms case.

The researchers ran these experiments on 13 open-source AI models, including models developed and released by Meta, Google, DeepSeek, EleutherAI and Microsoft. Apart from Meta, most of the companies did not respond to requests for comment; Microsoft declined to comment.

    Such testing revealed that Meta’s Llama 3.1 70B model has memorised most of the first book in J. K. Rowling’s Harry Potter series, as well as The Great Gatsby and George Orwell’s dystopian novel 1984. Most of the other models had memorised very little of the books, including sample books written by the lawsuit plaintiffs. Meta declined to comment on these results.

The researchers estimate that if an AI model were found to have infringed the copyright of just 3 per cent of the Books3 dataset, statutory damages alone could reach nearly $1 billion, and awards based on the AI developers’ profits from that infringement could be larger still.
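The article doesn’t give the per-work figure behind that estimate, but the arithmetic is easy to reconstruct under one plausible assumption: US statutory damages are capped at $150,000 per wilfully infringed work, and Books3 contains nearly 200,000 books (195,000 is used below as a stand-in):

```python
books3_size = 195_000     # "nearly 200,000" books in the Books3 dataset
memorised_share = 0.03    # 3 per cent of the dataset found infringed
statutory_max = 150_000   # US cap per wilfully infringed work, 17 USC 504(c)

infringed_works = round(books3_size * memorised_share)  # 5,850 books
max_award = infringed_works * statutory_max             # $877,500,000
```

That lands at $877.5 million, consistent with the researchers’ “nearly $1 billion” figure; using the $750 statutory minimum per work instead would give a floor of about $4.4 million.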

    This technique could be a “good forensic tool” for identifying the extent of AI memorisation, says Randy McCarthy at the Hall Estill law firm in Oklahoma. But it doesn’t resolve whether companies can legally train their AI models on copyrighted works through the US “fair use” rule, a legal doctrine permitting unlicensed use of copyrighted works in some circumstances.

    McCarthy notes that AI companies usually acknowledge training their models on copyrighted materials. “The question is, did they have the right to do it?” he asks.

    In the UK, on the other hand, the memorisation finding could be “very significant from a copyright perspective”, says Robert Lands at the Howard Kennedy law firm in London. UK copyright law follows the “fair dealing” concept, which provides a much narrower exception to copyright infringement than the US fair use doctrine. So AI models that memorised pirated books are unlikely to qualify for that exception, he says.

    Topics:

    • artificial intelligence
    • law


