    Benjamin Franklin Institute

    The real reason so many enterprise AI initiatives are failing? LLMs were never built to run a company 

By Team_Benjamin Franklin Institute | April 21, 2026 | 6 min read

    When ChatGPT launched in November 2022, the reaction was immediate and visceral: this works. For the first time, millions of people experienced AI not as a distant promise, but as something useful, intuitive, and even with its flaws, astonishingly capable. 

    That instinct was correct. The conclusion that followed was not. Because what works brilliantly for an individual at a keyboard has proven surprisingly ineffective inside an organization. 

    More than three years later, after billions in investment, countless pilots, and an endless stream of “copilots,” a different reality is emerging: generative AI is exceptional at producing language. But companies do not run on language: they run on memory, context, feedback, and constraints. That’s the gap. And that’s why so many enterprise AI initiatives are quietly failing. 

    High adoption, low impact… and a growing sense of déjà vu 

    This is not a story about a technology that failed to gain traction. It’s the opposite. 

    A widely cited MIT-backed analysis found that around 95% of enterprise generative AI pilots fail to deliver meaningful results, with only about 5% making it to sustained production. Other coverage of the same findings points to the same pattern: massive experimentation, minimal transformation. 

    And the explanation is telling: the problem isn’t enthusiasm, or even capability. It’s that the tools don’t translate into real, operational change. 

    This is not an adoption problem. It’s an architecture problem. 

    The uncomfortable paradox: everyone uses AI, but nothing changes 

    Inside most companies today, two realities coexist: on one side, employees use tools like ChatGPT constantly. They draft, summarize, ideate, and accelerate their work in ways that feel natural and effective. 

    On the other, official enterprise AI initiatives struggle to scale beyond carefully controlled pilots. 

    The same MIT-related analysis describes a widening “learning gap”: individuals quickly find value, but organizations fail to integrate that value into workflows that matter. The result is something close to “shadow AI”: people use what works, while companies invest in what doesn’t. 

    That’s not resistance to change. 

    That’s a signal. 

    The core mistake: treating a language model like an operating system 

    Most explanations for this failure focus on execution: bad data, unclear use cases, lack of training. All true. All secondary. 

    The real issue is simpler and far more fundamental: large language models are designed to predict text. That’s it. Everything else, from reasoning to summarization to conversation, is an emergent property of that capability. 

    But companies do not operate as sequences of text. They operate as evolving systems with state, memory, dependencies, incentives, and constraints. 

    This is the mismatch. 

    As I’ve argued before, this is AI’s core architectural flaw: LLMs do not “see” the world. They do not maintain persistent state. They do not learn from real-world feedback unless explicitly engineered to do so. 

    They generate convincing language about reality. They do not operate within it. 

    You can’t run a company on predictions of words 

    This leads to a pattern that should feel familiar. 

    Ask an LLM to: 

    • “Increase my sales” 
    • “Design a go-to-market strategy” 
    • “Improve team performance” 

    And you will get an answer. Often a very good one: structured, articulate, and persuasive. And almost entirely disconnected from the actual system it is supposed to influence. 

    Because an LLM cannot track a pipeline, manage incentives, integrate CRM data, or adapt based on outcomes. It can describe a strategy. But it cannot execute one. 
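    The describe-versus-execute gap can be made concrete in a few lines of code. This is a minimal sketch, not anyone’s actual product: the `llm` function is a stub standing in for a real model, and `SalesPipeline` is a hypothetical system of record. The point is structural: a language model maps text to text with no side effects, while the thing it is asked to “improve” is mutable state the text never touches.

```python
def llm(prompt: str) -> str:
    """Stand-in for a language model: text in, text out, no side effects."""
    return f"Strategy memo for {prompt!r}: articulate, persuasive, and inert."


class SalesPipeline:
    """Hypothetical system of record, with state the model cannot see or change."""

    def __init__(self) -> None:
        self.open_deals = 12
        self.closed_deals = 0

    def close_deal(self) -> None:
        self.open_deals -= 1
        self.closed_deals += 1


pipeline = SalesPipeline()
memo = llm("Increase my sales")

# The model produced language *about* the pipeline...
assert "Strategy" in memo
# ...but the pipeline itself is untouched. Execution requires a separate
# actor that reads the state, takes actions, and observes the results.
assert pipeline.closed_deals == 0
```

    Nothing connects `memo` to `pipeline` except our hope that a human will read one and operate the other. That missing connection is what the pilots keep rediscovering.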

    The MIT findings reinforce this point: generative AI tools are effective for flexible, individual tasks, but break down in enterprise contexts where adaptation, learning, and integration are required. 

    In other words: an LLM can write the memo. But it cannot run the company. 

    Throwing more compute at the problem won’t fix it 

    The industry’s response so far has been predictable: build bigger models, deploy more infrastructure, scale everything. But scale does not fix a design flaw. If a system lacks grounding in reality, more parameters will not give it grounding. If it lacks memory, more tokens will not give it memory. If it lacks feedback loops, more data centers will not create them. 

    Scale amplifies what exists. It does not create what’s missing. And what’s missing here is not more language. It’s more world. 

    The next layer won’t be about better answers 

    The next phase of enterprise AI will not be defined by better chat interfaces or more powerful LLMs. It will be defined by something else entirely: systems that can maintain state, integrate into workflows, learn from outcomes, and operate under constraints. 

    Systems that don’t just generate text, but act within real environments. This is why the future of AI in companies will not be built on LLMs alone, but on architectures that embed them within richer models of reality. 

    Or, as I’ve argued in previous work, why world models are likely to become a foundational capability rather than a niche concept. 

    Saying what many already know… but rarely say 

    If this feels obvious, it’s because many people inside organizations already see it: they’ve run the pilots. They’ve seen the demos. They’ve experienced the gap. But saying it out loud is still uncomfortable. 

    There is too much momentum, too much investment, and too much narrative built around the idea that scaling LLMs will eventually solve everything. It won’t. 

    The emperor is not just underdressed. He is wearing the wrong clothes entirely. 

    The real opportunity 

    This is not the end of enterprise AI: it is the end of a misconception. Language models are not enterprise architecture: they are an interface layer. A powerful one, but insufficient on its own. 

    The companies that understand this first will not just deploy AI better: they will build something fundamentally different. 

    And when that happens, it will feel, once again, like magic. 

    But this time, it won’t be an illusion. 




