Why does AI empower individuals but stumble when scaling across organisations? In our third blog on Large Language Models, Advisory Principal George Brain unpacks what MIT calls the "GenAI divide."

The GenAI Divide: Why AI Works for People but Not for Companies at Scale (Yet)

Insight / 7 Nov 2025

MIT made headlines everywhere with their claim that only “5% of generative AI pilots make it to production and deliver measurable enterprise value”.
 
Now, even if you doubt the validity of that statistic, as some have, one thing is undeniable: the ‘informal’ use of personal AI tools in organisations, also known as shadow AI, is booming compared with enterprise-led AI initiatives. 

Across every industry, employees are using tools like ChatGPT, Claude and Gemini to write emails, craft campaigns, summarise research or build slides. They’ve quietly reinvented how they work. 
 
This poses an obvious question: 

If AI works so well for individuals, why is it still struggling to transform organisations? 

The latest MIT State of AI in Business 2025 report calls this the GenAI Divide, the widening gap between personal productivity gains and enterprise-level impact. 
 
In other words, everyone’s experimenting with AI, but almost no one is scaling it. 

The paradox of progress 

It’s easy to assume the difference lies in model quality or investment levels. But the truth is simpler and more structural.

Generative AI works brilliantly for individuals because the cost of forgetting is low. 

When you use ChatGPT, it doesn’t need to remember much. You provide the context, it gives you the output, and you tweak and move on. The feedback loop is quick and cheap. If it forgets your tone or last week’s preferences, you just re-prompt it. You, the human, supply the memory – either directly in the prompt’s context window, or through third-party tools like Fabric (in our VC portfolio), which allows consumers to take their personal context with them across the internet. 

Inside the enterprise, forgetting is a much bigger problem. 

When an AI system forgets a compliance rule, it risks fines. When it forgets a customer’s history, it undermines trust. When it ignores internal feedback, it repeats the same mistake across thousands of users. 
 
Fixing that isn’t a quick re-prompt; it’s a full process clean-up. 

That’s why consumer AI thrives despite its amnesia, while enterprise AI stalls because of it. 

Shadow AI succeeds because humans absorb the learning. 
Enterprise AI fails because systems don’t. 

The learning gap that holds business back 

Most enterprise AI systems today don’t learn in any meaningful sense. They generate, but they don’t improve. 

Each interaction begins with a blank slate. No memory, no feedback history, no adaptation. 

MIT calls this the learning gap: the absence of feedback loops, context retention, and workflow integration that allow AI systems to evolve. 

McKinsey’s State of AI 2025 study echoes the finding. Nearly 80% of firms now use AI in some capacity, yet fewer than 20% can show bottom-line impact. The strongest driver of success? Workflow redesign. 

AI isn’t failing because it’s weak; it’s failing because we’re forcing it into old processes that were never designed for learning. 

Why old workflows kill new intelligence 

Traditional enterprise workflows are deterministic: a sequence of fixed inputs, rules, and approvals. 
Generative AI thrives in the opposite environment: ambiguity, iteration, and feedback. When we drop AI into a rigid structure, one of two things happens: 

  1. The process breaks. The AI can’t adapt, so errors or risks cascade. 

  2. The AI is neutered. Governance and guardrails strip away its ability to explore or improve. 

Either way, the learning loop never closes. 

Meanwhile, employees using personal AI tools have unconsciously redesigned their micro-workflows. They iterate, adjust, and feed their own feedback back in. Every prompt is a tiny experiment. That fluidity is exactly what enterprise processes need, but rarely allow. 

Redesigning workflows for learning 

The companies crossing the GenAI Divide start with process redesign, not model selection. 

Rather than asking, “Where can we add AI?”, they ask, “Which parts of our workflow need to be rebuilt so AI can learn?” 

Some principles are emerging (a sketch of how they might fit together in code follows this list): 

  1. Feedback-as-a-feature 
    Build feedback directly into tools. Every “approve,” “edit,” or “reject” action becomes data that teaches the system. Platforms like Humanloop and Contextual AI now make this frictionless. 

  2. Persistent memory layers 
    Use retrieval-augmented architectures (such as LlamaIndex and Haystack) to store policies, preferences, and corrections so the system remembers across sessions. 

  3. Human-in-the-loop checkpoints 
    Let AI handle the repeatable 80%, but keep humans in the loop for exceptions — and feed those corrections back in. Platforms like gotoHuman and Humans in the Loop make continuous evaluation and learning operational at scale. 

  4. Role-weighted learning 
    Not all feedback is equal. Expert input – say, from compliance or brand teams – should carry more weight in model refinement than general user edits. 

Together, these mechanisms turn a static tool into a self-improving system. 
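
To make that concrete, here is a minimal, hypothetical sketch of these mechanisms working together. The names (FeedbackEvent, MemoryLayer, ROLE_WEIGHTS, handle_feedback) are illustrative assumptions, not the API of any of the platforms mentioned above.

```python
# An illustrative sketch of the learning-loop mechanisms described above.
# All class and function names are hypothetical, chosen for clarity only.

from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class FeedbackEvent:
    """One 'approve', 'edit', or 'reject' action captured from the workflow (principle 1)."""
    output_id: str
    action: str                  # "approve" | "edit" | "reject"
    reviewer_role: str           # e.g. "compliance", "brand", "general"
    correction: Optional[str] = None


# Role-weighted learning (principle 4): expert feedback counts more than general edits.
ROLE_WEIGHTS: Dict[str, float] = {"compliance": 3.0, "brand": 2.0, "general": 1.0}


@dataclass
class MemoryLayer:
    """Persistent memory (principle 2): corrections are stored and retrieved across sessions."""
    records: List[Dict] = field(default_factory=list)

    def store(self, event: FeedbackEvent, weight: float) -> None:
        self.records.append({"event": event, "weight": weight})

    def retrieve_context(self, limit: int = 5) -> List[str]:
        # In practice this would be a retrieval-augmented lookup (e.g. a vector store);
        # here we simply return the most heavily weighted stored corrections.
        ranked = sorted(self.records, key=lambda r: r["weight"], reverse=True)
        return [r["event"].correction for r in ranked[:limit] if r["event"].correction]


def handle_feedback(event: FeedbackEvent, memory: MemoryLayer) -> None:
    """Close the loop: weight the feedback, persist it, and escalate exceptions (principle 3)."""
    weight = ROLE_WEIGHTS.get(event.reviewer_role, 1.0)
    memory.store(event, weight)
    if event.action == "reject":
        # Human-in-the-loop checkpoint: exceptions go to a person, and their
        # correction is fed back into the same memory layer.
        print(f"Escalating {event.output_id} for human review")


if __name__ == "__main__":
    memory = MemoryLayer()
    handle_feedback(FeedbackEvent("draft-42", "edit", "compliance",
                                  "Always include the required risk disclaimer."), memory)
    handle_feedback(FeedbackEvent("draft-43", "reject", "general"), memory)
    # The retrieved corrections would be prepended to the next generation request.
    print(memory.retrieve_context())
```

In a real deployment the memory layer would sit on a retrieval-augmented store and the retrieved corrections would be injected into the next prompt; the point is simply that every approve, edit, or reject ends up somewhere the system can find again, rather than evaporating at the end of the session.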

The Takeaway 

The next wave of AI transformation won’t be decided by who has the biggest model or the deepest pockets. It will be decided by who can make their systems – and processes – learn. 

To cross the GenAI Divide, organisations need to do two things at once:  

  1. Redesign workflows for AI. Make them iterative, flexible, and data-rich. 

  2. Embed learning loops within those workflows. Ensure every correction, decision, and interaction improves the next one. 

This is how enterprises can unlock the same adaptability and impact that’s driven the success of ‘shadow AI’ and, ultimately, realise the trillion-dollar AI opportunity. 

 

MIT NANDA, “The GenAI Divide: State of AI in Business 2025” (August 2025) 

McKinsey & Company, “The State of AI: How Organisations Are Rewiring to Capture Value” (2025) 

 
