The GenAI Divide: Why AI Works for People but Not for Companies at Scale (Yet)
MIT made headlines everywhere with its claim that only “5% of generative AI pilots make it to production and deliver measurable enterprise value”.
Now, even if you doubt the validity of that statistic, as some have, one thing is undeniable: the ‘informal’ use of personal AI tools in organisations, also known as shadow AI, is booming while enterprise-led AI initiatives stall.
Across every industry, employees are using tools like ChatGPT, Claude and Gemini to write emails, craft campaigns, summarise research or build slides. They’ve quietly reinvented how they work.
This poses an obvious question:
If AI works so well for individuals, why is it still struggling to transform organisations?
The latest MIT State of AI in Business 2025 report calls this the GenAI Divide: the widening gap between personal productivity gains and enterprise-level impact.
In other words, everyone’s experimenting with AI, but almost no one is scaling it.
The paradox of progress
It’s easy to assume the difference lies in model quality or investment levels. But the truth is simpler, and more structural.
Generative AI works brilliantly for individuals because the cost of forgetting is low.
When you use ChatGPT, it doesn’t need to remember much. You provide the context; it gives you the output; you tweak and move on. The feedback loop is quick and cheap. If it forgets your tone or last week’s preferences, you just re-prompt it. You, the human, supply the memory, either directly in the prompt context window or through third-party tools like Fabric (in our VC portfolio), which lets consumers take their personal context with them across the internet.
Inside the enterprise, forgetting is a much bigger problem.
When an AI system forgets a compliance rule, it risks fines. When it forgets a customer’s history, it undermines trust. When it ignores internal feedback, it repeats the same mistake across thousands of users.
Fixing that isn’t a quick re-prompt; it’s a full process clean-up.
That’s why consumer AI thrives despite its amnesia, while enterprise AI stalls because of it.
Shadow AI succeeds because humans absorb the learning.
Enterprise AI fails because systems don’t.
The learning gap that holds business back
Most enterprise AI systems today don’t learn in any meaningful sense. They generate, but they don’t improve.
Each interaction begins with a blank slate. No memory, no feedback history, no adaptation.
MIT calls this the learning gap: the absence of feedback loops, context retention, and workflow integration that allow AI systems to evolve.
McKinsey’s State of AI 2025 study echoes the finding. Nearly 80% of firms now use AI in some capacity, yet fewer than 20% can show bottom-line impact. The strongest driver of success? Workflow redesign.
AI isn’t failing because it’s weak; it’s failing because we’re forcing it into old processes that were never designed for learning.
Why old workflows kill new intelligence
Traditional enterprise workflows are deterministic: a sequence of fixed inputs, rules, and approvals.
Generative AI thrives in the opposite environment: ambiguity, iteration, and feedback. When we drop AI into a rigid structure, one of two things happens:
The process breaks. The AI can’t adapt, so errors or risks cascade.
The AI is neutered. Governance and guardrails strip away its ability to explore or improve.
Either way, the learning loop never closes.
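To make the contrast concrete, here is a minimal, hypothetical sketch of an open loop versus a closed one. `call_model` is an illustrative stand-in for any LLM API call, not a real library function:

```python
# Minimal, hypothetical contrast between an open and a closed loop.
# `call_model` stands in for any LLM API; all names are illustrative.

def call_model(prompt: str) -> str:
    return f"[model output for: {prompt[:40]}...]"

# Open loop: every request starts from a blank slate, so the same
# mistake can repeat across thousands of users.
def open_loop(request: str) -> str:
    return call_model(request)

# Closed loop: corrections accumulate and are injected into every
# subsequent request, so the system stops repeating known mistakes.
corrections: list[str] = []

def record_correction(note: str) -> None:
    corrections.append(note)

def closed_loop(request: str) -> str:
    context = "\n".join(f"Known correction: {c}" for c in corrections)
    return call_model(f"{context}\n\n{request}")

record_correction("Never quote prices in USD; this client bills in GBP.")
print(closed_loop("Draft the renewal quote for Acme Ltd."))
```

The closed-loop version is almost trivially simple, yet it is the shape most enterprise deployments lack: nothing in the stack carries today’s correction into tomorrow’s request.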
Meanwhile, employees using personal AI tools have unconsciously redesigned their micro-workflows. They iterate, adjust, and feed their own feedback back in. Every prompt is a tiny experiment. That fluidity is exactly what enterprise processes need, but rarely allow.
Redesigning workflows for learning
The companies crossing the GenAI Divide start with process redesign, not model selection.
Rather than asking, “Where can we add AI?”, they ask, “Which parts of our workflow need to be rebuilt so AI can learn?”
Some principles are emerging, each illustrated with a brief code sketch after the list:
Feedback-as-a-feature
Build feedback directly into tools. Every “approve,” “edit,” or “reject” action becomes data that teaches the system. Platforms like Humanloop and Contextual AI now make this frictionless.
Persistent memory layers
Use retrieval-augmented architectures (such as LlamaIndex and Haystack) to store policies, preferences, and corrections so the system remembers across sessions.
Human-in-the-loop checkpoints
Let AI handle the repeatable 80%, but keep humans in the loop for exceptions, and feed those corrections back in. Platforms like gotoHuman and Humans in the Loop make continuous evaluation and learning operational at scale.
Role-weighted learning
Not all feedback is equal. Expert input, say from compliance or brand teams, should carry more weight in model refinement than general user edits.
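First, feedback-as-a-feature. As a rough illustration of the pattern (the schema and names below are hypothetical, not any specific platform’s API), every review action becomes a labelled example the system can learn from:

```python
# Hypothetical sketch: logging review actions as training signal.
import json
import time
from dataclasses import asdict, dataclass

@dataclass
class FeedbackEvent:
    draft_id: str        # the AI output under review
    action: str          # "approve", "edit", or "reject"
    original: str        # what the model produced
    final: str           # what the human actually shipped
    reviewer_role: str   # e.g. "compliance", "brand", "general"
    timestamp: float

def record_feedback(event: FeedbackEvent, path: str = "feedback_log.jsonl") -> None:
    """Append every review action to a log the system can learn from."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(event)) + "\n")

# Each "edit" yields an (original -> final) pair, usable later for
# few-shot prompting, evaluation sets, or fine-tuning data.
record_feedback(FeedbackEvent(
    draft_id="email-042",
    action="edit",
    original="Hi! Thx for reaching out...",
    final="Dear Ms Patel, thank you for contacting us...",
    reviewer_role="brand",
    timestamp=time.time(),
))
```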
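Second, persistent memory layers. Here is a minimal sketch using LlamaIndex’s core API, assuming a recent llama-index release and a configured embedding/LLM backend (by default, an OpenAI key in the environment):

```python
# Sketch of a persistent memory layer: corrections stored today are
# retrievable context in every later session.
import os
from llama_index.core import (
    Document,
    StorageContext,
    VectorStoreIndex,
    load_index_from_storage,
)

PERSIST_DIR = "./enterprise_memory"  # illustrative location

# Reload the accumulated memory if it exists; otherwise seed it.
if os.path.exists(PERSIST_DIR):
    storage = StorageContext.from_defaults(persist_dir=PERSIST_DIR)
    index = load_index_from_storage(storage)
else:
    index = VectorStoreIndex.from_documents([
        Document(text="Refund policy v1: refunds under £500 are self-serve."),
    ])

# A correction captured today becomes part of tomorrow's context.
index.insert(Document(text=(
    "Policy correction: refunds over £500 now require "
    "sign-off from the finance team."
)))
index.storage_context.persist(persist_dir=PERSIST_DIR)

# Later sessions query against everything the system has retained.
print(index.as_query_engine().query("Who approves a £700 refund?"))
```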
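Third, human-in-the-loop checkpoints. The routing function below is a hypothetical sketch: `generate_draft` and `human_review` are illustrative stand-ins for your model call and review UI, not a real platform’s API:

```python
# Hypothetical sketch: auto-handle high-confidence cases, escalate
# exceptions to a human, and log every correction as training signal.
import random

CONFIDENCE_THRESHOLD = 0.8  # illustrative; tune per workflow

def generate_draft(request: str) -> tuple[str, float]:
    """Stand-in for a model call returning (draft, confidence)."""
    return f"Draft reply to: {request}", random.random()

def human_review(request: str, draft: str) -> str:
    """Stand-in for a review UI where an expert edits the draft."""
    return draft + " [revised by reviewer]"

def log_event(action: str, original: str, final: str) -> None:
    """Stand-in for the feedback log that closes the learning loop."""
    print(f"{action}: {original!r} -> {final!r}")

def handle(request: str) -> str:
    draft, confidence = generate_draft(request)
    if confidence >= CONFIDENCE_THRESHOLD:
        log_event("auto_approve", draft, draft)   # the repeatable 80%
        return draft
    corrected = human_review(request, draft)      # the exceptional 20%
    log_event("edit", draft, corrected)           # correction flows back in
    return corrected

print(handle("Customer asks about a delayed refund"))
```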
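Finally, role-weighted learning. The weights below are illustrative assumptions, not prescribed values; the point is simply that an expert rejection can outvote several general approvals when deciding what the system should learn:

```python
# Hypothetical sketch: weighting feedback by reviewer authority
# before deciding whether an output becomes a learning exemplar.
ROLE_WEIGHTS = {
    "compliance": 3.0,  # authoritative on rules
    "brand": 2.0,       # authoritative on tone
    "general": 1.0,     # useful, but lower authority
}

def weighted_score(events: list[dict]) -> float:
    """Score a candidate output by who approved or rejected it."""
    score = 0.0
    for e in events:
        weight = ROLE_WEIGHTS.get(e["reviewer_role"], 1.0)
        score += weight if e["action"] == "approve" else -weight
    return score

events = [
    {"reviewer_role": "general", "action": "approve"},
    {"reviewer_role": "general", "action": "approve"},
    {"reviewer_role": "compliance", "action": "reject"},
]
# Two general approvals (+2.0) against one compliance rejection (-3.0):
# the expert veto wins, so this output is not promoted.
print(weighted_score(events))  # -1.0
```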
Together, these mechanisms turn a static tool into a self-improving system.
The Takeaway
The next wave of AI transformation won’t be decided by who has the biggest model or the deepest pockets. It will be decided by who can make their systems – and processes – learn.
To cross the GenAI Divide, organisations need to do two things at once:
Redesign workflows for AI. Make them iterative, flexible, and data-rich.
Embed learning loops within those workflows. Ensure every correction, decision, and interaction improves the next one.
This is how enterprises can unlock the same adaptability and impact that’s driven the success of ‘shadow AI’ and, ultimately, realise the trillion-dollar AI opportunity.
MIT NANDA, “The GenAI Divide: State of AI in Business 2025” (August 2025)
McKinsey & Company, “The State of AI: How Organisations Are Rewiring to Capture Value” (2025)