AI Maturity: Less Code, More Focus
TL;DR
- GitHub reported 43M monthly pull requests in 2025 (+23%). AI multiplies programmers; it doesn’t replace them
- Johnson & Johnson cut their initiatives from 900 to “a handful”. Fewer experiments, more impact
- English is the new programming language: knowing what to ask matters more than knowing how to code
- Companies that win have “AI factories”, not 900 scattered pilots
Two news items this week that seem unrelated but tell the same story.
First: GitHub reported 43 million monthly pull requests in 2025, up 23% from the previous year. One billion commits total. AI isn’t replacing programmers; it’s multiplying them.
Second: Johnson & Johnson reduced their AI initiatives from 900 individual use cases to “a handful” of strategic projects. Not because AI doesn’t work, but because they learned where it works best.
Both point to the same thing: AI is maturing. And maturing means doing fewer things, but doing them well.
English is the new programming language
Sounds like a LinkedIn post, but there’s truth behind it.
For decades, the bottleneck in software development was writing code. You needed to know syntax, data structures, design patterns. Years of training. Technical knowledge was the barrier to entry.
With current tools (Cursor, Claude Code, Copilot, Codex), the bottleneck has shifted. It’s no longer writing code. It’s knowing what to ask.
The “repository intelligence” GitHub mentions allows AI to understand not just the code, but the context and history behind it. You tell it what you want to achieve, and it generates the code. You review, adjust, iterate.
The programmer shifts from writing to directing. From coding to articulating objectives.
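To make that loop concrete, here’s a minimal sketch: a plain-English request, the kind of function a tool like Copilot or Claude Code might return, and the bits a reviewer still has to catch. The prompt, function name, and data shape are invented for illustration; real output varies by tool and by the repository context it sees.

```python
from datetime import datetime

# The plain-English request (no syntax knowledge required):
#   "Given a list of customer records with 'email' and 'updated_at',
#    keep only the most recent record per email."
#
# The kind of function an assistant might return (illustrative only;
# actual output depends on the tool and the codebase it can see):

def dedupe_customers(customers: list[dict]) -> list[dict]:
    """Keep the most recently updated record per email address."""
    latest: dict[str, dict] = {}
    for record in customers:
        email = record["email"].strip().lower()  # reviewer's fix: normalize emails first
        current = latest.get(email)
        if current is None or record["updated_at"] > current["updated_at"]:
            latest[email] = record
    return list(latest.values())

# The review step is still human work: noticing that raw emails needed
# normalizing, deciding how ties are broken, adding a test or two.
customers = [
    {"email": "Ana@example.com", "updated_at": datetime(2025, 3, 1)},
    {"email": "ana@example.com", "updated_at": datetime(2025, 6, 1)},
]
print(dedupe_customers(customers))  # one record survives: the June update
```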
What this means in practice
For senior developers: your value is no longer in memorizing syntax. It’s in understanding architecture, making design decisions, and knowing what to ask. Experience is still crucial, but applied differently.
For juniors: the barrier to entry drops, but the quality bar rises. You can generate functional code quickly, but you need judgment to know if it’s good code. “It works” is no longer enough.
For non-programmers: if you can clearly explain what you need, you can build things that used to require a development team. You won’t build complex systems, but scripts, automations, and prototypes are within reach (see the sketch after this list).
For companies: development talent becomes more productive. A team of 5 can do what a team of 10 used to do. But you need that team of 5 with judgment, not 10 juniors generating code without understanding what they’re doing.
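What does “within reach” look like in practice? Something like this: a one-off script generated from a plain-language request. The file name and column names below are assumptions for the sake of the example, not taken from any real project.

```python
import csv
from collections import defaultdict

# A hypothetical request from someone who doesn't code:
#   "Go through expenses.csv and tell me how much we spent per category."
#
# The kind of throwaway script an assistant could produce. The file name
# and column names ('category', 'amount') are assumptions for illustration.

def spend_per_category(path: str) -> dict[str, float]:
    """Sum the 'amount' column of a CSV, grouped by 'category'."""
    totals: defaultdict[str, float] = defaultdict(float)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            totals[row["category"]] += float(row["amount"])
    return dict(totals)

if __name__ == "__main__":
    for category, total in sorted(spend_per_category("expenses.csv").items()):
        print(f"{category}: {total:,.2f}")
```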
From 900 use cases to a handful
Johnson & Johnson had 900 AI initiatives. Nine hundred. Sounds impressive in a PowerPoint. In practice, it was chaos.
What did they do? They stopped, evaluated, and kept the projects that actually moved the needle: supply chain, R&D, sales. Strategic use cases, not individual productivity experiments.
It’s not a failure. It’s maturity.
The pattern we’re seeing
Companies that adopted AI early went through predictable phases:
Phase 1: Uncontrolled experimentation. Everyone wants to “use AI”. Every department launches its own pilot. 900 use cases. None connected. None truly measured.
Phase 2: Partial disillusionment. Pilots don’t scale. The ROI doesn’t materialize. Some projects get abandoned. Skepticism sets in.
Phase 3: Consolidation. Someone with judgment says “enough”. Use cases that actually matter get prioritized. Real measurement begins. What works gets scaled.
J&J is in phase 3. Many companies are still stuck in phase 1 or 2.
Which use cases survive
Projects that pass the filter usually share a few characteristics:
Measurable business impact. Not “improves productivity”, but “reduces process time from 4 days to 4 hours” or “increases conversion by 15%”.
Real scale. Not a pilot with 10 users. Thousands of transactions, hundreds of users, impact on core operations.
Clear ownership. Someone is responsible. Budget is allocated. Success metrics are defined.
Strategic connection. Not “AI for AI’s sake”. It’s “AI to solve this problem that’s preventing us from growing”.
“AI factories”
BBVA, JPMorgan Chase, Procter & Gamble, Intuit… companies getting real value from AI have built what they call “AI factories”.
It’s not a marketing term. It’s real infrastructure: technology platforms, standardized methodologies, dedicated teams, data pipelines, clear governance.
The difference from scattered pilots is that a factory can produce. Not one experiment, but ten projects in parallel. Not one model, but a system of connected models. Not a demo, but real production with SLAs.
Building a factory requires serious investment. But once you have it, the marginal cost of each new use case drops dramatically.
The bottom line
AI isn’t in crisis. It’s maturing.
Maturing means:
- Fewer experiments, more production
- Fewer use cases, more impact
- Less manual code, more clear direction
- Less hype, more results
Companies that understand this will win. Those still chasing 900 simultaneous use cases will keep burning budget without results.
And professionals who understand this will thrive. Knowing how to direct AI (whether generating code or prioritizing projects) is the skill that separates those who ride this wave from those who drown in it.
2026 isn’t the year of hype. It’s the year of getting things right.
Keep exploring
- The uncomfortable truth about AI ROI - Why most projects don’t deliver
- Implementing AI in an SMB: the truth - No-BS guide for small businesses
- AI agents in enterprise: real ROI - When agents actually make sense