AI Won’t Reach AGI… But That Might Not Matter
- bernarddorenkamp
- Mar 2
- 2 min read

I was recently sent an article arguing that large language models are fundamentally limited. They predict the next word. They don’t truly “understand.” They hallucinate. Scaling alone won’t produce general intelligence.
There’s truth in that.
Transformers are probabilistic pattern learners. They lack grounded experience and intrinsic intent. Hallucinations are real engineering challenges. Scaling alone is unlikely to magically deliver AGI.
But I think the debate often misses something important.
The real question isn’t whether a standalone LLM becomes AGI.
It’s what happens when you stop looking at the model in isolation.
Over the past year, the shift hasn't just been in model size; it's been in architecture. We're now seeing systems that combine foundation models with tool use, memory layers, planning loops, retrieval, execution environments, and multi-agent coordination.
The inflection point isn’t that AI can generate text.
It’s that AI can act.
When a system can send emails, update CRM records, deploy code, book meetings, run simulations, interact with APIs, observe outcomes, and adjust, we move from static output to closed-loop systems.
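To make that loop concrete, here is a minimal sketch in Python. Every name in it (Goal, plan_step, call_tool) is illustrative, not any particular framework's API; the stubs stand in for an LLM planner and for external tools like email or a CRM.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    description: str
    done: bool = False

@dataclass
class AgentState:
    goal: Goal
    history: list = field(default_factory=list)  # observations collected so far

def plan_step(state: AgentState) -> str:
    """Stand-in for an LLM call that picks the next action from the goal and history."""
    return "send_followup_email" if not state.history else "update_crm_record"

def call_tool(action: str) -> str:
    """Stand-in for a real tool or API call (email, CRM, deployment, booking)."""
    return f"ok: {action} executed"

def run(state: AgentState, max_steps: int = 5) -> AgentState:
    for _ in range(max_steps):
        action = plan_step(state)          # plan
        observation = call_tool(action)    # act
        state.history.append(observation)  # observe
        if len(state.history) >= 2:        # adjust: toy stopping condition
            state.goal.done = True
            break
    return state

if __name__ == "__main__":
    final = run(AgentState(goal=Goal("follow up with new leads")))
    print(final.goal.done, final.history)
```

The point isn't the toy logic; it's the shape: the output of each step feeds back into the next decision, which is what makes the system closed-loop rather than a one-shot text generator.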
That may not be AGI philosophically.
But economically, it could be enough.
Multi-agent systems make this even more interesting. Instead of one monolithic model, you get specialised agents planning, executing, verifying, and critiquing each other. That mirrors how organisations operate. In some cases, collaboration between agents produces solutions that weren’t explicitly trained into any single model.
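A toy version of that planner/executor/critic split, again with purely illustrative function names standing in for separate models or agents:

```python
def planner(task: str) -> list[str]:
    """Break the task into steps (stand-in for a planning agent)."""
    return [f"research {task}", f"draft {task}", f"review {task}"]

def executor(step: str) -> str:
    """Carry out one step (stand-in for a tool-using agent)."""
    return f"result of '{step}'"

def critic(result: str) -> bool:
    """Accept or reject a result (stand-in for a verifying agent)."""
    return "review" not in result  # toy acceptance rule

def run(task: str) -> list[str]:
    accepted = []
    for step in planner(task):
        result = executor(step)
        if critic(result):
            accepted.append(result)
        else:
            accepted.append(executor(step + " (revised)"))  # retry once after critique
    return accepted

print(run("quarterly report"))
```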
The article I read argues scaling won’t fix structural limitations. I agree.
But hybrid systems - model + tools + agents + feedback loops - may close much of the functional gap without needing “true” AGI.
What does this mean for jobs?
I don’t believe AI replaces everyone. But I also don’t think this is a minor productivity boost.
Technology replaces tasks before professions. And much of knowledge work consists of structured, repeatable tasks: research, drafting, workflow execution, monitoring, iteration.
If agents can handle those reliably, leverage shifts. You don’t need 10 people executing processes. You might need 3 orchestrating intelligent systems.
High-trust, ambiguous, relationship-heavy roles remain resilient. Highly structured cognitive work is more exposed.
The AGI debate may be the wrong frame. It’s not “either AGI or hype.”
If hybrid agent systems become economically general enough, the philosophical distinction may not matter.
The real question isn’t whether AI thinks like a human.
It’s whether it can act effectively enough inside complex systems to reshape how organisations operate.
And on that front, the trajectory is becoming harder to ignore - even if the timeline remains uncertain.


