LLM: Hallucination or Intelligence?

Tags: ai, llm, software-engineering, productivity
Author: Erik Lundevall-Zara

Published: November 19, 2025

Large language models (LLMs) and AI agents are becoming part of—or already are part of—the developer’s toolkit. Along with these new capabilities come misconceptions and inflated expectations. Birgitta Böckeler from Thoughtworks put it well, essentially:

If the LLM gives a result we do not want, we call it hallucination. If the LLM gives a result we want, we call it intelligence.

This observation highlights an important aspect of how LLMs work. When you use an LLM, you get non-deterministic results. That is a feature: it helps language seem more natural and supports exploring different approaches to a problem. If we made the output completely deterministic, we would also lose some of what makes these models so attractive.
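The non-determinism comes from how a model picks each next token: instead of always choosing the highest-scoring candidate, it samples from a probability distribution, typically shaped by a temperature parameter. A minimal sketch with toy scores (not a real model's decoder):

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Pick a token index from raw scores (logits).

    temperature == 0: always take the highest-scoring token
    (greedy decoding), which is fully deterministic.
    temperature > 0: scale the logits and sample from the softmax
    distribution, so repeated calls can return different tokens.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Toy scores for three candidate next tokens
logits = [2.0, 1.5, 0.1]

# Greedy decoding always picks token 0
assert all(sample_token(logits, 0) == 0 for _ in range(10))

# Sampling varies between calls: typically more than one distinct token
samples = {sample_token(logits, 1.0) for _ in range(200)}
```

The same input can yield different outputs run to run, which is exactly the behavior we label "hallucination" when we dislike the result and "intelligence" when we like it.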

Many enthusiasts claim that programming jobs will disappear because agents with LLMs can generate code much faster than humans can. However, speed of code generation was never really what caused problems in software projects in the first place.

Instead, we struggle with unclear and ambiguous descriptions and requirements for what needs to be built, along with poor communication between those building the solutions and those who will use them. As developers, we learn what the problem really is while trying to solve it. The same applies to those who use the solutions we build—they learn what they actually want as the solution emerges.

LLMs don’t magically produce the results we want if we still have the same deficiencies in requirements and descriptions and lack the necessary feedback loops. We humans also hallucinate—most of us don’t have perfect memory that recalls every detail, so we fill in our memory gaps as best we can, and that can be wrong too, perhaps just with different patterns than an LLM.

Just as we give ourselves confidence and security in software development through established practices—such as automated testing, code review, and continuous integration—we can also build greater confidence in what agents and LLMs accomplish and ensure we catch potential problems.
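One of those established practices, automated testing, transfers directly: before trusting code from an agent, run it against test cases we write ourselves. A minimal sketch, where `generated_slugify` is a hypothetical stand-in for LLM-generated code and `check_generated_code` is an illustrative helper, not a real library:

```python
def generated_slugify(title):
    # Hypothetical stand-in for a function an LLM generated
    return "-".join(title.lower().split())

def check_generated_code(fn, cases):
    """Run a generated function against human-written test cases
    and collect failures instead of trusting the output blindly."""
    failures = []
    for args, expected in cases:
        try:
            actual = fn(*args)
        except Exception as exc:  # generated code may crash outright
            failures.append((args, expected, repr(exc)))
            continue
        if actual != expected:
            failures.append((args, expected, actual))
    return failures

cases = [
    (("Hello World",), "hello-world"),
    # This one fails: the generated code never strips punctuation
    (("LLM: Hallucination or Intelligence?",),
     "llm-hallucination-or-intelligence"),
]
failures = check_generated_code(generated_slugify, cases)
assert len(failures) == 1  # the punctuation gap is caught, not shipped
```

The tests do not make the generation deterministic; they give us a feedback loop that catches the cases where the output looked plausible but was wrong.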

So regardless of whether you embrace LLMs and agents early or wait, using sound, established development methodology can help you along the way.


Photo by Growtika on Unsplash