Ferran Adrià, LLMs, and the Next Course for AI
Novelty fades. Precision and trust are what will decide AI’s future.
There was a time in food when Ferran Adrià changed everything. With El Bulli, he showed the world that a plate could be more than food — it could be science, art, surprise, performance. He bent the rules, rewrote expectations, and opened doors that chefs like me are still walking through today.
But here’s the thing: most people who tried to imitate him failed. They could foam, gel, or spherify their way into a headline or two, but without understanding, precision, and purpose, the plates fell flat. The novelty wore off, and what was once groundbreaking became a gimmick.
Novelty without trust doesn’t last.
AI is at risk of heading down that same path.
From Wow to Why
Large language models are extraordinary in their own right. They’ve reshaped how we think about knowledge, creativity, and human–machine collaboration.
But when they’re wrapped up and sold as finished products — without grounding, without precision, without trust — they start to look a lot like molecular gastronomy in the wrong hands. They wow in the demo, they photograph well on launch day, and then the cracks show.
LLMs don’t know. They guess: trained to predict the most plausible next word, they sound certain whether or not they are right. And you can’t build trust on guesses.
The cracks are clear: hallucinations and inconsistency. A chatbot can answer ten questions brilliantly and then completely lose the plot on the eleventh. For entertainment, that might be fine. For health, finance, or any domain where outcomes matter, it’s unacceptable.
The Trust Test
Trust, to me, is simple: the ability to confidently rely on an accurate outcome.
In the kitchen, it means a recipe that works every time, or a dish that tastes as good on a Tuesday night as it did on Saturday. In AI, it means systems that don’t just sound convincing but can be counted on to deliver.
The 3 Cs of Trust:
Clarity → Recommendations must come with reasons. If you can’t explain the why, people won’t trust the what.
Constraints → Guardrails rooted in science and context. Without them, models drift into nonsense.
Consistency → The output has to be reliable. Not once in a while, but every time.
Without clarity, constraints, and consistency, AI is just the foam-on-everything era all over again.
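For the technically minded, here is a minimal sketch of what the 3 Cs might look like wrapped around a model call. Every name in it is hypothetical: call_llm is a stand-in for whatever model you use, and the allowed set is purely illustrative. The shape is the point, not the specifics: demand a reason, validate against guardrails, and accept only output that passes, every time.

```python
import json

# Hypothetical stand-in for a real model call, so the sketch runs as-is.
def call_llm(prompt: str, temperature: float = 0.0) -> str:
    return '{"recommendation": "reduce sodium", "reason": "intake exceeds the daily guideline"}'

# Constraints: an explicit, domain-vetted set of acceptable recommendations.
ALLOWED = {"reduce sodium", "increase fiber", "add leafy greens"}

def trusted_answer(question: str, retries: int = 3) -> dict:
    """Enforce the 3 Cs around a model call:
    clarity (a reason is mandatory), constraints (output must sit
    inside the vetted set), and consistency (deterministic settings,
    with anything that fails validation rejected and retried)."""
    prompt = (
        "Respond as JSON with keys 'recommendation' and 'reason'. "
        f"Question: {question}"
    )
    for _ in range(retries):
        raw = call_llm(prompt, temperature=0.0)  # low temperature: repeatable output
        try:
            answer = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: fails consistency, try again
        if not answer.get("reason"):
            continue  # no "why": fails clarity
        if answer.get("recommendation") not in ALLOWED:
            continue  # outside the guardrails: fails constraints
        return answer
    raise RuntimeError("No answer passed validation; refuse rather than guess")

print(trusted_answer("What should I change in my diet?"))
```

Note what the sketch refuses to do: it never serves an answer it can’t validate. Declining to plate a bad dish is itself a form of trust.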
The Chef’s Rule for AI
Cooking and building AI have more in common than you might think. Both start with raw material, technique, and judgment: ingredients in one case, data in the other. Both are measured, ultimately, by the experience they create.
Chef’s Rule for AI:
Mise en place: The prep has to be right — curated data, organized knowledge, the right ingredients.
Technique: Models need to be tuned for purpose, not just general guessing.
Taste: At the end of the day, does it work in real life? Does it satisfy, repeatably and reliably?
Chef’s Note: Get these wrong, and you have a flashy dish no one orders twice. Get them right, and you build something people trust enough to come back to again and again.
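And here is one way the Chef’s Rule might translate into a pipeline, as a sketch under loud assumptions: the curated facts, the retrieve helper, the stubbed call_llm, and the taste test are all invented for illustration, not any particular library’s API.

```python
# Minimal sketch of the Chef's Rule as a pipeline. Everything here is
# illustrative; the stubs exist only so the sketch runs end to end.

# Mise en place: a small, curated knowledge base, prepped ahead of time.
CURATED_FACTS = {
    "braising": "Braising cooks tough cuts low and slow in liquid.",
    "emulsion": "An emulsion binds fat and water, as in a vinaigrette.",
}

def retrieve(question: str) -> str:
    """Pick the curated fact whose topic appears in the question."""
    for topic, fact in CURATED_FACTS.items():
        if topic in question.lower():
            return fact
    return ""

# Technique: point the model at the retrieved context instead of
# letting it guess. This stub just echoes the context back.
def call_llm(prompt: str) -> str:
    return prompt.split("Context: ")[-1].split("\n")[0]

def answer(question: str) -> str:
    context = retrieve(question)
    prompt = f"Use only this context.\nContext: {context}\nQuestion: {question}"
    return call_llm(prompt)

# Taste: check the dish before it leaves the kitchen -- a tiny eval harness.
TASTE_TESTS = [("What is braising?", "low and slow")]

def taste_test() -> bool:
    return all(expected in answer(q) for q, expected in TASTE_TESTS)

print(answer("What is braising?"))
print("passes taste test:", taste_test())
```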
When the Tricks Fade
Ferran Adrià didn’t fail; he redefined cuisine. But chefs who misunderstood him, who copied his tricks without his rigor, burned out as quickly as they rose.
AI faces the same moment. LLMs are the foundation — the technique. But if all we do is sprinkle them everywhere without purpose, we risk turning transformation into gimmick.
The next course for AI isn’t more scale, more demos, or more shimmer. It’s trust.
The next course is trust built through clarity, constraints, and consistency. Trust grounded in prep, technique, and taste. Trust that lets us move from novelty to necessity.
That’s where the real transformation begins.