I have started to notice a pattern that no one is quite naming yet.
Publicly, leaders are still optimistic about AI. Earnings calls are confident. Keynotes are hopeful. Strategy decks are full of ambition.
Privately, the tone is different.
There is frustration. Confusion. A quiet sense that something did not land the way it was supposed to.
My bet is that 2026 will be the year leaders publicly express optimism and privately express regret.
Not regret for investing in AI. Regret for how they did it.
Over the past two years, organizations moved fast. According to McKinsey, more than 70 percent of companies now report using some form of AI in at least one business function. Gartner reports that by 2025, most enterprise AI initiatives will be embedded into existing workflows rather than run as stand-alone experiments. On paper, this looks like progress.
In practice, many leaders are asking the same question behind closed doors.
Why does everything still feel so hard?
The answer is not that AI failed. It is that AI was asked to perform inside systems that were already failing.
We talk about AI as if it is a transformation lever. However, AI does not transform broken systems. It accelerates them.
If your data is fragmented, AI scales confusion. If your workflows are misaligned, AI amplifies friction. If your incentives reward speed over clarity, AI just helps you move faster in the wrong direction.
MIT Sloan published research noting that companies seeing the highest AI returns focused less on the tools and more on organizational readiness. Leadership alignment, decision rights, and trust mattered more than model sophistication.
That should not be surprising, yet many leaders skipped that step entirely.
Here is where the conversation usually gets uncomfortable.
Most organizations did not adopt AI as part of a systems transformation. They adopted it as a capability upgrade.
That distinction matters.
A capability asks, “What can this tool do?” A system asks, “What will this tool interact with once it is inside?”
Hidden systems are the layer we rarely map. Decision bottlenecks. Informal workarounds. Legacy ownership models. Unwritten rules about who is allowed to question outputs. These are not technical problems, but they are the environment AI inherits.
I have seen this play out in a few familiar ways.
In one organization, AI recommendations were technically sound but constantly overridden because leaders did not trust data they had historically argued about. The tool became a suggestion engine no one followed.
In another, automation improved throughput but quietly shifted accountability. When errors happened, no one could explain whether the system failed or the human did. Trust eroded, not because of the model, but because the system never defined responsibility.
In a third, AI was rolled out unevenly. Some teams were empowered. Others felt surveilled. Adoption stalled not from resistance to technology, but from resistance to how it was introduced.
Harvard Business Review recently observed that the biggest barrier to AI value creation is not technical debt. It is organizational debt: the accumulation of unresolved decisions, misaligned incentives, and outdated operating assumptions.
This is the disappointment wave forming now.
It is not loud yet. It sounds like leaders saying, “We have the tools, but adoption is uneven,” or “We expected more efficiency gains by now,” or “We just need to refine the model.”
Maybe. Or maybe the model is doing exactly what it was designed to do, inside a system that was never designed for clarity.
The most interesting leaders I know are not asking how to get more out of AI. They are asking what AI is revealing about their organization that they were able to ignore before.
Where decisions stall. Where accountability blurs. Where humans are expected to compensate for system gaps.
AI has a way of surfacing truth quickly. That can feel like failure when it is actually exposure.
The norm right now is to double down on tools. Add governance. Add dashboards. Add more layers of control. Sometimes that helps. Sometimes it just adds complexity on top of complexity.
What if the real work is quieter?
Mapping the hidden systems. Naming the informal rules. Clarifying who owns what. Deciding where humans must stay in the loop, not as a safety blanket, but as a deliberate design choice.
There is no single answer here. Anyone selling certainty is skipping the hard part.
But I do think this moment will separate organizations that use AI as a shortcut from those that use it as a mirror.
One accelerates noise. The other forces clarity.
And clarity, uncomfortable as it is, is where real transformation actually begins.
See you next week for more straight talk. For bold ideas, honest insights, and real strategies subscribe to my newsletter and follow me on LinkedIn.
— Christina Aguilera | CIO & Executive Leader | Co-Founder, Synthis | President, WiTH Foundation
