Decisions are the atomic units of transformation
Those of us in the transformation business can trace our lineage back to the first change-makers in the 1st century AD: the alchemists.
They were passionate and aligned on the goal to turn base metal into gold. They created sophisticated methodologies and built entire systems of understanding from forces that could be observed: fire, water, metals, minerals.
Although they were considered smart, skilled and serious scholars in their time, they never cracked the code.
Because the answer couldn’t be found at the level where they were working. The real source of change lay one level deeper: a level that wouldn’t be discovered for centuries.
In 1909, Ernest Rutherford asked a different question. Not what can we observe, but what lies underneath the observable. His gold-foil experiments revealed the inner structure of the atom: invisible building blocks that determine the behavior of everything our eyes can perceive.
Once we studied atoms — and then electrons, and then quantum behavior — real transformation became possible. Materials that had never existed in nature could be designed from scratch. From the smallest building blocks we created semiconductors, medical imaging, nanotech, iPhones.
The entire modern world happened when we shifted attention from the macro to the micro.
I’ve worked in the transformation business for years: customer centricity, sustainability, digital, large-scale cross-sector systems change. Even personal transformation. Now AI.
Like the alchemists of old, we’ve never been short of ideas or rigorous methodologies. And yet the gold is too often elusive.
We’ve been working the macro — strategy, technology, culture, governance. But the source of transmutation lies one level deeper.
Decisions are the atomic units of transformation. Millions of them, made every second across your organization, that either support the change or stifle it.
Agentic AI forced the discovery.
When unpredictable machines started making autonomous decisions, “decision quality” suddenly carried legal, financial, and reputational consequences. Now everyone's starting to pay attention; decision intelligence and decision governance are the hot topics today.
But we can’t solve for agentic decisions without tackling the human ones too.
The junior analyst in a professional services firm at 9:45pm who chooses not to fact-check that AI-generated report because "I just want to go home already."
The claims adjuster who sees clear reason to override that AI recommendation, but governance rules and override justifications make it not worth the effort.
The department head who's heard the transformation mandate, nodded in the right meetings, then kept making decisions that protect the status quo.
None of these people want to be the reason why transformation fails; they're simply responding to the context and inner realities that actually drive decisions. And these atomic decision moments are playing out at scale daily across your entire organization.
Decisions emerge more often than they’re made
How are decisions actually made in your organization? That's your decision system — intentionally designed or not. And it looks less like a machine and more like a coral reef ecosystem: complex, organic, dynamic. It includes:
Context — the external forces that influence decisions before anyone consciously makes one: culture, incentives, workload, governance, economic pressure, doomsday news cycles. Whether there's a compelling change narrative or not.
Choice — the moment of decision. Whether someone has genuine agency, clarity, and capability. Whether they’re cognitively present or exhausted. What intrinsic motivators and cognitive biases are present. It also includes the information they’re given and how it’s presented; data and AI interfaces shape decisions too.
Consequences — the downstream financial, operational, legal, and reputational outcomes that reveal whether the decision actually moved the organization forward.
This system is already running inside your organization. It produced yesterday’s decisions and it will produce tomorrow’s, unless you intentionally design it, including feedback loops from Consequences that help the system get smarter over time.
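To make the three Cs and their feedback loop concrete, here is a deliberately toy sketch in Python. Every name, field, and weighting here is an illustrative assumption of mine, not part of any playbook or product: it only shows the shape of the system, where Context conditions a Choice, a Choice produces a Consequence, and the Consequence feeds back to reshape Context.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """External forces acting before anyone consciously decides."""
    workload: float             # 0.0 (light) .. 1.0 (overloaded)
    incentive_alignment: float  # how well incentives reward the desired behavior
    change_narrative: bool      # is there a compelling story for the change?

@dataclass
class Choice:
    """The decision moment itself."""
    agency: float   # genuine freedom to decide
    clarity: float  # how well the decider understands the options

@dataclass
class Consequence:
    """Downstream outcome, fed back into the system."""
    moved_forward: bool

def decide(ctx: Context, choice: Choice) -> Consequence:
    # Toy model: a transformation-aligned decision is more likely when
    # agency and clarity are high and the context isn't working against it.
    score = (choice.agency + choice.clarity
             + ctx.incentive_alignment - ctx.workload
             + (0.5 if ctx.change_narrative else -0.5))
    return Consequence(moved_forward=score > 1.0)

def feedback(ctx: Context, outcome: Consequence) -> Context:
    # The loop that makes the system smarter over time:
    # consequences nudge the context that shapes the next decision.
    nudge = 0.1 if outcome.moved_forward else -0.1
    return Context(
        workload=ctx.workload,
        incentive_alignment=min(1.0, max(0.0, ctx.incentive_alignment + nudge)),
        change_narrative=ctx.change_narrative,
    )
```

The point of the sketch is not the arithmetic; it's that the 9:45pm analyst's choice is a function of context as much as of character, and that without the `feedback` step the system never learns from its own consequences.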
Without intentionality, transformation programs themselves become context events. New strategy, new technology, new governance gets introduced into an unmapped decision system.
Which adds cognitive load, uncertainty, and noise that either shapes choices in unforeseen ways… or fails to impact decisions at all.
Is it any wonder that few AI pilots have moved into production or achieved an ROI? The transformation is being consumed by the very system it was supposed to change.
The transformation playbook gap
The alchemists of old weren't working in isolation. They shared manuscripts, debated methodologies, built on each other's work.
Sound familiar? Today's transformation playbooks come from the same small pool of analyst and consulting firms. The themes are consistent: strategy, culture, technology, change management, governance. Created by smart people, backed by rigorous research and genuine expertise.
I ran customer-centric transformation projects based on those playbooks at Forrester, and only now in hindsight can I see the gaping hole:
These playbooks didn't include decisions. Especially not the myriad cross-functional decisions that stymie the best change efforts.
And yet in my best work, we built what I'd now call a level 1 decision system — guiding the ELT, board, and functional leaders to make aligned decisions together. Like a magnet, an effective decision system moves the needle in ways the off-the-shelf playbook can't.
But AI transformation can't just change how leaders decide. It requires a shift in how every individual in your organization decides, every single day. The macro level isn't enough anymore. It never was — we just didn't have a forcing function to see it, until AI came along.
I've spent the past several months deep in the bowels of decision science, decision intelligence, and decision governance, trying to solve for how humans and machines make decisions together. That work is what made the connection visible.
It doesn't take a better alchemist to crack the code. It just requires approaching the problem from a completely different angle — like a physicist asking a question the alchemists never thought to ask.
The new science of transformation
The temptation, once you can see the decision level, is to treat it like an engineering problem. Map the decisions, design the rules, build the governance, control the outcomes. A better machine at a smaller scale.
But that's the same mistake, one level down. That's not to say this level of detail isn't necessary for high-stakes, high-risk decisions. It is. But those more detailed decision systems are inherently nested within the bigger system.
Organizations aren't machines. I mentioned earlier that decision systems should be more like a coral reef than a set of guidelines or processes – because that’s how complex organizations work.
You can't control millions of daily decisions across a living, breathing organization. What you can do is understand the conditions that make better decisions more likely — the culture, the incentives, the individual psychologies, the clarity of consequence, the presence or absence of genuine agency. Shape those conditions and transformation-aligned decisions become the path of least resistance.
Which is why science didn’t stop at the discovery of the atom. Go deep enough into physics and you hit quantum mechanics, where the act of observing changes what's being observed. Decision systems work the same way. The moment you make decisions visible — name them, measure them, close the feedback loop around them — the system starts to shift.
That's the new science of transformation. Not a new set of tools to impose on the organization, but a way of seeing that changes what you look at and how you intervene.
What’s next
Where to start? Not with a new framework or a new workstream — just with honest answers to a few questions your current playbook might not be answering:
What decisions are being made today that reinforce the status quo?
How are your employees making decisions with and about AI – and how might those decisions ripple into financial, human and risk consequences?
Which of these decisions qualify as high-impact, high-risk, and so need a more robust way of governing them?
How might culture and incentives shape those decisions, and what needs to change?
Transformation pros: how does this land for you? What challenges or opportunities open up when you explore the atomic decision level? Comments welcome!
Curious what this might look like in your organization? Me too. DM me and let's explore it together.
P.S. Want to read more on the topic? Check out my recent article on the 3C decision framework.