The Human-AI decision frontier and the loops that shape it.
TL;DR: Human-AI decisions in high-stakes areas like credit, claims, and diagnostics are where risk, money, and human outcomes converge; yet they're complex, unpredictable, and usually ungoverned. This newsletter explores the loops (human, AI, organizational, systemic) that shape these decisions, and what comes next.
Loops Within Loops Within Loops
Everyone knows that AI is loopy.
AI hallucinates and drifts, and its performance shifts over time in ways you don't see until outcomes roll in months later.
So we put “humans in the loop” to keep AI from misbehaving, because humans are so rational… right?
But our brains are far loopier than AI: full of cognitive biases (over 180 at last count) and quick to take shortcuts when we're tired or stressed.
Now pair our unpredictable brains with unpredictable AI systems making recommendations about money, risk protection, and people's lives.
Then feed those joint decisions back into AI training data, good and bad alike, because there's no closed loop between decisions and their downstream impact on risk, cost, and capital.
All in loopy, unpredictable business and regulatory environments.
What could possibly go wrong?
The decision intelligence gap
AI didn't just create new risks; it revealed old ones too. Biases and decision fatigue have been around for centuries, but only now are regulators making the problem impossible to ignore. We have to know:
Are humans actually improving and de-risking AI recommendations... or creating more risk, variability, and error?
Which humans make better decisions than others, and how consistently?
Are we capturing why those decisions were made, not only to please regulators but to make the organization smarter?
Decision intelligence is a giant gap, and one without clear ownership. Human-AI decisions sit at the crossroads of AI, Data, Risk, Operations, and even HR, all with competing goals.
And the infrastructure to measure this human-AI collaboration doesn't exist yet. Everyone is building from scratch, even as new approaches and solutions begin to emerge.
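To make the gap concrete, here's a minimal sketch of the kind of record such infrastructure might capture for each human-AI decision. It's a hypothetical schema in Python; the class and every field name are illustrative assumptions, not an existing standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One human-AI decision, logged so it can be audited and learned from.
    A hypothetical schema; the fields are illustrative, not a standard."""
    decision_id: str
    ai_recommendation: str       # e.g. "approve", "deny", "refer"
    ai_confidence: float         # model score at decision time
    human_decision: str          # what the reviewer actually decided
    override: bool               # did the human diverge from the AI?
    rationale: str               # the "why", for regulators and for learning
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    outcome: str | None = None   # arrives months later; closing this is the hard part
```

Even a record this small turns override rates, rationale capture, and outcome linkage into queryable data instead of anecdotes.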
What this newsletter is about
Loopy Humans is a newsletter that maps the territory of high-stakes human-AI decisions, especially in regulated industries. We'll explore the loops (human, AI, organizational, systemic) that shape those decisions: how they work, why they're hard to see, and what becomes possible as the infrastructure catches up. Topics will include:
Loopy Humans — We’ll look at how bias and fatigue patterns show up in high-stakes decisions, why they’re invisible today, and what gaps to close.
Decision Loops — Human-AI decisions are where all the risk lies. So why aren’t they properly governed and tracked like assets? We’ll explore closing the loop on decision outcomes, plus the future function of DecisionOps.
Feedback Loops — Tracking decisions is one thing; learning from them is another. Do the downstream outcomes flow back to the humans and the AI model to improve how decisions get made? We'll explore different types of feedback loops, including those for behavior change (a toy sketch follows this list).
The Loopy Future — As AI handles more decisions autonomously, what happens to the humans? Some humans work better with AI than others. New roles are emerging. And organizations need to think strategically about workforce design before they're forced to react.
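As a taste of the feedback-loop questions above, here's a toy sketch building on the hypothetical DecisionRecord from earlier: once outcomes arrive, did human overrides actually beat the cases where human and AI agreed? The "good" outcome label and all names are illustrative assumptions.

```python
from collections.abc import Iterable

def override_report(records: Iterable[DecisionRecord]) -> dict[str, float]:
    """Compare outcomes where the human overrode the AI vs. where they agreed.
    A toy computation over the hypothetical DecisionRecord sketch above."""
    closed = [r for r in records if r.outcome is not None]  # loop closed: outcome known
    overrides = [r for r in closed if r.override]
    agreed = [r for r in closed if not r.override]

    def good_rate(rs: list) -> float:
        # "good" stands in for whatever outcome label the business actually uses
        return sum(r.outcome == "good" for r in rs) / len(rs) if rs else 0.0

    return {
        "override_rate": len(overrides) / len(closed) if closed else 0.0,
        "good_rate_when_overriding": good_rate(overrides),
        "good_rate_when_agreeing": good_rate(agreed),
    }
```

If the override rows don't outperform the agreement rows, the humans in the loop may be adding variability rather than safety, which is exactly the kind of question this newsletter exists to ask.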
You’re invited to the conversation.
I'm writing for leaders who sit close to these decisions—risk, governance, AI, data, commercial, operations, workforce planning—and who know the old playbooks no longer apply.
Your insights and ideas are deeply appreciated as we co-create the future of human-AI decision intelligence.
So if this resonates, hit the subscribe button... or send it to someone on your team who's responsible for decisions that move money, risk, or lives.
These loops are already running and compounding; it’s high time to get them under control.