Automation isn’t the threat.
“NO OVERRIDE” is.
3/10/2026


To see where AI helps, and where it gets risky, use this framework:
Layer 1: Policy (ends + values)
What we want. What we refuse to do.
Layer 2: Rules (translation)
Thresholds, procedures, escalation paths.
Layer 3: Execution (running the process)
Day-to-day decisions and enforcement.
Layers 1 and 2 are where humans must stay in charge.
AI can turbocharge Layer 3 (that's often where agentic AI gets deployed).
THE RISK is when Layers 1 and 2 quietly get pulled into Layer 3 in the name of "efficiency," and nobody can say anymore who chose the tradeoffs, or why.
So the key question isn’t: “Did AI make the decision?”
It’s: “Can humans contest it, and rewrite the rules when needed?”
If not, “human-in-the-loop” is just a checkbox:
• humans rubber-stamp AI decisions
• dissent gets frowned upon
• accountability evaporates (“the system decided”)
In high-stakes domains (HR, safety, credit, healthcare), you need MORE THAN JUST A HUMAN CLICK.
You need rules everyone can see, an appeals process that works, and a human who can overrule the system.
Automation is fine.
Unchallengeable automation is not.
Do you have instances in your organization where the "appeal" path looks real on paper but is impossible in practice?
#AIAutomation #ResponsibleAI #Leadership #AIAgents #HumanInTheLoop
Contact
bruno.gentil@sherpaconsultingasia.com
© 2026. All rights reserved.
