Don't Delegate Responsibility To Algorithms

Accountability in the age of AI

3/1/2026 · 1 min read

It was a rainy Sunday and I was home alone, so I had a long conversation with my LLMs to clarify my thoughts on a question that has been buzzing in my head lately:

What happens to human accountability in the Age of AI?

As AI embeds itself deeper into organizations, it may not replace intelligence, but it could surreptitiously replace our habit of owning the reasons. And that would be the real surrender.

The civilizational failure mode looks like this:

· We let AI set the objective function (“optimize X”)

· Humans follow the recommendation

· When things go wrong, nobody is accountable—because “the model said”

That’s not progress. That’s moral deskilling.

And the same pattern shows up inside companies—just faster and quieter.

· A hiring screen filters for “top talent” and accidentally encodes bias—and nobody can explain the rejections.

· A forecasting model drives layoffs or budget cuts—then leadership hides behind “the numbers.”

· A recommender system optimizes engagement and slowly degrades trust, discourse, or mental health—because the metric looked good.

In each case, the organization didn’t just outsource analysis. It outsourced the because.

My suggested working rule:

AI can advise on means. Humans must own ends—and be accountable for tradeoffs.

Where this matters most (in governments and firms): decisions involving

· Coercion / force (state: war, detention, surveillance; corporate equivalents: deplatforming, employee surveillance, major enforcement actions)

· Rights limitations (state: speech, privacy, due process; corporate equivalents: user voice, privacy, access, appeal)

A few safeguards that could be normalized:

· A small, diverse, identifiable decision panel

· Named votes (no hiding in “we”)

· Post-mortems when outcomes fail (what we believed, what was wrong, what changes)

· A “human reasons” requirement: you can use AI, but you must rewrite and own the rationale

· “Frame competition”: a human-only framing, an AI framing, and a red-team adversarial framing—before choosing a direction

If we can’t explain our choices without outsourcing the “because…”, we’re not leading. We’re executing. And isn’t execution what machines do best?

#AI #Leadership #CriticalThinking #Governance #Accountability