AI Delegation Is Not People Management

Stop Treating It Like It Is

2/19/2026 · 1 min read

AI delegation is having a moment. But we should be careful with one potentially misleading assumption: that delegating to an AI is "basically like" delegating to a person.

A recent paper on Intelligent AI Delegation by Google DeepMind (see link below) frames delegation with human concepts: roles, responsibility, accountability, trust, reputation, escalation.

I like the direction. At the same time, we should remember what an AI agent actually is: a PROBABILISTIC OPTIMIZER WITH TOOL ACCESS. That means this framing carries risks that need to be addressed:

1) "Authority" isn't social for agents, it's technical

For humans, authority is enforced by norms and institutions.
For agents, "authority" is permissions, credentials, tool access. If we don't translate the metaphor into controls (least privilege, revocation, sandboxing), we risk safety-by-storytelling.
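As a concrete illustration of what "authority as controls" might look like, here is a minimal sketch of a revocable, least-privilege tool grant enforced at the call boundary. All names (`ToolGrant`, `permits`, `call_tool`) are hypothetical, not from any specific agent framework:

```python
import time

class ToolGrant:
    """A revocable, least-privilege grant: the agent may call only the
    tools listed here, and only until the grant expires or is revoked.
    Illustrative sketch, not a production authorization system."""

    def __init__(self, allowed_tools, ttl_seconds=300):
        self.allowed_tools = set(allowed_tools)
        self.expires_at = time.time() + ttl_seconds
        self.revoked = False

    def revoke(self):
        # Revocation takes effect immediately for all future calls.
        self.revoked = True

    def permits(self, tool_name):
        return (not self.revoked
                and time.time() < self.expires_at
                and tool_name in self.allowed_tools)

def call_tool(grant, tool_name, tool_fn, *args):
    """Enforce the grant at the boundary, not by convention."""
    if not grant.permits(tool_name):
        raise PermissionError(f"tool '{tool_name}' not authorized")
    return tool_fn(*args)
```

The point of the sketch: "authority" becomes a data structure you can scope, expire, and revoke, rather than a role description the agent is asked to respect.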

2) "Trust" and "reputation" are gameable by default

In an agent ecosystem, identities are cheap, agents are copyable, and "good behavior" can be selectively shown to measurement channels. Trust needs identity + attestation to be operational, not aspirational.
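One minimal way to make "identity + attestation" operational is a MAC over each claim, keyed by a secret provisioned out of band, so that unknown or copied identities fail verification. The registry and message format below are illustrative assumptions:

```python
import hmac
import hashlib

# Hypothetical key registry: each agent identity holds a secret key
# provisioned out of band. A claim about behavior only counts if it
# carries a valid MAC computed with that identity's key.
REGISTRY = {"agent-42": b"secret-provisioned-out-of-band"}

def attest(agent_id: str, claim: bytes) -> bytes:
    """The agent (or its runtime) signs a claim about its behavior."""
    key = REGISTRY[agent_id]
    return hmac.new(key, claim, hashlib.sha256).digest()

def verify(agent_id: str, claim: bytes, tag: bytes) -> bool:
    """The delegator checks the claim against the registered identity."""
    key = REGISTRY.get(agent_id)
    if key is None:  # unknown identity: cheap identities fail here
        return False
    expected = hmac.new(key, claim, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

This is deliberately simple (shared-secret HMAC rather than public-key infrastructure), but it shows the shift: trust becomes a check that can fail, not a score that can be farmed.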

3) Monitoring won't catch the many small failures

Many failures are silent: plausible-but-wrong outputs, subtle drift, partial tool misuse. Trigger-based "detect → diagnose → re-delegate" can overestimate what's actually observable.

So what's the way forward? A hybrid approach:

Human-style governance, machine-style control.

- Use human delegation concepts to assign human responsibility: define who is accountable and who can approve/stop.

- Use security engineering to control agents: scoped permissions, isolation, provenance, rate limits, kill switches, and lean on "verification" only where it can genuinely prove something.
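To make the machine-style side of the hybrid concrete, here is a sketch combining two of the listed controls, a sliding-window rate limit and a kill switch, wrapped around every tool call. `GuardedExecutor` and its limits are illustrative, not a reference to any existing library:

```python
import time
from collections import deque

class GuardedExecutor:
    """Illustrative sketch: every tool call passes through a global
    kill switch and a sliding-window rate limit. Real deployments
    would add isolation, provenance logging, and scoped permissions."""

    def __init__(self, max_calls: int, window_seconds: float):
        self.max_calls = max_calls
        self.window = window_seconds
        self.calls = deque()  # timestamps of recent calls
        self.killed = False

    def kill(self):
        # A human (or supervising system) can stop the agent outright.
        self.killed = True

    def execute(self, tool_fn, *args):
        if self.killed:
            raise RuntimeError("kill switch engaged")
        now = time.time()
        # Drop timestamps that fell out of the sliding window.
        while self.calls and now - self.calls[0] > self.window:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            raise RuntimeError("rate limit exceeded")
        self.calls.append(now)
        return tool_fn(*args)
```

The design choice worth noting: the human-governance layer decides who may press `kill()`; the engineering layer guarantees that pressing it actually works.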

If we get this right, delegation becomes scalable. If we get it wrong, we risk mishaps at industrial scale.

https://lnkd.in/ez5a3STx

#AI #AIAgents #SafetyEngineering #Governance #AgenticSystems