Deskilling and Human-in-the-Loop

I found boyd’s “Deskilling on the Job” to be a useful framing for being broadly concerned, or at least thoughtful, about using emerging A.I. technologies in professional as well as training environments.

Most technologies serve to augment human activity. In sensitive situations we often already require a human-in-the-loop to respond to dangerous errors (see: dam operators, nuclear power plant staff, etc.). But if emerging A.I. systems’ risks are to be mitigated by also placing humans in the loop, then it behooves policymakers to ask: how well does this actually work when we thrust humans into correcting highly complicated failures moments before a disaster?

Not to spoil things, but it often goes poorly, and we then blame the humans in the loop instead of the technical design of the system.¹

A.I. technologies offer an amazing bevy of possibilities. But thinking more carefully about how to integrate them into society, while also digging into the history and scholarly writing on automation, will almost certainly help us avoid obvious, if recurring, errors in how policymakers think about adding guardrails around A.I. systems.


  1. If this idea of humans-in-the-loop and the regularity of errors in automated systems interests you, I’d highly encourage you to get a copy of ‘Normal Accidents’ by Charles Perrow. ↩︎
