Categories
Links

Deskilling and Human-in-the-Loop

I found boyd’s “Deskilling on the Job” to be a useful framing for thinking carefully, and with appropriate concern, about using emerging AI technologies in professional as well as training environments.

Most technologies serve to augment human activity. In sensitive settings we already require a human-in-the-loop to respond to dangerous errors (see: dam operators, nuclear power staff, etc.). If emerging AI systems’ risks are to be mitigated by also placing humans in the loop, then it behooves policymakers to ask: how well does this actually work when we thrust humans into correcting highly complicated failures moments before a disaster?

Not to spoil things, but it often goes poorly, and we then blame the humans in the loop instead of the technical design of the system.1

AI technologies offer an amazing bevy of possibilities. But thinking more carefully about how to integrate them into society, while also digging into the history and scholarly literature on automation, will almost certainly help us avoid obvious, if recurring, errors in how policymakers think about adding guardrails around AI systems.


  1. If this idea of humans-in-the-loop and the regularity of errors in automated systems interests you, I’d highly encourage you to get a copy of ‘Normal Accidents’ by Charles Perrow. ↩︎
Categories
Quotations

Measurement of Work and Moral Scaffolding

How we measure changes not only what is being measured but also the moral scaffolding that compels us to live toward those standards. Innovations like assembly-line factories would further extend this demand that human beings work at the same relentlessly monotonous rate of a machine, as immortalized in Charlie Chaplin’s film Modern Times. Today, the control creep of self-tracking technologies into workplaces and institutions follows a similar path. In a “smart” or “AI-driven” workplace, the productive worker is someone who emits the desired kind of data — and does so in an inhumanly consistent way.


Sun-ha Hong, “Control Creep: When the Data Always Travels, So Do the Harms”