Generative AI Technologies and Emerging Wicked Policy Problems

While some emerging generative technologies may positively affect various domains (e.g., certain aspects of drug discovery and biological research, efficient translation between certain languages, speeding up certain administrative tasks, etc.), they are also enabling new forms of harmful activity. Case in point: some individuals and groups are using generative technologies to create child sexual abuse or exploitation materials:

Sexton says criminals are using older versions of AI models and fine-tuning them to create illegal material of children. This involves feeding a model existing abuse images or photos of people’s faces, allowing the AI to create images of specific individuals. “We’re seeing fine-tuned models which create new imagery of existing victims,” Sexton says. Perpetrators are “exchanging hundreds of new images of existing victims” and making requests about individuals, he says. Some threads on dark web forums share sets of faces of victims, the research says, and one thread was called: “Photo Resources for AI and Deepfaking Specific Girls.”

… realism also presents potential problems for investigators who spend hours trawling through abuse images to classify them and help identify victims. Analysts at the IWF, according to the organization’s new report, say the quality has improved quickly—although there are still some simple signs that images may not be real, such as extra fingers or incorrect lighting. “I am also concerned that future images may be of such good quality that we won’t even notice,” says one unnamed analyst quoted in the report.

The ability to produce AI-generated child abuse content is becoming a wicked problem with few (if any) “good” solutions. It will be imperative for policy professionals to learn from past situations in which technologies were found to facilitate child abuse-related harms. In doing so, these professionals will need to draw lessons about which kinds of responses demonstrate necessity and proportionality with respect to the emergent harms of the day.

As just one example, we will have to carefully consider how generative AI-created child sexual abuse content is similar to, and distinct from, the subjects of past policy debates on policing online child sexual abuse content. Such care in developing policy responses will be needed both to address these harms and to avoid performative actions that do little to address the underlying issues driving this kind of behaviour.

Relatedly, we must also be wary of the promise that past (ineffective) solutions will somehow resolve the newest wicked problem. Novel solutions that are custom-built for generative systems may be needed, and these solutions must protect our privacy, Charter, and human rights while simultaneously mitigating harms. Doing anything less will, at best, “merely” exchange one class of emergent harms for another.