Quote

How we measure changes not only what is being measured but also the moral scaffolding that compels us to live toward those standards. Innovations like assembly-line factories would further extend this demand that human beings work at the same relentlessly monotonous rate as a machine, as immortalized in Charlie Chaplin’s film Modern Times. Today, the control creep of self-tracking technologies into workplaces and institutions follows a similar path. In a “smart” or “AI-driven” workplace, the productive worker is someone who emits the desired kind of data — and does so in an inhumanly consistent way.


Sun-ha Hong, “Control Creep: When the Data Always Travels, So Do the Harms”
Link

Facebook Prioritizes Growth Over Social Responsibility

Karen Hao, writing at MIT Technology Review:

But testing algorithms for fairness is still largely optional at Facebook. None of the teams that work directly on Facebook’s news feed, ad service, or other products are required to do it. Pay incentives are still tied to engagement and growth metrics. And while there are guidelines about which fairness definition to use in any given situation, they aren’t enforced.

The Fairness Flow documentation, which the Responsible AI team wrote later, includes a case study on how to use the tool in such a situation. When deciding whether a misinformation model is fair with respect to political ideology, the team wrote, “fairness” does not mean the model should affect conservative and liberal users equally. If conservatives are posting a greater fraction of misinformation, as judged by public consensus, then the model should flag a greater fraction of conservative content. If liberals are posting more misinformation, it should flag their content more often too.

But members of Kaplan’s team followed exactly the opposite approach: they took “fairness” to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless. “There’s no point, then,” the researcher says. A model modified in that way “would have literally no impact on the actual problem” of misinformation.

[Kaplan’s] claims about political bias also weakened a proposal to edit the ranking models for the news feed that Facebook’s data scientists believed would strengthen the platform against the manipulation tactics Russia had used during the 2016 US election.
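
To make the contrast concrete, here is a minimal sketch of the two competing readings of “fairness” described above: flagging in proportion to each group’s actual share of misinformation versus forcing equal impact on both groups. The numbers and function names are purely hypothetical illustrations, not Facebook’s actual figures or the Fairness Flow API.

```python
# Hypothetical numbers contrasting two readings of "fairness" for a
# misinformation model. Nothing here comes from Fairness Flow itself;
# it is illustrative only.

# Fraction of each group's posts that actually are misinformation,
# as judged by some external standard ("public consensus" in the quote).
base_rates = {"conservative": 0.06, "liberal": 0.03}

# Fraction of each group's posts the model flags.
flag_rates = {"conservative": 0.058, "liberal": 0.031}

def proportional_fairness(base, flags, tol=0.01):
    # Fair if each group's flag rate tracks its actual misinformation rate.
    return all(abs(flags[g] - base[g]) <= tol for g in base)

def equal_impact_fairness(flags, tol=0.01):
    # "Fair" only if the model affects every group at roughly the same
    # rate, regardless of how much misinformation each group posts.
    rates = list(flags.values())
    return max(rates) - min(rates) <= tol

print(proportional_fairness(base_rates, flag_rates))  # True
print(equal_impact_fairness(flag_rates))              # False
```

Under the proportional reading, the model above is fair even though it affects conservatives more; under the equal-impact reading it is not — which is precisely the standard the article says Kaplan’s team enforced.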

The whole point of ethics is that it must be integrated so that it underlies everything an organization does; it cannot function as a public-relations add-on. Sadly, at Facebook the only ethic is growth at all costs, social implications be damned.

When a person or an organization is responsible for causing significant civil unrest, deaths, or genocide, we expect those who are even partly responsible to be called to account, not just in the public domain but in courts of law and international justice. And when those someones happen to be leading executives of one of the biggest companies in the world, the solution isn’t to berate them in Congressional hearings and hear their weak apologies, but to take real action against them and their companies.

Link

Links for November 23–December 4, 2020

  • When AI sees a man, it thinks “official.” A woman? “Smile” | “The AI services generally saw things human reviewers could also see in the photos. But they tended to notice different things about women and men, with women much more likely to be characterized by their appearance. Women lawmakers were often tagged with “girl” and “beauty.” The services had a tendency not to see women at all, failing to detect them more often than they failed to see men.” // Studies like this help reveal the bias baked deep into algorithms that are meant to be ‘impartial’; that supposed impartiality amounts to a mathwashing of existing biases that are pernicious to half of society.
  • The ungentle joy of spider sex | “Spectacular though all this is, extreme sexual size dimorphism is rare even in spiders. “It’s an aberration,” Kuntner says. Even so, as he and Coddington describe in the Annual Review of Entomology, close examination of the evolutionary history of spiders indicates that eSSD has evolved at least 16 times, and in one major group some lineages have repeatedly lost it and regained it. The phenomenon is so intriguing it’s kept evolutionary biologists busy for decades. How and why did something so weird evolve?” // This is a truly wild and detailed discussion of spider evolution and intercourse.
  • Miley Cyrus – Plastic Hearts // Consider me shocked, but I’m really liking Cyrus’ newest album.
Link

The implausibility of intelligence explosion

The intelligence of the AIs we build today is hyper specialized in extremely narrow tasks — like playing Go, or classifying images into 10,000 known categories. The intelligence of an octopus is specialized in the problem of being an octopus. The intelligence of a human is specialized in the problem of being human.

What would happen if we were to put a freshly-created human brain in the body of an octopus, and let it live at the bottom of the ocean? Would it even learn to use its eight-legged body? Would it survive past a few days? We cannot perform this experiment, but we do know that cognitive development in humans and animals is driven by hardcoded, innate dynamics.

Chollet’s long-form consideration of the ‘intelligence explosion’ is exactly the kind of long, deep-dive assessment of artificial intelligence I wish we had more of. In particular, his attention to the relationship between ‘intelligence’, ‘mind’, and ‘socio-situationality’ struck me as meaningful and helpful, insofar as it recognizes the philosophical dimensions of intelligence that are often disregarded, forgotten, or simply unappreciated by those who talk generally about strong AI systems.