Facebook, Microsoft, Twitter, and YouTube are coming together to help curb the spread of terrorist content online. There is no place for content that promotes terrorism on our hosted consumer services. When alerted, we take swift action against this kind of content in accordance with our respective policies.
Starting today, we commit to the creation of a shared industry database of “hashes” — unique digital “fingerprints” — for violent terrorist imagery or terrorist recruitment videos or images that we have removed from our services. By sharing this information with each other, we may use the shared hashes to help identify potential terrorist content on our respective hosted consumer platforms. We hope this collaboration will lead to greater efficiency as we continue to enforce our policies to help curb the pressing global issue of terrorist content online.
The creation of the industry database of hashes shows the world that these companies are ‘doing something’ without that something being particularly onerous: any change to a file will result in it having a different hash, rendering it undetectable by the filtering system being rolled out by these companies. But that technical deficiency is actually the least interesting aspect of what these companies are doing. Rather than being compelled to inhibit speech – by way of a law that might not hold up to a First Amendment challenge in the United States – the companies are voluntarily adopting this process.
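The brittleness of exact hash matching is easy to demonstrate. The companies have not published the hashing scheme they will use, so the sketch below assumes a standard cryptographic digest (SHA-256) purely for illustration: flipping a single bit of a file produces an entirely different hash, so the modified file no longer matches anything in a shared database.

```python
# Sketch of why exact hash matching is trivially evaded.
# Assumption (not confirmed by the announcement): the shared database
# stores cryptographic digests such as SHA-256 of the raw file bytes.
import hashlib

original = b"...bytes of some flagged video file..."

# Simulate a trivial edit: flip one bit of the first byte.
modified = bytearray(original)
modified[0] ^= 0x01

h_original = hashlib.sha256(original).hexdigest()
h_modified = hashlib.sha256(bytes(modified)).hexdigest()

# A one-bit change yields a completely different digest, so a lookup
# against a database of known hashes finds no match.
print(h_original == h_modified)  # → False
```

Real deployments mitigate this with perceptual hashing, which tolerates small changes, but the announcement speaks only of “hashes” without specifying which kind.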
The result is that some files will be more challenging to find without someone putting in the effort to seek them out. But it also means that the governments of the world cannot say that the companies aren’t doing anything, and most people aren’t going to be interested in the nuances of the technical deficits of this mode of censorship. So what we’re witnessing is (another) privatized method of censorship, one designed more to rebut political barbs about the discoverability of horrible material on these companies’ services than to ‘solve’ the actual problem of the content’s creation and baseline availability.
While a realist might argue that anything is better than nothing, I think that the very existence of these kinds of filtering and censoring programs is inherently dangerous. While it’s all well and good for ‘bad content’ to be blocked, who will be defining what is ‘bad’? And how likely is it that, at some point, ‘good’ content will be either intentionally or accidentally blocked? These are systems that can be used in a multitude of ways once established, and which are often incredibly challenging to retire once in operation.