Link

Europe Planning A DNS Infrastructure With Built-In Filtering

Catalin Cimpanu, reporting for The Record, reports that the European Union wants to build a recursive DNS service that will be available to EU institutions and the European public. The reasons for building the service are manifold, including concerns that American DNS providers are not GDPR compliant and worries that much of Europe is dependent on largely American-based or American-owned infrastructure.

Plans call for the European system to:

… come with built-in filtering capabilities that will be able to block DNS name resolutions for bad domains, such as those hosting malware, phishing sites, or other cybersecurity threats.

This filtering capability would be built using threat intelligence feeds provided by trusted partners, such as national CERT teams, and could be used to defend organizations across Europe from common malicious threats.

It is unclear if DNS4EU usage would be mandatory for all EU or national government organizations, but if so, it would grant organizations like CERT-EU more power and the agility it needs to block cyber-attacks as soon as they are detected.

In addition, EU officials also want to use DNS4EU’s filtering system to also block access to other types of prohibited content, which they say could be done based on court orders. While officials didn’t go into details, this most likely refers to domains showing child sexual abuse materials and copyright-infringing (pirated) content.1
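To make the mechanism concrete, here is a minimal sketch of blocklist-backed resolution, with hypothetical domain names standing in for a threat intelligence feed. A production resolver like DNS4EU would answer at the protocol level (e.g., returning NXDOMAIN or a sinkhole address) rather than wrapping a stub resolver as this sketch does:

```python
import socket

# Hypothetical blocklist, standing in for threat intelligence feeds
# supplied by trusted partners such as national CERT teams.
BLOCKED_DOMAINS = {"malware-payload.example", "phishing-site.example"}

def filtered_resolve(hostname: str) -> str:
    """Resolve hostname to an IPv4 address, refusing blocklisted domains."""
    if hostname.rstrip(".").lower() in BLOCKED_DOMAINS:
        # A real filtering resolver would return NXDOMAIN or point the
        # client at a sinkhole/warning page instead of raising.
        raise PermissionError(f"resolution of {hostname} blocked by policy")
    return socket.gethostbyname(hostname)

print(filtered_resolve("example.com"))        # resolves normally
# filtered_resolve("phishing-site.example")   # would raise PermissionError
```

Whatever the implementation details, the policy question is the same: everything hinges on who curates the blocklist.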

By integrating censorship/blocking provisions at the policy level of the European DNS, there is a real risk that, over time, that same system might be used for untoward ends. Consider the rise of anti-LGBTQ laws in Hungary and Poland, and how those governments might be motivated to block access to ‘prohibited content’ that is identified as such by anti-LGBTQ politicians.

While a reader might hope that the European courts could knock down these kinds of laws, their recurrence alone raises the spectre that content deemed socially undesirable by parties in power could be censored, even where there are legitimate human rights grounds for accessing the material in question.


  1. Boldface not in original. ↩︎
Link

Partnering to help curb the spread of terrorist content online

Facebook, Microsoft, Twitter, and YouTube are coming together to help curb the spread of terrorist content online. There is no place for content that promotes terrorism on our hosted consumer services. When alerted, we take swift action against this kind of content in accordance with our respective policies.

Starting today, we commit to the creation of a shared industry database of “hashes” — unique digital “fingerprints” — for violent terrorist imagery or terrorist recruitment videos or images that we have removed from our services. By sharing this information with each other, we may use the shared hashes to help identify potential terrorist content on our respective hosted consumer platforms. We hope this collaboration will lead to greater efficiency as we continue to enforce our policies to help curb the pressing global issue of terrorist content online.

The creation of the industry database of hashes shows the world that these companies are ‘doing something’ without that something being particularly onerous: any change to a file will result in it having a different hash, rendering it undetectable by the filtering system being rolled out by these companies. But that technical deficiency is actually the least interesting aspect of what these companies are doing. Rather than being compelled to inhibit speech – by way of a law that might not hold up to a First Amendment challenge in the United States – the companies are voluntarily adopting this process.
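The fragility is easy to demonstrate. The announcement does not specify the hashing scheme, but assuming the database stores exact cryptographic hashes (SHA-256 here, purely for illustration), flipping a single byte of a file produces an unrelated digest that no longer matches the blocklist:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of data as a hex string."""
    return hashlib.sha256(data).hexdigest()

# Stand-in for the bytes of a video or image on the shared blocklist.
original = b"example prohibited video bytes"
blocklist = {sha256_hex(original)}

# Flip one byte to simulate a trivial edit or re-encode.
modified = bytearray(original)
modified[0] ^= 0x01

print(sha256_hex(original) in blocklist)         # True: exact copy is caught
print(sha256_hex(bytes(modified)) in blocklist)  # False: edited copy slips through
```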

The result is that some files will be harder to come across casually, though anyone willing to put in the effort to seek them out will still find them. But it also means that the governments of the world cannot say that the companies aren’t doing anything, and most people aren’t going to be interested in the nuances of the technical deficits of this mode of censorship. So what we’re witnessing is (another) privatized method of censorship that is arguably designed more to rebut political barbs about the discoverability of horrible material on these companies’ services than to ‘solve’ the actual problem of the content’s creation and baseline availability.

While a realist might argue that anything is better than nothing, I think that the very existence of these kinds of filtering and censoring programs is inherently dangerous. While it’s all well and good for ‘bad content’ to be blocked, who will define what is ‘bad’? And how likely is it that, at some point, ‘good’ content will be either intentionally or accidentally blocked? These are systems that can be used in a multitude of ways once established, and which are often incredibly challenging to retire once in operation.

Link

South Korea to Ban Profanity and Porn from Teens’ Smartphones?

The supposed ban is meant, in part, to crack down on cyberbullying. To be clear, such bullying is serious, but introducing security deficits into smartphones – for the children! – really isn’t the way to solve this social problem. You don’t solve social ills by turning to technological filters and blocks. Especially not when trying to get between a teenager and porn.