Link

A Brief Unpacking of a Declaration on the Future of the Internet

Cameron F. Kerry has a helpful piece at Brookings that unpacks the recently published ‘Declaration on the Future of the Internet.’ As he explains, the Declaration was signed by 60 States and is meant, in part, to rebut a joint China-Russia statement that advances those countries’ positions on ‘securing’ domestic Internet spaces and on moving Internet governance from multi-stakeholder forums to State-centric ones.

So far, so good. However, baked into Kerry’s article is language suggesting that he either misunderstands, or understates, some of the security-related elements of the Declaration. He writes:

There are additional steps the U.S. government can take that are more within its control than the actions and policies of foreign states or international organizations. The future of the Internet declaration contains a series of supporting principles and measures on freedom and human rights, Internet governance and access, and trust in use of digital network technology. The latter—trust in the use of network technology— is included to “ensure that government and relevant authorities’ access to personal data is based in law and conducted in accordance with international human rights law” and to “protect individuals’ privacy, their personal data, the confidentiality of electronic communications and information on end-users’ electronic devices, consistent with the protection of public safety and applicable domestic and international law.” These lay down a pair of markers for the U.S. to redeem.

Reading this against the 2019 Ministerial and the recent updates to the Council of Europe’s Cybercrime Convention, I see that a vast swathe of new law enforcement and security agency powers would be entirely permissible under Kerry’s assessment of the Declaration and of the States that signed it. While these new powers have either been agreed to, or advanced by, signatory States, they have simultaneously been directly opposed by civil and human rights campaigners, as well as by some national courts. Specifically, there are live discussions around the following powers:

  • the availability of strong encryption;
  • the guarantee that the content of communications sent using end-to-end encrypted devices cannot be accessed or analyzed by third parties (including by on-device surveillance);
  • the requirement of prior judicial authorization to obtain subscriber information; and
  • the oversight of preservation and production powers by relevant national judicial bodies.

Laws can be passed that see law enforcement interests supersede individuals’ or communities’ rights to safeguard their devices, data, and communications from the State. When or if such a situation occurs, the signatories of the Declaration can hold fast to their flowery language around protecting rights while, at the same time, individuals and communities experience heightened surveillance of, and intrusions into, their daily lives.

In effect, a lot of international policy and legal infrastructure has been built to facilitate sweeping new investigatory powers and reforms to how data is, and can be, secured. It has taken years to build this infrastructure and, as we leave the current stage of the global pandemic, it is apparent that governments have continued to press ahead with their efforts to expand the powers which could be provided to law enforcement and security agencies, notwithstanding the efforts of civil and human rights campaigners around the world.

The next stage will be to assess how, and in what ways, international agreements and legal infrastructure will be brought into national legal systems, and to determine where to strategically oppose the worst of the overreaches. While it’s possible that some successes will be achieved in resisting the expansion of state powers, not everything will be resisted. The consequence will be both to enhance state intrusions into private lives and to weaken the security provided to devices and data, with the resultant effect of better enabling criminals to illicitly access or manipulate our personal information.

The new world of enhanced surveillance and intrusions is wholly consistent with the ‘Declaration on the Future of the Internet.’ And that’s a big, glaring, and serious problem with the Declaration.

Quote

CryptDB, a project out of MIT’s Computer Science and Artificial Intelligence Lab (CSAIL), may be a solution for this problem. In theory, it would let you glean insights from your data without letting even your own personnel “see” that data at all, said Dr. Sam Madden, CSAIL director, on Friday.

“The goal is to run SQL on encrypted data, you don’t even allow your admin to decrypt any of that data and that’s important in cloud storage,” Madden said at an SAP-sponsored event at Hack/reduce in Cambridge, Mass.

This is super interesting work that, if successful, could open a lot of sensitive data to mining. However, it needs to be extensively tested.
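To make the querying-encrypted-data idea concrete, here is a minimal sketch of the equality-query trick that systems in this space rely on. This is not CryptDB’s actual design (which layers several encryption schemes per column); the table, the column names, and the use of an HMAC as a stand-in for a deterministic cipher are all my own illustrative assumptions.

```python
# Minimal sketch: a deterministic "encryption" lets a server match equality
# predicates on ciphertext without ever holding the key. Illustrative only;
# CryptDB itself layers multiple schemes (DET, OPE, homomorphic) per column.
import hashlib
import hmac
import sqlite3

SECRET_KEY = b"client-side key; never sent to the server"

def enc_det(value: str) -> str:
    # Deterministic tag via HMAC: equal plaintexts yield equal ciphertexts,
    # so WHERE col = ? works server-side. A real system would use a proper
    # deterministic cipher (e.g., AES-SIV) so values remain decryptable.
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

# The "server" (here, SQLite standing in for a cloud database) only ever
# sees ciphertext, in both the stored rows and the query constants.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE patients (name_ct TEXT, diagnosis_ct TEXT)")
db.execute("INSERT INTO patients VALUES (?, ?)",
           (enc_det("Alice"), enc_det("influenza")))
db.execute("INSERT INTO patients VALUES (?, ?)",
           (enc_det("Bob"), enc_det("measles")))

count = db.execute("SELECT COUNT(*) FROM patients WHERE diagnosis_ct = ?",
                   (enc_det("influenza"),)).fetchone()[0]
print(count)  # -> 1, computed without the server seeing any plaintext
```

Even this toy version hints at why the testing matters: deterministic schemes leak which rows share a value, which is exactly the kind of subtle exposure that needs to be probed before sensitive data is mined this way.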

One thing that is baked into this product, however, is the assumption that large-scale data mining is good or appropriate. I’m not taking a position that it’s wrong, but note that there isn’t any discussion – that I can find – where journalists are thinking through whether such sensitive information should even be mined in the first place. We (seemingly) are foreclosing this basic and very important question and, in the process, eliding a whole series of important social and normative questions.

Notes

EM: My FT oped: Google Revolution Isn’t Worth Our Privacy

evgenymorozov:

Google’s intrusion into the physical world means that, were its privacy policy to stay in place and cover self-driving cars and Google Glass, our internet searches might be linked to our driving routes, while our favourite cat videos might be linked to the actual cats we see in the streets. It also means that everything that Google already knows about us based on our search, email and calendar would enable it to serve us ads linked to the actual physical products and establishments we encounter via Google Glass.

For many this may be a very enticing future. We can have it, but we must also find a way to know – in great detail, not just in summary form – what happens to our data once we share it with Google, and to retain some control over what it can track and for how long.

It would also help if one could drive through the neighbourhood in one of Google’s autonomous vehicles without having to log into Google Plus, the company’s social network, or any other Google service.

The European regulators are not planning to thwart Google’s agenda or nip innovation in the bud. This is an unflattering portrayal that might benefit Google’s lobbying efforts but has no bearing in reality. Quite the opposite: it is only by taking full stock of the revolutionary nature of Google’s agenda that we can get the company to act more responsibly towards its users.

I think that it’s critically important to recognize just what the regulators are trying to establish: some kind of line in the sand, a line that identifies practices that move against the ethos and civil culture of particular nations. There isn’t anything necessarily wrong with this approach to governance. The EU’s approach suggests a deeper engagement with technology than that of some other nations, insofar as some regulators are questioning technical developments and potentialities on the basis of a legally-instantiated series of normative rights.

Winner, writing all the way back in 1986 in his book The Whale and the Reactor: A Search for Limits in an Age of High Technology, recognized that frank discussions around technology, and the socio-political norms embedded in it, are critical to a functioning democracy. The decisions we make with regard to technical systems can have far-reaching consequences, insofar as (some) technologies become ‘necessary’ over time because of sunk costs, network effects, and their relative positioning compared to competing products. Critically, technologies aren’t neutral: they are shaped within a social framework that is encrusted with power relationships. As a consequence, it behooves us to think about how technologies enable particular power relations and whether those are relations that we’re comfortable asserting anew, or reaffirming.

(If you’re interested in reading some of Winner’s stuff, check out his essay, “Do Artifacts Have Politics?”)

Link

Surprise: American Equipment Spies on Iranians

Steve Stecklow, for Reuters, has a special report discussing how Chinese vendor ZTE was able to resell American network infrastructure and surveillance products to the Iranian government. The equipment sold is significant:

Mahmoud Tadjallimehr, a former telecommunications project manager in Iran who has worked for major European and Chinese equipment makers, said the ZTE system supplied to TCI was “country-wide” and was “far more capable of monitoring citizens than I have ever seen in other equipment” sold by other companies to Iran. He said its capabilities included being able “to locate users, intercept their voice, text messaging … emails, chat conversations or web access.”

The ZTE-TCI documents also disclose a backdoor way Iran apparently obtains U.S. technology despite a longtime American ban on non-humanitarian sales to Iran – by purchasing them through a Chinese company.

ZTE’s 907-page “Packing List,” dated July 24, 2011, includes hardware and software products from some of America’s best-known tech companies, including Microsoft Corp, Hewlett-Packard Co, Oracle Corp, Cisco Systems Inc, Dell Inc, Juniper Networks Inc and Symantec Corp.

ZTE has partnerships with some of the U.S. firms. In interviews, all of the companies said they had no knowledge of the TCI deal. Several – including HP, Dell, Cisco and Juniper – said in statements they were launching internal investigations after learning about the contract from Reuters.

The sale of Western networking and surveillance equipment and software to the Iranian government isn’t new. In the past, corporate agents for major networking firms explained to me how Iran successfully imports the equipment: while firms cannot positively know that this is going on, their ignorance typically stems from an intentional willingness to overlook what they strongly suspect is happening. Regardless, the actual sale of this specific equipment, while significant, isn’t a story that Western citizens can do much to change at this point.

Really, we should be asking: do we, as citizens of Western nations, believe that manufacturing these kinds of equipment is permissible? While some degree of surveillance capacity is arguably needed for lawful purposes within a democracy, it is theoretically possible to design devices so that they have limited intercept and analysis capability out of the box. In essence, we could demand that certain degrees of friction be baked into the surveillance equipment that is developed, and actively work to prevent companies from producing highly scalable and multifunctional surveillance equipment and software. Going forward, this could prevent the next sale of significant surveillance equipment to Iran on the grounds that the West simply doesn’t have any for (legal) sale.

In the case of government surveillance, inefficiency and a lack of scalability are advantageous insofar as they hinder governmental surveillance capabilities. Limited equipment would add time and resources to surveillance-driven operations, and would thus demand a greater general intent to conduct surveillance than when authorities have access to easy-to-use, advanced, and scalable surveillance systems.
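To illustrate what friction baked in at the vendor level might look like, here is a purely hypothetical sketch: an intercept controller whose firmware caps concurrent taps and deliberately throttles target enrolment. The controller, its limits, and the warrant-token check are all invented for illustration and do not describe any real product.

```python
# Hypothetical sketch of friction-by-design: an intercept controller that
# refuses bulk operation through hard ceilings and deliberate rate limits.
import time
from dataclasses import dataclass, field

MAX_CONCURRENT_TAPS = 10       # hard ceiling shipped in the firmware
MIN_SECONDS_BETWEEN_TAPS = 60  # deliberately slow target enrolment

@dataclass
class InterceptController:
    active_taps: dict = field(default_factory=dict)
    last_enrolment: float = 0.0

    def start_tap(self, target_id: str, warrant_token: str) -> None:
        # Each named target requires its own authorization token, and the
        # device refuses to scale past its built-in limits.
        if not warrant_token:
            raise PermissionError("no warrant token supplied for target")
        if len(self.active_taps) >= MAX_CONCURRENT_TAPS:
            raise PermissionError("tap ceiling reached; bulk use disallowed")
        if time.time() - self.last_enrolment < MIN_SECONDS_BETWEEN_TAPS:
            raise PermissionError("rate limited: enrolments are throttled")
        self.active_taps[target_id] = warrant_token
        self.last_enrolment = time.time()

ctl = InterceptController()
ctl.start_tap("subscriber-123", "warrant-abc")      # first tap succeeds
try:
    ctl.start_tap("subscriber-456", "warrant-def")  # too soon: rejected
except PermissionError as err:
    print(err)
```

The point of the sketch is not the specific numbers but the design posture: limits enforced in the equipment itself, rather than left to policy, are what would make country-wide monitoring of the kind described above slow and expensive.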

Legal frameworks are insufficient to protect citizens’ rights and privacy, as governments’ extensions and exploitations of those frameworks have demonstrated time and time again. We need normatively informed limitations on surveillance capabilities that are built into the equipment at the vendor level. Anything less will only legitimize, rather than truly work towards stopping, the spread of surveillance equipment that is used to monitor citizens across the globe.