Categories: Links, Writing

Why We Need to Reevaluate How We Share Intelligence Data With Allies

Last week, Canadians learned that their foreign signals intelligence agency, the Communications Security Establishment (CSE), had improperly shared information with its American, Australian, British, and New Zealand counterparts (collectively referred to as the “Five Eyes”). The exposure was unintentional: techniques that CSE had developed to de-identify metadata containing Canadians’ personal information failed to keep Canadians anonymous once combined with allies’ re-identification capabilities. Canadians recognize the hazards of such exposures, given that lax information-sharing protocols with US agencies previously contributed to the mistaken rendition and subsequent torture of a Canadian citizen in 2002.

Tamir Israel (of CIPPIC) and I wrote an article for Just Security following these revelations. We focus on the agency’s efforts, and failure, to suppress the Canadian identity information that is collected as part of CSE’s ongoing intelligence activities, and on the broader implications of erroneous information sharing. Specifically, we examine how such sharing can have dire consequences for the lives of those who are inappropriately targeted by Western allies as a result, and how it has already led to the torture of a Canadian citizen. We conclude by arguing that the collection and sharing of such information raises questions about the ongoing viability of the agency’s old-fashioned mandates, which bifurcate Canadian and non-Canadian persons’ data, in light of the integrated nature of contemporary communications systems and of data exchanges with foreign partners.

Read the Article

Categories: Quotations

2014.1.2

While policies may vary, the sensitive nature of the data produced does not. Traffic data analysis generates more sensitive profiles of an individual’s actions and intentions, arguably more so than communications content. In a communication with another individual, we say what we choose to share; in a transaction with another device, for example, search engines and cell stations, we are disclosing our actions, movements, and intentions. Technology-neutral policies continue to regard this transactional data as POTS traffic data, and accordingly apply inadequate protections.

This is not faithful to the spirit of updating laws for new technology. We need to acknowledge that changing technological environments transform the policy itself. New policies need to reflect the totality of the new environment.

Alberto Escudero-Pascual and Ian Hosein, “Questioning Lawful Access to Traffic Data”
Categories: Links

Prism threatens ‘sovereignty’ of all EU data

Caspar Bowden has been aggressively lobbying the EU Parliament over the implications of the FISA Amendments Act for some time. In short, the Act authorizes capturing data from ‘Electronic Communications Service Providers’ when the data possesses foreign intelligence value. The result is that business and personal information, in addition to information directly concerning ‘national security’, can legitimately be collected by the US National Security Agency. (For more, see pages 33-35 of this report.)

Caspar’s most recent article outlines the unwillingness of key members of the EU Parliament to take the implications of American surveillance seriously … at least until the issue ceased to be one for policy wonks and became one of politics. Still, the Parliament has yet to retract recent amendments that would detrimentally affect the privacy rights of European citizens: it will be interesting to see whether the politics of the issue reverse the parliamentarians’ decisions or whether lobbying by corporate interests wins the day.

Categories: Links, Writing

Notes EM: Fiction vs reality

evgenymorozov:

Tim Wu on my book:

Too much assault and battery creates a more serious problem: wrongful appropriation, as Morozov tends to borrow heavily, without attribution, from those he attacks. His critique of Google and other firms engaged in “algorithmic gatekeeping” is basically taken from Lessig’s first book, “Code and Other Laws of Cyberspace,” in which Lessig argued that technology is necessarily ideological and that choices embodied in code, unlike law, are dangerously insulated from political debate. Morozov presents these ideas as his own and, instead of crediting Lessig, bludgeons him repeatedly. Similarly, Morozov warns readers of the dangers of excessively perfect technologies as if Jonathan Zittrain hadn’t been saying the same thing for the past 10 years. His failure to credit his targets gives the misimpression that Morozov figured it all out himself and that everyone else is an idiot.

What my book actually says:

Alas, Internet-centrism prevents us from grasping many of these issues as clearly as we must. To their credit, Larry Lessig and Jonathan Zittrain have written extensively about digital preemption (and Lessig even touched on the future of civil disobedience). However, both of them, enthralled with the epochalist proclamations of Internet-centrism, seem to operate under the false assumption that digital preemption is mostly a new phenomenon that owes its existence to “the Internet,” e-books, and MP3 files. Code is law—but so are turnstiles. Lessig does note that buildings and architecture can and do regulate, but he makes little effort to explain whether the possible shift to code-based regulation is the product of unique contemporary circumstances or merely the continuation of various long-term trends in criminological thinking.

As Daniel Rosenthal notes in discussing the work of both Lessig and Zittrain, “Academics have sometimes portrayed digital preemption as an unfamiliar and novel prospect… In truth, digital preemption is less of a revolution than an extension of existing regulatory techniques.” In Zittrain’s case, his fascination with “the Internet” and its values of “openness” and “generativity,” as well as his belief that “the Internet” has important lessons to teach us, generates the kind of totalizing discourse that refuses to see that some attempts to work in the technological register might indeed be legitimate and do not necessarily lead to moral depravity.

One of the theoretical frames that I use in my dissertation is path dependency. Specifically, I consider whether early decisions with regard to Internet standards (small, early decisions) actually lead to systems that are challenging to significantly change after systems relying on those protocols are widely adopted (i.e. big, late decisions aren’t that influential). Once systems enjoy a network effect and see high levels of sunk capital, do they tend to be maintained even if something new comes along that is theoretically ‘superior’?

I mention this background in path dependency because a lot of the really interesting work in this field was written well before Lessig’s and Zittrain’s popular books (yes: there’s still excellent stuff being written today, but the core literature predates Lessig and Zittrain). There’s also an extensive literature in public policy, with one of the more popular works being Tools of Government (1983), in which Hood outlines how detectors and effectors work for institutions. Hood’s work, in part, attends to how built infrastructure is used to facilitate governance; by transforming the world itself into a regulatory field (e.g. turnstiles, bridges and roads that possess particular driving characteristics, and so forth), the world becomes embedded with an aesthetic of regulation. This aesthetic can significantly ‘nudge’ the actions we choose to take. This thematic of ‘regulation by architecture’ is core to Lessig’s and Zittrain’s arguments, though there are no references to the ‘core books or sources’ that really launched some of this work in the academy.

This said, while there are predecessors that Lessig and Zittrain probably ought to have spent more time writing about, such complaints are true of practically any book or work that is designed to be read by the public, policy makers, and academics. The real ‘magic’ of Zittrain and Lessig (and Morozov!) is that their works speak to a wide audience: their books are not, I would argue, written just for academics. As a result, some of the nuance or specificity you’d expect in a $150 book that’s purchased by the other 10 specialists in your field is missing. And that’s okay.

Morozov’s key complaint, as I understand it, is that really important problems arise from how these authors’ books are perceived as something they are not. In other words, many people will not understand that many of the more populist books on ‘the Internet’ are written by people with specific political intentions, who want their books to affect very particular public policy issues, and that, as a consequence, these books and other writings have to be read as political works instead of ‘dispassionate academic works’.* Their writings act as a kind of trojan horse through which particular ways of thinking about the world become ‘naturalized’, and the authors are ‘first’ to write on topics largely because of their skill in writing about the present while avoiding elongated literature reviews on the past.

I can appreciate Morozov’s concerns about how language frames issues, and about the (sometimes) sloppy thinking of these authors. And I can appreciate Morozov’s critics, who see him as being blunt and as often similarly failing to ‘show all of his work’. I hope, however, that the public doesn’t see the very public conflicts between Morozov and his colleagues as an academic dispute carried out in the open so much as an unmasking and contestation of divergent political conceptions of the Internet and of literature more generally.

——-

* I write this on the basis of having attended conferences with American legal scholars working in this area. Papers and reports are often written with specific members of federal sub-committees, Congressional and Senate assistants, or federal/state justices in mind. In effect, these authors are writing for people in power to change specific laws and policies. As such you should always hunt for what is ‘really going on’ when reading most popular American legal scholarship.


Categories: Quotations

2013.4.13

Lawyers are trained in reading, understanding, interpreting and advising on laws and legal compliance programs, and defending their clients from litigants and regulators. Privacy laws, everywhere in the world, are vague, so they leave much room for legal interpretations. The lawyers’ skill set is becoming more and more central to the role of privacy leadership. Moreover, lawyers benefit from attorney-client privileged communications internally, which is becoming an absolutely essential mechanism for privacy lawyers to have deep, unfettered, unfiltered exchanges of information and advice with their clients.

Of course, non-legal disciplines will always play an essential role in safeguarding privacy at companies, e.g., the vital role played by security engineers. Privacy will always be a cross-disciplinary project. I’m not saying that the rise of the lawyer-privacy-leader is necessarily the best thing for “privacy”. Yet in the face of rampant litigation, discovery orders, vague laws, political debates, regulatory actions, threats of billion dollar fines, companies will be looking to their privacy lawyers for a lot more than drafting a privacy policy. It’s a great profession, if you like stretch goals.

Peter Fleischer, “Stretch Goals for Privacy Lawyers”
Categories: Links, Writing

Privacy Policies Don’t Need to Be Obtuse

Peter Fleischer has a good summary piece on the (miserable) state of online privacy policies today. As he writes:

Today, privacy policies are being written to try to do two contradictory things.  Like most things in life, if you try to do two contradictory things at the same time, you end up doing neither well.  Here’s the contradiction:  should a privacy policy be a short, simple, readable notice that the average end-user could understand? Or should it be a long, detailed, legalistic disclosure document written for regulators?  Since average users and expert regulators have different expectations about what should be disclosed, the privacy policies in use today largely disappoint both groups.

(…)

The time has come for a global reflection on what, exactly, a privacy policy should look like.  Today, there is no consensus.  I don’t just mean consensus amongst regulators and lawyers.  My suggestion would be to start by doing some serious user-research, and actually ask Johnny and Jean and Johann.

I entirely, fully, wholeheartedly agree: most policies today are absolute garbage. I actually read a lot of them – and research on social media policies will be online and available soon! – and they are, more often than not, an elaborate act of obfuscation rather than something that explains, specifically and precisely, what a service does or is doing with the data that is collected.

The thing is, these policies don’t need to be as bad as they are. It really is possible to bridge ‘accessible’ and ‘legalese’, but doing so takes time, care, and effort.

And fewer lawyers.

As a good example of how this can be done, check out how Tunnelbear has written their privacy policy: it’s reasonably accessible and lacks a lot of the ‘weasel phrases’ you’ll find in most privacy policies. Even better, read the company’s Terms of Service document; I cannot express how much ‘win’ is captured in their simultaneously legal and layperson-friendly disclosure of how and why their service functions as it does.

Categories: Quotations

2013.2.21

The 27 regulators, led by France’s CNIL, gave Google three to four months to make changes to its privacy policy — or face “more contentious” action. In a statement on its website today, the CNIL said that four months on from that report Google has failed “to come into compliance” so will now face additional action.

“On 18 February, the European authorities find that Google does not give a precise answer and operational recommendations. Under these circumstances, they are determined to act and pursue their investigations,” the CNIL said in its statement (translated from French with Google Translate).

According to the statement, the European regulators intend to set up a working group, led by CNIL, to “coordinate their enforcement action” against Google — with the working group due to be established before the summer. An action plan for tackling the issue was drawn up at a meeting of the regulators late last month, and will be “submitted for validation” later this month, they added.

Natasha Lomas, “Google’s Consolidated Privacy Policy Draws Fresh Fire In Europe”
Categories: Quotations

2013.2.18

The [intelligence] professionals’ task is therefore to keep judgements anchored to what the intelligence actually reveals (or does not reveal) and keep in check any predisposition of policy-makers to pontificate … of trying to make nasty facts go away by the magical process of emitting loud noises in the opposite direction.

Sir David Omand, “Reflections on Secret Intelligence”
Categories: Writing

Policy Matters Too

Nadim Kobeissi recently wrote about Do Not Track (DNT), and effectively restated the engineering-based reasons why the proposed standard will fail. The standard, generally, would let users set their web browser to signal to websites that they do not want to be tracked (for example, by having tracking cookies deposited on their computers). Specifically, Nadim wrote:

Do Not Track is not only ineffective: it’s dangerous, both to the users it lulls into a false belief of privacy, and towards the implementation of proper privacy engineering practice. Privacy isn’t achieved by asking those who have the power to violate your privacy to politely not do so — and thus sacrifice advertising revenue — it’s achieved by implementing client-side preventative measures. For browsers, these are available in examples such as EFF’s HTTPS Everywhere, Abine’s DoNotTrackMe, AdBlock, and so on. Those are proper measures from an engineering perspective, since they attempt to guard your privacy whether the website you’re visiting likes it or not.

He is writing as an engineer and, from that perspective, he’s not wrong. Unfortunately, that engineering perspective entirely misses the broader implications of DNT: specifically, the standard lets users proactively inform a site that they do not consent to being tracked. This proactive declaration can activate a whole host of privacy protections that are established under law; a website does not have to respect an individual’s declaration for that declaration to be legally actionable.
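Mechanically, the signal is trivially simple: a participating browser attaches a “DNT: 1” header to each request, and it is entirely up to the receiving site to decide what, if anything, to do with it. As a rough illustration (not drawn from Nadim’s post or from the DNT specification itself), here is a minimal sketch of what honouring the header server-side could look like; the use of Flask, the route, and the cookie name are all hypothetical choices for the example:

```python
# Minimal, illustrative sketch of a server honouring the DNT request header.
# Flask, the route, and the cookie name are assumptions for the example only.
from flask import Flask, make_response, request

app = Flask(__name__)

@app.route("/")
def index():
    resp = make_response("Hello, visitor.")
    # "DNT: 1" means the user has asked not to be tracked; "0" means they
    # consent; an absent header means no preference has been expressed.
    if request.headers.get("DNT") != "1":
        # Only set a (purely illustrative) analytics cookie when the user
        # has not objected to being tracked.
        resp.set_cookie("analytics_id", "example-identifier")
    return resp

if __name__ == "__main__":
    app.run()
```

The point is that the header is purely declaratory: the check above only matters if the site operator chooses to implement it, which is exactly why the legal and regulatory backstop is where DNT’s real force lies.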

Now, will most users have any clue whether their preferences are being respected? No, of course not. This is generally true of any number of laws. However, advocates, activists, academic researchers, and lawyers smelling class-action lawsuits will monitor whether websites are intentionally dismissing users’ choice to refuse being tracked. As successful regulatory and legal challenges are mounted, website owners will have to engage in a rational calculus: is the intelligence or money gained from tracking worth the potential regulatory or legal risk? If initial punishments are high enough, then major players may decide that it is economically rational to abide by DNT headers, whereas smaller sites (perhaps with less to lose, or less knowledge of DNT) may continue to track regardless of what a browser declares to the web server. If we’re lucky, these large players will include analytics engine providers as well as advertising networks.

Now, does this mean that DNT will necessarily succeed? No, not at all. The process is absolutely mired in confusion and problems – advertisers are trying to water down what DNT ‘means’, and some browser manufacturers are making things harder by trying to be ‘pro-privacy’ and enabling DNT as a default setting in their browsers. Moreover, past efforts to technically signal users’ privacy preferences have failed (e.g. P3P), and chances are good that DNT will fail as well. However, the fact that there are technical weaknesses associated with the standard does not mean that the protocol, more broadly, will fail: what is coded into standards can facilitate subsequent legal and regulatory defences of users’ privacy, and those defences may significantly improve users’ privacy online.