Categories
Links, Writing

Thoughts on the Implications of ‘Secret Surveillance’

In a recent article on secret surveillance, Michael Geist notes three key issues with the secretive intelligence-gathering activities that are coming to light. Specifically:

First, the element of trust has been severely compromised. Supporters of the current Internet governance model frequently pointed to Internet surveillance and the lack of accountability within countries like China and Russia as evidence of the danger of a UN-led model. With the public now aware of the creation of a massive, secret U.S.-backed Internet surveillance program, the U.S. has ceded the moral high ground on the issue.

This is a point that academics have warned about for the past decade: when/if it becomes apparent that the US and other Western governments aren’t ‘fit to govern’ critical Internet infrastructure, foreign states will increasingly agitate to influence network design. Still, while the US government’s mass surveillance systems may accelerate the rate at which governments take an ‘interest’ in critical infrastructure design and deployment, this isn’t a novel path or direction: governments throughout the world have been extending their surveillance capacities for years, often pointing to the US’ previously disclosed behaviours as justification. The consequence of the recent high-profile articles on NSA surveillance has arguably been to ensure that the ‘moral high ground’ cannot be reclaimed; indeed, that ground was lost quite some time ago.

Geist continues:

Second, as the scope of the surveillance becomes increasingly clear, many countries are likely to opt for a balkanized Internet in which they do not trust other countries with the security or privacy of their networked communications. This could lead to new laws requiring companies to store their information domestically to counter surveillance of the data as it crosses borders or resides on computer servers located in the U.S. In fact, some may go further by resisting the interoperability of the Internet that we now take for granted.

Again, we’ve seen these kinds of laws crop up for many years now. However, the countries that have been engaging in such actions are (generally) regarded as ‘foreign’ by individuals in North America. So, when Iran, India, China, or other countries impose localization laws, those nations are seen as ‘rogue’; missing from much of the critique, however, has been how ‘domestic’ governments have sought to contain or delimit the flow of information. Admittedly, Canada, the UK, and America largely lack ‘data localization’ laws, but all of those jurisdictions do have ‘data limitation’ laws, insofar as some information is blocked at the ISP level. In effect, while a hardware balkanization of the Internet might accelerate, the content balkanization of the Internet has been ongoing for over a decade.

Geist concludes:

Third, some of those same countries may demand similar levels of access to personal information from the Internet giants. This could create a “privacy race to the bottom”, where governments around the world create parallel surveillance programs, ensuring that online privacy and co-operative Internet governance is a thing of the past.

This is an area that will be particularly interesting to watch. In terms of content localization, there are laws around the world limiting what citizens in various nations can access. While such localization laws were initially seen as heralding the end of the Internet, this has not been the case: save in particularly censorious regimes, local norms have guided what should(n’t) be accessible (e.g. child pornography, Nazi symbology and paraphernalia, etc.). At issue is that efforts to ‘block’ certain content often do not work well, and such efforts also tend to displace efforts to legally punish those responsible for the content in the first place. In effect, the former problem speaks to the limitations of blocking any content effectively and without accidental overreach, and the latter to poor international cooperation between policing agencies in actually acting against the producers of obviously nefarious content (e.g. child pornography). A minimal sketch of how such blocking typically works at the DNS layer follows below.
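
To make the blocking problem concrete, here is a purely illustrative Python sketch of ISP-level blocking implemented at the DNS layer. The blocklist and domain names are hypothetical, and this is a toy model rather than any particular ISP’s system:

```python
import socket

# Hypothetical regulator-supplied blocklist of hostnames.
BLOCKLIST = {"bad-example.test"}

def censoring_resolve(hostname: str) -> str:
    """Resolve a hostname the way a censoring ISP resolver might:
    listed names get a sinkhole address, everything else resolves
    normally through the operating system's resolver."""
    if hostname in BLOCKLIST:
        return "0.0.0.0"  # sinkhole: the browser connects to nothing
    return socket.gethostbyname(hostname)

print(censoring_resolve("bad-example.test"))  # -> 0.0.0.0 (blocked)
print(censoring_resolve("example.com"))       # resolves as usual
```

Both limitations fall straight out of the sketch: blocking by hostname takes down everything that shares the name (overreach), while a user who points their computer at an offshore resolver, or connects to the server’s IP address directly, bypasses the block entirely – and the content’s producer remains untouched.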

The ability of nations to demand strong data/server/service localization requirements will, I suspect, be predicated on economic size and the relative ‘value’ of a nation’s citizens to a particular company. So, a very large multinational with ‘boots on the ground’ and a large subscriber base in a profitable nation-state may be more likely to comply with localization requirements than with a similar demand from a small or economically insignificant state in which the company lacks ‘boots’. Moreover, the potential for certain services to no longer be accessible – say, Gmail, if Google refused to comply with a given nation’s localization laws – could lead citizens to turn on their own government on the basis that the services are needed for ongoing, daily, commercial or personal activity.

In effect, I think that while Geist’s third point is arguably the most significant, it’s also the one we’re furthest from reaching. Admittedly, there are some isolated cases of localization requirements now (e.g. India), but the ability to successfully impose such requirements is as much based on the attractiveness of a given market as on anything else. So, there could actually be a division between the ‘localization countries’: ones that are ‘big enough’ to commercially demand compliance versus ones that are ‘too small’ to successfully impose their sovereign wills on Internet multinationals. How any such division lines up, and the political and economic rationales for all involved, will be fascinating to watch, document, and explore in the coming years!

Categories
Quotations

2013.8.7

[Privacy] has to be institutional; it also has to do with social conventions that we adopt. The reason there isn’t a technological solution is that the ability to infer information from partial information is extremely powerful — you can take information which appears to be anonymous and (extrapolate identity). It has to be a set of conventions that we adopt, either a legal framework or social conventions.

Technology is racing ahead so quickly and we are so eager to embrace it with our mobiles and everything else that we don’t fully appreciate the side effects. When we put photos on the web and other people tag them, we create (problems) for people who just happen to be in the image. They get caught… we learned this with Street View.

There are a lot of things that we do everyday that we think are innocent… but there are cascades of things that happen. I don’t think we’ve figured out what the right intuitive set of social conventions should be in order to protect privacy. We’re going to have to learn by making mistakes.

This can’t be just a national issue because the internet is everywhere. The consequence of that is it causes us to confront head-on this problem of global issues, of frameworks, legal frameworks, social conventions and the like.

Vinton Cerf, “Internet inventor Vint Cerf: No technological cure for privacy ills”
Categories
Links, Writing

Another ‘Victory’ for the Internet of Things

Researchers have found, once again, that sensitive systems have been placed on the Internet without even the most basic of security precautions. The result?

Analyzing a database of a year’s worth of Internet scan results [H.D. Moore]’s assembled, known as Critical.io, as well as other data from the 2012 Internet Census, Moore discovered that thousands of devices had no authentication, weak or no encryption, default passwords, or had no automatic “log-off” functionality, leaving them pre-authenticated and ready to access. Although he was careful not to actually tamper with any of the systems he connected to, Moore says he could have in some cases switched off the ability to monitor traffic lights, disabled trucking companies’ gas pumps or faked credentials to get free fuel, sent fake alerts over public safety system alert systems, and changed environmental settings in buildings to burn out equipment or turn off refrigeration, leaving food stores to rot.

Needless to say, Moore’s findings are telling insofar as they reveal that engineers responsible for maintaining our infrastructures are often unable to secure those infrastructures from third parties. Fortunately, it doesn’t appear that a hostile third party has significantly taken advantage of poorly-secured and Internet-connected equipment, but it’s really only a matter of time until someone does attack this infrastructure to advance their own interests, or simply to reap the lulz.
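
To illustrate just how low the bar is, below is a minimal Python sketch of the kind of pre-authentication ‘banner grab’ that scanning projects like Moore’s automate at Internet scale. The address and port are hypothetical, and probes like this should only ever be pointed at devices you own:

```python
import socket
from typing import Optional

def grab_banner(host: str, port: int, timeout: float = 3.0) -> Optional[str]:
    """Connect to host:port and return whatever the service volunteers
    before any authentication step. Legacy industrial and embedded gear
    frequently answers with a full, usable control interface."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            return sock.recv(1024).decode(errors="replace")
    except OSError:
        return None  # closed, filtered, or unreachable

# Example: probe a device on a network you control (address is made up).
banner = grab_banner("192.168.1.50", 23)  # port 23: Telnet, common on legacy gear
if banner:
    print("Service answered before asking for credentials:", banner.strip())
```

When a device answers like this without a password prompt (or with a vendor default), everything Moore describes follows from ordinary, unprivileged network access.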

Findings like Moore’s will only become more common as more and more systems are connected to the Internet as part of the ‘Internet of Things’. It remains to be seen whether vulnerabilities will routinely be promptly resolved, especially in legacy equipment that carries significant sunk costs and has limited capital for ongoing maintenance. Given the cascading nature of failures in an interconnected and digitized world, failing to secure our infrastructure means that, along with natural disasters, we may get to ‘enjoy’ cyber disasters that are harder to positively identify and harder to remedy when/if they are identified.

Categories
Quotations

2013.7.10

… the cultural, political, and privacy concerns raised by the new business alliances of search engines, social networks, and carriers cannot be translated into traditional economic analysis. They raise questions about the type of society we want to live in–a holistic inquiry that cannot be reduced to the methodological individualism of economics.

Frank Pasquale. (2010). “Beyond Innovation and Competition: The Need for Qualified Transparency in Internet Intermediaries.” Northwestern University Law Review 104(1).
Categories
Quotations

2013.7.8

…PETs are a technological fix to a sociological problem … [they] introduce another dimension of social hierarchy into cyberspace, not one that aggravates the divide between the information rich and poor, but between those with technological savvy to assert their personal preferences and those who do not possess such expertise…An over-emphasis on PETs leaves the surveillance imperatives being designed into information infrastructures unscathed, while fostering particularistic struggles over the uses of technologies.

Dwayne Winseck, “Netscapes of power: convergence, network design, walled gardens, and other strategies of control in the information age”
Categories
Links, Writing

Will the BC Services Card Be Used for Online Voting?

Last year Rob Shaw wrote a piece for the Times Colonist about online voting in British Columbia. (Online voting is a Bad Idea, by the way, for reasons that are expounded elsewhere.) At the very end of his article, we read:

B.C.’s flirtation with online voting coincides with changes to its information and privacy laws last year that paved the way for high-tech identity cards.

The government has said people will one day be able to use the cards to verify their identity and access Internet-based government services, including, potentially, online voting.

No government document released under freedom of information laws that I’ve read has identified voting as a driver of the card. However, this isn’t an indictment of Shaw’s reporting but of the government’s unwillingness to fully disclose documents pertaining to the Services Card.

To be clear: there is no good reason to believe that the Services Card will be particularly helpful in combating the core problems of online voting. It won’t actually verify that the person associated with the Card is the one casting the ballot. It won’t ensure that the person is voting in a non-coerced manner. It won’t guarantee that malware hasn’t compromised the computer to ‘vote’ for whomever the malware’s author prefers.

The Services Card is (seemingly) a solution looking for a problem. Voting is not a problem to which the Card is the solution.

Categories
Videos

The Internet: A Warning from History

Categories
Links, Writing

Notes EM: Fiction vs reality

evgenymorozov:

Tim Wu on my book:

Too much assault and battery creates a more serious problem: wrongful appropriation, as Morozov tends to borrow heavily, without attribution, from those he attacks. His critique of Google and other firms engaged in “algorithmic gatekeeping” is basically taken from Lessig’s first book, “Code and Other Laws of Cyberspace,” in which Lessig argued that technology is necessarily ideological and that choices embodied in code, unlike law, are dangerously insulated from political debate. Morozov presents these ideas as his own and, instead of crediting Lessig, bludgeons him repeatedly. Similarly, Morozov warns readers of the dangers of excessively perfect technologies as if Jonathan Zittrain hadn’t been saying the same thing for the past 10 years. His failure to credit his targets gives the misimpression that Morozov figured it all out himself and that everyone else is an idiot.

What my book actually says:

Alas, Internet-centrism prevents us from grasping many of these issues as clearly as we must. To their credit, Larry Lessig and Jonathan Zittrain have written extensively about digital preemption (and Lessig even touched on the future of civil disobedience). However, both of them, enthralled with the epochalist proclamations of Internet-centrism, seem to operate under the false assumption that digital preemption is mostly a new phenomenon that owes its existence to “the Internet,” e-books, and MP3 files. Code is law—but so are turnstiles. Lessig does note that buildings and architecture can and do regulate, but he makes little effort to explain whether the possible shift to code-based regulation is the product of unique contemporary circumstances or merely the continuation of various long-term trends in criminological thinking.

As Daniel Rosenthal notes in discussing the work of both Lessig and Zittrain, “Academics have sometimes portrayed digital preemption as an unfamiliar and novel prospect… In truth, digital preemption is less of a revolution than an extension of existing regulatory techniques.” In Zittrain’s case, his fascination with “the Internet” and its values of “openness” and “generativity,” as well as his belief that “the Internet” has important lessons to teach us, generates the kind of totalizing discourse that refuses to see that some attempts to work in the technological register might indeed be legitimate and do not necessarily lead to moral depravity.

One of the theoretical frames that I use in my dissertation is path dependency. Specifically, I consider whether early decisions with regard to Internet standards (small, early decisions) actually lead to systems that are challenging to significantly change after systems relying on those protocols are widely adopted (i.e. big, late decisions aren’t that influential). Once systems enjoy a network effect and see high levels of sunk capital, do they tend to be maintained even if something new comes along that is theoretically ‘superior’?
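
As a toy illustration of that intuition (a sketch, not a model from the dissertation itself), consider a simulation in the spirit of W. Brian Arthur’s increasing-returns models of technology adoption, where each newcomer is more likely to pick whichever standard already has more users:

```python
import random

def adoption_run(steps: int = 10_000, returns: float = 1.5, seed: int = 0) -> dict:
    """Two competing standards, A and B, each start with one adopter.
    Each newcomer picks a standard with probability proportional to
    (current adopters) ** returns; with returns > 1, network effects
    amplify early, essentially random choices into lock-in."""
    rng = random.Random(seed)
    adopters = {"A": 1, "B": 1}
    for _ in range(steps):
        wa, wb = adopters["A"] ** returns, adopters["B"] ** returns
        adopters["A" if rng.random() < wa / (wa + wb) else "B"] += 1
    return adopters

# Most runs end almost entirely locked in to one standard, even though
# neither was 'superior' at the outset: the big, late decisions barely matter.
for trial in range(3):
    print(adoption_run(seed=trial))
```

Which standard wins varies from run to run; what is stable is that one of them wins and stays won. That persistence, despite later and ‘better’ alternatives, is the lock-in at issue.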

I mention this background in path dependency because a lot of the really interesting work in this field was written well before Lessig’s and Zittrain’s popular books (yes: there’s still excellent stuff being written today, but the core literature predates Lessig and Zittrain). There’s also an extensive literature in public policy, with one of the more popular works being Christopher Hood’s The Tools of Government (1983), which outlines how detectors and effectors work for institutions. Hood’s work, in part, attends to how built infrastructure is used to facilitate governance; by transforming the world itself into a regulatory field (e.g. turnstiles, bridges and roads that possess particular driving characteristics, and so forth), the world becomes embedded with an aesthetic of regulation. This aesthetic can significantly ‘nudge’ the actions we choose to take. This thematic of ‘regulation by architecture’ is core to Lessig’s and Zittrain’s arguments, though there are no references to the ‘core books or sources’ that really launched some of this work in the academy.

This said, while there are predecessors that Lessig and Zittrain probably ought to have spent more time writing about, such complaints are true of practically any book that is designed to be read by the public and policy makers as well as academics. The real ‘magic’ of Zittrain and Lessig (and Morozov!) is that their works speak to a wide audience: their books are not, I would argue, written just for academics. As a result, some of the nuance or specificity you’d expect in a $150 book that’s purchased by the other 10 specialists in your field is missing. And that’s okay.

Morozov’s key complaint, as I understand it, is that really important problems arise when these authors’ books are perceived as something they are not. In other words, many people will not understand that many of the more populist books on ‘the Internet’ are written by people with specific political intentions, who want their books to affect very particular public policy issues, and that, as a consequence, these books and other writings have to be read as political works instead of ‘dispassionate academic works’.* Their writings act as a kind of trojan horse through which particular ways of thinking about the world become ‘naturalized’, and the authors are ‘first’ to write on topics largely because of their skill in writing about the present while avoiding elongated literature reviews of the past.

I can appreciate Morozov’s concerns about how language frames issues, and about the (sometimes) sloppy thinking of these authors. And I can appreciate Morozov’s critics, who see him as blunt and as often similarly failing to ‘show all of his work’. For the public, however, I hope that the very public conflicts between Morozov and his colleagues are seen not so much as an academic dispute conducted in public as an unmasking and contestation of divergent political conceptions of the Internet, and of literature more generally.

——-

* I write this on the basis of having attended conferences with American legal scholars working in this area. Papers and reports are often written with specific members of federal sub-committees, Congressional and Senate assistants, or federal/state justices in mind. In effect, these authors are writing for people in power in order to change specific laws and policies. As such, you should always hunt for what is ‘really going on’ when reading most popular American legal scholarship.


Categories
Links, Writing

Privacy Policies Don’t Need to Be Obtuse

Peter Fleischer has a good summary piece on the (miserable) state of online privacy policies today. As he writes:

Today, privacy policies are being written to try to do two contradictory things.  Like most things in life, if you try to do two contradictory things at the same time, you end up doing neither well.  Here’s the contradiction:  should a privacy policy be a short, simple, readable notice that the average end-user could understand? Or should it be a long, detailed, legalistic disclosure document written for regulators?  Since average users and expert regulators have different expectations about what should be disclosed, the privacy policies in use today largely disappoint both groups.

(…)

The time has come for a global reflection on what, exactly, a privacy policy should look like.  Today, there is no consensus.  I don’t just mean consensus amongst regulators and lawyers.  My suggestion would be to start by doing some serious user-research, and actually ask Johnny and Jean and Johann.

I entirely, fully, wholeheartedly agree: most policies today are absolute garbage. I actually read a lot of them – and research on social media policies will be online and available soon! – and they are more often an elaborate act of obfuscation than something that explains, specifically and precisely, what a service does or is doing with the data it collects.

The thing is, these policies don’t need to be as bad as they are. It really is possible to bridge ‘accessible’ and ‘legalese’ but doing so takes time, care, and effort.

And fewer lawyers.

As a good example of how this can be done, check out how TunnelBear has written its privacy policy: it’s reasonably accessible and lacks a lot of the ‘weasel phrases’ you’ll find in most privacy policies. Even better, read the company’s Terms of Service document; I cannot express how much ‘win’ is captured in their simultaneously legal and layperson-friendly disclosure of how and why their service functions as it does.

Categories
Quotations

2013.4.5

But perhaps the most important recent development at Facebook is one that has no immediate bearing on the company’s finances. In October, Brad Smallwood, Facebook’s head of monetization analytics—a convoluted, five-dollar title that obscures his importance at the company—took to the stage at a marketers’ conference to announce that Facebook had formed a partnership with Datalogix, a market-analytics firm with purchasing information on about 70 million American homes. Under the agreement between the companies, Facebook would be able to measure whether a user’s exposure to an ad on the site was correlated with that person’s making a purchase at a store.

That type of information is essential for Facebook. Put simply, many corporations are still mired in click-through data, a standard of analysis that fails to fully reflect purchasing activity generated by online advertising. “The click is a terrible predictor of off-line sales,” Smallwood says. “Every research company knows that’s true.”

Still, Smallwood acknowledges, Wall Street continues to view clicks as the critical measure of online-ad performance. “At some level, people have gotten used to the click, and they still want to see the click when they deal with online,” he says. “It’s kind of our job to explain that that is not necessarily the best measure.”

The numbers from the early studies are powerful. Some 70 percent of the campaigns that were measured showed sales equal to three times or more the amount spent for the ads; 49 percent brought in at least five times what the ad had cost.

Kurt Eichenwald, “Facebook Leans In”