There are two types of laws in the U.S., each designed to constrain a different type of power: constitutional law, which places limitations on government, and regulatory law, which constrains corporations. Historically, these two areas have largely remained separate, but today each group has learned how to use the other’s laws to bypass their own restrictions. The government uses corporations to get around its limits, and corporations use the government to get around their limits.
This partnership manifests itself in various ways. The government uses corporations to circumvent its prohibitions against eavesdropping domestically on its citizens. Corporations rely on the government to ensure that they have unfettered use of the data they collect.
Here’s an example: It would be reasonable for our government to debate the circumstances under which corporations can collect and use our data, and to provide for protections against misuse. But if the government is using that very data for its own surveillance purposes, it has an incentive to oppose any laws to limit data collection. And because corporations see no need to give consumers any choice in this matter – because it would only reduce their profits – the market isn’t going to protect consumers, either.
2013.8.2
… it is obvious that for all the academic critique, ‘privacy’, as a concept, as a regime, as a set of policy instruments, and as a way to frame advocacy and activism, is not going to disappear. On the contrary, it displays a remarkable resilience as a way to regulate the processing of personal information by public and private organizations, and as a way for ‘privacy advocates’ to resist the excessive monitoring of human behaviour. Like it or not, privacy frames the ways that most ordinary people see the contemporary surveillance issues. Surveillance scholars have got to live with it.
Colin J. Bennett, “In Defence of Privacy: The concept and the regime”
2013.7.30
The idea that there is no problem with surveillance as long as you have nothing to hide simply points to the complacency of the liberal view of freedom by contrast with the republican one. The liberal thinks that you are free so long as you are not coerced. The republican agrees, of course, that if you are coerced then you are not free. But freedom for the republican consists not in being free from coercion in respect of some action, but rather in being free from the possibility of coercion in respect of it.
Quentin Skinner, “Liberty, Liberalism and Surveillance: a historic overview”
In an interesting bit of news, it seems we can now state with certainty that the NSA spied on a New Zealand journalist at the behest of the New Zealand government. The government has apparently classified journalists alongside foreign intelligence services and ‘organizations with extreme ideologies’ (read: terrorists). The government’s defence security staff “viewed investigative journalists as ‘hostile’ threats requiring ‘counteraction’. The classified security manual lists security threats, including ‘certain investigative journalists’ who may attempt to obtain ‘politically sensitive information’.”[1]
So, while the information about the surveillance is shocking in its own right, there is also an important tidbit of information that can be derived from the US intelligence services’ actions: despite the supposedly sacrosanct prohibition against the Five Eyes partners spying on one another, this prohibition was broken in this instance. Though Canadian experts have previously stated that such surveillance of Five Eyes partners would be an extreme exception, it’s striking that surveillance mechanisms designed to counter the likes of the FSB are being brought to bear on investigative journalists. That the NSA and other American intelligence services turned their ‘ears’ towards a journalist at the New Zealand government’s behest suggests that, despite protestations to the contrary, ‘friendly’ intelligence services do ‘help’ one another spy on people and groups that domestic intelligence services are prohibited from monitoring for either legal or technical reasons.
Reasonable people can disagree on how and why intelligence services operate. However, the routine (mis)information that has been put forward by Western agencies concerning government spying has significantly undermined any foundation for a genuine democratic debate to arise around such spying. When the United States’ Director of National Intelligence asserts that he was providing the “least untruthful” answers to elected officials questioning dragnet surveillance, and supposed ‘red lines’ are being crossed in secret to target journalists tasked with providing truthful reporting to citizens, then the ability to support or even reform intelligence practices is undermined: why shouldn’t we, the people, radically and unilaterally curtail surveillance practices if the same services and their administrative officers won’t truthfully disclose even their most basic operational guidelines?
- I should note that, following the revelations that the NZ government was monitoring journalists and had classed them alongside foreign intelligence services and extremist organizations, the government has publicly denied these allegations. ↩
Washington’s Blog has an excellent, if somewhat long, post that outlines the significance of the NSA’s ‘three hop’ analysis. It collects and provides some numbers behind basic communications network analyses, and concludes that upwards of 2.5 million Americans could be caught up in the dragnet for each suspected terrorist, which “means that a mere 140 potential terrorists could lead to spying on all Americans. There are tens of thousands of Americans listed as suspected terrorists … including just about anyone who protests anything that the government or big banks do.”
Go read the full post. Some of the numbers are a bit speculative, but on the whole it does a good job of showing why ‘three hop’ analyses are so problematic: such analyses disproportionately collect data on American citizens on the basis of the most limited forms of suspicion. Such surveillance should be set aside because it constitutes an inappropriate infringement on individuals’ and communities’ reasonable expectations of privacy; it runs counter to how a well-ordered and properly functioning democracy should operate in theory and in practice.
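To see how the arithmetic behind those figures works, here is a rough back-of-the-envelope sketch in Python. The contact count is an assumption chosen purely for illustration (roughly 135 unique contacts per person, which happens to reproduce the post’s figures), not a number drawn from the NSA’s actual practice, and real contact networks overlap heavily, so treat the output as an upper bound rather than a measurement.

```python
# Rough estimate of how many people a 'three hop' analysis can reach.
# ASSUMPTION (illustrative only): each person has ~135 unique contacts and
# hops do not overlap. Real social graphs overlap heavily, so this is an
# upper bound, not a measurement of any agency's actual reach.

AVG_CONTACTS = 135            # assumed average number of unique contacts per person
US_POPULATION = 316_000_000   # approximate 2013 U.S. population

def three_hop_reach(seeds: int, contacts: int = AVG_CONTACTS, hops: int = 3) -> int:
    """Count people potentially swept up by expanding `hops` times from `seeds` suspects."""
    total = seeds
    frontier = seeds
    for _ in range(hops):
        frontier *= contacts   # each person on the frontier adds their own contacts
        total += frontier
    return total

print(f"From one suspect:  ~{three_hop_reach(1):,} people")      # roughly 2.5 million
print(f"From 140 suspects: ~{three_hop_reach(140):,} people")    # exceeds the U.S. population
print(f"U.S. population:    {US_POPULATION:,}")
```

The point the numbers make is structural: because the third hop dominates, the size of the dragnet grows roughly with the cube of the average contact count, which is why a tiny seed list can implicate a country-sized population.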
2013.7.19
Mark Zuckerberg runs a giant spy machine in Palo Alto, California. He wasn’t the first to build one, but his was the best, and every day hundreds of thousands of people upload the most intimate details of their lives to the Internet. The real coup wasn’t hoodwinking the public into revealing their thoughts, closest associates, and exact geographic coordinates at any given time. Rather, it was getting the public to volunteer that information. Then he turned off the privacy settings.
…
If the state had organized such an information drive, protestors would have burned down the White House. But the state is the natural beneficiary of this new “social norm.” Today, that information is regularly used in court proceedings and law enforcement. There is no need for warrants or subpoenas. Judges need not be consulted. The Fourth Amendment does not come into play. Intelligence agencies don’t have to worry about violating laws protecting the citizenry from wiretapping and information gathering. Sharing information “more openly” and with “more people” is a step backward in civil liberties. And spies, whether foreign or domestic, are “more people,” too.
Marc Ambinder and D.B. Grady. (2013). Deep State: Inside the Government Secrecy Industry. New Jersey: Wiley. Pp. 27.
A Brief Comment on ‘Metadata’
We live in environments that are pervasively penetrated by digital systems. We carry personalized tracking devices with us everywhere (i.e. mobile phones) that have increasingly sophisticated sensors embedded in them. We rely on Internet-based systems for travel, work, and play. Even our ‘landline’ communications are converted into digital code when we call a friend or family member.
Every one of the previously mentioned transactions generates ‘non-content’ data: whom we call, when, and for how long; which cellular towers we pass by; which (semi-)unique IP addresses are provided to the websites we visit; and so forth. These identifiers can be used to trace our movements, our practices, and whom we communicate with: they are often far more revealing than the content of our communications itself.
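As a small illustration of why such records are so revealing, here is a minimal sketch. The record format, field names, and values are invented for the example; real call detail records differ by carrier, but the inference is the same: a handful of timestamped, non-content records yields a contact graph and a coarse location trail.

```python
from collections import Counter

# Hypothetical call records: (caller, callee, start_time, seconds, tower_id).
# All names, times, and towers below are invented for illustration.
records = [
    ("alice", "clinic",     "2013-07-22 09:05",  780, "tower-downtown"),
    ("alice", "clinic",     "2013-07-29 09:02",  640, "tower-downtown"),
    ("alice", "lawyer",     "2013-07-29 12:30",  300, "tower-courthouse"),
    ("alice", "journalist", "2013-07-30 22:15", 1500, "tower-home"),
]

# Whom Alice talks to, and how often -- a contact graph, no content required.
contacts = Counter(callee for caller, callee, *_ in records if caller == "alice")

# Where Alice was when each call began -- a coarse location trail.
trail = [(start, tower) for _, _, start, _, tower in records]

print("Contact frequencies:", dict(contacts))
print("Location trail:", trail)
```

Repeated weekday-morning calls to a clinic, a lunchtime call to a lawyer placed near a courthouse, a late-night call to a journalist from home: none of it is ‘content’, yet together it sketches an intimate portrait.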
It is with this surveillance potential of metadata in mind that we need to reorient how we talk about such ‘non-content’ data. It has become depressingly common to see elected officials and other authorities state that “it’s just metadata” and that “we only use it for appropriate purposes.”
To the first statement: metadata can reveal incredibly sensitive information about individuals and about their community or communities. The collection and processing of such information therefore warrants a similar degree of care and concern as the processing of clearly personal information.
To the second statement: clarity around the collection and use of metadata is needed. Moreover, data cannot simply be collected en masse, with ‘appropriate purposes’ applied only to how the data is subsequently parsed. The very collection of data itself needs to be targeted, justified, and subject to significant oversight – arguably more oversight than ‘just’ the content of communications.
In a recent paper on metadata, Ontario Information and Privacy Commissioner Ann Cavoukian wrote:
we urge governments to adopt a proactive approach to securing the rights affected by intrusive surveillance programs. To protect privacy and liberty, any power to seize communications metadata must come with strong safeguards directly embedded into programs and technologies, that are clearly expressed in the governing legal framework. The purpose, scope, and duration of data collection must be strictly controlled. More robust judicial oversight, parliamentary or congressional controls, and systems capable of providing for effective public accountability should be brought to bear. The need for operational secrecy must not stand in the way of public accountability. Our essential need for privacy and the preservation of our freedoms are at stake.[1]
Commissioner Cavoukian is decidedly correct that data collection, use, and intent must be carefully controlled. However, I would go a step further than the Commissioner has in her call for additional parliamentary oversight and control. In Canada, unlike in the United States and the United Kingdom, there is no committee of parliamentarians with security clearances to oversee how our intelligence and security authorities operate. Presently, the Canadian system predominantly enjoys only Cabinet-level political oversight: we need a broader set of eyes, and eyes that are not mindful of the ruling government’s optics, to evaluate the appropriateness of what our intelligence and security services are up to. So, going beyond Commissioner Cavoukian’s comments, we actually need to modify parliament such that this oversight is even possible.
Reasonable people can disagree on the value of, and need for, national security and foreign intelligence services. Such disagreements should happen more prominently amongst parliamentarians and the public. However, there should be no disagreement that, in order to represent the public, at least some members of our legislative assemblies must know the extent of the government’s security and intelligence powers, capabilities, and practices.
Canada is a democracy and, as such, it is imperative that we establish a committee of parliamentarians to oversee how our security and spy agencies are collecting, using, and retaining the metadata and content associated with our communications. The actions that these agencies engage in are too significant to leave to Cabinet oversight alone.
- Ann Cavoukian. (2013). “A Primer on Metadata: Separating Fact from Fiction.” Office of the Information and Privacy Commissioner of Ontario. Available at: http://www.privacybydesign.ca/content/uploads/2013/07/Metadata.pdf. Pp. 10. Emphasis added. ↩
David Sirota of Salon has developed an excellent set of terms to speed along discussions about the contemporary American surveillance state. My own favorites include:
Least untruthful: A new legal doctrine that allows an executive branch official to issue a deliberate, calculated lie to Congress yet avoid prosecution for perjury, as long as the official is protecting the executive branch’s political interests. Usage example: Director of National Intelligence James Clapper avoided prosecution for perjury because he insisted that the blatant lie he told to Congress was merely the “least untruthful” statement he could have made.
And:
Modest encroachment: A massive, indiscriminate intrusion. Usage example: President Obama has deemed the NSA’s “collect it all” surveillance operation, which has captured 20 trillion information transactions and touches virtually all aspects of American life, a “modest encroachment” on citizens’ right to privacy.
The full listing of terms is depressingly cynical. However, the persistent – if often humorous – turn to cynicism may ultimately limit how politicians address and respond to Snowden’s surveillance revelations. What Snowden confirmed raises existential challenges to the potential to imagine, let alone actualize, a deliberative democratic state. The accompanying risk is that, rather than addressing such challenges head on, citizens may retreat into cynicism instead of engaging in the hard work of recuperating their increasingly authoritarian democratic institutions. We’re at a point where we need a more active, not a more withdrawn and bemused, citizen response to government excesses.
Worries about spectrum scarcity have prompted telecommunications providers to offer their subscribers femtocells, which are small, low-powered cellular base stations. Often, these stations are linked into subscribers’ existing 802.11 wireless or wired networks, and are used to relieve stress placed upon commercial cellular towers whilst simultaneously expanding cellular coverage. Questions have recently been raised about the security of these low-powered stations:
Ritter and his colleague, Doug DePerry, demonstrated for Reuters how they can eavesdrop on text messages, photos and phone calls made with an Android phone and an iPhone by using a Verizon femtocell that they had previously hacked.
…
They said that with a little more work, they could have weaponized it for stealth attacks by packaging all equipment needed for a surveillance operation into a backpack that could be dropped near a target they wanted to monitor.
While Verizon has issued a patch for its femtocells, there isn’t any reason to believe additional vulnerabilities won’t be found. Because the stations are placed in the hands of end-users, rather than kept under the carriers’ control like commercially deployed cellular towers, third-party security researchers and attackers can persistently test the cells until flaws are found. The consequence of this deployment strategy is that attackers will continue to find vulnerabilities that (further) weaken the security associated with cellular communications. Unfortunately, countering attackers will significantly depend on security researchers finding the same exploit(s) and reporting it/them to the affected companies. The likelihood of security researchers and attackers finding and exploiting the same flaws diminishes as more and more vulnerabilities are found in these devices.
In countries such as Canada, researchers must often first receive permission from the companies selling the femtocells before conducting their research: if there are any ‘digital locks’ around the technology, then researchers cannot legally investigate the code without prior corporate approval. Such restrictions don’t mean that researchers won’t conduct research, but they do mean that researchers’ discoveries may go unreported and thus unpatched. As a result, consumers will largely remain reliant on the companies responsible for the security deficits in the first place to identify and correct those deficits, absent the public pressure that results from researchers disclosing vulnerabilities.
In light of the high economic costs of such identification and patching processes, I’m less than confident that femtocell providers are going to invest oodles of cash just to potentially, as opposed to necessarily, identify and fix vulnerabilities. The net effect is that, at least in Canada, telecommunications providers can be assured that the public will remain relatively unconcerned about the security of providers’ products: security perceptions will be managed by preventing consumers from learning about prospective harms associated with telecommunications equipment. I guess this is just another area of research where Canadians will have to point to the US and say, “The same thing is likely happening here. But we’ll never know for sure.”
2013.7.9
We can draw a distinction here between Big Data—the stuff of numbers that thrives on correlations—and Big Narrative—a story-driven, anthropological approach that seeks to explain why things are the way they are. Big Data is cheap where Big Narrative is expensive. Big Data is clean where Big Narrative is messy. Big Data is actionable where Big Narrative is paralyzing.
The promise of Big Data is that it allows us to avoid the pitfalls of Big Narrative. But this is also its greatest cost. With an extremely emotional issue such as terrorism, it’s easy to believe that Big Data can do wonders. But once we move to more pedestrian issues, it becomes obvious that the supertool it’s made out to be is a rather feeble instrument that tackles problems quite unimaginatively and unambitiously. Worse, it prevents us from having many important public debates.
As Band-Aids go, Big Data is excellent. But Band-Aids are useless when the patient needs surgery. In that case, trying to use a Band-Aid may result in amputation. This, at least, is the hunch I drew from Big Data.
Evgeny Morozov, “Connecting the Dots, Missing the Story”