This is the kind of introspection and critique that all backbenchers should be able to present to the public. They shouldn’t be forced to leave their party caucus to do so.
Source: Greater Oversight Required for Canada’s Spy Agencies
[Privacy] has to be institutional; it also has to do with social conventions that we adopt. The reason there isn’t a technological solution is that the ability to infer information from partial information is extremely powerful — you can take information which appears to be anonymous and [extrapolate identity]. It has to be a set of conventions that we adopt, either a legal framework or social conventions.
Technology is racing ahead so quickly and we are so eager to embrace it with our mobiles and everything else that we don’t fully appreciate the side effects. When we put photos on the web and other people tag them, we create [problems] for people who just happen to be in the image. They get caught… we learned this with Street View.
There are a lot of things that we do every day that we think are innocent… but there are cascades of things that happen. I don’t think we’ve figured out what the right intuitive set of social conventions should be in order to protect privacy. We’re going to have to learn by making mistakes.
This can’t be just a national issue because the internet is everywhere. The consequence of that is it causes us to confront head-on this problem of global issues, of frameworks, legal frameworks, social conventions and the like.
Vinton Cerf, “Internet inventor Vint Cerf: No technological cure for privacy ills”
… it is obvious that for all the academic critique, ‘privacy’, as a concept, as a regime, as a set of policy instruments, and as a way to frame advocacy and activism, is not going to disappear. On the contrary, it displays a remarkable resilience as a way to regulate the processing of personal information by public and private organizations, and as a way for ‘privacy advocates’ to resist the excessive monitoring of human behaviour. Like it or not, privacy frames the ways that most ordinary people see contemporary surveillance issues. Surveillance scholars have got to live with it.
Colin J. Bennett, “In Defence of Privacy: The concept and the regime”
The idea that there is no problem with surveillance as long as you have nothing to hide simply points to the complacency of the liberal view of freedom by contrast with the republican one. The liberal thinks that you are free so long as you are not coerced. The republican agrees, of course, that if you are coerced then you are not free. But freedom for the republican consists not in being free from coercion in respect of some action, but rather in being free from the possibility of coercion in respect of it.
Quentin Skinner, “Liberty, Liberalism and Surveillance: a historic overview”
Washington’s Blog has an excellent, if somewhat long, post that outlines the significance of the NSA’s ‘three hop’ analysis. It collects and provides some numbers behind basic communications network analyses, and comes to the conclusion that upwards of 2.5 million Americans could be caught up in the dragnet for each suspected terrorist. As the post puts it, that “means that a mere 140 potential terrorists could lead to spying on all Americans. There are tens of thousands of Americans listed as suspected terrorists … including just about anyone who protests anything that the government or big banks do.”
Go read the full post. Some of the numbers are a bit speculative, but on the whole it does a good job showing why ‘three hop’ analyses are so problematic: such analyses disproportionately collect data on American citizens on the basis of the most limited forms of suspicion. Such surveillance should be set aside because it constitutes an inappropriate infringement on individuals’ and communities’ reasonable expectations of privacy; it runs counter to how a well-ordered and properly functioning democracy should operate in theory and in practice.
I’m really not clear why the hell a fitness application needs to be able to read who is calling me and to be able to track my outbound phone calls.
Thus far I’ve spoken via Twitter with their mobile developer, who says that the permissions request is an error on his part. I’ve also gone back and forth – repeatedly – with Fitbit’s technical support team. The first response wasn’t in parseable English, and the subsequent messages haven’t clarified why this particular permission is needed.
So, after several days of trying to learn why Fitbit is requesting these Android permissions – and what data they’re collecting – I’m not really any closer to understanding the situation than I was when I started this whole process. I’m thinking that it’s about time to exercise my rights as a Canadian and start requesting copies of all the data that the company has captured about me… and then see if they’re willing to comply with Canadian privacy laws.
There’s a lot of confusion about the actual versus rhetorical security integrated with Apple’s iMessage product. I’ve tried to suggest, in the linked article, how Canadians can use our federal privacy laws to figure out whether Apple or the company’s critics are right about the company’s security posture.
jakke said: Actually I don’t agree at all. That’s directly analogous to leverage (some is a positive externality to credit, too much is a negative externality to risk, the threshold differs depending on whom you’re talking about) and we can regulate that.
The literature that has looked at the economics of privacy over the past decade or two has been absolutely dismal, insofar as efforts to operationalize the ‘value’ of privacy are pervaded with assumptions of rationality, comprehension, ability to enact privacy choices, and so forth. The literature on privacy more generally is still struggling – after 40+ years – to really move beyond squabbling about what ‘privacy’ even means. The consequence is that ascertaining the externalities linked to privacy infringements/violations/concerns/(term of the month) necessarily requires adopting one definition or another.
Unlike more ‘defined’ harms (e.g. X percentage of Y particulate in the water is linked to Z) those linked with privacy have a tendency to be more normative, and harder to measure as a result. Ascertaining what the chilling effect of corporate surveillance, or the consequences of non-transparency in how communications infrastructures subtly modulate discourse and association, is an exercise in theory as much as anything else. Consumers, for lots of good reasons, are poor rational actors in lots of areas, and privacy is argued to be one of those areas.
So the quotation emerged from a (longer) argument concerning the efficacy of economic analyses of privacy and the place such analyses have within the broader dimensions of the contested individual, communal, and intersubjective natures of privacy. It’s on these bases that economic analyses fall short: while they *might* improve the situation, marginally, what is improved will be regarded as perpetuating the harm by some, and as the wrong measure for alleviating harms by others.
Antitrust law is ill prepared to handle a “market” where some percentage of consumers consider a loss of privacy a gain and others consider it a loss. Economic reasoning in general falters in the face of externalities, but usually we can all agree that, say, pollution is a harm (or negative externality) and flowers are a boon (or positive externality). Privacy preferences are much more idiosyncratic.
Frank Pasquale. (2010). “Beyond Innovation and Competition: The Need for Qualified Transparency in Internet Intermediaries.” Northwestern University Law Review 104(1).
Like a giant python that has consumed a rat, Facebook captures, swallows, and slowly digests its users.
Ron Deibert, Black Code: Inside the Battle for Cyberspace