Categories
Quotations Writing

“Commercially Friendly” Privacy Rules

Dr. Pentland, an academic adviser to the World Economic Forum’s initiatives on Big Data and personal data, agrees that limitations on data collection still make sense, as long as they are flexible and not a “sledgehammer that risks damaging the public good.”

He is leading a group at the M.I.T. Media Lab that is at the forefront of a number of personal data and privacy programs and real-world experiments. He espouses what he calls “a new deal on data” with three basic tenets: you have the right to possess your data, to control how it is used, and to destroy or distribute it as you see fit.

Personal data, Dr. Pentland says, is like modern money — digital packets that move around the planet, traveling rapidly but needing to be controlled. “You give it to a bank, but there’s only so many things the bank can do with it,” he says.

His M.I.T. group is developing tools for controlling, storing and auditing flows of personal data. Its data store is an open-source version, called openPDS. In theory, this kind of technology would undermine the role of data brokers and, perhaps, mitigate privacy risks. In the search for a deep fat fryer, for example, an audit trail should detect unauthorized use.
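To make the audit-trail idea concrete, here is a minimal, hypothetical sketch of the kind of check a personal data store could run. This is not openPDS’s actual API – the record fields, the `GRANTS` table, and the `find_unauthorized` helper are all invented for illustration – but it shows how logged accesses can be compared against the permissions a data owner has actually granted:

```python
from dataclasses import dataclass

@dataclass
class AccessRecord:
    requester: str  # who queried the data store
    field: str      # which piece of personal data was read
    purpose: str    # the purpose the requester declared

# Permissions the data owner has actually granted:
# requester -> set of (field, purpose) pairs they may use.
GRANTS = {
    "retailer.example": {("purchase_history", "order-fulfilment")},
}

def find_unauthorized(log):
    """Return every logged access that falls outside the owner's grants."""
    return [
        rec for rec in log
        if (rec.field, rec.purpose) not in GRANTS.get(rec.requester, set())
    ]

log = [
    AccessRecord("retailer.example", "purchase_history", "order-fulfilment"),
    AccessRecord("adnetwork.example", "purchase_history", "ad-targeting"),
]

for rec in find_unauthorized(log):
    print(f"unauthorized: {rec.requester} read {rec.field} for {rec.purpose}")
```

In the deep-fat-fryer example, the second record is the one an audit would surface: a party reading purchase history for a purpose the owner never consented to.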

Steve Lohr, “Big Data Is Opening Doors, but Maybe Too Many”

So, I don’t really get how Pentland’s system is going to work any better than the Platform for Privacy Preferences (P3P) work that was done a decade ago. Spoiler alert: P3P failed. Hard. And it was intended both to enhance users’ privacy online (by letting them establish controls on how their personal information was accessed and used) and to give industry something to point to in order to avoid federal regulation.

There is a prevalent strain of liberalism that assumes that individuals, when empowered, are best suited to control the dissemination of their personal information. However, it assumes that knowledge, time, and resources are equal amongst all parties. This clearly isn’t the case, nor is it the case that individuals are able to learn when advertisers and data miners don’t respect privacy settings. In effect: control does not necessarily equal knowledge, nor does it necessarily equal the capacity to act, given individuals’ often limited fiscal, educational, temporal, or other resources.

Categories
Quotations Writing

2013.3.26

But in the long run that’s a problem for Google. Because we tend not to entrust this sort of critical public infrastructure to the private sector. Network externalities are all fine and good to ignore so long as they mainly apply to the sharing of news and pics from a weekend trip with college friends. Once they concern large swathes of economic output and the cognitive activity of millions of people, it is difficult to keep the government out. Maybe that deterrent will be sufficient to keep Google providing its most heavily used products. But maybe not.

Huh. This Economist article seems to be in favour of nationalizing the internet? And most other services?

(via towerofsleep)

I think that the focus was more on the services provided by private companies, as opposed to infrastructure itself (i.e. not the wires, but the stuff that runs on the wires). But I think The Economist has a point that governments could be involved if services that are perceived (note: perception does not necessarily correspond with empirical facts) as essential are threatened.

What really threw me in the piece was this paragraph:

But that makes it increasingly difficult for Google to have success with new services. Why commit to using and coming to rely on something new if it might be yanked away at some future date? This is especially problematic for “social” apps that rely on network effects. Even a crummy social service may thrive if it obtains a critical mass. Yanking away services beloved by early adopters almost guarantees that critical masses can’t be obtained: not, at any rate, without the provision of an incentive or commitment mechanism to protect the would-be users from the risk of losing a vital service.

I mean: I really, really, really use Google Reader. I use the shit out of it on a daily basis. I’m the definition of one of their power users, with hundreds of sites subscribed to – often ones that only get updates every month or two, but that are super helpful for my research – and so I’m far from impressed that Google’s shuttering the service. Reader lets me hold onto the long-tail of the Internet.

But: I’m not certain how a writer can credibly link ‘early adopter’ with yanking away Google Reader. I mean, it’s an older(ish) service. We’re not talking about something that was spawned a few months ago. I get that the writer might have been obliquely referring to the social functions of Reader that were stripped out a year or so back, but still: there’s no way (at the time of Reader’s social demise) that you can describe those individuals as ‘early adopters’. The product was mature (as far as many Internet products go) and just didn’t have a lot of people using the service for social purposes beyond a pretty vocal minority.

I want to be clear that I’m already dreading the loss of Google Reader. Seriously dreading. But the article in The Economist is kind of weird insofar as it mixes what are arguably fair points with inside baseball and a vaguely suggested ‘beware government regulators if you screw with the services your users really use.’

Categories
Links Writing

Senate Delivers a Devastating Blow to the Integrity of the Scientific Process at the National Science Foundation — WASHINGTON, March 20, 2013

jakke:

jhermann:

rhizombie:

The amendment places unprecedented restriction on the national research agenda by declaring the political science study of democracy and public policy out of bounds. The amendment allows only political science research that promotes “national security or the economic interests of the United States.”

holy shit, that’s disgusting

Practically speaking this will have almost no effect on political science research. The National Science Foundation (the US government agency that manages research funding) is advancing a slippery-slope argument to talk about why their independence has been threatened. But the NSF still ultimately decides where the grants go.

It’s very easy to argue that basically all research of “democracy and public policy” is useful for national security, economic interests, or both. Maybe funding applications will need to include a paragraph explaining why their research is useful for policy applications. But that’s hardly a bad thing, right? Even fundamental-level social science research generally presents a relatively straight line to policy application. And so the NSF can keep on approving whatever ivory-tower projects they like.

So yeah I mean obviously this change is massively suboptimal and deserves to be loudly frowned upon. But in terms of actual research projects losing funding? I’d be surprised if there were any at all.

I think that it’s going to matter how the Senate’s decision is actually implemented. Of course, you’ll see social scientists trying to figure out how their work ‘fits’ the new funding objectives. However, if NSF really gets on board and refuses to fund grants that only have ‘token’ statements for how research meets the new funding objectives then the Senate’s decision could hurt some political scientists.

The decision also establishes a kind of worry amongst some academics that the government could continue to aggressively direct academic study: sure, you can study whatever you want, so long as your work doesn’t depend on federal funds. Some of the Senate’s decision was the result of particular Senators being displeased with academic work that had been funded; their modification to NSF granting effectively acts as a clear warning to other projects up for NSF funding: if ‘bad’ work that the political paymasters won’t approve of gets funded, then the paymasters will get very directly involved.

Categories
Links Writing

Big data: the greater good or invasion of privacy?

Chatterjee has a good, quick article on the significance of ‘big data’. Note the experts warning that, as a result of massive data aggregation, almost all individuals will have secret or sensitive information about themselves stored, traded, or used in the course of companies’ daily activities. This information isn’t necessarily about anything illegal, but legality is not the sole benchmark for whether humans want others to know things about them: embarrassing, shameful, or similar information that may not break the law could be financially, personally, or emotionally damaging should it be provided to third parties.

Also, take note of Ohm’s warning that we should slow down and think about what is happening with regard to massive data aggregation and mining; we shouldn’t just commit ourselves to pushing the ‘privacy envelope.’ Headlong rushes and acceptance of novel technical structures that invisibly affect billions, with little clear accountability for corporate data mining practices, are a recipe for constructing future harms.

Categories
Links Writing

The Internet as a Surveillance State

The Internet is a surveillance state. Whether we admit it to ourselves or not, and whether we like it or not, we’re being tracked all the time. Google tracks us, both on its pages and on other pages it has access to. Facebook does the same; it even tracks non-Facebook users. Apple tracks us on our iPhones and iPads. One reporter used a tool called Collusion to track who was tracking him; 105 companies tracked his Internet use during one 36-hour period.

This is ubiquitous surveillance: All of us being watched, all the time, and that data being stored forever. This is what a surveillance state looks like, and it’s efficient beyond the wildest dreams of George Orwell.

Opinion: The Internet is a surveillance state – CNN.com (via new-aesthetic)

There are a few important things to recognize about Schneier’s argument (which, I don’t think, detract from his overall points):

  1. Surveillance isn’t inherently bad. It speaks to a distribution of power where another party enjoys heightened capabilities resulting from their perception of the surveilled. Surveillance becomes ‘bad’ when the power disequilibrium has harmful moral or empirical consequences.
  2. Again, it isn’t entirely surveillance that’s the ‘problem’ with the Internet; it’s the persistent recollection of information by third-parties, often without the data subject knowing that (a) the data was collected; (b) it was subsequently recalled in an unrelated context; (c) it was then used to influence interactions with the data subject. These problems have always existed, in some fashion, but we are living in an era where what used to historically have been lost to the ethers of time is being retained in massive databases. The nature of perpetual computational memory – often made worse when errors in retained data spawn in perpetuity across interlinked systems – challenges how humans understand time, history, and subjectivity in very powerful ways.
  3. With regards to (2), this is why Europeans are interested in their so-called ‘Right to Be Forgotten’. And, before thinking that forgetting some data collected vis-a-vis the Internet would lead to the end of the (digital) world, consider that Canadians largely already ‘enjoy’ this right under the consent doctrines of federal privacy law: the ‘net isn’t broken here, at least not yet!

(Note: for more on the consent doctrine as it relates to social media, see our paper on SSRN entitled, “Forgetting, Non-Forgetting and Quasi-Forgetting in Social Networking: Canadian Policy and Corporate Practice”)

Categories
Links Writing

Did Google just shut down the wrong product?

parislemon:

John Herrman of BuzzFeed:

According to data from the BuzzFeed Network, a set of tracked partner sites that collectively have over 300 million users, Google Reader is still a significant source of traffic for news — and a much larger one than Google+. The above chart, created by BuzzFeed’s data team, represents data collected from August 2012 to today.

Yikes. Did Google just shut down the wrong product?

I’m less clear that the ‘wrong’ thing happened.* Google is getting slammed in Europe for grabbing headlines for Google News: why not shut down Reader (which pulls information from those agencies, to readers on a Google platform) and (if the same companies want all that traffic) force them onto Google+, so that the publishers are directly providing information to Google? Under Google’s current policies, could the company then repurpose the Google+ information that publishers provided and use it to feed Google News, thus undercutting publishers’ arguments?

In essence: could this be a play to push publishers onto Google+ and, by extension, then attract people who want publishers’ content, while at the same time trying to undermine some of the arguments in the EU about Google ‘stealing’ content?

*Don’t get me wrong. I depend on Google Reader and think they screwed up. But from Google’s perspective they might not have…

Categories
Writing

FUD and NSA Cybersecurity

I’ve been in too many meetings where popular articles led to a string of false – and intensely problematic – baseline ‘truths’ that subsequently led to damaging policy proposals. One of the worst recent articles was by Marc Ambinder, who wrote a piece for Foreign Policy about why the NSA has to support Deep Packet Inspection (DPI) appliances in businesses’ networks. The general premise is that NSA assistance is critical if American companies are to effectively filter out foreign nations’ espionage behaviour. This ‘support’ is supposedly driven by the most recent revelations concerning Chinese attacks against predominantly American business interests.

So, in what follows I’ll pull out offending paragraphs and explain what’s factually problematic and, then, the significance of the false or misleading claims.

[The NSA] has some pretty nifty tools to use in terms of protecting cyberspace. In theory, it could probe devices at critical Internet hubs and inspect the patterns of data packets coming into the United States for signs of coordinated attacks. The recently declassified Comprehensive National Cyberspace Initiative describes the government’s plan, informally known as Einstein 3, to address the threats to government data that run through private computer networks – an admission that the NSA will have to perform deep packet inspection on private networks at some point. But, currently, the NSA only does this for a select group of companies that work with the Department of Defense. It is legally prohibited from setting up filters around all of the traffic entry points.

The issue is that Einstein, even if it is working (which remains unclear, at best), is invasive and isn’t a panacea. It might identify some traffic, but the core kind of data analysis that is required today isn’t so much inbound network traffic as outbound; what is leaving the network, why is it leaving, and do characteristics of the data exiting the network correspond with the authorized users’ normal network behaviours? To be blunt, there is no DPI appliance on the market that is genuinely capable of this kind of user- and network-centric surveillance. There are lots of companies that sell things claiming to perform these actions, but the sales language has not yet met the hype. Moreover, if you’re dealing with state-level actors it isn’t clear why, with their immense resources, they can’t simply purchase the DPI appliances and figure out how they work, and how to subvert their analytics protocols.
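A toy illustration of the outbound-centric analysis described above – comparing what leaves the network against a user’s own historical norm – might look like the following. The data, thresholds, and `egress_is_anomalous` helper are all invented for illustration; real egress monitoring looks at destinations, timing, and protocols as well as volume:

```python
from statistics import mean, stdev

# Hypothetical daily outbound byte counts per user, from past observation.
baseline = {
    "alice": [120_000, 135_000, 110_000, 128_000, 140_000],
}

def egress_is_anomalous(user, bytes_out, sigma=3.0):
    """Flag outbound volume that falls far outside the user's history."""
    history = baseline.get(user)
    if not history or len(history) < 2:
        # No baseline at all: treat unknown users as worth investigating.
        return True
    mu, s = mean(history), stdev(history)
    return abs(bytes_out - mu) > sigma * max(s, 1.0)

print(egress_is_anomalous("alice", 131_000))    # an ordinary day
print(egress_is_anomalous("alice", 9_500_000))  # exfiltration-sized burst
```

Even this crude sketch shows why the problem is user- and network-centric: the judgement depends on each user’s own behaviour, not on matching inbound packets against a signature.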

Why does this quoted section matter? Because it preps an audience for a magic (networked) bullet, one that to date doesn’t exist. And because it convinces an audience that if we just brought NSA-grade Einstein surveillance to bear, we’ll figure out how to stop the evil hackers.

The next step may be letting the NSA conduct deep-packet monitoring of private networks. It’s undeniable that Congress and the public probably wouldn’t be comfortable knowing that the NSA has its hardware at the gateways to the Internet. And yet there may be no other workable way to detect and defeat major attacks. Thanks to powerful technology lobbies, Congress is debating a bill that would give the private sector the tools to defend itself, and it has been slowly peeling back the degree of necessary government intervention. As it stands, DHS lacks the resources to secure the dot-com top-level domain even if it wanted to. It competes for engineering minds with the NSA and with private industry; the former has more cachet and the latter has better pay.

The NSA already has its hardware at the core choke points of the American Internet infrastructure. This deployment led Congress to retroactively grant immunity to American ISPs for participating in the NSA’s warrantless wiretapping. It’s what led a host of whistleblowers to come forward and disclose the extent of the NSA’s surveillance of Americans. The Agency is already using DPI appliances at Internet choke points: what is being proposed is extending the surveillance to the networks of corporations that are not Internet companies. This means that, rather than just filtering at AT&T’s network, the NSA would also filter at Ford’s network.

The author also asserts that it’s important to leave this to the NSA on the basis that DHS cannot presently fulfil this defensive task. The NSA knows this. DHS knows this. And, on the mutual basis of this knowledge, the NSA is already permitted to assist DHS in securing American companies’ networks so long as DHS takes the lead. What is really changing here is that a foreign intelligence body would be given authority to act independently of DHS. Such a move would be intensely problematic on the basis that the NSA is highly secretive, even more than DHS, and is routinely involved in bypassing or finding ways around Americans’ existing legal protections. The notion that the institution’s ongoing bad behaviour should lend credence and authority to its missions is absurd.

Some private-sector companies are good corporate citizens and spend money and time to secure their networks. But many don’t. It’s costly, both in terms of buying the protection systems necessary to make sure critical systems don’t fail and also in terms of the interaction between the average employee and the software. Security and efficiency diverge, at least in the short run.

While this is true, to an extent, it fails to account for the magnitude of scale. Most large businesses have security staff and dedicated network administrators; there is some defence taking place. It’s the mid-sized businesses that tend to be disastrously under-protected. Is the proposal that pretty well all businesses with under, say, 1,000 people will get the benefit of NSA-grade security and surveillance? If so, that’s an awful lot of NSA-compliant gear.

If the NSA were simply to share with the private sector en masse the signatures its intelligence collection obtains about potential cyber-attacks, cybersecurity could measurably improve in the near term. But outside the companies who regularly do business with the intelligence community and the military, few firms have people with the clearances required by the NSA to distribute threat information. (Under the new initiative, the NSA’s intelligence will be filtered through the FBI and DHS.)

It’s important to recognize that DPI equipment isn’t cheap. In addition to NSA signatures, you’d likely need an ongoing service contract with the appliance manufacturer. Moreover, to actually run the appliance you’ll either need in-house staff or have to contract out the job; in either case, businesses will see an increase in the cost of doing business. They may not see a return. DPI signatures are also not foolproof, and they are often particular to specific appliance vendors. So…will your appliance be ‘compatible’ with NSA intelligence? And how do you check the NSA’s own signatures to ensure that the Agency isn’t doing something sneaky?

By the end of the article, what we’re really missing is any critical analysis of the security properties of the DPI appliances themselves, or of the NSA in general. DPI devices exploit the vulnerability of data packets to run analyses/modifications of data either in real time or, if offloaded to a temporary storage device, offline. In either case, when and if these devices are compromised, all of the network traffic coursing through them becomes compromised. You move, in effect, from dealing with strategically placed compromised devices in your network to dealing with that plus having your sophisticated routers turned against you. And the author’s final lines in the article – yeah, the NSA’s been bad in the past, but hey: they’re really on ‘our’ side now! – don’t exactly fill a reader with much confidence.


Categories
Writing

Don’t Risk Model for Aged, Wealthy Americans

Data security and communicative privacy matter. The boons of the contemporary computer era have led to people across the world using common services for security, for data processing, and for communications generally, despite users’ radically different risk profiles. Few users are savvy enough to engage in code-level audits, fewer to ascertain the validity of improperly issued security certificates, and likely fewer still to verify that programs’ and operating systems’ updates come from the actual developers. These are problems – important problems – that need to be directly addressed by developers.

It’s always been morally wrong to be cavalier about your software’s security profile, and to just discount the potential vulnerabilities or bugs linked to your tools. Things aren’t getting better, however, on account of state actors becoming more and more sophisticated in how they target and monitor their citizens’ and residents’ communications. Consequently, the blasé attitude towards security that has (largely) focused on successful engineering over successful security in depth is a larger and larger problem. This attitude, especially when it comes to anti-circumvention and encryption software, is leading to individual users ending up seriously hurt, imprisoned, or dead.

Security is important. Speech is important. And ensuring that secure, private, speech is possible is an increasingly critical issue for parties throughout the world. Developers and companies and individuals ought to take the severity of the consequences of their actions to heart, or risk having very real blood on their hands.

Categories
Writing

Why I’m quitting Facebook

I left Facebook a long time ago, before many of the current realities of that ecosystem. Rushkoff didn’t leave for the same reasons I did (which stemmed from philosophical conceptions of temporality, time, and privacy), but his reasons echo those I keep hearing from undergrads. It isn’t just that Facebook isn’t ‘cool’; they’re spending less time on the site because the company is increasingly seen as manipulative and secretive, and as portraying users in ways antithetical to how the users perceive themselves.

What is perhaps most concerning is what will happen to all the data the company has amassed if/when it implodes like MySpace did. What if, in five or seven years, Facebook effectively closes shop: who will get the mass of data that the company has collected, and how will they subsequently disseminate or manipulate it? It’s this broader concern about long-term use of incredibly intimate data that leaves me most leery of corporate-hosted social media platforms, and it’s an issue that I really don’t think people appreciate. But, then, I guess not a lot of people really remember the dot com crash…

Categories
Links Writing

Attacks on the Press: A Moving Target – Committee to Protect Journalists:

While not every journalist is an international war correspondent, every journalist’s cellphone is untrustworthy. Mobile phones, and in particular Internet-enabled smartphones, are used by reporters around the world to gather and transmit news. But mobile phones also make journalists easier to locate and intimidate, and confidential sources easier to uncover. Cellular systems can pinpoint individual users within a few meters, and cellphone providers record months, even years, of individual movements and calls. Western cellphone companies like TeliaSonera and France Telecom have been accused by investigative journalists in their home countries of complicity in tracking reporters, while mobile spying tools built for law enforcement in Western countries have, according to computer security researchers working with human rights activists, been exported for use against journalists working under repressive regimes in Ethiopia, Bahrain, and elsewhere.


“Reporters need to understand that mobile communications are inherently insecure and expose you to risks that are not easy to detect or overcome,” says Katrin Verclas of the National Democratic Institute. Activists such as Verclas have been working on sites like SaferMobile, which give basic advice for journalists to protect themselves. CPJ recently published a security guide that addresses the use of satellite phones and digital mobile technologies. But repressive governments don’t need to keep up with all the tricks of mobile computing; they can merely set aside budget and strip away privacy laws to get all the power they need. Unless regulators, technology companies, and media personnel step up their own defenses of press freedom, the cellphone will become journalists’ most treacherous tool.

Network surveillance is a very real problem that journalists and, by extension, their sources have to account for. The problem is that many of the security tools used to protect confidential communications are awkward to use, awkward to provide to sources, and hard to use correctly without network censors detecting the communication. Worst of all is when journalists simply externalize risk, putting sources in danger in the service of ‘getting the story’ in order to ‘spread the word.’ Such externalization is unfortunately common and generates fear and distrust of journalists.