Limits of Data Access Requests

Photo by rawpixel on Unsplash

A data access request involves contacting a private company and requesting a copy of your personal information, along with an account of how that data is processed and disclosed, and how long it is retained.

Research I’ve conducted over the past decade has repeatedly shown that companies are often very poor at comprehensively responding to data access requests. Sometimes this is because of divides between technical teams that collect and use the data, policy teams that determine what is and isn’t appropriate to do with data, and legal teams that ascertain whether collections and uses of data comport with the law. In other situations companies simply refuse to respond because they adopt a confused-nationalist understanding of law: if the company doesn’t have an office in a requesting party’s country, then that jurisdiction’s laws aren’t seen as applying to the company, even if the company does business in the jurisdiction.

Automated Data Export As Solution?

Some companies, such as Facebook and Google, have developed automated data download services. Ostensibly these services are designed so that you can download the data you’ve input into the companies, thus revealing precisely what is collected about you. In reality, these services don’t let you export all of the information that these respective companies collect. As a result, people who use these download services end up with a false impression of just what information the companies collect and how it’s used.

A telling example of the kinds of information that are not revealed to users of these services has come to light. A leaked document from Facebook Australia revealed that:

Facebook’s algorithms can determine, and allow advertisers to pinpoint, “moments when young people need a confidence boost.” If that phrase isn’t clear enough, Facebook’s document offers a litany of teen emotional states that the company claims it can estimate based on how teens use the service, including “worthless,” “insecure,” “defeated,” “anxious,” “silly,” “useless,” “stupid,” “overwhelmed,” “stressed,” and “a failure.”

This targeting of emotions isn’t necessarily surprising: in a past exposé we learned that Facebook conducted experiments during an American presidential election to see if they could sway voters. Indeed, the company’s raison d’être is to figure out how to pitch ads to customers, and figuring out when Facebook users are more or less likely to be affected by advertisements is just good business. If you use the self-download service provided by Facebook, or any other data broker, you will not receive data on how and why your data is exploited: without understanding how their algorithms act on the data they collect from you, you can never really understand how your personal information is processed.

But that raison d’être of pitching ads to people — which is why Facebook could internally justify the deliberate targeting of vulnerable youth — ignores the baseline ethical question of whether it is appropriate to exploit our psychology to sell us products. To be clear, this isn’t a company stalking you around the Internet with ads for a car or couch or jewelry that you were browsing for. This is a deliberate effort to mine your communications in order to sell products at times of psychological vulnerability. The difference is between somewhat clumsy tracking and the deliberate exploitation of our emotional state.1

Solving for Bad Actors

There are laws governing what companies can do with information provided by children. Whether Facebook’s actions run afoul of such laws may never actually be tested in a court or privacy commissioner’s decision. In part, this is because mounting legal challenges is extremely difficult, expensive, and time consuming. These hurdles automatically tilt the balance towards activities such as this continuing.

But part of the challenge in stopping such exploitative activities is also linked to Australia’s historically weak privacy commissioner, as well as the limitations of such offices around the world: Privacy Commissioners’ Offices are often understaffed, under-resourced, and unable to chase every legally and ethically questionable practice undertaken by private companies. Companies know about these limitations and, as such, know they can get away with unethical and frankly illegal activities unless someone talks to the press about the activities in question.

So what’s the solution? The rote advice is to stop using Facebook. While that might be good advice for some, for a lot of other people leaving Facebook is very, very challenging. You might use it to sign in to a lot of other services and so don’t think you can easily abandon it. You might have stored years of photos or conversations that Facebook doesn’t give you a good way to pull out. It might be the place where all of your friends and family congregate to share information, so leaving would amount to being excised from your core communities. And depending on where you live, you might rely on Facebook for finding jobs, community events, or other activities that are essential to your life.

In essence, solving the problems posed by Facebook, Google, Uber, and all the other large data brokers is a collective action problem. It’s not a problem that is best solved on an individualistic basis.

A more realistic kind of advice would be this: file complaints with your local politicians. File complaints with your domestic privacy commissioners. File complaints with every conference, academic association, and industry event that takes Facebook money.2 Make it very public and very clear that you and the groups you are associated with are offended by a company that profits off the psychological exploitation of children and adults alike.3 Now, will your efforts to raise attention to the issue and draw negative attention to companies and groups profiting from Facebook and other data brokers stop unethical data exploitation tomorrow? No. But by consistently raising our concerns about how large data brokers collect and use personal information, and attaching some degree of negative publicity to all those who benefit from such practices, we can lower a company’s public standing.

History is dotted with individuals who are seen as standing up to end bad practices by governments and private companies alike. But behind them tends to be a mass of citizens who support those individuals: while standing up en masse may mean that we don’t each get individual praise for stopping some tasteless and unethical practices, our collective standing up will make it more likely that such practices are stopped. By each working a little, we can accomplish something that, individually, we would be hard pressed to change.

(This article was previously published in a slightly different format on a now-defunct Medium account.)

Footnotes:

1 Other advertising companies adopt the same practices as Facebook. So I’m not suggesting that Facebook is worst-of-class and letting the others off the hook.

2 Replace ‘Facebook’ with whatever company you think is behaving inappropriately, unethically, or perhaps illegally.

3 Surely you don’t think that Facebook is only targeting kids, right?

Quote

Data protection law has not fallen from the sky. Let me give you an example of this – the overblown discussion on consent.

The current Directive states since 1995 that consent has to be ‘unambiguous’. The Commission thinks it should be ‘explicit’. 27 national Data Protection Authorities agree. This has become a major talking point. What will this mean in practice? That explicit consent will be needed in all circumstances? Hundreds of pop-ups on your screens? Smartphones thrown on the floor in frustration? No. It means none of these things. This is only the scaremongering of certain lobbyists.

Citizens don’t understand the notion of implicit consent. Staying silent is not the same as saying yes.

  • Viviane Reding, Vice-President of the European Commission

The EU’s Data Protection reform: Decision-Time is Now

http://europa.eu/rapid/press-release_SPEECH-13-197_en.htm

(via omalleyprivacy)

Important things to consider when reading about how consent will – somehow – break the Internet. It will force American (and some Canadian!) companies to obey the law or face fines. So be it.

Quote

Over the last forty years, a strong and principled argument that privacy is a fundamental human right deserving special protection in an age of high technology has confronted more pragmatic considerations from a variety of interests. The messy twists and turns of this international struggle have produced a sort of consensus on what it means for an organization to process personal data responsibly. But it is an uneasy consensus, hedged by exemptions and qualifications, and regularly shaken by monumental shifts in the processing powers of technology, and by game changers like the 9/11 attacks.

This conflict is now being played out again with respect to a new Draft Regulation on privacy protection from the European Union. We have heard that this Regulation is too burdensome, that it will block innovation, that it will cost jobs, trade, and investment, that it will kill the online advertising industry, that it will unreasonably extend the reach of European law beyond European borders and exacerbate the transatlantic divide between a more protectionist and regulatory Europe and a more open and innovative United States.

These views are simplistic and misleading. The same fears were expressed twenty years ago when the first set of European privacy rules were proposed. The Internet developed and flourished since that time, and within that framework of national and international privacy law. Privacy protection did not constrain innovation then, and it will not do so today.

  • Colin Bennett, “The Geo-Politics of Personal Data”

Link

Data Protection Officers Needed in the EU

Peter Fleischer, Google Global Privacy Counsel, notes that most companies with over 250 employees will likely need a Data Protection Officer as a result of updates to European law. He rightly notes that such updates should increase basic data protection awareness in companies, though I have concerns about the effectiveness of securing privacy through data protection.

To be sure, breaches will hopefully be reduced (though almost certainly not stopped), but data will be protected to the letter of the law rather than secured to the level of citizens’ normative expectations of privacy. As a result, the legalization of data protection and privacy will continue to let companies engage in practices that citizens find upsetting without those practices actually being outlawed or banned.