It’s great that Apple is supporting these issues. But it’s equally important to reflect on Apple’s less rights-promoting activities. The company operates around the world and chooses to pursue profits to the detriment of the privacy of its China-based users. It clearly has challenges — along with all other smartphone companies — in acquiring natural mineral resources that are conflict-free; the purchase of conflict minerals raises fundamental human rights issues. And the company’s ongoing efforts to minimize its taxation obligations have direct impacts on the abilities of governments to provide essential services to those who are often the worst off in society.
Each of the above examples is easily, and quickly, reduced to the assertion that Apple is a public company in a capitalist society. It has obligations to shareholders and, thus, can only do so much to advance basic rights while simultaneously pursuing profits. Apple is, on some accounts, actively attempting to enhance certain rights, promote certain causes, and mitigate certain harms while simultaneously acting in the interests of its shareholders.
Those are all entirely fair, and reasonable, arguments. I understand them all. But I think that we’d all be well advised to consider Apple’s broader activities before declaring that Apple has ‘our’ backs, because ‘our’ backs are often privileged, wealthy, and able to externalize a range of harms associated with Apple’s international activities.
A data access request involves you contacting a private company and requesting a copy of your personal information, as well as the ways in which that data is processed, disclosed, and the periods of time for which data is retained.
I’ve conducted research over the past decade which has repeatedly shown that companies are often very poor at comprehensively responding to data access requests. Sometimes this is because of divides between technical teams that collect and use the data, policy teams that determine what is and isn’t appropriate to do with data, and legal teams that ascertain whether collections and uses of data comport with the law. In other situations, companies simply refuse to respond because they adopt a confused-nationalist understanding of law: if the company doesn’t have an office somewhere in a requesting party’s country then that jurisdiction’s laws aren’t seen as applying to the company, even if the company does business in the jurisdiction.
Automated Data Export As Solution?
Some companies, such as Facebook and Google, have developed automated data download services. Ostensibly these services are designed so that you can download the data you’ve input into the companies, thus revealing precisely what is collected about you. In reality, these services don’t let you export all of the information that these respective companies collect. As a result, people who use these download services end up with a false impression of just what information the companies collect and how it’s used.
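The gap between what these tools export and what a company actually holds can be sketched schematically. In the toy example below, every field name is an illustrative assumption (not a real Facebook or Google data field): the point is simply that self-serve exports typically return data you *entered*, while derived and inferred data never appears in the download.

```python
# Hypothetical sketch of the export gap. All field names are illustrative
# assumptions, not fields from any real company's export tool.

exported_profile = {
    # Data the user entered, which self-serve export tools typically return:
    "name": "Alex Example",
    "posts": ["Had a great day!"],
    "photos_uploaded": 42,
}

internal_profile = {
    **exported_profile,
    # Derived/inferred data that is typically absent from the export:
    "inferred_emotional_state": "stressed",
    "ad_targeting_segments": ["confidence-boost moment"],
    "predicted_purchase_intent": 0.87,
}

# Fields the company holds that never show up in the user's download.
hidden_fields = set(internal_profile) - set(exported_profile)
print(sorted(hidden_fields))
```

The user downloading `exported_profile` would reasonably, and wrongly, conclude that it is everything the company knows about them.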
A shining example of the kinds of information that are not revealed to users of these services has come to light. A leaked document from Facebook Australia revealed that:
Facebook’s algorithms can determine, and allow advertisers to pinpoint, “moments when young people need a confidence boost.” If that phrase isn’t clear enough, Facebook’s document offers a litany of teen emotional states that the company claims it can estimate based on how teens use the service, including “worthless,” “insecure,” “defeated,” “anxious,” “silly,” “useless,” “stupid,” “overwhelmed,” “stressed,” and “a failure.”
This targeting of emotions isn’t necessarily surprising: in a past exposé we learned that Facebook conducted experiments during an American presidential election to see if they could sway voters. Indeed, the company’s raison d’être is to figure out how to pitch ads to customers, and figuring out when Facebook users are more or less likely to be affected by advertisements is just good business. If you use the self-download service provided by Facebook, or any other data broker, you will not receive data on how and why your data is exploited: without understanding how their algorithms act on the data they collect from you, you can never really understand how your personal information is processed.
But that raison d’être of pitching ads to people — which is why Facebook could internally justify the deliberate targeting of vulnerable youth — ignores baseline ethics of whether it is appropriate to exploit our psychology to sell us products. To be clear, this isn’t a company stalking you around the Internet with ads for a car or couch or jewelry that you were browsing about. This is a deliberate effort to mine your communications to sell products at times of psychological vulnerability. The difference is between somewhat stupid tracking versus deliberate exploitation of our emotional state.1
Solving for Bad Actors
There are laws governing what companies can do with information provided by children. Whether Facebook’s actions run afoul of such laws may never actually be tested in a court or privacy commissioner’s decision. In part, this is because mounting legal challenges is extremely difficult, expensive, and time consuming. These hurdles automatically tilt the balance towards such activities continuing.
But part of the challenge in stopping such exploitative activities is also linked to Australia’s historically weak privacy commissioner, as well as the limitations of such offices around the world: privacy commissioners’ offices are often understaffed, under-resourced, and unable to chase every legally and ethically questionable practice undertaken by private companies. Companies know about these limitations and, as such, know they can get away with unethical and frankly illegal activities unless someone talks to the press about the activities in question.
So what’s the solution? The rote advice is to stop using Facebook. While that might be good advice for some, for a lot of other people leaving Facebook is very, very challenging. You might use it to sign into a lot of other services and so don’t think you can easily abandon Facebook. You might have stored years of photos or conversations and Facebook doesn’t give you a nice way to pull them out. It might be a place where all of your friends and family congregate to share information and so leaving would amount to being excised from your core communities. And depending on where you live you might rely on Facebook for finding jobs, community events, or other activities that are essential to your life.
In essence, solving for Facebook, Google, Uber, and all the other large data broker problems is a collective action problem. It’s not a problem that is best solved on an individualistic basis.
A more realistic kind of advice would be this: file complaints to your local politicians. File complaints to your domestic privacy commissioners. File complaints to every conference, academic association, and industry event that takes Facebook money.2 Make it very public and very clear that you and groups you are associated with are offended by the company in question that is profiting off the psychological exploitation of children and adults alike.3 Now, will your efforts to raise attention to the issue and draw negative attention to companies and groups profiting from Facebook and other data brokers stop unethical data exploitation tomorrow? No. But by consistently raising our concerns about how large data brokers collect and use personal information, and attributing some degree of negative publicity to all those who benefit from such practices, we can decrease the public stock of a company.
History is dotted with individuals who are seen as standing up to end bad practices by governments and private companies alike. But behind them tend to be a mass of citizens who are supportive of those individuals: while standing up en masse may mean that we don’t each get individual praise for stopping some tasteless and unethical practices, our collective standing up will make it more likely that such practices will be stopped. By each doing a little, we can collectively accomplish what we’d be hard pressed to change as individuals.
(This article was previously published in a slightly different format on a now-defunct Medium account.)
1 Other advertising companies adopt the same practices as Facebook. So I’m not suggesting that Facebook is worst-of-class and letting the others off the hook.
2 Replace ‘Facebook’ with whatever company you think is behaving inappropriately, unethically, or perhaps illegally.
3 Surely you don’t think that Facebook is only targeting kids, right?
PETs are a category of technologies that have not previously been systematically studied by the Office of the Privacy Commissioner of Canada (OPC). As a result, there were some gaps in our knowledge of these tools and techniques. In order to begin to address these gaps, a more systematic study of these tools and techniques was undertaken, starting with a (non-exhaustive) review of the general types of privacy enhancing technologies available. This paper presents the results of that review.
While Privacy Enhancing Technologies (PETs) have been around for a long time, only some have really taken hold over time, and usually only as a result of there being a commercial incentive for companies to integrate the enhancements.
Some of the failures of PETs to be widely adopted have stemmed from the reasons specific PETs were created (to effectively forestall formal regulatory or legislative action), others because of their complexity (you shouldn’t need a graduate degree to configure your tools properly!), and yet others because the PETs in question were built by researchers and not intended for commercialization.
The OPC’s review of dominant types of PETs is good and probably the most current such review. But the specific categories of tools, types of risks, and reasons PETs have failed to really take hold have been largely the same for a decade. We need to move beyond research and theory and actually do something soon: data is leaking faster and further than ever before, and the rate of leakage and dispersal is only increasing.
Doctors at Royal Columbian Hospital in New Westminster have complained that local police and RCMP officers are routinely recording conversations without consent between doctors and patients who are considered a suspect in a crime.
“They will be present when we are trying to question the patients and trying to obtain a history of what happened,” said Tony Taylor, an emergency physician who practises at the hospital.
“They have now recently started recording these conversations and often they will do that unannounced, which has a number of implications around confidentiality and consent.”
As far as doctors at Royal Columbian are concerned, the police are getting in the way of patient care.
Patients tend to clam up when police officers are present, Dr. Taylor said. “That makes it difficult to get those kind of history details that are critically important,” he said.
The idea that the police are present, and recording interactions between a doctor and patient, is patently problematic from a procedural fairness perspective. In the past the authorities have lost Charter challenges based on their attempts to exploit Canada’s one-party consent doctrine; I’d be very curious to know the legal basis for recording persons who may be accused of a crime, in a setting clearly designated as deserving heightened privacy protections, and the extent to which that legal theory holds up under scrutiny.
An agency like TfL could also use uber-accurate tracking data to send out real-time service updates. “If no passengers are using a particular stairway, it could alert TfL that there’s something wrong with the stairway—a missing step or a scary person,” Kaufman says. (Send emergency services stat.)
The Underground won’t exactly know what it can do with this data until it starts crunching the numbers. That will take a few months. Meanwhile, TfL has set about quelling a mini-privacy panic—if riders don’t want to share data with the agency, Sager Weinstein recommends shutting off your mobile device’s Wi-Fi.
So, on the one hand, they’ll apply norms and biases to ascertain why their data ‘says’ certain things. But to draw these conclusions the London transit authority will collect information from customers, and the only way to disable this collection is to reduce the functionality of your device when you’re in a public space. Sounds like a recipe for great consensual collection of data and subsequent data ‘analysis’.
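The reason ‘turn off Wi-Fi’ is the only opt-out is that phones broadcast probe requests containing a hardware (MAC) address that anyone with an antenna can passively observe; no sign-in or consent step is involved. A minimal sketch of how an agency might count unique devices from such observations follows. The salt value, rotation practice, and data are all assumptions for illustration; salted hashing is a common pseudonymization step for this kind of collection, though it does not make the data anonymous.

```python
# Minimal sketch: counting unique devices from passively observed Wi-Fi
# probe requests. The salt and sample MAC addresses are illustrative
# assumptions, not details of TfL's actual pilot.
import hashlib

SECRET_SALT = b"rotate-me-daily"  # assumed: operator rotates this regularly

def pseudonymize(mac: str) -> str:
    """Hash a MAC address with a secret salt so raw identifiers
    aren't stored directly. Still linkable while the salt is fixed."""
    return hashlib.sha256(SECRET_SALT + mac.encode()).hexdigest()[:16]

# Probe requests observed at a station entrance (illustrative data);
# the same device appears twice.
observed = ["aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02", "aa:bb:cc:dd:ee:01"]

unique_devices = {pseudonymize(mac) for mac in observed}
print(len(unique_devices))  # each device counted once despite repeat sightings
```

Note that because the same salted hash recurs for the same device, an operator can still follow one ‘pseudonymous’ device across stations and days, which is precisely why disabling Wi-Fi remains the only meaningful opt-out for riders.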
As the federal government holds public consultations on what changes should be made to Bill C-51, the controversial anti-terrorism legislation passed by the Conservative government, various police agencies such as the RCMP and the Canadian Association of Chiefs of Police have petitioned to gain new powers to access telephone and internet data. Meanwhile nearly half of Canadians believe they should have the right to complete digital privacy. The Agenda examines the question of how to balance privacy rights with effective policing in the digital realm.
Amazon’s Echo and Alphabet’s Home cost less than $200 today, and that price will likely drop. So who will pay our butler’s salary, especially as it offers additional services? Advertisers, most likely. Our butler may recommend services and products that further the super-platform’s financial interests, rather than our own interests. By serving its true masters—the platforms—it may distort our view of the market and lead us to services and products that its masters wish to promote.
But the potential harm transcends the search bias issue, which Google is currently defending in Europe. The increase in the super-platform’s economic power can translate into political power. As we increasingly rely on one or two head butlers, the super-platform will learn about our political beliefs and have the power to affect our views and the public debate.
The discussions about algorithmic bias often have an almost science fiction feel to them. But as platforms monetize personal assistants by inking deals with advertisers and adopting secretive business practices designed to extract value from users, the threat of attitude shaping will become even more important. Why did your assistant recommend a particular route? (Answer: because it took you past businesses the platform owner believes you are predisposed to spend money at.) Why did your assistant present a particular piece of news? (Answer: because the piece in question conformed with your existing views and thus increased the time you spent on the site, during which you were exposed to the platform’s associated advertising partners’ content.)
We are shifting to a world where algorithms are functionally what we call magic. A type of magic that can be used to exploit us while we think that algorithmically-designed digital assistants are markedly changing our lives for the better.