… in the years since WhatsApp co-founders Jan Koum and Brian Acton cut ties with Facebook for, well, being Facebook, the company slowly turned into something that acted more like its fellow Facebook properties: an app that’s kind of about socializing, but mostly about shopping. These new privacy policies are just WhatsApp’s—and Facebook’s—way of finally saying the quiet part out loud.
What’s going to change? When you’re speaking to a business, those communications will not be considered end-to-end encrypted and, as such, the accessible content and metadata can be used for advertising and other marketing, data mining, data targeting, or data exploitation purposes. If you’re just chatting with individuals (that is, not businesses!) then your communications will continue to be end-to-end encrypted.
For an additional, and perhaps longer, discussion of how WhatsApp’s shifts in policy (now, admittedly, delayed for a few months following public outrage) are linked to the goal of driving business revenue into the company, check out Alec Muffett’s post over on his blog. (By way of background, Alec has been in the technical security and privacy space for 30+ years, and is a good and reputable voice on these matters.)
Law enforcement agencies have been focusing their investigative efforts on two main information sources: the telematics system — which is like the “black box” — and the infotainment system. The telematics system stores a vehicle’s turn-by-turn navigation, speed, acceleration and deceleration information, as well as more granular clues, such as when and where the lights were switched on, the doors were opened, seat belts were put on and airbags were deployed.
The infotainment system records recent destinations, call logs, contact lists, text messages, emails, pictures, videos, web histories, voice commands and social media feeds. It can also keep track of the phones that have been connected to the vehicle via USB cable or Bluetooth, as well as all the apps installed on the device.
Together, the data allows investigators to reconstruct a vehicle’s journey and paint a picture of driver and passenger behavior. In a criminal case, the sequence of doors opening and seat belts being inserted could help show that a suspect had an accomplice.
Of note, rental cars and second-hand vehicles also retain all of this information, which can then be accessed by third parties. It’s easy to envision a situation where rental companies are obligated to assess retained data to determine whether certain classes of offences have been committed, and then overshare information collected by rental vehicles to avoid the liability that could follow from failing to fully meet whatever obligations are placed upon them.
Of course, outright nefarious actors can also take advantage of the digital connectivity built into contemporary vehicles.
Just as the trove of data can be helpful for solving crimes, it can also be used to commit them, Amico said. He pointed to a case in Australia, where a man stalked his ex-girlfriend using an app that connected to her high-tech Land Rover and sent him live information about her movements. The app also allowed him to remotely start and stop her vehicle and open and close the windows.
As in so many other areas, connectivity is being built into vehicles without real or sufficient assessment of how to secure new technologies or mitigate harmful and undesirable secondary uses of data. Engineers rarely worry about these outcomes, corporate lawyers aren’t attentive to these classes of issues, and the security of contemporary vehicles is generally garbage. Combined, this means that government bodies are almost certainly going to expand the ranges of data they can access without first going through a public debate about the appropriateness of doing so, or creating specialized warrants that would limit data mining. Moreover, in countries with weak policing accountability structures, it will be impossible to even assess how regularly government officials obtain access to information from cars, how such data lets them overcome other issues they state they are encountering (e.g., encryption), or the utility of this data in investigating crimes and introducing it as evidence in court cases.
To be clear, using a VPN doesn’t magically solve all these issues; it mitigates them. For example, if a site lacks sufficient HTTPS then there’s still the network segment between the VPN exit node and the site in question to contend with. It’s arguably the least risky segment of the network, but it’s still there. The effectiveness of black-holing DNS queries to known bad domains depends on the domain first being known to be bad. CyberSec is still going to do a much better job of that than your ISP, but it won’t be perfect. And privacy-wise, a VPN doesn’t remove the ability to inspect DNS queries or SNI traffic; it simply removes that ability from your ISP and grants it to NordVPN instead. But then again, I’ve always said I’d much rather trust a reputable VPN to keep my traffic secure, private, and not logged, especially one that’s been independently audited to that effect.
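The black-holing mentioned above can be sketched in a few lines. CyberSec’s actual implementation isn’t public, so this is only an illustration of the general technique, with hypothetical domain names and a stand-in for a real upstream resolver:

```python
# Minimal sketch of DNS "black-holing": queries for domains on a known-bad
# list are answered with a sinkhole address instead of the real record.
# The blocklist entries and upstream stub are hypothetical examples.

BLOCKLIST = {"tracker.example", "malware.example"}
SINKHOLE = "0.0.0.0"  # a conventional "goes nowhere" answer


def resolve(domain: str, upstream) -> str:
    """Return the sinkhole address for blocklisted domains (or any of
    their subdomains); otherwise defer to the upstream resolver."""
    labels = domain.lower().rstrip(".").split(".")
    # Check the name and every parent: ads.tracker.example matches tracker.example.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKLIST:
            return SINKHOLE
    return upstream(domain)


# Stand-in for a real recursive resolver.
def fake_upstream(domain: str) -> str:
    return "93.184.216.34"


print(resolve("ads.tracker.example", fake_upstream))  # 0.0.0.0
print(resolve("example.com", fake_upstream))          # 93.184.216.34
```

Note how the whole scheme hinges on the blocklist: a bad domain that isn’t yet on the list resolves normally, which is exactly the limitation described above.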
Something that security professionals are still not great at communicating—because we’re not asked to and because it’s harder for regular users to use the information—is that security is about adding friction that prevents adversaries from successfully exploiting whomever or whatever they’re targeting. Any such friction, however, can be overcome by a sufficiently well-resourced attacker. But read most articles about any given threat-mitigation tool and it becomes apparent that the problems faced are systemic: while individuals can undertake some efforts to increase friction, the crux of the problem is that they are operating in an almost inherently insecure environment.
Security is a community good and, as such, individuals can only do so much to protect themselves. What’s more, their individual efforts functionally represent a failing of the security community, and reveal the need for group efforts to reduce the threats individuals face every day when they use the Internet or Internet-connected systems. Sure, some VPNs are a good way to help individuals but, ideally, these are technologies to be discarded in some distant future after groups of actors have successfully worked to mitigate the threats that lurk all around us. Until then, though, adopting a trusted VPN can be a very good idea if you can afford the costs linked to it.
It’s great that Apple is asserting the importance of privacy. But if they’re really, really serious they’ll stop giving the Chinese government direct access to Chinese users’ iCloud data. And they’ll secure data on iCloud so that government agencies can’t simply ask Apple to hand over our WhatsApp, iCloud, Notes, and other data that Apple holds the keys to unlocking and turning over to whomever comes with a warrant. I’m not holding my breath on either.
The ability to socialize with friends in private spaces without state interference is vital to citizens’ growth, the maintenance of society, and a free and healthy democracy. It ensures a zone of safety in which we can share personal information with the people that we choose, and still be free from state intrusion. Recognizing a right to be left alone in private spaces to which we have been invited is an extension of the principle that we are not subject to state interference any time we leave our own homes. The right allows citizens to move about freely without constant supervision or intrusion from the state. Fear of constant intrusion or supervision itself diminishes Canadians’ sense of freedom.
Factum for Tom Le, in Tom Le v The Queen, Court File No. 37971
It’s great that Apple is supporting these issues. But it’s equally important to reflect on Apple’s less rights-promoting activities. The company operates around the world and chooses to pursue profits to the detriment of the privacy of its China-based users. It clearly has challenges — along with all other smartphone companies — in acquiring natural mineral resources that are conflict-free; the purchase of conflict minerals raises fundamental human rights issues. And the company’s ongoing efforts to minimize its taxation obligations have direct impacts on the abilities of governments to provide essential services to those who are often the worst off in society.
Each of the above examples is easily, and quickly, reduced to assertions that Apple is a public company in a capitalist society. It has obligations to shareholders and, thus, can only do so much to advance basic rights while simultaneously pursuing profits. Apple is, on some accounts, actively attempting to enhance certain rights, promote certain causes, and mitigate certain harms while simultaneously acting in the interests of its shareholders.
Those are all entirely fair, and reasonable, arguments. I understand them all. But I think that we’d likely all be well advised to consider Apple’s broader activities before declaring that Apple has ‘our’ backs, on the basis that ‘our’ backs are often privileged, wealthy, and able to externalize a range of harms associated with Apple’s international activities.
A data access request involves you contacting a private company and requesting a copy of your personal information, as well as the ways in which that data is processed, disclosed, and the periods of time for which data is retained.
I’ve conducted research over the past decade which has repeatedly shown that companies are often very poor at comprehensively responding to data access requests. Sometimes this is because of divides between technical teams that collect and use the data, policy teams that determine what is and isn’t appropriate to do with data, and legal teams that ascertain whether collections and uses of data comport with the law. In other situations companies simply refuse to respond because they adopt a confused-nationalist understanding of law: if the company doesn’t have an office somewhere in a requesting party’s country then that jurisdiction’s laws aren’t seen as applying to the company, even if the company does business in the jurisdiction.
Automated Data Export As Solution?
Some companies, such as Facebook and Google, have developed automated data download services. Ostensibly these services are designed so that you can download the data you’ve input into the companies’ services, thus revealing precisely what is collected about you. In reality, these services don’t let you export all of the information that these respective companies collect. As a result, people who use these download services end up with a false impression of just what information the companies collect and how it’s used.
A shining example of the kinds of information that are not revealed to users of these services has come to light. A leaked document from Facebook Australia revealed that:
Facebook’s algorithms can determine, and allow advertisers to pinpoint, “moments when young people need a confidence boost.” If that phrase isn’t clear enough, Facebook’s document offers a litany of teen emotional states that the company claims it can estimate based on how teens use the service, including “worthless,” “insecure,” “defeated,” “anxious,” “silly,” “useless,” “stupid,” “overwhelmed,” “stressed,” and “a failure.”
This targeting of emotions isn’t necessarily surprising: in a past exposé we learned that Facebook conducted experiments during an American presidential election to see if it could sway voters. Indeed, the company’s raison d’être is to figure out how to pitch ads to customers, and figuring out when Facebook users are more or less likely to be affected by advertisements is just good business. If you use the self-download service provided by Facebook, or any other data broker, you will not receive data on how and why your data is exploited: without understanding how their algorithms act on the data they collect from you, you can never really understand how your personal information is processed.
But that raison d’être of pitching ads to people — which is why Facebook could internally justify the deliberate targeting of vulnerable youth — ignores the baseline ethics of whether it is appropriate to exploit our psychology to sell us products. To be clear, this isn’t a company stalking you around the Internet with ads for a car or couch or jewelry that you were browsing for. This is a deliberate effort to mine your communications to sell products at times of psychological vulnerability. The difference is between somewhat stupid tracking and the deliberate exploitation of our emotional state.1
Solving for Bad Actors
There are laws around what you can do with information provided by children. Whether Facebook’s actions run afoul of such laws may never actually be tested in a court or privacy commissioner’s decision. In part, this is because mounting legal challenges is extremely difficult, expensive, and time-consuming. These hurdles automatically tilt the balance towards activities such as this continuing.
But part of the challenge in stopping such exploitative activities is also linked to Australia’s historically weak privacy commissioner, as well as the limitations of such offices around the world: privacy commissioners’ offices are often understaffed, under-resourced, and unable to chase every legally and ethically questionable practice undertaken by private companies. Companies know about these limitations and, as such, know they can get away with unethical and frankly illegal activities unless someone talks to the press about the activities in question.
So what’s the solution? The rote advice is to stop using Facebook. While that might be good advice for some, for a lot of other people leaving Facebook is very, very challenging. You might use it to sign into a lot of other services and so don’t think you can easily abandon Facebook. You might have stored years of photos or conversations and Facebook doesn’t give you a nice way to pull them out. It might be a place where all of your friends and family congregate to share information and so leaving would amount to being excised from your core communities. And depending on where you live you might rely on Facebook for finding jobs, community events, or other activities that are essential to your life.
In essence, solving for Facebook, Google, Uber, and all the other large data broker problems is a collective action problem. It’s not a problem that is best solved on an individualistic basis.
A more realistic kind of advice would be this: file complaints to your local politicians. File complaints to your domestic privacy commissioners. File complaints to every conference, academic association, and industry event that takes Facebook money.2 Make it very public and very clear that you and groups you are associated with are offended by the company in question that is profiting off the psychological exploitation of children and adults alike.3 Now, will your efforts to raise attention to the issue and draw negative attention to companies and groups profiting from Facebook and other data brokers stop unethical data exploitation tomorrow? No. But by consistently raising our concerns about how large data brokers collect and use personal information, and attributing some degree of negative publicity to all those who benefit from such practices, we can decrease the public stock of a company.
History is dotted with individuals who are seen as standing up to end bad practices by governments and private companies alike. But behind them tends to be a mass of citizens who are supportive of those individuals: while standing up en masse may mean that we don’t each get individual praise for stopping some tasteless and unethical practices, our collective standing up will make it more likely that such practices will be stopped. By each working a little, we can together achieve what we’d be hard pressed to change as individuals.
(This article was previously published in a slightly different format on a now-defunct Medium account.)
1 Other advertising companies adopt the same practices as Facebook. So I’m not suggesting that Facebook is worst-of-class and letting the others off the hook.
2 Replace ‘Facebook’ with whatever company you think is behaving inappropriately, unethically, or perhaps illegally.
3 Surely you don’t think that Facebook is only targeting kids, right?
PETs are a category of technologies that have not previously been systematically studied by the Office of the Privacy Commissioner of Canada (OPC). As a result, there were some gaps in our knowledge of these tools and techniques. In order to begin to address these gaps, a more systematic study of these tools and techniques was undertaken, starting with a (non-exhaustive) review of the general types of privacy enhancing technologies available. This paper presents the results of that review.
While Privacy Enhancing Technologies (PETs) have been around for a long time, only some have really taken hold, and usually only because there was a commercial incentive for companies to integrate the enhancements.
Some PETs have failed to be widely adopted because of the reasons they were created (to effectively forestall formal regulatory or legislative action), others because of their complexity (you shouldn’t need a graduate degree to configure your tools properly!), and yet others because the PETs in question were built by researchers and never intended for commercialization.
The OPC’s review of dominant types of PETs is good and probably the most current such review. But the specific categories of tools, types of risks, and reasons PETs have failed to really take hold have largely been the same for a decade. We need to move beyond research and theory and actually do something soon, given that data is leaking faster and further than ever before, and the rate of leakage and dispersal is only increasing.
Doctors at Royal Columbian Hospital in New Westminster have complained that local police and RCMP officers are routinely recording, without consent, conversations between doctors and patients who are considered suspects in a crime.
“They will be present when we are trying to question the patients and trying to obtain a history of what happened,” said Tony Taylor, an emergency physician who practises at the hospital.
“They have now recently started recording these conversations and often they will do that unannounced, which has a number of implications around confidentiality and consent.”
As far as doctors at Royal Columbian are concerned, the police are getting in the way of patient care.
Patients tend to clam up when police officers are present, Dr. Taylor said. “That makes it difficult to get those kind of history details that are critically important,” he said.
That police are present, and recording interactions between a doctor and patient, is patently problematic from a procedural fairness perspective. In the past, the authorities have lost Charter challenges based on their attempts to exploit Canada’s one-party consent doctrine; I’d be very curious to know the legal basis for recording persons who may be accused of a crime, in a setting clearly designated as deserving heightened privacy protections, and the extent to which that legal theory holds up under scrutiny.
An agency like TfL could also use uber-accurate tracking data to send out real-time service updates. “If no passengers are using a particular stairway, it could alert TfL that there’s something wrong with the stairway—a missing step or a scary person,” Kaufman says. (Send emergency services stat.)
The Underground won’t exactly know what it can do with this data until it starts crunching the numbers. That will take a few months. Meanwhile, TfL has set about quelling a mini-privacy panic—if riders don’t want to share data with the agency, Sager Weinstein recommends shutting off your mobile device’s Wi-Fi.
So, on the one hand, they’ll apply norms and biases to ascertain why their data ‘says’ certain things. But to draw these conclusions the London transit authority will collect information from customers, and the only way to disable this collection is to reduce the functionality of your device when you’re in a public space. Sounds like a recipe for great consensual collection of data and subsequent data ‘analysis’.