Link

A Brief Unpacking of a Declaration on the Future of the Internet

Cameron F. Kerry has a helpful piece in Brookings that unpacks the recently published ‘Declaration on the Future of the Internet.’ As he explains, the Declaration was signed by 60 States and is meant, in part, to rebut a China-Russia joint statement that advances those countries’ positions on ‘securing’ domestic Internet spaces and shifting Internet governance from multi-stakeholder forums to State-centric ones.

So far, so good. However, baked into Kerry’s article is language suggesting that he either misunderstands, or understates, some of the security-related elements of the Declaration. He writes:

There are additional steps the U.S. government can take that are more within its control than the actions and policies of foreign states or international organizations. The future of the Internet declaration contains a series of supporting principles and measures on freedom and human rights, Internet governance and access, and trust in use of digital network technology. The latter—trust in the use of network technology— is included to “ensure that government and relevant authorities’ access to personal data is based in law and conducted in accordance with international human rights law” and to “protect individuals’ privacy, their personal data, the confidentiality of electronic communications and information on end-users’ electronic devices, consistent with the protection of public safety and applicable domestic and international law.” These lay down a pair of markers for the U.S. to redeem.

Reading this against the 2019 Ministerial and the recent updates to the Council of Europe’s Cybercrime Convention, I see that a vast swathe of new law enforcement and security agency powers would be entirely permissible under Kerry’s assessment of the Declaration and the States that signed it. While these new powers have either been agreed to, or advanced by, signatory States, they have simultaneously been directly opposed by civil and human rights campaigners, as well as some national courts. Specifically, there are live discussions around the following powers:

  • the availability of strong encryption;
  • the guarantee that the content of communications sent using end-to-end encrypted devices cannot be accessed or analyzed by third-parties (including by on-device surveillance);
  • the requirement of prior judicial authorization to obtain subscriber information; and
  • the oversight of preservation and production powers by relevant national judicial bodies.

Laws can be passed that see law enforcement interests supersede individuals’ or communities’ rights to safeguard their devices, data, and communications from the State. If and when such a situation occurs, the signatories of the Declaration can hold fast to their flowery language about protecting rights while, at the same time, individuals and communities experience heightened surveillance of, and intrusions into, their daily lives.

In effect, a lot of international policy and legal infrastructure has been built to facilitate sweeping new investigatory powers and reforms to how data is, and can be, secured. It has taken years to build this infrastructure and as we leave the current stage of the global pandemic it is apparent that governments have continued to press ahead with their efforts to expand the powers which could be provided to law enforcement and security agencies, notwithstanding the efforts of civil and human rights campaigners around the world.

The next stage will be to assess how, and in what ways, international agreements and legal infrastructure are brought into national legal systems, and to determine where to strategically oppose the worst of the overreaches. While some successes may be achieved in resisting the expansion of state powers, not everything will be resisted. The consequence will be both to enhance state intrusions into private lives and to weaken the security provided to devices and data, with the resultant effect of better enabling criminals to illicitly access or manipulate our personal information.

The new world of enhanced surveillance and intrusions is wholly consistent with the ‘Declaration on the Future of the Internet.’ And that’s a big, glaring, and serious problem with the Declaration.

Link

The Broader Implications of Data Breaches

Ikea Canada notified approximately 95,000 Canadian customers in recent weeks about a data breach the company has suffered. An Ikea employee conducted a series of searches between March 1 and March 3 which surfaced the account records of the aforementioned customers.1

While Ikea promised that financial information–credit card and banking information–hadn’t been revealed, a raft of other personal information had been. That information included:

  • full first and last name;
  • postal code or home address;
  • phone number and other contact information;
  • IKEA loyalty number.

Ikea did not disclose who specifically accessed the information nor their motivations for doing so.

The notice provided by Ikea was better than most data breach alerts insofar as it informed customers what exactly had been accessed. For some individuals, however, this information is highly revelatory and could cause significant concern.

For example, imagine a case where someone has previously been the victim of either physical or digital stalking. Should their former stalker be an Ikea employee, the data breach victim may ask whether their stalker now has confidential information that can be used to renew, or further amplify, harmful activities. With the customer information in hand, it would be relatively easy for a stalker to obtain further details, such as where precisely someone lives. If they are aggrieved, they could also use the information to engage in digital harassment or threatening behaviour.

Without more information about why the Ikea employee searched the database, those who have been stalked by, or had an abusive relationship with, an Ikea employee might be driven to rethink how they live their lives. They might feel the need to change their safety habits, get new phone numbers, or cycle to a new email address. In a worst-case scenario they might contemplate vacating their residence for a time. Even if they do not take any of these actions, they might experience a heightened sense of unease or anxiety.

Of course, Ikea is far from alone in suffering these kinds of breaches. They happen on an almost daily basis for most of us, whether we’re alerted to the breach or not. Many news reports about such breaches focus on whether there is an existent or impending financial harm and stop the story there. The result is that journalistic reporting can conceal some of the broader harms linked with data breaches.

Imagine a world where our personal information–how to call us or find our homes–was protected as robustly as our credit card numbers are currently protected. In such a world, stalkers and other abusive actors might be less able to exploit stolen or inappropriately accessed information. Yes, there will always be ways by which bad actors can operate badly, but it would be possible to mitigate some of the ways this badness can take place.

Companies could still create meaningful consent frameworks whereby some (perhaps most!) individuals could agree to have their information stored by the company. But those who have a different risk threshold could make a meaningful choice, so they could still make purchases and receive deliveries without permanently increasing the risk that their information might fall into the wrong hands. However, getting to this point requires expanded threat modelling: we can’t just worry about a bad credit card purchase but, instead, need to take seriously the gendered and intersectional nature of violence and its intersection with cybersecurity practices.


  1. In the interests of disclosure, I was contacted as an affected party by Ikea Canada. ↩︎

Policing the Location Industry

Photo by Ingo Joseph on Pexels.com

The Markup has a comprehensive and disturbing article on how location information is acquired by third-parties despite efforts by Apple and Google to restrict the availability of this information. In the past, it was common for third-parties to provide SDKs to application developers. The SDKs would inconspicuously transfer location information to those third-parties while also enabling functionality for application developers. With restrictions being put in place by platforms such as Apple and Google, however, it’s now becoming common for application developers to initiate requests for location information themselves and then share it directly with third-party data collectors.
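
To make the shift concrete, here is a minimal sketch of the kind of first-party collection being described; the endpoint, field names, and identifier are all invented for illustration and are not drawn from any real collector’s API:

```python
# Sketch (hypothetical endpoint and fields): the app itself obtains a location
# fix via the OS location API and forwards it directly to a third-party
# collector, so there is no third-party SDK for the platform to spot.
import json
import urllib.request

THIRD_PARTY_COLLECTOR = "https://collector.example.com/v1/locations"  # hypothetical

def report_location(user_id: str, lat: float, lon: float) -> None:
    """Send a location fix the app collected to an external data collector."""
    payload = json.dumps({
        "user": user_id,          # often a resettable ad ID or hashed identifier
        "lat": lat,
        "lon": lon,
        "source": "first-party",  # looks like ordinary app traffic to the platform
    }).encode("utf-8")
    req = urllib.request.Request(
        THIRD_PARTY_COLLECTOR,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # just another HTTPS request from the app

# report_location("user-123", 43.6532, -79.3832)  # fix obtained via the OS location API
```

Because the request originates from the developer’s own code rather than a known SDK, it is difficult to distinguish from legitimate first-party network traffic.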

While such activities often violate the terms of service and policy agreements between platforms and application developers, it can be challenging for the platforms to actually detect these violations and subsequently enforce their rules.

Broadly, the issues at play represent significant governmental regulatory failures. The fact that government agencies often benefit from the secretive collection of individuals’ location information makes it that much harder for governments to muster the will to discipline such collection by third-parties: if the government cuts off the flow of location information, it will impede the ability of governments themselves to obtain this information.

In some cases intelligence and security services obtain location information from third-parties. This sometimes occurs in situations where the services themselves are legally barred from directly collecting this information. Companies selling mobility information can let government agencies do an end-run around the law.

One of the results is that efforts to limit data collectors’ ability to capture personal information often see parts of government push for carve-outs to collecting, selling, and using location information. In Canada, as an example, the government has adopted a legal position that it can collect locational information so long as it is de-identified or anonymized,1 and for the security and intelligence services there are laws on the books that permit the collection of commercially available open source information. This open source information does not need to be anonymized prior to acquisition.2 Lest you think it sounds paranoid that intelligence services might be interested in location information, consider that American agencies collected bulk location information pertaining to Muslims from third-party location information data brokers and that the Five Eyes historically targeted popular applications such as Google Maps and Angry Birds to obtain location information as well as other metadata and content. As the former head of the NSA announced several years ago, “We kill people based on metadata.”

Any argument made by either private or public organizations that anonymization or de-identification of location information makes it acceptable to collect, use, or disclose that information generally relies on tricking customers and citizens. Why is this? Because even when location information is aggregated and ‘anonymized’ it might subsequently be re-identified. And even where that reversal doesn’t occur, policy decisions can still be made based on the aggregated information. The process of deriving these insights and applying them showcases that while privacy is an important right to protect, it is not the only right implicated in the collection and use of locational information. Indeed, it is important to assess the proportionality and necessity of the collection and use, as well as how the associated activities affect individuals’ and communities’ equity and autonomy in society. Doing anything less is merely privacy-washing.
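
To see why ‘anonymization’ is such a weak protection for location data, consider a minimal sketch using toy, invented coordinates: a pair of regularly visited places (say, home and work) is often enough to link a nameless trace back to a specific person once it is matched against data someone else already holds.

```python
# Sketch (toy data): names are stripped from the traces, yet home/work pairs
# act as quasi-identifiers when matched against auxiliary data.
anonymized_traces = {
    "device-a": [("43.661,-79.395", "overnight"), ("43.648,-79.381", "weekday 9-5")],
    "device-b": [("43.700,-79.416", "overnight"), ("43.648,-79.381", "weekday 9-5")],
}

# Auxiliary data an investigator, employer, or data broker might already hold.
known_people = {
    "Alice": {"home": "43.661,-79.395", "work": "43.648,-79.381"},
    "Bob":   {"home": "43.700,-79.416", "work": "43.648,-79.381"},
}

def reidentify(traces, people):
    matches = {}
    for device, visits in traces.items():
        places = {loc for loc, _ in visits}
        for name, anchors in people.items():
            if anchors["home"] in places and anchors["work"] in places:
                matches[device] = name
    return matches

print(reidentify(anonymized_traces, known_people))
# {'device-a': 'Alice', 'device-b': 'Bob'} -- no names were ever in the traces.
```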

Throughout discussions about data collection, including as it pertains to location information, public agencies and companies alike tend to offer a pair of arguments against changing the status quo. First, they assert that consent isn’t really possible anymore given the volumes of data which are collected on a daily basis from individuals; individuals would be overwhelmed with consent requests! Thus we can’t make the requests in the first place! Second, that we can’t regulate the collection of this data because doing so risks impeding innovation in the data economy.

If those arguments sound familiar, they should. They’re very similar to the plays made by industry groups whose activities have historically had negative environmental consequences. These groups regularly assert that, after decades of poor or middling environmental regulation, any new, stronger regulations would unduly impede the existing dirty economy for power, services, goods, and so forth. Moreover, the dirty way of creating power, services, and goods is just how things are and thus should remain the same.

In both the privacy and environmental worlds, corporate actors (and those whom they sell data/goods to) have benefitted from not having to pay the full cost of acquiring data without meaningful consent or accounting for the environmental cost of their activities. But, just as we demand enhanced environmental regulations to regulate and address the harms industry causes to the environment, we should demand and expect the same when it comes to the personal data economy.

If a business is predicated on sneaking personal information away from individuals then it is clearly not particularly interested or invested in being ethical towards consumers. It’s imperative to continue pushing legislators to not just recognize that such practices are unethical, but to make them illegal as well. Doing so will require being heard over the cries of government agencies that have vested interests in obtaining location information in ways that skirt the laws that might normally discipline such collection, as well as companies that have grown as a result of their unethical data collection practices. While this will not be an easy task, it’s increasingly important given the limits of platforms’ ability to police the sneaky collection of this information and the increasingly problematic ways our personal data can be weaponized against us.


  1. “PHAC advised that since the information had been de-identified and aggregated, it believed the activity did not engage the Privacy Act as it was not collecting or using ‘personal information’.” ↩︎
  2. See, as an example, Section 23 of the CSE Act. ↩︎

Link

The So-Called Privacy Problems with WhatsApp

(Photo by Anton on Pexels.com)

ProPublica, which is typically known for its excellent journalism, published a particularly terrible piece earlier this week that fundamentally miscasts how encryption works and how Facebook, via WhatsApp, works to keep communications secure. The article, “How Facebook Undermines Privacy Protections for Its 2 Billion WhatsApp Users,” focuses on two so-called problems.

The So-Called Privacy Problems with WhatsApp

First, the authors explain that WhatsApp has a system whereby recipients of messages can report content they have received to WhatsApp on the basis that it is abusive or otherwise violates WhatsApp’s Terms of Service. The article frames this reporting process as a way of undermining privacy on the basis that secured messages are not kept solely between the sender(s) and recipient(s) of the communications but can be sent to other parties, such as WhatsApp. In effect, the ability to voluntarily forward received messages to WhatsApp is cast as breaking the privacy promises that WhatsApp has made.

Second, the authors note that WhatsApp collects a large volume of metadata in the course of using the application. Using lawful processes, government agencies have compelled WhatsApp to disclose metadata on some of its users in order to pursue investigations and secure convictions against individuals. The case that is focused on involves a government employee who leaked confidential banking information to Buzzfeed, which subsequently reported on it.

Assessing the Problems

In the case of forwarding messages for abuse reporting purposes, encryption is not broken and the feature is not new. These kinds of processes offer a mechanism that lets individuals self-identify and report problematic content. Such content can include child grooming, illicit or inappropriate messages or audio-visual content, or other abusive material.

What we do learn, however, is that the ‘reactive’ and ‘proactive’ methods of detecting abuse need to be fixed. In the case of the former, only about 1,000 people are responsible for intaking and reviewing the reported content after it has first been filtered by an AI:

Seated at computers in pods organized by work assignments, these hourly workers use special Facebook software to sift through streams of private messages, images and videos that have been reported by WhatsApp users as improper and then screened by the company’s artificial intelligence systems. These contractors pass judgment on whatever flashes on their screen — claims of everything from fraud or spam to child porn and potential terrorist plotting — typically in less than a minute.


Further, these workers are often reliant on machine learning-based translations of content, which makes it challenging to assess what is, in fact, being communicated in abusive messages. As reported,

… using Facebook’s language-translation tool, which reviewers said could be so inaccurate that it sometimes labeled messages in Arabic as being in Spanish. The tool also offered little guidance on local slang, political context or sexual innuendo. “In the three years I’ve been there,” one moderator said, “it’s always been horrible.”

There are also proactive modes of watching for abusive content using AI-based systems. As noted in the article,

Artificial intelligence initiates a second set of queues — so-called proactive ones — by scanning unencrypted data that WhatsApp collects about its users and comparing it against suspicious account information and messaging patterns (a new account rapidly sending out a high volume of chats is evidence of spam), as well as terms and images that have previously been deemed abusive. The unencrypted data available for scrutiny is extensive. It includes the names and profile images of a user’s WhatsApp groups as well as their phone number, profile photo, status message, phone battery level, language and time zone, unique mobile phone ID and IP address, wireless signal strength and phone operating system, as well as a list of their electronic devices, any related Facebook and Instagram accounts, the last time they used the app and any previous history of violations.

Unfortunately, the AI often makes mistakes. This led one interviewed content reviewer to state that, “[t]here were a lot of innocent photos on there that were not allowed to be on there … It might have been a photo of a child taking a bath, and there was nothing wrong with it.” Often, “the artificial intelligence is not that intelligent.”
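
For illustration only, here is a minimal sketch of the kind of metadata-only heuristic the article describes for the proactive queue; the fields and thresholds are invented and are not Facebook’s actual logic:

```python
# Sketch (invented thresholds): flag accounts for human review using only
# unencrypted account metadata, without reading any message content.
from dataclasses import dataclass

@dataclass
class AccountMetadata:
    account_age_days: int
    messages_last_hour: int
    distinct_recipients_last_hour: int
    prior_violations: int

def flag_for_review(meta: AccountMetadata) -> bool:
    """Return True if the account should be queued for human review."""
    looks_new_and_noisy = (
        meta.account_age_days < 2
        and meta.messages_last_hour > 200
        and meta.distinct_recipients_last_hour > 100
    )
    return looks_new_and_noisy or meta.prior_violations > 0

print(flag_for_review(AccountMetadata(1, 350, 150, 0)))  # True: classic spam pattern
print(flag_for_review(AccountMetadata(400, 20, 5, 0)))   # False: ordinary usage
```

Crude rules like these make the false positives described by the moderators unsurprising: the signal is behavioural metadata, not the content itself.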

The vast collection of metadata has been a long-reported concern associated with WhatsApp and, in fact, is one of the many reasons why individuals advocate for the use of Signal instead. The reporting in the ProPublica article helpfully summarizes the vast amount of metadata that is collected, but that collection, in and of itself, does not present any evidence that Facebook or WhatsApp have transformed the application into one which inappropriately intrudes into persons’ privacy.

ProPublica Sets Back Reasonable Encryption Policy Debates

The ProPublica article harmfully sets back the broader policy discussion around what is, and is not, a reasonable approach for platforms to take in moderating abuse when they have integrated strong end-to-end encryption. Such encryption prevents unauthorized third-parties–inclusive of the platform providers themselves–from reading or analyzing the content of communications. Enabling a reporting feature means that individuals who receive a communication are empowered to report it to the company, which can subsequently analyze what has been sent and take action if the content violates a terms of service or privacy policy clause.
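
A minimal sketch may help show why reporting doesn’t break the encryption. Here symmetric Fernet stands in for a real end-to-end protocol (it requires the ‘cryptography’ package), and the ‘platform’ functions are invented for illustration: the platform only ever relays ciphertext, while the recipient, who already holds the key and can read the message, chooses to forward the plaintext for review.

```python
# Sketch: end-to-end secrecy plus recipient-initiated abuse reporting.
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()        # known only to sender and recipient
sender = recipient = Fernet(shared_key)   # the platform never holds this key

ciphertext = sender.encrypt(b"abusive message")
print("what the platform relays:", ciphertext[:20], "...")  # opaque to the platform

plaintext = recipient.decrypt(ciphertext)  # only the endpoints can do this

def report_abuse(content: bytes) -> None:
    # Recipient-initiated disclosure to the platform's trust and safety queue.
    print("forwarded to platform for review:", content.decode())

report_abuse(plaintext)  # the recipient chooses to share what they could already read
```

Nothing in the cryptography is weakened; the disclosure happens because one party to the conversation decides to pass along content they were always able to read.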

In suggesting that what WhatsApp has implemented is somehow wrong, it becomes more challenging for other companies to deploy similar reporting features without fearing that their decision will be reported on as ‘undermining privacy’. While there may be a valid policy discussion to be had–is a reporting process the correct way of dealing with abusive content and messages?–the authors didn’t go there. Nor did they seriously investigate whether additional resources should be devoted to analyzing reported content, or talk with artificial intelligence experts or machine translation experts about whether Facebook’s efforts to automate the reporting process are adequate, appropriate, or flawed from the start. All of those would be very interesting, valid, and important contributions to the broader discussion about integrating trust and safety features into encrypted messaging applications. But…those are not things that the authors chose to delve into.

The authors could also have discussed the broader importance of, and challenges in, building out messaging systems that deliberately conceal metadata, and the benefits and drawbacks of such systems. While the authors do discuss how metadata can be used to crack down on individuals in government who leak data, as well as to assist in criminal investigations and prosecutions, there is little said about what kinds of metadata are most important to conceal and the tradeoffs in doing so. Again, there are some who think that all or most metadata should be concealed, and others who hold opposite views: there is room for a reasonable policy debate to be had and reported on.

Unfortunately, instead of actually taking up and reporting on the very valid policy discussions that are at the edges of their article, the authors chose to be bombastic and assert that WhatsApp was undermining the privacy protections that individuals thought they had when using the application. It’s bad reporting, insofar as it distorts the facts, and is particularly disappointing given that ProPublica has shown it has the chops to do good investigative work that is well sourced and nuanced in its outputs. This article, however, absolutely failed to make the cut.

Link

Explaining WhatsApp’s Encryption for Business Communications

Shoshana Wodinsky, writing for Gizmodo, has a lengthy and detailed breakdown of how and why WhatsApp is modifying its terms of service to facilitate consumer-to-business communications. The crux of the shift, really, comes down to:

… in the years since WhatsApp co-founders Jan Koum and Brian Acton cut ties with Facebook for, well, being Facebook, the company slowly turned into something that acted more like its fellow Facebook properties: an app that’s kind of about socializing, but mostly about shopping. These new privacy policies are just WhatsApp’s—and Facebook’s—way of finally saying the quiet part out loud.

What’s going to change? Whenever you’re speaking to a business, those communications will not be considered end-to-end encrypted and, as such, the accessible communications content and metadata can be used for advertising and other marketing, data mining, data targeting, or data exploitation purposes. If you’re just chatting with individuals–that is, not businesses!–then your communications will continue to be end-to-end encrypted.

For an additional, and perhaps longer, discussion of how WhatsApp’s shift in policy–now, admittedly, delayed for a few months following public outrage–is linked to the goal of driving business revenue into the company, check out Alec Muffett’s post over on his blog. (By way of background, Alec has been in the technical security and privacy space for 30+ years, and is a good and reputable voice on these matters.)

Link

Privacy and Contemporary Motor Vehicles

Writing for NBC News, Olivia Solon provides a useful overview of just how much data is collected by motor vehicles—through sensors embedded in the vehicles as well as infotainment systems linked with a smartphone—and how law enforcement agencies are using that information.

Law enforcement agencies have been focusing their investigative efforts on two main information sources: the telematics system — which is like the “black box” — and the infotainment system. The telematics system stores a vehicle’s turn-by-turn navigation, speed, acceleration and deceleration information, as well as more granular clues, such as when and where the lights were switched on, the doors were opened, seat belts were put on and airbags were deployed.

The infotainment system records recent destinations, call logs, contact lists, text messages, emails, pictures, videos, web histories, voice commands and social media feeds. It can also keep track of the phones that have been connected to the vehicle via USB cable or Bluetooth, as well as all the apps installed on the device.

Together, the data allows investigators to reconstruct a vehicle’s journey and paint a picture of driver and passenger behavior. In a criminal case, the sequence of doors opening and seat belts being inserted could help show that a suspect had an accomplice.

Of note, rental cars as well as second-hand vehicles retain all of this information, and it can then be accessed by third-parties. It’s pretty easy to envision a situation where rental companies are obligated to assess retained data to determine if a certain class or classes of offences have been committed, and then overshare information collected by rental vehicles to avoid the liability that could follow from failing to fully meet whatever obligations are placed upon them.

Of course, outright nefarious actors can also take advantage of the digital connectivity built into contemporary vehicles.

Just as the trove of data can be helpful for solving crimes, it can also be used to commit them, Amico said. He pointed to a case in Australia, where a man stalked his ex-girlfriend using an app that connected to her high-tech Land Rover and sent him live information about her movements. The app also allowed him to remotely start and stop her vehicle and open and close the windows.

As in so many other areas, connectivity is being built into vehicles without real or sufficient assessment of how to secure new technologies and deter harmful or undesirable secondary uses of data. Engineers rarely worry about these outcomes, corporate lawyers aren’t attentive to these classes of issues, and the security of contemporary vehicles is generally garbage. Combined, this means that government bodies are almost certainly going to expand the range of data they can access without having to first go through a public debate about the appropriateness of doing so or create specialized warrants that would limit data mining. Moreover, in countries with weak policing accountability structures, it will be impossible to even assess how regularly government officials obtain access to information from cars, how such data lets them overcome other issues they say they are encountering (e.g., encryption), or the utility of this data in investigating crimes and introducing it as evidence in court cases.

Link

VPN and Security Friction

Troy Hunt spent some time over the weekend writing on the relative insecurity of the Internet and how VPNs reduce threats without obviating those threats entirely. The kicker is:

To be clear, using a VPN doesn’t magically solve all these issues, it mitigates them. For example, if a site lacks sufficient HTTPS then there’s still the network segment between the VPN exit node and the site in question to contend with. It’s arguably the least risky segment of the network, but it’s still there. The effectiveness of black-holing DNS queries to known bad domains depends on the domain first being known to be bad. CyberSec is still going to do a much better job of that than your ISP, but it won’t be perfect. And privacy wise, a VPN doesn’t remove DNS or the ability to inspect SNI traffic, it simply removes that ability from your ISP and grants it to NordVPN instead. But then again, I’ve always said I’d much rather trust a reputable VPN to keep my traffic secure, private and not logged, especially one that’s been independently audited to that effect.

Something that security professionals are still not great at communicating—because we’re not asked to and because it’s harder for regular users to use the information—is that security is about adding friction that prevents adversaries from successfully exploiting whomever or whatever they’re targeting. Any such friction, however, can be overcome by a sufficiently well-resourced attacker. But when you read most articles about any given threat mitigation tool, what is apparent is that the problems being faced are systemic; while individuals can undertake some efforts to increase friction, the crux of the problem is that individuals are operating in an almost inherently insecure environment.

Security is a community good and, as such, individuals can only do so much to protect themselves. What’s more, their individual efforts functionally represent a failing of the security community, and reveal the need for group efforts to reduce the threats individuals face every day when they use the Internet or Internet-connected systems. Sure, some VPNs are a good thing to help individuals but, ideally, these are technologies to be discarded in some distant future after groups of actors have successfully worked to mitigate the threats that lurk all around us. Until then, though, adopting a trusted VPN can be a very good idea if you can afford the costs linked to it.

Aside

2019.7.15

It’s great that Apple is asserting the importance of privacy. But if they’re really, really serious they’ll stop giving the Chinese government direct access to Chinese users’ iCloud data. And they’ll secure data on iCloud so that government agencies can’t simply ask Apple to hand over our WhatsApp, iCloud, Notes, and other data that Apple holds the keys to unlocking and turning over to whomever comes with a warrant. I’m not holding my breath on the former, nor the latter.

Quote

The ability to socialize with friends in private spaces without state interference is vital to citizens’ growth, the maintenance of society, and a free and healthy democracy. It ensures a zone of safety in which we can share personal information with the people that we choose, and still be free from state intrusion. Recognizing a right to be left alone in private spaces to which we have been invited is an extension of the principle that we are not subject to state interference any time we leave our own homes. The right allows citizens to move about freely without constant supervision or intrusion from the state. Fear of constant intrusion or supervision itself diminishes Canadians’ sense of freedom.

  • Factum for Tom Le, in Tom Le v The Queen, Court File No. 37971

A Civil Rights Company?

Photo by Youssef Sarhan on Unsplash

Much has been made of Tim Cook’s advocacy on issues of privacy and gay rights. The most recent iteration of Safari, unveiled at WWDC, will incorporate techniques that hinder, though won’t entirely stop, advertisers and websites from tracking users across the Internet. And Apple continues to support and promote gay rights; the most evident manifestations of this are Apple selling pride-inspired Apple Watch bands and a matching pride-based watch face, along with the company’s CEO being an openly gay man.

It’s great that Apple is supporting these issues. But it’s equally important to reflect on Apple’s less rights-promoting activities. The company operates around the world and chooses to pursue profits to the detriment of the privacy of its China-based users. It clearly has challenges — along with all other smartphone companies — in acquiring natural mineral resources that are conflict-free; the purchase of conflict minerals raises fundamental human rights issues. And the company’s ongoing efforts to minimize its taxation obligations have direct impacts on the abilities of governments to provide essential services to those who are often the worst off in society.

Each of the above examples is easily, and quickly, reduced to assertions that Apple is a public company in a capitalist society. It has obligations to shareholders and, thus, can only do so much to advance basic rights while simultaneously pursuing profits. Apple is, on some accounts, actively attempting to enhance certain rights, promote certain causes, and mitigate certain harms while simultaneously acting in the interests of its shareholders.

Those are all entirely fair, and reasonable, arguments. I understand them all. But I think that we’d likely all be well advised to consider Apple’s broader activities before declaring that Apple has ‘our’ backs, on the basis that ‘our’ backs are often privileged, wealthy, and able to externalize a range of harms associated with Apple’s international activities.