Matt Burgess at Wired has a good summary article on the current (and always ongoing) debate concerning the availability of strong encryption.
In short, he sees three ‘classes’ of argument which are aimed at preventing individuals from protecting their communications (and their personal information) with robust encryption.
Governments or law enforcement agencies are asking for backdoors to be built into encrypted platforms to gain “lawful access” to content. This is best exemplified by recent efforts by the United Kingdom to prevent residents from using Apple’s Advanced Data Protection.
An increase in proposals related to a technology known as “client-side scanning.” Perhaps the best known effort is an ongoing European proposal to monitor all users’ communications for child sexual abuse material, notwithstanding the broader implications of integrating a configurable detector (and censor) on all individuals’ devices.
In this broader context it’s worth recognizing that alleged Chinese compromises of key American lawful interception systems led the US government to recommend that all Americans use strongly encrypted communications. If strong encryption is banned, there is a risk that there will be no respite from such network intrusions, while an entirely new domain of cyber threats will likely be created.
National cryptological organizations, such as the NSA, CSE, GCHQ, ASD, and GCSB, routinely assess the strength of different modes of encryption and offer recommendations on what organizations should be using. They base their assessments on the contemporary strength of encryption algorithms, as well as on the anticipated or expected vulnerability of those algorithms in the face of new or forthcoming technologies.
Quantum computing has the potential to undermine the security that is currently provided by a range of approved cryptographic algorithms.1 On December 12, 2024, Australia’s ASD published a series of recommendations on which algorithms should be deprecated by 2030. What is notable about their decision is that they are proposing deprecations before other leading agencies, including the USA’s National Institute of Standards and Technology and Canada’s CSE, though with an acknowledgement that the deprecation is focused on High Assurance Cryptographic Equipment (HACE).
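As rough intuition for why different algorithm families are affected differently (this framing is mine, not ASD’s): Shor’s algorithm breaks elliptic-curve schemes such as ECDH and ECDSA outright on a sufficiently large quantum computer, while Grover’s algorithm ‘only’ offers a quadratic speedup on brute-force key search, roughly halving a symmetric cipher’s effective key length:

\[
\sqrt{2^{128}} = 2^{64} \quad \text{(AES-128 under Grover)} \qquad \sqrt{2^{256}} = 2^{128} \quad \text{(AES-256 under Grover)}
\]

That halving is why AES-128 and AES-192 appear on the deprecation list below while AES-256 does not.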
To-be-deprecated algorithms include (a small audit sketch in Python follows the list):
Elliptic Curve Diffie-Hellman (ECDH)
Elliptic Curve Digital Signature Algorithm (ECDSA)
Module-Lattice-Based Digital Signature Algorithm 65 (ML-DSA-65)
Secure Hash Algorithms 224 and 256 (SHA-224 and SHA-256)
AES-128 and AES-192
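To make the scope of the guidance concrete, here is a minimal sketch of how an organization might flag these algorithms in an inventory of its deployed cryptography. The algorithm set mirrors the list above; the inventory format and system names are my own invention.

```python
# Sketch: flag cryptographic algorithms that ASD proposes to deprecate by 2030.
# The deprecation set follows ASD's guidance as summarized above; the
# inventory format is hypothetical and would need to match however your
# organization actually tracks deployed cryptography.

ASD_DEPRECATED_BY_2030 = {
    "ECDH",       # elliptic-curve key agreement (broken outright by Shor's algorithm)
    "ECDSA",      # elliptic-curve signatures (likewise)
    "ML-DSA-65",  # ASD reportedly retains only higher-strength parameter sets (e.g., ML-DSA-87)
    "SHA-224",
    "SHA-256",
    "AES-128",    # Grover's algorithm roughly halves effective symmetric key strength
    "AES-192",
}

def audit(inventory: dict[str, str]) -> list[str]:
    """Return findings for systems whose configured algorithm is slated for deprecation."""
    return [
        f"{system}: {algorithm} is slated for deprecation by 2030"
        for system, algorithm in inventory.items()
        if algorithm in ASD_DEPRECATED_BY_2030
    ]

if __name__ == "__main__":
    # Hypothetical inventory of deployed systems and their algorithms.
    findings = audit({
        "vpn-gateway": "AES-128",
        "code-signing": "ECDSA",
        "backup-archive": "AES-256",  # not on the deprecation list
    })
    for finding in findings:
        print(finding)
```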
Given that the English-speaking Five Eyes agencies regularly walk in near-lockstep, we might see updated guidance from the different agencies in the coming weeks and months. Alternatively, policy processes may prevent countries from updating their standards (or publicly announcing changes), leaving ASD ahead of the pack on cybersecurity guidance while other agencies wait until policy mechanisms eventually lead to these algorithms being deprecated by 2035.
Looking further out, and aside from the national security space, the concerns around cryptographic algorithms speak to challenges that embedded systems will face in the coming decade where manufacturers fail to get ahead of things and integrate quantum-resistant algorithms in the products they sell. Moreover, for embedded systems (e.g., Operational Technology, Internet of Things, and related systems) where it may be challenging or impossible to update cryptographic algorithms, there may be a whole world of currently-secure solutions that will become woefully insecure in the not-so-distant future. That’s a future that we need to start planning for, today, so that at least a decade’s worth of work can hopefully head off the worst of the harms associated with deprecated embedded systems’ (in)security.
What continues to be my favourite, and most accessible, explanation of the risks posed by quantum computing is written by Bruce Schneier.↩︎
Five Eyes countries have regularly and routinely sought, and gained, access to foreign telecommunications infrastructure to carry out their operations. The same is true of other well-resourced countries, including China.
Salt Typhoon’s penetration of American telecommunications and email platforms is slowly coming into focus. The New York Times has an article that summarizes what is being publicly disclosed at this point in time:
The full list of phone numbers that the Department of Justice had under surveillance in lawful interception systems has been exposed, with the effect of likely undermining American counter-intelligence operations aimed at Chinese operatives
Phone calls, unencrypted SMS messages, and email providers have been compromised
The FBI has heightened concerns that informants may have been exposed
Apple’s services, as well as end-to-end encrypted systems, were not penetrated
American telecommunications networks were penetrated, in part, due to companies relying on decades-old systems and equipment that do not meet modern security requirements. Fixing these deficiencies may require ripping and replacing some old parts of the network, with the effect of creating “painful network outages for consumers.” Some of the targeting of American telecommunications networks is driven by an understanding that American national security defenders face some restrictions on how they can operate on American-based systems.
Some of the Five Eyes, led by Canada, have been developing and deploying defensive sensor networks that are meant to shore up some defences of government and select non-government organizations.1 But these edge, network, and cloud based sensors can only do so much: telecommunications providers, themselves, need to prioritize ensuring their core networks are protected against the classes of adversaries trying to penetrate them.2
At the same time, it is worth recognizing that end-to-end encrypted communications continued to be protected even in the face of Salt Typhoon’s actions. This speaks to the urgent need to ensure that these forms of communications security remain available to all users. We often read that law enforcement needs select access to such communications and that it can be trusted not to abuse such exceptional access.
Setting aside the vast range of legal, normative, or geopolitical implications of weakening end-to-end encryption, cyber operations like the one perpetrated by Salt Typhoon speak to governments’ collective inability to protect their lawful access systems. There’s no reason to believe they’d be any better able to protect exceptional access measures that weakened, or otherwise gained access to, select content of end-to-end encrypted communications.
We are seeing some governments introducing, and sometimes passing, laws that would foster more robust security requirements. In Canada, Bill C-26 is generally meant to do this though the legislation as introduced raised some serious concerns. ↩︎
In a continuing demonstration of the importance of strong and privacy-protective communications, the federal Foreign Interference Commission has created a Signal account to receive confidential information.
Encrypted Messaging

For those who may feel more comfortable providing information to the Commission using encrypted means, they may do so through the Signal – Private Messenger app. Those who already have a Signal account can contact the Commission using our username below. Others will have to first download the app and set up an account before they can communicate with the Commission.
The Commission’s Signal Username is signal_pifi_epie20.24
Signal users can also scan QR Code below for the Commission’s username:
The Commission has put strict measures in place to protect the confidentiality of any information provided through this Signal account.
Not so long ago, the Government of Canada was arguing for an irresponsible encryption policy that included the ability to backdoor end-to-end encryption. It’s hard to overstate the significance of a government body now explicitly adopting Signal.
Apple has announced it will begin rolling out new data security protections for Americans by the end of 2022, and for the rest of the world in 2023. This is a big deal.
One of the biggest, and most serious, gaping holes in the protections that Apple has provided to its users is linked to iCloud. Specifically, while a subset of information has been encrypted such that Apple couldn’t access or disclose the plaintext of communications or content (e.g., Health information, encrypted Apple Notes, etc.), the company did not encrypt device backups, message backups, notes generally, iCloud contents, Photos, and more. The result is that third parties could either compel Apple to disclose information (e.g., by way of warrant) or otherwise subvert Apple’s protections to access stored data (e.g., targeted attacks). Apple’s new security protections will expand the categories of protected data from 14 to 23.1
I am very supportive of Apple’s decision and frankly congratulate them on the very real courage that it takes to implement something like this. It is:
courageous technically, insofar as this is a challenging thing to pull off at the scale at which Apple operates
courageous from a business perspective, insofar as it raises the prospect of unhappy customers should they lose access to their data with Apple unable to assist them
courageous legally, insofar as it’s going to inspire a lot of frustration and upset among law enforcement and government agencies around the world
It’ll be absolutely critical to observe how quickly, and how broadly, Apple extends its new security capabilities, and whether countries are able to pressure Apple to either not deploy them for their residents or roll them back in certain situations. Either way, Apple routinely sets the standard on consumer privacy protections; others in the industry will now inevitably be compared to Apple as either meeting the new standard or failing their own customers in one way or another.
From a Canadian, Australian, or British government point of view, I suspect that Apple’s decision will infuriate law enforcement and security agencies who had placed their hopes on CLOUD Act bilateral agreements to get access to corporate data, such as that held by Apple. Under a CLOUD Act bilateral agreement, British authorities could, as an example, directly serve a judicially authorised order on Apple about a British resident, to have Apple disclose information back to the British authorities without having to deal with American authorities. It promised to substantially improve the speed at which countries with bilateral agreements could obtain electronic evidence. Now, it would seem, Apple will largely be unable to assist law enforcement and security agencies when it comes to Apple users who have voluntarily enabled the heightened data protections. Apple’s decision will, almost certainly, further inspire governments around the world to double down on their efforts to advance anti-encryption legislation and pass such legislation into law.
Notwithstanding the inevitable government gnashing of teeth, Apple’s approach will represent one of the biggest (voluntary) increases in privacy protection for global users since WhatsApp adopted Signal’s underlying encryption protocols. Tens if not hundreds of millions of people who enable the new data protections will be much safer and more secure in how their data is stored, while those protections simultaneously restrict who can access that data without individuals’ own knowledge.
In a world where ‘high-profile’ targets are often just people with large followings on social media, there are a lot of people who stand to benefit from Apple’s courageous move. I only hope that other companies, such as Google, are courageous enough to follow Apple at some point in the near future.
really, 13, given the issue of iMessage backups being accessible to Apple ↩︎
Eric Rescorla has a thoughtful and nuanced assessment of recent EU proposals which would compel messaging companies to make their communications services interoperable. To his immense credit he spends time walking the reader through historical and contemporary messaging systems in order to assess the security issues prospectively associated with requiring interoperability. It’s a very good, and compact, read on a dense and challenging subject.
I must admit, however, that I’m unconvinced that demanding interoperability will have only minimal security implications. While much of the expert commentary has focused on whether end-to-end encryption would be compromised, I think that too little time has been spent considering the client side of interoperable communications. So if we assume it’s possible to facilitate end-to-end communications across messaging companies and focus just on clients receiving/sending communications, what are some risks?1
As it stands, today, the dominant messaging companies have large and professional security teams. While none of these teams are perfect, as shown by the success of cyber mercenary companies such as NSO Group et al., they are robust and constantly working to improve the security of their products. The attacks used by groups such as NSO Group, Hacking Team, Candiru, FinFisher, and others have not tended to rely on breaking encryption. Rather, they have sought vulnerabilities in client devices. Due to sandboxing and contemporary OS security practices this has regularly meant successfully targeting a messaging application and, subsequently, expanding a foothold on the device more generally.
In order for interoperability to ‘work’ properly there will need to be a number of preconditions. As noted in Rescorla’s post, this may include checking what functions an interoperable client possesses to determine whether ‘standard’ or ‘enriched’ client services are available. Moreover, APIs will need to be (relatively) stable or rely on a standardized protocol to facilitate interoperability. Finally, while spam messages are annoying on messaging applications today, they may become even more commonplace where interoperability is required and service providers cannot use their current processes to filter/quality check messages transiting their infrastructure.
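To make the capability-checking precondition concrete, here is a hedged sketch of what capability discovery between interoperable clients might look like. No real protocol is assumed; the message format, field names, and client details are all invented for illustration.

```python
# Hypothetical capability-discovery message for an interoperable messaging
# protocol. No real standard is assumed; the point is what such a message
# necessarily reveals about the client that sends it.

import json

def build_capabilities(client: str, version: str, features: list[str]) -> str:
    """Advertise what this client supports so peers can negotiate features."""
    return json.dumps({
        "client": client,      # identifies the implementation...
        "version": version,    # ...and whether it predates a known patch
        "features": features,  # 'enriched' features imply specific parsers
    })

def attacker_view(capabilities: str) -> None:
    """What a hostile peer learns from the very same handshake."""
    info = json.loads(capabilities)
    print(f"Target runs {info['client']} {info['version']}; "
          f"attack surface includes: {', '.join(info['features'])}")

if __name__ == "__main__":
    hello = build_capabilities("ExampleClient", "1.4.2",
                               ["reactions", "link-previews", "voice-notes"])
    attacker_view(hello)  # feature negotiation doubles as reconnaissance
```

Everything a client advertises in order to negotiate features doubles, from a hostile peer’s perspective, as reconnaissance data.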
What do all the aforementioned elements mean for client security?
Checking for client functionality may reveal whether a targeted client possesses known vulnerabilities, either generally (following a patch update) or just to the exploit vendor (where they know of a vulnerability and are actively exploiting it). Where spam filtering is weak, exploit vendors can use spam messages as reconnaissance, with the service provider, client vendor, or client applications not necessarily being aware of the threat activity.
When or if there is a significant need to rework how keying operates, or how identity properties linked to an API are handled more broadly, there is a risk that the implementation of updates may be delayed until the revisions have had time to be adopted by clients. While this might be great for competition vis-a-vis interoperability, it will also signal an oncoming change to threat actors, who may accelerate their efforts to get footholds on devices, or be warned that they, too, need to update their tactics, techniques, and procedures (TTPs).
As a more general point, threat actors might work to develop and propagate interoperable clients that they have already compromised; we’ve previously seen nation-state actors do so and there’s no reason to expect this behaviour to stop in a world of interoperable clients. Alternatively, threat actors might try to convince targets to move to ‘better’ clients that contain known vulnerabilities but which are developed and made available by legitimate vendors. Whereas, today, an exploit developer must target the specific messaging systems that deliver a given system’s messages, a future world of interoperable messaging will likely expand the clients that threat actors can seek to exploit.
One of the severe dangers and challenges facing the current internet regulation landscape is that a large number of new actors have entered the various overlapping policy fields. For a long time there were not that many of us, and anyone who has been around for 10-15 years tends to be sufficiently multidisciplinary to think about how activities in policy domain X might (or will) have consequences for domains Y and Z. The new raft of politicians and their policy advisors, in contrast, often lack this broad awareness. The result is that proposals are being advanced around the world by ostensibly well-meaning individuals and groups to address issues associated with online harms, speech, CSAM, competition, and security. However, these same parties often lack awareness of how the solutions meant to solve their favoured policy problems will affect neighbouring policy issues. And, where they are aware, they often don’t care, because that’s someone else’s policy domain.
It’s good to see more people participating and more inclusive policy making processes. And seeing actual political action on many issue areas after 10 years of people debating how to move forward is exciting. But too much of that action runs counter to the thoughtful warnings and need for caution that longer-term policy experts have been raising for over a decade.
We are almost certainly moving towards a ‘new Internet’. It remains in question, however, whether this ‘new Internet’ will see resolutions to longstanding challenges or if, instead, the rush to regulate will change the landscape by finally bringing to life the threats that long-term policy wonks have been working to forestall or prevent for much of their working lives. To date, I remain increasingly concerned that we will experience the latter rather than witness the former.
For the record, I currently remain unconvinced it is possible to implement end-to-end encryption across platforms generally. ↩︎
ProPublica, which is typically known for its excellent journalism, published a particularly terrible piece earlier this week that fundamentally miscast how encryption works and how Facebook vis-a-vis WhatsApp works to keep communications secured. The article, “How Facebook Undermines Privacy Protections for Its 2 Billion WhatsApp Users,” focuses on two so-called problems.
The So-Called Privacy Problems with WhatsApp
First, the authors explain that WhatsApp has a system whereby recipients of messages can report content they have received to WhatsApp on the basis that it is abusive or otherwise violates WhatsApp’s Terms of Service. The article frames this reporting process as a way of undermining privacy on the basis that secured messages are not kept solely between the sender(s) and recipient(s) of the communications but can be sent to other parties, such as WhatsApp. In effect, the ability to voluntarily forward messages to WhatsApp that someone has received is cast as breaking the privacy promises that have been made by WhatsApp.
Second, the authors note that WhatsApp collects a large volume of metadata in the course of the application’s use. Using lawful processes, government agencies have compelled WhatsApp to disclose metadata on some of its users in order to pursue investigations and secure convictions against individuals. The case that is focused on involves a government employee who leaked confidential banking records to BuzzFeed, which subsequently reported them out.
Assessing the Problems
In the case of forwarding messages for abuse-reporting purposes, encryption is not broken and the feature is not new. These kinds of processes offer a mechanism that lets individuals self-identify and report problematic content. Such content can include child grooming, the communication of illicit or inappropriate messages or audio-visual content, or other abusive information.
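For readers unfamiliar with how such reporting can coexist with end-to-end encryption, here is a minimal sketch. The XOR ‘cipher’ and the function names are toy stand-ins of my own, not WhatsApp’s implementation; the point is simply that reporting happens after decryption, on the recipient’s device.

```python
# Sketch of recipient-initiated abuse reporting in an end-to-end encrypted
# messenger. The crypto here is a deliberate toy (XOR), and the API names are
# hypothetical; the point is that the provider sees plaintext only because
# one legitimate endpoint of the conversation chose to forward it.

import itertools
from dataclasses import dataclass

@dataclass
class ReceivedMessage:
    sender: str
    ciphertext: bytes

def toy_crypt(data: bytes, key: bytes) -> bytes:
    """Toy XOR cipher standing in for real E2EE primitives."""
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

def report_abuse(message: ReceivedMessage, session_key: bytes) -> dict:
    """Recipient decrypts as usual, then voluntarily forwards the plaintext.

    The provider never held a key; nothing about the encryption is weakened.
    """
    plaintext = toy_crypt(message.ciphertext, session_key).decode()
    return {"reported_sender": message.sender, "content": plaintext}

if __name__ == "__main__":
    key = b"shared-session-key"
    incoming = ReceivedMessage("abuser@example", toy_crypt(b"abusive text", key))
    print(report_abuse(incoming, key))
```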
What we do learn, however, is that the ‘reactive’ and ‘proactive’ methods of detecting abuse need to be fixed. In the case of the former, only about 1,000 people are responsible for taking in and reviewing the reported content after it has first been filtered by an AI:
Seated at computers in pods organized by work assignments, these hourly workers use special Facebook software to sift through streams of private messages, images and videos that have been reported by WhatsApp users as improper and then screened by the company’s artificial intelligence systems. These contractors pass judgment on whatever flashes on their screen — claims of everything from fraud or spam to child porn and potential terrorist plotting — typically in less than a minute.
Further, the employees are often reliant on machine learning-based translations of content which makes it challenging to assess what is, in fact, being communicated in abusive messages. As reported,
… using Facebook’s language-translation tool, which reviewers said could be so inaccurate that it sometimes labeled messages in Arabic as being in Spanish. The tool also offered little guidance on local slang, political context or sexual innuendo. “In the three years I’ve been there,” one moderator said, “it’s always been horrible.”
There are also proactive modes of watching for abusive content using AI-based systems. As noted in the article,
Artificial intelligence initiates a second set of queues — so-called proactive ones — by scanning unencrypted data that WhatsApp collects about its users and comparing it against suspicious account information and messaging patterns (a new account rapidly sending out a high volume of chats is evidence of spam), as well as terms and images that have previously been deemed abusive. The unencrypted data available for scrutiny is extensive. It includes the names and profile images of a user’s WhatsApp groups as well as their phone number, profile photo, status message, phone battery level, language and time zone, unique mobile phone ID and IP address, wireless signal strength and phone operating system, as well as a list of their electronic devices, any related Facebook and Instagram accounts, the last time they used the app and any previous history of violations.
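The ‘new account rapidly sending out a high volume of chats’ pattern quoted above is easy to make concrete. Here is a minimal sketch with invented thresholds (WhatsApp’s actual detection models are not public), showing that such flagging uses only unencrypted metadata:

```python
# Minimal illustration of the metadata-only spam heuristic quoted above.
# The thresholds are invented; WhatsApp's real models are not public.

from dataclasses import dataclass

@dataclass
class AccountMetadata:
    account_age_hours: float
    messages_last_hour: int
    distinct_recipients_last_hour: int

def looks_like_spam(account: AccountMetadata) -> bool:
    """Flag new accounts blasting out chats, using only unencrypted metadata."""
    is_new = account.account_age_hours < 24          # invented threshold
    is_blasting = account.messages_last_hour > 100   # invented threshold
    is_broad = account.distinct_recipients_last_hour > 50
    return is_new and (is_blasting or is_broad)

if __name__ == "__main__":
    print(looks_like_spam(AccountMetadata(2.0, 300, 120)))     # True: flagged
    print(looks_like_spam(AccountMetadata(2000.0, 300, 120)))  # False: old account
```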
Unfortunately, the AI often makes mistakes. This led one interviewed content reviewer to state that, “[t]here were a lot of innocent photos on there that were not allowed to be on there … It might have been a photo of a child taking a bath, and there was nothing wrong with it.” Often, “the artificial intelligence is not that intelligent.”
The vast collection of metadata has been a long-reported concern and issue associated with WhatsApp and, in fact, was one of the many reasons why individuals advocate for the use of Signal instead. The reporting in the ProPublica article helpfully summarizes the vast amount of metadata that is collected, but that collection, in and of itself, does not present any evidence that Facebook or WhatsApp have transformed the application into one which inappropriately intrudes into persons’ privacy.
ProPublica Sets Back Reasonable Encryption Policy Debates
The ProPublica article harmfully sets back the broader policy discussion around what is, and is not, a reasonable approach for platforms to take in moderating abuse when they have integrated strong end-to-end encryption. Such encryption prevents unauthorized third parties (inclusive of the platform providers themselves) from reading or analyzing the content of the communications. Enabling a reporting feature means that individuals who receive a communication are empowered to report it to a company, and the company can subsequently analyze what has been sent and take action if the content violates a clause of its terms of service or privacy policy.
In suggesting that what WhatsApp has implemented is somehow wrong, it becomes more challenging for other companies to deploy similar reporting features without fearing that their decision will be reported on as ‘undermining privacy’. While there may be a valid policy discussion to be had (is a reporting process the correct way of dealing with abusive content and messages?) the authors didn’t go there. Nor did they seriously investigate whether additional resources should be devoted to analyzing reported content, or talk with artificial intelligence experts or machine-translation experts about whether Facebook’s efforts to automate the reporting process are adequate, appropriate, or flawed from the start. All of those would be very interesting, valid, and important contributions to the broader discussion about integrating trust and safety features into encrypted messaging applications. But those are not things that the authors chose to delve into.
The authors could also have discussed the broader importance (and challenges) of building out messaging systems that deliberately conceal metadata, and the benefits and drawbacks of such systems. While the authors do discuss how metadata can be used to crack down on individuals in government who leak data, as well as to assist in criminal investigations and prosecutions, there is little said about what kinds of metadata are most important to conceal and the tradeoffs in doing so. Again, there are some who think that all or most metadata should be concealed, and others who hold opposite views: there is room for a reasonable policy debate to be had and reported on.
Unfortunately, instead of taking up and reporting on the very valid policy discussions that are at the edges of their article, the authors chose to be bombastic and assert that WhatsApp was undermining the privacy protections that individuals thought they had when using the application. It’s bad reporting, insofar as it distorts the facts, and it is particularly disappointing given that ProPublica has shown it has the chops to do good investigative work that is well sourced and nuanced in its outputs. This article, however, absolutely failed to make the cut.
Over the course of the pandemic I’ve finally built up a good workflow for annotating papers and filing them in a reference manager. Unfortunately, the reference manager that I’ve been using announced this week that they were terminating all support for their mobile and desktop apps and pushing everything into the cloud, which entirely doesn’t work with my workflow.
This means that I’m giving Zotero another shot (I tried it back when I was doing my PhD and the service wasn’t exactly ready for popular use at the time). On the plus side, Zotero has a good set of instructions for how to import my references from Mendeley. On the negative side, Mendeley has made this about as painful as possible: they encrypt the local database, so you need to roll back to an older version of the application, and they then force you to manually download all of the documents attached to entries before the full bibliographic entries can be exported to another reference manager like Zotero. They have also entirely falsely asserted that the local encryption is required to comply with the GDPR, which is pretty frustrating.
On the plus side, the manual labour involved in importing the references is done, though it cost me around two hours of time that could have been used for something actually productive. And Zotero has an app for iOS coming, and there is another app called PaperShip which interoperates with Zotero, which should cut down on the hopefully-pretty-temporary pain of adopting a new workflow. However, I’m going to need to do a lot of corrections in the database (just to clean up references) and most likely have to start paying for another yearly subscription service, given that the free tier for Zotero doesn’t clearly meet my needs. Two steps backwards, one step forwards, I guess.
… in the years since WhatsApp co-founders Jan Koum and Brian Acton cut ties with Facebook for, well, being Facebook, the company slowly turned into something that acted more like its fellow Facebook properties: an app that’s kind of about socializing, but mostly about shopping. These new privacy policies are just WhatsApp’s—and Facebook’s—way of finally saying the quiet part out loud.
What’s going to change? Whenever you’re speaking with a business, those communications will not be considered end-to-end encrypted and, as such, the accessible communications content and metadata can be used for advertising and other marketing, data mining, data targeting, or data exploitation purposes. If you’re just chatting with individuals (that is, not businesses!) then your communications will continue to be end-to-end encrypted.
For an additional, and perhaps longer, discussion of how WhatsApp’s shifts in policy–now, admittedly, delayed for a few months following public outrage–is linked to the goal of driving business revenue into the company check out Alec Muffett’s post over on his blog. (By way of background, Alec’s been in the technical security and privacy space for 30+ years, and is a good and reputable voice on these matters.)
It’s become incredibly popular to attribute the activities undertaken by the Facebooks and Googles of the world to ‘surveillance capitalism’. This concept generally asserts that the current dominant mode of economics has become reliant on surveillance to drive economic growth. Surveillance, specifically, is defined as the act of watching or monitoring activity with the intent of using captured information to influence behaviour. In the world of the Internet, this information tends to be used to influence purchasing behaviours.
The issue that I have with the term surveillance capitalism is that I’m uncertain whether it comprehensively captures the activities associated with the data-driven economy. Surveillance Studies scholars tend to apply the same theories which are used to understand CCTV to practices such as machine learning; in both cases, the technologies are understood as establishing feedback loops to influence an individual or entire population. But, just as often, neither CCTV nor machine learning actually has a person- or community-related feedback loop. CCTV cameras are often not attended to, not functional, or don’t provide sufficient information to take action against those being recorded. Nor do individuals necessarily modify their own behaviours in the presence of such cameras. Similarly, machine learning algorithms may not be used to influence all persons: in some cases, people may be sufficiently outside the scope of whatever the algorithm is intended to do that they are not affected. Also, as with CCTV, individuals may not modify their own behaviours when machine learning algorithms are operating on the data they generate, simply because they are unaware that this is happening.
So, where surveillance capitalism depends on a feedback loop that is directly applied towards individuals within a particular economic framework, there may be instances where data is collected and monetized without clear or necessary efforts to influence individuals. Such situations could include those where a machine learning algorithm is designed to improve a facial recognition system, or improve battery life based on the activities undertaken by a user, or to otherwise very quietly make tools more effective without a clear attempt to modify user behaviour. I think that such activities may be very clearly linked to monetization and, more broadly, an ideology backed by capitalism. But I’m not sure it’s surveillance as it’s rigorously defined by scholars.
So one of the things that I keep thinking about is whether we should shift away from the increasingly-broad use of ‘surveillance capitalism’ to, more broadly, talk about ‘data capitalism’. I’m not suggesting doing away with the term surveillance capitalism but, instead, that surveillance capitalism is a sub-genus of data capitalism. Data capitalism would, I believe, better capture the ways in which information is collected, analyzed, and used to effect socio-technical changes. Further, I think such a term might also capture times where those changes are arguably linked to capitalist aims (i.e. enhancing profitability) but may be less obviously linked to the feedback loops towards individuals that are associated with surveillance itself.
After approximately twenty months of work, my colleagues and I have published an extensive report on encryption policies in Canada. It’s a major accomplishment for all of us to have finally concluded the work, and we’re excited by the positive feedback we’ve received about it.
Inspiring Quotation of the Week
“Ambition is a noble passion which may legitimately take many forms… but the noblest ambition is that of leaving behind something of permanent value.”