The issue of CSAM on Facebook’s networks has risen in prominence following a report in 2019 in the New York Times. That piece indicated that Facebook was responsible for reporting the vast majority of the 45 million online photos and videos of children being sexually abused. Ever since, Facebook has sought to contextualize the information it discloses to NCMEC and explain the efforts it has put in place to prevent CSAM from appearing on its services.
So what was the key finding from the research?
We evaluated 150 accounts that we reported to NCMEC for uploading CSAM in July and August of 2020 and January 2021, and we estimate that more than 75% of these did not exhibit malicious intent (i.e. did not intend to harm a child), but appeared to share for other reasons, such as outrage or poor humor. While this study represents our best understanding, these findings should not be considered a precise measure of the child safety ecosystem.
This finding is significant, as it suggests that the vast majority of the content reported by Facebook—while illegal!—is not deliberately being shared for malicious purposes. Even if we assume that the sampled figure should be adjusted—perhaps only 50% of individuals were actually malicious—we are still left with a significant finding.
There are, of course, limitations to the research. First, it excludes all end-to-end encrypted messages, so there is some volume of content that cannot be detected using these methods. Second, it remains unclear how the 150 accounts were selected for analysis and whether that selection was scientifically robust. Third, and related, there is a subsequent question of whether the selected accounts are necessarily representative of the broader pool of accounts that are associated with distributing CSAM.
Nevertheless, this seeming sleeper-research hit has significant implications insofar as it would substantially reduce the estimated number of problematic accounts/individuals deliberately disclosing CSAM to other parties. Clearly more work along this line is required, ideally across Internet platforms, in order to add further context and detail to the extent of the CSAM problem and subsequently define what policy solutions are necessary and proportionate.
First, the authors explain that WhatsApp has a system whereby recipients of messages can report content they have received to WhatsApp on the basis that it is abusive or otherwise violates WhatsApp’s Terms of Service. The article frames this reporting process as a way of undermining privacy on the basis that secured messages are not kept solely between the sender(s) and recipient(s) of the communications but can be sent to other parties, such as WhatsApp. In effect, the ability to voluntarily forward messages to WhatsApp that someone has received is cast as breaking the privacy promises that have been made by WhatsApp.
Second, the authors note that WhatsApp collects a large volume of metadata in the course of individuals using the application. Using lawful processes, government agencies have compelled WhatsApp to disclose metadata on some of their users in order to pursue investigations and secure convictions against individuals. The case the authors focus on involves a government employee who leaked confidential banking information to BuzzFeed, which subsequently reported on it.
Assessing the Problems
In the case of forwarding messages for abuse reporting purposes, encryption is not broken and the feature is not new. These kinds of processes offer a mechanism that lets individuals self-identify and report problematic content. Such content can include child grooming, the communication of illicit or inappropriate messages or audio-visual content, or other abusive information.
What we do learn, however, is that the ‘reactive’ and ‘proactive’ methods of detecting abuse need to be fixed. In the case of the former, only about 1,000 people are responsible for intaking and reviewing the reported content after it has first been filtered by an AI:
Seated at computers in pods organized by work assignments, these hourly workers use special Facebook software to sift through streams of private messages, images and videos that have been reported by WhatsApp users as improper and then screened by the company’s artificial intelligence systems. These contractors pass judgment on whatever flashes on their screen — claims of everything from fraud or spam to child porn and potential terrorist plotting — typically in less than a minute.
Further, the employees are often reliant on machine learning-based translations of content which makes it challenging to assess what is, in fact, being communicated in abusive messages. As reported,
… using Facebook’s language-translation tool, which reviewers said could be so inaccurate that it sometimes labeled messages in Arabic as being in Spanish. The tool also offered little guidance on local slang, political context or sexual innuendo. “In the three years I’ve been there,” one moderator said, “it’s always been horrible.”
There are also proactive modes of watching for abusive content using AI-based systems. As noted in the article,
Artificial intelligence initiates a second set of queues — so-called proactive ones — by scanning unencrypted data that WhatsApp collects about its users and comparing it against suspicious account information and messaging patterns (a new account rapidly sending out a high volume of chats is evidence of spam), as well as terms and images that have previously been deemed abusive. The unencrypted data available for scrutiny is extensive. It includes the names and profile images of a user’s WhatsApp groups as well as their phone number, profile photo, status message, phone battery level, language and time zone, unique mobile phone ID and IP address, wireless signal strength and phone operating system, as well as a list of their electronic devices, any related Facebook and Instagram accounts, the last time they used the app and any previous history of violations.
Unfortunately, the AI often makes mistakes. This led one interviewed content reviewer to state that, “[t]here were a lot of innocent photos on there that were not allowed to be on there … It might have been a photo of a child taking a bath, and there was nothing wrong with it.” Often, “the artificial intelligence is not that intelligent.”
The vast collection of metadata has been a long-reported concern and issue associated with WhatsApp and, in fact, is one of the many reasons why individuals advocate for the use of Signal instead. The reporting in the ProPublica article helpfully summarizes the vast amount of metadata that is collected, but that collection, in and of itself, does not present any evidence that Facebook or WhatsApp have transformed the application into one which inappropriately intrudes into persons’ privacy.
ProPublica Sets Back Reasonable Encryption Policy Debates
In suggesting that what WhatsApp has implemented is somehow wrong, it becomes more challenging for other companies to deploy similar reporting features without fearing that their decision will be reported on as ‘undermining privacy’. While there may be a valid policy discussion to be had–is a reporting process the correct way of dealing with abusive content and messages?–the authors didn’t go there. Nor did they seriously investigate whether additional resources should be adopted to analyze reported content, or talk with artificial intelligence experts or machine-based translation experts about whether Facebook’s efforts to automate the reporting process are adequate, appropriate, or flawed from the start. All of those would be very interesting, valid, and important contributions to the broader discussion about integrating trust and safety features into encrypted messaging applications. But…those are not things that the authors chose to delve into.
The authors could have, also, discussed the broader importance (and challenges) in building out messaging systems that can deliberately conceal metadata, and the benefits and drawbacks of such systems. While the authors do discuss how metadata can be used to crack down on individuals in government who leak data, as well as assist in criminal investigations and prosecutions, there is little said about what kinds of metadata are most important to conceal and the tradeoffs in doing so. Again, there are some who think that all or most metadata should be concealed, and others who hold opposite views: there is room for a reasonable policy debate to be had and reported on.
Unfortunately, instead of actually taking up and reporting on the very valid policy discussions that are at the edges of their article, the authors chose to be bombastic, asserting that WhatsApp was undermining the privacy protections that individuals thought they had when using the application. It’s bad reporting, insofar as it distorts the facts, and is particularly disappointing given that ProPublica has shown it has the chops to do good investigative work that is well sourced and nuanced in its outputs. This article, however, absolutely failed to make the cut.
Lotus Ruan and Gabrielle Lim have a terrific piece in Just Security which strongly makes the case that, “fears of Chinese disinformation are often exaggerated by overblown assessments of the effects of China’s propaganda campaigns and casually drawn attributions.”
The two make clear that there are serious issues with how some Western policy analysts and politicians are suggesting that their governments respond to foreign influence operations that are associated with Chinese public and private parties. To begin, the very efficacy of influence operations remains mired in questions. While this is an area that is seeing more research of late, academics and policy analysts alike cannot assert with significant accuracy whether foreign influence operations have any real impact on domestic opinions or feelings. This should call for conservatism in the policies which are advanced but, instead, we often see calls for Western nations to adopt the internet ‘sovereignty’ positions championed by Russia and China themselves. These analysts and politicians are, in other words, asserting that the only way to be safe from China (and Russia) is to adopt those countries’ own policies.
Even were such (bad) policies adopted, it’s unclear that they would resolve the worst challenges facing countries such as the United States today. Anti-vaxxers, pro-coup supporters, and Big Lie advocates have all been affected by domestic influence operations that were (and are) championed by legitimately elected politicians, celebrities, and major media personalities. Building a sovereign internet ecosystem will do nothing to protect from the threats that are inside the continental United States and which are clearly having a deleterious effect on American society.
What I think I most appreciated in the piece by Ruan and Lim is that they frankly and directly called out many of the so-called solutions to disinformation and influence operations as racist. As just one example, there are those who call for ‘clean’ technologies that juxtapose Western against non-Western technologies. These kinds of arguments often directly perpetuate racist policies; they will not only do nothing to mitigate the spread of misinformation but will simultaneously cast suspicion and violence towards non-Caucasian members of society. Such proposals must be resisted, and the authors are to be congratulated for directly and forcefully calling out the policies for what they are instead of carefully critiquing the proposals without ever naming them as racist.
Civil liberties debates about whether individuals should have to get vaccinated against Covid-19 are on the rise. Civil liberties groups broadly worry that individuals will suffer intrusions into their privacy, or that rights of association or other rights will be unduly abridged, as businesses and employers require individuals to demonstrate proof of vaccination.
As discussed in a recent article published by the CBC, some individuals are specifically unable to, or concerned about, receiving Covid-19 vaccines on the basis that, “they’re taking immunosuppressant drugs, for example, while others have legitimate concerns about the safety and efficacy of the COVID-19 vaccines or justifiable fears borne from previous negative interactions with the health-care system.” The same expert, Arthur Schafer of the Centre for Professional and Applied Ethics at the University of Manitoba, said, “[w]e should try to accommodate people who have objections, conscientious or scientific or even religious, where we can do so without compromising public safety and without incurring a disproportionate cost to society.”
Other experts, such as Ann Cavoukian, worry that being compelled to disclose vaccination status could jeopardize individuals’ medical information should it be shared with parties who are not equipped to protect it, or who may combine it with other information to discriminate against individuals. The Canadian Civil Liberties Association, for its part, has taken the stance that individuals should have the freedom to choose to be vaccinated or not, that no compulsions should be applied to encourage vaccination (e.g., requiring vaccination to attend events), and broadly that, “COVID is just another risk now that we have to incorporate into our daily lives.”
In situations where individuals are unable to be vaccinated, either due to potential allergic responses or lack of availability of vaccine (e.g., those under the age of 12), then it is imperative to ensure that individuals do not face discrimination. In these situations, those affected cannot receive a vaccine and it is important to not create castes of the vaccinated and unable-to-be-vaccinated. For individuals who are hesitant due to historical negative experiences with vaccination efforts, or medical experimentation, some accommodations may also be required.
However, in the cases where vaccines are available and there are opportunities to receive said vaccine, then not getting vaccinated does constitute a choice. As it stands, today, in many Canadian schools children are required to receive a set of vaccinations in order to attend school and if their parents refuse, then the children are required to use alternate educational systems (e.g., home schooling). When parents make a specific choice they are compelled to deal with the consequences of said decision. (Of course, there is not a vaccine for individuals under 12 years of age at the moment and so we shouldn’t be barring unvaccinated children from schools, but adopting such a requirement in the future might align with how schools regularly require proof of vaccination status to attend public schools.)
The ability to attend a concert, as an example, can and should be predicated on vaccination status where vaccination is an option for attendees. Similarly, if an individual refuses to be vaccinated their decision may have consequences in cases where they are required to be in-person in their workplace. There may be good reasons for why some workers decline to be vaccinated, such as a lack of paid days off and fear that losing a few days of work due to vaccination symptoms may prevent them from paying the rent or getting food; in such cases, accommodations to enable them to get vaccinated are needed. However, once such accommodations are made decisions to continue to not get vaccinated may have consequences.
In assessing whether policies are discriminatory individuals’ liberties as well as those of the broader population must be taken into account, with deliberate efforts made to ensure that group rights do not trample on the rights of minority or disenfranchised members of society. Accommodations must be made so that everyone can get vaccinated; rules cannot be established that apply equally but affect members of society in discriminatory ways. But, at the same time, the protection of rights is conditional and mitigating the spread of a particularly virulent disease that has serious health and economic effects is arguably one of those cases where protecting the community (and, by extension, those individuals who are unable to receive a vaccine for medical reasons) is of heightened importance.
Is this to say that there are no civil liberties concerns that might arise when vaccinating a population? No, obviously not.
In situations where individuals are unhoused or otherwise challenged in keeping or retaining a certification that they have been vaccinated, then it is important to build policies that do not discriminate against these classes of individuals. Similarly, if there is a concern that vaccination passes might present novel security risks that have corresponding rights concerns (e.g., a digital system that links presentations of a vaccination credential with locational information) then it is important to carefully assess, critique, and re-develop systems so that they provide the minimum data required to reduce the risk of Covid-19’s spread. Also, as the population of vaccinated persons reaches certain percentages there may simply be less of a need to assess or check that someone is vaccinated. While this means that some ‘free riders’ will succeed, insofar as they will decline to be vaccinated and not suffer any direct consequences, the goal is not to punish people who refuse vaccination but instead to very strongly encourage enough people to get vaccinated so that the population as a whole is well-protected.
However, taking the position that Covid-19 is part of society, that society just has to get used to people refusing to be vaccinated while participating in ‘regular’ social life, and that this is just a cost of enjoying civil liberties, seems like a bad argument and a poor framing of the issue. Making this kind of broad argument risks pushing the majority of Canadians towards discounting all reasons that individuals may present to justify or explain not getting vaccinated, with the effect of inhibiting civil society from getting the public on board to protect the rights of those who would be harmfully affected by mandatory vaccination policies or demands that individuals always carry vaccine passport documents.
Those who have made a choice to opt-out of vaccination may experience resulting social costs, but those who cannot opt to get a vaccine in the first place or who have proven good reasons for avoiding vaccination shouldn’t be unduly disadvantaged. That’s the line in the sand to hold and defend, not that protecting civil liberties means that there should be no cost for voluntarily opting out of life saving vaccination programs.
Elizabeth Dubois has a great episode of Wonks and War Rooms where she interviews Etienne Rainville of The Boys in Short Pants podcast, former Hill staffer, and government relations expert. They unpack how government staffers collect information, process it, and identify experts.
Broadly, the episode focuses on how the absence of significant policy expertise in government and political parties means that social media—and Twitter in particular—can play an outsized role in influencing government, and why that’s the case.
While the discussion isn’t necessarily revelatory to anyone who has dealt with some elements of the government of Canada, and especially MPs and their younger staffers, it’s a good and tight conversation that could be useful for students of Canadian politics, and it also helpfully distinguishes some of the differences between Canadian and American political cultures. I found the forthrightness of the conversation, and its honesty about how government operates, particularly useful in clarifying why Twitter is, indeed, a place for experts in Canada to spend time if they want to be policy relevant.
Sabat Ismail, writing at Spacing Toronto, interrogates who safe streets are meant to be safe for. North American calls for adopting Nordic models of urban cityscapes are often focused on redesigning streets for cycling whilst ignoring that Nordic safety models are borne out of broader conceptions of social equity. Given the broader (white) recognition of the violent threat that police can represent to Black Canadians, cycling organizations which are principally advocating for safe streets must carefully think through how to make them safe, and appreciate why calls for greater law enforcement to protect non-automobile users may run counter to an equitable sense of safety. To this point, Ismail writes:
I recognize the ways that the safety of marginalized communities and particularly Black and Indigenous people is disregarded at every turn and that, in turn, we are often provided policing and enforcement as the only option to keep us safe. The options for “safety” presented provide a false choice – because we do not have the power to determine safety or to be imagined within its folds.
Redesigning streets without considering how the design of urban environments are rife with broader sets of values runs the very real risk of further systematizing racism while espousing values of freedom and equality. The values undergirding the concept of safe streets must be assessed by a diverse set of residents to understand what might equitably provide safety for all people; doing anything less will likely re-embed existing systems of power in urban design and law, to the ongoing detriment and harm of non-white inhabitants of North American cities.
It’s a profoundly strange experience knowing that my work was cited in the development of a US Presidential Executive Order (in this case, on the relative merits of cyber security transparency reporting).
Steven Levy has an article out in Wired this week in which he, vis-a-vis the persons he interviewed, proclaims that the ‘going dark’ problem has been solved to the satisfaction of (American) government agencies and (unnamed and not quoted) ‘privacy purists’.[1] Per the advocates of the so-called solution, should the proposed technical standard be advanced and developed then (American) government agencies could access encrypted materials and (American) users would enjoy the same degrees of strong encryption as they do today. This would ‘solve’ the problem of (American) agencies’ investigations being stymied by suspects’ adoption of encrypted communications systems and personal devices.
Unfortunately Levy got played: the proposal he dedicates his article to is just another attempt to advance a ‘solution’ that doesn’t address the real technical or policy problems associated with developing a global backdoor system to our most personal electronic devices. Specifically, the architect of the solution overestimates the existing security characteristics of contemporary devices,[2] overestimates the ability of companies to successfully manage a sophisticated and globe-spanning key management system,[3] fails to address international policy issues about why other governments couldn’t or wouldn’t demand similar kinds of access (think Russia, China, Iran, etc.),[4] fails to contemplate an adequate key revocation system, and fails to adequately explain why the exceptional access system he envisions is genuinely needed. With regards to that last point, government agencies have access to more data than ever before in history and, yet, because they don’t have access to all of the data in existence the agencies claim they are somehow being ‘blinded’.
As I’ve written in a draft book chapter, for inclusion in a book to be published later this year or early next, the idea that government agencies are somehow worse off than in the past is pure nonsense. Consider that,
[a]s we have embraced the digital era in our personal and professional lives, [Law Enforcement and Security Agencies] LESAs have also developed new techniques and gained additional powers in order to keep pace as our memories have shifted from personal journals and filing cabinets to blogs, social media, and cloud hosting providers. LESAs now subscribe to services designed to monitor social media services for intelligence purposes, they collect bulk data from telecommunications providers in so-called ‘tower dumps’ of all the information stored by cellular towers, establish their own fake cellular towers to collect data from all parties proximate to such devices, use malware to intrude into either personal endpoint devices (e.g. mobile phones or laptops) or networking equipment (e.g. routers), and can even retroactively re-create our daily online activities with assistance from Canada’s signals intelligence agency. In the past, each of these kinds of activities would have required dozens or hundreds or thousands of government officials to painstakingly follow persons — many of whom might not be specifically suspected of engaging in a criminal activity or activity detrimental to the national security of Canada — and gain lawful entry to their personal safes, install cameras in their homes and offices, access and copy the contents of filing cabinets, and listen in on conversations that would otherwise have been private. So much of our lives have become digital that entirely new investigative opportunities have arisen which were previously restricted to the imaginations of science fiction authors both insofar as it is easier to access information but, also, because we generate and leave behind more information about our activities vis-a-vis our digital exhaust than was even possible in a world dominated by analog technologies.
In effect: the ‘solution’ covered by Levy doesn’t clearly articulate what problem must be solved, and it would end up generating more problems than it solves by significantly diminishing the security properties of devices while, simultaneously, raising international policy issues of which countries’ authorities, and under what conditions, could lawfully obtain decryption keys. Furthermore, companies and their decryption keys would suddenly become even more targeted by advanced adversaries than they are today. Instead of even attempting to realistically account for these realities of developing and implementing secure systems, the proposed ‘solution’ depends on a magical pixie dust assumption that you can undermine the security of globally distributed products and have no bad things happen.[5]
The article as written by Levy (and the proposed solution at the root of the article) is exactly the kind of writing and proposal that gives law enforcement agencies the energy to drive a narrative that backdooring all secure systems is possible and that the academic, policy, and technical communities are merely ideologically opposed to doing so. As has become somewhat common to say, while we can land a person on the moon, that doesn’t mean we can also land a person on the sun; while we can build (somewhat) secure systems we cannot build (somewhat) secure systems that include deliberately inserted backdoors. Ultimately, it’s not the case that ‘privacy purists’ oppose such solutions to undermine the security of all devices on ideological grounds: they’re opposed based on decades of experience, training, and expertise that lets them recognize such solutions as the charades that they are.
[1] I am unaware of a single person in the American or international privacy advocacy space who was interviewed for the article, let alone espouses positions that would be pacified by the proposed solution. ↩
[2] Consider that there is currently a way of bypassing the existing tamper-resistant chip in Apple’s iPhone, which is specifically designed to ‘short out’ the iPhone if someone attempts to enter an incorrect password too many times. A similar mechanism would ‘protect’ the master key that would be accessible to law enforcement and security agencies. ↩
[3] Consider that Microsoft has, in the past, lost its master key that is used to validate copies of Windows as legitimate Microsoft-assured products and, also, that Apple managed to lose key parts of its iOS codebase and reportedly its signing key. ↩
[4] Consider that foreign governments look at the laws promulgated by Western nations as justification for their own abusive and human rights-violating legislation and activities. ↩
[5] Some of the more unhelpful security researchers just argue that if Apple et al. don’t want to help foreign governments open up locked devices they should just suspend all service into those jurisdictions. I’m not of the opinion that protectionism and nationalism are ways of advancing international human rights or of raising the qualities of life of all persons around the world; it’s not morally right to just cast the citizens of Russia, Ethiopia, China, India, Pakistan, or Mexico (and others!) to the wolves of their own oftentimes overzealous or rights abusing government agencies. ↩
But here’s the thing. Mnuchin’s shameless posturing about the administration’s tax plans—at one point he even promised there would be “no absolute tax cut for the upper class,” which was a laugher given every proposal Trump had ever backed—points to a deeper problem. The man regularly says things that just aren’t true. He’s been claiming that there was an analysis underway. There wasn’t. And while a lot of people may roll their eyes about that in the context of a wonky tax debate, his complete lack of credibility is going to be a problem if we ever run into a serious economic or financial crisis. Just ask yourself: If the markets were crashing and Steve Mnuchin held a press conference assuring everybody that the administration had an action plan in the works, would you believe him? His complete detachment from reality has mostly been an infuriating sideshow during this tax push. If stuff ever really hits the fan, though, his reputation for fibbing is going to make things even worse. Just like someone else we know.
While policies may vary, the sensitive nature of the data produced does not. Traffic data analysis generates more sensitive profiles of an individual’s actions and intentions, arguably more so than communications content. In a communication with another individual, we say what we choose to share; in a transaction with another device, for example, search engines and cell stations, we are disclosing our actions, movements, and intentions. Technology-neutral policies continue to regard this transactional data as POTS traffic data, and accordingly apply inadequate protections.
This is not faithful to the spirit of updating laws for new technology. We need to acknowledge that changing technological environments transform the policy itself. New policies need to reflect the totality of the new environment.
* Alberto Escudero-Pascual and Ian Hosein, “Questioning Lawful Access to Traffic Data”