The issue of CSAM on Facebook’s networks rose in prominence following a 2019 report in the New York Times. That piece indicated that Facebook was responsible for reporting the vast majority of the 45 million photos and videos of children being sexually abused that were found online. Ever since, Facebook has sought to contextualize the information it discloses to NCMEC and explain the efforts it has put in place to prevent CSAM from appearing on its services.
So what was the key finding from the research?
We evaluated 150 accounts that we reported to NCMEC for uploading CSAM in July and August of 2020 and January 2021, and we estimate that more than 75% of these did not exhibit malicious intent (i.e. did not intend to harm a child), but appeared to share for other reasons, such as outrage or poor humor. While this study represents our best understanding, these findings should not be considered a precise measure of the child safety ecosystem.
This finding is significant: it suggests that the vast majority of the content reported by Facebook, while illegal, is not deliberately being shared for malicious purposes. Even if we assume that the sampled figure should be adjusted, with perhaps 50% of individuals actually being malicious, we are still left with a significant finding.
There are, of course, limitations to the research. First, it excludes all end-to-end encrypted messages, so some volume of content cannot be detected using these methods. Second, it remains unclear how scientifically robust the selection of the 150 accounts was. Third, and relatedly, there is the question of whether the selected accounts are representative of the broader pool of accounts associated with distributing CSAM.
Nevertheless, this seeming sleeper hit of research has significant implications insofar as it would shrink the estimated number of problematic accounts and individuals deliberately disclosing CSAM to other parties. Clearly more work along this line is required, ideally across Internet platforms, to add further context and detail to the extent of the CSAM problem and subsequently to define what policy solutions are necessary and proportionate.
The Cambridge Computer Laboratory’s security research group has a really lovely blog series called ‘Three Paper Tuesday’ that I wish other organizations would adopt.
They have a guest (usually a graduate student) provide concise summaries of three papers, followed by a short two- to three-paragraph ‘Lessons Learned’ section to conclude the post. Not only do readers get annotated bibliographies for each entry but, perhaps more importantly, the lessons learned mean that non-experts can appreciate the literature in a broader or more general context. The post about subverting neural networks, as an example, concludes with:
On the balance of the findings from these papers, adversarial reprogramming can be characterised as a relatively simple and cost-effective method for attackers seeking to subvert machine learning models across multiple domains. The potential for adversarial programs to successfully avoid detection and be deployed in black-box settings further highlights the risk implications for stakeholders.
Elsayed et al. identify theft of computational resources and violation of the ethical principles of service providers as future challenges presented by adversarial reprogramming, using the hypothetical example of repurposing a virtual assistant as spyware or a spambot. Identified directions for future research include establishing the formal properties and limitations of adversarial reprogramming, and studying potential methods to defend against it.
If more labs and research groups did this, I’d imagine it would help spread awareness of research and its actual utility or importance in advancing the state of knowledge, to the benefit of other academics. It would also showcase to policymakers what the key issues actually are and where research lines are trending, and thus empower them (and, perhaps, even journalists) to better take up the issues they happen to be focused on. That would certainly be a win for everybody: it would be easier for researchers to identify articles of interest, for practitioners to assess the relevance of research, and for graduate students to showcase their knowledge and communication skills.
A data access request involves you contacting a private company and requesting a copy of your personal information, as well as the ways in which that data is processed, disclosed, and the periods of time for which data is retained.
I’ve conducted research over the past decade which has repeatedly shown that companies are often very poor at comprehensively responding to data access requests. Sometimes this is because of divides between technical teams that collect and use the data, policy teams that determine what is and isn’t appropriate to do with data, and legal teams that ascertain whether collections and uses of data comport with the law. In other situations companies simply refuse to respond because they adopt a confused-nationalist understanding of law: if the company doesn’t have an office somewhere in a requesting party’s country, then that jurisdiction’s laws aren’t seen as applying to the company, even if the company does business in the jurisdiction.
Automated Data Export As Solution?
Some companies, such as Facebook and Google, have developed automated data download services. Ostensibly these services are designed so that you can download the data you’ve input into the companies, thus revealing precisely what is collected about you. In reality, these services don’t let you export all of the information that these respective companies collect. As a result, people who use these download services end up with a false impression of just what information the companies collect and how it’s used.
A shining example of the kinds of information that are not revealed to users of these services has come to light. A leaked document from Facebook Australia revealed that:
Facebook’s algorithms can determine, and allow advertisers to pinpoint, “moments when young people need a confidence boost.” If that phrase isn’t clear enough, Facebook’s document offers a litany of teen emotional states that the company claims it can estimate based on how teens use the service, including “worthless,” “insecure,” “defeated,” “anxious,” “silly,” “useless,” “stupid,” “overwhelmed,” “stressed,” and “a failure.”
This targeting of emotions isn’t necessarily surprising: in a past exposé we learned that Facebook conducted experiments during an American presidential election to see if they could sway voters. Indeed, the company’s raison d’être is to figure out how to pitch ads to customers, and figuring out when Facebook users are more or less likely to be affected by advertisements is just good business. If you use the self-download service provided by Facebook, or any other data broker, you will not receive data on how and why your data is exploited: without understanding how their algorithms act on the data they collect from you, you can never really understand how your personal information is processed.
But that raison d’être of pitching ads to people — which is why Facebook could internally justify the deliberate targeting of vulnerable youth — ignores baseline ethics of whether it is appropriate to exploit our psychology to sell us products. To be clear, this isn’t a company stalking you around the Internet with ads for a car or couch or jewelry that you were browsing about. This is a deliberate effort to mine your communications to sell products at times of psychological vulnerability. The difference is between somewhat stupid tracking versus deliberate exploitation of our emotional state.1
Solving for Bad Actors
There are laws governing what you can do with information provided by children. Whether Facebook’s actions run afoul of such laws may never actually be tested in a court or privacy commissioner’s decision. In part, this is because mounting legal challenges is extremely difficult, expensive, and time-consuming. These hurdles automatically tilt the balance towards activities such as this continuing.
But part of the challenge in stopping such exploitative activities is also linked to Australia’s historically weak privacy commissioner, as well as the limitations of such offices around the world: privacy commissioners’ offices are often understaffed, under-resourced, and unable to chase every legally and ethically questionable practice undertaken by private companies. Companies know about these limitations and, as such, know they can get away with unethical and frankly illegal activities unless someone talks to the press about the activities in question.
So what’s the solution? The rote advice is to stop using Facebook. While that might be good advice for some, for a lot of other people leaving Facebook is very, very challenging. You might use it to sign into a lot of other services and so don’t think you can easily abandon Facebook. You might have stored years of photos or conversations and Facebook doesn’t give you a nice way to pull them out. It might be a place where all of your friends and family congregate to share information and so leaving would amount to being excised from your core communities. And depending on where you live you might rely on Facebook for finding jobs, community events, or other activities that are essential to your life.
In essence, the problems posed by Facebook, Google, Uber, and all the other large data brokers amount to a collective action problem. It’s not a problem that is best solved on an individualistic basis.
A more realistic kind of advice would be this: file complaints to your local politicians. File complaints to your domestic privacy commissioners. File complaints to every conference, academic association, and industry event that takes Facebook money.2 Make it very public and very clear that you and groups you are associated with are offended by the company in question that is profiting off the psychological exploitation of children and adults alike.3 Now, will your efforts to raise attention to the issue and draw negative attention to companies and groups profiting from Facebook and other data brokers stop unethical data exploitation tomorrow? No. But by consistently raising our concerns about how large data brokers collect and use personal information, and attributing some degree of negative publicity to all those who benefit from such practices, we can decrease the public stock of a company.
History is dotted with individuals who are seen as standing up to end bad practices by governments and private companies alike. But behind them tend to be masses of citizens who support those individuals: while standing up en masse may mean that we don’t each get individual praise for stopping some tasteless and unethical practices, our collective action will make it more likely that such practices are stopped. By each working a little, we can accomplish together what we’d be hard pressed to change as individuals.
(This article was previously published in a slightly different format on a now-defunct Medium account.)
1 Other advertising companies adopt the same practices as Facebook. So I’m not suggesting that Facebook is worst-in-class and letting the others off the hook.
2 Replace ‘Facebook’ with whatever company you think is behaving inappropriately, unethically, or perhaps illegally.
3 Surely you don’t think that Facebook is only targeting kids, right?
This is probably the best journalistic account of how current and past members of the Citizen Lab, in tandem with Lookout (a security company), identified the most significant vulnerability to ever target Apple devices.
… security researchers have discovered how to use software defined radio (SDR) to remotely unlock hundreds of millions of cars. The findings are to be presented at a security conference later this week, and detail two different vulnerabilities.
The first affects almost every car Volkswagen has sold since 1995, with only the latest Golf-based models in the clear. Led by Flavio Garcia at the University of Birmingham in the UK, the group of hackers reverse-engineered an undisclosed Volkswagen component to extract a cryptographic key value that is common to many of the company’s vehicles.
Alone, the value won’t do anything, but when combined with the unique value encoded on an individual vehicle’s remote key fob—obtained with a little electronic eavesdropping, say—you have a functional clone that will lock or unlock that car.
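The two-part scheme described above can be sketched as a toy model. Everything below is illustrative only: the real system uses a proprietary cipher and rolling-code format (not HMAC), and all key material, names, and values here are invented.

```python
import hashlib
import hmac

# Hypothetical stand-in for the single cryptographic value the researchers
# extracted from firmware, shared across millions of vehicles.
GLOBAL_KEY = b"shared-across-many-cars"

def unlock_code(fob_id: bytes, counter: int) -> bytes:
    """Derive an unlock code from the global key plus the fob's unique value.

    HMAC-SHA256 is used purely as an illustrative keyed function; the real
    scheme is a proprietary cipher.
    """
    msg = fob_id + counter.to_bytes(4, "big")
    return hmac.new(GLOBAL_KEY, msg, hashlib.sha256).digest()[:8]

# The car and the legitimate fob agree because both hold the same inputs.
fob_id = b"\x13\x37\xca\xfe"  # unique value, eavesdropped from one button press
assert unlock_code(fob_id, 42) == unlock_code(fob_id, 42)

# An attacker who has the (leaked) global key and has eavesdropped the fob's
# unique value can forge valid codes for future counter values: a clone.
forged = unlock_code(fob_id, 43)
```

The point of the sketch is the asymmetry: neither the global key alone nor the eavesdropped fob value alone suffices, but together they reproduce everything the legitimate fob knows.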
Just implement the research by dropping some Raspberry Pis in a mid- to high-income condo parking garage and you’ve got an easy way to profit pretty handsomely from Volkswagen’s security FUBAR.
Netsweeper is a small Canadian company with a disarmingly boring name and an office nestled among the squat buildings of Waterloo, Ontario. But its services—namely, online censorship—are offered in countries as far-flung as Bahrain and Yemen.
In 2015, University of Toronto-based research hub Citizen Lab reported that Netsweeper was providing Yemeni rebels with censorship technology. In response, Citizen Lab director Ron Deibert revealed in a blog post on Tuesday, Netsweeper sued the university and Deibert for defamation. Netsweeper discontinued its lawsuit in its entirety in April.
The lack of teaching skills means we are supporting institutions that not only fail to do what we idealize them to do, but also don’t value and professionalize the things we expect them to do well. In fact, we have gone to extremes to prevent the job of university teaching from becoming a profession. The most obvious example is hiring adjunct professors. These are people who are hired for about the same wage as a fast food server, and are expected to teach physics or philosophy to 18-year-olds. They don’t get benefits or even long-term contracts. So, in effect, they never get the chance to develop into highly skilled teaching professionals. Instead, they spend most of their time worrying about heating bills and whether they can afford to go to the doctor.
Now, of course, universities will argue that they are research organizations. And that is true. Universities do value research over teaching. Meaning that tenured and tenure-track professors, even if they love teaching, cannot prioritize it, because their administration requires them to be good researchers. Indeed, if you admit that you are a middling to average researcher and want to focus on teaching, you become viewed as a burden by your department.
Yet, for the great majority of people, their only interaction with a university is through the people doing the teaching. It’s as if a major corporation, say General Motors, decided that their public face would not be their most visible product—hello Chevy Volt—and instead placed the janitorial service front and center. Then, just to top it off, decided not to train the janitors.
On January 20, 2014 the Citizen Lab along with leading Canadian academics and civil liberties groups asked Canadian telecommunications companies to reveal the extent to which they disclose information to state authorities. This post summarizes and analyzes the responses from the companies, and argues that the companies have done little to ultimately clarify their disclosure policies. We conclude by indicating the subsequent steps in this research project.
The most recent posting about our ongoing research into how, why, and how often Canadian ISPs disclose information to state agencies.
While such research is done in a number of countries, Canada seems to be a hotbed of boredom studies. James Danckert, an associate professor of psychology at the University of Waterloo, in Canada, recently conducted a study to compare the physiological effects of boredom and sadness.
To induce sadness in the lab, he used video clips from the 1979 tear-jerker, “The Champ,” a widely accepted practice among psychologists.
But finding a clip to induce boredom was a trickier task. Dr. Danckert first tried a YouTube video of a man mowing a lawn, but subjects found it funny, not boring. A clip of parliamentary proceedings was too risky. “There’s always the off chance you get someone who is interested in that,” he says.
A really interesting paper on social authentication has just been released that looks at how facial identification ‘works’ to secure social networks from unauthorized access to profiles/records. The authors note that users of social networks are most concerned with keeping their interactions private from those who know them. Specifically, from the abstract:
Most people want privacy only from those close to them; if you’re having an affair then you want your partner to not find out but you don’t care if someone in Mongolia learns about it. And if your partner finds out and becomes your ex, then you don’t want them to be able to cause havoc on your account. Celebrities are similar, except that everyone is their friend (and potentially their enemy).
Moreover, a targeted effort to identify a user’s friends on a social network – and examine their photos – will let an attacker penetrate the social authentication mechanisms. While many users would consider this a design flaw, Facebook, which uses this system, doesn’t necessarily agree because:
[Facebook] told us that the social captcha mechanism was used to solve the problem of large-scale phishing attacks. They knew it was not very effective against friends, and especially not against a jilted former lover. For that, they maintain that the local police and courts are an effective solution. They also claim that although small-scale face recognition is doable, their scraping protection prevents it being used at large scales.
What Facebook is doing isn’t wrong: the company simply has a particular attacker type in mind with regards to social authentication and has deployed a defence mechanism to combat that attacker. Most users, however, are unlikely to consider that the company has a different attack scenario in mind than its end-users, leading to anger and concern when the defence against wide-scale attacks fails to protect against targeted attackers. While I don’t see this as a security or policy failure, it suggests that companies would be well advised to explain to their users how different security inconveniences actually interact with different hack/attack scenarios. Beyond educating users as to what they can expect from the various defence mechanisms, it might serve to raise some awareness about the different kinds of attackers that companies have to defend against. In an ideal world, this might serve as a starting point in educating users to become more critical of the security models that are imposed upon them by corporations, governments, and other parties they deal with.
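The asymmetry between the two attacker types can be captured in a toy model. This is purely illustrative of the threat-model mismatch, assuming the mechanism works by showing photos of the account owner’s friends and asking for their names; all names and data structures are invented.

```python
import random

# Hypothetical mapping the social network holds: photos of the account
# owner's friends, and who appears in each one.
friend_photos = {"photo_a": "Alice", "photo_b": "Bob", "photo_c": "Carol"}

def social_captcha(num_challenges: int = 3) -> list:
    """Pick photos the login attempt must correctly label to proceed."""
    return random.sample(list(friend_photos), num_challenges)

# A remote, large-scale phisher fails: they have no mapping from photos
# to names, which is exactly the attacker the mechanism targets.
#
# A targeted attacker (e.g. a jilted ex) who has crawled the victim's
# friend list and their tagged photos holds the same mapping the network
# does, and so passes every challenge.
scraped = dict(friend_photos)  # obtained by scraping public friend profiles
challenges = social_captcha()
answers = [scraped[photo] for photo in challenges]
assert answers == [friend_photos[photo] for photo in challenges]
```

The defence is sound against its intended adversary and near-useless against the targeted one, which is the gap between Facebook’s threat model and the one users assume.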