Categories
Aside

In Memoriam: John L. Young of Cryptome

John L. Young, founder of Cryptome, has died.

John’s work at Cryptome was inspirational for much of the work that I did during my doctorate and my time at the Citizen Lab. His unwavering commitment to transparency and his efforts to hold the powerful accountable were an early and important light, showing how digital archives could be used to promote real change.

While we never met, his commitment to transparency and accountability will live on with me and many others.

You can learn about the history of Cryptome on Wikipedia.

Categories
Links Writing

An Initial Assessment of CLOUD Agreements

The United States has bilateral CLOUD Act agreements with the United Kingdom and Australia, and Canada continues to negotiate its own agreement with the United States.1 CLOUD agreements are meant to alleviate some of the challenges attributed to the MLAT process, namely that MLATs can be ponderous, with the result that investigators have difficulty obtaining information from communications providers in a timely manner.

Investigators must conform with their domestic legal requirements and, with CLOUD agreements in place, can serve orders directly on the bilateral partner’s communications and electronic service providers. Orders cannot target residents of the partner country (i.e., the UK government could not target a US resident or person, and vice versa). Demands also cannot interfere with fundamental rights, such as freedom of speech.2

A recent report from Lawfare unpacks the November 2024 report that was produced to explain how the UK and US governments have actually used the powers under their bilateral agreement. It shows that, so far, the UK government has used the agreement largely to facilitate wiretap requests, with the UK issuing,

… 20,142 requests to U.S. service providers under the agreement. Over 99.8 percent of those (20,105) were issued under the Investigatory Powers Act, and were for the most part wiretap orders, and fewer than 0.2 percent were overseas production orders for stored communications data (37).

By way of contrast, the “United States made 63 requests to U.K. providers between Oct. 3, 2022, and Oct. 15, 2024. All but one request was for stored information.” Challenges in getting UK providers to respond to US CLOUD Act requests, and American complaints about this, may cause the UK government to “amend the data protection law to remove any doubt about the legality of honoring CLOUD Act requests.”

It will be interesting to further assess how CLOUD agreements operate in practice once there is comparable public analysis of how the US-Australia agreement has been put into effect.


  1. In Canada, the Canadian Bar Association noted in November 2024 that new enabling legislation may be required, including reforms of privacy legislation to authorize providers’ disclosure of information to American investigators. ↩︎
  2. Debates continue about whether protections built into these agreements are sufficient. ↩︎
Categories
Writing

Ongoing Criminal Exploitation of Emergency Data Requests

When people are at risk, law enforcement agencies can often move quickly to obtain certain information from online service providers. In the United States this can involve issuing Emergency Data Requests (EDRs) absent a court order.1

The problem? Criminal groups are increasingly taking advantage of poor cyber hygiene to gain access to government accounts and issue fraudulent EDRs.

While the full extent of the threat remains unknown, the scale is notable: of the 127,000 total requests for data that Verizon received in Q2 of 2023, 36,000 (roughly 28 percent) were EDRs. And Kodex, a company that often acts as the intermediary between law enforcement and online providers, found that over the past year it had suspended 4,000 law enforcement users and that approximately 30% of EDRs did not pass secondary verification. Taken together, this points to a concerning cyber policy issue, one that may seriously endanger affected individuals.
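To put those figures in proportion, here is a minimal sketch of the arithmetic implied by the numbers cited above. It is purely illustrative, and the Verizon and Kodex figures come from separate datasets, so they should not be combined directly.

```python
# Illustrative arithmetic only, using the figures cited in the paragraph above.
verizon_total_requests = 127_000   # Verizon: total requests for data, Q2 2023
verizon_edrs = 36_000              # of which were Emergency Data Requests (EDRs)

edr_share = verizon_edrs / verizon_total_requests
print(f"EDRs as a share of Verizon's Q2 2023 requests: {edr_share:.1%}")  # ~28.3%

# Kodex figures (a separate dataset; not directly comparable to Verizon's numbers)
kodex_failed_secondary_verification = 0.30  # ~30% of EDRs failed secondary verification
kodex_suspended_accounts = 4_000            # law enforcement users suspended over the past year
```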

These are just some of the broader policy and cybersecurity challenges that are key to keep in mind, both as new laws are passed and as new cybersecurity requirements are contemplated. It is imperative that lawful government capabilities are not transformed into significant and powerful tools for criminals and adversaries alike.


  1. There are similar kinds of provisions in the Canadian Criminal Code. ↩︎
Categories
Links Writing

Can University Faculty Hold Platforms To Account?

Heidi Tworek has a good piece with the Centre for International Governance Innovation, where she questions whether there will be a sufficient number of faculty in Canada (and elsewhere) to make use of information that digital-first companies might be compelled to make available to researchers. The general argument goes that if companies must make information available to academics then these academics can study the information and, subsequently, hold companies to account and guide evidence-based policymaking.

Tworek’s argument focuses on two key things.

  1. First, there has been a decline in the tenured professoriate in Canada, with the effect that the adjunct faculty who are ‘filling in’ are busy teaching and really don’t have a chance to lead research.
  2. Second, while a vanishingly small number of PhD holders obtain a tenure-track role, a reasonable number may be going into the very digital-first companies that researchers need data from in order to hold them accountable.

On this latter point, she writes:

If the companies have far more researchers than universities have, transparency regulations may not do as much to address the imbalance of knowledge as many expect.

I don’t think that hiring people with PhDs necessarily means that companies are addressing knowledge imbalances. Whatever is learned by these researchers tends to be sheltered within corporate walls and protected by NDAs. So those researchers going into companies may learn what’s going on but be unable (or unmotivated) to leverage what they know in order to inform policy discussions meant to hold companies to account.

To be clear, I really do agree with a lot in this article. However, I think it does have a few areas for further consideration.

First, more needs to be said about what, specifically, ‘transparency’ encompasses and how it relates to data types, availability, and so on. Transparency is a deeply contested concept, and there are many ways in which the revelation of data creates a funhouse-mirror effect, insofar as what researchers ‘see’ can be quite distorted from what is actually happening.

Second, making data available isn’t just about whether universities have the professors to do the work but, really, whether the government and its regulators have the staff time as well. Professors are doing a lot of things whereas regulators can assign staff to just work the data, day in and day out. Focus matters.

Third, and relatedly, I have to admit that I have pretty severe doubts about the ability of professors to seriously take up and make use of information from platforms, at scale and with policy impact, because it is never going to be their full-time job to do so. Professors are also going to be required to publish in books or journals, which means their outputs will be delayed and inaccessible to companies, government bureaucrats and regulators, and NGO staff. I’m sure academics will have lovely and insightful discussions, but those discussions won’t happen fast enough, or in accessible places or in plain language, to generally affect policy debates.

So, what might need to be added to start fleshing out how universities are organised to make use of data released by companies and have policy impacts in research outputs?

First, universities in Canada would need to get truly serious about creating a ’researcher class’ to analyse corporate reporting. This would involve prioritising the hiring of research associates and senior research associates who have few or no teaching responsibilities.1

Second, universities would need to work to create centres such as the Citizen Lab, or related groups.2 These don’t need to be organisations that try to cover the waterfront of all digital issues. They could, instead, be more narrowly focused, which would reduce the number of staff or fellows needed to fulfil the organisation’s mandate. Any and all centres of this type would see a small handful of people with PhDs (who largely lack teaching responsibilities) guide multidisciplinary teams of staff. Those same staff members would not typically need a PhD. They would need to be nimble enough to move quickly, using a peer-review-lite process to validate findings, rather than seeing journal or book outputs as their primary currency for promotion or hiring.

Third, the centres would need a core group of long-term staffers. This core body of long-term researchers is needed to develop policy expertise that graduate students just don’t possess or develop in their short tenure in the university. Moreover, these same long-term researchers can then train graduate student fellows of the centres in question, with the effect of slowly building a cadre of researchers who are equipped to critically assess digital-first companies.

Fourth, the staff at research centres need to be paid well and properly. They cannot be regarded as ‘graduate student plus’ employees, but rather as specialists who will be of interest to government and corporations. This means that the university will need to pay competitive wages in order to secure the staff needed to fulfil centre mandates.

Basically, if universities are to be successful in holding big data companies to account, they’ll need to incubate quasi-NGOs and let them loose under the university’s auspices. It is, however, worth asking whether this should be the goal of the university in the first place: should society be outsourcing a large amount of the ‘transparency research’ that is designed to have policy impact or guide evidence-based policymaking to academics, or should we instead bolster the capacities of government departments and regulatory agencies to undertake these activities?

Put differently, and in the context of Tworek’s argument: the assumption that PhD holders working as faculty in universities are the solution to analysing data released by corporations can only hold if you happen to (a) hold or aspire to hold a PhD; or (b) possess or aspire to possess a research-focused tenure-track job.

I don’t think that either (a) or (b) should guide the majority of the way forward in developing policy proposals as they pertain to holding corporations to account.

Do faculty have a role in holding companies such as Google, Facebook, Amazon, Apple, or Netflix to account? You bet. But if the university, and university researchers, are going to seriously get involved in using data released by companies to hold them to account and have policy impact, then I think we need dedicated and focused researchers. Faculty who are torn between teaching, writing and publishing in inaccessible locations using baroque theoretical lenses, pursuing funding opportunities, undertaking large amounts of departmental service, and supervising graduate students are just not going to be sufficient to address the task at hand.


  1. In the interests of disclosure, I currently hold one of these roles. ↩︎
  2. Again in the interests of disclosure, this is the kind of place I currently work at. ↩︎
Categories
Writing

Two Thoughts on China’s Draft Privacy Law

Alexa Lee, Samm Sacks, Rogier Creemers, Mingli Shi, and Graham Webster have collectively written a helpful summary of China’s draft data privacy law over at Stanford’s DigiChina.

Two features in particular jumped out at me.

First, the proposed legislation would compel Chinese companies “to police the personal data practices across their platforms” as part of Article 57. As noted by the team at Stanford,

“the three responsibilities identified for big platform companies here resonate with the “gatekeeper” concept for online intermediaries in Europe, and a requirement for public social responsibility reports echoes the DMA/DSA mandate to provide access to platform data by academic researchers and others. The new groups could also be compared with Facebook’s nominally independent Oversight Board, which the company established to review content moderation decisions.”

I’ll be particularly curious to see the kinds of transparency reporting that emerge from these companies. I doubt the reports will parallel those in the West, which tend to focus on the processes and number of disclosures from private companies to government; instead, Chinese companies’ reports will likely focus on how ‘socially responsible’ companies are in how they collect, process, and disclose data to other Chinese businesses. Still, if we do see this more consumer-focused approach, it will demonstrate yet another transparency reporting tradition that will be useful to assess in academic and public policy writing.

Second, the Stanford team notes that,

“new drafts of both the PIPL and the DSL added language toughening requirements for Chinese government approval before data holders in China cooperate with foreign judicial or law enforcement requests for data, making failure to gain permission a clear violation punishable by financial penalties up to 1 million RMB.”

While not surprising, this kind of restriction will continue to raise data sovereignty borders around personal information held in China. The effect? Western states will still need to push for Mutual Legal Assistance Treaty (MLAT) reform to successfully extract information from Chinese companies (and, in all likelihood, fail to conclude these reforms).1

It’s perhaps noteworthy that while China is moving to build up walls, there is a simultaneous attempt by the Council of Europe to address issues of law enforcement access to information held by cloud providers (amongst other things). The United States passed the CLOUD Act in 2018 to begin to try to alleviate the issue of states gaining access to information held by cloud providers operating in foreign jurisdictions (though it did not address the human rights concerns that were mitigated through traditional MLAT processes). Based on the proposed Chinese law, it’s unlikely that the CLOUD Act will gain substantial traction with the Chinese government, though admittedly this wasn’t the aim of the CLOUD Act or an expected outcome of its passage.

Nevertheless, as competing legal frameworks are established that place the West on one side, and China and Russia on the other, the effect will be to further entrench divergent legal cultures of the Internet across different economic, political, and security regimes. At the same time, criminal actors who routinely operate with technical and legal savvy will be able to store data anywhere in the world, including out of reach of the relevant law enforcement agencies.

Ultimately, the raising of regional and national digital borders is a topic to watch, both to keep an eye on what the forthcoming legal regimes will look like and to assess the extent to which we see the language of ‘strong sovereignty’ or nationalism creep functionally into legislation around the world.


  1. For more on MLAT reform, see these pieces from Lawfare ↩︎
Categories
Aside

2017.12.22

Honoured to be recognized by Access Now as a local champion for my work in safeguarding, protecting, and advancing digital civil liberties in Canada!

Categories
Quotations

2017.12.9

It’s time to admit that mere transparency isn’t enough, and that every decision to censor content is a political decision. Companies should act accordingly. They must carefully consider the long-term effects of complying with requests, and take a stand when those requests run counter to human rights principles. The more we accept everyday censorship, the more of it there seems to be, and before we know it, the window of acceptable information will only be open a crack.

Jillian York, “Tech Companies’ Transparency Efforts May Be Inadvertently Causing More Censorship”
Categories
Aside Writing

Limits of Data Access Requests

Last week I wrote about the limits of data access requests as they relate to ride-sharing applications like Uber. A data access request involves contacting a private company and requesting a copy of your personal information, along with details about how that data is processed and disclosed, and the periods of time for which it is retained.

Research has repeatedly shown that companies are very poor at comprehensively responding to data access requests. Sometimes this is because of divides between technical teams that collect and use the data, policy teams that determine what is and isn’t appropriate to do with data, and legal teams that ascertain whether collections and uses of data comport with the law. In other situations companies simply refuse to respond because they adopt a confused-nationalist understanding of law: if the company doesn’t have an office somewhere then that jurisdiction’s laws aren’t seen as applying to the company, even if the company does business in the jurisdiction.

Automated Data Export As Solution?

Some companies, such as Facebook and Google, have developed automated data download services. Ostensibly these services are designed so that you can download the data you’ve provided to the companies, thus revealing precisely what is collected about you. In reality, these services don’t let you export all of the information that these respective companies collect. As a result, when people use these download services they end up with a false impression of just what information the companies collect and how it’s used.

A shining example of the kinds of information that are not revealed to users of these services has come to light. A recently leaked document from Facebook Australia revealed that:

Facebook’s algorithms can determine, and allow advertisers to pinpoint, "moments when young people need a confidence boost." If that phrase isn’t clear enough, Facebook’s document offers a litany of teen emotional states that the company claims it can estimate based on how teens use the service, including "worthless," "insecure," "defeated," "anxious," "silly," "useless," "stupid," "overwhelmed," "stressed," and "a failure."

This targeting of emotions isn’t necessarily surprising: in a past exposé we learned that Facebook conducted experiments during an American presidential election to see if it could sway voters. Indeed, the company’s raison d’être is to figure out how to pitch ads to customers, and figuring out when Facebook users are more or less likely to be affected by advertisements is just good business. If you use the self-download service provided by Facebook, or any other data broker, you will not receive data on how and why your data is exploited: without understanding how their algorithms act on the data they collect from you, you can never really understand how your personal information is processed.

But that raison d’être of pitching ads to people (which is why Facebook could internally justify the deliberate targeting of vulnerable youth) ignores the baseline ethical question of whether it is appropriate to exploit our psychology to sell us products. To be clear, this isn’t a company stalking you around the Internet with ads for a car or couch or jewelry that you were browsing for. This is a deliberate effort to mine your communications in order to sell products at times of psychological vulnerability. The difference is between somewhat stupid tracking and the deliberate exploitation of our emotional state.1

Solving for Bad Actors

There are laws around what you can do with information provided by children. Whether Facebook’s actions run afoul of such laws may never actually be tested in a court or a privacy commissioner’s decision. In part, this is because mounting legal challenges is extremely difficult, expensive, and time consuming. These hurdles automatically tilt the balance towards activities such as this continuing, even if Facebook stops this particular activity. But part of the problem is also Australia’s historically weak privacy commissioner, as well as the limitations of such offices around the world: privacy commissioners’ offices are often understaffed, under-resourced, and unable to chase every legally and ethically questionable practice undertaken by private companies. Companies know about these limitations and, as such, know they can get away with unethical and frankly illegal activities unless someone talks to the press about the activities in question.

So what’s the solution? The rote advice is to stop using Facebook. While that might be good advice for some, for a lot of other people leaving Facebook is very, very challenging. You might use it to sign into a lot of other services and so don’t think you can easily abandon Facebook. You might have stored years of photos or conversations and Facebook doesn’t give you a nice way to pull them out. It might be a place where all of your friends and family congregate to share information and so leaving would amount to being excised from your core communities. And depending on where you live you might rely on Facebook for finding jobs, community events, or other activities that are essential to your life.

In essence, solving for Facebook, Google, Uber, and all the other large data brokers is a collective action problem. It’s not a problem that is best solved on an individual basis.

A more realistic kind of advice would be this: file complaints with your local politicians. File complaints with your domestic privacy commissioners. Complain to every conference, academic association, and industry event that takes Facebook money.2 Make it very public and very clear that you, and the groups you are associated with, are offended that the company in question is profiting off the psychological exploitation of children and adults alike.3 Now, will your efforts to raise attention to the issue and draw negative attention to companies and groups profiting from Facebook and other data brokers stop unethical data exploitation tomorrow? No. But by consistently raising our concerns about how large data brokers collect and use personal information, and attaching some degree of negative publicity to all those who benefit from such practices, we can decrease the public stock of a company.

History is dotted with individuals who are seen as standing up to end bad practices by governments and private companies alike. But behind them tends to be a mass of citizens who support those individuals: while standing up en masse may mean that we don’t each get individual praise for stopping some tasteless and unethical practices, our collective standing up makes it more likely that such practices will be stopped. By each working a little, we can together accomplish something that we’d be hard pressed to change as individuals.

NOTE: This post was first published on Medium on May 1, 2017.


  1. Other advertising companies adopt the same practices as Facebook. So I’m not suggesting that Facebook is worst-of-class and letting the others off the hook. ↩︎
  2. Replace ‘Facebook’ with whatever company you think is behaving inappropriately, unethically, or perhaps illegally. ↩︎
  3. Surely you don’t think that Facebook is only targeting kids, right? ↩︎
Categories
Links

IMSI Catcher Report Calls for Transparency, Proportionality, and Minimization Policies – The Citizen Lab

IMSI Catcher Report Calls for Transparency, Proportionality, and Minimization Policies:

The Citizen Lab and CIPPIC are releasing a report, Gone Opaque? An Analysis of Hypothetical IMSI Catcher Overuse in Canada, which examines the use of devices that are commonly referred to as ‘cell site simulators’, ‘IMSI Catchers’, ‘Digital Analyzers’, or ‘Mobile Device Identifiers’, and under brand names such as ‘Stingray’, DRTBOX, and ‘Hailstorm’. IMSI Catchers are a class of surveillance devices used by Canadian state agencies. They enable state agencies to intercept communications from mobile devices and are principally used to identify otherwise anonymous individuals associated with a mobile device and track them.

Though these devices are not new, the ubiquity of contemporary mobile devices, coupled with the decreasing costs of IMSI Catchers themselves, has led to an increase in the frequency and scope of these devices’ use. Their intrusive nature, combined with surreptitious and uncontrolled use, poses an insidious threat to privacy.

This report investigates the surveillance capabilities of IMSI Catchers, efforts by states to prevent information relating to IMSI Catchers from entering the public record, and the legal and policy frameworks that govern the use of these devices. The report principally focuses on Canadian agencies but, to do so, draws comparative examples from other jurisdictions. The report concludes with a series of recommended transparency and control mechanisms that are designed to properly contain the use of the devices and temper their more intrusive features.

I’m not going to lie: after working on this with my colleague, Tamir Israel, for 12 months it was absolutely amazing to publicly release this report. What started as a 1,500-word blog post meant to put defense lawyers on notice of some new legislation transmogrified into a 130-page report that is the most comprehensive legal analysis of these devices that’s been done to date. It’s going to be interesting to see what its effects are for cases currently being litigated in Canada and around the world!

Categories
Aside Links

On weaponized transparency

On weaponized transparency:

Over the longer term, it’s likely that personal or sensitive data will continue to be hacked and released, and often for political purposes. This in turn raises a set of questions that we should all consider, related to all the traditional questions of openness and accountability. Weaponized transparency of private data of people in democratic institutions by unaccountable entities is destructive to our political norms, and to an open, discursive politics.

Weaponized transparency, especially when it affects the lives of ordinary persons who take an interest in the political process, is dangerous for a range of reasons. And responsible journalists – to say nothing of publishers such as Wikileaks – ought to be condemned when they fail to adequately protect the private interests of such ordinary persons.