Categories
Links Writing

Research Security Requirements and Ontario Colleges and Universities

There’s a lot happening legislatively in Ontario. One item worth highlighting concerns the requirement for Ontario colleges and universities to develop research security plans.

The federal government has been warning that Canadian academic research is at risk of exfiltration or theft by foreign actors, including by foreign-influenced professors or students who work in Canadian research environments, or by way of electronic and trade-based espionage. In response, the federal government has established a series of guidance documents that Canadian researchers and universities are expected to adhere to when seeking certain kinds of federal funding.

The Ontario government introduced Bill 33, the Supporting Children and Students Act, 2025, on May 29, 2025. Notably, Schedule 3 introduces research security plan requirements for every Ontario college of applied arts and technology and every publicly assisted university.

The relevant text from the legislation reads as follows:

Research security plan

Application

20.1 (1) This section applies to every college of applied arts and technology and to every publicly-assisted university.

Development and implementation of plan

(2) Every college or university described in subsection (1) shall develop and implement a research security plan to safeguard, and mitigate the risk of harm to or interference with, its research activities.

Minister’s directive

(3) The Minister may, from time to time, in a directive issued to one or more colleges or universities described in subsection (1),

(a) specify the date by which a college or university’s research security plan must be developed and implemented under subsection (2);

(b) specify the date by which a plan must be provided to the Minister under subsection (4) and any requirements relating to updating or revising a plan; and

(c) specify topics to be addressed or elements to be included in a plan and the date by which they must be addressed.

Review by Minister

(4) Every college or university described in subsection (1) shall provide the Minister with a copy of its research security plan and any other information or reports requested by the Minister in respect of research security.

Categories
Writing

What is the Role of Cyber Operators in Assessing Effectiveness or Shaping Cyber Policy?

An anonymous European intelligence official wrote an op-ed in July entitled “Can lawyers lose wars by stifling cyber capabilities?” The article does a good job of laying out why a cyber operator — that is, someone who is presumably relatively close to either planning or undertaking cyber operations — is deeply frustrated by the way in which decision-making is undertaken.

While I admit to having some sympathy for the author’s plight, I fundamentally disagree with much of their argument, and think that the positions they hold should be taken up and scrutinised. In this post, I’m really just pulling out quotations from the article and then providing some rebuttal or analysis — you’re best off reading it first if you want to more fully follow along and assess whether I’m being fair to the author and the points they are making.

With that out of the way, here we go…

Law is no longer seen as a system of checks and balances but as a way to shape state behaviour in cyberspace

Yes, this is one of the things that laws are actually supposed to do. You may (reasonably, in some cases) disagree with the nature of the laws and their effects, but law isn’t a mere “check and balance.” And, especially where there is no real ability to contest interpretations of law (because they are administered by government agencies largely behind closed doors), it is particularly important for law to have a stronger guiding function in order to maintain democratic legitimacy and social trust in government operations.

Idealistic legalism causes legal debates on cyber capabilities to miss a crucial discussion point: what operational constraints are we willing to accept and what consequences does that have for our national security?

Sure, but some of this is because the US government is so closed-mouthed about its capabilities. Consider what might change if there were a more robust effort to explain practice, as in the case of some European agencies. I would note that the Dutch, as an example, are sometimes pretty explicit about their operations, which is then helpful for considering their activities with respect to authorising laws and associated national and international norms.

Laws attempt to capture as many activities in cyberspace as possible. To do so, legal frameworks must oversimplify. This is ill-suited to such a complex domain

This seems not to appreciate how law tends, at least in some jurisdictions, to be broader in scope and then supplemented by regulations or policies. However, where regulations or policies have regularly been found insufficient, there may be a decision that more detailed laws are necessary. To an extent this is the case post-Snowden, with very good reason, as demonstrated in the various findings of non-compliance with certain NSA (and other American intelligence community) operations over time.

The influence of practitioners slowly diminishes as lawyers increasingly take the lead in shaping senior leadership opinions on proposed cyber operations rather than merely advising.

I can appreciate the frustration of seeing the leadership move from operations practitioners to policy/legal practitioners.1 But that shift between organisations being led by operations practitioners and being led by those focused on law/policy can be a normal back and forth.

And to be entirely honest, the key thing — and the implicit critique throughout this whole piece — is whether the decision makers understand what the ops folks are saying.2 Those in decision-making roles have a lot of responsibilities and, often, a bigger or different picture of the implications of operations.

I’m in no way saying that lawyers should always be the folks to call the shots.3 But just because you’re in operations doesn’t mean that you are necessarily making the right calls broadly; instead, you may be seeing the right calls through your particular lens and mission. That lens and mission may not always be sufficient for coming to a conclusion that aligns more broadly with agency, national, or international policy intents/goals.

… a law might stipulate that a (foreign) intelligence agency cannot collect information from systems owned by the citizens of its country. But what if, as Chinese and Russian cyber threat actors do, a system belonging to a citizen is being abused to route attack traffic through? Such an operational development is not foreseen, and thus not prescribed, by law. To collect information would then be illegal and require judicial overhaul – a process that can take years in a domain that can see modus operandi shift in a matter of days.

There may be cases where you have particularly risk-averse decision makers or, alternately, particularly strong legal limitations that preclude certain kinds of operations.

I would note that it is against the law to simply target civilians in conflict scenarios, on the grounds that doing so runs counter to the agreed-upon laws of war (recognising they are often not adhered to). Does this have the effect of impeding certain kinds of military activities? Yes. And that may still be the right decision notwithstanding the consequences it may have on the ability to conduct some operations and/or reduce their efficacy.

In the cyber context, the complaint is that certain activities are precluded on the basis that the law doesn’t explicitly recognise and authorise them. Law routinely leaves wiggle room, and part of the popular (and sometimes private…) problem has been how intelligence lawyers are perceived as abusing that wiggle room — again, see the NSA and other agencies as they were laid bare in some of the Snowden revelations, and the openly opposite interpretations that were adopted to authorise actions legislators had deliberately sought to preclude.4 For further reasons why mistrust may exist between operators and legislators, in Canada you can turn to the ongoing historical issues between CSIS and the Federal Court, which suggest that the “secret law and practices” adopted by Canada’s intelligence community may run counter to the actual law and legal processes, and then combine that with some NSIRA findings that CSE activities may have taken place in contravention of Canadian privacy law.

In the above context, I would say that lots of legislators (and publics) have good grounds to doubt the goodwill or decision-making capacity of the various parties within national intelligence communities. You don’t get to undertake the kinds of activities that happened previously and then just pretend that “it was all in the recent past, everything’s changed, trust us guys.”

I would also note: the quoted material assumes that policy makers have not, in fact, considered the scenario the author is proposing and rejected it as a legitimate way of operating. The fact that a decision may not have gone your way is not the same as your concerns not having been evaluated in the process of reaching a conclusion.

When effectiveness is seen as secondary, cyber activities may be compliant, but they are not winning the fight.

As I have been writing in various (frustrating) peer reviews: evidence of this, please, as opposed to opinion and supposition. Also, “the fight” will be understood and perceived differently by people in different positions in different agencies: a universal definition should not be presumed.

…constraints also incur costs due to increased bureaucratic complexity. This hampers operational flexibility and innovation – a trade-off often not adequately weighed by, or even visible to, law- and decision-makers. When appointing ex-ante oversight boards or judicial approval, preparation time for conducting cyber operations inevitably increases, even for those perfectly legal from the beginning.

So, in this case the stated problem is that legislators and decision makers aren’t getting the discrete kinds of operational detail that this particular writer thinks are needed to make the “right” trade-off decisions.

In some cases… yeah. That’ll be the case. Welcome to the hell of people not briefing up properly, or people not understanding because briefing materials weren’t scoped or prepared right, and so forth. That is: welcome to the government (or any sufficiently large bureaucracy)!

But more broadly, the complaint is that the operator in question knows better than the other parties, yet without, again, specific and clear evidence that the trade-offs are incorrect. I get that spooky things can’t be spoken aloud without becoming de-spookified, but picture a similar kind of argument in any other sector of government and you’ll get the same kind of complaint. Ops people will regularly complain about legislators or decision makers when they don’t get their way, their sandcastles get crushed, or they have to do things in less efficient ways in their busy days. And sometimes they’re right to complain and, in other cases, there is a lot more at stake than what they see going on operationally.

This is a losing game because, as Calder Walton noted, ‘Chinese and Russian services are limited only by operational effectiveness’.

I don’t want to suggest I disagree! But, at the same time, this is along the lines of the “autocracies are great because they move faster than democracies and we have to recognise their efficiency” arguments that float around periodically.5

All of which is to say: autocracies and dictatorships have different internal logics to their bureaucracies that can have corresponding effects on their operations.

While it may be “the law” that impedes some Five Eyes/Western agencies’ activities, you can picture the need to advance the interests of kleptocrats or dictators’ kids, gin up enough ransomware dollars to put food on the team’s table, and so forth, as establishing some limits on the operational effectiveness of autocratic governments’ intelligence agencies.

It’s also worth noting that “effectiveness” can be a contested concept. If you’re OK blundering around, burning your tools, and being identified pretty often, then you may have a different approach to cyber operations, generally, than in situations where being invisible is a key part of operational development. I’m not trying to suggest that the Russians, Chinese, and other adversaries just blunder about, nor that the FVEY are magical ghosts that no one ever sees on boxes or undertaking operations. However, how you perceive or define “effective” will have corresponding consequences for the nature and types of operations you undertake and for which of them are perceived as achieving the mission’s goals.

Are agencies going to publicly admit they were unable to collect intelligence on certain adversary cyber actors because of legal boundaries?

This speaks to the “everything is secret and thus trust us” posture that is generally antithetical to democratic governance. To reverse things on the author: should there be more revelation of operations that don’t work, so that they can more broadly be learned from? The complaint seems to be that the lawyers et al. don’t know what they’re doing because they aren’t necessarily exposed to the important spooky stuff, or don’t understand its significance and importance. To what extent, then, do the curtains need to open some, so that this stuff, along with the ways in which successes have previously happened, can be communicated in effective ways?

I know: if anything is shown then it blows the whole premise of secret operations. But it’s hard to complain that people don’t get the issues if no facts are brought to the table, whereas the lawyers and such can point to the laws and at least talk to them. If you can’t talk about ops, then don’t be surprised that people will talk about what is publicly discussable… and your ops arguments won’t have weight because they don’t really exist in the room where the substantive discussions about guardrails may be taking place.


In summary: while I tend not to agree with the author — and disagree as someone who has always been more on the policy and/or law side of the analytic space — their article was at least thought-provoking. And for that alone I think it’s worth taking the time to read their article and consider the arguments within it.


  1. I would, however, hasten to note that the head of NSA/Cyber Command tends to be a hella lot closer to “ops” by merit of their military leadership role. ↩︎
  2. And, also, what the legal and policy teams are saying… ↩︎
  3. Believe me on this point… ↩︎
  4. See, as example: “In 2006, after Congress added the requirement that Section 215 orders be “relevant to” an investigation, the DOJ acknowledged that language was intended to impose new protections. A fact sheet about the new law published by the DOJ stated: “The reauthorizing legislation’s amendments provide significant additional safeguards of Americans’ civil liberties and privacy,” in part by clarifying, “that a section 215 order cannot be issued unless the information sought is relevant to an authorized national security investigation.” Yet just months later, the DOJ convinced the FISC that “relevant to” meant “all” in the first Section 215 bulk dragnet order. In other words, the language inserted by Congress to limit the scope of what information could be gathered was used by the government to say that there were no limits.” From: Section 215: A Brief History of Violations. ↩︎
  5. See, as an example, the perception from 2-4 years ago that the Chinese response to Covid-19 and the economy was superior to that of everyone else grappling with the global pandemic. ↩︎
Categories
Aside Writing

2024.6.27

For the past many months I’ve had the joy of working with, and learning from, a truly terrific set of colleagues. One of the files we’ve handled has been around law reform in Ontario and specifically Bill 194, the Strengthening Cyber Security and Building Trust in the Public Sector Act.

Our organization’s submission focuses on ways to further improve the legislation by way of offering 28 recommendations that apply to Schedule 1 (concerning cybersecurity, artificial intelligence, and technologies affecting individuals under the age of 18) and Schedule 2 (amendments to FIPPA). Broadly, our recommendations concern the levels of accountability, transparency, and oversight that are needed in a rapidly changing world.

Categories
Aside Links

Liberal Fictions, AI technologies, and Human Rights

Although we talk the talk of individual consent and control, such liberal fictions are no longer sufficient to provide the protection needed to ensure that individuals and the communities to which they belong are not exploited through the data harvested from them. This is why acknowledging the role that data protection law plays in protecting human rights, autonomy and dignity is so important. This is why the human rights dimension of privacy should not just be a ‘factor’ to take into account alongside stimulating innovation and lowering the regulatory burden on industry. It is the starting point and the baseline. Innovation is good, but it cannot be at the expense of human rights.

— Prof. Teresa Scassa, “Bill C-27 and a human rights-based approach to data protection”

It’s notable that Prof. Scassa speaks about the way in which Bill C-27’s preamble was supplemented with language about human rights as a way to assuage some public critique of the legislation. Preambles, however, lack the force of law and do not compel judges to interpret legislation in a particular way. They are often better read as a way to explain legislation to a public, or to strike up discussions with the judiciary when legislation repudiates a court decision.

For a long-form analysis of the utility of preambles, see Prof. Kent Roach’s “The Uses and Audiences of Preambles in Legislation.”

Categories
Links Writing

RCMP Found to Unlawfully Collect Publicly Available Information

The recent report from the Office of the Privacy Commissioner of Canada, entitled “Investigation of the RCMP’s collection of open-source information under Project Wide Awake,” is an important read for those interested in the restrictions that apply to federal government agencies’ collection of this information.

The OPC found that the RCMP:

  • had sought to outsource its own legal accountabilities to a third-party vendor that aggregated information,
  • was unable to demonstrate that its vendor was lawfully collecting Canadian residents’ personal information,
  • operated in contravention of prior guarantees or agreements between the OPC and the RCMP,
  • was relying on a deficient privacy impact assessment, and
  • failed to adequately disclose to Canadian residents how information was being collected, with the effect of preventing them from understanding the activities that the RCMP was undertaking.

It is a breathtaking condemnation of the method by which the RCMP collected open-source intelligence, and includes assertions that the agency is involved in activities that stand in contravention of PIPEDA and the Privacy Act, as well as its own internal processes and procedures. The findings in this investigation build on past investigations into how Clearview AI collected facial images to build biometric templates, guidance on publicly available information, and joint cross-national guidance concerning data scraping and the protection of privacy.

Categories
Links Writing

Location Data Used to Drive Anti-Abortion Campaigns

It can be remarkably easy to target communications to individuals based on their personal location. Location information is often surreptitiously obtained by way of smartphone apps that sell off or otherwise provide this data to data brokers, or through agreements with telecommunications vendors that enable targeting based on mobile devices’ geolocation.

Senator Wyden’s efforts to investigate this brokerage economy recently revealed how this sensitive geolocation information was used to enable and drive anti-abortion activism in the United States:

Wyden’s letter asks the Federal Trade Commission and the Securities and Exchange Commission to investigate Near Intelligence, a location data provider that gathered and sold the information. The company claims to have information on 1.6 billion people across 44 countries, according to its website.

The company’s data can be used to target ads to people who have been to specific locations — including reproductive health clinic locations, according to Recrue Media co-founder Steven Bogue, who told Wyden’s staff his firm used the company’s data for a national anti-abortion ad blitz between 2019 and 2022.

In a February 2023 filing, the company said it ensures that the data it obtains was collected with the users’ permission, but Near’s former chief privacy officer Jay Angelo told Wyden’s staff that the company collected and sold data about people without consent, according to the letter.

While the company stopped selling location data belonging to Europeans, it continued for Americans because of a lack of federal privacy regulations.

While the company in question, Near Intelligence, declared bankruptcy in December 2023, there is a real potential for the data it collected to be sold to other parties as part of bankruptcy proceedings. There is a clear and present need to legislate how geolocation information is collected, used, and disclosed in order to address this often surreptitious aspect of the data brokerage economy.

Categories
Photography Writing

Street Photography in a More Private World

Jack Layton Ferry Terminal, Toronto, 2023

For the past several months Neale James has talked about how new laws that prevent taking pictures of people on the street will inhibit the documenting of history in certain jurisdictions. I’ve been mulling this over while trying to determine what I really think about this line of assessment and photographic concern. As a street photographer, I’ve got some skin in the game!

In short, while I’m sympathetic to this line of argumentation, I’m not certain that I agree. So I wrote a longish email to Neale—which was included in this week’s Photowalk podcast—and I’ve largely reproduced the email below as a blog post.

I should probably start by stating my priors:

  1. As a street photographer I pretty well always try to include people in my images, and typically aim to get at least some nose and chin. No shade to people who take images of people’s backs (and I selectively do this too), but I think that capturing some of the face’s profile can really bring many street photos to life.1
  2. I am also usually pretty obvious when I’m taking photos. I find a scene and often will ‘set up’ and wait for folks to move through it. And when people tell me they aren’t pleased or want a photo deleted (not common, but it happens sometimes), I’m usually happy to do so. I shoot between 28-50mm (equiv.) focal lengths, so it’s always pretty obvious when I’m taking photos, which isn’t the case with some street photographers who are shooting at 100mm. To each their own, but I think that if I’m taking a photo the subjects should be able to identify that’s happening and take issue with it, directly, if they so choose.

Anyhow, with that out of the way:

If you think of street photography in the broader history of photography, it started with a lot of images of hazy or ghostly individuals (e.g. ‘Panorama of Saint Lucia, Naples’ by Jones, ‘Physic Street, Canton’ by Thomson, or ‘Rue de Hautefeuille’ by Marville). Even some of the great work—such as that by Cartier-Bresson, Levitt, Bucquet, van Schaick, Atget, Friedlander, Robert French, etc.—includes photographs where the subjects are not clearly identified. Now, of course, some of their photographs include obvious subjects, but I think it’s worth recognizing that many of the historical ‘greats’ include images where you can’t really identify the subject. And… that was just fine. Then, it was mostly a limitation of the kit, whereas now, in some places, we’re dealing with the limitations of the law.

Indeed, I wonder if we can’t consider the legal requirement that individuals’ identifiable images not be captured as a potentially real forcing point for creativity, one that might inspire additional geographically distinctive street photography traditions: think about whether, in some jurisdictions, instead of aperture priority being the preferred setting, shutter priority becomes the default, with 5-15 second exposures to get ghostly images.2
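
To put rough numbers on that idea (my assumptions: a sunny-16 baseline of ISO 100 at f/16 with a 1/100 s shutter), stretching the shutter to 10 seconds means shedding a factor of 10 ÷ (1/100) = 1000 ≈ 2^10, or roughly ten stops, of light. In daylight, even at base ISO and a small aperture, that is ND1000-filter territory, which is why the second footnote isn’t entirely a joke.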

Now, if such a geographical tradition arises, will that mean we get all the details of the clothing and such that people are wearing today? Well… no. Unless, of course, street photographers embrace creativity and develop photo essays that incorporate this in interesting or novel ways. But street photography can include a lot more than just the people, and the history of street photography and the photos we often praise as masterpieces showcase that blurred subjects can generate interesting, exciting, and historically significant images.

One thing that might be worth thinking about is what this will mean for how geographical spaces are created by generative AI in the future. Specifically:

  1. These AI systems will often default to norms based on the weighting of what has been collected in training data. Will they ‘learn’ that some parts of the world are more or less devoid of people based on street photos and so, when generating images of certain jurisdictions, create imagery that is similarly devoid of people? Or, instead, will we see generative imagery that includes people whereas real photos will have to blur or obfuscate them?
  2. Will we see some photographers, at least, take up a blending of the real and the generative, where they capture streets but then use programs to add people into those streetscapes based on other information they collect (e.g., local fashions, etc.)? Basically, will we see some street photographers adopt a hybrid real/generative image-making process in an effort to comply with law while still adhering to some of the Western norms around street photography?

As a final point, while I identify as a street photographer and avoid taking images of people in distress, the nature of AI regulation and law means that there are indeed some good reasons for people to be concerned about the taking of street photos. The laws frustrating some street photographers are born from arguably real concerns or issues.

For example, companies such as Clearview AI (in Canada) engaged in the collection of images and, subsequently, generated biometric profiles of people based on scraping publicly available images.

Most people don’t really know how to prevent such companies from being developed or selling their products but do know that if they stop the creation of training data—photographs—then they’re at least less likely to be captured in a compromising or unfortunate situation.

It’s not the photographers, then, that are necessarily ‘bad’ but the companies who illegally exploit our work to our detriment, as well as to the detriment of the public writ large.

All to say: as street photographers, and photographers more generally, we should think more broadly than our own interests to appreciate why individuals may not want their images taken in light of the technical developments all around us. And, importantly, the difference is that as photographers we often share our work whereas CCTV cameras and such do not, with the effect that the images we take can end up in generative and non-generative AI training data systems, whereas the cameras that are monitoring all of us, always, are (currently…) less likely to be feeding the biometric surveillance training data beast.


  1. While, at the same time, recognizing that sometimes a photo is preferred because people are walking away from the camera/towards something else in the scene. ↩︎
  2. The ND filter manufacturers will go wild! ↩︎
Categories
Videos

Is Street Photography Legal In Canada?

The answer, in almost all cases, is a resounding “yes.” David Fraser, a privacy and technology lawyer from Halifax, does an exceptional job of running curious (Canadian) street photographers through what the law allows and the rare exceptions where making street photos could have legal consequences.

Categories
Links

Postal Interception Coming to Canada?

The Canadian Senate is debating Bill S-256, An Act to amend the Canada Post Corporation Act (seizure) and to make related amendments to other Acts. The relevant elements of the speech include:

Under the amendment to the Customs Act, a shipment entering Canada may be subject to inspection by border services officers if they have reason to suspect that its contents are prohibited from being imported into Canada. If this is the case, the shipment, whether a package or an envelope, may be seized. However, an envelope mailed in Canada to someone who resides at a Canadian address cannot be opened by the police or even by a postal inspector.

To summarize, nothing in the course of the post in Canada is liable to demand, seizure, detention or retention, except if a specific legal exception exists in the Canada Post Corporation Act or in one of the three laws I referenced. However, items in the mail can be inspected by a postal inspector, but if it is a letter, the inspector cannot open it to complete the inspection.

Thus, a police officer who has reasonable grounds to suspect that an item in the mail contains an illegal drug or a handgun cannot be authorized, pursuant to a warrant issued by a judge, to intercept and seize an item until it is delivered to the addressee or returned to the sender. I am told that letters containing drugs have no return address.

The Canadian Association of Chiefs of Police raised this very issue in 2015 (.pdf). It recognised “that search and seizure authorities granted to law enforcement personnel under the Criminal Code of Canada or other criminal law authorities are overridden by the [Canada Post Corporation Act], giving law enforcement no authority to seize, detain or retain parcels or letters while they are in the course of mail and under Canada Post’s control.” As a result, the Association resolved:

that the Canadian Association of Chiefs of Police requests the Government of Canada to amend the Canada Post Corporation Act to provide police, for the purpose of intercepting contraband, with the ability to obtain judicial authorization to seize, detain or retain parcels or letters while they are in the course of mail and under Canada Post’s control.

It would seem as though, should Bill S-256 pass into law, the police will have obtained, seven or eight years later, some fairly impressive new powers, and decades of mail privacy precedent may come undone.

Categories
Writing

Apple To More Widely Encrypt iCloud Data

Photo by Kartikey Das on Pexels.com

Apple has announced it will begin rolling out new data security protections for Americans by the end of 2022, and for the rest of the world in 2023. This is a big deal.

One of the biggest, and most serious, gaping holes in the protections that Apple has provided to its users is linked to iCloud. Specifically, while a subset of information has been encrypted such that Apple couldn’t access or disclose the plaintext of communications or content (e.g., Health information, encrypted Apple Notes, etc.), the company did not encrypt device backups, message backups, notes generally, iCloud contents, Photos, and more. The result is that third parties could either compel Apple to disclose information (e.g., by way of warrant) or otherwise subvert Apple’s protections to access stored data (e.g., targeted attacks). Apple’s new security protections will expand the categories of protected data from 14¹ to 23.
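
To make the architectural point concrete, here is a minimal sketch of client-side encryption, assuming a single symmetric AES-GCM key and Python’s cryptography package. This is not Apple’s actual design (their scheme involves hierarchies of keys and hardware-backed key storage), and the names below are mine; the point is simply that a provider who never receives the key cannot produce plaintext, no matter what order it is served with:

```python
# A minimal sketch of client-side ("end-to-end") backup encryption.
# NOT Apple's design; the single-key scheme and names are illustrative.
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def encrypt_backup_on_device(backup: bytes, device_key: bytes) -> tuple[bytes, bytes]:
    """Encrypt a backup on the device; only ciphertext ever leaves it."""
    nonce = os.urandom(12)  # 96-bit nonce, unique per encryption
    ciphertext = AESGCM(device_key).encrypt(nonce, backup, None)
    return nonce, ciphertext


# The key is generated and kept on the device; it is never uploaded.
device_key = AESGCM.generate_key(bit_length=256)

# The server (or anyone compelling the server) only ever holds these values.
nonce, ciphertext = encrypt_backup_on_device(b"messages, photos, notes...", device_key)

# Decryption is only possible with the on-device key.
assert AESGCM(device_key).decrypt(nonce, ciphertext, None) == b"messages, photos, notes..."
```

The flip side, as noted in the list below, is that losing the on-device key makes the ciphertext unrecoverable by anyone, including Apple, by design.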

I am very supportive of Apple’s decision and frankly congratulate them on the very real courage that it takes to implement something like this. It is:

  • courageous technically, insofar as this is a challenging thing to pull off at the scale at which Apple operates
  • courageous from a business perspective, insofar as it raises the prospect of unhappy customers should they lose access to their data and Apple be unable to assist them
  • courageous legally, insofar as it’s going to inspire a lot of frustration and upset from law enforcement and government agencies around the world

It’ll be absolutely critical to observe how quickly, and how broadly, Apple extends its new security capacities and whether countries are able to pressure Apple to either not deploy them for their residents or roll them back in certain situations. Either way, Apple routinely sets the standard on consumer privacy protections; others in the industry will now be inevitably compared to Apple as either meeting the new standard or failing their own customers in one way or another.

From a Canadian, Australian, or British government point of view, I suspect that Apple’s decision will infuriate law enforcement and security agencies who had placed their hopes on CLOUD Act bilateral agreements to get access to corporate data, such as that held by Apple. Under a CLOUD Act bilateral agreement, British authorities could, as an example, directly serve a judicially authorised order on Apple concerning a British resident, to get Apple to disclose information back to the British authorities without having to deal with American authorities. It promised to substantially improve the speed at which countries with bilateral agreements could obtain electronic evidence. Now, it would seem, Apple will largely be unable to assist law enforcement and security agencies when it comes to Apple users who have voluntarily enabled heightened data protections. Apple’s decision will, almost certainly, further inspire governments around the world to double down on their efforts to advance anti-encryption legislation and pass such legislation into law.

Notwithstanding the inevitable government gnashing of teeth, Apple’s approach will represent one of the biggest (voluntary) increases in privacy protection for global users since WhatsApp adopted Signal’s underlying encryption protocols. Tens if not hundreds of millions of people who enable the new data protection will be much safer and more secure in how their data is stored, while simultaneously restricting who can access that data without their knowledge.

In a world where ‘high-profile’ targets are just people who are influencers on social media, there are a lot of people who stand to benefit from Apple’s courageous move. I only hope that other companies, such as Google, are courageous enough to follow Apple at some point in the near future.


  1. really, 13, given the issue of iMessage backups being accessible to Apple ↩︎