Categories: Writing

Amendments in Bill C-2 Would Establish an Intelligence Role for the Canadian Coast Guard

While much of the attention around Canada’s Bill C-2: An Act respecting certain measures relating to the security of the border between Canada and the United States and respecting other related security measures has focused on its lawful access and interception aspects, one notable change has flown under the radar: amendments to the Oceans Act that quietly expand the Canadian Coast Guard’s mandate to include intelligence functions.

Specifically, the bill proposes updating the Coast Guard’s responsibilities to include:

security, including security patrols and the collection, analysis and disclosure of information or intelligence.1

This language, paired with provisions granting the Minister explicit authority to collect, analyze, and disclose intelligence,2 marks a meaningful shift. The update would echo the U.S. model, where the Coast Guard is both a maritime safety organization and an intelligence actor: U.S. Coast Guard Intelligence (CG-2) has long played a dual role in maritime domain awareness and national security operations.

Why does this matter?

There are a few strategic implications:
1. NATO and National Security Alignment: The expanded role may help Canada meet NATO funding expectations, especially where the Coast Guard is deployed to conduct maritime surveillance and to maintain an Arctic presence.
2. Statutory Authority: These changes might establish a legal basis for intelligence collection practices that are already occurring, but until now may have lacked clear legislative grounding.
3. Redundancy and Resilience: With global intelligence sharing under strain, having a domestic maritime intelligence function could serve as a backstop if access to allied intelligence is reduced.
4. Northern Operations: Coast Guard vessels, which are not militarized like Royal Canadian Navy warships, are well-positioned to operate in the Arctic and northern waters, offering intelligence capabilities without the geopolitical weight of a military presence.

To be clear, this wouldn’t transform the Canadian Coast Guard into an intelligence agency. But it would give the institution statutory authorities that, until now, have not explicitly been within its official purview.

It’s a small clause in a big bill, but one worth watching. As researchers, journalists, and civil society take a closer look at Bill C-2, this expansion of maritime intelligence authority could (and should) draw more attention.


  1. 30(2) of C-2, amending 41(1)(f) of the Oceans Act ↩︎
  2. 30(2) of C-2, amending 41(2) of the Oceans Act ↩︎
Categories: Links, Writing

Research Security Requirements and Ontario Colleges and Universities

There’s a lot happening legislatively in Ontario. One item worth highlighting concerns the requirement for Ontario colleges and universities to develop research security plans.

The federal government has been warning that Canadian academic research is at risk of exfiltration or theft by foreign actors, including by foreign-influenced professors or students who work in Canadian research environments, or by way of electronic and trade-based espionage. In response, it has established a series of guidance documents that Canadian researchers and universities are expected to adhere to when seeking certain kinds of federal funding.

The Ontario government introduced Bill 33, the Supporting Children and Students Act, 2025, on May 29, 2025. Notably, Schedule 3 introduces research security plan requirements for every Ontario college of applied arts and technology and every publicly assisted university.

The relevant text from the legislation states as follows:

Research security plan

Application

20.1 (1) This section applies to every college of applied arts and technology and to every publicly-assisted university.

Development and implementation of plan

(2) Every college or university described in subsection (1) shall develop and implement a research security plan to safeguard, and mitigate the risk of harm to or interference with, its research activities.

Minister’s directive

(3) The Minister may, from time to time, in a directive issued to one or more colleges or universities described in subsection (1),

(a) specify the date by which a college or university’s research security plan must be developed and implemented under subsection (2);

(b) specify the date by which a plan must be provided to the Minister under subsection (4) and any requirements relating to updating or revising a plan; and

(c) specify topics to be addressed or elements to be included in a plan and the date by which they must be addressed.

Review by Minister

(4) Every college or university described in subsection (1) shall provide the Minister with a copy of its research security plan and any other information or reports requested by the Minister in respect of research security.

Categories: Links, Writing

Japan’s New Active Cyberdefence Law

Japan has passed legislation that will significantly reshape the range of cyber operations that its government agencies can undertake. As reported by The Record, the law will enable the following:

  1. Japan’s Self-Defence Forces will be able to provide material support to allies under the justification that failing to do so could endanger the whole of the country.
  2. Japanese law enforcement agencies will be able to infiltrate and neutralize hostile servers before any malicious activity has taken place, operating below the threshold of an armed attack against Japan.
  3. The Self-Defence Forces will be authorized to undertake offensive cyber operations in response to particularly sophisticated incidents.
  4. The government will be empowered to analyze foreign internet traffic entering the country or just transiting through it. (The government has claimed it won’t collect or analyze the contents of this traffic.) Of note: the new law will not authorize the government to collect or analyze domestically generated internet traffic.
  5. Japan will establish an independent oversight panel that will give prior authorization for all acts of data collection and analysis, as well as for offensive operations intended to target attackers’ servers. This has some relationship to Ministerial oversight of the CSE in Canada, though perhaps with a greater degree of control over the activities undertaken by Japanese agencies.

The broader result of this legislative update will be to further align the Japanese government, and its agencies, with its Five Eyes friends and allies.

It will be interesting to learn over time whether these activities are impaired by the historical stovepiping of Japan’s defence and SIGINT competencies. The strong division between these organizations has impeded cyber operations in the past, an issue that the USA (and the NSA in particular) sought to have remedied over a decade ago. If these divisions persist, the new law may not be taken up as effectively as would otherwise be possible.

Categories: Aside

In Memoriam of John L. Young of Cryptome

John L. Young, founder of Cryptome, has died.

John’s work at Cryptome was inspirational for much of the work that I did during my doctorate and time at the Citizen Lab. His unwavering commitment to transparency and his efforts to hold the powerful accountable were an early and important light, showing how digital archives could be used to promote real change.

While we never met, his commitment to transparency and accountability will live on with me and many others.

You can learn about the history of Cryptome on Wikipedia.

Categories: Writing

Details from the DNI’s Annual VEP Report

For a long time, external observers wondered how many vulnerabilities were retained versus disclosed by FVEY SIGINT agencies. Following years of policy advocacy, there is now some small degree of visibility into this by way of Section 6270 of Public Law 116-92. This law requires the U.S. Director of National Intelligence (DNI) to disclose certain annual data about the vulnerabilities disclosed and retained by US government agencies.

The Fiscal Year 2023 VEP Annual Report Unclassified Appendix reveals “the aggregate number of vulnerabilities disclosed to vendors or the public pursuant to the [VEP] was 39. Of those disclosed, 29 of them were initial submissions, and 10 of them were reconsiderations that originated in prior years.”1

There can be many reasons to reassess vulnerability equities. Some include:

  1. The utility of given vulnerabilities decreases, either due to changes in the environment or to research showing a vulnerability would not (or would no longer) have the desired effect(s) or possess desired operational characteristics.
  2. Adversaries have identified the vulnerabilities themselves, or through 4th party collection, and disclosure is a defensive action to protect US or allied assets.
  3. Independent researchers / organizations are pursuing lines of research that would likely result in finding the vulnerabilities.
  4. By disclosing the vulnerabilities the U.S. agencies hope or expect adversaries to develop similar attacks on still-vulnerable systems, with the effect of masking future U.S. actions on similarly vulnerable systems.
  5. Organizations responsible for the affected software (e.g., open source projects) are now perceived as competent / resourced to remediate vulnerabilities.
  6. Vulnerabilities are identified as having greater possible effects than initially perceived, which rebalances disclosure equities.
  7. Presidential orders to secure certain systems result in a rebalancing of the equities around holding the vulnerabilities in question.
  8. Newly discovered vulnerabilities are seen as more effective in mission tasks, thus deprecating the need for the vulnerabilities which were previously retained.
  9. Disclosure of vulnerabilities may enable adversaries to better target one another and thus enable new (deniable) 4th party collection opportunities.
  10. Vulnerabilities were in fact long used by adversaries (and not the U.S. / FVEY) and this disclosure burns some of their infrastructure or operational capacity.
  11. Vulnerabilities are associated with long-terminated programs and their release has no effect on current, recent, or deprecated activities.

This is just a very small subset of possible reasons to disclose previously-withheld vulnerabilities. While we don’t have a strong sense of how many vulnerabilities are retained each year, we do at least have a sense that a rebalancing of equities is occurring year over year. Without a sense of scale, though, the disclosed information is of middling value at best.

Categories: Writing

ASD is Clearly Preparing for a Quantum Future

National cryptologic organizations, such as the NSA, CSE, GCHQ, ASD, and GCSB, routinely assess the strength of different modes of encryption and offer recommendations on what organizations should be using. They make their assessments based on the contemporary strength of encryption algorithms as well as on the expected vulnerability of those algorithms in the face of new or forthcoming technologies.

Quantum computing has the potential to undermine the security that is currently provided by a range of approved cryptographic algorithms.1 On December 12, 2024, Australia’s ASD published a series of recommendations for what algorithms should be deprecated by 2030. What is notable about their decision is that they are proposing deprecations before other leading agencies, including the USA’s National Institute of Standards and Technology and Canada’s CSE, though with an acknowledgement that the deprecation is focused on High Assurance Cryptographic Equipment (HACE).

To-be-deprecated algorithms include:

  • Elliptic Curve Diffie-Hellman (ECDH)
  • Elliptic Curve Digital Signature Algorithm (ECDSA)
  • Module-Lattice-Based Digital Signature Algorithm 65 (ML-DSA-65)
  • Module-Lattice-Based Key Encapsulation Mechanism 768 (ML-KEM-768)
  • Rivest-Shamir-Adleman (RSA)
  • Secure Hash Algorithms 224 and 256 (SHA-224 and SHA-256)
  • AES-128 and AES-192
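
To make the practical implication a bit more concrete, here is a minimal Python sketch of how an organization might audit a cryptographic inventory against the list above. The inventory contents, system names, and the flag_deprecated helper are all invented for illustration; this is not ASD tooling and not a definitive implementation.

```python
# Illustrative audit of a (hypothetical) cryptographic inventory against the
# algorithms ASD has slated for deprecation by 2030 for HACE contexts.

DEPRECATED_BY_2030 = {
    "ECDH", "ECDSA", "ML-DSA-65", "ML-KEM-768",
    "RSA", "SHA-224", "SHA-256", "AES-128", "AES-192",
}

# Hypothetical inventory: system name -> algorithms it relies on.
inventory = {
    "vpn-gateway": ["ECDH", "AES-256", "SHA-384"],
    "firmware-signing": ["ECDSA", "SHA-256"],
    "legacy-sensor-fleet": ["RSA", "AES-128"],
}


def flag_deprecated(systems: dict[str, list[str]]) -> dict[str, list[str]]:
    """Return, per system, the algorithms slated for deprecation by 2030."""
    return {
        name: sorted(set(algorithms) & DEPRECATED_BY_2030)
        for name, algorithms in systems.items()
        if set(algorithms) & DEPRECATED_BY_2030
    }


if __name__ == "__main__":
    for system, algorithms in flag_deprecated(inventory).items():
        print(f"{system}: plan migration away from {', '.join(algorithms)} before 2030")
```

In practice, of course, the hard part is not the string matching but discovering where these algorithms are baked into hardware and firmware in the first place, which is precisely why early planning matters.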

Given that the English-speaking Five Eyes agencies regularly walk in near-lockstep, we might see updated guidance from the different agencies in the coming weeks and months. Alternatively, policy processes may prevent countries from updating their standards (or publicly announcing changes), leaving ASD as a path leader in cybersecurity while other agencies wait until policy mechanisms eventually lead to these algorithms being deprecated by 2035.

Looking further out, and aside from the national security space, the concerns around cryptographic algorithms speak to the challenges that embedded systems will face in the coming decade where manufacturers fail to get ahead of things and integrate quantum-resistant algorithms into the products they sell. Moreover, for embedded systems (e.g., Operational Technology, Internet of Things, and related systems) where it may be challenging or impossible to update cryptographic algorithms, there may be a whole world of currently-secure solutions that will become woefully insecure in the not-so-distant future. That’s a future that we need to start planning for, today, so that at least a decade’s worth of work can hopefully head off the worst of the harms associated with deprecated embedded systems’ (in)security.


  1. What continues to be my favourite, and most accessible, explanation of the risks posed by quantum computing is written by Bruce Schneier. ↩︎
Categories: Writing

Cybercrime, Advanced Persistent Threats, and Human-Centric Security

RUSI has published a compelling essay arguing that policy makers and threat intelligence groups should devote more time and attention to the activities of cyber criminals.

Contemporary cyber criminals:

  • have many operational characteristics that parallel those of nation-state supported advanced persistent threats
  • are quickly innovating and developing new exploit processes and chains in reaction to market developments, and
  • have a real and significant impact on the lives of people around the world.

Moreover, criminals are increasingly targeting critical infrastructure, a type of activity that has characteristically been associated with nation-state supported organizations.

While it’s left unstated in the essay, Larson is also implicitly calling for a focus on human-centric security practices. Such a focus would see policy makers and cyber practitioners work to more actively stymie the worst harms felt by individuals and communities affected by cyber operations or incidents. It might also see countries or organizations shift resources away from impeding nation-state supported threat actors and towards law enforcement agencies and cybersecurity bodies or, alternatively, see national governments update operational guidance to prioritize targeting cyber criminals’ organizations or infrastructure using offensive cyber capacities.

Categories: Writing

The Data Broker Economy Continues to Endanger Individuals’ Privacy

Mobile advertisers and data brokers routinely collect vast amounts of sensitive information without individuals’ meaningful consent. Sometimes this collection is explicitly mentioned in the terms of service that advertisers provide. However, in many other cases, this collection is linked to “free” functionality services that developers integrate into their applications at the cost of losing control of their users’ data.

These kinds of data brokers fuel a large and mostly invisible data market. But there are times when aspects of it (accidentally) emerge from the shadows.

Recent reporting, first covered by 404 Media, reveals how Fog Reveal sells geolocation services to government agencies. Geofences can be placed around targeted persons’ friends’ and families’ homes, places of worship, doctors’ offices, and lawyers’ offices. Fences can be established retroactively as well as proactively.
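
To illustrate how little engineering a retroactive geofence query actually requires, here is a minimal Python sketch of the general technique of filtering historical location pings by distance from a point. This is not Fog Reveal’s system; the device identifiers, coordinates, and 200-metre radius are all hypothetical.

```python
# Illustrative sketch of a retroactive geofence query over historical
# location pings, of the general kind attributed to location data brokers.
# All identifiers, coordinates, and the radius are hypothetical.
from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6_371_000


@dataclass
class Ping:
    device_id: str
    lat: float
    lon: float
    timestamp: str  # ISO 8601, e.g. "2024-11-02T09:14:00Z"


def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in metres between two coordinates."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))


def devices_in_fence(pings: list[Ping], lat: float, lon: float, radius_m: float = 200.0) -> set[str]:
    """Return device IDs with at least one historical ping inside the fence."""
    return {p.device_id for p in pings if haversine_m(p.lat, p.lon, lat, lon) <= radius_m}


# Hypothetical historical pings already sitting in a broker's dataset.
history = [
    Ping("ad-id-001", 45.4215, -75.6972, "2024-11-02T09:14:00Z"),
    Ping("ad-id-002", 45.4580, -75.7300, "2024-11-02T09:20:00Z"),
]

# A fence drawn retroactively around a location of interest.
print(devices_in_fence(history, 45.4216, -75.6970))
```

The disquieting part is that the hard work, amassing the historical pings, has already been done by the advertising ecosystem; the query itself is trivial.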

These same capacities, it must be noted, can be and are exploited by parties other than law enforcement agencies. Recent reporting has showcased how the activities of these kinds of data brokers can endanger national security, and they can also put political and business leaders, to say nothing of regular people, at risk of harm.

Fog Reveal and similar companies are offering an expansive for-sale surveillance capacity. And that capacity, which was once the stuff of science fiction, has somehow become banally available to those who can convince private vendors to provide access to the data they have collected.

There remains an open question of how to remedy the current situation: should the focus be on regulating bad actors after they appear or, instead, on investing the political capital required to stop the processes that enable the data collection in the first place?

Categories: Writing

Intelligence Commissioner Raises Concerns About Canada’s Federal Cybersecurity Legislation

Earlier this week the Intelligence Commissioner (IC) appeared before the Standing Senate Committee on National Security, Defence and Veterans Affairs regarding Bill C-26, along with the federal Privacy Commissioner. The bill is intended to enhance the cybersecurity requirements that critical infrastructure providers must adopt.

The IC’s remarks are now public. He made four notable points in his opening statement:

  1. The IC warned that the proposed amendments to the Telecom Act would allow the minister to essentially compel the production of any information in support of orders. This information could include personal information that, under broad exceptions, could then be widely disclosed.
  2. Part 2 allows for the regulators to carry out the equivalent of unwarranted searches – where again, personal information could be collected.
  3. The CSE will play a vital role and will be the holder of this information, in a technological form or otherwise, which will contain elements for which we have a reasonable expectation of privacy.
  4. In light of the invasive nature of the Bill, he asserted that it is important that meaningful safeguards be part of the legislation so that Canadians have confidence in the cybersecurity system.

His responses to comments at committee — not yet available through Hansard — made it even clearer that he believes amendments are needed to integrate appropriate oversight and accountability measures into the legislation. The IC’s comments, combined with those of the federal Privacy Commissioner of Canada and civil society representatives, constitute a clear warning to senators about the potential implications of the legislation.

It will be interesting to see how they respond.

Categories: Links, Writing

The Ongoing Problems of Placing Backdoors in Telecommunications Networks

In a cyber incident reminiscent of Operation Aurora,1 threat actors successfully penetrated American telecommunications companies (and a small number of other countries’ service providers) to gain access to lawful interception systems or associated data. The result was that:

For months or longer, the hackers might have held access to network infrastructure used to cooperate with lawful U.S. requests for communications data, according to people familiar with the matter, which amounts to a major national security risk. The attackers also had access to other tranches of more generic internet traffic, they said.

The surveillance systems believed to be at issue are used to cooperate with requests for domestic information related to criminal and national security investigations. Under federal law, telecommunications and broadband companies must allow authorities to intercept electronic information pursuant to a court order. It couldn’t be determined if systems that support foreign intelligence surveillance were also vulnerable in the breach.

Not only is this a major intelligence coup for the adversary in question, but it once more reveals the fundamental difficulty of deliberately building lawful access/interception systems into communications infrastructure to support law enforcement and national security investigations while, simultaneously, preventing adversaries from taking advantage of those same deliberately-designed vulnerabilities.