
What is the Role of Cyber Operators in Assessing Effectiveness or Shaping Cyber Policy?

An anonymous European intelligence official wrote an op-ed in July entitled, “Can lawyers lose wars by stifling cyber capabilities?” The article does a good job of laying out why a cyber operator — that is, someone who is presumably relatively close to either planning or undertaking cyber operations — is deeply frustrated by the way in which decision-making is undertaken.

While I admit to having some sympathy for the author’s plight, I fundamentally disagree with much of their argument and think that the positions they hold should be taken up and scrutinised. In this post, I’m really just pulling out quotations from the article and then providing some rebuttal or analysis — you’re best off reading it first if you want to more fully follow along and assess whether I’m being fair to the author and the points they are making.

With that out of the way, here we go….

Law is no longer seen as a system of checks and balances but as a way to shape state behaviour in cyberspace

Yes, this is one of the things that laws are actually supposed to do. You may (reasonably, in some cases) disagree with the nature of the laws and their effects, but law isn’t a mere “check and balance.” And, especially where there is no real ability to contest interpretations of law (because they are administered by government agencies largely behind closed doors), it is particularly important for law to have a stronger guiding function in order to maintain democratic legitimacy and social trust in government operations.

Idealistic legalism causes legal debates on cyber capabilities to miss a crucial discussion point: what operational constraints are we willing to accept and what consequences does that have for our national security?

Sure, but some of this is because the US government is so tight-lipped about its capacities. Consider what might be different if there were a more robust effort to explain practice, as there is with some European agencies. I would note that the Dutch, as an example, are sometimes pretty explicit about their operations, which is helpful when considering their activities with respect to authorising laws and associated national and international norms.

Laws attempt to capture as many activities in cyberspace as possible. To do so, legal frameworks must oversimplify. This is ill-suited to such a complex domain

This seems not to appreciate how law tends, at least in some jurisdictions, to be broader in scope and then supplemented by regulations or policies. However, where regulations or policies have regularly been found insufficient, there may be a decision that more detailed laws are now necessary. To an extent, this is the case post-Snowden and with very good reason, as demonstrated by the various findings of non-compliance associated with certain NSA (and other American intelligence community) operations over time.

The influence of practitioners slowly diminishes as lawyers increasingly take the lead in shaping senior leadership opinions on proposed cyber operations rather than merely advising.

I can appreciate the frustration of seeing the leadership move from operations practitioners to policy/legal practitioners.1 But that shift between whether organisations are being led by operations practitioners or by those focused on law/policy can be a normal back-and-forth.

And, to be entirely honest, the key thing — and the implicit critique throughout this whole piece — is whether the decision makers understand what the ops folks are saying.2 Those in decision-making roles have a lot of responsibilities and, often, a bigger or different picture of the implications of operations.

I’m in no way saying that lawyers should be the folks to always call the shots,3 but just because you’re in operations doesn’t mean that you are necessarily making the right calls broadly; instead, you may be seeing the right calls through your particular lens and mission. That lens and mission may not always be sufficient for coming to a conclusion that aligns more broadly with agency, national, or international policy intents/goals.

… a law might stipulate that a (foreign) intelligence agency cannot collect information from systems owned by the citizens of its country. But what if, as Chinese and Russian cyber threat actors do, a system belonging to a citizen is being abused to route attack traffic through? Such an operational development is not foreseen, and thus not prescribed, by law. To collect information would then be illegal and require judicial overhaul – a process that can take years in a domain that can see modus operandi shift in a matter of days.

There may be cases where you have particularly risk-averse decision makers or, alternately, particularly strong legal limitations that preclude certain kinds of operations.

I would note that it is against the law to simply target civilians in conflict scenarios, on the grounds that doing so runs counter to the agreed-upon laws of war (recognising they are often not adhered to). Does this have the effect of impeding certain kinds of military activities? Yes. And that may still be the right decision, notwithstanding the consequences it may have on the ability to conduct some operations and/or reduce their efficacy.

In the cyber context, the complaint is that certain activities are precluded on the basis that the law doesn’t explicitly recognise and authorise them. Law routinely leaves wiggle room, and part of the popular (and sometimes private…) problem has been how intelligence lawyers are perceived as abusing that wiggle room — again, see the NSA and other agencies as they were laid bare in some of the Snowden revelations, and the openly contrary interpretations of legislation that were adopted to authorise actions legislators had deliberately sought to preclude.4 For further reasons why mistrust may exist between operators and legislators, in Canada you can turn to the ongoing historical issues between CSIS and the Federal Court, which suggest that the “secret law and practices” adopted by Canada’s IC community may run counter to the actual law and legal processes, and then combine that with some NSIRA findings that CSE activities may have taken place in contravention of Canadian privacy law.

In the above context, I would say that lots of legislators (and publics) have good grounds to doubt the good will or decision-making capacity of the various parties within national ICs. You don’t get to undertake the kinds of activities that happened previously and then just pretend that “it was all in the recent past, everything’s changed, trust us guys.”

I would also note: the quoted material makes an assumption that policy makers have not, in fact, considered the scenario the author is proposing and then rejected it as a legitimate way of operating. The fact that a decision may not have gone your way is not the same as your concerns not being evaluated in the process of reaching a conclusion.

When effectiveness is seen as secondary, cyber activities may be compliant, but they are not winning the fight.

As I have been writing in various (frustrating) peer reviews I’ve been doing: evidence of this, please, as opposed to opinion and supposition. Also, “the fight” will be understood and perceived by different people in different positions in different agencies: a universal definition should not be presumed.

…constraints also incur costs due to increased bureaucratic complexity. This hampers operational flexibility and innovation – a trade-off often not adequately weighed by, or even visible to, law- and decision-makers. When appointing ex-ante oversight boards or judicial approval, preparation time for conducting cyber operations inevitably increases, even for those perfectly legal from the beginning.

So, in this case the stated problem is that legislators and decision makers aren’t getting the discrete kinds of operational detail that this particular writer thinks are needed to make the “right” trade-off decisions.

In some cases….yeah. That’ll be the case. Welcome to the hell of people not briefing up properly, or people not understanding because briefing materials weren’t scoped or prepared right, and so forth. That is: welcome to the government (or any sufficiently large bureaucracy)!

But more broadly, the complaint is that the operator in question knows better than the other parties, without, again, specific and clear evidence that the trade-offs are incorrect. I get that spooky things can’t be spoken aloud without them becoming de-spookified, but picture a similar kind of argument in any other sector of government and you’ll get the same kind of complaint. Ops people will regularly complain about legislators or decision makers when they don’t get their way, their sandcastles get crushed, or they have to do things in less-efficient ways in their busy days. Sometimes they’re right to complain and, in other cases, there is a lot more at stake than what they see operationally.

This is a losing game because, as Calder Walton noted, ‘Chinese and Russian services are limited only by operational effectiveness’.

I don’t want to suggest I disagree! But, at the same time, this is along the lines of “autocracies are great because they move faster than democracies and we have to recognise their efficiency” arguments that float around periodically.5

All of which is to say: autocracies and dictatorships have different internal logics to their bureaucracies that can have corresponding effects on their operations.

While it may be “the law” that impedes some Five Eyes/Western agencies’ activities, you can picture the need to advance the interests of kleptocrats or dictators’ kids, gin up enough ransomware dollars to put food on the team’s table, and so forth, as establishing some limits on the operational effectiveness of autocratic governments’ intelligence agencies.

It’s also worth noting that “effectiveness” can be a contested concept. If you’re OK blundering around, burning your tools, and being identified pretty often, then you may have a different approach to cyber operations generally, as opposed to situations where being invisible is a key part of operational development. I’m not trying to suggest that the Russians, Chinese, and other adversaries just blunder about, nor that the FVEY are magical ghosts that no one ever sees on boxes or undertaking operations. However, how you perceive or define “effective” will have corresponding consequences for the nature and types of operations you undertake and for which ones are perceived as achieving the mission’s goals.

Are agencies going to publicly admit they were unable to collect intelligence on certain adversary cyber actors because of legal boundaries?

This speaks to the “everything is secret and thus trust us” posture that is generally antithetical to democratic governance. To reverse things on the author: should there be more revelation of operations that don’t work so that they can more broadly be learned from? The complaint seems to be that the lawyers et al don’t know what they’re doing because they aren’t necessarily exposed to the important spooky stuff, or don’t understand its significance and importance. To what extent, then, do the curtains need to open some, both to communicate this in effective ways and to show the ways in which successes have previously happened?

I know: if anything is shown then it blows the whole premise of secret operations. But it’s hard to complain that people don’t get the issues if no facts are brought to the table, whereas the lawyers and such can point to the laws and at least talk to them. If you can’t talk about ops, then don’t be surprised that people will talk about what is publicly discussable…and your ops arguments won’t have weight because they don’t even really exist in the room where the substantive discussions about guardrails may be taking place.


In summary: while I tend to not agree with the author — and disagree as someone who has always been more on the policy and/or law side of the analytic space — their article was at least thought-provoking. And for that alone I think it’s worth taking the time to read it and consider the arguments within it.


  1. I would, however, hasten to note that the head of NSA/Cyber Command tends to be a hella lot closer to “ops” by virtue of their military leadership. ↩︎
  2. And, also, what the legal and policy teams are saying… ↩︎
  3. Believe me on this point… ↩︎
  4. See, as example: “In 2006, after Congress added the requirement that Section 215 orders be “relevant to” an investigation, the DOJ acknowledged that language was intended to impose new protections. A fact sheet about the new law published by the DOJ stated: “The reauthorizing legislation’s amendments provide significant additional safeguards of Americans’ civil liberties and privacy,” in part by clarifying, “that a section 215 order cannot be issued unless the information sought is relevant to an authorized national security investigation.” Yet just months later, the DOJ convinced the FISC that “relevant to” meant “all” in the first Section 215 bulk dragnet order. In other words, the language inserted by Congress to ​limit ​the scope of what information could be gathered was used by the government to say that there were ​no limits​.” From: Section 215: A Brief History of Violations. ↩︎
  5. See, as an example, the period 2-4 years ago when there was a perception that the Chinese response to Covid-19 and the economy was superior to that of everyone else grappling with the global pandemic. ↩︎

Russian State Media Disinformation Campaign Exposed

Today, a series of Western allies — including Canada, the United States, and the Netherlands — disclosed the existence of a sophisticated Russian social media influence operation that was being operated by RT. The details of the campaign are exquisite, and include some of the code used to drive the operation.

Of note, the campaign used a covert artificial intelligence (AI) enhanced software package to create fictitious online personas, representing a number of nationalities, to post content on X (formerly Twitter). Using this tool, RT affiliates disseminated disinformation to and about a number of countries, including the United States, Poland, Germany, the Netherlands, Spain, Ukraine, and Israel.

Although the tool was only identified on X, the authoring organizations’ analysis of the software used for the campaign indicated the developers intended to expand its functionality to other social media platforms. The authoring organizations’ analysis also indicated the tool is capable of the following:

  1. Creating authentic appearing social media personas en masse;
  2. Deploying content similar to typical social media users;
  3. Mirroring disinformation of other bot personas;
  4. Perpetuating the use of pre-existing false narratives to amplify malign foreign influence; and
  5. Formulating messages, to include the topic and framing, based on the specific archetype of the bot.

Mitigations to address this influence campaign include:

  1. Consider implementing processes to validate that accounts are created and operated by a human person who abides by the platform’s respective terms of use. Such processes could be similar to well-established Know Your Customer guidelines.
  2. Consider reviewing and making upgrades to authentication and verification processes based on the information provided in this advisory;
  3. Consider protocols for identifying and subsequently reviewing users with known-suspicious user agent strings (a minimal sketch of this follows the list);
  4. Consider making user accounts Secure by Default by using default settings such as MFA, default settings that support privacy, removing personally identifiable information shared without consent, and clear documentation of acceptable behavior.

This is a continuation of how AI tools are being (and will be) used to expand the ability of actors to undertake next-generation digital influence campaigns. And while adversaries are the ones found using these techniques today, we should anticipate that private companies (and others) will offer similar capabilities in the near future, in democratic and non-democratic countries alike.


2024.3.18

It is exceptionally rewarding to see years of research and advocacy while I was at my former employer lead to significant reforms to legislation. The effect, thus far, has been to protect residents of Canada from cyber-related threats while also imposing checks on otherwise unfettered government power and protecting the privacy of all residents of Canada.


The Near-Term Impact of AI Technologies and Cyber Threats

In January, the UK’s National Cyber Security Centre (NCSC) published its assessment of the near-term impact of AI on cyber threats. The whole assessment is worth reading for its clarity and brevity in identifying different ways that AI technologies will be used by high-capacity state actors, by other state and well-resourced criminal and mercenary actors, and by comparatively low-skill actors.

A few items which caught my eye:

  • More sophisticated uses of AI in cyber operations are highly likely to be restricted to threat actors with access to quality training data, significant expertise (in both AI and cyber), and resources. More advanced uses are unlikely to be realised before 2025.
  • AI will almost certainly make cyber operations more impactful because threat actors will be able to analyse exfiltrated data faster and more effectively, and use it to train AI models.
  • AI lowers the barrier for novice cyber criminals, hackers-for-hire and hacktivists to carry out effective access and information gathering operations. This enhanced access will likely contribute to the global ransomware threat over the next two years.
  • Cyber resilience challenges will become more acute as the technology develops. To 2025, GenAI and large language models will make it difficult for everyone, regardless of their level of cyber security understanding, to assess whether an email or password reset request is genuine, or to identify phishing, spoofing or social engineering attempts.

There are more insights, such as the value of training data held by high-capacity actors and the likelihood that low-skill actors will see significant upskilling over the next 18 months due to the availability of AI technologies.

The potential to assess information more quickly may have particularly notable impacts in the national security space, enable more effective corporate espionage operations, and enhance cyber criminal activities. In all cases, the ability to assess and query volumes of information at speed and scale will let threat actors extract value from information more efficiently than they can today.

The fact that the same technologies may enable lower-skilled actors to undertake wider ransomware operations, in which it will be challenging to distinguish legitimate from illegitimate security-related emails, also speaks to the desperate need for organizations to transition to higher-security solutions, including multi-factor authentication or passkeys.


Russian Cyber Doctrine and Its Implementation

While the following might be a bit bellicose, it has, at the same time, a ring of truth to it.

Using a foreign country’s military doctrine to reframe fuck-ups as successes — here, that the Russians’ real operations have had the intended effects — boils down to doing a GRU colonel’s work for him; placating Gerasimov about whether or not the O6’s department has contributed to winning the war, among other things.

The Russian government and its various agencies have been incredibly active in attempting to influence or impair the ability of the Ukrainian government to resist the illegal Russian invasion of its territory. But at the same time there has been a back-and-forth, largely in academic and public policy circles, about the successes or failures of Russia. In at least some cases, these arguments seem to assert the success of Russian doctrine without sufficient evidence to maintain the position.

Notwithstanding the value of some of those debates, it’s nice to see a line of critique that is more attentive to the structure of institutions and what often drives them, with the effect of broadening the rationales and explanations for the (un)successful efforts of Russian forces in the cyber domain.


Why Is(n’t) TikTok A National Security Risk?


There have been grumblings about TikTok being a national security risk for many years and they’re getting louder with each passing month. Indeed, in the United States a bill has been presented to ban TikTok (“The ANTI-SOCIAL CCP ACT“) and a separate bill (“No TikTok on Government Devices Act“) has passed the Senate and would bar the application from being used on government devices. In Canada, the Prime Minister noted that the country’s signals intelligence agency, the Communications Security Establishment, is “watching very carefully.”

I recently provided commentary where I outlined some of the potential risks associated with TikTok and where it likely should fit into Canada’s national security priorities (spoiler: probably pretty low). Here I just want to expand on my comments a bit to provide some deeper context and reflections.

As with all things security-related you need to think through what assets you are attempting to protect, the sensitivity of what you’re trying to protect, and what measures are more or less likely to protect those assets. Further, in developing a protection strategy you need to think through how many resources you’re willing to invest to achieve the sought-after protection. This applies as much to national security policy makers as it does to individuals trying to secure devices or networks.

What Is Being Protected?

Most public figures who talk about TikTok and national security are presently focused on one or two assets.

First, they worry that a large volume of data may be collected and used by Chinese government agencies, after these agencies receive it either voluntarily from TikTok or after compelling its disclosure. Commentators argue that Chinese companies are bound to obey the national security laws of China and, as such, may be forced to disclose data without any notice to users or non-Chinese government agencies. This information could be used to obtain information about specific individuals or communities, inclusive of what people are searching on the platform (e.g., medical information, financial information, sexual preference information), what they themselves are posting that could be embarrassing, or metadata which could be used for subsequent targeting.

Second, commentators are adopting a somewhat odious language of ‘cognitive warfare’ in talking about TikTok.1 The argument is that the Chinese government might compel the company to modify its algorithms so as to influence what people are seeing on the platform. The intent of this modification would be to influence political preferences or social and cultural perceptions. Some worry this kind of influence could guide whom individuals are more likely to vote for (e.g., you see a number of videos that directly or indirectly encourage you to support particular political parties), cause generalised apathy (e.g., you see videos that suggest that all parties are bad and none worth voting for), or enhance societal tensions (e.g., work to inflame partisanship and impair the functioning of otherwise moderate democracies). Or, as likely, a combination of each of these kinds of influence operations. Moreover, the TikTok algorithm could be modified by government compulsion to prioritise videos that praise some countries or that suppress videos which negatively portray other countries.

What Is the Sensitivity of the Assets?

When we consider the sensitivity of the information and data which is collected by TikTok it can be potentially high but, in practice, possesses differing sensitivities based on the person(s) in question. Research conducted by the University of Toronto’s Citizen Lab found that while TikTok does collect a significant volume of information, that volume largely parallels what Facebook or other Western companies collect. To put this slightly differently, a lot of information is collected and the sensitivity is associated with whom it belongs to, who may have access to it, and what those parties do with it.

When we consider who is using TikTok and having their information uploaded to the company’s servers, then, the question becomes whether there is a particular national security risk linked with this activity. While some individuals may potentially be targets based on their political, business, or civil society bonafides this will not be the case with all (or most) users. However, in even assessing the national security risks linked to individuals (or associated groups) it’s helpful to do a little more thinking.

First, the amount of information that is collected by TikTok, when merged with other data which could theoretically be collected using other signals intelligence methods (e.g., extracting metadata and select content from middle-boxes, Internet platforms, open-source locations, etc.) could be very revealing. Five Eyes countries (i.e., Australia, Canada, New Zealand, the United Kingdom, and the United States of America) collect large volumes of metadata on vast swathes of the world’s populations in order to develop patterns of life which, when added together, can be deeply revelatory. When and how those countries’ intelligence agencies actually use the collected information varies and is kept very secret. Generally, however, only a small subset of individuals whose information is collected and retained for any period of time have actions taken towards them. Nonetheless, we know that there is a genuine concern about information from private companies being obtained by intelligence services in the Five Eyes and it’s reasonable to be concerned that similar activities might be undertaken by Chinese intelligence services.

Second, the kinds of content information which are retained by TikTok could be embarrassing at a future time, or used by state agencies in ways that users would not expect or prefer. Imagine a situation where a young person says or does something on TikTok which is deeply offensive. Fast forward 3-4 years and their parents are diplomats or significant members of the business community, and that offensive content is used by Chinese security services to embarrass or otherwise inconvenience the parents. Such influence operations might impede Canada’s ability to conduct its diplomacy abroad or undermine a business’s ability to prosper.

Third, the TikTok algorithm is not well understood. There is a risk that the Chinese government might compel ByteDance, and through them the TikTok platform, to modify algorithms to amplify some content and not others. It is hard to assess how ‘sensitive’ a population’s general sense of the world is but, broadly, if a surreptitious foreign influence operation occurred it might potentially affect how a population behaves or sees the world. To be clear this kind of shift in behaviour would not follow from a single video but from a concerted effort over time that shifted social perceptions amongst at least some distinct social communities. The sensitivity of the information used to identify videos to play, then, could be quite high across a substantial swathe of the population using the platform.

It’s important to recognise that in the aforementioned examples there is no evidence that ByteDance, which owns TikTok, has been compelled by the Chinese government to perform these activities. But these are the kinds of sensitivities that are linked to using TikTok and are popularly discussed.

What Should Be Done To Protect Assets?

The threats posed by TikTok are, at the moment, speculative: the platform could be used for any number of things. The reasons people are concerned are linked less to the algorithm or the data that is collected and more to ByteDance being a Chinese company that might be influenced by the Chinese government to share data or undertake activities which are deleterious to Western countries’ interests.

Bluntly: the issue raised by TikTok is not necessarily linked to the platform itself but to the geopolitical struggles between China and other advanced economies throughout the world. We don’t have a TikTok problem per se but, instead, have a Chinese national security and foreign policy problem. TikTok is just a very narrow lens through which concerns and fears are being channelled.

So in the absence of obvious and deliberate harmful activities being undertaken by ByteDance and TikTok at the behest of the Chinese government what should be done? At the outset it’s worth recognising that many of the concerns expressed by politicians–and especially those linked to surreptitious influence operations–would already run afoul of Canadian law. The CSIS Act bars clandestine foreign intelligence operations which are regarded as threatening the security of Canada. Specifically, threats to the security of Canada means:

(a) espionage or sabotage that is against Canada or is detrimental to the interests of Canada or activities directed toward or in support of such espionage or sabotage,

(b) foreign influenced activities within or relating to Canada that are detrimental to the interests of Canada and are clandestine or deceptive or involve a threat to any person,

(c) activities within or relating to Canada directed toward or in support of the threat or use of acts of serious violence against persons or property for the purpose of achieving a political, religious or ideological objective within Canada or a foreign state, and

(d) activities directed toward undermining by covert unlawful acts, or directed toward or intended ultimately to lead to the destruction or overthrow by violence of, the constitutionally established system of government in Canada,

CSIS is authorised to undertake measures which would reduce the threats to the security of Canada, perhaps in partnership with the Communications Security Establishment, should such a threat be identified and a warrant obtained from the federal court.

On the whole a general ban on TikTok is almost certainly disproportionate and unreasonable at this point in time. There is no evidence of harm. There is no evidence of influence by the Chinese government. Rather than banning the platform generally I think that more focused legislation or policy could make sense.

First, I think that legislation or (preferably) policies precluding at least some members of government and senior civil servants from using TikTok has some merit. In these cases a risk analysis should be conducted to determine if collected information would undermine the Government of Canada’s ability to secure confidential information or if the collected information could be used for intelligence operations against the government officials. Advice might, also, be issued by the Canadian Security Intelligence Service so that private organisations are aware of their risks. In exceptional situations some kind of security requirements might also be imposed on private organisations and individuals, such as those who are involved in especially sensitive roles managing critical infrastructure systems. Ultimately, I suspect the number of people who should fall under this ban would, and should, be pretty small.

Second, what makes sense is legislation that requires social media companies writ large–not just TikTok–to make their algorithms and data flows legible to regulators. Moreover, individual users should be able to learn, and understand, why certain content is being prioritised or shown to them. Should platforms decline to comply with such a law then sanctions may be merited. Similarly, should algorithmic legibility showcase that platforms are being manipulated or developed in ways that deliberately undermine social cohesion then some sanctions might be merited, though with the caveat that “social cohesion” should be understood as referring to platforms being deliberately designed to incite rage or other strong emotions with the effect of continually, and artificially, weakening social cohesion and amplifying social cleavages. The term should not, however, be seen as a kind of code for creating exclusionary social environments where underprivileged groups continue to be treated in discriminatory ways.

So Is TikTok ‘Dangerous’ From A National Security Perspective?

Based on open source information2 there is no reason to think that TikTok is currently a national security threat. Are there any risks associated with the platform? Sure, but they need to be juxtaposed against equivalent or more serious threats and priorities. We only have so many resources to direct towards the growing legion of legitimate national security risks and issues; funnelling a limited set of resources towards TikTok may not be the best kind of prioritisation.

Consider that while the Chinese government could compel TikTok to disclose information about its users to intelligence and security services…the same government could also use business cutouts and purchase much of the same information from data brokers operating in the United States and other jurisdictions. There would be no need to secretly force a company to do something when, instead, it could just lawfully acquire equivalent (or more extensive!) information. This is a pressing and real national security (and privacy!) issue and is deserving of legislative scrutiny and attention.

Further, while there is a risk that TikTok could be used to manipulate social values…the same is true of other social networking services. Indeed, academic and journalistic research over the past 5-7 years has drawn attention to how popular social media services are designed to deliver dopamine hits and keep us on them. We know that various private companies and public organisations around the world work tirelessly to ‘hack’ those algorithms and manipulate social values. Of course this broader manipulation doesn’t mean that we shouldn’t care but, also, makes clear that TikTok isn’t the sole vector of these efforts. Moreover, there are real questions about how well social influence campaigns work: do they influence behaviour–are they supplying change?–or is the efficacy of any campaign representative of an attentive and interested pre-existing audience–is demand for the content the problem?

The nice thing about banning, blocking, or censoring material, or undertaking some other kind of binary decision, is that you feel like you’ve done something. Bans, blocks, and censors are typically designed for a black and white world. We, however, live in a world that is actually shrouded in greys. We only have so much legislative time, so much policy capacity, so much enforcement ability: it should all be directed efficiently to understanding, appreciating, and addressing the fullness of the challenges facing states and society. This time and effort should not be spent on performative politics that is great for providing a dopamine hit but which fails to address the real underlying issues.


  1. I have previously talked about the broader risks of correlating national security and information security. ↩︎
  2. Open source information means information which you or I can find, and read, without requiring a security clearance. ↩︎

National Security Means What, Again?

There have been any number of concerns about Elon Musk’s behaviour, especially in recent weeks and months. This has led some commentators to warn that his purchase of Twitter may raise national security risks. Gill and Lehrich try to make this argument in their article, “Elon Musk Owning Twitter is A National Security Threat.” They give three reasons:

First, Musk is allegedly in communication with foreign actors – including senior officials in the Kremlin and Chinese Communist Party – who could use his acquisition of Twitter to undermine American national security.

Will Musk’s foreign investors have influence over Twitter’s content moderation policies? Will the Chinese exploit their significant leverage over Musk to demand he censor criticism of the CCP, or turn the dials up for posts that sow distrust in democracy?

Finally, it’s not just America’s information ecosystem that’s at stake, it’s also the private data of American citizens.

It’s worth noting that at no point do the authors provide a definition of ‘national security’, which forces the reader to guess what they likely mean. More broadly, in journalistic and opinion-writing circles there is a curious–and increasingly common–conjoining of national security and information security. The authors themselves make this link in the kicker paragraph of their article, when they write:

It is imperative that American leaders fully understand Musk’s motives, financing, and loyalties amidst his bid to acquire Twitter – especially given the high-stakes geopolitical reality we are living in now. The fate of American national security and our information ecosystem hang in the balance.1

Information security, generally, is focused on dangers associated with true or false information being disseminated across a population. It is distinguished from cyber security, which is typically focused on the digital security protocols and practices that are designed to reduce technical computer vulnerabilities. Whereas the former focuses on a public’s mind, the latter attends to how digital and physical systems are hardened against technical exploitation.

Western governments have historically resisted authoritarian governments’ attempts to link the concepts of information security and cyber security. The reason is that authoritarian governments want to establish international principles and norms, whereby it becomes appropriate for governments to control the information which is made available to their publics under the guise of promoting ‘cyber security’. Democratic countries that emphasise the importance of intellectual freedom, freedom of religion, freedom of assembly, and other core rights have historically been opposed to promoting information security norms.

At the same time, misinformation and disinformation have become increasingly popular areas of study and commentary, especially following Donald Trump’s election as POTUS. And, in countries like the United States, Trump’s adoption of lies and misinformation was often cast as a national security issue: correct information should be communicated, and efforts to intentionally communicate false information should be blocked, prohibited, or prevented from massively circulating.

Obviously Trump’s language, actions, and behaviours were incredibly destabilising and abominable for an American president. And his presence on the world stage arguably emboldened many authoritarians around the world. But there is a real risk in using terms like ‘national security’ without definition, especially when the application of ‘national security’ starts to stray into the domain of what could be considered information security. Specifically, as everything becomes ‘national security’ it is possible for authoritarian governments to adopt the language of Western governments and intellectuals, and assert that they too are focused on ‘national security’ whereas, in fact, these authoritarian governments are using the term to justify their own censorious activities.

Now, does this mean that if we are more careful in the West about our use of language, authoritarian governments will become less censorious? No. But by being more careful and thoughtful in our language, public argumentation, and positioning of our policy statements, we may at least prevent those authoritarian governments from using our discourse as a justification for their own activities. We should, then, be careful and precise in what we say to avoid giving a fig leaf of cover to authoritarian activities.

And that will start with parties who use terms like ‘national security’ clearly defining what they mean, such that it is clear how national security differs from information security. Unless, of course, authors and thinkers are in fact leaning into the conceptual apparatus of repressive governments in an effort to save democratic governance. For any author who thinks such a move is wise, however, I must admit that I harbour strong doubts about the efficacy or utility of such attempts.


  1. Emphasis not in original. ↩︎

Housing in Ottawa Now a National Security Issue

David Pugliese is reporting in the Ottawa Citizen that the Canadian Forces Intelligence Command (CFINTCOM) is “trying to avoid posting junior staff to Ottawa because it has become too expensive to live in the region.” The risk is that financial hardship associated with living in Ottawa could make junior members susceptible to subversion. Housing costs in Ottawa have risen much faster than either wage increases or inflation. Moreover, the special allowance provided to staff that is meant to assuage the high costs of living in Canadian cities has been frozen for 13 years.

At this point energy, telecommunications, healthcare, and housing all raise their own national security concerns. To some extent, such concerns have tracked with these industry categories: governments have always worried about the security of telecommunications networks as well as the availability of sufficient energy supplies. But in other cases, such as housing affordability, the national security concerns we are seeing are the result of long-term governance failures. These failures have created new national security threats that would not exist in the face of good (or even just better) governance.1

There is a profound danger in trying to address all the new national security challenges and issues using national security tools or governance processes. National security incidents are often regarded as creating moments of exception and, in such moments, actions can be undertaken that otherwise could not. The danger is that states of exception become the norm and, in the process, the regular modes of governance and law are significantly set aside to resolve the crises of the day. What is needed is a regeneration and deployment of traditional governance capacity instead of a routine reliance on national security-type responses to these issues.

Of course, governments don’t just need to respond to these metastasized governance problems in order to alleviate national security issues and threats. They need to do so in equitable and inclusive ways, so as to preserve or (re)generate the trust between the residents of Canada and their government.

The public may justifiably doubt that their system of government is working where successive governments under the major political parties are seen as having failed to provide for basic needs. The threat, then, is that ongoing governance failures run the risk of placing Canada’s democracy under pressure. While this might seem overstated I don’t think that’s the case: we are seeing a rise of politicians who are capitalizing on the frustrations and challenges faced by Canadians across the country, but who do not have their own solutions. Capitalizing on rage and frustration, and then failing to deliver on fixes, will only further alienate Canadians from their government.

Governments across Canada flexed their muscles during the earlier phases of the COVID-19 pandemic. Having used them, then, it’s imperative they keep flexing these muscles to address the serious issues that Canadians are experiencing. Doing so will assuage existent national security issues. It will also, simultaneously, serve to prevent other normal governance challenges from metastasizing into national security threats.


  1. As an aside, these housing challenges are not necessarily new. Naval staff posted to Esquimalt have long complained about the high costs of off-base housing in Victoria and the surrounding towns and cities. ↩︎

Mitigating AI-Based Harms in National Security


Government agencies throughout Canada are investigating how they might adopt and deploy ‘artificial intelligence’ programs to enhance how they provide services. In the case of national security and law enforcement agencies these programs might be used to analyze and exploit datasets, surface threats, identify risky travellers, or automatically respond to criminal or threat activities.

However, the predictive software systems that are being deployed–‘artificial intelligence’–are routinely shown to be biased. These biases are serious in the commercial sphere but there, at least, it is somewhat possible for researchers to detect and surface them. In the secretive domain of national security, however, the likelihood of bias in agencies’ software being detected or surfaced by non-government parties is considerably lower.

I know that organizations such as the Canadian Security Intelligence Service (CSIS) have an interest in understanding how to use big data in ways that mitigate bias. The Canadian government does have a policy on the “Responsible use of artificial intelligence (AI)” and, at the municipal policing level, the Toronto Police Service has also published a policy on its use of artificial intelligence. Furthermore, the Office of the Privacy Commissioner of Canada has published a proposed regulatory framework for AI as part of potential reforms to federal privacy law.

Timnit Gebru, in conversation with Julia Angwin, suggests that there should be ‘datasheets for algorithms’ that would outline how predictive software systems have been tested for bias in different use cases prior to being deployed. Linking this to traditional circuit-based datasheets, she says (emphasis added):

As a circuit designer, you design certain components into your system, and these components are really idealized tools that you learn about in school that are always supposed to work perfectly. Of course, that’s not how they work in real life.

To account for this, there are standards that say, “You can use this component for railroads, because of x, y, and z,” and “You cannot use this component for life support systems, because it has all these qualities we’ve tested.” Before you design something into your system, you look at what’s called a datasheet for the component to inform your decision. In the world of AI, there is no information on what testing or auditing you did. You build the model and you just send it out into the world. This paper proposed that datasheets be published alongside datasets. The sheets are intended to help people make an informed decision about whether that dataset would work for a specific use case. There was also a follow-up paper called Model Cards for Model Reporting that I wrote with Meg Mitchell, my former co-lead at Google, which proposed that when you design a model, you need to specify the different tests you’ve conducted and the characteristics it has.

What I’ve realized is that when you’re in an institution, and you’re recommending that instead of hiring one person, you need five people to create the model card and the datasheet, and instead of putting out a product in a month, you should actually do it in three years, it’s not going to happen. I can write all the papers I want, but it’s just not going to happen. I’m constantly grappling with the incentive structure of this industry. We can write all the papers we want, but if we don’t change the incentives of the tech industry, nothing is going to change. That is why we need regulation.
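To make the datasheet and model card idea a bit more concrete, here is a minimal sketch of what a machine-readable model card might look like for a predictive system in the national security space. This is written in Python; the field names and the example model are hypothetical illustrations loosely inspired by the categories discussed in Model Cards for Model Reporting, not that paper’s actual schema.

```python
from dataclasses import dataclass, field


@dataclass
class ModelCard:
    """Illustrative, machine-readable summary of how a model was built and tested.

    Field names approximate categories from 'Model Cards for Model Reporting';
    they are not an official schema.
    """

    model_name: str
    intended_uses: list[str]        # use cases the model was designed and tested for
    out_of_scope_uses: list[str]    # use cases explicitly not validated
    training_data_summary: str      # provenance and known limits of the training data
    evaluation_results: dict[str, float] = field(default_factory=dict)
    subgroup_results: dict[str, dict[str, float]] = field(default_factory=dict)
    caveats: list[str] = field(default_factory=list)


def fit_for_use(card: ModelCard, proposed_use: str) -> bool:
    """Crude gate: only permit deployment for uses the card explicitly documents."""
    return proposed_use in card.intended_uses


# Example: a fictional traveller-triage model documented before deployment.
card = ModelCard(
    model_name="traveller-risk-triage-v0",
    intended_uses=["secondary screening triage"],
    out_of_scope_uses=["automated refusal of entry"],
    training_data_summary="Historical referral records, 2015-2020; several regions under-represented.",
    evaluation_results={"auc": 0.81},
    subgroup_results={"age_under_25": {"false_positive_rate": 0.19}},
    caveats=["False positive rates vary substantially across subgroups."],
)

print(fit_for_use(card, "automated refusal of entry"))  # False: not a documented use
```

The point of such a structure is that a deployment decision can be gated on whether a proposed use was actually documented and tested, rather than on an assumption that the model will generalise.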

Government is one of those areas where regulation or law can work well to discipline its behaviours, and where the relatively large volume of resources combined with a law-abiding bureaucracy might mean that formally required assessments would actually be conducted. While such assessments matter, generally, they are of particular importance where state agencies might be involved in making decisions that significantly or permanently alter the life chances of residents of Canada, visitors who are passing through our borders, or foreign nationals who are interacting with our government agencies.

As it stands, today, many Canadian government efforts at the federal, provincial, or municipal level seem to be significantly focused on how predictive software might be used or the effects it may have. These are important things to attend to! But it is just as, if not more, important for agencies to undertake baseline assessments of how and when different predictive software engines are permissible or not, as based on robust testing and evaluation of their features and flaws.

Having spoken with people at different levels of government, I find that the recurring complaint around assessing training data, and predictive software systems more generally, is that it’s hard to hire the right people for these assessment jobs on the basis that they are relatively rare and often exceedingly expensive. Thus, mid-level and senior members of government have a tendency to focus on things that government is perceived as actually able to do: figure out and track how predictive systems would be used and to what effect.

However, the regular focus on the resource-related challenges of predictive software assessment raises the very real question of whether these constraints should just compel agencies to forgo technologies on the basis of failing to determine, and assess, their prospective harms. In the firearms space, as an example, government agencies are extremely rigorous in assessing how a weapon operates to ensure that it functions precisely as meant given that the weapon might be used in life-changing scenarios. Such assessments require significant sums of money from agency budgets.

If we can make significant budgetary allocations for firearms, on the grounds they can have life-altering consequences for all involved in their use, then why can’t we do the same for predictive software systems? If anything, such allocations would compel agencies to make a strong(er) business case for testing the predictive systems in question and spur further accountability: Does the system work? At a reasonable cost? With acceptable outcomes?

Imposing cost discipline on organizations is an important way of ensuring that technologies, and other business processes, aren’t randomly adopted on the basis of externalizing their full costs. By internalizing those costs, up front, organizations may need to be much more careful in what they choose to adopt, when, and for what purpose. The outcome of this introspection and assessment would, hopefully, be that the harmful effects of predictive software systems in the national security space were mitigated and the systems which were adopted actually fulfilled the purposes they were acquired to address.


Chinese Spies Accused of Using Huawei in Secret Australia Telecom Hack

Bloomberg has an article that discusses how Chinese spies were allegedly involved in deploying implants on Huawei equipment which was operated in Australia and the United States. The key parts of the story include:

At the core of the case, those officials said, was a software update from Huawei that was installed on the network of a major Australian telecommunications company. The update appeared legitimate, but it contained malicious code that worked much like a digital wiretap, reprogramming the infected equipment to record all the communications passing through it before sending the data to China, they said. After a few days, that code deleted itself, the result of a clever self-destruct mechanism embedded in the update, they said. Ultimately, Australia’s intelligence agencies determined that China’s spy services were behind the breach, having infiltrated the ranks of Huawei technicians who helped maintain the equipment and pushed the update to the telecom’s systems. 

Guided by Australia’s tip, American intelligence agencies that year confirmed a similar attack from China using Huawei equipment located in the U.S., six of the former officials said, declining to provide further detail.

The details from the story are all circa 2012. The fact that Huawei equipment was successfully being targeted by these operations, in combination with the large volume of serious vulnerabilities in Huawei equipment, contributed to the United States’ efforts to bar Huawei equipment from American networks and the networks of their closest allies.1

Analysis

We can derive a number of conclusions from the Bloomberg article, as well as see links between activities allegedly undertaken by the Chinese government and those of Western intelligence agencies.

To begin, it’s worth noting that the very premise of the article–that the Chinese government needed to infiltrate the ranks of Huawei technicians–suggests that circa 2012 Huawei was not controlled by, operated by, or necessarily unduly influenced by the Chinese government. Why? Because if the government needed to impersonate technicians to deploy implants, and do so without the knowledge of Huawei’s executive staff, then it’s very challenging to say that the company writ large (or its executive staff) were complicit in intelligence operations.

Second, the Bloomberg article makes clear that a human intelligence (HUMINT) operation had to be conducted in order to deploy the implants in telecommunications networks, with data then being sent back to servers that were presumably operated by Chinese intelligence and security agencies. These kinds of HUMINT operations can be high-risk insofar as, if operatives are caught, the whole operation (and its surrounding infrastructure) can be detected and burned down. Building legends for assets is never easy, nor is developing assets if they are being run from a distance as opposed to spies themselves deploying implants.2

Third, the United States’ National Security Agency (NSA) has conducted similar if not identical operations when its staff interdicted equipment while it was being shipped, in order to implant the equipment before sending it along to its final destination. Similarly, the CIA worked for decades to deliberately provide cryptographically-sabotaged equipment to diplomatic facilities around the world. All of which is to say that multiple agencies have been involved in using spies or assets to deliberately compromise hardware, including Western agencies.

Fourth, the Canadian Communications Security Establishment Act (‘CSE Act’), which was passed into law in 2019, includes language which authorizes the CSE to do, “anything that is reasonably necessary to maintain the covert nature of the [foreign intelligence] activity” (26(2)(c)). The language in the CSE Act, at a minimum, raises the prospect that the CSE could undertake operations which parallel those of the NSA and, in theory, the Chinese government and its intelligence and security services.3

Of course, the fact that the NSA and other Western agencies have historically tampered with telecommunications hardware to facilitate intelligence collection doesn’t take away from the seriousness of the allegations that the Chinese government targeted Huawei equipment so as to carry out intelligence operations in Australia and the United States. Moreover, the reporting in Bloomberg covers a time around 2012 and it remains unclear whether the relationship(s) between the Chinese government and Huawei have changed since then; it is possible, though credible open source evidence is not forthcoming to date, that Huawei has since been captured by the Chinese state.

Takeaway

The Bloomberg article strongly suggests that Huawei, as of 2012, didn’t appear captured by the Chinese government given the government’s reliance on HUMINT operations. Moreover, and separate from the article itself, it’s important that readers keep in mind that the activities which were allegedly carried out by the Chinese government were (and remain) similar to those also carried out by Western governments and their own security and intelligence agencies. I don’t raise this latter point as a kind of ‘whataboutism‘ but, instead, to underscore that these kinds of operations are both serious and conducted by ‘friendly’ and adversarial intelligence services alike. As such, it behooves citizens to ask whether these are the kinds of activities we want our governments to be conducting on our behalves. Furthermore, we need to keep these kinds of facts in mind and, ideally, see them in news reporting to better contextualize the operations which are undertaken by domestic and foreign intelligence agencies alike.


  1. While it’s several years past 2012, the 2021 UK HCSEC report found that it continued “to uncover issues that indicate there has been no overall improvement over the course of 2020 to meet the product software engineering and cyber security quality expected by the NCSC.” (boldface in original) ↩︎
  2. It is worth noting that, post-2012, the Chinese government has passed national security legislation which may make it easier to compel Chinese nationals to operate as intelligence assets, inclusive of technicians who have privileged access to telecommunications equipment that is being maintained outside China. That having been said, and as helpfully pointed out by Graham Webster, this case demonstrates that the national security laws were not needed in order to use human agents or assets to deploy implants. ↩︎
  3. There is a baseline question of whether the CSE Act created new powers for the CSE in this regard or if, instead, it merely codified existing secret policies or legal interpretations which had previously authorized the CSE to undertake covert activities in carrying out its foreign signals intelligence operations. ↩︎