Why Is(n’t) TikTok A National Security Risk?


There have been grumblings about TikTok being a national security risk for many years, and they’re getting louder with each passing month. Indeed, in the United States a bill has been presented to ban TikTok (the “ANTI-SOCIAL CCP Act”) and a separate bill (the “No TikTok on Government Devices Act”) has passed the Senate and would bar the application from being used on government devices. In Canada, the Prime Minister noted that the country’s signals intelligence agency, the Communications Security Establishment, is “watching very carefully.”

I recently provided commentary where I outlined some of the potential risks associated with TikTok and where it likely should fit into Canada’s national security priorities (spoiler: probably pretty low). Here I just want to expand on my comments a bit to provide some deeper context and reflections.

As with all things security-related you need to think through what assets you are attempting to protect, the sensitivity of what you’re trying to protect, and what measures are more or less likely to protect those assets. Further, in developing a protection strategy you need to think through how many resources you’re willing to invest to achieve the sought-after protection. This applies as much to national security policy makers as it does to individuals trying to secure devices or networks.

What Is Being Protected

Most public figures who talk about TikTok and national security are presently focused on one or two assets.

First, they worry that a large volume of data may be collected and used by Chinese government agencies, after these agencies receive it either voluntarily from TikTok or after compelling its disclosure. Commentators argue that Chinese companies are bound to obey the national security laws of China and, as such, may be forced to disclose data without any notice to users or non-Chinese government agencies. This information could be used to obtain information about specific individuals or communities, inclusive of what people are searching for on the platform (e.g., medical information, financial information, sexual preference information), what they themselves are posting that could be embarrassing, or metadata which could be used for subsequent targeting.

Second, commentators are adopting a somewhat odious language of ‘cognitive warfare’ in talking about TikTok.1 The argument is that the Chinese government might compel the company to modify its algorithms so as to influence what people are seeing on the platform. The intent of this modification would be to influence political preferences or social and cultural perceptions. Some worry this kind of influence could guide whom individuals are more likely to vote for (e.g., you see a number of videos that directly or indirectly encourage you to support particular political parties), cause generalised apathy (e.g., you see videos that suggest that all parties are bad and none worth voting for), or enhance societal tensions (e.g., work to inflame partisanship and impair the functioning of otherwise moderate democracies). Or, as likely, a combination of each of these kinds of influence operations. Moreover, the TikTok algorithm could be modified by government compulsion to prioritise videos that praise some countries or that suppress videos which negatively portray other countries.

What Is the Sensitivity of the Assets?

When we consider the sensitivity of the information and data which is collected by TikTok, the sensitivity is potentially high but, in practice, differs based on the person(s) in question. Research conducted by the University of Toronto’s Citizen Lab found that while TikTok does collect a significant volume of information, that volume largely parallels what Facebook or other Western companies collect. To put this slightly differently, a lot of information is collected and the sensitivity is associated with whom it belongs to, who may have access to it, and what those parties do with it.

When we consider who is using TikTok and having their information uploaded to the company’s servers, then, the question becomes whether there is a particular national security risk linked with this activity. While some individuals may potentially be targets based on their political, business, or civil society bona fides, this will not be the case with all (or most) users. However, even in assessing the national security risks linked to individuals (or associated groups) it’s helpful to do a little more thinking.

First, the amount of information that is collected by TikTok, when merged with other data which could theoretically be collected using other signals intelligence methods (e.g., extracting metadata and select content from middle-boxes, Internet platforms, open-source locations, etc), could be very revealing. Five Eyes countries (i.e., Australia, Canada, New Zealand, the United Kingdom, and the United States of America) collect large volumes of metadata on vast swathes of the world’s populations in order to develop patterns of life which, when added together, can be deeply revelatory. When and how those countries’ intelligence agencies actually use the collected information varies and is kept very secret. Generally, however, only a small subset of individuals whose information is collected and retained for any period of time have actions taken towards them. Nonetheless, we know that there is a genuine concern about information from private companies being obtained by intelligence services in the Five Eyes and it’s reasonable to be concerned that similar activities might be undertaken by Chinese intelligence services.
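To make the pattern-of-life point concrete, here is a purely illustrative sketch. The records, field names, and domains are all made up; the only point is that a handful of who/when/where metadata entries, with no message content at all, is enough to surface a routine:

```python
from collections import Counter

# Purely illustrative, fabricated metadata records: no content,
# just user, hour-of-day, and the endpoint contacted.
records = [
    {"user": "alice", "hour": 8,  "endpoint": "clinic.example"},
    {"user": "alice", "hour": 8,  "endpoint": "clinic.example"},
    {"user": "alice", "hour": 22, "endpoint": "bank.example"},
    {"user": "alice", "hour": 8,  "endpoint": "clinic.example"},
]

def pattern_of_life(records, user):
    """Summarise when and where a user routinely appears,
    most frequent (hour, endpoint) pairs first."""
    visits = Counter(
        (r["hour"], r["endpoint"]) for r in records if r["user"] == user
    )
    return visits.most_common()

print(pattern_of_life(records, "alice"))
# The repeated morning visits to a medical site stand out even though
# no message content was ever collected; merging several such feeds
# is what makes aggregated metadata revelatory.
```

This is, of course, a toy: real pattern-of-life analysis merges many data sources at scale, but the underlying operation is the same kind of counting and correlation.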

Second, the kinds of content information which are retained by TikTok could be embarrassing at a future time, or used by state agencies in ways that users would not expect or prefer. Imagine a situation where a young person says or does something on TikTok which is deeply offensive. Fast forward 3-4 years and their parents are diplomats or significant members of the business community, and that offensive content is used by Chinese security services to embarrass or otherwise inconvenience the parents. Such influence operations might impede Canada’s ability to conduct its diplomacy abroad or undermine a business’s ability to prosper.

Third, the TikTok algorithm is not well understood. There is a risk that the Chinese government might compel ByteDance, and through them the TikTok platform, to modify algorithms to amplify some content and not others. It is hard to assess how ‘sensitive’ a population’s general sense of the world is but, broadly, if a surreptitious foreign influence operation occurred it might potentially affect how a population behaves or sees the world. To be clear this kind of shift in behaviour would not follow from a single video but from a concerted effort over time that shifted social perceptions amongst at least some distinct social communities. The sensitivity of the information used to identify videos to play, then, could be quite high across a substantial swathe of the population using the platform.

It’s important to recognise that in the aforementioned examples there is no evidence that ByteDance, which owns TikTok, has been compelled by the Chinese government to perform these activities. But these are the kinds of sensitivities that are linked to using TikTok and are popularly discussed.

What Should Be Done To Protect Assets?

The threats which are posed by TikTok are, at the moment, speculative: the platform could be used for any number of things. People’s concerns are linked less to the algorithm or the data that is collected and more to ByteDance being a Chinese company that might be influenced by the Chinese government to share data or undertake activities which are deleterious to Western countries’ interests.

Bluntly: the issue raised by TikTok is not necessarily linked to the platform itself but to the geopolitical struggles between China and other advanced economies throughout the world. We don’t have a TikTok problem per se but, instead, have a Chinese national security and foreign policy problem. TikTok is just a very narrow lens through which concerns and fears are being channelled.

So in the absence of obvious and deliberate harmful activities being undertaken by ByteDance and TikTok at the behest of the Chinese government what should be done? At the outset it’s worth recognising that many of the concerns expressed by politicians–and especially those linked to surreptitious influence operations–would already run afoul of Canadian law. The CSIS Act bars clandestine foreign intelligence operations which are regarded as threatening the security of Canada. Specifically, threats to the security of Canada means:

(a) espionage or sabotage that is against Canada or is detrimental to the interests of Canada or activities directed toward or in support of such espionage or sabotage,

(b) foreign influenced activities within or relating to Canada that are detrimental to the interests of Canada and are clandestine or deceptive or involve a threat to any person,

(c) activities within or relating to Canada directed toward or in support of the threat or use of acts of serious violence against persons or property for the purpose of achieving a political, religious or ideological objective within Canada or a foreign state, and

(d) activities directed toward undermining by covert unlawful acts, or directed toward or intended ultimately to lead to the destruction or overthrow by violence of, the constitutionally established system of government in Canada,

CSIS is authorised to undertake measures which would reduce the threats to the security of Canada, perhaps in partnership with the Communications Security Establishment, should such a threat be identified and a warrant obtained from the Federal Court.

On the whole a general ban on TikTok is almost certainly disproportionate and unreasonable at this point in time. There is no evidence of harm. There is no evidence of influence by the Chinese government. Rather than banning the platform generally I think that more focused legislation or policy could make sense.

First, I think that legislation or (preferably) policies precluding at least some members of government and senior civil servants from using TikTok has some merit. In these cases a risk analysis should be conducted to determine if collected information would undermine the Government of Canada’s ability to secure confidential information or if the collected information could be used for intelligence operations against the government officials. Advice might, also, be issued by the Canadian Security Intelligence Service so that private organisations are aware of their risks. In exceptional situations some kind of security requirements might also be imposed on private organisations and individuals, such as those who are involved in especially sensitive roles managing critical infrastructure systems. Ultimately, I suspect the number of people who should fall under this ban would, and should, be pretty small.

Second, what makes sense is legislation that requires social media companies writ large–not just TikTok–to make their algorithms and data flows legible to regulators. Moreover, individual users should be able to learn, and understand, why certain content is being prioritised or shown to them. Should platforms decline to comply with such a law then sanctions may be merited. Similarly, should algorithmic legibility showcase that platforms are being manipulated or developed in ways that deliberately undermine social cohesion then some sanctions might be merited, though with the caveat that “social cohesion” should be understood as referring to platforms being deliberately designed to incite rage or other strong emotions with the effect of continually, and artificially, weakening social cohesion and amplifying social cleavages. The term should not, however, be seen as a kind of code for creating exclusionary social environments where underprivileged groups continue to be treated in discriminatory ways.

So Is TikTok ‘Dangerous’ From A National Security Perspective?

Based on open source information2 there is no reason to think that TikTok is currently a national security threat. Are there any risks associated with the platform? Sure, but they need to be juxtaposed against equivalent or more serious threats and priorities. We only have so many resources to direct towards the growing legion of legitimate national security risks and issues; funnelling a limited set of resources towards TikTok may not be the best kind of prioritisation.

Consider that while the Chinese government could compel TikTok to disclose information about its users to intelligence and security services…the same government could also use business cutouts and purchase much of the same information from data brokers operating in the United States and other jurisdictions. There would be no need to secretly force a company to do something when, instead, it could just lawfully acquire equivalent (or more extensive!) information. This is a pressing and real national security (and privacy!) issue and is deserving of legislative scrutiny and attention.

Further, while there is a risk that TikTok could be used to manipulate social values…the same is true of other social networking services. Indeed, academic and journalistic research over the past 5-7 years has drawn attention to how popular social media services are designed to deliver dopamine hits and keep us on them. We know that various private companies and public organisations around the world work tirelessly to ‘hack’ those algorithms and manipulate social values. Of course this broader manipulation doesn’t mean that we shouldn’t care but, also, makes clear that TikTok isn’t the sole vector of these efforts. Moreover, there are real questions about how well social influence campaigns work: do they influence behaviour–are they supplying change?–or is the efficaciousness of any campaign representative of an attentive and interested pre-existing audience–is demand for the content the problem?

The nice thing about banning, blocking, or censoring material, or undertaking some other kind of binary decision, is that you feel like you’ve done something. Bans, blocks, and censorship are typically designed for a black and white world. We, however, live in a world that is actually shrouded in greys. We only have so much legislative time, so much policy capacity, so much enforcement ability: it should all be directed efficiently to understanding, appreciating, and addressing the fullness of the challenges facing states and society. This time and effort should not be spent on performative politics that is great for providing a dopamine hit but which fails to address the real underlying issues.


  1. I have previously talked about the broader risks of correlating national security and information security. ↩︎
  2. Open source information means information which you or I can find, and read, without requiring a security clearance. ↩︎

National Security Means What, Again?

There have been any number of concerns about Elon Musk’s behaviour, especially in recent weeks and months. This has led some commentators to warn that his purchase of Twitter may raise national security risks. Gill and Lehrich try to make this argument in their article, “Elon Musk Owning Twitter is A National Security Threat.” They give three reasons:

First, Musk is allegedly in communication with foreign actors – including senior officials in the Kremlin and Chinese Communist Party – who could use his acquisition of Twitter to undermine American national security.

Will Musk’s foreign investors have influence over Twitter’s content moderation policies? Will the Chinese exploit their significant leverage over Musk to demand he censor criticism of the CCP, or turn the dials up for posts that sow distrust in democracy?

Finally, it’s not just America’s information ecosystem that’s at stake, it’s also the private data of American citizens.

It’s worth noting that at no point do the authors provide a definition of ‘national security’, which forces the reader to guess what they likely mean. More broadly, in journalistic and opinion-writing circles there is a curious–and increasingly common–conjoining of national security and information security. The authors themselves make this link in the kicker paragraph of their article, when they write:

It is imperative that American leaders fully understand Musk’s motives, financing, and loyalties amidst his bid to acquire Twitter – especially given the high-stakes geopolitical reality we are living in now. The fate of American national security and our information ecosystem hang in the balance.1

Information security, generally, is focused on dangers which are associated with true or false information being disseminated across a population. It is distinguished from cyber security, which is typically focused on the digital security protocols and practices that are designed to reduce technical computer vulnerabilities. Whereas the former focuses on a public’s mind, the latter attends to how digital and physical systems are hardened against technical exploitation.

Western governments have historically resisted authoritarian governments’ attempts to link the concepts of information security and cyber security. The reason is that authoritarian governments want to establish international principles and norms whereby it becomes appropriate for governments to control the information which is made available to their publics under the guise of promoting ‘cyber security’. Democratic countries that emphasise the importance of intellectual freedom, freedom of religion, freedom of assembly, and other core rights have historically been opposed to promoting information security norms.

At the same time, misinformation and disinformation have become increasingly popular areas of study and commentary, especially following Donald Trump’s election as POTUS. And, in countries like the United States, Trump’s adoption of lies and misinformation was often cast as a national security issue: correct information should be communicated, and efforts to intentionally communicate false information should be blocked, prohibited, or prevented from massively circulating.

Obviously Trump’s language, actions, and behaviours were incredibly destabilising and abominable for an American president. And his presence on the world stage arguably emboldened many authoritarians around the world. But there is a real risk in using terms like ‘national security’ without definition, especially when the application of ‘national security’ starts to stray into the domain of what could be considered information security. Specifically, as everything becomes ‘national security’ it is possible for authoritarian governments to adopt the language of Western governments and intellectuals, and assert that they too are focused on ‘national security’ whereas, in fact, these authoritarian governments are using the term to justify their own censorious activities.

Now, does this mean that if we are more careful in the West about our use of language that authoritarian governments will become less censorious? No. But by being more careful and thoughtful in our language, public argumentation, and positioning of our policy statements, we may at least prevent those authoritarian governments from using our discourse as a justification for their own activities. We should, then, be careful and precise in what we say to avoid giving a fig leaf of cover to authoritarian activities.

And that will start by parties who use terms like ‘national security’ clearly defining what they mean, such that it is clear how national security is different from information security. Unless, of course, authors and thinkers are in fact leaning into the conceptual apparatus of repressive governments in an effort to save democratic governance. For any author who thinks such a move is wise, however, I must admit that I harbour strong doubts about the efficacy or utility of such attempts.


  1. Emphasis not in original. ↩︎

Housing in Ottawa Now a National Security Issue

David Pugliese is reporting in the Ottawa Citizen that the Canadian Forces Intelligence Command (CFINTCOM) is “trying to avoid posting junior staff to Ottawa because it has become too expensive to live in the region.” The risk is that financial hardship associated with living in Ottawa could make junior members susceptible to subversion. Housing costs in Ottawa have risen much faster than either wage increases or inflation. Moreover, the special allowance provided to staff that is meant to assuage the high costs of living in Canadian cities has been frozen for 13 years.

At this point energy, telecommunications, healthcare, and housing all raise their own national security concerns. To some extent, such concerns have tracked with these industry categories: governments have always worried about the security of telecommunications networks as well as the availability of sufficient energy supplies. But in other cases, such as housing affordability, the national security concerns we are seeing are the result of long-term governance failures. These failures have created new national security threats that would not exist in the face of good (or even just better) governance.1

There is a profound danger in trying to address all the new national security challenges and issues using national security tools or governance processes. National security incidents are often regarded as creating moments of exception and, in such moments, actions can be undertaken that otherwise could not. The danger is that states of exception become the norm and, in the process, the regular modes of governance and law are significantly set aside to resolve the crises of the day. What is needed is a regeneration and deployment of traditional governance capacity instead of a routine reliance on national security-type responses to these issues.

Of course, governments don’t just need to respond to these metastasized governance problems in order to alleviate national security issues and threats. They need to do so, in equitable and inclusive ways, so as to preserve or (re)generate the trust between the residents of Canada and their government.

The public may justifiably doubt that their system of government is working where successive governments under the major political parties are seen as having failed to provide for basic needs. The threat, then, is that ongoing governance failures run the risk of placing Canada’s democracy under pressure. While this might seem overstated I don’t think that’s the case: we are seeing a rise of politicians who are capitalizing on the frustrations and challenges faced by Canadians across the country, but who do not have their own solutions. Capitalizing on rage and frustration, and then failing to deliver on fixes, will only further alienate Canadians from their government.

Governments across Canada flexed their muscles during the earlier phases of the COVID-19 pandemic. Having used them, then, it’s imperative they keep flexing these muscles to address the serious issues that Canadians are experiencing. Doing so will assuage existent national security issues. It will also, simultaneously, serve to prevent other normal governance challenges from metastasizing into national security threats.


  1. As an aside, these housing challenges are not necessarily new. Naval staff posted to Esquimalt have long complained about the high costs of off-base housing in Victoria and the surrounding towns and cities. ↩︎

Mitigating AI-Based Harms in National Security


Government agencies throughout Canada are investigating how they might adopt and deploy ‘artificial intelligence’ programs to enhance how they provide services. In the case of national security and law enforcement agencies these programs might be used to analyze and exploit datasets, surface threats, identify risky travellers, or automatically respond to criminal or threat activities.

However, the predictive software systems that are being deployed–‘artificial intelligence’–are routinely shown to be biased. These biases are serious in the commercial sphere but there, at least, it is somewhat possible for researchers to detect and surface biases. In the secretive domain of national security, however, the likelihood of bias in agencies’ software being detected or surfaced by non-government parties is considerably lower.

I know that organizations such as the Canadian Security Intelligence Service (CSIS) have an interest in understanding how to use big data in ways that mitigate bias. The Canadian government does have a policy on the “Responsible use of artificial intelligence (AI)” and, at the municipal policing level, the Toronto Police Service has also published a policy on its use of artificial intelligence. Furthermore, the Office of the Privacy Commissioner of Canada has published a proposed regulatory framework for AI as part of potential reforms to federal privacy law.

Timnit Gebru, in conversation with Julia Angwin, suggests that there should be ‘datasheets for algorithms’ that would outline how predictive software systems have been tested for bias in different use cases prior to being deployed. Linking this to traditional circuit-based datasheets, she says (emphasis added):

As a circuit designer, you design certain components into your system, and these components are really idealized tools that you learn about in school that are always supposed to work perfectly. Of course, that’s not how they work in real life.

To account for this, there are standards that say, “You can use this component for railroads, because of x, y, and z,” and “You cannot use this component for life support systems, because it has all these qualities we’ve tested.” Before you design something into your system, you look at what’s called a datasheet for the component to inform your decision. In the world of AI, there is no information on what testing or auditing you did. You build the model and you just send it out into the world. This paper proposed that datasheets be published alongside datasets. The sheets are intended to help people make an informed decision about whether that dataset would work for a specific use case. There was also a follow-up paper called Model Cards for Model Reporting that I wrote with Meg Mitchell, my former co-lead at Google, which proposed that when you design a model, you need to specify the different tests you’ve conducted and the characteristics it has.

What I’ve realized is that when you’re in an institution, and you’re recommending that instead of hiring one person, you need five people to create the model card and the datasheet, and instead of putting out a product in a month, you should actually do it in three years, it’s not going to happen. I can write all the papers I want, but it’s just not going to happen. I’m constantly grappling with the incentive structure of this industry. We can write all the papers we want, but if we don’t change the incentives of the tech industry, nothing is going to change. That is why we need regulation.
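The datasheet/model-card idea Gebru describes in the quote above can be sketched as a simple structured record that gates deployment decisions. Everything here is hypothetical: the class, field names, and example card are illustrative, not drawn from the Model Cards paper or from any government policy:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """A minimal, hypothetical model card: what a model is for,
    what it must not be used for, and what testing it has passed."""
    name: str
    intended_uses: list[str]
    prohibited_uses: list[str]
    evaluations: dict[str, str] = field(default_factory=dict)  # test -> result

    def approved_for(self, use_case: str) -> bool:
        # A deployment gate: a use case must be explicitly listed
        # as intended and must not appear on the prohibited list.
        return (use_case in self.intended_uses
                and use_case not in self.prohibited_uses)

card = ModelCard(
    name="traveller-risk-screener",
    intended_uses=["document triage"],
    prohibited_uses=["automated detention decisions"],
    evaluations={"demographic parity audit": "passed"},
)

print(card.approved_for("document triage"))              # True
print(card.approved_for("automated detention decisions"))  # False
```

The design point is the railroad-versus-life-support distinction from the quote: the card records which use cases a component has actually been tested for, so that an agency cannot silently repurpose a model beyond its validated envelope.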

Government is one of those areas where regulation or law can work well to discipline its behaviours, and where the relatively large volume of resources combined with a law-abiding bureaucracy might mean that formally required assessments would actually be conducted. While such assessments matter, generally, they are of particular importance where state agencies might be involved in making decisions that significantly or permanently alter the life chances of residents of Canada, visitors who are passing through our borders, or foreign nationals who are interacting with our government agencies.

As it stands, today, many Canadian government efforts at the federal, provincial, or municipal level seem to be significantly focused on how predictive software might be used or the effects it may have. These are important things to attend to! But it is just as, if not more, important for agencies to undertake baseline assessments of how and when different predictive software engines are permissible or not, as based on robust testing and evaluation of their features and flaws.

Having spoken with people at different levels of government, the recurring complaint around assessing training data, and predictive software systems more generally, is that it’s hard to hire the right people for these assessment jobs, on the grounds that such specialists are relatively rare and often exceedingly expensive. Thus, mid-level and senior members of government have a tendency to focus on things that government is perceived as actually able to do: figure out and track how predictive systems would be used and to what effect.

However, the regular focus on the resource-related challenges of predictive software assessment raises the very real question of whether these constraints should just compel agencies to forgo technologies on the basis of failing to determine, and assess, their prospective harms. In the firearms space, as an example, government agencies are extremely rigorous in assessing how a weapon operates to ensure that it functions precisely as meant given that the weapon might be used in life-changing scenarios. Such assessments require significant sums of money from agency budgets.

If we can make significant budgetary allocations for firearms, on the grounds they can have life-altering consequences for all involved in their use, then why can’t we do the same for predictive software systems? If anything, such allocations would compel agencies to make a strong(er) business case for testing the predictive systems in question and spur further accountability: Does the system work? At a reasonable cost? With acceptable outcomes?

Imposing cost discipline on organizations is an important way of ensuring that technologies, and other business processes, aren’t randomly adopted on the basis of externalizing their full costs. By internalizing those costs, up front, organizations may need to be much more careful in what they choose to adopt, when, and for what purpose. The outcome of this introspection and assessment would, hopefully, be that the harmful effects of predictive software systems in the national security space were mitigated and the systems which were adopted actually fulfilled the purposes they were acquired to address.

Chinese Spies Accused of Using Huawei in Secret Australia Telecom Hack

Bloomberg has an article that discusses how Chinese spies were allegedly involved in deploying implants on Huawei equipment which was operated in Australia and the United States. The key parts of the story include:

At the core of the case, those officials said, was a software update from Huawei that was installed on the network of a major Australian telecommunications company. The update appeared legitimate, but it contained malicious code that worked much like a digital wiretap, reprogramming the infected equipment to record all the communications passing through it before sending the data to China, they said. After a few days, that code deleted itself, the result of a clever self-destruct mechanism embedded in the update, they said. Ultimately, Australia’s intelligence agencies determined that China’s spy services were behind the breach, having infiltrated the ranks of Huawei technicians who helped maintain the equipment and pushed the update to the telecom’s systems. 

Guided by Australia’s tip, American intelligence agencies that year confirmed a similar attack from China using Huawei equipment located in the U.S., six of the former officials said, declining to provide further detail.

The details from the story are all circa 2012. The fact that Huawei equipment was successfully being targeted by these operations, in combination with the large volume of serious vulnerabilities in Huawei equipment, contributed to the United States’ efforts to bar Huawei equipment from American networks and the networks of their closest allies.1

Analysis

We can derive a number of conclusions from the Bloomberg article, as well as see links between activities allegedly undertaken by the Chinese government and those of Western intelligence agencies.

To begin, it’s worth noting that the very premise of the article–that the Chinese government needed to infiltrate the ranks of Huawei technicians–suggests that circa 2012 Huawei was not controlled by, operated by, or necessarily unduly influenced by the Chinese government. Why? Because if the government needed to impersonate technicians to deploy implants, and do so without the knowledge of Huawei’s executive staff, then it’s very challenging to say that the company writ large (or its executive staff) was complicit in intelligence operations.

Second, the Bloomberg article makes clear that a human intelligence (HUMINT) operation had to be conducted in order to deploy the implants in telecommunications networks, with data then being sent back to servers that were presumably operated by Chinese intelligence and security agencies. These kinds of HUMINT operations can be high-risk: if operatives are caught, then the whole operation (and its surrounding infrastructure) can be detected and burned down. Building legends for assets is never easy, nor is developing assets if they are being run from a distance as opposed to spies themselves deploying implants.2

Third, the United States’ National Security Agency (NSA) has conducted similar if not identical operations when its staff interdicted equipment while it was being shipped, in order to implant the equipment before sending it along to its final destination. Similarly, the CIA worked for decades to deliberately provide cryptographically-sabotaged equipment to diplomatic facilities around the world. All of which is to say that multiple agencies have been involved in using spies or assets to deliberately compromise hardware, including Western agencies.

Fourth, the Canadian Communications Security Establishment Act (‘CSE Act’), which was passed into law in 2019, includes language which authorizes the CSE to do, “anything that is reasonably necessary to maintain the covert nature of the [foreign intelligence] activity” (26(2)(c)). The language in the CSE Act, at a minimum, raises the prospect that the CSE could undertake operations which parallel those of the NSA and, in theory, the Chinese government and its intelligence and security services.3

Of course, the fact that the NSA and other Western agencies have historically tampered with telecommunications hardware to facilitate intelligence collection doesn’t take away from the seriousness of the allegations that the Chinese government targeted Huawei equipment so as to carry out intelligence operations in Australia and the United States. Moreover, the reporting in Bloomberg covers a time around 2012 and it remains unclear whether the relationship(s) between the Chinese government and Huawei have changed since then; it is possible, though credible open source evidence is not forthcoming to date, that Huawei has since been captured by the Chinese state.

Takeaway

The Bloomberg article strongly suggests that Huawei, as of 2012, didn’t appear captured by the Chinese government given the government’s reliance on HUMINT operations. Moreover, and separate from the article itself, it’s important that readers keep in mind that the activities which were allegedly carried out by the Chinese government were (and remain) similar to those also carried out by Western governments and their own security and intelligence agencies. I don’t raise this latter point as a kind of ‘whataboutism‘ but, instead, to underscore that these kinds of operations are both serious and conducted by ‘friendly’ and adversarial intelligence services alike. As such, it behooves citizens to ask whether these are the kinds of activities we want our governments to be conducting on our behalf. Furthermore, we need to keep these kinds of facts in mind and, ideally, see them in news reporting to better contextualize the operations which are undertaken by domestic and foreign intelligence agencies alike.


  1. While it’s several years past 2012, the 2021 UK HCSEC report found that it continued “to uncover issues that indicate there has been no overall improvement over the course of 2020 to meet the product software engineering and cyber security quality expected by the NCSC.” (boldface in original) ↩︎
  2. It is worth noting that, post-2012, the Chinese government has passed national security legislation which may make it easier to compel Chinese nationals to operate as intelligence assets, inclusive of technicians who have privileged access to telecommunications equipment that is being maintained outside China. That having been said, and as helpfully pointed out by Graham Webster, this case demonstrates that the national security laws were not needed in order to use human agents or assets to deploy implants. ↩︎
  3. There is a baseline question of whether the CSE Act created new powers for the CSE in this regard or if, instead, it merely codified existing secret policies or legal interpretations which had previously authorized the CSE to undertake covert activities in carrying out its foreign signals intelligence operations. ↩︎

Detecting Academic National Security Threats

Photo by Pixabay on Pexels.com

The Canadian government is following in the footsteps of its American counterpart and has introduced national security assessments for recipients of government natural science (NSERC) funding. Such assessments will occur when proposed research projects are deemed sensitive and where private funding is also used to facilitate the research in question. Social science (SSHRC) and health (CIHR) funding will be subject to these assessments in the near future.

I’ve written, elsewhere, about why such assessments are likely fatally flawed. In short, they will inhibit student training, will cast suspicion upon researchers of non-Canadian nationalities (and especially upon researchers who hold citizenship with ‘competitor nations’ such as China, Russia, and Iran), and may encourage researchers to hide their sources of funding to be able to perform their required academic duties while also avoiding national security scrutiny.

To be clear, such scrutiny often carries explicit racist overtones, has led to many charges but few convictions in the United States, and presupposes that academic units or government agencies can detect a human-based espionage agent. Further, it presupposes that HUMINT-based espionage is a more serious, or equivalent, threat to research productivity than cyber-espionage. As of today, there is no evidence in the public record in Canada indicating that the threat facing Canadian academics justifies the invasiveness of the assessments, nor that human-based espionage is a greater risk than cyber-based means.

To the best of my knowledge, while HUMINT-based espionage does generate some concerns, they pale in comparison to the risk of espionage linked to cyber-operations.

However, these points are not the principal focus of this post. I recently re-read some older work by Bruce Schneier that I think nicely casts why asking scholars to engage in national security assessments of their own, and their colleagues’, research is bound to fail. Schneier wrote the following in 2007, when discussing the US government’s “see something, say something” campaign:

[t]he problem is that ordinary citizens don’t know what a real terrorist threat looks like. They can’t tell the difference between a bomb and a tape dispenser, electronic name badge, CD player, bat detector, or trash sculpture; or the difference between terrorist plotters and imams, musicians, or architects. All they know is that something makes them uneasy, usually based on fear, media hype, or just something being different.

Replace “terrorist” with “national security” threat and we reach approximately the same conclusion. Individuals—even those trained to detect and investigate human intelligence driven espionage—can find it incredibly difficult to detect human agent-enabled espionage. Expecting academics to detect such threats, when they are motivated to develop international and collegial relationships, may be unable to assess the national security implications of their own research, and are being told to abandon funding that the government will not replace, guarantees that this measure will fail.

What will that failure mean, specifically? It will involve incorrect assessments and suspicion being aimed at scholars from ‘competitor’ and adversary nations. Scholars will question whether they should work with a Chinese, Russian, or Iranian scholar even when they are employed in a Western university, let alone when they are in a non-Western institution. I doubt these same scholars will similarly question whether they should work with Finnish, French, or British scholars. Nationality and ethnicity lenses will be used to assess who are the ‘right’ people with whom to collaborate.

Failure will not just affect professors. It will also extend to affect undergraduate and graduate students, as well as post-doctoral fellows and university staff. Already, students are questioning what they must do in order to prove that they are not considered national security threats. Lab staff and other employees who have access to university research environments will similarly be placed under an aura of suspicion. We should not, we must not, create an academy where these are the kinds of questions with which our students and colleagues and staff must grapple.

Espionage is, it must be recognized, a serious issue that faces universities and Canadian businesses more broadly. The solution cannot be to ignore it and hope that the activity goes away. However, the response to such threats must demonstrate necessity and proportionality and demonstrably involve evidence-based and inclusive policy making. The current program that is being rolled out by the Government of Canada does not meet this set of conditions and, as such, needs to be repealed.

Link

Project GUNMAN and the Telling of Intelligence Histories

This story of how the National Security Agency (NSA) was involved in analyzing typewriter bugs that were implanted by agents of the USSR in the 1980s is pretty amazing (.pdf) in terms of the technical and operational details which have been written about. It’s also revealing in terms of how the parties who are permitted to write about these materials breathlessly describe the agencies’ past exploits. In critically reading these kinds of accounts it’s possible to learn how the agencies regard themselves and their activities. In effect, how history is ‘created’—or propaganda written, depending on how you read the article in question—functions to reveal the nature of the actors involved in that creation and the way that myths and truths are created and replicated.

As a slight aside, whenever I come across material like this I’m reminded of just how poor the Canadian government is at disclosing its own intelligence agencies’ histories. As senior members of the Canadian intelligence community retire or pass away, and as recorded materials waste away or are disposed of, key information that is needed to understand how and why Canada has acted in the world is being lost. This has the effect of impoverishing Canadians’ own understandings of how their governments have operated, with the result that Canadian histories often risk missing essential information that could reveal hidden depths to what Canadians know about their country and its past.

Link

When the Government Decides to Waylay Parliament

Steven Chaplin has a really great explanation of whether the Canadian government can rely on national security and evidentiary laws to lawfully justify refusing to provide documents to the House of Commons, and to House committees. His analysis and explanation arose as a result of the Canadian government doing everything it could to, first, refuse to provide documents to the Parliamentary Committee which was studying Canadian-Chinese relations and, subsequently, refusing to provide the documents when compelled to do so by the House of Commons itself.

Rather than releasing the requested documents, the government turned to the courts to adjudicate whether the documents in question–which were asserted to contain sensitive national security information–must, in fact, be released to the House or whether they could instead be sent to an executive committee, filled with Members of Parliament and Senators, to assess their contents. As Chaplin notes,

Having the courts intervene, as proposed by the government’s application in the Federal Court, is not an option. The application is clearly precluded by Article 9 of the Bill of Rights, 1689, which provides that a proceeding in Parliament ought not to be impeached or questioned in court. Article 9 not only allows for free speech; it is also a constitutional limit on the jurisdiction of the courts to preclude judicial interference in the business of the House.

The House ordered that the documents be tabled without redaction. Any decision of the court that found to the contrary would impeach or question the proceeding that led to the Order. And any attempt by the courts to balance the interests involved would constitute the courts becoming involved in ascertaining, and thereby questioning, the needs of the House and why the House wants the documents.

Beyond the Court’s involvement intruding into the territory of Parliament, there could be serious and long-term implications of letting the court become a space wherein the government and the House fight to obtain information that has been demanded. Specifically,

It may be that at the end of the day the government will continue to refuse to produce documents. In the same way that the government cannot use the courts to withhold documents, the House cannot go to court to compel the government to produce them, or to order witnesses to attend proceedings. It could also invite disobedience of witnesses, requiring the House to either drop inquiries or involve the courts to compel attendance or evidence. Allowing, or requiring, the government and the House to resolve their differences in the courts would not only be contrary to the constitutional principles of Article 9, but “would inevitably create delays, disruption, uncertainties and costs which would hold up the nation’s business and on that account would be unacceptable even if, in the end, the Speaker’s rulings were vindicated as entirely proper” (Canada (House of Commons) v. Vaid [2005]). In short, the courts have no business intervening one way or the other.

Throughout the discussions that have taken place about this issue in Canada, what has been most striking is that national security commentators and elites have envisioned that the National Security and Intelligence Committee of Parliamentarians (NSICOP) could (and should) be tasked to resolve any and all particularly sensitive national security issues that might be of interest to Parliament. None, however, seems to have contemplated that Parliament itself might take issue with the government trying to exclude it from assessing the government’s national security decisions, or that Parliamentarians would object when topics of interest to them were punted to an executive body whose members were sworn to the strictest secrecy. Instead, elites have hand-waved at the importance of preserving secrecy in order for Canada to receive intelligence from allies, and have asserted that the government would never mislead Parliament on national security matters (intelligence which, these same experts explain, Members of Parliament are not prepared to receive, process, or understand given its sophistication and the apparent simplicity of most Parliamentarians).

This was the topic of a recent episode of the Intrepid Podcast, where Philippe Lagassé noted that the exclusion of parliamentary experts when creating NSICOP meant that these entirely predictable showdowns were functionally baked into how the executive body was composed. As someone who raised the issue of adopting an executive committee, versus a standing House committee, and was rebuffed as being ignorant of the realities of national security, it is with more than a little satisfaction that I watch the very concerns which were raised when NSICOP was being created now arise on the political agenda.

With regard to the documents that the House Committee was seeking, I don’t know or particularly care what they contain. From my own experience I’m all too well aware that ‘national security’ is often stamped on things because governments want to keep them from the public when they could be politically damaging, because of a general culture of non-transparency and refusal of accountability, or (less often) because there are bona fide national security interests at stake. I do, however, care that the Government of Canada has (again) acted counter to Parliament’s wishes and has deliberately worked to impede the House from doing its work.

Successive governments seem to genuinely believe that they get to ‘rule’ Canada absolutely and with little accountability. While this is, in function, largely true given how cowed Members of Parliament are by their party leaders, it is incredibly serious and depressing to see the government further erode Parliament’s powers and abilities to fulfil its duties. A healthy democracy is filled with bumps for the government as it is held to account but, sadly, the Government of Canada–regardless of the party in power–is incredibly active in keeping itself, and its behaviours, from the public eye and thus from being held to account.

If only a committee might be struck to solve this problem…

Quote

Strategy is critical because it establishes a common goal that guides agencies in policymaking and provides the framework for collaboration and cohesion of vision. Strategy is difficult to devise, devilish to agree upon, and often painfully reductive when one considers competing demands. But without it, security boils down to ad hoc government responses based on urgent yet contradicting concepts.

Tatyana Bolton, Mary Brooks, and Kathryn Waldron, “Three Key Questions to Define ICT Supply Chain Security

Link

Russia, China, the USA and the Geopolitical and National Security Implications of Climate Change

Lustgarten, writing for the New York Times, has probably the best piece on the national security and geopolitical implications of climate change that I’ve recently come across. The assessment for the USA is not good:

… in the long term, agriculture presents perhaps the most significant illustration of how a warming world might erode America’s position. Right now the U.S. agricultural industry serves as a significant, if low-key, instrument of leverage in America’s own foreign affairs. The U.S. provides roughly a third of soy traded globally, nearly 40 percent of corn and 13 percent of wheat. By recent count, American staple crops are shipped to 174 countries, and democratic influence and power comes with them, all by design. And yet climate data analyzed for this project suggest that the U.S. farming industry is in danger. Crop yields from Texas north to Nebraska could fall by up to 90 percent by as soon as 2040 as the ideal growing region slips toward the Dakotas and the Canadian border. And unlike in Russia or Canada, that border hinders the U.S.’s ability to shift north along with the optimal conditions.

Now, the advantages enjoyed by Canada might be eroded by a militant America, and those of Russia similarly threatened by a belligerent and desperate China (and a desperate Southeast Asia more generally). Regardless, food and arable land are likely to determine which countries hold out longest before suffering the worst effects of climate change. Though, in the end, it’s almost a foregone conclusion that we are all ultimately going to suffer horribly for the errors of our ways.