Categories
Links

Unpacking the Global Pivot from AI Safety

The global pivot away from AI safety is now shaping much of international AI policy. This shift is often attributed to the current U.S. administration, and it is reshaping how liberal democracies approach AI governance.

In a recent article on Lawfare, author Jakub Kraus argues there are deeper reasons behind this shift. Specifically, countries such as France had already begun reorienting toward innovation-friendly frameworks before the activities of the current American administration. The rapid emergence of ChatGPT also sparked a fear of missing out and a surge in AI optimism, while governments also confronted the perceived economic and military opportunities associated with AI technologies.

Kraus concludes his article by arguing that there may be some benefit to emphasizing opportunity over safety, while also recognizing the risks of failing to build effective international or domestic governance institutions.

However, if AI systems are not designed to be safe, transparent, accountable, privacy-protective, or human rights-affirming, then people may come to distrust these systems based on the actual and potential harms of developing and deploying them without sufficient regulatory safeguards. The result could be the fostering of a range of socially destructive harms and a long-term hesitancy to take advantage of the potential benefits associated with emerging AI technologies.


Implications for Canada of an Anti-Liberal Democratic USA

Any number of commentators have raised concerns over whether the USA could become an illiberal state, and over the knock-on effects if it did. A recent piece by Dr. Benjamin Goldsmith briefly discussed a few forms of such a reformed state apparatus, but more interesting (to me) is his postulation of the potentially broader global effects:

  • The dominant ideology of great powers will be nationalism.  
  • International politics will resemble the realist vision of great powers balancing power, carving out spheres of influence.  
  • It will make sense for the illiberal great powers to cooperate in some way to thwart liberalism – a sort of new ‘Holy Alliance’ type system could emerge.  
  • The existing institutional infrastructure of international relations will move towards a state-centric bias, away from a human-rights, liberal bias.   
  • International economic interdependence, although curtailed since the days of high “globalisation,” will continue to play an important role in tempering great-power behaviour.  
  • Democracy will be under greater pressure globally, with no great power backing and perhaps active US encouragement of far-right illiberal parties in established and new democracies.  
  • Mass Politics and soft power will still matter, but the post-truth aspect of public opinion in foreign policy will be greater.  

For a middle state like Canada, this kind of transformation would fundamentally challenge how it has been able to operate for the past 80 years. The challenge would follow both from the effects of this international reordering and from our proximity to a superpower that has broadly adopted or accepted an anti-liberal democratic political culture.

Concerning the first, what does this international reordering mean for Canada when nationalism reigns supreme after decades of developing economic and cultural integrations with the USA? What might it mean to be under the ‘sphere of influence’ of an autocratic or illiberal country? How would Canada appease Americans who pushed our leaders to support other authoritarian governments, or else? Absent the same commitments (and resources) to advocate for democratic values and human rights (while recognizing America’s own missteps in those areas), what does it mean for Canada’s own potential foreign policy commitments? And in an era of rising adoption of generative AI technologies that can be used to produce and spread illiberal or anti-democratic rhetoric, and without the USA to regulate such uses of these technologies, what does this mean for detecting truth and falsity in international discourse?

In aggregate, these are the sorts of questions that Canadians should be considering, and they are part of why our leaders are warning of the implications of the changing American political culture.

When it comes to our proximity to a growing anti-liberal democratic political culture, we are already seeing some of those principles and rhetoric taking hold in Canada. As more of this language (and ideology) seeps into Canadian discourse, there is a growing chance that Canada’s own democratic norms might be perverted through extended exposure and through American pressures to compel alterations in our democratic institutions.

The shifts in the USA were not entirely unexpected. And the implications have been previously theorized. An anti-liberal democratic political culture will not necessarily take hold amongst Americans and their political institutions. But the implications and potential global effects of such a change are before us, today, and it’s important to carefully consider the potential consequences. Middle states, such as Canada, that possess liberal democratic cultures must urgently prepare ways to chart a path through what may be a very chaotic and disturbing next few decades.


An Initial Assessment of CLOUD Agreements

The United States has bilateral CLOUD Act agreements with the United Kingdom and Australia, and Canada continues to negotiate its own agreement with the United States.1 CLOUD agreements are meant to alleviate some of the challenges attributed to the mutual legal assistance treaty (MLAT) process, namely that MLATs can be ponderous, with the result that investigators have difficulty obtaining information from communication providers in a timely manner.

Investigators must conform with their domestic legal requirements and, with CLOUD agreements in place, can serve orders directly on their bilateral partner’s communications and electronic service providers. Orders cannot target residents of the partner country (i.e., the UK government could not target a US resident or person, and vice versa). Demands also cannot interfere with fundamental rights, such as freedom of speech.2

A recent piece from Lawfare unpacks the November 2024 report that was produced to explain how the UK and US governments have actually used the powers under their bilateral agreement. It shows that, so far, the UK government has used the agreement substantially to facilitate wiretap requests, with the UK issuing,

… 20,142 requests to U.S. service providers under the agreement. Over 99.8 percent of those (20,105) were issued under the Investigatory Powers Act, and were for the most part wiretap orders, and fewer than 0.2 percent were overseas production orders for stored communications data (37).

By way of contrast, the “United States made 63 requests to U.K. providers between Oct. 3, 2022, and Oct. 15, 2024. All but one request was for stored information.” Challenges in getting UK providers to respond to US CLOUD Act requests, and American complaints about this, may cause the UK government to “amend the data protection law to remove any doubt about the legality of honoring CLOUD Act requests.”
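The lopsidedness of the two governments’ usage is easy to verify from the figures quoted above. A quick check of the reported percentages and the UK-to-US request ratio:

```python
# Figures quoted from the November 2024 report on the UK-US agreement.
uk_total_requests = 20_142            # UK requests to U.S. service providers
uk_ipa_requests = 20_105              # issued under the Investigatory Powers Act
uk_overseas_production_orders = 37    # overseas production orders for stored data
us_requests_to_uk = 63                # U.S. requests to U.K. providers

# Shares of UK requests, matching the report's "over 99.8 percent"
# and "fewer than 0.2 percent" figures.
ipa_share = uk_ipa_requests / uk_total_requests * 100
opo_share = uk_overseas_production_orders / uk_total_requests * 100
print(f"IPA share: {ipa_share:.2f}%")          # ~99.82%
print(f"Production orders: {opo_share:.2f}%")  # ~0.18%

# The UK issued roughly 320 times more requests than the United States did.
ratio = uk_total_requests / us_requests_to_uk
print(f"UK-to-US request ratio: {ratio:.0f}:1")
```

The asymmetry is striking: wiretap-style orders under the Investigatory Powers Act account for nearly all UK usage, while American usage is both rare and almost entirely limited to stored information.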

It will be interesting to further assess how CLOUD agreements operate, in practice, once there is comparable public analysis of how the USA-Australia agreement has been put into effect.


  1. In Canada, the Canadian Bar Association noted in November 2024 that new enabling legislation may be required, including reforms of privacy legislation to authorize providers’ disclosure of information to American investigators. ↩︎
  2. Debates continue about whether protections built into these agreements are sufficient. ↩︎

Details from the DNI’s Annual VEP Report

For a long time, external observers wondered how many vulnerabilities were retained versus disclosed by FVEY SIGINT agencies. Following years of policy advocacy, there is now some small visibility into this by way of Section 6270 of Public Law 116-92. This law requires the U.S. Director of National Intelligence (DNI) to disclose certain annual data about the vulnerabilities disclosed and retained by US government agencies.

The Fiscal Year 2023 VEP Annual Report Unclassified Appendix reveals “the aggregate number of vulnerabilities disclosed to vendors or the public pursuant to the [VEP] was 39. Of those disclosed, 29 of them were initial submissions, and 10 of them were reconsiderations that originated in prior years.”1

There can be many reasons to reassess vulnerability equities. Some include:

  1. The utility of given vulnerabilities decreases, either due to changes in the environment or to research showing a vulnerability would not (or would no longer) have desired effect(s) or possess desired operational characteristics.
  2. Adversaries have identified the vulnerabilities themselves, or through 4th party collection, and disclosure is a defensive action to protect US or allied assets.
  3. Independent researchers / organizations are pursuing lines of research that would likely result in finding the vulnerabilities.
  4. By disclosing the vulnerabilities the U.S. agencies hope or expect adversaries to develop similar attacks on still-vulnerable systems, with the effect of masking future U.S. actions on similarly vulnerable systems.
  5. Organizations responsible for the affected software (e.g., open source projects) are now perceived as competent / resourced to remediate vulnerabilities.
  6. The effects of vulnerabilities are identified as having greater possible impact than initially perceived, which rebalances disclosure equities.
  7. Presidential orders to secure certain systems result in a rebalancing of equities regarding holding the vulnerabilities in question.
  8. Newly discovered vulnerabilities are seen as more effective in mission tasks, thus deprecating the need for the vulnerabilities which were previously retained.
  9. Disclosure of vulnerabilities may enable adversaries to better target one another and thus enable new (deniable) 4th party collection opportunities.
  10. Vulnerabilities were in fact long used by adversaries (and not the U.S. / FVEY) and this disclosure burns some of their infrastructure or operational capacity.
  11. Vulnerabilities are associated with long-terminated programs and their release has no effect on current, recent, or deprecated activities.

This is just a very small subset of possible reasons to disclose previously-withheld vulnerabilities. While we don’t have a strong sense of how many vulnerabilities are retained each year, we do at least have a sense that rebalancing of equities year-over-year(s) is occurring. Though without a sense of scale the disclosed information is of middling value, at best.


American Telecommunication Companies’ Cybersecurity Deficiencies Increasingly Apparent

Five Eyes countries have regularly and routinely sought, and gained, access to foreign telecommunications infrastructures to carry out their operations. The same is true of other well-resourced countries, including China.

Salt Typhoon’s penetration of American telecommunications and email platforms is slowly coming into relief. The New York Times has an article that summarizes what is being publicly disclosed at this point in time:

  • The full list of phone numbers that the Department of Justice had under surveillance in lawful interception systems has been exposed, with the effect of likely undermining American counter-intelligence operations aimed at Chinese operatives
  • Phone calls, unencrypted SMS messages, and email providers have been compromised
  • The FBI has heightened concerns that informants may have been exposed
  • Apple’s services, as well as end-to-end encrypted systems, were not penetrated

American telecommunications networks were penetrated, in part, because companies rely on decades-old systems and equipment that do not meet modern security requirements. Fixing these deficiencies may require ripping and replacing some old parts of the network, with the effect of creating “painful network outages for consumers.” Some of the targeting of American telecommunications networks is driven by an understanding that American national security defenders face restrictions on how they can operate on American-based systems.

The weaknesses of telecommunications networks and their associated systems are generally well known. And mobile systems are particularly vulnerable to exploitation as a result of archaic standards and an unwillingness by some carriers to activate the security-centric aspects of 4G and 5G standards.

Some of the Five Eyes, led by Canada, have been developing and deploying defensive sensor networks that are meant to shore up some defences of government and select non-government organizations.1 But these edge-, network-, and cloud-based sensors can only do so much: telecommunications providers, themselves, need to prioritize ensuring their core networks are protected against the classes of adversaries trying to penetrate them.2

At the same time, it is worth recognizing that end-to-end encrypted communications continued to be protected even in the face of Salt Typhoon’s actions. This speaks to the urgent need to ensure that these forms of communications security remain available to all users. We often read that law enforcement needs select access to such communications and that it can be trusted not to abuse such exceptional access.

Setting aside the vast range of legal, normative, or geopolitical implications of weakening end-to-end encryption, cyber operations like the one perpetrated by Salt Typhoon speak to governments’ collective inability to protect their lawful access systems. There’s no reason to believe they’d be any more able to protect exceptional access measures that weakened, or otherwise gained access to, select content of end-to-end encrypted communications.


  1. I have discussed these sensors elsewhere, including in “Unpacking NSICOP’s Special Report on the Government of Canada’s Framework and Activities to Defend its Systems and Networks from Cyber Attack”. Historical information about these sensors, which were previously referred to under the covernames of CASCADE, EONBLUE, and PHOTONICPRISM, is available at the SIGINT summaries. ↩︎
  2. We are seeing some governments introducing, and sometimes passing, laws that would foster more robust security requirements. In Canada, Bill C-26 is generally meant to do this though the legislation as introduced raised some serious concerns. ↩︎

New Russian APT Daisy-Chain Capability Revealed

In an impressive operation, a Russian APT reportedly targeted a Washington, DC network in 2022 after daisy-chaining through a sequence of neighbouring networks and devices. The trick: they may have done so without ever using any local operatives.

This is a movie-like kind of operation and speaks to the immense challenges in defending against very well resourced, motivated, and entrepreneurial adversaries.

Wired has a good and accessible article on the cyber activity. The full report is available at Volexity’s website; it’s well worth the read, if only to appreciate the tradecraft of the adversaries as well as Volexity’s own acumen.


The Ongoing Problems of Placing Backdoors in Telecommunications Networks

In a cyber incident reminiscent of Operation Aurora,1 threat actors successfully penetrated American telecommunications companies (and a small number of other countries’ service providers) to gain access to lawful interception systems or associated data. The result was that:

For months or longer, the hackers might have held access to network infrastructure used to cooperate with lawful U.S. requests for communications data, according to people familiar with the matter, which amounts to a major national security risk. The attackers also had access to other tranches of more generic internet traffic, they said.

The surveillance systems believed to be at issue are used to cooperate with requests for domestic information related to criminal and national security investigations. Under federal law, telecommunications and broadband companies must allow authorities to intercept electronic information pursuant to a court order. It couldn’t be determined if systems that support foreign intelligence surveillance were also vulnerable in the breach.

Not only is this a major intelligence coup for the adversary in question, but it once more reveals the fundamental difficulties in deliberately establishing lawful access/interception systems in communications infrastructures to support law enforcement and national security investigations while, simultaneously, preventing adversaries from taking advantage of the same deliberately-designed communications vulnerabilities.


Russian State Media Disinformation Campaign Exposed

Today, a series of Western allies — including Canada, the United States, and the Netherlands — disclosed the existence of a sophisticated Russian social media influence operation that was being operated by RT. The details of the campaign are exquisite, and include some of the code used to drive the operation.

Of note, the campaign used a covert artificial intelligence (AI) enhanced software package to create fictitious online personas, representing a number of nationalities, to post content on X (formerly Twitter). Using this tool, RT affiliates disseminated disinformation to and about a number of countries, including the United States, Poland, Germany, the Netherlands, Spain, Ukraine, and Israel.

Although the tool was only identified on X, the authoring organizations’ analysis of the software used for the campaign indicated the developers intended to expand its functionality to other social media platforms. The authoring organizations’ analysis also indicated the tool is capable of the following:

  1. Creating authentic appearing social media personas en masse;
  2. Deploying content similar to typical social media users;
  3. Mirroring disinformation of other bot personas;
  4. Perpetuating the use of pre-existing false narratives to amplify malign foreign influence; and
  5. Formulating messages, to include the topic and framing, based on the specific archetype of the bot.

Mitigations to address this influence campaign include:

  1. Consider implementing processes to validate that accounts are created and operated by a human person who abides by the platform’s respective terms of use. Such processes could be similar to well-established Know Your Customer guidelines.
  2. Consider reviewing and making upgrades to authentication and verification processes based on the information provided in this advisory.
  3. Consider protocols for identifying and subsequently reviewing users with known-suspicious user agent strings.
  4. Consider making user accounts Secure by Default by using default settings such as MFA, default settings that support privacy, removing personally identifiable information shared without consent, and clear documentation of acceptable behavior.
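The third mitigation above can be sketched as a simple screening pass over account records. The watchlist substrings, field names, and sample accounts below are hypothetical illustrations, not drawn from the advisory:

```python
# Hypothetical sketch: flag accounts whose user agent strings match a
# watchlist of known-suspicious substrings. The watchlist entries and
# account records here are illustrative only.
SUSPICIOUS_UA_SUBSTRINGS = [
    "headlesschrome",   # headless browsers are common in automation
    "phantomjs",        # deprecated headless browser, rarely a real user
    "python-requests",  # scripted HTTP clients posing as users
]

def flag_suspicious_accounts(accounts: list[dict]) -> list[str]:
    """Return the IDs of accounts whose user agent matches the watchlist."""
    flagged = []
    for account in accounts:
        ua = account.get("user_agent", "").lower()
        if any(s in ua for s in SUSPICIOUS_UA_SUBSTRINGS):
            flagged.append(account["id"])
    return flagged

accounts = [
    {"id": "a1", "user_agent": "Mozilla/5.0 (Windows NT 10.0) Chrome/120.0"},
    {"id": "a2", "user_agent": "Mozilla/5.0 HeadlessChrome/120.0"},
    {"id": "a3", "user_agent": "python-requests/2.31.0"},
]
print(flag_suspicious_accounts(accounts))  # → ['a2', 'a3']
```

A screen like this would only surface candidates for human review, of course; user agent strings are trivially spoofable, which is why the advisory pairs this mitigation with account validation and stronger authentication.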

This is a continuation of how AI tools are being (and will be) used to expand the ability of actors to undertake next-generation digital influence campaigns. And while adversaries are found using these techniques, today, we should anticipate that private companies (and others) will offer similar capabilities in the near future in democratic and non-democratic countries alike.


Publicly Normalizing Significant Espionage Operations is a Good Thing

The US government recently took a bad beat when it came to light that alleged Chinese threat actors undertook a pretty sophisticated espionage operation that got them access to sensitive email communications of members of the US government. As the details come out, it seems as though the Secretary of State and his inner circle weren’t breached, but other senior officials managing the USA-China relationship were.

Still, the actual language the US government is using to describe the espionage operation is really good to read. As an example, the cybersecurity director of the NSA, Rob Joyce, has stated that:

“It is China doing espionage […] That is what nation-states do. We need to defend against it, we need to push back on it, but that is something that happens.”

Why is this good? Because the USA was successfully targeted by an advanced espionage operation that likely has serious effects, yet this is normal, and Joyce is saying so publicly. Adopting the right language in this space is all too rare, as espionage and other such activities are often cast as serious ‘attacks’ or described using other inappropriate or bombastic language.

The US government’s language helps to clarify what are, and are not, norms-violating actions. Major and successful espionage operations don’t violate acceptable international norms. Moreover, not only does this make clear what is a fair operation to take against the USA; it also makes clear what the USA/FVEY think are appropriate actions to take towards other international actors. The language must be read as also justifying the allies’ own actions, and it effectively preempts any arguments from China or other nations that successful USA or FVEY espionage operations are anything other than another day on the international stage.

Clearly this is not new language. Former DNI Clapper, when describing the Office of Personnel Management hack in 2015, said,

You have to kind of salute the Chinese for what they did. If we had the opportunity to do that, I don’t think we’d hesitate for a minute.

But it bears regular repeating to establish what remains ‘appropriate’ in terms of signalling ongoing international norms. This signalling is directed not just at adversary nations or friendly allies, however, but also at laypersons, national security practitioners, and other operators who might someday work on the national or international stage. Signalling has a broader educational value for them (and for new reporters who end up picking up the national security beat someday in the future).

At an operational level, it’s also worth noting that this is intelligence gathering that can potentially lower temperatures. Knowing what the other side is thinking or how they’re interpreting things is super handy if you want to defrost some of your diplomatic relations. Though it can obviously hurt by losing advantages in your diplomatic positions, too, of course! And especially if it lets the other side outflank you.

Still, I have faith in the Equation Group’s ongoing collection against even hard targets in China and elsewhere to help balance the information asymmetry equation. While the US suffered a now-publicly reported loss of information security, the NSA is actively working to achieve similar (if less public) successes on a daily basis, and I’m sure it’s racking up wins of its own!


Why Is(n’t) TikTok A National Security Risk?


There have been grumblings about TikTok being a national security risk for many years, and they’re getting louder with each passing month. Indeed, in the United States a bill has been presented to ban TikTok (“The ANTI-SOCIAL CCP ACT”) and a separate bill (“No TikTok on Government Devices Act”) has passed the Senate and would bar the application from being used on government devices. In Canada, the Prime Minister noted that the country’s signals intelligence agency, the Communications Security Establishment, is “watching very carefully.”

I recently provided commentary where I outlined some of the potential risks associated with TikTok and where it likely should fit into Canada’s national security priorities (spoiler: probably pretty low). Here I just want to expand on my comments a bit to provide some deeper context and reflections.

As with all things security-related you need to think through what assets you are attempting to protect, the sensitivity of what you’re trying to protect, and what measures are more or less likely to protect those assets. Further, in developing a protection strategy you need to think through how many resources you’re willing to invest to achieve the sought-after protection. This applies as much to national security policy makers as it does to individuals trying to secure devices or networks.

What Is Being Protected

Most public figures who talk about TikTok and national security are presently focused on one or two assets.

First, they worry that a large volume of data may be collected and used by Chinese government agencies, after these agencies receive it either voluntarily from TikTok or after compelling its disclosure. Commentators argue that Chinese companies are bound to obey the national security laws of China and, as such, may be forced to disclose data without any notice to users or non-Chinese government agencies. This data could be used to obtain information about specific individuals or communities, including what people are searching for on the platform (e.g., medical information, financial information, sexual preference information), what they themselves are posting that could be embarrassing, or metadata that could be used for subsequent targeting.

Second, commentators are adopting a somewhat odious language of ‘cognitive warfare’ in talking about TikTok.1 The argument is that the Chinese government might compel the company to modify its algorithms so as to influence what people are seeing on the platform. The intent of this modification would be to influence political preferences or social and cultural perceptions. Some worry this kind of influence could guide whom individuals are more likely to vote for (e.g., you see a number of videos that directly or indirectly encourage you to support particular political parties), cause generalised apathy (e.g., you see videos that suggest that all parties are bad and none worth voting for), or enhance societal tensions (e.g., work to inflame partisanship and impair the functioning of otherwise moderate democracies). Or, as likely, a combination of each of these kinds of influence operations. Moreover, the TikTok algorithm could be modified by government compulsion to prioritise videos that praise some countries or that suppress videos which negatively portray other countries.

What Is the Sensitivity of the Assets?

When we consider the sensitivity of the information and data collected by TikTok, it can be potentially high but, in practice, differs based on the person(s) in question. Research conducted by the University of Toronto’s Citizen Lab found that while TikTok does collect a significant volume of information, that volume largely parallels what Facebook and other Western companies collect. To put this slightly differently, a lot of information is collected, and its sensitivity is associated with whom it belongs to, who may have access to it, and what those parties do with it.

When we consider who is using TikTok and having their information uploaded to the company’s servers, then, the question becomes whether there is a particular national security risk linked with this activity. While some individuals may potentially be targets based on their political, business, or civil society bona fides, this will not be the case for all (or most) users. However, in assessing the national security risks linked to individuals (or associated groups), it’s helpful to do a little more thinking.

First, the amount of information that is collected by TikTok, when merged with other data that could theoretically be collected using other signals intelligence methods (e.g., extracting metadata and select content from middle-boxes, Internet platforms, open-source locations, etc.), could be very revealing. Five Eyes countries (i.e., Australia, Canada, New Zealand, the United Kingdom, and the United States of America) collect large volumes of metadata on vast swathes of the world’s populations in order to develop patterns of life which, when added together, can be deeply revelatory. When and how those countries’ intelligence agencies actually use the collected information varies and is kept very secret. Generally, however, only a small subset of individuals whose information is collected and retained for any period of time have actions taken towards them. Nonetheless, we know that there is a genuine concern about information from private companies being obtained by intelligence services in the Five Eyes, and it’s reasonable to be concerned that similar activities might be undertaken by Chinese intelligence services.

Second, the kinds of content information retained by TikTok could be embarrassing at a future time, or used by state agencies in ways that users would not expect or prefer. Imagine a situation where a young person says or does something on TikTok that is deeply offensive. Fast forward 3-4 years: their parents are now diplomats or significant members of the business community, and that offensive content is used by Chinese security services to embarrass or otherwise inconvenience the parents. Such influence operations might impede Canada’s ability to conduct its diplomacy abroad or undermine a business’s ability to prosper.

Third, the TikTok algorithm is not well understood. There is a risk that the Chinese government might compel ByteDance, and through them the TikTok platform, to modify algorithms to amplify some content and not others. It is hard to assess how ‘sensitive’ a population’s general sense of the world is but, broadly, if a surreptitious foreign influence operation occurred it might potentially affect how a population behaves or sees the world. To be clear this kind of shift in behaviour would not follow from a single video but from a concerted effort over time that shifted social perceptions amongst at least some distinct social communities. The sensitivity of the information used to identify videos to play, then, could be quite high across a substantial swathe of the population using the platform.

It’s important to recognise that in the aforementioned examples there is no evidence that ByteDance, which owns TikTok, has been compelled by the Chinese government to perform these activities. But these are the kinds of sensitivities that are linked to using TikTok and are popularly discussed.

What Should Be Done To Protect Assets?

The threats posed by TikTok are, at the moment, speculative: the platform could be used for any number of things. People’s concerns are linked less to the algorithm or the data that is collected and more to ByteDance being a Chinese company that might be influenced by the Chinese government to share data or undertake activities that are deleterious to Western countries’ interests.

Bluntly: the issue raised by TikTok is not necessarily linked to the platform itself but to the geopolitical struggles between China and other advanced economies throughout the world. We don’t have a TikTok problem per se but, instead, have a Chinese national security and foreign policy problem. TikTok is just a very narrow lens through which concerns and fears are being channelled.

So in the absence of obvious and deliberate harmful activities being undertaken by ByteDance and TikTok at the behest of the Chinese government what should be done? At the outset it’s worth recognising that many of the concerns expressed by politicians–and especially those linked to surreptitious influence operations–would already run afoul of Canadian law. The CSIS Act bars clandestine foreign intelligence operations which are regarded as threatening the security of Canada. Specifically, threats to the security of Canada means:

(a) espionage or sabotage that is against Canada or is detrimental to the interests of Canada or activities directed toward or in support of such espionage or sabotage,

(b) foreign influenced activities within or relating to Canada that are detrimental to the interests of Canada and are clandestine or deceptive or involve a threat to any person,

(c) activities within or relating to Canada directed toward or in support of the threat or use of acts of serious violence against persons or property for the purpose of achieving a political, religious or ideological objective within Canada or a foreign state, and

(d) activities directed toward undermining by covert unlawful acts, or directed toward or intended ultimately to lead to the destruction or overthrow by violence of, the constitutionally established system of government in Canada,

CSIS is authorised to undertake measures which would reduce the threats to the security of Canada, perhaps in partnership with the Communications Security Establishment, should such a threat be identified and a warrant obtained from the Federal Court.

On the whole a general ban on TikTok is almost certainly disproportionate and unreasonable at this point in time. There is no evidence of harm. There is no evidence of influence by the Chinese government. Rather than banning the platform generally I think that more focused legislation or policy could make sense.

First, I think that legislation or (preferably) policies precluding at least some members of government and senior civil servants from using TikTok has some merit. In these cases a risk analysis should be conducted to determine if collected information would undermine the Government of Canada’s ability to secure confidential information or if the collected information could be used for intelligence operations against the government officials. Advice might, also, be issued by the Canadian Security Intelligence Service so that private organisations are aware of their risks. In exceptional situations some kind of security requirements might also be imposed on private organisations and individuals, such as those who are involved in especially sensitive roles managing critical infrastructure systems. Ultimately, I suspect the number of people who should fall under this ban would, and should, be pretty small.

Second, what makes sense is legislation that requires social media companies writ large–not just TikTok–to make their algorithms and data flows legible to regulators. Moreover, individual users should be able to learn, and understand, why certain content is being prioritised or shown to them. Should platforms decline to comply with such a law then sanctions may be merited. Similarly, should algorithmic legibility show that platforms are being manipulated or developed in ways that deliberately undermine social cohesion then some sanctions might be merited, though with the caveat that “social cohesion” should be understood as referring to platforms being deliberately designed to incite rage or other strong emotions with the effect of continually, and artificially, weakening social cohesion and amplifying social cleavages. The term should not, however, be seen as a kind of code for creating exclusionary social environments where underprivileged groups continue to be treated in discriminatory ways.

So Is TikTok ‘Dangerous’ From A National Security Perspective?

Based on open source information² there is no reason to think that TikTok is currently a national security threat. Are there any risks associated with the platform? Sure, but they need to be juxtaposed against equivalent or more serious threats and priorities. We only have so many resources to direct towards the growing legion of legitimate national security risks and issues; funnelling a limited set of resources towards TikTok may not be the best kind of prioritisation.

Consider that while the Chinese government could compel TikTok to disclose information about its users to intelligence and security services…the same government could also use business cutouts and purchase much of the same information from data brokers operating in the United States and other jurisdictions. There would be no need to secretly force a company to do something when, instead, it could just lawfully acquire equivalent (or more extensive!) information. This is a pressing and real national security (and privacy!) issue and is deserving of legislative scrutiny and attention.

Further, while there is a risk that TikTok could be used to manipulate social values…the same is true of other social networking services. Indeed, academic and journalistic research over the past 5-7 years has drawn attention to how popular social media services are designed to deliver dopamine hits and keep us on them. We know that various private companies and public organisations around the world work tirelessly to ‘hack’ those algorithms and manipulate social values. Of course this broader manipulation doesn’t mean that we shouldn’t care, but it does make clear that TikTok isn’t the sole vector of these efforts. Moreover, there are real questions about how well social influence campaigns work: do they influence behaviour–are they supplying change?–or is the efficacy of any campaign representative of an attentive and interested pre-existing audience–is demand for the content the problem?

The nice thing about banning, blocking, or censoring material, or undertaking some other kind of binary decision, is that you feel like you’ve done something. Bans, blocks, and censorship are typically designed for a black and white world. We, however, live in a world that is actually shrouded in greys. We only have so much legislative time, so much policy capacity, so much enforcement ability: it should all be directed efficiently to understanding, appreciating, and addressing the fullness of the challenges facing states and society. This time and effort should not be spent on performative politics that is great for providing a dopamine hit but which fails to address the real underlying issues.


  1. I have previously talked about the broader risks of correlating national security and information security. ↩︎
  2. Open source information means information which you or I can find, and read, without requiring a security clearance. ↩︎