Categories
Links Writing

Russian State Media Disinformation Campaign Exposed

Today, a series of Western allies (including Canada, the United States, and the Netherlands) disclosed the existence of a sophisticated Russian social media influence operation run by RT. The details of the campaign are remarkably thorough, and include some of the code used to drive the operation.

Of note, the campaign used a covert artificial intelligence (AI) enhanced software package to create fictitious online personas, representing a number of nationalities, to post content on X (formerly Twitter). Using this tool, RT affiliates disseminated disinformation to and about a number of countries, including the United States, Poland, Germany, the Netherlands, Spain, Ukraine, and Israel.

Although the tool was only identified on X, the authoring organizations’ analysis of the software used for the campaign indicated the developers intended to expand its functionality to other social media platforms. The authoring organizations’ analysis also indicated the tool is capable of the following:

  1. Creating authentic-appearing social media personas en masse;
  2. Deploying content similar to typical social media users;
  3. Mirroring disinformation of other bot personas;
  4. Perpetuating the use of pre-existing false narratives to amplify malign foreign influence; and
  5. Formulating messages, to include the topic and framing, based on the specific archetype of the bot.

Mitigations to address this influence campaign include:

  1. Consider implementing processes to validate that accounts are created and operated by a human person who abides by the platform’s respective terms of use. Such processes could be similar to well-established Know Your Customer guidelines.
  2. Consider reviewing, and making upgrades to, authentication and verification processes based on the information provided in this advisory.
  3. Consider protocols for identifying, and subsequently reviewing, users with known-suspicious user agent strings.
  4. Consider making user accounts Secure by Default through settings such as MFA, default settings that support privacy, removal of personally identifiable information shared without consent, and clear documentation of acceptable behavior.
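Mitigation #3 above hints at a simple, automatable check. The sketch below is a toy illustration rather than anything from the advisory: it flags accounts whose sessions arrive with user-agent strings associated with scripted clients. The strings and log records are invented placeholders.

```python
# Toy sketch of mitigation #3: flag accounts whose requests used a
# known-suspicious user-agent string. Strings and records are invented.

SUSPICIOUS_USER_AGENTS = {
    "python-requests/2.28.1",  # HTTP library UA, typical of scripted clients
    "okhttp/3.12.1",           # another library UA often seen in automation
}

def flag_accounts(access_logs):
    """Return the set of account IDs whose sessions used a suspicious user agent."""
    flagged = set()
    for entry in access_logs:
        if entry["user_agent"] in SUSPICIOUS_USER_AGENTS:
            flagged.add(entry["account_id"])
    return flagged

logs = [
    {"account_id": "a1", "user_agent": "Mozilla/5.0 (Windows NT 10.0)"},
    {"account_id": "a2", "user_agent": "python-requests/2.28.1"},
]
print(sorted(flag_accounts(logs)))  # ['a2']
```

In practice a platform would combine a signal like this with account-creation velocity, IP reputation, and behavioural features rather than relying on user agents alone, which are trivially spoofed.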

This campaign is a continuation of how AI tools are being (and will be) used to expand actors’ ability to undertake next-generation digital influence campaigns. And while it is adversaries who are being caught using these techniques today, we should anticipate that private companies (and others) in democratic and non-democratic countries alike will offer similar capabilities in the near future.

Categories
Writing

Why Is(n’t) TikTok A National Security Risk?

Photo by Ron Lach on Pexels.com

There have been grumblings about TikTok being a national security risk for many years, and they’re getting louder with each passing month. Indeed, in the United States a bill has been presented to ban TikTok (“The ANTI-SOCIAL CCP ACT”) and a separate bill (“No TikTok on Government Devices Act”) has passed the Senate and would bar the application from being used on government devices. In Canada, the Prime Minister noted that the country’s signals intelligence agency, the Communications Security Establishment, is “watching very carefully.”

I recently provided commentary where I outlined some of the potential risks associated with TikTok and where it likely should fit into Canada’s national security priorities (spoiler: probably pretty low). Here I just want to expand on my comments a bit to provide some deeper context and reflections.

As with all things security-related you need to think through what assets you are attempting to protect, the sensitivity of what you’re trying to protect, and what measures are more or less likely to protect those assets. Further, in developing a protection strategy you need to think through how many resources you’re willing to invest to achieve the sought-after protection. This applies as much to national security policy makers as it does to individuals trying to secure devices or networks.

What Is Being Protected

Most public figures who talk about TikTok and national security are presently focused on one or two assets.

First, they worry that a large volume of data may be collected and used by Chinese government agencies, after these agencies receive it either voluntarily from TikTok or after compelling its disclosure. Commentators argue that Chinese companies are bound to obey the national security laws of China and, as such, may be forced to disclose data without any notice to users or non-Chinese government agencies. This data could be used to obtain information about specific individuals or communities, inclusive of what people are searching for on the platform (e.g., medical, financial, or sexual preference information), what they themselves are posting that could prove embarrassing, or metadata that could be used for subsequent targeting.

Second, commentators are adopting a somewhat odious language of ‘cognitive warfare’ in talking about TikTok.1 The argument is that the Chinese government might compel the company to modify its algorithms so as to influence what people are seeing on the platform. The intent of this modification would be to influence political preferences or social and cultural perceptions. Some worry this kind of influence could guide whom individuals are more likely to vote for (e.g., you see a number of videos that directly or indirectly encourage you to support particular political parties), cause generalised apathy (e.g., you see videos that suggest that all parties are bad and none worth voting for), or enhance societal tensions (e.g., work to inflame partisanship and impair the functioning of otherwise moderate democracies). Or, as likely, a combination of each of these kinds of influence operations. Moreover, the TikTok algorithm could be modified by government compulsion to prioritise videos that praise some countries or to suppress videos that negatively portray other countries.

What Is the Sensitivity of the Assets?

When we consider the sensitivity of the information and data collected by TikTok, it is potentially high but, in practice, varies based on the person(s) in question. Research conducted by the University of Toronto’s Citizen Lab found that while TikTok does collect a significant volume of information, that volume largely parallels what Facebook or other Western companies collect. To put this slightly differently, a lot of information is collected, and its sensitivity depends on whom it belongs to, who may have access to it, and what those parties do with it.

When we consider who is using TikTok and having their information uploaded to the company’s servers, then, the question becomes whether there is a particular national security risk linked with this activity. While some individuals may potentially be targets based on their political, business, or civil society bona fides, this will not be the case with all (or most) users. However, even in assessing the national security risks linked to individuals (or associated groups), it’s helpful to do a little more thinking.

First, the amount of information that is collected by TikTok, when merged with other data which could theoretically be collected using other signals intelligence methods (e.g., extracting metadata and select content from middle-boxes, Internet platforms, open-source locations, etc.), could be very revealing. Five Eyes countries (i.e., Australia, Canada, New Zealand, the United Kingdom, and the United States of America) collect large volumes of metadata on vast swathes of the world’s populations in order to develop patterns of life which, when added together, can be deeply revelatory. When and how those countries’ intelligence agencies actually use the collected information varies and is kept very secret. Generally, however, only a small subset of individuals whose information is collected and retained for any period of time have actions taken towards them. Nonetheless, we know that there is a genuine concern about information from private companies being obtained by intelligence services in the Five Eyes, and it’s reasonable to be concerned that similar activities might be undertaken by Chinese intelligence services.
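To make concrete why bulk metadata is so revealing, consider that even bare timestamps, aggregated per account, yield a crude “pattern of life”. The sample events below are invented, and real analysis would fold in location, contacts, and content signals:

```python
# Illustrative sketch: counting events per hour of day from nothing but
# timestamps already exposes a person's daily routine. Sample data is invented.
from collections import Counter
from datetime import datetime

def activity_profile(timestamps):
    """Count events per hour of day, a crude pattern-of-life signal."""
    return Counter(datetime.fromisoformat(ts).hour for ts in timestamps)

events = [
    "2022-11-01T08:05:00", "2022-11-02T08:40:00",
    "2022-11-01T22:10:00", "2022-11-03T08:15:00",
]
profile = activity_profile(events)
print(profile.most_common(1))  # [(8, 3)] -> habitual morning activity
```

Aggregated across millions of accounts and joined with other datasets, even this trivial signal supports the kind of targeting described above.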

Second, the kinds of content information which are retained by TikTok could be embarrassing at a future time, or used by state agencies in ways that users would not expect or prefer. Imagine a situation where a young person says or does something on TikTok which is deeply offensive. Fast forward 3-4 years: their parents are diplomats or significant members of the business community, and that offensive content is used by Chinese security services to embarrass or otherwise inconvenience the parents. Such influence operations might impede Canada’s ability to conduct its diplomacy abroad or undermine a business’s ability to prosper.

Third, the TikTok algorithm is not well understood. There is a risk that the Chinese government might compel ByteDance, and through them the TikTok platform, to modify algorithms to amplify some content and not others. It is hard to assess how ‘sensitive’ a population’s general sense of the world is but, broadly, if a surreptitious foreign influence operation occurred it might potentially affect how a population behaves or sees the world. To be clear this kind of shift in behaviour would not follow from a single video but from a concerted effort over time that shifted social perceptions amongst at least some distinct social communities. The sensitivity of the information used to identify videos to play, then, could be quite high across a substantial swathe of the population using the platform.

It’s important to recognise that in the aforementioned examples there is no evidence that ByteDance, which owns TikTok, has been compelled by the Chinese government to perform these activities. But these are the kinds of sensitivities that are linked to using TikTok and are popularly discussed.

What Should Be Done To Protect Assets?

The threats which are posed by TikTok are, at the moment, speculative: it could be used for any number of things. People’s concerns are linked less to the algorithm or the data that is collected and more to ByteDance being a Chinese company that might be influenced by the Chinese government to share data or undertake activities which are deleterious to Western countries’ interests.

Bluntly: the issue raised by TikTok is not necessarily linked to the platform itself but to the geopolitical struggles between China and other advanced economies throughout the world. We don’t have a TikTok problem per se but, instead, have a Chinese national security and foreign policy problem. TikTok is just a very narrow lens through which concerns and fears are being channelled.

So in the absence of obvious and deliberate harmful activities being undertaken by ByteDance and TikTok at the behest of the Chinese government, what should be done? At the outset it’s worth recognising that many of the concerns expressed by politicians–and especially those linked to surreptitious influence operations–would already run afoul of Canadian law. The CSIS Act bars clandestine foreign intelligence operations which are regarded as threatening the security of Canada. Specifically, “threats to the security of Canada” means:

(a) espionage or sabotage that is against Canada or is detrimental to the interests of Canada or activities directed toward or in support of such espionage or sabotage,

(b) foreign influenced activities within or relating to Canada that are detrimental to the interests of Canada and are clandestine or deceptive or involve a threat to any person,

(c) activities within or relating to Canada directed toward or in support of the threat or use of acts of serious violence against persons or property for the purpose of achieving a political, religious or ideological objective within Canada or a foreign state, and

(d) activities directed toward undermining by covert unlawful acts, or directed toward or intended ultimately to lead to the destruction or overthrow by violence of, the constitutionally established system of government in Canada,

CSIS is authorised to undertake measures which would reduce the threats to the security of Canada, perhaps in partnership with the Communications Security Establishment, should such a threat be identified and a warrant obtained from the federal court.

On the whole a general ban on TikTok is almost certainly disproportionate and unreasonable at this point in time. There is no evidence of harm. There is no evidence of influence by the Chinese government. Rather than banning the platform generally I think that more focused legislation or policy could make sense.

First, I think that legislation or (preferably) policies precluding at least some members of government and senior civil servants from using TikTok has some merit. In these cases a risk analysis should be conducted to determine if collected information would undermine the Government of Canada’s ability to secure confidential information or if the collected information could be used for intelligence operations against the government officials. Advice might, also, be issued by the Canadian Security Intelligence Service so that private organisations are aware of their risks. In exceptional situations some kind of security requirements might also be imposed on private organisations and individuals, such as those who are involved in especially sensitive roles managing critical infrastructure systems. Ultimately, I suspect the number of people who should fall under this ban would, and should, be pretty small.

Second, what makes sense is legislation that requires social media companies writ large–not just TikTok–to make their algorithms and data flows legible to regulators. Moreover, individual users should be able to learn, and understand, why certain content is being prioritised or shown to them. Should platforms decline to comply with such a law then sanctions may be merited. Similarly, should algorithmic legibility showcase that platforms are being manipulated or developed in ways that deliberately undermine social cohesion then some sanctions might be merited, though with the caveat that “social cohesion” should be understood as referring to platforms being deliberately designed to incite rage or other strong emotions with the effect of continually, and artificially, weakening social cohesion and amplifying social cleavages. The term should not, however, be seen as a kind of code for creating exclusionary social environments where underprivileged groups continue to be treated in discriminatory ways.
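To illustrate what per-user algorithmic legibility could look like in practice, here is a minimal sketch of a structured explanation record that a platform might attach to each recommendation. The schema and field names are my own assumptions for illustration; no platform exposes exactly this:

```python
# Hypothetical sketch of an "explanation record" for a single recommendation,
# the kind of artefact legibility legislation might require platforms to
# expose to users and regulators. Schema and names are assumptions.
from dataclasses import dataclass

@dataclass
class RecommendationExplanation:
    video_id: str
    signals: dict   # signal name -> weight used in ranking this item
    top_signal: str # the single largest contributor

def explain(video_id, signals):
    """Build an explanation record, identifying the dominant ranking signal."""
    return RecommendationExplanation(
        video_id=video_id,
        signals=signals,
        top_signal=max(signals, key=signals.get),
    )

e = explain("v123", {"watch_history": 0.6, "follows": 0.3, "trending": 0.1})
print(e.top_signal)  # watch_history
```

A regulator auditing aggregate records like these could check whether some class of content is being systematically boosted or suppressed without needing direct access to the ranking model itself.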

So Is TikTok ‘Dangerous’ From A National Security Perspective?

Based on open source information2 there is no reason to think that TikTok is currently a national security threat. Are there any risks associated with the platform? Sure, but they need to be juxtaposed against equivalent or more serious threats and priorities. We only have so many resources to direct towards the growing legion of legitimate national security risks and issues; funnelling a limited set of resources towards TikTok may not be the best kind of prioritisation.

Consider that while the Chinese government could compel TikTok to disclose information about its users to intelligence and security services…the same government could also use business cutouts and purchase much of the same information from data brokers operating in the United States and other jurisdictions. There would be no need to secretly force a company to do something when, instead, it could just lawfully acquire equivalent (or more extensive!) information. This is a pressing and real national security (and privacy!) issue and is deserving of legislative scrutiny and attention.

Further, while there is a risk that TikTok could be used to manipulate social values…the same is true of other social networking services. Indeed, academic and journalistic research over the past 5-7 years has drawn attention to how popular social media services are designed to deliver dopamine hits and keep us on them. We know that various private companies and public organisations around the world work tirelessly to ‘hack’ those algorithms and manipulate social values. Of course this broader manipulation doesn’t mean that we shouldn’t care but, also, makes clear that TikTok isn’t the sole vector of these efforts. Moreover, there are real questions about how well social influence campaigns work: do they influence behaviour–are they supplying change?–or is the efficaciousness of any campaign representative of an attentive and interested pre-existing audience–is demand for the content the problem?

The nice thing about banning, blocking, or censoring material, or undertaking some other kind of binary decision, is that you feel like you’ve done something. Bans, blocks, and censorship regimes are typically designed for a black and white world. We, however, live in a world that is actually shrouded in greys. We only have so much legislative time, so much policy capacity, so much enforcement ability: it should all be directed efficiently to understanding, appreciating, and addressing the fullness of the challenges facing states and society. This time and effort should not be spent on performative politics that is great for providing a dopamine hit but which fails to address the real underlying issues.


  1. I have previously talked about the broader risks of correlating national security and information security. ↩︎
  2. Open source information means information which you or I can find, and read, without requiring a security clearance. ↩︎
Categories
Writing

Apple To More Widely Encrypt iCloud Data

Photo by Kartikey Das on Pexels.com

Apple has announced it will begin rolling out new data security protections for Americans by the end of 2022, and for the rest of the world in 2023. This is a big deal.

One of the biggest, and most serious, gaping holes in the protections that Apple has provided to its users is linked to iCloud. Specifically, while a subset of information has been encrypted such that Apple couldn’t access or disclose the plaintext of communications or content (e.g., Health information, encrypted Apple Notes, etc.), the company did not similarly protect device backups, message backups, notes generally, iCloud contents, Photos, and more. The result is that third parties could either compel Apple to disclose information (e.g., by way of warrant) or otherwise subvert Apple’s protections to access stored data (e.g., targeted attacks). Apple’s new security protections will expand the protected data from 14 categories1 to 23.
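The substance of the change can be captured with a toy model (there is no real cryptography here): under standard protection the service escrows a copy of the decryption key and can therefore be compelled to produce plaintext, whereas under end-to-end protection only the user’s devices hold the key. All class and field names below are illustrative assumptions:

```python
# Conceptual sketch (not real cryptography) of key escrow vs. end-to-end
# protection for cloud backups. Names are illustrative, not Apple's design.
import secrets

class CloudService:
    def __init__(self):
        self.stored_blobs = {}
        self.escrowed_keys = {}  # stays empty when end-to-end protection is on

    def upload(self, user, blob, key, escrow_key):
        self.stored_blobs[user] = blob
        if escrow_key:  # standard protection: service retains a decryption key
            self.escrowed_keys[user] = key

    def can_service_decrypt(self, user):
        """True only if the service escrowed a key it could be compelled to use."""
        return user in self.escrowed_keys

service = CloudService()
device_key = secrets.token_bytes(32)  # key that never leaves the user's devices

service.upload("alice", b"<ciphertext>", device_key, escrow_key=False)
print(service.can_service_decrypt("alice"))  # False
```

The warrant problem discussed below follows directly from this: when `escrow_key` is false, a compelled service can only hand over ciphertext.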

I am very supportive of Apple’s decision and frankly congratulate them on the very real courage that it takes to implement something like this. It is:

  • courageous technically, insofar as this is a challenging thing to pull off at the scale at which Apple operates
  • courageous from a business perspective, insofar as it raises the prospect of unhappy customers should they lose access to their data and Apple be unable to assist them
  • courageous legally, insofar as it’s going to inspire a lot of frustration and upset by law enforcement and government agencies around the world

It’ll be absolutely critical to observe how quickly, and how broadly, Apple extends its new security capacities and whether countries are able to pressure Apple to either not deploy them for their residents or roll them back in certain situations. Either way, Apple routinely sets the standard on consumer privacy protections; others in the industry will now be inevitably compared to Apple as either meeting the new standard or failing their own customers in one way or another.

From a Canadian, Australian, or British government point of view, I suspect that Apple’s decision will infuriate law enforcement and security agencies which had placed their hopes on CLOUD Act bilateral agreements to get access to corporate data, such as that held by Apple. Under a CLOUD Act bilateral agreement, British authorities could, as an example, directly serve a judicially authorised order on Apple concerning a British resident, to get Apple to disclose information back to the British authorities without having to deal with American authorities. This promised to substantially improve the speed at which countries with bilateral agreements could obtain electronic evidence. Now, it would seem, Apple will largely be unable to assist law enforcement and security agencies when it comes to Apple users who have voluntarily enabled the heightened data protections. Apple’s decision will, almost certainly, further inspire governments around the world to double down on their efforts to advance anti-encryption legislation and pass such legislation into law.

Notwithstanding the inevitable government gnashing of teeth, Apple’s approach will represent one of the biggest (voluntary) increases in privacy protection for global users since WhatsApp adopted Signal’s underlying encryption protocols. Tens if not hundreds of millions of people who enable the new data protection will be much safer and more secure in how their data is stored while simultaneously restricting who can access that data without individuals’ own knowledge.

In a world where ‘high-profile’ targets are just people who are social influencers on social media, there are a lot of people who stand to benefit from Apple’s courageous move. I only hope that other companies, such as Google, are courageous enough to follow Apple at some point in the near future.


  1. really, 13, given the issue of iMessage backups being accessible to Apple ↩︎
Categories
Links Writing

Generalist Policing Models Remain Problematic

From the New York Times’s opinion section, this piece on “Why the F.B.I. Is So Far Behind on Cybercrime” reinforces the position that American law enforcement is stymied in investigating cybercrimes because:

…it lacks enough agents with advanced computer skills. It has not recruited as many of these people as it needs, and those it has hired often don’t stay long. Its deeply ingrained cultural standards, some dating to the bureau’s first director, J. Edgar Hoover, have prevented it from getting the right talent.

Emblematic of an organization stuck in the past is the F.B.I.’s longstanding expectation that agents should be able to do “any job, anywhere.” While other global law enforcement agencies have snatched up computer scientists, the F.B.I. tried to turn existing agents with no computer backgrounds into digital specialists, clinging to the “any job” mantra. It may be possible to turn an agent whose background is in accounting into a first-rate gang investigator, but it’s a lot harder to turn that same agent into a top-flight computer scientist.

The “any job” mantra also hinders recruitment. People who have spent years becoming computer experts may have little interest in pivoting to another assignment. Many may lack the aptitude for — or feel uneasy with — traditional law enforcement expectations, such as being in top physical fitness, handling a deadly force scenario or even interacting with the public.

This very same issue plagues the RCMP, which also has a generalist model that discourages or hinders specialization. While we do see better business practices in, say, France, with an increasing LEA capacity to pursue cybercrime, we’re not yet seeing North American federal governments overhaul their own policing services.1

Similarly, the FBI is suffering from an ‘arrest’ culture:

The F.B.I.’s emphasis on arrests, which are especially hard to come by in ransomware cases, similarly reflects its outdated approach to cybercrime. In the bureau, prestige often springs from being a successful trial agent, working on cases that result in indictments and convictions that make the news. But ransomware cases, by their nature, are long and complex, with a low likelihood of arrest. Even when suspects are identified, arresting them is nearly impossible if they’re located in countries that don’t have extradition agreements with the United States.

In the Canadian context, not only is pursuing arrests a problem due to jurisdiction, but the complexity of cases can mean an officer spends huge amounts of time on a computer and not out in the field ‘doing the work’ of their colleagues who are not cyber-focused. This perception of just ‘playing games’ or ‘surfing social media’ can sometimes lead to challenges between cyber investigators and older-school leaders.2 And, making things even more challenging, the resources to train officers to detect and pursue Child Sexual Abuse Material (CSAM) are relatively plentiful, whereas economic and non-CSAM investigations tend to be severely under-resourced.

There is some hope coming for Canadian investigators by way of CLOUD Act agreements between the Canadian and American governments and updates to the Cybercrime Convention, though both will require changes to criminal law, as well as potentially to provincial privacy laws, to empower LEAs with expanded powers. And even with access to more American data to enable investigations, this will not solve the arrest challenges when criminals are operating out of non-extradition countries.

It remains to be seen whether an expanded capacity to issue warrants to American providers will reduce some of the Canadian need for specialized training to investigate more rudimentary cyber-related crimes or if, instead, it will have a minimal effect overall.


  1. This is also generally true to provincial and municipal services as well. ↩︎
  2. Fortunately this is a less common issue, today, than a decade ago. ↩︎
Categories
Writing

Chinese Spies Accused of Using Huawei in Secret Australia Telecom Hack

Bloomberg has an article that discusses how Chinese spies were allegedly involved in deploying implants on Huawei equipment which was operated in Australia and the United States. The key parts of the story include:

At the core of the case, those officials said, was a software update from Huawei that was installed on the network of a major Australian telecommunications company. The update appeared legitimate, but it contained malicious code that worked much like a digital wiretap, reprogramming the infected equipment to record all the communications passing through it before sending the data to China, they said. After a few days, that code deleted itself, the result of a clever self-destruct mechanism embedded in the update, they said. Ultimately, Australia’s intelligence agencies determined that China’s spy services were behind the breach, having infiltrated the ranks of Huawei technicians who helped maintain the equipment and pushed the update to the telecom’s systems. 

Guided by Australia’s tip, American intelligence agencies that year confirmed a similar attack from China using Huawei equipment located in the U.S., six of the former officials said, declining to provide further detail.

The details from the story are all circa 2012. The fact that Huawei equipment was successfully being targeted by these operations, in combination with the large volume of serious vulnerabilities in Huawei equipment, contributed to the United States’ efforts to bar Huawei equipment from American networks and the networks of their closest allies.1
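One takeaway from the alleged operation is that the malicious update simply “appeared legitimate”. A standard, if partial, defence is cryptographically verifying every update before installation. The toy sketch below uses an HMAC shared secret purely for brevity; real update channels rely on asymmetric signatures, and no verification scheme helps if the vendor’s signing pipeline is itself compromised:

```python
# Defensive sketch: verify an update against a vendor signature before
# installing it. Toy HMAC construction for brevity; real systems use
# asymmetric signatures (and the key here is an invented placeholder).
import hashlib
import hmac

VENDOR_KEY = b"example-shared-secret"  # illustrative only

def sign_update(update_bytes, key=VENDOR_KEY):
    """Vendor side: produce a MAC over the update payload."""
    return hmac.new(key, update_bytes, hashlib.sha256).hexdigest()

def verify_update(update_bytes, signature, key=VENDOR_KEY):
    """Operator side: constant-time check before installation."""
    expected = hmac.new(key, update_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

firmware = b"legitimate update payload"
sig = sign_update(firmware)
print(verify_update(firmware, sig))                 # True
print(verify_update(firmware + b" tampered", sig))  # False
```

Note that verification of this kind constrains tampering in transit or by third parties; it does nothing against an insider who can push properly signed malicious updates, which is precisely the scenario the article describes.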

Analysis

We can derive a number of conclusions from the Bloomberg article, as well as see links between activities allegedly undertaken by the Chinese government and those of Western intelligence agencies.

To begin, it’s worth noting that the very premise of the article–that the Chinese government needed to infiltrate the ranks of Huawei technicians–suggests that circa 2012 Huawei was not controlled by, operated by, or necessarily unduly influenced by the Chinese government. Why? Because if the government needed to impersonate technicians to deploy implants, and do so without the knowledge of Huawei’s executive staff, then it’s very challenging to say that the company writ large (or its executive staff) were complicit in intelligence operations.

Second, the Bloomberg article makes clear that a human intelligence (HUMINT) operation had to be conducted in order to deploy the implants in telecommunications networks, with data then being sent back to servers that were presumably operated by Chinese intelligence and security agencies. These kinds of HUMINT operations can be high-risk insofar as, if operatives are caught, the whole operation (and its surrounding infrastructure) can be detected and burned down. Building legends for assets is never easy, nor is developing assets if they are being run from a distance as opposed to spies themselves deploying implants.2

Third, the United States’ National Security Agency (NSA) has conducted similar if not identical operations when its staff interdicted equipment while it was being shipped, in order to implant the equipment before sending it along to its final destination. Similarly, the CIA worked for decades to deliberately provide cryptographically-sabotaged equipment to diplomatic facilities around the world. All of which is to say that multiple agencies have been involved in using spies or assets to deliberately compromise hardware, including Western agencies.

Fourth, the Canadian Communications Security Establishment Act (‘CSE Act’), which was passed into law in 2019, includes language which authorizes the CSE to do, “anything that is reasonably necessary to maintain the covert nature of the [foreign intelligence] activity” (26(2)(c)). The language in the CSE Act, at a minimum, raises the prospect that the CSE could undertake operations which parallel those of the NSA and, in theory, the Chinese government and its intelligence and security services.3

Of course, the fact that the NSA and other Western agencies have historically tampered with telecommunications hardware to facilitate intelligence collection doesn’t take away from the seriousness of the allegations that the Chinese government targeted Huawei equipment so as to carry out intelligence operations in Australia and the United States. Moreover, the reporting in Bloomberg covers a time around 2012 and it remains unclear whether the relationship(s) between the Chinese government and Huawei have changed since then; it is possible, though credible open source evidence is not forthcoming to date, that Huawei has since been captured by the Chinese state.

Takeaway

The Bloomberg article strongly suggests that Huawei, as of 2012, didn’t appear captured by the Chinese government given the government’s reliance on HUMINT operations. Moreover, and separate from the article itself, it’s important that readers keep in mind that the activities which were allegedly carried out by the Chinese government were (and remain) similar to those also carried out by Western governments and their own security and intelligence agencies. I don’t raise this latter point as a kind of ‘whataboutism‘ but, instead, to underscore that these kinds of operations are both serious and conducted by ‘friendly’ and adversarial intelligence services alike. As such, it behooves citizens to ask whether these are the kinds of activities we want our governments to be conducting on our behalves. Furthermore, we need to keep these kinds of facts in mind and, ideally, see them in news reporting to better contextualize the operations which are undertaken by domestic and foreign intelligence agencies alike.


  1. While it’s several years past 2012, the 2021 UK HCSEC report found that it continued “to uncover issues that indicate there has been no overall improvement over the course of 2020 to meet the product software engineering and cyber security quality expected by the NCSC.” (boldface in original) ↩︎
  2. It is worth noting that, post-2012, the Chinese government has passed national security legislation which may make it easier to compel Chinese nationals to operate as intelligence assets, inclusive of technicians who have privileged access to telecommunications equipment that is being maintained outside China. That having been said, and as helpfully pointed out by Graham Webster, this case demonstrates that the national security laws were not needed in order to use human agents or assets to deploy implants. ↩︎
  3. There is a baseline question of whether the CSE Act created new powers for the CSE in this regard or if, instead, it merely codified existing secret policies or legal interpretations which had previously authorized the CSE to undertake covert activities in carrying out its foreign signals intelligence operations. ↩︎

Mandatory Patching of Serious Vulnerabilities in Government Systems

Photo by Mati Mango on Pexels.com

The Cybersecurity and Infrastructure Security Agency (CISA) is responsible for building national capacity to defend American infrastructure and cybersecurity assets. In the past year it has been tasked with receiving information about American government agencies’ progress (or lack thereof) in implementing elements of Executive Order 14028: Improving the Nation’s Cybersecurity, and it has been involved in responses to a number of events, including SolarWinds, the Colonial Pipeline ransomware attack, and others. The Executive Order required that CISA first collect a large volume of information from government agencies and vendors alike to assess the threats towards government infrastructure and, subsequently, to provide guidance concerning cloud services, track the adoption of multi-factor authentication and seek ways of facilitating its implementation, establish a framework to respond to security incidents, enhance CISA’s threat hunting abilities in government networks, and more.1

Today, CISA promulgated a binding operational directive that will require American government agencies to adopt more aggressive patch tempos for vulnerabilities. In addition to requiring agencies to develop formal policies for remediating vulnerabilities, it establishes a requirement that vulnerabilities assigned a CVE ID in 2021 be remediated within two weeks of being listed, while older vulnerabilities must be remediated within six months. The vulnerabilities to be patched/remediated are found in CISA’s “Known Exploited Vulnerabilities Catalog.”
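CISA publishes the catalogue as a JSON feed whose entries carry a dateAdded field alongside an explicit due date. As a rough illustration only (the entries below are invented, and the six-month window is approximated here as 180 days), the directive’s two-tier deadline rule can be sketched in Python:

```python
from datetime import date, timedelta

# Illustrative entries mimicking the shape of CISA's KEV feed.
# These specific records are made up for the example.
CATALOGUE = [
    {"cveID": "CVE-2021-44228", "dateAdded": "2021-12-10"},
    {"cveID": "CVE-2017-0144", "dateAdded": "2021-11-03"},
]

def remediation_due(entry):
    """Compute a remediation deadline for a catalogue entry.

    CVEs assigned in 2021 or later get a two-week window from the date
    they were added to the catalogue; older CVEs get roughly six months
    (approximated as 180 days for this sketch).
    """
    added = date.fromisoformat(entry["dateAdded"])
    cve_year = int(entry["cveID"].split("-")[1])
    if cve_year >= 2021:
        return added + timedelta(weeks=2)
    return added + timedelta(days=180)

for entry in CATALOGUE:
    print(entry["cveID"], "due", remediation_due(entry).isoformat())
```

In practice an agency would read the deadline straight from the published feed rather than recompute it; the sketch simply makes the two-tier rule concrete.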

It’s notable that, while patching is obviously preferred, the CISA directive doesn’t mandate patching per se but, rather, that ‘remediation’ take place.2 As such, organizations may be authorized to deploy defensive measures that prevent a vulnerability from being exploited without actually patching the underlying flaw, so as to avoid a patch having unintended consequences for either the application in question or for other applications/services that rely on outdated or bespoke programming interfaces.

In the Canadian context, there is no equivalent set of requirements that can be placed on Canadian federal departments. While Shared Services Canada can strongly encourage departments to patch, the Treasury Board Secretariat has published a “Patch Management Guidance” document, and the Canadian Centre for Cyber Security has a suggested patch deployment schedule,3 final decisions are still made by individual departments’ respective deputy ministers under the Financial Administration Act.

The Biden administration is moving quickly to accelerate its ability to identify and remediate vulnerabilities while simultaneously letting its threat intelligence staff track adversaries in American networks. That last element is less of an issue in the Canadian context, but the first two remain pressing and serious challenges.

While it’s positive to see the Americans moving quickly to improve their security posture, I can only hope that the Canadian federal, and provincial, governments similarly clear long-standing logjams that delegate security decisions to parties who may be ill-suited to make optimal ones, whether out of ignorance or because patching systems is seen as secondary to fulfilling a given department’s primary service mandate.


  1. For a discussion of the Executive Order, see: “Initial Thoughts on Biden’s Executive Order on Improving the Nation’s Cybersecurity” or “Everything You Need to Know About the New Executive Order on Cybersecurity.” ↩︎
  2. For more, see CISA’s “Vulnerability Remediation Requirements”. ↩︎
  3. “CCCS’s deployment schedule only suggests timelines for deployment. In actuality, an organization should take into consideration risk tolerance and exposure to a given vulnerability and associated attack vector(s) as part of a risk‑based approach to patching, while also fully considering their individual threat profile. Patch management tools continue to improve the efficiency of the process and enable organizations to hasten the deployment schedule.” Source: “Patch Management Guidance” ↩︎

The Roundup for December 1-31, 2019 Edition

Alone Amongst Ghosts by Christopher Parsons

Welcome to this edition of The Roundup! Enjoy the collection of interesting, informative, and entertaining links. Brew a fresh cup of coffee or grab yourself a drink, find a comfortable place, and relax.


This month’s update is late, owing to the holidays and my general re-thinking of how to move forward (or not) with these kinds of posts. I find them really valuable, but the actual interface of using my current client (Ulysses) to draft elements of them is less than optimal. So expect some sort of changes as I muddle through how to improve my workflow and/or consider the kinds of content that make the most sense to post.


Inspiring Quotation

Be intensely yourself. Don’t try to be outstanding; don’t try to be a success; don’t try to do pictures for others to look at—just please yourself.

  • Ralph Steiner

Great Photography Shots

Natalia Elena Massi’s photographs of Venice, flooded, are exquisite insofar as they are objectively well shot while, simultaneously, reminding us of the consequences of climate change. I dream of going to Venice to shoot photos at some point and her work only further inspires those dreams.

Music I’m Digging

I spent a lot of the month listening to my ‘Best of 2019’ playlist, and so my Songs I Liked in December playlist is a tad threadbare. That said, it’s more diverse in genre and styles than most monthly lists, though not a lot of the tracks made the grade to get onto my best of 2019 list.

  • Beck-Guero // I spent a lot of time re-listening to Beck’s corpus throughout December. I discovered that I really like his music: it’s moody, excitable, and catchy, and always evolving from album to album.
  • Little V.-Spoiler (Cyberpunk 2077) (Single) // Cyberpunk 2077 is one of the most hyped video games for 2020, and if all of the music is as solid and genre-fitting as this track, then the ambiance for the game is going to be absolutely stellar.

Neat Podcast Episodes

  • 99% Invisible-Raccoon Resistance // As a Torontonian I’m legally obligated to share this. Raccoons are a big part of the city’s identity, and in recent years new organic garbage containers were (literally) rolled out that were designed such that raccoons couldn’t get into them. Except that some raccoons could! The good news is that raccoons are not ‘social learners’ and, thus, those who can open the bins are unlikely to teach all the others. But with the sheer number of trash pandas in the city it’s almost a certainty that a number of them will naturally be smart enough and, thus, garbage will continue to litter our sidewalks and laneways.

Good Reads

  • America’s Dark History of Killing Its Own Troops With Cluster Munitions // Ismay’s longform piece on cluster munitions is not a happy article, nor does the reader leave with a sense that this deadly weapon is likely to be used less. His writing–and especially the tragedies associated with the use of these weapons–is poignant and painful. And yet it’s also critically important to read given the barbarity of cluster munitions and their deadly consequences for friends, foes, and civilians alike. No civilized nation should use these weapons, and none that do use them can claim to respect the lives of civilians stuck in conflict situations.
  • Project DREAD: White House Veterans Helped Gulf Monarchy Build Secret Surveillance Unit // The failure or unwillingness of the principals, their deputies, or staff to acknowledge they created a surveillance system that has systematically been used to hunt down illegitimate targets—human rights defenders, civil society advocates, and the like—is disgusting. What’s worse is that democratizing these surveillance capabilities and justifying the means by which the program was orchestrated almost guarantees that American signals intelligence employees will continue to spread American surveillance know-how to the detriment of the world for a pay check, the consequences be damned (if even ever considered in the first place).
  • The War That Continues to Shape Russia, 25 Years Later // The combination of the (re)telling of the first Russia-Chechen War and photographs from the conflict serve as reminders of what it looks like when well-armed nation-states engage in fullscale destruction, the human costs, and the lingering political consequences of wars-now-past.
  • A New Kind of Spy: How China obtains American technological secrets // Bhattacharjee’s 2014 article on Chinese spying continues to strike me as memorable, and helpful in understanding how the Chinese government recruits agents to facilitate its technological objectives. Reading the piece helps to humanize why Chinese-Americans may spy for the Chinese government and, also, the breadth and significance of such activities for advancing China’s interests to the detriment of America’s own.
  • Below the Asphalt Lies the Beach: There is still much to learn from the radical legacy of critical theory // Benhabib’s essay, showcasing how the history of European political philosophy over the past 60 years or so is in the common service of critique, and the role(s) of Habermasian political theory in both taking account of such critique whilst offering thoughts on how to proceed in a world of imperfect praxis, is an exciting consideration of political philosophy today. She mounts a considered defense of Habermas against, in particular, the claims that his work is overly Eurocentric. Her drawing a line between the need to seek emancipation while standing to confront and overcome the xenophobia, authoritarianism, and racism that is sweeping the world writ large is deeply grounded in the need for subjects like human rights to orient and ground critique. While some may oppose such universalism on the same grounds as they would reject the Habermasian project, there is a danger in doing so: not only might we do a disservice to the intellectual depth that undergirds the concept of human rights but, also, we run the risk of losing the core means by which we can (re)orient the world towards enabling the conditions of freedom itself.
  • Ghost ships, crop circles, and soft gold: A GPS mystery in Shanghai // This very curious article explores the recent problem of ships’ GPS transponders being significantly affected while transiting the Yangtze in China. Specifically, transponders are routinely misplacing the location of ships, sometimes with dangerous and serious implications. The cause, however, remains unknown: it could be a major step up in the (effective) electronic warfare capabilities of sand thieves who illegally dredge the river, and who seek to escape undetected, or it could be the Chinese government itself testing electronic warfare capabilities on the shipping lane in preparation for potentially deploying them elsewhere in the region. Either way, threats such as this to critical infrastructure pose serious risks to safe navigation and, also, raise the potential for largely civilian infrastructures to be targeted by nation-state adversaries.
  • A Date I Still Think About // These beautiful stories of memorable and special dates speak to just how much joy exists in the world, and how it unexpectedly erupts into our lives. In an increasingly dark time, stories like this are a kind of nourishment for the soul.

Cool Things

  • The Deep Sea // This interactive website that showcases the sea life we know exists, and the depths at which it lives, is simple and spectacular.
  • 100 Great Works Of Dystopian Fiction // A pretty terrific listing of books that have defined the genre.