Cyber Attacks Versus Operations in Ukraine

Photo by cottonbro on Pexels.com

For the past decade there has been a steady drumbeat that ‘cyberwar is coming’. Sometimes those advancing this position are in militaries; in other cases they are in think tanks or university departments that seek to link kinetic-adjacent computer operations with ‘war’.

Perhaps the most famous rebuttal to the cyberwar proponents has been Thomas Rid’s Cyber War Will Not Take Place. The title was meant to be provocative and almost has the effect of concealing a core insight of Rid’s argument: cyber operations will continue to be associated with conflicts but cyber operations are unlikely to constitute (or lead to) out-and-out war on their own. Why? Because it is very challenging to prepare and launch cyber operations that have significant kinetic results at the scale we associate with full-on war.

Since the start of the Russian Federation’s war of aggression against Ukraine there have regularly been shocked assertions that cyberwar isn’t taking place. A series of pieces by The Economist, as an example, sought to prepare readers for a cyberwar that just hasn’t happened. Why not? Because The Economist–much like other outlets!–often presumed that the cyber dimensions of the conflict in Ukraine would bear at least some resemblance to the long-maligned concept of a ‘cyber Pearl Harbour’: a critical cyber-enabled strike of some sort would have a serious, and potentially devastating, effect on how Ukraine could defend against Russian aggression and thus tilt the balance towards Russian military victory.

In response to these early mistaken understandings of cyber operations, scholars and experts have once more come out to explain why the cyber operations taking place in the Ukrainian conflict do not resemble an imagined cyber Pearl Harbour. Simultaneously, security and malware researchers have taken the opportunity to belittle International Relations theorists who have written about cyberwar, arguing that these theorists have fundamentally misunderstood how cyber operations take place.

Part of the challenge is that ‘cyberwar’ has often been popularly seen as the equivalent of hundreds of thousands of soldiers and their associated military hardware being deployed into a foreign country. As noted by Rid in a recent op-ed, while some cyber operations are meant to be apparent, others are much more subtle. The former might be meant to reduce the will to fight or diminish command and control capabilities. The latter, in contrast, will look a lot like other reconnaissance operations: knowing who is commanding which battle group, the logistical challenges facing the opponent, or the state of infrastructure in-country. All these latter dimensions provide strategic and tactical advantages to the party who has launched the surveillance operation. Operations meant to degrade capabilities may occur but will often be more subtle. This subtlety can be a particularly severe risk in a conflict, such as when an ammunition convoy is sent to the wrong place or train timetables are thrown off with the effect of stymying civilian evacuation or resupply operations.1

What’s often seemingly lost in the ‘cyberwar’ debates–which tend to take place either between people who don’t understand cyber operations, those who stand to profit from misrepresentations of them, or those who are so theoretical in their approaches as to be ignorant of reality–is that contemporary wars entail blended forces. Different elements of those blends have unique and specific tactical and strategic purposes. Cyber isn’t going to have the same effect as a Grad Missile Launcher or a T-90 Battle Tank, but that missile launcher or tank isn’t going to know that the target it’s pointed towards is strategically valuable without reconnaissance nor is it able to impair logistics flows the same way as a cyber operation targeting train schedules. To expect otherwise is to grossly misunderstand how cyber operations function in a conflict environment.

I’d like to imagine that one result of the Russian war of aggression will be to improve the general population’s understanding of cyber operations and what they do, and do not, entail. It’s possible that this might happen given that major news outlets, such as the AP and Reuters, are changing how they refer to such activities: they will no longer be called ‘cyberattacks’ outside very nuanced situations. Simply by changing what we call cyber activities–operations as opposed to attacks–we’ll hopefully see a deflating of the language and, with it, more careful understandings of how cyber operations take place in and out of conflict situations. As such, there’s a chance (hope?) we might see a better appreciation of the significance of cyber operations in the population writ large in the coming years. This will be increasingly important given the sheer volume of successful (non-conflict) operations that take place each day.


  1. It’s worth recognizing that part of why we aren’t reading about successful Russian operations is, first, due to Ukrainian and allied efforts to suppress such successes for fear of reducing Ukrainian/allied morale. Second, however, Western signals intelligence agencies such as the NSA, CSE, and GCHQ are all very active in providing remote defensive and other operational services to Ukrainian forces. There was also a significant effort ahead of the conflict to shore up Ukrainian defences, and there continues to be a strong effort by Western companies to enhance the security of systems used by Ukrainians. Combined, this means that Ukraine is enjoying additional ‘forces’ while, simultaneously, generally keeping quiet about its own failures to protect its systems or infrastructure. ↩︎

Russia, Nokia, and SORM

Photo by Mati Mango on Pexels.com

The New York Times recently wrote about Nokia providing telecommunications equipment to Russian ISPs, all while Nokia was intimately aware of how its equipment would be interconnected with System for Operative Investigative Activities (SORM) lawful interception equipment. SORM equipment has existed in numerous versions since the 1990s. Per James Lewis:

SORM-1 collects mobile and landline telephone calls. SORM-2 collects internet traffic. SORM-3 collects from all media (including Wi-Fi and social networks) and stores data for three years. Russian law requires all internet service providers to install an FSB monitoring device (called “Punkt Upravlenia”) on their networks that allows the direct collection of traffic without the knowledge or cooperation of the service provider. The providers must pay for the device and the cost of installation.

SORM is part of a broader Internet and telecommunications surveillance and censorship regime that has been established by the Russian government. Moreover, iterations or variations of the SORM system are used by other countries in the region (e.g., Kazakhstan) as well as by countries that were previously invaded by the Soviet Union (e.g., Afghanistan).

The Times’ article somewhat breathlessly states that the documents they obtained, which span 2008-2017,

show in previously unreported detail that Nokia knew it was enabling a Russian surveillance system. The work was essential for Nokia to do business in Russia, where it had become a top supplier of equipment and services to various telecommunications customers to help their networks function. The business yielded hundreds of millions of dollars in annual revenue, even as Mr. Putin became more belligerent abroad and more controlling at home.

It is not surprising that Nokia, as part of doing business in Russia, was complying with lawful interception laws insofar as its products were compatible with SORM equipment. Frankly it would have been surprising if Nokia had flouted the law given that Nokia’s own policy concerning human rights asserts that (.pdf):

Nokia will provide passive lawful interception capabilities to customers who have a legal obligation to provide such capabilities. This means we will provide products that meet agreed standards for lawful intercept capabilities as defined by recognized standards bodies such as the 3rd Generation Partnership Project (3GPP) and the European Telecommunications Standards Institute (ETSI). We will not, however, engage in any activity relating to active lawful interception technologies, such as storing, post-processing or analyzing of intercepted data gathered by the network operator.

It was somewhat curious that the Times’ article declined to recognize that Nokia-Siemens has a long history of doing business in repressive countries: it allegedly sold mobile lawful interception equipment to Iran circa 2009 and in 2010-11 its lawful interception equipment was implicated in political repression and torture in Bahrain. Put differently, Nokia’s involvement in low rule-of-law countries is not new and, if anything, their actions in Russia appear to be a mild improvement on their historical approaches to enabling repressive governments to exercise lawful interception functionalities.

The broad question is whether Western companies should be authorized or permitted to do business in repressive countries. To some extent, we might hope that businesses themselves would express restraint. But, beyond this, companies such as Nokia often require some kind of export license or approval before they can sell certain telecommunications equipment to various repressive governments. This is particularly true when it comes to supplying lawful interception functionality (which was not the case when Nokia sold equipment to Russia).

While the New York Times casts a light on Nokia, the article does not:

  1. Assess the robustness of Nokia’s alleged human rights commitments–have they changed since 2013 when they were first examined by civil society? How do Nokia’s sales comport with their 2019 human rights policy? Just how flimsy is the human rights policy in its own right?
  2. Assess the export controls that Nokia was(n’t) under–is it the case that the Finnish government has some liability or responsibility for the sales of Nokia’s telecommunications equipment? Should there be?
  3. Assess the activities of the telecommunications provider Nokia was supplying in Russia, MTS, and whether there is a broader issue of Nokia supplying equipment to MTS since it operates in various repressive countries.

None of this is meant to set aside the fact that Western companies ought to behave better on the international stage. But…this has not been a priority where Russia is concerned, at least until the country’s recent war of aggression. Warning signs were prominently on display before this war and didn’t result in prominent and public recriminations towards Nokia or other Western companies doing business in Russia.

All lawful interception systems, regardless of whether they conform with North American, European, or Russian standards, are surveillance systems. Put another way, they are all about empowering one group to exercise influence or power over others who are unaware they are being watched. In low rule-of-law countries, such as Russia, there is a real question as to whether they should even be called ‘lawful interception systems’ as opposed to simply ‘interception systems’.

There was a real opportunity for the New York Times both to better contextualize Nokia’s involvement in Russia and to explain and problematize the nature of lawful interception capabilities and standards. The authors could also have spent time discussing the nature of export controls on telecommunications equipment that is sold into repressive states. Sadly this did not occur, with the result that the authors and paper declined to more broadly consider and report on the workings, ethics, and politics of enabling telecommunications and lawful interception systems in repressive and non-repressive states alike. While other kicks at this can will arise, it’s evident that there wasn’t even an attempt to do so in this report on Nokia.

Policing the Location Industry

Photo by Ingo Joseph on Pexels.com

The Markup has a comprehensive and disturbing article on how location information is acquired by third-parties despite efforts by Apple and Google to restrict the availability of this information. In the past, it was common for third-parties to provide SDKs to application developers. The SDKs would inconspicuously transfer location information to those third-parties while also enabling functionality for application developers. With restrictions being put in place by platforms such as Apple and Google, however, it’s now becoming common for application developers to initiate requests for location information themselves and then share it directly with third-party data collectors.

While such activities often violate the terms of service and policy agreements between platforms and application developers, it can be challenging for the platforms to actually detect these violations and subsequently enforce their rules.
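The mechanics that make this hard to police are mundane. Below is a minimal, hypothetical sketch of the pattern described above–every endpoint, identifier, and function name here is invented–showing why a broker-bound ping is difficult to distinguish from an app’s ordinary first-party traffic:

```python
import json

def build_broker_payload(lat, lon, advertising_id, bundle_id):
    """Assemble what is typically shared: precise coordinates plus a stable
    device identifier that lets a collector link pings across apps and time."""
    return json.dumps({
        "lat": lat,
        "lon": lon,
        "ad_id": advertising_id,   # links this ping to the same device elsewhere
        "bundle": bundle_id,
    })

def on_location_update(lat, lon, post=print):
    # The permitted, user-facing reason the app holds the location permission...
    nearby = f"results near ({lat}, {lon})"
    # ...and, in the same handler, an ordinary HTTPS POST to a collector.
    # From the platform's vantage point this is just the app's own traffic.
    post("https://collector.invalid/v1/pings",
         build_broker_payload(lat, lon, "ad-1234", "com.example.app"))
    return nearby
```

Because the app itself holds the permission and makes the request, a platform review sees no telltale third-party SDK, and the outbound POST looks like any other API call the app makes.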

Broadly, the issues at play represent significant governmental regulatory failures. The fact that government agencies often benefit from the secretive collection of individuals’ location information makes it that much harder for governments to muster the will to discipline the secretive collection of personal data by third-parties: if the government cuts off the flow of location information, it will impede the ability of governments themselves to obtain this information.

In some cases intelligence and security services obtain location information from third-parties. This sometimes occurs in situations where the services themselves are legally barred from directly collecting this information. Companies selling mobility information can let government agencies do an end-run around the law.

One of the results is that efforts to limit data collectors’ ability to capture personal information often see parts of government push for carve-outs to rules limiting the collection, sale, and use of location information. In Canada, as an example, the government has adopted a legal position that it can collect locational information so long as it is de-identified or anonymized,1 and for the security and intelligence services there are laws on the books that permit the collection of commercially available open source information. This open source information does not need to be anonymized prior to acquisition.2 Lest you think it sounds paranoid that intelligence services might be interested in location information, consider that American agencies collected bulk location information pertaining to Muslims from third-party location information data brokers and that the Five Eyes historically targeted popular applications such as Google Maps and Angry Birds to obtain location information as well as other metadata and content. As the former head of the NSA announced several years ago, “We kill people based on metadata.”

Any argument made by either private or public organizations that anonymization or de-identification of location information makes it acceptable to collect, use, or disclose generally relies on tricking customers and citizens. Why is this? Because even when location information is aggregated and ‘anonymized’ it might subsequently be re-identified. And in situations where that reversal doesn’t occur, policy decisions can still be made based on the aggregated information. The process of deriving these insights and applying them showcases that while privacy is an important right to protect, it is not the only right that is implicated in the collection and use of locational information. Indeed, it is important to assess the proportionality and necessity of the collection and use, as well as how the associated activities affect individuals’ and communities’ equity and autonomy in society. Doing anything less is merely privacy-washing.
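To make the re-identification point concrete, here is a toy sketch–the dataset and coordinates are entirely invented–of a well-known class of attack: someone who knows only a target’s approximate home and work locations can often pick the target’s pseudonym out of a ‘de-identified’ mobility dataset by matching most-visited places:

```python
from collections import Counter

# Toy "de-identified" dataset: (pseudonym, rounded_lat, rounded_lon, hour).
# All values are invented; real broker datasets are far larger and richer.
pings = [
    ("u1", 43.65, -79.38, 9), ("u1", 43.65, -79.38, 17),
    ("u1", 43.70, -79.40, 22), ("u1", 43.70, -79.40, 7),
    ("u2", 43.65, -79.38, 9), ("u2", 43.60, -79.50, 22),
]

def top_locations(data, pseudonym, n=2):
    """The n most-visited grid cells for a pseudonym (typically home and work)."""
    counts = Counter((lat, lon) for p, lat, lon, _ in data if p == pseudonym)
    return {cell for cell, _ in counts.most_common(n)}

def reidentify(data, known_home, known_work):
    """Match a real person to a pseudonym using only two known places."""
    pseudonyms = {p for p, *_ in data}
    return [p for p in sorted(pseudonyms)
            if top_locations(data, p) == {known_home, known_work}]

# Knowing only roughly where the target lives and works is enough here:
print(reidentify(pings, (43.70, -79.40), (43.65, -79.38)))  # prints ['u1']
```

Real mobility datasets are vastly larger, but research on anonymized traces has repeatedly found that a handful of known locations is enough to single out most individuals, which is why ‘de-identified’ location data should not be treated as anonymous.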

Throughout discussions about data collection, including as it pertains to location information, public agencies and companies alike tend to provide a pair of arguments against changing the status quo. First, they assert that consent isn’t really possible anymore given the volume of data collected from individuals on a daily basis; individuals would be overwhelmed with consent requests, so the requests can’t be made in the first place! Second, they assert that we can’t regulate the collection of this data because doing so risks impeding innovation in the data economy.

If those arguments sound familiar, they should. They’re very similar to the plays made by industry groups whose activities have historically had negative environmental consequences. These groups regularly assert that, after decades of poor or middling environmental regulation, any new, stronger regulations would unduly impede the existing dirty economy for power, services, goods, and so forth. Moreover, they assert that the dirty way of creating power, services, and goods is just how things are and thus should remain the same.

In both the privacy and environmental worlds, corporate actors (and those whom they sell data/goods to) have benefitted from not having to pay the full cost of acquiring data without meaningful consent or accounting for the environmental cost of their activities. But, just as we demand enhanced environmental regulations to regulate and address the harms industry causes to the environment, we should demand and expect the same when it comes to the personal data economy.

If a business is predicated on sneaking away personal information from individuals then it is clearly not particularly interested or invested in being ethical towards consumers. It’s imperative to continue pushing legislators to not just recognize that such practices are unethical, but to make them illegal as well. Doing so will require being heard over the cries of government agencies that have vested interests in obtaining location information in ways that skirt the laws which might normally discipline such collection, as well as companies that have grown as a result of their unethical data collection practices. While this will not be an easy task, it’s increasingly important given the limited ability of platforms to regulate the sneaky collection of this information and the increasingly problematic ways our personal data can be weaponized against us.


  1. “PHAC advised that since the information had been de-identified and aggregated, it believed the activity did not engage the Privacy Act as it was not collecting or using ‘personal information’.” ↩︎
  2. See, as example, Section 23 of the CSE Act ↩︎

Improving My Photography In 2021

(Climbing Gear by Christopher Parsons)

I’ve spent a lot of personal time behind my cameras throughout 2021 and have taken a bunch of shots that I really like. At the same time, I’ve invested a lot of personal time learning more about the history of photography and how to accomplish things with my cameras. Below, in no particular order, is a list of the ways I worked to improve my photography in 2021.

Fuji Recipes

I started looking at different ‘recipes’ that I could use for my Fuji x100f, starting with those at Fuji X Weekly and some YouTube channels. I’ve since started playing around with my own black and white recipes to get a better sense of what works for making my own images. The goal in all of this is to create jpgs that are ‘done’ in body and require an absolute minimum amount of adjustment. It’s very much a work in progress, but I’ve gotten to the point that most of my photos only receive minor crops, as opposed to extensive edits in Darkroom.

Comfort in Street Photography

The first real memory I have of ‘doing’ street photography was being confronted by a bus driver after I took his photo. I was scared off of taking pictures of other people for years as a result.

Over the past year, however, I’ve gotten more comfortable by watching a lot of POV-style YouTube videos of how other street photographers go about making their images. I don’t have anyone else to go and shoot with, and learn from, so these videos have been essential to my learning process. In particular, I’ve learned a lot from watching and listening to Faizal Westcott, the folks over at Framelines, Joe Allan, Mattias Burling, and Samuel Lintaro Hopf.

Moreover, just seeing the photos that other photographers are making and how they move in the street has helped to validate that what I’m doing, when I go out, definitely fits within the broader genre of street photography.

Histories of Photography

In the latter three months of 2021 I spent an enormous amount of time watching videos from the Art of Photography, Tatiana Hopper, and a bit from Sean Tucker. The result is that I’m developing a better sense of what you can do with a camera as well as why certain images are iconic or meaningful.

Pocket Camera Investment

I really love my Fuji x100f and always have my iPhone 12 Pro in my pocket. Both are terrific cameras. However, I wanted something that was smaller than the Fuji and more tactile than the iPhone, and which I could always have in a jacket pocket.

To that end, in late 2021 I purchased a very lightly used Ricoh GR. While I haven’t used it enough to offer a full review, I have taken a lot of photos with it that I really, really like. More than anything else, I’m taking more photos since buying it because I always have a good, very tactile, camera with me wherever I go.

Getting Off Instagram

I’m not a particularly big fan of Instagram these days given Facebook’s unwillingness or inability to moderate its platform, as well as Instagram’s constant addition of advertisements and short video clips. So since October 2021 I’ve been posting my photos almost exclusively to Glass and (admittedly to a lesser extent) to this website.

Not only is the interface for posting to Glass a lot better than the one for Instagram (and Flickr, as well), but the comments I get on my photos on Glass are better than anywhere else I’ve ever posted my images. Admittedly Glass still has some growing pains, but I’m excited to see how it develops in the coming year.

Book Review: Blockchain Chicken Farm And Other Stories of Tech in China’s Countryside (2020) ⭐️⭐️⭐️

Xiaowei Wang’s book, Blockchain Chicken Farm And Other Stories of Tech in China’s Countryside, presents a nuanced and detailed account of the lived reality of many people in China through the lenses of history, culture, and emerging technologies. She makes clear through her writing that China is undergoing a massive shift through efforts to digitize the economy and society (and especially rural economies and societies), while also effectively communicating why so many of these initiatives are being undertaken.

From exploring the relationship between a fraught cold chain and organic chicken, to attempts to revitalize rural villages by turning them into platform manufacturing towns, to thinking through and reflecting on the state of contemporary capitalistic performativity in rural China and the USA alike, we see how technologies are being used to try and ‘solve’ challenges while often simultaneously undermining and endangering the societies within which they are embedded. Wang is careful to ensure that a reader leaves with an understanding of the positive attributes of how technologies are applied while, at the same time, making clear how they do not remedy—and, in fact, often reify or accentuate—unequal power relationships. Indeed, many of the positive elements of technologies, from the perspective of empowering rural citizens or improving their earning power, are either being negatively impacted by larger capitalistic actors or by the technology companies whose platforms many of these so-called improvements operate upon.

Wang’s book, in its conclusion, recognizes that we need to enhance and improve upon the cultural spaces we operate and live within if we are to create a new or reformed politics that is more responsive to the specific needs of individuals and their communities. Put differently, we must tend to the dynamism of the Lifeworld if we are to modify the conditions of the System that surrounds, and unrelentingly colonizes, the Lifeworld. 

Her wistful ending—that such efforts of (re)generation are all that we can do—speaks both to a hope and to an almost-resignation that (re)forming the systems we operate in can only take place if we manage to avoid being distracted by the bauble or technology that is dangled in front of us to distract us from the existential crises facing our societies and humanity writ large. As such, it concludes very much in the spirit of our times: with hope for the future but a fearful resignation that, despite our best efforts, we may be too late to succeed. But what else can we do?

Glass in 2022


I’ve been primarily posting my photos to Glass for about three months now. There have been several quality of life improvements1 but, on the whole, the app has been pretty true to its original DNA.

That’s been a bit frustrating for some folks, such as Matt Birchler. He notes that Glass seems to be populated by professional photographers and lacks the life and diversity that you can sometimes find on Instagram or other photography sites. I was particularly struck by his comment that, “I used to enjoy the feed because it was high quality stuff, but now I scroll and everyone is making photos that look like everyone else’s.”

I don’t discount that Matt’s experience has been seeing a lot of professionals making photos but have to admit that his experiences don’t really parallel my own. To be clear, the photographers that I follow are doing neat work and some are definitely serious amateurs or professionals. But perhaps because I’m more focused on street photography it’s rarely self-apparent to me that I’m following professionals versus amateurs, nor that everyone’s work looks the same.

That being said, I definitely do follow a lot fewer people on Glass. If I have a problem with the app it’s that discovering active photographers on the platform is difficult; a lot of people signed up for the trial period but aren’t regularly posting. The result is that it’s hard to develop an active stream of photos and a photographic community. At the same time, however, I don’t browse the Glass app like I would Instagram: I pop in once or twice a day, and try to set aside some time every day or three (or four…) to leave comments on other photographers’ work. I treat Glass more seriously than free photography applications, if only because I have (thus far) only had positive experiences with the other active photographers posting their work there.

The only other problem I have with Glass—annoyance really!—is that photographers’ profiles actually display in a much more beautiful way on non-phone devices. The image for this post was a screen capture from my iPad, which attractively lays out photos. In contrast, you just get a flat waterfall of images if you visit my profile in the Glass app itself. That’s a shame and hopefully something that is improved upon in 2022.

To date I’m happy with Glass and incredibly pleased to no longer be posting my photos to a Facebook platform. I really hope that Glass’s developers are able to maintain the app going forward, which will almost certainly depend in part on building the community and enhancing discoverability.

I’m currently planning to continue posting my work to Glass regularly. Even if the service doesn’t explode (which would be fine for me, though probably not great for its long term survival!) I find that the comments that I receive are far more valuable than anything I tended to receive on Instagram or other social sites, and the actual process of posting is also a comparative breeze and joy. If you’re looking for a neat photography site to try out, I heartily recommend that you give Glass a shot!


  1. Specifically, the developers have added some photography categories and public profiles, as well as the ability to ‘appreciate’ photos and comments ↩︎

Chinese Spies Accused of Using Huawei in Secret Australia Telecom Hack

Bloomberg has an article that discusses how Chinese spies were allegedly involved in deploying implants on Huawei equipment which was operated in Australia and the United States. The key parts of the story include:

At the core of the case, those officials said, was a software update from Huawei that was installed on the network of a major Australian telecommunications company. The update appeared legitimate, but it contained malicious code that worked much like a digital wiretap, reprogramming the infected equipment to record all the communications passing through it before sending the data to China, they said. After a few days, that code deleted itself, the result of a clever self-destruct mechanism embedded in the update, they said. Ultimately, Australia’s intelligence agencies determined that China’s spy services were behind the breach, having infiltrated the ranks of Huawei technicians who helped maintain the equipment and pushed the update to the telecom’s systems. 

Guided by Australia’s tip, American intelligence agencies that year confirmed a similar attack from China using Huawei equipment located in the U.S., six of the former officials said, declining to provide further detail.

The details from the story are all circa 2012. The fact that Huawei equipment was successfully being targeted by these operations, in combination with the large volume of serious vulnerabilities in Huawei equipment, contributed to the United States’ efforts to bar Huawei equipment from American networks and the networks of their closest allies.1

Analysis

We can derive a number of conclusions from the Bloomberg article, as well as see links between activities allegedly undertaken by the Chinese government and those of Western intelligence agencies.

To begin, it’s worth noting that the very premise of the article–that the Chinese government needed to infiltrate the ranks of Huawei technicians–suggests that circa 2012 Huawei was not controlled by, operated by, or necessarily unduly influenced by the Chinese government. Why? Because if the government needed to impersonate technicians to deploy implants, and do so without the knowledge of Huawei’s executive staff, then it’s very challenging to say that the company writ large (or its executive staff) were complicit in intelligence operations.

Second, the Bloomberg article makes clear that a human intelligence (HUMINT) operation had to be conducted in order to deploy the implants in telecommunications networks, with data then being sent back to servers that were presumably operated by Chinese intelligence and security agencies. These kinds of HUMINT operations can be high-risk: if operatives are caught then the whole operation (and its surrounding infrastructure) can be detected and burned down. Building legends for assets is never easy, nor is developing assets if they are being run from a distance as opposed to spies themselves deploying implants.2

Third, the United States’ National Security Agency (NSA) has conducted similar if not identical operations when its staff interdicted equipment while it was being shipped, in order to implant the equipment before sending it along to its final destination. Similarly, the CIA worked for decades to deliberately provide cryptographically-sabotaged equipment to diplomatic facilities around the world. All of which is to say that multiple agencies have been involved in using spies or assets to deliberately compromise hardware, including Western agencies.

Fourth, the Canadian Communications Security Establishment Act (‘CSE Act’), which was passed into law in 2019, includes language which authorizes the CSE to do, “anything that is reasonably necessary to maintain the covert nature of the [foreign intelligence] activity” (26(2)(c)). The language in the CSE Act, at a minimum, raises the prospect that the CSE could undertake operations which parallel those of the NSA and, in theory, the Chinese government and its intelligence and security services.3

Of course, the fact that the NSA and other Western agencies have historically tampered with telecommunications hardware to facilitate intelligence collection doesn’t take away from the seriousness of the allegations that the Chinese government targeted Huawei equipment so as to carry out intelligence operations in Australia and the United States. Moreover, the reporting in Bloomberg covers a time around 2012 and it remains unclear whether the relationship(s) between the Chinese government and Huawei have changed since then; it is possible, though credible open source evidence is not forthcoming to date, that Huawei has since been captured by the Chinese state.

Takeaway

The Bloomberg article strongly suggests that Huawei, as of 2012, didn’t appear to be captured by the Chinese government, given the government’s reliance on HUMINT operations. Moreover, and separate from the article itself, it’s important that readers keep in mind that the activities which were allegedly carried out by the Chinese government were (and remain) similar to those also carried out by Western governments and their own security and intelligence agencies. I don’t raise this latter point as a kind of ‘whataboutism‘ but, instead, to underscore that these kinds of operations are both serious and conducted by ‘friendly’ and adversarial intelligence services alike. As such, it behooves citizens to ask whether these are the kinds of activities we want our governments to be conducting on our behalf. Furthermore, we need to keep these kinds of facts in mind and, ideally, see them in news reporting to better contextualize the operations which are undertaken by domestic and foreign intelligence agencies alike.


  1. While it’s several years past 2012, the 2021 UK HCSEC report found that it continued “to uncover issues that indicate there has been no overall improvement over the course of 2020 to meet the product software engineering and cyber security quality expected by the NCSC.” (boldface in original) ↩︎
  2. It is worth noting that, post-2012, the Chinese government has passed national security legislation which may make it easier to compel Chinese nationals to operate as intelligence assets, inclusive of technicians who have privileged access to telecommunications equipment that is being maintained outside China. That having been said, and as helpfully pointed out by Graham Webster, this case demonstrates that the national security laws were not needed in order to use human agents or assets to deploy implants. ↩︎
  3. There is a baseline question of whether the CSE Act created new powers for the CSE in this regard or if, instead, it merely codified existing secret policies or legal interpretations which had previously authorized the CSE to undertake covert activities in carrying out its foreign signals intelligence operations. ↩︎

Mandatory Patching of Serious Vulnerabilities in Government Systems

Photo by Mati Mango on Pexels.com

The Cybersecurity and Infrastructure Security Agency (CISA) is responsible for building national capacity to defend American infrastructure and cybersecurity assets. In the past year it has been tasked with receiving information about American government agencies’ progress (or lack thereof) in implementing elements of Executive Order 14028: Improving the Nation’s Cybersecurity, and has been involved in responses to a number of incidents, including the SolarWinds supply chain compromise and the Colonial Pipeline ransomware attack. The Executive Order required that CISA first collect a large volume of information from government agencies and vendors alike to assess the threats towards government infrastructure and, subsequently, to provide guidance concerning cloud services, track the adoption of multi-factor authentication and seek ways of facilitating its implementation, establish a framework to respond to security incidents, enhance CISA’s threat hunting abilities in government networks, and more.1

Today, CISA promulgated a binding operational directive that will require American government agencies to adopt more aggressive patch tempos for vulnerabilities. In addition to requiring agencies to develop formal policies for remediating vulnerabilities, it establishes a requirement that vulnerabilities with a Common Vulnerabilities and Exposures (CVE) ID assigned prior to 2021 be remediated within six months, and all others within two weeks. The vulnerabilities to be patched/remediated are those listed in CISA’s “Known Exploited Vulnerabilities Catalogue.”

It’s notable that while patching is obviously preferred, the CISA directive doesn’t mandate patching per se, but rather that ‘remediation’ take place.2 As such, organizations may be authorized to deploy defensive measures that prevent a vulnerability from being exploited without actually patching the underlying flaw, so as to avoid a patch having unintended consequences for either the application in question or for other applications/services that rely on outdated or bespoke programming interfaces.
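Because the catalogue is published as a machine-readable JSON feed, an agency could in principle script its triage of remediation deadlines. The sketch below is a minimal illustration under assumed inputs: the `overdue` helper, the sample entries, and the CVE IDs are all invented for this post, though the field names mirror the schema of the published feed.

```python
import datetime
import json

# Illustrative sketch only: the entries and CVE IDs below are invented,
# but the field names ("cveID", "dueDate") follow the schema of CISA's
# Known Exploited Vulnerabilities (KEV) catalogue JSON feed.
KEV_SAMPLE = json.loads("""
{
  "vulnerabilities": [
    {"cveID": "CVE-2021-00001", "dueDate": "2021-11-17"},
    {"cveID": "CVE-2017-00002", "dueDate": "2022-05-03"}
  ]
}
""")

# CVEs a hypothetical agency has found present in its own inventory.
DEPLOYED = {"CVE-2021-00001"}


def overdue(catalogue: dict, deployed: set, today: datetime.date) -> list:
    """Return deployed CVEs whose remediation due date has passed."""
    late = []
    for vuln in catalogue["vulnerabilities"]:
        due = datetime.date.fromisoformat(vuln["dueDate"])
        if vuln["cveID"] in deployed and due < today:
            late.append(vuln["cveID"])
    return late


print(overdue(KEV_SAMPLE, DEPLOYED, datetime.date(2022, 1, 1)))
# prints ['CVE-2021-00001']
```

In practice an agency would fetch the live feed and compare it against a real asset inventory; the point here is simply that the directive’s due-date model lends itself to automated tracking, whatever form the eventual remediation (patch or mitigation) takes.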

In the Canadian context, there aren’t equivalent requirements that can be placed on Canadian federal departments. While Shared Services Canada can strongly encourage departments to patch, the Treasury Board Secretariat has published a “Patch Management Guidance” document, and the Canadian Centre for Cyber Security has a suggested patch deployment schedule,3 final decisions still rest with each department’s respective deputy minister under the Financial Administration Act.

The Biden administration is moving quickly to accelerate its ability to identify and remediate vulnerabilities while simultaneously letting its threat intelligence staff track adversaries in American networks. That last element is less of an issue in the Canadian context, but the first two remain pressing and serious challenges.

While it’s positive to see the Americans moving quickly to improve their security posture, I can only hope that the Canadian federal, and provincial, governments similarly clear the long-standing logjams that delegate security decisions to parties who may be ill-suited to make them, whether out of ignorance or because patching systems is seen as secondary to fulfilling a given department’s primary service mandate.


  1. For a discussion of the Executive Order, see: “Initial Thoughts on Biden’s Executive Order on Improving the Nation’s Cybersecurity” or “Everything You Need to Know About the New Executive Order on Cybersecurity.” ↩︎
  2. For more, see CISA’s “Vulnerability Remediation Requirements“. ↩︎
  3. “CCCS’s deployment schedule only suggests timelines for deployment. In actuality, an organization should take into consideration risk tolerance and exposure to a given vulnerability and associated attack vector(s) as part of a risk‑based approach to patching, while also fully considering their individual threat profile. Patch management tools continue to improve the efficiency of the process and enable organizations to hasten the deployment schedule.” Source: “Patch Management Guidance↩︎

Apple Music Voice Plan: The New iPod Shuffle?

A lot of tech commentators are scratching their heads over Apple’s new Apple Music Voice Plan. The plan is half the price of a ‘normal’ Apple Music subscription. Subscribers can ask Siri to play songs or playlists but will not have access to a text- or icon-based way to search for or play music.

I am dubious that this will be a particularly successful music plan. Siri is the definition of a not-good (and very bad) voice assistant.

Nevertheless, Apple has released this music plan into the world. I think it’s probably most like the old iPod Shuffle, which lacked any real ability to select or manage one’s music. The Shuffle was a cult favourite.

I have a hard time imagining a Siri-based interface developing a cult following like the iPods of yore, but the same thing was thought about the old Shuffle, too.

Detecting Academic National Security Threats

Photo by Pixabay on Pexels.com

The Canadian government is following in the footsteps of its American counterpart and has introduced national security assessments for recipients of government natural science (NSERC) funding. Such assessments will occur when proposed research projects are deemed sensitive and where private funding is also used to facilitate the research in question. Social science (SSHRC) and health (CIHR) funding will be subject to these assessments in the near future.

I’ve written, elsewhere, about why such assessments are likely fatally flawed. In short, they will inhibit student training, will cast suspicion upon researchers of non-Canadian nationalities (and especially upon researchers who hold citizenship with ‘competitor nations’ such as China, Russia, and Iran), and may encourage researchers to hide their sources of funding to be able to perform their required academic duties while also avoiding national security scrutiny.

To be clear, such scrutiny often carries explicit racist overtones, has led to many charges but few convictions in the United States, and presupposes that academic units or government agencies can detect a human-based espionage agent. Further, it presupposes that HUMINT-based espionage poses a threat to research productivity that is as serious as, or more serious than, cyber espionage. As of today, there is no evidence in the Canadian public record indicating that the threat facing Canadian academics justifies the invasiveness of the assessments, nor that human-based espionage is a greater risk than cyber-based means.

To the best of my knowledge, while HUMINT-based espionage does generate some concerns, those concerns pale in comparison to the risk of espionage linked to cyber operations.

However, these points are not the principal focus of this post. I recently re-read some older work by Bruce Schneier that I think nicely casts why asking scholars to engage in national security assessments of their own, and their colleagues’, research is bound to fail. Schneier wrote the following in 2007, when discussing the US government’s “see something, say something” campaign:

[t]he problem is that ordinary citizens don’t know what a real terrorist threat looks like. They can’t tell the difference between a bomb and a tape dispenser, electronic name badge, CD player, bat detector, or trash sculpture; or the difference between terrorist plotters and imams, musicians, or architects. All they know is that something makes them uneasy, usually based on fear, media hype, or just something being different.

Replace “terrorist” with “national security” threat and we reach approximately the same conclusions. Individuals—even those trained to detect and investigate human-intelligence-driven espionage—can find it incredibly difficult to detect human agent-enabled espionage. Expecting academics to do so, when they are motivated to develop international and collegial relationships, may be unable to assess the national security implications of their own research, and are being told to abandon funding that the government will not replace, guarantees that this measure will fail.

What will that failure mean, specifically? It will involve incorrect assessments and suspicion being aimed at scholars from ‘competitor’ and adversary nations. Scholars will question whether they should work with a Chinese, Russian, or Iranian colleague even when that person is employed at a Western university, let alone at a non-Western institution. I doubt these same scholars will similarly question whether they should work with Finnish, French, or British colleagues. Nationality and ethnicity will become the lenses used to assess who are the ‘right’ people with whom to collaborate.

Failure will not just affect professors. It will also extend to undergraduate and graduate students, as well as post-doctoral fellows and university staff. Already, students are questioning what they must do to prove that they are not national security threats. Lab staff and other employees who have access to university research environments will similarly be placed under an aura of suspicion. We should not, and must not, create an academy where these are the kinds of questions with which our students, colleagues, and staff must grapple.

Espionage is, it must be recognized, a serious issue facing universities and Canadian businesses more broadly. The solution cannot be to ignore it and hope that the activity goes away. However, the response to such threats must be necessary and proportionate, and must demonstrably involve evidence-based and inclusive policy making. The program currently being rolled out by the Government of Canada does not meet these conditions and, as such, needs to be repealed.