Mandatory Patching of Serious Vulnerabilities in Government Systems

Photo by Mati Mango on Pexels.com

The Cybersecurity and Infrastructure Security Agency (CISA) is responsible for building national capacity to defend American infrastructure and cybersecurity assets. In the past year it has been tasked with receiving information about American government agencies’ progress (or lack thereof) in implementing elements of Executive Order 14028: Improving the Nation’s Cybersecurity, and it has been involved in responses to a number of events, including SolarWinds and the Colonial Pipeline ransomware attack. The Executive Order required that CISA first collect a large volume of information from government agencies and vendors alike to assess the threats towards government infrastructure and, subsequently, to provide guidance concerning cloud services, track the adoption of multi-factor authentication and seek ways of facilitating its implementation, establish a framework to respond to security incidents, enhance CISA’s threat hunting abilities in government networks, and more.1

Today, CISA promulgated a binding operational directive that will require American government agencies to adopt more aggressive patch tempos for vulnerabilities. In addition to requiring agencies to develop formal policies for remediating vulnerabilities, it establishes a requirement that catalogued vulnerabilities with a CVE ID assigned prior to 2021 be remediated within six months, and all others within two weeks. The vulnerabilities to be patched or otherwise remediated are listed in CISA’s “Known Exploited Vulnerabilities Catalog.”

It’s notable that while patching is obviously preferred, the CISA directive doesn’t mandate patching per se, only that ‘remediation’ take place.2 As such, organizations may be authorized to deploy defensive measures that prevent a vulnerability from being exploited without actually patching the underlying flaw, so as to avoid a patch having unintended consequences for either the application in question or for other applications/services that rely on outdated or bespoke programming interfaces.
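
To make those timelines concrete, here is a minimal Python sketch of the directive’s due-date rule. It is illustrative only: the 182-day window approximates the directive’s six calendar months, and the example CVE IDs and dates are hypothetical rather than drawn from the catalog itself.

```python
from datetime import date, timedelta

def remediation_due_date(cve_id: str, date_added: date) -> date:
    """Apply the BOD 22-01 rule: CVEs assigned before 2021 get roughly six
    months from the date they were added to the catalog; everything else
    gets two weeks."""
    cve_year = int(cve_id.split("-")[1])  # e.g. "CVE-2019-11510" -> 2019
    if cve_year < 2021:
        return date_added + timedelta(days=182)  # approximates six months
    return date_added + timedelta(days=14)       # two weeks

# Hypothetical examples: an older CVE and a 2021 CVE added the same day.
print(remediation_due_date("CVE-2019-11510", date(2021, 11, 3)))  # 2022-05-04
print(remediation_due_date("CVE-2021-12345", date(2021, 11, 3)))  # 2021-11-17
```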

In the Canadian context, equivalent requirements cannot be placed on Canadian federal departments. While Shared Services Canada can strongly encourage departments to patch, the Treasury Board Secretariat has published a “Patch Management Guidance” document, and the Canadian Centre for Cyber Security has a suggested patch deployment schedule,3 final decisions still rest with individual departments’ respective deputy ministers under the Financial Administration Act.

The Biden administration is moving quickly to accelerate its ability to identify and remediate vulnerabilities while simultaneously letting its threat intelligence staff track adversaries in American networks. That last element is less of an issue in the Canadian context, but the first two remain pressing and serious challenges.

While it’s positive to see the Americans moving quickly to improve their security posture, I can only hope that the Canadian federal, and provincial, governments similarly clear long-standing logjams that delegate security decisions to parties who may be ill-suited to make optimal decisions, whether out of ignorance or because patching systems is seen as secondary to fulfilling a given department’s primary service mandate.


  1. For a discussion of the Executive Order, see: “Initial Thoughts on Biden’s Executive Order on Improving the Nation’s Cybersecurity” or “Everything You Need to Know About the New Executive Order on Cybersecurity.” ↩︎
  2. For more, see CISA’s “Vulnerability Remediation Requirements“. ↩︎
  3. “CCCS’s deployment schedule only suggests timelines for deployment. In actuality, an organization should take into consideration risk tolerance and exposure to a given vulnerability and associated attack vector(s) as part of a risk‑based approach to patching, while also fully considering their individual threat profile. Patch management tools continue to improve the efficiency of the process and enable organizations to hasten the deployment schedule.” Source: “Patch Management Guidance.” ↩︎

Solved: “A Server With This Hostname Cannot Be Found” In iOS

For the past few days whenever I’ve been using my iPhone on a cellular connection I’ve been unable to play podcasts or stream music, or do anything else that requires an Internet connection. The title of this post refers to the error I was receiving in Apple Music whenever I tried to play something.

After spending a bit of time diagnosing the issue it became apparent that the problem originated in the VPN service that I use to scan for, and block, trackers and malicious content. Specifically, the 1Blocker application currently has a problem when it uses DNS Proxy-based scanning for its firewall.
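
For anyone wanting to run the same diagnosis, a quick way to confirm that the failure is at the DNS layer rather than general connectivity is to attempt hostname resolution directly. This is a rough sketch, assuming you can run Python on a device sharing the affected connection; the hostnames are just examples:

```python
import socket

def can_resolve(hostname: str) -> bool:
    """Return True if the hostname resolves to at least one address."""
    try:
        socket.getaddrinfo(hostname, 443)
        return True
    except socket.gaierror:
        return False

# Run once with the VPN/DNS-proxy profile active, then again with it off.
for host in ("music.apple.com", "example.com"):
    print(host, "resolves" if can_resolve(host) else "does NOT resolve")
```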

While one solution involves disabling 1Blocker’s VPN functionality entirely,1 you can also switch to HTTP Proxy-based scanning in 1Blocker to resolve the issue. To do so:

  1. Open the 1Blocker application
  2. Open the Firewall tab
  3. Tap the ‘…’ in the upper right corner
  4. Select ‘HTTP Proxy’

At the moment the company is asserting that the problem originates from “an ongoing connectivity issue that affects some mobile network operators.” No further information has been provided.

It’s possible that this will be resolved if carriers fix whatever is wrong on their end, though there isn’t a public ETA for this occurring at the moment.


  1. Settings > VPN > the (i) button beside 1Blocker > Turn off ‘Connect on Demand’ > return to VPN and set status to ‘Disconnected’ ↩︎

Apple Music Voice Plan: The New iPod Shuffle?

A lot of tech commentators are scratching their heads over Apple’s new Apple Music Voice Plan. The plan is half the price of a ‘normal’ Apple Music subscription. Subscribers can ask Siri to play songs or playlists but will not have access to a text-based or icon-based way to search for or play music.

I am dubious that this will be a particularly successful music plan. Siri is the definition of a not-good (and very bad) voice assistant.

Nevertheless, Apple has released this music plan into the world. I think it’s probably most like the old iPod Shuffle, which lacked any real ability to select or manage an individual’s music. The Shuffle was a cult favourite.

I have a hard time imagining a Siri-based interface developing a cult following like the iPods of yore, but the same thing was thought about the old Shuffle, too.

Detecting Academic National Security Threats

Photo by Pixabay on Pexels.com

The Canadian government is following in the footsteps of its American counterpart and has introduced national security assessments for recipients of government natural science (NSERC) funding. Such assessments will occur when proposed research projects are deemed sensitive and where private funding is also used to facilitate the research in question. Social science (SSHRC) and health (CIHR) funding will be subject to these assessments in the near future.

I’ve written, elsewhere, about why such assessments are likely fatally flawed. In short, they will inhibit student training, will cast suspicion upon researchers of non-Canadian nationalities (and especially upon researchers who hold citizenship with ‘competitor nations’ such as China, Russia, and Iran), and may encourage researchers to hide their sources of funding to be able to perform their required academic duties while also avoiding national security scrutiny.

To be clear, such scrutiny often carries explicit racist overtones, has led to many charges but few convictions in the United States, and presupposes that academic units or government agencies can detect a human-based espionage agent. Further, it presupposes that HUMINT-based espionage is a more serious, or equivalent, threat to research productivity as compared to cyber-espionage. As of today, there is no evidence in the public record in Canada indicating that the threat facing Canadian academics is commensurate with the invasiveness of the assessments, nor that human-based espionage is a greater risk than cyber-based means.

To the best of my knowledge, while HUMINT-based espionage does generate some concerns they pale in comparison to the risk of espionage linked to cyber-operations.

However, these points are not the principal focus of this post. I recently re-read some older work by Bruce Schneier that I think nicely casts why asking scholars to engage in national security assessments of their own, and their colleagues’, research is bound to fail. Schneier wrote the following in 2007, when discussing the US government’s “see something, say something” campaign:

[t]he problem is that ordinary citizens don’t know what a real terrorist threat looks like. They can’t tell the difference between a bomb and a tape dispenser, electronic name badge, CD player, bat detector, or trash sculpture; or the difference between terrorist plotters and imams, musicians, or architects. All they know is that something makes them uneasy, usually based on fear, media hype, or just something being different.

Replace “terrorist” with “national security” in that passage and we reach approximately the same conclusions. Individuals—even those trained to detect and investigate human intelligence driven espionage—can find it incredibly difficult to detect human agent-enabled espionage. Expecting academics to do so, when they are motivated to develop international and collegial relationships, when they may be unable to assess the national security implications of their research, and when they are being told to abandon funding that the government will not supplement, guarantees that this measure will fail.

What will that failure mean, specifically? It will involve incorrect assessments and suspicion being aimed at scholars from ‘competitor’ and adversary nations. Scholars will question whether they should work with a Chinese, Russian, or Iranian scholar even when that scholar is employed at a Western university, let alone at a non-Western institution. I doubt these same scholars will similarly question whether they should work with Finnish, French, or British colleagues. Nationality and ethnicity will become the lenses used to assess who are the ‘right’ people with whom to collaborate.

Failure will not just affect professors. It will also extend to affect undergraduate and graduate students, as well as post-doctoral fellows and university staff. Already, students are questioning what they must do in order to prove that they are not considered national security threats. Lab staff and other employees who have access to university research environments will similarly be placed under an aura of suspicion. We should not, we must not, create an academy where these are the kinds of questions with which our students and colleagues and staff must grapple.

Espionage is, it must be recognized, a serious issue that faces universities and Canadian businesses more broadly. The solution cannot be to ignore it and hope that the activity goes away. However, the response to such threats must demonstrate necessity and proportionality and demonstrably involve evidence-based and inclusive policy making. The current program that is being rolled out by the Government of Canada does not meet this set of conditions and, as such, needs to be repealed.

Repurposing Apple Time Capsule as a Network Drive

(Photo by MockupEditor.com on Pexels.com)

For the past several years I’ve happily used an Apple Time Capsule as my router and one of many backup drives, but it’s been getting a bit long in the tooth as the number of items on my network has grown. I recently upgraded to a new router but wanted to continue using my Time Capsule, and its very large drive, for LAN backups.

A post in Apple’s discussion forums helpfully explained how to reset the wireless settings for the Time Capsule and prepare it to live on the network purely as a drive. After following those instructions, all I needed to do was:

  1. Open Time Machine Preferences on my device;
  2. Select ‘Add or Remove Backup Disk…’;
  3. Select the freshly networked disk;
  4. Choose to use the pre-existing backup image, and input the encryption password for the backup.

Voila! And now my disk–with all its data–is available on the network and capable of continuing my Time Machine backups!
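
If you prefer the command line to the Time Machine preference pane, macOS’s tmutil can point Time Machine at the same share. What follows is a hedged sketch: the AFP URL, account name, and volume name are placeholders for your own Time Capsule’s values, and the share may need to be mounted (or its password embedded in the URL) before tmutil will accept it.

```python
# A sketch of pointing Time Machine at a network share via tmutil.
# All names below are hypothetical; substitute your Time Capsule's
# account, hostname, and volume. tmutil requires root privileges.
import subprocess

destination = "afp://backup@TimeCapsule.local/Data"  # hypothetical share URL

# -a adds this share as an additional destination instead of replacing
# any existing backup destinations.
subprocess.run(["sudo", "tmutil", "setdestination", "-a", destination],
               check=True)
```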

The Kaseya Ransomware Attack Is a Really Big Deal

(Managed Service Provider image by the Canadian Centre for Cyber Security)

Matt Tait, as usual, has good insights into just why the Kaseya ransomware attack1 was such a big deal:

In short, software supply chain security breaches don’t look like other categories of breaches. A lot of this comes down to the central conundrum of system security: it’s not possible to defend the edges of a system without centralization so that we can pool defensive resources. But this same centralization concentrates offensive action against a few single points of failure that, if breached, cause all of the edges to fall at once. And the more edges that central failure point controls, the more likely the collateral real-world consequences of any breach, but especially a ransomware breach, will be catastrophic and overwhelm the defensive cybersecurity industry’s ability to respond.

Managed Service Providers (MSPs) are becoming increasingly common targets. It’s worth noting that the Canadian Centre for Cyber Security‘s National Cyber Threat Assessment 2020 listed ransomware as well as the exploitation of MSPs as two of the seven key threats to Canadian financial and economic health. The Centre went so far as to state that it expected,

… that over the next two years ransomware campaigns will very likely increasingly target MSPs for the purpose of targeting their clients as a means of scaling targeted ransomware campaigns.

Sadly, if not surprisingly, this assessment has been entirely correct. It remains to be seen what impact the 2020 threat assessment has had, or will have, on Canadian organizations and their security postures. Based on conversations I’ve had over the past few months, the results are not inspiring and the threat assessment has generally been less effective than hoped in driving change in Canada.

As discussed by Steven Bellovin, part of the broader challenge for the security community in preparing for MSP operations has been that defenders are routinely behind the times; operators modify what and who their campaigns will target and defenders are forced to scramble to catch up. He specifically, and depressingly, recognizes that, “…when it comes to target selection, the attackers have outmaneuvered defenders for almost 30 years.”

These failures are that much more noteworthy given that the United States has trumpeted for years that it will ‘defend forward‘ to identify and hunt threats, and respond to them before they reach ‘American cybershores’.2 The seemingly now-routine targeting of both system update mechanisms and vendors which provide security or operational controls for wide swathes of organizations demonstrates that things are going to get a lot worse before they’re likely to improve.

A course correction could follow from Western nations developing effective and meaningful cyber-deterrence processes that encourage nations such as Russia, China, Iran, and North Korea to punish computer operators who are behind some of the worst kinds of operations that have emerged in public view. However, this would in part require the American government (and its allies) to actually figure out how they can deter adversaries. It’s been 12 years or so, and counting, and it’s not apparent that any American administration has figured out how to implement a deterrence regime that exceeds issuing toothless threats. The same goes for most of their allies.

Absent an actual deterrence response, such as one which takes action in sovereign states that host malicious operators, Western nations have slowly joined together to issue group attributions of foreign operations. They’ve also come together to recognize certain classes of cyber operations, including ransomware, as particularly problematic. Must nations build this shared capacity first, before they can actually undertake deterrence activities? Should that be the case, it would strongly underscore the need to develop shared norms in advance of sovereign states exercising their latent capacities in cyber and other domains, and lend credence to the importance of the Tallinn Manual process. If, however, this capacity is built and nothing is still undertaken to deter, then what will the capacity actually be worth? While this is a fascinating scholarly exercise–it’s basically an opportunity to test competing scholarly hypotheses–it’s one with significant real-world consequences, and the danger is that by the time we recognize which hypothesis is correct, years of time and effort may have been wasted for little apparent gain.

What’s worse is that this is even a scholarly exercise at all. Given that more than a decade has passed, and that ‘cyber’ is not truly new anymore, why must hypotheses be spun instead of states having developed sufficient capacity to deter? Where are Western states’ muscles after so much time working this problem?


  1. As a point of order, when is an act of ransomware an attack versus an operation? ↩︎
  2. I just made that one up. No, I’m not proud of it. ↩︎

Vaccination, Discrimination, and Canadian Civil Liberties

Photo by Karolina Grabowska on Pexels.com

Civil liberties debates about whether individuals should have to get vaccinated against Covid-19 are on the rise. Civil liberties groups broadly worry that individuals will suffer intrusions into their privacy, or that rights of association or other rights will be unduly abridged, as businesses and employers require individuals to demonstrate proof of vaccination.

As discussed in a recent article published by the CBC, some individuals are unable to receive, or are concerned about receiving, Covid-19 vaccines on the basis that, “they’re taking immunosuppressant drugs, for example, while others have legitimate concerns about the safety and efficacy of the COVID-19 vaccines or justifiable fears borne from previous negative interactions with the health-care system.” The expert quoted, Arthur Schafer of the Centre for Professional and Applied Ethics at the University of Manitoba, also said, “[w]e should try to accommodate people who have objections, conscientious or scientific or even religious, where we can do so without compromising public safety and without incurring a disproportionate cost to society.”

Other experts, such as Ann Cavoukian, worry that being compelled to disclose vaccination status could jeopardize individuals’ medical information should it be shared with parties who are not equipped to protect it, or who may combine it with other information to discriminate against individuals. The Canadian Civil Liberties Association, for its part, has taken the stance that individuals should have the freedom to choose to be vaccinated or not, that no compulsions should be applied to encourage vaccination (e.g., requiring vaccination to attend events), and broadly that, “COVID is just another risk now that we have to incorporate into our daily lives.”

In situations where individuals are unable to be vaccinated, either due to potential allergic responses or lack of availability of vaccine (e.g., those under the age of 12), then it is imperative to ensure that individuals do not face discrimination. In these situations, those affected cannot receive a vaccine and it is important to not create castes of the vaccinated and unable-to-be-vaccinated. For individuals who are hesitant due to historical negative experiences with vaccination efforts, or medical experimentation, some accommodations may also be required.

However, in cases where vaccines are available and there are opportunities to receive said vaccine, then not getting vaccinated does constitute a choice. As it stands today, children in many Canadian schools are required to receive a set of vaccinations in order to attend school and, if their parents refuse, the children are required to use alternate educational systems (e.g., home schooling). When parents make a specific choice they are compelled to deal with the consequences of said decision. (Of course, there is not a vaccine for individuals under 12 years of age at the moment and so we shouldn’t be barring unvaccinated children from schools, but adopting such a requirement in the future might align with how schools regularly require proof of vaccination status to attend public schools.)

The ability to attend a concert, as an example, can and should be predicated on vaccination status where vaccination is an option for attendees. Similarly, if an individual refuses to be vaccinated their decision may have consequences in cases where they are required to be in-person in their workplace. There may be good reasons for why some workers decline to be vaccinated, such as a lack of paid days off and fear that losing a few days of work due to vaccination symptoms may prevent them from paying the rent or getting food; in such cases, accommodations to enable them to get vaccinated are needed. However, once such accommodations are made decisions to continue to not get vaccinated may have consequences.

In assessing whether policies are discriminatory individuals’ liberties as well as those of the broader population must be taken into account, with deliberate efforts made to ensure that group rights do not trample on the rights of minority or disenfranchised members of society. Accommodations must be made so that everyone can get vaccinated; rules cannot be established that apply equally but affect members of society in discriminatory ways. But, at the same time, the protection of rights is conditional and mitigating the spread of a particularly virulent disease that has serious health and economic effects is arguably one of those cases where protecting the community (and, by extension, those individuals who are unable to receive a vaccine for medical reasons) is of heightened importance.

Is this to say that there are no civil liberties concerns that might arise when vaccinating a population? No, obviously not.

In situations where individuals are unhoused or otherwise challenged in keeping or retaining a certification that they have been vaccinated, then it is important to build policies that do not discriminate against these classes of individuals. Similarly, if there is a concern that vaccination passes might present novel security risks that have correlate rights concerns (e.g., a digital system that links presentations of a vaccination credential with locational information) then it is important to carefully assess, critique, and re-develop systems so that they provide the minimum data required to reduce the risk of Covid-19’s spread. Also, as the population of vaccinated persons reaches certain percentages there may simply be less of a need to assess or check that someone is vaccinated. While this means that some ‘free riders’ will succeed, insofar as they will decline to be vaccinated and not suffer any direct consequences, the goal is not to punish people who refuse vaccination and instead to very strongly encourage enough people to get vaccinated so that the population as a whole is well-protected.

However, taking a position that Covid-19 is part of society and that society just has to get used to people refusing to be vaccinated while participating in ‘regular’ social life, and that this is just a cost of enjoying civil liberties, seems like a bad argument and a poor framing of the issue. Making this kind of broader argument risks pushing the majority of Canadians towards discounting all reasons that individuals may present to justify or explain not getting vaccinated, with the effect of inhibiting civil society from getting the public on board to protect the rights of those who would be harmfully affected by mandatory vaccination policies or demands that individuals always carry vaccine passport documents.

Those who have made a choice to opt-out of vaccination may experience resulting social costs, but those who cannot opt to get a vaccine in the first place or who have proven good reasons for avoiding vaccination shouldn’t be unduly disadvantaged. That’s the line in the sand to hold and defend, not that protecting civil liberties means that there should be no cost for voluntarily opting out of life saving vaccination programs.

Building a Strategic Vision to Combat Cybercrime

The Financial Times has a good piece examining how insurance companies are beginning to recalculate the premiums they charge to cover ransomware payments. In addition to raising fees (and, in some cases, deciding whether to offer ransomware coverage at all) some insurers like AIG are adopting stronger underwriting, including:

… an additional 25 detailed questions on clients’ security measures. “If [clients] have very, very low controls, then we may not write coverage at all,” Tracie Grella, AIG’s global head of cyber insurance, told the Financial Times.

To be sure, there is an ongoing, and chronic, challenge of getting companies to adopt baseline security postures, inclusive of running moderately up-to-date software, adopting multi-factor authentication, employing encryption at rest, and more. In the Canadian context this is made that much harder because the majority of Canadian businesses are small and mid-sized; they don’t have an IT team that can necessarily maintain or improve their organization’s increasingly complicated security posture.

In the case of larger mid-sized, or just large, companies the activities of insurers like AIG could force them to modify their security practices for the better. Insurance is generally regarded as cheaper than security, so insurers demanding better security as a condition of coverage is a way of incentivizing organizational change. Further change can be incentivized by government adopting policies such as requiring a particular security posture in order to bid on, or receive, government contracts. This governmental incentivization won’t necessarily encourage change in small organizations that already find it challenging to contract with government due to the level of bureaucracy involved. For other organizations, however, it will mean that to obtain or maintain government contracts they’ll need to focus on getting the basics right. Again, this is about aligning incentives so that organizations see value in changing their operational policies and postures to close off at least some security vulnerabilities. There may be trickle-down effects to these measures as well, insofar as even small companies may adopt better security postures based on actionable guidance made available to the smaller suppliers of those mid-sized and larger organizations which do have to abide by insurers’ or governments’ requirements.1

While the aforementioned incentives might improve the cybersecurity stance of some organizations, the key driver of ransomware and other criminal activities online is their sheer profitability. The economics of cybercrime have been explored in some depth over the past 20 years or so, and the conclusions that have been reached range from focusing efforts on actually convicting cybercriminals (admittedly hard where countries like Russia and other former Soviet republics shield criminals who do not target CIS-region organizations or governments) to selectively targeting payment processors or other intermediaries that make it possible to derive revenues from the criminal activities.

Clearly it’s not possible to prevent all cybercrime, nor is it possible to do all things at once: we can’t simultaneously incentivize organizations to adopt better security practices, encourage changes to insurance schemas, and find and address weak links in cybercrime monetization systems with the snap of a finger. However, each of the aforementioned pieces can be done with a strategic vision of enhancing defenders’ postures while impeding the economic incentives that drive online criminal activities. Such a vision is ostensibly shared by a very large number of countries around the world. Consequently, in theory, this kind of strategic vision is one that states can cooperate on across borders and, in the process, build up or strengthen alliances focused on addressing challenging international issues pertaining to finance, crime, and cybersecurity. Surely that’s a vision worth supporting and actively working towards.


  1. To encourage small suppliers to adopt better security practices when they are working with larger organizations that have security requirements placed on them, governments might set aside funds to assist the mid-sized and large-sized vendors in securing their supply chains further down, and thus relieve small businesses of these costs. ↩︎

Two Thoughts on China’s Draft Privacy Law

Alexa Lee, Samm Sacks, Rogier Creemers, Mingli Shi, and Graham Webster have collectively written a helpful summary of China’s new draft privacy law, the Personal Information Protection Law (PIPL), over at Stanford’s DigiChina.

Two features jump out at me most.

First, the proposed legislation will compel Chinese companies “to police the personal data practices across their platforms” as part of Article 57. As noted by the team at Stanford,

“the three responsibilities identified for big platform companies here resonate with the “gatekeeper” concept for online intermediaries in Europe, and a requirement for public social responsibility reports echoes the DMA/DSA mandate to provide access to platform data by academic researchers and others. The new groups could also be compared with Facebook’s nominally independent Oversight Board, which the company established to review content moderation decisions.”

I’ll be particularly curious to see the kinds of transparency reporting that emerge from these companies. I doubt the reports will parallel those in the West, which tend to focus on the processes and number of disclosures from private companies to government; instead, the Chinese companies’ reports will likely focus on how companies are being ‘socially responsible’ in how they collect, process, and disclose data to other Chinese businesses. Still, if we do see this more consumer-focused approach it will represent yet another transparency report tradition that will be useful to assess in academic and public policy writing.

Second, the Stanford team notes that,

“new drafts of both the PIPL and the DSL added language toughening requirements for Chinese government approval before data holders in China cooperate with foreign judicial or law enforcement requests for data, making failure to gain permission a clear violation punishable by financial penalties up to 1 million RMB.”

While not surprising, this kind of restriction will continue to raise data sovereignty borders around personal information held in China. The effect? Western states will still need to push for Mutual Legal Assistance Treaty (MLAT) reform to successfully extract information from Chinese companies (and, in all likelihood, fail to conclude these reforms).1

It’s perhaps noteworthy that while China is moving to build up walls, there is a simultaneous attempt by the Council of Europe to address issues of law enforcement access to information held by cloud providers (amongst other things). The United States passed the CLOUD Act in 2018 to begin to try to alleviate the issue of states gaining access to information held by cloud providers operating in foreign jurisdictions (though it did not address human rights concerns that were previously mitigated through traditional MLAT processes). Based on the proposed Chinese law, it’s unlikely that the CLOUD Act will gain substantial traction with the Chinese government, though admittedly this wasn’t the aim of the CLOUD Act or an expected outcome of its passage.

Nevertheless, as competing legal frameworks are established that place the West on one side, and China and Russia on the other, the effect will be to further entrench different legal cultures of the Internet across different economic, political, and security regimes. At the same time, criminal actors who routinely behave with technical and legal savvy will be able to store data anywhere in the world, including out of reach of relevant law enforcement agencies.

Ultimately, the raising of regional and national digital borders is a topic to watch, both to keep an eye on what the forthcoming legal regimes will look like and, also, to assess the extent to which the language of ‘strong sovereignty’ or nationalism creeps functionally into legislation around the world.


  1. For more on MLAT reform, see these pieces from Lawfare. ↩︎

Overclassification and Its Impacts

Photo by Wiredsmart on Pexels.com

Jason Healey and Robert Jervis have a thought-provoking piece over at the Modern War Institute at West Point. The crux of the argument is that, as a result of overclassification, it’s challenging if not impossible for policymakers or members of the public (to say nothing of individual analysts in the intelligence community or legislators) to truly understand the nature of contemporary cyberconflict. While a great deal has been written about how Western organizations have been targeted by foreign operators, and how Western governments have been detrimentally affected by foreign operations, considerably less has been written about the effects of Western governments’ own operations against foreign states, because those operations are classified.

To put it another way, there’s no real way of understanding the cause and effect of operations, insofar as it’s not apparent why foreign operators are behaving as they are in what may be reaction to Western cyber operations or perceptions of Western cyber operations. The kinds of communiques provided by American intelligence officials, while somewhat helpful, also tend to obscure as much as they reveal (on good days). Healey and Jervis write:

General Nakasone and others are on solid ground when highlighting the many activities the United States does not conduct, like “stealing intellectual property” for commercial profit or disrupting the Olympic opening ceremonies. There is no moral equivalent between the most aggressive US cyber operations like Stuxnet and shutting down civilian electrical power in wintertime Ukraine or hacking a French television station and trying to pin the blame on Islamic State terrorists. But it clouds any case that the United States is the victim here to include such valid complaints alongside actions the United States does engage in, like geopolitical espionage. The concern of course is a growing positive feedback loop, with each side pursuing a more aggressive posture to impose costs after each fresh new insult by others, a posture that tempts adversaries to respond with their own, even more aggressive posture.

Making things worse, the researchers and academics who are ostensibly charged with better understanding and unpacking what Western intelligence agencies are up to sometimes decline to fulfill their mandate. The reasons are not surprising: engaging in such revelations threatens possible career prospects, endangers the very publication of the research in question, or risks cutting off access to interview subjects in the future. Healey and Jervis focus on the bizarre logics of working in and researching the intelligence community in the United States, saying (with emphasis added):

Think-tank staff and academic researchers in the United States often shy away from such material (with exceptions like Ben Buchanan) so as not to hamper their chances of a future security clearance. Even as senior researchers, we were careful not to directly quote NSA’s classified assessment of Iran, but rather paraphrased a derivative article.

A student, working in the Department of Defense, was not so lucky, telling us that to get through the department’s pre-publication review, their thesis would skip US offensive operations and instead focus on defense.

Such examples highlight the distorting effects of censorship or overclassification: authors are incentivized to avoid what patrons want ignored and emphasize what patrons want highlighted or what already exists in the public domain. In paper after paper over the decades, new historical truths are cumulatively established in line with patrons’ preferences because they control the flow and release of information.

What are the implications as written by Healey and Jervis? In intelligence communities the size of the United States’, information gets lost or is not passed to those who should ideally receive it. Overclassification also means that policymakers and legislators who aren’t deeply ‘in the know’ will likely make decisions based on half-founded facts, at best. In countries such as Canada, where parliamentary committees cannot access classified information, they will almost certainly be confined to working off of rumour, academic reports, unclassified government reports, media accounts that divulge secrets or gossip, and the words spoken by the heads of security and intelligence agencies. None of this is ideal for controlling these powerful organizations, and the selective presentation of what Western agencies are up to actually risks compounding broader social ills.

Legislative Ignorance and Law

One of the results of overclassification is that legislators, in particular, become ill-suited to actually understanding the national security legislation that is presented before them. It means that members of the intelligence and national security communities can call for powers while members of parliament are largely prevented from asking particularly insightful questions, or from truly appreciating the implications of the powers that are being asked for.

Indeed, in the Canadian context it’s not uncommon for parliamentarians to have debated a national security bill in committee for months and, when asked later about elements of the bill, they admit that they never really understood it in the first place. The same is true for Ministers who have, subsequently, signed off on broad classes of operations that have been authorized by said legislation.

Part of that lack of understanding stems from the absence of examples of how powers have been used in the past and how they might be used in the future; when engaging with this material entirely in the abstract, it can be tough to grasp the likely or possible implications of any legislation or authorization at hand. This is doubly true in situations where new legislation or Ministerial authorization will permit secretive behaviour, often using secretive technologies, to accomplish equally secretive objectives.

Beyond potentially bad legislative debates leading to poorly understood legislation being passed into law and Ministers consenting to operations they don’t understand, what else may follow from overclassification?

Nationalism, Miscalculated Responses, and Racism

To begin with, it creates a situation where ‘we’ in the West are being attacked by ‘them’ in Russia, Iran, China, North Korea, or other distant lands. I think this is problematic because it casts Western nations, and especially those in the Five Eyes, as innocent victims in the broader world of cyber conflict. Of course, individuals with expertise in this space will scoff at the idea–we all know that ‘our side’ is up to tricks and operations as well!–but for the general public or legislators, that doesn’t get communicated using similarly robust or illustrative examples. The result is that the operations of competitor nations can be cast as acts of ‘cyberwar’ without any appreciation that those actions may, in fact, be commensurate with the operations that Five Eyes nations have themselves launched. In creating an Us versus Them, and casting the Five Eyes and West more broadly as victims, a kind of nationalism can be incited where ‘They’ are threats whereas ‘We’ are innocents. In a highly complex and integrated world, these kinds of sharp and inaccurate concepts can fuel hate and socially divisive attitudes, activities, and policies.

At the same time, nations may perceive themselves to be targeted by Five Eyes nations, and attribute effects to Five Eyes operations even when that isn’t the case. When a set of perimeter logs shows something strange, or when computers are affected by ransomware or wiperware, or another kind of security event takes place, these less-resourced nations may simply assume that they’re being targeted by a Five Eyes operation. The result is that foreign governments may drum up nationalist concerns about ‘the West’ or ‘the Five Eyes’ while simultaneously queuing up their own operations to respond to what may, in fact, have been an activity entirely divorced from the Five Eyes.

I also worry that the overclassification problem can lead to statements in Western media that demonize broad swathes of the world as dangerous, bad, or threatening for reasons that are entirely unapparent because Western activities are suppressed from public commentary. Such statements arise with regular frequency, whenever this or that is attributed to China, or when Russia or Middle Eastern countries are blamed for the most recent ill on the Internet.

The effect of such statements can be to incite differential degrees of racism. When mainstream newspapers, as an example, constantly beat the drum that the Chinese government (and, by extension, Chinese people) are threats to the stability and development of national economies or to world stability, over time this has the effect of teaching people that China’s government and citizens alike are dangerous. Moreover, without information about Western activities, the operations conducted by foreign agencies can be read out of context, with the effect that people of certain ethnicities are regarded as inherently suspicious or sneaky as compared to those (principally white) persons who occupy the West. While I would never claim that the overclassification of Western intelligence operations is the root cause of racism in societies, I do believe that overclassification can fuel misinformation about the scope of geopolitics and Western intelligence gathering operations, with the consequence of facilitating certain subsequent racist attitudes.

Solutions

A colleague of mine has, in the past, given presentations and taught small courses in some of Canada’s intelligence community. This colleague lacks any access to classified materials, and his classes focus on how much high-quality information is publicly available when you know how and where to look for it, and how to analyze it. Students are apparently regularly shocked: they have access to the classified materials, but their understanding of the given issues is routinely more myopic and less robust. However, because they have access to classified material they tend to focus as much, or more, on it, because the secretive nature of the material makes it ‘special’.

This is not a unique issue and, in fact, has been raised in the academic literature. When someone has access to special or secret knowledge they are often inclined to focus on that material, on the assumption that it will provide insights in excess of what is available in open source. Sometimes that’s true, but oftentimes less so. And this ‘less so’ becomes especially problematic in an era where governments tend to classify a great deal of material simply because the default is to assume that anything could potentially be revelatory of an agency’s operations. In this kind of era, overvaluing classified materials can lead to less insightful understandings of the issues of the day, while failing to appreciate that much of what is classified, and thus cast as ‘special’, really doesn’t provide much of an edge when engaging in analysis.

The solution is not to declassify all materials but, instead, to adopt far more aggressive declassification processes. This could, as just one example, entail tying declassification in some way to organizations’ budgets, such that if they fail to declassify materials their budgets are realigned in subsequent quarters or years until they make up for the prior year(s)’ shortfalls. Extending the powers of Information Commissioners, who are tasked with forcing government institutions to publish documents when they are requested by members of the public or parliamentarians (preferably subject to a more limited set of exemptions than exist today), might also help. And having review agencies which can unpack the higher-level workings of intelligence community organizations can help as well.
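
As a purely illustrative toy of that budget idea (nothing like this exists in any statute I am aware of, and every number is invented), the mechanism might look like withholding a slice of next year’s budget in proportion to a declassification shortfall:

```python
def next_year_budget(budget: float, target_pages: int,
                     declassified_pages: int,
                     penalty_rate: float = 0.05) -> float:
    """Withhold up to penalty_rate of the budget, scaled by how far the
    agency fell short of its declassification target. All figures are
    hypothetical, for illustration only."""
    shortfall = max(0, target_pages - declassified_pages) / target_pages
    return budget * (1 - penalty_rate * shortfall)

# An agency that declassifies only half of a 10,000-page target on a
# $100M budget would see 2.5% withheld the following year.
print(next_year_budget(100_000_000, 10_000, 5_000))  # 97500000.0
```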

Ultimately, we need to appreciate that national security and intelligence organizations do not exist in a bubble, but that their mandates mean that the externalized problems linked with overclassification are typically not seen as issues that these organizations, themselves, need to solve. Nor, in many cases, will they want to solve them: it can be very handy to keep legislators in the dark and then ask for more powers, all while raising the spectre of the Other and concealing the organizations’ own activities.

We do need security and intelligence organizations, but as they stand today their tendency towards overclassification runs the risk of compounding a range of deleterious conditions. At least one way of ameliorating those conditions almost certainly includes reducing the amount of material that these agencies currently classify as secret and thus kept from public eye. On this point, I firmly agree with Healey and Jervis.