Link

Housing in Ottawa Now a National Security Issue

David Pugliese is reporting in the Ottawa Citizen that the Canadian Forces Intelligence Command (CFINTCOM) is “trying to avoid posting junior staff to Ottawa because it has become too expensive to live in the region.” The risk is that financial hardship associated with living in Ottawa could make junior members susceptible to subversion. Housing costs in Ottawa have risen much faster than either wages or inflation. Moreover, the special allowance provided to staff, which is meant to offset the high cost of living in Canadian cities, has been frozen for 13 years.

At this point energy, telecommunications, healthcare, and housing all raise their own national security concerns. To some extent, such concerns have long tracked these industries: governments have always worried about the security of telecommunications networks as well as the availability of sufficient energy supplies. But in other cases, such as housing affordability, the national security concerns we are seeing are the result of long-term governance failures. These failures have created new national security threats that would not exist in the face of good (or even just better) governance.1

There is a profound danger in trying to address all of these new national security challenges using national security tools or governance processes. National security incidents are often regarded as creating moments of exception and, in such moments, actions can be undertaken that otherwise could not be. The danger is that states of exception become the norm and, in the process, the regular modes of governance and law are significantly set aside to resolve the crises of the day. What is needed is a regeneration and deployment of traditional governance capacity instead of a routine reliance on national security-style responses to these issues.

Of course, governments don’t just need to respond to these metastasized governance problems in order to alleviate national security issues and threats. They need to do so in equitable and inclusive ways, so as to preserve or (re)generate trust between the residents of Canada and their government.

The public may justifiably doubt that their system of government is working when successive governments under the major political parties are seen as having failed to provide for basic needs. The threat, then, is that ongoing governance failures risk placing Canada’s democracy under pressure. While this might seem overstated, I don’t think it is: we are seeing the rise of politicians who are capitalizing on the frustrations and challenges faced by Canadians across the country, but who do not have solutions of their own. Capitalizing on rage and frustration, and then failing to deliver fixes, will only further alienate Canadians from their government.

Governments across Canada flexed their muscles during the earlier phases of the COVID-19 pandemic. Having used those muscles, it’s imperative that they keep flexing them to address the serious issues that Canadians are experiencing. Doing so will assuage existing national security issues. It will also, simultaneously, prevent other, ordinary governance challenges from metastasizing into national security threats.


  1. As an aside, these housing challenges are not necessarily new. Naval staff posted to Esquimalt have long complained about the high costs of off-base housing in Victoria and the surrounding towns and cities. ↩︎

Mitigating AI-Based Harms in National Security


Government agencies throughout Canada are investigating how they might adopt and deploy ‘artificial intelligence’ programs to enhance how they provide services. In the case of national security and law enforcement agencies, these programs might be used to analyze and exploit datasets, surface threats, identify risky travellers, or automatically respond to criminal or threat activities.

However, the predictive software systems that are being deployed–‘artificial intelligence’–are routinely shown to be biased. These biases are serious in the commercial sphere but there, at least, it is somewhat possible for researchers to detect and surface them. In the secretive domain of national security, however, the likelihood of bias in agencies’ software being detected or surfaced by non-government parties is considerably lower.

I know that organizations such as the Canadian Security Intelligence Service (CSIS) have an interest in understanding how to use big data in ways that mitigate bias. The Canadian government has a policy on the “Responsible use of artificial intelligence (AI)” and, at the municipal policing level, the Toronto Police Service has published a policy on its use of artificial intelligence. Furthermore, the Office of the Privacy Commissioner of Canada has published a proposed regulatory framework for AI as part of potential reforms to federal privacy law.

Timnit Gebru, in conversation with Julia Angwin, suggests that datasheets and model cards should accompany datasets and models, outlining how predictive software systems have been tested for bias in different use cases prior to being deployed. Linking this to the datasheets that accompany traditional circuit components, she says (emphasis added):

As a circuit designer, you design certain components into your system, and these components are really idealized tools that you learn about in school that are always supposed to work perfectly. Of course, that’s not how they work in real life.

To account for this, there are standards that say, “You can use this component for railroads, because of x, y, and z,” and “You cannot use this component for life support systems, because it has all these qualities we’ve tested.” Before you design something into your system, you look at what’s called a datasheet for the component to inform your decision. In the world of AI, there is no information on what testing or auditing you did. You build the model and you just send it out into the world. This paper proposed that datasheets be published alongside datasets. The sheets are intended to help people make an informed decision about whether that dataset would work for a specific use case. There was also a follow-up paper called Model Cards for Model Reporting that I wrote with Meg Mitchell, my former co-lead at Google, which proposed that when you design a model, you need to specify the different tests you’ve conducted and the characteristics it has.

What I’ve realized is that when you’re in an institution, and you’re recommending that instead of hiring one person, you need five people to create the model card and the datasheet, and instead of putting out a product in a month, you should actually do it in three years, it’s not going to happen. I can write all the papers I want, but it’s just not going to happen. I’m constantly grappling with the incentive structure of this industry. We can write all the papers we want, but if we don’t change the incentives of the tech industry, nothing is going to change. That is why we need regulation.
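
For concreteness, here is a minimal sketch, in Python, of what a machine-readable model card or datasheet of the sort Gebru describes might look like. The field names and example values are hypothetical and are not taken from the Model Cards or Datasheets papers; the point is simply that this documentation can be a structured artifact that travels with a model and can be checked before deployment.

```python
from dataclasses import dataclass, field


@dataclass
class ModelCard:
    """Hypothetical, minimal record of how a model was tested before deployment."""
    model_name: str
    intended_uses: list[str]            # use cases the model was actually evaluated for
    out_of_scope_uses: list[str]        # uses the developers advise against
    training_data: str                  # provenance/description of the training data
    evaluation_groups: list[str]        # subgroups the model was tested on
    metrics_by_group: dict[str, dict]   # e.g. {"group_a": {"false_positive_rate": 0.08}}
    known_limitations: list[str] = field(default_factory=list)

    def approved_for(self, use_case: str) -> bool:
        """Flag whether a proposed use case was actually evaluated."""
        return use_case in self.intended_uses


card = ModelCard(
    model_name="traveller-risk-triage-v0",           # illustrative name only
    intended_uses=["secondary screening triage"],
    out_of_scope_uses=["automated denial of entry"],
    training_data="historical referral records, 2015-2020 (hypothetical)",
    evaluation_groups=["group_a", "group_b"],
    metrics_by_group={
        "group_a": {"false_positive_rate": 0.08},
        "group_b": {"false_positive_rate": 0.21},     # a disparity a reviewer should catch
    },
)

print(card.approved_for("automated denial of entry"))  # False: never evaluated for this use
```

A review process could then refuse to deploy any model whose card does not list the proposed use case, which is precisely the kind of gatekeeping the quoted passage argues rarely happens without regulation.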

Government is one of those areas where regulation or law can work well to discipline its behaviours, and where the relatively large volume of resources, combined with a law-abiding bureaucracy, might mean that formally required assessments would actually be conducted. While such assessments matter generally, they are of particular importance where state agencies might be involved in making decisions that significantly or permanently alter the life chances of residents of Canada, visitors who are passing through our borders, or foreign nationals who are interacting with our government agencies.

As it stands today, many Canadian government efforts at the federal, provincial, or municipal level seem to be significantly focused on how predictive software might be used or the effects it may have. These are important things to attend to! But it is just as, if not more, important for agencies to undertake baseline assessments of how and when different predictive software engines are permissible or not, based on robust testing and evaluation of their features and flaws.
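
As a toy illustration of what such a baseline test could involve, and not a description of any agency's actual practice, the sketch below computes false positive and false negative rates for a binary predictive system, disaggregated by group. The group labels and records are invented; a real evaluation would use representative data and more than one metric.

```python
from collections import defaultdict


def error_rates_by_group(records):
    """Compute per-group false positive and false negative rates for a binary
    predictive system, given (group, predicted, actual) triples."""
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for group, predicted, actual in records:
        c = counts[group]
        if actual:
            c["pos"] += 1
            if not predicted:
                c["fn"] += 1        # missed a true positive
        else:
            c["neg"] += 1
            if predicted:
                c["fp"] += 1        # flagged someone who should not have been
    return {
        group: {
            "false_positive_rate": c["fp"] / c["neg"] if c["neg"] else None,
            "false_negative_rate": c["fn"] / c["pos"] if c["pos"] else None,
        }
        for group, c in counts.items()
    }


# Invented records: a large gap in error rates between groups is the kind of
# disparity a baseline assessment should surface before deployment.
sample = [
    ("group_a", 1, 0), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 1),
]
print(error_rates_by_group(sample))
```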

Having spoken with people at different levels of government, I find the recurring complaint around assessing training data, and predictive software systems more generally, is that it’s hard to hire the right people for these assessment jobs: qualified candidates are relatively rare and often exceedingly expensive. Thus, mid-level and senior members of government have a tendency to focus on things that government is perceived as actually able to do: figure out and track how predictive systems would be used and to what effect.

However, the regular focus on the resource-related challenges of assessing predictive software raises the very real question of whether these constraints should simply compel agencies to forgo such technologies when their prospective harms cannot be determined and assessed. In the firearms space, as an example, government agencies are extremely rigorous in assessing how a weapon operates to ensure that it functions precisely as intended, given that the weapon might be used in life-changing scenarios. Such assessments require significant sums of money from agency budgets.

If we can make significant budgetary allocations for firearms, on the grounds they can have life-altering consequences for all involved in their use, then why can’t we do the same for predictive software systems? If anything, such allocations would compel agencies to make a strong(er) business case for testing the predictive systems in question and spur further accountability: Does the system work? At a reasonable cost? With acceptable outcomes?

Imposing cost discipline on organizations is an important way of ensuring that technologies, and other business processes, aren’t adopted haphazardly on the assumption that their full costs can be externalized. By internalizing those costs up front, organizations may need to be much more careful about what they choose to adopt, when, and for what purpose. The outcome of this introspection and assessment would, hopefully, be that the harmful effects of predictive software systems in the national security space are mitigated and that the systems which are adopted actually fulfil the purposes for which they were acquired.

Chinese Spies Accused of Using Huawei in Secret Australia Telecom Hack

Bloomberg has an article that discusses how Chinese spies were allegedly involved in deploying implants on Huawei equipment which was operated in Australia and the United States. The key parts of the story include:

At the core of the case, those officials said, was a software update from Huawei that was installed on the network of a major Australian telecommunications company. The update appeared legitimate, but it contained malicious code that worked much like a digital wiretap, reprogramming the infected equipment to record all the communications passing through it before sending the data to China, they said. After a few days, that code deleted itself, the result of a clever self-destruct mechanism embedded in the update, they said. Ultimately, Australia’s intelligence agencies determined that China’s spy services were behind the breach, having infiltrated the ranks of Huawei technicians who helped maintain the equipment and pushed the update to the telecom’s systems. 

Guided by Australia’s tip, American intelligence agencies that year confirmed a similar attack from China using Huawei equipment located in the U.S., six of the former officials said, declining to provide further detail.

The details from the story are all circa 2012. The fact that Huawei equipment was successfully being targeted by these operations, in combination with the large volume of serious vulnerabilities in that equipment, contributed to the United States’ efforts to bar Huawei equipment from American networks and the networks of its closest allies.1

Analysis

We can derive a number of conclusions from the Bloomberg article, as well as see links between activities allegedly undertaken by the Chinese government and those of Western intelligence agencies.

To begin, it’s worth noting that the very premise of the article–that the Chinese government needed to infiltrate the ranks of Huawei technicians–suggests that circa 2012 Huawei was not controlled by, operated by, or necessarily unduly influenced by the Chinese government. Why? Because if the government needed to infiltrate the company’s technicians to deploy implants, and to do so without the knowledge of Huawei’s executive staff, then it is very challenging to say that the company writ large (or its executive staff) was complicit in intelligence operations.

Second, the Bloomberg article makes clear that a human intelligence (HUMINT) operation had to be conducted in order to deploy the implants in telecommunications networks, with data then being sent back to servers that were presumably operated by Chinese intelligence and security agencies. These kinds of HUMINT operations can be high-risk insofar as, if operatives are caught, the whole operation (and its surrounding infrastructure) can be detected and burned down. Building legends for assets is never easy, nor is developing assets if they are being run from a distance as opposed to spies themselves deploying the implants.2

Third, the United States’ National Security Agency (NSA) has conducted similar if not identical operations when its staff interdicted equipment while it was being shipped, in order to implant the equipment before sending it along to its final destination. Similarly, the CIA worked for decades to deliberately provide cryptographically-sabotaged equipment to diplomatic facilities around the world. All of which is to say that multiple agencies have been involved in using spies or assets to deliberately compromise hardware, including Western agencies.

Fourth, the Canadian Communications Security Establishment Act (‘CSE Act’), which was passed into law in 2019, includes language which authorizes the CSE to do “anything that is reasonably necessary to maintain the covert nature of the [foreign intelligence] activity” (26(2)(c)). The language in the CSE Act, at a minimum, raises the prospect that the CSE could undertake operations which parallel those of the NSA and, in theory, those of the Chinese government and its intelligence and security services.3

Of course, the fact that the NSA and other Western agencies have historically tampered with telecommunications hardware to facilitate intelligence collection doesn’t take away from the seriousness of the allegations that the Chinese government targeted Huawei equipment so as to carry out intelligence operations in Australia and the United States. Moreover, the reporting in Bloomberg covers a period around 2012, and it remains unclear whether the relationship between the Chinese government and Huawei has changed since then; it is possible, though credible open-source evidence is not forthcoming to date, that Huawei has since been captured by the Chinese state.

Takeaway

The Bloomberg article strongly suggests that Huawei, as of 2012, didn’t appear captured by the Chinese government, given the government’s reliance on HUMINT operations. Moreover, and separate from the article itself, it’s important that readers keep in mind that the activities which were allegedly carried out by the Chinese government were (and remain) similar to those also carried out by Western governments and their own security and intelligence agencies. I don’t raise this latter point as a kind of ‘whataboutism’ but, instead, to underscore that these kinds of operations are both serious and conducted by ‘friendly’ and adversarial intelligence services alike. As such, it behooves citizens to ask whether these are the kinds of activities we want our governments to be conducting on our behalf. Furthermore, we need to keep these kinds of facts in mind and, ideally, see them in news reporting, so as to better contextualize the operations which are undertaken by domestic and foreign intelligence agencies alike.


  1. While it’s several years past 2012, the 2021 UK HCSEC report found that it continued “to uncover issues that indicate there has been no overall improvement over the course of 2020 to meet the product software engineering and cyber security quality expected by the NCSC.” (boldface in original) ↩︎
  2. It is worth noting that, post-2012, the Chinese government has passed national security legislation which may make it easier to compel Chinese nationals to operate as intelligence assets, inclusive of technicians who have privileged access to telecommunications equipment that is being maintained outside China. That having been said, and as helpfully pointed out by Graham Webster, this case demonstrates that the national security laws were not needed in order to use human agents or assets to deploy implants. ↩︎
  3. There is a baseline question of whether the CSE Act created new powers for the CSE in this regard or if, instead, it merely codified existing secret policies or legal interpretations which had previously authorized the CSE to undertake covert activities in carrying out its foreign signals intelligence operations. ↩︎

Detecting Academic National Security Threats


The Canadian government is following in the footsteps of its American counterpart and has introduced national security assessments for recipients of government natural science (NSERC) funding. Such assessments will occur when proposed research projects are deemed sensitive and where private funding is also used to facilitate the research in question. Social science (SSHRC) and health (CIHR) funding will be subject to these assessments in the near future.

I’ve written, elsewhere, about why such assessments are likely fatally flawed. In short, they will inhibit student training, will cast suspicion upon researchers of non-Canadian nationalities (and especially upon researchers who hold citizenship with ‘competitor nations’ such as China, Russia, and Iran), and may encourage researchers to hide their sources of funding to be able to perform their required academic duties while also avoiding national security scrutiny.

To be clear, such scrutiny often carries explicit racist overtones, has led to many charges but few convictions in the United States, and presupposes that academic units or government agencies can detect a human-based espionage agent. Further, it presupposes that HUMINT-based espionage is a threat to research productivity that is as serious as, or more serious than, cyber-espionage. As of today, there is no evidence in the public record in Canada indicating that the threat facing Canadian academics is commensurate with the invasiveness of the assessments, nor that human-based espionage is a greater risk than cyber-based means.

To the best of my knowledge, while HUMINT-based espionage does generate some concerns, they pale in comparison to the risk of espionage linked to cyber-operations.

However, these points are not the principal focus of this post. I recently re-read some older work by Bruce Schneier that I think nicely captures why asking scholars to engage in national security assessments of their own, and their colleagues’, research is bound to fail. Schneier wrote the following in 2007, when discussing the US government’s “see something, say something” campaign:

[t]he problem is that ordinary citizens don’t know what a real terrorist threat looks like. They can’t tell the difference between a bomb and a tape dispenser, electronic name badge, CD player, bat detector, or trash sculpture; or the difference between terrorist plotters and imams, musicians, or architects. All they know is that something makes them uneasy, usually based on fear, media hype, or just something being different.

Replace “terrorist” with “national security threat” and we arrive at approximately the same conclusion. Individuals—even those trained to detect and investigate human intelligence-driven espionage—can find it incredibly difficult to detect human agent-enabled espionage. Expecting academics to do so, when they are motivated to develop international and collegial relationships, when they may be unable to assess the national security implications of their research, and when they are being told to abandon funding that the government will not replace, guarantees that this measure will fail.

What will that failure mean, specifically? It will involve incorrect assessments and suspicion being aimed at scholars from ‘competitor’ and adversary nations. Scholars will question whether they should work with a Chinese, Russian, or Iranian colleague even when that colleague is employed at a Western university, let alone at a non-Western institution. I doubt these same scholars will similarly question whether they should work with Finnish, French, or British colleagues. Nationality and ethnicity will become the lenses used to assess who the ‘right’ collaborators are.

Failure will not just affect professors. It will also extend to undergraduate and graduate students, as well as post-doctoral fellows and university staff. Already, students are questioning what they must do in order to prove that they are not national security threats. Lab staff and other employees who have access to university research environments will similarly be placed under an aura of suspicion. We should not, we must not, create an academy where these are the kinds of questions with which our students, colleagues, and staff must grapple.

Espionage is, it must be recognized, a serious issue facing universities and Canadian businesses more broadly. The solution cannot be to ignore it and hope that the activity goes away. However, the response to such threats must be necessary and proportionate, and must demonstrably involve evidence-based and inclusive policy making. The current program being rolled out by the Government of Canada does not meet these conditions and, as such, needs to be withdrawn.

Link

Project GUNMAN and the Telling of Intelligence Histories

This story of how the National Security Agency (NSA) was involved in analyzing typewriter bugs that were implanted by agents of the USSR in the 1980s is pretty amazing (.pdf) in terms of the technical and operational details that have been written about. It’s also revealing in terms of how the parties who are permitted to write about these materials breathlessly describe the agencies’ past exploits. In critically reading these kinds of accounts, it’s possible to learn how the agencies regard themselves and their activities. In effect, how history is ‘created’—or propaganda written, depending on how you read the article in question—functions to reveal the nature of the actors involved in that creation and the way that myths and truths are created and replicated.

As a slight aside, whenever I come across material like this I’m reminded of just how poor the Canadian government is at disclosing its own intelligence agencies’ histories. As senior members of the Canadian intelligence community retire or pass away, and as recorded materials waste away or are disposed of, key information that is needed to understand how and why Canada has acted in the world is being lost. This has the effect of impoverishing Canadians’ understanding of how their governments have operated, with the result that Canadian histories often risk missing essential information that could reveal hidden depths to what Canadians know about their country and its past.

Link

When the Government Decides to Waylay Parliament

Steven Chaplin has a really great explanation of whether the Canadian government can rely on national security and evidentiary laws to lawfully justify refusing to provide documents to the House of Commons and to House committees. His analysis arose after the Canadian government did everything it could to, first, refuse to provide documents to the parliamentary committee studying Canadian-Chinese relations and, subsequently, refuse to provide the documents when compelled to do so by the House of Commons itself.

Rather than releasing the requested documents, the government turned to the courts to adjudicate whether the documents in question–which were asserted to contain sensitive national security information–must, in fact, be released to the House or whether they could instead be sent to an executive committee, composed of Members of Parliament and Senators, to assess their contents. As Chaplin notes,

Having the courts intervene, as proposed by the government’s application in the Federal Court, is not an option. The application is clearly precluded by Article 9 of the Bill of Rights, 1689, which provides that a proceeding in Parliament ought not to be impeached or questioned in court. Article 9 not only allows for free speech; it is also a constitutional limit on the jurisdiction of the courts to preclude judicial interference in the business of the House.

The House ordered that the documents be tabled without redaction. Any decision of the court that found to the contrary would impeach or question the proceeding that led to the Order. And any attempt by the courts to balance the interests involved would constitute the courts becoming involved in ascertaining, and thereby questioning, the needs of the House and why the House wants the documents.

Beyond the courts intruding into the territory of Parliament, there could be serious and long-term implications of letting the court become a space wherein the government and the House fight over information that has been demanded. Specifically,

It may be that at the end of the day the government will continue to refuse to produce documents. In the same way that the government cannot use the courts to withhold documents, the House cannot go to court to compel the government to produce them, or to order witnesses to attend proceedings. It could also invite disobedience of witnesses, requiring the House to either drop inquiries or involve the courts to compel attendance or evidence. Allowing, or requiring, the government and the House to resolve their differences in the courts would not only be contrary to the constitutional principles of Article 9, but “would inevitably create delays, disruption, uncertainties and costs which would hold up the nation’s business and on that account would be unacceptable even if, in the end, the Speaker’s rulings were vindicated as entirely proper” (Canada (House of Commons) v. Vaid [2005]). In short, the courts have no business intervening one way or the other.

Throughout the discussions that have taken place about this issue in Canada, what has been most striking is that national security commentators and elites have envisioned that the National Security and Intelligence Committee of Parliamentarians (NSICOP) could (and should) be tasked to resolve any and all particularly sensitive national security issues that might be of interest to Parliament. None, however, seems to have contemplated that Parliament, itself, might take issue with the government trying to exclude it from assessing the government’s national security decisions, nor that issue would be taken when topics of interest to Parliamentarians were punted to an executive body whose Members of Parliament were sworn to the strictest secrecy. Instead, elites have hand-waved at the importance of preserving secrecy in order for Canada to receive intelligence from allies, and have asserted that the government would never mislead Parliament on national security matters (matters which, these same experts explain, Members of Parliament are unprepared to receive, process, or understand, given the sophistication of the intelligence and the apparent simplicity of most Parliamentarians themselves).

This was the topic of a recent episode of the Intrepid Podcast, where Philippe Lagassé noted that the exclusion of parliamentary experts when creating NSICOP meant that these entirely predictable showdowns were functionally baked into how the executive body was composed. As someone who raised concerns about adopting an executive committee rather than a standing House committee, and who was rebuffed as being ignorant of the realities of national security, it is with more than a little satisfaction that I watch the very concerns raised when NSICOP was being created now arise on the political agenda.

With regard to the documents that the House committee was seeking, I don’t know or particularly care what their contents include. From my own experience, I’m all too aware that ‘national security’ is often stamped on things that governments want to keep from the public because they could be politically damaging, on things kept from the public more generally because of a culture of non-transparency and a refusal of accountability, and (less often) on things kept from the public because there are bona fide national security interests at stake. I do, however, care that the Government of Canada has (again) acted counter to Parliament’s wishes and has deliberately worked to impede the House from doing its work.

Successive governments seem to genuinely believe that they get to ‘rule’ Canada absolutely and with little accountability. While this is, in practice, largely true given how cowed Members of Parliament are by their party leaders, it’s incredibly serious and depressing to see the government further erode Parliament’s powers and abilities to fulfil its duties. A healthy democracy is filled with bumps for the government as it is held to account but, sadly, the Government of Canada–regardless of the party in power–is incredibly active in keeping itself, and its behaviours, from the public eye and thus from being held to account.

If only a committee might be struck to solve this problem…

Quote

Strategy is critical because it establishes a common goal that guides agencies in policymaking and provides the framework for collaboration and cohesion of vision. Strategy is difficult to devise, devilish to agree upon, and often painfully reductive when one considers competing demands. But without it, security boils down to ad hoc government responses based on urgent yet contradicting concepts.

Tatyana Bolton, Mary Brooks, and Kathryn Waldron, “Three Key Questions to Define ICT Supply Chain Security”

Link

Russia, China, the USA and the Geopolitical and National Security Implications of Climate Change

Lustgarten, writing for the New York Times, has probably the best piece on the national security and geopolitical implications of climate change that I’ve recently come across. The assessment for the USA is not good:

… in the long term, agriculture presents perhaps the most significant illustration of how a warming world might erode America’s position. Right now the U.S. agricultural industry serves as a significant, if low-key, instrument of leverage in America’s own foreign affairs. The U.S. provides roughly a third of soy traded globally, nearly 40 percent of corn and 13 percent of wheat. By recent count, American staple crops are shipped to 174 countries, and democratic influence and power comes with them, all by design. And yet climate data analyzed for this project suggest that the U.S. farming industry is in danger. Crop yields from Texas north to Nebraska could fall by up to 90 percent by as soon as 2040 as the ideal growing region slips toward the Dakotas and the Canadian border. And unlike in Russia or Canada, that border hinders the U.S.’s ability to shift north along with the optimal conditions.

Now, the advantages enjoyed by Canada might be eroded by a militant America, and those of Russia similarly threatened by a belligerent and desperate China (and a desperate Southeast Asia more generally). Regardless, food and arable land will largely determine which countries hold out longest before suffering the worst of climate change. Though, in the end, it’s almost a foregone conclusion that we are all ultimately going to suffer horribly for the errors of our ways.

Links for November 16-20, 2020

  • The future of U.S. Foreign intelligence surveillance. “Despite President Trump’s many tweets about wiretapping, his administration failed to support meaningful reforms to traditional FISA, Section 702, and EO 12333. Meanwhile, the U.S. government’s foreign intelligence apparatus has continued to expand, violating Americans’ constitutional rights and threatening a $7.1 trillion transatlantic economic relationship. Given the stakes, the next President and Congress must prioritize surveillance reform in 2021.” // I can’t imagine an American administration passing even a small number of the proposed legislative updates suggested in this article. Still, it is helpful to reflect on why such measures should be passed to protect global citizens’ rights and, more broadly, why they almost certainly will not be passed into law.
  • Why Obama fears for our democracy. “But more than anything, I wanted this book to be a way in which people could better understand the world of politics and foreign policy, worlds that feel opaque and inaccessible. Part of my goal is describing quirks and people’s family backgrounds, just to remind people that these are humans and you can understand them and make judgments.” // The whole interview is a good read, and may signal some of the pressures on tech policy the incoming administration may face from their own former leader, but more than anything I think that Obama’s relentless effort to contextualize, socialize, and humanize politics speaks to the underlying ethos he took with him into office. And, more than that, it showcases that he truly is hopeful in an almost Kantian sense; throughout the interview I couldn’t help but feel I was reading someone who had been deeply touched by “Perpetual Peace” amongst other essays in Kant’s Political Writings.
  • Ralfy’s world – whisky magazine. “At a time when the debate over new and old media is raging full on, and questions are asked about integrity and independence, Ralfy is just getting on with it – blogging randomly in the true spirit of the medium and making do it yourself recordings about whiskies he has tasted. Or to put it in his words: “My malt mission over the last two years has been a website called ralfy.com for all things whisky, so long as it’s unorthodox, marketing-light, informative, independent, educational …and entertaining.” // I’ve learned, and continue to learn, a lot from Ralfy’s YouTube channel. But I have to admit it’s more than a bit uncomfortable figuring out the ethics of watching videos from a guy who has inaccurate understandings of vaccines and the pandemic alike. His knowledge of whisky is on the whole excellent. His knowledge of epidemiology and immunology…let’s just say less so.
Link

To What Extent is China’s Control of Information a Cyber Weakness?

Lawfare has a good piece on how China’s control of information is a cyber weakness:

“Policymakers need to be aware that successful competition in cyberspace depends on having intrinsic knowledge of the consequences a democratic or authoritarian mode of government has for a country’s cyber defense. Western leaders have for a long time prioritized security of physical infrastructure. This might translate into better cyber defense capabilities, but it leaves those governments open to information operations. At the same time, more authoritarian-leaning countries may have comparative advantages when it comes to defending against information operations but at the cost of perhaps being more vulnerable to cyber network attack and exploitation. Authoritarian governments may tolerate this compromise on security due to their prioritization of surveillance and censorship practices.”

I have faith that professionals in the intelligence community have previously assessed this divide between what democracies have developed defences against and what countries like China have prepared against. Nonetheless, this is a helpful summary of the two sides of the coin.

I’m less certain of a subsequent argument made in the same piece:

“These diverging emphases on different aspects of cybersecurity by democratic and authoritarian governments are not new. However, Western governments have put too much emphasis on the vulnerability of democracies to information operations, and not enough attention has been dedicated to the vulnerability of authoritarian regimes in their cyber defenses. It is crucial for democratic governments to assess the impact of information controls and regime security considerations in authoritarian-leaning countries for their day-to-day cyber operations.”

I really don’t think that intelligence community members in the West are ignorant of the vulnerabilities that may be present in China or other authoritarian jurisdictions. While the stories in Western media emphasize how effective foreign operators are at extracting data from Western companies and organizations, intelligence agencies in the Five Eyes are also deeply invested in penetrating strategically and tactically valuable digital resources abroad. One of the top-line critiques of the Five Eyes is that they have invested heavily in offence over defence, and the article from Lawfare never really takes that up. Instead, and inaccurately to my mind, it suggests that cyber defence is something done with a truly serious degree of resourcing in the Five Eyes. I have yet to find someone in the intelligence community who would seriously assert a similar proposition.

One thing that isn’t assessed in the article, and which would have been interesting to see considered, is the extent to which the relative dearth of encryption in China better enables its defenders to identify and terminate the exfiltration of data from their networks. Does broader visibility into data networks enhance Chinese defenders’ operations? I have some doubts, but it would be interesting to see the arguments for and against that position.
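
To make that question concrete, here is a toy sketch, in Python, of the kind of naive content-based inspection that only works when a defender can see plaintext traffic. The marker patterns are invented and the XOR step merely stands in for real encryption; the point is that content-matching rules lose their visibility once payloads are encrypted, which is one way a dearth of encryption could advantage defenders.

```python
import re

# Invented markers standing in for whatever a defender considers sensitive.
SENSITIVE = re.compile(rb"(?:PROJECT[-_ ]\w+|INTERNAL[-_ ]ONLY)", re.IGNORECASE)


def flags_payload(payload: bytes) -> bool:
    """Naive content-inspection rule: flag traffic containing sensitive markers.
    It only works when the payload is visible as plaintext."""
    return bool(SENSITIVE.search(payload))


plaintext = b"exfil: PROJECT_ALPHA design documents attached"
print(flags_payload(plaintext))   # True: plaintext traffic is visible to this rule

# The same content, once encrypted, is opaque to content matching. XOR here is
# only a stand-in to show the effect; real traffic would use proper encryption.
ciphertext = bytes(b ^ 0x5A for b in plaintext)
print(flags_payload(ciphertext))  # False: the rule loses visibility
```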