Link

Messaging Interoperability and Client Security

Eric Rescorla has a thoughtful and nuanced assessment of recent EU proposals which would compel messaging companies to make their communications services interoperable. To his immense credit he spends time walking the reader through historical and contemporary messaging systems in order to assess the security issues prospectively associated with requiring interoperability. It’s a very good, and compact, read on a dense and challenging subject.

I must admit, however, that I’m unconvinced that demanding interoperability will have only minimal security implications. While much of the expert commentary has focused on whether end-to-end encryption would be compromised, I think that too little time has been spent considering the client side of interoperable communications. So if we assume it’s possible to facilitate end-to-end communications across messaging companies, and focus just on clients receiving/sending communications, what are some risks?1

As it stands today, the dominant messaging companies have large and professional security teams. While none of these teams are perfect, as shown by the success of cyber mercenary companies such as NSO Group et al., they are robust and constantly working to improve the security of their products. The attacks used by groups such as NSO, Hacking Team, Candiru, FinFisher, and such have not tended to rely on breaking encryption. Rather, they have sought vulnerabilities in client devices. Due to sandboxing and contemporary OS security practices, this has regularly meant successfully targeting a messaging application and, subsequently, expanding a foothold on the device more generally.

In order for interoperability to ‘work’ properly there will need to be a number of preconditions. As noted in Rescorla’s post, this may include checking what functions an interoperable client possesses to determine whether ‘standard’ or ‘enriched’ client services are available. Moreover, APIs will need to be (relatively) stable or rely on a standardized protocol to facilitate interoperability. Finally, while spam messages are annoying on messaging applications today, they may become even more commonplace where interoperability is required and service providers cannot use their current processes to filter/quality check messages transiting their infrastructure.

What do all the aforementioned elements mean for client security?

  1. Checking for client functionality may reveal whether a targeted client possesses known vulnerabilities, either generally (following a patch update) or known just to the exploit vendor (where they know of a vulnerability and are actively exploiting it). Where spam filtering is weak, exploit vendors can use spam messages for reconnaissance, with the service provider, client vendor, or client applications not necessarily being aware of the threat activity.
  2. When or if there is a significant need to rework how keying operates, or how identity properties linked to an API are handled more broadly, there is a risk that implementation of updates may be delayed until the revisions have had time to be adopted by clients. While this might be great for competition vis-a-vis interoperability it will, also, have the effect of signalling an oncoming change to threat actors, who may accelerate activities to get footholds on devices or recognize that they, too, need to update their tactics, techniques, and procedures (TTPs).
  3. As a more general point, threat actors might work to develop and propagate interoperable clients that they have, already, compromised–we’ve previously seen nation-state actors do so and there’s no reason to expect this behaviour to stop in a world of interoperable clients. Alternately, threat actors might try and convince targets to move to ‘better’ clients that contain known vulnerabilities but which are developed and made available by legitimate vendors. Whereas, today, an exploit developer must target specific messaging systems that deliver that system’s messages, a future world of interoperable messaging will likely expand the clients that threat actors can seek to exploit.
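The first risk, capability discovery doubling as reconnaissance, can be sketched in miniature. Everything below (feature names, client versions, and which builds are vulnerable) is invented for illustration; real capability negotiation would be protocol-specific.

```python
# Hypothetical sketch: how capability discovery could fingerprint an
# interoperable client. All feature names, version strings, and the
# notion of a "vulnerable version" are invented for illustration.

# Feature sets advertised by (hypothetical) client builds during
# interoperability negotiation, mapped to the build they imply.
FEATURE_FINGERPRINTS = {
    frozenset({"reactions", "typing-indicators"}): "ChatClient 1.2",
    frozenset({"reactions", "typing-indicators", "edited-messages"}): "ChatClient 1.3",
}

# Builds an exploit vendor (hypothetically) knows to be vulnerable.
VULNERABLE_VERSIONS = {"ChatClient 1.2"}

def assess_target(advertised_features):
    """Map an advertised feature set to a likely client build, if known,
    and report whether that build is on the vulnerable list."""
    version = FEATURE_FINGERPRINTS.get(frozenset(advertised_features))
    if version is None:
        return None, False
    return version, version in VULNERABLE_VERSIONS
```

The point of the sketch is that a benign-looking feature probe, perhaps delivered as spam, yields targeting information without the client ever being exploited.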

One of the severe dangers and challenges facing the current internet regulation landscape has been that a large volume of new actors have entered the various overlapping policy fields. For a long time there haven’t been that many of us, and anyone who’s been around for 10-15 years tends to be suitably multidisciplinary that they think about how activities in policy domain X might/will have consequences for domains Y and Z. The new raft of politicians and their policy advisors, in contrast, often lack this broad awareness. The result is that proposals are being advanced around the world by ostensibly well-meaning individuals and groups to address issues associated with online harms, speech, CSAM, competition, and security. However, these same parties often lack awareness of how the solutions meant to solve their favoured policy problems will have effects on neighbouring policy issues. And, where they are aware, they often don’t care because that’s someone else’s policy domain.

It’s good to see more people participating and more inclusive policy making processes. And seeing actual political action on many issue areas after 10 years of people debating how to move forward is exciting. But too much of that action runs counter to the thoughtful warnings and need for caution that longer-term policy experts have been raising for over a decade.

We are almost certainly moving towards a ‘new Internet’. It remains in question, however, whether this ‘new Internet’ will see resolutions to longstanding challenges or if, instead, the rush to regulate will change the landscape by finally bringing to life the threats that long-term policy wonks have been working to forestall or prevent for much of their working lives. To date, I remain increasingly concerned that we will experience the latter rather than witness the former.


  1. For the record, I currently remain unconvinced it is possible to implement end-to-end encryption across platforms generally. ↩︎

Cyber Attacks Versus Operations in Ukraine

Photo by cottonbro on Pexels.com

For the past decade there has been a steady drumbeat that ‘cyberwar is coming’. Sometimes the parties holding these positions are in militaries and, in other cases, from think tanks or university departments that are trying to link kinetic-adjacent computer operations with ‘war’.

Perhaps the most famous rebuttal to the cyberwar proponents has been Thomas Rid’s Cyber War Will Not Take Place. The title was meant to be provocative and almost has the effect of concealing a core insight of Rid’s argument: cyber operations will continue to be associated with conflicts but cyber operations are unlikely to constitute (or lead to) out-and-out war on their own. Why? Because it is very challenging to prepare and launch cyber operations that have significant kinetic results at the scale we associate with full-on war.

Since the start of the Russian Federation’s war of aggression towards Ukraine there have regularly been shocked assertions that cyberwar isn’t taking place. A series of pieces by The Economist, as an example, sought to prepare readers for a cyberwar that just hasn’t happened. Why not? Because The Economist–much like other outlets!–often presumed that the cyber dimensions of the conflict in Ukraine would bear at least some resemblance to the long-maligned concept of a ‘cyber Pearl Harbour’: a critical cyber-enabled strike of some sort would have a serious, and potentially devastating, effect on how Ukraine could defend against Russian aggression and thus tilt the balance towards Russian military victory.

As a result of the early mistaken understandings of cyber operations, scholars and experts have once more come out and explained why cyber operations are not the same as an imagined cyber Pearl Harbour situation, while still taking place in the Ukrainian conflict. Simultaneously, security and malware researchers have taken the opportunity to belittle International Relations theorists who have written about cyberwar and argued that these theorists have fundamentally misunderstood how cyber operations take place.

Part of the challenge is that ‘cyberwar’ has often been popularly seen as the equivalent of hundreds of thousands of soldiers and their associated military hardware being deployed into a foreign country. As noted by Rid in a recent op-ed, while some cyber operations are meant to be apparent others are much more subtle. The former might be meant to reduce the will to fight or diminish command and control capabilities. The latter, in contrast, will look a lot like other reconnaissance operations: knowing who is commanding which battle group, the logistical challenges facing the opponent, or the state of infrastructure in-country. All these latter dimensions provide strategic and tactical advantages to the party who’s launched the surveillance operation. Operations meant to degrade capabilities may occur but will often be more subtle. This subtlety can be a particularly severe risk in a conflict, such as when your ammunition convoy is sent to the wrong place or train timetables are thrown off, with the effect of stymying civilian evacuation or resupply operations.1

What’s often seemingly lost in the ‘cyberwar’ debates–which tend to take place either between people who don’t understand cyber operations, those who stand to profit from misrepresentations of them, or those who are so theoretical in their approaches as to be ignorant of reality–is that contemporary wars entail blended forces. Different elements of those blends have unique and specific tactical and strategic purposes. Cyber isn’t going to have the same effect as a Grad Missile Launcher or a T-90 Battle Tank, but that missile launcher or tank isn’t going to know that the target it’s pointed towards is strategically valuable without reconnaissance nor is it able to impair logistics flows the same way as a cyber operation targeting train schedules. To expect otherwise is to grossly misunderstand how cyber operations function in a conflict environment.

I’d like to imagine that one result of the Russian war of aggression will be to improve the general population’s understanding of cyber operations and what they entail, and do not entail. It’s possible that this might happen given that major news outlets, such as the AP and Reuters, are changing how they refer to such activities: they will not be called ‘cyberattacks’ outside very nuanced situations now. In simply changing what we call cyber activities–as operations as opposed to attacks–we’ll hopefully see a deflating of the language and, with it, more careful understandings of how cyber operations take place in and out of conflict situations. As such, there’s a chance (hope?) we might see a better appreciation of the significance of cyber operations in the population writ large in the coming years. This will be increasingly important given the sheer volume of successful (non-conflict) operations that take place each day.


  1. It’s worth recognizing that part of why we aren’t reading about successful Russian operations is, first, due to Ukrainian and allies’ efforts to suppress such successes for fear of reducing Ukrainian/allied morale. Second, however, is that Western signals intelligence agencies such as the NSA, CSE, and GCHQ, are all very active in providing remote defensive and other operational services to Ukrainian forces. There was also a significant effort ahead of the conflict to shore up Ukrainian defences, and there continues to be a strong effort by Western companies to enhance the security of systems used by Ukrainians. Combined, this means that Ukraine is enjoying additional ‘forces’ while, simultaneously, generally keeping quiet about its own failures to protect its systems or infrastructure. ↩︎
Link

The Risks Linked With Canadian Cyber Operations in Ukraine

Photo by Sora Shimazaki on Pexels.com

Late last month, Global News published a story on how the Canadian government is involved in providing cyber support to the Ukrainian government in the face of Russia’s illegal invasion. While the Canadian military declined to confirm or deny any activities they might be involved in, the same was not true of the Communications Security Establishment (CSE). The CSE is Canada’s foreign signals intelligence agency. In addition to collecting intelligence, it is also mandated to defend Canadian federal systems and those designated as of importance to the government of Canada, provide assistance to other federal agencies, and conduct active and defensive cyber operations.1

From the Global News article it is apparent that the CSE is involved in both foreign intelligence operations as well as undertaking cyber defensive activities. Frankly, these kinds of activities are generally, and persistently, undertaken with regard to the Russian government, and so it’s not a surprise that they continue apace.

The CSE spokesperson also noted that the government agency is involved in ‘cyber operations’ though declined to explain whether these are defensive cyber operations or active cyber operations. In the case of the former, the Minister of National Defense must consult with the Minister of Foreign Affairs before authorizing an operation, whereas in the latter both Ministers must consent to an operation prior to it taking place. Defensive and active operations can assume the same form–roughly the same activities or operations might be undertaken–but the rationale for the activity being taken may vary based on whether it is cast as defensive or active (i.e., offensive).2

These kinds of cyber operations are the ones that most worry scholars and practitioners, on the basis that foreign operators or adversaries may misread a signal from a cyber operation or because the operation might have unintended consequences. Thus, the operations that the CSE is undertaking run the risk of accidentally (or intentionally, I guess) escalating affairs between Canada and the Russian Federation in the midst of the shooting war between Russian and Ukrainian forces.

While there is, of course, a need for some operational discretion on the part of the Canadian government it is also imperative that the Canadian public be sufficiently aware of the government’s activities to understand the risks (or lack thereof) which are linked to the activities that Canadian agencies are undertaking. To date, the Canadian government has not released its cyber foreign policy doctrine nor has the Canadian Armed Forces released its cyber doctrine.3 The result is that neither Canadians nor Canada’s allies or adversaries know precisely what Canada will do in the cyber domain, how Canada will react when confronted, or the precise nature of Canada’s escalatory ladder. The government’s secrecy runs the risk of putting Canadians in greater jeopardy of a response from the Russian Federation (or other adversaries) without the Canadian public really understanding what strategic or tactical activities might be undertaken on their behalf.

Canadians have a right to know at least enough about what their government is doing to be able to begin assessing the risks linked with conducting operations during an active military conflict against an adversary with nuclear weapons. Thus far such information has not been provided. The result is that Canadians are ill-prepared to assess the risk that they may be quietly and quickly drawn into the conflict between the Russian Federation and Ukraine. Such secrecy bodes poorly for being able to hold government to account, to say nothing of preventing Canadians from appreciating the risk that they could become deeply drawn into a very hot conflict scenario.


  1. For more on the CSE and the laws governing its activities, see “A Deep Dive into Canada’s Overhaul of Its Foreign Intelligence and Cybersecurity Laws.” ↩︎
  2. For more on this, see “Analysis of the Communications Security Establishment Act and Related Provisions in Bill C-59 (An Act respecting national security matters), First Reading (December 18, 2017)“, pp 27-32. ↩︎
  3. Not for lack of trying to access them, however: in both cases I filed access to information requests with the government for these documents a year ago, with delays expected to mean I won’t get the documents before the end of 2022 at best. ↩︎

Mandatory Patching of Serious Vulnerabilities in Government Systems

Photo by Mati Mango on Pexels.com

The Cybersecurity and Infrastructure Security Agency (CISA) is responsible for building national capacity to defend American infrastructure and cybersecurity assets. In the past year they have been tasked with receiving information about American government agencies’ progress (or lack thereof) in implementing elements of Executive Order 14028: Improving the Nation’s Cybersecurity and have been involved in responses to a number of events, including SolarWinds, the Colonial Pipeline ransomware attack, and others. The Executive Order required that CISA first collect a large volume of information from government agencies and vendors alike to assess the threats towards government infrastructure and, subsequently, to provide guidance concerning cloud services, track the adoption of multi-factor authentication and seek ways of facilitating its implementation, establish a framework to respond to security incidents, enhance CISA’s threat hunting abilities in government networks, and more.1

Today, CISA promulgated a binding operational directive that will require American government agencies to adopt more aggressive patch tempos for vulnerabilities. In addition to requiring agencies to develop formal policies for remediating vulnerabilities, it establishes a requirement that vulnerabilities assigned a Common Vulnerabilities and Exposures (CVE) ID before 2021 be remediated within six months, and all others within two weeks. Vulnerabilities to be patched/remediated are those found in CISA’s “Known Exploited Vulnerabilities Catalog.”

It’s notable that while patching is obviously preferred, the CISA directive doesn’t mandate patching but that ‘remediation’ take place.2 As such, organizations may be authorized to deploy defensive measures that will prevent the vulnerability from being exploited but not actually patch the underlying vulnerability, so as to avoid a patch having unintended consequences for either the application in question or for other applications/services that currently rely on either outdated or bespoke programming interfaces.
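The directive’s remediation clock keys off the year in the CVE identifier: CVEs assigned in 2021 or later are due within two weeks of the entry being added to the catalogue, while older CVEs get roughly six months. A minimal sketch of that logic, using field names (“cveID”, “dateAdded”) that mirror the catalogue’s published JSON, might look like the following; it is an illustration, not a compliance tool, and the catalogue itself carries the authoritative due date for each entry.

```python
# Sketch of the BOD 22-01 remediation timelines: CVEs assigned in 2021
# or later are due two weeks after being added to the Known Exploited
# Vulnerabilities catalogue; older CVEs are due within six months. The
# field names mirror the catalogue's JSON, but the catalogue's own
# "dueDate" field is authoritative.
from datetime import date, timedelta

def remediation_due(cve_id: str, date_added: date) -> date:
    """Compute the remediation deadline for a catalogue entry."""
    year_assigned = int(cve_id.split("-")[1])  # "CVE-2017-0144" -> 2017
    if year_assigned >= 2021:
        return date_added + timedelta(weeks=2)
    return date_added + timedelta(days=182)  # approximately six months

def overdue(entries, today: date):
    """Return catalogue entries whose remediation deadline has passed."""
    return [
        e for e in entries
        if remediation_due(e["cveID"], e["dateAdded"]) < today
    ]
```

Log4Shell (CVE-2021-44228), added to the catalogue on 2021-12-10, illustrates the short fuse: as a 2021 CVE its remediation deadline fell a mere two weeks later, on 2021-12-24.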

In the Canadian context, there aren’t equivalent levels of requirements that can be placed on Canadian federal departments. While Shared Services Canada can strongly encourage departments to patch, the Treasury Board Secretariat has published a “Patch Management Guidance” document, and the Canadian Centre for Cyber Security has a suggested patch deployment schedule,3 final decisions are still made by individual departments’ respective deputy ministers under the Financial Administration Act.

The Biden administration is moving quickly to accelerate its ability to identify and remediate vulnerabilities while simultaneously letting its threat intelligence staff track adversaries in American networks. That last element is less of an issue in the Canadian context, but the first two remain pressing and serious challenges.

While it’s positive to see the Americans moving quickly to improve their security posture, I can only hope that the Canadian federal, and provincial, governments similarly clear long-standing logjams that delegate security decisions to parties who may be ill-suited to make optimal decisions, either out of ignorance or because patching systems is seen as secondary to fulfilling a given department’s primary service mandate.


  1. For a discussion of the Executive Order, see: “Initial Thoughts on Biden’s Executive Order on Improving the Nation’s Cybersecurity” or “Everything You Need to Know About the New Executive Order on Cybersecurity.” ↩︎
  2. For more, see CISA’s “Vulnerability Remediation Requirements“. ↩︎
  3. “CCCS’s deployment schedule only suggests timelines for deployment. In actuality, an organization should take into consideration risk tolerance and exposure to a given vulnerability and associated attack vector(s) as part of a risk‑based approach to patching, while also fully considering their individual threat profile. Patch management tools continue to improve the efficiency of the process and enable organizations to hasten the deployment schedule.” Source: “Patch Management Guidance” ↩︎
Quote

Strategy is critical because it establishes a common goal that guides agencies in policymaking and provides the framework for collaboration and cohesion of vision. Strategy is difficult to devise, devilish to agree upon, and often painfully reductive when one considers competing demands. But without it, security boils down to ad hoc government responses based on urgent yet contradicting concepts.

Tatyana Bolton, Mary Brooks, and Kathryn Waldron, “Three Key Questions to Define ICT Supply Chain Security”

Link

Economics and Software Bills of Materials (SBOM)

In an article for The Hill, Shannon Lantzy and Kelly Rozumalski have discussed how software bills of materials (SBOMs) are good for business as well as security. SBOMs emerged more forcefully in the American policy space after the Biden White House promulgated an Executive Order on cybersecurity on May 12, 2021. The Order included a requirement that developers and private companies providing services to the United States government produce SBOMs.1 SBOMs are meant to help incident responders to cybersecurity events assess which APIs, libraries, or other digital elements might be vulnerable to an identified operation, and also to help government procurement agencies better ensure the digital assets in a product or service meet a specified security standard.
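The incident-response use case for SBOMs can be sketched simply: given the component inventory an SBOM provides, flag components that an advisory lists as affected. The structure below loosely follows CycloneDX’s JSON shape (a “components” array with “name” and “version” entries), simplified for illustration, and the advisory contents are hypothetical.

```python
# Minimal sketch of SBOM-driven incident response: match an SBOM's
# component inventory against an advisory's affected versions. The SBOM
# shape loosely follows CycloneDX JSON; the advisory is hypothetical.
sbom = {
    "components": [
        {"name": "log4j-core", "version": "2.14.1"},
        {"name": "jackson-databind", "version": "2.13.0"},
    ]
}

# A (hypothetical) advisory mapping affected components to versions.
advisory = {"log4j-core": {"2.14.0", "2.14.1", "2.15.0"}}

def affected_components(sbom, advisory):
    """Return (name, version) pairs the advisory lists as affected."""
    return [
        (c["name"], c["version"])
        for c in sbom["components"]
        if c["version"] in advisory.get(c["name"], set())
    ]
```

Without the SBOM, answering “do we ship the vulnerable library anywhere?” requires manual archaeology; with it, the check is a lookup.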

Specifically, Lantzy and Rozumalski write:

Product offerings that are already secure-by-design will be able to command a premium price because consumers will be able to compare SBOMs.

Products with inherently less patchable components will also benefit. A universal SBOM mandate will make it easy to spot vulnerabilities, creating market risk for lagging products; firms will be forced to reengineer the products before getting hacked. While this seems like a new cost to the laggards, it’s really just a transfer of future risk to a current cost of reengineering. The key to a universal mandate is that all laggards will incur this cost at roughly the same time, thereby not losing a competitive edge.

The promise of increased security and reduced risk will not be realized by SBOM mandates alone. Tooling and putting this mandate in practice will be required to realize the full power of the SBOM.

The idea of internalizing security costs to developers, and potentially increasing the cost of goods, has been discussed publicly and with Western governments for at least two decades. We’ve seen the overall risk profiles presented to organizations continue to increase year over year as a result of companies racing to market with little regard for security, a business development strategy that made sense when they experienced few economic liabilities for selling products with severe cybersecurity limitations or vulnerabilities. In theory, enabling comparison shopping vis-a-vis SBOMs will disincentivize companies from selling low-grade equipment and services if they want to get into high-profit enterprise or high-reliability government contracts, with the effect that security improvements will also trickle down to the products purchased by consumers (‘trickle down cybersecurity’).

While I think that SBOMs are definitely a part of developing cybersecurity resilience, it remains to be seen just how much consumers will pay for ‘more secure’ products given that, first, they are economically incentivized to pay the lowest possible amounts for goods and services and, second, they are unlikely to know for certain what is a good or bad security practice. Advocates of SBOMs often refer to them as akin to nutrition labels, but we know that at most about a third of consumers read those labels (and those who read them often do so under societal pressure to regulate caloric intake) and, also, that the labels are often inaccurate.

It will be very interesting to see whether enterprise and consumers alike will be able or willing to pay higher up-front costs, to say nothing of being able to actually trust what is on the SBOM labels. Will companies that adopt SBOM products suffer a lower rate of cybersecurity incidents, or ones that are of reduced seriousness, or be able to respond more quickly when a cybersecurity incident has been realized? We’re going to actually be able to test the promises of SBOMs, soon, and it’s going to be fascinating to see things play out.


  1. I have published a summary and brief analysis of this Executive Order elsewhere in case you want to read it. ↩︎

The Kaseya Ransomware Attack Is a Really Big Deal

(Managed Service Provider image by the Canadian Centre for Cybersecurity)

Matt Tait, as normal, has good insights into just why the Kaseya ransomware attack1 was such a big deal:

In short, software supply chain security breaches don’t look like other categories of breaches. A lot of this comes down to the central conundrum of system security: it’s not possible to defend the edges of a system without centralization so that we can pool defensive resources. But this same centralization concentrates offensive action against a few single points of failure that, if breached, cause all of the edges to fall at once. And the more edges that central failure point controls, the more likely the collateral real-world consequences of any breach, but especially a ransomware breach, will be catastrophic and overwhelm the defensive cybersecurity industry’s ability to respond.

Managed Service Providers (MSPs) are becoming increasingly common targets. It’s worth noting that the Canadian Centre for Cybersecurity‘s National Cyber Threat Assessment 2020 listed ransomware as well as the exploitation of MSPs as two of the seven key threats to Canadian financial and economic health. The Centre went so far as to state that it expected,

… that over the next two years ransomware campaigns will very likely increasingly target MSPs for the purpose of targeting their clients as a means of scaling targeted ransomware campaigns.

Sadly, if not surprisingly, this assessment has been entirely correct. It remains to be seen what impact the 2020 threats assessment has, or will have, on Canadian organizations and their security postures. Based on conversations I’ve had over the past few months the results are not inspiring and the threat assessment has generally been less effective than hoped in driving change in Canada.

As discussed by Steven Bellovin, part of the broader challenge for the security community in preparing for MSP operations has been that defenders are routinely behind the times; operators modify what and who their campaigns will target and defenders are forced to scramble to catch up. He specifically, and depressingly, recognizes that, “…when it comes to target selection, the attackers have outmaneuvered defenders for almost 30 years.”

These failures are that much more noteworthy given that the United States has trumpeted for years that the NSA will ‘defend forward‘ to identify and hunt threats, and respond to them before they reach ‘American cybershores’.2 The seemingly now routine targeting of both system update mechanisms as well as vendors which provide security or operational controls for wide swathes of organizations demonstrates that things are going to get a lot worse before they’re likely to improve.

A course correction could follow from Western nations developing effective and meaningful cyber-deterrence processes that encourage nations such as Russia, China, Iran, and North Korea to punish computer operators who are behind some of the worst kinds of operations that have emerged in public view. However, this would in part require the American government (and its allies) to actually figure out how they can deter adversaries. It’s been 12 years or so, and counting, and it’s not apparent that any American administration has figured out how to implement a deterrence regime that exceeds issuing toothless threats. The same goes for most of their allies.

Absent an actual deterrence response, such as one which takes action in sovereign states that host malicious operators, Western nations have slowly joined together to issue group attributions of foreign operations. They’ve also come together to recognize certain classes of cyber operations as particularly problematic, including ransomware. Must nations build this shared capacity, first, before they can actually undertake deterrence activities? Should that be the case, then it would strongly underscore the need to develop shared norms in advance of sovereign states exercising their latent capacities in cyber and other domains, and lend credence to the importance of the Tallinn Manual process. If, however, this capacity is built and still nothing is undertaken to deter, then what will the capacity actually be worth? While this is a fascinating scholarly exercise–it’s basically an opportunity to test competing scholarly hypotheses–it’s one that has significant real-world consequences, and the danger is that once we recognize which hypothesis is correct, years of time and effort could have been wasted for little apparent gain.

What’s worse is that this is even a scholarly exercise. Given that more than a decade has passed, and that ‘cyber’ is not truly new anymore, why must hypotheses be spun instead of states having developed sufficient capacity to deter? Where are Western states’ muscles after so much time working this problem?


  1. As a point of order, when is an act of ransomware an attack versus an operation? ↩︎
  2. I just made that one up. No, I’m not proud of it. ↩︎

Building a Strategic Vision to Combat Cybercrime

The Financial Times has a good piece examining how insurance companies are beginning to recalculate how they assess the insurance premiums used to cover ransomware payments. In addition to raising fees (and, in some cases, deciding whether to stop insuring against ransomware altogether) some insurers, like AIG, are adopting stronger underwriting, including:

… an additional 25 detailed questions on clients’ security measures. “If [clients] have very, very low controls, then we may not write coverage at all,” Tracie Grella, AIG’s global head of cyber insurance, told the Financial Times.

To be sure, there is an ongoing, and chronic, challenge of getting companies to adopt baseline security postures, inclusive of running moderately up-to-date software, adopting multi-factor authentication, employing encryption at rest, and more. In the Canadian context this is made that much harder because the majority of Canadian businesses are small and mid-sized; they don’t have an IT team that can necessarily maintain or improve their organization’s increasingly complicated security posture.

In the case of larger mid-sized, or just large, companies the activities of insurers like AIG could force them to modify their security practices for the better. Insurance is generally regarded as cheaper than security, and so seeing insurance companies demand better security before writing coverage is a way of incentivizing organizational change. Further change can be incentivized by government adopting policies such as requiring a particular security posture in order to bid on, or receive, government contracts. This governmental incentivization doesn’t necessarily encourage change for small organizations that already find it challenging to contract with government due to the level of bureaucracy involved. For other organizations, however, it will mean that to obtain/maintain government contracts they’ll need to focus on getting the basics right. Again, this is about aligning incentives such that organizations see value in changing their operational policies and postures to close off at least some security vulnerabilities. There may be trickle-down effects from these measures as well, insofar as even small companies may adopt better security postures based on actionable guidance made available to the smaller companies responsible for supplying those mid-sized and larger organizations, which do have to abide by insurers’ or governments’ requirements.1

While the aforementioned incentives might improve the cybersecurity stance of some organizations, the key driver of ransomware and other online criminal activity is its sheer profitability. The economics of cybercrime have been explored in some depth over the past twenty years, and the conclusions reached range from focusing efforts on actually convicting cybercriminals (admittedly hard where countries like Russia and former Soviet republics shield criminals who do not target CIS-region organizations or governments) to selectively targeting the payment processors and other intermediaries that make it possible to derive revenue from criminal activity.

Clearly it's not possible to prevent all cybercrime, nor is it possible to do everything at once: we can't simultaneously incentivize organizations to adopt better security practices, encourage changes to insurance schemes, and find and address weak links in cybercrime monetization systems at the snap of our fingers. However, each of these pieces can be pursued with a strategic vision of enhancing defenders' postures while impeding the economic incentives that drive online criminal activity. Such a vision is ostensibly shared by a very large number of countries around the world. Consequently, in theory, it is one that states can cooperate on across borders and, in the process, build up or strengthen alliances focused on challenging international issues pertaining to finance, crime, and cybersecurity. Surely that's a vision worth supporting and actively working towards.


  1. To encourage small suppliers to adopt better security practices when working with larger organizations that have security requirements placed on them, governments might set aside funds to help mid- and large-sized vendors secure their supply chains, thus relieving small businesses of those costs. ↩︎
Link

To What Extent is China’s Control of Information a Cyber Weakness?

Lawfare has a good piece on How China’s control of information is a cyber weakness:

“Policymakers need to be aware that successful competition in cyberspace depends on having intrinsic knowledge of the consequences a democratic or authoritarian mode of government has for a country’s cyber defense. Western leaders have for a long time prioritized security of physical infrastructure. This might translate into better cyber defense capabilities, but it leaves those governments open to information operations. At the same time, more authoritarian-leaning countries may have comparative advantages when it comes to defending against information operations but at the cost of perhaps being more vulnerable to cyber network attack and exploitation. Authoritarian governments may tolerate this compromise on security due to their prioritization of surveillance and censorship practices.”

I have faith that professionals in the intelligence community have previously assessed this divide between the threats democracies have developed defences against and those that countries like China have prepared for. Nonetheless, this is a helpful summary of the two sides of the coin.

I’m less certain of a subsequent argument made in the same piece:

“These diverging emphases on different aspects of cybersecurity by democratic and authoritarian governments are not new. However, Western governments have put too much emphasis on the vulnerability of democracies to information operations, and not enough attention has been dedicated to the vulnerability of authoritarian regimes in their cyber defenses. It is crucial for democratic governments to assess the impact of information controls and regime security considerations in authoritarian-leaning countries for their day-to-day cyber operations.”

I really don't think that intelligence community members in the West are ignorant of the vulnerabilities that may be present in China or other authoritarian jurisdictions. While stories in Western media emphasize how effective foreign operators are at extracting data from Western companies and organizations, intelligence agencies in the Five Eyes are also deeply invested in penetrating strategically and tactically valuable digital resources abroad. One of the top-line critiques of the Five Eyes is that they have invested heavily in offence over defence, and the Lawfare article never really takes that up. Instead, and inaccurately to my mind, it suggests that cyber defence is resourced with a truly serious degree of commitment in the Five Eyes. I have yet to find someone in the intelligence community who would seriously assert a similar proposition.

One thing the article doesn't assess, and which would have been interesting to see considered, is the extent to which the relative dearth of encryption in China better enables its defenders to identify and terminate the exfiltration of data from their networks. Does broader visibility into data networks enhance Chinese defenders' operations? I have some doubts, but I would be curious to see the arguments for and against that position.

Link

VPN and Security Friction

Troy Hunt spent some time over the weekend writing on the relative insecurity of the Internet and how VPNs reduce threats without eliminating them entirely. The kicker is:

To be clear, using a VPN doesn’t magically solve all these issues, it mitigates them. For example, if a site lacks sufficient HTTPS then there’s still the network segment between the VPN exit node and the site in question to contend with. It’s arguably the least risky segment of the network, but it’s still there. The effectiveness of black-holing DNS queries to known bad domains depends on the domain first being known to be bad. CyberSec is still going to do a much better job of that than your ISP, but it won’t be perfect. And privacy wise, a VPN doesn’t remove DNS or the ability to inspect SNI traffic, it simply removes that ability from your ISP and grants it to NordVPN instead. But then again, I’ve always said I’d much rather trust a reputable VPN to keep my traffic secure, private and not logged, especially one that’s been independently audited to that effect.
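The DNS black-holing Hunt mentions can be illustrated with a minimal sketch (the blocklist, domains, and resolver function here are all hypothetical, not any real product's implementation): a filtering resolver answers queries for known-bad domains with an unroutable "sinkhole" address and passes everything else upstream, so the protection only extends to domains someone has already flagged as bad.

```python
# Minimal sketch of blocklist-based DNS "black-holing". Queries for
# known-bad domains (or their subdomains) get a sinkhole answer;
# everything else is deferred to an upstream resolver.

SINKHOLE = "0.0.0.0"

# Illustrative blocklist; real services curate these continuously.
BLOCKLIST = {
    "malware.example",
    "phish.example",
}

def resolve(domain: str, upstream) -> str:
    """Return the sinkhole address for blocked domains (including
    subdomains), otherwise defer to the upstream resolver."""
    labels = domain.lower().rstrip(".").split(".")
    # Check the domain and each parent domain against the blocklist,
    # so "evil.phish.example" is caught by the "phish.example" entry.
    for i in range(len(labels) - 1):
        if ".".join(labels[i:]) in BLOCKLIST:
            return SINKHOLE
    return upstream(domain)

# A domain not yet known to be bad sails straight through: the filter
# is only as good as its blocklist, which is exactly Hunt's caveat.
fake_upstream = lambda d: "93.184.216.34"
print(resolve("evil.phish.example", fake_upstream))        # 0.0.0.0
print(resolve("not-yet-known-bad.example", fake_upstream)) # 93.184.216.34
```

The same logic also shows why moving from an ISP's resolver to a VPN provider's resolver relocates, rather than removes, this visibility: whoever runs `resolve` sees every domain queried.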

Something that security professionals are still not great at communicating (because we're not asked to, and because it's hard for regular users to act on the information) is that security is about adding friction that prevents adversaries from successfully exploiting whomever or whatever they're targeting. Any such friction, however, can be overcome by a sufficiently well-resourced attacker. And when you read most articles about any given threat-mitigation tool, it becomes apparent that the problems faced are systemic: while individuals can undertake some efforts to increase friction, the crux of the problem is that they are operating in an almost inherently insecure environment.

Security is a community good and, as such, individuals can only do so much to protect themselves. What's more, their individual efforts functionally represent a failing of the security community, and reveal the need for group efforts to reduce the threats individuals face every day when they use the Internet or Internet-connected systems. Sure, VPNs can help individuals but, ideally, they are technologies to be discarded in some distant future, after groups of actors have successfully worked to mitigate the threats that lurk all around us. Until then, though, adopting a trusted VPN can be a very good idea if you can afford the costs associated with it.