Strategy is critical because it establishes a common goal that guides agencies in policymaking and provides the framework for collaboration and cohesion of vision. Strategy is difficult to devise, devilish to agree upon, and often painfully reductive when one considers competing demands. But without it, security boils down to ad hoc government responses based on urgent yet contradicting concepts.
– Tatyana Bolton, Mary Brooks, and Kathryn Waldron, “Three Key Questions to Define ICT Supply Chain Security”
In an article for The Hill, Shannon Lantzy and Kelly Rozumalski discuss how Software Bills of Materials (SBOMs) are good for business as well as security. SBOMs emerged more forcefully in the American policy space after the Biden White House promulgated an Executive Order on cybersecurity on May 12, 2021. The Order requires that developers and private companies providing services to the United States government produce SBOMs.1 SBOMs are meant to help responders to cybersecurity incidents assess which APIs, libraries, or other digital elements might be vulnerable to an identified operation, and to help government procurement agencies better ensure that the digital assets in a product or service meet a specified security standard.
Specifically, Lantzy and Rozumalski write:
Product offerings that are already secure-by-design will be able to command a premium price because consumers will be able to compare SBOMs.
Products with inherently less patchable components will also benefit. A universal SBOM mandate will make it easy to spot vulnerabilities, creating market risk for lagging products; firms will be forced to reengineer the products before getting hacked. While this seems like a new cost to the laggards, it’s really just a transfer of future risk to a current cost of reengineering. The key to a universal mandate is that all laggards will incur this cost at roughly the same time, thereby not losing a competitive edge.
The promise of increased security and reduced risk will not be realized by SBOM mandates alone. Tooling and putting this mandate in practice will be required to realize the full power of the SBOM.
The idea of internalizing security costs to developers, and potentially increasing the cost of goods, has been discussed publicly and with Western governments for at least two decades. The overall risk profiles presented to organizations have continued to increase year over year as companies race to market with little regard for security, a business development strategy that made sense when they faced few economic liabilities for selling products with severe cybersecurity limitations or vulnerabilities. In theory, enabling comparison shopping via SBOMs will disincentivize companies from selling low-grade equipment and services if they want to win high-profit enterprise or high-reliability government contracts, with security improvements also trickling down to the products purchased by consumers (‘trickle down cybersecurity’).
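To make the comparison-shopping idea concrete, the sketch below checks an SBOM's component list against a feed of known-vulnerable packages. This is only an illustration: the SBOM layout loosely follows a CycloneDX-style "components" array, and the component names, versions, and vulnerability entries are all hypothetical placeholders, not real advisory data.

```python
# Hypothetical feed of (package name, affected version) pairs. In practice
# this would come from a vulnerability database, not a hard-coded set.
KNOWN_VULNERABLE = {
    ("log4j-core", "2.14.1"),
    ("openssl", "1.0.2k"),
}

def flag_vulnerable_components(sbom: dict) -> list[str]:
    """Return 'name version' strings for SBOM components on the vulnerable list."""
    flagged = []
    for component in sbom.get("components", []):
        key = (component.get("name"), component.get("version"))
        if key in KNOWN_VULNERABLE:
            flagged.append(f"{key[0]} {key[1]}")
    return flagged
```

A buyer comparing two vendors could run the same check over each vendor's SBOM and weigh the results, which is roughly the market mechanism Lantzy and Rozumalski describe.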
While I think that SBOMs are definitely a part of developing cybersecurity resilience, it remains to be seen just how much consumers will pay for ‘more secure’ products given that, first, they are economically incentivized to pay the lowest possible amounts for goods and services and, second, they are unlikely to know for certain what is a good or bad security practice. Advocates of SBOMs often liken them to nutrition labels, but we know that at most about a third of consumers read those labels (and those who do often face societal pressures to regulate caloric intake, which is why they read them in the first place) and, also, that the labels are often inaccurate.
It will be very interesting to see whether enterprises and consumers alike will be able or willing to pay higher up-front costs, to say nothing of being able to actually trust what is on the SBOM labels. Will companies that adopt SBOM products suffer a lower rate of cybersecurity incidents, or less serious ones, or be able to respond more quickly when an incident is realized? We're going to be able to actually test the promises of SBOMs soon, and it will be fascinating to see how things play out.
- I have published a summary and brief analysis of this Executive Order elsewhere in case you want to read it. ↩︎
In short, software supply chain security breaches don’t look like other categories of breaches. A lot of this comes down to the central conundrum of system security: it’s not possible to defend the edges of a system without centralization that lets us pool defensive resources. But this same centralization concentrates offensive action against a few single points of failure that, if breached, cause all of the edges to fall at once. And the more edges a central failure point controls, the more likely it is that the collateral real-world consequences of any breach, but especially a ransomware breach, will be catastrophic and overwhelm the defensive cybersecurity industry’s ability to respond.
Managed Service Providers (MSPs) are becoming increasingly common targets. It’s worth noting that the Canadian Centre for Cybersecurity‘s National Cyber Threat Assessment 2020 listed ransomware as well as the exploitation of MSPs as two of the seven key threats to Canadian financial and economic health. The Centre went so far as to state that it expected,
… that over the next two years ransomware campaigns will very likely increasingly target MSPs for the purpose of targeting their clients as a means of scaling targeted ransomware campaigns.
Sadly, if not surprisingly, this assessment has been entirely correct. It remains to be seen what impact the 2020 threat assessment has had, or will have, on Canadian organizations and their security postures. Based on conversations I’ve had over the past few months the results are not inspiring, and the threat assessment has generally been less effective than hoped in driving change in Canada.
As discussed by Steven Bellovin, part of the broader challenge for the security community in preparing for MSP operations has been that defenders are routinely behind the times; operators modify what and who their campaigns will target and defenders are forced to scramble to catch up. He specifically, and depressingly, recognizes that, “…when it comes to target selection, the attackers have outmaneuvered defenders for almost 30 years.”
These failures are that much more noteworthy given that the United States has trumpeted for years that the NSA will ‘defend forward’ to identify and hunt threats, and respond to them before they reach ‘American cybershores’.2 The now seemingly routine targeting of both system update mechanisms and vendors that provide security or operational controls for wide swathes of organizations demonstrates that things are going to get a lot worse before they’re likely to improve.
A course correction could follow from Western nations developing effective and meaningful cyber-deterrence processes that encourage nations such as Russia, China, Iran, and North Korea to punish computer operators who are behind some of the worst kinds of operations that have emerged in public view. However, this would in part require the American government (and its allies) to actually figure out how they can deter adversaries. It’s been 12 years or so, and counting, and it’s not apparent that any American administration has figured out how to implement a deterrence regime that exceeds issuing toothless threats. The same goes for most of their allies.
Absent an actual deterrence response, such as one that takes action in the sovereign states hosting malicious operators, Western nations have slowly joined together to issue group attributions of foreign operations. They’ve also come together to recognize certain classes of cyber operations, including ransomware, as particularly problematic. Must nations build this shared capacity first, before they can actually undertake deterrence activities? Should that be the case, it would strongly underscore the need to develop shared norms in advance of sovereign states exercising their latent capacities in cyber and other domains, and lend credence to the importance of the Tallinn Manual process. If, however, this capacity is built and nothing is still undertaken to deter, then what will the capacity actually be worth? While this is a fascinating scholarly exercise–it’s basically an opportunity to test competing scholarly hypotheses–it’s one with significant real-world consequences, and the danger is that by the time we recognize which hypothesis is correct, years of time and effort could have been wasted for little apparent gain.
What’s worse is that this is even a scholarly exercise. Given that more than a decade has passed, and that ‘cyber’ is no longer truly new, why must hypotheses be spun instead of states having developed sufficient capacity to deter? Where are Western states’ muscles after so much time working this problem?
The Financial Times has a good piece examining how insurance companies are beginning to recalculate the premiums they charge to cover ransomware payments. In addition to raising fees (and, in some cases, deciding to stop insuring against ransomware altogether) some insurers like AIG are adopting stronger underwriting, including:
… an additional 25 detailed questions on clients’ security measures. “If [clients] have very, very low controls, then we may not write coverage at all,” Tracie Grella, AIG’s global head of cyber insurance, told the Financial Times.
To be sure, there is an ongoing, and chronic, challenge of getting companies to adopt baseline security postures, inclusive of running moderately up-to-date software, adopting multi-factor authentication, employing encryption at rest, and more. In the Canadian context this is made that much harder because the majority of Canadian businesses are small and mid-sized; they don’t have an IT team that can necessarily maintain or improve their organization’s increasingly complicated security posture.
In the case of larger mid-sized, or just large, companies the activities of insurers like AIG could force them to modify their security practices for the better. Insurance is generally regarded as cheaper than security, and so insurers demanding better security before writing coverage is a way of incentivizing organizational change. Further change can be incentivized by governments adopting policies such as requiring a particular security posture in order to bid on, or receive, government contracts. This governmental incentivization doesn’t necessarily encourage change in small organizations that already find it challenging to contract with government due to the level of bureaucracy involved. For other organizations, however, it will mean that to obtain or maintain government contracts they’ll need to focus on getting the basics right. Again, this is about aligning incentives such that organizations see value in changing their operational policies and postures to close off at least some security vulnerabilities. There may be trickle down effects to these measures as well, insofar as even small companies may adopt better security postures based on actionable guidance made available to the small suppliers of the mid-sized and larger organizations that do have to abide by insurers’ or governments’ requirements.1
While the aforementioned incentives might improve the cybersecurity stance of some organizations, the key driver of ransomware and other criminal activities online is their sheer profitability. The economics of cybercrime have been explored in some depth over the past 20 years or so, and the conclusions that have been reached range from focusing efforts on actually convicting cybercriminals (admittedly hard where countries like Russia and former Soviet republics indemnify criminals who do not target CIS-region organizations or governments) to selectively targeting payment processors or other intermediaries that make it possible to derive revenues from criminal activities.
Clearly it’s not possible to prevent all cybercrime, nor is it possible to do all things at once: we can’t simultaneously incentivize organizations to adopt better security practices, encourage changes to insurance schemas, and find and address weak links in cybercrime monetization systems with the snap of a finger. However, each of the aforementioned pieces can be done with a strategic vision of enhancing defenders’ postures while impeding the economic incentives that drive online criminal activities. Such a vision is ostensibly shared by a very large number of countries around the world. Consequently, in theory, this kind of strategic vision is one that states can cooperate on across borders and, in the process, build up or strengthen alliances focused on addressing challenging international issues pertaining to finance, crime, and cybersecurity. Surely that’s a vision worth supporting and actively working towards.
- To encourage small suppliers to adopt better security practices when working with larger organizations that have security requirements placed on them, governments might set aside funds to help mid- and large-sized vendors secure their supply chains, thus relieving small businesses of these costs. ↩︎
Lawfare has a good piece on How China’s control of information is a cyber weakness:
“Policymakers need to be aware that successful competition in cyberspace depends on having intrinsic knowledge of the consequences a democratic or authoritarian mode of government has for a country’s cyber defense. Western leaders have for a long time prioritized security of physical infrastructure. This might translate into better cyber defense capabilities, but it leaves those governments open to information operations. At the same time, more authoritarian-leaning countries may have comparative advantages when it comes to defending against information operations but at the cost of perhaps being more vulnerable to cyber network attack and exploitation. Authoritarian governments may tolerate this compromise on security due to their prioritization of surveillance and censorship practices.
I have faith that professionals in the intelligence community have previously assessed this divide between the threats that democracies have developed defences against and those that countries like China have prepared for. Nonetheless this is a helpful summary of the two sides of the coin.
I’m less certain of a subsequent argument made in the same piece:
These diverging emphases on different aspects of cybersecurity by democratic and authoritarian governments are not new. However, Western governments have put too much emphasis on the vulnerability of democracies to information operations, and not enough attention has been dedicated to the vulnerability of authoritarian regimes in their cyber defenses. It is crucial for democratic governments to assess the impact of information controls and regime security considerations in authoritarian-leaning countries for their day-to-day cyber operations.”
I really don’t think that intelligence community members in the West are ignorant of the vulnerabilities that may be present in China or other authoritarian jurisdictions. While stories in Western media emphasize how effective foreign operators are at extracting data from Western companies and organizations, intelligence agencies in the Five Eyes are also deeply invested in penetrating strategically and tactically valuable digital resources abroad. One of the top-line critiques of the Five Eyes is that they have invested heavily in offence over defence, and the article from Lawfare never really takes that up. Instead, and inaccurately to my mind, it suggests that cyber defence is something done with a truly serious degree of resourcing in the Five Eyes. I have yet to find someone in the intelligence community who would seriously assert a similar proposition.
One thing the article doesn’t assess, and which would have been interesting to see considered, is the extent to which the relative dearth of encryption in China better enables its defenders to identify and terminate exfiltration of data from their networks. Does broader visibility into data networks enhance Chinese defenders’ operations? I have some doubts, but I would be curious to see the arguments for and against that position.
Troy Hunt spent some time over the weekend writing on the relative insecurity of the Internet and how VPNs reduce threats without obviating those threats entirely. The kicker is:
To be clear, using a VPN doesn’t magically solve all these issues, it mitigates them. For example, if a site lacks sufficient HTTPS then there’s still the network segment between the VPN exit node and the site in question to contend with. It’s arguably the least risky segment of the network, but it’s still there. The effectiveness of black-holing DNS queries to known bad domains depends on the domain first being known to be bad. CyberSec is still going to do a much better job of that than your ISP, but it won’t be perfect. And privacy wise, a VPN doesn’t remove DNS or the ability to inspect SNI traffic, it simply removes that ability from your ISP and grants it to NordVPN instead. But then again, I’ve always said I’d much rather trust a reputable VPN to keep my traffic secure, private and not logged, especially one that’s been independently audited to that effect.
Something that security professionals are still not great at communicating—because we’re not asked to and because it’s hard for regular users to act on the information—is that security is about adding friction that prevents adversaries from successfully exploiting whomever or whatever they’re targeting. Any such friction, however, can be overcome by a sufficiently well-resourced attacker. But when you read most articles about any given threat mitigation tool, what becomes apparent is that the problems faced are systemic; while individuals can undertake some efforts to increase friction, the crux of the problem is that individuals are operating in an almost inherently insecure environment.
Security is a community good and, as such, individuals can only do so much to protect themselves. What’s more, their individual efforts functionally represent a failing of the security community, and reveal the need for group efforts to reduce the threats individuals face every day when they use the Internet or Internet-connected systems. Sure, some VPNs help individuals but, ideally, these are technologies to be discarded in some distant future after groups of actors have successfully worked to mitigate the threats that lurk all around us. Until then, though, adopting a trusted VPN can be a very good idea if you can afford the costs linked to it.
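The DNS black-holing Hunt mentions illustrates the "known to be bad" dependency well. A resolver consults a blocklist before answering, so its protection is only as good as the list itself. Below is a minimal sketch of that lookup; the blocked domains are hypothetical placeholders, and a real resolver would consult a continuously updated threat feed rather than a static set.

```python
# Hypothetical blocklist; real services pull these from threat-intelligence feeds.
BLOCKLIST = {"malware.example", "tracker.example"}

def is_blocked(domain: str) -> bool:
    """Return True if the domain, or any parent domain, is on the blocklist.

    Checking parent domains means blocking "malware.example" also catches
    subdomains like "cdn.malware.example".
    """
    labels = domain.lower().rstrip(".").split(".")
    # Check "a.b.c", then "b.c", then "c" against the blocklist.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKLIST:
            return True
    return False
```

The limitation Hunt identifies is visible here: a brand-new malicious domain returns `False` until someone adds it to the list, which is why black-holing mitigates threats rather than eliminating them.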
Nothing quite like starting the day by refreshing a password that was apparently compromised, and then trying to determine where/how the operators might have obtained the login credentials in the first place. Still, props to Google’s AI systems for detecting the aberrant login attempt and blocking it, as well as for password managers which make having unique login credentials for every service so easy to manage/replace.
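For what it's worth, the reason rotating a compromised credential is so painless with a password manager is that generating a strong replacement is trivial. A rough sketch of what a manager does under the hood, using Python's standard `secrets` module (the length and character set here are illustrative choices, not any particular product's policy):

```python
import secrets
import string

def generate_password(length: int = 24) -> str:
    """Generate a random password from letters, digits, and punctuation.

    secrets.choice() draws from the OS's cryptographically secure RNG,
    unlike random.choice(), which is predictable and unsuitable here.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Because every credential is generated fresh like this, no two services share a password, and a breach at one site exposes nothing reusable elsewhere.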
Between 2002 and 2009, the [Industrial Control System Cyber Emergency Response Team] conducted more than 100 site assessments across multiple industries–oil and natural gas, chemical, and water–and found more than 38,000 vulnerabilities. These included critical systems that were accessible over the internet, default vendor passwords that operators had never bothered to change or hard-coded passwords that couldn’t be changed, outdated software patches, and a lack of standard protections such as firewalls and intrusion-detection systems.
But despite the best efforts of the test-bed and site-assessment researchers, they were battling decades of industry inertia–vendors took months and years to patch vulnerabilities that government researchers found in their systems, and owners of crucial infrastructure were only willing to make cosmetic changes to their systems and networks, resisting more extensive ones.
– Kim Zetter, Countdown to Zero-Day
Matt Green has a good writeup of the confusion associated with Apple’s decision to relocate Chinese users’ data to data centres in China. He notes:
Unfortunately, the problem with Apple’s disclosure of its China news is, well, really just a version of the same problem that’s existed with Apple’s entire approach to iCloud.
Where Apple provides overwhelming detail about their best security systems (file encryption, iOS, iMessage), they provide distressingly little technical detail about the weaker links like iCloud encryption. We know that Apple can access and even hand over iCloud backups to law enforcement. But what about Apple’s partners? What about keychain data? How is this information protected? Who knows.
This vague approach to security might make it easier for Apple to brush off the security impact of changes like the recent China news (“look, no backdoors!”) But it also confuses the picture, and calls into doubt any future technical security improvements that Apple might be planning to make in the future. For example, this article from 2016 claims that Apple is planning stronger overall encryption for iCloud. Are those plans scrapped? And if not, will those plans fly in the new Chinese version of iCloud? Will there be two technically different versions of iCloud? Who even knows?
And at the end of the day, if Apple can’t trust us enough to explain how their systems work, then maybe we shouldn’t trust them either.
Apple is regarded as providing incredibly secure devices to the public. But as more and more of the data on Apple devices is offloaded to Apple-controlled cloud services, it’s imperative that the company explain both how it is securing that data and the specific situations under which it can disclose the data it stewards for its users.
Blew away over 10K emails that were collecting dust in one of my main accounts. My goal over the next few months is to remove the vast majority of old email that serves no purpose. Doing so will free up some space (not that I really need it) while also cutting down on the possible deleterious effects of the account being hacked and its contents selectively modified and/or leaked.