
Economics and Software Bills of Materials (SBOM)

In an article for The Hill, Shannon Lantzy and Kelly Rozumalski discuss how software bills of materials (SBOMs) are good for business as well as security. SBOMs emerged more forcefully in the American policy space after the Biden White House promulgated an Executive Order on cybersecurity on May 12, 2021. The Order requires that developers and private companies providing services to the United States government produce SBOMs.1 SBOMs are meant to help incident responders assess which APIs, libraries, or other digital elements might be vulnerable to an identified operation, and to help government procurement agencies better ensure that the digital assets in a product or service meet a specified security standard.
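As a rough sketch of what responders and procurement reviewers would actually consume, here is a minimal, hypothetical component inventory written with CycloneDX-style field names; the packages, versions, and the small lookup helper are illustrative only, not a prescribed format.

```python
import json

# Minimal, illustrative SBOM in the spirit of CycloneDX. Field names follow
# that format's conventions; the components listed are hypothetical.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "openssl",
            "version": "1.1.1k",
            "purl": "pkg:generic/openssl@1.1.1k",
        },
        {
            "type": "library",
            "name": "log4j-core",
            "version": "2.14.1",
            "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1",
        },
    ],
}

# An incident responder (or procurement reviewer) can scan a pile of these
# documents for a component that is known to be vulnerable.
def uses_component(doc: dict, name: str) -> bool:
    return any(c["name"] == name for c in doc.get("components", []))

print(json.dumps(sbom, indent=2))
print("Ships log4j-core:", uses_component(sbom, "log4j-core"))
```

The point is simply that an SBOM is a structured inventory: once every vendor publishes one, both comparison shopping and incident triage become lookup problems rather than guesswork.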

Specifically, Lantzy and Rozumalski write:

Product offerings that are already secure-by-design will be able to command a premium price because consumers will be able to compare SBOMs.

Products with inherently less patchable components will also benefit. A universal SBOM mandate will make it easy to spot vulnerabilities, creating market risk for lagging products; firms will be forced to reengineer the products before getting hacked. While this seems like a new cost to the laggards, it’s really just a transfer of future risk to a current cost of reengineering. The key to a universal mandate is that all laggards will incur this cost at roughly the same time, thereby not losing a competitive edge.

The promise of increased security and reduced risk will not be realized by SBOM mandates alone. Tooling and putting this mandate in practice will be required to realize the full power of the SBOM.

The idea of internalizing security costs to developers, and thereby potentially increasing the cost of goods, has been discussed publicly and within Western governments for at least two decades. The overall risk profiles facing organizations have continued to grow year over year as companies race to market with little regard for security, a business development strategy that made sense so long as they faced few economic liabilities for selling products with severe cybersecurity limitations or vulnerabilities. In theory, enabling comparison shopping via SBOMs will disincentivize companies from selling low-grade equipment and services if they want to win high-profit enterprise or high-reliability government contracts, with the effect that security improvements will also trickle down to the products purchased by consumers (‘trickle down cybersecurity’).

While I think that SBOMs are definitely a part of developing cybersecurity resilience, it remains to be seen just how much consumers will pay for ‘more secure’ products given that, first, they are economically incentivized to pay the lowest possible amount for goods and services and, second, they are unlikely to know for certain what is a good or bad security practice. Advocates of SBOMs often liken them to nutrition labels, but we know that at most about a third of consumers read those labels (and those who do often face societal pressures to regulate their caloric intake), and also that the labels are often inaccurate.

It will be very interesting to see whether enterprises and consumers alike will be able or willing to pay higher up-front costs, to say nothing of whether they can actually trust what is on the SBOM labels. Will companies that adopt SBOM-covered products suffer fewer cybersecurity incidents, or less serious ones, or be able to respond more quickly when an incident does occur? We’re soon going to be able to actually test the promises of SBOMs, and it’s going to be fascinating to see how things play out.


  1. I have published a summary and brief analysis of this Executive Order elsewhere in case you want to read it. ↩︎

The Kaseya Ransomware Attack Is a Really Big Deal

(Managed Service Provider image by the Canadian Centre for Cybersecurity)

Matt Tait, as usual, has good insights into just why the Kaseya ransomware attack1 was such a big deal:

In short, software supply chain security breaches don’t look like other categories of breaches. A lot of this comes down to the central conundrum of system security: it’s not possible to defend the edges of a system without centralization so that we can pool defensive resources. But this same centralization concentrates offensive action against a few single points of failure that, if breached, cause all of the edges to fall at once. And the more edges that central failure point controls, the more likely the collateral real-world consequences of any breach, but especially a ransomware breach, will be catastrophic and overwhelm the defensive cybersecurity industry’s ability to respond.

Managed Service Providers (MSPs) are becoming increasingly common targets. It’s worth noting that the Canadian Centre for Cybersecurity’s National Cyber Threat Assessment 2020 listed ransomware as well as the exploitation of MSPs as two of the seven key threats to Canadian financial and economic health. The Centre went so far as to state that it expected,

… that over the next two years ransomware campaigns will very likely increasingly target MSPs for the purpose of targeting their clients as a means of scaling targeted ransomware campaigns.

Sadly, if not surprisingly, this assessment has proven entirely correct. It remains to be seen what impact the 2020 threat assessment has had, or will have, on Canadian organizations and their security postures. Based on conversations I’ve had over the past few months, the results are not inspiring, and the threat assessment has generally been less effective than hoped in driving change in Canada.

As discussed by Steven Bellovin, part of the broader challenge for the security community in preparing for MSP operations has been that defenders are routinely behind the times; operators modify what and whom their campaigns target, and defenders are forced to scramble to catch up. He specifically, and depressingly, recognizes that, “…when it comes to target selection, the attackers have outmaneuvered defenders for almost 30 years.”

These failures are that much more noteworthy given that the United States has trumpeted for years that the NSA will ‘defend forward’ to identify and hunt threats, and respond to them before they reach ‘American cybershores’.2 The seemingly now-routine targeting of both system update mechanisms and vendors that provide security or operational controls for wide swathes of organizations demonstrates that things are going to get a lot worse before they’re likely to improve.

A course correction could follow from Western nations developing effective and meaningful cyber-deterrence processes that encourage nations such as Russia, China, Iran, and North Korea to punish computer operators who are behind some of the worst kinds of operations that have emerged in public view. However, this would in part require the American government (and its allies) to actually figure out how they can deter adversaries. It’s been 12 years or so, and counting, and it’s not apparent that any American administration has figured out how to implement a deterrence regime that exceeds issuing toothless threats. The same goes for most of their allies.

Absent an actual deterrence response, such as one which takes action in the sovereign states that host malicious operators, Western nations have slowly joined together to issue group attributions of foreign operations. They’ve also come together to recognize certain classes of cyber operations, including ransomware, as particularly problematic. Must nations build this shared capacity first, before they can actually undertake deterrence activities? If so, that would strongly underscore the need to develop shared norms in advance of sovereign states exercising their latent capacities in cyber and other domains, and lend credence to the importance of the Tallinn Manual process. If, however, this capacity is built and nothing is still undertaken to deter, then what will the capacity actually be worth? While this is a fascinating scholarly exercise (it’s basically an opportunity to test competing hypotheses) it’s one with significant real-world consequences, and the danger is that by the time we recognize which hypothesis is correct, years of time and effort could have been wasted for little apparent gain.

What’s worse is that this remains a scholarly exercise at all. Given that more than a decade has passed, and that ‘cyber’ is not truly new anymore, why must hypotheses be spun instead of states having developed sufficient capacity to deter? Where are Western states’ muscles after so much time working this problem?


  1. As a point of order, when is an act of ransomware an attack versus an operation? ↩︎
  2. I just made that one up. No, I’m not proud of it. ↩︎

Building a Strategic Vision to Combat Cybercrime

The Financial Times has a good piece examining how insurance companies are beginning to recalculate the premiums they charge to cover ransomware payments. In addition to raising fees (and, in some cases, deciding whether to stop insuring against ransomware at all), some insurers like AIG are adopting stronger underwriting, including:

… an additional 25 detailed questions on clients’ security measures. “If [clients] have very, very low controls, then we may not write coverage at all,” Tracie Grella, AIG’s global head of cyber insurance, told the Financial Times.

To be sure, there is an ongoing, and chronic, challenge of getting companies to adopt baseline security postures, inclusive of running moderately up-to-date software, adopting multi-factor authentication, employing encryption at rest, and more. In the Canadian context this is made that much harder because the majority of Canadian businesses are small and mid-sized; they don’t have an IT team that can necessarily maintain or improve their organization’s increasingly complicated security posture.

In the case of larger mid-sized, or just large, companies the activities of insurers like AIG could force them to modify their security practices for the better. Insurance is generally regarded as cheaper than security, so insurers demanding better security as a condition of coverage is a way of incentivizing organizational change. Further change can be incentivized by governments adopting policies such as requiring a particular security posture in order to bid on, or receive, government contracts. This governmental incentivization won’t necessarily drive change for small organizations that already find it challenging to contract with government due to the level of bureaucracy involved. For other organizations, however, it will mean that to obtain or maintain government contracts they’ll need to focus on getting the basics right. Again, this is about aligning incentives so that organizations see value in changing their operational policies and postures to close off at least some security vulnerabilities. There may be trickle down effects from these measures as well, insofar as even small companies that supply those mid-sized and larger organizations (which do have to abide by insurers’ or governments’ requirements) may adopt better security postures based on actionable guidance made available to them.1

While the aforementioned incentives might improve the cybersecurity stance of some organizations, the key driver of ransomware and other criminal activity online is its sheer profitability. The economics of cybercrime have been explored in some depth over the past 20 years or so, and the conclusions reached range from focusing efforts on actually convicting cybercriminals (admittedly hard where countries like Russia and other former Soviet republics indemnify criminals who do not target CIS-region organizations or governments) to selectively targeting the payment processors and other intermediaries that make it possible to derive revenue from criminal activity.

Clearly it’s not possible to prevent all cybercrime, nor is it possible to do all things at once: we can’t simultaneously incentivize organizations to adopt better security practices, encourage changes to insurance schemas, and find and address weak links in cybercrime monetization systems with the snap of a finger. However, each of the aforementioned pieces can be done with a strategic vision of enhancing defenders’ postures while impeding the economic incentives that drive online criminal activities. Such a vision is ostensibly shared by a very large number of countries around the world. Consequently, in theory, this kind of strategic vision is one that states can cooperate on across borders and, in the process, build up or strengthen alliances focused on addressing challenging international issues pertaining to finance, crime, and cybersecurity. Surely that’s a vision worth supporting and actively working towards.


  1. To encourage small suppliers to adopt better security practices when they work with larger organizations that have security requirements placed on them, governments might set aside funds to assist mid-sized and large vendors in securing further down the supply chain, thus relieving small businesses of these costs. ↩︎

🦓 Zebra Crossing: an easy-to-use digital safety checklist

There are a lot of different security guides, but I think that in terms of balancing being comprehensive, accessible, and directly actionable, Zebra Crossing is amongst the better guides out there. Who’s it for?

1. You use the internet on a day-to-day basis – for work, social media, financial transactions, etc.

2. You feel you could be doing more to ensure your digital safety and privacy, but you’re not in immediate danger. (If you are, seek out an expert for a one-on-one consult.)

3. You’re comfortable with technology. For example, you’re comfortable going into the settings section of your computer/smartphone.

How should it be used?

1. Recommendations have been sorted in ascending levels of difficulty. Start from level one and work your way up!

2. Everyone should follow the recommendations in levels one and two. They will protect you from the widely-used (yet simple) attacks. Going through them shouldn’t take more than 1-2 hours.

3. Level three is a bit more involved in terms of time and money and may not be 100% necessary. But if you’re worried at all and can afford to, we recommend going through that list too. Depending on the amount of digital housekeeping you have to do, it may take anywhere from an hour to an afternoon.

4. The scenarios listed after are for higher-stakes situations — scan them to see if any of them apply to you. (Because the stakes are higher, they assume that you’ve done everything in levels 1-3.)

Another great resource is Consumer Reports’ Security Planner. While it’s not designed to comprehensively guide you through upgrading your security profile, it is probably even better for helping individuals improve specific security practices.


To What Extent is China’s Control of Information a Cyber Weakness?

Lawfare has a good piece on How China’s control of information is a cyber weakness:

Policymakers need to be aware that successful competition in cyberspace depends on having intrinsic knowledge of the consequences a democratic or authoritarian mode of government has for a country’s cyber defense. Western leaders have for a long time prioritized security of physical infrastructure. This might translate into better cyber defense capabilities, but it leaves those governments open to information operations. At the same time, more authoritarian-leaning countries may have comparative advantages when it comes to defending against information operations but at the cost of perhaps being more vulnerable to cyber network attack and exploitation. Authoritarian governments may tolerate this compromise on security due to their prioritization of surveillance and censorship practices.

I have faith that professionals in the intelligence community have previously assessed this divide between what democracies have developed defences against versus what countries like China have prepared against. Nonetheless, this is a helpful summary of the two sides of the coin.

I’m less certain of a subsequent argument made in the same piece:

These diverging emphases on different aspects of cybersecurity by democratic and authoritarian governments are not new. However, Western governments have put too much emphasis on the vulnerability of democracies to information operations, and not enough attention has been dedicated to the vulnerability of authoritarian regimes in their cyber defenses. It is crucial for democratic governments to assess the impact of information controls and regime security considerations in authoritarian-leaning countries for their day-to-day cyber operations.

I really don’t think that intelligence community members in the West are ignorant of the vulnerabilities that may be present in China or other authoritarian jurisdictions. While stories in Western media emphasize how effective foreign operators are at extracting data from Western companies and organizations, intelligence agencies in the Five Eyes are also deeply invested in penetrating strategically and tactically valuable digital resources abroad. One of the top-line critiques of the Five Eyes is that they have invested heavily in offence over defence, and the article from Lawfare doesn’t really ever take that up. Instead, and inaccurately to my mind, it suggests that cyber defence is something done with a truly serious degree of resourcing in the Five Eyes. I have yet to find someone in the intelligence community who would seriously assert such a proposition.

One thing that isn’t assessed in the article, and which would have been interesting to see considered, is the extent to which the relative dearth of encryption in China better enables its defenders to identify and terminate the exfiltration of data from their networks. Does broader visibility into data networks enhance Chinese defenders’ operations? I have some doubts, but I would be curious to see the arguments for and against that position.


VPN and Security Friction

Troy Hunt spent some time over the weekend writing on the relative insecurity of the Internet and how VPNs reduce threats without eliminating them entirely. The kicker is:

To be clear, using a VPN doesn’t magically solve all these issues, it mitigates them. For example, if a site lacks sufficient HTTPS then there’s still the network segment between the VPN exit node and the site in question to contend with. It’s arguably the least risky segment of the network, but it’s still there. The effectiveness of black-holing DNS queries to known bad domains depends on the domain first being known to be bad. CyberSec is still going to do a much better job of that than your ISP, but it won’t be perfect. And privacy wise, a VPN doesn’t remove DNS or the ability to inspect SNI traffic, it simply removes that ability from your ISP and grants it to NordVPN instead. But then again, I’ve always said I’d much rather trust a reputable VPN to keep my traffic secure, private and not logged, especially one that’s been independently audited to that effect.
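As a toy illustration of the DNS black-holing Hunt mentions, the sketch below simply refuses to resolve names that appear on a blocklist. The domains and the blocklist here are hypothetical; the point, as Hunt notes, is that the protection only covers domains that are already known to be bad.

```python
import socket
from typing import Optional

# Hypothetical blocklist. A real service maintains and updates this list
# centrally; protection only exists for domains already known to be bad.
KNOWN_BAD = {"malware-distribution.example", "phishing-login.example"}

def resolve(hostname: str) -> Optional[str]:
    """Return an IP address, or None if the name is black-holed."""
    if hostname.lower().rstrip(".") in KNOWN_BAD:
        return None  # black-hole: the query is never answered
    return socket.gethostbyname(hostname)

print(resolve("example.com"))                   # resolves normally
print(resolve("malware-distribution.example"))  # None: blocked
```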

Something that security professionals are still not great at communicating (partly because we’re rarely asked to, and partly because the information is hard for regular users to act on) is that security is about adding friction that prevents adversaries from successfully exploiting whomever or whatever they’re targeting. Any such friction, however, can be overcome by a sufficiently well-resourced attacker. And when you read most articles about any given threat mitigation tool, what becomes apparent is that the problems are systemic: individuals can undertake some efforts to increase friction, but the crux of the problem is that they are operating in an almost inherently insecure environment.

Security is a community good and, as such, individuals can only do so much to protect themselves. What’s more, their individual efforts functionally represent a failing of the security community, and reveal the need for group efforts to reduce the threats individuals face every day when they use the Internet or Internet-connected systems. Sure, some VPNs are a good thing to help individuals but, ideally, these are technologies to be discarded in some distant future after groups of actors have successfully worked to mitigate the threats that lurk all around us. Until then, though, adopting a trusted VPN can be a very good idea if you can afford the costs linked to it.


2019.1.17

Nothing quite like starting the day by refreshing a password that was apparently compromised, and then trying to determine where/how the operators might have obtained the login credentials in the first place. Still, props to Google’s AI systems for detecting the aberrant login attempt and blocking it, and to password managers, which make having unique login credentials for every service so easy to manage and replace.


Review of the Countdown to Zero Day: Stuxnet and the Launch of the World’s First Digital Weapon

Countdown to Zero Day: Stuxnet and the Launch of the World’s First Digital Weapon

Rating: ⭐️⭐️⭐️⭐️⭐️

Zetter’s book engages in a heroic effort to summarize, describe, and explain the significance of the NSA’s and Israel’s first ‘cyber weapon’, named Stuxnet. This piece of malware was used to disrupt the production of nuclear material in Iran as part of broader covert efforts to limit the country’s ability to construct a nuclear weapon.

Multiple versions of Stuxnet were created, as were a series of complementary or derivative malware strains with names such as Duqu and Flame. In all cases the malware was unusually sophisticated and relied on chains of exploits or novel techniques that advanced certain capabilities from academic theory to implementable practice. The reliance on zero-day vulnerabilities (those for which no patches are available), combined with deliberate efforts to subvert the Windows Update system and to use fraudulently signed digital certificates, bears the hallmarks of developers willing to compromise global security for the sake of a specific American-Israeli malware campaign. In effect, the decision to leave the world’s computers vulnerable to the exploits used in the creation of Stuxnet demonstrates that the respective governments and their signals intelligence agencies which authored the malware prioritized offence over defence.

The book regales the reader with any number of politically sensitive tidbits: the CIA was responsible for providing some information on Iran’s nuclear ambitions to the IAEA; Russian antivirus researchers were monitored by Israeli (and perhaps other nations’) spies; the CIA and renowned physicists historically planted false stories in Nature; the formal recognition of cyberspace as the fifth domain of battle in 2010 merely formalized work that had been ongoing for a decade prior; and the shift to a wildly propagating version of Stuxnet likely followed after close access operations were no longer possible, with the flagrancy of the propagation likely being an error.

Zetter spends a significant amount of time unpacking the ways in which the United States government determines whether a vulnerability should be secretly retained for government use as part of a vulnerabilities equities process. Representatives from the Department of Homeland Security quoted in the book noted that they had never received information about a vulnerability from the National Security Agency and, moreover, that in cases where the Agency was already exploiting a reported vulnerability it was unlikely that disclosure would follow from entering the vulnerability into the equities process. As noted by any number of people in the course of the book, the failure by the United States (and other Western governments) to clearly explain their vulnerability disclosure processes, or the manner in which they would respond to a cyber attack, leaves unsettled both the norms of digital security and the policies concerning when (and how) a state will respond to such attacks. To date these issues remain as murky as when the book was published in 2014.

Countdown to Zero Day, in many respects, serves to collate a large volume of information that has otherwise existed in the public sphere. It draws on interviews, past technical and policy reports, and a vast quantity of news reporting. But more than just collating materials, it also explains what they mean, draws links between them that had not previously been made so clearly or straightforwardly, and lays out the broader implications of the United States’ and Israel’s actions. Further, the details of the book render (more) transparent how anti-virus companies and malware researchers conduct their work, as well as the threats to that work in an era when a piece of malware could be used by a criminal enterprise or by a major nation-state actor with a habit of proactively working to silence researchers. The book remains an important landmark in the history of security journalism, cybersecurity, and the politics of cybersecurity. I would heartily recommend it to layperson and expert alike.


2019.1.14

Between 2002 and 2009, the [Industrial Control System Cyber Emergency Response Team] conducted more than 100 site assessments across multiple industries–oil and natural gas, chemical, and water–and found more than 38,000 vulnerabilities. These included critical systems that were accessible over the internet, default vendor passwords that operators had never bothered to change or hard-coded passwords that couldn’t be changed, outdated software patches, and a lack of standard protections such as firewalls and intrusion-detection systems.

But despite the best efforts of the test-bed and site-assessment researchers, they were battling decades of industry inertia–vendors took months and years to patch vulnerabilities that government researchers found in their systems, and owners of crucial infrastructure were only willing to make cosmetic changes to their systems and networks, resisting more extensive ones.

Kim Zetter, Countdown to Zero Day

Cellebrite can unlock any iPhone (for some values of “any”)

An update by Ars Technica on Cellebrite’s ability to access the content on otherwise secured iOS devices:

Cellebrite is not revealing the nature of the Advanced Unlocking Services’ approach. However, it is likely software based, according to Dan Guido, CEO of the security firm Trail of Bits. Guido told Ars that he had heard Cellebrite’s attack method may be blocked by an upcoming iOS update, 11.3.

“That leads me to believe [Cellebrite] have a power/timing attack that lets them bypass arbitrary delays and avoid device lockouts,” Guido wrote in a message to Ars. “That method would rely on specific characteristics of the software, which explains how Apple could patch what appears to be a hardware issue.”

Regardless of the approach, Cellebrite’s method almost certainly is dependent on a brute-force attack to discover the PIN. And the easiest way to protect against that is to use a longer, alphanumeric password—something Apple has been attempting to encourage with TouchID and FaceID, since the biometric security methods reduce the number of times an iPhone owner has to enter a password.

This once again confirms the importance of establishing strong, long passwords for iOS devices. Sure, they’re less convenient, but they provide measurably better security.
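To put rough numbers behind that advice, here is a back-of-the-envelope sketch of the brute-force keyspaces involved. The guess rate is an assumption made purely for illustration; real rates depend on hardware, Secure Enclave throttling, and whatever bypass technique is actually in use.

```python
# Back-of-the-envelope comparison of brute-force keyspaces. The guess rate
# below is an illustrative assumption, not a measured Cellebrite figure.
GUESSES_PER_SECOND = 100

def seconds_to_exhaust(alphabet_size: int, length: int) -> float:
    """Worst-case time to try every possible passcode of this shape."""
    return alphabet_size ** length / GUESSES_PER_SECOND

pin = seconds_to_exhaust(10, 6)        # 6-digit PIN: 10^6 possibilities
password = seconds_to_exhaust(62, 10)  # 10-char alphanumeric: 62^10 possibilities

print(f"6-digit PIN:          ~{pin / 3600:.1f} hours to exhaust")
print(f"10-char alphanumeric: ~{password / (3600 * 24 * 365):.2e} years to exhaust")
```

Even at that modest assumed rate, a six-digit PIN falls in hours while a ten-character alphanumeric password pushes the worst case into hundreds of millions of years, which is the gap the quoted advice is pointing at.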