Categories
Links

Podcast Recommendation: A Snapshot of the Contemporary Ransomware Ecosystem

For those looking to catch up on the current ransomware ecosystem, this podcast discussion with Greg Linares, Principal Threat Intelligence Analyst at Huntress, is worth a listen.

Linares shares insights into the modern ransomware landscape, including how crews increasingly operate like businesses and why groups such as Akira, Medusa, RansomHub, and Qilin continue to cause significant damage.

The discussion also touches on overlap between ransomware actors and nation-state activity, what “time to ransom” means operationally for defenders, and why techniques such as ClickFix and credential theft continue to succeed at scale.

It further examines the surge in abusing remote monitoring and management (RMM) tools, how “living-off-the-land” techniques allow operations to unfold without traditional malware, and the practical defenses smaller organizations can realistically prioritize.
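The RMM-abuse trend lends itself to a simple defensive check: inventory running processes and flag remote-access tooling that the organization never approved. The sketch below is illustrative only; the watchlist entries are examples of commonly abused RMM agents, and `find_unexpected_rmm` is a hypothetical helper, not a vetted detection rule.

```python
# Minimal sketch: flag running processes whose names match RMM tools
# frequently abused in ransomware operations. The watchlist is
# illustrative, not an authoritative product list.
RMM_WATCHLIST = {
    "anydesk.exe",
    "screenconnect.clientservice.exe",
    "atera_agent.exe",
    "teamviewer.exe",
}

def find_unexpected_rmm(process_names, approved=frozenset()):
    """Return watchlisted RMM processes that are running but not approved."""
    seen = {name.lower() for name in process_names}
    allowed = {name.lower() for name in approved}
    return sorted((seen & RMM_WATCHLIST) - allowed)
```

In practice the process list would come from an EDR agent or a library such as `psutil`, and any hit on an unapproved RMM tool would be worth an alert, since legitimate-looking remote-access software is exactly what lets these operations unfold without traditional malware.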

You can listen online, on Apple Podcasts, or via other podcast directories and applications.


AI-Assisted Vulnerability Hunting is Here

Aisle’s recent blog, “What AI Security Research Looks Like When It Works,” does a nice job of explaining the utility of LLM-enabled security research. Properly scoped and resourced, researchers can identify serious vulnerabilities that make communities much safer after patches are applied.

However, there is a distinction between high-quality and slop-quality reports. Some groups, such as those operating open source projects, are seeing increasing numbers of low-quality reports that are overwhelming their ability to triage incoming submissions.

Aisle highlights several emergent challenges associated with LLM-enabled security research:

  1. Whether, if vulnerability reporting increases while maintainer numbers remain flat, the result will be maintainer burnout that impairs both security- and feature-related development.
  2. Whether the 90-day responsible disclosure window remains appropriate, or needs to be tightened, in an era of LLM-assisted discovery, and how vulnerability reports can or should be deduplicated.
  3. Whether the improved ability to identify and patch vulnerabilities will ultimately favour defenders or attackers.
  4. How the community will respond to a substantial shift in the way vulnerabilities are discovered.

There are a few other considerations not taken up in Aisle’s blog:

  1. To what extent will the increased ability of attackers to find vulnerabilities shift who is identified as an ‘advanced’ threat actor? While persistence is currently still linked to resourcing to maintain operations, if serious vulnerabilities (and their chains) become more widely discoverable, what effect will this have on a broader subset of actors being able to conduct cyber operations?
  2. In what ways will the organizations producing foundational models need to build in user identity or verification functionalities or access controls to potentially restrict who can (and cannot) use the models to undertake cybersecurity research?
  3. What might occur if adversaries attempt to poison training data or model weights in order to impede specific forms of LLM-enabled cybersecurity research, either now or in the future?

Vibe-Coded Malware Isn’t a Game Changer (Yet)

Over the past week there’s been heightened concern about how LLMs can be used to facilitate cyber operations. Much of that concern is tightly linked to recent reports from Anthropic, which are facing growing criticism from the security community.

Anthropic claimed that a threat actor launched an AI-assisted operation which was up to 90% autonomous. But the LLM largely relied on pre-existing open source tools that operators already chain together, and the success rates appear low. Moreover, hallucinations meant that adversaries were often told that the LLM had done something, or had access to credentials, when it did not.

We should anticipate that LLMs will enable some adversaries to chain together code that could exploit vulnerabilities. But vibe‑coding an exploit chain is not the same as building something that can reliably compromise real systems. To date, experiments with vibe‑coded malware and autonomous agents suggest that generated outputs typically require skilled operators to debug, adapt, and operationalise them. Even then, the outputs of LLM‑assisted malware often fail outright when confronted with real‑world constraints and defences.

That’s partly because exploit development is a different skill set and capability than building “functional‑enough” software. Vibe coding for productivity apps might tolerate flaky edge cases and messy internals. Exploit chains, by contrast, often fail to exploit vulnerabilities unless they are properly tailored to a given target.

An AI system that can assemble a roughly working application from a series of prompts does not automatically inherit the ability to produce highly reliable, end‑to‑end exploit chains. Some capability will transfer, but we should be wary of assuming a neat, 100% carry‑over from vibe‑coded software to effective vibe‑coded malware.


Even Minimal Data Poisoning Can Undermine AI Model Integrity

As reported by Benj Edwards at Ars Technica, researchers demonstrated that even minimal data poisoning can implant backdoors in large language models.

For the largest model tested (13 billion parameters trained on 260 billion tokens), just 250 malicious documents representing 0.00016 percent of total training data proved sufficient to install the backdoor. The same held true for smaller models, even though the proportion of corrupted data relative to clean data varied dramatically across model sizes.

The findings apply to straightforward attacks like generating gibberish or switching languages. Whether the same pattern holds for more complex malicious behaviors remains unclear. The researchers note that more sophisticated attacks, such as making models write vulnerable code or reveal sensitive information, might require different amounts of malicious data.

The same pattern appeared in smaller models as well:

Despite larger models processing over 20 times more total training data, all models learned the same backdoor behavior after encountering roughly the same small number of malicious examples.
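The scaling claim above can be illustrated with back-of-the-envelope arithmetic. Note the `TOKENS_PER_DOC` value is an assumption chosen so the 13-billion-parameter case reproduces the reported 0.00016 percent; the researchers report percentages, not a per-document token count.

```python
# Back-of-the-envelope: a fixed count of 250 poisoned documents as a
# fraction of total training tokens, for models trained at different scale.
# TOKENS_PER_DOC is an assumed average picked to match the reported
# 0.00016 percent for the 260B-token case; it is not from the paper.
POISON_DOCS = 250
TOKENS_PER_DOC = 1_664

def poison_percentage(total_training_tokens):
    """Poisoned tokens as a percentage of the full training corpus."""
    return POISON_DOCS * TOKENS_PER_DOC / total_training_tokens * 100

# 13B-parameter model trained on 260B tokens:
print(f"{poison_percentage(260e9):.5f}%")  # 0.00016%
# A model trained on 20x fewer tokens sees a 20x larger *fraction*
# from the same absolute number of poisoned documents:
print(f"{poison_percentage(13e9):.4f}%")   # 0.0032%
```

The point the arithmetic makes concrete: because the attacker controls an absolute number of documents rather than a proportion, scaling up the clean corpus does not, on this evidence, dilute the backdoor away.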

The authors note important limitations: the tested models were all relatively small, the results depend on tainted data being present in the training set, and real-world mitigations like guardrails or corrective fine-tuning may blunt such effects.

Even so, the findings point to the ongoing immaturity of LLM cybersecurity practices and the difficulty of assuring trustworthiness in systems trained at scale. Safely deploying AI in high-risk contexts will require not just policy oversight, but rigorous testing, data provenance controls, and continuous monitoring of model behaviour.


Japan’s New Active Cyberdefence Law

Japan has passed legislation that will significantly reshape the range of cyber operations that its government agencies can undertake. As reported by The Record, the law will enable the following:

  1. Japan’s Self-Defence Forces will be able to provide material support to allies under the justification that failing to do so could endanger the whole of the country.
  2. Japanese LEAs can infiltrate and neutralize hostile servers before any malicious activity has taken place, so long as doing so remains below the level of an armed attack against Japan.
  3. The Self-Defence Forces will be authorized to undertake offensive cyber operations in response to particularly sophisticated incidents.
  4. The government will be empowered to analyze foreign internet traffic entering the country or just transiting through it. (The government has claimed it won’t collect or analyze the contents of this traffic.) Of note: the new law will not authorize the government to collect or analyze domestically generated internet traffic.
  5. Japan will establish an independent oversight panel that will give prior authorization to all acts of data collection and analysis, as well as for offensive operations intended to target attackers’ servers. This has some relationship to Ministerial oversight of the CSE in Canada, though perhaps (?) with a greater degree of control over the activities undertaken by Japanese agencies.

The broader result of this legislative update will be to further align the Japanese government, and its agencies, with its Five Eyes friends and allies.

It will be interesting to learn over time whether these activities are impaired by the historical stovepiping of Japan’s defence and SIGINT competencies. Historically the strong division between these organizations impeded cyber operations and was an issue that the USA (and NSA in particular) had sought to have remedied over a decade ago. If these issues persist then the new law may not be taken up as effectively as would otherwise be possible.


Google to Provide Enhanced Security for Android

It’s positive to see Google providing enhanced security controls for its Android user base, including journalists, human rights defenders, politicians, and C-suite executives. These controls are designed to reduce some of the attack surface available to adversaries.

Some of the protections include:

  • The inability to connect to 2G networks, which lack the encryption protections that prevent over-the-air monitoring of voice and text-message communications
  • No automatic connections to insecure Wi-Fi networks, such as those using WEP or no encryption at all
  • The enabling of the Memory Tagging Extension, a relatively new form of memory management that’s designed to provide an extra layer of protection against use-after-free exploits and other memory-corruption attacks
  • Automatically locking when offline for extended periods
  • Automatically powering down a device when locked for prolonged periods to make user data unreadable without a fresh unlock
  • Intrusion logging that writes system events to a fortified region of the phone for use in detecting and diagnosing successful or attempted hacks
  • JavaScript protections that shut down Android’s JavaScript optimizer, a feature that can be abused in certain types of exploits

You can read more on Google’s blog post announcing the new controls.


Categorizing Contemporary Attacks on Strong Encryption

Matt Burgess at Wired has a good summary article on the current (and always ongoing) debate concerning the availability of strong encryption.

In short, he sees three ‘classes’ of argument which are aimed at preventing individuals from protecting their communications (and their personal information) with robust encryption.

  1. Governments or law enforcement agencies are asking for backdoors to be built into encrypted platforms to gain “lawful access” to content. This is best exemplified by recent efforts by the United Kingdom to prevent residents from using Apple’s Advanced Data Protection.
  2. An increase in proposals related to a technology known as “client-side scanning.” Perhaps the best known effort is an ongoing European proposal to monitor all users’ communications for child sexual abuse material, notwithstanding the broader implications of integrating a configurable detector (and censor) on all individuals’ devices.
  3. The threat of potential bans or blocks for encrypted services. We see this in Russia’s restrictions on Signal and in legal action against WhatsApp in India.

In this broader context it’s worth recognizing that alleged Chinese compromises of key American lawful interception systems led the US government to recommend that all Americans use strongly encrypted communications in light of network compromises. If strong encryption is banned, there is a risk both that there will be no respite from such network intrusions and that an entirely new domain of cyber threats will be created.


Details from the DNI’s Annual VEP Report

For a long time external observers wondered how many vulnerabilities were retained versus disclosed by FVEY SIGINT agencies. Following years of policy advocacy, there is now some small visibility into this by way of Section 6270 of Public Law 116-92, which requires the U.S. Director of National Intelligence (DNI) to disclose certain annual data about the vulnerabilities disclosed and retained by US government agencies.

The Fiscal Year 2023 VEP Annual Report Unclassified Appendix reveals “the aggregate number of vulnerabilities disclosed to vendors or the public pursuant to the [VEP] was 39. Of those disclosed, 29 of them were initial submissions, and 10 of them were reconsiderations that originated in prior years.”1

There can be many reasons to reassess vulnerability equities. Some include:

  1. The utility of given vulnerabilities decreases, either due to changes in the environment or due to research showing a vulnerability would not (or would no longer) have desired effect(s) or possess desired operational characteristics.
  2. Adversaries have identified the vulnerabilities themselves, or through 4th party collection, and disclosure is a defensive action to protect US or allied assets.
  3. Independent researchers / organizations are pursuing lines of research that would likely result in finding the vulnerabilities.
  4. By disclosing the vulnerabilities the U.S. agencies hope or expect adversaries to develop similar attacks on still-vulnerable systems, with the effect of masking future U.S. actions on similarly vulnerable systems.
  5. Organizations responsible for the affected software (e.g., open source projects) are now perceived as competent / resourced to remediate vulnerabilities.
  6. Vulnerabilities are identified as having greater possible effects than initially perceived, which rebalances disclosure equities.
  7. Presidential orders to secure certain systems result in a rebalancing of equities around holding the vulnerabilities in question.
  8. Newly discovered vulnerabilities are seen as more effective in mission tasks, thus deprecating the need for the vulnerabilities which were previously retained.
  9. Disclosure of vulnerabilities may enable adversaries to better target one another and thus enable new (deniable) 4th party collection opportunities.
  10. Vulnerabilities were in fact long used by adversaries (and not the U.S. / FVEY) and this disclosure burns some of their infrastructure or operational capacity.
  11. Vulnerabilities are associated with long-terminated programs and their release has no effect on current, recent, or deprecated activities.

This is just a very small subset of possible reasons to disclose previously-withheld vulnerabilities. While we don’t have a strong sense of how many vulnerabilities are retained each year, we do at least have a sense that a rebalancing of equities is occurring year over year. Though, without a sense of scale, the disclosed information is of middling value at best.


VW Leaks Geolocation Data

Contemporary devices collect vast amounts of personal and sensitive information, usually for legitimate purposes. However, this means that an ever-growing number of market participants need to carefully safeguard the data they collect, use, retain, or disclose.

One of Volkswagen’s software development subsidiaries, Cariad, reportedly failed to adequately secure software installed in VW, Audi, Seat, and Skoda vehicles:

The sensitive information was left exposed on an unprotected and misconfigured Amazon cloud storage system for months – the problem has now been patched.

In some 466,000 of the 800,000 vehicles involved, location data was extremely precise so that anyone could track the driver’s daily routine. Spiegel reported that the list of owners includes German politicians, entrepreneurs, the entire EV fleet driven by Hamburg police, and even suspected intelligence service employees – so while nothing happened, it seriously could have been a lot worse.

This is a case where no clear harm has been detected. But it speaks more broadly to the continuing need for organizations to know what sensitive information they are collecting and for what purposes, and to establish adequate controls to protect collected and retained data.


ASD is Clearly Preparing for a Quantum Future

National cryptological organizations, such as the NSA, CSE, GCHQ, ASD, and GCSB, routinely assess the strength of different modes of encryption and offer recommendations on what organizations should be using. They make their assessments based on the contemporary strength of encryption algorithms as well as on the expected vulnerability of those algorithms to new or forthcoming technologies.

Quantum computing has the potential to undermine the security that is currently provided by a range of approved cryptographic algorithms.1 On December 12, 2024, Australia’s ASD published a series of recommendations for what algorithms should be deprecated by 2030. What is notable about their decision is that they are proposing deprecations before other leading agencies, including the USA’s National Institute of Standards and Technology and Canada’s CSE, though with an acknowledgement that the deprecation is focused on High Assurance Cryptographic Equipment (HACE).

To-be-deprecated algorithms include:

  • Elliptic Curve Diffie-Hellman (ECDH)
  • Elliptic Curve Digital Signature Algorithm (ECDSA)
  • Module-Lattice-Based Digital Signature Algorithm 65 (ML-DSA-65)
  • Module-Lattice-Based Key Encapsulation Mechanism 768 (ML-KEM-768)
  • Rivest-Shamir-Adleman (RSA)
  • Secure Hash Algorithms 224 and 256 (SHA-224 and SHA-256)
  • AES-128 and AES-192
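For organizations with long planning horizons, the list above can feed a simple inventory audit that flags systems relying on to-be-deprecated algorithms. The sketch below is a minimal illustration; the inventory contents and the `flag_deprecated` helper are hypothetical, not part of ASD’s guidance.

```python
# Algorithms ASD plans to deprecate for HACE by 2030, per the list above.
ASD_DEPRECATED_2030 = {
    "ECDH", "ECDSA", "ML-DSA-65", "ML-KEM-768",
    "RSA", "SHA-224", "SHA-256", "AES-128", "AES-192",
}

def flag_deprecated(inventory):
    """Return (system, algorithm) pairs that will need migration by 2030."""
    return [(system, alg)
            for system, algs in inventory.items()
            for alg in algs
            if alg.upper() in ASD_DEPRECATED_2030]

# Hypothetical inventory of deployed systems and the algorithms they use:
inventory = {
    "vpn-gateway": ["AES-256", "ML-KEM-768"],
    "code-signing": ["ECDSA", "SHA-256"],
}
print(flag_deprecated(inventory))
# [('vpn-gateway', 'ML-KEM-768'), ('code-signing', 'ECDSA'), ('code-signing', 'SHA-256')]
```

Even a crude audit like this makes the scale of a migration visible early, which matters most for the embedded systems discussed below, where swapping algorithms after deployment may be difficult or impossible.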

Given that the English-speaking Five Eyes agencies regularly walk in near-lockstep we might see updated guidance from the different agencies in the coming weeks and months. Alternately, policy processes may prevent countries from updating their standards (or publicly announcing changes), leaving ASD as a path leader in cybersecurity while other agencies wait until policy mechanisms eventually lead to these algorithms being deprecated by 2035.

Looking further out, and aside from the national security space, the concerns around cryptographic algorithms speak to the challenges embedded systems will face in the coming decade wherever manufacturers fail to get ahead of things and integrate quantum-resistant algorithms in the products they sell. Moreover, for embedded systems (e.g., Operational Technology, Internet of Things, and related systems) where it may be challenging or impossible to update cryptographic algorithms, there may be a whole world of currently-secure solutions that become woefully insecure in the not-so-distant future. That’s a future we need to start planning for today, so that at least a decade’s worth of work can hopefully head off the worst of the harms associated with deprecated embedded systems’ (in)security.


  1. What continues to be my favourite, and most accessible, explanation of the risks posed by quantum computing is written by Bruce Schneier. ↩︎