Categories
Links

Vibe-Coded Malware Isn’t a Game Changer (Yet)

Over the past week there’s been heightened concern about how LLMs can be used to facilitate cyber operations. Much of that concern is tightly linked to recentreports from Anthropic, which are facing growing criticism from the security community.

Anthropic claimed that a threat actor launched an AI-assisted operation which was up to 90% autonomous. But the LLM largely relied on pre-existing open source tools that operators already chain together, and the success rates appear low. Moreover, hallucinations meant that adversaries were often told that the LLM had done something, or had access to credentials, when it did not.

We should anticipate that LLMs will enable some adversaries to chain together code that could exploit vulnerabilities. But vibe‑coding an exploit chain is not the same as building something that can reliably compromise real systems. To date, experiments with vibe‑coded malware and autonomous agents suggest that generated outputs typically require skilled operators to debug, adapt, and operationalise them. Even then, the outputs of LLM‑assisted malware often fail outright when confronted with real‑world constraints and defences.

That’s partly because exploit development is a different skill set and capability than building “functional‑enough” software. Vibe coding for productivity apps might tolerate flaky edge cases and messy internals. Exploit chains, by contrast, often fail to exploit vulnerabilities unless they are properly tailored to a given target.

An AI system that can assemble a roughly working application from a series of prompts does not automatically inherit the ability to produce highly reliable, end‑to‑end exploit chains. Some capability will transfer, but we should be wary of assuming a neat, 100% carry‑over from vibe‑coded software to effective vibe‑coded malware.

Categories
Links

Even Minimal Data Poisoning Can Undermine AI Model Integrity

As reported by Benj Edwards at Ars Technica, researchers demonstrated that even minimal data poisoning can implant backdoors in large language models.

For the largest model tested (13 billion parameters trained on 260 billion tokens), just 250 malicious documents representing 0.00016 percent of total training data proved sufficient to install the backdoor. The same held true for smaller models, even though the proportion of corrupted data relative to clean data varied dramatically across model sizes.

The findings apply to straightforward attacks like generating gibberish or switching languages. Whether the same pattern holds for more complex malicious behaviors remains unclear. The researchers note that more sophisticated attacks, such as making models write vulnerable code or reveal sensitive information, might require different amounts of malicious data.

The same pattern appeared in smaller models as well:

Despite larger models processing over 20 times more total training data, all models learned the same backdoor behavior after encountering roughly the same small number of malicious examples.

The authors note important limitations: the tested models were all relatively small, the results depend on tainted data being present in the training set, and real-world mitigations like guardrails or corrective fine-tuning may blunt such effects.

Even so, the findings point to the ongoing immaturity of LLM cybersecurity practices and the difficulty of assuring trustworthiness in systems trained at scale. Safely deploying AI in high-risk contexts will require not just policy oversight, but rigorous testing, data provenance controls, and continuous monitoring of model behaviour.

Categories
Links Writing

Japan’s New Active Cyberdefence Law

Japan has passed legislation that will significantly reshape the range of cyber operations that its government agencies can undertake. As reported by The Record, the law will enable the following.

  1. Japan’s Self-Defence Forces will be able to provide material support to allies under the justification that failing to do so could endanger the whole of the country.
  2. Japanese LEAs can infiltrate and neutralize hostile servers before any malicious activity has taken place and to do so below the level of an armed attack against Japan.
  3. The Self-Defence Forces be authorized to undertake offensive cyber operations against particularly sophisticated incidents.
  4. The government will be empowered to analyze foreign internet traffic entering the country or just transiting through it. (The government has claimed it won’t collect or analyze the contents of this traffic.) Of note: the new law will not authorize the government to collect or analyze domestically generated internet traffic.
  5. Japan will establish an independent oversight panel that will give prior authorization to all acts of data collection and analysis, as well as for offensive operations intended to target attackers’ servers. This has some relationship to Ministerial oversight of the CSE in Canada, though perhaps (?) with a greater degree of control over the activities understand by Japanese agencies.

The broader result of this legislative update will be to further align the Japanese government, and its agencies, with its Five Eyes friends and allies.

It will be interesting to learn over time whether these activities are impaired by the historical stovepiping of Japan’s defence and SIGINT competencies. Historically the strong division between these organizations impeded cyber operations and was an issue that the USA (and NSA in particular) had sought to have remedied over a decade ago. If these issues persist then the new law may not be taken up as effectively as would otherwise be possible.

Categories
Links

Google to Provide Enhanced Security for Android

It’s positive to see Google providing enhanced security controls for its Android user base, including journalists, human rights defenders, politicians, and c-suite executives. These controls are designed to reduce some of the attack surface available to adversaries.

Some of the protections include:

  • The inability to connect to 2G networks, which lack encryption protections preventing over-the-air monitoring of voice and text-messaging communications
  • No automatic connections to insecure Wi-Fi networks, such as those using WEP or no encryption at all
  • The enabling of the Memory Tagging Extension, a relatively new form of memory management that’s designed to provide an extra layer of protection against use-after-free exploits and other memory-corruption attacks
  • Automatically locking when offline for extended periods
  • Automatically powering down a device when locked for prolonged periods to make user data unreadable without a fresh unlock
  • Intrusion logging that writes system events to a fortified region of the phone for use in detecting and diagnosing successful or attempted hacks
  • JavaScript protections that shut down Android’s JavaScript optimizer, a feature that can be abused in certain types of exploits

You can read more on Google’s blog post announcing the new controls.

Categories
Links Writing

Categorizing Contemporary Attacks on Strong Encryption

Matt Burgess at Wired has a good summary article on the current (and always ongoing) debate concerning the availability of strong encryption.

In short, he sees three ‘classes’ of argument which are aimed at preventing individuals from protecting their communications (and their personal information) with robust encryption.

  1. Governments or law enforcement agencies are asking for backdoors to be built into encrypted platforms to gain “lawful access” to content. This is best exemplified by recent efforts by the United Kingdom to prevent residents from using Apple’s Advanced Data Protection.
  2. An increase in proposals related to a technology known as “client-side scanning.” Perhaps the best known effort is an ongoing European proposal to monitor all users’ communications for child sexual abuse material, notwithstanding the broader implications of integrating a configurable detector (and censor) on all individuals’ devices.
  3. The threat of potential bans or blocks for encrypted services. We see this in Russia, concerning Signal and legal action against WhatsApp in India.

In this broader context it’s worth recognizing that alleged Chinese compromises of key American lawful interception systems led the US government to recommend that all Americans use strongly encrypted communications in light of network compromises. If strong encryption is banned then there is a risk that there will be no respite from such network intrusions while, also, likely creating an entirely new domain of cyber threats.

Categories
Writing

Details from the DNI’s Annual VEP Report

For a long time external observers wondered how many vulnerabilities were retained vs disclosed by FVEY SIGINT agencies. Following years of policy advocacy there is some small visibility into this by way of Section 6270 of Public Law 116-92. This law requires the U.S. Director of National Intelligence (DNI) to disclose certain annual data about the vulnerabilities disclosed and retained by US government agencies.

The Fiscal Year 2023 VEP Annual Report Unclassified Appendix reveals “the aggregate number of vulnerabilities disclosed to vendors or the public pursuant to the [VEP] was 39. Of those disclosed, 29 of them were initial submissions, and 10 of them were reconsiderations that originated in prior years.”1

There can be many reasons to reassess vulnerability equities. Some include:

  1. Utility of given vulnerabilities decrease either due to changes in the environment or research showing a vulnerability would not (or would no longer) have desired effect(s) or possess desired operational characteristics.
  2. Adversaries have identified the vulnerabilities themselves, or through 4th party collection, and disclosure is a defensive action to protect US or allied assets.
  3. Independent researchers / organizations are pursuing lines of research that would likely result in finding the vulnerabilities.
  4. By disclosing the vulnerabilities the U.S. agencies hope or expect adversaries to develop similar attacks on still-vulnerable systems, with the effect of masking future U.S. actions on similarly vulnerable systems.
  5. Organizations responsible for the affected software (e.g., open source projects) are now perceived as competent / resourced to remediate vulnerabilities.
  6. The effects of vulnerabilities are identified as having greater possible effects than initially perceived which rebalances disclosure equities.
  7. Orders from the President in securing certain systems result in a rebalancing of equities regarding holding the vulnerabilities in question.
  8. Newly discovered vulnerabilities are seen as more effective in mission tasks, thus deprecating the need for the vulnerabilities which were previously retained.
  9. Disclosure of vulnerabilities may enable adversaries to better target one another and thus enable new (deniable) 4th party collection opportunities.
  10. Vulnerabilities were in fact long used by adversaries (and not the U.S. / FVEY) and this disclosure burns some of their infrastructure or operational capacity.
  11. Vulnerabilities are associated with long-terminated programs and the release has no effect of current, recent, or deprecated activities.

This is just a very small subset of possible reasons to disclose previously-withheld vulnerabilities. While we don’t have a strong sense of how many vulnerabilities are retained each year, we do at least have a sense that rebalancing of equities year-over-year(s) is occurring. Though without a sense of scale the disclosed information is of middling value, at best.

Categories
Links Writing

VW Leaks Geolocation Data

Contemporary devices collect vast sums of personal and sensitive information, and usually for legitimate purposes. However this means that there are an ever growing number of market participants that need to carefully safeguard the data they are collecting, using, retaining, or disclosing.

One of Volkswagen’s software development subsidiaries, Cariad, reportedly failed to adequately secure software installed in VW, Audi, Seat, and Skoda vehicles:

The sensitive information was left exposed on an unprotected and misconfigured Amazon cloud storage system for months – the problem has now been patched.

In some 466,000 of the 800,000 vehicles involved, location data was extremely precise so that anyone could track the driver’s daily routine. Spiegel reported that the list of owners includes German politicians, entrepreneurs, the entire EV fleet driven by Hamburg police, and even suspected intelligence service employees – so while nothing happened, it seriously could have been a lot worse.

This is a case where no clear harm has been detected. But it speaks more broadly of the continuing need for organizations to know what sensitive information they are collecting, the purposes of the collection, and need to establish adequate controls to protect collected and retained data.

Categories
Writing

ASD is Clearly Preparing for a Quantum Future

National cryptological organizations, such as the NSA, CSE, GCHQ, ASD, and GCSB, routinely assess the strength of different modes of encryption and offer recommendations on what organizations should be using. They make their assessments based on the contemporary strength of encryption algorithms as well as based on the planned or expected vulnerabilities of those algorithms in the face of new or forthcoming technologies.

Quantum computing has the potential to undermine the security that is currently provided by a range of approved cryptographic algorithms.1 On December 12, 2024, Australia’s ASD published a series of recommendations for what algorithms should be deprecated by 2030. What is notable about their decision is that they are proposing deprecations before other leading agencies, including the USA’s National Institute of Standards and Technology and Canada’s CSE, though with an acknowledgement that the deprecation is focused on High Assurance Cryptographic Equipment (HACE).

To-be-deprecated algorithms include:

  • Elliptic Curve Diffie-Hellman (EDHC)
  • Elliptic Curve Digital Signature Algorithm (ECDSA)
  • Module-Lattice-Based Digital Signature Algorithm 65 (ML-DSA-65)
  • Module-Lattice-Based Key Encapsulation Mechanism 768 (ML-KEM-768)
  • Rivest-Shamir-Adleman (RSA)
  • Secure Hashing Mechanisms 224 and 256 (SHA-224 and RSA-256)
  • AES-128 and AES-192

Given that the English-speaking Five Eyes agencies regularly walk in near-lockstep we might see updated guidance from the different agencies in the coming weeks and months. Alternately, policy processes may prevent countries from updating their standards (or publicly announcing changes), leaving ASD as a path leader in cybersecurity while other agencies wait until policy mechanisms eventually lead to these algorithms being deprecated by 2035.

Looking further out, and aside from the national security space, the concerns around cryptographic algorithms speak to challenges that embedded systems will having in the coming decade where manufacturers fail to to get ahead of things and integrate quantum-resistance algorithms in the products they sell. Moreover, for embedded systems (e.g., Operational Technology, Internet of Things, and related systems) where it may be challenging or impossible to update cryptographic algorithms there may be a whole world of currently-secure solutions that will become woefully insecure in the not-so-distant future. That’s a future that we need to start planning for, today, so that at least a decade’s worth of work can hopefully head off the worst of the harms associated with deprecated embedded systems’ (in)security.


  1. What continues to be my favourite, and most accessible, explanation of the risks posed by quantum computing is written by Bruce Schneier. ↩︎
Categories
Links Writing

American Telecommunication Companies’ Cybersecurity Deficiencies Increasingly Apparent

Five Eyes countries have regularly and routinely sought, and gained, access to foreign telecommunications infrastructures to carry out their operations. The same is true of other well resourced countries, including China.

Salt Typhoon’s penetration of American telecommunications and email platforms is slowly coming into relief. The New York Times has an article that summarizes what is being publicly disclosed at this point in time:

  • The full list of phone numbers that the Department of Justice had under surveillance in lawful interception systems has been exposed, with the effect of likely undermining American counter-intelligence operations aimed at Chinese operatives
  • Phone calls, unencrypted SMS messages, and email providers have been compromised
  • The FBI has heightened concerns that informants may have been exposed
  • Apple’s services, as well as end to end encrypted systems, were not penetrated

American telecommunications networks were penetrated, in part, due to companies relying on decades old systems and equipment that do not meet modern security requirements. Fixing these deficiencies may require rip-and-replacing some old parts of the network with the effect of creating “painful network outages for consumers.” Some of the targeting of American telecommunications networks is driven by an understanding that American national security defenders have some restrictions on how they can operate on American-based systems.

The weaknesses of telecommunications networks and their associated systems are generally well known. And mobile systems are particularly vulnerable to exploitation as a result of archaic standards and an unwillingness by some carriers to activate the security-centric aspects of 4G and 5G standards.

Some of the Five Eyes, led by Canada, have been developing and deploying defensive sensor networks that are meant to shore up some defences of government and select non-government organizations.1 But these edge, network, and cloud based sensors can only do so much: telecommunications providers, themselves, need to prioritize ensuring their core networks are protected against the classes of adversaries trying to penetrate them.2

At the same time, it is worth recognizing that end to end communications continued to be protected even in the face of Salt Typhoon’s actions. This speaks the urgent need to ensure that these forms of communications security continue to be available to all users. We often read that law enforcement needs select access to such communications and that they can be trusted to not abuse such exceptional access.

Setting aside the vast range of legal, normative, or geopolitical implications of weakening end to end encryption, cyber operations like the one perpetrated by Salt Typhoon speak to governments’ collective inabilities to protect their lawful access systems. There’s no reason to believe they’d be any more able to protect exceptional access measures that weakened, or otherwise gained access to, select content of end to end encrypted communications.


  1. I have discussed these sensors elsewhere, including in “Unpacking NSICOP’s Special Report on the Government of Canada’s Framework and Activities to Defend its Systems and Networks from Cyber Attack”. Historical information about these sensors, which were previously referred to under the covernames of CASCADE, EONBLUE, and PHOTONICPRISM, is available at the SIGINT summaries. ↩︎
  2. We are seeing some governments introducing, and sometimes passing, laws that would foster more robust security requirements. In Canada, Bill C-26 is generally meant to do this though the legislation as introduced raised some serious concerns. ↩︎
Categories
Links Writing

Emerging Trends from Canadian Privacy Regulators and Cybersecurity Legislation?

Earlier this evening, the Office of the Privacy Commissioner of Canada (OPC) appeared before the Standing Senate Committee on National Security, Defence and Veterans Affairs on the topic of Bill C-26: An Act respecting cyber security, amending the Telecommunications Act and making consequential amendments to other Acts.

While at Committee, Commissioner Dufresne recognized the value of making explicit the OPC’s oversight role concerning the legislation. He, also, reaffirmed the importance of requiring any collection, use, or disclosure of personal information to be both necessary and proportionate. And should the Standing Committee decline to adopt this amendment they were advised to, at a minimum, include a requirement that data only be retained for as long as necessary. Government institutions should also be required to undertake privacy impact assessments and consult with the OPC.

Finally, in cases of cyber incidents that may result in a material breach, his office should be notified; this could entail the OPC being notified by the Communications Security Establishment based on a real risk of significant harm standard. Information sharing agreements should also be put in place that provide minimum privacy safeguards while also strengthening governance and accountability processes.

The safeguards the OPC are calling for are important and, also, overlap with many of the Information and Privacy Commissioner of Ontario’s (written submission, Commissioner Kosseim’s oral remarks) concerning the provincial government’s Bill 194, Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024.

Should other Canadian jurisdictions propose their own cybersecurity legislation to protect critical infrastructure and regulated bodies it will be interesting to monitor for the consistency in the amendments called for by Canada’s privacy regulators.