
Vibe-Coded Malware Isn’t a Game Changer (Yet)

Over the past week there’s been heightened concern about how LLMs can be used to facilitate cyber operations. Much of that concern is tightly linked to recent reports from Anthropic, which are facing growing criticism from the security community.

Anthropic claimed that a threat actor launched an AI-assisted operation which was up to 90% autonomous. But the LLM largely relied on pre-existing open source tools that operators already chain together, and the success rates appear low. Moreover, hallucinations meant that adversaries were often told that the LLM had done something, or had access to credentials, when it did not.

We should anticipate that LLMs will enable some adversaries to chain together code that could exploit vulnerabilities. But vibe‑coding an exploit chain is not the same as building something that can reliably compromise real systems. To date, experiments with vibe‑coded malware and autonomous agents suggest that generated outputs typically require skilled operators to debug, adapt, and operationalise them. Even then, the outputs of LLM‑assisted malware often fail outright when confronted with real‑world constraints and defences.

That’s partly because exploit development requires a different skill set and capability from building “functional‑enough” software. Vibe coding for productivity apps might tolerate flaky edge cases and messy internals. Exploit chains, by contrast, often fail to exploit vulnerabilities unless they are properly tailored to a given target.

An AI system that can assemble a roughly working application from a series of prompts does not automatically inherit the ability to produce highly reliable, end‑to‑end exploit chains. Some capability will transfer, but we should be wary of assuming a neat, 100% carry‑over from vibe‑coded software to effective vibe‑coded malware.


Even Minimal Data Poisoning Can Undermine AI Model Integrity

As reported by Benj Edwards at Ars Technica, researchers demonstrated that even minimal data poisoning can implant backdoors in large language models.

For the largest model tested (13 billion parameters trained on 260 billion tokens), just 250 malicious documents representing 0.00016 percent of total training data proved sufficient to install the backdoor. The same held true for smaller models, even though the proportion of corrupted data relative to clean data varied dramatically across model sizes.

The findings apply to straightforward attacks like generating gibberish or switching languages. Whether the same pattern holds for more complex malicious behaviors remains unclear. The researchers note that more sophisticated attacks, such as making models write vulnerable code or reveal sensitive information, might require different amounts of malicious data.

The same pattern appeared in smaller models as well:

Despite larger models processing over 20 times more total training data, all models learned the same backdoor behavior after encountering roughly the same small number of malicious examples.
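
A quick back-of-the-envelope calculation makes the scale concrete. One assumption here is ours, not necessarily the paper’s: that the 0.00016 percent figure is measured in tokens. The smaller corpus sizes in the loop are purely illustrative.

```python
# Back-of-the-envelope check of the reported poisoning figures.
# Assumption (ours): the 0.00016 percent figure is token-based.

total_tokens = 260e9             # training tokens for the 13B-parameter model
poison_docs = 250                # malicious documents reported as sufficient
poison_fraction = 0.00016 / 100  # 0.00016 percent, expressed as a fraction

poison_tokens = total_tokens * poison_fraction
print(f"Implied poisoned tokens: {poison_tokens:,.0f}")                    # ~416,000
print(f"Implied tokens per document: {poison_tokens / poison_docs:,.0f}")  # ~1,664

# The key scaling point: if the poison *count* stays fixed at 250 documents,
# the poison *share* shrinks as the clean corpus grows (sizes illustrative).
for corpus_tokens in (5.2e9, 26e9, 130e9, 260e9):
    print(f"{corpus_tokens:.1e} tokens -> poison share {poison_tokens / corpus_tokens:.2e}")
```

This matches the quoted finding: holding the number of malicious documents fixed, the proportion of corrupted data falls by orders of magnitude as the corpus grows, yet the backdoor still takes hold.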

The authors note important limitations: the tested models were all relatively small, the results depend on tainted data being present in the training set, and real-world mitigations like guardrails or corrective fine-tuning may blunt such effects.

Even so, the findings point to the ongoing immaturity of LLM cybersecurity practices and the difficulty of assuring trustworthiness in systems trained at scale. Safely deploying AI in high-risk contexts will require not just policy oversight, but rigorous testing, data provenance controls, and continuous monitoring of model behaviour.


LSE Study Exposes AI Bias in Social Care

A new study from the London School of Economics highlights how AI systems can reinforce existing inequalities when used for high-risk activities like social care.

Writing in The Guardian, Jessica Murray describes how Google’s Gemma model summarized identical case notes differently depending on gender.

An 84-year-old man, “Mr Smith,” was described as having a “complex medical history, no care package and poor mobility,” while “Mrs Smith” was portrayed as “[d]espite her limitations, she is independent and able to maintain her personal care.” In another example, Mr Smith was noted as “unable to access the community,” but Mrs Smith as “able to manage her daily activities.”

These subtle but significant differences risk making women’s needs appear less urgent, and could influence the care and resources provided. By contrast, Meta’s Llama 3 did not use different language based on gender, underscoring both that bias varies across models and that bias should be measured in any LLM adopted for public service delivery.
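
One way to operationalise that measurement is a counterfactual probe: run the model on a case note and on an otherwise-identical, gender-swapped version, then compare the two outputs. Below is a minimal sketch; `summarize` is a stand-in for whatever model is under evaluation, and the one-directional swap table is deliberately simplified (a production harness would need bidirectional, context-aware swaps and a large set of case notes).

```python
import re
from typing import Callable

# One-directional (male -> female) term swaps; deliberately simplified.
SWAPS = {"mr": "mrs", "he": "she", "him": "her", "his": "her", "man": "woman"}

def swap_gender(text: str) -> str:
    """Return the case note with gendered terms swapped, preserving capitalisation."""
    pattern = re.compile(r"\b(" + "|".join(SWAPS) + r")\b", re.IGNORECASE)
    def repl(match: re.Match) -> str:
        word = match.group(0)
        swapped = SWAPS[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped
    return pattern.sub(repl, text)

def bias_probe(case_note: str, summarize: Callable[[str], str]) -> tuple[str, str]:
    """Summarise a case note and its gender-swapped twin for side-by-side comparison."""
    return summarize(case_note), summarize(swap_gender(case_note))

note = ("Mr Smith is 84 years old, has a complex medical history, "
        "no care package and poor mobility. He lives alone.")
print(swap_gender(note))
# -> Mrs Smith is 84 years old, has a complex medical history,
#    no care package and poor mobility. She lives alone.
```

Systematic differences between the paired summaries, measured across many notes, are the signal of interest.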

These findings reinforce why AI systems must be valid and reliable, safe, transparent, accountable, privacy-protective, and human-rights affirming. This is especially the case in high-risk settings where AI systems affect decisions linked with accessing essential public services.


Unpacking the Global Pivot from AI Safety

The global pivot away from AI safety is now driving a lot of international AI policy. This shift is often attributed to the current U.S. administration and is reshaping how liberal democracies approach AI governance.

In a recent article on Lawfare, Jakub Kraus argues there are deeper reasons behind this shift. Specifically, countries such as France had already begun reorienting toward innovation-friendly frameworks before the activities of the current American administration. The rapid emergence of ChatGPT also sparked a fear of missing out and a surge in AI optimism, while governments confronted the perceived economic and military opportunities associated with AI technologies.

Kraus concludes by arguing that there may be benefits to emphasizing opportunity over safety, while also recognizing the risks of failing to build effective international or domestic governance institutions.

However, if AI systems are not designed to be safe, transparent, accountable, privacy-protective, and human-rights affirming, then people may come to distrust these systems based on the actual and potential harms of their being developed and deployed without sufficient regulatory safeguards. The result could be a range of socially destructive harms and a long-term hesitancy to take advantage of the potential benefits associated with emerging AI technologies.


Learning from Service Innovation in the Global South

Western policy makers, understandably, often focus on how emerging technologies can benefit their own administrative and governance processes. Looking beyond the Global North to understand how other countries are experimenting with administrative technologies, such as those with embedded AI capacities, can productively reveal the benefits and challenges of applying new technologies at scale.

Rest of World continues to be a superb resource for getting outside of the usual discussions and news cycles, with its mission of capturing people’s experiences of technology outside of the Western world.

Their recent article, “Brazil’s AI-powered social security app is wrongly rejecting claims,” on the use of AI technologies in Latin American countries reveals the profound potential that automation has for processing social benefits claims, as well as how automated systems can struggle with complex claims and further disadvantage the least privileged in society. In focusing on Brazil, we learn how the government is turning to automated systems to expedite access to services; while in aggregate these automated systems may be helpful, there are still complex cases where automation impairs access to (now largely automated) government services and benefits.

The article also mentions how Argentina is using generative AI technologies to help draft court opinions and Costa Rica is using AI systems to optimize tax filing and detect fraudulent behaviours. It is valuable for Western policymakers to see how smaller, more nimble, or more resource-constrained jurisdictions are integrating automation into service delivery, to learn from their positive experiences, and to improve upon (or avoid) innovations that have led to inadequate service delivery.

Governments are very different from companies. They provide service and assistance to highly diverse populations and, as such, the ‘edge cases’ that government administrators must handle require a degree of attention and care that often exceeds the obligations corporations have, or adopt, towards their customer base. We can’t ask, or expect, government administrators to behave like companies, because the two have fundamentally different obligations and expectations.

It behooves all who are considering the automation of public service delivery to consider how this goal can be accomplished in a trustworthy and responsible manner, where automated services work properly and are fit for purpose, and are safe, privacy-protective, transparent, accountable, and human-rights affirming. Doing anything less risks entrenching or further systematizing existing inequities that already harm or punish the least privileged in our societies.


Research Security Requirements and Ontario Colleges and Universities

There’s a lot happening, legislatively, in Ontario. One item worth highlighting concerns the requirement for Ontario colleges and universities to develop research security plans.

The federal government has been warning that Canadian academic research is at risk of exfiltration or theft by foreign actors, including by foreign-influenced professors or students who work in Canadian research environments, or by way of electronic and trade-based espionage. In response, the federal government has established a series of guidance documents that Canadian researchers and universities are expected to adhere to when seeking certain kinds of federal funding.

The Ontario government introduced Bill 33, the Supporting Children and Students Act, 2025, on May 29, 2025. Notably, Schedule 3 introduces research security plan requirements for every Ontario college of applied arts and technology and every publicly-assisted university.

The relevant text from the legislation states as follows:

Research security plan

Application

20.1 (1) This section applies to every college of applied arts and technology and to every publicly-assisted university.

Development and implementation of plan

(2) Every college or university described in subsection (1) shall develop and implement a research security plan to safeguard, and mitigate the risk of harm to or interference with, its research activities.

Minister’s directive

(3) The Minister may, from time to time, in a directive issued to one or more colleges or universities described in subsection (1),

(a) specify the date by which a college or university’s research security plan must be developed and implemented under subsection (2);

(b) specify the date by which a plan must be provided to the Minister under subsection (4) and any requirements relating to updating or revising a plan; and

(c) specify topics to be addressed or elements to be included in a plan and the date by which they must be addressed.

Review by Minister

(4) Every college or university described in subsection (1) shall provide the Minister with a copy of its research security plan and any other information or reports requested by the Minister in respect of research security.


Japan’s New Active Cyberdefence Law

Japan has passed legislation that will significantly reshape the range of cyber operations that its government agencies can undertake. As reported by The Record, the law will enable the following:

  1. Japan’s Self-Defence Forces will be able to provide material support to allies under the justification that failing to do so could endanger the whole of the country.
  2. Japanese law enforcement agencies will be able to infiltrate and neutralize hostile servers before any malicious activity has taken place, provided such operations remain below the threshold of an armed attack against Japan.
  3. The Self-Defence Forces will be authorized to undertake offensive cyber operations in response to particularly sophisticated incidents.
  4. The government will be empowered to analyze foreign internet traffic entering the country or just transiting through it. (The government has claimed it won’t collect or analyze the contents of this traffic.) Of note: the new law will not authorize the government to collect or analyze domestically generated internet traffic.
  5. Japan will establish an independent oversight panel that will give prior authorization to all acts of data collection and analysis, as well as for offensive operations intended to target attackers’ servers. This has some relationship to Ministerial oversight of the CSE in Canada, though perhaps with a greater degree of control over the activities undertaken by Japanese agencies.

The broader result of this legislative update will be to further align the Japanese government, and its agencies, with its Five Eyes friends and allies.

It will be interesting to learn over time whether these activities are impaired by the historical stovepiping of Japan’s defence and SIGINT competencies. Historically the strong division between these organizations impeded cyber operations and was an issue that the USA (and NSA in particular) had sought to have remedied over a decade ago. If these issues persist then the new law may not be taken up as effectively as would otherwise be possible.


Google to Provide Enhanced Security for Android

It’s positive to see Google providing enhanced security controls for its Android user base, including journalists, human rights defenders, politicians, and C-suite executives. These controls are designed to reduce some of the attack surface available to adversaries.

Some of the protections include:

  • The inability to connect to 2G networks, which lack the encryption protections that prevent over-the-air monitoring of voice and text-message communications
  • No automatic connections to insecure Wi-Fi networks, such as those using WEP or no encryption at all
  • The enabling of the Memory Tagging Extension, a relatively new form of memory management that’s designed to provide an extra layer of protection against use-after-free exploits and other memory-corruption attacks
  • Automatically locking when offline for extended periods
  • Automatically powering down a device when locked for prolonged periods to make user data unreadable without a fresh unlock
  • Intrusion logging that writes system events to a fortified region of the phone for use in detecting and diagnosing successful or attempted hacks
  • JavaScript protections that shut down Android’s JavaScript optimizer, a feature that can be abused in certain types of exploits

You can read more on Google’s blog post announcing the new controls.


Implications for Canada of an Anti-Liberal Democratic USA

Any number of commentators have raised concerns over whether the USA could become an illiberal state, and over the knock-on effects of such a shift. A recent piece by Dr. Benjamin Goldsmith briefly discussed a few forms of such a reformed state apparatus, but more interesting (to me) is his postulation of the potentially broader global effects:

  • The dominant ideology of great powers will be nationalism.  
  • International politics will resemble the realist vision of great powers balancing power, carving out spheres of influence.  
  • It will make sense for the illiberal great powers to cooperate in some way to thwart liberalism – a sort of new ‘Holy Alliance’ type system could emerge.  
  • The existing institutional infrastructure of international relations will move towards a state-centric bias, away from a human-rights, liberal bias.   
  • International economic interdependence, although curtailed since the days of high “globalisation,” will continue to play an important role in tempering great-power behaviour.  
  • Democracy will be under greater pressure globally, with no great power backing and perhaps active US encouragement of far-right illiberal parties in established and new democracies.  
  • Mass Politics and soft power will still matter, but the post-truth aspect of public opinion in foreign policy will be greater.  

For a middle state like Canada, this kind of transformation would fundamentally challenge how it has been able to operate for the past 80 years. This would follow both from the effects of this international reordering and from our proximity to a superpower that has broadly adopted or accepted an anti-liberal democratic political culture.

Concerning the first, what does this international reordering mean for Canada when nationalism reigns supreme after decades of developing economic and cultural integrations with the USA? What might it mean to fall within the ‘sphere of influence’ of an autocratic or illiberal country? How would Canada appease Americans who pushed our leaders to support other authoritarian governments, or else? Absent the same commitments (and resources) to advocate for democratic values and human rights (while recognizing America’s own missteps in those areas), what does it mean for Canada’s own potential foreign policy commitments? And in an era of rising adoption of generative AI technologies that can be used to produce and spread illiberal or anti-democratic rhetoric, and without the USA to regulate such uses of these technologies, what does this mean for detecting truth and falsity in international discourse?

In aggregate, these are the sorts of questions that Canadians should be considering, and they are part of why our leaders are warning of the implications of the changing American political culture.

When it comes to our proximity to a growing anti-liberal democratic political culture, we are already seeing some of those principles and rhetoric taking hold in Canada. As more of this language (and ideology) seeps into Canadian discourse, there is a growing chance that Canada’s own democratic norms might be perverted through extended exposure and through American pressures to compel alterations in our democratic institutions.

The shifts in the USA were not entirely unexpected. And the implications have been previously theorized. An anti-liberal democratic political culture will not necessarily take hold amongst Americans and their political institutions. But the implications and potential global effects of such a change are before us, today, and it’s important to carefully consider potential consequences. Middle states, such as Canada, that possess liberal democratic cultures must urgently prepare to navigate what may be a very chaotic and disturbing next few decades.


Categorizing Contemporary Attacks on Strong Encryption

Matt Burgess at Wired has a good summary article on the current (and always ongoing) debate concerning the availability of strong encryption.

In short, he sees three ‘classes’ of argument which are aimed at preventing individuals from protecting their communications (and their personal information) with robust encryption.

  1. Governments or law enforcement agencies are asking for backdoors to be built into encrypted platforms to gain “lawful access” to content. This is best exemplified by recent efforts by the United Kingdom to prevent residents from using Apple’s Advanced Data Protection.
  2. An increase in proposals related to a technology known as “client-side scanning.” Perhaps the best-known effort is an ongoing European proposal to monitor all users’ communications for child sexual abuse material, notwithstanding the broader implications of integrating a configurable detector (and censor) on all individuals’ devices. (A minimal sketch of the scanning pattern follows this list.)
  3. The threat of potential bans or blocks for encrypted services. We see this in Russia with respect to Signal, and in legal action against WhatsApp in India.
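
To make concrete what “client-side scanning” involves, here is a minimal illustrative sketch of the basic pattern: content is checked on-device against a provider-supplied database before it is ever encrypted. Real proposals typically use perceptual hashes of media rather than exact SHA-256 digests, and nothing below reflects any vendor’s actual design; the point is simply where the detector sits in the pipeline, and that whoever controls the blocklist controls what gets flagged.

```python
import hashlib
from typing import Callable

# Opaque, provider-supplied blocklist of content digests. Whoever controls
# this set controls what the on-device detector flags.
BLOCKLIST = {hashlib.sha256(b"example targeted content").hexdigest()}

def scan_before_send(plaintext: bytes) -> bool:
    """Return True if the message may proceed to encryption and transmission."""
    digest = hashlib.sha256(plaintext).hexdigest()
    # In deployed proposals a match could mean blocking, flagging to the
    # provider, or reporting to authorities, all before encryption occurs.
    return digest not in BLOCKLIST

def send(plaintext: bytes, encrypt_and_transmit: Callable[[bytes], None]) -> None:
    if scan_before_send(plaintext):
        encrypt_and_transmit(plaintext)
    else:
        print("message withheld by on-device scanner")

send(b"hello", lambda p: print("sent ciphertext"))
send(b"example targeted content", lambda p: print("sent ciphertext"))
```

Because the check happens before encryption, end-to-end guarantees are preserved in name only for any content the database targets.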

In this broader context it’s worth recognizing that alleged Chinese compromises of key American lawful interception systems led the US government to recommend that all Americans use strongly encrypted communications. If strong encryption is banned, there is a risk that there will be no respite from such network intrusions, and an entirely new domain of cyber threats will likely be created.