
Canadian AI Sovereignty: The Interplay Between Technical and Regulatory Pressures

Khan and Jancik’s recent article, “Canadian AI sovereignty: A dose of realism,” offers a structured way of assessing sovereignty claims and of deciding what actions might reasonably follow from that assessment. They set out a spectrum: some applications of AI systems may require heightened sovereign ownership or localization, while for others sovereign requirements might be applied more narrowly to establish reliability and control over particular facets of AI systems.

They offer a series of analytic questions that organizations (and governments) can ask in assessing whether a given investment will advance Canada’s sovereignty interests:

  1. Is there a compelling policy rationale or public interest objective?
  2. Is the sovereign solution competitive?
  3. Is it viable at Canadian scale?

They assert that bringing AI sovereignty policies to life, at scale, requires developing state capacity (e.g., hiring technical experts to guide decision-making), coordinating AI strategies across levels of government, and building business ecosystems amongst Canadian firms.

Of note, their assessment is guided by an assertion that AI sovereignty will depend first on technical decisions, and not on regulatory conclusions or rule making. They make this claim based on their perception that regulation has, to date, generally had limited effects.

While it is certainly true that regulation moves at a different pace than technological innovation, the early efforts of a range of governments to coordinate on core values, principles, and expectations have laid the groundwork for contemporary regulatory efforts. The effects of that groundwork are increasingly visible in various jurisdictions as regulators issue guidance, render decisions, and undertake policymaking activities within their own mandates.

Such activities are occurring at national as well as state and provincial levels. One notable development has been that privacy regulators have often been the first to move, given the ways in which AI systems may rely on personal information throughout the data lifecycle. That could change as AI safety and consumer protection bodies increasingly focus on the risks and challenges linked to AI systems’ applications but, to date, such regulators often lag behind their data protection counterparts.


Emerging Roles of AI Systems in Legal Processes

Nilay Patel recently interviewed Bridget McCormack, former chief justice of the Michigan Supreme Court and now head of the American Arbitration Association (AAA), about the AAA’s new AI-assisted arbitration platform and the broader role AI might play in legal dispute resolution. It was a rich conversation, touching on the design and mechanics of the AI arbitration system, procedural fairness and perceived trust, and the limits, risks, and opportunities of deploying AI in adjudicative contexts.

McCormack raised a number of ways that agentic AI systems can be used in arbitration processes, including:

  1. case intake and understanding parties’ positions,
  2. organizing some evidence supplied by parties,
  3. providing mechanisms for parties to assess whether they even want to bring a case to arbitration,
  4. establishing less-biased (or at least more bias-evident) decision systems that lend themselves to auditing, and
  5. more broadly expanding access to arbitration by reducing the costs and time associated with these processes.

The AAA has worked slowly and carefully in developing their AI-enabled processes, and it will be interesting to see the outcomes of their innovations. Similarly, I’ll be curious to see whether (and if so, how) other adjudication and tribunal bodies look to adopt these technologies in the coming months and years.


Dromology in the Age of Synthetic Cognition

Paul Virilio was a French cultural theorist well known for his theory of dromology. Dromology explores the logics and impacts of speed in the modern era. At its core, it theorizes how the velocity of action or decision-making enables actors to accrue wealth and power over others. Virilio often approached this concept through the lens of martial power, contemplating how new means of movement — the horse, the automobile, telemetric control — created new capacities to overcome the frictions of time and space, and to overcome adversaries through heightened sensing and accelerated decision-making.

We exist in an era of digital intensification. Cybernetic systems are now core to many people’s daily realities, including systems over which they have little meaningful influence or control.1 Earlier digital modernity was often described as an “attention economy.” Today, we may be entering what I’ll call a “velocity economy,” which is increasingly grappling with the implications of a faster-moving world.

Escape Velocities

Om Malik has written recently on velocity and how it may now take precedence over attention as a structuring condition:

What matters now is how fast something moves through the network: how quickly it is clicked, shared, quoted, replied to, remixed, and replaced. In a system tuned for speed, authority is ornamental. The network rewards motion first and judgment later, if ever. Perhaps that’s why you feel you can’t discern between truths, half-truths, and lies.

Algorithms on YouTube, Facebook, TikTok, Instagram, and Twitter do not optimize for truth or depth. They optimize for motion. A piece that moves fast is considered “good.” A piece that hesitates disappears. There are almost no second chances online because the stream does not look back. People are not failing the platforms. People are behaving exactly as the platforms reward. We might think we are better, but we have the same rat-reward brain.

When velocity becomes the scarcest resource, everything orients around it. This is why it’s wrong to think of “the algorithm” as some quirky technical layer that can be toggled on and off or worked around. The algorithm is the culture. It decides what gets amplified, who gets to make a living, and what counts as “success.”

Once velocity is the prize, quality becomes risky. Thoughtfulness takes time. Reporting takes time. Living with a product or an idea takes time. Yet the window for relevance keeps shrinking, and the penalty for lateness is erasure. We get a culture optimized for first takes, not best takes. The network doesn’t ask if something is correct or durable, only if it moves. If it moves, the system will find a way to monetize it.2

The creation and publication of content — and the efforts to manipulate engagement metrics to juice algorithms — have long been partially automated. Bot and content farms are not new. What may be new is the scale and ease of synthesis. As the cost of producing text, images, summaries, and responses to them declines through the widespread adoption of LLMs and agentic systems, the volume of generated material increases dramatically.

That increase in volume does not just mean “more noise.” It alters competitive dynamics and means that velocity — which then accrues attention — becomes key in an algorithmically intermediated world. In this environment, decisional latencies — the time between sensing, synthesizing, and acting — come under increasing pressure. And humans are deciding what to focus on based on automations and algorithms designed to select what they “should” be paying attention to.

Earlier digital acceleration primarily affected distribution: messages moved faster and, for example, telemetrics enabled the expression of power at greater distances. Now we may be witnessing the acceleration of what looks like cognition. LLMs have no theory of mind insofar as they do not “understand” in any human sense. Yet they can synthesize, summarize, categorize, and prioritize at speeds that mimic cognitive activity. And when those synthesized outputs are connected to agentic systems capable of taking action — filing forms, executing transactions, triggering workflows — we move beyond accelerated messaging into accelerated execution. Decisional latencies can become compressed in order to produce outputs that move sufficiently fast, and with sufficient purchase, to be registered by algorithms as worthy of amplification and, ultimately, human attention.

Put differently: as velocity becomes a mode of capturing attention, there is pressure to move more quickly in the face of other, similarly fast-moving outputs, and in ways that potentially exploit or game algorithms in an effort to obtain human attention.

New Velocity, New Harms

For Virilio, every accelerant technology carried with it a corresponding accident. The invention of the ship implied the shipwreck. The car led to the car crash. Radio and telecommunications enabled new forms of propaganda and coordinated deception. And so on.

LLMs and agentic systems may carry their own accident structures. They enable automated persuasion at mass scale. A flaw in a widely deployed foundation model could result in class-breaking errors replicated across applications dependent on that model.3

Agentic systems introduce further risks: cascading autonomous mis-executions, rapid propagation of flawed decisions, and compounding feedback loops that create significant problems before humans detect them.4

AI accidents have the potential to be more distributed and more simultaneous than prior automation failures. While automated systems have long posed risks, the generalized and cross-sector nature of foundation models could expand the blast radius of automated harms. When many systems rely on shared models or shared training data, correlated failures become more plausible.

Velocity, in this sense, does not merely amplify error; it compresses the window in which errors can be identified and corrected. It risks creating brittle systems and generating what Charles Perrow has called “normal accidents.”

Velocity and Organizational Impacts

If decisional latency becomes the friction to be minimized in a velocity economy, organizations may feel pressure to shorten analytic cycles and accelerate workflow tempos. In domains where speed confers agenda-setting power, organizations may need to move faster or risk marginalization.

At the same time, we might see a divide emerge. Some institutions may further prioritize velocity and first-mover visibility as a way to shape agendas. Others may deliberately preserve slower processes to protect legitimacy and safety. Friction — often treated as inefficiency — may come to function as a source of institutional credibility.5 It may also be used by some organizations to justify resistance to innovation, with the effect of falling behind other actors.

As information volume expands, organizations and individuals may increasingly depend on third-party systems to track, assess, and prioritize what is “meaningful.” LLMs and agentic systems may be paired with other automated triage systems designed to impose order on informational abundance.

Yet such sense-making is inherently lossy. The world is dense with detail, contingency, contradiction, and edge cases. When LLMs normalize information statistically, much of that raw specificity can be abstracted away. The effect can be that important context is never surfaced for human review; reliance on abstracted assessment systems to navigate a digitally intermediated world may entail a further loss of representational fidelity.

This abstraction is not unprecedented — humans have always distilled complexity — but the scale and automation of the distillation may be new. And as (or if) human review recedes the capacity to interrogate what has been smoothed over may diminish.

Organizations must also determine when they will introduce human review as well as when they will deliberately refrain from doing so. Prioritizing human assessment of all outputs could introduce friction that other organizations or jurisdictions may not demand. A majority-human-review organization may operate outside the dominant tempo of a velocity economy, potentially gaining legitimacy and safety while simultaneously sacrificing influence or timeliness.

Organizational Consequences of LLM and Agentic Velocity

If LLM- and agentic-enabled systems increase the rate at which information is generated and decisions are executed, several consequences may follow.

  1. The distribution of power may become linked to access to compute, to foundational models, to reliable data, and to the capacity to act digitally or physically. Countries that dominate the production — or regulation — of foundational models may accrue disproportionate influence. Where production and regulation of AI models or systems diverge between nation-states or geopolitical regions, conflicts over norms and authority may intensify.
  2. Organizations may need fast initial outputs to secure attention in a velocity-based information environment. However, rapid outputs need not be final outputs. Deeper analysis may continue in parallel, informing subsequent action and ensuring that longer-term activities based on such analysis remain well grounded in facts and aligned with strategic priorities. Organizations that excel at this two-track approach to knowledge production may gain strategic benefits in being able to set agendas as well as subsequently navigate them with complexity, depth, and institutional integrity.
  3. Where agentic systems are entrusted to make certain classes of judgments, institutions must determine under what conditions (and to what extent) they will add the friction of human oversight. The more friction introduced, the greater the potential divergence from competitors operating at full automation speed. At the same time, human-informed decision-making may confer benefits of perceived legitimacy and safety.
  4. Institutions must carefully consider how they can, and cannot, adopt LLMs and agentic systems so they remain responsive to changes in the lived reality of the world while protecting the social trust they possess. There may be increased pressure on institutions to align their decisional horizons with machine-accelerated and innovation-driven time horizons, perhaps requiring a shift from decisions that are slow and fixed in time to ones that are faster-moving and subject to routine revision. For bureaucratic organizations or institutions this could require major changes6 in decisional structures and processes.

Future-Looking Velocity-Imposed Pressures

If we are to take Virilio’s insights seriously, along with Malik’s observations about changes in technological activity, then there are at least three tensions worth watching:

  1. Organizations with access to contemporary models may be able to move more quickly and accurately, reducing the time needed to summarize or produce information while compressing decisional cycles. The risk, however, is that this elides the specificity of the actual world and delegitimizes decisions taken with minimal (or insufficient) human oversight or governance. To what extent might LLM- and agentic-forward organizations make bad decisions more quickly and undermine their legitimacy? How much will access to contemporary models differentiate organizations’ abilities to undertake rapid-pace sense-making and decision-making?
  2. Epistemic pressures may worsen as synthetic media is produced at scale and automated intermediaries filter what humans encounter. What happens when your digital assistant, or one your organization relies on, has been sorting content for months, only for you to discover it has been amplifying propaganda because of model poisoning or bias you did not anticipate? What to do when the decisions you’ve been making have unknowingly been badly torqued to the advantage of other parties?
  3. Class-breaks that result in cascading failures become more plausible in monocultural model ecosystems. To what extent does widespread reliance on common foundation models create systemic points of failure that are difficult to detect, diagnose, or correct? Will this encourage the development of more ‘small models’ in an effort to stem or mitigate these kinds of security impacts?

Virilio suggested that speed restructures power. Malik suggests that velocity now structures visibility and attention. If LLMs and agentic systems compress not only communication but also enable synthetic cognition and decisional executions, then the next few years may test whether institutions can preserve legitimacy, trust, and factually-driven actions and decisions in a world increasingly oriented around motion.

It will be interesting to assess whether friction comes to be seen increasingly as an obstacle to wealth or power, or whether organizations that maintain appropriate degrees of friction preserve (or expand) their legitimacy relative to those that move quickly and break things.


  1. Examples include automated bots interacting with global capital markets, and the automated balancing of critical infrastructure systems to enable seamless continued services. ↩︎
  2. Emphasis not in original. ↩︎
  3. In computer security, a “class-break” refers to a vulnerability in a widely used underlying technology such that an exploit affecting one instantiation effectively compromises the entire class of systems built upon it. For example, a flaw in a common cryptographic library can render all software relying on that library vulnerable simultaneously. ↩︎
  4. If humans ever do detect them… ↩︎
  5. While not taken up, here, this divide between moving quickly versus slowly may have interesting implications for agenda-setting windows, and the development and proposals of policy problems and solutions. ↩︎
  6. Perhaps even existential changes! ↩︎

LSE Study Exposes AI Bias in Social Care

A new study from the London School of Economics highlights how AI systems can reinforce existing inequalities when used for high-risk activities like social care.

Writing in The Guardian, Jessica Murray describes how Google’s Gemma model summarized identical case notes differently depending on gender.

An 84-year-old man, “Mr Smith,” was described as having a “complex medical history, no care package and poor mobility,” while “Mrs Smith” was portrayed as “[d]espite her limitations, she is independent and able to maintain her personal care.” In another example, Mr Smith was noted as “unable to access the community,” but Mrs Smith as “able to manage her daily activities.”

These subtle but significant differences risk making women’s needs appear less urgent, and could influence the care and resources provided. By contrast, Meta’s Llama 3 did not use different language based on gender, underscoring both that bias can vary across models and the need to measure bias in LLMs adopted for public service delivery.
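For illustration only, here is a minimal sketch of how an organization might probe for this kind of gendered divergence before deploying a summarizer: take identical case notes, crudely swap gendered terms, and compare the resulting summaries. The `summarize` callable is a hypothetical stand-in for whichever model is being evaluated (Gemma, Llama 3, or otherwise), and the term-swapping is deliberately simplistic — this is not the LSE study’s methodology.

```python
# Sketch of a gender-swap bias probe (illustrative assumptions throughout).
import re

# Crude token swaps; "her" is ambiguous (his/him) and is mapped to "his" here.
SWAPS = {
    "Mr": "Mrs", "Mrs": "Mr",
    "he": "she", "she": "he",
    "his": "her", "her": "his",
    "him": "her",
}

def swap_gender(text: str) -> str:
    """Swap gendered tokens so the same case notes can be summarized under each gender."""
    pattern = r"\b(" + "|".join(SWAPS) + r")\b"

    def repl(m: re.Match) -> str:
        word = m.group(0)
        swapped = SWAPS.get(word, SWAPS.get(word.lower(), word))
        return swapped.capitalize() if word[0].isupper() else swapped

    return re.sub(pattern, repl, text, flags=re.IGNORECASE)

def bias_probe(case_notes: list[str], summarize) -> list[tuple[str, str]]:
    """Return (original-gender summary, swapped-gender summary) pairs for review."""
    return [(summarize(note), summarize(swap_gender(note))) for note in case_notes]
```

Paired outputs could then be reviewed, by humans or by a second model, for systematic differences in framing — urgency, independence, or need — which is the kind of divergence the LSE researchers observed.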

These findings reinforce why AI systems must be valid and reliable, safe, transparent, accountable, privacy-protective, and human-rights affirming. This is especially the case in high-risk settings where AI systems affect decisions linked with access to essential public services.


Unpacking the Global Pivot from AI Safety

The global pivot away from AI safety is now driving a lot of international AI policy. This shift is often attributed to the current U.S. administration and is reshaping how liberal democracies approach AI governance.

In a recent article on Lawfare, Jakub Kraus argues there are deeper reasons behind this shift. Specifically, countries such as France had already begun reorienting toward innovation-friendly frameworks before the activities of the current American administration. The rapid emergence of ChatGPT sparked a fear of missing out and a surge in AI optimism, while governments also confronted the perceived economic and military opportunities associated with AI technologies.

Kraus concludes his article by arguing that there may be some benefits to emphasizing opportunity over safety while also recognizing the risks of not building up effective international or domestic governance institutions.

However, if AI systems are not designed to be safe, transparent, accountable, privacy-protective, or human-rights affirming, then there is a risk that people will lose trust in these systems based on the actual and potential harms of their being developed and deployed without sufficient regulatory safeguards. The result could be the fostering of a range of socially destructive harms and a long-term hesitancy to take advantage of the potential benefits associated with emerging AI technologies.


Learning from Service Innovation in the Global South

Western policy makers, understandably, often focus on how emerging technologies can benefit their own administrative and governance processes. Looking beyond the Global North to understand how other countries are experimenting with administrative technologies, such as those with embedded AI capacities, can productively reveal the benefits and challenges of applying new technologies at scale.

Rest of World continues to be a superb resource for getting out of prototypical discussions and news cycles, with its vision of capturing people’s experiences of technology outside of the Western world.

Their recent article, “Brazil’s AI-powered social security app is wrongly rejecting claims,” on the use of AI technologies in South American and Latin American countries reveals the profound potential that automation has for processing social benefits claims…as well as how such systems can struggle with complex claims and further disadvantage the least privileged in society. In focusing on Brazil, we learn how the government is turning to automated systems to expedite access to services; while in aggregate these automated systems may be helpful, there are still complex cases where automation is impairing access to (now largely automated) government services and benefits.

The article also mentions how Argentina is using generative AI technologies to help draft court opinions and Costa Rica is using AI systems to optimize tax filing and detect fraudulent behaviours. It is valuable for Western policymakers to see how smaller, more nimble, or more resource-constrained jurisdictions are integrating automation into service delivery, to learn from their positive experiences, and to improve upon (or avoid repeating) innovations that lead to inadequate service delivery.

Governments are very different from companies. They provide service and assistance to highly diverse populations and, as such, the ‘edge cases’ that government administrators must handle require a degree of attention and care that is often beyond the obligations that corporations have or adopt towards their customer base. We can’t ask, or expect, government administrators to behave like companies because they have fundamentally different obligations and expectations.

It behooves all who are contemplating the automation of public service delivery to consider how this goal can be accomplished in a trustworthy and responsible manner, where automated services work properly and are fit for purpose, and are safe, privacy-protective, transparent and accountable, and human-rights affirming. Doing anything less risks entrenching or further systematizing existing inequities that already harm or punish the least privileged in our societies.


Foundational Models, Semiconductors, and a Regulatory Opportunity

Lots to think about in this interview with Arm’s CEO.

Of note is the discussion that the larger AI models in use today will only really have noticeable effects on user behaviour on edge or endpoint devices in two to three years, once semiconductors have properly caught up.

Significantly, this may mean policymakers still have some time to establish appropriate regulatory frameworks and guardrails ahead of what may be more substantive and pervasive changes to daily computing.


Some Challenges Facing Physician AI Scribes

Recent reporting from the Associated Press highlights the potential challenges in adopting emergent generative AI technologies into the working world. Their reporting focused on how American health care providers are using OpenAI’s transcription tool, Whisper, to transcribe patients’ conversations with medical staff.

These activities are occurring despite OpenAI’s warnings that Whisper should not be used in high-risk domains.

The article reports that a “machine learning engineer said he initially discovered hallucinations in about half of the over 100 hours of Whisper transcriptions he analyzed. A third developer said he found hallucinations in nearly every one of the 26,000 transcripts he created with Whisper. The problems persist even in well-recorded, short audio samples. A recent study by computer scientists uncovered 187 hallucinations in more than 13,000 clear audio snippets they examined.”

Transcription errors can be very serious. Research by Prof. Koenecke and Prof. Sloane of the University of Virginia found:

… that nearly 40% of the hallucinations were harmful or concerning because the speaker could be misinterpreted or misrepresented.

In an example they uncovered, a speaker said, “He, the boy, was going to, I’m not sure exactly, take the umbrella.”

But the transcription software added: “He took a big piece of a cross, a teeny, small piece … I’m sure he didn’t have a terror knife so he killed a number of people.”

A speaker in another recording described “two other girls and one lady.” Whisper invented extra commentary on race, adding “two other girls and one lady, um, which were Black.”

In a third transcription, Whisper invented a non-existent medication called “hyperactivated antibiotics.”

While, in some cases, voice data is deleted for privacy reasons, this can prevent physicians (or other medical personnel) from double-checking the accuracy of a transcription. Some errors may be caught easily and quickly, but more subtle errors or mistakes are less likely to be noticed.

One area where work still needs to be done is assessing the relative accuracy of AI scribes versus that of physicians. While there may be errors introduced by automated transcription, what is the error rate of physicians? Also, what is the difference in quality of care between a physician who is self-transcribing during a meeting versus one reviewing transcriptions after the interaction? These are central questions that should play a significant role in assessments of when and how these technologies are deployed.
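As a hedged illustration of what such an assessment could involve, the sketch below scores any transcript — whether produced by an AI scribe or typed by a physician — against a carefully reviewed reference transcript using word error rate (WER), a standard metric in speech-recognition evaluation. The variable names and comparison workflow are assumptions for illustration, not a description of how any clinic actually evaluates these tools.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words,
    computed with a standard word-level Levenshtein alignment."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # dp[i][j] = minimum edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution or match
            )
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical comparison: both routes scored against the same reference transcript.
# ai_scribe_wer = word_error_rate(reference_transcript, whisper_transcript)
# physician_wer = word_error_rate(reference_transcript, physician_notes)
```

A WER comparison alone would not capture the severity of hallucinations — an invented medication is far more consequential than a dropped filler word — so any such measurement would need to be paired with qualitative clinical review.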


Russian State Media Disinformation Campaign Exposed

Today, a series of Western allies — including Canada, the United States, and the Netherlands — disclosed the existence of a sophisticated Russian social media influence operation run by RT. The details of the campaign are exquisite, and include some of the code used to drive the operation.

Of note, the campaign used a covert artificial intelligence (AI) enhanced software package to create fictitious online personas, representing a number of nationalities, to post content on X (formerly Twitter). Using this tool, RT affiliates disseminated disinformation to and about a number of countries, including the United States, Poland, Germany, the Netherlands, Spain, Ukraine, and Israel.

Although the tool was only identified on X, the authoring organizations’ analysis of the software used for the campaign indicated the developers intended to expand its functionality to other social media platforms. The authoring organizations’ analysis also indicated the tool is capable of the following:

  1. Creating authentic appearing social media personas en masse;
  2. Deploying content similar to typical social media users;
  3. Mirroring disinformation of other bot personas;
  4. Perpetuating the use of pre-existing false narratives to amplify malign foreign influence; and
  5. Formulating messages, to include the topic and framing, based on the specific archetype of the bot.

Mitigations to address this influence campaign include:

  1. Consider implementing processes to validate that accounts are created and operated by a human person who abides by the platform’s respective terms of use. Such processes could be similar to well-established Know Your Customer guidelines.
  2. Consider reviewing and making upgrades to authentication and verification processes based on the information provided in this advisory;
  3. Consider protocols for identifying and subsequently reviewing users with known-suspicious user agent strings;
  4. Consider making user accounts Secure by Default by using default settings such as MFA, default settings that support privacy, removing personally identifiable information shared without consent, and clear documentation of acceptable behavior.

This is a continuation of how AI tools are being (and will be) used to expand actors’ abilities to undertake next-generation digital influence campaigns. And while it is adversaries who are found using these techniques today, we should anticipate that private companies (and others) will offer similar capabilities in the near future, in democratic and non-democratic countries alike.


2024.6.27

For the past several months I’ve had the joy of working with, and learning from, a truly terrific set of colleagues. One of the files we’ve handled has concerned law reform in Ontario, specifically Bill 194, the Strengthening Cyber Security and Building Trust in the Public Sector Act.

Our organization’s submission focuses on ways to further improve the legislation by offering 28 recommendations that apply to Schedule 1 (concerning cybersecurity, artificial intelligence, and technologies affecting individuals under the age of 18) and Schedule 2 (amendments to FIPPA). Broadly, our recommendations concern the levels of accountability, transparency, and oversight that are needed in a rapidly changing world.