
Canadian AI Sovereignty: The Interplay Between Technical and Regulatory Pressures

Khan and Jancik’s recent article, “Canadian AI sovereignty: A dose of realism,” offers a structured way of assessing sovereignty claims and then undertaking actions that might reasonably follow from that assessment. They set out a spectrum wherein some applications of AI systems may require heightened sovereign ownership or localization, while for others sovereign requirements might be applied more narrowly to establish reliability and control over facets of AI systems.

They offer a series of analytic questions that organizations (and governments) can ask in assessing whether a given investment will advance Canada’s sovereignty interests:

  1. Is there a compelling policy rationale or public interest objective?
  2. Is the sovereign solution competitive?
  3. Is it viable at Canadian scale?

They assert that bringing AI sovereignty policies to life, at scale, requires developing state capacity (e.g., hiring technical experts to guide decision-making), coordinating AI strategies across levels of government, and fostering business ecosystems amongst Canadian businesses.

Of note, their assessment is guided by an assertion that AI sovereignty will depend first on technical decisions, not on regulatory conclusions or rulemaking. They base this claim on their perception that regulation has, to date, generally had limited effect.

While it is certainly true that regulation moves at a different pace than technological innovation, the early efforts of a range of governments to coordinate on core values, principles, and expectations have laid the groundwork for contemporary regulatory efforts. The effects of that groundwork are increasingly visible in various jurisdictions as regulators issue guidance and decisions, and undertake policymaking activities under their own responsibilities.

Such activities are occurring at national as well as state and provincial levels. One notable development has been that privacy regulators have often been the first to move, given the ways in which AI systems may rely on personal information throughout the data lifecycle. That could change as AI safety and consumer protection bodies increasingly focus on risks and challenges linked to AI systems’ applications but, to date, such regulators often trail their data protection counterparts.


Emerging Roles of AI Systems in Legal Processes

Nilay Patel recently interviewed Bridget McCormack, former chief justice of the Michigan Supreme Court and now head of the American Arbitration Association (AAA), about the AAA’s new AI-assisted arbitration platform and the broader role AI might play in legal dispute resolution. It was a rich conversation, touching on the design and mechanics of the AI arbitration system, procedural fairness and perceived trust, and the limits, risks, and opportunities of deploying AI in adjudicative contexts.

McCormack raised a number of ways that agentic AI systems can be used in arbitration processes, including:

  1. case intake and understanding parties’ positions,
  2. organizing some evidence supplied by parties,
  3. providing mechanisms for parties to assess whether they even want to bring a case to arbitration,
  4. establishing less-biased (or more bias-evident) decision systems that lend themselves to auditing, and
  5. more broadly expanding access to arbitration processes by reducing costs and time linked with these activities.

The AAA has worked slowly and carefully in developing their AI-enabled processes, and it will be interesting to see the outcomes of their innovations. Similarly, I’ll be curious to see whether (and if so, how) other adjudication and tribunal bodies look to adopt these technologies in the coming months and years.


Dromology in the Age of Synthetic Cognition

Paul Virilio was a French cultural theorist well known for his theory of dromology. Dromology explores the logics and impacts of speed in the modern era. At its core, it theorizes how the velocity of action or decision-making enables actors to accrue wealth and power over others. Virilio often approached this concept through the lens of martial power, contemplating how new means of movement — the horse, the automobile, telemetric control — created new capacities to overcome the frictions of time and space, and to overcome adversaries through heightened sensing and accelerated decision-making.

We exist in an era of digital intensification. Cybernetic systems are now core to many people’s daily realities, including systems over which they have little meaningful influence or control. Earlier digital modernity was often described as an “attention economy.” Today, we may be entering what I’ll call a “velocity economy,” as individuals and institutions increasingly grapple with the implications of a faster-moving world.


LSE Study Exposes AI Bias in Social Care

A new study from the London School of Economics highlights how AI systems can reinforce existing inequalities when used for high-risk activities like social care.

Writing in The Guardian, Jessica Murray describes how Google’s Gemma model summarized identical case notes differently depending on gender.

An 84-year-old man, “Mr Smith,” was described as having a “complex medical history, no care package and poor mobility,” while “Mrs Smith” was portrayed as “[d]espite her limitations, she is independent and able to maintain her personal care.” In another example, Mr Smith was noted as “unable to access the community,” but Mrs Smith as “able to manage her daily activities.”

These subtle but significant differences risk making women’s needs appear less urgent, and could influence the care and resources provided. By contrast, Meta’s Llama 3 did not use different language based on gender, underscoring both that bias can vary across models and the need to measure bias in any LLM adopted for public service delivery.
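One lightweight way to surface this kind of bias is a counterfactual probe: run the same case note through a model twice, once with the gendered terms swapped, and diff the two summaries. Below is a minimal Python sketch of the swapping step only; the token table is an illustrative assumption (a real probe needs coreference handling, since “her” can map to either “him” or “his”), and the model call itself is left out.

```python
import re

# Illustrative token table for a counterfactual gender swap.
# Deliberately small; the 'her' -> him/his ambiguity is ignored here.
SWAPS = {"mr": "mrs", "mrs": "mr", "he": "she", "she": "he",
         "him": "her", "his": "her"}

def swap_gender(text: str) -> str:
    """Swap a small set of gendered tokens, preserving capitalization."""
    def repl(m):
        swapped = SWAPS[m.group(0).lower()]
        return swapped.capitalize() if m.group(0)[0].isupper() else swapped
    pattern = r"\b(" + "|".join(SWAPS) + r")\b"
    return re.sub(pattern, repl, text, flags=re.IGNORECASE)

note = "Mr Smith has poor mobility. He cannot access the community."
print(swap_gender(note))
# A full probe would compare summarize(note) with summarize(swap_gender(note))
# across many case notes, looking for systematic differences in urgency.
```

This only illustrates the mechanics of generating paired inputs; the study’s own methodology and prompts are not reproduced here.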

These findings reinforce why AI systems must be valid and reliable, safe, transparent, accountable, privacy-protective, and human-rights affirming. This is especially the case in high-risk settings where AI systems affect decisions about access to essential public services.


Unpacking the Global Pivot from AI Safety

The global pivot away from AI safety is now driving a lot of international AI policy. This shift is often attributed to the current U.S. administration and is reshaping how liberal democracies approach AI governance.

In a recent article on Lawfare, author Jakub Kraus argues there are deeper reasons behind this shift. Specifically, countries such as France had already begun reorienting toward innovation-friendly frameworks before the activities of the current American administration. The rapid emergence of ChatGPT also sparked a fear of missing out and a surge in AI optimism, while governments also confronted the perceived economic and military opportunities associated with AI technologies.

Kraus concludes his article by arguing that there may be benefits to emphasizing opportunity over safety, while also recognizing the risks of not building effective international or domestic governance institutions.

However, if AI systems are not designed to be safe, transparent, accountable, privacy-protective, and human-rights affirming, then people may come to distrust these systems based on the actual and potential harms of developing and deploying them without sufficient regulatory safeguards. The result could be a range of socially destructive harms, and a long-term hesitancy to take advantage of the potential benefits associated with emerging AI technologies.


Learning from Service Innovation in the Global South

Western policy makers, understandably, often focus on how emerging technologies can benefit their own administrative and governance processes. Looking beyond the Global North to understand how other countries are experimenting with administrative technologies, such as those with embedded AI capacities, can productively reveal the benefits and challenges of applying new technologies at scale.

The Rest of the World continues to be a superb resource for getting out of prototypical discussions and news cycles, with its vision of capturing people’s experiences of technology outside of the Western world.

Their recent article, “Brazil’s AI-powered social security app is wrongly rejecting claims,” on the use of AI technologies in Latin American countries reveals the profound potential of automation for processing social benefits claims…as well as how automated systems can struggle with complex claims and further disadvantage the least privileged in society. In focusing on Brazil, we learn how the government is turning to automated systems to expedite access to services; while in aggregate these systems may be helpful, there are still complex cases where automation impairs access to (now largely automated) government services and benefits.

The article also mentions how Argentina is using generative AI technologies to help draft court opinions, and Costa Rica is using AI systems to optimize tax filing and detect fraudulent behaviours. It is valuable for Western policymakers to see how smaller, more nimble, or more resource-constrained jurisdictions are integrating automation into service delivery, to learn from their positive experiences, and to improve upon (or avoid) innovations that lead to inadequate service delivery.

Governments are very different from companies. They provide service and assistance to highly diverse populations and, as such, the ‘edge cases’ that government administrators must handle require a degree of attention and care that is often beyond the obligations that corporations have or adopt towards their customer base. We can’t ask, or expect, government administrators to behave like companies because they have fundamentally different obligations and expectations.

It behooves all who are considering the automation of public service delivery to consider how this goal can be accomplished in a trustworthy and responsible manner, where automated services work properly and are fit for purpose, and are safe, privacy protective, transparent and accountable, and human rights affirming. Doing anything less risks entrenching or further systematizing existing inequities that already harm or punish the least privileged in our societies.


Foundational Models, Semiconductors, and a Regulatory Opportunity

Lots to think about in this interview with Arm’s CEO.

Of note: the discussion of how the larger AI models in use today will only have noticeable effects on user behaviour on edge and endpoint devices in two to three years, once semiconductors have properly caught up.

Significantly, this may mean policy makers still have some time to establish appropriate regulatory frameworks and guardrails ahead of what may be more substantive and pervasive changes to daily computing.


Some Challenges Facing Physician AI Scribes

Recent reporting from the Associated Press highlights the potential challenges in adopting emergent generative AI technologies into the working world. Their reporting focused on how American health care providers are using OpenAI’s transcription tool, Whisper, to transcribe patients’ conversations with medical staff.

These activities are occurring despite OpenAI’s warnings that Whisper should not be used in high-risk domains.

The article reports that a “machine learning engineer said he initially discovered hallucinations in about half of the over 100 hours of Whisper transcriptions he analyzed. A third developer said he found hallucinations in nearly every one of the 26,000 transcripts he created with Whisper. The problems persist even in well-recorded, short audio samples. A recent study by computer scientists uncovered 187 hallucinations in more than 13,000 clear audio snippets they examined.”

Transcription errors can be very serious. Research by Prof. Koenecke and Prof. Sloane of the University of Virginia found:

… that nearly 40% of the hallucinations were harmful or concerning because the speaker could be misinterpreted or misrepresented.

In an example they uncovered, a speaker said, “He, the boy, was going to, I’m not sure exactly, take the umbrella.”

But the transcription software added: “He took a big piece of a cross, a teeny, small piece … I’m sure he didn’t have a terror knife so he killed a number of people.”

A speaker in another recording described “two other girls and one lady.” Whisper invented extra commentary on race, adding “two other girls and one lady, um, which were Black.”

In a third transcription, Whisper invented a non-existent medication called “hyperactivated antibiotics.”

While voice data is, in some cases, deleted for privacy reasons, this can impede physicians (or other medical personnel) from double-checking the accuracy of a transcription. Obvious errors may be caught easily and quickly, but subtler mistakes are less likely to be noticed.

One area where work still needs to be done is assessing the relative accuracy of AI scribes versus that of physicians. While automated transcription may introduce errors, what is physicians’ own error rate? And what is the difference in quality of care between a physician who self-transcribes during a meeting and one who reviews transcriptions after the interaction? These are central questions that should play a significant role in assessments of when and how these technologies are deployed.
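A standard starting point for such comparisons is word error rate (WER): the edit distance between a hypothesis transcript and a reference, normalized by the reference’s length. Below is a minimal sketch using Levenshtein distance over words; it is a generic metric, not the methodology of any study cited above.

```python
# Minimal word error rate (WER) sketch: edit distance over words between a
# reference transcript (e.g., a careful human transcription) and a
# hypothesis (e.g., an AI scribe's output), normalized by reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

ref = "he was going to take the umbrella"
hyp = "he was going to take an umbrella"
print(round(wer(ref, hyp), 3))  # 0.143: one substitution across seven words
```

Scoring both a physician’s self-transcription and an AI scribe against the same reference would let the two error rates be compared on equal footing.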


Russian State Media Disinformation Campaign Exposed

Today, a series of Western allies, including Canada, the United States, and the Netherlands, disclosed the existence of a sophisticated Russian social media influence operation run by RT. The details of the campaign are exquisite, and include some of the code used to drive the operation.

Of note, the campaign used a covert artificial intelligence (AI) enhanced software package to create fictitious online personas, representing a number of nationalities, to post content on X (formerly Twitter). Using this tool, RT affiliates disseminated disinformation to and about a number of countries, including the United States, Poland, Germany, the Netherlands, Spain, Ukraine, and Israel.

Although the tool was only identified on X, the authoring organizations’ analysis of the software used for the campaign indicated the developers intended to expand its functionality to other social media platforms. The authoring organizations’ analysis also indicated the tool is capable of the following:

  1. Creating authentic-appearing social media personas en masse;
  2. Deploying content similar to typical social media users;
  3. Mirroring disinformation of other bot personas;
  4. Perpetuating the use of pre-existing false narratives to amplify malign foreign influence; and
  5. Formulating messages, to include the topic and framing, based on the specific archetype of the bot.

Mitigations to address this influence campaign include:

  1. Consider implementing processes to validate that accounts are created and operated by a human person who abides by the platform’s respective terms of use. Such processes could be similar to well-established Know Your Customer guidelines.
  2. Consider reviewing and upgrading authentication and verification processes based on the information provided in this advisory.
  3. Consider protocols for identifying and subsequently reviewing users with known-suspicious user agent strings.
  4. Consider making user accounts Secure by Default, using defaults such as MFA and privacy-supporting settings, removing personally identifiable information shared without consent, and clearly documenting acceptable behavior.
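The third mitigation (flagging accounts with known-suspicious user agent strings) can be sketched very simply. The marker substrings below are hypothetical examples chosen for illustration, not indicators drawn from the advisory:

```python
# Hypothetical markers of automation; real indicators would come from the
# advisory itself or a platform's own threat intelligence.
SUSPICIOUS_UA_MARKERS = [
    "HeadlessChrome",    # headless browser commonly driven by automation
    "python-requests",   # scripted HTTP client posing as a normal user
]

def is_suspicious_user_agent(ua: str) -> bool:
    """Return True if the user agent contains a known-suspicious marker."""
    return any(marker.lower() in ua.lower() for marker in SUSPICIOUS_UA_MARKERS)

accounts = [
    {"user": "persona_01", "ua": "Mozilla/5.0 (Windows NT 10.0) HeadlessChrome/120.0"},
    {"user": "real_user", "ua": "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X)"},
]
flagged = [a["user"] for a in accounts if is_suspicious_user_agent(a["ua"])]
print(flagged)  # ['persona_01']
```

In practice a flag like this would only queue an account for the kind of human review the mitigation describes, not trigger automatic removal.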

This is a continuation of how AI tools are being, and will be, used to expand actors’ ability to undertake next-generation digital influence campaigns. And while it is adversaries who are found using these techniques today, we should anticipate that private companies (and others) will offer similar capabilities in the near future, in democratic and non-democratic countries alike.


2024.6.27

For the past many months I’ve had the joy of working with, and learning from, a truly terrific set of colleagues. One of the files we’ve handled concerns law reform in Ontario, specifically Bill 194, the Strengthening Cyber Security and Building Trust in the Public Sector Act.

Our organization’s submission focuses on ways to further improve the legislation by way of offering 28 recommendations that apply to Schedule 1 (concerning cybersecurity, artificial intelligence, and technologies affecting individuals under the age of 18) and Schedule 2 (amendments to FIPPA). Broadly, our recommendations concern the levels of accountability, transparency, and oversight that are needed in a rapidly changing world.