Khan and Jancik’s recent article, “Canadian AI sovereignty: A dose of realism,” offers a structured way of assessing sovereignty claims and identifying the actions that might reasonably follow from that assessment. They set out a spectrum on which some applications of AI systems may require heightened sovereign ownership or localization, while for others sovereign requirements might be applied more narrowly to establish reliability and control over particular facets of AI systems.
They offer a series of analytic questions that organizations (and governments) can ask when assessing whether a given investment will advance Canada’s sovereignty interests:
- Is there a compelling policy rationale or public interest objective?
- Is the sovereign solution competitive?
- Is it viable at Canadian scale?
They assert that bringing AI sovereignty policies to life at scale requires developing state capacity (e.g., hiring technical experts to guide decision-making), coordinating AI strategies across levels of government, and cultivating business ecosystems among Canadian firms.
Of note, their assessment is guided by the assertion that AI sovereignty will depend first on technical decisions, not on regulatory conclusions or rulemaking. They ground this claim in their perception that regulation has, to date, generally had limited effect.
While it is certainly true that regulation moves at a different pace than technological innovation, the early efforts of a range of governments to coordinate on core values, principles, and expectations have laid the groundwork for contemporary regulatory efforts. The effects of that groundwork are increasingly visible as regulators in various jurisdictions issue guidance, render decisions, and undertake policymaking within their own mandates.
Such activity is occurring at the national as well as the state and provincial levels. One notable development is that privacy regulators have often been the first to move, given the ways in which AI systems may rely on personal information throughout the data lifecycle. That could change as AI safety and consumer protection bodies increasingly focus on the risks and challenges linked to AI systems’ applications but, to date, those regulators have generally lagged behind their data protection counterparts.