A Pragmatic Paper on Regulatory Sandboxes

Novelli et al.’s paper persuasively argues that regulatory sandboxes must do more than support innovation: they structure oversight, maintain accountability, and ensure that experimentation strengthens, rather than weakens, regulatory regimes.

I can’t recommend the paper, “Getting Regulatory Sandboxes Right: Design and Governance Under the AI Act,” highly enough.

Beyond walking through regulatory sandboxes under the EU AI Act, it helpfully frames sandboxes not merely as tools to support innovation, but as governance instruments for managing risk, uncertainty, and fundamental rights in real-world AI deployment. In doing so, the paper draws a clear distinction between sandboxes and more general regulatory engagement mechanisms: unlike advisory or guidance-based approaches, sandboxes may involve controlled and conditional flexibility from existing regulatory requirements, but always within a structured and supervised framework.

The paper usefully summarizes both the opportunities and challenges for participants, regulators and authorities, and governments, while emphasizing that this flexibility is neither absolute nor risk-free. Participation can be suspended or terminated where “significant risks to health and safety and fundamental rights” arise and cannot be mitigated, and critically, sandbox participation does not displace existing liability regimes. Even within a sandbox, actors remain subject to civil and criminal liability where harm occurs.

The walkthrough of each major stage of sandbox activity — pre-testing, testing, and post-testing — is particularly valuable. The authors stress that sandboxes should be understood as structured, lifecycle-based processes, with clearly defined entry, monitoring, and exit conditions. During participation, organizations may benefit from reduced or sequenced compliance obligations — for example, not being required to meet the full suite of post-market monitoring or documentation requirements that apply to commercialized AI systems. However, this flexibility is conditional, time-limited, and tied to adherence to a defined sandbox plan under regulatory supervision. Once systems are market-ready, the full obligations apply, underscoring that sandboxes are not meant as a pathway to permanent regulatory relaxation.

Importantly, the paper makes clear that sandboxes can function as places where fundamental rights protections are actively tested, validated, and enforced from the outset. This is reflected in the expectation that oversight should be calibrated to risk: higher-risk systems may require closer monitoring and longer testing periods, while lower-risk applications may justify lighter-touch oversight and shorter durations.

One concept I found particularly useful is that sandboxes can be used not only to test a participant’s system, but also to actively probe it from a regulatory perspective. For example, incorporating simulated regulatory audits during sandbox participation can help prepare participants for real-world compliance expectations before they face them on the market.

Throughout, the paper is explicit about potential governance risks associated with regulatory sandboxes. Without transparency, sandboxes can produce opaque decision-making and confer unfair advantages, as illustrated by examples where limited public information has been provided about approved experiments. Similarly, historical precedents demonstrate how poorly designed regimes can encourage regulatory arbitrage and “race to the bottom” dynamics. The authors also caution against risks of regulatory capture and uneven access, where well-resourced actors may disproportionately benefit or informally shape regulatory expectations.

The authors also make clear that sandboxes cannot be separated from broader market and regulatory dynamics. Without strong connections to implementation ecosystems, they risk becoming isolated experiments rather than pathways to responsible deployment.
