Companion Website: What Should Governments Do?

Technical Appendix and Regulatory Framework for the Transition from Tool to Actor.

The CASX Framework

The core argument of the book is that "Intelligence" is a poor regulatory trigger. Instead, we propose CASX: a compound measure of observable system properties that determine a system's position on the tool-to-actor gradient.

Capability

The demonstrable ability to solve novel problems across diverse domains without specific training.

Autonomy

The degree to which a system initiates and pursues multi-step plans without human-in-the-loop confirmation.

Scale

The breadth of deployment (number of users/instances) and the computational resources utilized.

Access

The connectivity to external infrastructure, financial systems, or physical robotic actuators.
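The four CASX factors above can be encoded as a simple data structure. The class, the [0, 1] normalization, and the product-based compound score are illustrative assumptions of ours; the book defines the factors, not a scoring scheme.

```python
from dataclasses import dataclass


@dataclass
class CASXProfile:
    """Hypothetical encoding of the four CASX factors, each in [0, 1]."""
    capability: float  # novel problem-solving across diverse domains
    autonomy: float    # multi-step plans without human-in-the-loop confirmation
    scale: float       # breadth of deployment and compute utilized
    access: float      # connectivity to infrastructure, finance, actuators

    def gradient_position(self) -> float:
        """One possible compound score for the tool-to-actor gradient.

        A product is used so that any factor near zero keeps the
        system tool-like; this aggregation is purely illustrative.
        """
        return self.capability * self.autonomy * self.scale * self.access
```

For example, a highly capable but fully sandboxed system (access near zero) scores close to zero, matching the intuition that it remains on the tool end of the gradient.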

Comparative Regulatory Models

To build a regime for AI, we draw from three established high-risk domains. Each model below contributes a distinct structural element to the AI framework.

Contribution: Quantitative Risk Thresholds

The Canadian Nuclear Safety Commission (CNSC) requires formal Probabilistic Safety Assessments (PSAs). A facility cannot operate if the assessed probability of severe core damage exceeds a defined numerical threshold (e.g., 10⁻⁵ per reactor-year).

AI Application: Replacing "is it safe?" with "does the probability of loss of control fall below a defined threshold X?"

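The shift from a qualitative to a quantitative question can be expressed directly in code. The 10⁻⁵ figure mirrors the nuclear PSA example; applying it to AI loss of control is the book's proposal, and the numeric value here is illustrative.

```python
# Illustrative threshold, borrowed from the nuclear PSA example above.
LOSS_OF_CONTROL_THRESHOLD = 1e-5


def may_operate(p_loss_of_control: float,
                threshold: float = LOSS_OF_CONTROL_THRESHOLD) -> bool:
    """PSA-style licensing test: a numeric bound, not a yes/no debate.

    p_loss_of_control is assumed to come from a formal assessment;
    how to estimate it rigorously for an AI system is the open problem.
    """
    return p_loss_of_control < threshold
```

The design point is that the function answers a bounded question ("below X?") rather than an unbounded one ("safe?").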
Contribution: Automatic Triggers

Under the EU's Seveso III Directive, regulatory obligations trigger automatically based on the volume of hazardous substances held on site, regardless of the operator's safety history.

AI Application: Mandatory oversight triggers automatically when CASX factors cross defined thresholds.

Contribution: Independent Investigation

The Transportation Safety Board of Canada (TSB) model protects whistleblowers and treats "near misses" as critical safety data; investigation is independent of the operator.

AI Application: Mandatory incident reporting for AI "drift" or goal persistence, overseen by an independent body.

Interactive Gate-Checker

The gate-checker determines which regulatory gate a hypothetical system would trigger based on its current CASX properties.

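The gate-checker's logic can be sketched as a set of Seveso-style threshold rules over the CASX factors. The gate names and cutoff values below are hypothetical placeholders we invented for illustration, not the framework's actual thresholds.

```python
def check_gate(capability: float, autonomy: float,
               scale: float, access: float) -> str:
    """Automatic-trigger sketch: crossing a defined CASX threshold
    mandates oversight regardless of operator history.

    All gate names and cutoffs are hypothetical placeholders.
    """
    if capability > 0.8 and autonomy > 0.8 and access > 0.5:
        return "Gate 3: Pre-deployment licensing required"
    if autonomy > 0.5 or access > 0.5:
        return "Gate 2: Mandatory monitoring and incident reporting"
    if capability > 0.5 and scale > 0.5:
        return "Gate 1: Registration and disclosure"
    return "Tool Status (No Gate Triggered)"
```

Because the rules are evaluated in descending order of severity, a system is always assigned the strictest gate it qualifies for, echoing the Seveso principle that obligations attach automatically once a threshold is crossed.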

Further Reading & Constitutive Safety

External control (gates and thresholds) is a necessary first step, but it is fundamentally a "fence." For systems approaching superintelligence, we must explore Constitutive Safety—building safety into the internal architecture of the AI.

We recommend Yoshua Bengio's work on "Scientist AI," and work on the formal verification of safety bounds, as the only long-term alternative to categorical prohibition.
