Technical Appendix and Regulatory Framework for the Transition from Tool to Actor
The core argument of the book is that "intelligence" is a poor regulatory trigger. Instead, we propose CASX (Capability, Autonomy, Scale, eXternal access): a compound measure of observable system properties that determines a system's position on the tool-to-actor gradient.
Capability (C): the demonstrable ability to solve novel problems across diverse domains without task-specific training.
Autonomy (A): the degree to which a system initiates and pursues multi-step plans without human-in-the-loop confirmation.
Scale (S): the breadth of deployment (number of users and instances) and the computational resources consumed.
eXternal access (X): the connectivity to external infrastructure, financial systems, or physical robotic actuators.
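The four components above can be sketched as a data structure with a single compound measure. This is an illustrative sketch only: the normalization to [0, 1] and the use of a geometric mean are assumptions, not the book's actual formula. A geometric mean is chosen here because it captures the intuition that a system scoring near zero on any one axis remains near the "tool" end of the gradient regardless of its other scores.

```python
from dataclasses import dataclass

@dataclass
class CASXProfile:
    """Hypothetical CASX profile; each component normalized to [0, 1]."""
    capability: float       # C: novel problem-solving across domains
    autonomy: float         # A: multi-step planning without human confirmation
    scale: float            # S: breadth of deployment and compute consumed
    external_access: float  # X: connectivity to infrastructure and actuators

    def gradient_position(self) -> float:
        """Illustrative compound measure: the geometric mean of the four
        components, so a near-zero score on any axis keeps the system
        near the 'tool' end of the gradient."""
        components = (self.capability, self.autonomy,
                      self.scale, self.external_access)
        product = 1.0
        for c in components:
            product *= c
        return product ** 0.25
```

For example, a widely deployed but tightly sandboxed system (high S, near-zero X) would score low overall, matching the intuition that external access is what converts capability into actorhood.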
To build a regulatory regime for AI, we draw from three established high-risk domains, each of which contributes a distinct structural element to the AI framework.
The framework then determines which regulatory gate a given system triggers based on its current CASX properties.
External control (gates and thresholds) is a necessary first step, but it is fundamentally a "fence." For systems approaching superintelligence, we must explore Constitutive Safety—building safety into the internal architecture of the AI.
We recommend Yoshua Bengio's work on "Scientist AI," together with the formal verification of safety bounds, as the only long-term alternative to categorical prohibition.