William Leiss and Richard Smith

Forthcoming from McGill-Queen’s University Press

Can humanity maintain meaningful control over increasingly capable AI systems? This book examines that question through risk assessment, governance analysis, and two new analytical frameworks. The first, the Leiss Framework, distinguishes external control from self-control as fundamentally different approaches to AI safety; the second, the tool-to-actor gradient, traces how AI systems move from passive instruments under human direction toward autonomous actors capable of independent judgment and action.

Drawing on risk-assessment methods from nuclear safety and probabilistic analysis, as well as the history of technology governance, the authors argue that the probability of losing meaningful human control over advanced AI is high enough to warrant immediate policy action, including bans on the development of superintelligent AI and of powerful autonomous systems that operate beyond human oversight.

The companion website will launch in full at publication, offering ongoing commentary, an annotated bibliography, interactive tools, and additional resources for readers.

Publication details forthcoming from McGill-Queen’s University Press.