An Admissibility-Theoretic Taxonomy of AI Capability Levels: From Narrow AI to Artificial Superintelligence
Ian Staley
PAPER · v1.0 · 2026-05-10 · human
Abstract
The taxonomies that currently structure discussion of artificial intelligence capability, from narrow AI through artificial general intelligence (AGI) to artificial superintelligence (ASI), are overwhelmingly behavioral: they define levels by what systems can be observed to do. This approach faces well-documented coherence problems at the AGI/ASI boundary, where behavioral signatures underdetermine the type of system under examination, definitions drift with each new model release, and disagreements about whether a transition has occurred resist empirical resolution. This paper proposes a non-behavioral alternative: an admissibility-theoretic taxonomy in which AI capability levels are characterized not by observed outputs but by the structure of admissible trajectories through capability space, with structural inspiration drawn from final-state-constrained formalisms in physics and information theory. The framework defines narrow AI as the regime in which admissible histories are bounded by externally specified terminal conditions; AGI as the regime in which terminal conditions become endogenously selected; and ASI as the regime in which the system modifies its own admissibility criteria. The paper argues that this framework gives a formal handle on whether the transitions between levels are continuous or phase-transitional, clarifies how behavioral benchmarks can mistake narrow capability scaling for general capability emergence, and supplies vocabulary for ASI-specific concerns, including instrumental convergence and mesa-optimization, that current behavioral taxonomies struggle to articulate. Testable predictions, limitations, and open problems are discussed. The proposal is structural rather than literal: it borrows the formalism of final-state constraints from quantum foundations without claiming any quantum mechanism in artificial intelligence systems.
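A minimal notational sketch of the three regimes may help fix ideas before the body of the paper; the symbols below ($\mathcal{C}$, $\Gamma$, $F$, $\Phi_{\mathrm{sys}}$) are illustrative shorthand introduced here, not definitions taken from the paper itself. Let $\Gamma$ be a set of capability trajectories $\gamma : [0,T] \to \mathcal{C}$ through a capability space $\mathcal{C}$, and let $\mathcal{A} \subseteq \Gamma$ denote the admissible set. The taxonomy can then be glossed as:

% Illustrative gloss of the three regimes; notation is assumed, not sourced.
\begin{align*}
  \text{Narrow AI:} \quad & \mathcal{A} = \{\gamma \in \Gamma : \gamma(T) \in F_{\mathrm{ext}}\}
    && F_{\mathrm{ext}} \text{ fixed by an external designer;} \\
  \text{AGI:} \quad & \mathcal{A} = \{\gamma \in \Gamma : \gamma(T) \in F(\gamma)\}
    && F \text{ selected endogenously by the system;} \\
  \text{ASI:} \quad & \mathcal{A}_{t+1} = \Phi_{\mathrm{sys}}(\mathcal{A}_t)
    && \text{the admissibility criterion itself modified by the system.}
\end{align*}

On this reading, the level transitions correspond to changes in where the terminal condition comes from, not to any behavioral threshold.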