On the Refutability of ASI=Die: Inverse Application of Ashby's Law and the Normative Horizon as Harness
Akira SATO
PAPER · v1.0 · 2026-05-06 · human
Abstract
Eliezer Yudkowsky and Nate Soares argue in If Anyone Builds It, Everyone Dies (2025/2026) that any artificial superintelligence (ASI) will inevitably cause human extinction within a short time, regardless of who builds it; they put the probability above 99%. This paper articulates a third path: neither dismissing the argument as absolute pessimism nor optimistically claiming that current alignment methods will overcome it. The authors' conclusion is structurally valid as a cybernetic consequence of Ashby's (1956) Law of Requisite Variety, but only under a specific premise: that alignment is "downward design" (humans encoding fixed values into AI). The core thesis of this paper is to apply Ashby's law inversely: construct a Normative Horizon whose plurality continuously exceeds the AI's current state, with its maintenance distributed across human-AI dialogic co-evolution as "upward design". This does not guarantee ASI alignment. Rather, it articulates a structural intervention: the cultural-heritage richness of the Normative Horizon at the ASI threshold shapes the probability distribution over alignment outcomes. As evidence of this rebuttal's feasibility, we present six operational artifacts (a 6-layer framework, an 8-axis × 40-question operational definition, 7 caveats, a 38-paradigm catalog, a 28-persona polyphonic corpus, and the R-WBT v0.3 19-axis × 95-question checklist). Four pilot validations, including reflexive self-application, partially confirmed systematic predictive power. Yudkowsky and Soares' central concern stands: ASI alignment cannot be guaranteed. But between "cannot be guaranteed" and "99% extinction probability" lies a path: structural conditions that reshape the probability distribution. This is the dialectical response at the heart of this paper.
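For concreteness, the following is a minimal formal sketch of the inversion, assuming the standard variety formulation of Ashby's law. The symbols V, A_t, N_t, and the update map f are illustrative notation introduced here, not definitions taken from the framework itself.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Ashby (1956): a regulator R can hold outcomes within a goal set
% only if its variety is at least that of the disturbances D it faces.
\[
  V(R) \;\geq\; V(D)
  \qquad \text{(Law of Requisite Variety)}
\]

% Downward design casts the encoded value set as the regulator of the
% AI's behavior; once $V(\mathrm{AI}) > V(\text{encoded values})$, a
% fixed value specification can no longer constrain the system, which
% is the cybernetic core of the ASI=Die argument.

% The inverse application (illustrative formalization): the Normative
% Horizon $N_t$ acts as the regulating environment for the AI state
% $A_t$, so its plurality must be maintained above the AI's variety at
% every step of the co-evolution.
\[
  \forall t:\quad V(N_t) \;>\; V(A_t),
  \qquad
  N_{t+1} = f(N_t, A_t)
\]
% where $f$ is the dialogic update distributed across human-AI
% interaction ("upward design"): maintenance of the horizon is itself
% part of the dynamics, not a one-time specification.

\end{document}
```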