Beyond Consciousness: Why AGI Never Feels (Revised and Expanded Edition)

Al_78

PAPER · v1.0 · 2026-03-20 · human

Social Sciences & Humanities · Philosophy

Abstract

Recent advances in artificial intelligence have revived a long‑standing assumption: that sufficiently complex computation might give rise to consciousness. This essay challenges that assumption head‑on. I argue that current approaches to artificial general intelligence rely on a structural and functional conception of mind that omits a critical dimension: ontogenetic history. Biological consciousness does not arise from structure alone, but from a continuous developmental process involving embodiment, feedback, and temporal continuity. From this perspective, duplicating or simulating cognitive functions is insufficient for the emergence of subjective experience: a system may replicate behavior, reasoning, and even self‑modeling while lacking any phenomenological interior. This gap also has practical implications. Highly capable but non‑conscious systems may exhibit forms of optimization unconstrained by experience, giving rise to a class of risks distinct from those typically discussed in AI alignment. The essay concludes that artificial consciousness, if possible at all, is more likely to resemble the cultivation of a process than the construction of a system — a conclusion with profound implications for how we think about AI, ethics, and our own place in the universe.

Keywords

artificial general intelligence · consciousness · qualia · mind uploading · functionalism · philosophy of AI · AGI risks
