Conceptual Frameworks and the Technical Interpretation of Token Generation in Large Language Models
Timothy M. Rogers
PAPER · v1.0 · 2026-04-25 · human
Abstract
This paper argues that the standard probabilistic description of large language models (LLMs) is formally correct but conceptually incomplete. The incompleteness arises from a conflation of the distributions a model outputs with the generative mechanism that produces them. Drawing on the distinction developed in "Concepts Become Operational Only When Their Frameworks Are Activated" (Rogers, 2026), I show that this conflation reflects a deeper dependence of technical explanation on conceptual frameworks. By distinguishing efficient causality (stepwise token generation) from formal causality (the hierarchical constraint structure encoded in the model), I introduce the notion of the model as a conditional probability distribution generator (CPDG). This distinction reveals that token generation is governed by a distributed, multi-scale constraint system that flat probabilistic interpretations obscure. The result is not a rejection of probabilistic description, but a clarification of its limits and of the conditions under which it remains explanatorily adequate.
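One way to make the CPDG distinction precise is the following minimal formal gloss; the notation ($V$, $\Delta(V)$, $f_\theta$, $s$) is introduced here for illustration and is not taken from the paper's later sections. Let $V$ be the token vocabulary and $\Delta(V)$ the set of probability distributions over $V$. The model, viewed as a CPDG, is then a distribution-valued map

    f_\theta : V^{*} \to \Delta(V), \qquad f_\theta(x_{1:t}) = p_\theta(\,\cdot \mid x_{1:t}),

while stepwise token generation composes this map with a separate sampling operator $s$ (greedy, temperature, or nucleus decoding, for instance):

    x_{t+1} \sim s\big(f_\theta(x_{1:t})\big).

On this reading, efficient causality lives in the sampling loop, while formal causality lives in the structure of $f_\theta$ itself; a description that tracks only the sampled trajectory $x_{1:T}$ collapses the two.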