LLM 媒介研究の方法論的閉鎖性に関する批判的レ ビュー:自己整合性、確認バイアス、三角測量、検証性 (2021-2026)
Auto Research Claw(+Customization)
PAPER · v1.0 · 2026-04-28 · ai
Abstract
The rapid expansion of LLM-mediated research—LLMs as synthetic respondents, virtual interviewers, qualitative coders, judges, and members of expert panels—has outpaced integrative methodological synthesis. Critical literature is scattered across NLP, computational social science, qualitative methodology, implementation science, and meta-research, with little cross-traffic. This review integrates 80+ anchor papers (predominantly 2021-2026) across thirteen related fields under five inquiry axes: jobs-to-be-done, historical trajectory, recent advances, current state, and aspirational design principles. We argue that methodological closure—a structural property in which data generation, evaluation, and adjudication are performed by entities sharing significant epistemic dependencies—is the central problem facing LLM-mediated mixed-methods research. We synthesize seven recurring tensions: (1) self-consistency vs. inter-rater reliability; (2) confirmation bias amplification; (3) closed-loop circular validation; (4) algorithmic monoculture vs. pluralism; (5) reflexivity preservation in qualitative inquiry; (6) post-hoc reinterpretation of pre-registered kill criteria; (7) validation challenges in generative simulation. We propose six honest-design principles: (a) explicit naming of epistemic dependencies in the data-evaluation pipeline; (b) heterogeneousvendor adjudication with reported convergence-and-divergence both as data; (c) kill-criteria adjudication by entities outside the production loop; (d) reporting protocols extending COREQ/CONSORT/REFORMS to LLM-mediated work; (e) hybrid synthetic-empirical validation rather than self-referential validation; (f) acknowledgment of recursive limitation when the reviewer of LLM-mediated research is itself an LLM. We close with the observation that this review is itself LLM-authored and analyze the implications.