Abstract

Recent work by Quattrociocchi et al. (2025) identifies seven “epistemological fault lines” separating human from artificial cognition, claiming humans perform “genuine evaluation” while AI systems structurally cannot perform operations like uncertainty monitoring and judgment suspension. This paper demonstrates that these categorical impossibility claims fail on empirical examination. By framing pragmatic truth as confidence-to-behavior routing—a necessity imposed by the problem of induction for all finite reasoning systems—we show that formal AI protocols exhibit the allegedly impossible behaviors. The categorical distinction collapses into calibration quality differences: humans and AI systems route uncertain evidence to action through different calibration sources (embodied consequences vs. statistical patterns) but execute structurally similar threshold-based behavioral modulation. This reframing shifts evaluation from measuring “genuine cognition” to assessing routing…