52 Pages · Posted: 23 May 2025 · Last revised: 29 May 2025
Date Written: May 24, 2025
Abstract
With the rise of generative AI (GenAI), Large Language Models (LLMs) are increasingly employed for code generation, becoming active co-authors alongside human programmers. Focusing on this application domain, this paper articulates distinct "Architectures of Error" to ground an epistemic distinction between human and artificial code generation. When examined through their shared vulnerability to error, the two modes of generation reveal fundamentally different causal origins: human-cognitive versus artificial-stochastic. To develop this framework and substantiate the distinction, the analysis draws critically on Dennett's mechanistic functionalism and Rescher's methodological pragmatism. I argue that systematically differentiating these error profiles raises critical philosophical questions concerning semantic coherence, security robustness, epistemic limits, and control mechanisms in human-AI collaborative software development. The paper also employs Floridi's Levels of Abstraction to provide a nuanced understanding of how these error dimensions interact and may evolve with technological advances. The analysis aims to offer philosophers a structured framework for understanding code generation amid GenAI's epistemological challenges, shaped by its architectural foundations, while providing software engineers with a basis for more critically informed engagement.
Keywords: Code Generation, Epistemology, Error, Large Language Models, Software Engineering