Abstract:
Epistemology and technology have developed in synergy throughout history, a relationship that has culminated in large language models (LLMs). LLMs are rapidly becoming integral to daily life through smartphones and personal computers, and we are coming to take their functionality for granted. As LLMs become more entrenched in societal functioning, questions have begun to emerge: Are LLMs capable of real understanding? What is knowledge in LLMs? Can knowledge exist independently of a conscious observer? While these questions cannot be answered definitively, we can argue that modern LLMs are more than mere symbol manipulators and that LLMs built on deep neural networks should be considered capable of a form of knowledge, though one that may not qualify as justified true belief (JTB) under the traditional definition. This deep neural network design may have endowed LLMs with the capacity for internal representations, basic reasoning, and the performance of seemingly cognitive tasks, possible only through a compressive yet generative form of representation that is best termed knowledge. In addition, the non-symbolic nature of LLMs means that the criticism posed by Searle’s “Chinese room” argument does not apply to them. These insights encourage us to revisit fundamental questions of epistemology in the age of LLMs, which we believe can advance the field.
Keywords: epistemology; large language models (LLMs); knowledge; understanding; Chinese room
Reference:
Mugleston, J.†; Truong, V.H.†; Kuang, C.; Sibiya, L.; Myung, J.* (2025). Epistemology in the Age of Large Language Models. Knowledge, 5(1), 3. DOI: 10.3390/knowledge5010003