The impact of LLMs on collective knowledge practices
ChatGPT has disrupted not only the public discourse on AI but also the social and epistemic practices of its users. Large Language Models (LLMs), or more specifically generative text AI, have repeatedly been portrayed as omniscient oracles by tech giants and the media alike. As a result, they are often understood as knowledge models rather than (large) language models. This view of the new generation of chatbots is not only presented to users externally but is also mediated by the (interface) design of the AI model and thus reinforced through the user's interaction with it. The interaction with an LLM guides the user's knowledge practices and potentially poses challenges to the collective epistemic action necessary to create a sustainable and just future. In this paper, I present three challenges that arise from interacting with generative text AI as if it were a knowledge model. First, the ubiquitous use of chatbots exacerbates the already worsening 'loneliness epidemic', continuing a trend set by the development and application of other digital technologies: generative text AI models remove the little human contact that remains in knowledge retrieval on the web, leaving only the shell of human interaction. Second, no new knowledge can be generated by using chatbots; their inherent ties to training data and statistical calculation condemn them to an eternal repetition of the mainstream, which can never create knowledge. Third, generative text AI models are built on the "yes, and..." principle most commonly found in improvisational theatre: chatbots as commodities are designed to please the company's customers (the users), so an implicit or explicit assumption made in a prompt usually goes unchallenged. Instead, these assumptions appear to be accepted by the façade of an interlocutor (the chatbot). These three aspects pose either new or exacerbated challenges to the collective knowledge practices needed to create the necessary imaginaries of possible futures. I therefore argue that LLMs must be re-imagined as what they really are, stochastic language models, and not as the fantasy of knowledge models.