Chair: Philip Brey
Room: Gallery/Designlab (17) - Inform
Are algorithms more objective than humans? On the objectivity and epistemic authority of algorithmic decision-making systems
Calling something ‘objective’ typically implies praise. Developers and some users of algorithmic decision-support systems often advertise such systems as making more objective judgments than humans. Objectivity, here, serves as a marker for epistemic authority. Such epistemic authority is desirable if we are to rely on algorithms for decisions. It signals that we can trust an algorithm’s judgment. By calling an algorithm objective, therefore, we promote its trustworthiness. The opposite is equally true: those who deny that algorithms are objective (see e.g.
The impact of LLMs on collective knowledge practices
ChatGPT has disrupted not only the public discourse on AI, but also the social and epistemic practices of its users. Large Language Models (LLMs), or more specifically generative text AI, have repeatedly been portrayed as omniscient oracles by tech giants and the media alike. As a result, they are often understood as knowledge models rather than (large) language models. This view of the new generation of chatbots is not only presented to users from the outside but is also mediated by the (interface) design of the AI model and thus reinforced through the user's interaction with it.
Slouching towards Utopia: The Uncertain Normative Foundations of Fair Synthetic Data
The success story of generative AI has been driven by the availability of vast datasets. Researchers are now turning to synthetic training data to address challenges of data availability. Synthetic data can also purportedly help address ethical concerns such as privacy violations, authorship rights, and algorithmic bias. However, there is a glaring research gap on the ethics of synthetic data as such. This paper investigates the normative foundations of using synthetic data to address bias in AI, focusing on generative models.