Chair: To be announced
Uncovering the gap: challenging the agential nature of AI responsibility problems
In this presentation, I will argue that the responsibility gap arising from new AI systems is reducible to the problems of many hands and collective agency. A systematic analysis of the agential dimension of AI will lead me to a disjunction between the two problems: either we reduce individual responsibility gaps to the problem of many hands, or we abandon the individual dimension and accept the possibility of responsible collective agents.
Are algorithms more objective than humans? On the objectivity and epistemic authority of algorithmic decision-making systems
Calling something ‘objective’ typically implies praise. Developers and some users of algorithmic decision-support systems often advertise such systems as making more objective judgments than humans. Objectivity here serves as a marker of epistemic authority, which is desirable if we are to rely on algorithms for decisions: it signals that we can trust an algorithm’s judgment. By calling an algorithm objective, therefore, we promote its trustworthiness. The opposite is equally true: those who deny that algorithms are objective (see e.g.
Slouching towards Utopia: The Uncertain Normative Foundations of Fair Synthetic Data
The success story of generative AI has been driven by the availability of vast datasets. Researchers are now turning to synthetic training data to address data-availability challenges. Synthetic data can also purportedly help address ethical concerns over privacy, authorship rights, and algorithmic bias. However, the ethics of synthetic data as such remains a glaring research gap. This paper investigates the normative foundations of using synthetic data to address bias in AI, focusing on generative models.