How AI-Adjudication might Erode Law as a Site of Moral Perceptual Progress

AI and other forms of digital technology are profoundly changing legal contexts. These changes come in various forms: AI can support and replace judicial processes, but it can also profoundly disrupt existing judicial practices and values (Sourdin 2015). In my talk, I am interested in AI’s disruptive effects. Specifically, I will argue that AI used for adjudication purposes has the potential to disrupt the law as a site of moral perceptual progress.

Presenters
Janna van Grunsven

Rethinking AI Ethics in, for and from sub-Saharan Africa - continued from parallel VII track 1 part 1

In the discourse surrounding AI, Africa's role and unique perspective remain conspicuously marginalised. This panel seeks to address this oversight by examining the current state of AI in Africa through three lenses: that of AI in, for and from sub-Saharan Africa. 

Organizers
Kristy Claassen

Can value alignment solve the control problem for AI?

Some worry that as AI systems become more capable and more integrated in society, we will lose control over those systems. This loss of control, they worry, may yield bad or even catastrophic outcomes. Some think that we can solve this problem without retaining control, simply by ensuring that the systems are value-aligned. Here it is argued that value alignment cannot solve the control problem unless value alignment itself includes a notion of control.

Presenters
Björn Lundgren

The Concept of Recognitional Justice: A Case for African Inclusivity in the AI Ethics

It is common knowledge that Western philosophy underpins artificial intelligence (AI) ethics studies. This is not surprising, since there are few known AI experts in Africa and, by extension, few African researchers contributing to AI ethics (Eke et al., 2023). That notwithstanding, the concept of recognitional justice suggests that the current notions of AI ethics, sustainable AI, and technology in general cannot satisfactorily resolve ethical and political issues if they do not include the local and contextual philosophies of the Global South.

Presenters
Jahaziel Kwabena Osei-Mensah

The Right to Our Own Reasons: Autonomy and Online Behavioural Influence

In recent years, it has become clear that social media platforms routinely employ self-learning algorithms to select the content and advertisements shown to users. These algorithms are trained to maximise users’ time spent on the platform in question and/or their likelihood of clicking on advertisements, often on the basis of personal data extracted from users’ online behaviour. There has been growing concern that such ‘timeline curation’ algorithms may influence users’ behaviour in morally problematic ways, but the grounds for this moral concern (if any) are not always clear.

Presenters
Joris Graff

Do artefacts have promises? Do promises have artefacts? On why AI ethics should pay attention to the question of the performative

Adapting ethical frameworks such as value sensitive design (VSD) and ethics by design (EbD) to the specificity of AI systems (Umbrello and van de Poel, 2021; Brey and Dainow, 2023) can be seen as a recent attempt to systematically respond to the more general ideas of AI for Social Good (Floridi et al., 2020) or AI alignment (Dung, 2023). Despite the differences among these frameworks, the motivation stems from the same challenge—to ensure that AI systems promote, bring about or perform desirable ethical values through their own design.

Presenters
Víctor Betriu Yáñez

Ethical oversight of algorithmic systems – contextualizing the call for increased governance in the Dutch public sector

The increasing integration of algorithms into various aspects of society has raised concerns about potential biases, ethical implications, and the lack of transparency governing decision-making processes, which in turn led to a call for increased oversight. This paper will contextualise the call for increased ethical governance and oversight of algorithmic systems used within the Dutch public sector, by focusing on the aim of the proposed governance mechanisms and instruments. 

Presenters
Tynke Schepers

The interplay between ethical and epistemic virtues in AI-driven science

When striving for the responsible use of AI, it is important that we analyze and develop ethical virtues alongside their epistemic counterparts. As Hagendorff (2022) noted, ethical virtues correspond to the four prominent principles guiding the responsible use of AI: the ethical virtues of justice, honesty, responsibility, and care correspond to the principles of fairness, transparency, accountability, and privacy, respectively.

Presenters
Vlasta Sikimic

Uncovering the gap: challenging the agential nature of AI responsibility problems

In this presentation, I will argue that the responsibility gap arising from new AI systems is reducible to the problems of many hands and collective agency. A systematic analysis of the agential dimension of AI will lead me to outline a disjunction between the two problems: either we reduce individual responsibility gaps to the problem of many hands, or we abandon the individual dimension and accept the possibility of responsible collective agencies.

Presenters
Joan Llorca Albareda

Are algorithms more objective than humans? On the objectivity and epistemic authority of algorithmic decision-making systems

Calling something ‘objective’ typically implies praise. Developers and some users of algorithmic decision-support systems often advertise such systems as making more objective judgments than humans. Objectivity, here, serves as a marker for epistemic authority. Such epistemic authority is desirable if we are to rely on algorithms for decisions. It signals that we can trust an algorithm’s judgment. By calling an algorithm objective, therefore, we promote its trustworthiness. The opposite is equally true: those who deny that algorithms are objective (see e.g.

Presenters
Carina Prunkl

Explainability in AI through the lens of feminist standpoint epistemology: the valuation of experiential knowledges to understand and repurpose AI’s social implications

This paper presentation seeks to understand how feminist standpoint theory can complement the concept of explainability in AI, and how AI’s use could be repurposed for inclusive political aims. Explainability refers to the capacity of AI systems to provide clear, understandable explanations for their actions and decisions, in order to counter black-box effects and decrease the risk of creating negative biases towards minorities.

Presenters
Marilou Niedda

Translation technology, conceptual disruption and the values of interlingual communication

The development of language technologies – e.g. machine translation and LLM-powered AI chatbots – is significantly impacting various forms of cognitive labour, as well as the institutions and business practices that support and exploit them. Arguably, language technologies are also conceptually disruptive in challenging how we characterise and understand these forms of cognitive labour, the capacities and expertise involved.

Presenters
Anna Pakes
Félix do Carmo

Explaining the behavior of LLMs, are interventions the way to go?

Explaining the behavior of LLMs, are interventions the way to go?

Given the impressive performance and widespread adoption of large language models (LLMs), there is a pressing need to explain how these systems work and what information they use for their predictions. This would not only allow us to better predict and control their behavior, thereby increasing their trustworthiness, but also help gain insight into the internal processes underlying linguistic behavior in LLMs. 

Presenters
Céline Budding

Trust and Transparency in AI

Trust and Transparency in AI

In this paper, I consider an important question in the philosophy of AI. Does the fact that we cannot know how an AI reaches its conclusions entail that we cannot reasonably trust it? I argue that it does not.

The relationship between trust and transparency, whether in AI or elsewhere, appears quite puzzling. It seems unreasonable to trust that which is opaque, but too much transparency renders trust superfluous, for trust requires some degree of uncertainty and vulnerability. 

Presenters
Thomas Mitchell

Responsible AI: Evolving Bodies of Practice

In recent years ‘Responsible AI’ (R-AI) has been applied to a number of contexts and research applications (Dignum, 2019; Zhu, 2019; De Laat, 2021). On the surface this seems a good thing, as of course we want the development, deployment, and use of AI-systems to be in line with certain normative principles, and it seems the ‘responsible’ frame can give us just that. R-AI can ensure that AI-systems respect human rights and are aligned with democratic values. However, just what exactly R-AI means is contested, and often undefined. 

Presenters
Fabio Tollon

Beyond rules and justice: A systematic literature review on the environmental impact of AI

AI is developing rapidly, as are concerns about the environmental impact of its training and deployment. Recent studies suggest that since 2019 data centers have produced more CO2 emissions than the aviation industry (Shift Project, 2019), and that they are extremely water-demanding (Li et al., 2023a). Given the urgency of achieving AI growth sustainably, the environmental impact of AI itself can no longer be overlooked. While studies about the environmental impact of AI have begun to emerge in the past few years, this emergent knowledge brings new ethical questions and dilemmas.

Presenters
Olya Kudina

What was understandable in symbolic AI? How the Philosophy and Ethics of Technology might benefit each other

The current call for explainable AI (XAI) is most often framed as an answer to the so-called black box problem of machine learning. Following this conceptualisation of the problem, the recent effectiveness of many machine learning (ML) based systems comes at the cost of intelligibility: the more accurately AI performs, the less understandable it becomes to humans. Accordingly, XAI is seen as the endeavour to whiten the black box, so that society can profit from the latest AI successes without the danger of being alienated.

Presenters
Suzana Alpsancar

The limits of AI friendship – how good can AI friends be?

In this paper, I examine the current scope of human-AI friendships, and the prospects for near-future development of more sophisticated AI friends. I argue that in some current and many possible future contexts, these friendships can be valuable – good for the humans who have them. But there are significant risks attached to the shifting of the concept of friendship, from being primarily a relationship between humans, to the kind of relationship that at least some humans have, not with other humans but with our technological creations.  

Presenters
Nick Munn

Algorithmic manipulation of weak preferences as a threat to moral consciousness

The paper analyses the impact of persuasive technologies (PTs) on moral agency. PTs, being based on profiling and targeting techniques, direct users’ choices towards predetermined sets of options. One widely shared view is that, by decreasing the diversity of available information, PTs manipulate individuals and jeopardize their moral agency, specifically by constraining their epistemic and moral autonomy. This paper instead argues that PTs are not morally problematic per se, but only when they threaten consciousness, one of the necessary conditions of our moral agency (Himma 2009).

Presenters
Ermanno Petrocchi

The impact of LLMs on collective knowledge practices

ChatGPT has disrupted not only the public discourse on AI, but also the social and epistemic practices of its users. Large Language Models (LLMs), or more specifically generative text AI, have been repeatedly portrayed as omniscient oracles by tech giants and the media alike. As a result, they are often understood as knowledge models rather than (large) language models. This specific view of the new generation of chatbots is not only presented externally to the user, but is also mediated by the (interface) design of the AI model and thus reinforced by the user's interaction with it.

Presenters
Marte Henningsen

AI is not alone: Meaningful Human Control in AI-based Sociotechnical Systems

When Artificial Intelligence (AI) systems are deployed, they are used in a particular setting, in which users, developers and other stakeholders interact with the system directly or indirectly. The setting we are concerned with here thus consists of technological artifacts, namely the AI system in question as well as any other relevant systems, and the humans interacting with them. I therefore define the AI system and the context it operates in as a sociotechnical system, centered around that specific AI system.

Presenters
Annet Onnes

Understanding AI in relation to the social

The phrase "the impact of AI on society" has almost become a platitude in debates on strategies for channeling this perceived impact, implicitly presenting us with a simplistic model of cause and dramatic effects. The phrase is taken as an unquestioned premise for public, academic and governmental discussions that work towards practicable solutions, such as ethical guidelines and regulations. But do solutions not hinge on an explicit and precise understanding of problems? How exactly does the impact of AI on society come about?

Presenters
Juliet van Rosendaal

Personal Autonomy in Digital Spaces

Navigating the digital world is highly mediated by AI-powered systems that select information and arrange options, purportedly to support our decision-making and improve our choices. But these systems can also be used for manipulative purposes. Besides straightforwardly deceptive means such as so-called ‘dark patterns’, AI-powered systems can also employ subtler means to influence people’s behaviour.

Presenters
Marius Bartmann

Embedding human morality "in" AI using the attention functions for human and artificial moral agents

Consciousness, emotion, and intention are central concepts in discussions of artificial moral agents (AMAs). We want to add to this debate and, for two reasons, argue for exploring another concept: attention.

Presenters
Gunter Bombaerts
Bram Delisse
Uzay Kaymak

Is AI a ‘defective concept’?

Philosophical literature on conceptual engineering has identified different kinds of ‘conceptual defects’. For instance, a defective concept may prevent the realization of moral and political values, or it may hinder the acquisition of knowledge and theoretical progress (Cappelen and Plunkett 2020). There are different ‘ameliorative strategies’ to respond to these defects, such as conceptual elimination, conceptual replacement, and conceptual modification.

Presenters
Jeroen Hopster

The fear of AI is a Trojan Horse — A Rationale

Mainstream media report that governments and business leaders see AI as an "extinction-level threat" to humans [1,2,3]. In post-humanism we find, besides conceptual tools for thinking technology in an indeterministic way [4,5,6], argumentations for singularity [7], accelerationism [8] and appeasement [9] towards *strong AI*.

Presenters
Dominik Schlienger

Internet Friends and Motivational Rootedness

The drawing view of friendship claims that friendship is a matter of two individuals being able to mutually understand each other, coming to know themselves better, and ultimately shaping their own identities in response to their friendship. There has been some discussion as to whether friendships developed over the internet, via social media, email, text messaging, or other forms of digital communication, can meet these conditions for friendship.

Presenters
Joseph Larse

AI, human-capacity habituation and the deskilling problem

AI tools replace or stand to replace human activity with non-human activity, via automated decision-making, recommender systems and content generation. The more AI replaces valuable human activity, the more it risks deskilling humans of their human capacities. While others have warned of moral deskilling caused by AI-warfare and social robotics, I argue that deskilling encompasses other valuable capacities such as the epistemic, social, creative, physical and the capacity to will.

Presenters
Avigail Ferdman

The instrumental role of explanations for trustworthy AI

Do we need explanations of AI outputs in order to obtain trustworthy AI? In recent years there has been an active philosophical debate on this question, with a range of authors arguing that in fact explanations are not needed for justified belief in AI outputs (Durán & Jongsma, 2021; London, 2019), or even for more broadly ethical use of AI (Kawamleh, 2022; Krishnan, 2020).

Presenters
Stefan Buijsman

Global Technology and Environmental Inequality: The Imperative Not to Create Morally Permissible Environmental Degradation

In Henry Shue’s influential 1999 article “Global Environment and International Inequality,” he argues not only that developed nations bear a disproportionately large burden of the costs involved in fixing the environmental problems caused by industrialization and globalization, but also that members of developing nations are morally permitted to cause environmental degradation insofar as they have been unfairly prevented from reaching the appropriate threshold of dignity and respect.

Presenters
Chelsea Haramia

Intercultural Conceptual Disruption

Recent debates in the philosophy of technology center on the notion that technology can disrupt concepts and values. Among these, Artificial Intelligence (AI) emerges as a prominent example, demonstrating its potential to disrupt fundamental notions such as personhood, agency, and responsibility. However, existing debates have thus far failed to adequately explore how such disruption manifests across diverse cultural and ethical frameworks.

Presenters
Kristy Claassen

On the moral status of humanoid robots: an African inspired approach

Some people relate to, and treat, humanoid robots as if they are human, although they know that they are not. Such reactions have sparked discussion about whether humanoid robots should be granted the same, or similar, moral status as human beings. A relational approach to robot moral status is unconcerned with whether the robot has the necessary properties for moral status, and argues that if we relate to the robot as if it is human, it should indeed have the same (or similar) moral status as human beings.

Presenters
Cindy Friedman

Challenges to Responsible AI in Africa: Using Matolino’s lenses on modernity and development

I use Bernard Matolino’s lenses on modernity and development to reflect on and discuss the challenges that Africa will face in adapting to Responsible AI. Matolino looks at the relationship between values and technological developments in an African context. He defines technological development as an ongoing human episode that signifies development and innovation. Matolino proposes two definitions of modernity. The first is that modernity is an actual transition that happens when people’s lives and systems shift from one mode to another.

Presenters
Eddie Liywalii

Technology Transfer in sub-Saharan Africa: A Form of Technological Disruption

How does technology transfer affect sub-Saharan Africa, especially her sociocultural and economic circumstances? I argue that technology transfer disrupts the cultural worldviews and socio-economic conditions of sub-Saharan Africa. It bears mentioning that, in our contemporary social milieu, digital technologies such as artificial intelligence (AI), machine learning (ML), and robots have become pervasive, reshaping our perceptions of the world as well as our societal norms and cultural values.

Presenters
Edmund Terem Ugar

The Ethical Tightrope: Chinese AI in Africa and the Shadow of Authoritarianism

I examine the growing presence of Chinese artificial intelligence (AI) in Africa through the lens of Michel Foucault's theory of knowledge and power. Foucault argued that knowledge is not objective, but rather can be used as a tool of domination by those in power. This paper explores how China's involvement in African AI development shapes knowledge production and governance on the continent. China has become one of the players in the African AI landscape, investing in AI technologies across various sectors, including healthcare, agriculture, education, and governance.

Presenters
Bridget Chipungu Chimbga

Rethinking AI Ethics in, for and from sub-Saharan Africa - will continue in parallel VIII, track 1 part 1

In the discourse surrounding AI, Africa's role and unique perspective remain conspicuously marginalised. This panel seeks to address this oversight by examining the current state of AI in Africa through three lenses: that of AI in, for and from sub-Saharan Africa. 

Organizers
Kristy Claassen

Robots and dignity from an Afro-communitarian perspective: an evaluation

One of the often-cited reasons against the use of technologies with artificial intelligence is that such use would undermine human dignity. The use of these robots, it is argued, undermines the dignity of the patients who use them because it deceives, manipulates, humiliates, invades privacy, infantilises and causes loss of human contact. Such actions disrespect their autonomy and treat them as mere means to an end, not as ends in themselves. Western conceptions of dignity, such as Kant’s and Nussbaum’s, are salient conceptions used to conduct such evaluations.

Presenters
Karabo Maiyane

AI Niche Disruptions and Human Flourishing

Scientific research in artificial intelligence has been immensely successful in recent years, ranging from the development of Large Language Models to the deeper integration of humanoid robots into everyday life. However, with the success of AI research come societal (Hopster, 2024) and conceptual disruptions (Löhr, 2023) of existing practices and norms that require adaptations at the level of larger social communities as well as the individuals embedded within them.

Organizers
Guido Löhr (HI)
Matthew Dennis (ESDiT)

Slouching towards Utopia: The Uncertain Normative Foundations of Fair Synthetic Data

The success story of generative AI has been driven by the availability of vast datasets. Now, researchers are exploring synthetic training data to address data availability challenges. Synthetic data can also purportedly help address ethical concerns such as privacy violations, authorship rights, and algorithmic bias. However, there is a glaring research gap on the ethics of synthetic data as such. This paper investigates the normative foundations of using synthetic data to address bias in AI, focusing on generative models.

Presenters
Mykhaylo Bogachov