The social disruption of what?

Hopster lists among the potential targets of technologically induced social disruption ‘social relations, institutions, epistemic paradigms, foundational concepts, values, and the very nature of human cognition and experience’ (2021, 1). This is quite a heterogeneous list; it is not immediately obvious what unifies these objects as potential targets of disruption, if anything.

Presenters
Benedict Lane

The Ethics of Progress - will continue in parallel session V track 8

The idea of ‘progress’ raises a number of philosophical and moral puzzles. The idea of progress pervades modern life, spelling out a direction in which we should be heading. But what this direction is, or can be, remains unarticulated. We are moving ‘forward’ without knowing where moving forward will take us. One reason for this lack of clarity seems to be the belief that progress ensues from science and then spills over into other societal domains, such as technology, the economy, and politics.

This panel will address the following research question: 

Organizers
Udo Pesch

Two readings of moving towards bio-centered AI: rethinking the computational logic of capture

In recent years, ‘human-centered artificial intelligence’ (HCAI) has emerged as a dominant framing device in contemporary AI discourse and policy. However, alongside its widespread acceptance, the phrase has received criticism. One type of critique levied against HCAI attacks its tendency towards anthropocentrism, claiming that by taking human well-being as the focus of moral considerations, HCAI is ill-equipped for addressing the harms that AI technologies pose to nonhuman animals and other elements of the natural world.

Presenters
Luuk Stellinga

The Right to Our Own Reasons: Autonomy and Online Behavioural Influence

In recent years, it has become clear that social media platforms routinely employ self-learning algorithms to select the content and advertisements shown to users. These algorithms are trained to maximise users’ time spent on the platform in question and/or their likelihood of clicking on advertisements, often on the basis of personal data extracted from users’ online behaviour. There has been growing concern that such ‘timeline curation’ algorithms may influence users’ behaviour in morally problematic ways, but the grounds for this moral concern (if any) are not always clear.

Presenters
Joris Graff

Do artefacts have promises? Do promises have artefacts? On why AI ethics should pay attention to the question of the performative

Adapting ethical frameworks such as value sensitive design (VSD) and ethics by design (EbD) to the specificity of AI systems (Umbrello and van de Poel, 2021; Brey and Dainow, 2023) can be seen as a recent attempt to respond systematically to the more general ideas of AI for Social Good (Floridi et al., 2020) or AI alignment (Dung, 2023). Despite the differences among these frameworks, their motivation stems from the same challenge: to ensure that AI systems promote, bring about, or perform desirable ethical values through their own design.

Presenters
Víctor Betriu Yáñez

Uncovering the gap: challenging the agential nature of AI responsibility problems

In this presentation, I will argue that the responsibility gap arising from new AI systems is reducible to the problem of many hands and collective agency. A systematic analysis of the agential dimension of AI will lead me to outline a disjunction between the two problems: either we reduce individual responsibility gaps to the problem of many hands, or we abandon the individual dimension and accept the possibility of responsible collective agencies.

Presenters
Joan Llorca Albareda

Explainability in AI through the lens of feminist standpoint epistemology: the valuation of experiential knowledges to understand and repurpose AI’s social implications

This paper presentation seeks to understand how feminist standpoint theory can complement the concept of explainability in AI, and how AI’s use could be repurposed for inclusive political aims. Explainability refers to the capacity of AI systems to provide clear, understandable explanations for their actions and decisions, in order to counter black-box effects and to reduce the risk of creating negative biases towards minorities.

Presenters
Marilou Niedda

Trust and Transparency in AI

In this paper, I consider an important question in the philosophy of AI. Does the fact that we cannot know how an AI reaches its conclusions entail that we cannot reasonably trust it? I argue that it does not.

The relationship between trust and transparency, whether in AI or elsewhere, appears quite puzzling. It seems unreasonable to trust that which is opaque, but too much transparency renders trust superfluous, for trust requires some degree of uncertainty and vulnerability. 

Presenters
Thomas Mitchell

Algorithmic manipulation of weak preferences as a threat to moral consciousness

The paper analyses the impact of persuasive technologies (PTs) on moral agency. PTs, being based on profiling and targeting techniques, direct users’ choices towards predetermined sets of options. One widely shared view is that, by reducing the diversity of available information, PTs manipulate individuals in ways that jeopardize their moral agency, specifically by constraining their epistemic and moral autonomy. This paper instead argues that PTs are not morally problematic per se, but only when they threaten consciousness, one of the necessary conditions of our moral agency (Himma 2009).

Presenters
Ermanno Petrocchi

Understanding AI in relation to the social

The phrase "the impact of AI on society" has almost become a platitude in debates on strategies for channeling this perceived impact, implicitly presenting us with a simplistic model of a single cause and dramatic effects. The phrase is taken as an unquestioned premise for public, academic, and governmental discussions that work towards practicable solutions, such as ethical guidelines and regulations. But do solutions not hinge on an explicit and precise understanding of problems? How exactly does the impact of AI on society come about?

Presenters
Juliet van Rosendaal

Mind the Gap: an adjusted approach for addressing conceptual gaps and overlaps with conceptual engineering

In recent years, there has been an increasing interest in Socially Disruptive Technologies (Hopster 2021) and the phenomenon of conceptual disruption in the fields of philosophy and ethics of technology (Löhr 2023). Additionally, topics regarding conceptual engineering have risen to prominence. This presentation contributes to these areas of research by examining the categorization system of technology-induced conceptual disruptions, which could potentially limit conceptual engineering. This presentation has three objectives.

Presenters
Robin Hillenbrink

Personal Autonomy in Digital Spaces

Navigating the digital world is highly mediated by AI-powered systems that select information and arrange options, purportedly to support our decision-making and to improve our choices. But these systems can also be used for manipulative purposes. Besides straightforwardly deceptive means such as so-called ‘dark patterns’, AI-powered systems can also employ subtler means to influence people’s behaviour.

Presenters
Marius Bartmann

Nothing Comes Without Its World: A Situated Perspective on the Limitations and Opportunities of AI Value Alignment with RLHF

Work on value alignment focuses on ensuring that human values are respected by AI systems. However, existing approaches tend to rely on universal framings of human values that obscure the question of what values the systems should elicit and align with, given a variety of operational contexts. Many ethical guidelines exist, yet translating these into actionable steps for developers’ decision-making processes has proven difficult across the vast array of scientific, technical, and economic contexts, leading to confusion and negligible impact.

Presenters
Anne Arzberger

The Humane Measure: The Virtue Between the Universal and the Particular in AI Ethics

A major concern in AI ethics is that Machine Learning systems impose determinability on human lives that are fundamentally indeterminable (Birhane, 2021). The introduction of AI and algorithmic decision-making brings with it the risk that rigid, machine-like decision-making will make it impossible to grant the exceptions to its outputs that will inevitably be required, and that categories of people who are often overlooked or omitted will not be taken into consideration (Star & Bowker, 2007).

Presenters
Bauke Wielinga

Responsible Computing for Human Vulnerability: Three Perspectives

In this presentation, the Human Condition Line presents the outcome of a collaborative research project on vulnerability. 

Digital technologies are becoming intimately interwoven with our society and our individual daily lives. As these technologies are transforming and reconfiguring people's cognitive, affective, bodily and conative capacities, there is a growing interest in the field of responsible computing in how this entanglement of human life with digital technology requires analyses that foreground human vulnerability.

Presenters
Naomi Jacobs
Janna van Grunsven

The fear of AI is a Trojan Horse — A Rationale

Mainstream media report that governments and business leaders see AI as an "extinction-level threat" to humans [1,2,3]. In post-humanism we find, besides conceptual tools for thinking about technology in an indeterministic way [4,5,6], arguments for singularity [7], accelerationism [8], and appeasement [9] towards *strong AI*.

Presenters
Dominik Schlienger

Digital Agroecology and the Inhuman: Paradigm Crossroads

Agriculture is undergoing a great transformation, often pronounced the fourth agricultural revolution, driven by technologies such as robotics, variable rate chemical applicators, the Internet of Things, big data, drones and automation (Balafoutis et al. 2020). This transformation is marked by the double pressure of a burgeoning world population, on the one hand, and ever more strained life-support systems, on the other (Blok 2017, 133). Life-support systems include both wild ecosystems and human food production systems. Protecting wild ecosystems is a demanding imperative.

Presenters
Georgios Tsagdis

The instrumental role of explanations for trustworthy AI

Do we need explanations of AI outputs in order to obtain trustworthy AI? In recent years there has been an active philosophical debate on this question, with a range of authors arguing that in fact explanations are not needed to justify belief in AI outputs (Durán & Jongsma, 2021; London, 2019) or even for the ethical use of AI more broadly (Kawamleh, 2022; Krishnan, 2020).

Presenters
Stefan Buijsman

Did We Forget Something? Performing Technology Ethics

This paper argues that the concept of performativity can help us gain a better understanding of the relation between how we conceptualize ‘technology’ and our ways of doing technology ethics. In other words, we claim that technology ethics is performed by how ‘technology’ is defined in the first place. To substantiate this claim, we zoom in on classical philosophy of technology and the empirical turn, arguing that conceptualizations of technology in each respective cluster performed a different way of (not) doing technology ethics.

Presenters
Donovan van der Haak

Techno-Moral Progress: Exploring the technological mediation of better morality

Moral progress and technological progress do not necessarily go hand in hand. The twentieth century is a prime example in this regard. According to various commentators (Mitcham 1994; Ihde 1990; Verbeek 2011), the atrocious two world wars and the growing environmental impact of technological societies spread among many postwar philosophers a critical view of modern technology. Certainly, material progress (mainly produced thanks to economic and scientific-technological advancement) does not equate to progress towards a more humane world.

Presenters
Jon Rueda

Nudges, norms, and moral progress

Nudges, tweaks in choice environments that predictably steer behavior without restricting options, can be either self-regarding (benefiting the nudgee) or other-regarding (serving other aims, such as organ donation, charity, or tax compliance). Other-regarding nudges, on which we focus here, have been claimed to preserve moral worth and to contribute to cultivating moral virtues.

Presenters
Viktor Ivanković
Karolina Kudlek

Vindication and the Value of ‘Choice’

Philosophers have been interested in how technological change can drive changes in values and many have also proposed that particular causal histories can vindicate or debunk our confidence in certain values. For either inquiry we need robust evidence of technologically induced value change and of the causal mechanisms behind it. In my paper I offer such evidence of technology-driven value change and propose a vindicating argument for this value.

Presenters
Charlie Blunden

Moral progress through conceptual disruption and deep disagreement

“Technosocial disruption” affects “deeply held beliefs, values, social norms, and basic human capacities”, “basic human practices, fundamental concepts, [and] ontological distinctions” (Hopster 2021: 6). For this reason, it is also referred to as “deep disruption” (ibid.). It brings about different kinds of uncertainty, including “conceptual ambiguity and contestation, moral confusion, and moral disagreement” (ibid.: 7). Among such deep disruptions are disruptions of fundamental concepts.

Presenters
Julia Hermann

The Deliberative Model of Progress

Modern life is characterised by a shared belief that we are moving forward, that ‘we’ – that is, humanity – are progressing to a better life. Even those who take to the streets to point at the serious global problems we are currently facing – and there are still many of these, such as climate change, war, pandemics, racism, and social injustice – appear to entertain the belief that we can avert potential catastrophes if we are willing to act.

Presenters
Udo Pesch

Naturalistic epistemology and moral regress through technology

Naturalistic moral epistemologists have recently argued that there are distinct social factors and forces under which moral progress – or regress – are likely to occur. According to Smyth (in prep.) current technological trends in many societies are conducive to moral regress: whereas once technology freed humans and encouraged the formation of new ends and experiences, much of it now forces humans down conditioning pathways where we end up pursuing remarkably simple and uniform goals. In this presentation I criticize Smyth’s assessment on three philosophical grounds.

Presenters
Jeroen Hopster

The Ethics of Progress - continued from parallel session IV track 8 part 1

The idea of ‘progress’ raises a number of philosophical and moral puzzles. The idea of progress pervades modern life, spelling out a direction in which we should be heading. But what this direction is, or can be, remains unarticulated. We are moving ‘forward’ without knowing where moving forward will take us. One reason for this lack of clarity seems to be the belief that progress ensues from science and then spills over into other societal domains, such as technology, the economy, and politics.

This panel will address the following research question: 

Organizers
Udo Pesch