Technosolutionism and the empathetic medical chatbot

Recently, a number of studies have shown that chatbots outperform healthcare professionals when it comes to empathy (Ayers et al., 2023; Lenharo, 2024). This is remarkable for at least two reasons. First, because empathy is broadly recognized as a core value of good healthcare (Kim et al., 2004). Second, because empathy has typically been considered an essentially human quality, such that the promise of technology in healthcare, including most recently AI, has been to free up healthcare professionals to do what they are good at: providing empathetic care (Topol, 2019).

Presenters
Tamar Sharon

AI for psychiatry: close encounters of the algorithmic kind

Psychiatry includes the assessment and diagnosis of illness and disorder within a largely interpersonal communicative structure involving physicians and patients. In such contexts, AI can help to spot patterns and generate predictions, e.g. using ‘big data’ analysis via statistical learning-based models. In these ways, AI can help to automate more routine steps, improve efficiency, mitigate clinician bias, and offer predictive potential, including through the analysis of neuroscientific data.

Presenters
Y. J. Erden

Exploring Commons, Inequality, and Progress in the Digital Age

“The first man who, having enclosed a piece of ground, thought of saying this is mine, and found people simple enough to believe him, was the true founder of civil society.” This iconic saying of Jean-Jacques Rousseau does not address the foundation of civil society as a celebrated event in the speculative history of humankind. For Rousseau, it is rather a catastrophic turn, in that it has sown the seeds of inequality among human beings. The question of whether the commons that came to be partitioned among humans led to a better societal life remains unresolved.

Presenters
Halil Turan
Sinan Senel

The Role of Normative Functions in Artifact Design and Use

This paper introduces a novel theoretical approach to understanding artefact functions by advocating for a "normative functions" account, inspired by the literature on conceptual functions in philosophy. Normative functions of concepts are, roughly, things that they allow us to do that matter normatively (for example, things in virtue of which we have normative reasons to have these concepts).

Presenters
Herman Veluwenkamp

AI and the Burdens of Care in Education: A Call for Distribution

If the challenges AI introduces to the classroom are to be addressed adequately, and educational care deployed effectively in the process, a considerable burden of responsibility and additional work is likely to be placed upon teachers. However, teachers are already overburdened in their professional capacities (Stacey et al. 2023); adding to their workload could have a negative effect not only on their performance but also on the performance of the whole educational system (Creagh et al. 2023). What is more, AI might come to disrupt education not only in terms of workplace efficiency.

Presenters
Gavrilo Marčetić

Eco-Anxiety and Ecological Citizenship

Anxiety has become a defining feature of our time. Rapid technological change, growing migration flows, war, and pandemics are all feeding into a growing feeling of uncertainty, insecurity, and powerlessness. This paper investigates a new but rapidly spreading form of anxiety: ecological anxiety. 

Presenters
Michel Bourban

How do digitalization and AI disrupt moral concepts?

In the field of digital ethics, the question has regularly been asked whether digital ethics is unique in the kinds of ethical issues it raises, the moral principles it requires, or the methods and approaches it needs. A debate on this issue took place between 1985 and 2002 and has come to be known as the uniqueness debate within digital ethics (or computer ethics, as it was called at the time). Various authors, such as Deborah Johnson, Walter Maner, Krystyna Gorniak-Kocikowska and Luciano Floridi, made arguments in favor of uniqueness.

Presenters
Philip Brey

Patient Perspectives on Digital Twins for Self-monitoring for Cardiovascular Disease

This presentation is situated within the MyDigiTwin (MDT) consortium, a research project aiming to create a Digital Twin (DT) platform where Dutch citizens, including patients, can compare their health data (e.g., heart rate, weight, exercise) to existing big datasets. The platform will implement Artificial Intelligence (AI) models to predict a person’s risk of cardiovascular disease (CVD).

Presenters
Mignon Hagemeijer

Therapy Bots and Emotional Complexity: Do Therapy Bots Really Empathise?

“Youper: an empathetic, safe, and clinically validated chatbot for mental healthcare.” (Youper, n.d.) This slogan is used in the marketing campaign for the therapy bot Youper, a chatbot that mimics psychotherapy, or at least uses methods from therapeutic practice to improve users’ mental health (Fulmer et al., 2018). Other examples are Woebot (Woebot, n.d.) and Wysa (Wysa, n.d.). Most therapy bots are based on the theory and practice of Cognitive Behavioural Therapy (CBT). The marketing campaigns of these therapy bots claim that they have “empathy”.

Presenters
Kris Goffin
Katleen Gabriels

Convergence Ethics

Bioethics traditionally focuses on normative questions related to medical practice. The inquiry involves the moral permissibility of using new technologies for medical purposes, as well as existing medical technologies for non-medical purposes. While these inquiries probe into the ethical issues raised by medical technologies, they take a rather reactive attitude towards the application of technologies.

Presenters
Pei-hua Huang
Samantha Copeland

Measuring, Defining, and Reframing Uncertainty in AI for Clinical Medicine

Recent advancements in artificial intelligence (AI) have demonstrated significant promise in the field of medicine. From disease diagnosis to personalized treatment plans, AI has the potential to revolutionize the healthcare industry. However, as with any emerging technology, there are questions about how to quantify the benefits and trade-offs of AI in medicine. One of the biggest challenges in assessing the benefits of AI in medicine is determining how to measure “uncertainty”. Biomedical and computer engineering define and measure uncertainty differently.

Presenters
Anna van Oosterzee
Anya Plutynski
Abhinav Kumar

Redefining the Corporate Purpose of Social Media Companies: A Democratic Approach

This paper proposes a new normative framework for thinking about Big Tech reform. Focusing on the case of digital communication, I argue that rethinking the corporate purpose of social media companies is a distinctive entry point to the debate on how to render the powers of tech corporations democratically legitimate. I contend that we need to strive for a reform that redefines the corporate purpose of social media companies. In this view, their purpose should be to create and maintain a free, egalitarian, and democratic public sphere rather than to seek profit.

Presenters
Ugur Aytac

Being Blinded by the Concrete – On the Extractivist Blindspot of the Philosophy of Technology

Technological artefacts have become “world objects” – they affect the world as a whole. This becomes increasingly evident considering the imprint of their development and use on our global natural environment. The growing awareness of the entanglement of humans, their practices, and their technological artefacts with their natural environments goes along with growing uncertainty. The rising number of experiences of the catastrophic consequences of climate change and environmental crises is increasingly shaking many people’s belief in a stable course of life and of the future.

Presenters
Tijs Vandemeulebroucke
Julia Pelger
Larissa Bolte

How to Imagine Educational AI: Filling of a Pail or Lighting a Fire?

Recent advances in artificial intelligence (e.g., machine learning, generative AI) have led to an increased interest in its application in educational settings. AI companies hope to revolutionise teaching and learning by tailoring material to the individual needs of students, automating parts of teachers’ jobs, or analysing educational data to optimise the delivery of content. The main goal of this presentation is to consider the role of imaginaries in shaping concrete practices and understandings of educational AI.

Presenters
Alberto Romele
Michał Wieczorek

Exemplary situations of technological fallibility in the philosophy of technology: from breakdown as epistemology to failure as politics

The theme of technological breakdown, error, failure, or malfunction has, in a certain way, always been present in the history of the philosophy of technology. From Heidegger’s hammer to contemporary discussion of algorithmic bias, technological failure has been seen as revealing of something, as a means to obtain knowledge about technology or the world. For example, Verbeek (2004, p.

Presenters
Dmitry Muravyov

Preserving Autonomy: The “Dos” and “Don’ts” of Mental Health Chatbot Personalization

Large language models utilized for basic talk therapy, often referred to as mental health chatbots, are frequently personalized based on user interactions or other input. While personalization could improve the patient’s experience, it could also pose a risk to their autonomy through, for example, the inappropriate use of personalized nudges.

Presenters
Sarah Carter

AI and Democratic Education: A Critical Pragmatist Assessment

In this paper, I draw on pragmatist philosophy to assess the impact of educational AI (AIED) on the democratic dimension of education. AIED is expected to facilitate teaching and learning by personalizing content to the needs of students, automating parts of teachers’ jobs, and monitoring students’ performance and behavior, among other things. However, I argue that we should pay close attention to AIED’s impact on the social development of students and the civic values and attitudes it is going to promote.

Presenters
Michał Wieczorek

Ethical virtues for deep uncertainty

A high-level virtue ethics approach to situations of deep uncertainty would complement and/or contrast with consequentialist and deontological approaches to uncertainty. Such an account would satisfy the following criteria: (1) it provides normative guidance that allows individuals and societies to cope with deep uncertainty ethically and sustainably in the presence of strong emotions of fear, apprehension, and anxiety; (2) it allows for responsiveness to unexpected situations (“black swans” (Taleb 2007)); and (3) it is realistically accessible to ordinary people. 

Presenters
Philip James Nickel

The Politics of Platform Technologies: A Critical Conceptualization of the Platform and Sharing Economy’s Politics

Digital platforms increasingly mediate social, economic, and other forms of human interaction, which puts them in a position to influence the power dynamics and moral values that shape these interactions. This paper focuses on the platform and sharing economy – an economic model in which digital platforms facilitate social and economic interactions. Its two central models, mainstream and cooperative platforms, offer similar applications and services. However, they fundamentally differ in aspects such as ownership and governance structures, economic models, and technical designs.

Presenters
Shaked Spier

Value Experiences and Techno-Environmental Dilemmas

This contribution will explore the methodological significance of value experiences for the ethics of human interactions with nature. I begin by detailing how environmentally disruptive technologies often pose “techno-environmental dilemmas.” For example, offshore windfarms enable us to mitigate global environmental harm. Simultaneously, they disrupt the environments in which they are built, negatively impacting human and nonhuman lives. How should we decide what to do in the face of these environmental dilemmas?

Presenters
James Hutton

Art and Emotions as Methods for Value Experience and Deliberation on Socially Disruptive Technologies

This contribution will provide a novel method for value deliberation on technologies, grounded in art and emotions. Philosophy tends to see itself as a rational discipline, emphasizing logical argumentation and seeing emotions as belonging to the realm of irrationality and subjectivity. This view of emotions has been challenged by philosophers and psychologists who emphasize the cognitive dimension of emotions. Emotions can then play an important epistemological role, providing us with insights into the evaluative dimension of our lived experience.

Presenters
Sabine Roeser

Value Experiences and Design for Value

In this contribution, I explore why and how value experiences are relevant to Design for Values. In a value experience, something seems to the experiencer to be valuable (or disvaluable). Design for Values is a design approach that aims at systematically integrating values of moral importance into (technological) design.

Presenters
Ibo van de Poel

Value Experiences & Technomoral Deliberation

Over the past year, a major topic of research among ESDiT members has been the role of “value experiences” in ethical deliberation about disruptive technologies. Ibo van de Poel defines value experiences as “experiences in which something seems valuable or disvaluable to the experiencer.” Examples of value experiences include emotions such as anger, in which something seems wrong or unjust to the experiencer, and—more speculatively—forms of perceptual experience that have evaluative content, akin to the perception of affordances.

Organizers
James Hutton