Technosolutionism and the empathetic medical chatbot

Recently, a number of studies have shown that chatbots outperform healthcare professionals when it comes to empathy (Ayers et al., 2023; Lenharo, 2024). This is remarkable for at least two reasons. First, because empathy is broadly recognized as a core value of good healthcare (Kim et al., 2004). Second, because empathy has typically been considered an essentially human quality, such that the promise of technology in healthcare, most recently AI, has been to free up healthcare professionals to do what they are good at: providing empathetic care (Topol, 2019).

Presenters
Tamar Sharon

Moving towards forward-looking responsibility with questions in human-machine decision-making

Clinical decision-making is increasingly supported by machine learning models. So-called decision-support systems (DSS) are intended to improve the decision-making of human operators (e.g., physicians), but they also introduce issues of epistemic uncertainty and over-reliance, and thereby open up responsibility gaps. To overcome these shortcomings, explainable AI attempts to provide insights into how the system reached a decision. Explanations are, however, provided post hoc and require contextual interpretation.

Presenters
Simon Fischer

AI for psychiatry: close encounters of the algorithmic kind

Psychiatry includes the assessment and diagnosis of illness and disorder within a largely interpersonal communicative structure involving physicians and patients. In such contexts, AI can help to spot patterns and generate predictions, e.g. using 'big data' analysis via statistical learning-based models. In these ways, AI can help to automate more routine steps, improve efficiency, mitigate clinician bias, and offer predictive potential, including through analysis of neuroscientific data.

Presenters
Y. J. Erden

Unveiling Epistemic Injustice: Overlooking Emotional Knowledge in AI-Driven Healthcare

Healthcare faces increasing challenges from aging populations, chronic illnesses, and emergent health crises. The integration of artificial intelligence (AI) into healthcare systems promises revolutionary changes across diagnosis, treatment planning, patient monitoring, and administrative tasks. However, amidst these technological advancements, the development and implementation of AI systems often overlook the critical role of emotional practices within the healthcare field.

Presenters
Eliana Bergamin

Cryonics and the story of a life: Closing the book on the frozen dead

In Europe and the United States, cryo-preservation of the dead is increasingly common. The aim of cryonics techniques is to preserve the body in the hope that it will one day be possible to repair the damage that led to death. If successful, cryo-preservation and similar biostasis technologies may challenge the conceptualization of death as something that is irreversible.

Presenters
Christopher Wareham

Language matters: deterministic and factual language in an increasingly probabilistic healthcare environment

One of the big shifts caused by so-called disruptive innovations in healthcare, powered by AI and big data, is a shift from diagnostic and curative healthcare to predictive and preventive healthcare. While preventive healthcare is almost exclusively cloaked in positive attributes, we need to maintain semantic clarity about what it can and cannot deliver, so that patients are not misled about its benefits and limitations and can make well-informed decisions regarding their healthcare.

Presenters
Heidi Mertes

Patient Perspectives on Digital Twins for Self-monitoring for Cardiovascular Disease

This presentation is situated within the MyDigiTwin (MDT) consortium, a research project aiming to create a Digital Twin (DT) through which Dutch citizens, including patients, can compare their health data (e.g., heart rate, weight, exercise) to existing big datasets. The platform will implement Artificial Intelligence (AI) models to predict a person's risk of cardiovascular disease (CVD).

Presenters
Mignon Hagemeijer

Ethical reflections on organizing the first human trial of artificial womb technologies

In 2017, Partridge et al. announced the first successful animal trial of an artificial placenta (AP), a technology meant to improve the survival and quality of life of preterm infants. The first in-human trial is expected in the next 2-5 years. This trial will pose notable challenges. For example, how do we predict the risks of a trial of an innovative and potentially disruptive technology, and how do we protect participants? Further, as transfer to an AP requires a C-section, the pregnant person is also a participant. How do we balance the interests of both participants?

Presenters
Alice Cavolo

Googlization of Health Research and Epistemic Trustworthiness

Data-intensive health research projects led or initiated by large tech companies, such as Alphabet and Palantir, are emblematic of a research model Sharon (2016) has termed the "Googlization of Health Research" (GHR). GHR, according to Sharon, is characterized by a promise to advance health research through the collection of a large variety of heterogeneous data, for instance via consumer-oriented tracking devices, as well as by offering technological capabilities to effectively manage and analyze this complex data.

Presenters
Chirag Arora

Therapy Bots and Emotional Complexity: Do Therapy Bots Really Empathise?

“Youper: an empathetic, safe, and clinically validated chatbot for mental healthcare.” (Youper, n.d.) This slogan is used in the marketing campaign of the therapy bot Youper, a chatbot that mimics psychotherapy, or at least uses methods of therapeutic practice to improve users' mental health (Fulmer et al., 2018). Other examples are Woebot (Woebot, n.d.) and Wysa (Wysa, n.d.). Most therapy bots are based on the theory and practice of Cognitive Behavioural Therapy (CBT). The marketing campaigns of these therapy bots claim that they have "empathy".

Presenters
Kris Goffin
Katleen Gabriels

Measuring, Defining, and Reframing Uncertainty in AI for Clinical Medicine

Recent advancements in artificial intelligence (AI) have demonstrated significant promise in the field of medicine. From disease diagnosis to personalized treatment plans, AI has the potential to revolutionize the healthcare industry. However, as with any emerging technology, there are questions about how to quantify the benefits and trade-offs of AI in medicine. One of the biggest challenges in assessing the benefits of AI in medicine is determining how to measure “uncertainty”. Biomedical and computer engineering define and measure uncertainty differently.

Presenters
Anna van Oosterzee
Anya Plutynski
Abhinav Kumar

Moral repair after disruption: rethinking sustainability and innovation in medical ethics

Innovations that have been regarded as disruptive in the medical realm, such as mHealth applications or machine learning, are perceived as part of a positive shift towards a more preventive, participatory and affordable healthcare model. More recently, several contributions have started exploring the ecological impacts of disruptive innovations in healthcare, and new principles have been developed concerning the sustainable development and use of technology in healthcare.

Presenters
Michiel De Proost

Preserving Autonomy: The “Dos” and “Don’ts” of Mental Health Chatbot Personalization

Large language models utilized for basic talk therapy, often referred to as mental health chatbots, are frequently personalized based on user interactions or other input. While personalization could improve the patient’s experience, it could also pose a risk to their autonomy through, for example, the inappropriate use of personalized nudges.

Presenters
Sarah Carter

Disruptive Technology and Health: Navigating Data Privacy Concerns in an Era of Innovation

The integration of disruptive technologies in healthcare has ushered in a new era of innovation and advancement, promising transformative solutions to longstanding challenges in patient care. However, amidst the potential benefits lies a pressing concern: data privacy. This abstract explores the intricate landscape of data privacy within the context of disruptive technology and health, with a particular emphasis on elucidating the responsible entities accountable for safeguarding sensitive medical information.

Presenters
Raghvendra Singh Yadav

Reproductive autonomy in the age of artificial intelligence

Artificial intelligence (AI) is increasingly being used in reproductive medicine and in various digital applications on sexual and reproductive health. Recently, these developments have sparked various ethical analyses (Afnan et al. 2021; Coghlan et al. 2023; Rolges et al. 2023; Tamir 2023). Not surprisingly, many of the ethical problems of AI, such as its explainability deficits or the existence of biases, are also present in these AI tools in the service of procreative purposes. However, other issues have been less explored.

Presenters
Jon Rueda