Technosolutionism and the empathetic medical chatbot
Recently, a number of studies have shown that chatbots outperform healthcare professionals when it comes to empathy (Ayers et al., 2023; Lenharo, 2024). This is remarkable for at least two reasons. First, because empathy is broadly recognized as a core value of good healthcare (Kim et al., 2004). Second, because empathy has typically been considered an essentially human quality, such that the promise of technology in healthcare, including most recently AI, has been to free up healthcare professionals to do what they are good at: providing empathetic care (Topol, 2019). In this talk, I analyze the empathetic medical chatbot through the lens of technosolutionism (Morozov, 2013) – the belief that every problem has a solution based in technology – and the harms that technosolutionism raises, namely the redefinition of complex problems to fit technosolutions (rather than the other way around), and the forsaking of our original, complex problems in favor of problems that technologies claim to be able to solve.
While empathetic medical chatbots may sound promising, they cannot actually be empathetic. They have no agency, no intentionality, and no concept of empathy (Bender et al., 2021). An important question to ask in this context is whether this matters: perhaps the beneficial effects of empathy in healthcare can be achieved when an interaction is merely perceived as empathetic, without there being any intention of empathy, similar to a placebo effect. I argue that even if this were the case, it would amount to a technosolutionist fallacy, whereby a technosolution – seemingly empathetic chatbots – leads to a redefinition of the concept of empathy – now reduced to a sequence of keywords in a text that does not feel rushed. Perhaps more importantly, I argue, this may lead to the forsaking of a real problem that our healthcare systems face: that empathy takes time, and that the pursuit of greater cost efficiency – often through the adoption of new technologies – has taken time away from healthcare professionals (electronic health records are a tell-tale example) (Kerasidou, 2019). If our solution to this problem is seemingly empathetic chatbots, we will have left the original problem unsolved, while twisting the definition of empathy into a hollowed-out version of itself.
Bibliography
Ayers, J. W., Poliak, A., Dredze, M., Leas, E. C., Zhu, Z., Kelley, J. B., Faix, D. J., Goodman, A. M., Longhurst, C. A., Hogarth, M., & Smith, D. M. (2023). Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Internal Medicine, 183(6), 589–596. https://doi.org/10.1001/jamainternmed.2023.1838
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922
Kerasidou, A. (2019). Empathy and efficiency in healthcare at times of austerity. Health Care Analysis, 27(3), 171–184. https://doi.org/10.1007/s10728-019-00373-x
Kim, S., Kaplowitz, S., & Johnston, M. V. (2004). The effects of physician empathy on patient satisfaction and compliance. Evaluation & the Health Professions, 27(3), 237–251.
Lenharo, M. (2024). Google AI has better bedside manner than human doctors—And makes better diagnoses. Nature, 625(7996), 643–644. https://doi.org/10.1038/d41586-024-00099-4
Morozov, E. (2013). To Save Everything, Click Here: Technology, Solutionism and the Urge to Fix Problems that Don’t Exist. Allen Lane.
Topol, E. (2019). Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books.