Presenters
Y. J. Erden
Kind of session: presentation

AI for psychiatry: close encounters of the algorithmic kind

Psychiatry includes the assessment and diagnosis of illness and disorder within a largely interpersonal, communicative structure involving physicians and patients. In such contexts, AI can help to spot patterns and generate predictions, e.g. using ‘big data’ analysis via statistical learning-based models. In these ways, AI can help to automate routine steps, improve efficiency, mitigate clinician bias, and offer predictive potential, including through the analysis of neuroscientific data. Electroencephalography (EEG), for instance, promises data on brain activity related to cognition, emotion, and behaviour, as apparently objective accounts of what otherwise requires interpersonal engagement and observation. Yet psychiatric theories (including theories of emotion and behaviour) are not neutral, and problematic assumptions within them, as well as discredited theories and retracted studies, can (and do) find their way into the design of AI. AI can thus encode and reify such values and judgements. Even where the underlying psychological research is sound, psychiatry involves more than can be automated. AI analysis of big data for predictive purposes cannot supplant the phenomenological perspectives that underlie a person’s actions, choices, and experiences, or bypass the necessarily discursive engagement between patient and clinician. Brain data can improve explanatory models, but this should not come at the expense of essential qualitative practices. Technological methods for assessment and diagnosis might seem time- and cost-efficient, but there remains an important role for (even imperfect) interpersonal methods in medicine and care. We therefore need some core principles for the appropriate use of AI in psychiatry. These include: (1) not undermining necessary relational aspects of care; (2) not cementing simplistic classifications, exacerbating harmful biases, retaining discredited theories, or relying on retracted papers; (3) not using brain data to bypass self-reporting and interpersonal, discursive methods; (4) remaining sufficiently transparent (about methods, processes, and data sets, including those used for training) and open to critique. In short, those who develop these technologies need to be aware of the complexity and necessary imprecision of the theories they adapt. Otherwise, the scope for harm can be extensive.