Presenters
Simon Fischer

Moving towards forward-looking responsibility with questions in human-machine decision-making

Clinical decision-making is increasingly supported by machine learning models. So-called decision-support systems (DSS) are intended to improve the decision-making of human operators (e.g., physicians), but they also introduce issues of epistemic uncertainty and over-reliance, and thereby open up responsibility gaps. To overcome these shortcomings, explainable AI attempts to provide insights into how the system arrived at a decision. Explanations are, however, provided post hoc and require contextual interpretation. Moreover, they might be ignored or can even make an undesirable decision appear more trustworthy. For these reasons, explanations in themselves do not sufficiently address responsibility gaps.

In order to stimulate critical reflection by the physician, which ideally promotes deliberative decision-making, and to shift the focus from backward-looking to forward-looking responsibility, we want to delegate the decision-making process to the human operator from the outset. To facilitate this, we introduce a system that raises questions about the pending decision. These questions can range from scrutinising the adequacy of the physician's assumptions and those built into the DSS, to contemplating scenarios in which the data might look different. Further, although the questions are directed at the physician, they can take the patient's perspective and interests into account.

As a case study, we focus on treatment options for lower back pain. To formulate questions, we use tabular data from patient self-assessment questionnaires, which are also used by a DSS to recommend a course of action to the physician. In this respect, our proposed system complements the DSS and gives the physician (and potentially the patient) a more central role in the decision-making process.
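For illustration, a minimal sketch of how reflection questions might be instantiated from a questionnaire record alongside a DSS recommendation. The field names, thresholds, and question templates here are hypothetical placeholders chosen for the sketch, not the actual system.

```python
# Hypothetical sketch: instantiating reflection questions from a patient's
# self-assessment record and a DSS recommendation. Field names, thresholds,
# and templates are illustrative assumptions, not the proposed system.

def generate_questions(record: dict, dss_recommendation: str) -> list[str]:
    questions = []

    # Scrutinise the assumptions built into the DSS recommendation.
    questions.append(
        f"The DSS recommends '{dss_recommendation}'. Which assumptions about "
        f"this patient does that recommendation rest on, and do they hold here?"
    )

    # Contemplate a scenario in which the data might look different.
    pain = record.get("pain_intensity")  # e.g., 0-10 self-assessment scale
    if pain is not None:
        questions.append(
            f"The patient reports a pain intensity of {pain}/10. Would the "
            f"recommendation still seem adequate if this value were markedly "
            f"higher or lower?"
        )

    # Bring in the patient's perspective and interests.
    questions.append(
        "Which of the patient's own goals (e.g., returning to work, avoiding "
        "surgery) are not captured by the questionnaire data?"
    )
    return questions


# Example usage with a fabricated record.
record = {"pain_intensity": 7, "sick_leave_weeks": 4}
for q in generate_questions(record, dss_recommendation="physiotherapy"):
    print("-", q)
```

The point of the sketch is only that such questions can be generated from the same tabular inputs the DSS consumes, so the question-raising component can sit beside the DSS rather than replace it.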

The aim of this paper is twofold. First, we want to show how questions can be useful to overcome certain shortcomings of explanations and to mitigate automation bias by adding friction. We do not, however, dismiss explanations altogether. Second, in the context of over-reliance on machines, we want to make the point that questions can help the physician form a more holistic picture of the patient. Whereas explanations are primarily backward-looking and refer only to existing data, questions allow for a more open, forward-looking approach to decision-making.