Moving towards forward-looking responsibility with questions in human-machine decision-making
Clinical decision-making is increasingly supported by machine learning models. So-called decision-support systems (DSS) are intended to improve the decision-making of human operators (e.g., physicians), but they also introduce epistemic uncertainty and over-reliance, and thereby open up responsibility gaps. To address these shortcomings, explainable AI (XAI) attempts to provide insights into how the system arrived at a decision. Such explanations are, however, provided post hoc and require contextual interpretation.