Measuring, Defining, and Reframing Uncertainty in AI for Clinical Medicine
Recent advances in artificial intelligence (AI) have shown significant promise in medicine. From disease diagnosis to personalized treatment planning, AI has the potential to transform healthcare. However, as with any emerging technology, questions remain about how to quantify its benefits and trade-offs. One of the biggest challenges in assessing the benefits of AI in medicine is determining how to measure “uncertainty.” Biomedical engineering and computer engineering define and measure uncertainty differently, and as AI tools enter clinical contexts, these disciplines are increasingly coming into conversation with one another. Researchers and clinicians must translate the various measures and types of uncertainty from one context to another; translation failures may lead to confusion at best and medical errors or serious harm at worst. Our goal is therefore to provide a map, or taxonomy, of this evolving conceptual landscape, in order to prevent error, foster communication, and further design practices that foreground questions of clinical value.

In this paper we offer a taxonomy of the different senses of “uncertainty” at play in this context, drawing in part on work by Hansson (2022) and Gal (2016). Computer engineers are mainly concerned with two forms of uncertainty: “aleatoric” (or data) uncertainty and “epistemic” (or model) uncertainty. The former arises from inherent randomness in the data, while the latter is uncertainty about how we model a given data set. In medical AI, these uncertainties meet uncertainties that stem from the medical field itself. Breast cancer screening illustrates several of these types of uncertainty in practice. Factual uncertainty describes the low absolute risk of breast cancer; this translates into the risk of false positives in AI.
Metadoxastic uncertainty, that is, uncertainty about the veracity of one's beliefs, has traditionally referred to a physician's confidence in their judgement about a diagnosis; it now also includes the trust a physician places in the AI model that guides these decisions. Value uncertainty and linguistic uncertainty also play important roles in medical practice. Understanding how AI affects these different types of uncertainty is crucial for using AI in medicine safely and effectively.
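The aleatoric/epistemic distinction above can be made concrete with a standard decomposition from the machine-learning literature (in the spirit of Gal 2016): the entropy of an ensemble's averaged prediction measures total predictive uncertainty, the average entropy of the individual members measures aleatoric (data) uncertainty, and their difference measures epistemic (model) uncertainty. The sketch below is purely illustrative; the ensemble and the screening probabilities are hypothetical, not drawn from any real system discussed in the paper.

```python
import numpy as np

def entropy(p, axis=-1):
    """Shannon entropy in nats; a small epsilon guards against log(0)."""
    return -np.sum(p * np.log(p + 1e-12), axis=axis)

def uncertainty_decomposition(member_probs):
    """Split predictive uncertainty into aleatoric and epistemic parts.

    member_probs: shape (n_members, n_classes), class probabilities from
    an ensemble or from Monte Carlo dropout passes (Gal 2016).
    Returns (total, aleatoric, epistemic) in nats.
    """
    member_probs = np.asarray(member_probs, dtype=float)
    mean_p = member_probs.mean(axis=0)
    total = entropy(mean_p)                   # predictive entropy
    aleatoric = entropy(member_probs).mean()  # expected per-member entropy
    epistemic = total - aleatoric             # disagreement between members
    return total, aleatoric, epistemic

# Hypothetical example: five ensemble members each output
# [P(malignant), P(benign)] for a single mammogram.
agree = [[0.5, 0.5]] * 5                      # members agree; the data are noisy
disagree = [[0.9, 0.1], [0.1, 0.9],
            [0.9, 0.1], [0.1, 0.9], [0.5, 0.5]]  # members disagree

_, alea_a, epi_a = uncertainty_decomposition(agree)
_, alea_d, epi_d = uncertainty_decomposition(disagree)
# Agreement on a 50/50 prediction yields high aleatoric and near-zero
# epistemic uncertainty; disagreement yields a sizable epistemic component.
```

On this reading, high aleatoric uncertainty signals an inherently ambiguous case, while high epistemic uncertainty signals that the model itself is unsure, which bears directly on the metadoxastic question of how much trust a physician should place in its output.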
Financial support for this work was provided by the National Institute of Biomedical Imaging and Bioengineering (R01-EB031051-02S1).