AI is not alone: Meaningful Human Control in AI-based Sociotechnical Systems
When an Artificial Intelligence (AI) system is deployed, it is used in a particular setting: people, whether users, developers, or other stakeholders, interact with the system directly or indirectly. The setting we are concerned with here thus consists of technological artifacts, namely the AI system in question and any other relevant systems, together with the humans interacting with them. I therefore define the AI system and the context it operates in as a sociotechnical system, centered around that specific AI system. Consider an AI system deployed in a neonatal intensive care unit (NICU) to flag when there is a risk of early-onset sepsis. Here, doctors and nurses (users) and other stakeholders (parents of patients) want to work with the system and will consequently form expectations about when flagging should occur.
Any human involved in the sociotechnical system either knows what the system's capabilities are or has expectations about what its behaviour ought to be in a particular setting. As AI is increasingly used in safety-critical contexts such as healthcare, anyone with expectations about the AI system's behaviour demands ever more accurate and reliable performance from it. I argue that a distinction has to be made between performance that can be measured prior to deployment, such as accuracy on a given test set, and performance in accordance with the expectations within the sociotechnical system. For example, when a NICU patient has an additional health concern that produces symptoms similar to sepsis, users of the AI system can expect a higher rate of false positives for that patient. Even though the system does what it was trained for, it does not perform in accordance with the expectations of that sociotechnical system. Since these expectations and the wider context of the sociotechnical system are variable, meaningful human control cannot be maintained through a priori guarantees alone. I argue that continuously monitoring AI systems within their sociotechnical system becomes necessary, as the entire sociotechnical system affects whether the AI system behaves according to the expectations, or, normatively, as it ought to.
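To make the monitoring idea concrete, consider a minimal sketch of a deployment-time check that compares the observed false-positive rate of a sepsis flagger against an expectation adjusted for the patient's context. All names, rates, and thresholds here (`expected_fp_rate`, `BASELINE_FP_RATE`, the confounding-condition adjustment) are hypothetical illustrations, not part of any actual NICU system; the point is only the shape of the check, in which the acceptance criterion depends on the sociotechnical context rather than on a fixed pre-deployment metric.

```python
from dataclasses import dataclass

@dataclass
class PatientContext:
    """Contextual facts the sociotechnical system knows about a patient."""
    patient_id: str
    has_confounding_condition: bool  # e.g. symptoms resembling sepsis

# Hypothetical baseline false-positive rate measured on a test set
# prior to deployment.
BASELINE_FP_RATE = 0.05
# Hypothetical adjusted expectation: users anticipate more false alarms
# for patients whose other conditions mimic sepsis symptoms.
CONFOUND_FP_RATE = 0.20

def expected_fp_rate(ctx: PatientContext) -> float:
    """Expectation within the sociotechnical system, not the test-set metric."""
    return CONFOUND_FP_RATE if ctx.has_confounding_condition else BASELINE_FP_RATE

def within_expectations(ctx: PatientContext,
                        flags: int,
                        confirmed_sepsis: int,
                        tolerance: float = 0.10) -> bool:
    """Return True if the observed false-positive behaviour stays within
    the contextual expectation; False signals a need for human review."""
    if flags == 0:
        return True
    observed_fp_rate = (flags - confirmed_sepsis) / flags
    return observed_fp_rate <= expected_fp_rate(ctx) + tolerance

# Usage: a patient with a confounding condition triggers 6 flags,
# of which only 1 is confirmed sepsis. The observed rate (~0.83) exceeds
# even the adjusted expectation (0.20 + 0.10), so the check flags the
# system's behaviour for human review.
ctx = PatientContext("nicu-0042", has_confounding_condition=True)
print(within_expectations(ctx, flags=6, confirmed_sepsis=1))  # False
```

Note that the same observed behaviour could pass or fail this check depending solely on the `ctx` argument, which is exactly the gap between test-set performance and performance relative to the expectations held within the sociotechnical system.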