Presenters
Suzana Alpsancar
Kind of session: presentation

What was understandable in symbolic AI? How the Philosophy and Ethics of Technology might benefit each other

The current call for explainable AI (XAI) is most often framed as an answer to the so-called black box problem of machine learning. Following this conceptualisation of the problem, the recent effectiveness of many machine learning (ML) based systems comes at the cost of intelligibility: the more accurately AI performs, the less understandable it becomes to humans. Accordingly, XAI is seen as the endeavour to whiten the black box, so that society can profit from the latest AI successes without the risk of being alienated. This standard problematisation of the black box problem has been critiqued from various angles. Such critiques particularly highlight the overgeneralisation and simplification of both the problem conceptualisation (e.g., opacity is not only a consequence of technical features; not all opacity is problematic) and the solution (e.g., explanations are not always wanted or asked for; they might even be questionable from an ethical perspective). Adding to these critical voices, I want to revisit the paradigm of so-called symbolic AI, which is often presented as the antidote to the current black box problem of ML: symbolic AI was never a black box; it stands for ex ante explainability. I will show that the assumed comprehensibility of symbolic AI is due both to the programmatic design of the paradigm, with its underlying assumptions and objectives, and to circumstantial conditions that come with the laboratory situation. To see ML and symbolic AI as antipodes, we have to restrict our perspective to a limited scope: the question of the form in which knowledge is represented. It is only from this narrow view, which had been the scope of interest for early AI research, that symbolic AI appears to be a white box while ML-based systems appear to be black boxes. If we broaden our perspective and account for the conditions that made symbolic AI understandable (to experts), we may question the presumed opposition between the two AI branches.
Moreover, we may ask about the conditions and underlying presumptions of current AI research, which are still rarely addressed in the XAI debate. In this way, I aim to broaden the question of how to apply XAI in a meaningful way.