How do digitalization and AI disrupt moral concepts?
In the field of digital ethics, the question has regularly been asked whether digital ethics is unique in the kinds of ethical issues it raises, the moral principles it requires, or the methods and approaches it needs. A debate on this issue took place between 1985 and 2002 and came to be known as the uniqueness debate within digital ethics (or computer ethics, as the field was then called). Various authors, including Deborah Johnson, Walter Maner, Krystyna Gorniak-Kocikowska, and Luciano Floridi, made arguments in favor of uniqueness. By 2002, however, the prevailing opinion was that there was no strong evidence that digital ethics was unique in comparison to other fields of applied ethics.
I will argue that while this may have been true in 2002, the past two decades have brought new developments in digitalization that do require unique ethical principles, concepts, and methods. The first development is pervasive digitalization: over the past two decades, many human practices, such as white-collar work, entertainment, communication, and commerce, have become predominantly digital. I will argue that pervasive digitalization requires us to reassess and reengineer moral concepts because their application domain has shifted. This applies to concepts such as privacy, security, freedom of expression, well-being, and discrimination, which are changing in meaning and normative scope as a result of pervasive digitalization. Privacy, for example, once concerned mainly seclusion in physical space; in digital contexts it increasingly concerns control over flows of personal data.
The second development is the rise of AI, including autonomous AI applications, which has resulted in a responsibility gap: a gap in responsibility and accountability for actions because some intelligent actions are now performed by machines rather than humans. This notion was originally introduced by Andreas Matthias in 2004 and has since been theorized by many authors. The responsibility gap challenges and disrupts our concept of responsibility because it disrupts our understanding of agency and accountability. We are left with three options: assign responsibility to human actors even though they do not meet the conditions of responsibility (knowledge, control, and intent); assign it to machine actors even though they, too, fail to meet some of these conditions; or accept that no adequate concept of responsibility fills the gap.
I will conclude by claiming that these two developments jointly support the uniqueness thesis and require us to rethink the key moral concepts identified above.