Beyond Responsible AI Principles: embedding ethics in the system’s lifecycle

The year 2019 is commonly dubbed the “AI Ethics Year”: it marked the point at which earlier discussions on the societal impact of AI crystallized into documents endorsed by nations, international institutions, companies, and other organizations. Indeed, Anna Jobin and her colleagues had already counted more than 80 AI Ethics guidelines by 2019 (Jobin et al., 2019), and that number has kept growing in the years since. These ethics guidelines are often based on a principled approach: they offer a set of high-level principles that should govern AI development if we are to achieve an ethical AI or an AI for social good.

Despite the significance of these documents, the principled approach they advocate has been deemed insufficient from the outset (Mittelstadt, 2019). From that early stage until today, one of the main problems faced in the Responsible AI domain is the so-called “operationalization gap,” i.e., the difficulty of translating these principles or values into technical requirements.
I will contend that operationalizing these principles does not entail closing a gap but rather jumping over it. Rather than trying to derive technical requirements from abstract principles, it calls for a different sort of analysis: one that moves closer to the artifacts themselves, the contexts in which they are deployed, the people who use them, and the other stakeholders involved. All of this has been acknowledged in the philosophy of technology over recent decades, particularly after the empirical turn.

Here, I will address the operationalization of Responsible AI principles by focusing on engineering practice. I will draw on recent proposals from this domain, such as Privacy Engineering and Value-Based Engineering, but also on traditional engineering approaches, such as safety engineering.