Responsible AI: Evolving Bodies of Practice
In recent years the term ‘Responsible AI’ (R-AI) has been applied across a range of contexts and research applications (Dignum, 2019; Zhu, 2019; De Laat, 2021). On the surface this seems a good thing: of course we want the development, deployment, and use of AI systems to accord with certain normative principles, and the ‘responsible’ frame seems to offer just that. R-AI can ensure that AI systems respect human rights and are aligned with democratic values. However, what exactly R-AI means remains contested, and is often left undefined.
Beyond rules and justice: A systematic literature review on the environmental impact of AI
AI is developing rapidly, as are concerns about the environmental impact of its training and deployment. Recent studies suggest that since 2019, data centers have produced more CO2 emissions than the aviation industry (Shift Project, 2019), and that they consume large amounts of water (Li et al., 2023a). Given the urgency of making AI's growth sustainable, the environmental impact of AI itself can no longer be overlooked. While studies of the environmental impact of AI have begun to emerge in recent years, this emerging body of knowledge raises new ethical questions and dilemmas.
Global Technology and Environmental Inequality: The Imperative Not to Create Morally Permissible Environmental Degradation
In his influential 1999 article “Global Environment and International Inequality,” Henry Shue argues not only that developed nations should bear a disproportionately large share of the costs of fixing the environmental problems caused by industrialization and globalization, but also that members of developing nations are morally permitted to cause environmental degradation insofar as they have been unfairly prevented from reaching the appropriate threshold of dignity and respect.