Track 1: AI - Intelligent Artifice? - part 2
Can value alignment solve the control problem for AI?
Some worry that as AI systems become more capable and more integrated into society, we will lose control over those systems. This loss of control, the worry goes, may yield bad or even catastrophic outcomes. Some think we can solve this problem without needing control at all, simply by ensuring that the systems are value-aligned. Here it is argued that value alignment cannot solve the control problem, unless the value alignment itself includes a notion of control.
Ethical oversight of algorithmic systems – contextualizing the call for increased governance in the Dutch public sector
The increasing integration of algorithms into various aspects of society has raised concerns about potential biases, ethical implications, and a lack of transparency in algorithmic decision-making processes, which in turn has led to a call for increased oversight. This paper contextualises the call for increased ethical governance and oversight of algorithmic systems used within the Dutch public sector, by focusing on the aims of the proposed governance mechanisms and instruments.
AI is not alone: Meaningful Human Control in AI-based Sociotechnical Systems
When Artificial Intelligence (AI) systems are deployed, they are used in a particular setting. In this setting, there are users, developers, and other stakeholders who interact with the system directly or indirectly. The particular setting of an AI system that we are concerned with here thus consists of technological artifacts (the AI system in question and any other relevant systems) together with the humans interacting with them. I therefore define the AI system and the context it operates in as a sociotechnical system, centered around that specific AI system.