Presenters
Marianna Capasso
Kind of session: presentation

Discrimination in the age of algorithmic hiring

Algorithmic hiring technologies, i.e. Artificial Intelligence (AI) tools for Human Resource Management (HRM) that are used to find and select candidates for job openings, have been increasingly adopted to improve recruitment efficiency. However, these tools have also been shown to perpetuate discrimination against marginalised groups in society. The recent case of Amazon's CV-screening system is exemplary: the system was trained on biased historical data and learned a preference for men because, in the past, the company had hired more men than women as software engineers. To address this problem, the use of synthetically or semi-synthetically generated data has been proposed as a way to mitigate the risk of bias. But synthetic data – e.g., synthetic datasets of CVs – still comes with a great number of challenges, ranging from the question of what counts as ‘high-quality’ data for improving the diversity and variability of training data and generators for AI models, to the identification and prevention of potential proxies of discrimination across diverse scenarios. In this presentation, my aim is to investigate how algorithmic hiring technologies can introduce discriminatory risks that are not fully captured either by traditional accounts of discrimination (i.e., differential treatment based on membership in a socially salient group), or by looking at the economic, technological, or political power of identifiable social groups and actors in dyadic relationships (platform owners, managers, HR – workers). Instead, I demonstrate how these technologies can introduce systemic forms of discrimination across social groups, and I open up the question of addressing discrimination as a problem of both distributive and epistemic injustice.