This track concerns AI. On the one hand, we invite contributions on fundamental and conceptual issues in AI, for instance pertaining to ‘trustworthiness’, ‘explanation’, ‘human-centered’, ‘responsible’, or ‘behavior, manipulation, and autonomy’. On the other hand, we welcome contributions on practices and prospects, for instance regarding ELSA (ethical, legal and societal aspects), RRI (responsible research and innovation), and VSD (value sensitive design) in the context of AI, or the philosophical and socio-political implications of generative AI in brain-computer interfacing (BCI) and mind-reading. Perspectives include (but need not be limited to): Philosophy of Technology; Science and Technology Studies (STS); ethical theory; philosophy of science; philosophy of mind; social theory; as well as more heterodox perspectives from literary studies (e.g. science fiction) and critical media studies.
Questions include (but need not be limited to):
- How to understand ‘human-centric’ AI, and how to analyze it in terms of power?
- How (not) to revisit and/or update methodologies from ELSA, RRI, and VSD to democratize AI?
- How to navigate the prospects of ‘mind-reading’ by way of generative AI and BCIs, both philosophically and socially?
- How to understand manipulative technology in general, and in particular in relation to the EU’s AI Act? And how to evaluate it in terms of an ethics of influence?