Track 8: General - Philosophy and Ethics of Technology

Chair: To be announced

Saliences and Collective Attention in Technology Development

Collective attention plays an important role in the development of sociotechnical systems. It directs the aggregation of social, financial, and political resources, thereby shaping technology development. In particular, shifts in collective attention can reflect changing relationships between specific technologies and social or moral values. When a value becomes relatively more important to a certain technology, the perspectives relevant to that value attract more attention within that sociotechnical system; tracking these shifts is therefore crucial for value-sensitive and responsible technology design.

Presenters
Yunxuan MIAO

A Buddhist Attention Freedom Fight Club

Within mere decades the attention economy has not only developed into a technological omnipresence, a major industry, and a political force to be reckoned with. Less explicitly, it has also developed into a major moral force working on our collective attention. It comes not only with an explicit economic deal - I pay attention in return for ‘free’ services - but also with an implicit ethical ideal: delivering myself to the ease of attention technology will confirm me as a truly modern human being. The good and the easy are near synonyms.

Presenters
Tom Hannes

Wisdom in the Age of Intelligent Machines

This paper addresses the topic of artificially mediated agency and autonomy and their impact on human wellbeing. The key question of this paper is whether artificial agency and autonomy can be extended to the notion of wisdom: If agency is mediated and distributed between humans and intelligent machines, can there also be a mediated wisdom?

Presenters
Edward Howlett Spence

Track 7: TechnoPolitics

Chair: To be announced

Digital labour platforms ownership through the lens of Design Justice

The question of ownership has not been addressed enough in discussions about the politics of design of digital technologies. Whereas many approaches to design practice have recognised unfair decision-making processes, oppressive power-dynamics, and several instances of structural injustice, most of them fail to propose a political-economic alternative that addresses the ways in which the processes are owned. This has rendered many designers incapable of futuring digital technologies outside of the self-fulfilling prophecies professed by the different neoliberal capitalist institutions.

Presenters
Aarón Moreno Inglés

(Re)Designing the Public Sphere? Doing Political Theory After the Empirical Turn

This paper critically addresses current debates on the digital transformation of the public sphere. It engages with two contrasting responses to this transformation: the school of destruction, which expresses pessimism about the design of social media, and the school of restoration, which advocates for the redesign of social media to align with normative conceptions of the public sphere. However, so far these responses have omitted an explicit philosophical reflection on the relationship between politics, technology and design.

Presenters
Anthony Longo

Rethinking the future of work debate with Hannah Arendt

The recent successes and promises of artificial intelligence (AI) have given rise to a debate about the so-called ‘future of work’. Within this debate, economists discuss the likelihood that AI automates so many jobs that there will be structural unemployment. Ethicists discuss the moral implications of working with AI or losing one’s job to AI. Two moral issues are at the heart of this debate: the socio-economic impact and the existential impact of automation.

Presenters
Rosalie Waelen

Track 6: Methodological Issues, Questions & Practices

Chair: To be announced

Value Experiences & Technomoral Deliberation

Over the past year, a major topic of research among ESDiT members has been the role of “value experiences” in ethical deliberation about disruptive technologies. Ibo van de Poel defines value experiences as “experiences in which something seems valuable or disvaluable to the experiencer.” Examples of value experiences include emotions such as anger, in which something seems wrong or unjust to the experiencer, and—more speculatively—forms of perceptual experience that have evaluative content, akin to the perception of affordances.

Organizers
James Hutton

Track 5: Geo-Technology & Bio-Technology

Chair: To be announced

Climate justice, environmental ethics, and the global ecological crisis

Despite the shared focus of climate ethics, environmental ethics, and political theory on threats to the natural environment and human well-being, these discourses have developed into largely isolated fields. One dividing line of thought is the ethical consideration of non-human entities, which is a key topic in environmental ethics, but is often side-stepped in climate ethics in favour of justice for human beings.

Organizers
Dominic Lenzi
Alexandria Poole

Track 4: Disruptive Technology & Health

Chair: To be announced

Unveiling Epistemic Injustice: Overlooking Emotional Knowledge in AI-Driven Healthcare

Healthcare faces increasing challenges from aging populations, chronic illnesses, and emergent health crises. The integration of artificial intelligence (AI) into healthcare systems promises revolutionary changes across diagnosis, treatment planning, patient monitoring, and administrative tasks. However, amidst these technological advancements, the development and implementation of AI systems often overlook the critical role of emotional practices within the healthcare field.

Presenters
Eliana Bergamin

Googlization of Health Research and Epistemic Trustworthiness

Data-intensive health research projects led or initiated by large tech companies, such as Alphabet and Palantir, are emblematic of a research model Sharon (2016) has termed the “Googlization of Health Research” (GHR). GHR, according to Sharon, is characterized by a promise to advance health research through collection of a large variety of heterogeneous data, such as through consumer-oriented tracking devices, as well as offering technological capabilities to effectively manage and analyze this complex data.

Presenters
Chirag Arora

Disruptive Technology and Health: Navigating Data Privacy Concerns in an Era of Innovation

The integration of disruptive technologies in healthcare has ushered in a new era of innovation and advancement, promising transformative solutions to longstanding challenges in patient care. However, amidst the potential benefits lies a pressing concern: data privacy. This abstract explores the intricate landscape of data privacy within the context of disruptive technology and health, with a particular emphasis on elucidating the responsible entities accountable for safeguarding sensitive medical information.

Presenters
Raghvendra Singh Yadav

Track 3: Concepts & Values

Chair: To be announced

The Argumentation within Values: For a Dialectics of Value-Based Technology Design

One would be excused for thinking that values are just “bullshit” - to use the term successfully brought from slang into philosophy by Harry Frankfurt (Frankfurt 2005). This is because both values (e.g., liberty, equality, justice) and meta-values (e.g., Ruth Chang’s parity, Michael Walzer’s complex equality) underdetermine the design process and can even be applied contradictorily to the same decision. For example, one and the same technology can be described as sustainable and not sustainable, as serving/promoting freedom and as not promoting it.

Presenters
Eugen Octav Popa

Towards a Definition of “Socially Disruptive Technology”

The phrase "socially disruptive" is used to characterize a bewildering range of technologies. There is a need for an apt definition of "Socially Disruptive Technologies" (SDTs), particularly for a definition that captures their ethically relevant implications – positive and negative – in ways that are suitably contextualized. In this paper, we propose, explain, and defend the following definition:

Presenters
Björn Lundgren
Jeroen Hopster
Joel Anderson

Vindication and the value of 'Choice'

Philosophers have been interested in how technological change can drive changes in values (Hopster et al. 2022; Danaher 2021; Swierstra 2013; Nickel, Kudina, and van de Poel 2021) and many have also proposed that particular causal histories can vindicate or debunk our confidence in certain values (Street 2006; Queloz 2021; Smyth 2020). For either inquiry we need robust evidence of technologically induced value change and of the causal mechanisms behind it. In my paper I offer such evidence of technology-driven value change and propose a vindicating argument for this value.

Presenters
Charlie Blunden

Track 1: AI - Intelligent Artifice? - part 2

Chair: To be announced

The interplay between ethical and epistemic virtues in AI-driven science

When striving for the responsible use of AI, it is important that we analyze and develop ethical virtues alongside their epistemic counterparts. As Hagendorff (2022) noted, ethical virtues correspond to the four prominent principles guiding our responsible use of AI: the ethical virtues of justice, honesty, responsibility, and care correspond to the principles of fairness, transparency, accountability, and privacy, respectively.

Presenters
Vlasta Sikimic

The impact of LLMs on collective knowledge practices

ChatGPT has disrupted not only the public discourse on AI, but also the social and epistemic practices of its users. Large Language Models (LLMs), or more specifically generative text AI, have been repeatedly portrayed as omniscient oracles by tech giants and the media alike. As a result, they are often understood as knowledge models rather than (large) language models. This specific view of the new generation of chatbots is not only presented externally to the user but is also mediated by the (interface) design of the AI model and thus reinforced by the user's interaction with it.

Presenters
Marte Henningsen

The instrumental role of explanations for trustworthy AI

Do we need explanations of AI outputs in order to obtain trustworthy AI? In recent years there has been an active philosophical debate on this question, with a range of authors arguing that in fact explanations are not needed for justification in believing AI outputs (Durán & Jongsma, 2021; London, 2019) or even, more broadly, for the ethical use of AI (Kawamleh, 2022; Krishnan, 2020).

Presenters
Stefan Buijsman

Track 1: AI - Intelligent Artifice? - part 1

Chair: To be announced

Rethinking AI Ethics in, for and from sub-Saharan Africa - will continue in parallel VIII, track 1 part 1

In the discourse surrounding AI, Africa's role and unique perspective remain conspicuously marginalised. This panel seeks to address this oversight by examining the current state of AI in Africa through three lenses: AI in, for, and from sub-Saharan Africa.

Organizers
Kristy Claassen

Digital labour platforms ownership through the lens of Design Justice

The question of ownership has not been addressed enough in discussions about the politics of design of digital technologies. Whereas many approaches to design practice have recognised unfair decision-making processes, oppressive power-dynamics, and several instances of structural injustice, most of them fail to propose a political-economic alternative that addresses the ways in which the processes are owned. This has rendered many designers incapable of futuring digital technologies outside of the self-fulfilling prophecies professed by the different neoliberal capitalist institutions.

Presenters
Aarón Moreno Inglés

The Argumentation within Values: For a Dialectics of Value-Based Technology Design

One would be excused for thinking that values are just “bullshit” - to use the term successfully brought from slang into philosophy by Harry Frankfurt (Frankfurt 2005). This is because both values (e.g., liberty, equality, justice) and meta-values (e.g., Ruth Chang’s parity, Michael Walzer’s complex equality) underdetermine the design process and can even be applied contradictorily to the same decision. For example, one and the same technology can be described as sustainable and not sustainable, as serving/promoting freedom and as not promoting it.

Presenters
Eugen Octav Popa

(Re)Designing the Public Sphere? Doing Political Theory After the Empirical Turn

This paper critically addresses current debates on the digital transformation of the public sphere. It engages with two contrasting responses to this transformation: the school of destruction, which expresses pessimism about the design of social media, and the school of restoration, which advocates for the redesign of social media to align with normative conceptions of the public sphere. However, so far these responses have omitted an explicit philosophical reflection on the relationship between politics, technology and design.

Presenters
Anthony Longo

Saliences and Collective Attention in Technology Development

Collective attention plays an important role in the development of sociotechnical systems. It directs the aggregation of social, financial, and political resources, thereby shaping technology development. In particular, shifts in collective attention can reflect changing relationships between specific technologies and social or moral values. When a value becomes relatively more important to a certain technology, the perspectives relevant to that value attract more attention within that sociotechnical system; tracking these shifts is therefore crucial for value-sensitive and responsible technology design.

Presenters
Yunxuan MIAO

The interplay between ethical and epistemic virtues in AI-driven science

When striving for the responsible use of AI, it is important that we analyze and develop ethical virtues alongside their epistemic counterparts. As Hagendorff (2022) noted, ethical virtues correspond to the four prominent principles guiding our responsible use of AI: the ethical virtues of justice, honesty, responsibility, and care correspond to the principles of fairness, transparency, accountability, and privacy, respectively.

Presenters
Vlasta Sikimic

Towards a Definition of “Socially Disruptive Technology”

The phrase "socially disruptive" is used to characterize a bewildering range of technologies. There is a need for an apt definition of "Socially Disruptive Technologies" (SDTs), particularly for a definition that captures their ethically relevant implications – positive and negative – in ways that are suitably contextualized. In this paper, we propose, explain, and defend the following definition:

Presenters
Björn Lundgren
Jeroen Hopster
Joel Anderson

Unveiling Epistemic Injustice: Overlooking Emotional Knowledge in AI-Driven Healthcare

Healthcare faces increasing challenges from aging populations, chronic illnesses, and emergent health crises. The integration of artificial intelligence (AI) into healthcare systems promises revolutionary changes across diagnosis, treatment planning, patient monitoring, and administrative tasks. However, amidst these technological advancements, the development and implementation of AI systems often overlook the critical role of emotional practices within the healthcare field.

Presenters
Eliana Bergamin

The impact of LLMs on collective knowledge practices

ChatGPT has disrupted not only the public discourse on AI, but also the social and epistemic practices of its users. Large Language Models (LLMs), or more specifically generative text AI, have been repeatedly portrayed as omniscient oracles by tech giants and the media alike. As a result, they are often understood as knowledge models rather than (large) language models. This specific view of the new generation of chatbots is not only presented externally to the user but is also mediated by the (interface) design of the AI model and thus reinforced by the user's interaction with it.

Presenters
Marte Henningsen

Googlization of Health Research and Epistemic Trustworthiness

Data-intensive health research projects led or initiated by large tech companies, such as Alphabet and Palantir, are emblematic of a research model Sharon (2016) has termed the “Googlization of Health Research” (GHR). GHR, according to Sharon, is characterized by a promise to advance health research through collection of a large variety of heterogeneous data, such as through consumer-oriented tracking devices, as well as offering technological capabilities to effectively manage and analyze this complex data.

Presenters
Chirag Arora

A Buddhist Attention Freedom Fight Club

Within mere decades the attention economy has not only developed into a technological omnipresence, a major industry, and a political force to be reckoned with. Less explicitly, it has also developed into a major moral force working on our collective attention. It comes not only with an explicit economic deal - I pay attention in return for ‘free’ services - but also with an implicit ethical ideal: delivering myself to the ease of attention technology will confirm me as a truly modern human being. The good and the easy are near synonyms.

Presenters
Tom Hannes

The instrumental role of explanations for trustworthy AI

Do we need explanations of AI outputs in order to obtain trustworthy AI? In recent years there has been an active philosophical debate on this question, with a range of authors arguing that in fact explanations are not needed for justification in believing AI outputs (Durán & Jongsma, 2021; London, 2019) or even, more broadly, for the ethical use of AI (Kawamleh, 2022; Krishnan, 2020).

Presenters
Stefan Buijsman

Disruptive Technology and Health: Navigating Data Privacy Concerns in an Era of Innovation

The integration of disruptive technologies in healthcare has ushered in a new era of innovation and advancement, promising transformative solutions to longstanding challenges in patient care. However, amidst the potential benefits lies a pressing concern: data privacy. This abstract explores the intricate landscape of data privacy within the context of disruptive technology and health, with a particular emphasis on elucidating the responsible entities accountable for safeguarding sensitive medical information.

Presenters
Raghvendra Singh Yadav

Vindication and the value of 'Choice'

Philosophers have been interested in how technological change can drive changes in values (Hopster et al. 2022; Danaher 2021; Swierstra 2013; Nickel, Kudina, and van de Poel 2021) and many have also proposed that particular causal histories can vindicate or debunk our confidence in certain values (Street 2006; Queloz 2021; Smyth 2020). For either inquiry we need robust evidence of technologically induced value change and of the causal mechanisms behind it. In my paper I offer such evidence of technology-driven value change and propose a vindicating argument for this value.

Presenters
Charlie Blunden

Rethinking the future of work debate with Hannah Arendt

The recent successes and promises of artificial intelligence (AI) have given rise to a debate about the so-called ‘future of work’. Within this debate, economists discuss the likelihood that AI automates so many jobs that there will be structural unemployment. Ethicists discuss the moral implications of working with AI or losing one’s job to AI. Two moral issues are at the heart of this debate: the socio-economic impact and the existential impact of automation.

Presenters
Rosalie Waelen

Wisdom in the Age of Intelligent Machines

This paper addresses the topic of artificially mediated agency and autonomy and their impact on human wellbeing. The key question of this paper is whether artificial agency and autonomy can be extended to the notion of wisdom: If agency is mediated and distributed between humans and intelligent machines, can there also be a mediated wisdom?

Presenters
Edward Howlett Spence

Value Experiences and Techno-Environmental Dilemmas

This contribution will explore the methodological significance of value experiences for the ethics of human interactions with nature. I begin by detailing how environmentally disruptive technologies often pose “techno-environmental dilemmas.” For example, offshore windfarms enable us to mitigate global environmental harm. Simultaneously, they disrupt the environments in which they are built, negatively impacting human and nonhuman lives. How should we decide what to do in the face of these environmental dilemmas?

Presenters
James Hutton

Art and Emotions as Methods for Value Experience and Deliberation on Socially Disruptive Technologies

This contribution will provide a novel method for value deliberation on technologies, grounded in art and emotions. Philosophy tends to see itself as a rational discipline, emphasizing logical argumentation and seeing emotions as belonging to the realm of irrationality and subjectivity. This view of emotions has been challenged by philosophers and psychologists who emphasize the cognitive dimension of emotions. Emotions can then play an important epistemological role, providing us with insights into the evaluative dimension of our lived experience.

Presenters
Sabine Roeser

Value Experiences and Design for Value

In this contribution, I explore why and how value experiences are relevant to Design for Values. In a value experience, something seems to the experiencer to be valuable (or disvaluable). Design for Values is a design approach that aims at systematically integrating values of moral importance into (technological) design.

Presenters
Ibo van de Poel

Value Experiences & Technomoral Deliberation

Over the past year, a major topic of research among ESDiT members has been the role of “value experiences” in ethical deliberation about disruptive technologies. Ibo van de Poel defines value experiences as “experiences in which something seems valuable or disvaluable to the experiencer.” Examples of value experiences include emotions such as anger, in which something seems wrong or unjust to the experiencer, and—more speculatively—forms of perceptual experience that have evaluative content, akin to the perception of affordances.

Organizers
James Hutton

Intercultural Conceptual Disruption

Recent debates in the philosophy of technology center on the notion that technology can disrupt concepts and values. Among these, Artificial Intelligence (AI) emerges as a prominent example, demonstrating its potential to disrupt fundamental notions such as personhood, agency, and responsibility. However, existing debates have thus far failed to adequately explore how such disruption manifests across diverse cultural and ethical frameworks.

Presenters
Kristy Claassen

On the moral status of humanoid robots: an African inspired approach

Some people relate to, and treat, humanoid robots as if they are human, although they know that they are not. Such reactions have sparked discussion about whether humanoid robots should be granted the same, or similar, moral status as human beings. A relational approach to robot moral status is unconcerned with whether the robot has the necessary properties for moral status, and argues that if we relate to the robot as if it is human, it should indeed have the same (or similar) moral status as human beings.

Presenters
Cindy Friedman

Challenges to Responsible AI in Africa: Using Matolino’s lenses on modernity and development

I use Bernard Matolino’s lenses on modernity and development to reflect on and discuss the challenges that Africa will face in adapting to Responsible AI. Matolino looks at the relationship between values and technological developments in an African context. He defines technological development as an ongoing human episode that signifies development and innovation. Matolino proposes two perspectives from which to define modernity. The first is that modernity is an actual transition that happens when people’s lives and systems shift from one mode to another.

Presenters
Eddie Liywalii

Technology Transfer in sub-Saharan Africa: A Form of Technological Disruption

How does technology transfer affect sub-Saharan Africa, especially her sociocultural and economic circumstances? I argue that technology transfer disrupts the cultural worldviews and socio-economic conditions of sub-Saharan Africa. It bears mentioning that, in our contemporary social milieu, digital technologies such as artificial intelligence (AI), machine learning (ML), and robots have become pervasive, reshaping our perceptions of the world as well as our societal norms and cultural values.

Presenters
Edmund Terem Ugar

The Ethical Tightrope: Chinese AI in Africa and the Shadow of Authoritarianism

I examine the growing presence of Chinese artificial intelligence (AI) in Africa through the lens of Michel Foucault's theory of knowledge and power. Foucault argued that knowledge is not objective but can rather be used as a tool of domination by those in power. This paper explores how China's involvement in African AI development shapes knowledge production and governance on the continent. China has become one of the players in the African AI landscape, investing in AI technologies across various sectors, including healthcare, agriculture, education, and governance.

Presenters
Bridget Chipungu Chimbga

Rethinking AI Ethics in, for and from sub-Saharan Africa - will continue in parallel VIII, track 1 part 1

In the discourse surrounding AI, Africa's role and unique perspective remain conspicuously marginalised. This panel seeks to address this oversight by examining the current state of AI in Africa through three lenses: AI in, for, and from sub-Saharan Africa.

Organizers
Kristy Claassen

Robots and dignity from an Afro-communitarian perspective: an evaluation

One of the often-cited reasons against the use of technologies with artificial intelligence is that such use would undermine human dignity. The use of these robots, it is argued, undermines the dignity of the patients who use them because it deceives, manipulates, humiliates, invades privacy, infantilises, and causes loss of human contact. Such actions disrespect their autonomy and treat them as mere means to an end rather than as ends in themselves. Western conceptions of dignity, such as Kant’s and Nussbaum’s, are salient conceptions used to conduct such evaluations.

Presenters
Karabo Maiyane

Climate justice, environmental ethics, and the global ecological crisis

Despite the shared focus of climate ethics, environmental ethics, and political theory on threats to the natural environment and human well-being, these discourses have developed into largely isolated fields. One dividing line of thought is the ethical consideration of non-human entities, which is a key topic in environmental ethics, but is often side-stepped in climate ethics in favour of justice for human beings.

Organizers
Dominic Lenzi
Alexandria Poole