Conscious When It's Convenient: Anthropomorphisation, Anthropodenial, & AI

Discussions of the welfare of non-human animals & artificial intelligence are dominated by two principles: pathocentrism & the precautionary principle. Pathocentrism is the view that a being’s moral status depends on its capacity to suffer (Metzinger 2021). Pathocentrism establishes a vital connection between philosophical/scientific investigations of consciousness & ethical/legal frameworks. Yet it also leaves important questions unanswered, chief among them: *which* beings are capable of suffering? It would seem that, without a general theory (or reliable empirical measures) of consciousness, a commitment to pathocentrism would forestall advocacy efforts. This is where the precautionary principle comes in. The precautionary principle is motivated by the presumption that it is better to err on the side of caution when attributing consciousness (Birch 2017): incorrectly attributing consciousness to a non-conscious being (a false positive) carries less suffering risk than failing to attribute consciousness to a conscious being (a false negative). For this reason, we should adopt a permissive approach to identifying conscious beings & recognising them as moral patients. In this way, the precautionary principle enables advocacy efforts to proceed in parallel with ongoing philosophical & scientific research into consciousness.

In this talk, I argue that if we accept pathocentrism & the precautionary principle, it is not enough simply to take a liberal, “theory-light” approach to ascribing consciousness. We must also turn the lens on how we understand & attribute consciousness in the first place. To this end, I advance a conceptual engineering perspective on the concept of consciousness (understood in whatever sense is relevant to pathocentrism). What aims do we intend to achieve with a concept of consciousness? What is its *use*? Is our current concept well suited to that purpose? If not, how might it be improved?

Different factors can influence our basic tendencies to anthropomorphise (or to engage in the opposite, anthropodenial; Pérez-Osorio & Wykowska 2020). AI developers may exploit these factors in either direction, to elicit or suppress perceptions of consciousness (Schwitzgebel 2023). Users, too, may have vested interests in deciding whether to recognise AI systems as conscious. If pathocentrism is true, then our concepts of consciousness make a real difference to well-being: how we conceptualise consciousness shapes whose suffering we recognise & whose we overlook. It behoves us to understand consciousness, & to understand it responsibly.