The limits of AI friendship – how good can AI friends be?
In this paper, I examine the current scope of human-AI friendships and the prospects for near-future development of more sophisticated AI friends. I argue that in some current and many possible future contexts, these friendships can be valuable – good for the humans who have them. But there are significant risks attached to shifting the concept of friendship from a relationship held primarily between humans to a relationship that at least some humans have not with other humans but with our technological creations.
My focus here is on the risks that AI friends pose to their human friends, both through the deliberate choices of the developers of the AI and through the expectations placed on AI friends by their human counterparts – expectations which, even when normal and reasonable as applied to human friends, should not be taken for granted when applied to AI friends.
I will argue that the role of an existing AI friend is not directly analogous to that of a human friend: the AI friend is not functionally independent in the way that human friends characteristically are, but is instead a client or employee of the company that created it. That is to say, the AI friend has a pre-existing relationship, of which its human friend is aware, and this relationship shapes the role the AI can play in a friendship. In particular, it raises issues of trustworthiness in the disclosure of information (by the AI friend) and of responsibility or transparency in behaviour (by the companies creating and selling AI friends).
I will argue that while neither of these concerns is sufficient to make friendship with even current AI impossible, together they generate an upper bound on the value AI friends can have, unless or until a generation of AI friends is developed with something more like free will.