The interplay between ethical and epistemic virtues in AI-driven science
When striving for the responsible use of AI, it is important that we analyze and develop ethical virtues alongside their epistemic counterparts. As Hagendorff (2022) notes, the ethical virtues of justice, honesty, responsibility, and care correspond to the four prominent principles guiding the responsible use of AI: fairness, transparency, accountability, and privacy, respectively. All of these virtues have epistemic counterparts, namely epistemic justice, epistemic honesty, epistemic responsibility, and directed curiosity. This interconnectedness underscores the symbiotic relationship between ethical conduct and knowledge acquisition and emphasizes the need to integrate both facets in the development and evaluation of AI systems. To show how the ethical and epistemic virtues governing our use of AI are intertwined, I will analyze cases from science where this connection is prominent.
Having adequate representation of data from both the Global South and the Global North increases both justice and epistemic justice with respect to the results of models trained on these data sets. Although AI can mitigate some injustices in science, e.g., linguistic injustice, by correcting the written English of non-native speakers, it can also deepen overall injustice, as scientists from the Global South might have fewer resources to participate in large AI-driven experiments, especially when some of the tools are commercialized.
Honestly attributing the respective contributions of humans and machines to a finding yields the epistemic benefit of being able to assess the results. Only a human researcher can take responsibility for a finding, and in science epistemic responsibility plays a key role when we talk about the replicability of results.

Finally, caring for privacy in science is very important, and its epistemic counterpart, directed curiosity, i.e., directing one's inquiry toward important research questions rather than generating unfruitful minor results, is a virtue that future generations could cultivate when dealing with technology. Just as privacy safeguards individual autonomy and dignity, directed curiosity steers intellectual inquiry toward meaningful scientific questions, balancing the pursuit of knowledge with ethical considerations of individual rights and well-being. This synergy between ethics and epistemology fosters trustworthiness and societal acceptance of AI-driven scientific findings. Thus, in order to use AI adequately in science, researchers should work on improving and developing both their ethical and epistemic virtues.