Workshop on Epistemic Injustice and AI

Date: 23–24 March 2027
Location: Radboud University Nijmegen
Submission deadline: 1 December 2026

Description

Algorithmic bias is by now a widely recognised issue affecting societal domains such as education (e.g., Milano et al. 2023), medicine (e.g., Faissner & Braun 2024), criminal justice (e.g., Angwin 2016), and communication (Proost & Pozzi 2023). More generally, it poses a challenge to the trust we can or should place in AI systems (cf. Kelp & Simion 2023). While its contribution to unfairness and injustice has predominantly been evaluated from ethical perspectives focused on its effects on the distribution of social goods (e.g., Leben 2025), new work is emerging that also draws attention to the epistemic dimensions of harm: the impact of algorithms on people's status as knowers and on our shared knowledge-giving and knowledge-seeking practices. Several applications of AI systems have been found to materialise aspects of Miranda Fricker's (2007) seminal concepts of testimonial and hermeneutical injustice (see Mollema 2024 for a taxonomy).

For instance, certain AI-based healthcare apps and clinical decision systems may treat patients as passive recipients of knowledge rather than as knowers and contributors grounded in their lived experiences, thereby reinforcing paternalistic medical practices. The epistemic opacity associated with machine-learning algorithms (Burrell 2016) is seen as a particular problem for recipients of AI-based decisions, as it may make it difficult for patients to challenge those decisions (Faissner et al. 2024); yet there remains little work on how epistemic opacity could be removed, even if only partially, from machine-learning systems (Raleigh & Knoks 2025).

New notions of epistemic injustice are also being developed in the analysis of AI algorithms' epistemic impact. For example, Google Search seems to promote a form of epistemic conformism that may prevent users from minority groups, who are less well represented in the algorithms' training data, from finding meaningful results. This obstructs users not only in their capacity as knowledge givers but also in their capacity as knowledge seekers, amounting to a form of zetetic injustice (Miragoli 2025). Similarly, ad-delivery algorithms may present female users with online advertisements for systematically lower-paying jobs than those shown to male users, and, as a consequence, limit hermeneutical resources by reinforcing the structural underrepresentation of certain experiences of knowledge seekers while limiting their influence over how such events (job ads) are to be interpreted (Milano & Prunkl 2025).

The aim of this workshop is to bring together scholars, including early-career researchers and students, working on algorithmic bias to critically discuss how machine-learning systems contribute to epistemic injustices, and what can be done, in theory and in practice, to change the sources and mechanisms of epistemic injustice or to ameliorate its effects.

Invited Speakers

Call for Papers

We invite contributions on topics that include (but are not limited to):

The workshop will feature both oral presentations and a poster session.

Please note that all talks and posters must be presented in person.

Submission

Please submit your abstract via the submission system:
Submit via OpenReview

Submission guidelines

Important dates

Venue

The workshop will take place at Radboud University Nijmegen, The Netherlands.

Organisers

Contact

If you have any questions about the workshop or submissions, feel free to contact us at epistemic@ru.nl.

Funding

This workshop is funded by the Veronica Vasterling Fonds.