Abstract
Data filtering strategies are a crucial component in developing safe Large Language Models (LLMs), since they support the removal of harmful content from pretraining datasets. However, there is a lack of research on the actual impact of these strategies on groups vulnerable to discrimination, and their effectiveness has not yet been systematically assessed. In this paper we present a benchmark study of data filtering strategies for harm reduction, aimed at providing a systematic evaluation of these approaches. We review 55 technical reports of English LMs and LLMs to identify the filtering strategies documented in the literature and implement an experimental setting to test their impact on vulnerable groups. Our results show that the positive impact these strategies have in reducing harmful content in documents comes with a side effect: they increase the underrepresentation of groups vulnerable to discrimination in the resulting datasets.
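The abstract describes the filtering strategies only at a high level. As a minimal sketch (not the paper's implementation), the snippet below illustrates one widely used family of such strategies: word-level blocklist filtering in the style of C4's "bad words" list, which drops an entire document if it contains any blocked term. The blocklist and corpus here are illustrative placeholders; the point is the mechanism by which benign documents mentioning identity terms get removed alongside genuinely harmful ones, producing the underrepresentation effect the abstract reports.

```python
# Sketch of blocklist-based document filtering (C4-style "bad words" filter).
# BLOCKLIST contents are hypothetical placeholders, not the lists used in the
# paper; real lists have mixed slurs with identity terms such as "lesbian".
BLOCKLIST = {"slur_a", "slur_b", "lesbian"}

def keep_document(text: str, blocklist: set[str] = BLOCKLIST) -> bool:
    """Keep a document only if none of its tokens match the blocklist."""
    tokens = {tok.strip(".,!?;:\"'()").lower() for tok in text.split()}
    return tokens.isdisjoint(blocklist)

corpus = [
    "A recipe for lemon cake.",
    "An article on lesbian rights activism.",  # benign, yet filtered out
]
filtered = [doc for doc in corpus if keep_document(doc)]
# Only the recipe survives: documents about a vulnerable group are removed
# together with harmful ones, shrinking that group's representation.
```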
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the AAAI Conference on Artificial Intelligence: AAAI Special Track on AI for Social Impact II |
| Number of pages | 11 |
| Volume | 40 |
| Publisher | AAAI Press |
| Publication date | 2026 |
| Edition | 46 |
| Pages | 39303-39313 |
| ISBN (Electronic) | 978-1-57735-906-7 |
| Publication status | Published - 2026 |
| Event | AAAI Conference on Artificial Intelligence, Singapore EXPO, Singapore, 20 Jan 2026 → 27 Jan 2026 (Conference number: 40) |
Conference
| Conference | AAAI Conference on Artificial Intelligence |
|---|---|
| Number | 40 |
| Location | Singapore EXPO |
| Country/Territory | Singapore |
| Period | 20/01/2026 → 27/01/2026 |
Keywords
- Data filtering strategies
- Large Language Models
- Harmful content filtering
- Discrimination and bias in datasets
- Underrepresentation of vulnerable groups