Abstract
In recent years, Danish child protective services have come under increasing pressure, prompting the adoption of a decision-support algorithm, named Decision Support, to aid caseworkers in identifying children at heightened risk of maltreatment. Despite its critical role, the algorithm has not undergone formal evaluation. Through a freedom of information request, we gained partial access to the algorithm and conducted an audit. We find that the algorithm has significant methodological flaws, suffers from information leakage,
relies on inappropriate proxy values for assessing maltreatment, generates inconsistent risk scores, and exhibits age-based discrimination. Given these serious issues, we strongly advise against the use of algorithms of this kind in local government, municipal, and child protection settings. We call for rigorous evaluation of such tools before implementation and for continual monitoring after deployment, and we provide a series of specific recommendations to that end.
| Original language | English |
| --- | --- |
| Title of host publication | FAccT '24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency |
| Publication date | 2024 |
| Pages | 290-300 |
| Publication status | Published - 2024 |
Keywords
- Algorithmic audit
- Technological fairness
- Algorithmic decision making
- Algorithmic discrimination