Failing Our Youngest: On the Biases, Pitfalls, and Risks in a Decision Support Algorithm Used for Child Protection

Therese Moreau Hansen, Roberta Sinatra, Vedran Sekara

Research output: Article in proceedings › Research › peer-review

Abstract

In recent years, Danish child protective services have come under increasing pressure, prompting the adoption of a decision-support algorithm, named Decision Support, to aid caseworkers in identifying children at heightened risk of maltreatment. Despite its critical role, this algorithm has not undergone formal evaluation. Through a freedom of information request, we were able to partially access the algorithm and conduct an audit. We find that the algorithm has significant methodological flaws: it suffers from information leakage, relies on inappropriate proxy values for assessing maltreatment, generates inconsistent risk scores, and exhibits age-based discrimination. Given these serious issues, we strongly advise against the use of algorithms of this kind in local government, municipal, and child protection settings, and we call for rigorous evaluation of such tools before implementation and for continual monitoring post-deployment, listing a series of specific recommendations.
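The abstract does not describe the audit methodology, but two of the reported failure modes, inconsistent risk scores and age-based discrimination, lend themselves to simple black-box checks. The sketch below is a minimal illustration only, not the authors' procedure: the `risk_score` function, the `check_consistency` and `check_age_disparity` helpers, and the synthetic records are all hypothetical stand-ins for the audited model and its inputs.

```python
# Illustrative black-box audit checks (hypothetical; not the paper's method).
# `risk_score` is a toy stand-in for the audited Decision Support model.

import random
import statistics


def risk_score(record: dict) -> float:
    """Hypothetical scoring function; deterministic per input record."""
    random.seed(hash(frozenset(record.items())) % (2**32))
    base = 0.3 + 0.05 * (17 - record["age"]) / 17  # toy age effect
    return min(1.0, max(0.0, base + random.uniform(-0.05, 0.05)))


def check_consistency(record: dict, n: int = 10) -> bool:
    """Consistency check: the same input should always yield the same score."""
    scores = {risk_score(record) for _ in range(n)}
    return len(scores) == 1


def check_age_disparity(records: list[dict], cutoff: int = 6) -> float:
    """Disparity check: gap in mean score between younger and older children."""
    young = [risk_score(r) for r in records if r["age"] < cutoff]
    old = [risk_score(r) for r in records if r["age"] >= cutoff]
    return statistics.mean(young) - statistics.mean(old)


if __name__ == "__main__":
    # Synthetic records, one per age in 0..17, to probe the two failure modes.
    records = [{"age": a, "prior_reports": 1} for a in range(18)]
    print("deterministic:", check_consistency(records[0]))
    print("mean score gap (young - old):", round(check_age_disparity(records), 3))
```

A real audit of this kind would run such probes against the deployed model rather than a stand-in, and would test consistency and group disparities across many input dimensions, not age alone.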
Original language: English
Title of host publication: FAccT '24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency
Publication date: 2024
Pages: 290-300
Publication status: Published - 2024

Keywords

  • Algorithmic audit
  • Technological fairness
  • Algorithmic decision making
  • Algorithmic discrimination

