TY - CONF
T1 - Gendered Ambiguous Pronoun (GAP) Shared Task at the Gender Bias in NLP Workshop 2019
AU - Webster, Kellie
AU - Costa-jussà, Marta R.
AU - Hardmeier, Christian
AU - Radford, Will
PY - 2019/8/2
Y1 - 2019/8/2
N2 - The 1st ACL workshop on Gender Bias in Natural Language Processing included a shared task on gendered ambiguous pronoun (GAP) resolution. This task was based on the coreference challenge defined in Webster et al. (2018), designed to benchmark the ability of systems to resolve pronouns in real-world contexts in a gender-fair way. 263 teams competed via a Kaggle competition, with the winning system achieving logloss of 0.13667 and near gender parity. We review the approaches of eleven systems with accepted description papers, noting their effective use of BERT (Devlin et al., 2018), both via fine-tuning and for feature extraction, as well as ensembling.
KW - Gender bias in natural language processing
KW - Coreference resolution
KW - Gendered Ambiguous Pronoun (GAP) task
KW - BERT
KW - Fairness in NLP
DO - 10.18653/v1/W19-3801
M3 - Article in proceedings
SN - 978-1-950737-40-6
BT - Proceedings of the First Workshop on Gender Bias in Natural Language Processing
ER -