Abstract
Large language models (LLMs) have emerged as a useful technology for job matching, for both candidates and employers. Job matching is often based on a particular geographic location, such as a city or region. However, LLMs have known biases, commonly derived from their training data. In this work, we aim to quantify the metropolitan size bias encoded within large language models, evaluating zero-shot salary, employer presence, and commute duration predictions across 384 United States metropolitan regions. Across all benchmarks, we observe correlations between metropolitan population and prediction accuracy, with the ten smallest metropolitan regions showing upwards of 300% worse benchmark performance than the ten largest.
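As a rough illustration of the kind of evaluation the abstract describes, the sketch below queries a model zero-shot for a per-metro salary estimate and correlates the resulting error with metropolitan population. It is not the authors' code: the model name, prompt wording, ground-truth figures, and error metric are all assumptions made for the example.

```python
# Illustrative sketch (not the paper's implementation): zero-shot salary
# prediction per metro area, with relative error correlated against population.
# Model name, prompt wording, and ground-truth values are assumptions.
import re

from openai import OpenAI
from scipy.stats import spearmanr

client = OpenAI()


def predict_median_salary(metro_name: str) -> float:
    """Ask the model for a zero-shot median-salary estimate for one metro area."""
    prompt = (
        f"What is the median annual salary in the {metro_name} metropolitan area, "
        "in US dollars? Reply with a single number."
    )
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; the paper's choice may differ
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    match = re.search(r"[\d,]+(?:\.\d+)?", reply)
    return float(match.group().replace(",", "")) if match else float("nan")


# Placeholder entries for illustration only; a real run would cover all 384
# metro areas with ground truth from an official source such as BLS data.
metros = [
    {"name": "New York-Newark-Jersey City, NY-NJ-PA", "population": 19_500_000, "true_salary": 56_000},
    {"name": "Carson City, NV", "population": 58_000, "true_salary": 47_000},
]

errors, populations = [], []
for m in metros:
    pred = predict_median_salary(m["name"])
    errors.append(abs(pred - m["true_salary"]) / m["true_salary"])  # relative error
    populations.append(m["population"])

# A negative rank correlation would indicate that larger metros receive
# more accurate predictions, i.e. a metropolitan size bias.
rho, p_value = spearmanr(populations, errors)
print(f"Spearman rho={rho:.3f}, p={p_value:.3g}")
```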
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the First Workshop on Natural Language Processing for Human Resources (NLP4HR 2024) |
| Editors | Estevam Hruschka, Thom Lake, Naoki Otani, Tom Mitchell |
| Number of pages | 5 |
| Place of publication | St. Julian's, Malta |
| Publisher | Association for Computational Linguistics |
| Publication date | 1 Mar 2024 |
| Pages | 73-77 |
| Publication status | Published - 1 Mar 2024 |