Big City Bias: Evaluating the Impact of Metropolitan Size on Computational Job Market Abilities of Language Models

Charlie Campanella, Rob van der Goot

Research output: Article in proceedings · Research · Peer-reviewed

Abstract

Large language models have emerged as a useful technology for job matching, for both candidates and employers. Job matching is often based on a particular geographic location, such as a city or region. However, language models carry known biases, commonly derived from their training data. In this work, we aim to quantify the metropolitan size bias encoded within large language models, evaluating zero-shot salary, employer presence, and commute duration predictions across 384 US metropolitan regions. On all benchmarks, we observe correlations between metropolitan population and prediction accuracy, with the 10 smallest metropolitan regions showing upwards of 300% worse benchmark performance than the 10 largest.
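To make the evaluation described in the abstract concrete, the sketch below illustrates, on synthetic data, how a population–error relationship and a smallest-10 versus largest-10 comparison could be computed. This is a hypothetical illustration, not the authors' released code; the variable names, the use of salary as the target, and the synthetic error model are all assumptions.

```python
# Hypothetical sketch (not the authors' code): given per-metro populations,
# ground-truth values, and model predictions, measure how prediction error
# relates to metropolitan size.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Toy stand-ins for the paper's data: 384 metros with populations and
# ground-truth salaries; the synthetic noise grows as population shrinks.
n_metros = 384
population = np.sort(rng.lognormal(mean=12, sigma=1.2, size=n_metros))
true_salary = rng.normal(60_000, 10_000, size=n_metros)
noise_scale = 0.05 + 0.5 / np.log10(population)  # smaller metro -> noisier prediction
predicted_salary = true_salary * (1 + rng.normal(0, noise_scale))

# Relative error per metro, then its rank correlation with population.
rel_error = np.abs(predicted_salary - true_salary) / true_salary
rho, p_value = spearmanr(population, rel_error)
print(f"Spearman rho(population, relative error) = {rho:.3f} (p = {p_value:.3g})")

# Compare the 10 smallest and 10 largest metros (populations sorted ascending).
small10 = rel_error[:10].mean()
large10 = rel_error[-10:].mean()
print(f"Mean relative error, smallest 10 metros: {small10:.3f}")
print(f"Mean relative error, largest 10 metros:  {large10:.3f}")
print(f"Smallest-10 error is {small10 / large10:.1f}x the largest-10 error")
```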
Original language: English
Title of host publication: Proceedings of the First Workshop on Natural Language Processing for Human Resources (NLP4HR 2024)
Editors: Estevam Hruschka, Thom Lake, Naoki Otani, Tom Mitchell
Number of pages: 5
Place of publication: St. Julian's, Malta
Publisher: Association for Computational Linguistics
Publication date: 1 Mar 2024
Pages: 73-77
Publication status: Published - 1 Mar 2024
