Rethinking Skill Extraction in the Job Market Domain using Large Language Models

Khanh Cao Nguyen, Mike Zhang, Syrielle Montariol, Antoine Bosselut

Research output: Conference article in proceedings or book/report chapter › Article in proceedings › Research › peer-review

Abstract

Skill Extraction involves identifying skills and qualifications mentioned in documents such as job postings and resumes. It is commonly tackled by training supervised models using a sequence labeling approach with BIO tags. However, the reliance on manually annotated data limits the generalizability of such approaches. Moreover, the common BIO setting restricts the models' ability to capture complex skill patterns and to handle ambiguous mentions. In this paper, we explore the use of in-context learning to overcome these challenges, on a benchmark of six skill extraction datasets that we uniformize. Our approach leverages the few-shot learning capabilities of large language models (LLMs) to identify and extract skills from sentences. We show that LLMs, while not on par with traditional supervised models in terms of performance, better handle syntactically complex skill mentions.
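
To make the two settings contrasted in the abstract concrete, below is a minimal, illustrative sketch (not the authors' code) of how a BIO-tagged sentence can be collapsed into skill spans and then used as a demonstration in a few-shot extraction prompt for an LLM. All example sentences, tag labels, and prompt wording are assumptions for illustration only.

```python
# Illustrative sketch only: converts BIO-tagged tokens into skill spans and
# builds a simple few-shot prompt for an LLM. All example data and prompt
# wording are hypothetical, not taken from the paper.

from typing import List, Tuple


def bio_to_spans(tokens: List[str], tags: List[str]) -> List[str]:
    """Collect contiguous B-/I- tagged tokens into skill mention strings."""
    spans, current = [], []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B"):
            if current:
                spans.append(" ".join(current))
            current = [token]
        elif tag.startswith("I") and current:
            current.append(token)
        else:
            if current:
                spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans


def build_prompt(demos: List[Tuple[str, List[str]]], query: str) -> str:
    """Assemble a few-shot extraction prompt from demonstration pairs."""
    lines = ["Extract the skills mentioned in each sentence."]
    for sentence, skills in demos:
        lines.append(f"Sentence: {sentence}")
        lines.append(f"Skills: {', '.join(skills) if skills else 'None'}")
    lines.append(f"Sentence: {query}")
    lines.append("Skills:")
    return "\n".join(lines)


if __name__ == "__main__":
    # Hypothetical BIO-annotated demonstration sentence.
    tokens = ["Experience", "with", "project", "management", "and", "Python"]
    tags = ["O", "O", "B-SKILL", "I-SKILL", "O", "B-SKILL"]
    demo_skills = bio_to_spans(tokens, tags)  # ['project management', 'Python']

    prompt = build_prompt(
        demos=[(" ".join(tokens), demo_skills)],
        query="Strong communication skills and knowledge of SQL are required.",
    )
    print(prompt)  # The resulting prompt would then be completed by an LLM.
```

In the supervised setting, a sequence labeler is trained to predict the BIO tags directly; in the in-context learning setting sketched here, a handful of such demonstrations are placed in the prompt and the LLM generates the skill spans as free text.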
Original language: English
Title of host publication: 1st Workshop on Natural Language Processing for Human Resources
Number of pages: 16
Publisher: Association for Computational Linguistics
Publication date: Mar 2024
Pages: 27–42
Publication status: Published - Mar 2024
Event: NLP4HR Workshop 2024: Workshop on Natural Language Processing for Human Resources - St. Julians, Malta
Duration: 22 Mar 2024 → …
https://megagon.ai/nlp4hr-2024/

Workshop

Workshop: NLP4HR Workshop 2024
Country/Territory: Malta
City: St. Julians
Period: 22/03/2024 → …
Internet address: https://megagon.ai/nlp4hr-2024/

Keywords

  • Skill Extraction
  • In-Context Learning
  • BIO Tags
  • Supervised Models
  • Few-Shot Learning
