On the Assessment of Expertise Profiles

Richard Berendsen, Maarten de Rijke, Krisztian Balog, Toine Bogers, Antal van den Bosch

Research output: Journal article › Research › peer-review


Expertise retrieval has attracted significant interest in the field of information retrieval. Expert finding has been studied extensively, with less attention going to the complementary task of expert profiling, that is, automatically identifying topics about which a person is knowledgeable. We describe a test collection for expert profiling in which expert users have self-selected their knowledge areas. Motivated by the sparseness of this set of knowledge areas, we report on an assessment experiment in which academic experts judge a profile that has been automatically generated by state-of-the-art expert-profiling algorithms; optionally, experts can indicate a level of expertise for relevant areas. Experts may also give feedback on the quality of the system-generated knowledge areas. We report on a content analysis of these comments and gain insights into what aspects of profiles matter to experts. We provide an error analysis of the system-generated profiles, identifying factors that help explain why certain experts may be harder to profile than others. We also analyze the impact of using self-selected versus judged system-generated knowledge areas as ground truth on the evaluation of expert-profiling systems: the two rank systems somewhat differently, but detect about the same number of pairwise significant differences, despite the judged system-generated assessments being sparser.
Original language: English
Journal: Journal of the Association for Information Science and Technology
Issue number: 10
Pages (from-to): 2024-2044
Number of pages: 21
Publication status: Published - Oct 2013
Externally published: Yes
