Abstract
The goal of the SBS 2015 Suggestion Track is to evaluate approaches for supporting users who search collections of books and express their information needs both in a query and through example books. The track investigates the complex nature of relevance in book search and the role of traditional and user-generated book metadata in retrieval. We extended last year's investigation into the nature of book suggestions from the LibraryThing forums and how they compare to book relevance judgements. Participants were encouraged to incorporate rich user profiles of both topic creators and other LibraryThing users to explore the relative value of recommendation and retrieval paradigms for book search. We found further evidence that such forum suggestions are a valuable alternative to traditional test collections based on top-k pooling and editorial relevance judgements. In terms of system effectiveness, the best-performing systems include some form of learning-to-rank. The complex nature of the requests and the book descriptions, with multiple sources of evidence, appears to require careful balancing of system parameters.
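To illustrate the kind of approach the abstract refers to, the sketch below shows a minimal pointwise learning-to-rank re-ranker that balances several evidence sources with learned weights. It is not any participant's actual system: the feature set (a text-retrieval score, overlap with user-generated tags, similarity to the topic creator's profile) and the use of forum suggestions as binary training labels are illustrative assumptions.

```python
# Hypothetical sketch: pointwise learning-to-rank over several evidence
# sources, not a reconstruction of any SBS 2015 participant's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row holds assumed features for one (topic, book) pair:
# (retrieval_score, tag_overlap, profile_similarity).
X_train = np.array([
    [12.3, 0.40, 0.71],
    [ 8.1, 0.10, 0.20],
    [15.0, 0.55, 0.63],
    [ 3.2, 0.05, 0.12],
])
# Labels: 1 = book was suggested on the LibraryThing forum, 0 = not.
y_train = np.array([1, 0, 1, 0])

# The learned coefficients act as the "balance of system parameters"
# across the evidence sources.
model = LogisticRegression().fit(X_train, y_train)

# Re-rank candidate books for a topic by predicted relevance probability.
candidates = np.array([
    [10.5, 0.30, 0.50],
    [ 6.7, 0.60, 0.80],
])
scores = model.predict_proba(candidates)[:, 1]
print(np.argsort(-scores))  # indices of candidates, best first
```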
| Original language | English |
|---|---|
| Journal | CEUR Workshop Proceedings |
| Volume | 1391 |
| ISSN | 1613-0073 |
| Publication status | Published - 1 Jan 2015 |
| Externally published | Yes |
| Event | 16th Conference and Labs of the Evaluation Forum, CLEF 2015 - Toulouse, France. Duration: 8 Sept 2015 → 11 Sept 2015 |
Conference
| Conference | 16th Conference and Labs of the Evaluation Forum, CLEF 2015 |
|---|---|
| Country/Territory | France |
| City | Toulouse |
| Period | 08/09/2015 → 11/09/2015 |
Keywords
- Book Search
- Information Retrieval
- User-Generated Metadata
- Relevance Judgements
- Learning-to-Rank