Large-Scale Similarity Joins With Guarantees

Rasmus Pagh

Research output: Article in proceedings › Research › Peer-reviewed


The ability to handle noisy or imprecise data is becoming increasingly important in computing. In the database community the notion of similarity join has been studied extensively, yet existing solutions have offered weak performance guarantees. Either they are based on deterministic filtering techniques that often, but not always, succeed in reducing computational costs, or they are based on randomized techniques that have improved guarantees on computational cost but come with a probability of not returning the correct result. The aim of this paper is to give an overview of randomized techniques for high-dimensional similarity search, and to discuss recent advances towards making these techniques more widely applicable by eliminating the probability of error and improving the locality of data access.
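As a concrete illustration of the kind of randomized technique the abstract refers to (not the paper's own algorithm), the following sketch implements a MinHash-based LSH similarity join over sets. The function names, parameters, and thresholds are illustrative assumptions. Note how it exhibits exactly the guarantee trade-off discussed above: an exact Jaccard check removes false positives, but truly similar pairs can still be missed with some probability.

```python
import random


def make_hash_funcs(k, prime=2_147_483_647, seed=42):
    # k random affine hash functions x -> (a*x + b) mod prime.
    rng = random.Random(seed)
    params = [(rng.randrange(1, prime), rng.randrange(prime)) for _ in range(k)]
    return [lambda x, a=a, b=b: (a * hash(x) + b) % prime for a, b in params]


def minhash_signature(s, hash_funcs):
    # One min-hash value per hash function: the minimum over the set's elements.
    return tuple(min(h(x) for x in s) for h in hash_funcs)


def jaccard(a, b):
    return len(a & b) / len(a | b)


def similarity_join(sets, threshold, k=32, bands=8):
    """Return index pairs (i, j) with jaccard(sets[i], sets[j]) >= threshold.

    LSH banding prunes the candidate pairs; the final exact Jaccard check
    ensures no false positives are reported. Dissimilar-but-colliding pairs
    are filtered out, while similar pairs may be missed with small
    probability (false negatives).
    """
    hash_funcs = make_hash_funcs(k)
    sigs = [minhash_signature(s, hash_funcs) for s in sets]
    rows = k // bands

    # Hash each band of the signature into a bucket; items sharing a
    # bucket in any band become candidate pairs.
    buckets = {}
    for i, sig in enumerate(sigs):
        for b in range(bands):
            key = (b, sig[b * rows:(b + 1) * rows])
            buckets.setdefault(key, []).append(i)

    candidates = set()
    for ids in buckets.values():
        for x in range(len(ids)):
            for y in range(x + 1, len(ids)):
                candidates.add((ids[x], ids[y]))

    # Verification step: exact similarity check on candidates only.
    return sorted(p for p in candidates
                  if jaccard(sets[p[0]], sets[p[1]]) >= threshold)
```

Identical sets always collide (their signatures agree in every band), so exact duplicates are found with certainty; pairs just above the threshold are found only with a probability controlled by the choice of `k` and `bands`. Eliminating that residual error probability is precisely the direction the paper surveys.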
Original language: English
Title of host publication: 18th International Conference on Database Theory (ICDT 2015)
Publication date: 2015
ISBN (Electronic): 978-3-939897-79-8
Publication status: Published - 2015
Series: Leibniz International Proceedings in Informatics


Keywords:

  • Similarity Join
  • Randomized Techniques
  • High-Dimensional Data
  • Computational Guarantees
  • Locality of Data Access

