This paper describes ANN-Benchmarks, a tool for evaluating the performance of in-memory approximate nearest neighbor algorithms. It provides a standard interface for measuring the performance and quality achieved by nearest neighbor algorithms on different standard data sets. It supports several different ways of integrating k-NN algorithms, and its configuration system automatically tests a range of parameter settings for each algorithm. Algorithms are compared with respect to many different (approximate) quality measures, and adding more is easy and fast; the included plotting front-ends can visualize these as images, LaTeX plots, and websites with interactive plots. ANN-Benchmarks aims to provide a constantly updated overview of the current state of the art of k-NN algorithms. In the short term, this overview allows users to choose the correct k-NN algorithm and parameters for their similarity search task; in the longer term, algorithm designers will be able to use this overview to test and refine automatic parameter tuning. The paper gives an overview of the system, evaluates the results of the benchmark, and points out directions for future work. Interestingly, very different approaches to k-NN search yield comparable quality-performance trade-offs. The system is available at http://ann-benchmarks.com.
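The central quality measure behind such quality-performance trade-offs is recall: the fraction of the true k nearest neighbors that an approximate algorithm actually returns. The following minimal sketch illustrates the idea; it is not the ANN-Benchmarks API, and all function names and the toy data set are hypothetical.

```python
import math

def knn_brute_force(points, query, k):
    """Exact k nearest neighbors by Euclidean distance (the ground truth)."""
    order = sorted(range(len(points)),
                   key=lambda i: math.dist(points[i], query))
    return set(order[:k])

def recall(approx_ids, true_ids):
    """Fraction of the true k neighbors found by the approximate algorithm."""
    return len(set(approx_ids) & true_ids) / len(true_ids)

# Hypothetical toy data: five 2-d points and one query.
points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 2.0), (3.0, 3.0)]
query = (0.1, 0.1)
truth = knn_brute_force(points, query, k=3)   # exact 3-NN answer
approx = [0, 1, 4]                            # a hypothetical approximate answer
print(recall(approx, truth))                  # 2 of the 3 true neighbors found
```

A benchmark then plots recall against queries per second across many parameter settings, tracing out each algorithm's quality-performance curve.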