Results of the NeurIPS’21 Challenge on Billion-Scale Approximate Nearest Neighbor Search

Martin Aumüller, Harsha Simhadri, George Williams, Matthijs Douze, Artem Babenko, Dmitry Baranchuk, Lucas Hosseini, Ravishankar Krishnaswamy, Gopal Srinivasa, Suhas Jayaram Subramanya, Jingdong Wang

Research output: Article in proceedings · Research · peer-reviewed


Despite the broad range of algorithms for Approximate Nearest Neighbor Search (ANNS), most empirical evaluations have focused on smaller datasets, typically of 1 million points \citep{Benchmark}. However, deploying recent advances in embedding-based techniques for search, recommendation and ranking at scale requires ANNS indices at billion, trillion or larger scale. Barring a few recent papers, there is limited consensus on which algorithms are effective at this scale vis-à-vis their hardware cost. This competition\footnote{\url{}} compares ANNS algorithms at billion scale by hardware cost, accuracy and performance. We set up an open-source evaluation framework\footnote{\url{}} and leaderboards for both standardized and specialized hardware. The competition involves three tracks. The standard hardware track T1 evaluates algorithms on an Azure VM with limited DRAM, often the bottleneck in serving billion-scale indices, where the embedding data can be hundreds of gigabytes in size. It uses FAISS \citep{Faiss17} as the baseline. The standard hardware track T2 additionally allows inexpensive SSDs alongside the limited DRAM and uses DiskANN \citep{DiskANN19} as the baseline. The specialized hardware track T3 allows any hardware configuration, and again uses FAISS as the baseline. We compiled six diverse billion-scale datasets, four newly released for this competition, that span a variety of modalities, data types, dimensions, deep learning models, distance functions and sources. The outcome of the competition was a ranked leaderboard of algorithms in each track based on recall at a query throughput threshold. Additionally, for track T3, separate leaderboards were created based on recall as well as cost-normalized and power-normalized query throughput.
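As a rough illustration of the recall metric behind the leaderboards, the sketch below averages, over all queries, the fraction of the true k nearest neighbors that an algorithm's answer set recovers. This is a minimal sketch; the official evaluation framework's exact handling of distance ties may differ.

```python
import numpy as np

def recall_at_k(approx_ids, true_ids, k):
    """Average fraction of the true k nearest neighbors that appear
    among each query's k returned candidates."""
    hits = 0
    for approx, true in zip(approx_ids, true_ids):
        hits += len(set(approx[:k]) & set(true[:k]))
    return hits / (len(true_ids) * k)

# Toy example: 2 queries, k = 3. Query 0 recovers 2 of its 3 true
# neighbors; query 1 recovers all 3, so recall is 5/6.
true_ids = np.array([[0, 1, 2], [5, 6, 7]])
approx_ids = np.array([[0, 2, 9], [5, 6, 7]])
print(recall_at_k(approx_ids, true_ids, 3))  # → 0.8333...
```

In the competition setting this accuracy number is reported only for runs that clear a fixed query throughput threshold, so algorithms cannot trade unbounded latency for recall.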
Original language: English
Title of host publication: Proceedings of the NeurIPS 2021 Competitions and Demonstrations Track
Publication date: 2022
Publication status: Published - 2022


