TY - GEN
T1 - The space complexity of inner product filters
AU - Pagh, Rasmus
AU - Sivertsen, Johan von Tangen
PY - 2020
Y1 - 2020
N2 - Motivated by the problem of filtering candidate pairs in inner product similarity joins, we study the following inner product estimation problem: given parameters d ∈ ℕ, α > β ≥ 0, and unit vectors x, y ∈ ℝ^d, consider the task of distinguishing between the cases ⟨x,y⟩ ≤ β and ⟨x,y⟩ ≥ α, where ⟨x,y⟩ = ∑_{i=1}^d x_i y_i is the inner product of the vectors x and y. The goal is to distinguish these cases based on information about each vector encoded independently in a bit string of the shortest possible length. In contrast to much work on compressing vectors using randomized dimensionality reduction, we seek to solve the problem deterministically, with no probability of error. Inner product estimation can be solved in general by estimating ⟨x,y⟩ with an additive error bounded by ε = α - β. We show that d log₂(√(1-β)/ε) ± Θ(d) bits of information about each vector are necessary and sufficient. Our upper bound is constructive and improves a known upper bound of d log₂(1/ε) + O(d) by up to a factor of 2 when β is close to 1. The lower bound holds even in a stronger model where one of the vectors is known exactly and an arbitrary estimation function is allowed.
AB - Motivated by the problem of filtering candidate pairs in inner product similarity joins, we study the following inner product estimation problem: given parameters d ∈ ℕ, α > β ≥ 0, and unit vectors x, y ∈ ℝ^d, consider the task of distinguishing between the cases ⟨x,y⟩ ≤ β and ⟨x,y⟩ ≥ α, where ⟨x,y⟩ = ∑_{i=1}^d x_i y_i is the inner product of the vectors x and y. The goal is to distinguish these cases based on information about each vector encoded independently in a bit string of the shortest possible length. In contrast to much work on compressing vectors using randomized dimensionality reduction, we seek to solve the problem deterministically, with no probability of error. Inner product estimation can be solved in general by estimating ⟨x,y⟩ with an additive error bounded by ε = α - β. We show that d log₂(√(1-β)/ε) ± Θ(d) bits of information about each vector are necessary and sufficient. Our upper bound is constructive and improves a known upper bound of d log₂(1/ε) + O(d) by up to a factor of 2 when β is close to 1. The lower bound holds even in a stronger model where one of the vectors is known exactly and an arbitrary estimation function is allowed.
KW - Inner product similarity joins
KW - Deterministic inner product estimation
KW - Bit string encoding
KW - Dimensionality reduction
KW - Vector compression
U2 - 10.4230/LIPIcs.ICDT.2020.22
DO - 10.4230/LIPIcs.ICDT.2020.22
M3 - Article in proceedings
T3 - Leibniz International Proceedings in Informatics (LIPIcs)
SP - 22:1–22:14
BT - 23rd International Conference on Database Theory (ICDT 2020)
PB - Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH
ER -