Machine Learning Lunch Meeting
A bi-metric framework for fast similarity search
Event Details
Everyone is invited to the weekly Machine Learning Lunch Meetings every Friday 12:30-1:30pm. Faculty members from Computer Sciences, Statistics, ECE, and other departments will discuss their latest groundbreaking research in machine learning. This is an opportunity to network with faculty and fellow researchers, and to learn about the cutting-edge research being conducted at our university. Please see https://sites.google.com/view/wiscmllm/home for the schedule and more information.
Speaker: Sandeep Silwal
Abstract: We propose a new "bi-metric" framework for designing nearest neighbor data structures. Our framework assumes two dissimilarity functions: a *ground-truth* metric that is accurate but expensive to compute (e.g., a cross-encoder that runs a large neural network to compare two sentences), and a *proxy* metric that is cheaper but less accurate (e.g., distance between embeddings from a very small model). In both theory and practice, we show how to construct data structures using only the proxy metric such that the query procedure achieves the accuracy of the expensive metric, while using only a limited number of calls to both metrics.
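To make the division of labor concrete, here is a minimal, hypothetical sketch (not the authors' implementation; the callables `proxy_dist` and `exact_dist` and the simple k-NN graph are illustrative stand-ins): the index is built using only the cheap proxy metric, and the query routine runs a budgeted best-first search in which the expensive metric scores only the nodes it actually visits.

```python
import heapq

def build_proxy_index(points, proxy_dist, num_neighbors=8):
    """Build a simple k-NN graph using only the cheap proxy metric.
    (Hypothetical stand-in for a real graph index such as DiskANN.)"""
    graph = {}
    for i, p in enumerate(points):
        dists = sorted((proxy_dist(p, q), j) for j, q in enumerate(points) if j != i)
        graph[i] = [j for _, j in dists[:num_neighbors]]
    return graph

def bi_metric_query(query, points, graph, exact_dist, start=0, budget=50):
    """Best-first search over the proxy-built graph; the expensive metric
    scores only the visited nodes, using at most `budget` calls."""
    best_d = exact_dist(query, points[start])
    best_i = start
    frontier = [(best_d, start)]
    visited = {start}
    calls = 1
    while frontier and calls < budget:
        _, node = heapq.heappop(frontier)
        for nbr in graph[node]:
            if nbr in visited or calls >= budget:
                continue
            visited.add(nbr)
            d = exact_dist(query, points[nbr])  # expensive metric guides the search
            calls += 1
            if d < best_d:
                best_d, best_i = d, nbr
            heapq.heappush(frontier, (d, nbr))
    return best_i, best_d
```

In this sketch the proxy metric is confined to index construction, while the query spends a bounded number of expensive-metric calls, mirroring the trade-off described in the abstract.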
Our theoretical results instantiate this framework for two popular nearest neighbor search algorithms: DiskANN and Cover Tree. In both cases we show that, as long as the proxy metric used to construct the data structure approximates the ground-truth metric up to a bounded factor, our data structure achieves arbitrarily good approximation guarantees with respect to the ground-truth metric. On the empirical side, we apply the framework to the text retrieval problem with two dissimilarity functions evaluated by ML models with vastly different computational costs. We observe that for almost all data sets in the MTEB benchmark, our approach achieves a considerably better accuracy-efficiency tradeoff than the alternatives, such as re-ranking.
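For contrast, the re-ranking alternative mentioned above can be sketched as follows (again with hypothetical names, not the authors' code): retrieve a fixed shortlist with the proxy metric, then re-score only that shortlist with the expensive metric. Unlike the search above, the expensive metric never influences which candidates are considered.

```python
def rerank_baseline(query, points, proxy_dist, exact_dist, shortlist=50):
    """Retrieve-then-rerank: shortlist candidates by the cheap proxy metric,
    then pick the best of that shortlist under the expensive metric."""
    candidates = sorted(range(len(points)),
                        key=lambda i: proxy_dist(query, points[i]))[:shortlist]
    best = min(candidates, key=lambda i: exact_dist(query, points[i]))
    return best, exact_dist(query, points[best])
```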
Joint work with Haike Xu and Piotr Indyk.