Abstract
In this demonstration, we present Exquisitor, a media explorer capable of learning user preferences in real-time during interactions with the 99.2 million images of YFCC100M. Exquisitor owes its efficiency to innovations in data representation, compression, and indexing. Exquisitor can complete each interaction round, including learning preferences and presenting the most relevant results, in less than 30 ms using only a single CPU core and modest RAM. In short, Exquisitor can bring large-scale interactive learning to standard desktops and laptops, and even high-end mobile devices.
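The abstract describes an interactive-learning loop: in each round the system learns from the user's judged examples and returns the most relevant unseen items. Exquisitor's actual pipeline relies on compressed feature representations and cluster-based indexing to reach its reported speed; the sketch below instead illustrates the round structure with a simple Rocchio-style relevance-feedback step over plain feature vectors. All names (`interaction_round`, the item ids) are illustrative assumptions, not the paper's API.

```python
def interaction_round(features, positives, negatives, k=6):
    """One interactive-learning round (illustrative sketch, not
    Exquisitor's actual compressed/indexed implementation).

    features:  dict mapping item id -> feature vector (list of floats)
    positives: set of ids the user marked relevant
    negatives: set of ids the user marked irrelevant
    Returns the k highest-scoring unjudged item ids.
    """
    dim = len(next(iter(features.values())))
    # Rocchio-style preference vector: mean(positives) - mean(negatives)
    w = [0.0] * dim
    for pid in positives:
        for i, v in enumerate(features[pid]):
            w[i] += v / len(positives)
    for nid in negatives:
        for i, v in enumerate(features[nid]):
            w[i] -= v / len(negatives)
    # Score every unjudged item by dot product with the preference vector
    judged = positives | negatives
    scored = [(sum(wi * vi for wi, vi in zip(w, vec)), iid)
              for iid, vec in features.items() if iid not in judged]
    scored.sort(reverse=True)
    return [iid for _, iid in scored[:k]]
```

In the real system this scoring step is not a linear scan over 99.2 million vectors: the indexing and compression innovations the abstract mentions are what keep each round under 30 ms on a single CPU core.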
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the ACM Multimedia Conference |
| Number of pages | 3 |
| Place of publication | Nice, France |
| Publisher | Association for Computing Machinery |
| Publication date | Oct 2019 |
| Pages | 1029-1031 |
| ISBN (electronic) | 978-1-4503-6889-6 |
| DOIs | |
| Publication status | Published - Oct 2019 |
Keywords
- Interactive multimodal learning
- Scalability
- 100 million images