Abstract
Standardized benchmarks are crucial for fair performance comparisons across systems. While extremely valuable, these benchmarks all assume a setup in which the workload is well-defined and known in advance. Unfortunately, this has led to data management systems being over-tuned for particular benchmark workloads such as TPC-H or TPC-C. As a result, benchmark results frequently fail to reflect the behavior of these systems in real-world settings, where workloads often differ significantly from the “known” benchmark workloads. To address this issue, we present *surprise benchmarking*, a complementary approach to current standardized benchmarking in which “unknown” queries are exercised during the evaluation.
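To make the core idea concrete, the sketch below shows one way an evaluator might draw “unknown” query variants at run time rather than replaying a fixed list, so the system under test cannot be tuned for them in advance. This is a minimal illustration only: the query template, column names, and mutation space here are invented, and the paper’s actual query-generation procedure may differ.

```python
import random

# Hypothetical sketch of the "surprise benchmarking" idea: draw query
# variants at evaluation time instead of replaying a fixed, known workload.
# The template and parameter pools below are invented for illustration.

BASE_TEMPLATE = (
    "SELECT {agg}(l_extendedprice) FROM lineitem "
    "WHERE l_shipdate >= DATE '{date}' AND l_quantity < {qty}"
)

AGGREGATES = ["SUM", "AVG", "MIN", "MAX", "COUNT"]
DATES = ["1994-01-01", "1995-06-15", "1996-03-01"]


def surprise_query(rng: random.Random) -> str:
    """Draw one 'unknown' variant by randomizing structure and parameters."""
    return BASE_TEMPLATE.format(
        agg=rng.choice(AGGREGATES),
        date=rng.choice(DATES),
        qty=rng.randint(10, 50),
    )


if __name__ == "__main__":
    rng = random.Random()  # unseeded, so the workload is not known in advance
    for _ in range(3):
        print(surprise_query(rng))
```

Because the variants are sampled rather than fixed, a system tuned only for the published benchmark queries gains no advantage, which is precisely the gap surprise benchmarking is meant to expose.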
Original language | English
---|---
Title of host publication | Proceedings of the Tenth International Workshop on Testing Database Systems, DBTest 2024, Santiago, Chile, 9 June 2024
Number of pages | 8
Publisher | Association for Computing Machinery
Publication date | 2024
Pages | 1-8
DOIs |
Publication status | Published - 2024
Keywords
- Benchmarking
- Database