Abstract
Standardized benchmarks are crucial to ensure a fair comparison of performance across systems. While extremely valuable, these benchmarks all use a setup where the workload is well-defined and known in advance. Unfortunately, this has led to over-tuning data management systems for particular benchmark workloads such as TPC-H or TPC-C. As a result, benchmarking results frequently do not reflect the behavior of these systems in many real-world settings, since workloads often vary significantly from the "known" benchmarking workloads. To address this issue, we present surprise benchmarking, a complementary approach to current standardized benchmarking in which "unknown" queries are exercised during the evaluation.
| Original language | English |
|---|---|
| Title | Proceedings of the Tenth International Workshop on Testing Database Systems, DBTest 2024, Santiago, Chile, 9 June 2024 |
| Number of pages | 8 |
| Publisher | Association for Computing Machinery |
| Publication date | 2024 |
| Pages | 1-8 |
| DOI | |
| Status | Published - 2024 |
Keywords
- Benchmarking
- Database