Every designer of a new data structure wants to know how well it performs in comparison with others. But finding, coding and testing applications as benchmarks can be tedious and time-consuming. Besides, how a benchmark uses a data structure may considerably affect its apparent efficiency, so the choice of applications may bias the results. We address these problems by developing a tool for inductive benchmarking. This tool, Auburn, can generate benchmarks across a wide distribution of uses. We precisely define 'the use of a data structure', upon which we build the core algorithms of Auburn: how to generate a benchmark from a description of use, and how to extract a description of use from an application. We then apply inductive classification techniques to obtain decision trees for the choice between competing data structures. We test Auburn by benchmarking several implementations of three common data structures: queues, random-access lists and heaps. These and other results show Auburn to be a useful and accurate tool, but they also reveal some limitations of the approach.