Independence is good: Dependency-based histogram synopses for high-dimensional data

Citation
A. Deshpande et al., Independence is good: Dependency-based histogram synopses for high-dimensional data, SIGMOD RECORD, 30(2), 2001, pp. 199-210
Citations number
20
Subject categories
Computer Science & Engineering
Journal title
SIGMOD RECORD
ISSN journal
0163-5808
Volume
30
Issue
2
Year of publication
2001
Pages
199 - 210
Database
ISI
SICI code
0163-5808(200106)30:2<199:IIGDHS>2.0.ZU;2-D
Abstract
Approximating the joint data distribution of a multi-dimensional data set through a compact and accurate histogram synopsis is a fundamental problem arising in numerous practical scenarios, including query optimization and approximate query answering. Existing solutions either rely on simplistic independence assumptions or try to directly approximate the full joint data distribution over the complete set of attributes. Unfortunately, both approaches are doomed to fail for high-dimensional data sets with complex correlation patterns between attributes. In this paper, we propose a novel approach to histogram-based synopses that employs the solid foundation of statistical interaction models to explicitly identify and exploit the statistical characteristics of the data. Abstractly, our key idea is to break the synopsis into (1) a statistical interaction model that accurately captures significant correlation and independence patterns in data, and (2) a collection of histograms on low-dimensional marginals that, based on the model, can provide accurate approximations of the overall joint data distribution. Extensive experimental results with several real-life data sets verify the effectiveness of our approach. An important aspect of our general, model-based methodology is that it can be used to enhance the performance of other synopsis techniques that are based on data-space partitioning (e.g., wavelets) by providing an effective tool to deal with the "dimensionality curse".
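
To make the model-based decomposition concrete, below is a minimal illustrative sketch, not the paper's algorithm. It assumes a fixed interaction model in which attribute A is independent of C given B on a hypothetical three-attribute data set, stores two low-dimensional summaries (exact 2-D counts here, where the paper would use compact bucketized histograms and would learn the model from the data), and reconstructs joint cell counts via p(a,b,c) = p(a,b)·p(b,c)/p(b). All variable names and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data set where B drives both A and C, so the assumed
# interaction model is: A independent of C given B
# (cliques {A,B} and {B,C}, separator {B}).
n = 100_000
b = rng.integers(0, 4, size=n)
a = (b + rng.integers(0, 2, size=n)) % 4
c = (b + rng.integers(0, 3, size=n)) % 4

# Low-dimensional summaries: exact 2-D marginal counts in this sketch,
# standing in for the compact histograms the paper would build.
h_ab = np.zeros((4, 4)); np.add.at(h_ab, (a, b), 1)   # counts over (A, B)
h_bc = np.zeros((4, 4)); np.add.at(h_bc, (b, c), 1)   # counts over (B, C)
h_b = h_ab.sum(axis=0)                                 # separator counts over B

# Model-based reconstruction of the joint cell counts:
#   count(a, b, c) ~= count(a, b) * count(b, c) / count(b)
est = h_ab[:, :, None] * h_bc[None, :, :] / h_b[None, :, None]

# Compare against the true joint counts (only feasible here because the
# toy domain is tiny; the whole point of the synopsis is to avoid this).
true = np.zeros((4, 4, 4)); np.add.at(true, (a, b, c), 1)
print("max absolute error per cell:", np.abs(est - true).max())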