Optimizing Sample Design for Approximate Query Processing

Philipp Rösch (Business Intelligence Practice, SAP Research, Dresden, Germany) and Wolfgang Lehner (Database Technology Research Group, Dresden University of Technology, Dresden, Germany)
Copyright: © 2013 |Pages: 21
DOI: 10.4018/ijkbo.2013100101


The rapid increase of data volumes makes sampling a crucial component of modern data management systems. Although there is a large body of work on database sampling, the problem of automatically determining the optimal sample for a given query has remained (almost) unaddressed. To tackle this problem, the authors propose a sample advisor based on a novel cost model. While primarily designed to advise samples for a few queries specified by an expert, the sample advisor is extended in two ways. The first extension enhances its applicability by utilizing recorded workload information and taking memory bounds into account. The second extension increases its effectiveness by merging samples in the case of overlapping pieces of sample advice. For both extensions, the authors present exact and heuristic solutions. In their evaluation, the authors analyze the properties of the cost model and demonstrate the effectiveness and efficiency of the heuristic solutions with a variety of experiments.
Article Preview


Recent studies revealed a rapid multidimensional growth in current data warehouse databases: the size of the data triples every two years, and the number of queries doubles each year. Additionally, the complexity of the queries increases; many queries are complex aggregation queries with joins. These queries are of an explorative nature, that is, users browse through the data and search for interesting regions (Winter, 2008).

In order to make data exploration reasonably applicable, short response times are essential. However, the rapid growth of data warehouse systems conflicts with the need for short response times. A common solution to this problem is the use of synopses. Synopses are concise representations that reflect the characteristics of the underlying data, and they allow for approximate query processing with significantly shorter response times. In the database literature, several kinds of synopses have been proposed, such as histograms (Ioannidis & Poosala, 1999; Poosala, Ioannidis, Haas, & Shekita, 1996), wavelets (Chakrabarti, Garofalakis, Rastogi, & Shim, 2000; Matias, Vitter, & Wang, 1998), and random samples (Vitter, 1985). Among these synopses, random samples have proven to be a good choice: they provide probabilistic error bounds and can easily be integrated into database systems. Initially, samples were mostly used for query optimization. In the last 10 years, however, the focus of database sampling has shifted more and more towards approximate query processing. Several techniques for online sampling have been proposed (Hellerstein, Haas, & Wang, 1997; Haas & Hellerstein, 1999; Jermaine, Dobra, Arumugam, Joshi, & Pol, 2005; Jermaine, Arumugam, Pol, & Dobra, 2007). However, the major part of research focuses on precomputed and materialized (that is, offline) sampling, since online sampling incurs high overhead when drawing a sample and is only applicable to a small subset of queries.
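As a rough illustration of why random samples admit probabilistic error bounds, the following Python sketch estimates a SUM aggregate from a simple random sample and attaches a confidence interval based on the central limit theorem. All names and parameters are hypothetical; this is not the mechanism of any particular system discussed in this article.

```python
import math
import random

def approximate_sum(values, sample_fraction=0.05, z=1.96):
    """Estimate SUM(values) from a simple random sample.

    Scales the sample mean up to the full table size and attaches
    a CLT-based 95% confidence half-width (z = 1.96). Sketch only.
    """
    n = len(values)
    k = max(2, int(n * sample_fraction))
    sample = random.sample(values, k)

    sample_mean = sum(sample) / k
    estimate = n * sample_mean  # scale up to the full table

    # Sample variance of the sampled values, used for the error bound.
    var = sum((v - sample_mean) ** 2 for v in sample) / (k - 1)
    # Standard error of the scaled-up sum estimate.
    half_width = z * n * math.sqrt(var / k)
    return estimate, half_width

random.seed(42)
data = [random.uniform(0, 100) for _ in range(100_000)]
est, bound = approximate_sum(data)
true_sum = sum(data)
# With high probability, the true sum lies within est +/- bound,
# yet only 5% of the rows were touched.
```

The key property is that the error bound shrinks with the square root of the sample size, so even small samples can answer aggregation queries with tight probabilistic guarantees.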

In the field of offline sampling, a multitude of different sampling schemes have been proposed that are optimized for different query types, like aggregation (Chaudhuri, Das, Datar, Motwaniand, & Narasayya, 2001; Rösch, Gemulla, & Lehner, 2008), group-by (Acharya, Gibbons, & Poosala, 2000; Babcock, Chaudhuri, & Das, 2003; Rösch & Lehner, 2009) or foreign-key joins (Acharya, Gibbons, Poosala, & Ramaswamy, 1999; Gemulla, Rösch, & Lehner, 2008). While those sampling schemes provide great solutions for single (groups of) queries, the more general problem of automatic sample selection for an entire workload of a database remains almost unaddressed.

In this article, we address the problem of finding a set of samples that is to be materialized. We focus on simple random samples, as those samples are easy to use and easy to maintain; they are very general and can be used for a broad range of queries. Our solution is a sample advisor that suggests a set of samples for a set of queries specified by an expert. The sample advisor is based on a novel cost model to evaluate a sample for a given query. This cost model allows us to give advice on a sample for an individual query. To ease the usage of the sample advisor, we further propose an extension that utilizes recorded workload information. In this scenario, the sample advisor selects, from all the pieces of sample advice, those that minimize the runtime of the workload and fit into a given memory bound. With a second extension, we target the effectiveness of the sample advisor by merging samples in the case of overlapping pieces of sample advice. As a result, more of the advised samples fit into the memory bound (the available memory is used more effectively), and thus more queries can be answered very fast based on samples.
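To give an intuition for memory-bounded selection of sample advice, the following sketch applies a greedy, knapsack-style heuristic: pieces of advice are ranked by estimated benefit per unit of memory and materialized until the bound is exhausted. The data structure and numbers are invented for illustration and do not reflect the authors' actual cost model.

```python
from dataclasses import dataclass

@dataclass
class SampleAdvice:
    # Hypothetical structure, not the article's cost model.
    name: str       # identifier of the advised sample
    size: int       # memory footprint of the materialized sample
    benefit: float  # estimated workload runtime saved if materialized

def select_samples(advice, memory_bound):
    """Greedy heuristic: pick pieces of sample advice by benefit
    per unit of memory until the memory bound is exhausted."""
    chosen, used = [], 0
    for a in sorted(advice, key=lambda a: a.benefit / a.size, reverse=True):
        if used + a.size <= memory_bound:
            chosen.append(a)
            used += a.size
    return chosen

advice = [
    SampleAdvice("orders_1pct", size=40, benefit=120.0),    # 3.0 per unit
    SampleAdvice("lineitem_05pct", size=70, benefit=150.0), # ~2.1 per unit
    SampleAdvice("customer_2pct", size=30, benefit=45.0),   # 1.5 per unit
]
picked = select_samples(advice, memory_bound=100)
# Picks orders_1pct first, skips lineitem_05pct (would exceed the
# bound), then picks customer_2pct.
```

An exact solution would solve the underlying knapsack problem optimally, but the greedy ranking is cheap and typically close; merging overlapping advice, as proposed above, frees additional memory and lets more advice fit under the same bound.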

This article is an extended version of a previously published conference paper (Rösch & Lehner, 2010); we make the following (partly new) contributions:
