Rainer Gemulla

Sampling Algorithms for Evolving Datasets

Abstract (English)

Perhaps the most flexible synopsis of a database is a uniform random sample of the data; such samples are widely used to speed up the processing of analytic queries and data-mining tasks, to enhance query optimization, and to facilitate information integration. Most of the existing work on database sampling focuses on how to create or exploit a random sample of a static database, that is, a database that does not change over time. The assumption of a static database, however, severely limits the applicability of these techniques in practice, where data is often not static but continuously evolving. In order to maintain the statistical validity of the sample, any changes to the database have to be appropriately reflected in the sample. In this thesis, we study efficient methods for incrementally maintaining a uniform random sample of the items in a dataset in the presence of an arbitrary sequence of insertions, updates, and deletions. We consider instances of the maintenance problem that arise when sampling from an evolving set, from an evolving multiset, from the distinct items in an evolving multiset, or from a sliding window over a data stream. Our algorithms completely avoid any accesses to the base data and can be several orders of magnitude faster than algorithms that do rely on such expensive accesses. The improved efficiency of our algorithms comes at virtually no cost: the resulting samples are provably uniform and only a small amount of auxiliary information is associated with the sample. We show that the auxiliary information not only facilitates efficient maintenance, but it can also be exploited to derive unbiased, low-variance estimators for counts, sums, averages, and the number of distinct items in the underlying dataset. In addition to sample maintenance, we discuss methods that greatly improve the flexibility of random sampling from a system's point of view. 
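For the insertion-only special case, the classic baseline for this kind of incremental maintenance is reservoir sampling, which keeps a uniform size-k sample of a stream without ever revisiting the base data. The sketch below shows that baseline only; it is not the thesis's own algorithm, which additionally handles updates and deletions:

```python
import random

def reservoir_sample(stream, k, rng=random):
    """Maintain a uniform random sample of up to k items over an
    insert-only stream (classic reservoir sampling). Each item seen
    so far has probability k/n of being in the sample."""
    sample = []
    for n, item in enumerate(stream, start=1):
        if n <= k:
            # Fill the reservoir with the first k items.
            sample.append(item)
        else:
            # Replace a random slot with probability k/n.
            j = rng.randrange(n)
            if j < k:
                sample[j] = item
    return sample
```

A deletion under this scheme would naively require an access to the base data to refill the sample, which is exactly the expense the maintenance algorithms in this thesis avoid.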
More specifically, we initiate the study of algorithms that resize a random sample upwards or downwards. Our resizing algorithms can be exploited to dynamically control the size of the sample when the dataset grows or shrinks; they facilitate resource management and help to avoid under- or oversized samples. Furthermore, in large-scale databases with data being distributed across several remote locations, it is usually infeasible to reconstruct the entire dataset for the purpose of sampling. To address this problem, we provide efficient algorithms that directly combine the local samples maintained at each location into a sample of the global dataset. We also consider a more general problem, where the global dataset is defined as an arbitrary set or multiset expression involving the local datasets, and provide efficient solutions based on hashing.
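The idea of combining local samples can be illustrated with a standard sequential merge procedure for two uniform samples of disjoint datasets (a sketch of the general technique, not necessarily the exact algorithm developed in the thesis): each output item is drawn from the first sample with probability proportional to the number of not-yet-represented items in the first dataset.

```python
import random

def merge_samples(s1, n1, s2, n2, k, rng=random):
    """Combine a uniform sample s1 of a dataset of size n1 and a
    uniform sample s2 of a disjoint dataset of size n2 into a uniform
    size-k sample of the union, without accessing the base data.

    Assumes k <= len(s1) and k <= len(s2), so that either side can
    supply all k items in the worst case."""
    s1, s2 = list(s1), list(s2)
    rng.shuffle(s1)
    rng.shuffle(s2)
    merged = []
    r1, r2 = n1, n2  # remaining virtual population sizes
    for _ in range(k):
        # Take from s1 with probability r1 / (r1 + r2), mimicking
        # sequential sampling without replacement from the union.
        if rng.random() * (r1 + r2) < r1:
            merged.append(s1.pop())
            r1 -= 1
        else:
            merged.append(s2.pop())
            r2 -= 1
    return merged
```

Because the decision at each step tracks the sizes of the underlying datasets rather than the samples, every size-k subset of the union is equally likely to be produced.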

Further metadata

Keywords: Uniform sampling, incremental sample maintenance, set sampling, multiset sampling, distinct-item sampling, data stream sampling
DDC classification: 004
RVK classification: ST 274
Institution: Technische Universität Dresden
Advisor: Prof. Dr.-Ing. Wolfgang Lehner
Reviewers: Dr. Peter Haas, Prof. Dr.-Ing. Dr. h.c. Theo Härder
Date of submission (to the faculty): 27.08.2008
Date of defense: 20.10.2008
Date of online publication: 24.10.2008
Persistent URN: urn:nbn:de:bsz:14-ds-1224861856184-11644
