HaLoop: Efficient Iterative Data Processing on Large Clusters

Download: PDF.

“HaLoop: Efficient Iterative Data Processing on Large Clusters” by Yingyi Bu, Bill Howe, Magdalena Balazinska, and Michael D. Ernst. In the 36th International Conference on Very Large Data Bases, Singapore, September 14–16, 2010.

Abstract

The growing demand for large-scale data mining and data analysis applications has led both industry and academia to design new types of highly scalable data-intensive computing platforms. MapReduce and Dryad are two popular platforms in which the dataflow takes the form of a directed acyclic graph of operators. These platforms lack built-in support for iterative programs, which arise naturally in many applications including data mining, web ranking, graph analysis, and model fitting. This paper presents HaLoop, a modified version of the Hadoop MapReduce framework that is designed to serve these applications. HaLoop not only extends MapReduce with programming support for iterative applications, but also dramatically improves their efficiency by making the task scheduler loop-aware and by adding various caching mechanisms. We evaluated HaLoop on real queries and real datasets. Compared with Hadoop, on average, HaLoop reduces query runtimes by a factor of 1.85, and shuffles only 4% of the data between mappers and reducers.
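To illustrate the problem the abstract describes, the following is a minimal sketch (not HaLoop's actual API) of the driver-loop pattern that plain MapReduce forces on iterative programs such as PageRank: each iteration is an independent map/reduce pass over the data, and the fixpoint test requires extra work outside the framework. The graph, function names, and tolerance here are hypothetical illustration choices.

```python
# Sketch of an iterative MapReduce-style computation (in-memory simulation,
# not Hadoop/HaLoop code). HaLoop's contribution is making the framework
# aware of this loop, so loop-invariant data can be cached across iterations
# and the termination test is built in rather than an extra job.
from collections import defaultdict

# Toy graph: node -> outgoing neighbours (hypothetical example data).
GRAPH = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
DAMPING = 0.85

def map_phase(ranks):
    """One map pass: emit (neighbour, rank share) pairs."""
    for node, rank in ranks.items():
        out = GRAPH[node]
        for nbr in out:
            yield nbr, rank / len(out)

def reduce_phase(pairs, n):
    """One reduce pass: sum incoming shares per node, apply damping."""
    sums = defaultdict(float)
    for node, share in pairs:
        sums[node] += share
    return {node: (1 - DAMPING) / n + DAMPING * sums[node] for node in GRAPH}

def pagerank(tol=1e-6, max_iters=100):
    n = len(GRAPH)
    ranks = {node: 1.0 / n for node in GRAPH}
    for _ in range(max_iters):
        new_ranks = reduce_phase(map_phase(ranks), n)
        # Fixpoint test: in plain MapReduce this needs an additional pass
        # over the data; HaLoop supports such termination conditions natively.
        if max(abs(new_ranks[x] - ranks[x]) for x in ranks) < tol:
            return new_ranks
        ranks = new_ranks
    return ranks
```

In plain Hadoop, each pass through this loop would launch a fresh job that rereads the (unchanged) graph structure from distributed storage; HaLoop's loop-aware scheduling and caching avoid that repeated work.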


BibTeX entry:

@inproceedings{BuHBE2010,
   author = {Yingyi Bu and Bill Howe and Magdalena Balazinska and Michael
	D. Ernst},
   title = {{HaLoop}: Efficient Iterative Data Processing on Large Clusters},
   booktitle = {36th International Conference on Very Large Data Bases},
   address = {Singapore},
   month = {September~14--16},
   year = {2010}
}

