20.03.2017
C. Boden

Paper Accepted at BeyondMR Workshop @ SIGMOD 2017

The paper "Benchmarking Data Flow Systems for Scalable Machine Learning" by Christoph Boden, Andrea Spina, Tilmann Rabl, and Volker Markl has been accepted for presentation and publication at this year's Algorithms and Systems for MapReduce and Beyond (BeyondMR) workshop, held in conjunction with SIGMOD/PODS 2017 in Chicago, IL, USA, on Friday, May 19, 2017.

Abstract:

Distributed data flow systems such as Apache Spark or Apache Flink are popular choices for scaling machine learning algorithms in production. Industry applications of large-scale machine learning, such as click-through rate prediction, rely on models trained on billions of data points which are both highly sparse and high-dimensional. Existing benchmarks attempt to assess the performance of data flow systems such as Apache Flink, Spark, or Hadoop with non-representative workloads such as WordCount, Grep, or Sort. They only evaluate scalability with respect to data set size and fail to address the crucial requirement of handling high-dimensional data.

We introduce a representative set of distributed machine learning algorithms suitable for large-scale distributed settings, which closely resemble industry-relevant applications and provide generalizable insights into system performance. We implement mathematically equivalent versions of these algorithms in Apache Flink and Apache Spark, tune relevant system parameters, and run a comprehensive set of experiments to assess their scalability with respect to both data set size and dimensionality of the data. We evaluate the systems for data sets of up to four billion data points and 100 million dimensions. Additionally, we compare the performance to single-node implementations to put the scalability results into perspective.

Our results indicate that while current state-of-the-art data flow systems are able to scale robustly with increasing data set sizes, they are surprisingly inefficient at coping with high-dimensional data, which is a crucial requirement for large-scale machine learning algorithms.
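
To make the kind of workload the abstract describes concrete: the paper's actual implementations are not reproduced in this post, but the following is a minimal sketch of a comparable job on Spark's RDD API, full-batch gradient descent for logistic regression over sparse LIBSVM-formatted input. The input path, iteration count, and step size are placeholders, labels are assumed to be in {0, 1}, and the paper's own solvers and tuning may differ.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.mllib.linalg.SparseVector
import org.apache.spark.mllib.util.MLUtils
import breeze.linalg.{DenseVector => BDV}

object SparseLogRegSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("sparse-logreg-sketch").getOrCreate()
    val sc = spark.sparkContext

    // Sparse, high-dimensional training data in LIBSVM format (path is a placeholder).
    val data = MLUtils.loadLibSVMFile(sc, "hdfs:///data/train.libsvm").cache()
    val dims = data.first().features.size
    val n = data.count().toDouble

    var w = BDV.zeros[Double](dims)   // dense model vector kept on the driver
    val stepSize = 0.1                // placeholder hyperparameter

    for (_ <- 1 to 10) {
      val wB = sc.broadcast(w)        // ship the current model to every worker
      // Full-batch gradient of the logistic loss; treeAggregate eases the load
      // on the driver when the model vector is large.
      val grad = data.treeAggregate(BDV.zeros[Double](dims))(
        seqOp = (g, p) => {
          // loadLibSVMFile yields sparse feature vectors
          val x = p.features.asInstanceOf[SparseVector]
          var dot = 0.0
          var i = 0
          while (i < x.indices.length) { dot += wB.value(x.indices(i)) * x.values(i); i += 1 }
          val err = 1.0 / (1.0 + math.exp(-dot)) - p.label  // sigmoid(w.x) - y, labels in {0, 1}
          i = 0
          while (i < x.indices.length) { g(x.indices(i)) += err * x.values(i); i += 1 }
          g
        },
        combOp = (g1, g2) => g1 += g2
      )
      w -= grad * (stepSize / n)
      wB.destroy()
    }
    spark.stop()
  }
}
```

Note that even though the data points are sparse, the model vector itself is dense and must be broadcast to every worker on each iteration, an illustration of a cost that grows with the dimensionality of the data rather than with the number of data points.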