Title :
Petuum: A New Platform for Distributed Machine Learning on Big Data
Author :
Eric P. Xing ; Qirong Ho ; Wei Dai ; Jin Kyu Kim ; Jinliang Wei ; Seunghak Lee ; Xun Zheng ; Pengtao Xie ; Abhimanu Kumar ; Yaoliang Yu
Author_Institution :
Sch. of Comput. Sci., Carnegie Mellon Univ., Pittsburgh, PA, USA
Abstract :
What is a systematic way to efficiently apply a wide spectrum of advanced ML programs to industrial scale problems, using Big Models (up to 100s of billions of parameters) on Big Data (up to terabytes or petabytes)? Modern parallelization strategies employ fine-grained operations and scheduling beyond the classic bulk-synchronous processing paradigm popularized by MapReduce, or even specialized graph-based execution that relies on graph representations of ML programs. The variety of approaches tends to pull systems and algorithms design in different directions, and it remains difficult to find a universal platform applicable to a wide range of ML programs at scale. We propose a general-purpose framework, Petuum, that systematically addresses data- and model-parallel challenges in large-scale ML, by observing that many ML programs are fundamentally optimization-centric and admit error-tolerant, iterative-convergent algorithmic solutions. This presents unique opportunities for an integrative system design, such as bounded-error network synchronization and dynamic scheduling based on ML program structure. We demonstrate the efficacy of these system designs versus well-known implementations of modern ML algorithms, showing that Petuum allows ML programs to run in much less time and at considerably larger model sizes, even on modestly-sized compute clusters.
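To make the "bounded-error network synchronization" idea concrete, below is a minimal Python sketch of a bounded-staleness barrier: each worker runs its iterative-convergent updates on a data shard but may advance no more than a fixed number of iterations ahead of the slowest worker, trading a bounded amount of parameter staleness for reduced network waiting. This is not the Petuum API; the class and method names here (StaleSyncController, advance) are hypothetical and chosen only for illustration.

```python
import threading


class StaleSyncController:
    """Bounded-staleness barrier sketch (hypothetical, not the Petuum API).

    Lets each worker advance its iteration clock, but blocks any worker that
    gets more than `staleness` iterations ahead of the slowest worker.
    """

    def __init__(self, num_workers, staleness):
        self.staleness = staleness
        self.clocks = [0] * num_workers      # per-worker iteration counters
        self.cond = threading.Condition()

    def advance(self, worker_id):
        """Call at the end of each iteration; returns once this worker is
        within the staleness bound of the slowest worker."""
        with self.cond:
            self.clocks[worker_id] += 1
            self.cond.notify_all()
            # Block while this worker is more than `staleness` iterations ahead.
            while self.clocks[worker_id] > min(self.clocks) + self.staleness:
                self.cond.wait()
```

In such a scheme, each worker p would repeatedly apply an update to the shared model using its local data shard and then call advance(p); with staleness set to 0 this degenerates to bulk-synchronous execution, while larger values allow faster workers to proceed with slightly stale parameters, which error-tolerant iterative-convergent algorithms can absorb.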
Keywords :
Big Data; learning (artificial intelligence); parallel programming; scheduling; Big Models; MapReduce; Petuum platform; bounded-error network synchronization; bulk-synchronous processing paradigm; data-parallel method; distributed machine learning; dynamic scheduling; error-tolerant iterative-convergent algorithmic solutions; fine-grained operations; general-purpose framework; graph representations; graph-based execution; integrative system design; large-scale ML; model-parallel method; optimization-centric programs; parallelization strategies; universal platform; Computational modeling; Convergence; Data models; Mathematical model; Servers; Synchronization; Big Model; Data-Parallelism; Distributed Systems; Machine Learning; Model-Parallelism; Theory
Journal_Title :
IEEE Transactions on Big Data
DOI :
10.1109/TBDATA.2015.2472014