The 6th International Workshop on Parallel and Distributed Computing for Large Scale Machine Learning and Big Data Analytics

May 29, 2017
Orlando, Florida, USA

In conjunction with the 31st IEEE International Parallel & Distributed Processing Symposium
May 29-June 2, 2017
Buena Vista Palace Hotel
Orlando, Florida, USA

Advance Program

Keynote talk 1

Dr. John Feo, Northwest Institute for Advanced Computing

Why Tables and Graphs for Knowledge Discovery Systems

Abstract: The availability of data is changing the way science, business, and law enforcement operate. Economic competitiveness and national security depend increasingly on the insightful analysis of large data sets. The breadth of analytic processes is forcing knowledge discovery platforms to supplement traditional table-based methods with graph methods that provide better support for sparse data and dynamic relationships among typed entities. While storing data only in tables makes it difficult to discover complex patterns of activity in time and space, tables are the most efficient data structures for storing dense node and edge attributes and for executing simple select and join operations. Consequently, knowledge discovery systems must support both in a natural way, without preference. In this talk, I will describe SHAD, the hybrid data model that we are developing to support both graphs and tables. I will present several high-performance, scalable analytic platforms developed by PNNL for graph analytics, machine learning, and knowledge discovery, including an overview of HAGGLE, our proposed graph analytic platform for new architectures.
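The table/graph trade-off the abstract describes can be illustrated with a toy sketch (this is illustrative only, not the SHAD API): the same typed-relationship data is held as a table of rows, which serves select/join queries well, and as an adjacency list, which serves multi-hop pattern queries that are awkward to express as repeated table joins.

```python
from collections import defaultdict

# Table form: one row per "calls" event between typed entities.
# Efficient for dense per-event attributes and simple select/join queries.
calls = [
    {"src": "alice", "dst": "bob",   "duration": 62},
    {"src": "alice", "dst": "carol", "duration": 5},
    {"src": "bob",   "dst": "dave",  "duration": 41},
    {"src": "carol", "dst": "dave",  "duration": 12},
]

# Select-style query over the table: all calls longer than 30 seconds.
long_calls = [row for row in calls if row["duration"] > 30]

# Graph form: an adjacency list built from the same rows.
adj = defaultdict(list)
for row in calls:
    adj[row["src"]].append(row["dst"])

# A pattern query that is natural on the graph but needs a self-join on the
# table: everyone reachable from "alice" in exactly two hops.
two_hop = {w for v in adj["alice"] for w in adj[v]}
print(two_hop)
```

Chaining longer patterns (three hops, time-ordered paths) multiplies the joins needed on the table side, which is why a hybrid model keeps both representations.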

Bio: DR. JOHN FEO is the Director of the Northwest Institute for Advanced Computing, a joint institute established by Pacific Northwest National Laboratory and the University of Washington. Previously, he managed a large DOD research project in graph algorithms, search, parallel computing, and multithreaded architectures. Dr. Feo received his Ph.D. in Computer Science from The University of Texas at Austin. He began his career at Lawrence Livermore National Laboratory, where he managed the Computer Science Group and was the principal investigator of the Sisal Language Project. Dr. Feo then joined Tera Computer Company (now Cray Inc.), where he was a principal engineer and product manager for the MTA-1 and MTA-2, the first two generations of Cray's multithreaded architecture. He has taken short sabbaticals to work at Sun Microsystems, Microsoft, and Context Relevant. Dr. Feo has held academic positions at UC Davis and Washington State University.

Keynote talk 2

Dr. Wei Tan, IBM T. J. Watson Research Center, NY, USA

Matrix Factorization on GPUs: A Tale of Two Algorithms

Abstract: Matrix factorization (MF) is an approach to deriving latent features from observations. It is at the heart of many algorithms, e.g., collaborative filtering, word embedding, and link prediction. Alternating Least Squares (ALS) and stochastic gradient descent (SGD) are two popular methods for solving MF. SGD converges fast, while ALS is easy to parallelize and able to deal with non-sparse ratings. In this talk, I will introduce cuMF, a CUDA-based matrix factorization library that accelerates both ALS and SGD to solve very large-scale MF. cuMF uses a set of techniques to maximize performance on single and multiple GPUs, including smart access of sparse data that leverages the memory hierarchy, combining data parallelism with model parallelism, and approximate algorithms and storage. On a single machine with up to four Nvidia GPU cards, cuMF can be 10 times as fast, and 100 times as cost-efficient, as state-of-the-art distributed CPU solutions. In this talk I will also share lessons learned in accelerating compute- and memory-intensive kernels on GPUs.
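The two update rules the abstract contrasts can be sketched in a few lines of NumPy. This is a minimal toy illustration on a small dense matrix, not cuMF's CUDA implementation (which operates on sparse ratings): ALS solves a closed-form regularized least-squares problem for one factor while holding the other fixed, whereas SGD nudges both factors one rating at a time.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, f, lam = 50, 40, 5, 0.1     # users, items, latent rank, regularization
R = rng.random((m, n))            # dense toy "ratings"; real data is sparse

def als_step(R, X, Y, lam):
    """One ALS sweep: each subproblem is regularized least squares in closed form."""
    # Fix Y, solve (Y^T Y + lam*I) x_u = Y^T r_u for every user u at once.
    X = np.linalg.solve(Y.T @ Y + lam * np.eye(f), Y.T @ R.T).T
    # Fix X, solve symmetrically for every item i.
    Y = np.linalg.solve(X.T @ X + lam * np.eye(f), X.T @ R).T
    return X, Y

def sgd_step(R, X, Y, lam, lr=0.01):
    """One SGD epoch: update both factors from one rating at a time."""
    for u in range(R.shape[0]):
        for i in range(R.shape[1]):
            err = R[u, i] - X[u] @ Y[i]
            X[u] += lr * (err * Y[i] - lam * X[u])
            Y[i] += lr * (err * X[u] - lam * Y[i])
    return X, Y

X, Y = rng.random((m, f)), rng.random((n, f))
for _ in range(10):
    X, Y = als_step(R, X, Y, lam)
print("ALS RMSE:", np.sqrt(np.mean((R - X @ Y.T) ** 2)))
```

The structure hints at the parallelization trade-off in the talk's title: each ALS subproblem is independent across users (or items), so the sweep parallelizes trivially, while SGD's sequential per-rating updates converge quickly but require careful scheduling to avoid conflicting concurrent updates.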

Bio: DR. WEI TAN is a Research Staff Member at IBM T. J. Watson Research Center. His research interests include big data, distributed systems, NoSQL, and services computing. Currently he works on accelerating machine learning algorithms using scale-up (e.g., GPU) and scale-out (e.g., Spark) approaches. His work has been incorporated into the IBM patent portfolio and software products such as Spark, BigInsights, and Cognos. He received the IEEE Peter Chen Big Data Young Researcher Award (2016), the Best Paper Award at ACM/IEEE CCGrid 2015, the IBM Outstanding Technical Achievement Award (2017, 2016, and 2014), the Best Student Paper Award at IEEE ICWS 2014, the Best Paper Award at IEEE SCC 2011, the Pacesetter Award from Argonne National Laboratory (2010), and the caBIG Teamwork Award from the National Institutes of Health (2008).

Accepted Papers

ParLearning-01. ExtDict: Extensible Dictionaries for Data- and Platform-Aware Large-Scale Learning (Azalia Mirhoseini, Bita Rouhani, Ebrahim Songhori and Farinaz Koushanfar)
ParLearning-02. Coded TeraSort (Songze Li, Sucha Supittayapornpong, Mohammad Ali Maddah-Ali and Salman Avestimehr)
ParLearning-03. Scaling Deep Learning Workloads: NVIDIA DGX-1/Pascal and Intel Knights Landing (Nitin Gawande, Joshua Landwehr, Jeff Daily, Nathan Tallent, Abhinav Vishnu and Darren Kerbyson)
ParLearning-04. Efficient and Portable ALS Matrix Factorization for Recommender Systems (Jing Chen, Jianbin Fang, Weifeng Liu, Tao Tang, Xuhao Chen and Canqun Yang)
ParLearning-05. Large-Scale Stochastic Learning using GPUs (Thomas Parnell, Celestine Duenner, Kubilay Atasu, Manolis Sifalakis and Haris Pozidis)
ParLearning-06. Distributed and in-situ machine learning for smart-homes and buildings: application to alarm sounds detection (Amaury Durand, Yanik Ngoko and Christophe Cérin)
ParLearning-07. The New Large-Scale RNNLM System Based On Distributed Neuron (Dejiao Niu, Rui Xue, Tao Cai, Hai Li and Effah Kingsley)
ParLearning-08. A Cache Friendly Parallel Encoder-Decoder Model without Padding on Multi-core Architecture (Yuchen Qiao, Kenjiro Taura, Kazuma Hashimoto, Yoshimasa Tsuruoka and Akiko Eriguchi)

Call for Papers

Scaling up machine learning (ML), data mining (DM), and reasoning algorithms from Artificial Intelligence (AI) for massive datasets is a major technical challenge in the era of "Big Data". The past ten years have seen the rise of multi-core and GPU-based computing. In parallel and distributed computing, frameworks such as OpenMP, OpenCL, and Spark continue to facilitate scaling up ML/DM/AI algorithms through higher levels of abstraction. We invite novel work that advances the three fields of ML/DM/AI through the development of scalable algorithms or computing frameworks. Ideal submissions would be characterized as scaling up X on Y, where potential choices for X and Y are provided below.

Scaling up


Proceedings of the ParLearning workshop will be distributed at the conference and will be submitted for inclusion in the IEEE Xplore Digital Library after the conference.

PDF Flyer

Journal publication

Selected papers from the workshop will be published in a Special Issue of Future Generation Computer Systems, Elsevier's International Journal of eScience. Special Issue papers will undergo additional review.


Best Paper Award: The program committee will nominate a paper for the Best Paper award. In past years, the Best Paper award included a cash prize. Stay tuned for this year!

Travel awards: Students with accepted papers may apply for a travel award. Please find details on the IEEE IPDPS web page.

Important Dates

Paper Guidelines

Submitted manuscripts should be up to 10 single-spaced, double-column pages using 10-point font on 8.5x11-inch pages (IEEE conference style), including figures, tables, and references. Format requirements are posted on the IEEE IPDPS web page.

All submissions must be uploaded electronically at


Technical Program Committee

Past workshops