The 7th International Workshop on Parallel and Distributed Computing for Large Scale Machine Learning and Big Data Analytics

May 21, 2018
Vancouver, British Columbia CANADA

In Conjunction with 32nd IEEE International Parallel & Distributed Processing Symposium
May 21-25, 2018
JW Marriott Parq Vancouver

Advance Program

Time Title Authors/Speaker
8:15-8:30am Opening remarks
8:30-9:30am Invited Talk 1 Abhinav Vishnu, Principal member of technical staff, AMD, USA
9:30-10:00am Break
10:00-10:30am Near-Optimal Straggler Mitigation for Distributed Gradient Methods (ParLearning-01) Songze Li, Seyed Mohammadreza Mousavi Kalan, A. Salman Avestimehr and Mahdi Soltanolkotabi
10:30-11:00am Streaming Tiles: Flexible Implementation of Convolution Neural Networks Inference on Manycore Architectures (ParLearning-02) Nesma Rezk, Madhura Purnaprajna and Zain Ul-Abdin
11:00am-12:00pm Invited Talk 2: Model Parallelism optimization with deep reinforcement learning Azalia Mirhoseini, Google Brain, USA
12:00-1:30pm Lunch
1:30-2:30pm Invited Talk 3: Introduction to Snap Machine Learning Thomas Parnell, IBM Research – Zurich, Switzerland
2:30-3:00pm Parallel Huge Matrix Multiplication on a Cluster with GPGPU Accelerators (ParLearning-06) Seungyo Ryu and Dongseung Kim
3:00-3:30pm Break
3:30-4:00pm Invited Talk 4
4:00-4:30pm A Study of Clustering Techniques and Hierarchical Matrix Formats for Kernel Ridge Regression (ParLearning-04) Elizaveta Rebrova, Gustavo Chávez, Yang Liu, Pieter Ghysels and Xiaoye Sherry Li
4:30-5:00pm Panel Discussion Azalia Mirhoseini, Thomas Parnell, Abhinav Vishnu

Invited talk 1

Abhinav Vishnu, Principal member of technical staff, AMD, USA

Invited talk 2

Azalia Mirhoseini, Google Brain, USA

Model Parallelism optimization with deep reinforcement learning

Invited talk 3

Thomas Parnell, IBM Research – Zurich, Switzerland

Introduction to Snap Machine Learning

Call for Papers

Scaling up machine learning (ML), data mining (DM), and reasoning algorithms from artificial intelligence (AI) to massive datasets is a major technical challenge in the era of "Big Data". The past ten years have seen the rise of multi-core and GPU-based computing. In parallel and distributed computing, frameworks such as OpenMP, OpenCL, and Spark continue to make it easier to scale up ML/DM/AI algorithms through higher levels of abstraction. We invite novel work that advances these three fields through the development of scalable algorithms or computing frameworks. Ideal submissions describe methods for scaling up X using Y on Z, where potential choices for X, Y, and Z are listed below.

Scaling up

Using

On
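As an illustration of the "scaling up X using Y on Z" pattern, the sketch below shows data-parallel gradient computation for least-squares regression on a multi-core machine. This is a minimal, hypothetical example (the function names and the use of Python's `multiprocessing` are our own choices, not prescribed by the workshop): the training data is split into shards, each worker computes the gradient on its shard, and the partial gradients are summed into the full-batch gradient.

```python
# Minimal sketch: data-parallel gradient computation for linear regression,
# i.e. "scaling up regression using data parallelism on a multi-core CPU".
from multiprocessing import Pool

import numpy as np


def partial_gradient(args):
    """Gradient of the squared loss on one shard of the data."""
    X_shard, y_shard, w = args
    return X_shard.T @ (X_shard @ w - y_shard)


def parallel_gradient(X, y, w, n_workers=4):
    """Split rows into shards, compute shard gradients in parallel, sum them."""
    shards = list(zip(np.array_split(X, n_workers),
                      np.array_split(y, n_workers),
                      [w] * n_workers))
    with Pool(n_workers) as pool:
        return sum(pool.map(partial_gradient, shards))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((1000, 5))
    y = X @ np.arange(5.0)
    w = np.zeros(5)
    # The sharded gradient equals the full-batch gradient exactly,
    # because the squared loss decomposes as a sum over rows.
    g = parallel_gradient(X, y, w)
    assert np.allclose(g, X.T @ (X @ w - y))
```

The same decomposition underlies distributed gradient methods such as those in the accepted papers above: because the loss is a sum over examples, the gradient can be assembled from per-shard contributions, and the open question becomes how to schedule and communicate those contributions efficiently.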

Proceedings of the ParLearning workshop will be distributed at the conference and will be submitted for inclusion in the IEEE Xplore Digital Library after the conference.

PDF Flyer

Awards

Best Paper Award: The program committee will nominate a paper for the Best Paper award. In past years, the Best Paper award included a cash prize. Stay tuned for this year!

Travel awards: Students with accepted papers may apply for a travel award. Please see the IEEE IPDPS web page for details.

Important Dates

Paper Guidelines

Submitted manuscripts should be up to 10 single-spaced double-column pages in 10-point font on 8.5x11-inch pages (IEEE conference style), including figures, tables, and references. Format requirements are posted on the IEEE IPDPS web page.

All submissions must be uploaded electronically at https://easychair.org/conferences/?conf=parlearning2018

Organization

Technical Program Committee

Past workshops