May 21, 2018
Vancouver, British Columbia CANADA
| Time | Session | Speakers / Authors |
|---|---|---|
| 8:30-9:30am | Invited Talk 1 | Abhinav Vishnu, Principal Member of Technical Staff, AMD, USA |
| 10:00-10:30am | Near-Optimal Straggler Mitigation for Distributed Gradient Methods (ParLearning-01) | Songze Li, Seyed Mohammadreza Mousavi Kalan, A. Salman Avestimehr and Mahdi Soltanolkotabi |
| 10:30-11:00am | Streaming Tiles: Flexible Implementation of Convolution Neural Networks Inference on Manycore Architectures (ParLearning-02) | Nesma Rezk, Madhura Purnaprajna and Zain Ul-Abdin |
| 11:00am-12:00pm | Invited Talk 2: Model Parallelism Optimization with Deep Reinforcement Learning | Azalia Mirhoseini, Google Brain, USA |
| 1:30-2:30pm | Invited Talk 3: Introduction to Snap Machine Learning | Thomas Parnell, IBM Research – Zurich, Switzerland |
| 2:30-3:00pm | Parallel Huge Matrix Multiplication on a Cluster with GPGPU Accelerators (ParLearning-06) | Seungyo Ryu and Dongseung Kim |
| 3:30-4:00pm | Invited Talk 4 | |
| 4:00-4:30pm | A Study of Clustering Techniques and Hierarchical Matrix Formats for Kernel Ridge Regression (ParLearning-04) | Elizaveta Rebrova, Gustavo Chávez, Yang Liu, Pieter Ghysels and Xiaoye Sherry Li |
| 4:30-5:00pm | Panel Discussion | Azalia Mirhoseini, Thomas Parnell, Abhinav Vishnu |
Abhinav Vishnu, Principal Member of Technical Staff, AMD, USA
Azalia Mirhoseini, Google Brain, USA
Model Parallelism Optimization with Deep Reinforcement Learning
Thomas Parnell, IBM Research – Zurich, Switzerland
Introduction to Snap Machine Learning
Scaling up machine learning (ML), data mining (DM), and artificial intelligence (AI) reasoning algorithms for massive datasets is a major technical challenge in the era of "Big Data". The past ten years have seen the rise of multi-core and GPU-based computing. In parallel and distributed computing, frameworks such as OpenMP, OpenCL, and Spark continue to facilitate scaling up ML/DM/AI algorithms through higher levels of abstraction. We invite novel work that advances these three fields through the development of scalable algorithms or computing frameworks. Ideal submissions describe methods for scaling up X using Y on Z, where potential choices for X, Y, and Z are provided below.
Proceedings of the ParLearning workshop will be distributed at the conference and will be submitted for inclusion in the IEEE Xplore Digital Library after the conference.
Travel awards: Students with accepted papers are eligible to apply for a travel award. Please find details on the IEEE IPDPS web page.
Submitted manuscripts should be up to 10 single-spaced, double-column pages in 10-point font on 8.5x11-inch pages (IEEE conference style), including figures, tables, and references. Format requirements are posted on the IEEE IPDPS web page.
All submissions must be uploaded electronically at https://easychair.org/conferences/?conf=parlearning2018