Workshop / Overview

Efficient algorithms are indispensable in large-scale ML applications. In recent years, the ML community has not only been a major consumer of what the optimization literature has to offer; it has also acted as a driving force in the development of new algorithmic tools. The challenges of massive data and efficient implementation have led to many cutting-edge advances in optimization.
The goal of this workshop is to bring practitioners and theoreticians together and to stimulate exchange between experts from industry and academia. For practitioners, the workshop should give an idea of exciting new developments that they can *use* in their work. For theorists, it should provide a forum to discuss the practicality of assumptions and of recent work, as well as potentially interesting open questions.

Format: 4-5 invited talks, as well as a panel discussion with the invited speakers.
Schedule
- 13:30-14:05 Ce Zhang
- 14:05-14:40 Miltos Allamanis
- 14:45-15:20 Olivier Teytaud
- 15:20-15:55 Celestine Dünner
- 16:00-16:30 Panel Discussion

Accelerating Learning Systems

Speaker: Ce Zhang (ETH Zurich)
“Can machine learning help to improve this application?”
After this question pops up in the mind of a user -- a biologist, an astrophysicist, or a social scientist -- how long would it take for her to get an answer? Our research studies how to answer this question as rapidly as possible, by accelerating the whole machine learning process.

Making a deep learning system train faster is indispensable for this purpose, but there is far more to it than that. Our research focuses on: (1) applications, (2) systems, and (3) abstractions. For applications, I will talk about machine learning applications that we enabled by supporting a range of users, none of whom had a background in computer science. For systems, we focus on understanding the system trade-offs of distributed training and inference for a diverse set of machine learning models, and on how to co-design machine learning algorithms and modern hardware so as to unleash the full potential of both. I will talk in detail about our recent results and their application to FPGA-based acceleration. For abstractions, I will introduce ease.ml, a high-level declarative system for machine learning, which makes it possible to express many of the applications we built in just four lines of code.
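
To make the abstraction concrete, here is a minimal runnable sketch of what such a declarative interface could look like. This is an illustrative assumption built on scikit-learn, not the actual ease.ml API: the user-facing part declares the data and asks for a trained model in a few lines, while model selection happens behind the scenes.

```python
# Hypothetical declarative interface in the spirit of ease.ml
# (illustrative assumption only -- not the actual ease.ml API).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

class Job:
    """Toy 'declarative' job: picks the best model by cross-validation."""
    CANDIDATES = [LogisticRegression(max_iter=2000), RandomForestClassifier()]

    def __init__(self, X, y):
        self.X, self.y = X, y

    def train(self):
        # The 'system' side: evaluate each candidate and keep the best one.
        scores = [cross_val_score(m, self.X, self.y).mean() for m in self.CANDIDATES]
        best = self.CANDIDATES[scores.index(max(scores))]
        return best.fit(self.X, self.y)

# The user-facing part is four lines: declare the data, build a job,
# train, and use the resulting model.
X, y = load_digits(return_X_y=True)
job = Job(X, y)
model = job.train()
print(model.score(X, y))
```
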
Machine Learning for Smart Software Engineering Tools

Speaker: Miltos Allamanis (Microsoft Research) - Presentation
Like writing and speaking, software development is an act of human communication. Humans need to understand, maintain and extend code. To achieve this efficiently, developers write code using implicit and explicit syntactic and semantic conventions that aim to ease human communication. The existence of these conventions has raised the exciting opportunity of creating machine learning models that learn from existing code and are embedded within software engineering tools.

This nascent area of "big code" or "code naturalness" lies at the intersection of the software engineering, programming languages, and machine learning communities. The core challenge is to find methods that learn from highly structured, discrete objects with formal constraints and semantics. In this talk, I will give a brief overview of the research area, highlight a few interesting findings, and discuss some of the emerging challenges for machine learning.
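
As a minimal illustration of the "code naturalness" idea, the toy sketch below (my illustration, not a system from the talk) trains a smoothed bigram language model over code tokens: code that follows the conventions in the training corpus receives a higher average log-probability than scrambled code.

```python
# Toy code-naturalness sketch: a bigram language model over code tokens
# learns the conventions present in a (tiny, assumed) corpus.
import math
import re
from collections import Counter

def tokenize(code: str):
    # Crude lexer: identifiers, numbers, and single punctuation characters.
    return re.findall(r"[A-Za-z_]\w*|\d+|\S", code)

corpus = [
    "for i in range(n): total += x[i]",
    "for j in range(m): result += y[j]",
]

bigrams, unigrams = Counter(), Counter()
for snippet in corpus:
    toks = ["<s>"] + tokenize(snippet)
    unigrams.update(toks)
    bigrams.update(zip(toks, toks[1:]))

def naturalness(code: str, vocab_size: int = 1000) -> float:
    # Average add-one-smoothed bigram log-probability per token:
    # higher means the snippet looks more like the training corpus.
    toks = ["<s>"] + tokenize(code)
    pairs = list(zip(toks, toks[1:]))
    return sum(
        math.log((bigrams[p] + 1) / (unigrams[p[0]] + vocab_size)) for p in pairs
    ) / len(pairs)

print(naturalness("for k in range(p): acc += z[k]"))  # higher: conventional code
print(naturalness("range for k ( in p ] z"))          # lower: scrambled code
```
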
Exact Distributed Training: Random Forest with Billions of Examples

Speaker: Olivier Teytaud (Google Brain)
We introduce an exact distributed algorithm to train Random Forest models, as well as other decision forest models, without relying on approximate best-split search. We present the proposed algorithm and compare it to related approaches along various complexity measures (time, RAM, disk, and network). We report its running performance on artificial and real-world datasets of up to 17 billion examples, several orders of magnitude larger than the datasets tackled in the existing literature. Finally, we show empirically that Random Forest benefits from being trained on more data, even in the case of already gigantic datasets. Our algorithm builds on the Sliq and Sprint approaches to learning decision trees: Sprint is particularly suitable for the distributed setting, but we show that Sliq becomes better in the balanced case and/or when working with randomly drawn subsets of features, and we derive a rule for automatically switching between the two methods. Given a dataset with 17.3B examples and 71 features, our implementation trains a tree in 22 hours.
Joint work with Mathieu Guillame-Bert.
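
To make the contrast with approximate methods concrete, here is a minimal single-machine sketch (my simplification, not the distributed implementation from the talk) of the exact best-split search that such algorithms perform at scale. For one numeric feature with binary labels, it sorts the examples and scans every candidate threshold, maintaining class counts incrementally; approximate methods instead bin or subsample the candidate thresholds.

```python
# Exact best-split search for one numeric feature (binary labels,
# Gini impurity). Assumed simplification for exposition.
def gini(pos: int, n: int) -> float:
    if n == 0:
        return 0.0
    p = pos / n
    return 2.0 * p * (1.0 - p)

def best_split(values, labels):
    # Sort examples by feature value, then scan all n-1 thresholds,
    # updating left/right label counts incrementally.
    order = sorted(range(len(values)), key=lambda i: values[i])
    n, total_pos = len(values), sum(labels)
    left_n = left_pos = 0
    best = (float("inf"), None)
    for rank, i in enumerate(order[:-1]):
        left_n += 1
        left_pos += labels[i]
        nxt = order[rank + 1]
        if values[i] == values[nxt]:
            continue  # cannot split between identical feature values
        right_n, right_pos = n - left_n, total_pos - left_pos
        # Size-weighted impurity of the two children.
        score = (left_n * gini(left_pos, left_n)
                 + right_n * gini(right_pos, right_n)) / n
        if score < best[0]:
            best = (score, (values[i] + values[nxt]) / 2.0)
    return best  # (impurity, threshold)

print(best_split([1.0, 2.0, 3.0, 10.0], [0, 0, 1, 1]))
# -> (0.0, 2.5): a perfect split at threshold 2.5
```
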

High Performance Distributed Machine Learning in Heterogeneous Compute Environments

Speaker: Celestine Dünner (IBM Research) - Presentation

This talk focuses on techniques to accelerate the distributed training of large-scale machine learning models in heterogeneous compute environments. Such techniques are particularly important for applications where training time is a severe bottleneck: faster training enables more agile development and a broader exploration of the parameter and model space, which in turn yields higher-quality predictions. In this talk I will give insight into recent advances in distributed optimization and primal-dual optimization methods, focusing on how such methods can be combined with novel techniques to accelerate machine learning algorithms on heterogeneous compute resources. Putting it all together, I will demonstrate the training of a linear classifier on the Criteo click-prediction dataset, consisting of one billion training examples, in a few seconds.
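
As a small, self-contained illustration of the primal-dual methods mentioned above, the sketch below implements stochastic dual coordinate ascent (SDCA) for an L2-regularized linear SVM on toy data. This is a single-machine sketch under my own assumptions, chosen for exposition; the talk concerns distributed variants running on heterogeneous hardware, where the same per-coordinate updates are applied per worker on data partitions.

```python
# SDCA for an L2-regularized linear SVM (hinge loss) on toy data.
# Each step solves one dual coordinate exactly and keeps the primal
# vector w consistent with the dual variables.
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 200, 5, 0.01

# Roughly linearly separable toy data with labels in {-1, +1}.
X = rng.normal(size=(n, d))
y = np.sign(X @ rng.normal(size=d) + 0.1 * rng.normal(size=n))

alpha = np.zeros(n)  # dual variables, each constrained to [0, 1]
w = np.zeros(d)      # primal vector: w = (1/(lam*n)) * sum_i alpha_i y_i x_i

for epoch in range(20):
    for i in rng.permutation(n):
        # Closed-form maximization of the dual in coordinate i,
        # projected so that alpha[i] stays in [0, 1].
        grad = 1.0 - y[i] * (w @ X[i])
        delta = np.clip(lam * n * grad / (X[i] @ X[i]), -alpha[i], 1.0 - alpha[i])
        alpha[i] += delta
        w += delta * y[i] * X[i] / (lam * n)

print(f"training accuracy: {np.mean(np.sign(X @ w) == y):.2f}")
```
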

Workshop / Difficulty

Beginner level

Workshop / Prerequisites

Participants should ideally be familiar with some of the main optimization algorithms used in ML, as well as with some of the main challenges arising in their implementation.

Track / Co-organizers

Sebastian Stich

Research Scientist, EPFL

Dan Alistarh

Assistant Professor, IST Austria
