Transfer Learning at scale in Natural Language Processing

09:00-12:30, January 26 @ Foyer 1

Workshop / Overview

Transfer learning has had a significant impact on Natural Language Processing in recent months, leading to state-of-the-art results on most supervised NLP tasks.

However, these methods come with a high computational cost. In this workshop, we will give an overview of selected transfer learning methods and walk through an example of how to fine-tune these models on a downstream task. Finally, we will discuss directions for using these models at scale.

Workshop / Outcome

During this workshop, participants will gain an understanding of some state-of-the-art transfer learning techniques in NLP (e.g., BERT, XLNet). They will work through a hands-on example of how to fine-tune these models on a downstream task. Finally, participants will learn about the limitations of these models and gain an understanding of how to use them at scale.
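The fine-tuning workflow mentioned above — keeping a pretrained encoder and training a new task-specific head — can be sketched in plain PyTorch. This is a toy stand-in, not the workshop's material: the small random encoder substitutes for a large pretrained model such as BERT, and the shapes, data, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "pretrained" encoder standing in for a large model such as BERT.
encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 32))

# Freeze the pretrained weights; only the new task head is trained.
for p in encoder.parameters():
    p.requires_grad = False

head = nn.Linear(32, 2)          # new classification head for the downstream task
model = nn.Sequential(encoder, head)

x = torch.randn(8, 16)           # toy downstream batch
y = torch.randint(0, 2, (8,))    # toy binary labels

optimizer = torch.optim.AdamW(head.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

init_loss = loss_fn(model(x), y).item()  # loss before fine-tuning

for _ in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```

Freezing the encoder keeps the optimizer step cheap; in practice one often unfreezes some or all encoder layers with a lower learning rate once the head has converged.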

Workshop / Difficulty

Beginner level

Workshop / Prerequisites

  • Basic machine learning knowledge is preferred. However, we will make the workshop as inclusive as possible and accessible even to beginners.
  • Own laptop

Track / Co-organizers

Matteo Pagliardini

PhD Student, EPFL

Yassine Benyahia

Machine Learning Engineer, Iprova

Harm Cronie

CTO, Iprova

AMLD EPFL 2020 / Workshops

A Conceptual Introduction to Reinforcement Learning

With Kevin Smeyers, Katrien Van Meulder & Bram Vandendriessche

09:00-12:30, January 25 @ 1ABC

Applied Machine Learning with R

With Dirk Wulff, Markus Steiner & Michael Schulte-Mecklenbeck

09:00-17:00, January 25 @ Foyer 6

Augmenting the Web browsing experience using machine learning

With Oleksandr Paraska, Vasily Kuznetsov, Tudor Avram & Levan Tsinadze

09:00-12:30, January 25 @ 3A
