Talk / Overview

Deep neural networks (DNNs) are revolutionizing computing, necessitating an integrated approach across the computing stack to optimize efficiency, especially for on-device deployment. In this talk, I will explore the frontier of DNN optimization, spanning algorithms, software, and hardware. I'll start with efficient hardware-aware neural architecture search, demonstrating how tailoring DNN architectures to specific hardware can drastically enhance performance. I'll then delve into the intricacies of DNN-hardware codesign, revealing how this synergy leads to cutting-edge hardware accelerator architectures. This talk aims to shed light on the pivotal role of codesign in unleashing the full potential of next-generation DNNs, paving the way for continued breakthroughs in on-device deep learning.
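
To make the "hardware-aware" part concrete, below is a minimal Python sketch of a latency-penalized search objective of the kind used in hardware-aware neural architecture search. It is illustrative only and not the method presented in the talk; the tiny search space, the accuracy proxy, the analytic latency model, and every constant are hypothetical placeholders.

# Illustrative sketch of a hardware-aware NAS objective (not the speaker's method).
# All functions and numbers below are hypothetical placeholders.
import itertools

def accuracy_proxy(depth: int, width: int) -> float:
    # Placeholder: stands in for a trained accuracy predictor or validation accuracy.
    return 0.60 + 0.015 * depth + 0.0002 * width

def latency_ms(depth: int, width: int, device_throughput: float) -> float:
    # Placeholder analytic cost model: work grows with depth * width^2.
    macs = depth * width * width
    return macs / device_throughput

def score(depth: int, width: int, target_ms: float = 5.0) -> float:
    # Reward accuracy, penalize candidates that exceed the device latency target.
    lat = latency_ms(depth, width, device_throughput=2.0e5)
    penalty = max(0.0, lat - target_ms)
    return accuracy_proxy(depth, width) - 0.05 * penalty

if __name__ == "__main__":
    # Exhaustively score a toy (depth, width) search space and pick the best
    # candidate under the latency constraint of the hypothetical target device.
    search_space = itertools.product(range(4, 17, 4), (64, 128, 256, 512))
    best = max(search_space, key=lambda cfg: score(*cfg))
    print("best (depth, width):", best)

Changing device_throughput or target_ms shifts which architecture wins, which is the sense in which the search is tailored to specific hardware.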

Talk / Speakers

Mohamed Abdelfattah

Professor, Cornell University
