Talk / Overview

The performance of mobile AI accelerators has been evolving rapidly over the past three years, nearly doubling with each new generation of SoCs. The current, fourth generation of mobile NPUs is already approaching the results of recently introduced CUDA-compatible Nvidia graphics cards, which, together with the increased capabilities of mobile deep learning frameworks, makes it possible to run complex and deep AI models on mobile devices. In this talk, we will discuss in detail which AI hardware is available in chipsets from Qualcomm, HiSilicon, Samsung, MediaTek and Unisoc, and compare their performance on several real-world deep learning tasks. We will talk about the Android machine learning ecosystem and the deployment of deep learning models on smartphones. Finally, we will compare the results obtained on mobile NPUs with the performance of desktop CPUs and GPUs to understand the relationship between these hardware platforms.
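As background for the deployment topic, the sketch below shows one common route for dispatching inference to mobile AI accelerators: loading a TensorFlow Lite model and attaching the Android NNAPI delegate. It is a minimal, illustrative sketch; the model path and tensor shapes are assumptions, not material from the talk.

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate
import java.io.File
import java.nio.ByteBuffer
import java.nio.ByteOrder

fun main() {
    // Hypothetical model path; in a real app the model usually ships in the APK assets.
    val modelFile = File("/data/local/tmp/mobilenet_v2.tflite")

    // Route supported operations to the on-device NPU / DSP / GPU through Android's NNAPI.
    val nnApiDelegate = NnApiDelegate()
    val options = Interpreter.Options().addDelegate(nnApiDelegate)
    val interpreter = Interpreter(modelFile, options)

    // Assumed tensor shapes: 1x224x224x3 float32 input, 1x1000 float32 output.
    val input = ByteBuffer.allocateDirect(1 * 224 * 224 * 3 * 4).order(ByteOrder.nativeOrder())
    val output = ByteBuffer.allocateDirect(1 * 1000 * 4).order(ByteOrder.nativeOrder())

    // Single inference pass; operations the accelerator cannot handle fall back to the CPU.
    interpreter.run(input, output)

    interpreter.close()
    nnApiDelegate.close()
}
```

In practice, which operations actually run on the NPU depends on the chipset's NNAPI driver, which is one reason the per-vendor comparisons in this talk matter.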

Talk / Speakers

Andrey Ignatov

PhD Student, Computer Vision Lab, ETH Zurich

Talk / Slides

Download the slides for this talk (PDF, 2263.06 MB).

Talk / Highlights

Deep Learning on Smartphones: A Detailed Overview (21:27)

With Andrey Ignatov · Published March 12, 2020
