By 2025, there could be as many as 75 billion devices globally connected to the Internet. In this new era of hyper-connectivity, these devices will not only collect data, but also produce and process information directly on the products closest to their users: on the edge. Increased functionality and computing power available on the edge are already changing the way products are designed and built. Edge computing refers to applications, services, and processing performed outside of a central data center and closer to end users. The definition of “closer” falls along a spectrum and depends heavily on the networking technologies used, the application's characteristics, and the desired end-user experience.
Edge applications do not need to communicate with the cloud, but they may still interact with servers and internet-based applications. Many of the most common edge devices feature physical sensors and actuators (such as temperature sensors, lights, and speakers), and moving computing power closer to the physical world makes sense. Do you really need to rely on an off-site cloud server when asking your lamp to dim the lights? With collection and processing power now available on the edge, the volume of data that must be moved to and stored in the cloud can be significantly reduced. On the other hand, the new hybrid infrastructures of edge computing yield interaction networks and distributed architectures that are more complex than their cloud counterparts. New software for resource allocation, application deployment, and scheduling needs to be developed to manage this unprecedented execution flow.
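To make the data-reduction point concrete, here is a minimal sketch (with purely illustrative numbers and a hypothetical `summarize` helper) of how edge-side aggregation shrinks what must be shipped to the cloud: instead of uploading every raw sensor reading, the device uploads one compact summary per time window.

```python
def summarize(readings, window=60):
    """Collapse each `window` raw readings into a (min, max, mean) tuple."""
    summaries = []
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        summaries.append((min(chunk), max(chunk), sum(chunk) / len(chunk)))
    return summaries

# Illustrative workload: one temperature reading per second for an hour.
raw = [20.0 + (i % 10) * 0.1 for i in range(3600)]
summary = summarize(raw, window=60)

print(len(raw), "raw readings ->", len(summary), "uploaded summaries")
# -> 3600 raw readings -> 60 uploaded summaries
```

The exact reduction factor depends on the window size and on what fidelity the cloud side actually needs; the sketch only shows the shape of the trade-off.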
Edge computing impacts three distinct dimensions, each with profound implications: reliability, privacy, and latency. A primary motivator driving edge computing’s adoption is the need for robust and reliable technology in “hard to reach” environments. Edge computing also helps alleviate some privacy concerns by keeping processing and collection in the environments where the data is produced. Finally, when computing happens on the edge, latency is much less of an issue: users won’t have to wait while data is sent to and from a cloud server, because no round trip to a remote machine is required and therefore no communication overhead is introduced. In some applications latency is key, and some systems cannot afford the delay of sending information to off-site cloud servers.
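A back-of-the-envelope sketch of the latency argument above (all numbers are illustrative assumptions, not measurements): cloud execution pays the network round trip on top of compute time, while edge execution pays only for (typically slower) local compute.

```python
def cloud_latency_ms(network_rtt_ms, server_compute_ms):
    # End-to-end delay = round trip to the remote machine + its compute time.
    return network_rtt_ms + server_compute_ms

def edge_latency_ms(local_compute_ms):
    # No round trip: the only cost is the local computation itself.
    return local_compute_ms

# Assumed figures for a small inference task on a mobile network.
cloud = cloud_latency_ms(network_rtt_ms=80, server_compute_ms=5)  # 85 ms
edge = edge_latency_ms(local_compute_ms=30)                       # 30 ms
print(f"cloud: {cloud} ms, edge: {edge} ms")
```

Under these assumptions the edge wins even though its processor is slower, because the round trip dominates; with a fast local network or a very heavy model, the balance can tip the other way.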
In this track, we’ll explore three aspects of edge computing with a focus on applied machine learning. First, we’ll discuss neuromorphic architectures and hardware-accelerated deep learning. Recent progress in this field is starting to enable larger and more efficient neural networks to run on ever-smaller processing units. Then, we’ll turn to machine learning on the edge and explore several aspects of this field, from ML on smartphones to embedded speech recognition for IoT. Finally, we’ll focus on edge intelligence and tackle questions such as resource optimization, federated learning, and ambient processing.