Talk / Overview

Space is mostly empty. It is thus an environment quite different from the Earth, where disturbances of all kinds build up noise levels that end up dominating how an agent must act and react to environmental stimuli. As a consequence, methodologies such as (deep) reinforcement learning, often used to incorporate optimality principles into an agent's responses to a noisy environment, admit better alternatives based on deterministic optimal control results. In this talk we briefly present ESA's concept of G&CNETs: deep networks trained to imitate the optimal state-action relation computed by applying Pontryagin's theory of optimal processes. The resulting end-to-end network is able to drive the studied system successfully through the sensed environment with unprecedented levels of optimality. Applications to interplanetary trajectories, planetary landings and quadcopter racing are shown.
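The imitation step described above can be illustrated with a minimal, hypothetical sketch: assume a Pontryagin-based optimal-control solver has already produced a dataset of (state, optimal action) pairs, and fit a feed-forward network to it by supervised regression. The network sizes, state/action dimensions and placeholder data below are illustrative assumptions, not ESA's actual implementation.

```python
# Minimal, hypothetical sketch of the imitation step behind a G&CNET:
# fit a feed-forward network to (state, optimal action) pairs assumed to
# have been pre-computed by a Pontryagin-based optimal-control solver.
# Shapes, sizes and the random placeholder data are illustrative only.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 7, 3  # e.g. position, velocity, mass -> thrust vector (assumed)

# Placeholder dataset: in practice these come from many solved optimal trajectories.
states = torch.randn(10_000, STATE_DIM)
optimal_actions = torch.randn(10_000, ACTION_DIM)

# Simple fully connected policy network mapping the sensed state to a control action.
policy = nn.Sequential(
    nn.Linear(STATE_DIM, 128), nn.Softplus(),
    nn.Linear(128, 128), nn.Softplus(),
    nn.Linear(128, ACTION_DIM),
)

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(states, optimal_actions),
    batch_size=256, shuffle=True,
)

for epoch in range(10):
    for batch_states, batch_actions in loader:
        optimizer.zero_grad()
        loss = loss_fn(policy(batch_states), batch_actions)  # imitate the optimal action
        loss.backward()
        optimizer.step()

# At inference time the trained network acts as an end-to-end guidance-and-control law:
# action = policy(current_state)
```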

Talk / Speakers

Dario Izzo

European Space Agency

Talk / Slides

Download the slides for this talk (PDF, 33501.51 MB).

Talk / Highlights

14:09

Guidance And Control Networks: from perception to optimal action

With Dario Izzo
Published March 12, 2020
