Space is mostly empty. It is nevertheless an environment, quite different from the Earth's, in which disturbances of all sorts build up noise levels that end up dominating how an agent must act and react to environmental stimuli. As a consequence, methodologies such as (deep) reinforcement learning, often used to incorporate optimality principles into an agent's responses in noisy environments, admit better alternatives based on deterministic optimal control results. In this talk we briefly present ESA's concept of G&CNETs: deep networks trained to imitate the optimal state-action relation computed by applying Pontryagin's theory of optimal processes. The resulting end-to-end network is able to drive the studied system successfully through the sensed environment with unprecedented levels of optimality. Applications to interplanetary trajectories, planetary landings and quadcopter racing are shown.
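As a minimal sketch of the underlying idea — imitating an optimal state-action relation obtained from deterministic optimal control — consider a linear-quadratic problem, where the optimal feedback is known to be linear, u* = -Kx. We generate "expert" state-action pairs from that optimal law and fit a policy to them by regression; a G&CNET replaces the regression with a deep network and the LQR law with trajectories from Pontryagin's maximum principle. The system matrices and sampling choices below are illustrative assumptions, not taken from the talk:

```python
import numpy as np

# Illustrative plant: discrete-time double integrator x_{k+1} = A x_k + B u_k
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
Q = np.eye(2)          # state cost (assumed)
R = np.array([[0.1]])  # control cost (assumed)

# Solve the discrete Riccati equation by iteration to get the optimal
# feedback gain K, so the optimal state-action relation is u* = -K x.
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# "Expert" dataset: sampled states labelled with their optimal actions.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(1000, 2))
U = -(K @ X.T).T

# Imitation step: fit a policy to the optimal state-action pairs.
# Here plain least squares suffices (the optimal policy is linear);
# a G&CNET would use a deep network for nonlinear dynamics and costs.
W, *_ = np.linalg.lstsq(X, U, rcond=None)
K_hat = -W.T

# The learned policy recovers the optimal gain to machine precision.
print(np.allclose(K_hat, K, atol=1e-8))
```

The design choice this illustrates: the supervision signal comes from a deterministic optimal control solution rather than from reward-driven exploration, so the learned policy inherits the optimality of the expert trajectories directly.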