Deep neural networks have demonstrated impressive performance on visual recognition tasks relevant to the operation of autonomous drones and personal electric air taxis. Their application to visual problems such as object detection and image segmentation is therefore promising, and even necessary, for autonomous flight. The downside of this increased model performance is higher complexity, which poses challenges for interpretability, explainability, and ultimately the certification of safety-critical aviation applications. How do you convince regulators (and ultimately the public) that your model is robust to adversarial attacks? How do you prove that your training and test datasets are exhaustive? How do you test edge cases when your input space is infinite and any mistake is potentially fatal?

Over the last few months, we have partnered with EASA (the European Union Aviation Safety Agency) to explore how existing regulations for safety-critical applications can be adapted to encompass modern machine-learning techniques. In this talk, we will walk through the stages of a typical machine-learning pipeline to discuss design choices for neural network architectures, desirable properties of training and test datasets, model generalizability, and how to defend against adversarial attacks. Finally, we will consider the opportunities, challenges, and lessons that may apply more generally when building AI for safety-critical applications in the future.
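To make the adversarial-robustness question concrete: the talk does not prescribe a particular attack, but the fast gradient sign method (FGSM) is a standard baseline that shows how easily a classifier's input can be perturbed. The sketch below is illustrative only; the `fgsm_attack` helper, the `epsilon` perturbation budget, and the PyTorch classifier interface are assumptions, not material from the talk.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Minimal FGSM sketch: perturb the input in the direction of the
    sign of the loss gradient, scaled by a small budget epsilon.

    model   -- any differentiable classifier mapping images to logits
    x       -- input batch, values assumed to lie in [0, 1]
    y       -- ground-truth labels
    epsilon -- illustrative per-pixel perturbation budget
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp the
    # result back to the valid input range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

A perturbation of this kind is typically imperceptible to a human, yet can flip the model's prediction, which is precisely why certifying robustness over an infinite input space is so difficult.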
Download the slides for this talk (PDF, 20350.74 MB).