Industry has gone from skepticism to full-on adoption of Deep Learning for computer vision problems. This means we need to be able to rely on our models the same way we rely on production code. We need to break our models, rebuild them, and prove to stakeholders that they are robust. Adversarial examples are one of several ways to do this, but not the only one.
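As a flavor of what "breaking a model" can look like, here is a minimal sketch of an adversarial attack in the FGSM (Fast Gradient Sign Method) style, run against a toy logistic-regression model. The weights, input, and epsilon are hypothetical values chosen purely for illustration; a real attack would target a trained vision model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical toy "model": logistic regression with hand-picked weights.
w = np.array([1.0, -2.0, 3.0])
b = 0.0

def predict(x):
    return sigmoid(w @ x + b)

def fgsm(x, y, eps):
    """One-step FGSM: nudge the input along the sign of the loss gradient."""
    p = predict(x)
    grad = (p - y) * w            # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad)

x = np.array([0.1, -0.05, 0.05])   # clean input, classified as class 1
x_adv = fgsm(x, y=1.0, eps=0.2)    # small L-infinity perturbation

print(predict(x) > 0.5)      # True: clean input predicted as class 1
print(predict(x_adv) > 0.5)  # False: tiny perturbation flips the prediction
```

The point is not this particular attack but the failure mode it exposes: a perturbation small enough to be invisible to a human can flip the model's decision, which is exactly the kind of fragility a production test suite should surface.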
Are you sure your test set is good enough? What would it take for you to trust a model you put in production? Can industry and academia work together towards reliable Deep Learning models?