Non-linearities are what give deep learning models their predictive power, but as a consequence they also make these models opaque, or "black boxes". Performance metrics such as precision, recall, accuracy, and AUC-ROC are important, but they aren't sufficient to make deep learning models deployable in production workloads.
In many industries, such as finance, healthcare, and government policy-making, where the cost of a wrong prediction is high, it is important to earn trust by explaining how the model works.
This workshop aims to unravel the inner workings of deep learning models, offering practical advice on making model predictions explainable across architectures that use Convolutional Neural Networks, Recurrent Neural Networks, attention, and self-attention, and discussing the trade-off between interpretability and predictive power. A simple example of such a technique is sketched below.
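As a taste of the kind of technique covered, here is a minimal sketch of gradient-based saliency for a CNN image classifier, written in PyTorch. The pretrained model and the random input tensor are stand-ins for illustration, not the workshop's actual materials:

```python
import torch
import torchvision.models as models

# Load a pretrained CNN; any torchvision classifier would do here.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# `image` stands in for a preprocessed input of shape (1, 3, 224, 224).
image = torch.randn(1, 3, 224, 224, requires_grad=True)

# Forward pass, then backpropagate the score of the predicted class.
logits = model(image)
pred_class = logits.argmax(dim=1).item()
logits[0, pred_class].backward()

# The saliency map is the per-pixel magnitude of the input gradient:
# large values mark pixels the prediction is most sensitive to.
saliency = image.grad.abs().max(dim=1)[0]  # shape (1, 224, 224)
```

Methods like this trade simplicity for fidelity, which is exactly the interpretability-versus-predictive-power tension the workshop examines.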
We'll focus on three domains, namely Computer Vision, Natural Language Processing, and Time Series forecasting, enabling data scientists to communicate how their models work in the real world.
Practitioners will learn techniques to build, evaluate, and explain deep learning models, with particular emphasis on financial time series, NLP, and CV use cases.
Intermediate level
- Intermediate knowledge of Python
- Deep learning basics
- AWS account
- Own laptop