Classical machine-learning results are, in the best-case scenario, supplied with confidence intervals, yet they give very little insight into the uncertainty of the estimates and predictions they provide. Furthermore, small datasets or highly flexible models can lead to overfitting, and overconfident predictions in sensitive fields such as healthcare can be costly and harmful.
The Bayesian approach to model formulation offers a tool to resolve these shortcomings and allows for a great deal of flexibility: a broad range of models – from linear regression to neural networks – can be formalised with the help of probabilistic programming languages (PPLs); prior knowledge can be taken into account; and multiple sources of uncertainty can be incorporated and propagated into the uncertainty of the resulting estimates and predictions. This makes probabilistic modelling applicable even to small datasets, where classical models would fail to produce reliable results.
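As a minimal illustration of the idea (a sketch for this abstract, not part of the workshop materials), the following Python snippet infers a coin's bias from a handful of flips by grid approximation of the posterior; the width of the resulting credible interval is precisely the uncertainty statement that a bare point estimate hides. The dataset and grid size are made up for illustration.

```python
# Illustrative sketch: Bayesian inference on a tiny dataset via grid
# approximation. A point estimate (6/9 ≈ 0.667) hides how uncertain we are.
heads, flips = 6, 9  # hypothetical small dataset: 9 coin flips, 6 heads

# Discretise the parameter space; under a uniform prior the posterior is
# proportional to the likelihood p^heads * (1-p)^(flips-heads).
grid = [i / 1000 for i in range(1001)]
unnorm = [p**heads * (1 - p) ** (flips - heads) for p in grid]
total = sum(unnorm)
posterior = [u / total for u in unnorm]

# Posterior mean and a central 94% credible interval: the interval's
# width quantifies how much the model "doesn't know".
mean = sum(p * w for p, w in zip(grid, posterior))
cdf, lo, hi = 0.0, None, None
for p, w in zip(grid, posterior):
    cdf += w
    if lo is None and cdf >= 0.03:
        lo = p
    if hi is None and cdf >= 0.97:
        hi = p

print(f"posterior mean: {mean:.3f}, 94% credible interval: [{lo:.2f}, {hi:.2f}]")
```

With only nine observations the credible interval stays wide, which is the model's honest way of saying "I don't know yet"; a PPL such as Turing, Stan, or PyMC3 automates the same computation for far richer models.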
As an introduction to the workshop, we will discuss the basics of Bayesian inference. The focus, however, will be on hands-on experience. We will consider a number of problems and implement them in Turing, a Julia-based probabilistic programming language. A brief introduction to Julia will be given at the start. Those who prefer R or Python can follow along as well: translations of the Bayesian workflow into R/Stan and Python/PyMC3 will be provided in a GitHub repository.
Get a foretaste of the workshop here: https://medium.com/@liza_p_semenova/ordered-logistic-regression-and-probabilistic-programming-502d8235ad3f
Teach your model to say “I don’t know”. Hands-on experience is key: learn to program in Julia and use Turing, a probabilistic programming language, for Bayesian inference (alternatively, follow along using the provided R+Stan or Python+PyMC3 code).
Intermediate level
For technical preparations, please follow the instructions in the "Preparation" section on the following page: https://github.com/elizavetasemenova/EmbracingUncertainty/blob/master/README.md