Digital ethics has become an increasingly important topic, and it is highly relevant to machine learning. Biased training data (e.g., gender or racial bias) can have dramatic consequences for the fairness of applications built on machine learning models. When a model trained on biased data is used for automated decision making, it can produce unfair decisions. We therefore need a transparent and independent classification system to measure the fairness of the training data fed into machine learning algorithms.
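The idea of measuring fairness directly on training data can be illustrated with a simple statistic. The sketch below computes the disparate-impact ratio, a standard fairness measure (used here purely as an illustration, not as the classification system proposed above), on hypothetical toy data; all names and values are assumptions for the example.

```python
# A minimal sketch of one way to quantify bias in training data:
# the "disparate impact" ratio, i.e. the rate of positive labels
# in an unprivileged group divided by that in a privileged group.
# Values far below 1.0 (a common rule of thumb is below 0.8)
# suggest the labels are skewed against the unprivileged group.

def disparate_impact(labels, groups, unprivileged, privileged):
    """Ratio of positive-label rates between two groups.

    labels: list of 0/1 outcomes in the training data
    groups: list of group memberships, same length as labels
    """
    def positive_rate(group):
        outcomes = [y for y, g in zip(labels, groups) if g == group]
        return sum(outcomes) / len(outcomes)

    return positive_rate(unprivileged) / positive_rate(privileged)

# Hypothetical data: group "A" gets a positive label 3/4 of the
# time, group "B" only 1/4 of the time.
labels = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(disparate_impact(labels, groups, "B", "A"))  # ≈ 0.33
```

A ratio of roughly 0.33 for this toy dataset falls well below the common 0.8 threshold, flagging the training labels themselves as a potential source of unfair model decisions.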