Communication networks have not only become a critical infrastructure of our digital society, but are also increasingly complex and hence error-prone. This has recently motivated the study of more automated, “self-driving” networks: networks which measure, analyze, and control themselves adaptively, reacting to changes in their environment. In particular, such networks require mechanisms to evaluate potential solutions to problems. However, evaluating solutions is a surprisingly challenging task: when using human-constructed examples or real-world data, it is difficult to assess the degree to which the data also represents the spectrum of future demands. Moreover, evaluations that fail to demonstrate generalization can hide algorithmic weak spots, which may eventually lead to reliability and security issues. To address this problem, we propose two data-driven frameworks: NetBOA and Toxin. In first proof-of-concept implementations, we show (1) how NetBOA can generate real network traffic to benchmark network functions (e.g., Open vSwitch) and (2) how Toxin creates adversarial inputs for network algorithms in a data-center simulation environment. This approach brings many benefits: it can help to reveal weak spots of algorithms and to harden them against future network demands.