Recent advances in natural language processing driven by deep learning have led many to believe that AI is ready for a wide range of NLP tasks such as sentiment analysis, question answering, machine translation, named entity recognition, and language generation. People who closely follow developments in NLP know the claim that machines have already surpassed humans in language understanding - but wait, have they really? It turns out that all state-of-the-art deep learning models are extremely sensitive to small perturbations, including things as common as typos or as natural for humans as replacing words with their synonyms. During my talk, I will present an open-source framework called WildNLP which facilitates the training of robust NLP models.
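To make the kind of perturbation mentioned above concrete, here is a minimal sketch in plain Python (not WildNLP's actual API; the function name `introduce_typo` and its behaviour are illustrative assumptions) that injects a single adjacent-character swap, the sort of typo that can flip a model's prediction:

```python
import random


def introduce_typo(text, seed=None):
    """Swap two adjacent characters inside a random word,
    simulating a common keyboard typo."""
    rng = random.Random(seed)
    words = text.split()
    # Only consider words long enough for a visible swap.
    candidates = [i for i, w in enumerate(words) if len(w) > 3]
    if not candidates:
        return text
    i = rng.choice(candidates)
    w = words[i]
    pos = rng.randrange(len(w) - 1)
    # Swap the characters at positions pos and pos + 1.
    words[i] = w[:pos] + w[pos + 1] + w[pos] + w[pos + 2:]
    return " ".join(words)


# Example: a perturbed input to probe a sentiment classifier.
print(introduce_typo("The movie was absolutely wonderful", seed=42))
```

Feeding both the original and the perturbed sentence to the same model and comparing the outputs is the basic robustness check this talk is concerned with.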
Download the slides for this talk (PDF, 984.13 MB).