Explainability in NLP I: Explainability in Deep Learning & NLP

March 17th, 2021. 12:00 - 1:30 pm

Speaker: Benedikt Häcker

Artificial neural networks are widely used for pattern recognition and natural language processing. While they deliver good results, their internal workings are far from clear; to describe this phenomenon, such models are often called "black boxes". Not fully understanding how a model works can lead to legal issues, especially when the model makes decisions that have an impact on human lives. To address this problem we introduce "explainability", which aims to explain the decision process of a model. In this seminar we will look into three very different frameworks for achieving explainability and give examples of how they are applied to real data. We will also see that explainability is useful for finding unwanted bias and for making models more robust.

Location: Zoom