
Guest Talk 2

Title: Interpretable Deep Learning: Effects of Sparsity on High Dimensional Feature Extractors

Speaker: Maximillian Machado

Abstract:

Deep learning in the 21st century is unavoidable. Whether the task is self-driving, life-saving medical diagnosis, or loan underwriting, these models have established credibility through their undeniable performance. Thanks to the advancement of graphics processing units (GPUs) and general advances in computing, neural network architectures leverage large sets of parameters to tackle a diverse range of problems. That said, their black-box nature raises a concern: must deep learning sacrifice interpretability for performance? The answer is no. In this talk, I will share recent advancements in interpretable fine-grained image recognition models. Specifically, I will showcase the Prototypical Part Network (ProtoPNet) model and discuss some recent challenges. Pushing the boundaries of deep learning by integrating state-of-the-art interpretability techniques is not only an exciting endeavor but also a critical one. Drawing on my experiences at major financial institutions and in Silicon Valley, I aim to inspire the audience to critically engage with the status quo and embrace interpretability in deep learning research and application.
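
For readers unfamiliar with ProtoPNet, the sketch below illustrates its core "this looks like that" idea: learned prototype vectors are compared against every spatial patch of a CNN feature map, and each prototype's activation is its best match anywhere in the image. This is a minimal illustration, not the talk's implementation; the prototype count, feature dimensions, and backbone shapes are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeLayer(nn.Module):
    """Prototype-similarity layer in the spirit of ProtoPNet (illustrative sketch).

    Each prototype is compared with every spatial patch of the convolutional
    feature map; max-pooling over locations gives a per-prototype activation.
    """

    def __init__(self, num_prototypes=200, channels=512, eps=1e-4):
        super().__init__()
        # Learnable prototypes shaped like 1x1 patches of the feature map
        # (sizes are assumed here for illustration).
        self.prototypes = nn.Parameter(torch.rand(num_prototypes, channels, 1, 1))
        self.eps = eps

    def forward(self, features):
        # features: (batch, channels, H, W) from a CNN backbone.
        # Squared L2 distance via ||z - p||^2 = ||z||^2 - 2 z.p + ||p||^2.
        z_sq = (features ** 2).sum(dim=1, keepdim=True)                    # (B, 1, H, W)
        p_sq = (self.prototypes ** 2).sum(dim=(1, 2, 3)).view(1, -1, 1, 1) # (1, P, 1, 1)
        zp = F.conv2d(features, self.prototypes)                           # (B, P, H, W)
        dist = F.relu(z_sq - 2 * zp + p_sq)
        # Similarity decreases monotonically with distance.
        sim = torch.log((dist + 1) / (dist + self.eps))
        # Max over spatial locations: best-matching patch per prototype.
        return F.adaptive_max_pool2d(sim, 1).flatten(1)                    # (B, P)

# Usage: prototype activations would feed a final linear layer over class logits.
layer = PrototypeLayer()
feats = torch.randn(2, 512, 7, 7)   # e.g., features from a ResNet-style backbone
print(layer(feats).shape)           # torch.Size([2, 200])
```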