Interval Neural Networks as Instability Detectors for Image Reconstructions

Contributed Talk

Abstract

This work investigates the detection of instabilities that may occur when deep learning models are used for image reconstruction tasks. Although neural networks often empirically outperform traditional reconstruction methods, their use in sensitive medical applications remains controversial. Indeed, a recent series of works has demonstrated that deep learning approaches are susceptible to various types of instabilities, caused for instance by adversarial noise or out-of-distribution features. It is argued that this phenomenon can be observed regardless of the underlying architecture and that there is no easy remedy. Based on this insight, the present work demonstrates on two use cases how uncertainty quantification methods can be employed as instability detectors. In particular, it is shown that the recently proposed Interval Neural Networks are highly effective in revealing instabilities of reconstructions. Such an ability is crucial to ensure the safe use of deep learning-based methods for medical image reconstruction.
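To make the idea concrete, here is a minimal sketch (not the implementation from the talk) of interval bound propagation through a small ReLU network in PyTorch: an input interval is pushed through the layers, and the width of the output interval serves as a per-component uncertainty score. The layer sizes, the perturbation radius `eps`, and the use of fixed point weights instead of learned interval weights are all illustrative assumptions.

```python
import torch


def interval_linear(x_lo, x_hi, W, b):
    """Propagate an interval [x_lo, x_hi] through a linear layer.

    Splitting W into its positive and negative parts yields tight
    element-wise bounds on x @ W.T + b for any x in [x_lo, x_hi].
    """
    W_pos = W.clamp(min=0.0)
    W_neg = W.clamp(max=0.0)
    y_lo = x_lo @ W_pos.T + x_hi @ W_neg.T + b
    y_hi = x_hi @ W_pos.T + x_lo @ W_neg.T + b
    return y_lo, y_hi


def interval_relu(x_lo, x_hi):
    # ReLU is monotone, so applying it to both bounds preserves them.
    return x_lo.clamp(min=0.0), x_hi.clamp(min=0.0)


# Illustrative two-layer network with random (assumed) weights.
torch.manual_seed(0)
W1, b1 = torch.randn(64, 32), torch.zeros(64)
W2, b2 = torch.randn(32, 64), torch.zeros(32)

x = torch.randn(1, 32)
eps = 0.01  # assumed input perturbation radius
lo, hi = x - eps, x + eps
lo, hi = interval_relu(*interval_linear(lo, hi, W1, b1))
lo, hi = interval_linear(lo, hi, W2, b2)

uncertainty = hi - lo  # per-component interval width as uncertainty score
print(uncertainty.mean())
```

In the uncertainty quantification setting, a large interval width flags output components whose reconstruction cannot be trusted, which is what makes this kind of score usable as an instability detector.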

Date
Mar 9, 2021 1:45 PM — 2:00 PM
Location
Virtual Workshop
Jan Macdonald

My research is at the interface of applied and computational mathematics and scientific machine learning. I am interested in inverse problems, signal and image recovery, and robust and interpretable deep learning.