Principled Detection of Out-of-Distribution Examples in Neural Networks
Given a neural network pre-trained on data from some distribution P (referred to as in-distribution data), the task is to detect examples drawn from a different distribution Q (referred to as out-of-distribution data).
For example, if a digit-recognition network is trained on MNIST images, an image of an animal would be an out-of-distribution example.
Neural networks can make high-confidence predictions even when the input is unrecognisable or irrelevant.
The paper proposes ODIN, which detects such out-of-distribution examples without retraining or modifying the pre-trained model: it applies temperature scaling to the softmax and adds a small perturbation to the input, then thresholds the resulting maximum softmax score.
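A minimal PyTorch sketch of this scoring procedure is given below, assuming a classifier `model` that returns logits; the temperature `T`, perturbation magnitude `epsilon`, and decision threshold are illustrative placeholders (the paper tunes them on validation data).

```python
import torch
import torch.nn.functional as F

def odin_score(model, x, temperature=1000.0, epsilon=0.0014):
    """Sketch of ODIN scoring: temperature scaling + input perturbation."""
    model.eval()
    x = x.clone().detach().requires_grad_(True)

    # Temperature-scaled logits for the original input.
    logits = model(x) / temperature

    # Perturb the input in the direction that increases the
    # maximum softmax score (i.e., decreases the cross-entropy
    # loss w.r.t. the predicted class).
    pred = logits.argmax(dim=1)
    loss = F.cross_entropy(logits, pred)
    loss.backward()
    x_perturbed = x - epsilon * x.grad.sign()

    # Recompute the temperature-scaled max softmax score on the
    # perturbed input; this is the ODIN confidence score.
    with torch.no_grad():
        logits = model(x_perturbed) / temperature
        score = F.softmax(logits, dim=1).max(dim=1).values
    return score

# Usage sketch: flag inputs whose score falls below a threshold delta
# (chosen on validation data) as out-of-distribution.
# is_in_distribution = odin_score(model, x_batch) > delta
```

The intuition, per the paper, is that both tricks widen the gap between in- and out-of-distribution scores: temperature scaling softens the softmax, and the perturbation boosts confidence more for in-distribution inputs than for out-of-distribution ones.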