Calibration of Neural Network Outputs
Abstract
In recent years, neural networks have grown increasingly popular and are now entrusted with critical tasks such as driving cars or detecting cancer cells. In these areas especially, it is of immense importance that the confidence a neural network assigns to a prediction matches the actual probability that the prediction is correct. In other words, it is important that the outputs of neural networks are calibrated. Starting from the foundational work of [1], a growing body of research on post-hoc calibration methods has emerged in recent years. In addition to introducing the topic of calibration, this report highlights three recent methods that each address the calibration problem in their own way [2, 3, 4]. Besides presenting the theoretical derivations and background of each method, the report compares the three methods both qualitatively and quantitatively.
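To make the notion of calibration concrete: a common way to quantify the mismatch between confidence and accuracy is the expected calibration error (ECE), which bins predictions by confidence and averages the gap between each bin's mean confidence and its empirical accuracy. The following is a minimal sketch of that metric (the function name and binning scheme are illustrative, not taken from the report):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence and average the gap between
    mean confidence and empirical accuracy, weighted by bin size."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()        # empirical accuracy in this bin
            conf = confidences[mask].mean()   # average confidence in this bin
            ece += mask.mean() * abs(conf - acc)
    return ece

# A perfectly calibrated toy example: 80% confidence, 80% accuracy.
conf = [0.8] * 10
corr = [1] * 8 + [0] * 2
print(expected_calibration_error(conf, corr))  # → 0.0
```

A model whose 90%-confidence predictions are right only half the time would score an ECE of 0.4 on those predictions, which is exactly the kind of miscalibration that post-hoc methods aim to correct.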
This research project was carried out in the seminar “Beyond Deep Learning: Selected Topics on Novel Challenges” under the supervision of Christian Tomani and Prof. Dr. Daniel Cremers at the Computer Vision Group of TUM.