Common Limitations of Image Processing Metrics: A Picture Story
Apr 1, 2021
Annika Reinke
Matthias Eisenmann
Minu D. Tizabi
Carole H. Sudre
Tim Rädsch
Michela Antonelli
Tal Arbel
Spyridon Bakas
M. Jorge Cardoso
Veronika Cheplygina
Keyvan Farahani
Ben Glocker
Doreen Heckmann-Nötzel
Fabian Isensee
Pierre Jannin
Charles E. Kahn
Jens Kleesiek
Tahsin Kurc
Michal Kozubek
Bennett A. Landman
Geert Litjens
Klaus Maier-Hein
Bjoern Menze
Henning Müller
Jens Petersen
Mauricio Reyes
Nicola Rieke
Bram Stieltjes
Ronald M. Summers
Sotirios A. Tsaftaris
Bram Van Ginneken
Annette Kopp-Schneider
Paul Jäger
Lena Maier-Hein
Abstract
While the importance of automatic image analysis is increasing at an enormous pace, recent meta-research has revealed major flaws in algorithm validation. Specifically, performance metrics are key to objective, transparent, and comparative performance assessment, yet relatively little attention has been paid to the practical pitfalls that arise when specific metrics are applied to a given image analysis task. A common mission of several international initiatives is therefore to provide researchers with guidelines and tools for choosing performance metrics in a problem-aware manner. This dynamically updated document serves to illustrate important limitations of performance metrics commonly applied in the field of image analysis. The current version is based on a Delphi process on metrics conducted by an international consortium of image analysis experts.
Type
Publication
arXiv:2104.05642
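As a hedged illustration of the kind of metric pitfall the abstract alludes to (this sketch is not taken from the paper itself), the following Python snippet shows how the Dice Similarity Coefficient, one of the most widely used segmentation metrics, reacts very differently to the same absolute boundary error depending on the size of the segmented structure. The masks and the one-pixel shift are purely illustrative assumptions.

```python
import numpy as np


def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks."""
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0


def square_mask(shape, top_left, size):
    """Binary mask containing a filled square of the given size."""
    mask = np.zeros(shape, dtype=bool)
    r, c = top_left
    mask[r:r + size, c:c + size] = True
    return mask


shape = (100, 100)

# Same error (prediction shifted by one pixel), different structure sizes:
# the small structure is penalized far more heavily than the large one.
for size in (3, 30):
    gt = square_mask(shape, (10, 10), size)
    pred = square_mask(shape, (10, 11), size)  # one-pixel horizontal shift
    print(f"structure size {size}x{size}: DSC = {dice(pred, gt):.3f}")
```

Running this toy example yields a substantially lower Dice score for the small structure than for the large one, even though the boundary error is identical in both cases, which is one reason why metric choice should be made in a problem-aware manner.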