1. Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem. M. Hein, M. Andriushchenko, J. Bitterwolf. CVPR 2019. Cited by 634.
2. A simple way to make neural networks robust against diverse image corruptions. E. Rusak, L. Schott, R. S. Zimmermann, J. Bitterwolf, O. Bringmann, M. Bethge, et al. ECCV 2020. Cited by 216.
3. Certifiably adversarially robust detection of out-of-distribution data. J. Bitterwolf, A. Meinke, M. Hein. NeurIPS 2020. Cited by 81.
4. In or Out? Fixing ImageNet Out-of-Distribution Detection Evaluation. J. Bitterwolf, M. Mueller, M. Hein. ICML 2023. Cited by 61.
5. Increasing the robustness of DNNs against image corruptions by playing the game of noise. E. Rusak, L. Schott, R. Zimmermann, J. Bitterwolf, O. Bringmann, M. Bethge, et al. Towards Trustworthy ML: Rethinking Security and Privacy for ML (ICLR 2020 …), 2020. Cited by 52.
6. Provably adversarially robust detection of out-of-distribution data (almost) for free. A. Meinke, J. Bitterwolf, M. Hein. NeurIPS 2022. Cited by 30*.
7. Breaking Down Out-of-Distribution Detection: Many Methods Based on OOD Training Data Estimate a Combination of the Same Core Quantities. J. Bitterwolf, A. Meinke, M. Augustin, M. Hein. ICML 2022. Cited by 29.
8. Classifiers should do well even on their worst classes. J. Bitterwolf, A. Meinke, V. Boreiko, M. Hein. Shift Happens (ICML 2022 Workshop), 2022. Cited by 4.
9. Neural Network Heuristic Functions: Taking Confidence into Account. D. Heller, P. Ferber, J. Bitterwolf, M. Hein, J. Hoffmann. SoCS 2022. Cited by 3.