Gesina Schwalbe
Postdoc, University of Lübeck
Verified email at pheerai.de
Title · Cited by · Year
A comprehensive taxonomy for explainable artificial intelligence: a systematic survey of surveys on methods and concepts
G Schwalbe, B Finzel
Data Mining and Knowledge Discovery, 1-59, 2023
Cited by 120* · 2023
A survey on methods for the safety assurance of machine learning based systems
G Schwalbe, M Schels
10th European Congress on Embedded Real Time Software and Systems (ERTS 2020), 2020
Cited by 55 · 2020
Inspect, understand, overcome: A survey of practical methods for AI safety
S Houben, S Abrecht, M Akila, A Bär, F Brockherde, P Feifel, ...
Deep Neural Networks and Data for Automated Driving: Robustness, Uncertainty …, 2022
Cited by 54 · 2022
Structuring the safety argumentation for deep neural network based perception in automotive applications
G Schwalbe, B Knie, T Sämann, T Dobberphul, L Gauerhof, S Raafatnia, ...
Computer Safety, Reliability, and Security. SAFECOMP 2020 Workshops: DECSoS …, 2020
Cited by 27 · 2020
Expressive explanations of DNNs by combining concept analysis with ILP
J Rabold, G Schwalbe, U Schmid
KI 2020: Advances in Artificial Intelligence: 43rd German Conference on AI …, 2020
Cited by 21 · 2020
Concept embedding analysis: A review
G Schwalbe
arXiv preprint arXiv:2203.13909, 2022
Cited by 20 · 2022
Concept enforcement and modularization as methods for the ISO 26262 safety argumentation of neural networks
G Schwalbe, M Schels
Otto-Friedrich-Universität, 2020
Cited by 16 · 2020
Evaluating the stability of semantic concept representations in CNNs for robust explainability
G Mikriukov, G Schwalbe, C Hellert, K Bade
World Conference on Explainable Artificial Intelligence, 499-524, 2023
Cited by 4 · 2023
Interpretable model-agnostic plausibility verification for 2d object detectors using domain-invariant concept bottleneck models
M Keser, G Schwalbe, A Nowzad, A Knoll
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
Cited by 4 · 2023
Verification of size invariance in DNN activations using concept embeddings
G Schwalbe
IFIP International Conference on Artificial Intelligence Applications and …, 2021
Cited by 4 · 2021
Strategies for safety goal decomposition for neural networks
G Schwalbe, M Schels
Abstracts 3rd ACM Computer Science in Cars Symposium, 2019
Cited by 3 · 2019
Enabling Verification of Deep Neural Networks in Perception Tasks Using Fuzzy Logic and Concept Embeddings
G Schwalbe, C Wirth, U Schmid
arXiv preprint arXiv:2201.00572, 2022
Cited by 2* · 2022
GCPV: Guided Concept Projection Vectors for the Explainable Inspection of CNN Feature Spaces
G Mikriukov, G Schwalbe, C Hellert, K Bade
arXiv preprint arXiv:2311.14435, 2023
Cited by 1 · 2023
Quantified Semantic Comparison of Convolutional Neural Networks
G Mikriukov, G Schwalbe, C Hellert, K Bade
arXiv preprint arXiv:2305.07663, 2023
Cited by 1 · 2023
The Anatomy of Adversarial Attacks: Concept-based XAI Dissection
G Mikriukov, G Schwalbe, F Motzkus, K Bade
arXiv preprint arXiv:2403.16782, 2024
2024
Have We Ever Encountered This Before? Retrieving Out-of-Distribution Road Obstacles from Driving Scenes
Y Shoeb, R Chan, G Schwalbe, A Nowzad, F Güney, H Gottschalk
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer …, 2024
2024
Method for monitoring logical consistency in a machine learning model and associated monitoring device
G Schwalbe, C Wirth
US Patent App. 18/046,087, 2023
2023
Concept Embedding Analysis Based Methods for the Safety Assurance of Deep Neural Networks: towards safe automotive computer vision applications
G Schwalbe
Otto-Friedrich-Universität Bamberg, Fakultät Wirtschaftsinformatik und …, 2022
2022
Concept Enforcement and Modularization for the ISO 26262 Safety Case of Neural Networks
G Schwalbe, U Schmid
Otto-Friedrich-Universität, 2019
2019
1.13 Object Detection Plausibility with Concept-Bottleneck Models
M Keser, G Schwalbe, A Nowzad