Concept-Based Explanations in Computer Vision: Where Are We and Where Could We Go?

Title: Concept-Based Explanations in Computer Vision: Where Are We and Where Could We Go?
Publication Type: Conference Paper
Year of Publication: 2024
Authors: Lee, J. H., Mikriukov, G., Schwalbe, G., Wermter, S., Wolter, D.
Conference Name: Computer Vision – ECCV 2024 Workshops
Volume: 15643
Date Published: 05/2025
Publisher: Springer Nature Switzerland
Conference Location: Milano, Italy
Keywords: Concept Control, Concept Embedding Analysis, Concept-Based Explainable AI, Knowledge Representation, Neuro-Symbolic AI
Abstract

Concept-based XAI (C-XAI) approaches to explaining neural vision models form a promising field of research, since explanations that refer to concepts (i.e., semantically meaningful parts in an image) are intuitive to understand and go beyond saliency-based techniques, which only reveal relevant regions. Given the remarkable progress in this field in recent years, it is time for the community to take a critical look at the advances and trends. Consequently, this paper reviews C-XAI methods to identify interesting and underexplored areas and proposes future research directions. To this end, we consider three main directions: the choice of concepts to explain, the choice of concept representation, and how we can control concepts. For the latter, we propose techniques and draw inspiration from the field of knowledge representation and learning, showing how this could enrich future C-XAI research.

DOI: 10.1007/978-3-031-92648-8_17
BibTeX:
@inproceedings{1473,
	title = {Concept-Based Explanations in Computer Vision: Where Are We and Where Could We Go?},
	booktitle = {Computer Vision {\textendash} ECCV 2024 Workshops},
	volume = {15643},
	year = {2024},
	month = {05/2025},
	pages = {266-287},
	publisher = {Springer Nature Switzerland},
	organization = {Springer Nature Switzerland},
	type = {Conference Paper},
	address = {Milano, Italy},
	abstract = {Concept-based XAI (C-XAI) approaches to explaining neural vision models form a promising field of research, since explanations that refer to concepts (i.e., semantically meaningful parts in an image) are intuitive to understand and go beyond saliency-based techniques, which only reveal relevant regions. Given the remarkable progress in this field in recent years, it is time for the community to take a critical look at the advances and trends. Consequently, this paper reviews C-XAI methods to identify interesting and underexplored areas and proposes future research directions. To this end, we consider three main directions: the choice of concepts to explain, the choice of concept representation, and how we can control concepts. For the latter, we propose techniques and draw inspiration from the field of knowledge representation and learning, showing how this could enrich future C-XAI research.},
	keywords = {Concept Control, Concept Embedding Analysis, Concept-Based Explainable AI, Knowledge Representation, Neuro-Symbolic AI},
	doi = {10.1007/978-3-031-92648-8_17},
	author = {Jae Hee Lee and Georgii Mikriukov and Gesina Schwalbe and Stefan Wermter and Diedrich Wolter}
}