Botty Dimanov
You shouldn’t trust me: Learning models which conceal unfairness from multiple explanation methods.
B Dimanov, U Bhatt, M Jamnik, A Weller
IOS Press, 2020
Now You See Me (CME): Concept-based Model Extraction
D Kazhdan, B Dimanov, M Jamnik, P Lio, A Weller
Advances in Interpretable Machine Learning and Artificial Intelligence (AIMLAI), 2020
Is Disentanglement all you need? Comparing Concept-based & Disentanglement Approaches
D Kazhdan, B Dimanov, HA Terre, M Jamnik, P Lio, A Weller
International Conference on Learning Representations (ICLR) Workshop on …, 2021
MEME: Generating RNN Model Explanations via Model Extraction
D Kazhdan, B Dimanov, M Jamnik, P Lio
NeurIPS 2020 Workshop HAMLETS, 2020
REM: An integrative rule extraction methodology for explainable data analysis in healthcare
Z Shams, B Dimanov, S Kola, N Simidjievski, HA Terre, P Scherer, ...
medRxiv, 2021.01.25.21250459, 2021
Failing conceptually: Concept-based explanations of dataset shift
MA Wijaya, D Kazhdan, B Dimanov, M Jamnik
arXiv preprint arXiv:2104.08952, 2021
Interpretable Deep Learning: Beyond Feature-Importance with Concept-based Explanations
B Dimanov
University of Cambridge, 2021
GCI: A (G)raph (C)oncept (I)nterpretation Framework
D Kazhdan, B Dimanov, LC Magister, P Barbiero, M Jamnik, P Lio
arXiv preprint arXiv:2302.04899, 2023
Step-Wise Sensitivity Analysis: Identifying Partially Distributed Representations For Interpretable Deep Learning
B Dimanov, M Jamnik
ICLR 2019 Debugging Machine Learning Models, 2019
Explainer Divergence Scores (EDS): Some Post-Hoc Explanations May be Effective for Detecting Unknown Spurious Correlations
S Cardozo, GI Montero, D Kazhdan, B Dimanov, M Wijaya, M Jamnik, ...
arXiv preprint arXiv:2211.07650, 2022
Method for inspecting a neural network
BT Dimanov, M Jamnik
US Patent 11,449,578, 2022