Model interpretability and understanding for PyTorch
#Computer Science# Zennit is a high-level Python framework built on PyTorch for explaining and exploring neural networks with attribution methods such as LRP.
#NLP# Collection of NLP model explanations and accompanying analysis tools
#Time Series# An open-source library for the interpretability of time series classifiers
Explainable AI in Julia.
#Computer Science# A set of notebooks guiding the process of fine-grained image classification of bird species, using PyTorch-based deep neural networks.
#Computer Science# Counterfactual SHAP: a framework for counterfactual feature importance
Materials for "Quantifying the Plausibility of Context Reliance in Neural Machine Translation" at ICLR'24 🐑 🐑
Materials for the Lab "Explaining Neural Language Models from Internal Representations to Model Predictions" at AILC LCL 2023 🔍
The official repo for the EACL 2023 paper "Quantifying Context Mixing in Transformers"
#Large Language Models# Code and data for the ACL 2023 NLReasoning Workshop paper "Saliency Map Verbalization: Comparing Feature Importance Representations from Model-free and Instruction-based Methods" (Feldhus et al., 2023...
#Computer Science# ⛈️ Code for the paper "End-to-End Prediction of Lightning Events from Geostationary Satellite Images"
Implementation of the Integrated Directional Gradients method for explaining deep neural network models.
Efficient and accurate explanation estimation with distribution compression (ICLR 2025 Spotlight)
Reproducible code for our paper "Explainable Learning with Gaussian Processes"
#Computer Science# Bachelor's thesis for a degree in Economics at HSE University, Saint Petersburg (2022)
Code for the paper "On Marginal Feature Attributions of Tree-Based Models"
Robustness of Global Feature Effect Explanations (ECML PKDD 2024)
NO2 Prediction: a performance and robustness comparison between Random Forest and Graph Neural Network models
Understanding the UNet for the traffic forecasting task: a visual analytics approach