Search results for "Author Riccardo Guidotti"
Springer Explainable Artificial Intelligence
Book Synopsis:
Applications of XAI
- Global Explanations of Expected Goal Models in Football
- Comprehensive Explanations Using Natural Language Queries
- A Human-in-the-Loop Approach to Learning Social Norms as Defeasible Logical Constraints
- A Cautionary Tale About "Neutrally" Informative AI Tools Ahead of the 2025 Federal Elections in Germany
Human-Centered XAI & Argumentation
- Evaluating Argumentation Graphs as Global Explainable Surrogate Models for Dense Neural Networks and their Comparison with Decision Trees
- Mind the XAI Gap: A Human-Centered LLM Framework for Democratizing Explainable AI
- Explanations for Medical Diagnosis Predictions Based on Argumentation Schemes
- Spectral Occlusion - Attribution Beyond Spatial Relevance Heatmaps
- Non-experts' Trust in XAI is Unreasonably High
- Explainable and Interactive Hybrid Decision Making
- Exploring Annotator Disagreement in Sexism Detection: Insights from Explainable AI
- Can You Regulate Your Emotions? An Empirical Investigation of the Influence of AI Explanations and Emotion Regulation on Human Decision-Making Factors
- When Bias Backfires: The Modulatory Role of Counterfactual Explanations on the Adoption of Algorithmic Bias in XAI-Supported Human Decision-Making
- Understanding Disagreement Between Humans and Machines in XAI: Robustness, Fidelity, and Region-Based Explanations in Automatic Neonatal Pain Assessment
- On Combining Embeddings, Ontology and LLM to Retrieve Semantically Similar Quranic Verses and Generate their Explanations
Uncertainty in Explainable AI
- Improving Counterfactual Truthfulness for Molecular Property Prediction through Uncertainty Quantification
- Fast Calibrated Explanations: Efficient and Uncertainty-Aware Explanations for Machine Learning Models
- Explaining Low Perception Model Competency with High-Competency Counterfactuals
- Uncertainty Propagation in XAI: A Comparison of Analytical and Empirical Estimators
£33.24
Springer Explainable Artificial Intelligence
Book Synopsis:
Concept-based Explainable AI
- Global Properties from Local Explanations with Concept Explanation Clusters
- From Colors to Classes: Emergence of Concepts in Vision Transformers
- V-CEM: Bridging Performance and Intervenability in Concept-based Models
- Post-Hoc Concept Disentanglement: From Correlated to Isolated Concept Representations
- Concept Extraction for Time Series with ECLAD-ts
Human-Centered Explainability
- A Nexus of Explainability and Anthropomorphism in AI-Chatbots
- Comparative Explanations: Explanation Guided Decision Making for Human-in-the-Loop Preference Selection
- Generating Rationales Based on Human Explanations for Constrained Optimization
- Algorithmic Knowability: a unified approach to Explanations in the AI Act
- Predicting Satisfaction of Counterfactual Explanations from Human Ratings of Explanatory Qualities
Explainability, Privacy, and Fairness in Trustworthy AI
- Too Sure for Trust. The Paradoxical Effect of Calibrated Confidence in case of Uncalibrated Trust in Hybrid Decision Making
- The Impact of Concept Explanations and Interventions on Human-machine Collaboration
- Leaking LoRA: An Evaluation of Password Leaks and Knowledge Storage in Large Language Models
- Exploring Explainability in Federated Learning: A Comparative Study on Brain Age Prediction
- The Dynamics of Trust in XAI: Assessing Perceived and Demonstrated Trust Across Interaction Modes and Risk Treatments
XAI in Healthcare
- Systematic Benchmarking of Local and Global Explainable AI Methods for Tabular Healthcare Data
- A Combination of Integrated Gradients and SRFAMap for Explaining Neural Networks Trained with High-order Statistical Radiomic Features
- FAIR-MED: Bias Detection and Fairness Evaluation in Healthcare Focused XAI
- Weakly Supervised Pixel-Level Annotation with Visual Interpretability
- Assessing the Value of Explainable Artificial Intelligence for Magnetic Resonance Imaging
£33.24
Springer Explainable Artificial Intelligence
Book Synopsis:
Rule-based XAI Systems & Actionable Explainable AI
- CFIRE: A General Method for Combining Local Explanations
- Which LIME should I trust? Concepts, Challenges, and Solutions
- Explainable Bayesian Optimization
- Bridging the Interpretability Gap in Process Mining: A Comprehensive Approach Combining Explainable Clustering and Generative AI
- Balancing Fairness and Interpretability in Clustering with FairParTree
Features Importance-based XAI
- Antithetic Sampling for Top-k Shapley Identification
- Detecting Concept Drift with SHapley Additive exPlanations for Intelligent Model Retraining in Energy Generation Forecasting
- Counterfactual Shapley Values for Explaining Reinforcement Learning
- Improving the Weighting Strategy in KernelSHAP
- POMELO: Black-Box Feature Attribution with Full-Input, In-Distribution Perturbations
Novel Post-hoc & Ante-hoc XAI Approaches
- Explain to Gain: Introspective Reinforcement Learning for Enhanced Performance
- Extending Decision Predicate Graphs for Comprehensive Explanation of Isolation Forest
- Mathematical Foundation of Interpretable Equivariant Surrogate Models
- Interpretable Link Prediction via Neural-Symbolic Reasoning
- CausalAIME: Leveraging Peter-Clark Algorithms and Inverse Modeling for Unified Global Feature Explanation in Healthcare
XAI for Scientific Discovery
- Interpreting the Structure of Multi-object Representations in Vision Encoders
- Leveraging Influence Functions for Resampling in PINNs
- Safe and Efficient Social Navigation through Explainable Safety Regions Based on Topological Features
- A Biologically Inspired Filter Significance Assessment Method for Model Explanation
£33.24
Springer Explainable Artificial Intelligence
Book Synopsis:
XAI in Computer Vision
- Comparing XAI Explanations and Synthetic Data Augmentation Strategies in Neuroimaging AI
- Superpixel Correlation for Explainable Image Classification
- On Background Bias of Post-Hoc Concept Embeddings in Computer Vision DNNs
- Explaining Vision GNNs: A Semantic and Visual Analysis of Graph-based Image Classification
Counterfactuals in XAI
- HalCECE: A Framework for Explainable Hallucination Detection through Conceptual Counterfactuals in Image Captioning
- Diffusion Counterfactuals for Image Regressors
- Mitigating Text Toxicity with Counterfactual Generation
- Guiding LLMs to Generate High-Fidelity and High-Quality Counterfactual Explanations for Text Classification
- Exploring Ensemble Strategies for Graph Counterfactual Explanations
Explainable Sequential Decision Making
- Leveraging XAI Techniques for Context-Aware Energy Consumption Forecasting
- ConformaSegment: A Conformal Prediction-Based, Uncertainty-Aware, and Model-Agnostic Explainability Framework for Time-Series Forecasting
- FLEXtime: Filterbank Learning to Explain Time Series
- From Text to Space: Mapping Abstract Spatial Models in LLMs during a Grid-World Navigation Task
- Class-Dependent Perturbation Effects in Evaluating Time Series Attributions
Explainable AI in Finance & Legal Frameworks for XAI Technologies
- XAI In Fraud Detection: A Causal Perspective
- Detecting Fraud in Financial Networks: A Semi-Supervised GNN Approach with Granger-Causal Explanations
- Legal Requirements, Trust Issues and Engineering Challenges - a Multi-Disciplinary Case for User-Specific Explainability
- Explainable Fairness in Mortgage Lending
- Cyber Risk Management with Time Varying Artificial Intelligence Models
£33.24
Springer Explainable Artificial Intelligence
Book Synopsis:
Generative AI meets Explainable AI
- Reasoning-Grounded Natural Language Explanations for Language Models
- What's Wrong with Your Synthetic Tabular Data? Using Explainable AI to Evaluate Generative Models
- Explainable Optimization: Leveraging Large Language Models for User-Friendly Explanations
- Large Language Models as Attribution Regularizers for Efficient Model Training
- GraphXAIN: Narratives to Explain Graph Neural Networks
Intrinsically Interpretable Explainable AI
- MSL: Multiclass Scoring Lists for Interpretable Incremental Decision Making
- Interpretable World Model Imaginations as Deep Reinforcement Learning Explanation
- Unsupervised and Interpretable Detection of User Personalities in Online Social Networks
- An Interpretable Data-Driven Approach for Modeling Toxic Users Via Feature Extraction
- Assessing and Quantifying Perceived Trust in Interpretable Clinical Decision Support
Benchmarking and XAI Evaluation Measures
- When can you Trust your Explanations? A Robustness Analysis on Feature Importances
- XAIEV – a Framework for the Evaluation of XAI-Algorithms for Image Classification
- From Input to Insight: Probing the Reasoning of Attention-based MIL Models
- Uncovering the Structure of Explanation Quality with Spectral Analysis
- Consolidating Explanation Stability Metrics
XAI for Representational Alignment
- Reduction of Ocular Artefacts in EEG Signals Based on Interpretation of Variational Autoencoder Latent Space
- Syntax-Guided Metric-Based Class Activation Mapping
- Which Direction to Choose? An Analysis on the Representation Power of Self-Supervised ViTs in Downstream Tasks
- XpertAI: Uncovering Regression Model Strategies for Sub-manifolds
- An XAI-based Analysis of Shortcut Learning in Neural Networks
£33.24
Springer International Publishing AG
Machine Learning and Principles and Practice of Knowledge Discovery in Databases: International Workshops of ECML PKDD 2022, Grenoble, France, September 19–23, 2022, Proceedings, Part II
Book Synopsis:
This volume constitutes the papers of several workshops held in conjunction with the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2022), which took place in Grenoble, France, during September 19–23, 2022. The 73 revised full papers and 6 short papers presented in this book were carefully reviewed and selected from 143 submissions. ECML PKDD 2022 hosted the following workshops:
- Workshop on Data Science for Social Good (SoGood 2022)
- Workshop on New Frontiers in Mining Complex Patterns (NFMCP 2022)
- Workshop on Explainable Knowledge Discovery in Data Mining (XKDD 2022)
- Workshop on Uplift Modeling (UMOD 2022)
- Workshop on IoT, Edge and Mobile for Embedded Machine Learning (ITEM 2022)
- Workshop on Mining Data for Financial Application (MIDAS 2022)
- Workshop on Machine Learning for Cybersecurity (MLCS 2022)
- Workshop on Machine Learning for Buildings Energy Management (MLBEM 2022)
- Workshop on Machine Learning for Pharma and Healthcare Applications (PharML 2022)
- Workshop on Data Analysis in Life Science (DALS 2022)
- Workshop on IoT Streams for Predictive Maintenance (IoT-PdM 2022)
Table of Contents:
Workshop on Mining Data for Financial Application (MIDAS 2022)
- Preface from the workshop organisers
- Multi-Task Learning for Features Extraction in Financial Annual Reports
- What to do with your sentiments in finance
- On the development of a European tracker of societal issues and economic activities using alternative data
- Privacy-preserving machine learning in life insurance risk prediction
- Financial Distress Model Prediction using Machine Learning: A Case Study on Indonesia's Consumers Cyclical Companies
- Improve default prediction in highly unbalanced context
- Towards Explainable Occupational Fraud Detection
- Towards Data-Driven Volatility Modeling with Variational Autoencoders
- Auto-Clustering of Financial Reports Based on Formatting Style and Author's Fingerprint
- InFi-BERT 1.0: Transformer-based language model for Indian Financial Volatility Prediction
Workshop on Machine Learning for Cybersecurity (MLCS 2022)
- Preface from the workshop organisers
- Intrusion Detection using Ensemble Models
- Domain Adaptation with Maximum Margin Criterion with application to network traffic classification
- Evaluation of Detection Limit in Network Dataset Quality Assessment with Permutation Testing
- Towards a General Model for Intrusion Detection: An Exploratory Study
Workshop on Machine Learning for Buildings Energy Management (MLBEM 2022)
- Preface from the workshop organisers
- Conv-NILM-Net, a causal and multi-appliance model for energy source separation
- Domestic Hot Water Forecasting for Individual Housing with Deep Learning
Workshop on Machine Learning for Pharma and Healthcare Applications (PharML 2022)
- Preface from the workshop organisers
- Detecting Drift in Healthcare AI Models based on Data Availability
- Assessing Different Feature Selection Methods applied to a bulk RNA Sequencing Dataset with regard to Biomedical Relevance
- Predicting Drug Treatment for Hospitalized Patients with Heart Failure
- A Workflow for Generating Patient Counterfactuals in Lung Transplant Recipients
- Few-Shot Learning for Identification of COVID-19 Symptoms Using Generative Pre-Trained Transformer Language Models
- A Light-weight Deep Residual Network for Classification of Abnormal Heart Rhythms on Tiny Devices
Workshop on Data Analysis in Life Science (DALS 2022)
- Preface from the workshop organisers
- I-CONVEX: Fast and Accurate de Novo Transcriptome Recovery from Long Reads
- Italian debate on measles vaccination: how Twitter data highlight communities and polarity
Workshop on IoT Streams for Predictive Maintenance (IoT-PdM 2022)
- Preface from the workshop organisers
- Online Anomaly Explanation: A Case Study on Predictive Maintenance
- Fault forecasting using data-driven system modeling: a case study for Metro do Porto data set
- An online data-driven predictive maintenance approach for railway switches
- curr2vib: Modality Embedding Translation for Broken-Rotor Bar Detection
- Incorporating Physics-based Models into Data-Driven Approaches for Air Leak Detection in City Buses
- Towards Geometry-Preserving Domain Adaptation for Fault Identification
- A systematic approach for tracking the evolution of XAI as a field of research
- Frequent Generalized Subgraph Mining via Graph Edit Distances
£62.99