Applications of SAT Solvers in Rigorous Explainable AI
Explainable AI (XAI) is one of the cornerstones of trustworthy AI, a concern driven by the ever-increasing adoption of highly complex machine learning (ML) models in high-stakes applications of artificial intelligence (AI). Most XAI solutions exploit subsymbolic methods of AI. Unfortunately, subsymbolic approaches to XAI have been shown to be untrustworthy, often yielding results that are misleading or even erroneous. In contrast, logic-based XAI offers the strongest guarantees of rigor. This talk provides a glimpse of logic-based XAI and overviews some of the numerous uses of Boolean Satisfiability (SAT) solvers.
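To make the connection between SAT solving and rigorous explanations concrete, the sketch below computes one abductive explanation (AXp) for a toy Boolean classifier: a minimal set of features whose observed values alone entail the prediction. The classifier, the instance, and the deletion-based loop are illustrative assumptions, not the speaker's implementation; in practice the entailment test would be a single UNSAT call to a SAT solver, which is brute-forced here to keep the example self-contained.

```python
from itertools import product

# Hypothetical toy classifier: f(x1, x2, x3) = x1 AND (x2 OR x3)
def f(x):
    return x[0] and (x[1] or x[2])

def entails(fixed, v, n=3):
    # Does fixing the features in `fixed` to their values in instance `v`
    # force f to output f(v) on every completion of the free features?
    # (A SAT solver answers this with one unsatisfiability check; here we
    # enumerate all completions, which is fine for 3 features.)
    target = f(v)
    for assign in product([0, 1], repeat=n):
        x = [v[i] if i in fixed else assign[i] for i in range(n)]
        if f(x) != target:
            return False
    return True

def axp(v, n=3):
    # Deletion-based computation of one abductive explanation:
    # start from all features and drop any feature not needed for entailment.
    s = set(range(n))
    for i in range(n):
        if entails(s - {i}, v):
            s.remove(i)
    return sorted(s)

v = (1, 1, 0)          # instance classified as 1
print(axp(v))          # → [0, 1]: x1 = 1 and x2 = 1 alone entail f(x) = 1
```

The subset-minimality of the result is what distinguishes this logic-based explanation from heuristic feature attributions: dropping any feature in the answer breaks the entailment.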
References
João Marques-Silva: Logic-Based Explainability in Machine Learning. RW 2022: 24-104.
Adnan Darwiche: Logic for Explainable AI. LICS 2023: 1-11.
João Marques-Silva, Alexey Ignatiev: No Silver Bullet: Interpretable ML Models Must Be Explained. Front. Artif. Intell. 6:1128212 (2023). doi:10.3389/frai.2023.1128212