Bridging the Gap: Exploring Explainable AI for Interpretable Machine Learning Models in Software Defect Detection

EasyChair Preprint 13181, 11 pages. Date: May 6, 2024

Abstract

In recent years, the adoption of machine learning (ML) in software defect detection has shown promising results, revolutionizing the way defects are identified and rectified in software development processes. However, the opacity of complex ML models presents a significant challenge, hindering their acceptance in critical domains where interpretability and trust are paramount. Explainable AI (XAI) has emerged as a crucial research area aimed at addressing this challenge by providing insights into the decision-making processes of ML models.
This paper delves into the integration of XAI techniques into interpretable ML models for software defect detection. By elucidating the inner workings of these models, XAI not only enhances their transparency but also enables stakeholders to understand, validate, and refine the detection process. We survey various XAI methods, including feature importance analysis, local and global interpretability techniques, and model-agnostic approaches, exploring their applicability and effectiveness in the context of software defect detection.

Keyphrases: adoption, machine learning
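As a concrete illustration of one of the model-agnostic approaches named above, the sketch below applies permutation feature importance to a toy defect classifier. This is a minimal example under stated assumptions, not the paper's method: the feature names, the synthetic data, the labeling rule, and the choice of a random forest are all hypothetical. In practice, local explainers such as LIME or SHAP would complement this kind of global feature-importance view.

```python
# Minimal sketch: model-agnostic global feature importance for a toy
# defect classifier. All feature names and data below are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for common static code metrics (illustrative only).
features = ["lines_of_code", "cyclomatic_complexity", "churn", "num_authors"]
X = rng.normal(size=(600, len(features)))
# Toy labeling rule: high complexity plus high churn implies "defective".
y = ((X[:, 1] + X[:, 2]) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Permutation importance shuffles one feature at a time and measures the
# drop in held-out score, treating the model as a black box.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:25s} {score:+.3f}")
```

Because permutation importance only queries the fitted model's predictions, the same procedure works unchanged for any classifier, which is what makes it model-agnostic.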