Enhancing the Interpretability and Explainability of AI-Driven Risk Models Using LLM Capabilities

EasyChair Preprint 13368, 18 pages. Date: May 18, 2024

Abstract

Artificial intelligence (AI) and machine learning (ML) models have become increasingly prevalent in risk assessment and management applications across various industries. However, the inherent complexity and "black box" nature of many AI/ML models can pose challenges in terms of interpretability and explainability: the ability to understand how these models arrive at their outputs and decisions. This is a critical concern, as risk-related decisions often require transparency and accountability.
This paper explores how large language model (LLM) capabilities can be leveraged to enhance the interpretability and explainability of AI-driven risk models. LLMs, with their powerful natural language processing and generation abilities, can provide explanations, rationales, and contextual insights that illuminate the underlying logic and reasoning of risk models.

Keyphrases: AI/ML models, explainability, interpretability, large language models (LLMs), risk assessment, risk management
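As an illustration of the approach the abstract describes, the sketch below shows one plausible way an LLM could be connected to a risk model: per-feature contributions from a simple scoring model are packaged into a natural-language prompt for an LLM to expand into a rationale. This is a minimal hypothetical example, not the paper's method; the toy credit-risk data, the feature names, and the `llm_explain` call at the end are all assumptions standing in for a real pipeline and LLM API.

```python
# Minimal sketch: turn a risk model's prediction into an LLM prompt.
# Assumptions: scikit-learn is available; `llm_explain` is a hypothetical
# stand-in for any real LLM API call.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy credit-risk data: columns are [income, debt_ratio, late_payments].
X = np.array([[60.0, 0.2, 0.0],
              [25.0, 0.6, 3.0],
              [45.0, 0.4, 1.0],
              [30.0, 0.7, 4.0]])
y = np.array([0, 1, 0, 1])  # 1 = high risk

model = LogisticRegression().fit(X, y)
features = ["income", "debt_ratio", "late_payments"]

def build_explanation_prompt(x, model, features):
    """Combine a single prediction with per-feature contributions into a
    prompt that asks an LLM for a plain-language rationale."""
    contribs = model.coef_[0] * x          # linear contribution per feature
    prob = model.predict_proba([x])[0, 1]  # predicted probability of high risk
    lines = [f"{name}: value={val}, contribution={c:+.2f}"
             for name, val, c in zip(features, x, contribs)]
    return (f"The risk model predicts a {prob:.0%} probability of high risk.\n"
            "Feature contributions:\n" + "\n".join(lines) + "\n"
            "Explain this assessment in plain language for a loan officer.")

prompt = build_explanation_prompt(X[1], model, features)
print(prompt)
# In a real system the prompt would then be sent to an LLM, e.g.:
# explanation = llm_explain(prompt)  # hypothetical API call
```

The design point is that the LLM never replaces the risk model: the quantitative evidence (prediction and contributions) is computed by the model, and the LLM only translates that evidence into a human-readable explanation.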