
Title: Toward trustworthy ESG risk assessment through XAI: a state-of-the-art review

Authors: Hossein Habibinejad; Morteza Alaeddini; Paul Reaidy

Addresses: Université Grenoble Alpes, Grenoble-INP, CERAG, 38400, Saint-Martin-d'Hères, France; ICN Business School, Université de Lorraine, CEREFIGE, F-54000, Nancy, France; Université Grenoble Alpes, Grenoble-INP, CERAG, 38400, Saint-Martin-d'Hères, France

Abstract: As artificial intelligence (AI) becomes increasingly central to environmental, social, and governance (ESG) risk assessment, concerns about model opacity and stakeholder trust have come to the forefront. Traditional ESG scoring systems face limitations such as inconsistent data, lack of transparency, and potential bias, issues that are often exacerbated by complex, black-box AI models. This paper examines the role of explainable AI (XAI) and responsible AI (RAI) in enhancing the credibility and ethical alignment of ESG assessments. A comprehensive review of the literature highlights critical research gaps, including the absence of standardised explainability metrics, minimal empirical validation in real-world contexts, and the neglect of cultural variability in trust formation. To address these gaps, the paper introduces a theoretical framework that integrates trust determinants, RAI principles, and XAI techniques. The model also incorporates human-centric moderators and feedback loops to ensure adaptability across stakeholder groups. By linking interpretability, ethical safeguards, and user-centred design, the framework offers a path toward more trustworthy and transparent ESG systems. Ultimately, this study contributes to the development of AI-powered tools that support responsible decision-making in sustainable finance while reinforcing stakeholder confidence and accountability.

Keywords: explainable artificial intelligence; XAI; responsible artificial intelligence; RAI; ESG risk assessment; sustainable finance; investment; trustworthiness; transparency; fairness; accountability; literature review.

DOI: 10.1504/IJGAIB.2026.151803

International Journal of Generative Artificial Intelligence in Business, 2026 Vol.1 No.1/2, pp.65 - 84

Received: 31 Jul 2025
Accepted: 06 Aug 2025

Published online: 20 Feb 2026
