Abstract

Interest in Explainable Artificial Intelligence (henceforth XAI) models has grown among researchers and AI programmers in recent years. Indeed, the development of highly interactive technologies that collaborate closely with users has made explainability a necessity, intended to reduce the mistrust and sense of unpredictability that AI can create, especially among non-experts. Moreover, XAI has been recognized as a valuable resource, since it can make intelligent systems more user-friendly and reduce the negative impact of black-box systems. Building on these considerations, the paper discusses the potential dangers of large language models (LLMs) that generate explanations in support of their outputs. While such models may give users the illusion of control over the system's responses, their effects are in fact persuasive rather than explanatory. It is therefore argued that XAI, appropriately regulated, should be a resource that empowers users of AI systems, and that merely apparent explanations should be flagged as such to avoid misleading or manipulative effects.
