dc.contributor.author | Joyce, Dan W. | |
dc.contributor.author | Kormilitzin, Andrey | |
dc.contributor.author | Smith, Katharine A | |
dc.contributor.author | Cipriani, Andrea | |
dc.date.accessioned | 2023-06-22T16:44:10Z | |
dc.date.available | 2023-06-22T16:44:10Z | |
dc.date.issued | 2023-01 | |
dc.identifier.citation | Joyce, D.W., Kormilitzin, A., Smith, K.A. and Cipriani, A. Explainable artificial intelligence for mental health through transparency and interpretability for understandability. npj Digit. Med. 6, 6 (2023). | en |
dc.identifier.uri | https://oxfordhealth-nhs.archive.knowledgearc.net/handle/123456789/1229 | |
dc.description | Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. | en
dc.description.abstract | The literature on artificial intelligence (AI) or machine learning (ML) in mental health and psychiatry lacks consensus on what “explainability” means. In the more general XAI (eXplainable AI) literature, there has been some convergence on explainability meaning model-agnostic techniques that augment a complex model (whose internal mechanics are intractable for human understanding) with a simpler model argued to deliver results that humans can comprehend. Given the differing usage and intended meaning of the term “explainability” in AI and ML, we propose instead to approximate model/algorithm explainability by understandability, defined as a function of transparency and interpretability. These concepts are easier to articulate, to “ground” in our understanding of how algorithms and models operate, and are used more consistently in the literature. We describe the TIFU (Transparency and Interpretability For Understandability) framework and examine how it applies to the landscape of AI/ML in mental health research. We argue that the need for understandability is heightened in psychiatry because the data describing syndromes, outcomes, disorders and signs/symptoms possess probabilistic relationships to each other, as do the tentative aetiologies and the multifactorial social and psychological determinants of disorders. If we develop and deploy AI/ML models, ensuring human understandability of the inputs, processes and outputs of these models is essential to developing trustworthy systems fit for deployment. | en
dc.description.uri | https://doi.org/10.1038/s41746-023-00751-9 | en |
dc.language.iso | en | en |
dc.subject | Artificial Intelligence (AI) | en |
dc.subject | Mental Health (General) | en |
dc.title | Explainable artificial intelligence for mental health through transparency and interpretability for understandability | en |
dc.type | Article | en |