City Research Online

“It can never truly be human”: A mixed methods study of mental health clinicians’ perceptions and experience of Artificial Intelligence

Price, S. G. (2025). “It can never truly be human”: A mixed methods study of mental health clinicians’ perceptions and experience of Artificial Intelligence. (Unpublished Doctoral thesis, City St George's, University of London)

Abstract

Background: A plethora of artificial intelligence (AI) tools have been developed within mental health, yet a divide remains between availability and clinical uptake. It is, therefore, necessary to investigate clinicians’ perceptions and experiences.

Methods: We explored UK-based clinicians’ views of five AI use-cases (automated therapists, risk monitoring, virtual patients, AI assessments, and AI feedback on clinician performance). A mixed methods approach integrated quantitative analysis and hermeneutic phenomenology. Eight qualitative interviews were analysed to explore clinicians’ interpretations and meaning-making. Eighty-six clinicians completed an online survey assessing behavioural intention, attitudes, identity, social influence, perceived behavioural control, therapist self-efficacy, and general appreciation of/aversion to AI. Repeated-measures ANOVA examined whether use intention varied across AI use-cases, and regression analysis identified predictors of intention for each use-case. An independent samples t test then examined whether intention to use differed by clinicians’ therapeutic approach (behavioural vs relational).

Results: Qualitative analysis identified six themes. Participants viewed AI as lacking compared to what human clinicians can offer. They felt “torn” between excitement and worry, and navigated ethical dilemmas in which safety was paramount. Participants emphasised systemic and structural considerations and feared AI might lead to the commodification and devaluing of human-to-human connection. Autonomy, responsibility, and professional identity were salient considerations. Quantitative analysis showed that identity and attitudes were the most consistent predictors of intention, with variations across use-cases. Intention to use varied across use-cases. Behavioural clinicians also reported being more open than relational clinicians to using AI for risk monitoring, assessment, and virtual patient role-play.

Conclusions: The findings illustrate that perceptions of AI are nuanced and contextual, emphasising the importance of differentiating between specific AI tools rather than understanding clinicians’ perceptions as relating to AI as a unitary entity. Identity and the valuing of the human-to-human relationship were crucial considerations. This has implications for the design and implementation of AI tools.

Publication Type: Thesis (Doctoral)
Subjects: B Philosophy. Psychology. Religion > BF Psychology
Q Science > QA Mathematics > QA76 Computer software
R Medicine > RC Internal medicine > RC0321 Neuroscience. Biological psychiatry. Neuropsychiatry
Departments: School of Health & Medical Sciences > Department of Psychology & Neuroscience
School of Health & Medical Sciences > School of Health & Medical Sciences Doctoral Theses
Doctoral Theses
Text - Accepted Version
This document is not freely accessible until 28 February 2029 due to copyright restrictions.

