Article

Human-Centred Perspectives on Artificial Intelligence in the Care of Older Adults: A Q Methodology Study of Caregivers’ Perceptions

1 Department of Counselling and Coaching, Dongguk University-Seoul, 30, Pildong-ro 1 gil, Jung-gu, Seoul 04620, Republic of Korea
2 Dharma College, Dongguk University-Seoul, 30, Pildong-ro 1 gil, Jung-gu, Seoul 04620, Republic of Korea
3 School of Interdisciplinary Studies, Dongguk University-Seoul, 30, Pildong-ro 1 gil, Jung-gu, Seoul 04620, Republic of Korea
* Authors to whom correspondence should be addressed.
Behav. Sci. 2025, 15(11), 1541; https://doi.org/10.3390/bs15111541
Submission received: 7 September 2025 / Revised: 9 October 2025 / Accepted: 10 November 2025 / Published: 12 November 2025
(This article belongs to the Special Issue Advanced Studies in Human-Centred AI)

Abstract

This study used Q methodology to explore and categorise caregivers’ subjective perceptions of artificial intelligence (AI)-powered ‘virtual human’ (AVH) devices in caring for older adults. We derived 123 initial statements from the literature and focus groups and narrowed them to a final Q sample of 34 statements. Seventeen caregivers, nurses, and social workers completed the Q-sorting procedure. Using principal component analysis and varimax rotation in Ken-Q, we identified three perception types: Active Acceptors, who emphasise the devices’ practical utility in patient communication; Improvement Seekers, who conditionally accept the technology while seeking greater accuracy and effectiveness; and Emotional Support Seekers, who view the device as a tool for emotional relief and psychological support. These findings suggest that technology acceptance in caregiving extends beyond functional utility; it also involves trust, affective experience, and interpersonal interaction. This study integrates multiple frameworks, including the Technology Acceptance Model (TAM), the Unified Theory of Acceptance and Use of Technology (UTAUT), Science and Technology Studies (STS), and Human–Machine Communication (HMC) theory, to provide a multifaceted understanding of caregivers’ acceptance of AI technology. The results offer valuable implications for designing user-centred AI care devices with enhanced emotional and communicative functions.

1. Introduction

In April 2025, South Korea became a super-aged society, with individuals aged 65 and older accounting for approximately 10.46 million people, or 20.45% of the total registered population of 51.17 million (Ministry of the Interior and Safety, 2025). This demographic shift extends beyond numbers, creating structural challenges for welfare systems, the economy, and society, and demanding a transition to a more sustainable social infrastructure.
As the older population continues to grow, the issue of caregiving becomes increasingly critical (OECD, 2023). In the Korean context, older adult care primarily occurs in long-term care hospitals. These facilities are specialised medical institutions that serve older adult patients requiring continuous medical management and rehabilitation following acute care. Traditionally, the responsibility of caring for aging parents rested on the eldest son, reflecting strong Confucian family values. However, as family structures have modernised and chronic illnesses among older adults have increased, this tradition has gradually shifted toward institutional care, with long-term care hospitals becoming the primary setting for supporting older adults who are physically ill (Na, 2021). In these hospitals, physicians, nurses, and caregivers collaborate within an integrated medical–care model that provides medical services and daily living support.
This study broadly defines older adult care as the provision of physical, emotional, and social support for older adults across diverse contexts, including home, community, and institutional settings, rather than restricting it to nursing homes. Within this context, the study uses the term ‘caregiver’ in both a broad and a narrow sense. Broadly, it encompasses all professional care providers, such as nurses, nursing assistants, and social workers, who deliver daily care and emotional support to older adults in institutional and community-based environments. Narrowly, it refers to certified long-term care workers or nursing assistants who hold national qualifications in South Korea. Accordingly, this study focuses on how these professional caregivers perceive and experience artificial intelligence (AI)-powered ‘virtual human’ (AVH) systems within the context of human-centred older adult care.
Older adults often experience chronic and complex conditions, requiring long-term care that family members often provide. However, this arrangement imposes significant physical and psychological burdens on caregivers. The increasing prevalence of chronic and severe illnesses can lead to profound emotional distress for caregivers, sometimes culminating in tragic incidents such as caregiver-perpetrated violence (Choi, 2022). In response, society has begun to recognise care for older adults as a collective responsibility rather than a private obligation.
This shift, known as the ‘socialisation of elderly care’, transfers caregiving responsibilities from individuals and families to public systems (Lee, 2018). As a result, long-term care facilities and professional caregivers now play critical roles as part of the welfare infrastructure, providing medical assistance and emotional support. However, despite this evolution, caregivers continue to face excessive physical labour and emotional strain. Chronic understaffing and heavy workloads further intensify burnout and reduce the quality of care (Y. Kim & Yeo, 2018).
In response to these challenges, AI-driven care technologies have emerged as promising solutions (Lee et al., 2023). Voice-enabled systems integrated with generative pre-trained transformer (GPT)-based language models are gaining attention for their potential to improve accessibility and facilitate natural human–computer interaction (HCI) (Cao et al., 2024). These systems can engage in flexible, open-ended conversations, offering users a human-like interaction experience (Edwards & Edwards, 2017). For instance, voice-based AI systems convert speech to text, process it through a language model, and generate synthesised voice responses. This process enables users to interact with AI in a manner that closely resembles human dialogue, thereby expanding its application across education, counselling, and healthcare (Nadarzynski et al., 2024).
In healthcare and caregiving, practitioners increasingly view AI technologies as tools that enhance operational efficiency and improve the quality of patient care. M. Kim et al. (2022) suggested that AI-based applications can help caregivers manage tasks and improve work environments. The World Health Organization (WHO, 2024) projected that AI could impact up to 80% of healthcare-related occupations. Sabra et al. (2023) found that nurses generally welcomed AI, though acceptance varied by age, certification, and social status. Bonacaro et al. (2024) argued that integrating AI and big data into nursing care and education could improve diagnostic efficiency and enable personalised learning.
Additionally, Zaboli et al. (2025) reported that AI can aid in emergency decision-making in clinical settings. A systematic review by Borna et al. (2024) found that AI may also alleviate the workload and emotional stress experienced by informal caregivers. Furthermore, AI human devices with GPT-based voice interfaces have shown promise in providing indirect care, such as cognitive assessments and emotional support (Igarashi et al., 2024; Bonacaro et al., 2024).
Despite these advances, real-world applications of AI in care settings remain limited. Ruggiano et al. (2021) critiqued the limited effectiveness of chatbots developed without empirical validation or user-centred design. Recent co-design research has highlighted that technology acceptance in care settings extends beyond functionality to include user-centred design that addresses stakeholders’ real-world needs (Timon et al., 2024). While most studies focus on general attitudes or functional outcomes, few have explored how professional caregivers experience and interpret AI human devices. This gap underscores the need to move beyond quantitative metrics of technology adoption and examine caregivers’ subjective perspectives in depth.
This study addresses that gap by using Q methodology to identify and analyse caregivers’ subjective perceptions of AI human devices. It builds on an integrated framework that combines four theoretical perspectives: the Technology Acceptance Model (TAM), the Unified Theory of Acceptance and Use of Technology (UTAUT), Science and Technology Studies (STS), and Human–Machine Communication (HMC) theory.
TAM helps predict user acceptance by emphasising perceived usefulness and ease of use (Davis, 1986), and researchers have broadly applied it in healthcare (Holden & Karsh, 2010). For instance, Haque and Rubya (2023) analysed user reviews of mental health chatbots and found that users appreciated the human-like interactions but often felt frustrated by response errors and perceived misinterpretations of their personalities.
UTAUT extends TAM by incorporating four key constructs: performance expectancy, effort expectancy, social influence, and facilitating conditions (Venkatesh et al., 2003). Yu and Chen (2024) found that among adults over 60, facilitating conditions had the greatest impact on actual use, particularly for those with less technological experience, who relied more on social and effort-related cues.
STS frames technology as a socially constructed entity, shaped by user interpretation (Pinch & Bijker, 1984). It emphasises that technology stems from social experience rather than purely cognitive factors. Ruggiano et al. (2021) noted that chatbot designs that ignore user experience often fall short in effectiveness.
Finally, HMC theory conceptualises AI as an active communicative agent rather than a passive tool. It explores how users form trust, emotional connections, and social meaning in interactions with AI (Guzman & Lewis, 2020). Edwards and Edwards (2017) defined HMC as communication with digital conversational partners, while Lombard and Xu (2021) found that users often anthropomorphise AI agents. Croes and Antheunis (2021) noted that although users initially form social connections with AI, building long-term relationships remains limited.
Through Q methodology, this study categorised caregivers’ subjective perceptions of AI human devices, offering in-depth insights into the emotional and sociotechnical aspects of technology adoption in caregiving. The findings can guide the design of GPT-based voice systems and support the development of emotionally responsive, user-centred AI care technologies.

2. Study Method

To explore caregivers’ subjective perceptions of AI human devices in depth, this study adopted Q methodology, a structured approach for capturing subjectivity, grounded in the belief that individuals construct meaning in unique and interpretable ways (Hylton et al., 2018). At its core, Q sorting allows participants to arrange a set of statements according to their self-referential viewpoints, thereby revealing their interpretive frameworks (Brown, 1993). This process respects individual subjectivity and provides a systematic structure for identifying shared patterns of thought among participants (Ramlo, 2015; Van Exel & de Graaf, 2005).
Q methodology effectively classifies subjective viewpoints, making it well-suited to capture caregivers’ experiential perceptions of AI human voice-supported systems for the care of older adults. By focusing on how caregivers interpret and engage with AI technologies based on their lived experiences, the research seeks to uncover individual meaning-making and collective perception structures.

2.1. AI Virtual Human

In this study, we examined the AI Virtual Human (AVH) system, which provides distinct functionalities tailored to older adult users and professional caregivers.
Developers designed the AVH for older adults to include features such as companion dialogue (e.g., casual conversation), interactive support tools (e.g., dementia prevention quizzes, read-aloud books, memory aids, sleep and walking reminders), information on chronic disease management, simple dietary recommendations, and guided cognitive and physical activity videos. These features aim to support daily functioning, prevent cognitive decline, and promote independent living among older adults.
Conversely, the caregiver-oriented version of the AVH, accessible via mobile phones and tablets, addresses the practical needs of professional caregivers. Developers structured it to include specialised content distinct from that offered to senior users, with core functions such as AI-guided conversational support, chronic disease management guidelines, dietary planning, and multimedia tools for cognitive and physical exercises. These features enabled caregivers to make on-site decisions and access task-relevant information.
Notably, the caregiver version excluded personal health monitoring and wellness tracking for caregivers and did not integrate with the senior’s health data. Instead, it functioned as a practical tool for managing patient care, reflecting a purpose-built, streamlined design optimised for caregiving tasks.
Furthermore, some caregivers described the device as an informational resource and a form of psychological relief. They viewed it as a ‘socioemotional buffer’ that helped them express emotions and cope with emotional fatigue. This finding suggests that the AVH may contribute not only to task efficiency but also to the emotional well-being of caregivers. Figure 1 illustrates the system’s structure and service components analysed in this study.

2.2. Research Procedure

The research procedure consisted of five stages: (1) constructing the Q concourse, (2) selecting the Q sample, (3) selecting the P sample (participants), (4) conducting the Q-sorting process, and (5) analysing the data. Figure 2 provides an overview of the procedure.

2.2.1. Organisation of Q Population (Q Concourse)

The concourse represents the body of opinions, subjectivities, and subjective consciousness commonly shared within a particular culture or society. It encompasses the full range of statements participants could use to express their views on a given topic (H. Kim, 1992). We generated the concourse from the literature and a focus group interview (FGI).
First, we reviewed the literature to generate initial statements for the Q concourse. This review focused on domestic and international journal articles as well as master’s and doctoral theses that addressed AI human devices in older adult care. This process yielded 42 preliminary statements.
Next, we used purposive sampling to recruit six nurses, caregivers, and social workers from long-term care hospitals who had prior experience with AI human devices. Participants first completed a pre-interview questionnaire assessing demographic characteristics (age, gender, position), career choice motivation, challenges in caregiving, and key patient relations, followed by the FGI. The FGI participants included three nurses, two caregivers, and one social worker.
We provided each participant with access to the AVH, including detailed usage instructions. We asked participants to use the AVH in their actual caregiving environments for at least 15 days to gain firsthand experience.
We obtained informed consent before the FGI and explained the study’s objectives and procedures. Semi-structured interview questions included: “What expectations did you have before interacting with the AVH, and how did your experience differ?”, “How would you describe your overall experience using the program?”, “How did the program affect your caregiving work?”, “What functions do you think should be added or improved to better support caregivers and older adults?”, and “What roles do you think AI-based caregiving programs can play in the future of caring for older adults?”
We continued collecting statements until reaching saturation, finalising 123 statements: 42 derived from the literature review and 81 from the FGI. Table 1 presents the characteristics of the FGI participants.

2.2.2. Selection of the Q Sample

The Q sample refers to a representative subset of statements selected from the broader concourse, designed to reflect the full range of opinions related to the research topic (Brown, 1980). According to Watts and Stenner (2005), a well-constructed Q sample should capture a balanced representation of the discursive landscape, allowing participants’ interpretations to emerge naturally.
We reviewed the 123 statements from the literature review and FGI and categorised them into nine thematic domains: social aspects, practical aspects, healthcare service perspectives, device-related limitations, technological reliability, emotional dimensions, user satisfaction, ethical and value-related concerns, and perceived needs for specific functions. We also classified each statement by tone—positive, negative, or neutral.
We revised or removed statements that were redundant, unclear, or ambiguous. For example, to improve clarity and specificity, we refined the initial statement ‘It would be good if there were functions that protect patient privacy while also providing convenience’, to ‘The AVH should provide personalised care through individual patient data’. Additionally, we removed unclear statements such as ‘Waiting for a slow internet connection feels even more frustrating’, due to poor sentence structure and a lack of direct relevance to the research focus. We refined the remaining pool based on clarity, specificity, neutrality, and representativeness. After expert review by two Q methodology specialists, we selected a final set of 34 statements for the Q sample.

2.2.3. Composition of the P Sample

The P sample consists of participants who sort the Q-sample statements based on their subjective viewpoints (H. Kim, 2007). Since Q methodology does not aim for demographic generalisability, it does not require statistically representative sampling (Hylton et al., 2018). Even a single participant can suffice when the goal is to explore internal cognitive structures (Brown, 1980). Rather than measuring differences between individuals, Q methodology focuses on how individuals interpret and assign meaning to an issue. Therefore, participants must closely and meaningfully engage with the research topic (H. Kim, 2007).
For this study, we used purposive sampling to select 17 participants, including caregivers, nurses, and social workers, who had at least 15 days of experience using the AVH in real care settings. We prioritised participants with direct relevance to the phenomenon under investigation.
The P sample included individuals with diverse demographic characteristics such as age, gender, occupation, career motivations, caregiving challenges, and the attributes of their care recipients. We considered these background variables as key factors influencing subjective perspectives. Table 2 shows the characteristics of participants categorised according to the three factors identified through subsequent Q-factor analysis.
We also included five of the six FGI participants in the P sample (P1, P2, P5, P8, P16). In Q methodology, including the same participants in the initial in-depth interview and Q-sorting is a methodologically sound approach. This approach enables a deeper understanding of participants’ subjective experiences and perceptions, thereby increasing data consistency and depth (de Guzman et al., 2012). Additionally, having the same participants evaluate the statements derived from the initial interview in Q-sorting ensures data continuity.
The Ethics Committee of the authors’ Institutional Review Board approved this study on 28 August 2024 (approval No. DUIRB2024-08-02).

2.2.4. Q Sorting

Q sorting is the process by which participants rank the Q sample statements based on their subjective judgments. This procedure reveals each individual’s internal perception structure and values (Brown, 1980).
Participants conducted Q-sorting in person, with researchers directly explaining the procedure and method to each participant and remaining present throughout the entire process. Participants completed the Q-sort individually using a standardised distribution grid (see Figure 3) provided by the researchers. The grid followed a forced-choice distribution, a quasi-normal distribution ranging from −4 to +4, where participants sorted statements based on their degree of agreement or disagreement with each statement. This method allowed us to uncover each participant’s unique structure of subjectivity (Brown, 1993). The forced distribution also ensured that participants could distinguish between statements meaningfully based on their relative importance.
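The actual grid used in this study appears in Figure 3 and is not reproduced here. Purely as an illustration, a typical quasi-normal forced distribution for 34 statements across nine columns (−4 to +4) can be sketched as follows; the column frequencies below are a hypothetical example of such a grid, not the study’s own layout:

```python
# Hypothetical quasi-normal forced-distribution grid for 34 statements
# across nine columns (-4 to +4). The actual frequencies used in this
# study appear in Figure 3; the shape below is one common symmetric
# layout whose slots sum to 34.
grid = {
    -4: 2, -3: 3, -2: 4, -1: 5, 0: 6,
     1: 5,  2: 4,  3: 3,  4: 2,
}

# Every statement occupies exactly one slot, and the shape is
# symmetric around the neutral (0) column.
assert sum(grid.values()) == 34
assert all(grid[col] == grid[-col] for col in grid)
```

A grid of this shape forces prioritisation: only a few statements can occupy the extreme (+4/−4) positions, which is what makes the post-sorting explanations of those extreme choices so informative.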
Following the completion of Q-sorting, participants wrote why they chose their highest-ranked (+4) and lowest-ranked (−4) statements. Researchers then conducted individual post-sorting reviews with each participant, examining their written explanations and asking clarifying questions where necessary. These qualitative data served as key evidence for interpreting each factor type.

2.2.5. Data Analysis

We analysed data using Ken-Q analysis software (version 1.3.1), which supports the interpretive construction of subjectivity types based on similarities and differences in Q-sort patterns. Although classical Q methodology traditionally employs centroid extraction and manual rotation (Watts & Stenner, 2005), this study adopted principal component analysis (PCA) with varimax rotation as implemented in the Ken-Q software.
PCA offers computational stability, and researchers widely use it in contemporary Q studies, producing results comparable to those derived from centroid extraction (Brown, 1993; Van Exel & de Graaf, 2005; Zabala, 2014; Ramlo, 2015). The combination of PCA and varimax rotation clearly distinguished participants’ perceptual patterns in the data, providing a statistically coherent structure that facilitated the theoretical interpretation and classification of caregivers’ subjective viewpoints. Recent Q methodology studies across health, psychology, and the social sciences have widely used this combination due to its reproducibility and interpretive clarity (Churruca et al., 2021).
We retained factors in accordance with standard practices in Q methodology. Specifically, we selected factors that (1) had eigenvalues greater than 1.0, (2) included at least two significantly loading Q-sorts (p < 0.01), and (3) demonstrated theoretical interpretability. Among factors exceeding the eigenvalue threshold, we prioritised those with higher eigenvalues and clearer loading structures, provided they met the interpretability criterion. These criteria are consistent with established methodological guidelines (Brown, 1978; Van Exel & de Graaf, 2005; Akhtar-Danesh, 2017; Damio, 2018). We treated factor loadings of ±0.43 or higher as statistically significant (p < 0.01) and calculated this threshold using the formula 2.58 × (1/√N), where N is the number of Q statements (N = 34 in this study) (Brown, 1980; Van Exel & de Graaf, 2005). In Q methodology, factor loadings indicate the degree to which each Q-sort contributes to or represents a given factor, rather than merely reflecting statistical correlations in the conventional sense (Brown, 1993).
We interpreted each factor by examining standardised z-scores and Q-sort values to identify distinguishing statements that significantly differentiated one factor from the others, as well as characterising statements with the highest or lowest z-scores within each factor that represented participants’ shared core viewpoints (Brown, 1980; Watts & Stenner, 2005). These statistical measures enabled us to identify consensus and divergent perspectives across the different types. We also incorporated participants’ qualitative explanations of the statements they ranked highest (those with which they most agreed) and lowest (those with which they most disagreed). These narratives served as key evidence for interpreting the subjective orientation of each type (Van Exel & de Graaf, 2005).
In Q methodology, a ‘factor’ represents a group of participants who sorted the statements in similar ways. Each factor indicates a distinct subjective viewpoint or perception type. In this study, factor analysis yielded three factors (Type 1, Type 2, and Type 3), with each factor representing a distinct perception type regarding the AVH.

3. Study Results

3.1. Q-Factor Analysis Results

Table 3 shows that all extracted factors had eigenvalues greater than 1.0. Specifically, Factor 1 had an eigenvalue of 4.01352, Factor 2 of 2.122581, and Factor 3 of 1.904245. These three factors collectively explained approximately 47% of the cumulative variance.
Factor analysis revealed three distinct perception types. A higher factor weight indicated that a participant was more representative of that type. Q-sorts with the highest factor weights defined Type 1, which explained 24% of the total variance; Type 2 and Type 3 explained 12% and 11% of the variance, respectively. Together, the three types accounted for 47% of the total variance, indicating an adequate and interpretable factor solution (Brown, 1980; Van Exel & de Graaf, 2005).
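These variance figures follow from the standard Q-methodology computation, in which a factor’s explained variance is its eigenvalue divided by the number of Q-sorts (here, the 17 participants). A minimal check, using the eigenvalues reported in Table 3:

```python
# Eigenvalues for the three retained factors (Table 3).
eigenvalues = [4.01352, 2.122581, 1.904245]
n_sorts = 17  # size of the P sample

# Explained variance per factor = eigenvalue / number of Q-sorts.
variance_pct = [round(ev / n_sorts * 100) for ev in eigenvalues]

print(variance_pct)       # [24, 12, 11], matching the reported figures
print(sum(variance_pct))  # 47, the cumulative variance explained
```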
Table 4 displays the correlation coefficients among the factors. The correlation between Factor 1 and Factor 2 was −0.0831, between Factor 1 and Factor 3 was 0.0551, and between Factor 2 and Factor 3 was 0.0564.
Table 5 presents the z-scores and Q-sort values for each type. An asterisk (*) marks statements that serve as distinguishing statements, indicating statistically significant difference at the p < 0.01 level.

3.2. Perception Type Characteristics

3.2.1. Type 1: Active Acceptors

Type 1, the ‘Active Acceptors’, demonstrated a high level of trust in AVH devices and a strong willingness to integrate them actively into patient care. Participants in this group adopted a proactive approach to incorporating AI technologies into their caregiving practices.
They most strongly agreed with the statements: ‘Sensor integration for real-time patient monitoring would be useful’ (Q25, Z = 1.98) and ‘Entertainment features help reduce stress when caring for patients’ (Q12*, Z = 1.74). In contrast, they most strongly disagreed with ‘I cannot trust the AVH system’ (Q8, Z = −1.57) and ‘The mechanical nature of the device makes it feel unfamiliar and impersonal’ (Q27*, Z = −1.85).
Beyond practical functions, Type 1 caregivers also distinctively valued the emotional dimension of AI interaction. They agreed that ‘Interaction with the AVH lifts my mood’ (Q9*, Z = 1.11), appreciated the ability to confide in the device about matters they found difficult to discuss with others: ‘I can say things to the AVH that I can’t easily say to others’ (Q16*, Z = 1.44), and found comfort in the empathetic responses from the AVH when sharing caregiving burdens (Q13*, Z = 1.41).
P11, classified under this type, shared the following experience: ‘When using the AI device, we often played trot music with the older adults. It really helped relieve stress for us. I was surprised when the patients clapped and even sang along’. P11 also remarked, ‘Care becomes more effective when the patient’s condition is easy to monitor. It would be great if the device had a sensor-based real-time monitoring feature. That way, I could respond promptly to changes in the patient’s condition’.
These narratives reflect the open and flexible attitudes of Type 1 caregivers towards AVH devices. They view such technologies not merely as tools but as collaborative partners that can support their caregiving responsibilities. Their responses reveal a practical commitment to enhancing care quality through AI. Based on these features, we labelled this type ‘Active Acceptors’.

3.2.2. Type 2: Improvement Seekers

The key characteristic of the ‘Improvement Seekers’ (Type 2) is a low level of trust in AVH devices and the belief that these systems require significant improvement before they can function effectively in real-world care settings. Participants in this group expressed cautious and critical views about the current capabilities of AVH systems.
This type most strongly agreed with the statements, ‘It takes too long for the AVH to recognise speech, which is inconvenient’ (Q15, Z = 1.83) and ‘I wouldn’t use the system without financial support’ (Q10*, Z = 1.69). Conversely, they most strongly disagreed with ‘The AVH helps me respond effectively to changes in the patient’s condition’ (Q6, Z = −1.89) and ‘Interacting with the AVH feels like receiving counselling’ (Q3*, Z = −2.03).
Participant P7, categorised under this type, made the following comment:
It’s frustrating how long it takes for the device to recognise speech, and overall, I don’t think it helps much in daily caregiving. It takes time to activate and process commands properly, and the device lacks sufficient connectivity with medical staff to be useful in responding to changes in a patient’s condition.
Similarly, P1 commented, ‘If the device integrated a patient monitoring sensor, it would be easier to manage medication times or check meals on schedule. But as it stands, the system is limited and would need significantly more data accumulation to be useful.’
Unlike Type 1 participants, Type 2 participants distinctly rejected the emotional and practical values of AI interaction. They strongly disagreed that they could confide in the device about personal matters: ‘I can say things to the AVH that I can’t easily say to others’ (Q16*, Z = −1.66). They also disagreed that the device offered convenient on-the-spot access to caregiving knowledge: ‘It’s convenient to access knowledge needed on the spot while caregiving’ (Q22*, Z = −0.79). Despite their critical stance, they did not attribute their concerns to the mechanical nature of the device (Q27*, Z = −1.02).
These responses reflect a critical and discerning stance towards AVH devices. Caregivers in this group questioned the practical utility of the technology in its current form and called for substantial improvements in functionality, responsiveness, and real-time adaptability. They prioritised operational effectiveness and situational fit over adopting new technologies for their novelty. Based on these characteristics, we labelled this type the ‘Improvement Seekers’.

3.2.3. Type 3: Emotional Support Seekers

The defining feature of Type 3, the ‘Emotional Support Seekers’, is the emotional satisfaction they derive from using AI devices as a form of relief and self-expression. Participants in this group perceived the AVH not only as a caregiving tool but also as a source of personal comfort.
The statements with which this type most strongly agreed were: ‘I can say things to the AVH that I can’t easily say to others’ (Q16*, Z = 2.28) and ‘Interacting with the AVH feels like receiving counselling’ (Q3*, Z = 1.57). In contrast, they most strongly disagreed with ‘I cannot trust the AVH’ (Q8, Z = −1.83) and ‘The AVH helps me respond effectively to changes in the patient’s condition’ (Q6, Z = −2.00).
P3, in this group, stated:
The AI responses were often empathetic and closely matched how I felt, which made me feel understood and emotionally uplifted. I was able to share personal or deeply held thoughts, which brought a sense of relief. While talking to the AI helped shift my mood during moments of sadness, I also felt the need to approach it carefully so as not to become emotionally dependent on it. Still, it helped me recover from temporary emotional lows.
Similarly, P9 shared:
When I talked to the AI about difficult experiences, it responded empathetically and answered my questions in detail, which built trust. But at the same time, I still felt it was ‘just a machine’, so the emotional connection was limited and didn’t fully relieve my stress.
Type 3 participants distinctively prioritised emotional connection over functional features. Unlike Type 1, they showed neutral attitudes towards practical enhancements such as ‘Sensor integration for real-time patient monitoring would be useful’ (Q25*, Z = −0.04). They also acknowledged the mechanical limitations of AI, agreeing with the statement, ‘The mechanical nature of the device makes it feel unfamiliar and impersonal’ (Q27*, Z = 0.86), and even preferred direct internet searches for efficiency: ‘Searching the internet directly is faster and easier’ (Q26*, Z = 1.01). Despite recognising these practical and technological constraints, they continued to value the emotional support the device provided.
Caregivers in this group sought emotional support from the AVH to manage the psychological demands of caregiving. They perceived the device as a potential source of emotional companionship and a tool for self-regulation. Their responses suggest that AI systems could foster psychological resilience. Based on these characteristics, we labelled this group the ‘Emotional Support Seekers’.

4. Discussion

This study explored how caregivers for older adult patients with chronic illnesses subjectively perceived the AI Virtual Human (AVH) application. The analysis identified three distinct perception types: Active Acceptors (Type 1), Improvement Seekers (Type 2), and Emotional Support Seekers (Type 3).
Participants in the Active Acceptors group demonstrated high trust in AVH caregiving devices and a strong intention to use the technology proactively for patient-centred care. Their acceptance aligns with the Technology Acceptance Model (TAM), particularly the constructs of perceived usefulness and perceived ease of use. Consistent with Holden and Karsh’s (2010) argument that these two variables significantly influence acceptance of technology in clinical settings, participants in this type expressed strong motivation to adopt AVH devices. They valued the technology primarily for its utility in monitoring patient conditions and relieving caregiving stress. Notably, they did not define ‘usefulness’ in terms of personal convenience but through the lens of their caregiving responsibilities. Many expressed this view through statements such as ‘It’s good that I can use the device for my patients’, while rarely referencing personal benefits. This pattern suggests that they framed usefulness in terms of enhancing caregiving performance and patient outcomes.
Active Acceptors appear to view the AVH device through the lens of technological determinism. They believe that advancements in technology inherently improve the quality of caregiving work. Their proactive and optimistic stance reflects confidence in technology’s transformative potential. Ultimately, Active Acceptors combine high trust in AVH devices and patient-centred values, driven by the core constructs of TAM and a belief in technology’s ability to elevate care.
This study extends TAM by reframing perceived usefulness as a socially embedded construct grounded in caregiving ethics rather than individual convenience. In doing so, it highlights how caregivers interpret usefulness through relational and moral dimensions, expanding TAM beyond cognitive and efficiency-based parameters to include ethical and affective aspects of technology adoption in healthcare.
Type 2, the Improvement Seekers, expressed low trust in the AVH device while maintaining a primary focus on patient care. They criticised the device’s current performance and usability, which resonates with the Unified Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh et al., 2003). Specifically, these participants reported low performance expectancy and high effort expectancy, two UTAUT constructs that reduce behavioural intention to adopt new technologies. This finding aligns with Kwak et al. (2022), who showed that both constructs predict AI healthcare technology use among nursing students.
Improvement Seekers emphasised that the meaningful integration of AI into caregiving requires greater precision, real-time responsiveness, and seamless system integration. Their views rejected technological determinism and instead reflected social constructivist thinking, in which users define the value and relevance of technology through context and social negotiation (Pinch & Bijker, 1984). These caregivers demanded smarter, more context-sensitive features. For them, adoption does not hinge on novelty but on how well the device fits the actual caregiving environment and the user’s experience.
Beyond reaffirming the UTAUT framework, this study extended it by embedding performance and effort expectancy in a social constructivist perspective. Improvement Seekers illustrated how these constructs operate not only as individual predictors but also as socially negotiated evaluations that depend on context-specific constraints, institutional norms, and feedback loops between caregivers and technology developers. This integration bridges UTAUT and STS, positioning technology acceptance as both a functional and a socio-contextual process.
The final type of caregiver in our study, the Emotional Support Seekers (Type 3), exhibited neutral or low trust in the AVH and used it primarily to regulate their emotions rather than deliver patient care. They found comfort in confiding in the device, using it as a tool for emotional expression and relief. This perspective extends the traditional TAM framework by introducing emotional usefulness as a parallel to perceived usefulness. For these caregivers, the AVH served as an emotional support system, helping to alleviate stress and facilitate interpersonal communication.
Participants’ narratives indicated that the AVH provided comfort during emotionally difficult moments, helping them recover from feelings of isolation and burnout. These findings align with recent studies (Haque & Rubya, 2023; Siddals et al., 2024; Merx et al., 2025), which found that emotional comfort and psychological safety significantly influenced user attitudes towards AI technologies. We can interpret this pattern using the Social Construction of Technology (SCOT) framework (Bijker, 1997), which emphasises how users reshape technology’s meaning based on their needs and social context. In this case, Emotional Support Seekers reinterpret the AVH as a socioemotional support system, diverging from its original function as a caregiving tool.
This reconceptualisation also aligns with Human–Machine Communication (HMC) theory, which views AI as a relational and communicative partner (Guzman & Lewis, 2020). For instance, Li et al. (2024) found that patients’ acceptance of AI-driven medical advice depended heavily on the system’s communication style and perceived social responsiveness. Emotional Support Seekers demonstrate this point by accepting and continuing to use the AVH because it offered emotional connection, comfort, and psychological relief.
This study expands HMC by situating emotional interaction within the caregiving relationship, where AI acts as a relational mediator that fosters empathy, containment, and self-regulation. Unlike prior HMC studies that focused on general communication or companionship, our findings indicate that emotional trust and therapeutic presence can also emerge in professional caregiving contexts, suggesting a new dimension of relational resilience in human–AI interactions.
In summary, Emotional Support Seekers perceived the AVH not only as a functional tool for patient care but also as a companion that supported their emotional well-being. Their experiences illustrate the evolving social role of the AVH in caregiving, shifting from a task-oriented instrument to an emotionally meaningful partner. These findings suggest that the successful implementation of AVH caregiving technologies in real-world settings requires sensitivity to the distinct needs and orientations of each caregiver type.
For Active Acceptors, maintaining engagement relies on strengthening system reliability and interoperability, alongside the continuous incorporation of on-site feedback to improve usability and trust. For Improvement Seekers, technological sophistication and user-centred validation processes are crucial, with feedback loops between caregivers and developers enhancing practical applicability and contextual fit. Finally, Emotional Support Seekers require empathy-based conversational algorithms and emotional support modules to mitigate compassion fatigue and psychological stress. Collectively, these implications underscore the need to design AVHs that reflect the emotional, cognitive, and contextual dimensions of caregiving practice.

5. Limitations and Future Research Directions

While this study offers valuable insights into caregivers’ subjective perceptions of AVHs, it also has several limitations that provide direction for future research.
First, the sample consisted primarily of caregivers working in long-term care hospitals and facilities located in one geographic area. Therefore, regional characteristics may have influenced their perceptions. Future research should conduct cross-regional comparative studies or incorporate more representative quantitative approaches to enhance generalisability. While this study focused on caregivers’ subjective perceptions based on their actual experiences using AI caregiving devices, future research should include participants who are less familiar with AI technologies to explore how initial exposure, unfamiliarity, and learning processes shape perceptions and acceptance of AI in caregiving contexts.
Second, the average usage period for the AVH was approximately 15 days, and researchers did not impose a standardised usage protocol. Some participants used the device jointly with patients, while others used it independently. These varied contexts may have affected how participants experienced the system. Future studies should consider controlling for usage environments or providing clear usage guidelines to facilitate more accurate comparisons across perception types.
Third, the sample included more female than male caregivers. This gender imbalance reflects broader structural trends in the caregiving profession, but it limited the study’s ability to examine gender-based differences in technology acceptance. Future research should prioritise gender-diverse sampling to address this gap.
Despite these limitations, this study makes important contributions. It revealed that AI caregiving technologies support not only task-based functions but also caregivers’ emotional well-being and psychological resilience. Future work should expand to more caregiving professions and regions, while also developing models of technology acceptance that incorporate emotional and cultural dimensions. A mixed-methods approach that integrates Q methodology with quantitative assessments may offer a more comprehensive understanding of how users perceive and engage with AI systems.

6. Conclusions

This study employed Q methodology to explore caregivers’ subjective perceptions of AI Virtual Human (AVH) devices in the care of older adults and identified three distinct perception types: Active Acceptors, Improvement Seekers, and Emotional Support Seekers. The results indicate that caregivers’ acceptance of AI extends beyond functional and instrumental efficiency to include emotional connection, relational meaning, and contextual interpretation, reflecting the complex and human-centred nature of technology use in caregiving.
By integrating theoretical perspectives from the Technology Acceptance Model (TAM), the Unified Theory of Acceptance and Use of Technology (UTAUT), Science and Technology Studies (STS), and Human–Machine Communication (HMC), this study contributes to a more comprehensive understanding of human–AI interactions in care contexts.
Practically, the findings emphasise the need for user-centred AI design and policy development that account for caregivers’ emotional and experiential realities. Future studies should include participants with limited or no prior experience with AI to capture a broader range of perceptions and investigate cross-cultural and longitudinal variations in AI acceptance within caregiving settings.

Author Contributions

Conceptualization, S.J.S., Y.-G.J. and S.Y.L.; methodology, S.J.S., J.Y.K. and S.Y.L.; software, S.J.S. and J.Y.K.; validation, K.Y.M. and Y.-G.J.; formal analysis, S.J.S. and J.Y.K.; investigation, K.Y.M. and J.Y.K.; resources, K.Y.M., Y.-G.J. and S.Y.L.; data curation, S.J.S. and J.Y.K.; writing—original draft preparation, S.J.S., J.Y.K. and S.Y.L.; writing—review and editing, K.Y.M. and Y.-G.J.; visualization, K.Y.M.; supervision, S.Y.L.; project administration, S.Y.L.; funding acquisition, Y.-G.J. All authors have read and agreed to the published version of the manuscript.

Funding

A grant from the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health and Welfare, Republic of Korea (Grant Number: RS-2023-KH140987), supported this research.

Institutional Review Board Statement

The researchers conducted this study in accordance with the principles outlined in the Declaration of Helsinki. The Ethics Committee of the Dongguk University Institutional Review Board approved this study (IRB No. DUIRB2024-08-02) on 28 August 2024.

Informed Consent Statement

All subjects involved in the study provided their informed consent.

Data Availability Statement

The data are available from the corresponding authors upon reasonable request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

1. Akhtar-Danesh, N. (2017). An overview of the statistical techniques in Q methodology: Is there a better way of doing Q analysis? Operant Subjectivity, 38(3/4), 100553.
2. Bijker, W. E. (1997). Of bicycles, bakelites, and bulbs: Toward a theory of sociotechnical change. MIT Press.
3. Bonacaro, A., Rubbi, I., Artioli, G., Monaco, F., Sarli, L., & Guasconi, M. (2024). AI and big data: Current and future nursing practitioners’ views on future of healthcare education provision. In Innovation in applied nursing informatics (pp. 200–204). IOS Press.
4. Borna, S., Maniaci, M. J., Haider, C. R., Gomez-Cabello, C. A., Pressman, S. M., Haider, S. A., Demaerschalk, B. M., Cowart, J. B., & Forte, A. J. (2024). Artificial intelligence support for informal patient caregivers: A systematic review. Bioengineering, 11(5), 483.
5. Brown, S. R. (1978). The importance of factors in Q methodology: Statistical and theoretical considerations. Operant Subjectivity, 1(4), 100516.
6. Brown, S. R. (1980). Political subjectivity: Applications of Q methodology in political science. Yale University Press.
7. Brown, S. R. (1993). A primer on Q methodology. Operant Subjectivity, 16(3/4), 91–138.
8. Cao, X., Zhang, H., Zhou, B., Wang, D., Cui, C., & Bai, X. (2024). Factors influencing older adults’ acceptance of voice assistants. Frontiers in Psychology, 15, 1376207.
9. Choi, S. (2022). History and prospects of senior care and nursing issues. OUGHTOPIA, 36(3), 179–208.
10. Churruca, K., Ludlow, K., Wu, W., Gibbons, K., Nguyen, H. M., Ellis, L. A., & Braithwaite, J. (2021). A scoping review of Q-methodology in healthcare research. BMC Medical Research Methodology, 21(1), 125.
11. Croes, E. A. J., & Antheunis, M. L. (2021). Can we be friends with Mitsuku? A longitudinal study on the process of relationship formation between humans and a social chatbot. Journal of Social and Personal Relationships, 38(1), 279–300.
12. Damio, S. M. (2018). The analytic process of Q methodology. Asian Journal of University Education (AJUE), 14(1), 59–75.
13. Davis, F. D. (1986). A technology acceptance model for empirically testing new end-user information systems: Theory and results [Doctoral dissertation, Massachusetts Institute of Technology].
14. de Guzman, A. B., Silva, K. E. M., Silvestre, J. Q., Simbillo, J. G. P., Simpauco, J. J. L., Sinugbuhan, R. J. P., Sison, D. M. N., & Siy, M. R. C. (2012). For your eyes only: A Q-methodology on the ontology of happiness among chronically ill Filipino elderly in a penal institution. Journal of Happiness Studies, 13(5), 913–930.
15. Edwards, A., & Edwards, C. (2017). The machines are coming: Future directions in instructional communication research. Communication Education, 66(4), 487–488.
16. Guzman, A. L., & Lewis, S. C. (2020). Artificial intelligence and communication: A human–machine communication research agenda. New Media & Society, 22(1), 70–86.
17. Haque, M. R., & Rubya, S. (2023). An overview of chatbot-based mobile mental health apps: Insights from app descriptions and user reviews. JMIR mHealth and uHealth, 11(1), e44838.
18. Holden, R. J., & Karsh, B. T. (2010). The technology acceptance model: Its past and its future in health care. Journal of Biomedical Informatics, 43(1), 159–172.
19. Hylton, P., Kisby, B., & Goddard, P. (2018). Young people’s citizen identities: A Q-methodological analysis of English youth perceptions of citizenship in Britain. Societies, 8, 121.
20. Igarashi, T., Iijima, K., Nitta, K., & Chen, Y. (2024). Estimation of the cognitive functioning of the elderly by AI agents: A comparative analysis of the effects of the psychological burden of intervention. Healthcare, 12(18), 1821.
21. Kim, H. (1992). Understanding Q methodology for subjectivity research. Journal of Nursing, 6(1), 1–11.
22. Kim, H. (2007). P sample selection and Q sorting. Journal of KSSSS, 15, 5–19.
23. Kim, M., Lee, J., & Kim, S.-H. (2022). Caregiver support services as health care assistants for seniors. Proceedings of the HCI Korea, 2022, 538–541.
24. Kim, Y., & Yeo, Y. H. (2018). The effect of job environment on burnout and the moderating effect of occupational identity for nursing home workers. Locality and Globality, 42(3), 39–60.
25. Kwak, Y., Seo, Y. H., & Ahn, J. W. (2022). Nursing students’ intent to use AI-based healthcare technology: Path analysis using the unified theory of acceptance and use of technology. Nurse Education Today, 119, 105541.
26. Lee, Y. (2018). Trends in the socialisation of elderly care (No. 2018-03-3). Statistics Development Institute.
27. Lee, Y., Song, S., & Choi, H. (2023). Case analysis and prospects of artificial intelligence-based elderly care service development. The Journal of the Korea Contents Association, 23(2), 647–656.
28. Li, S., Chen, M., Liu, P. L., & Xu, J. (2024). Following medical advice of an AI or a human doctor? Experimental evidence based on clinician–patient communication pathway model. Health Communication, 40(9), 1810–1822.
29. Lombard, M., & Xu, K. (2021). Social responses to media technologies in the 21st century: The media are social actors paradigm. Human–Machine Communication, 2(1), 29–55.
30. Merx, Q., Steins, M., & Odekerken, G. (2025). The role of psychological comfort with service robot reminders: A dyadic field study. Journal of Services Marketing, 39(10), 1–14.
31. Ministry of the Interior and Safety. (2025). Population and household status by administrative dong based on resident registration. Available online: https://jumin.mois.go.kr/statMonth.do (accessed on 19 May 2025).
32. Na, S. (2021). Long-term care hospitals and changing elderly care in South Korea. Medicine Anthropology Theory, 8(3), 1–20.
33. Nadarzynski, T., Knights, N., Husbands, D., Graham, C. A., Llewellyn, C. D., Buchanan, T., & Ridge, D. (2024). Achieving health equity through conversational AI: A roadmap for design and implementation of inclusive chatbots in healthcare. PLoS Digital Health, 3(5), e0000492.
34. Organisation for Economic Co-operation and Development (OECD). (2023). Health at a glance 2023: OECD indicators. OECD Publishing.
35. Pinch, T. J., & Bijker, W. E. (1984). The social construction of facts and artefacts: Or how the sociology of science and the sociology of technology might benefit each other. Social Studies of Science, 14(3), 399–441.
36. Ramlo, S. (2015). Theoretical significance in Q methodology: A qualitative approach to a mixed method. Research in the Schools, 22(1), 73–87.
37. Ruggiano, N., Brown, E., Roberts, L., Framil Suarez, C., Luo, Y., Hao, Z., & Hristidis, V. (2021). Chatbots to support people with dementia and their caregivers: Systematic review of functions and quality. Journal of Medical Internet Research, 23(6), e25006.
38. Sabra, H. E., Abd Elaal, H. K., Sobhy, K. M., & Bakr, M. M. (2023). Utilization of artificial intelligence in health care: Nurses’ perspectives and attitudes. Menoufia Nursing Journal, 8(1), 253–268.
39. Siddals, S., Torous, J., & Coxon, A. (2024). It happened to be the perfect thing: Experiences of generative AI chatbots for mental health. NPJ Mental Health Research, 3(1), 48.
40. Timon, C. M., Heffernan, E., Kilcullen, S., Hopper, L., Lee, H., Gallagher, P., & Murphy, C. (2024). Developing independent living support for older adults using Internet of Things and AI-based systems: Co-design study. JMIR Aging, 7, e54210.
41. Van Exel, J., & de Graaf, G. (2005). Q methodology: A sneak preview. Available online: https://qmethod.org/wp-content/uploads/2016/01/qmethodologyasneakpreviewreferenceupdate.pdf (accessed on 28 April 2025).
42. Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478.
43. Watts, S., & Stenner, P. (2005). The subjective experience of partnership love: A Q methodological study. British Journal of Social Psychology, 44(1), 85–107.
44. World Health Organization. (2024). Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models. Available online: https://iris.who.int/server/api/core/bitstreams/e9e62c65-6045-481e-bd04-20e206bc5039/content (accessed on 19 May 2025).
45. Yu, S., & Chen, T. (2024). Understanding older adults’ acceptance of chatbots in healthcare delivery: An extended UTAUT model. Frontiers in Public Health, 12, 1435329.
46. Zabala, A. (2014). Qmethod: A package to explore human perspectives using Q methodology. The R Journal, 6(2), 163–173.
47. Zaboli, A., Biasi, C., Magnarelli, G., Miori, B., Massar, M., Pfeifer, N., Brigo, F., & Turcato, G. (2025). Arterial blood gas analysis and clinical decision-making in emergency and intensive care unit nurses: A performance evaluation. Healthcare, 13(3), 261.
Figure 1. Structural design of the AI Virtual Human (AVH) system.
Figure 2. Q methodology steps.
Figure 3. Q-sort distribution grid.
Table 1. Demographic characteristics of participants in the focus group interview.

| Participant Label (Number) | Age (Gender) | Position | Career Choice Motivation | Challenges | Average Patient Age | Key Patient Relations |
|---|---|---|---|---|---|---|
| A (P16) | 48 (F) | Nurse | Job stability | Physical strain/Emotional difficulty | 80~89 | Trust/Emotional support/Communication |
| B (P2) | 68 (F) | Caregiver | Desire to help others | Emotional difficulty | 60~69 | Communication |
| C (P1) | 56 (F) | Nurse | Desire to help others | Emotional difficulty | 80~89 | Trust |
| D (P5) | 55 (M) | Social Worker | Social recognition/Acquisition of professional skills | Relationship with patients’ families | 70~79 | Trust |
| E (P8) | 51 (F) | Caregiver | Desire to help others | Emotional difficulty | 80~89 | Emotional support |
| F (-) | 40 (F) | Nurse | Desire to help others/Job stability | Emotional difficulty | 80~89 | Communication |
Table 2. Demographic characteristics of the P sample.

| Type | P Sample No. | Factor Weight | Age (Gender) | Position | Career Choice Motivation | Challenges | Average Patient Age | Key Patient Relations |
|---|---|---|---|---|---|---|---|---|
| Type 1 (N = 9) | P11 | 0.7901 | 46 (F) | Caregiver | Meaningful experience | Physical strain | 60~89 | Trust |
| | P17 | 0.7695 | 50 (F) | Nursing Assistant | Job stability | Communication issues with patients | 70~79 | Consistent care |
| | P15 | 0.7141 | 45 (F) | Nurse | Acquisition of professional skills | Communication issues with patients | 60~89 | Emotional support |
| | P16 | 0.6999 | 48 (F) | Nurse | Job stability | Physical strain/Emotional difficulty | 80~89 | Trust/Emotional support/Communication |
| | P2 | 0.6934 | 68 (F) | Caregiver | Desire to help others | Emotional difficulty | 60~69 | Communication |
| | P13 | 0.5557 | 42 (F) | Caregiver | Meaningful experience | Communication issues with patients | 80~89 | Trust |
| | P6 | 0.5037 | 43 (F) | Nurse | Job stability/Desire to help others | Emotional difficulty | 80~89 | Communication |
| | P4 | 0.3834 | 61 (F) | Caregiver | Desire to help others | Communication issues with patients | 80~89 | Trust |
| | P10 | −0.3381 | 51 (F) | Caregiver | Desire to help others | Physical strain | 80~89 | Trust |
| Type 2 (N = 4) | P7 | 0.7963 | 43 (F) | Caregiver | Job stability | Time management | 70~79 | Communication |
| | P1 | 0.7 | 56 (F) | Nurse | Desire to help others | Emotional difficulty | 80~89 | Trust |
| | P8 | −0.635 | 51 (F) | Caregiver | Desire to help others | Emotional difficulty | 80~89 | Emotional support |
| | P5 | 0.5693 | 55 (M) | Social Worker | Social recognition/Acquisition of professional skills | Relationship with patients’ families | 70~79 | Trust |
| Type 3 (N = 4) | P3 | 0.7286 | 52 (F) | Caregiver | Job stability | Physical strain/Communication issues with patients | 70~79 | Trust/Communication |
| | P14 | 0.6337 | 58 (F) | Caregiver | Meaningful experience/Job stability | Communication issues with patients | 80~89 | Trust |
| | P9 | 0.5368 | 50 (F) | Nursing Assistant | Job stability | Emotional difficulty | 70~79 | Trust |
| | P12 | 0.3333 | 63 (F) | Caregiver | Meaningful experience/Social recognition | Physical strain | 80~89 | Trust |
Table 3. Eigenvalues and explanatory variances in the sorting of three factors.

| Content | 1 | 2 | 3 |
|---|---|---|---|
| Eigenvalues | 4.01352 | 2.122581 | 1.904245 |
| Explained Variance (%) | 24 | 12 | 11 |
| Cumulative Explained Variance (%) | 24 | 36 | 47 |
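As a quick arithmetic check on Table 3 (this is an illustrative sketch, not the study's analysis code): in Q methodology, the variance a factor explains equals its eigenvalue divided by the number of Q sorts, here the 17 participants.

```python
# Illustrative check of Table 3: explained variance = eigenvalue / number
# of Q sorts. Values are taken from the table; this is not the study's code.
eigenvalues = [4.01352, 2.122581, 1.904245]
n_sorts = 17  # P-sample size

explained = [round(ev / n_sorts * 100) for ev in eigenvalues]
cumulative = [sum(explained[:i + 1]) for i in range(len(explained))]

print(explained)   # per-factor explained variance (%)
print(cumulative)  # cumulative explained variance (%)
```

Running this reproduces the 24/12/11% per-factor and 47% cumulative figures reported in Table 3.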
Table 4. Correlation coefficients between factors.

| Type | 1 | 2 | 3 |
|---|---|---|---|
| 1 | 1 | −0.0831 | 0.0551 |
| 2 | | 1 | 0.0564 |
| 3 | | | 1 |
Table 5. Z-scores and Q-sort values of Q statements by factor type.

| No. | Statement | Type 1 Z-Score | Type 1 Q-Sort Value | Type 2 Z-Score | Type 2 Q-Sort Value | Type 3 Z-Score | Type 3 Q-Sort Value |
|---|---|---|---|---|---|---|---|
| 1 | The AVH feels too heavy to use comfortably. | −1.18 | −3 | −0.11 | 0 * | −1.48 | −3 |
| 2 | The character in the AVH feels like I’m speaking to a real person. | −0.96 | −2 | −1.21 | −3 | −0.11 | 0 * |
| 3 | Interacting with the AVH feels like receiving counselling. | 0.01 | 0 * | −2.03 | −4 * | 1.57 | 4 * |
| 4 | The AVH is simple to operate. | 0.3 | 1 * | −0.37 | −1 | −0.82 | −2 |
| 5 | I don’t know who to turn to when problems occur while using the AVH. | −0.18 | −1 | −0.33 | −1 | 0.6 | 1 * |
| 6 | The AVH helps me respond effectively to changes in the patient’s condition. | 0.28 | 1 * | −1.89 | −4 | −2 | −4 |
| 7 | It is difficult to use the AVH for medical purposes. | 0.1 | 0 * | 0.69 | 1 | −0.84 | −2 * |
| 8 | I cannot trust the AVH system. | −1.57 | −4 | −0.28 | 0 * | −1.83 | −4 |
| 9 | Interaction with the AVH lifts my mood. | 1.11 | 2 * | −1 | −2 | −0.87 | −3 |
| 10 | I wouldn’t use the system without financial support. | −0.97 | −2 | 1.69 | 4 * | −0.79 | −2 |
| 11 | It’s hard to get the answers I want from the AVH. | −0.21 | −1 | 0.49 | 1 | 0.26 | 1 |
| 12 | Entertainment features help reduce stress when caring for patients. | 1.74 | 4 * | −0.52 | −1 | −0.46 | −1 |
| 13 | Sharing my struggles with the AVH helps relieve stress because it shows empathy. | 1.41 | 3 * | −1.04 | −3 | −1.59 | −3 |
| 14 | Functional issues make the AVH difficult to use. | −1.23 | −3 * | 0.24 | 1 | −0.1 | 0 |
| 15 | It takes too long for the AVH to recognise speech, which is inconvenient. | −1.05 | −2 * | 1.83 | 4 | 1.02 | 2 |
| 16 | I can say things to the AVH that I can’t easily say to others. | 1.44 | 3 * | −1.66 | −3 * | 2.28 | 4 * |
| 17 | The AVH asks too many questions, making it boring. | −1.25 | −3 * | −0.34 | −1 | −0.29 | −1 |
| 18 | It feels like the system lacks sufficient big data. | −0.93 | −1 * | 0.77 | 1 | 0.29 | 1 |
| 19 | The AVH should offer features for emergencies. | 0.43 | 2 | 0.94 | 2 | 1.13 | 3 |
| 20 | Voice recognition often fails with accents or dialects, which is frustrating. | −1.15 | −2 * | 0.56 | 1 | 1.14 | 3 |
| 21 | The device is helpful in suggesting responses to sudden changes in a patient’s condition. | −0.01 | 0 | −0.14 | 0 | 0.18 | 0 |
| 22 | It’s convenient to access the knowledge needed on the spot while caregiving. | 1.49 | 3 | −0.79 | −2 * | 1.02 | 2 |
| 23 | Existing AI-based real-time platforms are more effective than the AVH. | −0.07 | 0 | 0.12 | 0 | −0.25 | 0 |
| 24 | It would be helpful if the system could store and manage patient information. | 0.81 | 2 | 0.92 | 2 | −0.51 | −1 * |
| 25 | Sensor integration for real-time patient monitoring would be useful. | 1.98 | 4 | 1.63 | 3 | −0.04 | 0 * |
| 26 | Searching the internet directly is faster and easier. | −0.6 | −1 | −0.11 | 0 | 1.01 | 2 * |
| 27 | The mechanical nature of the device makes it feel unfamiliar and impersonal. | −1.85 | −4 * | −1.02 | −2 * | 0.86 | 2 * |
| 28 | The AVH should provide personalised care through individual patient data. | 1.04 | 2 | 0.98 | 3 | 0.61 | 1 |
| 29 | Caregivers can collaborate with the AVH to improve work efficiency. | 0.3 | 1 * | −0.75 | −2 | −0.28 | −1 |
| 30 | The AVH should be able to connect directly with medical staff in emergencies. | 0.24 | 0 | 0.23 | 0 | 0.55 | 1 |
| 31 | It would be better if the AVH could assist with medication management. | 0.33 | 1 * | 0.92 | 2 | −0.76 | −2 |
| 32 | The AVH should track eating habits and offer appropriate advice. | 0.32 | 1 | 0.86 | 2 | 0.21 | 0 * |
| 33 | The AVH must strictly protect patient anonymity. | 0.19 | 0 * | 1.01 | 3 | 1.04 | 3 |
| 34 | The AVH’s design should be senior-friendly. | −0.28 | −1 | −0.3 | −1 | −0.73 | −1 |

Note. * statistically significant difference (p < 0.01) defining a given factor type.
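For readers unfamiliar with how factor arrays such as those in Table 5 are produced, the sketch below illustrates the standard Q-methodology computation (Brown, 1980): the Q sorts that define a factor are averaged using weights w = f / (1 − f²) derived from their factor loadings, and the weighted composite is standardised into z-scores. The sorts, loadings, and ten-statement grid here are hypothetical, not the study's data.

```python
# Hypothetical Q-sort values (grid columns -4..4) for three sorts that
# define one factor, ten statements each. Not the study's data.
sorts = [
    [3, 1, 0, -2, -4, 2, 4, -1, -3, 0],
    [4, 0, 1, -3, -2, 3, 2, -1, -4, 0],
    [2, 1, -1, -2, -3, 4, 3, 0, -4, 0],
]
loadings = [0.79, 0.71, 0.55]  # hypothetical loadings of the defining sorts

# Brown's (1980) weighting: w = f / (1 - f^2), so higher-loading sorts
# contribute more to the factor array.
weights = [f / (1 - f * f) for f in loadings]

# Weighted average across the defining sorts for each statement...
n = len(sorts[0])
composite = [sum(w * s[i] for w, s in zip(weights, sorts)) / sum(weights)
             for i in range(n)]

# ...then standardised so the factor array has mean 0 and SD 1 (z-scores).
mean = sum(composite) / n
sd = (sum((x - mean) ** 2 for x in composite) / n) ** 0.5
z_scores = [round((x - mean) / sd, 2) for x in composite]
print(z_scores)
```

Software such as Ken-Q or the qmethod R package (Zabala, 2014) performs this computation for every statement and factor, producing the Z-Score columns of Table 5; the Q-Sort Value columns then rank the z-scores back onto the forced distribution grid.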
