Informatics · Article · Open Access

1 November 2025
Negotiating Human–AI Complementarity in Geriatric and Palliative Care: A Qualitative Study of Healthcare Practitioners’ Perspectives in Northeast China

1 HeXie Management Research Centre, College of Industry-Entrepreneurs, Xi’an Jiaotong-Liverpool University, Suzhou 215000, China
2 Centre for Ageing and the Life Course, Department of Sociology, Social Policy and Criminology, University of Liverpool, Liverpool L69 7ZX, UK
3 Department of Sociology, Social Policy and Criminology, University of Liverpool, Liverpool L69 7ZX, UK
4 Department of Social and Policy Sciences, University of Bath, Bath BA2 7AY, UK

Abstract

Artificial intelligence (AI) is becoming increasingly significant in healthcare around the world, especially in China, where rapid population ageing coincides with rising expectations for quality of life and a shrinking care workforce. This study explores Chinese health practitioners’ perspectives on using AI assistants in integrated geriatric and palliative care. Drawing on Actor–Network Theory, care is viewed as a network of interconnected human and non-human actors, including practitioners, technologies, patients and policies. Based in Northeast China, a region with structurally marginalised healthcare infrastructure, this article analyses qualitative interviews with 14 practitioners. Our findings reveal three key themes: (1) tensions between AI’s rule-based logic and practitioners’ human-centred approach; (2) ethical discomfort with AI performing intimate or emotionally sensitive care, especially in end-of-life contexts; (3) structural inequalities, with weak policy and infrastructure limiting effective AI integration. The study highlights that AI offers clearer benefits for routine geriatric care, such as monitoring and basic symptom management, but its utility is far more limited in the complex, relational and ethically sensitive domain of palliative care. Proposing a model of human–AI complementarity, the article argues that technology should support rather than replace the emotional and relational aspects of care and identifies policy considerations for ethically grounded integration in resource-limited contexts.

1. Introduction

There is an urgent need to reduce avoidable and serious health-related burdens through prevention and treatment, while ensuring comprehensive, equitable access to palliative care, particularly in low- and middle-income countries []. Rapid global population ageing exacerbates existing challenges, including insufficient healthcare resources, limited workforce capacity and structural barriers []. These pressures are especially acute in geriatric and palliative care, where many people live with multiple chronic conditions, frailty and functional decline, requiring holistic care that addresses multifaceted needs across the continuing processes of ageing and dying. Although geriatric and palliative care share the aims of addressing frailty, comorbidities and cognitive decline and improving quality of life, their integration remains unclear []. In many underdeveloped regions, however, care for older people and care for the dying are already closely intertwined without specialised classification or awareness, which highlights the importance of adaptable and context-sensitive models. Research suggests that integration across these two fields can improve training, enhance specialist skills and promote standardised teaching, including communication goals, in care [,,].
In response, national healthcare systems, including China’s, are working to embed palliative care into geriatric care frameworks []. The 14th Five-Year Plan for the Development of National Aging Services and the Elderly Care Service System in China explicitly calls for integrating palliative care into older people’s health support systems []. Similarly, the 2025 Guidelines for the Construction and Management of Geriatrics mandate that general hospitals establish geriatrics departments, which are also tasked with developing palliative care services and protocols []. To address these challenges, especially in under-resourced areas, China has introduced a series of policy and service reforms since 2017, including piloting integrated palliative care services in 39 cities. Sites in Northeast China, where population ageing, economic marginalisation and uneven healthcare provision intersect, have been at the forefront of these initiatives []. While supporting national strategies and responding to the World Health Organization’s call for broader access to palliative care, these initiatives remain inaccessible to many small and medium-sized cities in Northeast China, where shortages of trained professionals, inadequate funding mechanisms and weak institutional infrastructures continue to limit service delivery.
At the same time, artificial intelligence (AI) is increasingly being explored as a way to address workforce shortages and structural barriers in both geriatric and palliative care. In China, smart palliative care systems often use online platforms to support real-time sharing of clinical knowledge, ethical values and care practices [,], while AI supports health monitoring, personalised treatment and clinical decision-making in geriatric care []. The country’s strong digital infrastructure further enables these developments; for example, around 24.8 percent of the population (about 350 million people) are aware of generative AI tools []. While promising, AI adoption raises questions about how these tools are received, interpreted and integrated into complex and ethically challenging clinical routines. These questions bear directly on the growing interest in human–AI complementarity, which positions AI as supporting rather than replacing healthcare practitioners in the integration of geriatric and palliative care [,,,,]. In other words, AI has the potential to facilitate early symptom detection, care coordination and remote specialist access throughout the interconnected processes of ageing and dying [,,,], but an over-reliance on technology risks marginalising human presence, narrative meaning-making and culturally embedded values such as filial piety []. These challenges underscore the necessity of ‘technological humility’, which aims to balance predictive accuracy with moral discernment and contextual judgement, and call for socio-culturally sensitive, explainable AI systems to safeguard patient dignity [].
The interpretability and transparency of AI systems remain critical for supporting the integration of geriatric and palliative care, especially in high-stakes environments such as intensive care or end-of-life decision-making [,]. Many AI implementations fall short of genuine collaboration, instead relegating clinicians to passive oversight roles, which raises concerns about deskilling and erosion of professional autonomy [,]. In the context of integrated care, AI’s limitations in capturing the relational, ethical and holistic dimensions of patient needs are particularly salient, given the centrality of empathy, dignity and autonomy [,,,,]. Healthcare practitioners’ attitudes towards AI are shaped by a complex interplay of technological, institutional and cultural factors. While some exhibit optimism about AI’s potential to enhance evidence-based and empathetic care [], others remain cautious due to concerns about professional roles, ethical risks and potential impacts on clinician–patient relationships [,,]. Understanding these perspectives is especially important in under-resourced areas such as Northeast China, where workforce shortages, infrastructure gaps and regional disparities intersect with the complex demands of integrating geriatric and palliative care [,].
To navigate these complexities, this study employs Actor–Network Theory (ANT) as a conceptual framework. ANT offers a relational and materially grounded approach that treats both human and non-human entities, including AI technologies, not as passive instruments or external influences but as actors whose agency emerges through interactions [,]. It challenges traditional distinctions between subject and object, emphasising that outcomes in care systems are co-produced through heterogeneous networks of clinicians, patients, technologies, policies and cultural values []. In healthcare, this means that AI is not simply inserted into pre-existing workflows but actively participates in shaping clinical logics, ethical norms and relational dynamics. ANT enables analysis of how trust, authority and empathy are distributed and contested across evolving care networks. It also facilitates sensitivity to how cultural imperatives, such as filial responsibility or personhood, are not external constraints but immanent forces within socio-technical assemblages []. By framing human–AI relations as dynamic and context-specific rather than fixed or universal, ANT supports an investigation into the contingent formations through which technology becomes ethically and practically meaningful in care.
Thus, this conceptual lens allows for an exploration of how healthcare practitioners in the resource-constrained region of Northeast China make sense of and adapt to the introduction of AI in integrated geriatric and palliative care. It situates AI not as an autonomous disruptor or inevitable solution, but as a relationally embedded actor, whose significance is forged through everyday practices, institutional histories and cultural scripts. As such, this study investigates the following questions:
  • How do healthcare practitioners in resource-constrained settings interpret and navigate the role of AI in supporting, extending or reshaping their clinical and ethical responsibilities within integrated geriatric and palliative care?
  • In what ways do human–AI interactions foster collaboration, generate tension or necessitate negotiation in the provision of care for older adults living with frailty or serious illness?

2. Materials and Methods

2.1. Research Design

This study employed a qualitative approach to examine how AI was interpreted, resisted and selectively adopted by healthcare professionals in resource-limited settings in China, where palliative and geriatric care were closely intertwined in daily practice. It explored how technologies were integrated into clinical routines and how they interacted with professional ethics, care relationships and infrastructural limitations. The study was conducted in a private Secondary Grade B hospital in a rural area of Northeast China. This region is shaped by a paradox of structural resilience, stemming from its strong industrial past, and ongoing stagnation, marked by low economic growth and the outmigration of skilled labour []. These wider structural issues place pressure on public service provision, particularly in healthcare for older populations.
The hospital selected for the study was a typical example of the small and medium-sized institutions falling outside China’s national palliative care pilot programme. Staffed mainly by general practitioners and nurses, it served a mixed patient group, including older adults with chronic illness, patients with psychosomatic symptoms, patients with late-stage cancer and frail older people. Specialised palliative care provision remained limited and focused on symptom management and basic psychological support, reflecting the systemic lack of resources and the absence of structured policy support in non-pilot areas. We chose this hospital as a field site not only because of its location in a resource-limited and ageing region, but also because it represents a model of integrated geriatric and palliative care common in non-urban, lower-tier hospitals in China, where palliative care is not formally recognised or delivered as a distinct service but is instead intertwined with more generalised geriatric care.

2.2. Data Collection

Informed by prior research calling for methodological pluralism and deeper contextual insights [], this study employed a purposive sampling strategy to recruit frontline healthcare professionals engaged in the delivery of geriatric and palliative care. A total of 14 participants (5 general practitioners and 9 nurses) were recruited from the selected hospital (Table 1). These professionals were selected for their routine engagement with older and terminally ill patients and their first-hand exposure to the use of digital technologies within clinical routines. Data collection was conducted through semi-structured in-depth interviews, carried out in Spring 2025 via telephone and scheduled according to participants’ availability. An interview guide (see Table A1) was developed to explore three core thematic domains: (1) the everyday challenges and dilemmas faced by practitioners in integrated geriatric and palliative care; (2) attitudes, understandings and levels of engagement with AI technologies; and (3) how practitioners perceive and manage the integration (or lack thereof) of AI tools in their professional routines. The flexible interview format allowed participants to narrate their experiences on their own terms, while enabling the interviewer to probe deeper into contradictions, anxieties and hopes surrounding the evolving role of AI in care. Thematic saturation was reached after analysing interviews with 14 participants, as three consecutive interviews produced no new themes or subthemes regarding human–AI complementarity in integrated geriatric and palliative care.
Table 1. Participant information.
Potential participants were initially identified through introductions by hospital leadership, a particularly effective method in the Chinese context for engaging reticent or otherwise hard-to-reach staff; this approach helped mitigate self-selection bias and supported a more representative sample. While we deliberately sought variation in age and clinical experience during recruitment to capture a spectrum of perspectives, our analysis revealed no consistent or marked differences in AI adoption or attitudes correlating with these demographic factors. The interviews lasted between 20 and 40 min each, a duration chosen to balance comprehensive data collection with the time constraints of clinical staff. Ethical approval was obtained from the first author’s institution (ER-SRR-11000163320250106103627).
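The saturation rule reported above (three consecutive interviews producing no new themes or subthemes) can be expressed as a simple stopping check. The sketch below is a hypothetical illustration with invented code labels; it is not part of the study’s actual analysis workflow.

```python
# Minimal sketch of the stated saturation rule: stop once `window` consecutive
# interviews contribute no previously unseen codes. Codes here are invented.
from typing import List, Set


def saturation_reached(codes_per_interview: List[Set[str]], window: int = 3) -> bool:
    """Return True once `window` consecutive interviews add no unseen codes."""
    seen: Set[str] = set()
    run = 0  # consecutive interviews contributing nothing new
    for codes in codes_per_interview:
        new = codes - seen
        seen |= codes
        run = 0 if new else run + 1
        if run >= window:
            return True
    return False


# Hypothetical codebook fragments from five successive interviews
interviews = [
    {"AI-as-search-tool", "empathy-gap"},
    {"empathy-gap", "role-boundaries"},
    {"AI-as-search-tool"},
    {"role-boundaries"},
    {"empathy-gap"},
]
print(saturation_reached(interviews))  # True: interviews 3-5 add no new codes
```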
During qualitative interviews, participants frequently referenced two distinct AI platforms as integral to their daily practice, Doubao 1.5 (豆包) and DeepSeek-V3 (深度求索), both of which are widely used and accessible in China. These tools emerged organically in discussions about clinical decision-support, patient education and workflow efficiency, reflecting their divergent yet complementary roles in healthcare settings.
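Although participants accessed Doubao and DeepSeek through consumer apps rather than through code, a minimal sketch may help clarify the ‘advanced search’ style of interaction described in the findings below. The example assumes DeepSeek’s publicly documented OpenAI-compatible chat API; the environment variable, prompt wording and parameter values are illustrative assumptions, not details drawn from the study.

```python
# Illustrative only: a factual-lookup query of the kind practitioners described,
# sent to DeepSeek's OpenAI-compatible chat endpoint.
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],  # hypothetical key variable
    base_url="https://api.deepseek.com",     # DeepSeek's documented base URL
)

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {
            "role": "system",
            "content": "You are a clinical information assistant. "
                       "Answer factually and note standard guidance where possible.",
        },
        {
            "role": "user",
            "content": "What non-pharmacological measures can ease breathlessness "
                       "in a frail older patient receiving palliative care?",
        },
    ],
    temperature=0.2,  # low temperature for factual, search-engine-style answers
)
print(response.choices[0].message.content)
```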

2.3. Data Analysis

All interviews were transcribed verbatim and subjected to thematic analysis, supported by NVivo 14. The coding process followed an inductive approach, allowing analytical categories to emerge organically from participants’ narratives rather than being imposed a priori []. Initial line-by-line coding generated a broad set of codes, which were then refined and clustered through iterative comparison and interpretive memo writing. Regular analytical meetings between the first two authors helped maintain rigour, ensure reflexivity and clarify conceptual boundaries among the evolving themes.
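To make the clustering step more concrete, the toy sketch below tallies hypothetical line-level codes under emergent themes. It illustrates only the general shape of inductive clustering; the actual analysis was conducted in NVivo 14 with interpretive memo writing, and all code and theme labels here are invented.

```python
# Illustrative sketch: tally line-level codes and group them under emergent
# themes, mirroring (in toy form) how clusters were refined across meetings.
from collections import Counter

line_codes = [
    "AI-as-search-tool", "empathy-gap", "AI-as-search-tool",
    "role-boundaries", "consent-literacy", "empathy-gap",
]

theme_map = {  # hypothetical clusters agreed in analytical meetings
    "AI-as-search-tool": "Rule-based logic vs human-centred care",
    "empathy-gap": "Rule-based logic vs human-centred care",
    "role-boundaries": "Ethical discomfort",
    "consent-literacy": "Ethical discomfort",
}

theme_counts = Counter(theme_map[code] for code in line_codes)
for theme, n in theme_counts.most_common():
    print(f"{theme}: {n} coded segments")
```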
Actor–Network Theory (ANT) was adopted as the principal analytical lens to illuminate how AI was implemented within the socio-technical networks of rural healthcare. ANT directed our attention to the ways in which non-human actors such as AI systems, digital platforms and clinical tools become entangled in the delivery of integrated palliative and geriatric care, as well as with human actors, institutional structures and professional norms. Using this perspective, the analysis revealed that AI integration was not experienced as a linear or uniform process []. In particular, the embedding of AI within integrated palliative and geriatric care highlighted the contingent and negotiated character of technological adoption. Through our ANT-informed analysis, AI emerged as neither inherently beneficial nor detrimental but as a contingent actor, whose role and significance were co-constituted within a heterogeneous network of people, technologies and institutions []. By attending to these negotiations, the analysis foregrounded how care practices were reshaped through complex interactions involving both human judgement and technological affordances []. This approach moved beyond binary notions of technological acceptance or rejection, instead illuminating a landscape of partial integrations, strategic adaptations and ongoing recalibrations in the pursuit of ethically attuned care.

3. Results

Three interrelated themes emerged: (1) tensions between AI’s rule-based logic and practitioners’ human-centred approach, raising concerns about diminished empathy and discretion; (2) ethical discomfort with AI delivering intimate or emotionally sensitive care, especially in end-of-life contexts; and (3) structural inequalities driven by weak policy and infrastructure, which created barriers to effective AI integration. These themes also captured the targeted use of AI (e.g., Doubao, DeepSeek) in integrated geriatric and palliative care, ranging from drafting documents and querying the causes of emotional symptoms to supporting the diagnosis and treatment of geriatric diseases. They highlighted both the benefits of AI in bridging professional boundaries between general geriatrics and specialised palliative care, and the limitations of AI in addressing complex and sensitive needs, particularly towards the end of life. Together, these themes revealed how the promise of AI was often undermined by communication difficulties, service gaps and the persistence of structural constraints in everyday clinical practice.

3.1. Tensions Between AI’s Rule-Based Logic and Practitioners’ Human-Centred Approach

All participants believed that human beings are irreplaceable in healthcare, particularly when addressing the sensitive everyday needs associated with ageing and dying. To many of these clinicians, AI assistants were no more than advanced search tools, lacking the capacity for empathy or nuanced judgement. To avoid misunderstanding the capabilities of AI, participants tended to adopt a simplified view of its functions, focusing on its practical use rather than attributing human-like characteristics:
Participant 02 (nurse): “When you search using AI tools like DeepSeek or Doubao, it gives you quick answers, sometimes linking to extra features or content that you have to pay for. But really, it just helps you find information; it doesn’t think for you.”
Some participants viewed AI assistants as static knowledge bases rather than dynamic systems capable of synthesising lived experience. This view significantly shaped how they interacted with AI and how it was integrated into healthcare practices, raising important questions for critical examination. In particular, search engine-style interactions, in which practitioners primarily submitted brief factual queries, often left the technology’s potential to provide nuanced, context-sensitive responses under-utilised.
Participant 09 (nurse): “Because the severity of the disease varies from patient to patient, we sometimes need to rely on our experience or observe carefully in the moment to understand what’s really going on, something AI can’t fully do yet.”
Some participants believed that AI assistants are incapable of expressing genuine attitudes towards patients, a view shared by both doctors and nurses. Healthcare professionals consistently emphasised the indispensable role of human caregivers in managing chronic disease, a conviction that was deeply rooted in the inherent uncertainties of long-term care outcomes. Healing, they explained, arises not only from technical interventions but also from the complex intersubjective interactions between caregivers and patients. This dimension resists digitisation precisely because it relies on ambiguous and context-dependent human signals, such as facial expressions, that cannot be fully captured or interpreted by AI.
Participant 12 (doctor): “AI, including any robots we might see in the future, just can’t match human expression or language. The same sentence said with a different tone or speed can mean something completely different. Some doctors might have fewer patients because they focus on quality, but they spend more time communicating. A smile, good care, or just visiting a patient regularly can make them feel satisfied, and that helps them recover faster. I’ve seen plenty of examples where the medicine and treatment are the same, but if a patient has a bad attitude, they don’t feel it’s working. They might stay a few days, then think it’s no good, and stop the treatment.”
The nature of clinical work could influence how practitioners perceived their patients’ attitudes toward AI in caregiving. Nurses, who worked closely with patients, noted that some older patients might resist the integration of AI due to unfamiliarity and discomfort with new technologies, further complicating the network of actors involved in care delivery.
Participant 05 (nurse): “For patients, especially those who are middle-aged or older and unfamiliar with AI, there can be a feeling of uncertainty and insecurity about accepting it in their care.”
Despite limited trust in AI, practitioners generally acknowledged its benefits for simple geriatric tasks such as documentation, scheduling and reminders, allowing more focus on complex clinical decisions and patient interaction. By contrast, they were more cautious about AI in caring for dying patients, where palliative and geriatric care often overlap but emotional, psychological and existential needs remain central. While providing both types of care, practitioners recognised the distinctions and saw AI as having a limited role in palliative care. For example, Participant 08, an experienced nurse, described how her nuanced communication skills went beyond textbook knowledge, allowing her to interpret the unspoken concerns of a terminally ill patient and to recognise that encouragement and praise could reduce their perception of extreme pain.
Participant 08 (nurse): “When a patient is in extreme pain, they can become very impatient. We might tell them about other patients with similar pain who found relief through painkillers. Sometimes, we just chat about everyday life to distract them, then gently remind them of precautions. AI can support this by providing information or suggestions, but the real connection comes from asking questions like, ‘What did you do before that made you feel happy or proud?’ Often, the patient becomes really proud, and then they don’t need much more chatting. They start to open up and talk to you to take their mind off the pain.”
As such, our participants viewed AI as a limited, rule-based tool that can be useful for factual information but is unable to integrate the emotional, relational and nuanced judgement required to combine palliative care with general geriatric practice. Human caregivers remained central, because care depended on complex interactions between patients, practitioners and technology. The nature of clinical work influenced how AI was used mainly as a support tool, while patient acceptance, especially among older adults who were unfamiliar with AI, further shaped its integration. These findings show how caregiving arises from dynamic relationships among human and non-human actors within healthcare networks.

3.2. Ethical Discomfort Around Human–AI Complementarity

The broader tension between AI’s rule-based logic and the human-centred nature of caregiving gave rise to ethical discomfort among clinicians, who expressed concerns about the erosion of professional judgement, the depersonalisation of care and the risk of over-reliance on systems lacking contextual and emotional awareness. This discomfort was especially evident in relation to AI’s inability to manage ambiguity or respond to emotionally complex situations.
Several participants acknowledged that they had not yet encountered ethical dilemmas firsthand when using AI chatbots, primarily because AI integration in their institutions remained minimal and non-intrusive. However, through their shared reflections, they expressed growing concern about potential ethical conflicts emerging in the near future.
A key ethical concern lies in the design and delineation of AI’s roles within care environments. Participants emphasised that when the boundaries between the responsibilities of humans and machines are not clearly defined, it can compromise the quality and integrity of care, particularly in emotionally complex areas such as geriatric and palliative settings. In such contexts, blurred roles may not only disrupt collaboration but also obscure lines of accountability and reduce attentiveness to patients’ emotional and relational needs. One nurse reflected on this tension, underscoring the ethical risk of emotional neglect in human–AI care:
Participant 04 (nurse): “I don’t think it will work in the future. What really matters to patients is your attitude: whether you are patient with them, whether you truly listen. AI can’t replace the warmth and empathy of human care, and that could mean older people’s emotional needs are simply ignored.”
This comment highlights an important ethical dimension: without proper safeguards and clarity around roles, the integration of AI may inadvertently diminish the relational aspects of care, leaving critical human needs unmet.
The ethical challenges surrounding AI in healthcare are deeply rooted in the complexities of patient consent and autonomy. Ensuring informed consent was particularly difficult in socio-economically underdeveloped contexts, where digital literacy and exposure to emerging technologies vary widely. Without adequate understanding, as observed by our participants, patients’ autonomy could be compromised, risking decisions being made without true awareness of AI’s role or implications. Addressing these disparities was essential to upholding ethical standards and maintaining trust in AI-assisted healthcare.
Participant 11 (doctor): “Many patients and families are not very familiar with AI. They are often in their fifties or older. To be honest, their generation is not very comfortable with new technologies, especially since we are in the countryside, and this area is relatively underdeveloped. Honestly, they really do not know much about it. They have not had much exposure to these things, and frankly, they are not very good at accepting new stuff.”
Building on concerns about informed consent, participants also identified complexities in family dynamics as a significant factor complicating ethical issues of accountability in AI-assisted care. They noted that, in the context of China’s rapid modernisation, traditional patriarchal family structures had weakened, leading to more delicate and unpredictable family relationships. These changes had contributed to family conflicts that, particularly in rural areas, were linked to incidents such as elderly suicide [].
Participants explained that such complicated social realities made it especially challenging to assign responsibility when AI was involved in critical decisions relating to pain management, sedation or life-sustaining treatment within culturally sensitive palliative care settings. This raised urgent ethical questions about who should be held accountable in the event of AI-assisted errors, highlighting the need for frameworks that addressed both the technical and socio-cultural dimensions of healthcare. Some participants were particularly concerned about delegating sensitive monitoring tasks to AI. They emphasised that the embodied expressions of suffering in palliative contexts are highly ambiguous, and that algorithms often struggle to distinguish between expressions of discomfort and suffering that vary with culture and symptoms. As such, AI has not yet reached the level of sophistication required to provide ethically sound care for dying patients.
Participant 05 (nurse): “I’m not sure it’s right for AI to make judgements about patients’ symptoms. When someone is approaching the end of their life, they often cry out in pain, but AI might misinterpret this as anger or another emotion and trigger the wrong response. There’s also the danger that AI systems lack sensitivity to individual symptoms and cultural differences, overlooking the specific needs of minority patients and creating serious ethical risks.”
Clinicians expressed ethical concerns about AI’s limited emotional understanding and the risk of depersonalised care, especially in palliative settings as an extension of geriatric care, where unclear boundaries between the responsibilities of humans and machines can compromise attentiveness to patients’ emotional and relational needs. Viewed through the lens of Actor–Network Theory, these concerns arise not simply from individual practitioners or technologies, but from a heterogeneous network of human and non-human actors including clinicians, patients, algorithms, infrastructures and institutional policies. In under-developed rural areas of China, low digital literacy and limited exposure to technology hinder informed consent, while evolving family dynamics and social expectations further complicate the web of care. Although no formal AI ethics training currently exists, many participants highlighted the urgent need for proactive education and institutional support to prepare healthcare workers for these challenges. Their reflections reveal a growing awareness of ethical tensions and point to the importance of interdisciplinary approaches that consider how ethical practice is shaped through interactions among people, technologies and socio-cultural contexts.

3.3. Structural Inequalities in the Adoption of AI in Care

The ethical concerns surrounding the adoption of AI in healthcare could be further complicated by structural inequalities in how care is provided, especially when access to resources, quality of care and decision-making autonomy are unevenly distributed. In mainland China, these structural inequalities are exemplified by entrenched occupational hierarchies within hospitals and pronounced regional disparities.
A central disparity revealed in our analysis concerns the differing roles and experiences of nurses and doctors in relation to AI technologies in geriatric and palliative care. Many nurses reported greater communication challenges and relied more heavily on AI assistants like Doubao, which reinforced their clinical authority. In resource-limited regions, nurses often had lower formal qualifications and less access to ongoing education, specialist training and research-based materials than their counterparts in more affluent areas. This aligns with the findings of Tully, Longoni and Appel [], who report a negative association between AI literacy and AI acceptance: individuals with lower AI literacy tend to exhibit higher acceptance of AI. This environment sometimes produced a ‘double mistrust’: patients doubting nurses’ competence and nurses doubting their own judgement, especially in integrated geriatric and palliative care where needs are complex and outcomes uncertain.
Participant 09 (nurse): “In the care setting for ageing and terminal patients, where patients often face complex conditions and uncertain outcomes, patients tend to trust Doubao’s professional judgement more when it appears as a report. Sometimes they don’t believe what we usually say… They think it’s specialised, and if they can’t understand it, they assume it might be more accurate.”
Doctors in lower-tier, resource-limited hospitals faced related barriers. Patients’ prior experiences in better-resourced facilities could sometimes undermine local doctors’ authority:
Participant 14 (GP): “The patient had been to many big hospitals before, such as haematology departments in Beijing, Shenyang, and Tianjin. He had also received treatment at our city’s best hospital. When he saw that his test results were even better than before, he thought I, as the doctor, was making a bit of a fuss. So he just didn’t understand.”
The reluctance to integrate AI chatbots into daily practice largely stemmed from uneven access to technology-related policies and the absence of formal policy guidance on AI use. Many practitioners, especially doctors, were effectively forced to choose between efficient AI assistance and compliant documentation, with most opting for the latter out of professional responsibility and liability considerations.
Participant 14 (GP): “Medical records are mainly written strictly in accordance with the national medical records… Copy and paste are not allowed in the system… I will only use AI to help write medical records if it is allowed in the future.”
Nevertheless, limited policy support did not eliminate GPs’ motivation to use AI. The only doctor participant who actively used an AI assistant had previously been challenged by a self-taught patient about his authority. Even then, he used the tool cautiously, cross-checking with textbooks. Unlike most of the nurses, he preferred DeepSeek, which he perceived as more accurate and specialised than Doubao.
Participant 14 (GP): “It’s accurate, but I don’t rely on [DeepSeek] completely. I’ll look at the phone again and check the textbook myself.”
The emergence of AI also offered an accessible and scalable alternative, giving nurses access to specialised care knowledge, including palliative care, that was not routinely available in their settings. This enabled them to deliver more accurate information, bridge knowledge gaps and enhance patient care, despite structural constraints.
Participant 05 (nurse): “In our hospital, we don’t have much access to specialist training or up-to-date reference materials, so I sometimes learn alongside my patients. When I was giving moxibustion, one patient used Doubao on his phone to check what I had explained. The information he found was almost exactly what I’d told him. When patients see my explanation matches what Doubao says, they trust me more; and it helps me feel more confident too.”
These findings reveal how structural inequalities in mainland China’s healthcare system shape the ways in which nurses and doctors engage with AI in integrated geriatric and palliative care. While AI can provide access to specialised knowledge and support routine tasks, its role in palliative care remains limited, as it cannot substitute for the empathy, nuanced judgement and relational skills that are essential in end-of-life care. Nurses in resource-limited areas, facing both patient mistrust and self-doubt due to limited training and resources, are often compelled to rely more heavily on AI tools like Doubao or DeepSeek to reinforce their credibility, bridge knowledge gaps and deliver general geriatric care. Conversely, GPs encounter barriers to AI adoption because of restrictive medical record protocols and the lack of supportive policies, leading many to avoid AI despite recognising its potential.
This uneven reliance on AI amid resource constraints risks exacerbating ethical challenges, as unequal access to AI technologies and inadequate governance may deepen power imbalances, further undermine trust in clinicians and intensify regional disparities in care quality. It highlights the pressing need for robust ethical guidance and equitable policies to ensure that AI integration is responsible, fair and trustworthy, while recognising its limited capacity to replace human judgement in complex and sensitive contexts, such as palliative care.

4. Discussion

Focusing on integrated geriatric and palliative care in a private mid-sized hospital in a rural area of Northeast China, our study reveals a mixed picture of practitioners’ interactions with and attitudes towards AI chatbot assistants. While clinicians reported that AI assistant tools, including Doubao and DeepSeek, were used in care provision in various circumstances, such as consultations, decision-making support and facilitating communication between practitioners and patients, the role of AI remained limited, functioning as little more than an advanced search engine in their daily work. While acknowledging the benefits brought by AI, the practitioners that we interviewed did not foresee it replacing human professionals in healthcare, particularly in sensitive contexts involving frail older and dying patients, where relational engagement, empathy and nuanced understanding of patients’ emotional and existential needs are paramount []. This perspective was deeply rooted in profoundly human-centred values of care, which emphasise compassion, trust and the irreplaceable role of human presence at the end of life []. Such a cautious approach aligns with Actor–Network Theory, which situates AI not as an autonomous replacement for human actors but as one element within a complex network of relationships, institutional norms and material constraints []. From this perspective, AI’s limited agency in geriatric and palliative care is shaped not only by its algorithmic and data-driven logic but also by the socio-cultural, policy and resource contexts in which it is embedded, particularly in less-advanced regions, where structural inequalities both constrain and, at times, inadvertently stimulate innovation.
Interestingly, we found that engagement with AI was not exclusively determined by the consideration of practical and ethical implications, but also by unequal access to clinical resources in economically left-behind regions. As highlighted in our findings, AI could be used to affirm healthcare practitioners’ clinical authority, particularly in settings where occupational hierarchies, limited training opportunities and shortages of specialist resources undermined confidence and trust. In this case, AI served as an actant, mediating interactions between practitioners and patients by translating clinical knowledge into authoritative outputs and reconfiguring power dynamics within the care setting []. Through this mediation, AI not only compensates for gaps in expertise and resources but also actively participates in reshaping the clinical network, enabling practitioners—especially nurses—to negotiate legitimacy and authority in environments that are marked by structural inequality. The use of AI also varied by occupation []. For nurses in under-resourced hospitals, tools like Doubao and DeepSeek helped bridge knowledge gaps, reassure patients and bolster professional credibility, especially in complex geriatric and palliative care. In contrast, doctors faced restrictive medical record systems and a lack of formal policy guidance, limiting their integration of AI, despite recognising its value. This uneven adoption highlights how structural inequalities (both regional and occupational) not only determine if AI is used, but also influence its purpose, the reliance on it and its broader impacts on clinical authority and patient–practitioner relationships.
Building on these insights, this study proposes a model of human and AI complementarity in which technology supports, rather than replaces, the emotional and relational dimensions of care, particularly in resource-constrained palliative and geriatric settings in China. Drawing on classical Chinese philosophies, the model integrates the concepts of the Small Self (小我) and Great Self (大我) to form a human–AI complementarity framework grounded in ANT. The Small Self denotes the ego-focused individual who is concerned with immediate personal interests and survival, while the Great Self represents an expanded self that transcends ego boundaries to embrace the wider social, natural and spiritual community [,,]. This framework views care as a dynamic network of human and non-human actors, including healthcare professionals, patients, AI technologies and institutional policies, whose interactions collectively shape care delivery and experience. It suggests that these relationships be understood through the lens of the Great Self, while institutional policies relating to medical staff roles correspond to the Small Self. Within geriatric and palliative care, the Small Self embodies healthcare practitioners’ duties, technical focus and desire for control, often accompanied by prejudice and resistance towards AI. In contrast, the Great Self reflects a benevolent and expansive approach that embraces collaboration with AI, challenging assumptions that ‘AI is inferior’ or ‘AI will replace me’. This duality frames both institutional benchmarks and the aspirational strategy for continuous improvement in human–AI complementarity. Through ANT, the framework recognises distributed agency across human and non-human actors, with AI tools actively reconfiguring care routines, redistributing responsibilities and shaping ethical decision-making. This model addresses the relational, emotional and contextual challenges that are characteristic of structurally marginalised Chinese regions, inviting stakeholders to prioritise ethical design and frontline involvement to ensure that digital innovation supports compassionate care rather than supplanting or resisting essential human elements [].
The distinct characteristics of geriatric and palliative care further complicate the integration of AI, as these settings demand heightened attentiveness to patients’ emotional, social and existential needs []. Older adults often present with complex comorbidities, cognitive impairments and frailty, which require personalised care that is sensitive to both medical and psychosocial dimensions []. In palliative contexts, practitioners must navigate end-of-life conversations, manage suffering and provide reassurance and presence that cannot be fully codified into algorithms []. As our study indicates, AI tools may assist with simple informational or procedural tasks in general geriatric care, such as documentation, scheduling or reminders. However, they remain limited in recognising subtle cues, responding to nuanced expressions of distress and sustaining the continuity of human relationships over time. These limitations are particularly evident in rural and under-resourced hospitals, where staff rely heavily on relational knowledge and collective experience to compensate for gaps in infrastructure, training and specialist support when addressing the complex symptomatology and needs of patients. Consequently, while AI can augment care provision by offering rapid access to information or practical support, the ethical, relational and affective dimensions of geriatric and palliative care remain firmly human-centred, reinforcing the necessity of positioning AI as a supportive rather than substitutive element within these uniquely demanding clinical contexts.
The integration of AI in geriatric and palliative care raises profound ethical considerations that must be addressed at the institutional, disciplinary and educational levels []. Institutionally, healthcare organisations need robust ethical frameworks that guide responsible AI deployment, ensuring that it complements rather than undermines human values that are central to care, such as dignity, empathy and trust. While AI can support simpler tasks in general geriatric care, such as documentation, scheduling, reminders or factual guidance, and may help build patient trust, its role in complex care, particularly involving frail or dying patients, is limited. This reinforces that AI currently has only a limited role in integrating palliative care into general geriatric practice.
Furthermore, AI integration frameworks should be sensitive to socio-cultural and resource disparities in rural and under-resourced settings, guarding against an uncritical adoption of technology that might exacerbate inequalities []. Disciplinarily, ethics must be embedded into the evolving professional standards and clinical guidelines governing AI use, fostering reflexivity among practitioners about their roles, power relations and the implications of technological mediation in patient interactions []. Education plays a critical role in this process, equipping healthcare professionals with not only technical competence but also ethical literacy and critical thinking skills to navigate the complex relational and existential dimensions of care in an AI-augmented environment []. Training programmes should incorporate interdisciplinary perspectives that draw on philosophy, sociology and clinical ethics to prepare practitioners for collaborative engagement with AI tools, emphasising the balance between technological assistance and the irreplaceable human presence that lies at the heart of compassionate care. Together, these institutional, disciplinary and educational efforts form an essential triad for fostering an ethically grounded, culturally sensitive and contextually aware integration of AI into geriatric and palliative care.
Building on the need for ethically grounded and context-sensitive AI integration, it is crucial to consider the broader Chinese context, which is characterised by rapid AI development alongside stark regional disparities. While China’s national AI strategy has propelled cutting-edge innovation and concentrated resources in major urban centres, rural and economically disadvantaged areas, such as those in Northeast China, remain under-resourced, facing shortages of specialised staff, limited training opportunities and insufficient digital infrastructure. Existing policies and clinical guidelines often implicitly assume a uniform capacity for AI adoption across regions, which risks deepening the digital divide and exacerbating healthcare inequalities. Paradoxically, these resource-constrained settings may have an even greater demand for AI solutions, particularly generative AI tools that can enhance nursing communication and assist in clinical decision-making in integrated geriatric and palliative care contexts, while facing challenges in simplifying the complex and sensitive needs of patients. Therefore, a vital policy shift is required: the development and implementation of regionally adapted and balanced AI governance frameworks and clinical standards that respond to the unique socio-economic, cultural and infrastructural realities of less-developed areas. Such targeted policies would ensure that AI not only serves as a technological advancement and inclusive tool addressing structural inequalities, but also acknowledges and accounts for its limitations in complex care settings when integrating and transitioning from general geriatric care to specialised palliative care.

Limitations and Implications

Despite the valuable insights gained, several limitations must be acknowledged. Our data derive from a single private Secondary Grade B hospital in a rural area of Northeast China, which may limit the broader applicability of our findings, especially given the vast regional disparities in healthcare resources and AI adoption across China. In addition, although the hospital combines traditional Mongolian medicine, traditional Chinese medicine, Western biomedicine and geriatrics, our findings did not capture how AI was engaged across these diverse medical systems. This means that the study offers only a partial view of AI use in this distinctive institutional context. Further research could explore the use of AI across multiple medical traditions, examining potential tensions, complementarities and context-specific practices in multidisciplinary clinical settings. Incorporating a comparative dimension would also be valuable; for instance, examining perspectives on the use of AI in a hospital with an established palliative care programme or in an urban setting could help illuminate how dynamics vary across different contexts.
Moreover, this study did not include patients’ perspectives, an omission that restricts our understanding of how AI tools affect the patient experience and relational dynamics from the recipient’s viewpoint. Finally, while our research highlights the cautious and context-dependent use of AI in consultations, communications and decision-making support, further in-depth investigation is needed into the routine, everyday integration of AI across diverse clinical procedures and occupational roles. Addressing these gaps will be crucial for developing nuanced, context-sensitive policies and educational programmes that fully capture the complex realities of AI implementation in geriatric and palliative care within economically and institutionally varied settings in China.

Author Contributions

Conceptualisation, C.G. and C.F.; methodology, C.F.; formal analysis, C.G., C.F. and W.Z.; investigation, C.G. and C.F.; data curation, C.G.; writing—original draft preparation, C.G.; writing—review and editing, C.F.; visualisation, C.F. and W.Z.; supervision, J.T.; project administration, C.G. The first two authors have contributed equally to the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was ethically reviewed and approved by the Xi’an Jiaotong-Liverpool University Research Ethics Review Panel (ref: ER-SRR-11000163320250106103627, 4 March 2025).

Data Availability Statement

The data that support the findings of this study are available on request from the corresponding author.

Acknowledgments

The authors would like to thank the participants for generously sharing their experiences and views on the research topic.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
ANT: Actor–Network Theory

Appendix A

Table A1. Interview structure: healthcare practitioners’ perspectives.
Interview Questions:
(1) The everyday challenges and dilemmas faced by practitioners in geriatric and palliative care:
Could you briefly describe the key challenges you have encountered or anticipate in managing relationships with patients in your palliative care training and practice?
How do you currently address or cope with these challenges in your work?
(2) Attitudes, understandings and levels of engagement with AI technologies:
Could you share any personal experiences or examples of interacting with AI technologies in your nursing practice? How have these experiences shaped your views on AI’s potential impact?
What is your general understanding of AI applications in palliative care? Are you familiar with them? (If yes, could you elaborate?)
(3) How practitioners perceive and manage the integration (or lack thereof) of AI tools in their professional routines:
Do you believe you can effectively collaborate with AI for palliative care patients? In what ways?
What predictions or interpretations do you have regarding future nursing scenarios and evolving interpersonal dynamics?
Do you believe AI can effectively help address interpersonal challenges in palliative care? Why or why not? What expectations or recommendations do you have for future development?
(4) Open-ended invitation:
Is there anything else you would like to share regarding human–AI complementarity in geriatric and palliative care?
Table A2. Summary of key qualitative themes and findings on human–AI complementarity in geriatric and palliative care.

Theme 1: Tensions between AI’s rule-based logic and practitioners’ human-centred approach
  • Sub-theme: The non-transferable core of care provided by humans. Illustrative quote: ‘Personally, I do not think it’s (refer to assistants in healthcare) possible (to replace human), because everyone’s emotions are unique. AI can only be programmed with one emotion, or multiple ones—dozens or even hundreds. But each person’s emotional landscape is different.’ Participant 05 (nurse)
  • Sub-theme: The conceptualisation of AI as a new rule-based knowledge base. Illustrative quote: ‘AI is part of some commonly used office software that assists me with tasks like writing articles or consulting on questions... Unlike traditional online consultations on official platforms, which often fail to provide a definitive answer, AI assistant is far more efficient and accurate. As soon as you describe the condition, we can immediately get the right answers.’ Participant 02 (nurse)

Theme 2: Ethical discomfort around human–AI complementarity
  • Sub-theme: The risk of depersonalised care. Illustrative quotes: ‘Initially, there would always be one or two patients every day who would pick a fight with you. You’re already providing excellent care, being very gentle and attentive to them. But if you show excessive concern, they may perceive you as bothersome, they just want to undergo the treatment quietly in peace. Your excessive questioning might irritate them, and they could even end up yelling at you... AI cannot handle or help resolve these kinds of issues.’ Participant 04 (nurse); ‘Some patients who dislike AI should prefer face-to-face conversations, while others should like more accurate data.’ Participant 04 (nurse)
  • Sub-theme: The digital literacy–consent gap among older patients. Illustrative quote: ‘Regarding middle-aged and elderly patients, they may lack prior exposure to such technologies and consequently find them difficult to accept... I perceive this primarily as resistance to novel technologies, rooted in their belief that human-provided services feel more reliable. When it comes to artificial intelligence, they likely experience a sense of insecurity stemming from the unknown.’ Participant 05 (nurse)

Theme 3: Structural inequalities in the adoption of AI in care
  • Sub-theme: Entrenched occupational hierarchies within hospitals. Illustrative quote: ‘Because he’s supposed to think of it as a small thing. He thinks it won’t be able to affect the results of the medical examination. But it won’t. It is something that can affect the results of the medical examination.’ Participant 06 (nurse)
  • Sub-theme: Pronounced regional disparities. Illustrative exchange: Researcher: ‘Have there been any training sessions on the use of AI provided by local health and wellness commissions or other related research and teaching institutions for you?’ Participant 01 (nurse): ‘Not yet.’

References

  1. Knaul, F.M.; Arreola-Ornelas, H.; Kwete, X.J.; Bhadelia, A.; Rosa, W.E.; Touchton, M.; Méndez-Carniado, O.; Enciso, V.V.; Pastrana, T.; Friedman, J.R.; et al. The evolution of serious health-related suffering from 1990 to 2021: An update to The Lancet Commission on global access to palliative care and pain relief. Lancet Glob. Health 2025, 13, e422–e436. [Google Scholar] [CrossRef]
  2. Brondeel, K.C.; Duncan, S.A.; Luther, P.M.; Anderson, A.; Bhargava, P.; Mosieri, C.; Ahmadzadeh, S.; Shekoohi, S.; Cornett, E.M.; Fox, C.J.; et al. Palliative Care and Multi-Agent Systems: A necessary paradigm shift. Clin. Pract. 2023, 13, 505–514. [Google Scholar] [CrossRef]
  3. Castagna, A.; Militano, V.; Ruberto, C.; Manzo, C.; Ruotolo, G. Comprehensive geriatric assessment and palliative care. Aging Med. 2024, 7, 645–648. [Google Scholar] [CrossRef]
  4. Pacala, J.T. Is Palliative Care the “New” Geriatrics? Wrong Question-We’re Better Together. J. Am. Geriatr. Soc. 2014, 62, 1968–1970. [Google Scholar] [CrossRef]
  5. Schelin, M.E.C.; Fürst, C.J.; Rasmussen, B.H.; Hedman, C. Increased patient satisfaction by integration of palliative care into geriatrics—A prospective cohort study. PLoS ONE 2023, 18, e0287550. [Google Scholar] [CrossRef]
  6. Zhang, H.; Ma, F.; Zhang, H.; Shi, J. Research on Geriatric Education based on the Concept of Palliative Care. Int. J. Geriatr. 2024, 45, 381–384. (In Chinese) [Google Scholar] [CrossRef]
  7. Atreya, S.; Sinha, A.; Kumar, R. Integration of primary palliative care into geriatric care from the Indian perspective. J. Fam. Med. Prim. Care 2022, 11, 4913–4918. [Google Scholar] [CrossRef]
  8. State Council. The 14th Five-Year Plan for the Development of National Aging Services and the Elderly Care Service System in China. 2021. Available online: https://www.gov.cn/zhengce/content/2022-02/21/content_5674844.htm (accessed on 12 May 2025).
  9. Office of the National Health Commission. The National Health Commission Has Issued the Guidelines for the Construction and Management of Geriatrics (2025 Edition). 2025. Available online: https://www.nhc.gov.cn/yzygj/c100068/202505/cb8fc8b895b142938027facaeee62e90.shtml (accessed on 12 May 2025).
  10. Yang, L.; Zhao, K.; Fan, Z. Exploring determinants of population ageing in Northeast China: From a socio-economic perspective. Int. J. Environ. Res. Public Health 2019, 16, 4265. [Google Scholar] [CrossRef]
  11. Zhou, Y.; Yao, Y.; Ye, X. A Qualitative Study on the Acceptance Attitude of Elderly Care Robots Based on Value Perception. Nurs. Res. 2025, 39, 381–387. (In Chinese) [Google Scholar]
  12. Guo, J.; Xiao, Y.; Chen, Y. Research Status of Smart Healthcare in Home-Based Palliative Care. Chin. J. Geriatr. Mult. Organ Dis. 2022, 21, 840–844. (In Chinese) [Google Scholar]
  13. He, L. Research on the Intervention Design of AI+ Online Social Work Services in Hospice Care. J. East China Univ. Sci. Technol. 2024, 39, 27–44. (In Chinese) [Google Scholar]
  14. China Internet Network Information Center. Generative Artificial Intelligence Application Development Report; China Internet Network Information Center: Beijing, China, 2024. Available online: http://www.sccio.cn/uploads/20241212/bee175a16ef20aa7188b29e1f18f6ade.pdf (accessed on 12 May 2025).
  15. Inkpen, K.; Chappidi, S.; Mallari, K.; Nushi, B.; Ramesh, D.; Michelucci, P.; Mandava, V.; Vepřek, L.H.; Quinn, G. Advancing Human-AI complementarity: The impact of user expertise and algorithmic tuning on joint decision making. ACM Trans. Comput. Hum. Interact. 2023, 30, 1–29. [Google Scholar] [CrossRef]
  16. Hemmer, P.; Schemmer, M.; Kühl, N.; Vössing, M.; Satzger, G. Complementarity in Human-AI Collaboration: Concept, sources, and evidence. Eur. J. Inf. Syst. 2025, 1–24. [Google Scholar] [CrossRef]
  17. Sharma, A.; Lin, I.W.; Miner, A.S.; Atkins, D.C.; Althoff, T. Human–AI collaboration enables more empathic conversations in text-based peer-to-peer mental health support. Nat. Mach. Intell. 2023, 5, 46–57. [Google Scholar] [CrossRef]
  18. Clay, T.J.; Da Custodia Steel, Z.J.; Jacobs, C. Human-Computer Interaction: A Literature Review of Artificial Intelligence and Communication in Healthcare. Cureus 2024, 16, e73763. [Google Scholar] [CrossRef]
  19. Jia, Y.; Evans, H.; Porter, Z.; Graham, S.; McDermid, J.; Lawton, T.; Snead, D.; Habli, I. The case for delegated AI autonomy for Human AI teaming in healthcare. arXiv 2025, arXiv:2503.18778. [Google Scholar] [CrossRef]
  20. Odionu, C.; Pub, A. The Role of Data Analytics in Enhancing Geriatric Care: A Review of AI-Driven Solutions. Int. J. Multidiscip. Res. Growth Eval. 2025, 5, 1131–1138. [Google Scholar] [CrossRef]
  21. Avati, A.; Jung, K.; Harman, S.; Downing, L.; Ng, A.; Shah, N.H. Improving Palliative Care with Deep Learning. BMC Med. Inform. Decis. Mak. 2018, 18, 55–64. [Google Scholar] [CrossRef]
  22. Chen, D.; Yoon, H.J.; Wan, Z.; Alluru, N.; Lee, S.W.; He, R.; Moore, T.J.; Nelson, F.F.; Yoon, S.; Lim, H. Advancing Human-Machine Teaming: Concepts, Challenges, and Applications. arXiv 2025, arXiv:2503.16518. [Google Scholar]
  23. Bressler, T.; Song, J.; Kamalumpundi, V.; Chae, S.; Song, H.; Tark, A. Leveraging Artificial Intelligence/Machine Learning Models to identify potential palliative care beneficiaries: A systematic review. J. Gerontol. Nurs. 2024, 51, 7–14. [Google Scholar] [CrossRef]
  24. Wang, J. Is robot-assisted elderly care the solution to the aging problem?—From the perspective of Confucian role ethics. Confucius Stud. 2024, 6, 5–14. (In Chinese) [Google Scholar]
  25. Abiodun, A.; Akingbola, A.; Ojo, O.; Jessica, O.U.; Alao, U.H.; Shagaya, U.; Adewole, O.; Abdullahi, O. Ethical challenges in the integration of artificial intelligence in palliative care. J. Med. Surg. Public Health 2024, 4, 100158. [Google Scholar] [CrossRef]
  26. De Panfilis, L.; Peruselli, C.; Tanzi, S.; Botrugno, C. AI-based clinical decision-making systems in palliative medicine: Ethical challenges. BMJ Support. Palliat. Care 2023, 13, 183–189. [Google Scholar] [CrossRef]
  27. Nasarian, E.; Alizadehsani, R.; Acharya, U.; Tsui, K. Designing interpretable ML system to enhance trust in healthcare: A systematic review to proposed responsible clinician-AI-collaboration framework. Inf. Fusion 2024, 108, 102412. [Google Scholar] [CrossRef]
  28. Gómez-García, M.; Ruiz-Palmero, J.; Boumadan-Hamed, M.; Soto-Varela, R. Percepciones de futuros docentes y pedagogos sobre uso responsable de la IA. Un instrumento de medida. RIED Rev. Iberoam. Educ. Distancia 2025, 28, 105–130. [Google Scholar] [CrossRef]
  29. Naughton, M.; Salmon, P.M.; Compton, H.R.; McLean, S. Challenges and opportunities of artificial intelligence implementation within sports science and sports medicine teams. Front. Sports Act. Living 2024, 6, 1332427. [Google Scholar] [CrossRef]
  30. Akhras, N.; Antaki, F.; Mottet, F.; Nguyen, O.; Sawhney, S.; Bajwah, S.; Davies, J.M. Large language models perpetuate bias in palliative care: Development and analysis of the Palliative Care Adversarial Dataset (PCAD). arXiv 2025, arXiv:2502.08073. [Google Scholar] [CrossRef]
  31. Sattar, K.; Yusoff, M.S.B. Embracing the Future: Nurturing Medical Professionalism through the Integration of Artificial Intelligence. Educ. Med. J. 2024, 16, 1–3. [Google Scholar] [CrossRef]
  32. Allen, M.R.; Webb, S.; Mandvi, A.; Frieden, M.; Tai-Seale, M.; Kallenberg, G. Navigating the doctor-patient-AI relationship—A mixed-methods study of physician attitudes toward artificial intelligence in primary care. BMC Prim. Care 2024, 25, 42. [Google Scholar] [CrossRef]
  33. Dean, T.B.; Seecheran, R.; Badgett, R.G.; Zackula, R.; Symons, J. Perceptions and attitudes toward artificial intelligence among frontline physicians and physicians’ assistants in Kansas: A cross-sectional survey. JAMIA Open 2024, 7, ooae100. [Google Scholar] [CrossRef]
  34. Sahoo, R.K.; Sahoo, K.C.; Negi, S.; Baliarsingh, S.K.; Panda, B.; Pati, S. Health Professionals’ Perspectives on the Use of Artificial Intelligence in Healthcare: A Systematic Review. Patient Educ. Couns. 2025, 134, 108680. [Google Scholar] [CrossRef]
  35. Celi, L.A.; Cellini, J.; Charpignon, M.; Dee, E.C.; Dernoncourt, F.; Eber, R.; Mitchell, W.G.; Moukheiber, L.; Schirmer, J.; Situ, J.; et al. Sources of bias in artificial intelligence that perpetuate healthcare disparities—A global review. PLoS Digit. Health 2022, 1, e0000022. [Google Scholar] [CrossRef]
  36. Latour, B. Reassembling the Social: An Introduction to Actor-Network-Theory; Oxford University Press: Oxford, UK, 2005. [Google Scholar] [CrossRef]
  37. Law, J. Actor network theory and material semiotics. In The New Blackwell Companion to Social Theory; Turner, B.S., Ed.; Wiley-Blackwell: Chichester, UK, 2009; pp. 141–158. [Google Scholar]
  38. Wynn, M.; Garwood-Cross, L. Reassembling nursing in the digital age: An actor-network theory perspective. Nurs. Inq. 2024, 31, e12655. [Google Scholar] [CrossRef] [PubMed]
  39. Landers, A.; Pitama, S.G.; Palmer, S.C.; Beckert, L. Positioning Stakeholder Perspectives in COPD End-of-Life Care Using Critical Theory and Actor-Network Theory: A Methodological approach. Int. J. Qual. Methods 2023, 22, 16094069231214098. [Google Scholar] [CrossRef]
  40. Chen, Y. Regional decline and structural changes in Northeast China: An exploratory space–time approach. Asia Pac. J. Reg. Sci. 2024, 8, 397–427. [Google Scholar] [CrossRef]
  41. Zhang, R.; Li, H.; Meng, H.; Zhan, J.; Gan, H.; Lee, Y.C. The Dark Side of AI Companionship: A Taxonomy of Harmful Algorithmic Behaviors in Human-AI Relationships. arXiv 2024, arXiv:2410.20130. [Google Scholar]
  42. Braun, V.; Clarke, V. Using thematic analysis in psychology. Qual. Res. Psychol. 2006, 3, 77–101. [Google Scholar] [CrossRef]
  43. Neff, G. Talking to bots: Symbiotic Agency and the case of Tay. Int. J. Commun. 2016, 10, 4915–4931. Available online: https://ora.ox.ac.uk/objects/uuid:613f7303-8a07-4f5a-ada2-b495c9a449af (accessed on 28 July 2025).
  44. Wu, F. Suicide and Justice; China Renmin University Press: Beijing, China, 2009. (In Chinese) [Google Scholar]
  45. Tully, S.; Longoni, C.; Appel, G. Express: Lower Artificial Intelligence literacy predicts Greater AI receptivity. J. Mark. 2025, 89, 00222429251314491. [Google Scholar] [CrossRef]
  46. Wernly, B.; Guidet, B.; Beil, M. The role of artificial intelligence in life-sustaining treatment decisions: Current state and future considerations. Intensive Care Med. 2025, 51, 157–159. [Google Scholar] [CrossRef]
  47. Badawy, W.; Shaban, M. Exploring geriatric nurses’ perspectives on the adoption of AI in elderly care a qualitative study. Geriatr. Nurs. 2025, 61, 41–49. [Google Scholar] [CrossRef] [PubMed]
  48. Tomlinson, K.; Jaffe, S.; Wang, W.; Counts, S.; Suri, S. Working with AI: Measuring the Occupational Implications of Generative AI. arXiv 2025, arXiv:2507.07935. [Google Scholar] [CrossRef]
  49. Dien, D.S. Big me and little me: A Chinese perspective on self. Psychiatry J. Study Interpers. Process. 1983, 46, 281–286. [Google Scholar] [CrossRef]
  50. Barbalet, J. Greater Self, Lesser Self: Dimensions of Self-Interest in Chinese Filial Piety. J. Theory Soc. Behav. 2014, 44, 186–205. [Google Scholar] [CrossRef]
  51. Wong, P.H. Why Confucianism Matters for the Ethics of Technology. In The Oxford Handbook of Philosophy of Technology; Vallor, S., Ed.; Oxford University Press: Oxford, UK, 2022; pp. 609–628. [Google Scholar] [CrossRef]
  52. Fang, C.; Tanaka, M. An exploration of person-centred approach in end-of-life care policies in England and Japan. BMC Palliat. Care 2022, 21, 68. [Google Scholar] [CrossRef]