A Perceptual Gap Analysis of Service Quality Perceptions in Home-Based Long-Term Care Service Centers
Abstract
1. Introduction
1.1. Population Ageing Trends
1.2. The LTC Quality Framework
1.3. Challenges and Research Gaps
- Divergent Perceptions of Purpose: Centers often view assessment as a mere “data-gathering” burden unrelated to strategy, whereas the evaluation framework intends to drive “self-improvement.”
- Professional Knowledge vs. Practical Experience: A perceived “credibility gap” exists where managers question the practical expertise of committees, while committees criticize the lack of integrated management concepts within centers.
- Ambiguity of Benchmarks: Inconsistent interpretations of assessment standards lead to a “vague understanding” of the system’s purpose, often reducing it to a survival mechanism for securing government subsidies.
2. Theoretical Review
2.1. LTC Centers
2.2. Consensus Benchmarks and Methods
- Evaluatees (Internal Perspective): As the primary leaders responsible for institutional operations and strategic management, these managers possess an in-depth, localized understanding of their organizations. Their insights provide an essential “bottom-up” view of how the evaluation system is implemented in daily practice.
- Evaluators (External Perspective): These members are responsible for the independent and objective supervision of LTC centers. Leveraging their multi-institutional experience and professional expertise, they provide a “top-down” standardized assessment.
3. Materials and Methods
3.1. Study Design and Data Collection
- Selection Criteria: All 57 home-based LTC centers currently registered and active in the target county were screened. Exclusion criteria were applied to 7 newly established centers (with less than one year of operation) as they were not yet subject to statutory assessment during the 2023–2025 cycle.
- Sample Characteristics: Of the 50 eligible centers, all 50 provided valid assessment data (100% participation rate).
- Data Collection: A rigorous, two-stage administrative procedure was followed within the official 2023–2025 accreditation cycle.
- Stage 1: Internal Self-Assessment (Y-axis data). Before the external audit, each center manager was required to perform a self-evaluation using the official “Home-based LTC Service Quality Assessment Scale.” This was completed via a secure government web portal, representing the institution’s internal perception of its service performance.
- Stage 2: External Expert Assessment (X-axis data). Subsequently, a committee of 28 experts (assigned by the MOHW or local health bureaus) conducted on-site inspections. Each audit team typically consisted of three members with professional backgrounds in administration, social work, and LTC nursing. After reviewing physical evidence and conducting interviews, the committee reached a consensus score for each of the 20 benchmarks.
3.2. Methodological Framework: Two-Dimensional Gap Analysis
- Consensus (High/Low Performance): Where internal and external views align.
- Blind Spots: Where internal self-assessment significantly exceeds external verification (comparable to the “Concentrate Here” priority in the classical model).
- Hidden Strengths: Where external evaluators identify performance that the institution may be underestimating.
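The quadrant logic above can be sketched in a few lines of code. The following Python snippet is a minimal illustration only: the benchmark names and scores are hypothetical, not the study's data, and each axis is split at its grand mean, in the spirit of the classical IPA grid.

```python
# Minimal sketch of the two-dimensional perception-gap classification.
# Benchmark names and scores below are illustrative placeholders, not
# the study's data. Each axis is split at its grand mean, following the
# classical IPA-style quadrant construction described above.

def classify(internal: float, external: float,
             internal_mean: float, external_mean: float) -> str:
    """Place one benchmark in the perception-gap matrix."""
    hi_int = internal >= internal_mean   # evaluatee (self-assessment) axis
    hi_ext = external >= external_mean   # evaluator (committee) axis
    if hi_int and hi_ext:
        return "I: Consensus Excellence"
    if hi_int and not hi_ext:
        return "II: Blind Spot (overestimation)"
    if not hi_int and not hi_ext:
        return "III: Consensus Weakness"
    return "IV: Hidden Strength (underestimation)"

# Illustrative (internal self-assessment, external committee) score pairs:
scores = {"X1": (4.8, 4.6), "X2": (4.7, 3.4), "X3": (3.9, 3.5), "X4": (3.8, 4.5)}
int_mean = sum(v[0] for v in scores.values()) / len(scores)   # 4.3
ext_mean = sum(v[1] for v in scores.values()) / len(scores)   # 4.0
for name, (i, e) in scores.items():
    print(name, classify(i, e, int_mean, ext_mean))
```

With these placeholder values, X2 lands in Quadrant II (self-score high, committee score low), mirroring the "Blind Spot" pattern the matrix is designed to surface.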
3.3. Data Analysis and Scoring Standards
- First, the reliability and structural validity of the assessment instrument were verified. Internal consistency was evaluated using Cronbach’s α, with a coefficient exceeding 0.70 considered indicative of acceptable reliability.
- Prior to performing comparative analyses, assumption testing was conducted to ensure the appropriateness of parametric procedures. The data were subjected to the Kolmogorov–Smirnov test to assess normality, while Levene’s test was employed to evaluate the homogeneity of variance. In instances where the assumption of equal variances was violated (p < 0.05 in Levene’s test), Welch’s t-test was utilized to provide a more robust and accurate comparison of mean scores.
- To identify perceptual divergences between the two primary groups, Independent Samples t-tests (or Welch’s t-test where applicable) were performed to compare the mean scores of service providers (evaluatees) against those of the committee members (evaluators) across all indicators and dimensions.
- Furthermore, a Gap Analysis was conducted to quantify the discrepancy (defined as the difference between evaluatees’ and evaluators’ scores). Statistical significance for all tests was set at a threshold of p < 0.05. Finally, to determine the magnitude of the observed differences, Cohen’s d was calculated as the effect size, providing a standardized measure of the perceptual gap between the two cohorts.
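As an illustration of this analysis pipeline, the following Python sketch runs Levene's test, falls back to Welch's t-test when the equal-variance assumption is violated, and computes Cohen's d with a pooled standard deviation. The scores are randomly generated stand-ins, not the study's records.

```python
# Sketch of the comparison pipeline described above, on synthetic scores.
# The data here are random stand-ins, not the study's assessment records.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
evaluatees = rng.normal(4.6, 0.3, 50)   # internal self-assessment scores
evaluators = rng.normal(4.0, 0.4, 28)   # external committee scores

# 1. Homogeneity of variance (Levene); if violated (p < 0.05), use Welch's t.
lev_stat, lev_p = stats.levene(evaluatees, evaluators)
equal_var = lev_p >= 0.05
t_stat, p_val = stats.ttest_ind(evaluatees, evaluators, equal_var=equal_var)

# 2. Cohen's d with a pooled standard deviation as the effect size.
n1, n2 = len(evaluatees), len(evaluators)
sp = np.sqrt(((n1 - 1) * evaluatees.var(ddof=1)
              + (n2 - 1) * evaluators.var(ddof=1)) / (n1 + n2 - 2))
d = (evaluatees.mean() - evaluators.mean()) / sp

print(f"t = {t_stat:.2f}, p = {p_val:.4f}, Cohen's d = {d:.2f}")
```

The same three steps (assumption check, mean comparison, effect size) were applied to each of the 20 benchmarks and the three concept-level scores.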
4. Results and Discussion
4.1. Descriptive Statistics of Background Variables of the LTC Centers
- Organizational Maturity and Expansion: Over half of the surveyed centers (56.0%) have been established for less than three years. This indicates that the sector in the target county is in a rapid expansion phase, where newer organizations may still be refining their internal quality management systems relative to established regulatory standards.
- Service Intensity and Clinical Complexity: Although 62.0% of the centers are small-to-medium-sized (serving fewer than 100 cases), the clinical complexity of the caseload is high. Specifically, 55.35% of service recipients are classified as having moderate-to-severe disability (CMS Levels 4–8). Such a high concentration of severe cases necessitates rigorous institutional risk management. This context explains the substantial pressure on Concept B (Professional Care Quality) and provides a logical foundation for the discrepancies observed between internal self-assessments and external expert evaluations.
| Variable | Category | n (%) |
|---|---|---|
| Age of the Center | <3 years | 28 (56.0%) |
| | 3–7 years | 11 (22.0%) |
| | >7 years | 11 (22.0%) |
| Classification 1 | Public/LTC Foundation | 4 (8.0%) |
| | Individually Established | 22 (44.0%) |
| | Foundation/Association | 14 (28.0%) |
| | Affiliated Organization | 10 (20.0%) |
| Cases Served | <100 cases | 31 (62.0%) |
| | 100–200 cases | 10 (20.0%) |
| | >200 cases | 9 (18.0%) |
| Disability Level 2 | Mild (Level 2–3) | 3192 (44.54%) |
| | Moderate (Level 4–6) | 2827 (39.45%) |
| | Severe (Level 7–8) | 1139 (15.90%) |
4.2. Cognitive Differences Between Evaluatees and Evaluators
4.3. Two-Dimensional Gap Analysis
5. Discussion
5.1. The Divergence in Quality Logic: Objectivity vs. Subjectivity
5.2. Organizational Resilience: Stability vs. Emergency Capacity
5.3. Implementation Efficacy in Individual Equity
5.4. Strategic Roadmap via Two-Dimension Gap Matrix
5.4.1. Addressing Management Blind Spots (Quadrant II: Prioritize Improvement)
5.4.2. Leveraging Hidden Strengths (Quadrant IV: Potential Advantage)
5.4.3. Consensus-Based Management (Quadrants I & III)
5.5. Strategic Suggestions and Research Limitations
5.5.1. Practical Suggestions for Policy and Management
- Refining Assessment Standards and Guidance: The level of understanding regarding consensus benchmarks directly affects self-assessment accuracy. Specific assessment standards must be formulated in advance with detailed instructional guidance. By providing concrete examples of practical applications, managers can better grasp the nuances of each indicator, improving the reliability of internal evaluations.
- Dynamic Weighting and Hierarchical Assessment: The weighting of benchmarks should be adjusted to avoid the undue impact of a single item on the overall score. By conducting a hierarchical assessment based on the relative importance and influence of each indicator, a more nuanced evaluation of institutional performance can be achieved, allowing for the identification of specific problems across different service domains.
- Establishing Continuous Tracking and Coaching: Quality assessment should transition from a “one-time inspection” to a continuous improvement cycle. A problem follow-up and coaching plan should be established to track shortcomings discovered during evaluation. This feedback loop helps organizations integrate quality improvement into daily operations rather than merely preparing for periodic audits.
- Balancing Standardization with Flexibility: The assessment system for home-based LTC centers must balance rigorous standardization with operational flexibility. By introducing flexible standards and resource support for institutions with limited capacity, the policy can promote sustainable development and ensure the long-term fairness and transparency of the LTC service system.
5.5.2. Methodological and Contextual Limitations
- Sample Scope and Generalizability: This study was conducted within a single geographical region in Taiwan, encompassing 50 home-based LTC centers. While these centers represent a significant portion of the local service network, the limited sample size and regional homogeneity may constrain the generalizability (external validity) of the results to other regions with different socio-economic profiles or urban-rural resource disparities. Future research should aim for a nationwide analysis to validate whether these perception gaps persist across broader geographic contexts.
- Systemic Measurement Bias: The reliance on data derived from an official governmental evaluation system introduces potential institutional bias. In Taiwan’s “high-stakes” audit environment, where scores are directly tied to institutional contracting and subsidies, a “Performance-Driven Response Bias” is likely. Managers may instinctively inflate self-assessment scores as a defensive organizational strategy, while committee scores—though professionally audited—provide only a cross-sectional snapshot during pre-scheduled visits. This systemic “Audit Culture” may inherently amplify the discrepancies between internal and external perceptions.
- Lack of Stakeholder Triangulation: This study primarily focuses on the dyadic relationship between managers and expert evaluators. The exclusion of direct feedback from service recipients and their families (User-Perceived Quality) means that the “true” service excellence remains partially obscured. Future investigations should utilize Data Triangulation, integrating recipient satisfaction surveys with longitudinal digital operational tracking to provide a more holistic verification of care quality.
6. Conclusions
6.1. Principal Findings and Theoretical Insights
6.2. Scientific and Academic Contributions
- Deconstructing Systematic Discrepancies: This study provides robust empirical evidence of a systematic divergence between institutional self-evaluation and external professional auditing. By quantifying a pervasive “Leniency Effect” across 19 of 20 benchmarks, we contribute to Organizational Behavior literature, suggesting that in “high-stakes” regulatory environments, self-assessment often functions as a defensive administrative ritual rather than a reflective quality tool.
- Methodological Advancement in Perception Alignment: We demonstrate that quality in the LTC sector is a multidimensional, stakeholder-dependent construct. By repurposing the IPA framework into a two-dimensional perception gap matrix, this research offers a novel diagnostic mechanism to identify where internal “Managerial Logic” fails to align with external “Evaluative Logic.” This provides a visual roadmap for Institutional Calibration that transcends simple mean-difference testing.
- Contribution to Evaluation Theory: Our findings highlight the inherent limitations of subjective, checklist-based audit systems. By revealing specific “Management Blind Spots” in emergency resilience and staff follow-ups, this study argues for a paradigm shift in Evaluation Methodology—moving from static, one-time inspections toward a “Continuous Calibration Model” that integrates digital evidence with stakeholder triangulation.
6.3. Practical and Policy Implications
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
| Center Type | Service Characteristics | Establishments in Taiwan (End of 2024) | Key Points in LTC 3.0 |
|---|---|---|---|
| Home-based | Provides physical care, daily living support, and medical nursing within the recipient’s home. | 2235 units (55.42% of total LTC units) | Digitalized Precision Dispatch & Smart Monitoring: Utilizing AI to optimize matching efficiency and employing wearable devices for in-home risk pre-warning. |
| Community-based | Provides Day Care and Family Care, emphasizing social interaction and delaying functional decline. | 1478 units (including Day Care, Family Care, Group Homes, and Small-scale Multi-function) (36.65% of total LTC units) | AI-powered Precision Exercise & Disability Prevention: Implementing Generative AI and digital rehabilitation tools to provide personalized health promotion programs. |
| Residential-based | Provides 24 h full-time care, targeting individuals with moderate-to-severe disability or dementia. | 122 units (including Senior Welfare Institutions and General Nursing Homes) (3.03% of total LTC units) | Medical-Care Integration & Increased Subsidies: Serving as the backbone for heavy-care needs, strengthening “green channels” with hospitals, and significantly increasing resident subsidies. |
| Integrated | A single legal entity providing two or more types of services (e.g., Day Care + Home-based Care). | 198 units (4.91% of total LTC units) | One-stop Smart Care Campus: Acting as a regional resource integration hub to implement “Continuity of Care” and reduce service fragmentation for the elderly. |
| Assessment Concept | Consensus Benchmark (Home-Based LTC Centers) | Key Content of Benchmark |
|---|---|---|
| A: Management effectiveness | A1: Business plan development and implementation | |
| | A2: Work manuals and administrative management regulations | |
| | A3: Supervisory system operation | Hold administrative meetings regularly (at least quarterly) to discuss service quality and work improvement. |
| | A4: Financial management system | Establish an independent accounting system based on the principle of accrual accounting, with tax reporting information. |
| | A5: Information system coding | Service providers must record the service status of each case in the municipal government’s care service management information platform by the 10th of the following month. |
| | A6: Deficiency and improvement evaluation by the relevant authority during the auditing/inspection period | During the assessment period, suggested improvements received from supervisory authorities (including fire and labor authorities) in inspections/audits or guidance/supervision are concretely implemented and tracked. Grading: A+ = improvement reached 100%; A = at least 80% but below 100%; B+ = at least 60% but below 80%; B = at least 40% but below 60%; C = below 40%. |
| | A7: Establishment and implementation of systems related to staff rights and interests | |
| | A8: Regular health check-ups and follow-ups for staff | |
| | A9: Pre-training for new staff | |
| | A10: Actual participation of supervisor in administrative and care quality management meetings/activities | The business manager is actively involved in administration and care quality management and keeps records (e.g., constructing the annual business plan, presiding over administrative meetings, reviewing meetings for accidents or emergency incidents). |
| | A11: Establishment and implementation of a scheduling mechanism for caregiving staff | |
| B: Professional care quality | B1: Strengthening the professional skills of LTC service personnel | |
| | B2: Caregiver service execution and feedback from service recipients/family members | |
| | B3: Proactive referral of cross-professional services | |
| | B4: Management of case starting and finishing for service recipients | Formulate procedures and processing guidelines for case initiation/acceptance, referral, service suspension, and case closure, and clearly explain them to service recipients and their families. |
| | B5: Accident and emergency handling and prevention | |
| C: Individual equity guarantee | C1: Signing of a service contract with the service recipient or family member | Sign a contract with the client (the person themselves, a family member, guardian, or agent); the contract or its annex should be updated when relevant regulations, service recipients, or institutional service conditions change. |
| | C2: Fees and receipts | Fee standards are subject to approval by the competent authority in accordance with regulations. |
| | C3: Establishment and handling of the feedback/complaint process | |
| | C4: Service satisfaction surveys | |
| Quadrant | I | II | III | IV |
|---|---|---|---|---|
| Evaluators (External Committee Evaluation)/Evaluatees (Internal Self-Assessment) | +,+ | -,+ | -,- | +,- |
| Definition | Consensus Excellence | Critical Improvement/Overestimation | Consensus Weakness | Potential Advantage/Underestimation |
| Status Description | Both the evaluators’ and evaluatees’ scores were high. This indicates strong alignment between actual performance and internal perception. | The evaluators provided lower ratings than the evaluatees. This indicates a significant perceptual gap in which centers may be overestimating their performance. | Both the evaluators’ and evaluatees’ evaluations yielded low scores. This suggests that service quality deficiencies are evident and recognized by both parties, requiring systemic intervention. | The evaluators’ evaluation was higher than that of the evaluatees. This indicates that the centers’ actual performance exceeds their own perceptions, revealing untapped strengths. |
| Concept | Group | n | M | SD | Welch’s t | df | p | Cohen’s d |
|---|---|---|---|---|---|---|---|---|
| Management effectiveness | Evaluatees | 50 | 4.56 | 0.31 | 6.89 | 47.2 | <0.001 | 1.63 |
| | Evaluators | 28 | 3.95 | 0.43 | | | | |
| Professional care quality | Evaluatees | 50 | 4.52 | 0.36 | 6.28 | 44.8 | <0.001 | 1.57 |
| | Evaluators | 28 | 3.79 | 0.55 | | | | |
| Individual equity guarantee | Evaluatees | 50 | 4.70 | 0.31 | 5.55 | 48.9 | <0.001 | 1.32 |
| | Evaluators | 28 | 4.22 | 0.40 | | | | |
| Total | Evaluatees | 50 | 4.59 | 0.31 | 7.21 | 49.5 | <0.001 | 1.71 |
| | Evaluators | 28 | 3.99 | 0.39 | | | | |
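For readers who wish to sanity-check concept-level rows, Welch's t and its Satterthwaite degrees of freedom can be recomputed from the summary statistics (n, M, SD) alone. Because the table reports rounded means and SDs, figures recomputed this way will differ slightly from the published values, which were derived from the raw scores.

```python
# Recomputing Welch's t-test from summary statistics only. Inputs are the
# rounded (n, M, SD) values reported in the table, so the results will be
# close to, but not identical with, the published statistics.
import math

def welch_from_summary(n1, m1, s1, n2, m2, s2):
    v1, v2 = s1**2 / n1, s2**2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    # Welch–Satterthwaite approximation for the degrees of freedom
    df = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t, df

# "Management effectiveness" row: evaluatees (n=50) vs. evaluators (n=28)
t, df = welch_from_summary(50, 4.56, 0.31, 28, 3.95, 0.43)
print(f"t ≈ {t:.2f}, df ≈ {df:.1f}")  # t ≈ 6.61, df ≈ 43.0 from rounded inputs
```

The recomputed t of roughly 6.6 is in the same range as the reported 6.89; the residual difference reflects rounding of M and SD, not a different test.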
| Concept | Consensus Benchmark | Evaluatees | Evaluators | M.D. | t | p | Cohen’s d | Interpretation |
|---|---|---|---|---|---|---|---|---|
| A: Management effectiveness | A1 | 4.505 | 3.855 | 0.650 | 6.177 | <0.001 *** | 1.48 | Extreme Gap |
| | A2 | 4.685 | 4.02 | 0.665 | 7.995 | <0.001 *** | 1.88 | Critical Gap |
| | A3 | 4.605 | 3.965 | 0.640 | 6.694 | <0.001 *** | 1.58 | Extreme Gap |
| | A4 | 4.805 | 4.455 | 0.350 | 4.028 | <0.001 *** | 0.98 | Large Gap |
| | A5 | 4.835 | 4.73 | 0.105 | 1.632 | 0.109 | 0.38 | Consensus |
| | A6 | 3.845 | 3.47 | 0.375 | 3.352 | 0.002 | 0.82 | Large Gap |
| | A7 | 4.55 | 3.94 | 0.610 | 6.276 | <0.001 *** | 1.48 | Extreme Gap |
| | A8 | 4.61 | 3.395 | 1.215 | 6.567 | <0.001 *** | 1.54 | Max Discrepancy |
| | A9 | 4.58 | 3.675 | 0.905 | 6.679 | <0.001 *** | 1.63 | Extreme Gap |
| | A10 | 4.505 | 3.84 | 0.665 | 5.001 | <0.001 *** | 1.25 | Extreme Gap |
| | A11 | 4.58 | 4.13 | 0.450 | 4.041 | <0.001 *** | 0.98 | Large Gap |
| B: Professional care quality | B1 | 4.445 | 4.015 | 0.430 | 4.256 | <0.001 *** | 0.94 | Large Gap |
| | B2 | 4.625 | 4.24 | 0.385 | 3.031 | 0.004 | 0.76 | Moderate-Large |
| | B3 | 4.38 | 3.455 | 0.925 | 5.985 | <0.001 *** | 1.49 | Extreme Gap |
| | B4 | 4.595 | 3.695 | 0.900 | 7.311 | <0.001 *** | 1.72 | Extreme Gap |
| | B5 | 4.565 | 3.53 | 1.035 | 6.831 | <0.001 *** | 1.61 | Extreme Gap |
| C: Individual equity guarantee | C1 | 4.805 | 4.225 | 0.580 | 5.838 | <0.001 *** | 1.44 | Extreme Gap |
| | C2 | 4.835 | 4.59 | 0.245 | 2.796 | 0.007 | 0.66 | Moderate Gap |
| | C3 | 4.565 | 3.93 | 0.635 | 7.09 | <0.001 *** | 1.67 | Extreme Gap |
| | C4 | 4.58 | 4.145 | 0.435 | 5.838 | <0.001 *** | 1.34 | Extreme Gap |
| Total | | 4.591 | 3.987 | 0.604 | | | 1.71 | Systemic Gap |
| Quadrant | Perceptual Nature | Strategic Goal | Key Benchmarks |
|---|---|---|---|
| I: Consensus Excellence | Consensus Strength | Maintain Quality | A2, A3, A4, A5 (True Consensus) |
| II: Critical Improvement/Overestimation | Critical Blind Spot | Immediate Intervention | A8 ***, A9 ***, B4 *** |
| III: Consensus Weakness | Consensus Deficiency | Strategic Improvement | A1 ***, A6, B5 *** |
| IV: Potential Advantages/Underestimation | Hidden Advantage | Recognition & Leverage | B1 *** |
| Empirical Evidence (The “What”) | Strategic Recommendation (The “How”) | Policy/Organizational Goal |
|---|---|---|
| Evidence A: Significant gaps in Quadrant II (Blind Spots) regarding staff training and case tracking (p < 0.001). | Implementation of Digital Early Warning Systems: Transition from manual logs to automated tracking for compliance and personnel updates. | Operational Efficiency: Minimizing human error during staff turnover. |
| Evidence B: External evaluators prioritize “Emergency Resilience” (B5) over “Daily Stability.” | From SOPs to Dynamic Drills: Shift from static document reviews to active crisis-simulation and coaching models. | Organizational Resilience: Enhancing the capacity to respond to high-risk contingencies. |
| Evidence C: Managers focus on system integrity, while evaluators focus on “Grievance Responsiveness.” | Standardization vs. Flexibility Balance: Refine indicators to include “Response Speed” and “User Satisfaction” as weighted benchmarks. | Service Equity: Ensuring that formal systems provide functional efficacy for recipients. |
| Evidence D: Quadrant IV (Hidden Strengths) shows staff competence exceeds manager awareness. | Professional Branding & Recognition: Formalize internal recognition of expertise to leverage latent organizational assets. | Competitive Advantage: Improving institutional reputation in the LTC market. |
© 2026 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Hung, J.-Y. A Perceptual Gap Analysis of Service Quality Perceptions in Home-Based Long-Term Care Service Centers. Healthcare 2026, 14, 980. https://doi.org/10.3390/healthcare14080980

