Article

The Relationship between Internet Patient Satisfaction Ratings and COVID-19 Outcomes

1 Vanguard Communications, Denver, CO 80205, USA
2 Hensley Biostats, Seattle, WA 98102, USA
3 Tulane Medical School, Tulane University, New Orleans, LA 70112, USA
* Author to whom correspondence should be addressed.
Healthcare 2023, 11(10), 1411; https://doi.org/10.3390/healthcare11101411
Submission received: 18 January 2023 / Revised: 7 April 2023 / Accepted: 6 May 2023 / Published: 12 May 2023
(This article belongs to the Section Coronaviruses (CoV) and COVID-19 Pandemic)

Abstract

Our prior research showed that patient experience—as reported by Google, Yelp, and the Hospital Consumer Assessment of Healthcare Providers and Systems survey—is associated with health outcomes. Upon learning that COVID-19 mortality rates differed among U.S. geographic areas, we sought to determine if COVID-19 outcomes were associated with patient experience. We reviewed daily accrued COVID-19 infections and deaths at the U.S. county level during the first year of the pandemic using each locality’s mean online patient review rating, correcting for county-level demographic factors. We found doctor star ratings were significantly associated with COVID-19 outcomes. We estimated the absolute risk reduction (ARR) and relative risk reduction (RRR) for each outcome by comparing the real-world outcomes observed at the mean star rating to the outcomes predicted by our model with a 0.3 unit higher average star rating. Geographic areas with higher patient satisfaction online review ratings in our models had substantially better COVID-19 outcomes. Our models predict that, had medical practices nationwide maintained a 4-star average online review rating—a 0.3-star increase above the current national average—the U.S. may have experienced a nearly 11% lower COVID-19 infection rate and a nearly 17% lower death rate among those infected.

1. Introduction

Patient experience ratings have received increased attention in healthcare, but their significance is still being assessed. Since 2006, patient-reported experiences after hospitalization have been collected using the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey. The HCAHPS survey is the current nationwide standard for patient-experience-of-care data [1].
Additionally, Patient Online Reviews (PORs) are a vast and potentially rich source of information for large-scale analysis [2]. POR websites, such as Yelp and Google, enable patients to rate their healthcare providers with a star rating between one and five stars, with one star being the worst and five stars being the best. These internet testimonials are elaborate [3], free, continuously updated, and often reveal the specific causes of a patient’s experience [4]. PORs not only mirror many aspects of the HCAHPS survey but also reflect new areas of importance to patients and caregivers that may have significant implications for policy makers [5]. A study of hospitals found 90% of patient review narratives commented on clinicians and staff, which were overwhelmingly positive, and 52% commented on hospital facilities, such as hospital cleanliness, food, parking, and amenities [6]. A study of nursing homes found the most common theme in online reviews was regarding staff caring (53%) [7].
In addition to measuring patient experience, PORs have shown substantial associations to health outcomes and can be used as a data source for understanding healthcare quality [8]. For example, Yelp ratings were associated with lower readmission rates for all conditions and lower mortality for myocardial infarction and pneumonia [9]. Web-based positive recommendations of hospitals were shown to be significantly associated with lower hospital standardized mortality ratios [10] and they contained key themes for emergency care [11]. PORs have been associated with the resolution of original complaints [12] and geographically to key measures of healthcare coordination and quality [2]. PORs can also be used to enhance the evidence base for general decision making in healthcare [13].
In 2014, we developed the Happy Patient Index (HPI), which assessed Google and Yelp PORs by locality [14]. We used automated computer software to catalog all available Google and Yelp PORs for businesses explicitly identified as doctors with an address within 50 miles of the city center, as defined by Google Maps, for each of the 100 most populous cities within the U.S. The resulting HPI dataset contained over 46,000 PORs, which were used to determine the average POR star rating for each of the localities, herein to be referred to as the Locality Mean Patient Online Rating (LMPOR). These were found to be as low as 3.20 stars and as high as 4.15 stars on a scale of 1–5. Wealth—or a lack thereof—did not appear to affect LMPOR in the HPI; three of the top-10 happiest areas had mean household incomes below the national mean.
Upon learning that COVID-19 mortality rates differed among U.S. geographic areas, we sought to determine if COVID-19 outcomes were associated with LMPORs.

2. Materials and Methods

2.1. Data Sources

We obtained daily U.S.-county-level-accrued COVID-19 infections and deaths from the Centers for Disease Control and Prevention (CDC) and state- and local-level public health agencies as compiled by USA Facts for the first year of the COVID-19 pandemic (between 11 March 2020 and 11 March 2021) [15].
We obtained LMPORs for 100 U.S. localities from the HPI dataset, which was the most recent source of LMPORs available at the time.
We obtained the selected characteristics of county-level data from the 2015–2019 American Community Survey (ACS) 5-Year Estimates from the U.S. Census Bureau, including population, demographics [16], selected economic characteristics [17] and selected social characteristics [18]. These were the most recent estimates available at the time. Rates were reported as a proportion of total county population to which the descriptor applied.
The raw data (Table S1) can be used to produce a scatter plot of outcomes with a linear trendline. For example, Figure 1, Figure 2 and Figure 3, respectively, show LMPOR versus the deaths per 100k population, LMPOR versus infections per 100k population, and LMPOR versus the infected death rate for the counties (all as of 31 March 2021 and prior to correction for possible confounders).

2.2. Statistical Analysis

We matched each LMPOR to its respective county-level information. Since LMPORs contained PORs from a 50-mile radius, some LMPORs covered substantially overlapping areas and were effectively clones. For example, we had overlapping LMPORs for the Phoenix, Scottsdale, Mesa, Chandler, and Glendale areas, all in Maricopa County, Arizona. In those instances, we avoided over-representing those duplicative LMPORs by matching the county-level outcomes only to the most populous and recognizable locality. For example, we matched Maricopa County COVID-19 outcomes only to the Phoenix area LMPOR (which represents all PORs within 50 miles of the Phoenix epicenter and is inclusive of the other cities mentioned). This had the effect of removing from our dataset entries for which the predictor variables (star ratings) were exactly the same and would otherwise have exerted undue influence on our overall results. This process eliminated 11 of the original 100 cities, resulting in a total of 89 localities for our analysis dataset.
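As an illustration, the de-duplication step above can be sketched in Python. The paper publishes no code; the city names, populations, and star ratings below are invented for demonstration and are not the study's actual values:

```python
# Hypothetical sketch of the locality de-duplication step: when several
# localities' 50-mile review radii fall within the same county, keep only
# the most populous locality so identical star ratings are not double-counted.
# All city data below are illustrative, not the study's actual values.
localities = [
    {"city": "Phoenix",    "county": "Maricopa, AZ", "population": 1_608_139, "stars": 3.7},
    {"city": "Scottsdale", "county": "Maricopa, AZ", "population": 241_361,   "stars": 3.7},
    {"city": "Mesa",       "county": "Maricopa, AZ", "population": 504_258,   "stars": 3.7},
    {"city": "Denver",     "county": "Denver, CO",   "population": 715_522,   "stars": 3.9},
]

def deduplicate_by_county(rows):
    """Keep one locality per county: the most populous one."""
    best = {}
    for row in rows:
        county = row["county"]
        if county not in best or row["population"] > best[county]["population"]:
            best[county] = row
    return list(best.values())

kept = deduplicate_by_county(localities)
# Maricopa County collapses to its most populous locality (Phoenix);
# single-locality counties such as Denver pass through unchanged.
```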
We determined the daily COVID-19 outcome rates from county-level data: accrued deaths divided by accrued infections (the infected death rate), accrued deaths divided by population (the population death rate), and accrued infections divided by population (the population infection rate).
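In code, the three outcome rates reduce to simple ratios; a minimal sketch (the function name and the sample figures are ours, not the paper's):

```python
def covid_rates(accrued_deaths, accrued_infections, population):
    """The three daily COVID-19 outcome rates defined in the text."""
    return {
        "infected_death_rate": accrued_deaths / accrued_infections,    # deaths / infections
        "population_death_rate": accrued_deaths / population,          # deaths / population
        "population_infection_rate": accrued_infections / population,  # infections / population
    }

# Illustrative county-day: 50 accrued deaths, 5,000 accrued infections,
# 100,000 residents.
rates = covid_rates(50, 5_000, 100_000)
```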
We used this complete dataset to measure the outcome for the first year of the pandemic. Furthermore, we investigated the results from the first three months of the pandemic as well as the remainder of its first year in consideration of the novelty of the disease during the first wave and the subsequently evolving public health response thereafter:
  • First year of the pandemic: 11 March 2020–11 March 2021;
  • Initial pandemic: 11 March 2020–11 June 2020;
  • Later pandemic: 11 June 2020–11 March 2021.
Additionally, we investigated several smaller time periods reflecting post hoc knowledge of the wave-like changes in disease incidence over time. Respectively, these periods represent a five-month relative lull, a two-month rising wave, and a two-month falling wave:
  • Summer/Fall pandemic: 11 June 2020–11 November 2020;
  • Holiday rise: 11 November 2020–11 January 2021;
  • Holiday drop: 11 January 2021–11 March 2021.
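The six analysis windows can be encoded directly from the boundary dates listed above. Treating each window as start-inclusive and end-exclusive is our assumption, since the paper states only the boundary dates:

```python
from datetime import date

# Analysis windows from the text (start inclusive, end exclusive -- an
# assumption on our part; the paper gives only the boundary dates).
WINDOWS = {
    "first_year":   (date(2020, 3, 11), date(2021, 3, 11)),
    "initial":      (date(2020, 3, 11), date(2020, 6, 11)),
    "later":        (date(2020, 6, 11), date(2021, 3, 11)),
    "summer_fall":  (date(2020, 6, 11), date(2020, 11, 11)),
    "holiday_rise": (date(2020, 11, 11), date(2021, 1, 11)),
    "holiday_drop": (date(2021, 1, 11), date(2021, 3, 11)),
}

def in_window(day, name):
    """True if a daily observation falls inside the named analysis window."""
    start, end = WINDOWS[name]
    return start <= day < end
```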
To reduce the influence of other variables, we corrected for the influence of multiple potential confounders. Given the limits imposed by our dataset of 89 counties, we took a hypothesis-driven approach, selecting the three available covariates hypothesized a priori to be most likely to confound the relationship between star ratings and COVID-19 outcomes. We ran panel data regressions with GLS random effects. These use the following form (Equation (1)):
yit = α + Σ(k = 1 to K) βk·xik + uit    (1)
where:
  • yit = COVID-19 outcome for locality i at time t;
  • α = y-intercept;
  • K = number of covariates;
  • i = index over localities;
  • t = observations across time;
  • βk = coefficient for the kth covariate;
  • xik = time-invariant covariates across localities;
  • uit = random error varying across localities and time.
We ran the panel data regressions for each of the three COVID-19 outcomes, including the following time-invariant covariates:
  • Population infection rate → star rating, age ≥ 65, poverty;
  • Population death rate → star rating, age ≥ 65, poverty, no health insurance;
  • Infected death rate → star rating, age ≥ 65, poverty, no health insurance.
We considered age ≥ 65 to be relevant for all outcomes, as age appears to be a risk factor for both infection and prognosis [19]. Poverty was considered to be relevant for all outcomes. Taking poverty into account can yield insights into socioeconomic variances and their effects, such as income-related facility resources, rates of working from home, and household size. This, in turn, can impact infection rates, stress-induced immunosuppression (affecting both infection and death), and healthcare access, which impacts death rates [20,21]. A lack of health insurance was considered to be relevant only to deaths, as it may reflect healthcare access and quality of care [22]. Although other factors may significantly affect COVID-19 outcomes, the size of the dataset limited our ability to correct for additional factors. This is addressed further in the limitations section.
Our complete dataset, including localities, daily COVID-19 outcomes, and demographics, contained approximately 100,000 observations, in addition to the 46,000 PORs represented by the 89 LMPORs. We performed the regression analysis in Stata v16, producing the coefficient for each covariate, their respective p-values, and confidence intervals.
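The paper fit GLS random-effects panel regressions in Stata; as a rough, self-contained illustration of the linear form in Equation (1), the sketch below fits a pooled OLS on synthetic panel data with numpy. All coefficients and covariate ranges are invented, and pooled OLS is a simplification of the random-effects estimator actually used:

```python
import numpy as np

# Synthetic panel: 89 localities observed over 30 days, with three
# time-invariant covariates per locality (star rating, % age >= 65, % poverty).
rng = np.random.default_rng(0)
n_localities, n_days = 89, 30
alpha_true = 0.02
betas_true = np.array([-0.004, 0.001, 0.002])   # invented "true" coefficients

X = rng.uniform([3.2, 0.10, 0.05], [4.2, 0.25, 0.25], size=(n_localities, 3))
X_panel = np.repeat(X, n_days, axis=0)          # covariates repeat across days
y = alpha_true + X_panel @ betas_true + rng.normal(0, 1e-4, n_localities * n_days)

# Pooled OLS as a stand-in for the GLS random-effects fit:
# a design matrix with an intercept column, solved by least squares.
A = np.column_stack([np.ones(len(y)), X_panel])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
alpha_hat, betas_hat = coef[0], coef[1:]
```

With time-invariant covariates and small noise, the pooled fit recovers the planted coefficients; the random-effects estimator additionally partitions the error uit into a locality-level component and an idiosyncratic component.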
Using the above covariate model, we also modeled a counterfactual comparison group in which the average star rating was increased to 4.0 stars (a 0.3-star increase). This increase was within the bounds of the data available in our model. We estimated the absolute risk reduction (ARR) and relative risk reduction (RRR) for each study outcome by comparing the real-world outcomes seen with the real-world mean star rating to the estimated outcomes predicted by our counterfactual model.
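The ARR/RRR comparison itself is simple arithmetic; a sketch with illustrative rates (not the study's actual values):

```python
def risk_reductions(observed_rate, counterfactual_rate):
    """ARR and RRR of a counterfactual (higher-rated) scenario vs. observed."""
    arr = observed_rate - counterfactual_rate   # absolute risk reduction
    rrr = arr / observed_rate                   # relative risk reduction
    return arr, rrr

# Illustrative only: an observed infected death rate of 1.8% vs. a
# model-predicted 1.5% at a 0.3-star-higher average rating.
arr, rrr = risk_reductions(0.018, 0.015)
```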

3. Results

3.1. Descriptive Data

The 89 localities included in our study provided a significant range of demographics. Comparing the highest and lowest variables on a county-by-county basis, poverty varied nearly five-fold, the rate of uninsurance varied nearly eight-fold, and the rate of those aged at least 65 years varied nearly three-fold (see Table 1). The area with the highest star rating was nearly 1-star higher than the worst-rated area.
Our complete model of the 89 counties captured approximately one-third of U.S. total COVID-19 outcomes and total U.S. population (see Table 2). It provided an accurate representation of U.S. COVID-19 outcomes as a whole, with the observed population infection rate in the 89 localities staying within 6% of the national average across the study period.

3.2. Main Results

We found doctor star ratings significantly associated with COVID-19 infection outcomes during the entire first year of the pandemic. We estimated the ARR and RRR for each study outcome by comparing the real-world observations to the estimated outcomes predicted by our model with a 0.3 unit higher average star rating. The increase represents an average rating of 4.0 stars instead of 3.7; 56% of healthcare practices measured individually already meet or exceed this goal [23,24]. As shown in Table 3, we found a 16.8% RRR of the infected death rate and a 10.7% RRR of the population infection rate during the first year of the pandemic with a 0.3 increase in LMPOR. Generally, we found a higher likelihood of statistical significance when the time window for analysis was longer, whereas shorter time windows rarely showed a significant association for any COVID-19 outcome, likely due to the decrease in dataset size.
We used the modeled RRRs to estimate the number of COVID-19 outcomes that could have been prevented. Our model predicted that a 0.3 unit higher star rating could have resulted in 87,782 fewer COVID-19 deaths and 3,083,209 fewer infections during the entire first year of the pandemic in the U.S. (see Table 4).

4. Discussion

We found a significant association between LMPORs and COVID-19 outcomes. The geographic areas with the most satisfied patients, on average, fared significantly better against COVID-19 compared with areas with the least satisfied patients. Our modeling of a 0.3 unit higher U.S. average star rating predicted 87,782 fewer deaths during the first year of the COVID-19 pandemic, representing a 16.8% higher survival rate for the infected U.S. population, assuming an equal number of infections. During the later pandemic (11 June 2020–11 March 2021), this value was an even higher 20.9%. Although we acknowledge the potential for residual confounding in our model, this higher survival rate may illustrate the ability of the highest rated medical practices to rapidly adapt and respond to a novel infectious disease. Another possibility is that the higher ratings of medical practices may indicate closer physician–patient relationships and greater patient trust in their respective physicians, leading to improved patient behavior or willingness to accept physician recommendations [25], which could include COVID-19-related risk factors, such as weight management. However, a fully nuanced explanation is probably multifactorial. For example, obese patients report greater satisfaction with their healthcare providers than their normal-weight counterparts [26].
We also found LMPORs to be associated with the population infection rate. Our modeling of a 0.3 unit higher U.S. average star rating predicted 3,083,209 fewer infections, representing a 10.7% RRR for the entire first year of the pandemic (and a 10.3% RRR for the later pandemic). We did not anticipate this association, since patient satisfaction is not a direct component of the virus’s mechanism of transmission. However, successful preventive healthcare may reduce an individual’s vulnerability to infection and associated adverse outcomes. Furthermore, as trust increases between patient and provider, improved patient behavior is also anticipated [25]. This may indicate a patient’s willingness to adhere to doctor recommendations, such as hand-washing and social distancing, which in turn reduce infection. We therefore anticipate the association between infection rates and PORs to represent less tangible measures of quality of care, such as patient trust and preventive care.
Generally, we found a higher likelihood of statistical significance when the time window for analysis was longer, whereas shorter time windows rarely showed a significant association for any COVID-19 outcome, likely due to the decrease in dataset size. We also note that the changing progression of the pandemic and seasonal cultural traditions may have played a role in reducing the statistical significance of the star rating association during the shorter time windows.
The least predictable of the outcomes was the population death rate, which was not statistically significant during any of the time windows we measured. However, the population death rate is a metric that incorporates into its denominator a significant portion of the population who were not infected. With this in mind, population death rate analysis would not have as much statistical power as the infected death rate analysis, which has only the infected in its denominator. Subsequent analysis of data extending beyond the first year of the pandemic may provide additional nuance.

4.1. Implications

The available evidence shows that patient experience has a positive association with the processes of care for both prevention and disease management [27]. In addition to improved patient behavior, patient experience has also been associated with improved clinical outcomes. For example, the analysis of aggregate data has shown that patient-centered care is associated with lower mortality and lower readmission rates for myocardial infarctions [25]. Similarly, high Yelp ratings are associated with improved clinical outcomes for myocardial infarction and pneumonia [9]. We find that LMPORs serve as a significant predictor of both COVID-19 infection and death. This finding is in agreement with and furthers the available evidence that improvements in patient satisfaction ratings are associated with improvements in clinical outcomes.
It is especially important that our findings not be misconstrued as expressions of contempt towards doctors or other health professionals. Our findings do not fault healthcare providers for adverse outcomes. Rather, our research uncovers additional benefits afforded through the successful pursuit of superior patient experience in healthcare, beyond the direct interactions between doctor and patient. For example, the existing research calls for “active listening” by all members of healthcare practices, including administrative staff [28,29], which may result in increased patient satisfaction, improved patient behavior [25], improved outcomes [28], and earned patient trust [30,31]. Those patient experience improvements are likely to be reflected in star ratings that are here shown to predict health outcomes. Evidence has shown that the areas contributing most to doctors’ happiness center on the satisfaction of their patients [32], and it is likely that happier doctors lead to improved patient experiences. Americans generally view medical professionals favorably [33] and 78% of patient complaints are not about physicians; therefore, programs that aim to improve patient care and reduce patient dissatisfaction should be directed at the entire staff, not only physicians [29]. The evidence, taken together with our findings here, shows that practices that cultivate a team of caring experts delivering high patient satisfaction and corresponding star ratings may enjoy a more unified and satisfied team, an enjoyable working environment, and improved patient outcomes.
Prior to the development of a vaccine, a significant portion of the public health policy response to the pandemic was directed towards non-pharmaceutical interventions, including school closures, banning of mass gatherings, isolation of ill persons, disinfection and/or hygiene measures [34], and the mandatory wearing of masks [35]. Although such measures may have reduced infections, “reactive” measures may introduce the risk of other adverse outcomes, such as increased suicidality [36], closure of health practices [37], and reduced cancer screenings [38], and they are unlikely to have the benefit of “proactive” measures [34]. For example, at the onset of the COVID-19 pandemic in March 2020, appointments for breast, cervical, and colon cancer screenings decreased by 86% to 94% compared with average volumes in previous years and comparable times [38]. Some of these systemic issues may be difficult to change. However, our work reinforces that public policy and important efforts within individual practices that facilitate improved patient experience may result in improved patient satisfaction ratings and improved outcomes without those risks to patient or practice.

4.2. Limitations

Online reviews are not verified and have an inherent selection bias in the reporting of patient experiences. However, our study only assessed these reviews in aggregate. The available evidence suggests online reviews are in agreement between the various platforms [39], contain important information that can generate insights into quality of care [23], observe aspects of care related to important patient outcomes [9], and mirror many aspects of more traditional surveys, such as the HCAHPS [5].
LMPORs from the HPI dataset may have changed since their original publication in 2014. There has been a general paucity of studies that examine LMPORs, and we found no equivalent or more recent source from which to obtain LMPORs. We note that, even with the rise of telehealth during the COVID-19 pandemic, patient satisfaction with video visits was high [40]. Further research may examine finer-scale LMPOR associations, such as practice-level associations within a hospital, or broader-scale LMPOR associations, such as country-level associations.
The available LMPORs were for the most populous localities within the U.S. The outcomes for those areas differed modestly from the remainder of the U.S., showing higher infection rates yet better survival rates. Therefore, our findings are most applicable to urban areas of the U.S.
Patients likely do not select healthcare providers according to county boundaries. Their selection behavior may be smaller or larger than county borders. The star ratings selection from a 50-mile radius may include portions of multiple counties, or less than an entire county. Some level of border leakiness is inherent in a city- or county-level analysis approach—there will inevitably be patients crossing from one region to another for some of their medical treatment, adding error to the estimated characteristics of patients receiving medical service in a given region.
Our findings include the results of a panel data regression and describe a linear relationship between LMPORs and COVID-19 outcomes. The nature of this form of analysis is sensitive to outliers and may overfit the data. Other statistical approaches in a larger or more detailed dataset may reveal more nuanced results. Furthermore, our analysis was not designed to predict outcomes for patient ratings beyond the extremes of our model, which extend from 3.2 to 4.2 stars.
Our findings are a result of broad trends and may not prove accurate in every individual circumstance and locality. Our findings included anomalous localities that experienced low COVID-19 death rates despite low patient satisfaction rates.
Although other factors contribute to COVID-19 survivability, our study was limited to a focus on LMPORs. Although a panel of 89 counties across all the dates of interest results in a large panel dataset, the correlated nature of day-to-day COVID-19 rates means the dataset was unable to support correction for more than three confounders. We were therefore unable to include all possible confounders, such as additional demographic factors (gender, race, obesity, population density) or governmental and institutional interventions, such as mandates. There also may be some features of healthcare facilities to which the patient experience is blind but which nonetheless affect COVID-19 outcomes. Nevertheless, given that the size of our dataset only supported the correction of three confounders, we selected the factors deemed to be of highest relevance and objectivity, and with the lowest correlation between each other. For example, although obesity is a significant risk factor for COVID-19 [41,42], it also has a significant overlap with poverty; the highest rates of obesity occur among population groups with the highest poverty rates [42]. A larger dataset would facilitate correction for additional confounders. For example, socioeconomic status (SES) includes poverty and lack of health insurance, for which we were able to correct. However, SES is notoriously difficult to capture, and a larger dataset would allow correction for the larger range of factors that make up SES.
Our analysis was based on aggregate patient data and did not assess individual-level patient data. Individual-level data offers advantages; however, aggregate patient data continues to be the mainstay of systematic reviews and can support clinical practice guidelines [43].
Although PORs may be influenced mostly by the patient–clinician relationship [44], PORs do not directly measure physician clinical skill, and in some cases may be counter to clinical skill. For example, a study found that, although outpatient respiratory tract infections (RTIs) are mostly viral in nature and rarely warrant treatment with antibiotics, patients who received antibiotic prescriptions for respiratory tract infections reported a nominal increase in satisfaction [45]. Furthermore, the totality of patient experience encompasses far more than provider skill. For example, a patient who is unable to book an appointment due to a malfunctioning telephone system may report a lower satisfaction.
A prior analysis of 1.5 million online reviews showed that health practices tend to receive about one-fifth of the quantity of reviews of restaurants and hotels [46], giving each POR more influence on a practice’s overall rating online. In comparison to restaurants, doctors are 64% more likely to receive a 5-star review, but 194% as likely to receive a 1-star review [46]. This suggests negative reviews are especially important quality indicators for health practices. Our findings suggest that, rather than viewing negative PORs as misguided criticism, healthcare providers should welcome them as a valuable assessment of the total patient experience. In managing online reviews, practices should avoid self-dealing, review incentives, and other review manipulations, which may be illegal [47] or generate negative publicity [48]. Any individual POR may be inaccurate or false; however, the evidence suggests that, on the whole, PORs do truly reflect patient experiences and outcomes.
The correlations we found do not imply causation; the act of giving a positive review does not itself inoculate against adverse outcomes, and the act of giving a negative review does not itself induce adverse outcomes.

5. Conclusions

Our new findings uncover a significant relationship between COVID-19 outcomes and reported patient satisfaction levels. Specifically, the geographic areas with higher patient satisfaction online review ratings benefitted from substantially better COVID-19 outcomes. Prior research has shown that positive patient experiences predict improved myocardial infarction and pneumonia outcomes, among other improvements; these new findings suggest patient online reviews may predict COVID-19 outcomes as well, providing the first illustration of this phenomenon in a pandemic context.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/healthcare11101411/s1.

Author Contributions

Conceptualization: J.S.; data curation: J.S.; formal analysis: M.H. and J.S.; funding acquisition: R.K.; investigation: J.S. and M.H.; methodology: J.S. and M.H.; project administration: J.S. and R.K.; software: M.H. and J.S.; supervision: J.S. and R.K.; validation: J.S., M.H., R.K. and N.B.; visualization: J.S.; writing—original draft: J.S.; writing—review and editing: J.S., M.H., R.K. and N.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was sponsored by Vanguard Communications, a firm involved in publishing online patient information and educational materials for the benefit of private and academic specialty medical practices (vanguardcommunications.net, accessed on 11 March 2021). This included financial support for publishing charges and research/consultant fees. The funding did not influence the integrity of the data or the decision to publish, nor did it alter our adherence to the journal’s policies on sharing data and materials.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data generated or analyzed during this study are included in this published article. Supplementary datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors received research/consultant fees. The funding did not influence the integrity of the data or the decision to publish, nor did it alter our adherence to the journal’s policies on sharing data and materials. Statistical analyses were performed and validated by Mark Hensley of Hensley Biostats, a company which has no vested interest in the financial success of Vanguard Communications.

References

Figure 1. Uncorrected LMPOR vs. cumulative COVID-19 infections per 100,000 population through 11 March 2021.
Figure 2. Uncorrected LMPOR vs. cumulative COVID-19 deaths per 100,000 population through 11 March 2021.
Figure 3. Uncorrected LMPOR vs. cumulative COVID-19 deaths per infection through 11 March 2021.
Table 1. Basic demographics of the modeled localities.
                       Min.       Median     Average     Max.         Standard Deviation
County Population      226,941    836,062    1,229,100   10,081,570   1,343,643
Poverty                5.60%      14.70%     14.54%      27.50%       4.29%
Age ≥ 65               9.20%      13.40%     13.63%      24.30%       2.36%
No Health Insurance    3.20%      8.50%      9.23%       27.70%       4.38%
Stars                  3.20       3.70       3.70        4.15         0.21
Table 2. Summary outcomes for the modeled dates.
                                    11 March 2020   11 June 2020   11 November 2020   11 January 2021   11 March 2021
Modeled Number of Days                          1             93                246               307             366
Modeled Number of Counties                     89             89                 89                89              89
Modeled Population                    109,389,862    109,389,862        109,389,862       109,389,862     109,389,862
US Population                         328,239,523    328,239,523        328,239,523       328,239,523     328,239,523
Model Pop./US Pop.                        33.326%        33.326%            33.326%           33.326%         33.326%
Model Deaths Total                             32         35,961             77,656           114,369         169,656
US Deaths Total                                40        113,073            238,816           369,388         523,420
Model Deaths/US Deaths                      80.0%          31.8%              32.5%             31.0%           32.4%
Model Infections Total                        623        698,299          3,609,295         7,868,688      10,128,763
US Infections Total                         1,339      2,010,456         10,286,991        22,265,944      28,731,120
Model Infections/US Infections              46.5%          34.7%              35.1%             35.3%           35.3%
Model Deaths/Model Population                0.0%           0.0%               0.1%              0.1%            0.2%
US Deaths/US Population                      0.0%           0.0%               0.1%              0.1%            0.2%
Model Rate/US Rate                         240.1%          95.4%              97.6%             92.9%           97.3%
Model Infections/Model Population            0.0%           0.6%               3.3%              7.2%            9.3%
US Infections/US Population                  0.0%           0.6%               3.1%              6.8%            8.8%
Model Rate/US Rate                         139.6%         104.2%             105.3%            106.0%          105.8%
Model Deaths/Model Infected                  5.1%           5.1%               2.2%              1.5%            1.7%
US Deaths/US Infected                        3.0%           5.6%               2.3%              1.7%            1.8%
Model Rate/US Rate                         171.9%          91.6%              92.7%             87.6%           91.9%
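As a consistency check, the derived ratios in Table 2 follow directly from its raw totals. The sketch below (plain Python; numbers transcribed from the table's 11 March 2020 column) reproduces them; it is an arithmetic check on the published figures, not a new analysis.

```python
# Recompute Table 2's derived ratios from its raw totals (11 March 2020 column).
model_deaths, us_deaths = 32, 40
model_infections, us_infections = 623, 1_339
model_pop, us_pop = 109_389_862, 328_239_523

pop_share = model_pop / us_pop                       # Model Pop./US Pop.
deaths_share = model_deaths / us_deaths              # Model Deaths/US Deaths
infections_share = model_infections / us_infections  # Model Infections/US Infections

print(f"{pop_share:.3%}")         # 33.326%
print(f"{deaths_share:.1%}")      # 80.0%
print(f"{infections_share:.1%}")  # 46.5%
```

The same arithmetic applied to the later columns reproduces the remaining share rows.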
Table 3. Star-rating association to COVID-19 outcomes *.
                          Model Avg. Incidence **          +0.3★ ARR                 +0.3★ RRR
                          Actual   +0.3★  (95% CI)         Est.   (95% CI)           Est.    (95% CI)

Pandemic Year (11 March 2020–11 March 2021)
Infections/Population     3.04%    2.72%  (2.46, 2.97)     0.33%  (0.07, 0.58)       10.73%  (2.29, 19.17)
Deaths/Population         0.06%    0.04%  (0.01, 0.08)     0.01%  (−0.02, 0.05)      25.14%  (−33.79, 84.07)
Deaths/Infections         2.61%    2.17%  (1.93, 2.40)     0.44%  (0.20, 0.67)       16.79%  (7.79, 25.78)

Early Pandemic (11 March 2020–11 June 2020)
Infections/Population     0.29%    0.20%  (0.05, 0.36)     0.08%  (−0.07, 0.23)      28.56%  (−24.04, 81.16)
Deaths/Population         0.01%    0.01%  (−0.02, 0.04)    0.01%  (−0.03, 0.04)      40.58%  (−182.61, 263.76)
Deaths/Infections         3.46%    3.16%  (2.62, 3.71)     0.30%  (−0.25, 0.84)      8.56%   (−7.20, 24.32)

Later Pandemic (11 June 2020–11 March 2021)
Infections/Population     3.99%    3.58%  (3.24, 3.91)     0.41%  (0.07, 0.75)       10.32%  (1.83, 18.81)
Deaths/Population         0.07%    0.06%  (0.01, 0.10)     0.02%  (−0.03, 0.06)      24.07%  (−36.85, 85.00)
Deaths/Infections         2.32%    1.84%  (1.58, 2.09)     0.48%  (0.23, 0.74)       20.89%  (9.98, 31.80)

Summer/Fall Pandemic (11 June 2020–11 November 2020)
Infections/Population     1.90%    1.69%  (1.37, 2.00)     0.21%  (−0.10, 0.52)      11.06%  (−5.50, 27.62)
Deaths/Population         0.05%    0.04%  (−0.01, 0.09)    0.02%  (−0.03, 0.06)      29.24%  (−67.28, 125.76)
Deaths/Infections         2.84%    2.19%  (1.82, 2.56)     0.65%  (0.28, 1.02)       22.99%  (9.95, 36.03)

Holiday Rise (11 November 2020–11 January 2021)
Infections/Population     5.06%    4.69%  (3.88, 5.49)     0.37%  (−0.43, 1.17)      7.31%   (−8.57, 23.19)
Deaths/Population         0.08%    0.06%  (−0.04, 0.16)    0.02%  (−0.08, 0.12)      23.05%  (−98.91, 145.02)
Deaths/Infections         1.73%    1.37%  (0.91, 1.83)     0.37%  (−0.10, 0.83)      21.10%  (−5.50, 47.71)

Holiday Drop (11 January 2021–11 March 2021)
Infections/Population     8.26%    7.29%  (6.29, 8.28)     0.97%  (−0.03, 1.97)      11.76%  (−0.33, 23.86)
Deaths/Population         0.13%    0.10%  (−0.02, 0.23)    0.02%  (−0.10, 0.15)      19.32%  (−80.47, 119.12)
Deaths/Infections         1.56%    1.38%  (0.93, 1.83)     0.18%  (−0.27, 0.63)      11.27%  (−17.53, 40.08)
* Bold values are statistically significant at p < 0.05. ** Incidence values are average point estimates across each time window from the panel-data regressions; they are not equivalent to a cross-sectional incidence calculated from the last day of the series. ★ denotes the model's patient online review (POR) star rating.
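The ARR and RRR columns follow from the actual and model-predicted incidences by simple arithmetic: ARR is the actual incidence minus the incidence predicted at a 0.3-star-higher rating, and RRR is the ARR as a fraction of the actual incidence. The sketch below uses the rounded Pandemic Year Deaths/Infections figures from the table; because the inputs are rounded, the RRR comes out at 16.86% rather than the published 16.79%, which is computed from unrounded values.

```python
# ARR/RRR from actual vs. counterfactual (+0.3-star) incidence,
# Pandemic Year Deaths/Infections row of Table 3 (rounded inputs).
actual = 0.0261     # actual deaths/infections, 2.61%
predicted = 0.0217  # model-predicted at +0.3 stars, 2.17%

arr = actual - predicted  # absolute risk reduction
rrr = arr / actual        # relative risk reduction

print(f"ARR = {arr:.2%}, RRR = {rrr:.2%}")  # ARR = 0.44%, RRR = 16.86%
```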
Table 4. Year-one COVID-19 pandemic outcomes with a 0.3 star improvement *.
Pandemic Year                   Actual        +0.3★ Modeled Outcomes                  Difference (Actual − Modeled)
(11 March 2020–11 March 2021)                 Estimate     (95% CI)                   Estimate    (95% CI)
Infections of US Population     28,729,781    25,646,572   (23,221,551, 28,071,593)   3,083,209   (658,188, 5,508,230)
Deaths of US Population         523,380       391,807      (83,364, 700,249)          131,573     (−176,869, 440,016)
Deaths of US Infected           523,380       435,518      (388,440, 482,596)         87,862      (40,784, 134,940)
* Bold values are statistically significant at p < 0.05. ★ represents POR star ratings for the model.
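Table 4's Difference column is simply the actual count minus the modeled estimate. The check below (numbers transcribed from the table) confirms that the three rows are internally consistent.

```python
# Consistency check for Table 4: Difference = Actual − Modeled estimate.
rows = {
    "Infections of US Population": (28_729_781, 25_646_572, 3_083_209),
    "Deaths of US Population":     (523_380,    391_807,    131_573),
    "Deaths of US Infected":       (523_380,    435_518,    87_862),
}
for name, (actual, modeled, published_diff) in rows.items():
    assert actual - modeled == published_diff, name
print("all differences consistent")
```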

Stanley, J.; Hensley, M.; King, R.; Baum, N. The Relationship between Internet Patient Satisfaction Ratings and COVID-19 Outcomes. Healthcare 2023, 11, 1411. https://doi.org/10.3390/healthcare11101411
