Article

Six Sigma in Health Literature, What Matters?

by Ana-Beatriz Hernández-Lara, Maria-Victoria Sánchez-Rebull * and Angels Niñerola
Department of Business Management, Faculty of Business and Economics, University Rovira i Virgili, 43204 Reus, Spain
* Author to whom correspondence should be addressed.
Int. J. Environ. Res. Public Health 2021, 18(16), 8795; https://doi.org/10.3390/ijerph18168795
Submission received: 18 July 2021 / Revised: 17 August 2021 / Accepted: 18 August 2021 / Published: 20 August 2021
(This article belongs to the Collection Lean Six Sigma in Healthcare)

Abstract

Six Sigma has been widely used in the health field for process or quality improvement and constitutes a profusely investigated topic. This paper explores why some studies have more academic and societal impact, attracting more attention from academics and health professionals. Academic and societal impact was addressed using traditional academic metrics and alternative metrics, often known as altmetrics. We conducted a systematic search following the PRISMA statement through three well-known databases and identified 212 papers published during 1998–2019. We ran zero-inflated negative binomial regressions to explore the influence of bibliometric and content determinants on traditional academic and alternative metrics. We observe that the factors influencing alternative metrics are more varied and harder to apprehend than those explaining traditional impact metrics. We also conclude that, independently of how impact is measured, a paper’s content, rather than its bibliometric characteristics, better explains that impact. In the specific case of research on Six Sigma applied to health, the papers with more impact address process improvement, focusing on time and waste reduction. This study sheds light on the aspects that best explain publications’ impact in the field of Six Sigma application in health, from both an academic and a societal point of view.

1. Introduction

Six Sigma seeks quality, understood as less variability in a process result [1]. Although it originated in the manufacturing industry, where Motorola created it, it has been applied to a diverse set of non-manufacturing problems with excellent results [2,3]. It is based on the principle of measuring, monitoring, and controlling processes through the DMAIC steps (define, measure, analyze, improve, and control) [4]. The goal is to reach 99.9997% accuracy, with only 3.4 defects per million opportunities.
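To make the 3.4 defects-per-million figure concrete, the following is a minimal sketch (ours, not the authors’) of how it follows from the normal distribution under the conventional 1.5-sigma long-term shift assumption:

```r
# Defects per million opportunities (DPMO) at a given sigma level,
# assuming the conventional 1.5-sigma long-term shift.
dpmo <- function(sigma_level, shift = 1.5) {
  (1 - pnorm(sigma_level - shift)) * 1e6
}

dpmo(6)  # ~3.4 DPMO, i.e., roughly 99.9997% defect-free output
```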
This management philosophy is widely used in the health sector to reduce error, cost, and time [5,6,7,8]. In today’s complex environment, with financial constraints on the healthcare system, increased efficiency could help health institutions to maintain or improve outcomes [9]. In this sense, the use of Six Sigma for addressing health process improvements is gaining scholars’ and professionals’ interest [10]. Most of this research is carried out through case studies showing Six Sigma implementations in different areas of a medical organization [5].
As the field attracts more attention, the impact of its publications also grows. In this regard, citations have been the traditional way to assess research impact [11,12], together with journals’ impact factor [13] and other indexes such as the FWCI (Field-Weighted Citation Impact). The FWCI is an indicator that compares the actual number of citations received by a document with the expected number of citations for documents of the same type (article, review, book, or conference proceeding), publication year, and subject area [14]. The relevance of these traditional metrics is based on their use in evaluating individual academics, research groups, and universities [15]. However, criticism of these classic metric approaches is growing because they do not analyze the reasons for citations or the impact that research exerts beyond academia [16]. Besides, in the specific case of the health field, the time normally required to accumulate citations may overlook important societal and clinical impacts, and new scholarly channels are increasingly used to disseminate scientific results [16].
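Following the definition cited above [14], the FWCI of a document can be written as a simple ratio (the notation here is ours):

```latex
\mathrm{FWCI} \;=\; \frac{\text{citations received by the document}}
{\text{expected citations for documents of the same type, publication year, and subject area}}
```

A value above 1 therefore indicates that the document is cited more than expected for comparable documents.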
This critical movement has led to the rise of alternative metrics, commonly known as altmetrics, representing another way of assessing research impact based on relationships and the sharing of academic publications in online environments [17]. They capture the relevance of research based on metrics such as article views, downloads, and mentions on social media or news media [16], considering channels such as Twitter, Mendeley, CiteULike, or blogs [18,19], among others. Given the growth and relevance of social media for sharing scientific knowledge [20], these novel metrics have become necessary and represent another way to assess research impact.
Altmetrics complement citation analysis and other traditional metrics; they overcome their limitations and provide new insights into research impact study [16]. At the same time, altmetrics can provide better signaling of significant publications according to different audiences, which do not necessarily need to be academic, being more suitable for capturing the societal impact of research [21].
Previous research on the traditional and altmetric scores of publications has mostly been interested in analyzing the relationship between both kinds of metrics [22]. The conclusions of these studies are not unanimous. While some find some degree of correlation between altmetrics and citations [22], others claim that altmetrics cannot predict future citations [23] and that some altmetrics are only weakly correlated with traditional citation metrics [24]. These conflicting results suggest that the two approaches are related and complement each other, but they do not provide identical information on the visibility and impact of academic research. They also point to the need for more studies providing additional empirical evidence on the determinants that matter for increasing citations and altmetric scores.
In the health field, and more specifically in the extensive literature that applies Six Sigma to health process improvement, we are unaware of any study that has analyzed the determinants that most influence the citations and altmetric scores obtained by these publications.
This paper aims to determine which factors drive the impact of publications, considering traditional and alternative impact indicators, specifically for health publications applying Six Sigma.
These different ways of addressing impact matter to academics concerned with the reach of their research. Academics want their contributions to reach the greatest possible audience; therefore, they will value knowing the main determinants of their research impact.
After explaining the methodology in Section 2, results are presented in Section 3, and finally, discussion and conclusions are summarized in Section 4 and Section 5, respectively.

2. Method

2.1. Search Strategy

A systematic search following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement was carried out [25]. We used three databases to guarantee greater coverage [26]: the WoS (Web of Science) core collection, Scopus, and Medline, the latter for its relevance at the medical level. The analysis included research articles [27] published until 2019, written in English and retrieved using the following search strategy (an illustrative query string is sketched after the list):
  • 1st level (terms related to the methodology should appear in the title): “six sigma” or “six-sigma” or DMAIC;
  • 2nd level (terms related to quality processes should be in the Topic for WoS and Medline or in TITLE-ABS-KW for Scopus): “quality systems”, “quality improvement”, or “quality management”;
  • 3rd level (the activity sector should be in the Topic for WoS and Medline or in TITLE-ABS-KW for Scopus): “health *”.
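For illustration only, the three levels above could be combined into query strings along the following lines; the field codes and wildcard form are our reconstruction, not the strings actually recorded in the original search:

```
Scopus:
  TITLE("six sigma" OR "six-sigma" OR dmaic)
  AND TITLE-ABS-KEY("quality systems" OR "quality improvement" OR "quality management")
  AND TITLE-ABS-KEY(health*)

Web of Science (analogous Topic search for Medline):
  TI=("six sigma" OR "six-sigma" OR DMAIC)
  AND TS=("quality systems" OR "quality improvement" OR "quality management")
  AND TS=(health*)
```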
Figure 1 shows the selection process that led to the final sample of 212 articles.

2.2. Variables Examined

2.2.1. Dependent Variable

The impact of publications can be measured with different indicators, typically separated into traditional and alternative ones. Previous research claims they have a different nature, related to academic or social parameters [12]. While traditional metrics assess the impact of a publication based on citations, altmetrics are based on social networks.
Among the traditional metrics, we chose citations (expressed as citations per year to diminish the influence of tenure) and the FWCI, as they are commonly accepted in academia [14].
Mendeley and the databases consulted provide many data points used as alternative metrics to capture research’s societal impact. In this sense, we had information on paper usage, including abstract views, full-text views, link-outs, and times downloaded, as well as mentions on Twitter and readers on Mendeley [28].
In the analysis, we included Mendeley readers and abstract views, as most of the papers had information for those variables. By contrast, only 46 papers in the sample showed any interaction on Twitter, which led us to rely on Mendeley as the alternative social evaluation tool. Moreover, previous studies estimate that readers account for about one-third of the scientific community [16]. Therefore, the selected variables can capture the reach that research obtains.
Table 1 details the descriptive data for these dependent variables. As can be seen, except for abstract views, the information is available for all the sample papers. Abstract views also have the highest mean and the greatest dispersion.

2.2.2. Independent Variables

The independent variables were the determinants that may influence research impact. These determinants are varied. Research has pointed out that bibliometric information can make a paper more remarkable, easier to find, or more visible to an interested audience [29]. Besides, the content addressed by the research, including its objectives, themes, or the methodologies applied, can also attract readers [30].
Among the bibliometric predictors, previous literature identifies the authors and their bibliometric characteristics as variables influencing the publication impact [12,31], e.g., recognized authors or top authors with many papers and citations typically gather more attention [32], as well as authors from leading universities [33].
We considered several items regarding authorship, such as the number of authors of the paper (N_authors), the type of authorship (academic/professional/both), and the first author’s information, as a proxy of authors’ impact [34]. In this sense, we included the URS (university of affiliation research score) of the first author according to the Times Higher Education ranking [35], the first author’s citations in the year before the publication of the paper, to capture his/her notoriety, and the total number of documents he/she authored (first author citations, first author N_papers).
We also included bibliometric variables regarding the source of the publications [36]. Well-known journals are expected to contribute positively to the impact of the research. This means considering the impact factor, the number of fields or categories in which the journal is indexed, and its quartile when addressing the journal’s influence on a paper’s impact. The quartile is used to evaluate the relative importance of a journal within the total number of journals in its category: if we divide a list of journals ordered from highest to lowest impact index into four equal parts, each of these parts is a quartile. We chose Scopus metrics (SJR, Q, and N_fields), as most of the papers were indexed in this database. Previous research found a concordance between JCR and SJR metrics in percentiles and ranks [37]; therefore, either could be used.
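As a minimal sketch of the quartile assignment described above (our own illustration, not the authors’ procedure), journals in a single category can be ranked by SJR and split into four equal parts:

```r
# Assign Q1-Q4 from SJR scores within one subject category (Q1 = top quarter).
assign_quartile <- function(sjr) {
  r <- rank(-sjr, ties.method = "min")   # 1 = highest SJR
  q <- ceiling(4 * r / length(sjr))      # cut the ordered list into four equal parts
  factor(paste0("Q", q), levels = paste0("Q", 1:4))
}

assign_quartile(c(3.2, 1.1, 0.6, 0.4, 0.2, 0.1, 0.05, 0.02))
```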
Regarding information extracted from the paper itself [31], we added bibliometric variables such as tenure, references, and keywords (paper tenure, N_references, and N_keywords) [12,30]. Previous research found that bibliometric characteristics affect the impact of research in terms of citations; their effect on social impact should be analyzed as well.
Among the content determinants of the research impact, we included the paper’s objectives, its main themes, and the unit of analysis contemplated in the research. This means, in our specific case, the health department where Six Sigma was applied. In this sense, based on a recent review [5], the main objectives of using Six Sigma in healthcare are reducing cost, time, waste, and errors (OBJ). Besides, the main themes of the publications were determined through their keywords. To do so, the 10 most recurrent keywords were identified (K_). Finally, Six Sigma has been applied to a great range of hospital departments, including cardiology, laboratory, management, medication & pharmacy, nursing, obstetric, pediatric, radiology, rehab, surgery and anesthesiology, traumatology, and UCI and Emergency.
Variables included in the model are summarized in Figure 2.

3. Results

Statistical data analysis was conducted using RStudio (version 1.4.1103) (RStudio, Inc., Boston, MA, USA) [38]. Firstly, we computed descriptive statistics for the numeric variables and the correlation matrix. The results are shown in Table 1. The dependent variables were sometimes significantly correlated, but we estimated separate regression models for each one. Among the independent variables of the models, the significant correlations only reached moderate levels, confirming that there were no multicollinearity problems.
The results of the Shapiro–Wilk normality test revealed that dependent variables did not follow a normal distribution. Moreover, there was a preponderance of zeros and highly dispersed data, with variances much higher than the means of the dependent variables, as Table 1 shows. Therefore, we used zero-inflated negative binomial regressions for the model estimation.
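As an illustration of this estimation strategy, a zero-inflated negative binomial model can be fitted in R with the pscl package; the data frame and variable names below are hypothetical placeholders, not the authors’ actual script:

```r
library(pscl)  # zeroinfl() for zero-inflated count models

# 'papers' is a hypothetical data frame with one row per article.
# The dependent variable must be a non-negative integer count,
# so a rate such as citations per year is rounded here for illustration.
m1a <- zeroinfl(
  round(citations_per_year) ~ n_authors + n_keywords + first_author_n_papers +
    n_fields + quartile + sjr + paper_tenure + n_references,
  data = papers,
  dist = "negbin"
)
summary(m1a)  # coefficients for the count part and the zero-inflation part
```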
Table 2 shows the estimation models for the traditional impact metrics (Model 1 refers to citations per year as the dependent variable, and Model 2 to FWCI). Table 3 shows the estimation models for altmetrics (Model 3 refers to Mendeley readers and Model 4 to abstract views as dependent variables, respectively). Each of the four models was split into two, labeled as submodel a (including the bibliometric determinants) and submodel b (the paper’s content variables).
Overall, all models had excellent goodness of fit (p < 0.001) according to the likelihood-ratio tests and Wald tests. It is noteworthy that the AIC (Akaike information criterion) is always lower for the estimation models based on the publications’ content than for the models based on the bibliometric indicators. This means that the content models exhibit a better fit and, therefore, that the content indicators better explain the impact of the publications, independently of how this impact is measured.
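The goodness-of-fit comparison described here could be reproduced along these lines, continuing the previous sketch (all object and predictor names remain hypothetical):

```r
library(lmtest)  # lrtest() and waldtest()

# Content-based counterpart of m1a (illustrative predictor names)
m1b <- zeroinfl(round(citations_per_year) ~ obj_time + obj_waste + obj_cost + obj_error +
                  k_process_improvement + k_quality_improvement + department,
                data = papers, dist = "negbin")

# Intercept-only null model for the likelihood-ratio test
m0 <- zeroinfl(round(citations_per_year) ~ 1, data = papers, dist = "negbin")

lrtest(m0, m1a)    # likelihood-ratio test of overall model significance
waldtest(m0, m1a)  # Wald test of the added regressors
AIC(m1a, m1b)      # lower AIC indicates the better-fitting model (content vs. bibliometric)
```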

3.1. Traditional Metrics Models

Focusing on the bibliometric models (models 1a and 2a), the results show a significant and negative influence of the lower quartiles (Q3 and Q4) on all impact metrics, especially on the traditional metrics. On the other hand, we found that publishing in Q2 is not significantly worse than publishing in Q1.
Particularly for citations, even though they are corrected as citations per year, tenure appears significant and positive (β = 0.038, p < 0.05), as does the number of papers authored by the first author (β = 0.030, p < 0.001). Older articles written by a productive first author achieve greater success in terms of citations. Besides, the number of references affects both traditional metrics (β = 0.014, p < 0.001 (citations); β = 0.007, p < 0.1 (FWCI)), which means that using more references impacts positively on citations.
Moreover, when we consider the impact of the content indicators on traditional metrics, it is noteworthy that some keywords positively affect citations and FWCI. We can conclude from the content models that papers using healthcare, process improvement, and quality improvement as keywords have a significantly higher impact in terms of citations and FWCI. In particular, articles addressing process improvement are highly cited (β = 1.384, p < 0.01 (citations); β = 1.827, p < 0.001 (FWCI)). Additionally, some objectives attract more interest from academics according to the traditional metrics: papers pursuing time and waste reduction are cited significantly more and have a higher FWCI (β = 0.917, p < 0.001 and β = 1.478, p < 0.001 (citations); β = 0.743, p < 0.05 and β = 1.312, p < 0.05 (FWCI)).
We could not find any significant influence of the health departments on FWCI. However, regarding citations, the results show a marginally significant and negative relationship for nursing (β = −1.146, p < 0.1) and a significant and negative one for pediatrics (β = −2.902, p < 0.05) compared with cardiology, which was the default department category in the models.

3.2. Altmetrics Models

In the estimation models for the altmetric scores, we found some conflicting results compared with the traditional metrics, and less consistency between the effects of the analyzed variables on the two dependent variables (Mendeley readers and abstract views). Despite the explanatory capacity of the models, much information is absorbed by the intercept. This did not occur in the traditional models, suggesting that a wider variety of variables could affect altmetrics.
Models 3a and 4a on bibliometric indicators show the same significant and negative influence of the lower quartiles (Q3 and Q4) found for the traditional metrics. Moreover, tenure plays a negative role in Mendeley readers and abstract views (β = −0.074, p < 0.001 and β = −0.072, p < 0.05, respectively). Contrary to what occurred with citations and FWCI, new papers achieve more social impact. This result can be explained because the most recent articles are usually promoted more on social networks and appear in researchers’ alerts when new works related to their topic are published.
In those models, the number of keywords was also a significant and positive variable: using more keywords positively affects both metrics (β = 0.079, p < 0.01 (Mendeley readers); β = 0.139, p < 0.01 (abstract views)). On the contrary, papers indexed in more categories, i.e., less focused on a specific research area, showed a negative relationship with altmetric scores (β = −0.17, p < 0.01 (Mendeley readers); β = −0.265, p < 0.05 (abstract views)). Finally, two variables had a significant impact on abstract views: the number of authors, with a positive influence (β = 0.130, p < 0.01), and the SJR, which appears to have a negative impact (β = −0.879, p < 0.01). This last finding is quite unexpected and suggests that scholars find and view abstracts they consider interesting even when they do not belong to highly ranked journals. However, these results hold only for one specific altmetric score, so more empirical evidence would be necessary to reach robust conclusions.
The content models (models 3b and 4b) showed that papers addressing time reduction are the most likely to lead researchers to view the paper’s abstract (β = 1.336, p < 0.01), while for Mendeley readers, error reduction is the only objective without impact; Mendeley readers are significantly interested in all the other objectives identified in this research.
Several keywords have a significant and positive influence on the altmetric scores. The most influential are process improvement (β = 1.405, p < 0.01 for readers and β = 3.349, p < 0.001 for abstract views), quality improvement (β = 0.853, p < 0.001 for readers and β = 1.684, p < 0.001 for abstract views), and quality management (β = 2.990, p < 0.001 for abstract views). These findings agree with those obtained for the traditional metrics, where process and quality improvement were also the most influential keywords. Additionally, keywords such as Lean Six Sigma or healthcare also exerted a significant and positive influence on these altmetrics, while Six Sigma or just Lean had a negative impact on some of the altmetrics considered.
Finally, our results indicate that the department investigated in the publications does not seem to explain their impact in terms of readers. It matters more for explaining abstract views, especially for publications on rehab. Some other departments were also significant but negative compared with cardiology, the default department category: this is the case for the obstetric, trauma, and management departments.

4. Discussion

Research impact can be addressed using different metrics and can also be affected by various determinants. Although some previous research found a positive correlation between traditional metrics and altmetrics [20,39], other studies claim they are only weakly correlated [24]. Our study has also confirmed that the items influencing the two kinds of metrics are different [29]. However, in both cases, content explains the impact of publications better than bibliometric characteristics do.
We conclude that the determinants affecting traditional metrics are clearer and more specific, so researchers know better what to consider when they seek to improve the impact of their publications in terms of traditional, citation-based scores. In this sense, papers published in lower-ranked journals (journals in Q3 and Q4 of their database) show lower values in traditional metrics. On the contrary, including more references positively affects traditional metrics. Citing other works is a way of gaining visibility: authors usually receive alerts when they are mentioned, and they may be tempted to read or share the study where their work is cited.
The number of previous papers by the first author seems to be important only when we look at citations. This finding has a logical explanation. First of all, citations do not exclude self-citations, so authors with previous papers can increase their citations by citing their own works. Moreover, if an author is working in the field or has some research experience, his/her work can be better known than that of novice authors. This effect does not appear in the other traditional metric considered: because FWCI compares the actual number of citations received by a document with the expected number for comparable documents, the power or influence of an experienced first author carries less weight.
Altmetrics, on the contrary, being more oriented toward the societal impact of research, can be influenced by varied determinants beyond bibliometric indicators, especially indicators related to the content of the publications. A paper may have more readers or views for reasons that go beyond the traditional standards of academia [40]. Baek et al. [41] analyzed the top-cited articles versus the top altmetric articles in a particular field, finding no overlap between the two samples. This confirms that traditional and alternative metrics do not always go in the same direction [40].
Previous research agreed that a journal’s impact is one of the most relevant determinants of citations and altmetrics [29]. In this study, we emphasize the importance of the quartile in which the journal is positioned more than its impact factor. As a matter of fact, our results indicate that abstract views can be higher for journals that are not well ranked, perhaps because these journals make a greater effort to attract a larger audience, knowing that views are the first step toward future readers and higher impact. Moreover, other article characteristics, such as authorship or the number of references, increase an article’s impact, especially in traditional metrics [29,31]. Considering the bibliometric determinants of impact, it is also worth highlighting the different effects of tenure. We agree with previous research that altmetrics can provide more real-time information [13], as the effect of tenure works in the opposite direction to that observed for traditional metrics. Similar to Araújo et al. [40], our results also confirm that recent publications receive more attention under altmetrics, while older, seminal works benefit from more conventional metrics for measuring research impact.
As mentioned above, we found that the content models better explain the data. This means that papers analyzing Six Sigma in the health field are cited more or have higher social visibility because of what they investigate (their main objectives, themes, and units of analysis) rather than because of their bibliometric characteristics. These results align with previous research that pointed to the research topic as the key to success [42]. According to our results, the objective of the article is essential for achieving academic and societal impact: time and waste reduction are the main goals of highly cited papers in the studied field. Moreover, the most relevant themes were common to traditional and altmetric scores and were related to process improvement, quality improvement, and quality management. The relevance of the objective and the theme addressed by the publication was higher than that of the unit of analysis or the department considered in the research.
Table 4 summarizes the main effects found, highlighting the importance of the content of the paper for all metrics, and indicates for each determinant whether its effect on the publication’s impact is positive or negative.

5. Conclusions

This research provides additional evidence on the determinants that explain what matters in order to increase the scientific and societal impact of research on Six Sigma applied to health. There is an open debate on the scope and determinants of the different metrics that capture the impact of publications. Moreover, besides providing further evidence on a topic where previous findings have been inconsistent, we also point out the relevance of this kind of study in the health field, because in this case the time required by traditional impact metrics may overlook important scientific advances with societal and clinical impacts, which justifies the relevance of alternative metrics in this field and the need to understand their determinants better [18].
In conclusion, our results confirm that the varied factors explaining the societal impact of academic research are harder to apprehend. In contrast to traditional metrics, which are more stable, influenced by time, and solidly grounded in bibliometric and content determinants, altmetrics can also be affected by other, less well-known factors, given the novelty of social media and academic networks for disseminating research and the varied methods and tools that journals and scholars themselves can use to promote their work and obtain higher visibility.
Despite these difficulties, there are some bottom-line determinants of impact, independently of how it is measured, which allow us to conclude that a paper’s content matters more than its bibliometric characteristics.
In the specific case of research on Six Sigma applied to health, process and quality improvement are the themes, and time and waste reduction the objectives, that assure the highest impact for both traditional and alternative metrics, independently of the department or unit of analysis considered.
However, even if this kind of study provides additional insights into what matters to enhance the academic and societal impact of research, it is necessary to mention the limitations in obtaining these alternative metrics. Data quality problems are usual and constitute a relevant issue in the field, inviting us to consider the findings with caution and suggesting the need to make additional efforts in the future to overcome this limitation.
Although it seems unlikely that citations and the journal impact factor will stop being crucial, the current use of social networks and supporting software, such as Mendeley, is enhancing the relevance and weight of altmetrics in academic research.
Even though altmetrics do not represent a full alternative to the traditional methods of measuring the impact of research output, some organizations, such as DORA (the Declaration on Research Assessment) [43], are pressing to improve how scientific research is evaluated by funding agencies, academic institutions, and other parties, trying to go beyond traditional indicators such as the journal impact factor.
In this sense, Scopus has a tool known as PlumX Metrics [44] that gathers people’s footprints when interacting with research and categorizes them into five categories: Usage, Captures, Mentions, Social Media, and Citations. It would be interesting for all parties involved in research to establish an overall indicator giving different weights to those categories, so as to obtain a representative parameter that aggregates all the impact indicators (a hypothetical sketch is given below). Thus, both academic and societal metrics could contribute to assessing the impact of a publication.
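A purely hypothetical sketch of such an overall indicator follows; the weights and the normalization are placeholders chosen for illustration, not a proposal grounded in the PlumX documentation:

```r
# Hypothetical composite impact score built from PlumX-style categories.
# Each category is first normalized (here: divided by its sample maximum)
# so the weighted sum is not dominated by the largest raw counts.
composite_impact <- function(scores, weights = c(usage = 0.15, captures = 0.20,
                                                 mentions = 0.15, social = 0.15,
                                                 citations = 0.35)) {
  normalized <- sweep(scores, 2, apply(scores, 2, max), "/")
  as.numeric(normalized %*% weights[colnames(scores)])
}

scores <- cbind(usage = c(76, 10), captures = c(16, 3), mentions = c(2, 0),
                social = c(5, 1), citations = c(20, 4))
composite_impact(scores)  # one composite score per paper
```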

Author Contributions

All the authors designed the research. A.N. and M.-V.S.-R. collected the data. A.N. and A.-B.H.-L. performed the analysis of the data. Finally, the paper was written by A.N., A.-B.H.-L. and M.-V.S.-R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. De Koning, H.; Verver, J.P.; van den Heuvel, J.; Bisgaard, S.; Does, R.J.M.M. Lean six sigma in healthcare. J. Healthc. Qual. 2006, 28, 4–11. [Google Scholar] [CrossRef]
  2. Snee, R.D. Six-Sigma: The evolution of 100 years of business improvement methodology. Int. J. Six Sigma Compet. Advant. 2004, 1, 4–20. [Google Scholar] [CrossRef]
  3. Antony, J. Six sigma for service processes. Bus. Process Manag. J. 2006, 12, 234–248. [Google Scholar] [CrossRef]
  4. Kwak, Y.H.; Anbari, F.T. Benefits, obstacles, and future of six sigma approach. Technovation 2006, 26, 708–715. [Google Scholar] [CrossRef]
  5. Niñerola, A.; Sánchez-Rebull, M.V.; Hernández-Lara, A.B. Quality improvement in healthcare: Six Sigma systematic review. Health Policy 2020. [Google Scholar] [CrossRef] [PubMed]
  6. Improta, G.; Balato, G.; Romano, M.; Ponsiglione, A.M.; Raiola, E.; Russo, M.A.; Rosa, D.; Triassi, M.; Cesarelli, M. Improving performances of the knee replacement surgery process by applying DMAIC principles. J. Eval. Clin. Pract. 2017, 23, 1401–1407. [Google Scholar] [CrossRef]
  7. Kobo-Greenhut, A.; Holzman, K.; Raviv, O.; Arad, J.; Ben Shlomo, I. Applying health-six-sigma principles helps reducing the variability of length of stay in the emergency department. Int. J. Qual. Health Care 2021, 33. [Google Scholar] [CrossRef]
  8. Scala, A.; Ponsiglione, A.M.; Loperto, I.; Della Vecchia, A.; Borrelli, A.; Russo, G.; Triassi, M.; Improta, G. Lean Six Sigma Approach for Reducing Length of Hospital Stay for Patients with Femur Fracture in a University Hospital. Int. J. Environ. Res. Public Health 2021, 18, 2843. [Google Scholar] [CrossRef] [PubMed]
  9. Hundal, G.S.; Thiyagarajan, S.; Alduraibi, M.; Laux, C.M.; Furterer, S.L.; Cudney, E.A.; Antony, J. Lean Six Sigma as an organizational resilience mechanism in health care during the era of COVID-19. Int. J. Lean Six Sigma 2021. [Google Scholar] [CrossRef]
  10. Barberato Henrique, D.; Godinho Filho, M. A systematic literature review of empirical research in Lean and Six Sigma in healthcare. Total Qual. Manag. Bus. Excell. 2018, 31, 429–449. [Google Scholar] [CrossRef]
  11. Garfield, E. Citation analysis as a tool in journal evaluation. Science 1972, 178, 471–479. [Google Scholar] [CrossRef]
  12. Hou, J.; Ma, D. How the high-impact papers formed? A study using data from social media and citation. Scientometrics 2020, 125, 2597–2615. [Google Scholar] [CrossRef]
  13. Warren, V.T.; Patel, B.; Boyd, C.J. Analyzing the relationship between Altmetric score and literature citations in the Implantology literature. Clin. Implant Dent. Relat. Res. 2020, 22, 54–58. [Google Scholar] [CrossRef]
  14. Purkayastha, A.; Palmaro, E.; Falk-Krzesinski, H.J.; Baas, J. Comparison of two article-level, field-independent citation metrics: Field-Weighted Citation Impact (FWCI) and Relative Citation Ratio (RCR). J. Informetr. 2019, 13, 635–642. [Google Scholar] [CrossRef]
  15. Adam, D. The counting house. Nature 2002, 415, 726–729. [Google Scholar] [CrossRef] [PubMed]
  16. Barbic, D.; Tubman, M.; Lam, H.; Barbic, S. An Analysis of Altmetrics in Emergency Medicine. Acad. Emerg. Med. 2016, 23, 251–268. [Google Scholar] [CrossRef] [PubMed]
  17. Priem, J.; Groth, P.; Taraborelli, D. The Altmetrics Collection. PLoS ONE 2012, 7, e48753. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Bornmann, L. Alternative metrics in scientometrics: A meta-analysis of research into three altmetrics. Scientometrics 2015, 103, 1123–1144. [Google Scholar] [CrossRef] [Green Version]
  19. Wei, M.; Noroozi Chakoli, A. Evaluating the relationship between the academic and social impact of open access books based on citation behaviors and social media attention. Scientometrics 2020, 125, 2401–2420. [Google Scholar] [CrossRef]
  20. Ali, O.; Karim, S.M.; Nasim, A.; Leila, H.; Alireza, I.-M. Do altmetrics correlate with citations? A study based on the 1,000 most-cited articles. Inf. Discov. Deliv. 2019, 47, 192–202. [Google Scholar] [CrossRef]
  21. Erdt, M.; Nagarajan, A.; Sin, S.C.J.; Theng, Y.L. Altmetrics: An analysis of the state-of-the-art in measuring research impact on social media. Scientometrics 2016, 109, 1117–1166. [Google Scholar] [CrossRef]
  22. Costas, R.; Zahedi, Z.; Wouters, P. Do “altmetrics” correlate with citations? Extensive comparison of altmetric indicators with citations from a multidisciplinary perspective. J. Assoc. Inf. Sci. Technol. 2015, 66, 2003–2019. [Google Scholar] [CrossRef]
  23. Barnes, C. The Use of Altmetrics as a Tool for Measuring Research Impact. Aust. Acad. Res. Libr. 2015, 46, 121–134. [Google Scholar] [CrossRef] [Green Version]
  24. Wooldridge, J.; King, M.B. Altmetric scores: An early indicator of research impact. J. Assoc. Inf. Sci. Technol. 2019, 70, 271–282. [Google Scholar] [CrossRef]
  25. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Int. J. Surg. 2010, 8, 336–341. [Google Scholar] [CrossRef] [Green Version]
  26. Mongeon, P.; Paul-Hus, A. The journal coverage of Web of Science and Scopus: A comparative analysis. Scientometrics 2016, 106, 213–228. [Google Scholar] [CrossRef]
  27. VanRaan, A. The use of bibliometric analysis in research performance assessment and monitoring of interdisciplinary scientific developments. Tech.–Theor. Prax. 2003, 1, 20–29. [Google Scholar]
  28. Hassan, S.U.; Gillani, U.A. Altmetrics of “altmetrics” using Google Scholar, Twitter, Mendeley, Facebook, Google-plus, CiteULike, Blogs and Wiki. arXiv 2016, arXiv:1603.07992. [Google Scholar]
  29. Didegah, F.; Bowman, T.D.; Holmberg, K. On the differences between citations and altmetrics: An investigation of factors driving altmetrics vs. citations for Finnish articles. J. Assoc. Inf. Sci. Technol. 2018, 69, 832–843. [Google Scholar] [CrossRef]
  30. Marin-Anglada, Q.; Hernández-Lara, A.B. Research on sharing economy: Why are some articles more cited than others? Econ. Res. Istraz. 2020, 33, 2787–2805. [Google Scholar] [CrossRef]
  31. van der Zwaard, S.; de Leeuw, A.W.; Meerhoff, L.R.A.; Bodine, S.C.; Knobbe, A. Articles with impact: Insights into 10 years of research with machine learning. J. Appl. Physiol. 2020, 129, 967–979. [Google Scholar] [CrossRef] [PubMed]
  32. Ioannidis, J.P.A.; Klavans, R.; Boyack, K.W. Multiple Citation Indicators and Their Composite across Scientific Disciplines. PLoS Biol. 2016, 14, e1002501. [Google Scholar] [CrossRef] [PubMed]
  33. Yan, W.; Zhang, Y.; Hu, T.; Kudva, S. How does scholarly use of academic social networking sites differ by academic discipline? A case study using ResearchGate. Inf. Process. Manag. 2021, 58, 102430. [Google Scholar] [CrossRef]
  34. Antonakis, J.; Bastardoz, N.; Liu, Y.; Schriesheim, C.A. What makes articles highly cited? Leadersh. Q. 2014, 25, 152–179. [Google Scholar] [CrossRef] [Green Version]
  35. Times Higher Education World University Rankings. Available online: https://www.timeshighereducation.com/world-university-rankings/2021/world-ranking#!/page/0/length/25/sort_by/rank/sort_order/asc/cols/scores (accessed on 5 June 2020).
  36. Caulley, L.; Cheng, W.; Catalá-López, F.; Whelan, J.; Khoury, M.; Ferraro, J.; Husereau, D.; Altman, D.G.; Moher, D. Citation impact was highly variable for reporting guidelines of health research: A citation analysis. J. Clin. Epidemiol. 2020, 127, 96–104. [Google Scholar] [CrossRef] [PubMed]
  37. Pech, G.; Delgado, C. Assessing the publication impact using citation data from both Scopus and WoS databases: An approach validated in 15 research fields. Scientometrics 2020, 125, 909–924. [Google Scholar] [CrossRef]
  38. RStudio Team. RStudio: Integrated Development Environment for R; RStudio Team: Boston, MA, USA, 2015; Available online: http://www.rstudio.com/ (accessed on 22 September 2020).
  39. Onyancha, O.B. Research Excellence in the Era of Online Attention: Altmetrics of South Africa’s Highly Cited Papers in Selected Research Fields. Publ. Res. Q. 2020, 36, 169–185. [Google Scholar] [CrossRef]
  40. Araújo, R.; Sorensen, A.A.; Konkiel, S.; Bloem, B.R. Top Altmetric Scores in the Parkinson’s Disease Literature. J. Parkinsons Dis. 2017, 7, 81–87. [Google Scholar] [CrossRef] [Green Version]
  41. Baek, S.; Yoon, D.Y.; Lim, K.J.; Hong, J.H.; Moon, J.Y.; Seo, Y.L.; Yun, E.J. Top-cited articles versus top Altmetric articles in nuclear medicine: A comparative bibliometric analysis. Acta Radiol. 2020, 61, 1343–1349. [Google Scholar] [CrossRef]
  42. Livas, C.; Delli, K. Looking beyond traditional metrics in orthodontics: An altmetric study on the most discussed articles on the web. Eur. J. Orthod. 2018, 40, 193–199. [Google Scholar] [CrossRef]
  43. The Declaration on Research Assessment. San Francisco Declaration on Research Assessment. 2021. Available online: https://sfdora.org/read/ (accessed on 25 July 2021).
  44. Lindsay, J.M. PlumX from Plum Analytics: Not Just Altmetrics. J. Electron. Resour. Med. Libr. 2016, 13, 8–17. [Google Scholar] [CrossRef]
Figure 1. PRISMA flow diagram of the article selection process.
Figure 2. Graphic model of variables analyzed.
Table 1. Descriptive data and correlation matrix of the numeric variables.
| Variable | N | Mean | sd | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 |
| 1. Citations per year (c/y) | 212 | 2.42 | 3.18 | 1 |
| 2. FWCI | 212 | 1.52 | 1.90 | 0.570 *** | 1 |
| 3. Mendeley readers | 212 | 16.46 | 24.60 | 0.581 *** | 0.607 *** | 1 |
| 4. Abstract views | 157 | 76.30 | 131.47 | 0.215 * | 0.508 *** | 0.524 *** | 1 |
| 5. URS | 76 | 46.39 | 23.61 | 0.128 | −0.199 | −0.141 | −0.165 | 1 |
| 6. N_Authors | 212 | 3.68 | 2.80 | −0.327 * | −0.371 | −0.458 ** | −0.308 | 0.182 | 1 |
| 7. 1st author citations | 212 | 217.36 | 772.18 | −0.027 | −0.108 | −0.198 ** | −0.292 | −0.131 | −0.078 ** | 1 |
| 8. 1st author N_papers | 212 | 7.95 | 9.95 | 0.158 ** | −0.121 * | −0.198 ** | −0.157 | −0.076 | −0.057 ** | 0.589 *** | 1 |
| 9. N_fields | 212 | 2.15 | 1.07 | −0.336 | −0.505 | −0.536 | −0.554 * | 0.121 * | 0.309 | 0.023 | 0.312 | 1 |
| 10. SJR | 212 | 0.61 | 0.55 | −0.342 ** | −0.287 * | −0.188 ** | 0.159 | −0.377 | 0.119 *** | 0.154 ** | −0.154 ** | −0.185 | 1 |
| 11. Paper tenure | 212 | 8.36 | 5.41 | 0.178 | −0.132 | −0.302 *** | −0.336 ** | −0.009 *** | 0.116 *** | 0.168 | 0.409 | 0.190 | 0.081 *** | 1 |
| 12. N_references | 159 | 28.52 | 22.17 | 0.427 *** | −0.015 | 0.397 ** | 0.185 | −0.176 | −0.101 | −0.036 | 0.078 | 0.027 | −0.065 | −0.186 * | 1 |
| 13. N_keywords | 212 | 2.94 | 2.76 | 0.369 *** | 0.260 | 0.215 *** | 0.262 * | 0.012 | −0.202 * | 0.181 | 0.375 | 0.146 *** | −0.239 | −0.076 *** | 0.576 * | 1 |
Correlation significant at the level of * 0.05, ** 0.01, and *** 0.001 (bilateral).
Table 2. Zero-inflated negative binomial regression model for traditional metrics of research impact.
| Variable | Citations per year, Model 1a: Bibliometric | Citations per year, Model 1b: Content | FWCI, Model 2a: Bibliometric | FWCI, Model 2b: Content |
| Intercept | 0.157 (0.366) | −0.346 (0.527) | 0.732 (0.418) + | −0.587 (0.623) |
| N_Authors | 0.028 (0.024) | – | −0.003 (0.029) | – |
| N_Keywords | 0.049 (0.029) | – | 0.017 (0.034) | – |
| Author_Prof | 0.383 (0.198) + | – | 0.348 (0.227) | – |
| Author_Both | −0.125 (0.163) | – | −0.033 (0.190) | – |
| 1st author citations | −0.001 (0.001) | – | 0.000 (0.000) | – |
| 1st author N_papers | 0.030 (0.009) ** | – | 0.008 (0.012) | – |
| N_Fields | −0.057 (0.068) | – | −0.120 (0.080) | – |
| Q2 | −0.273 (0.183) | – | −0.323 (0.208) | – |
| Q3 | −1.821 (0.342) *** | – | −2.263 (0.450) *** | – |
| Q4 | −2.026 (0.614) *** | – | −2.261 (0.802) ** | – |
| SJR | −0.144 (0.167) | – | −0.308 (0.201) | – |
| Paper tenure | 0.038 (0.015) * | – | 0.027 (0.017) | – |
| N_references | 0.014 (0.003) *** | – | 0.007 (0.004) + | – |
| OBJcost | – | 0.379 (0.241) | – | 0.149 (0.292) |
| OBJtime | – | 0.917 (0.269) *** | – | 0.743 (0.327) * |
| OBJwaste | – | 1.478 (0.432) *** | – | 1.312 (0.521) * |
| OBJerror | – | 0.256 (0.229) | – | 0.184 (0.280) |
| Laboratory | – | −0.080 (0.465) | – | −0.568 (0.590) |
| Management | – | −0.251 (0.438) | – | −0.605 (0.512) |
| Med&Pharma | – | 0.108 (0.570) | – | 0.060 (0.658) |
| Nursing | – | −1.146 (0.608) + | – | −0.635 (0.605) |
| Obstetric | – | −0.670 (0.685) | – | −1.008 (0.847) |
| Pediatric | – | −2.902 (1.272) * | – | −2.627 (1.390) |
| Radiology | – | −0.153 (0.480) | – | 0.339 (0.512) |
| Rehab | – | 0.838 (0.846) | – | 0.699 (1.055) |
| Surgery&Anesthesiology | – | 0.206 (0.402) | – | 0.059 (0.462) |
| Trauma | – | 0.560 (0.414) | – | 0.057 (0.491) |
| UCI&Emergency | – | −0.640 (0.457) | – | −0.520 (0.518) |
| K_DMAIC | – | 0.362 (0.399) | – | −0.059 (0.516) |
| K_Healthcare | – | 0.635 (0.254) * | – | 0.667 (0.299) * |
| K_Hospital | – | −0.063 (0.426) | – | −0.350 (0.513) |
| K_Lean | – | 0.249 (0.308) | – | 0.252 (0.356) |
| K_LSS | – | 0.309 (0.257) | – | 0.543 (0.325) + |
| K_Process improvement | – | 1.384 (0.455) ** | – | 1.827 (0.507) *** |
| K_Quality improvement | – | 0.510 (0.234) * | – | 0.785 (0.283) ** |
| K_Quality management | – | 0.516 (0.341) | – | 0.440 (0.435) |
| K_SS | – | −0.110 (0.224) | – | −0.198 (0.272) |
| K_Waiting time | – | −0.049 (0.307) | – | 0.086 (0.396) |
| AIC | 611.88 | 302.72 | 516.15 | 261.33 |
| Log-likelihood | −290.94 (df = 15) | −124.36 (df = 27) | −243.075 (df = 15) | −103.67 (df = 27) |
| Lrtest vs. null model (Chi-squared) | 100.49 *** | 70.087 *** | 63.319 *** | 58.399 *** |
| Wald test (F) | 7.509 *** | 6.299 *** | 3.521 *** | 3.428 *** |
Values are shown as EST (SE). Significant at the level of + 0.10, * 0.05, ** 0.01, and *** 0.001 (bilateral). Note: EST: estimate; SE: standard error. 1st author: first author information.
Table 3. Zero-inflated negative binomial regression model for alternative metrics of research impact.
| Variable | Mendeley readers, Model 3a: Bibliometric | Mendeley readers, Model 3b: Content | Abstract views, Model 4a: Bibliometric | Abstract views, Model 4b: Content |
| Intercept | 2.841 (0.340) *** | 1.230 (0.581) * | 5.410 (0.665) *** | 2.092 (0.926) * |
| N_Authors | −0.006 (0.025) | – | 0.130 (0.047) ** | – |
| N_Keywords | 0.079 (0.028) ** | – | 0.139 (0.053) ** | – |
| Author_Prof | 0.005 (0.192) | – | −0.213 (0.375) | – |
| Author_Both | 0.189 (0.149) | – | −0.381 (0.291) | – |
| 1st author citations | 0.000 (0.000) | – | −0.0001 (0.0002) | – |
| 1st author N_papers | 0.013 (0.010) | – | 0.006 (0.019) | – |
| N_Fields | −0.170 (0.063) ** | – | −0.265 (0.125) * | – |
| Q2 | 0.163 (0.180) | – | −0.598 (0.348) + | – |
| Q3 | −0.819 (0.119) *** | – | −0.422 (0.460) | – |
| Q4 | −0.962 (0.352) ** | – | −1.691 (0.676) * | – |
| SJR | 0.145 (0.165) | – | −0.879 (0.326) ** | – |
| Paper tenure | −0.074 (0.015) *** | – | −0.072 (0.031) * | – |
| N_references | 0.011 (0.003) *** | – | −0.007 (0.006) | – |
| OBJcost | – | 0.777 (0.316) * | – | 0.537 (0.488) |
| OBJtime | – | 0.629 (0.293) * | – | 1.336 (0.458) ** |
| OBJwaste | – | 1.795 (0.649) ** | – | −1.169 (1.014) |
| OBJerror | – | 0.324 (0.258) | – | 0.051 (0.397) |
| Laboratory | – | 0.699 (0.505) | – | −0.075 (0.729) |
| Management | – | 0.729 (0.502) | – | −2.674 (0.758) *** |
| Med&Pharma | – | 0.259 (0.631) | – | −0.958 (0.960) |
| Nursing | – | 0.077 (0.549) | – | 0.320 (0.755) |
| Obstetric | – | 0.850 (0.756) | – | −2.598 (1.087) * |
| Pediatric | – | −1.730 (1.076) | – | −2.195 (1.602) |
| Radiology | – | 0.431 (0.550) | – | −0.899 (0.858) |
| Rehab | – | 1.070 (1.022) | – | 3.298 (1.460) * |
| Surgery&Anesthesiology | – | 0.414 (0.482) | – | −0.049 (0.709) |
| Trauma | – | 0.381 (0.532) | – | −1.599 (0.752) * |
| UCI&Emergency | – | 0.469 (0.508) | – | −1.215 (0.739) + |
| K_DMAIC | – | 0.268 (0.457) | – | 0.092 (0.692) |
| K_Healthcare | – | 0.573 (0.297) + | – | 2.084 (0.462) *** |
| K_Hospital | – | −0.558 (0.475) | – | 1.527 (0.797) + |
| K_Lean | – | 0.200 (0.359) | – | −1.338 (0.546) * |
| K_LSS | – | 0.684 (0.285) * | – | 2.109 (0.510) *** |
| K_Process improvement | – | 1.405 (0.583) ** | – | 3.349 (0.963) *** |
| K_Quality improvement | – | 0.853 (0.243) *** | – | 1.684 (0.420) *** |
| K_Quality management | – | −0.005 (0.360) | – | 2.990 (0.712) *** |
| K_SS | – | −0.507 (0.247) * | – | 0.185 (0.464) |
| K_Waiting time | – | 0.022 (0.389) | – | −0.123 (0.610) |
| AIC | 1179 | 625.23 | 1385 | 700.36 |
| Log-likelihood | −574.51 (df = 15) | −285.62 (df = 27) | −677.5 (df = 15) | −323.18 (df = 27) |
| Lrtest vs. null model (Chi-squared) | 112.22 *** | 66.136 *** | 32.128 ** | 52.338 ** |
| Wald test (F) | 11.51 *** | 3.289 *** | 3.2318 *** | 6.063 *** |
Values are shown as EST (SE). Significant at the level of + 0.10, * 0.05, ** 0.01, and *** 0.001 (bilateral). Note: EST: estimate; SE: standard error. 1st author: first author information.
Table 4. Most important research impact determinants.
| Determinant | Citations per year | FWCI | Mendeley readers | Abstract views |
| 1st author N_papers | positive | – | – | – |
| N_Fields | – | – | negative | negative |
| Q3 | negative (β < −1) | negative (β < −1) | negative | – |
| Q4 | negative (β < −1) | negative (β < −1) | negative | negative (β < −1) |
| Paper tenure | positive | – | negative | negative |
| N_references | positive | positive | positive | – |
| OBJtime | positive | positive | positive | positive (β > 1) |
| OBJwaste | positive (β > 1) | positive (β > 1) | positive (β > 1) | – |
| K_Healthcare | positive | positive | positive | positive (β > 1) |
| K_Process improvement | positive (β > 1) | positive (β > 1) | positive (β > 1) | positive (β > 1) |
| K_Quality improvement | positive | positive | positive | positive (β > 1) |
Positive/negative indicates the direction of each significant effect; effects with an absolute β higher than 1 are flagged in the corresponding cell.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
