Article

meta.shrinkage: An R Package for Meta-Analyses for Simultaneously Estimating Individual Means

1 Biostatistics Center, Kurume University, Kurume 830-0011, Japan
2 Department of Clinical Medicine (Biostatistics), School of Pharmacy, Kitasato University, Tokyo 108-8641, Japan
3 Department of Social Information, Mejiro University, Tokyo 161-8539, Japan
* Author to whom correspondence should be addressed.
Algorithms 2022, 15(1), 26; https://doi.org/10.3390/a15010026
Submission received: 3 December 2021 / Revised: 5 January 2022 / Accepted: 13 January 2022 / Published: 17 January 2022

Abstract

Meta-analysis is an indispensable tool for synthesizing statistical results obtained from individual studies. Recently, non-Bayesian estimators for individual means were proposed by applying three methods: the James–Stein (JS) shrinkage estimator, the isotonic regression estimator, and the pretest (PT) estimator. In order to make these methods available to users, we develop a new R package, meta.shrinkage. Our package can compute seven estimators (named JS, JS+, RML, RJS, RJS+, PT, and GPT). We introduce this R package along with the usage of its R functions and the “average-min-max” steps for the pool-adjacent violators algorithm. We conduct Monte Carlo simulations to validate that the proposed R package works properly in a variety of scenarios. We also analyze a data example to show the ability of the R package.

1. Introduction

Meta-analysis is a tool for synthesizing statistical results obtained from individually published studies [1]. Meta-analyses have been employed in a variety of scientific studies [2,3,4,5], including studies on the influence of COVID-19 [6,7,8].
Usually, the goal of meta-analyses is to summarize individual studies to find some common effect [5,9]. The idea of estimating the common mean originated in mathematical statistics and stratified sampling designs under fixed effect models (pp. 55–103 of [10]; [11,12,13]). In biostatistical methodologies, the estimation method based on random effects models [14] is popular. In either model, the goal of meta-analyses is usually to estimate the common mean by combining the estimators of the individual means (Section 2.1).
In some scenarios, however, the goal of estimating the common mean is questionable. In these scenarios, meta-analyses can still be informative by looking at individual studies’ means (e.g., by a forest plot). Aside from these simple meta-analyses, Bayesian posterior means provide a more sophisticated summary of individual means in a meta-analysis [15,16,17,18,19].
Recently, Taketomi et al. [20] proposed non-Bayesian estimators of individual means by applying three methods: the James–Stein (JS) shrinkage estimator, the isotonic regression estimator, and the pretest (PT) estimator. Their frequentist estimators were shown to be superior to the individual studies’ estimators via decision-theoretic criteria and Monte Carlo simulation experiments. These frequentist estimators were also successfully applied to real data examples.
In this article, we propose a new R package, meta.shrinkage, that implements the frequentist estimators of [20]. Our package can calculate seven estimators (namely $\delta^{JS}$, $\delta^{JS+}$, $\delta^{RML}$, $\delta^{RJS}$, $\delta^{RJS+}$, $\delta^{PT}$, and $\delta^{GPT}$; see Section 3 for their definitions). We introduce this R package along with the usage of the R functions and the “average-min-max” steps for the pool-adjacent violators algorithm. We conduct Monte Carlo simulations to validate the proposed R package and to ensure that it works properly in a variety of scenarios. We made the package freely available on the Comprehensive R Archive Network (CRAN): https://CRAN.R-project.org/package=meta.shrinkage (accessed on 14 November 2021).
This article is organized as follows. Section 2 gives the background, including a quick review of meta-analyses. Section 3 introduces the proposed R package. Section 4 conducts simulation studies to validate the proposed R package. Section 5 includes a data example to illustrate the proposed R package. Section 6 concludes the article with discussions. The appendices give the R code to reproduce the numerical results of this article.

2. Background

2.1. Meta-Analysis

This subsection reviews the basic concepts for meta-analysis.
To clarify the concepts, we introduce some notations and assumptions for a meta-analysis. Define $G$ as the number of studies, where $G$ stands for the groups in the meta-analysis. For each $i = 1, 2, \ldots, G$, let $Y_i$ be an estimator for an unknown target estimand $\mu_i$, and let $y_i$ be a realized value of the random variable $Y_i$. We assume that the error is normally distributed, so that $Y_i \sim N(\mu_i, \sigma_i^2)$, where $\sigma_i^2 > 0$ is a known variance for the error distribution. Thus, $\mu_i$ is the mean of $Y_i$. The observed data are $\{ y_i : i = 1, 2, \ldots, G \}$ in this meta-analysis. We do not consider a setting where the variance is unknown [21,22], as this setting does not follow the framework of meta-analyses based on “summary data”. Without loss of generality, we assume that $\mu_i = 0$ corresponds to the null value.
Traditionally, the objective of meta-analyses is to estimate the common mean, denoted as $\mu$. It is defined by the fixed effect model assumption $\mu \equiv \mu_1 = \cdots = \mu_G$ or by the random-effects model assumption $\mu_i \sim N(\mu, \tau^2)$ for $i = 1, 2, \ldots, G$, where $\tau^2$ is the between-study variance. In this article, we assume neither, since the aforementioned models do not always fit the data at hand. For instance, if the studies have ordered means (e.g., $\mu_1 \le \cdots \le \mu_G$), the model is neither fixed nor random [20]. Indeed, many real meta-analyses have covariates that systematically explain increasing means $\mu_1 \le \cdots \le \mu_G$ or decreasing means $\mu_1 \ge \cdots \ge \mu_G$ (see Section 5). In such circumstances, there is no general way to define the common mean $\mu$. Below, we discuss what meta-analyses can do in the absence of the common mean.

2.2. Improved Estimation of Individual Means

Meta-analyses often display the individual estimates $(y_1, \ldots, y_G)$ along with their 95% confidence intervals (CIs), $(y_1, \ldots, y_G) \pm 1.96 \times (\sigma_1, \ldots, \sigma_G)$. Similarly, the funnel plot shows $(y_1, \ldots, y_G)$ against $(\sigma_1, \ldots, \sigma_G)$ (see [1,5,23,24] for these plots). These meta-analyses are possible without the assumptions of the fixed effect or random effects models. Therefore, looking at the individual estimates $(y_1, \ldots, y_G)$ is a part of meta-analysis.
Taketomi et al. [20] pointed out the need for improving the individual estimates by shrinkage estimation methods [11,12,25,26]. They first regard $\mathbf{Y} \equiv (Y_1, \ldots, Y_G)$ as an estimator of $\boldsymbol{\mu} \equiv (\mu_1, \ldots, \mu_G)$. Then, they consider an estimator $\boldsymbol{\delta}(\mathbf{Y}) \equiv (\delta_1(\mathbf{Y}), \ldots, \delta_G(\mathbf{Y}))$ that improves upon $\mathbf{Y}$ in terms of the weighted mean square error (WMSE) criteria:
$$ E\left[ \sum_{i=1}^{G} \frac{\{\delta_i(\mathbf{Y}) - \mu_i\}^2}{\sigma_i^2} \right] < E\left[ \sum_{i=1}^{G} \frac{(Y_i - \mu_i)^2}{\sigma_i^2} \right] = G, \qquad \exists\, (\mu_1, \ldots, \mu_G), \tag{1} $$
$$ E\left[ \sum_{i=1}^{G} \frac{\{\delta_i(\mathbf{Y}) - \mu_i\}^2}{\sigma_i^2} \right] \le E\left[ \sum_{i=1}^{G} \frac{(Y_i - \mu_i)^2}{\sigma_i^2} \right] = G, \qquad \forall\, (\mu_1, \ldots, \mu_G). \tag{2} $$
The above $\boldsymbol{\delta}(\mathbf{Y})$ is called an improved estimator of $\mathbf{Y}$. If “$\forall\, (\mu_1, \ldots, \mu_G)$” in Equation (2) holds only for a restricted parameter space, $\boldsymbol{\delta}(\mathbf{Y})$ is locally improved. Two locally improved estimators are relevant in this article. The first one is under ordered means, where the parameter space is restricted to $\mu_1 \le \mu_2 \le \cdots \le \mu_G$. The second one is under sparse normal means [27], where many $\mu_i$ values are zero (e.g., $(\mu_1, \ldots, \mu_{10}) = (1, 0, 0, 0, 0, 0, 0, 0, 0, 1)$).
The inverse variance weights in Equations (1) and (2) make it convenient to apply classical decision theory [20]. Consequently, Taketomi et al. [20] were able to theoretically verify the (local) improvement of their estimators upon $\mathbf{Y}$ in terms of the WMSE. In practice, one may also be interested in the total MSE (TMSE) criterion, defined as $E[ \sum_{i=1}^{G} (\delta_i(\mathbf{Y}) - \mu_i)^2 ]$ [11,25], although the TMSE makes the theoretical analysis more complex [11]. Section 4 will employ the TMSE criterion to assess the performance of all the improved estimators introduced in the proposed R package.
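To make these criteria concrete, the following minimal R sketch (not part of the package) approximates the WMSE and TMSE by Monte Carlo; the true means, the standard errors, and the use of the positive-part James–Stein shrinkage factor (introduced in Section 3.1) as a concrete choice of $\boldsymbol{\delta}$ are all illustrative assumptions.
# Minimal sketch: Monte Carlo approximation of the WMSE and TMSE criteria
# (hypothetical true means, standard errors, and choice of delta)
set.seed(1)
G <- 10
mu <- rep(0, G) # hypothetical true means
sigma <- runif(G, 0.3, 0.8) # hypothetical known standard errors
R <- 5000
wmse_Y <- wmse_d <- tmse_Y <- tmse_d <- numeric(R)
for (r in 1:R) {
Y <- rnorm(G, mean = mu, sd = sigma)
shrink <- max(0, 1 - (G - 2)/sum(Y^2/sigma^2)) # positive-part JS factor (Section 3.1)
delta <- shrink*Y
wmse_Y[r] <- sum((Y - mu)^2/sigma^2) # inverse-variance weighted loss
wmse_d[r] <- sum((delta - mu)^2/sigma^2)
tmse_Y[r] <- sum((Y - mu)^2) # unweighted (total) loss
tmse_d[r] <- sum((delta - mu)^2)
}
c(WMSE_Y = mean(wmse_Y), WMSE_delta = mean(wmse_d)) # the WMSE of Y is close to G
c(TMSE_Y = mean(tmse_Y), TMSE_delta = mean(tmse_d))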
Below, we introduce our R package, which can compute several improved estimators suggested by [20]. The goal of our package is to compute $\boldsymbol{\delta}(\mathbf{y}) \equiv (\delta_1(\mathbf{y}), \ldots, \delta_G(\mathbf{y}))$ from $\mathbf{y}$, the realized value of the random vector $\mathbf{Y} \equiv (Y_1, \ldots, Y_G)$.

3. R Package meta.shrinkage

This section introduces our proposed R package meta.shrinkage, which can compute seven improved estimators for individual means, denoted as $\delta^{JS}$, $\delta^{JS+}$, $\delta^{RML}$, $\delta^{RJS}$, $\delta^{RJS+}$, $\delta^{PT}$, and $\delta^{GPT}$. We divide our explanations into four subsections: Section 3.1 for $\delta^{JS}$ and $\delta^{JS+}$, Section 3.2 for $\delta^{RML}$, Section 3.3 for $\delta^{RJS}$ and $\delta^{RJS+}$, and Section 3.4 for $\delta^{PT}$ and $\delta^{GPT}$.
Before embarking on the details, we explain the basic variables used in the package. Let $y_i$ be an estimate of $\mu_i$, regarded as a realization of $Y_i \sim N(\mu_i, \sigma_i^2)$ with a known variance $\sigma_i^2 > 0$, for $i = 1, 2, \ldots, G$. Thus, one needs to prepare $\{ (y_i, \sigma_i) ; i = 1, 2, \ldots, G \}$ to perform a meta-analysis. Accordingly, the input variables in the R console are two vectors:
  • y: a vector of the $y_i$ values;
  • s: a vector of the $\sigma_i$ values.
Below is an example of the input variables for a dataset with $G = 14$ in the R console (console screenshot omitted).
This dataset comes from the data analysis of [20], in which the gastric cancer data [28] were analyzed. The $y_i$ values are Cox regression estimates for the effect of chemotherapy on disease-free survival (DFS) for gastric cancer patients, and the $\sigma_i$ values are their standard errors (SEs).
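As a minimal sketch of how such input would be prepared (the numbers below are hypothetical placeholders, not the actual gastric cancer estimates and SEs from [20,28]):
#install.packages("meta.shrinkage")
library(meta.shrinkage)
# hypothetical estimates (e.g., log hazard ratios) and SEs for G = 14 studies
y <- c(-0.41, 0.05, -0.27, 0.14, -0.62, 0.33, -0.09, -0.50, 0.21, -0.35, 0.02, -0.18, 0.44, -0.73)
s <- c(0.22, 0.31, 0.18, 0.25, 0.40, 0.29, 0.20, 0.35, 0.27, 0.24, 0.19, 0.33, 0.38, 0.26)
length(y) # G = 14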

3.1. James–Stein Estimator

The James–Stein (JS) estimator is defined as
$$ \boldsymbol{\delta}^{JS} \equiv (\delta_1^{JS}, \ldots, \delta_G^{JS}) \equiv \left( 1 - \frac{G - 2}{\sum_{i=1}^{G} Y_i^2 / \sigma_i^2} \right) \mathbf{Y}. $$
This estimator is a variant of the primitive JS estimator [29], which was derived under homogeneous variances ($\sigma_i = 1$ for all $i$). The JS estimator reduces the WMSE by shrinking the vector $\mathbf{Y}$ toward $\mathbf{0}$. The degree of shrinkage is determined by the factor $(G - 2) / (\sum_{i=1}^{G} Y_i^2 / \sigma_i^2)$, which typically takes a value between 0 (0% shrinkage) and 1 (100% shrinkage). In rare cases, it becomes greater than one (overshrinkage).
It was proven that $\boldsymbol{\delta}^{JS}$ has a smaller WMSE than $\mathbf{Y}$ when $G \ge 3$ [20]; that is, Equations (1) and (2) hold. Thus, $\boldsymbol{\delta}^{JS}$ is an improved estimator without any restriction.
The positive-part JS estimator can further reduce the WMSE by avoiding the overshrinkage phenomenon of $(G - 2) / (\sum_{i=1}^{G} Y_i^2 / \sigma_i^2) > 1$:
$$ \boldsymbol{\delta}^{JS+} \equiv (\delta_1^{JS+}, \ldots, \delta_G^{JS+}) \equiv \left( 1 - \frac{G - 2}{\sum_{i=1}^{G} Y_i^2 / \sigma_i^2} \right)^{+} \mathbf{Y}, $$
where $(a)^{+} \equiv \max(0, a)$.
In our R package, the function “js(.)” can compute $\delta^{JS}$ and $\delta^{JS+}$.
Below is the usage in the R console (console screenshot omitted).
The values in the JS column are the shrunken values of y. The JS and JS_plus columns are identical because there is no overshrinkage.
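A minimal usage sketch, reusing the hypothetical y and s vectors defined above (so the resulting numbers are illustrative, not the gastric cancer results):
fit <- js(y, s) # returns the JS and positive-part JS estimates
fit # contains the columns JS and JS_plus
fit$JS # shrunken versions of y
fit$JS_plus # equal to fit$JS unless overshrinkage occurs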

3.2. Restricted Maximum Likelihood Estimators under Ordered Means

We consider the restricted maximum likelihood (RML) estimator when the individual means are ordered. Without loss of generality, we consider the increasing order $\mu_1 \le \cdots \le \mu_G$. This means that $\boldsymbol{\mu}$ belongs to $\{ (\mu_1, \ldots, \mu_G) : \mu_1 \le \cdots \le \mu_G \}$. If there are such parameter constraints, they should be incorporated into the estimators in order to improve the estimation accuracy [20].
The RML estimator, which satisfies $\delta_1^{RML} \le \cdots \le \delta_G^{RML}$, is calculated by
$$ \delta_i^{RML} = \max_{1 \le s \le i} \; \min_{i \le t \le G} \; \frac{\sum_{j=s}^{t} Y_j}{t - s + 1}. $$
The above formula is computed by the pool-adjacent violators algorithm (PAVA), which requires the “average-min-max” steps (Figure 1). We developed an R function “rml(.)” in our R package to perform the matrix-based computation of Figure 1. This R function first computes all the elements of the matrix (Figure 1) and then applies the command “max(apply(z, 1, min))”, where “1” indicates that “min” is applied to each row of the matrix “z”. This yields easy-to-understand R code.
To see why the matrix-based computation (Figure 1) is necessary, we give an example of calculating $\delta_2^{RML}$ from a dataset $\mathbf{Y} = (Y_1, Y_2, Y_3)$. Setting $i = 2$ and $G = 3$ in Figure 1, the calculation of $\delta_2^{RML}$ proceeds as follows:
$$ \begin{bmatrix} \dfrac{Y_1 + Y_2}{2} & \dfrac{Y_1 + Y_2 + Y_3}{3} \\ Y_2 & \dfrac{Y_2 + Y_3}{2} \end{bmatrix} \;\to\; \begin{bmatrix} \min\!\left( \dfrac{Y_1 + Y_2}{2}, \dfrac{Y_1 + Y_2 + Y_3}{3} \right) \\ \min\!\left( Y_2, \dfrac{Y_2 + Y_3}{2} \right) \end{bmatrix} \;\to\; \max\!\left\{ \min\!\left( \frac{Y_1 + Y_2}{2}, \frac{Y_1 + Y_2 + Y_3}{3} \right), \min\!\left( Y_2, \frac{Y_2 + Y_3}{2} \right) \right\}. $$
The matrix in the preceding formula holds four sub-averages of $(Y_1, Y_2, Y_3)$, including $Y_2$ itself. Any one of the four components of the matrix can be $\delta_2^{RML}$. Hence, the matrix is necessary as well as sufficient to calculate $\delta_2^{RML}$. Moreover, matrices are easy to manipulate in R.
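The matrix-based computation generalizes to any index $i$ as in the following sketch of the average-min-max idea (an illustration only, not the source code of “rml(.)”):
# sketch: average-min-max computation of delta_i^RML for one index i
# z[s, t] stores the sub-average of Y[s:t] over 1 <= s <= i (rows) and i <= t <= G (columns)
rml_one <- function(Y, i) {
G <- length(Y)
z <- matrix(NA, nrow = i, ncol = G - i + 1)
for (s in 1:i) {
for (t in i:G) {
z[s, t - i + 1] <- mean(Y[s:t])
}
}
max(apply(z, 1, min)) # "min" over each row, then "max" over the rows
}
Y <- c(0.8, 0.2, 1.1) # hypothetical data with one ordering violation
sapply(seq_along(Y), function(i) rml_one(Y, i)) # isotonic fit: 0.5 0.5 1.1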
The RML estimator $\boldsymbol{\delta}^{RML} \equiv (\delta_1^{RML}, \ldots, \delta_G^{RML})$ gives a smaller WMSE than $\mathbf{Y}$ [20].
For theories and applications of the PAVA, we refer to [30,31,32,33,34]. We decided not to use the “pava(.)” function available in the R package “Iso” [33] to ensure the independence of our package from others. Nonetheless, we checked that “rml(.)” and “pava(.)” gave numerically identical results.
So far, we have assumed that the studies (i.e., $i = 1, 2, \ldots, G$) are ordered so that $\mu_1 \le \cdots \le \mu_G$. However, the studies in a raw dataset may be arbitrarily ordered, and hence, one needs to find covariates to order the studies. For instance, one can use publication years if the $\mu_i$ values increase with them. More generally, we assume that there exists an increasing sequence of covariates ($x_1 < \cdots < x_G$) or a decreasing sequence of covariates ($x_1 > \cdots > x_G$) that achieves the order $\mu_1 \le \cdots \le \mu_G$.
In our R package, the function “rml(.)” can compute $\delta^{RML}$. The function allows users to enter covariates when the studies are not ordered. For instance, we enter the estimates (y), SEs (s), and the proportion of males (x) from the COVID-19 data with $G = 11$ [7,20] (console screenshot omitted).
In these data, the estimates (y) are the log risk ratios (RRs) calculated from two-by-two contingency tables examining the association between mortality and hypertension. As found in the previous studies [7,20], there was a decreasing sequence of the proportion of males ($x_1 > \cdots > x_{11}$) that could achieve the order $\mu_1 \le \cdots \le \mu_{11}$. Then, we have the following output (console screenshot omitted):
We see that the estimates are ordered, which is consistent with the prescribed order $\mu_1 \le \cdots \le \mu_{11}$. With the option “test=TRUE”, one can test whether the $\mu_i$ values are properly ordered by a sequence of covariates. Figure 2 shows the output, including the LOWESS plot and a correlation test based on Kendall’s tau via “cor.test(x,y,method=“kendall”)”. The test confirmed that the means were ordered by a “decreasing” sequence ($x_1 > \cdots > x_{11}$). We suggest using the 10% significance level to declare an increasing or decreasing trend; in meta-analyses, the 5% level is often too strict because the number of studies is limited.
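A minimal sketch of this usage (the values below are hypothetical and do not reproduce the COVID-19 data; the exact output layout follows the package documentation):
# hypothetical log risk ratios (y) and a covariate (x, proportion of males) for 11 studies;
# x is inversely associated with y, mimicking the COVID-19 example
y <- c(0.9, 1.4, 0.3, 1.1, 0.6, 0.2, 1.7, 0.8, 0.5, 1.2, 0.4)
x <- c(0.55, 0.49, 0.70, 0.52, 0.63, 0.72, 0.45, 0.58, 0.66, 0.50, 0.71)
# the covariate x is entered so that the studies can be ordered (see above);
# test = TRUE additionally reports Kendall's tau between x and y (cf. Figure 2)
fit <- rml(y, x = x, test = TRUE)
fit$RML # order-restricted estimates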

3.3. Shrinkage Estimators under Ordered Means

The estimator introduced above (in Section 3.2) can be improved by JS-type shrinkage. Based on the idea of Chang [35], Taketomi et al. [20] proposed a JS-type estimator under the following order restriction:
$$ \boldsymbol{\delta}^{RJS} \equiv \left( 1 - \frac{G - 2}{\sum_{i=1}^{G} Y_i^2 / \sigma_i^2} \right) \mathbf{Y} \, I(Y_1 \le \cdots \le Y_G) + \boldsymbol{\delta}^{RML} \, \{ 1 - I(Y_1 \le \cdots \le Y_G) \}, $$
where “RJS” stands for “restricted JS”, and $I(\cdot)$ is the indicator function with $I(A) = 1$ if $A$ is true and $I(A) = 0$ otherwise. Note that $\boldsymbol{\delta}^{RJS}$ has a smaller WMSE than $\boldsymbol{\delta}^{RML}$ [20,35], meaning that $\boldsymbol{\delta}^{RJS}$ gives more precise estimates than $\boldsymbol{\delta}^{RML}$.
The RJS estimator can be further corrected by the following positive-part RJS estimator:
$$ \boldsymbol{\delta}^{RJS+} \equiv \left( 1 - \frac{G - 2}{\sum_{i=1}^{G} Y_i^2 / \sigma_i^2} \right)^{+} \mathbf{Y} \, I(Y_1 \le \cdots \le Y_G) + \boldsymbol{\delta}^{RML} \, \{ 1 - I(Y_1 \le \cdots \le Y_G) \}. $$
Consequently, $\boldsymbol{\delta}^{RJS+}$ has a smaller WMSE than $\boldsymbol{\delta}^{RJS}$ [20].
In our R package, the function “rjs(.)” can compute $\delta^{RJS}$ and $\delta^{RJS+}$, possibly with the aid of covariates.
For instance, one can analyze the COVID-19 data as follows (console screenshot omitted):
In the above commands, the input includes the optional argument “id”, which labels the 11 studies by their leading authors and publication years. We did so simply to make the output more informative. If this argument is not supplied, the output labels the studies by the ordered sequence 1, 2, …, 11, as shown in Section 3.2.
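A corresponding sketch for “rjs(.)”, continuing the hypothetical y and x vectors from the previous sketch and adding illustrative SEs and labels:
s <- rep(0.3, length(y)) # hypothetical standard errors
id <- paste0("Author", 1:11, " (20", 10:20, ")") # purely illustrative study labels
fit <- rjs(y, s, x = x, id = id)
fit$RJS # restricted JS estimates
fit$RJS_plus # positive-part restricted JS estimates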

3.4. Estimators under Sparse Means

We now consider discrete shrinkage schemes obtained by pre-testing $H_0: \mu_i = 0$ vs. $H_1: \mu_i \ne 0$ for $i = 1, 2, \ldots, G$. The idea was proposed by Bancroft [36], who developed pretest estimators (see also more recent works [20,37,38,39,40,41,42,43]). In the meta-analytic context, Taketomi et al. [20] adopted the general pretest (GPT) estimator of Shih et al. [41], which is defined as follows:
$$ \delta_i^{GPT} = Y_i \, I\!\left( \left| \frac{Y_i}{\sigma_i} \right| > z_{\alpha_1/2} \right) + q \, Y_i \, I\!\left( z_{\alpha_2/2} < \left| \frac{Y_i}{\sigma_i} \right| \le z_{\alpha_1/2} \right), \qquad i = 1, 2, \ldots, G. $$
Here, $0 \le \alpha_1 \le \alpha_2 \le 1$, $0 < q < 1$, and $z_p$ is the upper $p$th quantile of $N(0, 1)$ for $0 < p < 1$. To implement the GPT estimator, the values of $\alpha_1$, $\alpha_2$, and $q$ must be chosen. For any values of $\alpha_1$ and $\alpha_2$, as well as a function $q$, one can show that $\boldsymbol{\delta}^{GPT} \equiv (\delta_1^{GPT}, \ldots, \delta_G^{GPT})$ has smaller WMSE and TMSE values than $\mathbf{Y}$, provided $\boldsymbol{\mu} \approx \mathbf{0}$ [20].
One may choose $q = 1/2$ (50% shrinkage), as suggested by [20,41]. To facilitate the interpretability of the pretests, one may choose $\alpha_1 = 0.05$ (5% level) and $\alpha_2 = 0.10$ (10% level). The resultant estimator is
$$ \delta_i^{GPT} = Y_i \, I\!\left( \left| \frac{Y_i}{\sigma_i} \right| > 1.96 \right) + q \, Y_i \, I\!\left( 1.645 < \left| \frac{Y_i}{\sigma_i} \right| \le 1.96 \right), \qquad i = 1, 2, \ldots, G. $$
The special case of $\alpha_1 = \alpha_2 = \alpha = 0.05$ leads to the usual pretest (PT) estimator
$$ \delta_i^{PT} = Y_i \, I\!\left( \left| \frac{Y_i}{\sigma_i} \right| > 1.96 \right). $$
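To make the thresholds concrete, here is a small base-R sketch that evaluates the GPT and PT formulas directly on hypothetical inputs (in practice, the package function “gpt(.)” introduced below should be used):
# direct evaluation of the GPT and PT formulas (sketch only, hypothetical data)
gpt_manual <- function(y, s, alpha1 = 0.05, alpha2 = 0.10, q = 0.5) {
z1 <- qnorm(1 - alpha1/2) # 1.96 for alpha1 = 0.05
z2 <- qnorm(1 - alpha2/2) # 1.645 for alpha2 = 0.10
t <- abs(y/s) # standardized statistics
GPT <- y*(t > z1) + q*y*(t > z2 & t <= z1)
PT <- y*(t > z1)
data.frame(GPT = GPT, PT = PT)
}
y <- c(2.1, 0.5, -1.8, 0.1) # hypothetical estimates
s <- c(1.0, 0.4, 1.0, 0.3) # hypothetical standard errors
gpt_manual(y, s) # the third study is shrunken by 50% under GPT and set to 0 under PT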
In our R package, the function “gpt(.)” can compute $\delta^{PT}$ and $\delta^{GPT}$. The significance levels and the shrinkage parameter are chosen flexibly through the following arguments:
  • alpha1: significance level $\alpha_1$ (0 < alpha1 < 1);
  • alpha2: significance level $\alpha_2$ (0 < alpha2 < 1);
  • q: degree of shrinkage $q$ (0 < q < 1).
If users do not specify them, the default values $\alpha_1 = 0.05$, $\alpha_2 = 0.10$, and $q = 0.5$ are used. The following is an example (console screenshot omitted):
The output shows that $\delta^{PT}$ and $\delta^{GPT}$ lead to 0%, 50%, or 100% shrinkage of $\mathbf{Y}$. The estimates of $\delta^{PT}$ yield 0 for 12 studies, and the estimates of $\delta^{GPT}$ yield 0 for 10 studies.
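For completeness, a minimal usage sketch of “gpt(.)” with hypothetical inputs (the counts quoted above refer to the gastric cancer data, not to this toy example):
y <- c(2.1, 0.5, -1.8, 0.1) # hypothetical estimates
s <- c(1.0, 0.4, 1.0, 0.3) # hypothetical standard errors
fit <- gpt(y, s) # defaults: alpha1 = 0.05, alpha2 = 0.10, q = 0.5
fit$PT # pretest estimates (0% or 100% shrinkage)
fit$GPT # general pretest estimates (0%, 50%, or 100% shrinkage)
gpt(y, s, alpha1 = 0.10, alpha2 = 0.20, q = 0.3) # user-specified levels and shrinkage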

4. Simulation: Validating the R Package

This technical section is devoted to the numerical verification of the proposed package via Monte Carlo simulation experiments. Users of the package may skip this section.
We conducted simulations to investigate the operating performance of the seven estimators implemented in the proposed R package (Section 3). Our simulation design added new scenarios to the original ones in [20]. Hence, the simulation not only added new knowledge on the performance of the seven estimators, but it also validated the proposed R package.

4.1. Simulation Design

We considered the following six scenarios for the true parameters $\boldsymbol{\mu} = (\mu_1, \ldots, \mu_G)$:
Scenario (a):
Ordered and non-sparse: $\boldsymbol{\mu} = (-2, -2, -1, -1, 0, 0, 1, 1, 2, 2)$;
Scenario (b):
Ordered and sparse: $\boldsymbol{\mu} = (0, 0, 0, 0, 0, 0, 0, 0, 2, 4)$;
Scenario (c):
Unordered and non-sparse: $\boldsymbol{\mu} = (1, -1, 1, -1, 1, -1, 1, -1, 1, -1)$;
Scenario (d):
Unordered and sparse: $\boldsymbol{\mu} = (0, 0, 0, 1, 0, 0, 0, 1, 0, 0)$;
Scenario (e):
Ordered and non-sparse: $\boldsymbol{\mu} = (2, 2, 1, 1, 0, 0, -1, -1, -2, -2)$;
Scenario (f):
Ordered and sparse: $\boldsymbol{\mu} = (0, 0, 0, 0, 0, 0, 0, 0, -2, -4)$.
These scenarios were not considered by our previous simulation studies [20].
We generated normally distributed data $Y_i \sim N(\mu_i, \sigma_i^2)$, where $\sigma_i^2 \sim \chi^2_{df=1}/4$ was restricted (truncated) to $\sigma_i^2 \in [0.009, 0.6]$ for $i = 1, 2, \ldots, G$, as previously considered [20,44]. Using the data $\mathbf{Y} \equiv (Y_1, \ldots, Y_G)$, we applied the proposed R package meta.shrinkage to compute $\delta^{JS}$, $\delta^{JS+}$, $\delta^{RML}$, $\delta^{RJS}$, $\delta^{RJS+}$, $\delta^{PT}$, and $\delta^{GPT}$ for estimating $\boldsymbol{\mu}$. We examined how these estimators improved upon the standard estimator $\mathbf{Y}$ in terms of the TMSE and WMSE.
Our simulations were based on 10,000 repetitions using $\mathbf{Y}^{(r)}$, where $1 \le r \le 10{,}000$. Let $\boldsymbol{\delta}(\mathbf{Y}^{(r)}) = (\delta_1(\mathbf{Y}^{(r)}), \ldots, \delta_G(\mathbf{Y}^{(r)}))$ be one of the seven estimators in the $r$th repetition. To assess the TMSE, we computed its Monte Carlo average:
$$ \mathrm{TMSE} \equiv \frac{1}{10{,}000} \sum_{r=1}^{10{,}000} \left[ \sum_{i=1}^{G} \left( \delta_i(\mathbf{Y}^{(r)}) - \mu_i \right)^2 \right]. $$
As the TMSE and WMSE gave the same conclusion, we reported on the former.
Appendix A provides the R code for the simulations, which can reproduce the results of the following section.

4.2. Simulation Results

Figure 3 compares the estimators $\mathbf{Y}$, $\delta^{JS}$, $\delta^{JS+}$, $\delta^{RML}$, $\delta^{RJS}$, $\delta^{RJS+}$, $\delta^{PT}$, and $\delta^{GPT}$.
In Scenarios (a) and (e), the smallest TMSE values were achieved by $\delta^{RML}$, $\delta^{RJS}$, and $\delta^{RJS+}$, which appropriately accounted for the ordered means. Thus, these ordered-mean estimators provided some advantages over the standard estimator $\mathbf{Y}$. Here, users needed to specify the option “decreasing = FALSE” (Scenario (a)) or “decreasing = TRUE” (Scenario (e)) to capture the true ordering of the means. On the other hand, $\delta^{PT}$ and $\delta^{GPT}$ produced unreasonably large TMSE values, since they wrongly imposed the sparse mean assumptions.
In Scenario (b), the smallest TMSE values were attained by $\delta^{PT}$, followed by $\delta^{GPT}$, as they took advantage of the sparse means. The TMSE values for $\delta^{RML}$, $\delta^{RJS}$, and $\delta^{RJS+}$ were also small by accounting for the ordered means. Hence, these pretest and restricted estimators produced significant advantages over the standard estimator $\mathbf{Y}$.
In Scenario (c), $\delta^{JS}$ and $\delta^{JS+}$ performed the best, but their advantage over $\mathbf{Y}$ was modest. In this scenario, $\delta^{RML}$, $\delta^{RJS}$, and $\delta^{RJS+}$ produced quite large TMSE values and performed the worst, since they wrongly assumed the ordered means. Additionally, $\delta^{PT}$ and $\delta^{GPT}$ produced large TMSE values since they wrongly assumed the sparse means. This was the only scenario where the standard estimator $\mathbf{Y}$ was enough.
In Scenario (d), the smallest TMSE values were attained by $\delta^{GPT}$, as it captured the sparse means. The performance of $\delta^{PT}$, $\delta^{JS}$, and $\delta^{JS+}$ was also good. On the other hand, $\delta^{RML}$, $\delta^{RJS}$, and $\delta^{RJS+}$ gave large TMSE values due to the unordered means.
In summary, our simulations demonstrated that the seven estimators implemented in the proposed R package exhibited the desired operating characteristics. If the true means were ordered, the restricted estimators ($\delta^{RML}$, $\delta^{RJS}$, and $\delta^{RJS+}$) showed definite advantages over the standard estimator $\mathbf{Y}$. In addition, the pretest estimators ($\delta^{PT}$ and $\delta^{GPT}$) produced the best performance under the sparse means. Finally, the JS estimators ($\delta^{JS}$ and $\delta^{JS+}$) modestly but uniformly improved upon $\mathbf{Y}$ across all the scenarios.
We therefore conclude that there are good reasons to apply the proposed R package to estimate $\boldsymbol{\mu}$ in order to improve the accuracy of estimation.

5. Data Example

This section analyzes a dataset to illustrate the methods in the proposed package, demonstrating their possible advantages over the standard meta-analysis. Appendix B provides the R code, which can reproduce the following results.
We used the blood pressure dataset containing $G = 10$ studies, where each study examined the effect of a treatment to reduce blood pressure. The dataset is available in the R package mvmeta (https://CRAN.R-project.org/package=mvmeta, accessed on 14 November 2021). Each study provided the treatment’s effect estimate on the systolic blood pressure (SBP) and the treatment’s effect estimate on the diastolic blood pressure (DBP), as shown in Table 1. In the following analysis, we focus on the treatment’s effect estimates for the SBP and regard those for the DBP as covariates.
We aimed to improve the individual treatment effects on the SBP by the methods in the R package meta.shrinkage. For this purpose, we utilized the covariate information to implement meta-analyses under ordered means (Section 3.2 and Section 3.3).
To apply the proposed methods, we changed the order of the 10 studies to the increasing order of the covariates (see the Covariate column of Table 2). Under this order, we defined $Y_i$, $i = 1, \ldots, 10$, as the treatment effect estimates on the SBP (see the Y column of Table 2). Table 2 shows a good concordance between $Y$ and the covariates; the smallest covariate (−7.87) yielded the smallest outcome ($Y_1 = -17.93$), and the largest covariate (−2.08) yielded the largest outcome ($Y_{10} = -6.55$). However, the values of $Y_i$ were not perfectly ordered.
We therefore considered the order-restricted estimators ($\delta^{RML}$, $\delta^{RJS}$, and $\delta^{RJS+}$) by imposing the assumption that the true treatment effects were ordered. Table 2 shows that these restricted estimators satisfied $\delta_1 \le \cdots \le \delta_{10}$ (see the $\delta^{RML}$, $\delta^{RJS}$, and $\delta^{RJS+}$ columns of Table 2). Since it was reasonable to impose a concordance between the treatment effects on the SBP and DBP, these estimators may be advantageous over the standard estimates ($Y$). In this data example, the JS estimators ($\delta^{JS}$ and $\delta^{JS+}$) were almost identical to the standard estimates $Y$. Hence, there would be little advantage to the JS shrinkage here.
Using the “rml(.)” function, we tested whether the $\mu_i$ values were ordered by a sequence of covariates. Figure 4 shows the correlation-based test and the LOWESS plot. The test confirmed that the means were ordered by an increasing sequence ($x_1 < \cdots < x_{10}$), with a highly significant p-value (p = 0.002). Therefore, the assumption of ordered means was supported.

6. Conclusions and Future Extensions

This article introduced an R package meta.shrinkage (https://CRAN.R-project.org/package=meta.shrinkage (accessed on 15 November 2021)), which we made freely available on CRAN. It was first released on 19 November 2021 (version 0.1.0), following our original methodological article published on 20 October 2021 [20]. We hope that the timely release of our package facilitates the appropriate use of the proposed methods for interested readers. As the precision and reliability of the developed statistical methods are important, we conducted extensive simulation studies to validate the proposed R package (Section 4). We also analyzed a data example to show the ability of the R package (Section 5).
To implement isotonic regression in our R package, we proposed a matrix-based algorithm for the PAVA (Figure 1). This algorithm is easy to program in the R environment, where matrices are convenient to manipulate. However, if one tries to implement the PAVA in other programming environments, the matrix-based algorithm may be inefficient, especially for a meta-analysis with a very large number of studies; for G > 500, the proposed algorithm becomes slow. That said, real meta-analyses rarely have more than 100 studies, so this issue may not arise at the practical level.
An extension of the present R package to multiple responses is an important research topic. An example is a meta-analysis of verbal and math test scores [45], consisting of two responses. A similar instance involves math and statistics tests [44,46]. There is much room for meta-analyzing bivariate and multivariate responses [47,48,49,50,51,52,53,54,55,56]. Multivariate shrinkage estimators of multivariate restricted and unrestricted normal means, such as those in [57,58,59,60], can be considered for this extension.
The proposed R package can only handle normally distributed data; it cannot handle data that are non-normally distributed, asymmetrically distributed, or discrete-valued. To analyze such data, one should consider extensions of the meta-analysis methods in [20] toward asymmetric distributions for skewed or discrete response variables. Shrinkage and pretest estimators exist, such as [61] for the exponential distribution, [62] for the gamma distribution, and [63,64] for the Poisson distribution. Thus, meta-analytical applications of shrinkage estimators to asymmetric or non-normal models are relevant research directions.
We leave for future work a comparison of the estimators in our package with Bayesian random-effects meta-analyses. For instance, the performance of the proposed estimators could be compared with the Bayesian posterior mean estimators, which can be computed with the bayesmeta package [15]. However, one needs to specify the prior mean and prior standard deviation to perform the Bayesian meta-analyses. Therefore, it remains unclear how to perform a fair comparison between the Bayesian and non-Bayesian estimators, as found in [65,66]. A carefully designed comparative study will be helpful for guiding users in applying the two R packages meta.shrinkage and bayesmeta.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/a15010026/s1.

Author Contributions

Conceptualization, N.T. and T.E.; methodology, N.T. and T.E.; data curation, N.T. and T.E.; writing, N.T., T.E., H.M. and Y.-T.C.; supervision, T.E., H.M. and Y.-T.C.; funding acquisition, H.M. and Y.-T.C. All authors have read and agreed to the published version of the manuscript.

Funding

Michimae H. is financially supported by JSPS KAKENHI Grant Number JP21K12127. Chang Y.T. is financially supported by JSPS KAKENHI Grant Numbers JP26330047 and JP18K11196.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All the numerical results of the article are fully reproducible by the R code in the main text and appendices. The proposed R package is downloadable online from CRAN (https://CRAN.R-project.org/package=meta.shrinkage (accessed on 15 November 2021)) or from the “tar.gz” file from the article’s Supplementary Material.

Acknowledgments

The authors thank the two referees for their valuable suggestions that improved the article. We thank Alina Chen from the Algorithms Editorial Office for her offer to publish free of APC.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. R Code for Simulations

#install.packages("meta.shrinkage")
library(meta.shrinkage)

R = 10000
Sa = "Scenario (a): True means: (-2,-2,-1,-1,0,0,1,1,2,2); Ordered & Non-sparse"
Sb = "Scenario (b): True means: (0,0,0,0,0,0,0,0,5,10); Ordered & Sparse"
Sc = "Scenario (c): True means: (1,-1,1,-1,1,-1,1,-1,1,-1); Unordered & Non-sparse"
Sd = "Scenario (d): True means: (0,0,0,1,0,0,0,1,0,0); Unordered & Sparse"
Se = "Scenario (e): True means: (2,2,1,1,0,0,-1,-1,-2,-2); Ordered & Non-sparse"
Sf = "Scenario (f): True means: (0,0,0,0,0,0,0,0,-2,-4); Ordered & Sparse"

Mu = c(-2,-2,-1,-1,0,0,1,1,2,2);nam = Sa
#Mu = c(0,0,0,0,0,0,0,0,5,10);nam = Sb
#Mu = c(1,-1,1,-1,1,-1,1,-1,1,-1);nam = Sc
#Mu = c(0,0,0,1,0,0,0,1,0,0);nam = Sd
#Mu = c(2,2,1,1,0,0,-1,-1,-2,-2);nam = Se
#Mu = c(0,0,0,0,0,0,0,0,-2,-4);nam = Sf
 
G = length(Mu)
Mu_y = Mu_js = Mu_jsp = matrix(NA,R,G)
Mu_rml = Mu_rjs = Mu_rjsp = matrix(NA,R,G)
Mu_pt = Mu_gpt = matrix(NA,R,G)
W = matrix(NA,R,G)
mu_y = mu_js = mu_jsp = rep(NA,R)
mu_rml = mu_rjs = mu_rjsp = rep(NA,R)
 
for(i in 1:R){
set.seed(i)
Chisq = rchisq(1000,df = 1)/4
s = sqrt(Chisq[(Chisq >= 0.009)&(Chisq <= 0.6)][1:G])
W[i,] = 1/s^2
y = rnorm(G,mean = Mu,sd = s)
Mu_y[i,] = y
Mu_js[i,] = js(y,s)$JS
Mu_jsp[i,] = js(y,s)$JS_plus
Mu_rml[i,] = rml(y)$RML
Mu_rjs[i,] = rjs(y,s)$RJS
Mu_rjsp[i,] = rjs(y,s)$RJS_plus
Mu_pt[i,] = gpt(y,s)$PT
Mu_gpt[i,] = gpt(y,s)$GPT
 
## for scenarios (e) and (f)
#Mu_rml[i,] = rev(rml(y,decreasing = TRUE)$RML)
#Mu_rjs[i,] = rev(rjs(y,s,decreasing = TRUE)$RJS)
#Mu_rjsp[i,] = rev(rjs(y,s,decreasing = TRUE)$RJS_plus)
 
}
 
Mu_mat = matrix(rep(Mu,R),nrow = R,ncol = G,byrow = TRUE)
TMSE_y = sum( colMeans((Mu_y-Mu_mat)^2) )
TMSE_js = sum( colMeans((Mu_js-Mu_mat)^2) )
TMSE_jsp = sum( colMeans((Mu_jsp-Mu_mat)^2) )
TMSE_rml = sum( colMeans((Mu_rml-Mu_mat)^2) )
TMSE_rjs = sum( colMeans((Mu_rjs-Mu_mat)^2) )
TMSE_rjsp = sum( colMeans((Mu_rjsp-Mu_mat)^2) )
TMSE_pt = sum( colMeans((Mu_pt-Mu_mat)^2) )
TMSE_gpt = sum( colMeans((Mu_gpt-Mu_mat)^2) )
TMSE = c(TMSE_y,TMSE_js,TMSE_jsp,TMSE_rml,
TMSE_rjs,TMSE_rjsp,TMSE_pt,TMSE_gpt)
 
barplot(TMSE,names.arg = c("Y","JS","JS+","RML","RJS","RJS+","PT","GPT"),
col = c("red","purple","blue","darkgreen","green",
"lightgreen","orange","brown"),
main = nam,xlab = "Estimator",ylab = "TMSE")

Appendix B. R Code for the Data Example

#install.packages("mvmeta")
library(mvmeta)
#install.packages("meta.shrinkage")
library(meta.shrinkage)
 
data(hyp)
dat<-hyp
 
#-------------------
# JS estimator and JS_plus estimator
#-------------------
JS<-js(dat$sbp,dat$sbp_se)
id<-c(2,8,3,6,4,5,7,10,1,9)
dat1<-data.frame("Y" = dat$sbp,"JS" = JS[,1],"JS_plus" = JS[,2],"id" = id)
dat1
dat2<-dat1[order(dat1$id,decreasing = T),]
 
#-------------------
# RML estimator
#-------------------
RML<-rml(dat$sbp,x = dat$dbp,id = dat$study,test = TRUE)
#-------------------
# RJS estimator and RJS+ estimator
#-------------------
RJS<-rjs(dat$sbp,dat$sbp_se,x = dat$dbp,id = dat$study)
 
 
res<-data.frame("Study" = RML$id,"x" = RML$x,"Y" = dat2$Y,"JS" = dat2$JS
,"JS_plus" = dat2$JS_plus,"RML" = RML$RML
,"RJS" = RJS$RJS,"RJS_plus" = RJS$RJS_plus)
res

References

  1. Borenstein, M.; Hedges, L.V.; Higgins, J.P.; Rothstein, H.R. Introduction to Meta-Analysis; John Wiley & Sons: Hoboken, NJ, USA, 2011. [Google Scholar]
  2. Kaiser, T.; Menkhoff, L. Financial education in schools: A meta-analysis of experimental studies. Econ. Educ. Rev. 2020, 78, 101930. [Google Scholar] [CrossRef] [Green Version]
  3. Leung, Y.; Oates, J.; Chan, S.P. Voice, articulation, and prosody contribute to listener perceptions of speaker gender: A systematic review and meta-analysis. J. Speech Lang. Hear. Res. 2018, 61, 266–297. [Google Scholar] [CrossRef] [PubMed]
  4. DerSimonian, R.; Laird, N. Meta-analysis in clinical trials revisited. Contemp. Clin. Trials 2015, 45, 139–145. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Fleiss, J.L. Review papers: The statistical basis of meta-analysis. Stat. Methods Med. Res. 1993, 2, 121–145. [Google Scholar] [CrossRef]
  6. Batra, K.; Singh, T.P.; Sharma, M.; Batra, R.; Schvaneveldt, N. Investigating the psychological impact of COVID-19 among healthcare workers: A meta-analysis. Int. J. Environ. Res. Public Health 2020, 17, 9096. [Google Scholar] [CrossRef]
  7. Pranata, R.; Lim, M.A.; Huang, I.; Raharjo, S.B.; Lukito, A.A. Hypertension is associated with increased mortality and severity of disease in COVID-19 pneumonia: A systematic review, meta-analysis and meta-regression. J. Renin-Angiotensin-Aldosterone Syst. 2020, 21, 1470320320926899. [Google Scholar] [CrossRef]
  8. Wang, Y.; Kala, M.P.; Jafar, T.H. Factors associated with psychological distress during the coronavirus disease 2019 (COVID-19) pandemic on the predominantly general population: A systematic review and meta-analysis. PLoS ONE 2020, 15, e0244630. [Google Scholar] [CrossRef]
  9. Rice, K.; Higgins, J.P.; Lumley, T. A re-evaluation of fixed effect(s) meta-analysis. J. R. Stat. Soc. Ser. A 2018, 181, 205–227. [Google Scholar] [CrossRef]
  10. Lehmann, E.L. Elements of Large-Sample Theory; Springer Science & Business Media: Berlin, Germany, 2010. [Google Scholar]
  11. Shinozaki, N.; Chang, Y.-T. Minimaxity of empirical Bayes estimators of the means of independent normal variables with unequal variances. Commun. Stat.-Theor. Methods 1993, 8, 2147–2169. [Google Scholar] [CrossRef]
  12. Shinozaki, N.; Chang, Y.-T. Minimaxity of empirical Bayes estimators shrinking toward the grand mean when variances are unequal. Commun. Stat.-Theor. Methods 1996, 25, 183–199. [Google Scholar] [CrossRef]
  13. Singh, H.P.; Vishwakarma, G.K. A family of estimators of population mean using auxiliary information in stratified sampling. Commun. Stat.-Theor. Methods 2008, 37, 1038–1050. [Google Scholar] [CrossRef]
  14. DerSimonian, R.; Laird, N. Meta-analysis in clinical trials. Control. Clin. Trials 1986, 7, 177–188. [Google Scholar] [CrossRef]
  15. Röver, C. Bayesian random-effects meta-analysis using the bayesmeta R package. J. Stat. Softw. 2020, 93, 1–51. [Google Scholar] [CrossRef]
  16. Raudenbush, S.W.; Bryk, A.S. Empirical bayes meta-analysis. J. Educ. Stat. 1985, 10, 75–98. [Google Scholar] [CrossRef]
  17. Schmid, C. Using bayesian inference to perform meta-analysis. Eval. Health Prof. 2001, 24, 165–189. [Google Scholar] [CrossRef] [PubMed]
  18. Röver, C.; Friede, T. Dynamically borrowing strength from another study through shrinkage estimation. Stat. Methods Med. Res. 2020, 29, 293–308. [Google Scholar] [CrossRef] [Green Version]
  19. Röver, C.; Friede, T. Bounds for the weight of external data in shrinkage estimation. Biom. J. 2021, 63, 1131–1143. [Google Scholar] [CrossRef]
  20. Taketomi, N.; Konno, Y.; Chang, Y.-T.; Emura, T. A Meta-Analysis for Simultaneously Estimating Individual Means with Shrinkage, Isotonic Regression and Pretests. Axioms 2021, 10, 267. [Google Scholar] [CrossRef]
  21. Shinozaki, N. A note on estimating the common mean of k normal distributions and the stein problem. Commun. Stat.-Theory Methods 1978, 7, 1421–1432. [Google Scholar] [CrossRef]
  22. Malekzadeh, A.; Kharrati-Kopaei, M. Inferences on the common mean of several normal populations under hetero-scedasticity. Comput. Stat. 2018, 33, 1367–1384. [Google Scholar] [CrossRef]
  23. Everitt, B. Modern Medical Statistics: A Practical Guide; Wiley: Hoboken, NJ, USA, 2003. [Google Scholar]
  24. Lin, L. Hybrid test for publication bias in meta-analysis. Stat. Methods Med. Res. 2020, 29, 2881–2899. [Google Scholar] [CrossRef]
  25. Lehmann, E.L.; Casella, G. Theory of Point Estimation, 2nd ed.; Springer: New York, NY, USA, 1998. [Google Scholar]
  26. Shao, J. Mathematical Statistics; Springer: New York, NY, USA, 2003. [Google Scholar]
  27. van der Pas, S.; Salomond, J.-B.; Schmidt-Hieber, J. Conditions for posterior contraction in the sparse normal means problem. Electron. J. Stat. 2016, 10, 976–1000. [Google Scholar] [CrossRef]
  28. GASTRIC (Global Advanced/Adjuvant Stomach Tumor Research International Collaboration) Group. Role of chemotherapy for advanced/recurrent gastric cancer: An individual-patient-data meta-analysis. Eur. J. Cancer 2013, 49, 1565–1577. [Google Scholar] [CrossRef] [PubMed]
  29. James, W.; Stein, C. Estimation with quadratic loss. In Breakthroughs in Statistics; Springer: New York, NY, USA, 1992; Volume 1, pp. 443–460. [Google Scholar]
  30. van Eeden, C. Restricted Parameter Space Estimation Problems; Springer: New York, NY, USA, 2006. [Google Scholar]
  31. Li, W.; Li, R.; Feng, Z.; Ning, J. Semiparametric isotonic regression analysis for risk assessment under nested case-control and case-cohort designs. Stat. Methods Med. Res. 2020, 29, 2328–2343. [Google Scholar] [CrossRef] [PubMed]
  32. Robertson, T.; Wright, F.T.; Dykstra, R. Order Restricted Statistical Inference; Wiley: Chichester, UK, 1988. [Google Scholar]
  33. Turner, R. Pava: Linear order isotonic regression, Cran. 2020. Available online: https://CRAN.R-project.org/package=Iso (accessed on 14 November 2021).
  34. Tsukuma, H. Simultaneous estimation of restricted location parameters based on permutation and sign-change. Stat. Pap. 2012, 53, 915–934. [Google Scholar] [CrossRef]
  35. Chang, Y.-T. Stein-Type Estimators for Parameters Restricted by Linear Inequalities; Faculty of Science and Technology, Keio University: Tokyo, Japan, 1981; Volume 34, pp. 83–95. [Google Scholar]
  36. Bancroft, T.A. On biases in estimation due to the use of preliminary tests of significance. Ann. Math. Stat. 1944, 15, 190–204. [Google Scholar] [CrossRef]
  37. Judge, G.G.; Bock, M.E. The Statistical Implications of Pre-Test and Stein-Rule Estimators in Econometrics; Elsevier: Amsterdam, The Netherlands, 1978. [Google Scholar]
  38. Khan, S.; Saleh, A.K.M.E. On the comparison of the pre-test and shrinkage estimators for the univariate normal mean. Stat. Pap. 2001, 42, 451–473. [Google Scholar] [CrossRef] [Green Version]
  39. Magnus, J.R. The traditional pretest estimator. Theory Probab. Its Appl. 2000, 44, 293–308. [Google Scholar] [CrossRef]
  40. Magnus, J.R.; Wan, A.T.; Zhang, X. Weighted average least squares estimation with nonspherical disturbances and an application to the Hong Kong housing market. Comput. Stat. Data Anal. 2011, 55, 1331–1341. [Google Scholar] [CrossRef]
  41. Shih, J.-H.; Konno, Y.; Chang, Y.-T.; Emura, T. A class of general pretest estimators for the univariate normal mean. Commun. Stat.-Theory Methods 2021. [Google Scholar] [CrossRef]
  42. Shih, J.-H.; Lin, T.-Y.; Jimichi, M.; Emura, T. Robust ridge M-estimators with pretest and Stein-rule shrinkage for an intercept term. Jpn. J. Stat. Data Sci. 2021, 4, 107–150. [Google Scholar] [CrossRef]
  43. Kibria, B.G.; Saleh, A.M.E. Optimum critical value for pre-test estimator. Commun. Stat.-Simul. Comput. 2006, 35, 309–319. [Google Scholar] [CrossRef]
  44. Shih, J.-H.; Konno, Y.; Chang, Y.-T.; Emura, T. Estimation of a common mean vector in bivariate meta-analysis under the FGM copula. Statistics 2019, 53, 673–695. [Google Scholar] [CrossRef]
  45. Gleser, L.J.; Olkin, L. Stochastically dependent effect sizes. In the Handbook of Research Synthesis; Russel Sage Foundation: New York, NY, USA, 1994. [Google Scholar]
  46. Shih, J.-H.; Konno, Y.; Chang, Y.-T.; Emura, T. Copula-based estimation methods for a common mean vector for bivariate meta-analyses. Symmetry 2021, in press. [Google Scholar]
  47. Emura, T.; Sofeu, C.L.; Rondeau, V. Conditional copula models for correlated survival endpoints: Individual patient data meta-analysis of randomized controlled trials. Stat. Methods Med. Res. 2021, 30, 2634–2650. [Google Scholar] [CrossRef] [PubMed]
  48. Mavridis, D.; Salanti, G.A. practical introduction to multivariate meta-analysis. Stat. Methods Med. Res. 2013, 22, 133–158. [Google Scholar] [CrossRef] [PubMed]
  49. Peng, M.; Xiang, L.; Wang, S. Semiparametric regression analysis of clustered survival data with semi-competing risks. Comput. Stat. Data Anal. 2018, 124, 53–70. [Google Scholar] [CrossRef]
  50. Peng, M.; Xiang, L. Correlation-based joint feature screening for semi-competing risks outcomes with application to breast cancer data. Stat. Methods Med. Res. 2021, 30, 2428–2446. [Google Scholar] [CrossRef]
  51. Riley, R.D. Multivariate meta-analysis: The effect of ignoring within-study correlation. J. R. Stat. Soc. Ser. A 2009, 172, 789–811. [Google Scholar] [CrossRef]
  52. Copas, J.B.; Jackson, D.; White, I.R.; Riley, R.D. The role of secondary outcomes in multivariate meta-analysis. J. R. Stat. Soc. Ser. C 2018, 67, 1177–1205. [Google Scholar] [CrossRef]
  53. Sofeu, C.L.; Emura, T.; Rondeau, V. A joint frailty-copula model for meta-analytic validation of failure time surrogate endpoints in clinical trials. BioMed. J. 2021, 63, 423–446. [Google Scholar] [CrossRef] [PubMed]
  54. Yamaguchi, Y.; Maruo, K. Bivariate beta-binomial model using Gaussian copula for bivariate meta-analysis of two binary outcomes with low incidence. Jpn. J. Stat. Data Sci. 2019, 2, 347–373. [Google Scholar] [CrossRef] [Green Version]
  55. Kawakami, R.; Michimae, H.; Lin, Y.-H. Assessing the numerical integration of dynamic prediction formulas using the exact expressions under the joint frailty-copula model. Jpn. J. Stat. Data Sci. 2021, 4, 1293–1321. [Google Scholar] [CrossRef]
  56. Nikoloulopoulos, A.K. A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence. Stat. Methods Med. Res. 2017, 26, 2270–2286. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  57. Karamikabir, H.; Afshari, M. Generalized Bayesian shrinkage and wavelet estimation of location parameter for spherical distribution under balance-type loss: Minimaxity and admissibility. J. Multivar. Anal. 2020, 177, 104583. [Google Scholar] [CrossRef]
  58. Bilodeau, M.; Kariya, T. Minimax estimators in the normal MANOVA model. J. Multivar. Anal. 1989, 28, 260–270. [Google Scholar] [CrossRef] [Green Version]
  59. Konno, Y. On estimation of a matrix of normal means with unknown covariance matrix. J. Multivar. Anal. 1991, 36, 44–55. [Google Scholar] [CrossRef] [Green Version]
  60. Karamikabir, H.; Afshari, M.; Lak, F. Wavelet threshold based on Stein’s unbiased risk estimators of restricted location parameter in multivariate normal. J. Appl. Stat. 2021, 48, 1712–1729. [Google Scholar] [CrossRef]
  61. Pandey, B.N. Testimator of the scale parameter of the exponential distribution using LINEX loss function. Commun. Stat.-Theory Methods 1997, 26, 2191–2202. [Google Scholar] [CrossRef]
  62. Vishwakarma, G.K.; Gupta, S. Shrinkage estimator for scale parameter of gamma distribution. Commun. Stat.-Simul. Comput. 2020. [Google Scholar] [CrossRef]
  63. Chang, Y.-T.; Shinozaki, N. New types of shrinkage estimators of Poisson means under the normalized squared error loss. Commun. Stat.-Theory Methods 2019, 48, 1108–1122. [Google Scholar] [CrossRef]
  64. Hamura, Y. Bayesian shrinkage approaches to unbalanced problems of estimation and prediction on the basis of negative multinomial samples. Jpn. J. Stat. Data Sci. 2021. [Google Scholar] [CrossRef]
  65. Soliman, A.-A.; Abd Ellah, A.H.; Sultan, K.S. Comparison of estimates using record statistics from Weibull model: Bayesian and non-Bayesian approaches. Comput. Stat. Data Anal. 2006, 51, 2065–2077. [Google Scholar] [CrossRef]
  66. Rehman, H.; Chandra, N. Inferences on cumulative incidence function for middle censored survival data with Weibull regression. Jpn. J. Stat. Data Sci. 2022. [Google Scholar] [CrossRef]
Figure 1. The schematic diagram for implementing the pool-adjacent violators algorithm (PAVA) in the R function “rml(.)” in our R package.
Figure 2. The LOWESS plot based on the estimates (y) and the proportions of males (x) from the COVID-19 data with 11 studies. Shown are Kendall’s tau and the test of no association between x and y.
Figure 3. Simulation results for the estimators $\mathbf{Y}$, $\delta^{JS}$, $\delta^{JS+}$, $\delta^{RML}$, $\delta^{RJS}$, $\delta^{RJS+}$, $\delta^{PT}$, and $\delta^{GPT}$. The comparison is based on the Monte Carlo average $\mathrm{TMSE} \equiv \sum_{r=1}^{10{,}000} [ \sum_{i=1}^{G} ( \delta_i(\mathbf{Y}^{(r)}) - \mu_i )^2 ] / 10{,}000$.
Figure 4. The LOWESS plot based on the treatment effect estimates on the SBP (y) and those on the DBP (x) from the blood pressure dataset with 10 studies. Shown are Kendall’s tau and the test of no association between x and y.
Table 1. The 10 studies from the blood pressure data. Each study provided the treatment’s effect on the systolic blood pressure (SBP) and the treatment’s effect on the diastolic blood pressure (DBP).
Study       Treatment Effect on SBP    SE       Treatment Effect on DBP    SE
Study 1     −6.66                      0.72     −2.99                      0.27
Study 2     −14.17                     4.73     −7.87                      1.44
Study 3     −12.88                     10.31    −6.01                      1.77
Study 4     −8.71                      0.30     −5.11                      0.10
Study 5     −8.70                      0.14     −4.64                      0.05
Study 6     −10.60                     0.58     −5.56                      0.18
Study 7     −11.36                     0.30     −3.98                      0.27
Study 8     −17.93                     5.82     −6.54                      1.31
Study 9     −6.55                      0.41     −2.08                      0.11
Study 10    −10.26                     0.20     −3.49                      0.04
Table 2. The treatment effect estimates on the SBP based on the 10 studies from the blood pressure data. The 10 studies are ordered by the covariates (treatment effect estimates on the DBP).
Study       Covariate    Y         δ^JS      δ^JS+     δ^RML     δ^RJS     δ^RJS+
Study 2     −7.87        −17.93    −17.91    −17.91    −16.05    −16.05    −16.05
Study 8     −6.54        −10.26    −10.25    −10.25    −16.05    −16.05    −16.05
Study 3     −6.01        −14.17    −14.16    −14.16    −12.88    −12.88    −12.88
Study 6     −5.56        −11.36    −11.35    −11.35    −10.6     −10.6     −10.6
Study 4     −5.11        −8.71     −8.70     −8.70     −9.76     −9.76     −9.76
Study 5     −4.64        −10.6     −10.59    −10.59    −9.76     −9.76     −9.76
Study 7     −3.98        −8.7      −8.69     −8.69     −9.76     −9.76     −9.76
Study 10    −3.49        −12.88    −12.87    −12.87    −9.76     −9.76     −9.76
Study 1     −2.99        −6.66     −6.65     −6.65     −6.66     −6.66     −6.66
Study 9     −2.08        −6.55     −6.54     −6.54     −6.55     −6.55     −6.55
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

