Appendix A
When performing Metropolis–Hastings sampling, the required proposal distributions and their variances are as follows, with reference to the methods proposed by Lee and Song [13] and Lee and Tang [7]. First, samples are drawn from . Specifically, is chosen as its proposal distribution, where , and is given by with . The implementation of the MH algorithm is as follows:
In the -th iteration, based on the current , a new candidate value is generated from the proposal distribution . The probability of accepting this new candidate is where the variance is adjusted to ensure the mean acceptance rate reaches roughly 0.25 or greater (Gelman et al. [29]).
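As a concrete illustration of the update just described, a single random-walk Metropolis–Hastings step with a normal proposal might be sketched as follows. This is a generic sketch, not the authors' exact implementation: `log_target` stands in for the log full conditional, and `sigma2` is the proposal variance that gets tuned for the acceptance rate.

```python
import numpy as np

def mh_step(current, log_target, sigma2, rng):
    """One random-walk Metropolis-Hastings update with a normal proposal
    centered at the current value (illustrative sketch; `log_target` is a
    placeholder for the log full-conditional density)."""
    candidate = rng.normal(current, np.sqrt(sigma2))
    # The proposal is symmetric, so the acceptance probability reduces to
    # min(1, target(candidate) / target(current)), computed on the log scale.
    log_alpha = log_target(candidate) - log_target(current)
    if np.log(rng.uniform()) < log_alpha:
        return candidate, True  # accept the candidate
    return current, False       # reject: keep the current value
```

Running many such steps and recording the fraction of accepted candidates gives the mean acceptance rate that the variance tuning targets.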
Second, samples are drawn from . is selected as its proposal distribution, with variance , where is a submatrix of , and is a subvector of , both containing the rows and columns corresponding to ; is the set of indices corresponding to the observed data . The selection criteria for and the sampling process apply as before.
Finally, samples are drawn from . is chosen as its proposal distribution, with variance . The selection criteria for and the sampling process apply as before.
The proposal distributions for the above three parameters and the calculation of their covariances both follow the article by Lee and Tang [7], and the average acceptance rates of the three parameters are all maintained between 0.25 and 0.4 by adjusting , , and .
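The acceptance-rate targeting described above can be automated during burn-in with a simple multiplicative rule. The thresholds 0.25 and 0.4 come from the text; the adjustment factor of 1.5 is an arbitrary illustrative choice, not the authors' scheme.

```python
def tune_sigma2(sigma2, accept_rate, lo=0.25, hi=0.40, factor=1.5):
    """Crude burn-in tuning rule (an assumption, not the paper's exact
    method): shrink the proposal variance when the observed acceptance
    rate is too low, inflate it when the rate is too high."""
    if accept_rate < lo:
        return sigma2 / factor  # smaller steps -> more acceptances
    if accept_rate > hi:
        return sigma2 * factor  # larger steps -> fewer acceptances
    return sigma2               # rate already in the target band
```

In practice the rule is applied every few hundred burn-in iterations, using the acceptance rate observed over the most recent block.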
The Gibbs sampling process and the posterior distribution of the unknown parameter are presented below, where the notation used follows that in Song and Lee [13].
Let , , , , , , , , and , and is the k-th diagonal element of the matrix .
Given , let denote the corresponding element of the matrix, i.e., , where and . The positions of fixed elements in are identified by the index matrix , whose elements are defined as follows:
Let , , and be a submatrix of where . Rows corresponding to are set to zero vectors, while rows with are retained. Meanwhile, let , where , and is the j-th element of . The Gibbs sampling process is as follows:
1. Updating :
For , it holds that
2. Updating :
Let . Then,
For and , it holds that
where and .
3. Updating :
To derive , we can refer to the method proposed by Lindley and Smith [30] for deriving posterior distributions. Specifically, let denote that the column vector follows a multivariate normal distribution with mean vector and positive semi-definite variance matrix . Given , , and given , , then the distribution of given is , where and . Similarly, we can derive the posterior distribution of . Specifically, given , , and , we have
Thus, for , it holds that
where , , , and ; the variance of the multivariate normal distribution is , and the mean is .
6. Updating :
Let . Then,
For , it holds that
7. Updating :
Similar to the posterior derivation of , given , , and , we have
Thus, we can conclude that
where , , , and ; the variance of the multivariate normal distribution is , and the mean is .
Table A1.
True values of parameter vector under different missing rates and cases ().
Missing Rate | Case | | | | | | | | | |
---|---|---|---|---|---|---|---|---|---|---|---
20% | 1 | −4.00 | 0.28 | 0.28 | 0.28 | 0.28 | 0.28 | 0.28 | 0.28 | 0.28 | 0.28 |
 | 3 | −3.80 | 0.19 | 0.19 | 0.19 | 0.19 | 0.19 | 0.19 | 0.19 | 0.19 | 0.19 |
 | 4 | −4.00 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 |
30% | 1 | −4.00 | 0.40 | 0.40 | 0.40 | 0.40 | 0.40 | 0.40 | 0.40 | 0.40 | 0.40 |
 | 3 | −4.00 | 0.27 | 0.27 | 0.27 | 0.27 | 0.27 | 0.27 | 0.27 | 0.27 | 0.27 |
 | 4 | −4.00 | 0.20 | 0.20 | 0.20 | 0.20 | 0.20 | 0.20 | 0.20 | 0.20 | 0.20 |
40% | 1 | −4.00 | 0.60 | 0.60 | 0.60 | 0.60 | 0.60 | 0.60 | 0.60 | 0.60 | 0.60 |
 | 3 | −4.00 | 0.36 | 0.36 | 0.36 | 0.36 | 0.36 | 0.36 | 0.36 | 0.36 | 0.36 |
 | 4 | −4.00 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 |
Table A2.
True values of parameter vector under different missing rates and cases ().
Missing Rate | Case | | | | | | | | | |
---|---|---|---|---|---|---|---|---|---|---|---
20% | 1 | −4.00 | 0.28 | 0.28 | 0.28 | 0.28 | 0.28 | 0.28 | 0.28 | 0.28 | 0.28 |
 | 3 | −3.80 | 0.18 | 0.18 | 0.18 | 0.18 | 0.18 | 0.18 | 0.18 | 0.18 | 0.18 |
 | 4 | −4.00 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 |
30% | 1 | −4.00 | 0.40 | 0.40 | 0.40 | 0.40 | 0.40 | 0.40 | 0.40 | 0.40 | 0.40 |
 | 3 | −3.80 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 |
 | 4 | −4.00 | 0.20 | 0.20 | 0.20 | 0.20 | 0.20 | 0.20 | 0.20 | 0.20 | 0.20 |
40% | 1 | −4.00 | 0.65 | 0.65 | 0.65 | 0.65 | 0.65 | 0.65 | 0.65 | 0.65 | 0.65 |
 | 3 | −3.80 | 0.34 | 0.34 | 0.34 | 0.34 | 0.34 | 0.34 | 0.34 | 0.34 | 0.34 |
 | 4 | −4.00 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 |
Table A3.
Bayesian estimates of regression coefficients in the structural equation with complete data.
| | | | |
Par | | Bias | RMS | Bias | RMS | Bias | RMS | Bias | RMS
---|---|---|---|---|---|---|---|---|---
| 0.1 | 0.0090 | 0.0145 | 0.0011 | 0.0030 | 0.0046 | 0.0076 | −0.0006 | 0.0033 |
| 0.5 | 0.0002 | 0.0093 | 0.0000 | 0.0023 | 0.0010 | 0.0043 | 0.0031 | 0.0042 |
| 0.9 | −0.0048 | 0.0113 | 0.0002 | 0.0037 | −0.0035 | 0.0068 | 0.0090 | 0.0101 |
| 0.1 | 0.0033 | 0.0082 | −0.0191 | 0.0193 | −0.0345 | 0.0348 | −0.0217 | 0.0219 |
| 0.5 | −0.0014 | 0.0070 | −0.0034 | 0.0041 | −0.0203 | 0.0206 | −0.0087 | 0.0090 |
| 0.9 | −0.0008 | 0.0082 | 0.0222 | 0.0228 | −0.0085 | 0.0108 | 0.0104 | 0.0121 |
| 0.1 | 0.0165 | 0.0182 | −0.0193 | 0.0195 | −0.0375 | 0.0378 | −0.0242 | 0.0244 |
| 0.5 | 0.0012 | 0.0071 | −0.0035 | 0.0041 | −0.0226 | 0.0229 | −0.0105 | 0.0108 |
| 0.9 | −0.0072 | 0.0110 | 0.0219 | 0.0222 | −0.0160 | 0.0173 | 0.0052 | 0.0087 |
| 0.1 | 0.0017 | 0.0058 | −0.0261 | 0.0263 | −0.0238 | 0.0241 | −0.0211 | 0.0213 |
| 0.5 | −0.0004 | 0.0053 | −0.0186 | 0.0188 | −0.0253 | 0.0255 | −0.0165 | 0.0166 |
| 0.9 | −0.0010 | 0.0059 | −0.0027 | 0.0062 | −0.0316 | 0.0323 | −0.0063 | 0.0080 |
| 0.1 | −0.0401 | 0.0410 | −0.0278 | 0.0282 | −0.0290 | 0.0297 | −0.0247 | 0.0251 |
| 0.5 | 0.0036 | 0.0088 | −0.0067 | 0.0075 | 0.0019 | 0.0058 | −0.0050 | 0.0060 |
| 0.9 | 0.0502 | 0.0509 | 0.0378 | 0.0389 | 0.0782 | 0.0788 | 0.0486 | 0.0496 |
| 0.1 | −0.0512 | 0.0519 | −0.0295 | 0.0299 | −0.0360 | 0.0366 | −0.0252 | 0.0255 |
| 0.5 | −0.0027 | 0.0080 | −0.0086 | 0.0092 | −0.0057 | 0.0080 | −0.0045 | 0.0063 |
| 0.9 | 0.0484 | 0.0493 | 0.0317 | 0.0325 | 0.0596 | 0.0606 | 0.0462 | 0.0477 |
Table A4.
Bayesian estimates of regression coefficients in the structural equation with complete data.
| | | | |
Par | | Bias | RMS | Bias | RMS | Bias | RMS | Bias | RMS
---|---|---|---|---|---|---|---|---|---
| 0.1 | 0.0131 | 0.0196 | 0.0090 | 0.0099 | 0.0245 | 0.0261 | 0.0050 | 0.0076 |
| 0.5 | 0.0021 | 0.0128 | 0.0076 | 0.0082 | 0.0191 | 0.0199 | 0.0080 | 0.0090 |
| 0.9 | 0.0056 | 0.0173 | 0.0126 | 0.0145 | 0.0162 | 0.0186 | 0.0101 | 0.0147 |
| 0.1 | 0.0490 | 0.0510 | −0.0471 | 0.0473 | −0.0306 | 0.0314 | 0.0229 | 0.0237 |
| 0.5 | 0.0060 | 0.0144 | −0.0114 | 0.0119 | −0.0643 | 0.0645 | −0.0297 | 0.0300 |
| 0.9 | −0.0211 | 0.0263 | 0.0470 | 0.0489 | 0.0572 | 0.0620 | 0.0074 | 0.0176 |
| 0.1 | 0.0368 | 0.0396 | −0.0486 | 0.0489 | −0.0360 | 0.0368 | 0.0218 | 0.0225 |
| 0.5 | 0.0004 | 0.0131 | −0.0143 | 0.0147 | −0.0717 | 0.0719 | −0.0221 | 0.0224 |
| 0.9 | −0.0192 | 0.0246 | 0.0397 | 0.0418 | 0.0318 | 0.0450 | 0.0228 | 0.0278 |
| 0.1 | 0.0058 | 0.0141 | 0.0350 | 0.0354 | 0.0166 | 0.0182 | 0.0300 | 0.0304 |
| 0.5 | 0.0013 | 0.0111 | −0.0509 | 0.0511 | −0.0591 | 0.0593 | −0.0539 | 0.0541 |
| 0.9 | −0.0016 | 0.0146 | 0.0024 | 0.0122 | −0.0280 | 0.0311 | −0.0354 | 0.0389 |
| 0.1 | −0.0297 | 0.0332 | 0.0301 | 0.0312 | −0.0165 | 0.0196 | 0.0266 | 0.0280 |
| 0.5 | 0.0026 | 0.0105 | −0.0258 | 0.0265 | −0.0171 | 0.0190 | −0.0100 | 0.0116 |
| 0.9 | 0.0481 | 0.0501 | −0.0027 | 0.0208 | −0.0434 | 0.0528 | 0.0582 | 0.0630 |
| 0.1 | −0.0363 | 0.0390 | 0.0347 | 0.0357 | 0.0011 | 0.0109 | 0.0166 | 0.0188 |
| 0.5 | 0.0016 | 0.0114 | −0.0217 | 0.0224 | −0.0007 | 0.0079 | −0.0081 | 0.0099 |
| 0.9 | 0.0447 | 0.0468 | 0.0083 | 0.0191 | −0.0082 | 0.0459 | 0.0549 | 0.0605 |
Table A5.
Bayesian estimates of other parameters with complete data when .
| | | |
Par | Bias | RMS | Bias | RMS | Bias | RMS
---|---|---|---|---|---|---
| 0.0028 | 0.0037 | 0.0028 | 0.0037 | 0.0029 | 0.0037 |
| 0.0013 | 0.0119 | −0.0007 | 0.0112 | −0.0024 | 0.0103 |
| 0.0000 | 0.0120 | −0.0020 | 0.0111 | −0.0038 | 0.0111 |
| 0.0285 | 0.0295 | 0.0285 | 0.0294 | 0.0281 | 0.0290 |
| 0.0158 | 0.0207 | 0.0156 | 0.0210 | 0.0142 | 0.0205 |
| 0.0169 | 0.0219 | 0.0169 | 0.0219 | 0.0151 | 0.0211 |
| −0.0193 | 0.0211 | −0.0191 | 0.0209 | −0.0191 | 0.0209 |
| −0.0063 | 0.0159 | −0.0055 | 0.0156 | −0.0010 | 0.0161 |
| −0.0028 | 0.0140 | −0.0022 | 0.0142 | 0.0021 | 0.0145 |
| −0.0596 | 0.0609 | −0.0445 | 0.0458 | 0.0393 | 0.0448 |
| −0.0603 | 0.0616 | −0.0452 | 0.0461 | 0.0384 | 0.0438 |
| 0.0571 | 0.0584 | 0.0179 | 0.0224 | 0.0192 | 0.0254 |
| 0.0594 | 0.0608 | 0.0192 | 0.0237 | 0.0210 | 0.0268 |
| 0.0369 | 0.0395 | 0.0022 | 0.0140 | −0.0034 | 0.0173 |
| 0.0386 | 0.0410 | 0.0030 | 0.0145 | −0.0030 | 0.0173 |
| −0.0384 | 0.0496 | −0.0113 | 0.0203 | 0.0396 | 0.0603 |
| 0.0731 | 0.0751 | 0.0512 | 0.0527 | 0.0281 | 0.0451 |
| 0.0140 | 0.0299 | −0.0350 | 0.0397 | 0.0147 | 0.0388 |
Table A6.
Bayesian estimates of other parameters with missing data when .
| | | |
Par | Bias | RMS | Bias | RMS | Bias | RMS
---|---|---|---|---|---|---
| 0.0014 | 0.0028 | 0.0011 | 0.0023 | 0.0010 | 0.0025 |
| −0.0047 | 0.0121 | −0.0017 | 0.0107 | 0.0015 | 0.0106 |
| −0.0010 | 0.0108 | 0.0025 | 0.0109 | 0.0038 | 0.0112 |
| −0.0106 | 0.0119 | −0.0110 | 0.0124 | −0.0102 | 0.0113 |
| −0.0042 | 0.0114 | −0.0034 | 0.0116 | −0.0016 | 0.0115 |
| −0.0061 | 0.0115 | −0.0052 | 0.0113 | −0.0054 | 0.0111 |
| −0.0055 | 0.0077 | −0.0044 | 0.0072 | −0.0048 | 0.0074 |
| −0.0049 | 0.0119 | −0.0031 | 0.0111 | −0.0009 | 0.0106 |
| −0.0039 | 0.0119 | −0.0032 | 0.0125 | −0.0008 | 0.0116 |
| −0.0248 | 0.0268 | −0.0031 | 0.0109 | 0.0045 | 0.0117 |
| −0.0262 | 0.0287 | −0.0047 | 0.0121 | 0.0034 | 0.0119 |
| 0.0155 | 0.0180 | 0.0000 | 0.0104 | −0.0144 | 0.0179 |
| 0.0172 | 0.0200 | −0.0020 | 0.0098 | −0.0128 | 0.0167 |
| 0.0146 | 0.0174 | −0.0006 | 0.0103 | −0.0141 | 0.0182 |
| 0.0115 | 0.0142 | −0.0020 | 0.0101 | −0.0186 | 0.0215 |
| 0.0190 | 0.0587 | 0.0767 | 0.0947 | −0.0039 | 0.0591 |
| −0.0314 | 0.0445 | −0.0418 | 0.0563 | −0.0385 | 0.0546 |
| −0.0422 | 0.0610 | −0.0221 | 0.0561 | −0.0456 | 0.0655 |
Table A7.
Bayesian estimates of regression coefficients in the structural equation with missing data.
| M1 | M2 | M3 |
Par | | Bias | RMS | Bias | RMS | Bias | RMS
---|---|---|---|---|---|---|---
| 0.1 | 0.0098 | 0.0176 | 0.0099 | 0.0156 | 0.0095 | 0.0165 |
| 0.5 | −0.0023 | 0.0128 | −0.0028 | 0.0122 | −0.0035 | 0.0105 |
| 0.9 | −0.0007 | 0.0151 | −0.0061 | 0.0154 | −0.0064 | 0.0158 |
| 0.1 | 0.0488 | 0.0507 | 0.0469 | 0.0488 | 0.0456 | 0.0472 |
| 0.5 | 0.0063 | 0.0128 | 0.0046 | 0.0119 | 0.0047 | 0.0117 |
| 0.9 | −0.0130 | 0.0189 | −0.0230 | 0.0274 | −0.0215 | 0.0250 |
| 0.1 | 0.0389 | 0.0411 | 0.0364 | 0.0387 | 0.0357 | 0.0378 |
| 0.5 | 0.0009 | 0.0103 | 0.0012 | 0.0107 | 0.0011 | 0.0091 |
| 0.9 | −0.0121 | 0.0161 | −0.0203 | 0.0241 | −0.0204 | 0.0239 |
| 0.1 | 0.0072 | 0.0135 | 0.0067 | 0.0121 | 0.0048 | 0.0101 |
| 0.5 | 0.0020 | 0.0100 | 0.0022 | 0.0084 | 0.0020 | 0.0092 |
| 0.9 | 0.0039 | 0.0129 | -0.0026 | 0.0116 | −0.0022 | 0.0109 |
| 0.1 | −0.0197 | 0.0241 | −0.0083 | 0.0158 | −0.0047 | 0.0151 |
| 0.5 | 0.0032 | 0.0115 | 0.0042 | 0.0117 | 0.0036 | 0.0111 |
| 0.9 | 0.0931 | 0.0939 | 0.0320 | 0.0352 | 0.0290 | 0.0318 |
| 0.1 | −0.0155 | 0.0213 | −0.0132 | 0.0199 | −0.0074 | 0.0155 |
| 0.5 | 0.0014 | 0.0121 | 0.0005 | 0.0106 | 0.0005 | 0.0105 |
| 0.9 | 0.0907 | 0.0917 | 0.0254 | 0.0286 | 0.0226 | 0.0263 |
Table A8.
Bayesian estimates of regression coefficients in the structural equation with missing data.
| M1 | M2 | M3 |
Par | | Bias | RMS | Bias | RMS | Bias | RMS
---|---|---|---|---|---|---|---
| 0.1 | 0.0170 | 0.0190 | 0.0160 | 0.0179 | 0.0137 | 0.0157 |
| 0.5 | 0.0151 | 0.0162 | 0.0133 | 0.0146 | 0.0136 | 0.0146 |
| 0.9 | 0.0100 | 0.0145 | 0.0075 | 0.0129 | 0.0069 | 0.0137 |
| 0.1 | −0.0111 | 0.0132 | −0.0096 | 0.0134 | −0.0044 | 0.0109 |
| 0.5 | −0.0605 | 0.0607 | −0.0595 | 0.0597 | 0.0398 | 0.0402 |
| 0.9 | 0.0512 | 0.0537 | 0.0555 | 0.0585 | 0.0584 | 0.0608 |
| 0.1 | −0.0154 | 0.0178 | −0.0109 | 0.0146 | −0.0060 | 0.0119 |
| 0.5 | −0.0669 | 0.0671 | −0.0646 | 0.0650 | 0.0360 | 0.0364 |
| 0.9 | 0.0281 | 0.0328 | 0.0404 | 0.0439 | 0.0501 | 0.0525 |
| 0.1 | 0.0401 | 0.0407 | 0.0410 | 0.0417 | 0.0437 | 0.0448 |
| 0.5 | −0.0488 | 0.0490 | −0.0467 | 0.0470 | −0.0448 | 0.0453 |
| 0.9 | −0.0305 | 0.0336 | −0.0274 | 0.0307 | −0.0236 | 0.0275 |
| 0.1 | 0.0124 | 0.0173 | 0.0138 | 0.0193 | 0.0187 | 0.0243 |
| 0.5 | −0.0051 | 0.0099 | −0.0009 | 0.0089 | −0.0048 | 0.0112 |
| 0.9 | −0.0236 | 0.0343 | −0.0354 | 0.0427 | −0.0422 | 0.0509 |
| 0.1 | 0.0278 | 0.0302 | 0.0324 | 0.0345 | 0.0329 | 0.0353 |
| 0.5 | 0.0123 | 0.0150 | 0.0116 | 0.0148 | 0.0066 | 0.0122 |
| 0.9 | 0.0038 | 0.0291 | −0.0148 | 0.0298 | −0.0322 | 0.0402 |

Par | | Bias | RMS | Bias | RMS | Bias | RMS
---|---|---|---|---|---|---|---
| 0.1 | 0.0031 | 0.0061 | 0.0023 | 0.0062 | 0.0011 | 0.0052 |
| 0.5 | 0.0064 | 0.0076 | 0.0061 | 0.0071 | 0.0046 | 0.0059 |
| 0.9 | 0.0126 | 0.0155 | 0.0099 | 0.0130 | 0.0093 | 0.0135 |
| 0.1 | 0.0391 | 0.0396 | 0.0416 | 0.0421 | 0.0457 | 0.0462 |
| 0.5 | −0.0213 | 0.0218 | −0.0211 | 0.0216 | −0.0190 | 0.0194 |
| 0.9 | 0.0236 | 0.0280 | 0.0233 | 0.0263 | 0.0245 | 0.0270 |
| 0.1 | 0.0391 | 0.0395 | 0.0411 | 0.0416 | 0.0448 | 0.0452 |
| 0.5 | −0.0156 | 0.0161 | −0.0164 | 0.0169 | −0.0155 | 0.0162 |
| 0.9 | 0.0372 | 0.0393 | 0.0333 | 0.0350 | 0.0359 | 0.0383 |
| 0.1 | 0.0510 | 0.0512 | 0.0521 | 0.0524 | 0.0555 | 0.0558 |
| 0.5 | −0.0423 | 0.0425 | −0.0400 | 0.0403 | −0.0377 | 0.0381 |
| 0.9 | −0.0154 | 0.0205 | −0.0064 | 0.0131 | −0.0005 | 0.0124 |
| 0.1 | 0.0520 | 0.0527 | 0.0524 | 0.0533 | 0.0590 | 0.0601 |
| 0.5 | 0.0086 | 0.0110 | 0.0101 | 0.0126 | 0.0162 | 0.0178 |
| 0.9 | 0.0449 | 0.0498 | 0.0275 | 0.0340 | 0.0234 | 0.0316 |
| 0.1 | 0.0456 | 0.0464 | 0.0474 | 0.0485 | 0.0508 | 0.0517 |
| 0.5 | 0.0076 | 0.0106 | 0.0083 | 0.0115 | 0.0118 | 0.0140 |
| 0.9 | 0.0465 | 0.0510 | 0.0311 | 0.0379 | 0.0256 | 0.0321 |
Table A9.
Bayesian estimates of other parameters with missing data when .
| | | |
Par | Bias | RMS | Bias | RMS | Bias | RMS
---|---|---|---|---|---|---
| −0.0048 | 0.0051 | −0.0051 | 0.0054 | −0.0055 | 0.0058 |
| −0.0070 | 0.0119 | −0.0086 | 0.0123 | −0.0146 | 0.0162 |
| −0.0060 | 0.0109 | −0.0067 | 0.0111 | −0.0129 | 0.0151 |
| −0.0004 | 0.0076 | −0.0007 | 0.0066 | 0.0007 | 0.0070 |
| −0.0023 | 0.0093 | −0.0021 | 0.0099 | 0.0021 | 0.0104 |
| −0.0012 | 0.0079 | −0.0008 | 0.0078 | 0.0020 | 0.0098 |
| 0.0042 | 0.0077 | 0.0043 | 0.0075 | 0.0050 | 0.0082 |
| 0.0015 | 0.0093 | 0.0016 | 0.0107 | 0.0051 | 0.0126 |
| 0.0015 | 0.0087 | 0.0012 | 0.0094 | 0.0046 | 0.0103 |
| 0.0236 | 0.0287 | 0.0071 | 0.0148 | −0.0343 | 0.0380 |
| 0.0224 | 0.0260 | 0.0036 | 0.0128 | −0.0370 | 0.0399 |
| −0.0285 | 0.0315 | −0.0310 | 0.0337 | 0.0340 | 0.0365 |
| −0.0285 | 0.0316 | −0.0305 | 0.0337 | 0.0341 | 0.0371 |
| −0.0208 | 0.0245 | −0.0258 | 0.0288 | 0.0358 | 0.0382 |
| −0.0214 | 0.0248 | −0.0232 | 0.0276 | 0.0342 | 0.0359 |
| −0.0238 | 0.0691 | −0.0307 | 0.0545 | −0.0597 | 0.0828 |
| −0.0538 | 0.0633 | −0.0435 | 0.0491 | 0.0682 | 0.0748 |
| 0.0083 | 0.0646 | 0.0174 | 0.0457 | 0.0020 | 0.0517 |