A New Story on the Multidimensionality of the MSPSS: Validity of the Internal Structure through Bifactor ESEM

The internal structure of the Multidimensional Scale of Perceived Social Support (MSPSS) in adolescents has been evaluated with several factor-analytic methodologies but not with bifactor exploratory structural equation modeling (ESEM), and the inconsistencies reported in its internal structure may depend on these analytic approaches. The objective of this study was to update the evidence regarding the internal structure of the MSPSS by means of a detailed examination of its multidimensionality. The participants were 460 adolescents from an educational institution in the Callao region, Lima, Peru. The structure was modeled as unidimensional, three-factor, and bifactor models under confirmatory factor analysis (CFA) and ESEM approaches. All models except the unidimensional model showed good levels of fit; however, the multidimensionality indicators supported the superiority of the bifactor ESEM. In this model, the general factor was not strong enough, and the interfactorial correlations were substantially lower. It is concluded that the MSPSS can be interpreted through independent but moderately correlated factors, and that possible systematic variance may have prevented the identification of a general factor.


Introduction
Adolescence is an intermediate stage between childhood and adulthood. It begins at puberty but has no fixed end age; it is fundamentally defined by psychological characteristics, and its duration is determined by the socioeconomic and cultural context in which the person lives [1,2]. Adolescents represent 18% of the total population in Latin America and the Caribbean [3] and 16% worldwide [4]. Adolescence is not only an evolutionary stage of great challenges but also one of high vulnerability and psychosocial risk [5], as evidenced by risky behaviors such as alcohol consumption among young people aged 15-19 years (27%), with an onset age below 15 years [6]. Likewise, data on drug use show a higher incidence among young people aged 18 to 24 than among adults [7]. The adolescent pregnancy rate worldwide is 46 births per 1000 girls [8], suicide was the second leading cause of death among individuals aged 15 to 29 years in 2016 [9], and depression continues to be a problem in this population [10].
To identify the pattern of results and methodological procedures used in the study of the internal structure of the MSPSS, a rapid systematic review was carried out [41,42]. The following keywords were used, and only psychometric studies conducted with adolescents were included: MSPSS, social support, adolescence, validity, and reliability. The search was performed in the ScienceDirect, Scopus, PubMed, and Google Scholar databases. Because the objective of this study focused on adolescents and the internal structure of the MSPSS, the exclusion criteria were youth-adult samples and nonpsychometric studies (e.g., [43]). Table 1 shows the results of the search. Based on Table 1, the predominant model retained comprised three correlated factors, with the study by Trejos-Herrera et al. [46] being an exception in having tested this model a priori. In an analytical context, confirmatory factor analysis (CFA) provides the advantage of directly testing and contrasting models; for example, unidimensional, multidimensional, and bifactor models can be tested a priori in the same study. These models are usually considered in the evaluation of the multidimensionality of a measure [48-50]. MSPSS total and subscale scores are typically reported, and correlations between these scores vary from moderate to high; these correlations are generally obtained using the maximum likelihood (ML) estimator, an analytical procedure that is not robust with nonnormal data [51,52].
On the other hand, the predominant factorial design was a single-group design, in which the internal structure parameters were estimated in one sample without any internal replicability checks. The replicability of results in quantitative methodologies is important; however, only some studies implemented measurement invariance or sample partitioning. The sequential application of exploratory factor analysis (EFA) and CFA in the same sample cannot be considered a replicability test, because what changes is the estimation method or the fit indices that EFA lacks. Some inconsistencies in these reports were also found, such as omitting the interfactorial correlations or their source (observed or factor scores), the estimation method, and the type of correlation matrix used (although not reported in Table 1, these can be polychoric or Pearson correlations). Regarding the estimation method, principal component analysis (PCA) and varimax rotation have not yet disappeared from the psychometric methodology applied to the MSPSS. Finally, internal consistency or reliability has been predominantly estimated with the α coefficient, and the derived information may be underestimated due to limitations that were rarely acknowledged in the studies reviewed (the literature on this is extensive; for an example, see [53]). Two additional problems stand out, coinciding with one of the observations in the MSPSS systematic review by Dambi et al. [43]: it was rare for a study to contrast several measurement models (e.g., one-, two-, or three-factor models), and the relationship with social desirability was not explored (see Table 1).
An additional implication of this literature review is that, although the MSPSS has numerous studies providing evidence of internal structure and other sources of validity, differences in internal structure were found. These differences may be due to the method of assessing dimensionality, sample idiosyncrasies, cross-cultural differences, or biases not explicitly assessed (e.g., careless responding). A major source of these differences, all else being equal, is the dimensionality assessment methodology. Given the advancement of this methodology over the years, and because MSPSS dimensionality research has not been updated to date, the present study focuses on this point, namely, introducing ESEM modeling into MSPSS dimensionality research. Table 1 also shows that the MSPSS has not yet been assessed with other structural analysis models, for example, exploratory structural equation modeling (ESEM; [54]) and bifactor ESEM [55]. Both are advanced models that were introduced to overcome the problems of CFA modeling and to better evaluate the structural properties of multidimensional measurements. ESEM combines exploratory and confirmatory approaches, and its characteristic procedure consists of estimating the cross-loadings (i.e., loadings on the nonhypothesized factors), not only the convergent loadings on the factor hypothesized as the causal influence of the items. This estimation has been shown to decrease factor loadings and interfactor correlations [54,56]. The factorial solutions obtained with the ESEM approach are therefore considered realistic [54] because they estimate parameters that by design are not estimated in CFA (loadings on nonhypothesized factors). On the other hand, the bifactor model has been rediscovered as an effective method to test the multidimensionality of a measure [48-50,57,58].
It basically consists of modeling a general factor common to all items, together with specific factors that represent the unique content of the items [54]. Integrating both approaches into a single model (i.e., bifactor ESEM) has not yet been used in the structural analysis of the MSPSS, although this approach could provide new information on the internal structure and modify the interpretation of the scores previously evaluated by CFA models.
Therefore, this study attempts to advance research on the internal structure of the MSPSS by implementing the ESEM approach. This analytical alternative can yield greater precision under realistic conditions of factorial complexity in multidimensional instruments [55]. Because the MSPSS tends to show moderate or high correlations between its dimensions (see Table 1), the bifactor ESEM model can distinguish the sources of systematic variance by estimating the relationship of each item with all the dimensions and not only with its hypothesized dimension; that is, this approach estimates cross-loadings that are not usually estimated in CFA models [55], together with a general factor that can represent the systematic common variance in the MSPSS. Thus, this study aligns with the importance of examining instruments in new contexts and analysis conditions, as advised by influential guidelines for test adaptation practice [59].

Participants
The study sample came from the Callao region, one of the areas of Lima (Perú) with high rates of violence, citizen insecurity, drug use, pollution, and poverty [60]. A public educational institution was chosen for reasons of availability and organizational convenience. This institution enrolls males and females at the high-school level, between 12 and 16 years old [61], mostly from families of low socioeconomic status. Its administrative organization is the same as that of other Peruvian public institutions. The total population was 1042 enrolled students. The sampling units were classrooms, selected by simple random sampling. Each classroom contained approximately 20 to 35 students, and 15 classrooms were selected. The total sample drawn from this selection was 510 adolescents between the ages of 11 and 18 years. Several exclusion criteria were applied: students identified with a disability condition, as defined by the "Service of Support and Advice for the Attention of Special Educational Needs" (SAANE; RM No 665, 2018), foreign students, and those who did not agree to participate voluntarily. After further excluding irrelevant response patterns and multivariate extreme values (see below), the effective sample was 461 participants.

Instrument
The MSPSS [27] measures the support a person perceives receiving from family, friends, and significant others. It comprises 12 items grouped into three dimensions: family (FAM; items 3, 4, 8, and 11), friends (FRI; items 6, 7, 9, and 12), and significant others (SO; items 1, 2, 5, and 10). Response options are on an ordinal scale (1 = most of the time, 2 = sometimes, 3 = often, 4 = always or almost always). The approximate time to complete the questionnaire is 10 min. All items are positively worded, and higher scores indicate greater perceived social support. This study used the Spanish adaptation by Arechavala and Miranda [62], who changed the response scaling of the original version to facilitate adolescents' understanding and avoid response biases. This version was compared with the content of the version validated in a previous Peruvian study [36], which was based on the version provided by the author of the instrument; as a consistency check, it was also compared with other recent Latin American versions [46].

Ethical Considerations
This study is a part of the research project (HIM/2015/017/SSA.1207; "Effects of mindfulness training on psychological distress and quality of life of the family caregiver") that was approved on 16 December 2014, by the Research, Ethics, and Biosafety Commissions of the Hospital Infantil de México Federico Gómez, National Institute of Health, in Mexico City. While conducting this study, the ethical rules and considerations for research with humans currently enforced in Mexico [63] and those outlined by the American Psychological Association [64] were followed. All family caregivers were informed of the objectives and scope of the research and their rights in accordance with the Declaration of Helsinki [65]. The caregivers who agreed to participate in the study signed an informed consent letter. Participation in this study was voluntary and did not involve payment. The caregivers who provided consent for their child to participate completed an informed consent letter. Youths provided assent and returned a survey if they wished to participate.

Data Collection
Coordination was carried out with the principal's office of the educational institution to obtain authorization to access the study sample, and the data were collected between June and August 2019. An informed consent form explaining the objective of the study and requesting the voluntary participation of their child had previously been delivered to parents through the classroom tutors. Adolescents whose parents agreed to participate filled out the informed consent form. No identifiable information about the participants was recorded. The scale was administered during the tutoring hour, and at the end of data collection the students were offered a talk on family functioning; the whole process lasted approximately seven weeks. The entire procedure was aligned with the Declaration of Helsinki regarding anonymity, protection of responses, and freedom to participate in the study.

Data Analysis
First, before examining the internal structure, outliers and irrelevant response patterns were identified. As a general measure of abnormal responding [66], the D2 distance [67] was computed for each subject, based on the squared distance from the multivariate centroid of the variables (i.e., the MSPSS items). Based on the metrics of the χ² distribution (degrees of freedom equal to the number of items), a Bonferroni correction [68] at the 0.05 significance level was applied to establish the detection cutoff (χ² > 6.14). Response patterns associated with insufficient effort were detected by estimating the longest string of identical consecutive responses given by a person (longstring), which, in this context of multidimensional measurement, suggests responding without variability [66]. For this purpose, the careless R program was used [69]. Second, a descriptive analysis of the items was performed, including their distributional characteristics (i.e., distributional normality) and differences in response trends. For each subscale, these were examined with nonparametric analyses that included a measure of the magnitude of the difference [70], using the langtest [71] and MVN [72] R programs.
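The screening steps above (D2-based multivariate outlier detection with a Bonferroni-corrected χ² cutoff, and longest-string detection of invariant responding) can be sketched in code. The following is an illustrative Python translation of the logic, not the original R workflow (which used the careless package); the function names are our own:

```python
import numpy as np
from scipy.stats import chi2


def mahalanobis_d2(X):
    """Squared Mahalanobis distance of each row from the multivariate centroid."""
    X = np.asarray(X, dtype=float)
    diff = X - X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    # Quadratic form diff' * cov_inv * diff, computed row by row
    return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)


def flag_outliers(X, alpha=0.05):
    """Flag rows whose D2 exceeds a Bonferroni-corrected chi-square cutoff
    (df = number of items), analogous to the screening step described above."""
    n, k = X.shape
    cutoff = chi2.ppf(1 - alpha / n, df=k)
    return mahalanobis_d2(X) > cutoff


def longstring(responses):
    """Longest run of identical consecutive responses (cf. careless::longstring)."""
    best = run = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best
```

For example, with 12 items and roughly 510 respondents, the Bonferroni-corrected cutoff would be `chi2.ppf(1 - 0.05/510, df=12)`.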
Third, the internal structure was evaluated through CFA-SEM to test the MSPSS measurement model. First, the model established by Zimet et al. [27] and retained by most subsequent studies, consisting of three related dimensions (3F), was tested. The second model represented the use of the total MSPSS score; that is, it was defined by a single latent dimension (i.e., unidimensionality). The third model applied bifactor modeling, which allows the identification of a general latent dimension (i.e., a general factor) together with specific dimensions (i.e., specific factors). Because the literature generally finds moderate or strong associations between the dimensions of the MSPSS, ESEM modeling was applied to the three-correlated-factor model (3F) and to the bifactor model. Both approaches (CFA and ESEM) can be applied in a complementary way to assess the multidimensionality of instruments. Oblique geomin rotation [56] was applied in the 3F-ESEM and bifactor-ESEM models. Due to its efficacy [51,52], the WLSMV estimator [73] with interitem polychoric correlations was used in all SEM modeling. Figure 1 shows the representation of each model tested.
In general, the fit of each model was evaluated using the CFI (≥0.95), RMSEA (≤0.05), WRMR (≤0.90; [74]), and gammaHat (≥0.95). For bifactor modeling (bifactor CFA and bifactor ESEM), several indicators suggested in the literature were used [49,50,58,75,76]: (a) the explained common variance retained in the general factor Fg (ECVg), in the items with respect to Fg (I-ECV), and in the specific factors after removing the general variance (ECVf), which should exceed 0.70 or 0.80; (b) the reliable variance in the observed score attributable to Fg (ωh) and to the specific factors after removing the variance of Fg (ωhs), with a cutoff of ≥0.80; and (c) the degree of construct replicability of each factor (H ≥ 0.80) and the absolute relative parameter bias (ARPB), for which differences between the factor loadings estimated on the general factor of a unidimensional model vs. a bifactor model in the 10% to 15% range are acceptable [49,50]. Finally, factor determinacy (FD) was also estimated (FD > 0.90; [77]). Furthermore, given the predominance of the ECV in establishing the general bifactor dimension [78], ECV values higher than 0.70 suggest a common-variance argument for concluding an essentially unidimensional model [49,50], whereas an ECV below 0.70 suggests that the multidimensionality coming from the specific factors is nontrivial. Potential misspecifications were evaluated for each model using an approach that combines statistical power and the size of the modification [79]. SEM modeling was carried out with the lavaan [80] and semTools [81] R programs.
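The bifactor indices described in (a)-(c) are simple functions of the standardized factor loadings. As a rough Python sketch (assuming an orthogonal bifactor solution in which each item loads on the general factor and on one specific factor; the function name is our own):

```python
import numpy as np


def bifactor_indices(gen, spec):
    """ECVg, I-ECV, omega-hierarchical, and H for an orthogonal bifactor solution.

    gen  : (n_items,) standardized loadings on the general factor
    spec : (n_items, n_spec) standardized loadings on the specific factors
           (each item loads on one specific factor; zeros elsewhere)
    """
    gen = np.asarray(gen, float)
    spec = np.asarray(spec, float)
    # Item uniquenesses under orthogonality
    uniq = 1.0 - gen**2 - (spec**2).sum(axis=1)
    # ECV of the general factor: general common variance over total common variance
    ecv_g = (gen**2).sum() / ((gen**2).sum() + (spec**2).sum())
    # I-ECV: same ratio computed per item
    i_ecv = gen**2 / (gen**2 + (spec**2).sum(axis=1))
    # Omega-hierarchical: general-factor variance over total score variance
    denom = gen.sum() ** 2 + (spec.sum(axis=0) ** 2).sum() + uniq.sum()
    omega_h = gen.sum() ** 2 / denom
    # H coefficient (construct replicability) for the general factor
    h_ratio = (gen**2 / (1.0 - gen**2)).sum()
    h_gen = h_ratio / (1.0 + h_ratio)
    return ecv_g, i_ecv, omega_h, h_gen
```

For instance, six items all loading 0.70 on the general factor and 0.40 on two specific factors yield ECVg = 0.49/0.65 ≈ 0.75, in the same region as the bifactor-CFA result reported below.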
Finally, based on Table 1, the interfactorial correlations were meta-analyzed using the random effects model, an appropriate method when the effect of the sampling error from different populations is presumed [82]. The Hartung-Knapp-Sidik-Jonkman estimator was used, since it can produce robust results with a small number of studies that have a high degree of heterogeneity [82,83].
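A minimal sketch of such a random-effects pooling of correlations with the Hartung-Knapp-Sidik-Jonkman adjustment is shown below (Python; Fisher-z metric with a DerSimonian-Laird τ² estimate — an illustrative simplification, not the exact implementation used in the study):

```python
import numpy as np
from scipy import stats


def random_effects_hksj(r, n):
    """Random-effects pooling of correlations in the Fisher-z metric with the
    Hartung-Knapp-Sidik-Jonkman variance adjustment (DL estimate of tau^2)."""
    r, n = np.asarray(r, float), np.asarray(n, float)
    z = np.arctanh(r)                # Fisher z transform
    v = 1.0 / (n - 3.0)              # sampling variances of z
    w = 1.0 / v
    z_fixed = (w * z).sum() / w.sum()
    q = (w * (z - z_fixed) ** 2).sum()          # Cochran's Q
    k = len(r)
    c = w.sum() - (w**2).sum() / w.sum()
    tau2 = max(0.0, (q - (k - 1)) / c)          # DerSimonian-Laird tau^2
    w_star = 1.0 / (v + tau2)
    z_re = (w_star * z).sum() / w_star.sum()
    # HKSJ: weighted residual variance with k-1 df and a t-based interval
    var_hksj = (w_star * (z - z_re) ** 2).sum() / ((k - 1) * w_star.sum())
    se = np.sqrt(var_hksj)
    t_crit = stats.t.ppf(0.975, k - 1)
    lo, hi = np.tanh(z_re - t_crit * se), np.tanh(z_re + t_crit * se)
    return np.tanh(z_re), (lo, hi)
```

The t-based interval with k − 1 degrees of freedom is what makes HKSJ comparatively robust when few, heterogeneous studies are pooled.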

Item Analysis
The responses to the items (see Table 2) showed apparent similarities in the direction of their distributional properties (skewness and kurtosis with the same sign) but apparent differences in magnitude. The global difference in the responses on the SO scale was trivial (Friedman χ² = 2.3247, df = 3, p = 0.507; r = 0.031, 95% CI = −0.061, 0.122), while the differences on the FAM scale (Friedman χ² = 265.53, df = 3, p < 0.001) and FRI scale (Friedman χ² = 38.83, df = 3, p < 0.001) were statistically significant but small in size (r = 0.191, 95% CI: 0.101, 0.277; and r = 0.249, 95% CI = 0.166, 0.329, respectively). Furthermore, the responses to the items were not normally distributed (see the Cramér-von Mises test statistics for each item), and multivariate normality, based on the Henze-Zirkler [84] test, was not fulfilled for the set of items, z = 1.77942 (p < 0.001).
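Friedman tests of this kind are available in standard routines. As an illustrative Python sketch, the function below pairs scipy's test with Kendall's W, one common companion effect size (note that the study reports r, a different effect-size metric; the function name is our own):

```python
import numpy as np
from scipy.stats import friedmanchisquare


def friedman_with_w(data):
    """Friedman test across related samples (columns = conditions, rows = subjects),
    with Kendall's W as a companion effect size: W = chi2 / (n * (k - 1))."""
    data = np.asarray(data, float)
    n, k = data.shape
    stat, p = friedmanchisquare(*(data[:, j] for j in range(k)))
    w = stat / (n * (k - 1))
    return stat, p, w
```

With perfectly consistent rankings across subjects, W reaches its maximum of 1.0.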

Meta-Analytic Interfactor Correlations
Using the CFA studies in Table 1 (studies 3, 4

Model Fits
The general results in Table 3 indicated that the unidimensional model showed comparatively poorer statistical fit. With both approaches (CFA and ESEM), the correlated three-factor model (congeneric model) fit very well, but the bifactor model quantitatively outperformed it. Although both approaches expressed very good fit, the performance of the ESEM models was superior to that of their CFA counterparts, indicating that the estimation of cross-loadings, as usually occurs in ESEM, did have an impact on the degree of fit obtained.

CFA Modeling
Inspection of the parameters of interest in the CFA models (Table 4), with the exception of the bifactor-CFA model, revealed that the factor loadings were high in all models (λ > 0.70). Likewise, in the unidimensional model, although its fit was poor (see previous paragraph), the factor loadings were high (>0.66). In the three-factor model, the interfactorial correlations were high (>0.60), suggesting insufficient discrimination between the factors. The bifactor-CFA model did not initially converge, and fit indices could not be obtained. The apparent problem was item 2 of the SO factor, which showed excessive negative variance (−678.20), and its loading on its specific factor was identified as a Heywood case (λ = 26.04). Its loading on the general factor (Fg) and those of the other items of its factor were λ > 0.80. Because the SO factor presented similar factor loadings (tau-equivalence; see next paragraph), its loadings were constrained to equality to control the negative variance.
In the analysis of the multidimensionality of the MSPSS with the bifactor CFA, the ECV indicators, ωh, and H showed the following: (a) the general factor Fg retained a moderately high variance (ECVg = 0.751) and was highly differentiated from the specific factors (ECVf around 0.35); at the item level (I-ECV), the general common variance was moderate for the FAM and FRI items and high for the SO items; (b) the items were predominantly within the acceptable range of multidimensional bias (between 0.10 and 0.15; [50]); (c) the reliable variance (ωh) was high for Fg (>0.80) and low for the specific factors (<0.40); (d) the H coefficient indicated that replicability was stronger for the general factor (H > 0.90) than for the specific factors (H < 0.70); (e) the determinacy of the general factor Fg exceeded the minimum criterion (FD > 0.90), unlike the specific factors (FD < 0.90); and (f) the ARPB fell in a range suggesting that the general dimension Fg does not produce significant biases regarding the multidimensionality of the items (total ARPB = 0.121). Taken together, the CFA modeling showed that the bifactor model represents a dimensional solution in which a general factor Fg is predominant and preferable over interpreting the specific factors as dimensions. The interfactor correlations could not be estimated due to the orthogonality constraint required to run the bifactor CFA.

Table 5 shows the results of the ESEM modeling. In the 3F-ESEM model, items with factorial complexity were found (divergent or cross-loadings, λ ≥ 0.10; items 3, 6, 8, 10, and 11), contrasting with the CFA specification of this type of loading (nonhypothesized loadings fixed to zero). Compared with the 3F-CFA model (Table 4), the interfactor correlations were comparatively low.
Comparing the meta-analytic correlations with the correlations in 3F-ESEM, the differences were statistically significant and comparatively substantial in FAM-SO (z = 9.63, p = 0.01, q = 0.45), SO-FRI (z = 7.18, p = 0.01, q = 0.33) and FAM-FRI (z = 6.05, p = 0.01, q = 0.28).
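The z and q statistics used in these comparisons follow the standard test for independent correlations, in which Cohen's q is the difference between the Fisher-z-transformed correlations. A minimal illustrative sketch (hypothetical function, not the exact code used in the study):

```python
import math


def compare_correlations(r1, n1, r2, n2):
    """Two-sided z test for two independent correlations plus Cohen's q
    (the difference of the Fisher-z-transformed correlations)."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    q = z1 - z2                                     # Cohen's q effect size
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # SE of the difference
    z = q / se                                       # test statistic
    return z, q
```

Conventional benchmarks treat q around 0.1 as small, 0.3 as medium, and 0.5 as large.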

ESEM Modeling
The bifactor ESEM model produced very different interfactor correlations than the 3F-ESEM model: they decreased by between 25.2% and 65.2%. Compared with the meta-analytic correlations, statistically significant but small differences were found in FAM-SO (z = 1.94, p = 0.02, q = 0.09), SO-FRI (z = −4.88, p < 0.01, q = 0.22), and FAM-FRI (z = −4.27, p < 0.01, q = 0.19). The factor loadings also decreased, in different proportions and sometimes by a moderately high amount within the same factor: SO between 2.4% and 16.1%, FAM between 2.0% and 8.3%, and FRI between 4.7% and 14.8%. These decreases were more accentuated with the implementation of the bifactor ESEM. In the multidimensional analysis, the bifactor ESEM produced values of the ECV, ωh, H, and ARPB with the following characteristics: (a) the ECVg of the general factor Fg was low compared with the recommended criterion, and that of the subscales was even lower (ECVf between 0.10 and 0.24), while at the item level (I-ECV), the general common variance was high only for FRI and low for FAM and SO; (b) the items showed predominant multidimensional bias (item ARPB > 0.15) in SO and FAM; (c) the reliable variance (ωh) was acceptable for Fg and low for the specific factors (<0.10); (d) the H coefficient indicated that replicability was stronger for the general factor (H > 0.90) than for the specific factors; (e) the determinacy (FD) of the general factor Fg and of FAM exceeded the minimum criterion (FD > 0.90) but was moderately low for the other factors (SO and FRI); and (f) the degree of bias (total ARPB = 0.238) fell in a range suggesting nontrivial differentiation between the factor loadings produced by a general dimension Fg and those of the multidimensional model. Altogether, the ARPB (at the total and item levels) and the ECV for the general factor (total ECV) and the items (I-ECV) of the bifactor ESEM model showed that the general factor was not strong enough.

Discussion
In the evolution of the MSPSS, one study introduced CFA to evaluate its internal structure [44], achieving an advance in the direct verification of its dimensional models (e.g., first-order and second-order factors) and of their equivalence. In the present study, the bifactor ESEM model was introduced for the first time to examine the internal structure of the MSPSS, and the results contrasted significantly with all previous studies that applied CFA.
Regarding the factor loadings, they decreased across a mixed range of changes, indicating that the validity of each item was differentially affected by the model used. The greatest decrease occurred with the bifactor ESEM, which is aligned with the results frequently found with ESEM modeling in general and with the bifactor ESEM in particular [55-57].
Another result to be emphasized is that CFA concealed some misspecifications associated with nontrivial cross-loadings. As is known in the methodological literature [55], CFA imposes unrealistic restrictions, and this is the main reason why ESEM models tend to show better statistical fit than CFA models.
The ωh obtained for the general factor (0.795) indicated that there is enough reliable variance in the total score for it to be interpreted, and H suggested that the greater replicability and better definition obtained for the general factor compared with the specific factors was beyond doubt. On the other hand, H suggested that the definition of each factor from its items is poor for FRI and moderate for SO and FAM. Both potential replicability and the degree of definition of the constructs are derived from the interpretation of H [75]. Finally, based on the ECV for the factors, individual differences can be captured by the general factor and FAM, and moderately by SO. With ωh = 0.795, 79.5% of the reliable variance represents a common source of variability, and this amount cannot be considered trivial. Therefore, in conjunction with the determinacy of the factorial solution, the reliable variance, and the replicability (FD, ωh, and H coefficients, respectively), it is tempting to accept the interpretation of a general factor even though the ECV was below the recommended cutoff (<0.70).
However, the complementary information at the item level (I-ECV and I-ARPB) suggested that the strength of the potential general factor does not warrant accepting it as representative of a global construct of social support. Because the ECV is one of the key pieces of evidence for accepting a bifactor model [78], and its results at the item and general-factor levels contrast with the rest of the general indices, it can be affirmed that there is only partial support for accepting the general factor; in addition, the parameters at the item level (i.e., I-ECV and I-ARPB) predominantly indicated a difference between multidimensionality and unidimensionality that cannot be ignored. One of the risks of accepting unidimensionality when the I-ECV is less than 0.85 is the violation of local independence [85], and this risk is all the more plausible here given that only half of the I-ECV values met the recommended cutoff (>0.80).
The results of the bifactor CFA contrasted with the bifactor ESEM in the estimation of interfactorial correlations and in the conclusion about multidimensionality. In the bifactor CFA, the conclusion of unidimensionality seemed reasonable; however, this was shown to be overestimated when the structure of the MSPSS was evaluated by the bifactor ESEM. It was in this analysis that the general factor was not strong enough, and contradictory indications of multidimensionality were obtained. On the other hand, the interfactorial correlations reported by the bifactor ESEM were more differentiated among themselves compared with the rest of the factorial results, especially with the bifactor CFA. This differentiation adds an additional advantage in relation to the precision of these correlations because they realistically add the estimation of all factorial loadings of all the factors analyzed, which is characteristic of ESEM [54].
As observed, the interfactorial correlations estimated in the bifactor ESEM model can be interpreted as a structure with moderate or low conceptual dependence, which is acceptable for the construction of instruments such as the MSPSS when conceptually nonoverlapping measures are required. A large dependence between factors usually indicates insufficient conceptual discrimination between them [76,85], and to the same extent makes the interpretation factorially complex. Instead, the amount of dependence found with the bifactor ESEM may be more satisfactory and in line with what is found in studies of other instruments (e.g., AFA-R: [24]; CASSS: [25]; MOS: [23]). These studies do not conclude that there is a strong degree of dependence between support sources, which would lead to potential problems of divergent validity in the instrument. Although it is plausible that social support sources covary within a subject, it is more reasonable to assume that they act independently unless the person's context links them. Moreover, the multidimensionality of adolescents' responses to the MSPSS indicates a need to examine perceptions of support from different sources, such as friends and significant others. Assessing adolescents' perceptions of different types of support may be critical to the design of prevention messaging and interventions to improve their functioning.
It is possible that specific variance from undetected processes is involved in the variance of the general factor, inflating the parameters of interest (e.g., factor loadings). If so, this general factor may represent the joint effect of careless responses, composed of average responding and social desirability [86,87], method variance, or the interaction between these. As detected in other studies [66,86-88], this type of variance is omnipresent in self-report measurements and may be incorporated into the systematic variance associated with the content of the MSPSS.
The main limitations of this study are that the invariance of the best-fitting models (the 3F-ESEM and bifactor-ESEM models) was not examined and that methods modeling the possible effect of irrelevant response styles were not introduced. Sample size is also an issue, because a larger size may be required to guarantee more stable results and greater density in the groups compared in invariance analyses. Nevertheless, our sample size can be considered adequate given the generally high magnitude of the factor loadings (essentially > 0.60). In addition, we did not have a measure assessing whether social desirability in the adolescents' responding changed how items were endorsed or reported; it may be beneficial to add such a measure or detection questions in future studies. Although there is a report indicating that social desirability is not involved in MSPSS responses [89], for purposes of replicability and consistency of conclusions, this potential effect has yet to be studied. Finally, the chosen sample does not ensure population representativeness, and this lack of representativeness may involve a range of social support experiences that differ between low and other socioeconomic statuses.
The results carry several implications for subsequent psychometric studies of the MSPSS, especially when examining its internal structure. One implication is methodological: the distinction between CFA and ESEM modeling was notable in the estimation of the parameters of interest (i.e., factor loadings and interfactor correlations) and consequently modified the conclusions about the conceptual relationships inferred from them. The two types of modeling provided two different pictures for identifying the structural model that best represents the MSPSS. In an analytical or methodological context, implementing both in a study can be considered standard practice rather than merely complementary. Given the advances that ESEM modeling provides in understanding the internal structure of the MSPSS, it may be preferable to choose the ESEM method first to reveal the sources of variance in the items with greater precision. Because it is common, reasonable, and realistic to find multidimensional variance in psychosocial measurement items [86], ESEM effectively models this variance without imposing the known restrictions of CFA.
Another implication is the need to model sources of systematic variance associated with irrelevant response patterns or styles, which are particularly present in self-report measures (e.g., [66,87]) and reflect responses not necessarily related to the substantive content of the items. Some of these patterns were detected in this study (i.e., extreme multivariate responses and long sequences of identical responses), with a prevalence within the range reported in the literature (between 3.5% and 12%; [66]). A modeling-based approach, however, can capture the variability of this phenomenon and detect other patterns, such as midscale responding and social desirability [88,89].
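The two screening heuristics mentioned above (long runs of identical answers and extreme multivariate responses) can be sketched as follows. This is a minimal illustration with simulated data, not the screening procedure used in this study; the cutoff of 32.91 is the chi-square critical value at p = .999 with 12 degrees of freedom.

```python
import numpy as np

def longstring(responses):
    """Length of the longest run of identical consecutive answers."""
    best = run = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

def mahalanobis_d2(data):
    """Squared Mahalanobis distance of each row from the sample centroid."""
    centered = data - data.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
    return np.einsum('ij,jk,ik->i', centered, cov_inv, centered)

# Simulated 7-point responses to the 12 MSPSS items for 100 respondents.
rng = np.random.default_rng(0)
data = rng.integers(1, 8, size=(100, 12)).astype(float)
data[0] = 7.0  # plant one straight-line responder

print(longstring(data[0]))            # longest run for the straight-liner: 12
flags = mahalanobis_d2(data) > 32.91  # chi-square(.999, df = 12) cutoff
print(flags.sum(), "respondents flagged as multivariate outliers")
```

Respondents flagged by either heuristic are then inspected or excluded before modeling, since their answers are unlikely to reflect the substantive content of the items.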

Conclusions
The present study highlights the impact of the methods used in previous studies to evaluate the internal structure of the MSPSS and presents a more appropriate analysis of its multidimensionality or unidimensionality in Peruvian adolescents. The MSPSS can be treated as a multidimensional measure with moderately associated dimensions in adolescents. However, the existence of a general dimension that is interpretable and expressed as a total score is not sufficiently justified, because the general dimension was not psychometrically strong. These results varied substantially as a consequence of the method used to analyze the internal structure: with CFA modeling, a general dimension was recognized, but with ESEM modeling, the general dimension was not strong enough. Additionally, the CFA tended to overestimate factor loadings and interfactor correlations; these overestimated correlations suggested a potential overall factor strong enough to be treated as the single, representative MSPSS score. In contrast, ESEM modeling revealed moderate associations between the subscales and thus a weak overall factor. This finding is important and points to the value of understanding adolescents' perceptions of different types of support from family, significant others, and friends. Understanding the value and direction of the support needed from these different sources can inform the design of interventions to improve the social support of youth. Due to the limited sample size in some groups of interest, the equivalence of these results across groups based on sex, age, and other characteristics requires investigation, as does the relationship of MSPSS scores with external variables.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.