Article
Peer-Review Record

Basic Psychological Needs as a Motivational Competence: Examining Validity and Measurement Invariance of Spanish BPNSF Scale

Sustainability 2020, 12(13), 5422; https://doi.org/10.3390/su12135422
by Giuseppina Maria Cardella *, Brizeida Raquel Hernández-Sánchez and José Carlos Sánchez-García
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 9 May 2020 / Revised: 3 July 2020 / Accepted: 3 July 2020 / Published: 5 July 2020

Round 1

Reviewer 1 Report

The manuscript describes the psychometric evaluation of the Basic Psychological Need Satisfaction and Frustration Scale (BPNSFS) in a Spanish context. Scale adaptations to different cultural contexts are generally important and, thus, the topic of the manuscript is interesting. However, I spotted several (mostly methodological) flaws, which make re-analyses necessary. I will emphasize only the most important issues.

(1) Usually I think the authors should use as much space as they need to provide all the important information about their study. However, this introduction is far too long and unstructured. The manuscript concerns the psychometric evaluation of an assessment tool, yet I did not read anything about measurement until line 84. Thus, most of the information given in the introduction is not necessary for the purpose of the study. I recommend rewriting the introduction and focusing more on topics related to measurement (e.g., more information about previous research on the BPNSF scale). In addition, please check whether the information in lines 107-115 is important for the research question. If it is relevant, it has to be elaborated in more detail; otherwise this is another opportunity to shorten the introduction.

(2) line 99: four cultures do not imply universality; many more cultures would have to be examined before the assumption of universality is justified.

(3) Measurement invariance: Why did you check for measurement invariance with regard to gender? Did you expect any differences based on previous literature? There should be a rationale behind this approach; otherwise you have to justify why you have not considered all the other possible group comparisons. For example, why not check for measurement invariance with regard to the different "fields of knowledge" (social sciences, health sciences, …)?

(4) The assumptions underlying Cronbach's alpha are most likely not met in this study. The authors should use an alternative such as McDonald's omega (Dunn et al., 2013).
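For example, a minimal R sketch of how omega could be obtained for one BPNSFS subscale might look like this (the data frame bpnsfs_items and the item names as1-as4 are hypothetical placeholders, not the authors' variable names):

```r
# Sketch only, not the authors' code: McDonald's omega for one subscale
# (autonomy satisfaction), assuming a hypothetical data frame 'bpnsfs_items'
# with the item responses in placeholder columns as1-as4.
library(psych)

autonomy_sat <- bpnsfs_items[, c("as1", "as2", "as3", "as4")]

# poly = TRUE bases the analysis on polychoric correlations, which suits
# 5-point Likert items better than Pearson correlations.
res <- omega(autonomy_sat, nfactors = 1, poly = TRUE)
res$omega.tot   # omega total for this subscale (omega_h is not meaningful with one factor)
```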

(5) EFA is an appropriate tool if you do not know the number of factors or if you have no knowledge of a previously validated measurement model. However, you already know the number of factors based on theory and on previous research with the same scale in different contexts. So why is it necessary to conduct an EFA? CFA, as you did in your "second study", is the appropriate approach here.

(6) You have categorical data (i.e., a 5-point Likert-type scale) and, therefore, Maximum Likelihood (ML) estimation cannot be used (neither for EFA nor for CFA). You need at least 7 (better, 9) categories to use ML estimation in factor analysis. With regard to CFA, the best estimator for your data would be the Weighted Least Square Mean and Variance Adjusted (WLSMV) estimator. I am afraid you have to re-run your analyses.
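For illustration, a CFA of this kind could be specified in lavaan roughly as follows (a sketch only; the factor names, item names as1-cf4, and the data frame dat are placeholders, not the authors' variable names):

```r
# Sketch only: six-factor CFA with the 5-point items treated as ordered
# categorical and estimated with WLSMV. All names below are placeholders.
library(lavaan)

model <- '
  aut_sat =~ as1 + as2 + as3 + as4
  rel_sat =~ rs1 + rs2 + rs3 + rs4
  com_sat =~ cs1 + cs2 + cs3 + cs4
  aut_fru =~ af1 + af2 + af3 + af4
  rel_fru =~ rf1 + rf2 + rf3 + rf4
  com_fru =~ cf1 + cf2 + cf3 + cf4
'

fit <- cfa(model, data = dat,
           estimator = "WLSMV",   # DWLS with mean- and variance-adjusted test statistic
           ordered = TRUE)        # treat all indicators as ordinal (recent lavaan versions accept TRUE)

# Report the robust ("scaled") fit statistics rather than the naive ones
fitMeasures(fit, c("chisq.scaled", "df", "cfi.scaled", "tli.scaled",
                   "rmsea.scaled", "srmr"))
```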

(7) Please provide the criteria used to evaluate whether the model fit is acceptable or not.

(8) As outlined above, you have categorical data. Therefore, the evaluation with regard to normality is not meaningful. The study of Curran et al. did not consider categorical data.

(9) line 222: More information with regard to the "two-higher order factor model" is necessary. What do you mean by a higher-order factor? One could think of a second-order factor model (but then the results regarding the model fit do not make sense) or of a model with two instead of six factors. Either way, more information is needed.

(10) Furthermore, in order to investigate the number of factors within the CFA framework, you should use the approach described by Gignac and Kretzschmar (2017). The analyses/measurement models you used are not sufficient.

(11) Multiple-group CFA: There is a standard procedure for investigating measurement invariance (Meredith, 1993). I recommend using this procedure and the common terms (i.e., configural, metric, scalar, and strict invariance). There is a new paper with examples and explanations, which might help (Schroeders & Gnambs, 2020).
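In lavaan, the usual sequence could look roughly like this (a sketch only; model, dat, and the grouping variable "gender" are placeholders):

```r
# Sketch only: configural -> metric -> scalar invariance via lavaan's group.equal
# argument, with ordinal indicators and WLSMV. All object names are placeholders.
library(lavaan)

configural <- cfa(model, data = dat, group = "gender",
                  estimator = "WLSMV", ordered = TRUE)

metric <- cfa(model, data = dat, group = "gender",
              estimator = "WLSMV", ordered = TRUE,
              group.equal = "loadings")

scalar <- cfa(model, data = dat, group = "gender",
              estimator = "WLSMV", ordered = TRUE,
              group.equal = c("loadings", "thresholds"))

# Strict invariance would additionally constrain residual variances
# (adding "residuals" to group.equal); with ordinal indicators this is
# usually done under parameterization = "theta".

# Scaled difference tests between the nested models
lavTestLRT(configural, metric, scalar)
```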

 

Final statement

I request that the authors add a statement to the paper confirming whether, for all experiments, they have reported all measures, conditions, data exclusions, and how they determined their sample sizes. The authors should, of course, add any additional text to ensure the statement is accurate. This is the standard reviewer disclosure request endorsed by the Center for Open Science [see http://osf.io/hadz3]. I include it in every review.

 

References

Dunn, T. J., Baguley, T., & Brunsden, V. (2013). From alpha to omega: A practical solution to the pervasive problem of internal consistency estimation. British Journal of Psychology, 1–14. https://doi.org/10.1111/bjop.12046

Gignac, G. E., & Kretzschmar, A. (2017). Evaluating dimensional distinctness with correlated-factor models: Limitations and suggestions. Intelligence, 62, 138–147. https://doi.org/10.1016/j.intell.2017.04.001

Meredith, W. (1993). Measurement invariance, factor analysis and factorial invariance. Psychometrika, 58(4), 525–543. https://doi.org/10.1007/BF02294825

Schroeders, U., & Gnambs, T. (2020). Degrees of Freedom in Multigroup Confirmatory Factor Analyses: Are Models of Measurement Invariance Testing Correctly Specified? European Journal of Psychological Assessment, 36(1), 105–113. https://doi.org/10.1027/1015-5759/a000500

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

The article focuses on an important and relevant contemporary issue and thus contributes to both society and research.
However, there are things to consider in how this is communicated in the text. I will comment on the different parts of the draft where I find things a bit unclear, but will start with some general issues.
Citations in the text are not in accordance with the journal style; consecutive citations should be collapsed with an en dash, for example in paragraph 71: [20-21-22-23-24] should be [20–24].
The explanation of the instrument is well developed, but it lacks basic information such as the weights of the latent and observed variables.
The first concern is that the questionnaire itself does not appear; it should be included in an appendix.
The second is that I need to see the weights of the observed and latent variables, so the path graph should be included.
The discussion would also need to clarify how many items the scale started with and how many are part of the final questionnaire. Paragraph 236 states "Once the factorial model of the scale with the best fit was found (6 factors and 24 items)"; if this is the final number of items, have any been removed in the EFA or CFA?
Expert-based content validation, prior to factor validation, is also missing.
In paragraph 231, the significant chi-square is attributed to the large sample; however, the CFA sample is 538 students, which is not that large, so you should be more careful when writing that the data "confirmed excellent compatibility" with the model.
A greater explanation of what the six factors are, what variables they encompass and what literature supports this factorization is needed.
Paragraph 237 describes a multigroup analysis by gender; why that variable and not others, such as the field of studies? Gender is not discussed in the objective or in the introduction; paragraph 120 states: "Given the importance of studying self-determination and psychological needs in university students, and taking into account the above, we have tried to fill this literature gap. Therefore, this study was carried out with the aim of analyzing and validating the psychometric properties of the Basic Psychological Need Satisfaction and Frustration Scale (BPNSFS) developed by Chen et al. [46] in a sample of Spanish university students."

Author Response

Thank you very much for your thoughtful and comprehensive review of our paper. In the revision, we have tried our best to address all the issues raised by you.

As for the citations, we have corrected them. We have included the items of the questionnaire in Spanish in Table 1 (page 5).
We have added a path diagram (page 7) with all the standardized weights of the latent and observed variables.
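With a fitted lavaan object (here called fit, a placeholder), standardized weights and a path diagram can be obtained along these lines (a sketch only, not the code used for the manuscript):

```r
# Sketch only, not the code used for the manuscript: standardized estimates and a
# path diagram from an existing lavaan fit object called 'fit' (placeholder name).
library(lavaan)
library(semPlot)

standardizedSolution(fit)            # standardized loadings, covariances, etc.

semPaths(fit, whatLabels = "std",    # annotate edges with standardized weights
         layout = "tree", edge.label.cex = 0.8)
```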
As for the factors that make up the questionnaire, we underlined:
"In particular, three of these scales assess the satisfaction of the three basic psychological needs: autonomy (4 items: for example, "I feel that my decisions reflect what I really want"), relatedness (4 items: for example, "I feel close and connected with other people who are important to me") and competence (4 items: for example, "I feel I can successfully complete difficult tasks"). The other three scales assess the frustration of each basic psychological need: autonomy (4 items: for example, "Most of the things I do feel like 'I have to'"), relatedness (4 items: for example, "I have the impression that I don't like the people I spend my time with") and competence (4 items: for example, "I have serious doubts that I can do things well"). The 24 items were rated on a 5-point Likert-type scale, ranging from 1 (completely false) to 5 (completely true)", adding that this is the instrument validated by Chen et al. (2015).
Regarding expert-based content validation, we added on page 3 (lines 124-130): "All participants completed the Spanish version of the BPNSFS scale. The form was translated into Spanish by two bilingual translators and then back-translated into English by a bilingual, native-English-speaking researcher, after which differences between the two English versions were discussed. Only minor differences in style between the back-translated and the original version were found. The Spanish version was distributed to a sample of 50 university students, who reported no problems with the meaning and clarity of the items. The original form was retrieved from the Self-Determination Theory website (https://selfdeterminationtheory.org)."


We re-ran all the analyses. Specifically, the EFA was removed, as the scale and its factor structure are already well known in the literature, and in the introduction we added more detailed information on the composition of the scale (page 2, lines 65-86), supporting the factor structure with empirical evidence from the literature.
As far as the MGCFA is concerned, we have taken your valuable advice into account and have also tested measurement invariance across the different types of studies (using the standard procedure and the common terms, i.e., configural, metric, scalar, and strict invariance).

Once again, we appreciate your insightful and constructive comments and suggestions, which have helped us significantly reframe and improve our manuscript. We sincerely hope that our revision has adequately addressed your concerns and that you will agree that the revised manuscript has been improved and makes a sufficient contribution to the literature.

Round 2

Reviewer 1 Report

Dear authors,

I regret that my recommendation is against the publication of the manuscript. The journal Sustainability has very specific guidelines regarding data availability:

"In order to maintain the integrity, transparency and reproducibility of research records, authors must make their experimental and research data openly available either by depositing into data repositories or by publishing the data and files as supplementary information in this journal."

"The supplementary files will also be available to the referees as part of the peer-review process."

I understand that under certain circumstances the data cannot be made publicly available (doi: 10.1177/2515245917751886), especially if there are ethical concerns (... although I do not see any such problems with this study). In such cases, however, restricted access to the data must be provided at least for the review process, as is possible with the Open Science Framework (osf.io) (see doi: 10.1177/2515245918757689) or other technical solutions. The review of the analyses based on the original data should be part of the review process of the manuscript. As a reviewer, how should I be able to assess the quality of the manuscript if I lack the central information?

Anyway, the journal guidelines are straightforward, and I have not seen any convincing arguments against publishing the data. For the future, this article might also be interesting for the authors: doi: 10.1371/journal.pone.0225883.

Kind regards,

Author Response

Dear Reviewer

Thank you very much for your comment. We accept your request and upload the data as supplementary material.

Kind Regards. 

Reviewer 2 Report

The reviews have been carried out and the document is significantly improved at the methodological level. Suitable for publication.

Author Response

Dear Reviewer

Thank you very much for your comment.

Best Regards

Round 3

Reviewer 1 Report

I think the revised manuscript has improved a lot. However, I do have a few suggestions/recommendations:

  • title: The BPNSF scale is obviously used in different cultural contexts, so it might be useful to formulate the title more clearly with regard to the study content, for example: "... examining validity and measurement invariance of the *Spanish* BPNSF scale".
  • lines 120-121 - "factor invariance by gender": Please check whether this statement is still correct. Do you mean "measurement invariance" instead of "factor invariance"? (Again, my recommendation is to stick to the established psychometric terms.) Furthermore, it is not only gender but also the type of studies.
  • lines 123-129: Please check whether the translation procedure is in line with state-of-the-art procedures to adapt and translate questionnaires (see International Test Commission, 2017).
  • lines 162-167: Some of the cutoff values seem unusual (e.g., CFI > .90). Usually, CFI > .95 or .97 is considered acceptable (e.g., Schermelleh-Engel et al., 2003). It may be better to rely on recent methodological studies recommending combinations of model fit indices rather than collecting individual recommendations for each indicator from different textbooks.
  • lines 186-188: Please double check whether the recommended cutoff criteria are in line with Chen (2007). I think it is .010 (instead of .015) with regard to SRMR and residual invariance.
  • Table 1:
    • (1) An English translation of the items and a link to the numbered items of Chen et al.'s (2015; Table 3) study would be useful for non-Spanish readers.
    • (2) Some of the items show rather high/low means. I think a discussion of this issue with regard to former studies (e.g., Chen et al., 2015) is necessary: Is this specific to the Spanish sample, or is this pattern also found in other studies?
  • Table 2: Please specify which correlations are displayed (i.e., observed based on manifest scale scores, or latent, or …).
  • Figure 1: The values of the inter-factor correlations are difficult to read. Furthermore, it would be useful to name the latent factors (and maybe items) with less cryptic abbreviations.
  • Please double check whether the references to exploratory factor analysis are still correct (e.g., line 275)
  • Table 3: I have looked only briefly at the data but spotted a few inconsistencies:
    • (1) It seems that the non-robust model fits were reported. Usually the robust statistics are reported (e.g., https://groups.google.com/forum/#!topic/lavaan/wYA9msIv5TI), which in this case implies a poorer model fit.
    • (2) I'm not familiar with the JASP software. However, as described in the manuscript, it relies on the lavaan package. With regard to lavaan, you have to specify which variables are categorical (https://lavaan.ugent.be/tutorial/cat.html). This makes a difference even when you explicitly specify the DWLS estimator: I noticed that you will get better model fits and different factor loadings, etc., and that you reported the results based on the DWLS estimator without declaring the indicators as 'ordered' (see the sketch after this list). Please double-check your CFA, as this might also affect your analyses with regard to measurement invariance.
  • (as it is still missing) Please elaborate in the manuscript the issue of data availability according to the journal guidelines (https://www.mdpi.com/journal/sustainability/instructions#suppmaterials)
  • (as it is still missing) I request that the authors add a statement to the paper confirming whether they have reported all measures, conditions, data exclusions, and how they determined their sample sizes. The authors should, of course, add any additional text to ensure the statement is accurate. This is the standard reviewer disclosure request endorsed by the Center for Open Science [see http://osf.io/hadz3]. I include it in every review.
    In this sense, please also elaborate on how missing data were handled. There does not seem to be a single missing value, which is remarkable for this sample size.
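To illustrate the point about declaring indicators as ordered (Table 3, point (2) above), the same model can be fitted two ways in lavaan (a sketch only; model, dat, and the item names are placeholders, not the setup used in the manuscript):

```r
# Sketch only, with placeholder names: the 'ordered' declaration changes the
# estimation (polychoric correlations, thresholds, scaled test statistic),
# not just the estimator label.
library(lavaan)

# (a) DWLS requested, but items left numeric - what the comment above suspects was done
fit_numeric <- cfa(model, data = dat, estimator = "DWLS")

# (b) Items declared as ordered - lavaan then uses polychorics, thresholds and the
#     mean- and variance-adjusted (WLSMV) test statistic
fit_ordered <- cfa(model, data = dat, estimator = "WLSMV", ordered = TRUE)

fitMeasures(fit_numeric, c("cfi", "tli", "rmsea", "srmr"))
fitMeasures(fit_ordered, c("cfi.scaled", "tli.scaled", "rmsea.scaled", "srmr"))
```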

References

International Test Commission. (2017). The ITC Guidelines for Translating and Adapting Tests (Second edition). www.InTestCom.org

Schermelleh-Engel, K., Moosbrugger, H., & Müller, H. (2003). Evaluating the fit of structural equation models: Tests of significance and descriptive goodness-of-fit measures. Methods of Psychological Research, 8(2), 23–74.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
