Article

Revisiting Psychological Contract Measurement: Validation of the PSYCONES Questionnaire

by Adrián García-Selva 1,*, Beatriz Martín-del-Rio 1 and José Ramos-López 2

1 Department of Behavioural Sciences and Health, University of Miguel Hernández, 03202 Elche, Spain
2 Department of Social Psychology, University of Valencia, 46010 Valencia, Spain
* Author to whom correspondence should be addressed.
Soc. Sci. 2025, 14(3), 181; https://doi.org/10.3390/socsci14030181
Submission received: 18 February 2025 / Revised: 8 March 2025 / Accepted: 12 March 2025 / Published: 17 March 2025

Abstract

The psychological contract is a fundamental construct for understanding the relationships established between employee and employer. However, the literature indicates that no validated instrument exists to measure this construct. The present research aims to validate the psychological contract questionnaire developed by the PSYCONES team, providing empirical evidence of its reliability and validity in the current labor context. A sample of 1764 workers in the Spanish labor market was used, and a cross-validation process was applied in which an exploratory factor analysis was performed and various models (CFA, ESEM, bifactor CFA, and bifactor ESEM) were tested to evaluate the structure of the questionnaire. In addition, multigroup invariance analyses were performed to examine the stability of the model according to gender and job level. The results indicated that the four-factor ESEM model showed the best fit and representation of the following dimensions of the psychological contract: fulfillment of company promises, fulfillment of employee promises, psychological contract violation, and perception of justice and trust. Likewise, the scale showed significant relationships with job satisfaction and organizational commitment, reinforcing its concurrent validity. This study provides a psychometrically robust instrument to assess the psychological contract in the workplace, offering a basis for future research and practical applications in different organizational contexts.

1. Introduction

The last two decades have seen significant changes in the structure of companies. The changing nature of the current socioeconomic reality has forced organizations to respond to turbulent and changing environments, which has undoubtedly created new situations in the workplace (Galanti et al. 2023). More specifically, the work environment has undergone considerable transformations, driven by digitalization, automation, and, more recently, the COVID-19 pandemic. The global health crisis not only accelerated the adoption of telework and hybrid employment models but also intensified pre-existing structural challenges, such as labor precariousness, wage inequality, and polarization between high-skilled and low-skilled workers (Koutroukis et al. 2022). In addition, the impact of the pandemic on employee well-being has been notable, generating increased work stress, digital fatigue, and new expectations regarding job security and flexibility (Gavin et al. 2022). These changes have reshaped the employee–employer relationship, affecting organizational trust and perceptions of fairness in the workplace. Given this past, present, and future scenario, it is essential to understand the employee–organization relationship and the behavior of people in their work environment (Walling 2023). To this end, academia has made use of numerous approaches and theoretical perspectives over the years. One of these perspectives corresponds to the psychological contract, a construct that focuses on understanding the content and scope of the exchange that takes place in this relationship and the implications it has for both the organization and the employees.
Since its introduction by Argyris (1960), and especially under the influence of Rousseau (1989, 1995), the psychological contract has generated much attention in the academic and professional fields, and a wealth of knowledge is now available on the implications and consequences of expectations and obligations not fulfilled by each party (Coyle-Shapiro et al. 2019). The roots of this construct stem from Blau’s (Blau 1964) social exchange theory, which posits that social relationships develop through unspecified obligations and the distribution of unequal power resources. Following Rousseau’s (Rousseau 1995) framework, the psychological contract can be defined as a set of individual beliefs, shaped by the organization, regarding the terms of an exchange agreement between an individual and their employer.
Therefore, the psychological contract is an individual and subjective perception, and consequently, the parties involved in the relationship may have different views regarding the status of the psychological contract established with the other party (Morrison and Robinson 1997). This has led most research on this construct to sideline the perspective of the employer/organization and instead focus primarily on the employees’ perspective, addressing two main areas (Conway and Pekcan 2019; Topa et al. 2022): (1) the study of the content of the psychological contract (i.e., the perceived set of promises based on the exchange between the employee and the organization) and (2) the status of the psychological contract in terms of fulfillment and breach. Regarding the latter, it is worth noting that the scientific literature has often used the terms breach/violation of the psychological contract interchangeably (Topa et al. 2022; Zhao et al. 2007), despite the existence of a conceptual distinction between the two. Psychological contract breach corresponds to an individual’s cognitive evaluation that the other party has failed to fulfill its promises, while the violation of the psychological contract constitutes the emotional reaction resulting from that cognition or perception of breach (Morrison and Robinson 1997). Thus, this conceptual distinction between breach and violation of the psychological contract implies that a person may perceive a breach of promises but not necessarily experience feelings of contract violation (Coyle-Shapiro et al. 2019).
In any case, the scientific literature on the psychological contract has shown great interest in its status (in terms of fulfillment or breach) because this is the primary way in which this construct influences employee behavior (Rousseau 1989). In this regard, the review conducted by Topa et al. (2022) demonstrated that the breach of the psychological contract was strongly related to attitudinal outcomes (e.g., lower organizational commitment, decreased job satisfaction, and higher turnover intention) and behavioral outcomes (e.g., reduced job performance or development of negligent behaviors in the workplace). However, despite the attention that the psychological contract has garnered in academia over the past 15 years (Kozhakhmet et al. 2023), many issues remain unresolved, one of which concerns the question of how to measure this construct (Topa et al. 2022).
In a review conducted by Rousseau and Tijoriwala (1998), the authors proposed three approaches to measuring the psychological contract. The first approach focuses on its characteristics, determining the extent to which the psychological contract is primarily transactional (short-term economic exchange) or relational (long-term exchange of socio-emotional resources). The second approach focuses on content, examining the specific terms that constitute the contract, including the promises made by the organization and the employee. Finally, the third approach focuses on evaluation, determining the degree of fulfillment, change, or breach of the contract experienced within the exchange relationship. Thus, most instruments designed to measure the psychological contract aim to differentiate between the relational and transactional components (Adamska et al. 2015). However, the structure of the relational–transactional scale has not garnered substantial empirical support (Raeder et al. 2009). Leaving aside the bipolarity that encompasses the relational and transactional poles of the psychological contract, few studies have developed instruments that distinguish between the dimensions of employee promises and organizational promises (De Vos et al. 2003; Raeder and Grote 2004).
In a review conducted by Freese and Schalk (2008), the authors concluded that there are very few validated questionnaires on the psychological contract. Moreover, most of the developed questionnaires originate from English-speaking contexts (Millward and Hopkins 1998), with very few instruments developed or adapted for other countries and languages (Guerrero 2005; Raeder et al. 2009). Another important issue in analyzing psychological contract measurement instruments is that, despite researchers’ efforts, many questionnaires have been developed using very small or highly specific samples (Barbieri et al. 2018; Gresse and Linde 2020; Raeder et al. 2009; Spies et al. 2010; Zhang et al. 2020), which prevents adequate generalization of the data and makes it difficult to use these questionnaires in other contexts. Finally, it is worth noting that some of the measurement instruments were developed some time ago and in work environments that differ significantly from current ones (De Vos et al. 2003; Freese et al. 2008; Freese and Schalk 1997; Rousseau 1990, 2000). Therefore, it would be very valuable to have updated instruments to assess the psychological contract, allowing for a more satisfactory understanding of the content and status of current employment relationships.
Following the recommendations of Freese and Schalk (2008), the questionnaire developed by the PSYCONES team (Guest et al. 2010; Silla et al. 2005) is considered an appropriate tool for assessing the psychological contract. The PSYCONES project (Psychological Contract across Employment Situations) was an international research project conducted in six European countries (Germany, Belgium, Spain, the Netherlands, the United Kingdom, and Sweden), as well as Israel. The PSYCONES psychological contract questionnaire includes dimensions such as the fulfillment of a company’s promises, fulfillment of an employee’s promises, emotions related to fulfillment/breach (corresponding to the perception of psychological contract violation), and perceived justice and trust. This last variable is not common in Rousseau’s classic model (Rousseau 1989), but its inclusion was intended to capture a more evaluative aspect of the exchange relationship between both parties. Thus, the PSYCONES model aimed to explore the employment relationship in terms of justice and trust, rather than just focusing on the exchange of specific promises and obligations, thereby giving more importance to the bidirectionality of the relationship (Estreder 2012). To date, the psychological contract questionnaire developed by the PSYCONES team has been used in numerous studies across different countries and contexts (see Boros and Curseu 2005; Dhurup et al. 2015; Estreder et al. 2006, 2019; Snyman et al. 2015; van den Heuvel and Schalk 2009; van der Vaart et al. 2013). However, to the best of our knowledge, no validation of this questionnaire has been published, making it highly relevant to carry out such a validation in order to provide a reliable and effective instrument for use in both academic and professional environments.
Thus, the objective of the present research is to validate the psychological contract questionnaire developed by the PSYCONES research team (Guest et al. 2010; Silla et al. 2005), to provide empirical evidence of its reliability and validity in the current context. To achieve this objective, the factorial structure of the questionnaire will be examined, and the adequacy and stability of different models (CFA, bifactor CFA, ESEM, and bifactor ESEM) will be analyzed in order to identify which provides the best fit and representation of the dimensions of the psychological contract. The relevance of this study lies in addressing a critical need identified in the psychological contract literature: the validation of updated measurement instruments adapted to contemporary work contexts (Alcover et al. 2017; Topa et al. 2022). Despite academic interest in this construct, significant limitations persist in existing questionnaires, and efforts are needed to adapt and expand research instruments through the use of larger samples in different contexts and more robust statistical techniques. Although the PSYCONES psychological contract questionnaire is considered an appropriate tool for assessing this construct (Freese and Schalk 2008), its lack of validation may limit its utility in current research. In a time when organizations are facing constantly changing work environments (Shore et al. 2024), having validated and reliable instruments is crucial for understanding current employment relationships and designing effective interventions that promote organizational and employee well-being. This research aims to make a significant contribution in this direction by offering an adapted and validated tool that will be useful for both researchers and human resource management professionals.

2. Materials and Methods

2.1. Participants

The sample consisted of 1764 individuals from the Spanish labor market, of which 928 (52.61%) were women and 836 (47.39%) were men. Since a cross-validation process was followed, the total sample was divided into two random groups. Table 1 presents the distribution of the sociodemographic variables collected according to the two subgroups, as well as for the total sample.
As can be seen, in most of the variables, the percentage values corresponding to each subgroup are quite similar to each other, ranging within relatively narrow intervals.

2.2. Measures

The psychological contract was measured using the questionnaire developed by the PSYCONES research team (Guest et al. 2010; Silla et al. 2005), based on the works of Alcover (2002), Anderson and Schalk (1998), and Thomas and Anderson (1998). This instrument encompasses four dimensions: (1) the fulfillment of a company’s promises, with 15 items (e.g., “Has your organization fulfilled the promises made about your participation in decision making?”); (2) fulfillment of employee’s promises, with 17 items (e.g., “Have you kept your promises to work well as a team?”); (3) psychological contract violation, assessed through 6 items considering various emotions resulting from the fulfillment/non-fulfillment of the contract (e.g., “Considering the degree to which your company/organization has or has not fulfilled its promises, do you feel betrayed?”); and (4) justice and trust perceived by the employee toward the organization, with 7 items (e.g., “In general, do you feel that you are fairly rewarded for the effort you put into your work?”). The 45 items use a Likert-type response format, ranging from 1 (strongly disagree) to 6 (strongly agree).
Organizational commitment was assessed using the questionnaire developed by Cook and Wall (1980), consisting of five items (e.g., “Even if this organization were not doing well, I would be reluctant to change organizations”). The response scale is in Likert-type format, ranging from 1 (strongly disagree) to 5 (strongly agree).
Finally, job satisfaction was measured using the questionnaire developed by the PSYCONES research team (Estreder et al. 2006; Guest et al. 2010; Peiró et al. 2007), adapted from Price’s (1997) instrument, which assesses job satisfaction through four items (e.g., “Most days I am excited about my work”). The response scale is in Likert-type format, ranging from 1 (strongly disagree) to 5 (strongly agree).

2.3. Procedure

Data collection was carried out by the research team through online forms designed with Google Forms. These forms were sent to a wide variety of companies located in different regions of Spain in order to ensure a diverse representation of sectors and organizations. Contact was established with human resources managers and directors, who facilitated the distribution of the questionnaire among employees. Responses were collected in a structured manner between 2022 and 2024.
The survey was divided into two parts. In the first block, each participant responded to the 45 items of the psychological contract questionnaire; thus, all 1764 respondents completed the measure of the study’s main variable. The variables of job satisfaction and organizational commitment were included in a second block, which only 1052 people answered (approximately 60% of the total). This second block contained, in addition to the psychological contract questionnaire, the four items of the job satisfaction questionnaire (Estreder et al. 2006; Guest et al. 2010; Peiró et al. 2007) plus the five items of the organizational commitment questionnaire (Cook and Wall 1980). The presentation of the questionnaires was always the same, beginning with a brief description of the purpose of the research, together with a guarantee of the anonymity of the data and the absence of questions on sensitive or personal topics (to ensure, as far as possible, the reliability of the responses), followed by instructions on how to complete the entire survey.

2.4. Data Analysis

To achieve the objectives of this research, the recommendations proposed by Ferrando et al. (2022) and Hernández et al. (2020) were followed. In order to increase the validity and generalizability of the results, a cross-validation procedure was implemented, dividing the total sample into two random subgroups. Thus, with the first subsample (n1 = 882), an exploratory factor analysis (EFA) was conducted to study the instrument’s dimensionality. Prior to this, the adequacy measures for factor analysis, Kaiser–Meyer–Olkin coefficient (KMO) and Bartlett’s test of sphericity, were examined. The EFA was performed on the polychoric correlation matrix, using the principal axis factoring estimation with an oblique promax rotation. To determine the number of factors to extract, parallel analysis and the empirical Kaiser criterion (EKC; Braeken and van Assen 2017) were used, following the recommendations of Auerswald and Moshagen (2019) regarding the determination of factors to retain in exploratory factor analysis with large sample sizes (N > 500). According to the EFA results, those items with factor loadings lower than 0.40 in the main factor or that had cross-loadings with a difference of less than 0.30 with respect to the loading of the main factor were removed (Lloret-Segura et al. 2014).
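As an illustration of the factor-retention step, the following is a minimal Python/NumPy sketch of Horn’s parallel analysis on simulated data. The analyses in this study were run in R; the sample size, loadings, and noise level below are hypothetical, and Pearson rather than polychoric correlations are used for simplicity.

```python
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    """Horn's parallel analysis: retain factors whose observed eigenvalue
    exceeds the mean eigenvalue obtained from random data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sim = np.empty((n_sims, p))
    for s in range(n_sims):
        rand = rng.standard_normal((n, p))
        sim[s] = np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False))[::-1]
    ref = sim.mean(axis=0)  # mean eigenvalues from random data
    return int(np.sum(obs_eig > ref)), obs_eig, ref

# Hypothetical example: 500 respondents, 8 items driven by 2 latent factors
rng = np.random.default_rng(1)
f = rng.standard_normal((500, 2))
loadings = np.zeros((8, 2)); loadings[:4, 0] = 0.8; loadings[4:, 1] = 0.8
x = f @ loadings.T + 0.5 * rng.standard_normal((500, 8))
k, _, _ = parallel_analysis(x)  # number of factors to retain
```

With this clean two-factor structure, the procedure recovers two factors, mirroring how the four-dimension recommendation was obtained in the study.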
Once the questionnaire was explored, the second subsample (n2 = 882) was used to assess its internal structure through four alternative models, which included confirmatory factor analyses (CFA), bifactor CFA, exploratory structural equation modeling (ESEM), and bifactor ESEM. Through the CFA, a four-factor model was calculated, where the loading of each item was restricted to a single factor, preventing cross-loadings with other factors (for this reason, this model is also commonly known in the scientific literature as the independent clusters model, ICM-CFA). This model included four correlated factors representing the dimensions of fulfillment of a company’s promises, fulfillment of an employee’s promises, psychological contract violation, and perceived justice and trust. Through ESEM, the same set of four factors was represented using a target oblique rotation, which allows for estimating all main loadings while approximating cross-loadings as close to zero as possible. Thus, ESEM allows for more realistic model constraints and provides less biased factor loadings and correlations (Asparouhov and Muthén 2009). In the bifactor CFA, all items were allowed to load simultaneously on a general factor (G) and four specific factors (S), corresponding to the dimensions of the psychological contract questionnaire, without allowing for cross-loadings between dimensions or correlations among them. The value of the bifactor model lies in its ability to determine unidimensionality in the presence of multidimensionality (Reise 2012). Finally, the bifactor ESEM allowed for estimating the same set of G and S factors as in the bifactor CFA solution while allowing for the free estimation of cross-loadings between the S factors through target orthogonal rotation, setting the correlations between the S factors to zero (Morin et al. 2016a). Figure 1 provides a simplified representation of each of the four models. 
All models were calculated using maximum likelihood estimation with robust standard errors and a scaled test statistic from Satorra–Bentler (Satorra and Bentler 1994), treating the items as continuous variables due to the number of categories used for each item in each factor (Rhemtulla et al. 2012).
To assess the fit of the models, the guidelines proposed by Brown (2015) and Kline (2023, 2024) were followed. Specifically, to evaluate the global model fit, the following indices were used (taking the robust version determined by the maximum likelihood estimation with robust standard errors): comparative fit index (CFI), Tucker–Lewis index (TLI), root mean square error of approximation (RMSEA), and standardized root mean square residual (SRMR). Regarding the CFI and TLI, values greater than 0.95 indicate optimal fit, while values greater than 0.90 indicate good fit. RMSEA values should be below 0.08 for reasonable fit and below 0.06 for good fit. Finally, SRMR values below 0.08 indicate good model fit.
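The global fit indices above are simple functions of the model and baseline χ2 statistics. The following sketch uses the standard (non-robust) formulas with hypothetical χ2 values; the robust variants reported by lavaan differ slightly, and conventions vary between N and N − 1 in the RMSEA denominator.

```python
import numpy as np

def fit_indices(chisq_m, df_m, chisq_b, df_b, n):
    """CFI, TLI, and RMSEA from the fitted model (m) and baseline model (b)
    chi-square statistics; n is the sample size."""
    d_m = max(chisq_m - df_m, 0.0)          # model noncentrality
    d_b = max(chisq_b - df_b, 0.0)          # baseline noncentrality
    cfi = 1.0 - d_m / max(d_m, d_b)
    tli = ((chisq_b / df_b) - (chisq_m / df_m)) / ((chisq_b / df_b) - 1.0)
    rmsea = np.sqrt(max(chisq_m - df_m, 0.0) / (df_m * (n - 1)))
    return cfi, tli, rmsea

# Hypothetical values for a sample of n = 882
cfi, tli, rmsea = fit_indices(chisq_m=900.0, df_m=450,
                              chisq_b=20000.0, df_b=496, n=882)
```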
Since the four proposed models were nested, a comparison was made using the RMSEAD, following Savalei et al. (2024). The RMSEAD differs from the change in RMSEA (ΔRMSEA) in that it is calculated from the χ2 difference between the two models, rather than as the difference between the two models’ independently computed RMSEA values. The RMSEAD can be interpreted as a typical RMSEA, where lower values represent a smaller difference in fit between the models (Savalei et al. 2024).
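Under this logic, the RMSEAD can be sketched as the χ2 difference between nested models rescaled to the RMSEA metric. The numbers below are hypothetical, and the N − 1 convention in the denominator is an assumption.

```python
import math

def rmsea_d(chisq_restricted, df_restricted, chisq_full, df_full, n):
    """RMSEA_D: the chi-square difference between two nested models,
    rescaled to the RMSEA metric (Savalei et al. 2024)."""
    d_chisq = chisq_restricted - chisq_full
    d_df = df_restricted - df_full
    return math.sqrt(max(d_chisq - d_df, 0.0) / (d_df * (n - 1)))

# Hypothetical comparison of a more restricted model with a fuller nested model
val = rmsea_d(chisq_restricted=1500.0, df_restricted=458,
              chisq_full=1100.0, df_full=432, n=882)
```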
On the other hand, the local fit of the models was also analyzed by examining the absolute correlation residuals that exceeded the absolute value of 0.10 for the same pair of variables, as these may be indicators of model specification errors (Maydeu-Olivares 2017). For this purpose, and following the recommendations of Kline (2024), the Benjamini–Hochberg (BH) procedure was used to control multiplicity when performing significance testing of residuals. The Benjamini–Hochberg procedure (Benjamini and Hochberg 1995) balances false positives and false negatives by controlling the false discovery rate (Jafari and Ansari-Pour 2019). After correcting the p-values for the multiple significance tests, the absolute correlation residuals that were significant and exceeded |0.10| were selected for further inspection in the final model. In this regard, it is worth noting that, based on the study of these absolute correlation residuals, there is no simple and automatic rule that allows the researcher to decide whether or not to reject the model (Kline 2024).
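The BH step-up procedure itself is straightforward to sketch. The p-values below are illustrative; in the study it was applied to the significance tests of the correlation residuals.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: returns a boolean mask of
    rejected hypotheses, controlling the false discovery rate at level q."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresh = q * (np.arange(1, m + 1) / m)   # i/m * q for the sorted p-values
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.where(below)[0])       # largest rank meeting the bound
        reject[order[:k + 1]] = True         # reject it and all smaller p-values
    return reject

rej = benjamini_hochberg([0.001, 0.013, 0.015, 0.041, 0.2])
```

Note the step-up character: 0.015 is rejected because a larger rank satisfies its bound, even though a plain per-rank comparison would be borderline.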
In addition to examining the fit indices, parameter estimates were also evaluated to select the best alternative among the models tested. Following the recommendations of Morin et al. (2016a, 2016b), the first step consisted of comparing the CFA and ESEM solutions, examining the factor loadings and correlations between factors, and supporting the model that presented lower correlations between the four dimensions (Swami et al. 2023). The next step was to contrast the retained model with its bifactor counterpart (bifactor CFA or bifactor ESEM). In this case, the arguments in favor of a bifactor representation are the presence of a well-defined G factor with strong factor loadings and the reduction in cross-loadings in the bifactor ESEM compared to the normal ESEM (Howard et al. 2016). Prior to this step, for the bifactor CFA and bifactor ESEM models, the power and reliability of the general factor were determined through the explained common variance (ECV), hierarchical omega (ωh), and the percentage of uncontaminated correlations (PUC) (the latter was obtained only for the bifactor CFA model). Values above 0.70 in the mentioned indices support the presence of a strong general factor (Rodríguez et al. 2016).
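These bifactor indices can be computed directly from a standardized loading matrix. A sketch with hypothetical loadings for eight items, one general factor, and two specific factors of four items each:

```python
import numpy as np

# Hypothetical standardized bifactor loadings: general (g) and specific (s)
g = np.array([0.6, 0.7, 0.65, 0.6, 0.55, 0.6, 0.7, 0.65])
s = np.zeros((8, 2)); s[:4, 0] = 0.4; s[4:, 1] = 0.45

# ECV: share of common variance attributable to the general factor
ecv = (g**2).sum() / ((g**2).sum() + (s**2).sum())

# Hierarchical omega: general-factor variance over total score variance
uniq = 1 - g**2 - (s**2).sum(axis=1)                  # item uniquenesses
omega_h = g.sum()**2 / (g.sum()**2 + (s.sum(axis=0)**2).sum() + uniq.sum())

# PUC: proportion of item correlations "uncontaminated" by specific factors,
# i.e., correlations between items from different specific factors
n_items, sizes = 8, [4, 4]
total = n_items * (n_items - 1) / 2
within = sum(k * (k - 1) / 2 for k in sizes)
puc = (total - within) / total
```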
Multigroup measurement invariance analyses were also performed with the total sample to examine the stability of the factor structure across two sociodemographic variables (gender and employment level), following the steps described by Kline (2023). These variables were chosen because they allow for the creation of groups with a balanced number of subjects, ensuring sufficient statistical power, and because of their relevance for the study of the psychological contract, especially in the case of the job level variable (Dick and Nadin 2011; Ellis 2007; Rogozińska-Pawełczyk and Buchali 2022). Therefore, models of configural invariance (equal structure), metric invariance (equal loadings), scalar invariance (equal intercepts), and strict invariance (equal residuals) were estimated. For each model, whether imposing these restrictions caused a significant loss of fit in the CFI, RMSEA, and SRMR indices was evaluated (Chen 2007). For large samples and groups with balanced sizes, metric invariance is achieved if changes in the CFI, RMSEA, and SRMR between the configural and metric models are less than 0.010, 0.015, and 0.030, respectively (|ΔCFI| < 0.010; |ΔRMSEA| < 0.015; |ΔSRMR| < 0.030); scalar and strict invariance are achieved if, between the metric and scalar models and between the scalar and strict models, these indices vary by less than 0.010, 0.015, and 0.010, respectively (|ΔCFI| < 0.010; |ΔRMSEA| < 0.015; |ΔSRMR| < 0.010) (Chen 2007). However, given that the variation in these indices should be treated as a guideline and not as an absolute criterion (Putnick and Bornstein 2016), the multigroup invariance analyses are further strengthened by reporting and discussing the values provided by the RMSEAD (Savalei et al. 2024).
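The decision rule just described can be encoded in a minimal helper; the Δ values passed in below are hypothetical.

```python
def invariance_step_ok(delta_cfi, delta_rmsea, delta_srmr, step):
    """Chen's (2007) guideline cut-offs for large samples with balanced
    groups. step: 'metric' (vs. configural) uses |dSRMR| < 0.030;
    'scalar' and 'strict' (vs. the previous model) use |dSRMR| < 0.010."""
    srmr_cut = 0.030 if step == "metric" else 0.010
    return (abs(delta_cfi) < 0.010
            and abs(delta_rmsea) < 0.015
            and abs(delta_srmr) < srmr_cut)

# Hypothetical changes between the configural and metric models
ok = invariance_step_ok(delta_cfi=-0.004, delta_rmsea=0.002,
                        delta_srmr=0.012, step="metric")
```

Note how the same |ΔSRMR| = 0.012 would pass the metric step but fail the stricter scalar/strict cut-off, which is why the thresholds are stage-specific.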
Finally, the reliability of the dimensions of the psychological contract scale was evaluated using the composite reliability index, which has been proposed as a superior alternative to other measures (Rönkkö and Cho 2022). Additionally, the concurrent validity of the scale was analyzed through correlation analysis and structural equation modeling, using job satisfaction and organizational commitment as criterion variables—two constructs that have been shown to be highly related to the psychological contract (Topa et al. 2022; Zhao et al. 2007). First, the correlations between the total scores of the psychological contract scale dimensions and the total scores of job satisfaction and organizational commitment were analyzed. Second, a structural equation modeling analysis was conducted using maximum likelihood estimation with robust standard errors, in which each dimension of the psychological contract was an exogenous latent variable, while job satisfaction and organizational commitment were endogenous latent variables.
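The composite reliability of a single dimension can be sketched from its standardized loadings. The values below are hypothetical, and this is the usual congeneric omega formula, one common form of the composite reliability measures discussed by Rönkkö and Cho (2022).

```python
import numpy as np

def composite_reliability(loadings):
    """Congeneric composite reliability (omega) for one factor:
    (sum of standardized loadings)^2 over that quantity plus the
    summed error variances. Assumes standardized items, no cross-loadings."""
    lam = np.asarray(loadings, dtype=float)
    errors = 1.0 - lam**2
    return lam.sum()**2 / (lam.sum()**2 + errors.sum())

# Hypothetical standardized loadings for a five-item dimension
cr = composite_reliability([0.75, 0.70, 0.80, 0.72, 0.68])
```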
All analyses were performed using the “lavaan” package (Rosseel 2012), “semTools” (Jorgensen et al. 2022), and “psych” (Revelle 2024) in R (version 4.4.2).

3. Results

In the first subsample (n1 = 882), both the KMO coefficient (0.94) and Bartlett’s test (p < 0.001) indicated that the data were adequate for EFA. The recommended number of dimensions according to the parallel analysis (Figure 2) and the EKC was four, so the EFA was run with this four-dimension structure.
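For reference, the overall KMO statistic compares squared correlations with squared partial correlations. The reported value of 0.94 comes from the actual study data; everything in the Python/NumPy sketch below is simulated and hypothetical.

```python
import numpy as np

def kmo(data):
    """Overall Kaiser-Meyer-Olkin sampling adequacy: sum of squared
    off-diagonal correlations relative to that sum plus the sum of
    squared off-diagonal partial correlations."""
    r = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(r)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                        # partial correlation matrix
    np.fill_diagonal(r, 0.0); np.fill_diagonal(partial, 0.0)
    return (r**2).sum() / ((r**2).sum() + (partial**2).sum())

# Simulated data with a clear two-factor structure (hypothetical)
rng = np.random.default_rng(2)
f = rng.standard_normal((500, 2))
lam = np.zeros((8, 2)); lam[:4, 0] = 0.8; lam[4:, 1] = 0.8
x = f @ lam.T + 0.5 * rng.standard_normal((500, 8))
k_val = kmo(x)
```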
The EFA results (Table 2) show that the items loading cleanly on a single factor obtained loadings between 0.50 and 0.89. However, items 3 and 14 of the company promises dimension, items 1 and 3 of the psychological contract violation dimension, and items 1, 12, and 16 of the worker promises dimension obtained loadings below 0.40 on their main factor. In addition, items 4, 10, and 15 of the company promises dimension, item 6 of the psychological contract violation dimension, and items 7 and 17 of the worker promises dimension obtained cross-loadings that differed from the main loading by less than 0.30; thus, following the recommendations of Lloret-Segura et al. (2014), these items were removed from the subsequent CFA and ESEM. In summary, the EFA revealed a four-factor, 32-item structure that explained 48% of the variance. The final four factors make theoretical sense given the content of the items that compose them.
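The retention rule applied here can be sketched as a simple filter over the standardized loading matrix. The loadings below are hypothetical; the cut-offs follow Lloret-Segura et al. (2014) as used in the text.

```python
import numpy as np

def retain_items(loading_matrix, main_cut=0.40, cross_gap=0.30):
    """Flag items to keep: the main (largest absolute) loading must be at
    least main_cut, and it must exceed the largest cross-loading by at
    least cross_gap."""
    L = np.abs(np.asarray(loading_matrix, dtype=float))
    main = L.max(axis=1)
    part = np.sort(L, axis=1)                 # ascending per item
    second = part[:, -2] if L.shape[1] > 1 else np.zeros(L.shape[0])
    return (main >= main_cut) & ((main - second) >= cross_gap)

# Hypothetical items: clean loader / weak main loading / strong cross-loading
keep = retain_items([[0.72, 0.10],
                     [0.35, 0.05],
                     [0.55, 0.40]])
```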
Regarding the assessment of validity evidence based on internal structure, the four structural equation models depicted above in Figure 1 were tested with a comparison of the nested model fit. None of the models presented Heywood cases, as there were no negative error variances and no R2 statistic with a value greater than 1. The fit indices of the four models tested (four-factor CFA, four-factor ESEM, bifactor CFA, and bifactor ESEM) are shown in Table 3.
As can be seen, the four models showed adequate global fit indices. However, the four-factor ESEM, bifactor CFA, and bifactor ESEM solutions obtained the best global fit indices, highlighting the increase in the CFI and TLI values. Examining the differences in the global fit indices more closely, the RMSEAD revealed that the four-factor CFA model showed a worse global fit to the data than the rest of the models, obtaining an RMSEAD of 0.104 with respect to the bifactor CFA model. On the other hand, the RMSEAD obtained between the four-factor ESEM model and the bifactor CFA model was 0.044, suggesting that both models have a similar overall fit. Finally, the bifactor ESEM model presented the best global fit, with an RMSEAD of 0.122 between the bifactor ESEM model and the four-factor ESEM model.
Nevertheless, the choice of a model cannot be based solely on the global fit indices, since these represent the central tendency of the residuals, or where most of their values lie, without measuring their variability or dispersion. Therefore, it is also essential to examine the local fit of the models, which can be assessed through the correlation residuals (Kline 2024). Thus, in the four models analyzed, as there are a total of 32 observed variables (items), there are 496 residual correlations. After applying the Benjamini–Hochberg (BH) procedure to correct multiplicity in the residuals significance analysis, the results revealed that the four-factor CFA and bifactor CFA models had a higher number of significant correlation residuals greater than |0.10| (Figure 3).
More specifically, the four-factor CFA model obtained a total of 136 significant correlation residuals, of which 35 had values greater than |0.10| (7.05% of the total correlation residuals). The bifactor CFA model had a total of 93 significant correlation residuals, of which 33 had values greater than |0.10| (6.65% of the total correlation residuals). Conversely, four-factor ESEM and bifactor ESEM models had fewer significant correlation residuals (33 and 29, respectively), of which only 9 (1.81% of the total correlation residuals) and 4 (0.81% of the total correlation residuals) had values greater than |0.10|. Furthermore, the significant correlation residuals of the four-factor ESEM and bifactor ESEM solutions were all less than the absolute value of 0.20, a fact that did not occur in the four-factor CFA and bifactor CFA models. In summary, these results reveal that the four-factor ESEM and bifactor ESEM models presented a better local fit with respect to the four-factor CFA and bifactor CFA models.
Finally, in the last step of evaluating the four models, following the recommendations of Morin et al. (2016a, 2016b), a comparison of the parameter estimates of the four-factor CFA and four-factor ESEM models was first carried out. Thus, Table 4 shows the factor loadings, cross-loadings, and uniqueness of each item for the four-factor CFA and four-factor ESEM models, while Table 5 presents the correlations between factors of both models. As can be seen in Table 4, the overall size of the factor loadings of the items on their target factors remained similar in the four-factor CFA (λ = 0.588 to 0.883; M = 0.752) and four-factor ESEM solutions (λ = 0.554 to 0.895; M = 0.733), showing the existence of well-defined factors. In the ESEM solution, the target factor loadings consistently remained well above the cross-loadings, which were generally very small (|λ| = 0 to 0.258; |M| = 0.056). In fact, only one cross-loading was higher than |0.20|: item 14 of the worker promises (developing new skills and improving current ones) had a cross-loading on the company promises factor of 0.258, while the standardized loading on its corresponding factor was 0.635. The rest of the cross-loadings obtained values that were below |0.170|.
In turn, the correlations between factors (Table 5) were slightly lower in the four-factor ESEM model (r = −0.511 to 0.648; |M| = 0.337) than in the four-factor CFA model (r = −0.579 to 0.690; |M| = 0.382), which lends support to the use of the exploratory structural equation model (Swami et al. 2023). Furthermore, the overall pattern of these correlations was not altered by the choice between the four-factor CFA and four-factor ESEM solutions. Two aspects revealed in the correlation analysis between the factors are worth noting. First, the correlation between the psychological contract violation and employee promises factors was very low and not significant in both models. Second, the correlation between company promises and psychological contract violation, although significant, was modest in size. Finally, the highest correlations between factors were those between company promises and perceived justice and trust, and between psychological contract violation and perceived justice and trust. Therefore, given that the four-factor ESEM model obtained better global fit indices than the four-factor CFA model, a better local fit, and lower correlations between the factors, the four-factor ESEM solution was retained for comparison with the bifactor ESEM model.
As for the bifactor solutions, Table 6 shows the parameter estimates of the bifactor CFA and bifactor ESEM models. It was previously shown that both had very satisfactory global fit indices, with those of the bifactor ESEM model being higher. However, it should be noted that bifactor models may tend to overfit the data regardless of whether or not the population model has a bifactor structure (Bonifay and Cai 2017; Bonifay et al. 2017; Markon 2019). When evaluating these models, it is therefore necessary to consider additional criteria beyond the global fit indices. In this line, as Table 6 shows, the general factor in both models is not sufficiently well defined, since some factor loadings on this general factor fall below 0.30 (Swami et al. 2023). This is corroborated by the ECV and ωh values. Even though the PUC of the bifactor CFA model was 0.723, above the established minimum (>0.70), the ECV indices of the bifactor CFA and bifactor ESEM models were 0.493 and 0.446, respectively, below the minimum required to consider that a model has a bifactor structure (>0.70) (Rodríguez et al. 2016). Likewise, the ωh values of the bifactor CFA (ωh = 0.641) and bifactor ESEM (ωh = 0.564) did not reach the cutoff of 0.70 needed to support the existence of a strong general factor. Congruently, the bifactor ESEM model did not reduce the cross-loadings (|λ| = 0 to 0.213; |M| = 0.069) but rather slightly increased their average with respect to the four-factor ESEM solution (|λ| = 0 to 0.258; |M| = 0.056).
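The ECV and ωh indices used above can be computed directly from a bifactor loading matrix. Below is a minimal sketch with hypothetical standardized loadings (six items, one general factor and two specific factors), not the questionnaire's actual estimates:

```python
def ecv(general, specifics):
    """Explained common variance: share of common variance due to the general factor."""
    g2 = sum(l * l for l in general)
    s2 = sum(l * l for fac in specifics for l in fac)
    return g2 / (g2 + s2)

def omega_hierarchical(general, specifics, uniquenesses):
    """Omega-h: proportion of total-score variance attributable to the general factor."""
    num = sum(general) ** 2
    total = num + sum(sum(fac) ** 2 for fac in specifics) + sum(uniquenesses)
    return num / total

# Hypothetical standardized loadings; each specific factor spans three items:
g = [0.5] * 6
s = [[0.4] * 3, [0.4] * 3]
theta = [1 - 0.5**2 - 0.4**2] * 6  # uniqueness per item

print(round(ecv(g, s), 3))                        # prints 0.61
print(round(omega_hierarchical(g, s, theta), 3))  # prints 0.584
```

Both toy values fall below the 0.70 benchmarks discussed above, which is the pattern that would argue against a strong general factor.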
In summary, the analyses performed so far suggest retaining the four-factor ESEM model as the solution that best represents the data. As shown above, the four-factor CFA model obtained significantly lower global fit indices than the four-factor ESEM model, in addition to having a worse local fit and higher factor correlations. For its part, the bifactor CFA model showed global fit indices similar to those of the four-factor ESEM model, although its local fit was worse. The bifactor ESEM model obtained significantly higher global fit indices and a very good local fit. However, neither bifactor model reflected the existence of a well-defined general factor, and the bifactor ESEM model slightly increased the cross-loadings of the items on specific factors rather than reducing them.
Regarding the multigroup measurement invariance analysis, two sociodemographic variables were used for the present study to test the stability of the factor structure. Specifically, the multigroup invariance analyses were carried out using the gender variable (male and female), and the job level variable, recoded to provide two groups of similar size (basic workers and managers/supervisors). Thus, as a prior step to invariance analysis, Table 7 shows the global fit indices corresponding to the four-factor ESEM model calculated for each group separately.
To perform the multigroup measurement invariance analysis, the steps described by Kline (2023) were followed, calculating first the configurational invariance (equal structure), followed by metric invariance (equal loadings), scalar invariance (equal intercepts), and, finally, strict invariance (equal residuals). In this regard, it should be noted that, when invariance tests fail, it is typical to consider partial invariance tests, in which the invariance restrictions are relaxed for one or a small number of parameters. Therefore, since the operationalization of the ESEM precludes its use in partial invariance estimation, the ESEM-within-CFA approach (EWC; Marsh et al. 2013) was used to perform the invariance analyses. The EWC basically consists of transforming the ESEM solution into the standard CFA framework in order to perform the analyses mentioned above (Morin et al. 2013).
Table 8 shows the results of the multigroup measurement invariance analysis performed with the sociodemographic grouping variables. As can be seen, the configurational models obtained satisfactory fit indices, which supports the presence of configurational invariance across groups. Regarding metric invariance, the differences between the RMSEA, CFI, and SRMR indices in the two multigroup analyses were smaller than the criterion determined by Chen (2007), while the RMSEAD values obtained between the configurational and metric models also support the presence of metric invariance, both for the multigroup analysis by gender (RMSEAD = 0.012) and by job level (RMSEAD = 0.031). Regarding scalar invariance, the changes in the RMSEA, CFI, and SRMR indices were small and did not reach the limit for ruling out scalar invariance, while the RMSEAD values were less than 0.070 in the multigroup analyses by gender (RMSEAD = 0.019) and by job level (RMSEAD = 0.068). Finally, with respect to strict invariance, the results presented in Table 8 show that this type of invariance was met in the multigroup analysis by gender: the changes in the RMSEA, CFI, and SRMR fit indices between the scalar and strict models were not large enough to rule out strict invariance, and the RMSEAD statistic also supported its presence (RMSEAD = 0.037).
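The decision rule applied at each invariance step can be sketched as a simple check on the changes in fit indices between nested multigroup models. The cutoffs below follow the values commonly attributed to Chen (2007) for large samples, and the index changes in the example are illustrative assumptions, not the study's results:

```python
def chen_2007_supports_invariance(delta_cfi, delta_rmsea, delta_srmr, step):
    """Flag whether nested-model index changes stay within cutoffs commonly
    attributed to Chen (2007) for large samples: |dCFI| < .010, dRMSEA < .015,
    and dSRMR < .030 for metric invariance or < .010 for scalar/strict steps."""
    srmr_cut = 0.030 if step == "metric" else 0.010
    return abs(delta_cfi) < 0.010 and delta_rmsea < 0.015 and delta_srmr < srmr_cut

# Hypothetical changes between a configural and a metric multigroup model:
print(chen_2007_supports_invariance(0.004, 0.006, 0.012, step="metric"))  # prints True
print(chen_2007_supports_invariance(0.004, 0.006, 0.012, step="scalar"))  # prints False
```

The example illustrates why the same SRMR change can be acceptable at the metric step but not at the more restrictive scalar/strict steps.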
However, strict invariance by job level was not clearly supported by the results. Although the changes in the RMSEA, CFI, and SRMR indices did not reach the limit for rejecting this type of invariance (Chen 2007), the RMSEAD obtained in the multigroup analysis by job level (RMSEAD = 0.097) suggested that strict invariance was not reached. Therefore, partial strict invariance was examined, using the χ2 values and their associated probabilities in the Lagrange multiplier tests of the strict invariance model to identify which parameters should be estimated freely between the two groups. The highest indices corresponded to the release of the intercept of company promise item 5 (participation in decision making), the residuals of company promise items 2 (guarantee of stable work) and 11 (opportunities for advancement/development), and the residual of employee promise item 3 (showing loyalty to the organization). By freely estimating the intercept and residuals of these items in both groups, an RMSEAD of 0.070 was obtained, which supports the existence of partial strict invariance in the multigroup analyses by job level.
Finally, the reliability analysis carried out on the dimensions of the psychological contract scale, together with the concurrent validity analyses, is presented. As can be seen in Table 9, the dimensions of the psychological contract scale obtained very good composite reliability indices, all above 0.80. Table 9 also shows the Pearson correlations calculated between the total scores of the scale dimensions and the job satisfaction and organizational commitment variables. All correlations of the psychological contract dimensions with job satisfaction and organizational commitment were significant, the strongest being those between company promises and organizational commitment (r = 0.539, p < 0.01), between justice and trust and job satisfaction (r = 0.510, p < 0.01), and between justice and trust and organizational commitment (r = 0.604, p < 0.01).
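Composite reliability of the kind reported above can be computed from standardized loadings under a congeneric measurement model. The sketch below uses hypothetical loadings for a three-item dimension, not the scale's actual estimates:

```python
def composite_reliability(loadings):
    """Congeneric composite reliability: (sum of loadings)^2 divided by that
    quantity plus the summed uniquenesses, with theta_i = 1 - lambda_i^2
    for standardized loadings."""
    lam_sum_sq = sum(loadings) ** 2
    theta_sum = sum(1 - l * l for l in loadings)
    return lam_sum_sq / (lam_sum_sq + theta_sum)

# Hypothetical standardized loadings for a three-item dimension:
print(round(composite_reliability([0.70, 0.75, 0.80]), 3))  # prints 0.795
```

With the target loadings reported for this scale (roughly 0.55 to 0.90 across 3 to 12 items per dimension), values above 0.80 of the kind shown in Table 9 are plausible outcomes of this formula.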
Lastly, the structural equation model calculated to analyze the concurrent validity of the psychological contract scale is shown in Figure 4. The fit indices were satisfactory (χ2 = 1928.994; RMSEA = 0.049 [0.046, 0.052]; CFI = 0.917; TLI = 0.901; SRMR = 0.035), and examination of the regression coefficients shows that the four dimensions of the psychological contract scale significantly influenced job satisfaction and organizational commitment.

4. Discussion

The purpose of the present study was to validate the psychological contract questionnaire designed by the PSYCONES research team (Guest et al. 2010; Silla et al. 2005). To this end, the number of factors to be retained in the EFA was first determined by parallel analysis and the EKC. The four-factor structure supported by these tests coincided with the structure proposed by the authors of the questionnaire (Guest et al. 2010; Silla et al. 2005). Specifically, the dimensions of (1) fulfillment of company promises; (2) fulfillment of employee promises; (3) psychological contract violation; and (4) perceived justice and trust were maintained. These results are consistent with the recommendations of Freese and Schalk (2008), who determined that a valid psychological contract assessment instrument should provide separate measures for the fulfillment of company promises, the fulfillment of worker promises, and the degree of perceived violation of the psychological contract.
On the other hand, the EFA procedure eliminated 13 of the original 45 items of the scale. The final structure of the questionnaire thus consisted of 32 items (10 for fulfillment of company promises, 12 for fulfillment of employee promises, 3 for psychological contract violation, and 7 for perceived justice and trust), a number very similar to that of other studies in which instruments for assessing the psychological contract have been designed and proposed, with item counts ranging from 22 to 57 (Freese et al. 2008; Raeder et al. 2009; Spies et al. 2010; Zhang et al. 2020), and one that allows each dimension of the psychological contract to be adequately explored. Moreover, a qualitative analysis of the deleted items shows that their content or wording may refer to implausible situations in the work environment (e.g., “Has the company kept its promises to support you with non-work-related problems?”) or may induce social desirability, especially in items related to the fulfillment of the worker’s promises (e.g., “Have you kept your promises to be on time at work?”). This may have hindered the scoring of these items and, therefore, distorted the estimation of factor loadings. Indeed, it is difficult for respondents to clearly understand all the definitions of the promises they make to employers (Conway and Briner 2009) and to remain free of bias when reporting their perception of the promises made to the company.
On the other hand, to evaluate the internal structure of the scale, the four-factor CFA, bifactor CFA, four-factor ESEM, and bifactor ESEM models were tested. The results obtained show positive fit indices in the four models analyzed, with those in the bifactor CFA and four-factor ESEM models being significantly higher than those of the four-factor CFA model, and the bifactor ESEM model being the one with the best overall fit indices. However, a deeper analysis of the bifactor models did not provide sufficient support to guarantee the existence of a general factor, so, finally, the four-factor ESEM model was retained as the one that best represented the structure of the psychological contract scale.
In comparison with the studies published to date on the design and validation of scales to assess psychological contract, none have employed internal validity analysis techniques that contrast several models in order to determine which one best fits the data (Guerrero 2005; Raeder et al. 2009; Spies et al. 2010; Zhang et al. 2020). Furthermore, the relevance of the present study also lies in the evidence provided on the factorial structure of the psychological contract scale of the PSYCONES team (Guest et al. 2010; Silla et al. 2005), continuing with the line proposed by Alcover et al. (2017) and Topa et al. (2022), and responding to the need raised by these authors for further research to clarify this issue. The large sample used to carry out the analyses, together with the cross-validation methodology and the reliability indices obtained for each dimension, support the robustness of the results.
At the same time, and in line with the above, the multigroup measurement invariance analyses support the non-existence of significant differences in the interpretation of the scale according to gender and job level, which evidences the stability of the model and facilitates the utilization of the questionnaire as a useful measurement tool. To date, no published work on the measurement of the psychological contract is known to have been subjected to multigroup invariance analysis, a technique that is currently considered in the social sciences as a necessary prerequisite to be able to make comparisons between groups, as it ensures that items have the same meaning in all groupings (Maassen et al. 2023; Putnick and Bornstein 2016).
It should be noted that the presence of invariance with respect to job level does not imply that there are no differences in the dimensions of the psychological contract between basic workers and supervisors/managers. On the contrary, invariance with respect to this grouping factor confirms that it is possible to compare the construct across groups (Protzko 2024). Measurement invariance implies that all effects of the grouping variable on the items are completely mediated by the latent construct (Borsboom 2023). In other words, when invariance holds, all between-group differences in the observed measure can be explained by the differences produced by the grouping factor in the latent construct. By contrast, when invariance does not hold, group differences in the latent construct will still lead to group differences in the items, but there will also be item differences between groups that are the consequence of some other mechanism not included in the model.
Finally, the concurrent validity analyses conducted provide evidence of significant relationships between the dimensions of the psychological contract scale and job satisfaction and organizational commitment, two of the most studied variables as an outcome of the psychological contract (Topa et al. 2022; Zhao et al. 2007). More specifically, the review by Topa et al. (2022) found that, in the scientific literature, the relationships between psychological contract breach and job satisfaction range from r = −0.45 to −0.38, while the relationships between psychological contract breach and organizational commitment range from r = −0.38 to −0.32. Very similarly, in the present study, the analyses performed have shown that the psychological contract violation dimension presented a negative and significant correlation with job satisfaction (r = −0.434) and with organizational commitment (r = −0.336). Likewise, the structural equation model revealed that all the dimensions of the scale significantly influenced the variables of job satisfaction and organizational commitment. Such results are in line with other similar studies that have found that the psychological contract, and more specifically, its fulfillment, exerted a significant influence on job satisfaction and organizational commitment to a similar magnitude (Ampofo 2020; Bravo et al. 2019; Karani Mehta et al. 2024; Rodwell et al. 2022).
The results of this research not only determine the validity of the PSYCONES questionnaire as a reliable tool for measuring the psychological contract in the work context but also highlight its potential application in organizational management. In practical terms, the questionnaire can be used by human resources professionals and managers to assess employees’ perception of the fulfillment of organizational promises, detect possible gaps in the employee–employer relationship, and anticipate problems related to job dissatisfaction and organizational commitment. This would allow companies to design more effective strategies to strengthen trust and the perception of fairness within the organization, key aspects for talent retention and well-being at work (Gelencsér et al. 2023). Moreover, from a sectoral perspective, the PSYCONES questionnaire may be especially useful in industries where the state of the psychological contract plays a crucial role in employee motivation and performance. Sectors with high employee turnover, such as hospitality, retail, and services, can benefit from its application to identify factors that influence employee retention. Likewise, in sectors with a high degree of specialization, such as technology, education, or healthcare, the measurement of the psychological contract can help to design retention and career development policies aligned with employee expectations.
As organizations adapt to a constantly changing reality, measuring the psychological contract becomes essential to understand how mutual expectations and obligations have evolved in this changing context. Having validated and updated instruments will allow for a more accurate assessment of emerging work dynamics and their impact on employee well-being, satisfaction, and retention, thus contributing to effective talent management today.
In summary, it is considered that the present research contributes to the advancement of the study of the psychological contract by providing a questionnaire with good psychometric properties that allows the perception of this construct in workers to be evaluated.

5. Limitations and Future Research Directions

Despite the contributions of this study, it is worth pointing out some limitations and future lines of research. The first concerns the use of self-report questionnaires as the sole form of data collection. One of the main difficulties in evaluating the psychological contract lies in something as elementary as its definition, which, as described by Rousseau (1990), corresponds to individual perceptions of the promises and obligations existing between an employee and his or her employer. It is therefore easy to imagine a situation in which an employee perceives promises from his or her employer that were never actually made and that result from the individual's own misinterpretation. It is thus important to consider developing forms of evaluation of this construct that are not limited to self-reports but introduce diverse sources of information, facilitating comparison of the data obtained.
Additionally, it would also be very relevant to develop measures that evaluate not only the fulfillment or non-fulfillment of the different promises but also the importance that each of them has for the employee. In other words, the value of a promise may be perceived differently by employees, which affects their perception of the fulfillment or non-fulfillment of the contract in a different way (De Vos and Meganck 2009; Kraak et al. 2017). For example, a person may perceive a breach of contract if the promise that was most important to him or her has not been fulfilled, even if all other promises of lesser value have been fulfilled (Freese and Schalk 2008). In short, new lines of research aimed at developing methods for assessing the psychological contract must consider the relevance of specific promises to the employee. This means that, when assessing whether the psychological contract has been fulfilled or not, it is essential to take into account not only the fulfillment but also the value or importance of the promise for the employee.
On the other hand, although this work has demonstrated the invariance of the questionnaire across two sociodemographic factors, it would also be of great interest to know how the different dimensions of the scale function across sectors or working conditions, such as the type of employment contract. In the present sample, multigroup measurement invariance analysis was not carried out with these variables because of the unequal number of participants across sectors and contract types, in order to avoid biased estimates caused by very different group sizes. Another highly relevant line of research therefore corresponds to analyzing the performance of this scale with workers from different sectors and types of employment contracts, to verify whether this invariance is maintained or whether there are substantial differences. Furthermore, in relation to invariance, one limitation to note is that the job level variable was recoded to obtain two groups of similar size, which required collapsing two categories into one, leaving this variable with two categories (basic workers and supervisors/managers) instead of the three it initially contained. Future studies should therefore further analyze the invariance of this scale with respect to job level and determine whether the structure is indeed stable across entry-level workers, supervisors, and managers or directors.
Finally, although the total sample size is considerable and allows relevant conclusions to be drawn, these results were obtained with a sample of workers from the Spanish labor context, so their generalization requires caution: the labor situation in Spanish society may differ substantially from the labor realities of other countries, which may in turn affect the content and status of the psychological contract (Aldossari et al. 2024; Conway et al. 2014; Jayaweera et al. 2021; Metz et al. 2012). The findings provided here, while important for academia and psychological contract research, should therefore not be taken as an endpoint. On the contrary, other researchers are encouraged to conduct similar studies that measure the psychological contract with the scale presented in this research, in order to further examine its validity and factorial structure.

Author Contributions

Conceptualization, A.G.-S., J.R.-L. and B.M.-d.-R.; methodology, A.G.-S. and J.R.-L.; software, A.G.-S.; formal analysis, A.G.-S., B.M.-d.-R. and J.R.-L.; data curation, A.G.-S.; writing—original draft preparation, A.G.-S., J.R.-L. and B.M.-d.-R.; writing—review and editing, A.G.-S., B.M.-d.-R. and J.R.-L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of Miguel Hernández University (protocol code DCC.ASP.04.22, date of approval: 11 November 2022).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Adamska, Krystyna, Paweł Jurek, and Joanna Różycka-Tran. 2015. The mediational role of relational psychological contract in belief in a zero-sum game and work input attitude dependency. Polish Psychological Bulletin 46: 579–86. [Google Scholar] [CrossRef]
  2. Alcover, Carlos María. 2002. El Contrato Psicológico. El Componente Implícito de las Relaciones Laborales. Archidona: Ediciones Aljibe. [Google Scholar]
  3. Alcover, Carlos-María, Ramón Rico, William H. Turnley, and Mark C. Bolino. 2017. Multi-dependence in the formation and development of the distributed psychological contract. European Journal of Work and Organizational Psychology 26: 16–29. [Google Scholar] [CrossRef]
  4. Aldossari, Maryam, Maria Simosi, and Denise M. Rousseau. 2024. So lucky to be paid on time! Downward social comparison and gratitude in crisis economy psychological contracts. Economic and Industrial Democracy 45: 1158–83. [Google Scholar] [CrossRef]
  5. Ampofo, Emmanuel Twumasi. 2020. Do job satisfaction and work engagement mediate the effects of psychological contract breach and abusive supervision on hotel employees’ life satisfaction? Journal of Hospitality Marketing & Management 30: 282–304. [Google Scholar] [CrossRef]
  6. Anderson, Neil, and René Schalk. 1998. The psychological contract in retrospect and prospect. Journal of Organizational Behavior 19: 637–47. [Google Scholar] [CrossRef]
  7. Argyris, Chris. 1960. Understanding Organisational Behaviour. Homewood: Dorsey. [Google Scholar]
  8. Asparouhov, Tihomir, and Bengt Muthén. 2009. Exploratory structural equation modeling. Structural Equation Modeling: A Multidisciplinary Journal 16: 397–438. [Google Scholar] [CrossRef]
  9. Auerswald, Max, and Morten Moshagen. 2019. How to determine the number of factors to retain in exploratory factor analysis: A comparison of extraction methods under realistic conditions. Psychological Methods 24: 468–91. [Google Scholar] [CrossRef]
  10. Barbieri, Barbara, Maria Luisa Farnese, Isabella Sulis, Laura Dal Corso, and Alessandro De Carlo. 2018. One perception, two perspectives: Measuring psychological contract dimensionality through the Psychological Contract Content Questionnaire. TPM-Testing, Psychometrics, Methodology in Applied Psychology 25: 21–47. [Google Scholar] [CrossRef]
  11. Benjamini, Yoav, and Yosef Hochberg. 1995. Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological) 57: 289–300. [Google Scholar] [CrossRef]
  12. Blau, Peter. 1964. Exchange and Power in Social Life. Hoboken: Wiley. [Google Scholar]
  13. Bonifay, Wes, and Li Cai. 2017. On the Complexity of Item Response Theory Models. Multivariate Behavioral Research 52: 465–84. [Google Scholar] [CrossRef]
  14. Bonifay, Wes, Sean P. Lane, and Steven P. Reise. 2017. Three concerns with applying a bifactor model as a structure of psychopathology. Clinical Psychological Science 5: 184–86. [Google Scholar] [CrossRef]
  15. Boros, Smaranda, and Petru L. Curseu. 2005. Psychological contracts in work relations. Some data regarding the psychometric properties of the PSYCONES questionnaire on a Romanian sample. Revista de Psihologie Organizationala 5: 123–45. [Google Scholar]
  16. Borsboom, Denny. 2023. Psychological Constructs as Organizing Principles. In Essays on Contemporary Psychometrics. Methodology of Educational Measurement and Assessment. Edited by L. Andries van der Ark, Wilco H. M. Emons and Rob R. Meijer. Berlin/Heidelberg: Springer, pp. 89–108. [Google Scholar] [CrossRef]
  17. Braeken, Johan, and Marcel A. L. M. van Assen. 2017. An empirical Kaiser criterion. Psychological Methods 22: 450–66. [Google Scholar] [CrossRef] [PubMed]
  18. Bravo, Gonzalo, Doyeon Won, and Weisheng Chiu. 2019. Psychological contract, job satisfaction, commitment, and turnover intention: Exploring the moderating role of psychological contract breach in National Collegiate Athletic Association coaches. International Journal of Sports Science & Coaching 14: 273–84. [Google Scholar] [CrossRef]
  19. Brown, Timothy A. 2015. Methodology in the Social Sciences. Confirmatory Factor Analysis for Applied Research, 2nd ed. New York: The Guilford Press. [Google Scholar]
  20. Chen, Fang Fang. 2007. Sensitivity of goodness of fit indices to lack of measurement invariance. Structural Equation Modeling: A Multidisciplinary Journal 14: 464–504. [Google Scholar] [CrossRef]
  21. Conway, Neil, and Claire Pekcan. 2019. Psychological contract research: Older, but is it wiser? In Handbook of Research on the Psychological Contract at Work. Edited by Yannick Griep and Cary Cooper. Northampton: Edward Elgar Publishing, pp. 10–34. [Google Scholar] [CrossRef]
  22. Conway, Neil, and Rob B. Briner. 2009. Fifty Years of Psychological Contract Research: What Do We Know and What are the Main Challenges? In International Review of Industrial and Organizational Psychology, Volume 24. Edited by Gerard P. Hodgkinson and J. Kevin Ford. Hoboken: John Wiley & Sons, Ltd., pp. 71–130. [Google Scholar]
  23. Conway, Neil, Tina Kiefer, Jean Hartley, and Rob B. Briner. 2014. Doing more with less? Employee reactions to psychological contract breach via target similarity or spillover during public sector organizational change. British Journal of Management 25: 737–54. [Google Scholar] [CrossRef]
  24. Cook, John, and Toby Wall. 1980. New work attitude measures of trust, organizational commitment and personal need non-fulfilment. Journal of Occupational Psychology 53: 39–52. [Google Scholar] [CrossRef]
  25. Coyle-Shapiro, Jacqueline A.-M., Sandra Pereira Costa, Wiebke Doden, and Chiachi Chang. 2019. Psychological contracts: Past, present, and future. Annual Review of Organizational Psychology and Organizational Behavior 6: 145–69. [Google Scholar] [CrossRef]
  26. De Vos, Ans, and Annelies Meganck. 2009. What HR managers do versus what employees value: Exploring both parties’ views on retention management from a psychological contract perspective. Personnel Review 38: 45–60. [Google Scholar] [CrossRef]
  27. De Vos, Ans, Dirk Buyens, and René Schalk. 2003. Psychological contract development during organizational socialization: Adaptation to reality and the role of reciprocity. Journal of Organizational Behavior 24: 537–59. [Google Scholar] [CrossRef]
  28. Dhurup, M., E. Keyser, and J. Surujlal. 2015. The psychological contract, violation of the psychological contract, work-related anxiety and intention to quit of sport coaches in South Africa management and governance. African Journal for Physical Health Education, Recreation and Dance 21: 195–208. [Google Scholar]
  29. Dick, Penny, and Sara J. Nadin. 2011. Exploiting the exploited: The psychological contract, workplace domination and symbolic violence. Culture and Organization 17: 293–311. [Google Scholar] [CrossRef]
  30. Ellis, Jennifer Butler. 2007. Psychological Contracts: Does Work Status Affect Perceptions of Making and Keeping Promises? Management Communication Quarterly 20: 335–62. [Google Scholar] [CrossRef]
  31. Estreder, Yolanda. 2012. El Estudio de la Percepción Global de Justicia Organizacional en el Marco del Contrato Psicológico: Una Aproximación Multinivel. Ph.D. Thesis, University of Valencia, Valencia, Spain. Available online: https://roderic.uv.es/items/95b7646e-efdc-43cc-ba0c-2adb8c5e0997 (accessed on 5 January 2025).
  32. Estreder, Yolanda, Inés Tomás, Maria José Chambel, and José Ramos. 2019. Psychological contract and attitudinal outcomes: Multilevel mediation model. Personnel Review 48: 1685–1700. [Google Scholar] [CrossRef]
  33. Estreder, Yolanda, José Ramos, B. Sora, F. Latorre, S. Carbonell, and Isabel Rodríguez. 2006. El contrato psicológico en función del grupo ocupacional de los trabajadores. Revista de Psicología Social Aplicada 16: 5–31. [Google Scholar]
  34. Ferrando, Pere, Urbano Lorenzo-Seva, Ana Hernández-Dorado, and José Muñiz. 2022. Decálogo para el análisis factorial de los ítems de un test. Psicothema 34: 7–17. [Google Scholar] [CrossRef] [PubMed]
  35. Freese, Charissa, and René Schalk. 1997. Tilburgse Psychologisch Contract Vragenlijst [Tilburg Psychological Contract Questionnaire]. Tilburg: Tilburg University. [Google Scholar]
  36. Freese, Charissa, and René Schalk. 2008. How to measure the psychological contract? A critical criteria-based review of measures. South African Journal of Psychology 38: 269–86. [Google Scholar] [CrossRef]
  37. Freese, Charissa, Rene Schalk, and Marcel Croon. 2008. The Tilburg psychological contract questionnaire. Gedrag en Organisatie 21: 278–94. [Google Scholar] [CrossRef]
  38. Galanti, Teresa, Bruna Ferrara, Paula Benevene, and Ilaria Buonomo. 2023. Rethinking the Unthinkable: A Delphi Study on Remote Work during COVID-19 Pandemic. Social Sciences 12: 497. [Google Scholar] [CrossRef]
  39. Gavin, Mihajla, Mahan Poorhosseinzadeh, and Jim Arrowsmith. 2022. The transformation of work and employment relations: COVID-19 and beyond. Labour and Industry 32: 1–9. [Google Scholar] [CrossRef]
  40. Gelencsér, Martin, Gábor Szabó-Szentgróti, Zsolt Sándor Kőmüves, and Gábor Hollósy-Vadász. 2023. The Holistic Model of Labour Retention: The Impact of Workplace Wellbeing Factors on Employee Retention. Administrative Sciences 13: 121. [Google Scholar] [CrossRef]
  41. Gresse, Werner G., and Barend Linde. 2020. The anticipatory psychological contract of management graduates: Validating a psychological contract expectations questionnaire. South African Journal of Economic and Management Sciences 23: a3285. [Google Scholar] [CrossRef]
  42. Guerrero, Sylvie. 2005. Measurement of the Psychological Contract in a French Work Context. Industrial Relations 60: 112–44. [Google Scholar] [CrossRef]
  43. Guest, David E., Kerstin Isaksson, and Hans De Witte. 2010. Employment Contracts, Psychological Contracts and Employee Well-Being: An International Study. Oxford: Oxford University Press. [Google Scholar]
  44. Hernández, Ana, María Hidalgo, Ronald Hambleton, and Juana Gómez Benito. 2020. International test commission guidelines for test adaptation: A criterion checklist. Psicothema 32: 390–98. [Google Scholar] [CrossRef]
  45. Howard, Joshua L., Marylène Gagné, Alexandre J. S. Morin, and Jacques Forest. 2016. Using Bifactor Exploratory Structural Equation Modeling to Test for a Continuum Structure of Motivation. Journal of Management 44: 2638–64. [Google Scholar] [CrossRef]
  46. Jafari, Mohieddin, and Naser Ansari-Pour. 2019. Why, When and How to Adjust Your P Values? Cell Journal 20: 604–7. [Google Scholar] [CrossRef]
  47. Jayaweera, Thushel, Matthijs Bal, Katharina Chudzikowski, and Simon De Jong. 2021. The impact of economic factors on the relationships between psychological contract breach and work outcomes: A meta-analysis. Employee Relations 43: 667–86. [Google Scholar] [CrossRef]
  48. Jorgensen, Terrence D., Sunthud Pornprasertmanit, Alexander M. Schoemann, and Yves Rosseel. 2022. semTools: Useful Tools for Structural Equation Modeling. R Package Version 0.5-6. Available online: https://CRAN.R-project.org/package=semTools (accessed on 10 December 2024).
  49. Karani Mehta, Anushree, Heena Thanki, Rasananda Panda, and Payal Trivedi. 2024. Exploring the psychological contract during new normal: Construction and validation of the revised psychological contract scale. International Journal of Manpower 45: 255–78. [Google Scholar] [CrossRef]
  50. Kline, Rex B. 2023. Principles and Practice of Structural Equation Modeling, 5th ed. New York: Guilford Press. [Google Scholar]
  51. Kline, Rex B. 2024. How to evaluate local fit (residuals) in large structural equation models. International Journal of Psychology 59: 1293–1306. [Google Scholar] [CrossRef]
  52. Koutroukis, Theodore, Dimos Chatzinikolaou, Charis Vlados, and Victoria Pistikou. 2022. The Post-COVID-19 Era, Fourth Industrial Revolution, and New Globalization: Restructured Labor Relations and Organizational Adaptation. Societies 12: 187. [Google Scholar] [CrossRef]
  53. Kozhakhmet, Sanat, Yasin Rofcanin, Assylbek Nurgabdeshov, and Mireia Las Heras. 2023. A bibliometric analysis of psychological contract research: Current status development and future research directions. International Journal of Manpower 44: 918–35. [Google Scholar] [CrossRef]
  54. Kraak, Johannes Marcelus, Renaud Lunardo, Olivier Herrbach, and François Durrieu. 2017. Promises to employees matter, self-identity too: Effects of psychological contract breach and older worker identity on violation and turnover intentions. Journal of Business Research 70: 108–17. [Google Scholar] [CrossRef]
  55. Lloret-Segura, Susana, Adoración Ferreres-Traver, Ana Hernández-Baeza, and Inés Tomás-Marco. 2014. El Análisis Factorial Exploratorio de los Ítems: Una guía práctica, revisada y actualizada. Anales de Psicología 30: 1151–69. [Google Scholar] [CrossRef]
  56. Maassen, Esther, E. Damiano D’Urso, Marcel A. L. M. van Assen, Michèle B. Nuijten, Kim De Roover, and Jelte M. Wicherts. 2023. The dire disregard of measurement invariance testing in psychological science. Psychological Methods, Advance online publication. [Google Scholar] [CrossRef]
  57. Markon, Kristian E. 2019. Bifactor and Hierarchical Models: Specification, Inference, and Interpretation. Annual Review of Clinical Psychology 15: 51–69. [Google Scholar] [CrossRef]
  58. Marsh, Herbert W., Benjamin Nagengast, and Alexandre J. S. Morin. 2013. Measurement invariance of big-five factors over the life span: ESEM tests of gender, age, plasticity, maturity, and la dolce vita effects. Developmental Psychology 49: 1194–218. [Google Scholar] [CrossRef]
  59. Maydeu-Olivares, Alberto. 2017. Assessing the Size of Model Misfit in Structural Equation Models. Psychometrika 82: 533–58. [Google Scholar] [CrossRef] [PubMed]
  60. Metz, Isabel, Carol T. Kulik, Michelle Brown, and Christina Cregan. 2012. Changes in psychological contracts during the global financial crisis: The manager’s perspective. International Journal of Human Resource Management 23: 4359–79. [Google Scholar] [CrossRef]
  61. Millward, Lynne J., and Lee J. Hopkins. 1998. Psychological contracts, organizational and job commitment. Journal of Applied Social Psychology 28: 1530–56. [Google Scholar] [CrossRef]
  62. Morin, Alexandre J. S., A. Katrin Arens, and Herbert W. Marsh. 2016a. A bifactor exploratory structural equation modeling framework for the identification of distinct sources of construct-relevant psychometric multidimensionality. Structural Equation Modeling 23: 116–39. [Google Scholar] [CrossRef]
  63. Morin, Alexandre J. S., A. Katrin Arens, Antoine Tran, and Hervé Caci. 2016b. Exploring sources of construct-relevant multidimensionality in psychiatric measurement: A tutorial and illustration using the Composite Scale of Morningness. International Journal of Methods in Psychiatric Research 25: 277–88. [Google Scholar] [CrossRef]
  64. Morin, Alexandre J. S., Herbert W. Marsh, and Benjamin Nagengast. 2013. Exploratory structural equation modeling. In Structural Equation Modeling: A Second Course, 2nd ed. Edited by Gregory R. Hancock and Ralph O. Mueller. Charlotte: IAP Information Age Publishing, pp. 395–436. [Google Scholar]
  65. Morrison, Elizabeth Wolfe, and Sandra L. Robinson. 1997. When employees feel betrayed: A model of how psychological contract violation develops. The Academy of Management Review 22: 226–56. [Google Scholar] [CrossRef]
  66. Peiró, José María, Yolanda Estreder, José Ramos, and Francisco Gracia Caballer. 2007. Employee’s Affective Commitment and Propensity to leave in Human Services. Psychosocial Resources in Health 5: 81–94. [Google Scholar]
  67. Price, James L. 1997. Handbook of organizational measurement. International Journal of Manpower 18: 305–558. [Google Scholar] [CrossRef]
  68. Protzko, John. 2024. Invariance: What Does Measurement Invariance Allow Us to Claim? Educational and Psychological Measurement, 00131644241282982, Advance online publication. [Google Scholar] [CrossRef]
  69. Putnick, Diane L., and Marc H. Bornstein. 2016. Measurement invariance conventions and reporting: The state of the art and future directions for psychological research. Developmental Review 41: 71–90. [Google Scholar] [CrossRef]
  70. Raeder, Sabine, and Gudela Grote. 2004. Fairness as prerequisite for the sustainability of psychological contracts. In Management Research, Vol. 14, Fairness and Management. Edited by G. Schreyögg, P. Conrad and J. Sydow. Wiesbaden: Gabler, pp. 139–74. [Google Scholar]
  71. Raeder, Sabine, Anette Wittekind, Alice Inauen, and Gudela Grote. 2009. Testing a psychological contract measure in a Swiss employment context. Swiss Journal of Psychology 68: 177–88. [Google Scholar] [CrossRef]
  72. Reise, Steven P. 2012. The rediscovery of bifactor measurement models. Multivariate Behavioral Research 47: 667–96. [Google Scholar] [CrossRef]
  73. Revelle, William. 2024. Psych: Procedures for Psychological, Psychometric, and Personality Research, R Package Version 2.4.12; Available online: https://cran.r-project.org/package=psych (accessed on 10 December 2024).
  74. Rhemtulla, Mijke, Patricia É. Brosseau-Liard, and Victoria Savalei. 2012. When can categorical variables be treated as continuous? A comparison of robust continuous and categorical SEM estimation methods under suboptimal conditions. Psychological Methods 17: 354–73. [Google Scholar] [CrossRef] [PubMed]
  75. Rodríguez, Anthony, Steven P. Reise, and Mark G. Haviland. 2016. Applying bifactor statistical indices in the evaluation of psychological measures. Journal of Personality Assessment 98: 223–37. [Google Scholar] [CrossRef] [PubMed]
  76. Rodwell, John, Andre Gulyas, and Dianne Johnson. 2022. The New and Key Roles for Psychological Contract Status and Engagement in Predicting Various Performance Behaviors of Nurses. International Journal of Environmental Research and Public Health 19: 13931. [Google Scholar] [CrossRef]
  77. Rogozińska-Pawełczyk, Anna, and Marcin Buchali. 2022. The effect of demographic, contextual and individual work-related factors on the formation of psychological contract in BSS organizations. Polityka Społeczna 583: 19–30. [Google Scholar] [CrossRef]
  78. Rosseel, Yves. 2012. lavaan: An R package for structural equation modeling. Journal of Statistical Software 48: 1–36. [Google Scholar] [CrossRef]
  79. Rousseau, Denise M. 1989. Psychological and implied contracts in organizations. Employee Responsibilities and Rights Journal 2: 121–39. [Google Scholar] [CrossRef]
  80. Rousseau, Denise M. 1990. New hire perceptions of their own and their employer’s obligations: A study of psychological contracts. Journal of Organizational Behavior 11: 389–400. [Google Scholar] [CrossRef]
  81. Rousseau, Denise M. 1995. Psychological Contracts in Organizations: Understanding Written and Unwritten Agreements. New York: Sage Publications, Inc. [Google Scholar]
  82. Rousseau, Denise M. 2000. Psychological Contract Inventory: Technical Report. Pittsburgh: Carnegie Mellon University. [Google Scholar]
  83. Rousseau, Denise M., and Snehal A. Tijoriwala. 1998. Assessing psychological contracts: Issues, alternatives and measures. Journal of Organizational Behavior 19: 679–95. [Google Scholar] [CrossRef]
  84. Rönkkö, Mikko, and Eunseong Cho. 2022. An Updated Guideline for Assessing Discriminant Validity. Organizational Research Methods 25: 6–14. [Google Scholar] [CrossRef]
  85. Satorra, Albert, and Peter M. Bentler. 1994. Corrections to test statistics and standard errors in covariance structure analysis. In Latent Variable Analysis. Applications for Developmental Research. Edited by Alexander Von Eye and Clifford C. Clogg. New York: Sage, pp. 399–419. [Google Scholar]
  86. Savalei, Victoria, Jordan C. Brace, and Rachel T. Fouladi. 2024. We need to change how we compute RMSEA for nested model comparisons in structural equation modeling. Psychological Methods 29: 480–93. [Google Scholar] [CrossRef]
  87. Shore, Lynn M., Jacqueline A. Coyle-Shapiro, and Aurelie Cnop-Nielsen. 2024. Elevating Health Significance Post-Pandemic: Is the Employee-Organization Relationship in a Period of Change? Annual Review of Organizational Psychology and Organizational Behavior 12: 269–94. [Google Scholar] [CrossRef]
  88. Silla, Inmaculada, Francisco-Javier Gracia, and José-María Peiró. 2005. Diferencias en el contenido del contrato psicológico en función del tipo de contrato y de la gestión empresarial pública o privada. Revista de Psicología Social 20: 61–72. [Google Scholar] [CrossRef]
  89. Snyman, Annette, Nadia Ferreira, and Alda Deas. 2015. The psychological contract in relation to employment equity legislation and intention to leave in an open distance higher education institution. South African Journal of Labour Relations 39: 72–92. [Google Scholar] [CrossRef]
  90. Spies, Alan R., Noel E. Wilkin, John P. Bentley, Alicia S. Bouldin, Marvin C. Wilson, and Erin R. Holmes. 2010. Instrument to measure psychological contract violation in pharmacy students. American Journal of Pharmaceutical Education 74: 107. [Google Scholar] [CrossRef]
  91. Swami, Viren, Christophe Maïano, and Alexandre J. S. Morin. 2023. A guide to exploratory structural equation modeling (ESEM) and bifactor-ESEM in body image research. Body Image 47: 101641. [Google Scholar] [CrossRef]
  92. Thomas, Helena D. C., and Neil Anderson. 1998. Changes in newcomers’ psychological contracts during organizational socialization: A study of recruits entering the British Army. Journal of Organizational Behavior 19: 745–67. [Google Scholar] [CrossRef]
  93. Topa, Gabriela, Mercedes Aranda-Carmena, and Berta De-Maria. 2022. Psychological Contract Breach and Outcomes: A Systematic Review of Reviews. International Journal of Environmental Research and Public Health 19: 15527. [Google Scholar] [CrossRef] [PubMed]
  94. van den Heuvel, Sjoerd, and René Schalk. 2009. The relationship between fulfilment of the psychological contract and resistance to change during organizational transformations. Social Science Information 48: 283–313. [Google Scholar] [CrossRef]
  95. van der Vaart, Leoni, Bennie Linde, and Marike Cockeran. 2013. The state of the psychological contract and employees’ intention to leave: The mediating role of employee well-being. South African Journal of Psychology 43: 356–69. [Google Scholar] [CrossRef]
  96. Walling, Kayla. 2023. Relocated Employees’ Experience with the Costs and Benefits of Video Technology for Maintaining Relationships. Social Sciences 12: 286. [Google Scholar] [CrossRef]
  97. Zhang, Ting, Chengchen Yin, Yongchen Geng, Yan Zhou, Shusen Sun, and Fushan Tang. 2020. Development and Validation of Psychological Contract Scale for Hospital Pharmacists. Journal of Multidisciplinary Healthcare 13: 1433–42. [Google Scholar] [CrossRef]
  98. Zhao, Hao, Sandy J. Wayne, Brian C. Glibkowski, and Jesus Bravo. 2007. The impact of psychological contract breach on work-related outcomes: A meta-analysis. Personnel Psychology 60: 647–80. [Google Scholar] [CrossRef]
Figure 1. Simplified graphical representation of the four tested models.
Figure 2. Parallel analysis results.
Figure 3. Histogram for correlation residuals significant at BH corrected p < 0.05 for the four models.
Figure 4. Concurrent validity as examined by structural equation modeling (SEM). Items measuring latent variables have been omitted for clarity of understanding. ** = p < 0.01; *** = p < 0.001.
Table 1. Descriptive statistics of sociodemographic data distributed by group.
| | n1 (882) | n2 (882) | N (1764) |
|---|---|---|---|
| **Gender** | | | |
| Man | 434 (49.21%) | 402 (45.58%) | 836 (47.39%) |
| Woman | 448 (50.79%) | 480 (54.42%) | 928 (52.61%) |
| **Age, mean (SD)** | 33.99 (10.81) | 37.89 (12.48) | 35.94 (11.84) |
| **Type of Employment Contract** | | | |
| Temporary contract | 239 (27.1%) | 173 (19.61%) | 412 (23.36%) |
| Civil servant | 56 (6.35%) | 99 (11.22%) | 155 (8.79%) |
| Discontinuous contract | 22 (2.49%) | 71 (8.05%) | 93 (5.27%) |
| Permanent contract | 565 (64.06%) | 539 (61.11%) | 1104 (62.59%) |
| **Job Level** | | | |
| Management | 195 (22.11%) | 171 (19.39%) | 366 (20.75%) |
| Middle management | 263 (29.82%) | 193 (21.88%) | 456 (25.85%) |
| Workers | 424 (48.72%) | 518 (58.73%) | 942 (53.40%) |
| **Sectors** | | | |
| Services | 740 (83.9%) | 515 (58.39%) | 1255 (71.15%) |
| Industry | 45 (5.10%) | 98 (11.11%) | 143 (8.11%) |
| Education | 16 (1.81%) | 74 (8.39%) | 90 (5.10%) |
| Public Administration | 27 (3.06%) | 51 (5.78%) | 78 (4.42%) |
| Commerce | 25 (2.83%) | 83 (9.41%) | 108 (6.12%) |
| Healthcare | 29 (3.29%) | 61 (6.92%) | 90 (5.10%) |
| **Level of Education** | | | |
| Basic education | 52 (5.90%) | 58 (6.58%) | 110 (6.24%) |
| Secondary education | 289 (32.77%) | 234 (26.53%) | 523 (29.65%) |
| Vocational training | 185 (20.98%) | 177 (20.07%) | 362 (20.52%) |
| University education | 304 (34.47%) | 302 (34.24%) | 606 (34.35%) |
| Postgraduate education | 52 (5.90%) | 111 (12.59%) | 163 (9.24%) |
Note. SD = standard deviation.
Table 2. Rotated factor loading matrix.
| Item | Mean (SD) | F1 | F2 | F3 | F4 |
|---|---|---|---|---|---|
| CP01 | 3.79 (1.87) | **0.669** | 0.007 | −0.020 | −0.024 |
| CP02 | 4.74 (1.60) | **0.501** | 0.051 | −0.052 | 0.000 |
| CP03 | 3.98 (1.74) | 0.392 | 0.371 | −0.120 | 0.036 |
| CP04 | 4.56 (1.58) | 0.459 | 0.088 | 0.002 | −0.310 |
| CP05 | 3.68 (1.93) | **0.607** | 0.130 | 0.007 | 0.075 |
| CP06 | 3.43 (2.01) | **0.825** | −0.116 | −0.062 | 0.072 |
| CP07 | 3.60 (1.95) | **0.741** | −0.010 | −0.008 | 0.095 |
| CP08 | 3.38 (2.04) | **0.535** | 0.130 | 0.020 | 0.066 |
| CP09 | 4.67 (1.82) | **0.521** | −0.078 | 0.099 | −0.204 |
| CP10 | 4.74 (1.52) | 0.458 | −0.009 | 0.081 | −0.340 |
| CP11 | 3.96 (1.80) | **0.832** | −0.079 | 0.001 | −0.027 |
| CP12 | 4.66 (1.49) | **0.516** | 0.053 | 0.034 | −0.155 |
| CP13 | 3.68 (1.92) | **0.892** | −0.139 | −0.009 | 0.031 |
| CP14 | 4.21 (1.81) | 0.354 | 0.143 | 0.006 | −0.194 |
| CP15 | 4.59 (1.56) | 0.459 | 0.179 | −0.016 | −0.277 |
| CV01 | 3.95 (0.99) | 0.080 | 0.285 | 0.031 | −0.337 |
| CV02 | 1.82 (1.06) | −0.001 | −0.080 | −0.008 | **0.677** |
| CV03 | 3.76 (1.03) | 0.142 | 0.170 | 0.000 | −0.217 |
| CV04 | 1.65 (1.11) | −0.025 | −0.063 | 0.042 | **0.674** |
| CV05 | 1.98 (1.25) | −0.029 | −0.095 | 0.026 | **0.683** |
| CV06 | 3.86 (1.10) | 0.136 | 0.277 | 0.033 | 0.477 |
| EP01 | 4.32 (1.97) | −0.015 | 0.058 | 0.342 | 0.168 |
| EP02 | 4.81 (1.57) | 0.197 | −0.134 | **0.523** | −0.053 |
| EP03 | 5.30 (1.04) | 0.011 | 0.102 | **0.694** | 0.001 |
| EP04 | 5.09 (1.24) | 0.005 | −0.008 | **0.658** | 0.062 |
| EP05 | 5.16 (1.07) | −0.019 | 0.001 | **0.634** | −0.007 |
| EP06 | 5.48 (0.75) | −0.129 | −0.084 | **0.649** | −0.224 |
| EP07 | 5.31 (1.03) | −0.213 | −0.040 | 0.461 | −0.152 |
| EP08 | 5.38 (1.05) | 0.008 | −0.132 | **0.613** | −0.058 |
| EP09 | 4.62 (1.68) | 0.062 | 0.041 | **0.550** | 0.207 |
| EP10 | 5.03 (1.32) | 0.105 | −0.048 | **0.638** | 0.061 |
| EP11 | 5.35 (0.87) | −0.151 | −0.003 | **0.630** | −0.109 |
| EP12 | 3.67 (2.10) | 0.185 | 0.010 | 0.291 | 0.222 |
| EP13 | 4.43 (1.61) | 0.159 | −0.056 | **0.554** | −0.002 |
| EP14 | 4.73 (1.51) | 0.211 | −0.032 | **0.591** | 0.123 |
| EP15 | 5.57 (0.77) | −0.176 | 0.092 | **0.503** | −0.149 |
| EP16 | 4.30 (1.81) | 0.271 | 0.126 | 0.379 | 0.260 |
| EP17 | 5.20 (1.28) | 0.005 | 0.221 | 0.494 | 0.026 |
| J01 | 3.42 (1.06) | −0.088 | **0.856** | −0.041 | 0.043 |
| J02 | 3.28 (1.21) | 0.035 | **0.806** | 0.004 | 0.060 |
| J03 | 3.23 (1.18) | −0.022 | **0.760** | −0.002 | −0.040 |
| J04 | 3.54 (1.11) | −0.043 | **0.761** | 0.011 | −0.076 |
| J05 | 3.19 (1.27) | −0.095 | **0.819** | −0.086 | 0.112 |
| J06 | 3.48 (1.14) | 0.089 | **0.636** | 0.013 | −0.011 |
| J07 | 3.96 (1.04) | −0.084 | **0.616** | 0.084 | −0.217 |
Note. n1 = 882; SD = standard deviation; CP = fulfillment of company’s promises; CV = psychological contract violation; EP = fulfillment of employee’s promises; J = justice and trust. EFA with principal axis factoring and promax rotation. Factor loadings greater than 0.40 that do not share cross-loadings with a difference of less than 0.30 are in bold.
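The retention rule described in the note (a primary loading above 0.40 whose cross-loadings all fall at least 0.30 below it) can be sketched as a simple filter. This is an illustrative reading of the stated criterion, not the authors' code; the function name and item tuples are ours, with the loadings copied from Table 2.

```python
# Sketch of the Table 2 item-retention rule: keep an item when its largest
# absolute loading exceeds 0.40 and every cross-loading is at least 0.30
# smaller in absolute value. Illustrative only; the published analysis was
# run with standard EFA software.

def retain(loadings, min_loading=0.40, min_gap=0.30):
    """Return True if the item meets the stated retention criterion."""
    abs_l = sorted((abs(x) for x in loadings), reverse=True)
    primary, runner_up = abs_l[0], abs_l[1]
    return primary > min_loading and (primary - runner_up) >= min_gap

# Two rows taken from Table 2 (F1, F2, F3, F4 loadings):
cp01 = (0.669, 0.007, -0.020, -0.024)  # clean primary loading -> retained
cp04 = (0.459, 0.088, 0.002, -0.310)   # cross-loading too close -> dropped

print(retain(cp01))  # True
print(retain(cp04))  # False
```

Applying this rule to every row of Table 2 reproduces the item set that reappears in Table 4 (e.g., CP10 and EP17 are excluded despite primary loadings above 0.40, because of close cross-loadings).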
Table 3. Fit indices of the different models for the psychological contract scale and comparison of nested models.
| Tested Model | df | χ² | RMSEA (90% CI) | CFI | TLI | SRMR | CR | RMSEAD (90% CI) |
|---|---|---|---|---|---|---|---|---|
| Four-factor CFA | 458 | 1483.355 | 0.061 (0.058, 0.064) | 0.918 | 0.911 | 0.052 | 7.06% | — |
| Bifactor CFA | 432 | 1287.560 | 0.057 (0.054, 0.061) | 0.931 | 0.921 | 0.052 | 6.65% | 0.104 (0.093, 0.116) |
| Four-factor ESEM | 374 | 1131.858 | 0.059 (0.055, 0.063) | 0.937 | 0.916 | 0.030 | 1.81% | 0.044 (0.036, 0.052) |
| Bifactor ESEM | 346 | 900.542 | 0.052 (0.048, 0.056) | 0.955 | 0.935 | 0.023 | 0.81% | 0.122 (0.112, 0.133) |
Note. n2 = 882; df = degrees of freedom; χ2 = chi-square; RMSEA = root mean square error of approximation; CFI = comparative fit index; TLI = Tucker–Lewis index; SRMR = standardized root mean square residual; CR = percentage of significant absolute correlation residuals greater than |0.10|; RMSEAD = root mean square error of approximation associated with the χ2 difference test; each RMSEAD value is obtained by comparing the model with the previous model in the table. The p-values associated with the χ2 test of fit are < 0.001 for all four models.
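The point estimate of RMSEA can be recovered from a model's chi-square, its degrees of freedom, and the sample size. The sketch below applies the standard formula to one row of Table 3; note that the values reported in the table come from robust (Satorra–Bentler-type) estimation, so this naive computation will not reproduce them exactly — it only illustrates how the index penalizes misfit per degree of freedom.

```python
import math

# RMSEA point estimate: sqrt(max(chi2 - df, 0) / (df * (n - 1))).
# Illustrative only; Table 3 reports robust values, which differ.

def rmsea(chi2, df, n):
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Four-factor ESEM row of Table 3 (n2 = 882):
print(round(rmsea(1131.858, 374, 882), 3))  # 0.048 (the robust value reported is 0.059)
```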
Table 4. Standardized factor loadings (λ) and uniqueness (δ) for the four-factor CFA and four-factor ESEM.
| Items | CFA λ | CFA δ | ESEM Factor 1 (λ) | ESEM Factor 2 (λ) | ESEM Factor 3 (λ) | ESEM Factor 4 (λ) | ESEM δ |
|---|---|---|---|---|---|---|---|
| **1. Company promises** | | | | | | | |
| CP01 | 0.817 | 0.333 | **0.766** | 0.014 | 0.009 | 0.074 | 0.337 |
| CP02 | 0.775 | 0.400 | **0.747** | −0.044 | −0.032 | 0.023 | 0.410 |
| CP05 | 0.762 | 0.419 | **0.697** | 0.042 | 0.055 | 0.095 | 0.412 |
| CP06 | 0.768 | 0.411 | **0.895** | −0.022 | −0.027 | −0.165 | 0.366 |
| CP07 | 0.804 | 0.354 | **0.832** | 0.016 | −0.011 | −0.020 | 0.342 |
| CP08 | 0.807 | 0.348 | **0.749** | −0.001 | 0.054 | 0.057 | 0.353 |
| CP09 | 0.757 | 0.427 | **0.606** | −0.045 | 0.028 | 0.167 | 0.435 |
| CP11 | 0.831 | 0.310 | **0.868** | −0.022 | 0.005 | −0.056 | 0.293 |
| CP12 | 0.722 | 0.479 | **0.554** | −0.093 | 0.032 | 0.166 | 0.476 |
| CP13 | 0.823 | 0.322 | **0.843** | −0.030 | −0.016 | −0.023 | 0.306 |
| **2. Psychological contract violation** | | | | | | | |
| CV02 | 0.821 | 0.326 | −0.030 | **0.769** | 0.000 | −0.069 | 0.332 |
| CV04 | 0.835 | 0.302 | −0.024 | **0.873** | −0.017 | 0.040 | 0.258 |
| CV05 | 0.858 | 0.264 | −0.069 | **0.776** | 0.011 | −0.066 | 0.300 |
| **3. Employee promises** | | | | | | | |
| EP02 | 0.697 | 0.514 | 0.124 | 0.070 | **0.653** | −0.001 | 0.514 |
| EP03 | 0.719 | 0.483 | 0.066 | 0.043 | **0.671** | 0.096 | 0.480 |
| EP04 | 0.729 | 0.469 | 0.056 | 0.108 | **0.684** | 0.101 | 0.468 |
| EP05 | 0.638 | 0.593 | −0.035 | 0.067 | **0.648** | 0.018 | 0.587 |
| EP06 | 0.599 | 0.641 | −0.138 | −0.145 | **0.667** | −0.053 | 0.584 |
| EP08 | 0.636 | 0.595 | −0.100 | −0.085 | **0.683** | −0.029 | 0.564 |
| EP09 | 0.710 | 0.497 | −0.021 | 0.067 | **0.702** | 0.049 | 0.502 |
| EP10 | 0.680 | 0.537 | 0.079 | −0.034 | **0.689** | −0.137 | 0.523 |
| EP11 | 0.588 | 0.654 | −0.064 | −0.089 | **0.641** | −0.099 | 0.618 |
| EP13 | 0.697 | 0.514 | −0.074 | 0.009 | **0.699** | 0.082 | 0.513 |
| EP14 | 0.688 | 0.527 | 0.258 | 0.049 | **0.635** | −0.125 | 0.496 |
| EP15 | 0.610 | 0.628 | −0.128 | −0.068 | **0.644** | 0.062 | 0.599 |
| **4. Justice and trust** | | | | | | | |
| J01 | 0.787 | 0.380 | 0.052 | −0.045 | −0.072 | **0.742** | 0.379 |
| J02 | 0.848 | 0.282 | 0.108 | 0.010 | 0.034 | **0.768** | 0.284 |
| J03 | 0.853 | 0.273 | −0.009 | 0.006 | 0.014 | **0.862** | 0.266 |
| J04 | 0.883 | 0.221 | −0.011 | −0.036 | 0.031 | **0.868** | 0.214 |
| J05 | 0.715 | 0.488 | 0.058 | 0.002 | −0.056 | **0.689** | 0.487 |
| J06 | 0.823 | 0.323 | 0.060 | −0.016 | 0.056 | **0.759** | 0.323 |
| J07 | 0.793 | 0.372 | −0.049 | −0.074 | 0.007 | **0.787** | 0.362 |
Note: n2 = 882; CP = fulfillment of company’s promises; CV = psychological contract violation; EP = fulfillment of employee’s promises; J = justice and trust. Boldface indicates target ESEM factor loadings.
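From standardized loadings and uniquenesses like those in Table 4, a composite reliability index (McDonald's omega for a congeneric factor) can be computed as (Σλ)² / ((Σλ)² + Σδ). The sketch below is ours, using the three CFA violation items as input; Table 9 reports composite reliability on a different subsample (N = 1052), so the published values need not match this computation exactly.

```python
# Composite reliability from standardized loadings and uniquenesses:
# (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses).
# Illustrative sketch; the inputs are the CV rows of Table 4 (CFA columns).

def composite_reliability(loadings, uniquenesses):
    s = sum(loadings)
    return s * s / (s * s + sum(uniquenesses))

cv_lambda = (0.821, 0.835, 0.858)  # CV02, CV04, CV05 loadings
cv_delta = (0.326, 0.302, 0.264)   # matching uniquenesses

print(round(composite_reliability(cv_lambda, cv_delta), 3))  # 0.876
```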
Table 5. Standardized factor correlations for the four-factor CFA and four-factor ESEM.
| | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| 1. Fulfillment of company promises | — | −0.393 ** | 0.323 ** | 0.690 ** |
| 2. Psychological contract violation | −0.304 ** | — | −0.063 | −0.579 ** |
| 3. Fulfillment of employee promises | 0.297 ** | −0.044 | — | 0.245 ** |
| 4. Justice and trust | 0.648 ** | −0.511 ** | 0.220 ** | — |
Note: n2 = 882. Four-factor CFA correlations are shown above the diagonal; four-factor exploratory structural equation modeling correlations are shown below the diagonal. ** p < 0.01.
Table 6. Standardized factor loadings (λ) and uniqueness (δ) for bifactor CFA and bifactor ESEM.
| Items | CFA G (λ) | CFA S (λ) | CFA δ | ESEM G (λ) | ESEM S1 (λ) | ESEM S2 (λ) | ESEM S3 (λ) | ESEM S4 (λ) | ESEM δ |
|---|---|---|---|---|---|---|---|---|---|
| **1. Company promises** | | | | | | | | | |
| CP01 | 0.780 | 0.219 | 0.343 | 0.642 | **0.499** | 0.032 | 0.038 | 0.053 | 0.334 |
| CP02 | 0.798 | 0.045 | 0.361 | 0.577 | **0.515** | −0.022 | 0.002 | 0.066 | 0.397 |
| CP05 | 0.720 | 0.243 | 0.423 | 0.644 | **0.407** | 0.067 | 0.066 | 0.007 | 0.411 |
| CP06 | 0.657 | 0.511 | 0.308 | 0.575 | **0.536** | 0.025 | 0.005 | −0.113 | 0.368 |
| CP07 | 0.733 | 0.336 | 0.350 | 0.645 | **0.486** | 0.053 | 0.010 | −0.055 | 0.342 |
| CP08 | 0.772 | 0.225 | 0.353 | 0.650 | **0.467** | 0.026 | 0.074 | 0.023 | 0.353 |
| CP09 | 0.810 | −0.041 | 0.342 | 0.601 | **0.437** | −0.037 | 0.052 | 0.154 | 0.420 |
| CP11 | 0.763 | 0.358 | 0.289 | 0.664 | **0.511** | 0.024 | 0.025 | −0.065 | 0.293 |
| CP12 | 0.798 | −0.110 | 0.352 | 0.567 | **0.426** | −0.086 | 0.057 | 0.185 | 0.452 |
| CP13 | 0.733 | 0.448 | 0.263 | 0.660 | **0.505** | 0.010 | 0.007 | −0.033 | 0.308 |
| **2. Psychological contract violation** | | | | | | | | | |
| CV02 | −0.381 | 0.722 | 0.333 | −0.441 | 0.012 | **0.671** | 0.061 | −0.133 | 0.333 |
| CV04 | −0.342 | 0.782 | 0.272 | −0.398 | 0.018 | **0.751** | 0.049 | −0.090 | 0.267 |
| CV05 | −0.403 | 0.746 | 0.281 | −0.505 | 0.038 | **0.676** | 0.085 | −0.075 | 0.273 |
| **3. Employee promises** | | | | | | | | | |
| EP02 | 0.279 | 0.635 | 0.519 | 0.350 | −0.008 | 0.116 | **0.594** | −0.138 | 0.492 |
| EP03 | 0.340 | 0.638 | 0.478 | 0.382 | −0.014 | 0.078 | **0.612** | −0.048 | 0.472 |
| EP04 | 0.306 | 0.658 | 0.473 | 0.397 | −0.078 | 0.152 | **0.618** | −0.118 | 0.417 |
| EP05 | 0.164 | 0.619 | 0.590 | 0.197 | −0.048 | 0.091 | **0.603** | −0.060 | 0.583 |
| EP06 | 0.095 | 0.615 | 0.613 | −0.032 | 0.138 | −0.170 | **0.699** | 0.213 | 0.417 |
| EP08 | 0.126 | 0.639 | 0.576 | 0.087 | 0.043 | −0.076 | **0.671** | 0.085 | 0.527 |
| EP09 | 0.199 | 0.678 | 0.501 | 0.352 | −0.175 | 0.129 | **0.623** | −0.190 | 0.405 |
| EP10 | 0.189 | 0.656 | 0.533 | 0.160 | 0.097 | −0.004 | **0.666** | −0.050 | 0.519 |
| EP11 | 0.106 | 0.597 | 0.632 | −0.022 | 0.155 | −0.104 | **0.667** | 0.144 | 0.498 |
| EP13 | 0.199 | 0.669 | 0.513 | 0.297 | −0.118 | 0.047 | **0.631** | −0.068 | 0.492 |
| EP14 | 0.317 | 0.613 | 0.524 | 0.318 | 0.123 | 0.097 | **0.596** | −0.149 | 0.497 |
| EP15 | 0.164 | 0.597 | 0.617 | 0.071 | 0.093 | −0.085 | **0.651** | 0.208 | 0.512 |
| **4. Justice and trust** | | | | | | | | | |
| J01 | 0.559 | 0.554 | 0.380 | 0.674 | 0.011 | −0.080 | −0.103 | **0.388** | 0.378 |
| J02 | 0.655 | 0.534 | 0.286 | 0.757 | 0.039 | −0.023 | −0.001 | **0.379** | 0.281 |
| J03 | 0.608 | 0.605 | 0.265 | 0.724 | −0.013 | −0.042 | −0.022 | **0.454** | 0.267 |
| J04 | 0.643 | 0.608 | 0.218 | 0.727 | 0.018 | −0.086 | −0.000 | **0.501** | 0.214 |
| J05 | 0.517 | 0.497 | 0.486 | 0.634 | −0.004 | −0.029 | −0.088 | **0.329** | 0.481 |
| J06 | 0.636 | 0.518 | 0.326 | 0.681 | 0.077 | −0.062 | 0.036 | **0.457** | 0.317 |
| J07 | 0.568 | 0.555 | 0.370 | 0.598 | 0.050 | −0.128 | −0.007 | **0.533** | 0.340 |
Note: n2 = 882; G-Factor = general factor; S-Factor = specific factors; CP = fulfillment of company’s promises; CV = psychological contract violation; EP = fulfillment of employee’s promises; J = justice and trust. Boldface indicates bifactor ESEM target S-factor loadings.
Table 7. Fit indices of the four-factor ESEM models for each of the subsamples generated by the sociodemographic variables used in the multigroup measurement invariance analysis.
Groupnχ2RMSEA (90% CI)CFITLISRMR
Women9281167.8690.057 (0.053, 0.061)0.9340.9120.030
Men8361213.1240.061 (0.057, 0.065)0.9250.9090.033
Basic level workers9421212.180.057 (0.053, 0.060)0.9290.9050.030
Managers/supervisors8221088.3490.059 (0.055, 0.063)0.9270.9030.032
Note. n = number of subjects per subsample; χ2 = chi-square; RMSEA = root mean square error of approximation; SRMR = standardized root mean square residual; CFI = comparative fit index; TLI = Tucker–Lewis index. The p-values associated with the χ2 test of fit are < 0.001 for all four models.
Table 8. Multigroup measurement invariance analysis results.
χ2dfRMSEA (90% CI)CFITLISRMR
Gender
 Configurational2380.5577480.059 (0.056, 0.062)0.9260.9010.031
 Metric2513.4148600.055 (0.052, 0.058)0.9260.9140.035
 Scalar2566.1048880.054 (0.052, 0.057)0.9260.9170.035
 Estrict2556.4139200.053 (0.051, 0.056)0.9260.9200.035
Job level
 Configurational2294.4097480.058 (0.055, 0.060)0.9280.9040.031
 Metric2485.9488600.055 (0.052, 0.057)0.9260.9140.040
 Scalar2622.5898880.055 (0.053, 0.058)0.9220.9120.041
 Strict2815.2699200.057 (0.055, 0.059)0.9140.9070.044
 Partial strict2701.9829160.055 (0.053, 0.058)0.9180.9110.041
RMSEAD (90% CI)ΔRMSEAΔCFIΔSRMR
Gender
 Configurational
 Metric0.012 (0.000, 0.022)−0.0040.0000.004
 Scalar0.019 (0.000, 0.034)−0.0010.0000.000
 Strict0.037 (0.026, 0.049)−0.0010.0000.000
Job level
 Configurational
 Metric0.031 (0.024, 0.038)−0.003−0.0020.009
 Scalar0.068 (0.057, 0.079)0.001−0.0040.001
 Strict0.097 (0.087, 0.107)0.002−0.0080.003
 Partial strict a0.070 (0.059, 0.081)0.000−0.0030.001
Note. N = 1764; χ2 = chi-square; df = degrees of freedom; RMSEA = root mean square error of approximation; SRMR = standardized root mean square residual; CFI = comparative fit index; TLI = Tucker–Lewis index; RMSEAD = root mean square error of approximation associated with the χ2 difference test; each comparison index is obtained by comparing the model with the previous model in the table. The p-values associated with the χ2 test of fit are < 0.001 for all models. a The partial strict model is compared with the scalar model.
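The ΔCFI and ΔRMSEA columns of Table 8 are typically judged against the conventional cutoffs of Chen (2007): invariance between nested models is considered tenable when CFI drops by no more than 0.010 and RMSEA rises by no more than 0.015. The helper below is an illustrative encoding of that rule only (the paper additionally inspects RMSEAD, which this simple check ignores), and the second call uses hypothetical values for contrast.

```python
# Simple decision rule based on Chen's (2007) cutoffs for nested
# invariance models: reject the more constrained model when CFI drops by
# more than 0.010 or RMSEA rises by more than 0.015. Sketch only.

def invariance_holds(delta_cfi, delta_rmsea, max_cfi_drop=0.010, max_rmsea_rise=0.015):
    return delta_cfi >= -max_cfi_drop and delta_rmsea <= max_rmsea_rise

# Gender, metric step (Table 8): ΔCFI = 0.000, ΔRMSEA = −0.004
print(invariance_holds(0.000, -0.004))  # True

# Hypothetical failing comparison (values not from the paper):
print(invariance_holds(-0.02, 0.01))  # False
```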
Table 9. Composite reliability index and correlations between the total scores of the psychological contract scale dimensions, job satisfaction, and organizational commitment.
| | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| 1. Fulfillment of company promises | **0.867** | | | | | |
| 2. Psychological contract violation | −0.309 ** | **0.864** | | | | |
| 3. Fulfillment of employee promises | 0.337 ** | −0.071 ** | **0.876** | | | |
| 4. Justice and trust | 0.608 ** | −0.493 ** | 0.250 ** | **0.880** | | |
| 5. Job satisfaction | 0.427 ** | −0.434 ** | 0.306 ** | 0.510 ** | **0.800** | |
| 6. Organizational commitment | 0.539 ** | −0.336 ** | 0.424 ** | 0.604 ** | 0.631 ** | **0.782** |
Note. N = 1052; ** = p < 0.01. Composite reliability indices are presented on the diagonal in bold type.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

García-Selva, A.; Martín-del-Rio, B.; Ramos-López, J. Revisiting Psychological Contract Measurement: Validation of the PSYCONES Questionnaire. Soc. Sci. 2025, 14, 181. https://doi.org/10.3390/socsci14030181
