1. Introduction
The last two decades have seen significant changes in the structure of companies. The shifting nature of the current socioeconomic reality has forced organizations to respond to turbulent environments, which has undoubtedly created new situations in the workplace (
Galanti et al. 2023). More specifically, the work environment has undergone considerable transformations, driven by digitalization, automation, and, more recently, the COVID-19 pandemic. The global health crisis not only accelerated the adoption of telework and hybrid employment models but also intensified pre-existing structural challenges, such as labor precariousness, wage inequality, and polarization between high-skilled and low-skilled workers (
Koutroukis et al. 2022). In addition, the impact of the pandemic on employee well-being has been notable, generating increased work stress, digital fatigue, and new expectations regarding job security and flexibility (
Gavin et al. 2022). These changes have reshaped the employee–employer relationship, affecting organizational trust and perceptions of fairness in the workplace. Given this scenario, it is essential to understand the employee–organization relationship and the behavior of people in their work environment (
Walling 2023). To this end, academia has made use of numerous approaches and theoretical perspectives over the years. One of these perspectives corresponds to the psychological contract, a construct that focuses on understanding the content and scope of the exchange that takes place in this relationship and the implications it has for both the organization and the employees.
Since its introduction by
Argyris (
1960), and especially under the influence of
Rousseau (
1989,
1995), the psychological contract has generated much attention in the academic and professional fields, and a wealth of knowledge is now available on the implications and consequences of expectations and obligations not fulfilled by each party (
Coyle-Shapiro et al. 2019). The roots of this construct stem from Blau's (1964) social exchange theory, which posits that social relationships develop through unspecified obligations and the distribution of unequal power resources. Following Rousseau's (1995) framework, the psychological contract can be defined as a set of individual beliefs, shaped by the organization, regarding the terms of an exchange agreement between an individual and their employer.
Therefore, the psychological contract is an individual and subjective perception, and consequently, the parties involved in the relationship may have different views regarding the status of the psychological contract established with the other party (
Morrison and Robinson 1997). This has led most research on this construct to sideline the perspective of the employer/organization and instead focus primarily on the employees’ perspective, addressing two main areas (
Conway and Pekcan 2019;
Topa et al. 2022): (1) the study of the content of the psychological contract (i.e., the perceived set of promises based on the exchange between the employee and the organization) and (2) the status of the psychological contract in terms of fulfillment and breach. Regarding the latter, it is worth noting that the scientific literature has often used the terms breach/violation of the psychological contract interchangeably (
Topa et al. 2022;
Zhao et al. 2007), despite the existence of a conceptual distinction between the two. Psychological contract breach corresponds to an individual’s cognitive evaluation that the other party has failed to fulfill its promises, while the violation of the psychological contract constitutes the emotional reaction resulting from that cognition or perception of breach (
Morrison and Robinson 1997). Thus, this conceptual distinction between breach and violation of the psychological contract implies that a person may perceive a breach of promises but not necessarily experience feelings of contract violation (
Coyle-Shapiro et al. 2019).
In any case, the scientific literature on the psychological contract has shown great interest in its status (in terms of fulfillment or breach) because this is the primary way in which this construct influences employee behavior (
Rousseau 1989). In this regard, the review conducted by
Topa et al. (
2022) demonstrated that the breach of the psychological contract was strongly related to attitudinal outcomes (e.g., lower organizational commitment, decreased job satisfaction, and higher turnover intention) and behavioral outcomes (e.g., reduced job performance or development of negligent behaviors in the workplace). However, despite the attention that the psychological contract has garnered in academia over the past 15 years (
Kozhakhmet et al. 2023), many issues remain unresolved, one of which concerns the question of how to measure this construct (
Topa et al. 2022).
In a review conducted by
Rousseau and Tijoriwala (
1998), the authors proposed three approaches to measuring the psychological contract. The first approach focuses on its characteristics, determining the extent to which the psychological contract is primarily transactional (short-term economic exchange) or relational (long-term exchange of socio-emotional resources). The second approach focuses on content, examining the specific terms that constitute the contract, including the promises made by the organization and the employee. Finally, the third approach focuses on evaluation, determining the degree of fulfillment, change, or breach of the contract experienced within the exchange relationship. Thus, most instruments designed to measure the psychological contract aim to differentiate between the relational and transactional components (
Adamska et al. 2015). However, the structure of the relational–transactional scale has not garnered substantial empirical support (
Raeder et al. 2009). Leaving aside the bipolarity that encompasses the relational and transactional poles of the psychological contract, few studies have developed instruments that distinguish between the dimensions of employee promises and organizational promises (
De Vos et al. 2003;
Raeder and Grote 2004).
In a review conducted by
Freese and Schalk (
2008), the authors concluded that there are very few validated questionnaires on the psychological contract. Moreover, most of the developed questionnaires originate from English-speaking contexts (
Millward and Hopkins 1998), with very few instruments developed or adapted for other countries and languages (
Guerrero 2005;
Raeder et al. 2009). Another important issue in analyzing psychological contract measurement instruments is that, despite researchers’ efforts, many questionnaires have been developed using very small or highly specific samples (
Barbieri et al. 2018;
Gresse and Linde 2020;
Raeder et al. 2009;
Spies et al. 2010;
Zhang et al. 2020), which prevents adequate generalization of the data and makes it difficult to use these questionnaires in other contexts. Finally, it is worth noting that some of the measurement instruments were developed some time ago and in work environments that differ significantly from current ones (
De Vos et al. 2003;
Freese et al. 2008;
Freese and Schalk 1997;
Rousseau 1990,
2000). Therefore, it would be very valuable to have updated instruments to assess the psychological contract, allowing for a more satisfactory understanding of the content and status of current employment relationships.
Following the recommendations of
Freese and Schalk (
2008), the questionnaire developed by the PSYCONES team (
Guest et al. 2010;
Silla et al. 2005) is considered an appropriate tool for assessing the psychological contract. The PSYCONES project (Psychological Contract across Employment Situations) was an international research project conducted in six European countries (Germany, Belgium, Spain, the Netherlands, the United Kingdom, and Sweden), as well as Israel. The PSYCONES psychological contract questionnaire includes dimensions such as the fulfillment of a company’s promises, fulfillment of an employee’s promises, emotions related to fulfillment/breach (corresponding to the perception of psychological contract violation), and perceived justice and trust. This last variable is not common in Rousseau’s classic model (
Rousseau 1989), but its inclusion was intended to capture a more evaluative aspect of the exchange relationship between both parties. Thus, the PSYCONES model aimed to explore the employment relationship in terms of justice and trust, rather than just focusing on the exchange of specific promises and obligations, thereby giving more importance to the bidirectionality of the relationship (
Estreder 2012). To date, the psychological contract questionnaire developed by the PSYCONES team has been used in numerous studies across different countries and contexts (see
Boros and Curseu 2005;
Dhurup et al. 2015;
Estreder et al. 2006,
2019;
Snyman et al. 2015;
van den Heuvel and Schalk 2009;
van der Vaart et al. 2013). However, to the best of our knowledge, no validation of this questionnaire has been published, making it highly relevant to carry out such a validation in order to obtain a reliable and effective instrument for use in both academic and professional environments.
Thus, the objective of the present research is to validate the psychological contract questionnaire developed by the PSYCONES research team (
Guest et al. 2010;
Silla et al. 2005), to provide empirical evidence of its reliability and validity in the current context. To achieve this objective, the factorial structure of the questionnaire will be examined, and the adequacy and stability of different models (CFA, bifactor CFA, ESEM, and bifactor ESEM) will be analyzed in order to identify which provides the best fit and representation of the dimensions of the psychological contract. The relevance of this study lies in addressing a critical need identified in the psychological contract literature: the validation of updated measurement instruments adapted to contemporary work contexts (
Alcover et al. 2017;
Topa et al. 2022). Despite academic interest in this construct, significant limitations persist in existing questionnaires, and efforts are needed to adapt and expand research instruments through the use of larger samples in different contexts and more robust statistical techniques. Although the PSYCONES psychological contract questionnaire is considered an appropriate tool for assessing this construct (
Freese and Schalk 2008), its lack of validation may limit its utility in current research. In a time when organizations are facing constantly changing work environments (
Shore et al. 2024), having validated and reliable instruments is crucial for understanding current employment relationships and designing effective interventions that promote organizational and employee well-being. This research aims to make a significant contribution in this direction by offering an adapted and validated tool that will be useful for both researchers and human resource management professionals.
3. Results
In the first subsample (
n1 = 882), both KMO (0.94) and Bartlett’s statistic (
p < 0.001) indicated that the data were suitable for EFA. The recommended number of dimensions according to both the parallel analysis (
Figure 2) and the EKC was four, so EFA was run with this four-dimension structure.
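The factor-retention step described above can be sketched in code. Below is a minimal NumPy implementation of Horn's parallel analysis, run on a simulated four-factor dataset; the simulated data, loadings, and all parameters are illustrative assumptions, not the study's data.

```python
import numpy as np

def parallel_analysis(X, n_sims=100, quantile=95, seed=0):
    """Horn's parallel analysis: retain as many factors as there are
    observed eigenvalues exceeding the chosen percentile of the
    eigenvalues of random data with the same dimensions."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs_eig = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
    rand_eig = np.empty((n_sims, p))
    for s in range(n_sims):
        R = np.corrcoef(rng.standard_normal((n, p)), rowvar=False)
        rand_eig[s] = np.linalg.eigvalsh(R)[::-1]
    threshold = np.percentile(rand_eig, quantile, axis=0)
    return int(np.sum(obs_eig > threshold))

# Simulate 882 respondents on 32 items driven by 4 latent factors
rng = np.random.default_rng(1)
F = rng.standard_normal((882, 4))
loadings = np.zeros((32, 4))
loadings[np.arange(32), np.repeat(np.arange(4), 8)] = rng.uniform(0.5, 0.9, 32)
X = F @ loadings.T + 0.6 * rng.standard_normal((882, 32))
print(parallel_analysis(X))  # → 4
```

With real data, the function would be applied to the raw item responses of the first subsample; the same logic underlies the parallel-analysis step reported in Figure 2.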
The EFA results (
Table 2) show that all items loading on a single factor obtained loadings between 0.50 and 0.89. However, items 3 and 14 of the company promises dimension, items 1 and 3 of the psychological contract violation dimension, and items 1, 12, and 16 of the worker promises dimension obtained loadings below 0.40 on their main factor. In addition, items 4, 10, and 15 of the company promises dimension, item 6 of the psychological contract violation dimension, and items 7 and 17 of the worker promises dimension obtained cross-loadings that differed from the main loading by less than 0.30; thus, following the recommendations of
Lloret-Segura et al. (
2014), these items were removed from the subsequent CFA and ESEM analyses. In summary, the EFA revealed a four-factor, 32-item structure that explained 48% of the variance. The final four factors make theoretical sense given the content of the items that compose them.
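The item-retention rules applied here (a primary loading of at least 0.40 and a gap of at least 0.30 between the primary loading and the largest cross-loading) can be expressed as a simple filter. A sketch with a purely hypothetical loading matrix:

```python
import numpy as np

def retain_items(L, min_loading=0.40, min_gap=0.30):
    """Flag items to keep: primary loading >= min_loading and the
    gap between the primary and the largest cross-loading
    >= min_gap (retention rules as described in the text)."""
    L = np.abs(np.asarray(L, dtype=float))
    order = np.sort(L, axis=1)            # ascending per item
    primary, second = order[:, -1], order[:, -2]
    return (primary >= min_loading) & (primary - second >= min_gap)

# Hypothetical loadings for three items on four factors
L = [[0.72, 0.10, 0.05, 0.08],   # keep
     [0.35, 0.12, 0.02, 0.06],   # drop: weak primary loading
     [0.55, 0.40, 0.03, 0.01]]   # drop: cross-loading too close
print(retain_items(L))  # → [ True False False]
```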
Regarding the assessment of validity evidence based on internal structure, the four structural equation models depicted above in
Figure 1 were tested with a comparison of the nested model fit. None of the models presented Heywood cases, as there were no negative error variances and no
R2 statistic with a value greater than 1. The fit indices of the four models tested (four-factor CFA, four-factor ESEM, bifactor CFA, and bifactor ESEM) are shown in
Table 3.
As can be seen, the four models showed adequate global fit indices. However, the four-factor ESEM, bifactor CFA, and bifactor ESEM solutions obtained the best global fit indices, most notably through the increase in the CFI and TLI values. Examining the differences in global fit between the models more closely, the RMSEA_D revealed that the four-factor CFA model had a worse global fit to the data than the rest of the models, obtaining an RMSEA_D index of 0.104 with respect to the bifactor CFA model. On the other hand, the RMSEA_D obtained between the four-factor ESEM model and the bifactor CFA model was 0.044, suggesting that both models have a similar overall fit. Finally, the bifactor ESEM model presented the best global fit, with an RMSEA_D of 0.122 between the bifactor ESEM model and the four-factor ESEM model.
Nevertheless, the choice of a model cannot be based solely on the global fit indices, since these represent the central tendency of the residuals, or where most of their values lie, without measuring their variability or dispersion. Therefore, it is also essential to examine the local fit of the models, which can be assessed through the correlation residuals (
Kline 2024). Thus, in the four models analyzed, as there are a total of 32 observed variables (items), there are 496 residual correlations. After applying the Benjamini–Hochberg (BH) procedure to correct multiplicity in the residuals significance analysis, the results revealed that the four-factor CFA and bifactor CFA models had a higher number of significant correlation residuals greater than |0.10| (
Figure 3).
More specifically, the four-factor CFA model obtained a total of 136 significant correlation residuals, of which 35 had values greater than |0.10| (7.05% of the total correlation residuals). The bifactor CFA model had a total of 93 significant correlation residuals, of which 33 had values greater than |0.10| (6.65% of the total correlation residuals). Conversely, the four-factor ESEM and bifactor ESEM models had fewer significant correlation residuals (33 and 29, respectively), of which only 9 (1.81% of the total correlation residuals) and 4 (0.81% of the total correlation residuals) had values greater than |0.10|. Furthermore, the significant correlation residuals of the four-factor ESEM and bifactor ESEM solutions were all below |0.20|, which was not the case for the four-factor CFA and bifactor CFA models. In summary, these results reveal that the four-factor ESEM and bifactor ESEM models presented a better local fit than the four-factor CFA and bifactor CFA models.
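The Benjamini–Hochberg step applied above to the 496 residual correlations (32 × 31 / 2) can be sketched with plain NumPy; the p-values below are hypothetical placeholders for residual-correlation tests.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean mask of p-values significant under the
    Benjamini–Hochberg (BH) step-up procedure."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    # Largest rank k with p_(k) <= (k / m) * alpha; reject 1..k
    below = p[order] <= (np.arange(1, m + 1) / m) * alpha
    mask = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        mask[order[:k + 1]] = True
    return mask

# Hypothetical p-values for six residual correlations
pvals = [0.001, 0.008, 0.039, 0.041, 0.27, 0.60]
print(benjamini_hochberg(pvals))  # → [ True  True False False False False]
```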
Finally, in the last step of evaluating the four models, following the recommendations of
Morin et al. (
2016a,
2016b), a comparison of the parameter estimates of the four-factor CFA and four-factor ESEM models was first carried out. Thus,
Table 4 shows the factor loadings, cross-loadings, and uniqueness of each item for the four-factor CFA and four-factor ESEM models, while
Table 5 presents the correlations between factors of both models. As can be seen in
Table 4, the overall size of the factor loadings of the items on their target factors remained similar in the four-factor CFA (λ = 0.588 to 0.883;
M = 0.752) and four-factor ESEM solutions (λ = 0.554 to 0.895;
M = 0.733), showing the existence of well-defined factors. In the ESEM solution, the target factor loadings consistently remained well above the cross-loadings, which were generally very small (|λ| = 0 to 0.258; |
M| = 0.056). In fact, only one cross-loading was higher than |0.20|: item 14 of the worker promises (developing new skills and improving current ones) had a cross-loading on the company promises factor of 0.258, while the standardized loading on its corresponding factor was 0.635. The rest of the cross-loadings obtained values that were below |0.170|.
In turn, the correlations between factors (
Table 5) were slightly lower in the four-factor ESEM model (r = −0.511 to 0.648; |
M| = 0.337) than in the four-factor CFA model (r = −0.579 to 0.690; |
M| = 0.382), which lends support to the use of the exploratory structural equation model (
Swami et al. 2023). Notably, the overall pattern of these correlations was not altered by the use of the four-factor CFA or four-factor ESEM solution. Two aspects of the correlations between factors are worth highlighting. First, the correlation between the psychological contract violation and employee promises factors in both models was very low and not significant. Second, the correlation between company promises and psychological contract violation, although significant, was modest in magnitude. Finally, the highest correlations between factors were found between the dimensions of company promises and perceived justice and trust, and between psychological contract violation and perceived justice and trust. Therefore, considering that the four-factor ESEM model obtained better global fit indices than the four-factor CFA model, a better local fit, and lower correlations between the factors, the four-factor ESEM solution was retained for comparison with the bifactor ESEM model.
As for the bifactor solutions,
Table 6 shows the results of the parameter estimates of the bifactor CFA and bifactor ESEM models. It was previously shown that the bifactor CFA and bifactor ESEM had very satisfactory global fit indices, with those corresponding to the bifactor ESEM model being higher. However, it should be noted that bifactor models may tend to overfit the data regardless of whether the population model has a bifactor structure or not (
Bonifay and Cai 2017;
Bonifay et al. 2017;
Markon 2019). Therefore, when evaluating these models, it is necessary to consider additional criteria beyond the global fit indices. Along these lines, as can be seen in
Table 6, the general factor in both models is not sufficiently well defined, since there are factor loadings below 0.30 in this general factor (
Swami et al. 2023). This is corroborated by the ECV and ωh values. Even though the PUC provided a value of 0.723 for the bifactor CFA model, which is higher than the established minimum (>0.70), the ECV indices of the bifactor CFA and bifactor ESEM models are 0.493 and 0.446, respectively, which are below the minimum required to consider that a model has a bifactor structure (>0.70) (
Rodríguez et al. 2016). Likewise, the ωh values of the bifactor CFA (ωh = 0.641) and bifactor ESEM (ωh = 0.564) did not reach the cutoff point of 0.70 needed to support the existence of a strong general factor. Moreover, the bifactor ESEM model did not reduce the cross-loadings (|λ| = 0 to 0.213; |M| = 0.069) but slightly increased them relative to the ESEM solution (|λ| = 0 to 0.258; |M| = 0.056).
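The ECV and ωh indices discussed above can be computed directly from standardized bifactor loadings using the standard formulas (Rodríguez et al. 2016). A minimal sketch; the loadings are purely hypothetical and are chosen so that both indices fall below the 0.70 cutoffs, mirroring the pattern reported here:

```python
import numpy as np

def bifactor_indices(gen, spec):
    """ECV and omega-hierarchical from standardized bifactor loadings.
    gen:  (n_items,) loadings on the general factor
    spec: (n_items, n_spec) loadings on the specific factors."""
    gen, spec = np.asarray(gen, float), np.asarray(spec, float)
    # ECV: share of common variance explained by the general factor
    ecv = np.sum(gen**2) / (np.sum(gen**2) + np.sum(spec**2))
    theta = 1 - gen**2 - np.sum(spec**2, axis=1)        # uniquenesses
    total = gen.sum()**2 + np.sum(spec.sum(axis=0)**2) + theta.sum()
    omega_h = gen.sum()**2 / total                      # omega-hierarchical
    return ecv, omega_h

# Hypothetical loadings: weak general factor, two specific factors
gen = [0.5] * 6
spec = [[0.5, 0.0]] * 3 + [[0.0, 0.5]] * 3
ecv, omega_h = bifactor_indices(gen, spec)
print(round(ecv, 3), round(omega_h, 3))  # → 0.5 0.545
```

Values this far below 0.70 on both indices argue, as in the text, against a strong general factor.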
In summary, the analyses performed so far suggest retaining the four-factor ESEM model as the solution that best represents the data. As shown above, the four-factor CFA model obtained clearly lower global fit indices than the four-factor ESEM model, in addition to having a worse local fit and higher factor correlations. For its part, the bifactor CFA model showed global fit indices similar to those of the four-factor ESEM model, although its local fit was worse. The bifactor ESEM model obtained the highest global fit indices and a very good local fit. However, neither the bifactor CFA nor the bifactor ESEM model reflected a well-defined general factor, and the bifactor ESEM model slightly increased the cross-loadings of the items on the specific factors rather than reducing them.
Regarding the multigroup measurement invariance analysis, two sociodemographic variables were used to test the stability of the factor structure: gender (male and female) and job level, the latter recoded to provide two groups of similar size (basic workers and managers/supervisors). As a preliminary step to the invariance analysis,
Table 7 shows the global fit indices corresponding to the four-factor ESEM model calculated for each group separately.
To perform the multigroup measurement invariance analysis, the steps described by
Kline (
2023) were followed, calculating first the configurational invariance (equal structure), followed by metric invariance (equal loadings), scalar invariance (equal intercepts), and, finally, strict invariance (equal residuals). In this regard, it should be noted that, when invariance tests fail, it is typical to consider partial invariance tests, in which the invariance restrictions are relaxed for one or a small number of parameters. Therefore, since the operationalization of the ESEM precludes its use in partial invariance estimation, the ESEM-within-CFA approach (EWC;
Marsh et al. 2013) was used to perform the invariance analyses. The EWC basically consists of transforming the ESEM solution into the standard CFA framework in order to perform the analyses mentioned above (
Morin et al. 2013).
Table 8 shows the results of the multigroup measurement invariance analysis performed with the sociodemographic grouping variables. As can be seen, the configurational models obtained satisfactory fit indices, which supports the presence of configurational invariance across groups. Regarding metric invariance, the differences between the RMSEA, CFI, and SRMR indices in the two multigroup analyses were smaller than the criterion determined by
Chen (
2007), while the RMSEA_D values obtained between the configurational and metric models also support the presence of metric invariance, both for the multigroup analysis by gender (RMSEA_D = 0.012) and for the multigroup analysis by job level (RMSEA_D = 0.031). Regarding scalar invariance, the changes in the RMSEA, CFI, and SRMR indices were small and did not reach the limit considered to rule out scalar invariance, while the RMSEA_D values were less than 0.070 in the multigroup analysis by gender (RMSEA_D = 0.019) and in the multigroup analysis by job level (RMSEA_D = 0.068). Finally, with respect to strict invariance, the results presented in Table 8 show that this type of invariance is met for the multigroup analysis by gender: the changes in the RMSEA, CFI, and SRMR fit indices between the scalar and strict models were not large enough to rule out strict invariance, and the RMSEA_D statistic also supported its presence (RMSEA_D = 0.037).
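The change-in-fit rule applied in these comparisons can be expressed as a simple decision function. The sketch below assumes the commonly cited Chen (2007) cutoffs (ΔCFI ≥ −0.010, ΔRMSEA ≤ 0.015, ΔSRMR ≤ 0.030 for loading invariance and ≤ 0.010 for intercept/residual invariance); the exact thresholds used in the study are those of the cited source.

```python
def invariance_holds(d_cfi, d_rmsea, d_srmr, loading_step=True):
    """Chen's (2007) change-in-fit rule for nested invariance models.
    Deltas = (more constrained) - (less constrained) model fit.
    The cutoffs below are the commonly cited values and are
    assumptions here; consult Chen (2007) for the exact criteria."""
    srmr_cut = 0.030 if loading_step else 0.010
    return (d_cfi >= -0.010) and (d_rmsea <= 0.015) and (d_srmr <= srmr_cut)

# e.g., a configural -> metric comparison with small changes in fit
print(invariance_holds(d_cfi=-0.004, d_rmsea=0.006, d_srmr=0.012))  # → True
```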
However, strict invariance by job level was not clearly supported by the results. More specifically, although the changes in the RMSEA, CFI, and SRMR indices did not reach the limit to reject this type of invariance (Chen 2007), the RMSEA_D value obtained in the multigroup analysis by job level (RMSEA_D = 0.097) suggests that strict invariance was not reached. Therefore, partial strict invariance was assessed by examining the χ2 values and their associated probabilities in the Lagrange multiplier tests of the strict invariance model to identify which parameters should be estimated freely between the two groups. The highest indices corresponded to the release of the intercept of company promise item 5 (participation in decision making), the release of the residuals of company promise items 2 (guarantee of stable work) and 11 (opportunities for advancement/development), and the release of the residual of employee promise item 3 (showing loyalty to the organization). By freely estimating the intercept and the residuals of these items in both groups, an RMSEA_D index of 0.070 was obtained, which supported the existence of partial strict invariance in the multigroup analyses by job level.
Finally, the reliability analysis carried out on the dimensions of the psychological contract scale, as well as the concurrent validity analyses, are presented and discussed. As can be seen in
Table 9, the dimensions of the psychological contract scale obtained very good composite reliability indices, all of them above 0.80.
Table 9 also shows the results of the Pearson correlations calculated between the total scores of the scale dimensions and the scores of the job satisfaction and organizational commitment variables. As shown, the correlations of the psychological contract dimensions with job satisfaction and organizational commitment were significant, highlighting the values of the correlation indexes established between company promises and organizational commitment (
r = 0.539,
p < 0.01), between justice and trust and job satisfaction (
r = 0.510,
p < 0.01), and between justice and trust and organizational commitment (
r = 0.604,
p < 0.01).
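Composite reliability of the kind reported in Table 9 can be obtained directly from standardized factor loadings. A minimal sketch (McDonald's omega, assuming uncorrelated errors); the loadings below are hypothetical, merely in the range typical of a well-defined factor:

```python
import numpy as np

def composite_reliability(loadings):
    """McDonald's omega (composite reliability) from standardized
    factor loadings, assuming uncorrelated measurement errors."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + np.sum(1 - lam**2))

# Hypothetical standardized loadings for one dimension
print(round(composite_reliability([0.75, 0.80, 0.70, 0.85, 0.72]), 3))  # → 0.876
```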
Lastly, the structural equation model calculated to analyze the concurrent validity of the psychological contract scale is shown in
Figure 4. The fit indices were satisfactory (χ2 = 1928.994; RMSEA = 0.049 [0.046, 0.052]; CFI = 0.917; TLI = 0.901; SRMR = 0.035), and an examination of the regression coefficients shows that the four dimensions of the psychological contract scale significantly influenced job satisfaction and organizational commitment.
4. Discussion
The purpose of the present study was to validate the psychological contract questionnaire designed by the PSYCONES research team (
Guest et al. 2010;
Silla et al. 2005). To this end, the number of factors to be retained in the EFA was first determined through parallel analysis and the EKC. The four-factor structure supported by these tests coincided with the structure proposed by the authors of the questionnaire (
Guest et al. 2010;
Silla et al. 2005). Specifically, the dimensions of (1) fulfillment of company promises; (2) fulfillment of employee promises; (3) psychological contract violation; and (4) perceived justice and trust were maintained. These results are consistent with the recommendations proposed by
Freese and Schalk (
2008), who determined that a valid psychological contract assessment instrument should provide separate measures for the fulfillment of company promises, the fulfillment of worker promises, and the degree of perceived violation of the psychological contract.
On the other hand, the EFA procedure eliminated 13 of the original 45 items of the scale. Even so, the final structure of the questionnaire consisted of 32 items (10 for fulfillment of company promises, 12 for fulfillment of employee promises, 3 for psychological contract violation, and 7 for perceived justice and trust), a number very similar to that of other studies, in which instruments designed to assess the psychological contract have ranged from 22 to 57 items (
Freese et al. 2008;
Raeder et al. 2009;
Spies et al. 2010;
Zhang et al. 2020), and which allows each of the dimensions of the psychological contract to be adequately explored. Moreover, a qualitative analysis of the deleted items shows that their content or wording may refer to implausible situations in the work environment (e.g., “Has the company kept its promises to support you with non-work-related problems?”), or may induce social desirability, especially in items related to the dimension of fulfillment of the worker’s promises (e.g., “Have you kept your promises to be on time at work?”). This may have complicated responses to these items and, therefore, introduced some distortion into the estimation of the factor loadings. Indeed, it is difficult for respondents to clearly understand all the definitions of the promises they make to employers (
Conway and Briner 2009) and not be influenced by some biases in determining their perception of the promises made to the company.
On the other hand, to evaluate the internal structure of the scale, the four-factor CFA, bifactor CFA, four-factor ESEM, and bifactor ESEM models were tested. All four models showed satisfactory fit indices, with those of the bifactor CFA and four-factor ESEM models being significantly higher than those of the four-factor CFA model, and the bifactor ESEM model obtaining the best overall fit indices. However, a deeper analysis of the bifactor models did not provide sufficient support for the existence of a general factor, so the four-factor ESEM model was finally retained as the one that best represented the structure of the psychological contract scale.
In comparison with the studies published to date on the design and validation of scales to assess psychological contract, none have employed internal validity analysis techniques that contrast several models in order to determine which one best fits the data (
Guerrero 2005;
Raeder et al. 2009;
Spies et al. 2010;
Zhang et al. 2020). Furthermore, the relevance of the present study also lies in the evidence provided on the factorial structure of the psychological contract scale of the PSYCONES team (
Guest et al. 2010;
Silla et al. 2005), continuing with the line proposed by
Alcover et al. (
2017) and
Topa et al. (
2022), and responding to the need raised by these authors for further research to clarify this issue. The large sample used to carry out the analyses, together with the cross-validation methodology and the reliability indices obtained for each dimension, support the robustness of the results.
At the same time, and in line with the above, the multigroup measurement invariance analyses indicate that the scale is interpreted equivalently across gender and job level, which evidences the stability of the model and supports the questionnaire's use as a measurement tool. To date, no published work on the measurement of the psychological contract is known to have been subjected to multigroup invariance analysis, a technique currently considered in the social sciences a necessary prerequisite for making comparisons between groups, as it ensures that items have the same meaning in all groupings (
Maassen et al. 2023;
Putnick and Bornstein 2016).
It should be noted that demonstrating invariance with respect to job level does not imply that there are no differences in the dimensions of the psychological contract between basic workers and managers/supervisors. On the contrary, the presence of invariance with respect to this grouping factor confirms that it is possible to compare this construct across groups (
Protzko 2024). Measurement invariance determines that all effects of the grouping variable on the items are completely mediated by the latent construct (
Borsboom 2023). In other words, when invariance occurs, all differences in the observed measure between groups can be explained by the differences caused by the grouping factor in the latent construct. By contrast, when invariance does not occur, group differences in the latent construct will lead to group differences in the items, but there will also be differences in the items between groups that will be the consequence of some other mechanism that is not included in the model.
Finally, the concurrent validity analyses conducted provide evidence of significant relationships between the dimensions of the psychological contract scale and job satisfaction and organizational commitment, two of the most studied variables as an outcome of the psychological contract (
Topa et al. 2022;
Zhao et al. 2007). More specifically, the review by
Topa et al. (
2022) found that, in the scientific literature, the relationships between psychological contract breach and job satisfaction range from
r = −0.45 to −0.38, while the relationships between psychological contract breach and organizational commitment range from
r = −0.38 to −0.32. Similarly, in the present study, the psychological contract violation dimension showed a negative and significant correlation with job satisfaction (
r = −0.434) and with organizational commitment (
r = −0.336). Likewise, the structural equation model revealed that all dimensions of the scale significantly predicted job satisfaction and organizational commitment. These results are in line with similar studies, which have found that the psychological contract, and more specifically its fulfillment, exerts a significant influence of comparable magnitude on job satisfaction and organizational commitment (
Ampofo 2020;
Bravo et al. 2019;
Karani Mehta et al. 2024;
Rodwell et al. 2022).
The results of this research not only support the validity of the PSYCONES questionnaire as a reliable tool for measuring the psychological contract in the work context but also highlight its potential application in organizational management. In practical terms, human resources professionals and managers can use the questionnaire to assess employees’ perception of the fulfillment of organizational promises, detect possible gaps in the employee–employer relationship, and anticipate problems related to job dissatisfaction and weakened organizational commitment. This would allow companies to design more effective strategies to strengthen trust and the perception of fairness within the organization, both key to talent retention and well-being at work (
Gelencsér et al. 2023). Moreover, from a sectoral perspective, the PSYCONES questionnaire may be especially useful in industries where the state of the psychological contract plays a crucial role in employee motivation and performance. Sectors with high employee turnover, such as hospitality, retail, and services, can benefit from its application to identify factors that influence employee retention. Likewise, in sectors with a high degree of specialization, such as technology, education, or healthcare, the measurement of the psychological contract can help to design retention and career development policies aligned with employee expectations.
As organizations adapt to a constantly changing reality, measuring the psychological contract becomes essential to understand how mutual expectations and obligations have evolved in this changing context. Having validated and updated instruments will allow for a more accurate assessment of emerging work dynamics and their impact on employee well-being, satisfaction, and retention, thus contributing to effective talent management today.
In summary, the present research contributes to the study of the psychological contract by providing a questionnaire with sound psychometric properties for evaluating workers’ perceptions of this construct.
5. Limitations and Future Research Directions
Despite the contributions of this study, some limitations and future lines of research are worth pointing out. The first concerns the exclusive use of self-report questionnaires for data collection, since one of the main difficulties in evaluating the psychological contract lies in its very definition, which, as described by
Rousseau (
1990), corresponds to individual perceptions of the promises and obligations existing between an employee and his or her employer. In this sense, an employee may well perceive promises that the employer never actually made, the result of the individual’s own misinterpretation. It is therefore important to consider developing forms of evaluation of this construct that are not limited to self-reports but incorporate diverse sources of information, facilitating comparison of the data obtained.
Additionally, it would also be highly relevant to develop measures that evaluate not only the fulfillment or non-fulfillment of the different promises but also the importance that each of them holds for the employee. In other words, employees may value the same promise differently, which in turn shapes how its fulfillment or breach affects their perception of the contract (
De Vos and Meganck 2009;
Kraak et al. 2017). For example, a person may perceive a breach of contract if the promise that was most important to him or her has not been fulfilled, even if all other promises of lesser value have been fulfilled (
Freese and Schalk 2008). In short, new lines of research aimed at developing methods for assessing the psychological contract must consider the relevance of specific promises to the employee: when judging whether the contract has been fulfilled, it is essential to take into account not only fulfillment itself but also the value or importance of each promise.
On the other hand, although this work has demonstrated the invariance of the questionnaire across two sociodemographic factors, it would also be of great interest to examine how the scale’s dimensions function across sectors or working conditions, such as the type of employment contract. In the present sample, multigroup measurement invariance analysis was not carried out for these variables because of the unequal number of participants across sectors and contract types, in order to avoid biased estimates caused by very different group sizes. A highly relevant line of future research is therefore to analyze the performance of this scale with workers from different sectors and with different types of employment contract, to verify whether this invariance is indeed maintained or whether substantial differences emerge. A further limitation regarding invariance is that the sociodemographic variable job level was recoded into two groups of similar size (basic workers and supervisors/managers), collapsing two of its three original categories into one. Future studies should therefore further analyze the invariance of this scale with respect to job level and determine whether the structure is indeed stable across entry-level workers, supervisors, and managers or directors.
Finally, although the total sample size is considerable and allows relevant conclusions to be drawn, these results were obtained with a sample of working people in the Spanish labor context, so their generalization requires caution: the labor situation in Spanish society may differ considerably from the labor realities of other countries, which may in turn affect the content and status of the psychological contract (
Aldossari et al. 2024;
Conway et al. 2014;
Jayaweera et al. 2021;
Metz et al. 2012). Thus, it is necessary to recognize that the findings provided here, while important for academia and psychological contract research, should not be taken as an endpoint. Rather, other researchers are encouraged to conduct similar studies that measure the psychological contract with the scale presented in this research, in order to further examine its validity and factorial structure.