Article

Structured Subjective Readiness in Situational Leadership: Validating the 4D Model as an Associative Predictor

Department of Economics, Algebra Bernays University, Gradiscanska ul. 24, HR 10000 Zagreb, Croatia
*
Author to whom correspondence should be addressed.
Adm. Sci. 2025, 15(12), 488; https://doi.org/10.3390/admsci15120488
Submission received: 28 October 2025 / Revised: 4 December 2025 / Accepted: 6 December 2025 / Published: 15 December 2025

Abstract

The accurate assessment of follower readiness remains a challenge within Situational Leadership Theory (SLT), which traditionally emphasizes competence and commitment while overlooking motivational and relational cues. To address this gap, the study examined a structured four-facet model of subjective readiness—Drive, Dare, Decode, and Dialogue—and its association with employee and manager satisfaction and team adaptability. Data from a cross-sectional survey of employees and managers were analyzed using a 12-item 4D readiness scale alongside traditional readiness indicators and established measures of satisfaction and adaptability. The 4D scale showed strong overall reliability and factorial validity, though the Drive facet displayed weaker psychometric properties in the employee sample and should be interpreted cautiously. Overall readiness profiles were positively associated with both satisfaction and adaptability, with Dialogue emerging as a consistent contributor across outcomes. These associations should be interpreted as indicative rather than conclusive, given the study’s correlational design and reliance on self-reported data. Including the 4D facets alongside traditional indicators offered modest yet meaningful incremental explanatory value. Taken together, our findings indicate that a structured subjective readiness framework can enrich SLT’s traditional view of readiness by emphasizing motivational and relational dynamics—although further validation and longitudinal studies are needed to confirm these initial results.

1. Introduction

Leadership research continues to emphasize the value of adaptability for managing today’s dynamic and multicultural workplaces (Schulze & Pinkow, 2020; Ramesh et al., 2024; Del Pino-Marchito et al., 2025). This importance was underscored during the COVID-19 pandemic, when leaders had to adapt almost overnight to remote teamwork, extreme uncertainty, and unprecedented challenges to employee well-being (Dirani et al., 2020; Contreras et al., 2020; Rudolph et al., 2021). The crisis showed that adaptability is not just a helpful managerial skill but is essential for modern leadership (Bartsch et al., 2021).
Situational Leadership Theory (SLT) remains one of the most widely used leadership frameworks in management training and practice (Johansen, 1990; Avery & Ryan, 2002; Egan, 2023). SLT’s intuitive premise—that leaders should vary their task and relationship behaviours in response to follower readiness—has found a place in countless executive programmes and consulting toolkits. Nevertheless, decades of research have revealed two persistent limitations. First, SLT’s two-dimensional model oversimplifies leader–follower interactions, overlooking factors such as psychological safety and organizational culture (Graeff, 1997; A. Edmondson, 1999; McLaurin, 2006; Omilion-Hodges & Ptacek, 2021). Second, SLT tends to neglect relational and emotional dynamics, favouring rational assessments of competence and commitment while giving less attention to affective signals, shared meaning making, and dialogic sensemaking (Humphrey, 2013; Thompson & Glasø, 2015; Jäppinen, 2017). These issues are not merely theoretical. Practitioners have observed that traditional SLT assessments can misjudge follower “readiness” in fast-changing, high-emotion situations such as crises, post-merger transitions, or remote work (Zhang & Fjermestad, 2006; Kashive et al., 2022; Westover, 2024). Yet, to date, SLT offers no framework to capture these subjective facets of readiness.
The primary objective of this study is to help address SLT’s limitations by developing a more nuanced model of follower readiness. Therefore, a 4D model was developed, containing Drive, Dare, Decode, and Dialogue as a multidimensional framework for the richer assessment of subjective readiness. This structured “4D” model extends SLT’s traditional focus on competence and commitment by explicitly incorporating motivational (Drive), confidence-based (Dare), cognitive (Decode), and relational/communicative (Dialogue) facets. In doing so, it addresses a key theoretical gap in leadership research: the lack of a multidimensional, subjective readiness construct that captures the full range of follower capabilities and engagement. Notably, scholars have long called for richer, multidimensional, and more subjective conceptions of readiness (Avolio & Hannah, 2008; Carsten & Uhl-Bien, 2012; Thompson & Glasø, 2018). This study responds to that call by proposing the 4D Model of Subjective Readiness as a new diagnostic framework. The 4D model is intended to broaden leadership readiness assessments and serve as a concise tool for managers to better align their behaviour with follower needs across contexts. Accordingly, this study asks the following research question: can a 4D subjective readiness model better capture follower readiness and relate to outcomes (satisfaction, adaptability) beyond traditional one-dimensional indicators?
To test the 4D model empirically, this study draws on cross-sectional survey data from full-time employees and managers who completed a newly developed 12-item readiness measure, along with established assessments of satisfaction, perceived team adaptability, and traditional readiness indicators such as tenure and training. This study aims to provide two primary contributions to the situational leadership literature. It proposes a broadened conception of follower readiness that incorporates motivational, cognitive, and relational dimensions alongside traditional competence and commitment, and it provides initial correlational evidence that these subjective facets are associated with important workplace outcomes beyond conventional indicators—though these insights should be interpreted cautiously given the design and psychometric limitations noted. The remainder of the article first examines the theoretical grounding and development of the 4D model, and then describes the research approach and presents the empirical findings. This is followed by a discussion of their implications and the study’s methodological constraints, after which we conclude with suggestions for future research.

2. Theoretical Framework

2.1. Situational Leadership Theory: Origins, Evolution, and Critiques

Situational Leadership Theory (SLT), introduced by Hersey and Blanchard in 1969 (formalized in 1982), posits that effective leaders adapt their approach to a follower’s readiness rather than relying on any single “best” style (Hersey & Blanchard, 1969, 1982; Del Pino-Marchito et al., 2025). In SLT, readiness is defined as a combination of ability (competence) and willingness/confidence (commitment) to perform a task. The model outlines four readiness levels (D1–D4), each matched with a corresponding leadership style (S1–S4: Directing, Coaching, Supporting, Delegating), granting followers greater autonomy as their readiness increases. Early studies reported that aligning leadership style to follower readiness improved satisfaction and performance (Hersey et al., 1996). However, even in these initial demonstrations, diagnosing readiness proved subjective, foreshadowing later critiques about bias and inconsistency in the SLT approach (Vecchio et al., 2006; Thompson & Glasø, 2018; Serrat, 2021).
Although SLT’s logic is appealing, empirical support is mixed and several enduring limitations are well documented (Vecchio, 1987; Cairns et al., 1998; Thompson & Glasø, 2015, 2018). A primary challenge is the bias and inconsistency in how readiness is assessed (Graeff, 1997). Because SLT relies on the leader as the sole judge of a follower’s readiness, evaluations can suffer from attribution errors and halo effects. Studies show SLT’s predictions hold only when leader and follower readiness ratings align—if they do not, the model’s explanatory power breaks down (Vecchio, 1987; Cairns et al., 1998; Thompson & Vecchio, 2009; Papworth et al., 2009; Thompson & Glasø, 2015, 2018). Another limitation is the theory’s linear progression (from S1/D1 up to S4/D4), which oversimplifies dynamic behaviour. Readiness is task-specific and fluctuates over time, so a stage-like model clashes with modern organizations where priorities shift rapidly, roles are fluid, and interdependence is high (Fernandez & Vecchio, 1997; Vecchio et al., 2006; Omilion-Hodges & Ptacek, 2021). Leaders must continuously recalibrate their approach (Papworth et al., 2009), and as a result, rigid style prescriptions often fail across different tasks and times.
Beyond these issues, research shows that context matters greatly for SLT. Cross-cultural studies have found that different cultures (and even different individuals) vary in their preference for directive versus supportive leadership and in how they interpret “readiness” (House et al., 2004; Rockstuhl et al., 2011; Kaifi et al., 2014). Likewise, sector-specific studies illustrate that SLT’s effectiveness is context-dependent, and many environments still lack a standardized way to apply the theory in practice (Fitriani et al., 2018; Manyuchi & Sukdeo, 2021; Wang et al., 2024). Moreover, in modern work arrangements such as virtual teams, hybrid workplaces, or crisis situations, leadership demands often extend beyond SLT’s original competence–commitment focus. For example, coordinating remote collaboration and adapting to technology-mediated work require considerations not captured by the basic two-dimensional model (Zhang & Fjermestad, 2006; Kashive et al., 2022; Westover, 2024).
Related research on sensemaking and psychological safety shows that the way employees interpret ambiguity and their sense of safety in speaking up are critical drivers of satisfaction, learning, and coordination (Weick, 1995; A. C. Edmondson & Lei, 2014; Thompson & Glasø, 2018; Schulze & Pinkow, 2020). However, SLT’s standard definition of readiness only addresses these cognitive and relational factors indirectly. This omission helps explain why SLT has yielded mixed empirical results when applied without considering the workplace’s relational climate and sensemaking processes (Vecchio, 1987; Thompson & Glasø, 2018).
Despite SLT’s shortcomings, studies consistently find that supportive, coaching-oriented leadership—essentially, the “high relationship” style in SLT terms—is linked to beneficial outcomes like lower turnover, greater innovation, and higher employee engagement (Hammond et al., 2011; Lee et al., 2019; Kim & Kim, 2021; Cho et al., 2022; Biscotti et al., 2018; Jiang et al., 2019; Ribeiro et al., 2022; Bonini et al., 2024). Notably, such positive outcomes are strongest when leadership assessments consider not just a follower’s task ability and willingness, but also motivation quality, efficacy beliefs, sensemaking under ambiguity, and open dialogue—dimensions that traditional SLT tools tend to ignore (Del Pino-Marchito et al., 2025; Thompson & Glasø, 2018; Schulze & Pinkow, 2020). Taken together, these observations point to the need to extend SLT’s readiness construct beyond a single competence–commitment continuum toward a multidimensional profile that incorporates these additional readiness dimensions.

2.2. Subjective Readiness and the 4D Model

Building on the limitations of SLT outlined above, contemporary leadership research has increasingly turned toward subjective, multidimensional conceptions of follower readiness (e.g., Avolio & Hannah, 2008; Thompson & Vecchio, 2009; Thompson & Glasø, 2018). A common critique is that SLT’s simple two-factor view of readiness (competence and commitment) ignores important motivational, affective, cognitive, and relational dynamics in follower behaviour (Graeff, 1997; Carsten & Uhl-Bien, 2012; Thompson & Glasø, 2018; Vanovenberghe et al., 2021). In light of this gap, scholars have called for richer readiness constructs that capture how employees interpret situations, maintain a sense of agency, and co-create meaning with their leaders (Thompson & Vecchio, 2009). This study responds to this call by proposing a structured four-facet readiness model that incorporates motivational, confidence-based, cognitive, and relational components that have been comparatively overlooked in prior SLT work. Importantly, the study does not directly test SLT’s style-matching predictions. Instead, it focuses on enriching the follower readiness construct itself. Accordingly, the 4D model is introduced as an exploratory framework and examined in terms of its associations with outcomes (i.e., as an associative predictor) in this initial investigation.
Early efforts to operationalize readiness relied on a mix of measures, including objective indicators (e.g., tenure, formal training) alongside self-reported performance (Silverthorne & Wang, 2001; Zigarmi & Roberts, 2017). However, these proxy measures showed only weak correlations with follower satisfaction and performance, as they failed to capture underlying psychological factors (Thompson & Glasø, 2018). For instance, an employee with a decade of tenure might still feel unprepared for a novel project, whereas a novice with strong support could demonstrate high initiative. This pattern suggests that readiness assessments should account for autonomous motivation and the quality of social exchanges. Studies confirm that these subjective, relational factors—such as a person’s sense of autonomy and trust in their leader—are critical for effective leadership adaptation but remain invisible when using purely objective metrics (Hocine et al., 2014; Akdol & Arikboga, 2017). Accordingly, this study uses tenure, formal training, and self-rated performance as proxy indicators of traditional SLT readiness components, while acknowledging that these are indirect, imperfect measures of “competence” and “commitment.”
A growing body of evidence shows that subjective assessments often outperform objective metrics in predicting adaptability and satisfaction, particularly in volatile and knowledge-intensive contexts (Fernandez & Vecchio, 1997; Thompson & Glasø, 2018). Unlike static objective data, subjective measures enable leaders to track real-time fluctuations in team morale, trust, and sensemaking—factors that are critical to team resilience (Hamilton et al., 2017). However, purely ad hoc subjective judgement can be biased and inconsistent (Graeff, 1997), underscoring the need for a more structured, theory-grounded approach to capture subjective readiness.
To move beyond SLT’s single continuum of “competence” vs. “commitment”, this study proposes a 4D model of subjective readiness—a four-dimensional framework with distinct facets aligned to the competence and commitment aspects of readiness:
  • Drive (motivational readiness) represents the quality of a follower’s motivation, as described by self-determination theory (Deci & Ryan, 2000). It focuses on how well autonomy, competence, and relatedness needs are fulfilled to energize one’s willingness to work. High Drive leads to greater engagement, higher satisfaction, and sustained effort—conditions under which a leader’s adaptive behaviors are more likely to succeed (Deci & Ryan, 2000; Pasaribu et al., 2022). Whereas metrics like tenure or credentials indicate someone’s exposure or experience, Drive captures the inner motivational substrate that turns capability into consistent contribution. In SLT terms, it elaborates the “commitment” component by adding depth regarding motivation quality (Deci & Ryan, 2000; Uzzaman & Karim, 2016). Employees with higher Drive are likely to report greater personal satisfaction and may also contribute to higher manager satisfaction through proactive, prosocial behavior. Drive may additionally bolster team adaptability indirectly by sustaining effort during times of change.
  • Dare (confidence readiness) reflects a follower’s self-efficacy and action orientation, drawing on Bandura’s (1997) theory of self-efficacy and recent research linking confidence to creativity (Islam & Islam, 2025). It goes beyond raw talent by emphasizing the belief in one’s ability to overcome challenges through goal-setting, persistent effort, and resilience. In essence, Dare is the volitional catalyst that helps translate competence into observable performance (Stajkovic & Luthans, 1998). It extends the “competence” side of SLT by specifying an agentic drive that mobilizes a person’s skills. High Dare is likely associated with greater manager-rated satisfaction because confident employees tend to be reliable and take initiative. Dare may also promote adaptability, as long as the individual’s risk-taking is disciplined by shared team priorities. At the same time, excessive assertiveness without a strong relational foundation may cause friction—highlighting the need to balance Dare with sufficient Dialogue.
  • Decode (cognitive readiness) captures a follower’s sensemaking ability and cognitive appraisal processes in the face of uncertainty (Lazarus & Folkman, 1984; Lazarus, 1991; Weick, 1995). In practice, it is the capability to interpret ambiguous cues, update one’s mental models, and choose adaptive responses when goals change or information is incomplete. Decode adds a dynamic cognitive dimension to SLT’s concept of readiness by explaining how a follower appraises events (e.g., seeing a situation as a challenge versus a threat) and how those appraisals drive learning and coordination. In doing so, it extends the “competence” aspect of readiness to include cognitive flexibility. High Decode is expected to contribute to greater team adaptability in volatile situations because accurate sensemaking allows rapid realignment and effective coordination (Weick, 1995; Pulakos et al., 2000). By contrast, Decode might have a weaker direct impact on day-to-day satisfaction, though it becomes crucial when tasks are novel or fluid.
  • Dialogue (relational readiness) represents the degree of open two-way communication and trust between leader and follower. It draws on concepts from transformational leadership (e.g., inspirational communication, individualized support) and leader–member exchange (LMX) theory. It also embeds the principle of psychological safety—confidence that one can take interpersonal risks (speaking up, admitting errors) without fear of punishment (Bass & Avolio, 1994; Dulebohn et al., 2012; A. C. Edmondson & Lei, 2014). In essence, Dialogue makes explicit the relational climate that was only implicit in SLT’s notion of “commitment” by defining the social conditions (trust, safe communication) needed for cooperation. High Dialogue is expected to be associated with greater satisfaction for both employees and managers, and it likely fosters team adaptability by enabling people to voice opinions, seek help, and share errors under uncertain conditions (Dulebohn et al., 2012; A. C. Edmondson & Lei, 2014).
In summary, the 4D model is proposed as an extension of the SLT framework, unpacking both of SLT’s original readiness components. Drive and Dialogue enrich the “commitment” dimension (adding motivation quality and relational safety), while Dare and Decode refine the “competence” dimension (adding efficacy-based action orientation and cognitive flexibility in appraisal). This integrated approach may help explain why traditional objective indicators of readiness (like tenure or completed training) and self-rated performance often have weak relationships with outcomes such as satisfaction or adaptability (Pulakos et al., 2000; Thompson & Glasø, 2018). Those simple measures likely miss how people actually feel, think, and interact in context. By contrast, it is proposed that the structured subjective cues provided by the 4D facets can capture these nuanced readiness factors. Table 1 maps SLT’s traditional readiness components to the 4D dimensions and highlights the additional conceptual value each facet introduces beyond the competence–commitment framework.
The relevance of each 4D facet can vary by context. In knowledge-intensive or turbulent environments, Decode and Dialogue tend to be most crucial because they enable collective sensemaking and coordination. By contrast, in routine, stable settings, Drive and Dare become more prominent predictors of steady performance (Fernandez & Vecchio, 1997; Omilion-Hodges & Ptacek, 2021). Cultural differences further influence which facets are emphasized. For example, in high-power-distance collectivist cultures, Dialogue is especially valued for maintaining harmony and alignment, whereas Drive and Dare are more prominent in individualist cultures that prize autonomy and personal agency (Fernandez & Vecchio, 1997; House et al., 2004; Rockstuhl et al., 2011). These contingencies may help explain why findings from SLT-based interventions have been mixed across different regions and industries. They also underscore the potential benefit of using a multidimensional, context-sensitive readiness model (Fitriani et al., 2018; Manyuchi & Sukdeo, 2021; Kashive et al., 2022; Zhang & Fjermestad, 2006; Westover, 2024). Synthesizing these arguments yields three testable hypotheses consistent with SLT’s adaptive logic and the 4D framework:
H1. 
Higher employee-rated 4D readiness is positively associated with employee satisfaction with the supervisor, primarily via Drive (autonomous motivation) and Dialogue (relational safety and communication).
H2. 
Higher manager-rated 4D readiness of employees is positively associated with managers’ satisfaction in collaborating with those employees, primarily through Drive (autonomous motivation), Dare (confidence and initiative), and Dialogue (open communication).
H3. 
Subjective assessments of employee readiness (4D model) have greater predictive power for employee satisfaction and team adaptability than traditional readiness indicators (e.g., tenure, completed training, self-rated performance).
Collectively, the 4D model is introduced as a concise, theory-grounded framework aimed at systematizing the assessment of follower readiness. It conceptually complements the traditional SLT framework by clearly delineating the motivational, cognitive, and relational building blocks through which leaders and followers achieve alignment across diverse organizational contexts (Avolio & Hannah, 2008; Thompson & Glasø, 2018; Carsten & Uhl-Bien, 2012; Vanovenberghe et al., 2021).

3. Materials and Methods

3.1. Overall Research Design

The study employed a quantitative cross-sectional survey design to examine the relationships between employees’ subjective readiness (the 4D model comprising Drive, Dare, Decode, and Dialogue) and selected organizational outcomes (employee satisfaction with the supervisor and perceived team adaptability). This one-time measurement approach aligns with the study’s aim to validate a new readiness model by examining associations at a single point in time rather than drawing any causal conclusion. The design was grounded in several theoretical frameworks, including transformational leadership (Bass, 1990), self-determination theory (Deci & Ryan, 2000), psychological safety (A. C. Edmondson & Lei, 2014), cognitive appraisal (Lazarus, 1991), sensemaking (Weick, 1995), and self-efficacy (Bandura, 1997).
Because employees and managers often interpret readiness differently, data were collected from each group separately rather than in matched employee–manager pairs. This independent sampling ensured that each perspective was captured on its own terms, avoiding cross-contamination between roles. Accordingly, Hypotheses 1 and 2 were tested within the employee and manager subsamples, respectively, examining the relationship between 4D subjective readiness and dyadic satisfaction in each case. Hypothesis 3 tested whether the 4D readiness model adds explanatory power beyond traditional objective indicators of readiness (tenure, formal training) and self-rated performance by examining the increase in variance explained in employee satisfaction and perceived team adaptability.

3.2. Participants and Procedure

Data were collected in May 2025 on the Prolific platform, which is widely used in academic research (e.g., Palan & Schitter, 2018; Peer et al., 2022; Gordon et al., 2023). Inclusion criteria were age ≥ 18, full-time employment, and fluency in English. A pilot survey (n = 50) confirmed that the items were clear, and the final survey yielded n = 260 valid responses. The employee subsample consisted of 169 respondents, whereas the manager subsample had 91 respondents, reflecting a modest sample size for the manager group. This imbalance was due in part to the availability of participants in those roles and the filtering procedures used on Prolific. Given the smaller manager group, the results from the manager perspective were interpreted with caution. The questionnaire was administered via Google Forms with branching logic to route participants to employee- or manager-specific items. Informed consent was obtained electronically; participation was voluntary with the option to withdraw at any point. All procedures complied with the Declaration of Helsinki (e.g., Yusof et al., 2022), and institutional ethics approval was obtained.
Sample size planning was conducted a priori using recommendations from Green (1991) and Cohen (1992). For a multiple regression with seven predictors targeting a medium effect size (f² ≈ 0.15) at α = 0.05 and power = 0.80, between 106 and 111 cases were recommended as the minimum sample size. For a medium-sized bivariate correlation (r ≈ 0.30), roughly 84 cases would be sufficient. The final sample of 260 respondents exceeded both benchmarks, helping to ensure stable parameter estimates overall. The employee subsample (n = 169) easily met the minimum requirements for all planned analyses, whereas the manager subsample (n = 91), although sufficient for basic correlational tests, fell slightly below some regression guidelines; accordingly, analyses involving manager ratings were interpreted with caution in light of the limited sample size. Because the population size was assumed to be effectively infinite, no finite-population correction was applied. Finally, to address potential heteroscedasticity, HC3 heteroscedasticity-consistent standard errors were used for all regression models (White, 1980; Hayes & Cai, 2007).
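To illustrate how this planning step can be reproduced, the minimal sketch below in R (the language used for the main analyses) applies Green’s (1991) rules of thumb and the pwr package; it is an illustrative reconstruction, not the authors’ original script.

  # A priori sample-size planning for m = 7 predictors (illustrative sketch).
  # Green's (1991) rules of thumb:
  #   N >= 50 + 8m  (testing the overall R^2)       -> 106 for m = 7
  #   N >= 104 + m  (testing individual predictors)  -> 111 for m = 7
  m <- 7
  c(overall = 50 + 8 * m, individual = 104 + m)

  # Power-based checks (medium effects, alpha = .05, power = .80) with the pwr package
  library(pwr)
  pwr.f2.test(u = m, f2 = 0.15, sig.level = 0.05, power = 0.80)  # solves for the error df
  pwr.r.test(r = 0.30, sig.level = 0.05, power = 0.80)           # medium bivariate correlation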

3.3. Measures

3.3.1. Development and Validation of the 4D Readiness Scale

The study utilized a newly developed 12-item 4D scale, designed to capture four dimensions of subjective readiness: Drive, Dare, Decode, and Dialogue. Content validity was established through expert review: an independent panel of ten specialists rated each item for relevance on a four-point scale (1 = “not relevant,” 4 = “highly relevant”). Item-level content validity indices (I-CVI) and scale-level averages (S-CVI) were then calculated following procedures outlined by Polit and Beck (2006). The employee version achieved S-CVI/Ave = 0.94 and S-CVI/UA = 0.67, while the manager version reached S-CVI/Ave = 0.93 and S-CVI/UA = 0.50. All items exceeded the recommended minimum I-CVI ≥ 0.78, confirming adequate coverage. Only a few minor wording adjustments were needed before the scale’s full deployment.
Respondents rated their agreement with each statement on a five-point Likert scale (1 = strongly disagree, 5 = strongly agree). Scores for each of the four dimensions were calculated as the mean of their three respective items. An overall 4D readiness index was then computed as the average of all 12 items. The composite 4D scale demonstrated strong internal consistency (Cronbach’s α = 0.88).
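As an illustration of the scoring procedure, the R sketch below computes facet and composite scores and Cronbach’s α; the data frame dat_employee and the item names drive_1 through dialogue_3 are hypothetical placeholders, not the published item labels.

  # Scoring sketch for the 12-item 4D scale (hypothetical data frame and item names).
  library(psych)

  facets <- list(
    drive    = c("drive_1", "drive_2", "drive_3"),
    dare     = c("dare_1", "dare_2", "dare_3"),
    decode   = c("decode_1", "decode_2", "decode_3"),
    dialogue = c("dialogue_1", "dialogue_2", "dialogue_3")
  )

  # Facet scores = mean of the three respective items; overall index = mean of all 12 items
  for (f in names(facets)) dat_employee[[f]] <- rowMeans(dat_employee[, facets[[f]]], na.rm = TRUE)
  dat_employee$readiness_4d <- rowMeans(dat_employee[, unlist(facets)], na.rm = TRUE)

  # Internal consistency of the 12-item composite (Cronbach's alpha)
  psych::alpha(dat_employee[, unlist(facets)])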
To evaluate the measurement structure, exploratory factor analyses (EFA) were performed separately on the employee and manager samples using polychoric correlations with principal-axis factoring and oblimin rotation. Metrics indicated the data were suitable for factor analysis (Kaiser–Meyer–Olkin ≈ 0.79; Bartlett’s test of sphericity was significant). Parallel analysis suggested a four-factor solution, aligning with the four theorized 4D dimensions.
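A possible implementation of this EFA step with the psych package is sketched below, reusing the hypothetical item names from the scoring sketch above; the settings mirror those described in the text (polychoric correlations, principal-axis factoring, oblimin rotation, parallel analysis).

  # EFA sketch on the 12 readiness items (hypothetical item names as above).
  library(psych)

  items <- dat_employee[, unlist(facets)]
  KMO(items)                                       # sampling adequacy (reported as ~0.79)
  cortest.bartlett(cor(items), n = nrow(items))    # Bartlett's test of sphericity

  fa.parallel(items, fm = "pa", fa = "fa", cor = "poly")                # suggested number of factors
  fa(items, nfactors = 4, fm = "pa", rotate = "oblimin", cor = "poly")  # four-factor solution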
Confirmatory factor analysis (CFA) was then conducted using the WLSMV estimator, which is suitable for ordinal data (Flora & Curran, 2004). The hypothesized four-factor model showed an excellent fit in both the employee and manager samples. For the employee sample: CFI ≈ 1.00, TLI ≈ 1.01, RMSEA = 0.00, SRMR = 0.083. For the manager sample: CFI ≈ 1.00, TLI ≈ 1.00, RMSEA = 0.00, SRMR = 0.038. These values met conventional criteria for good model fit (CFI/TLI ≥ 0.95, RMSEA ≤ 0.06, SRMR ≤ 0.08; Hu & Bentler, 1999), with the employee-sample SRMR (0.083) only marginally above the SRMR benchmark.
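The corresponding CFA can be specified in lavaan roughly as follows; the data frame and item names remain hypothetical, and the scaled fit indices produced by WLSMV estimation are the ones compared against the Hu and Bentler (1999) benchmarks.

  # CFA sketch for the four-factor 4D model with the WLSMV estimator (lavaan).
  library(lavaan)

  model_4d <- '
    Drive    =~ drive_1 + drive_2 + drive_3
    Dare     =~ dare_1 + dare_2 + dare_3
    Decode   =~ decode_1 + decode_2 + decode_3
    Dialogue =~ dialogue_1 + dialogue_2 + dialogue_3
  '

  fit_emp <- cfa(model_4d, data = dat_employee, estimator = "WLSMV", ordered = TRUE)
  fitMeasures(fit_emp, c("cfi.scaled", "tli.scaled", "rmsea.scaled", "srmr"))
  standardizedSolution(fit_emp)   # standardized item loadings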
Standardized factor loadings were high for most items, ranging from 0.68 to 0.94 in the employee sample and from 0.80 to 0.95 in the manager sample. However, the Drive facet in the employee sample emerged as a clear weakness: its reliability was much lower than desired (CR = 0.59, AVE = 0.42), and one item had a very weak loading (λ ≈ 0.41), falling below commonly accepted thresholds (CR > 0.70 and AVE > 0.50). This finding suggests that the Drive dimension was not measured as reliably in the employee sample. As a supplementary analysis, a two-item version of the Drive subscale (omitting the low-loading item) was tested on the employee sample (see Appendix A Table A6). This alternative specification yielded substantially improved psychometric properties (composite reliability ≈ 0.81, AVE ≈ 0.68), confirming that the original scale’s weakness was largely attributable to that single problematic item. Nonetheless, the full three-item Drive scale was retained in the main analyses to preserve content coverage, and its weak psychometric performance in the employee sample warrants caution when interpreting Drive-related results. Discriminant validity was assessed using the heterotrait–monotrait ratio (HTMT; Henseler et al., 2015). In the employee sample, all HTMT values were below 0.90 (most were under 0.85); in the manager sample, some HTMT values approached 0.90, suggesting conceptual proximity among facets while remaining within acceptable boundaries for discriminant validity. Given the perceptual nature of the constructs and the single-source measurement design, a degree of conceptual overlap is expected, and additional common-method variance diagnostics were incorporated into the analytical strategy. Detailed item loadings, reliability, and validity indices are provided in Appendix A.
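Reliability and discriminant-validity indices of this kind can be obtained with semTools, as sketched below (reusing model_4d and fit_emp from the CFA sketch above); function names reflect recent semTools releases and may differ slightly across versions.

  # Reliability and discriminant validity sketch (semTools).
  library(semTools)

  semTools::reliability(fit_emp)                 # composite reliability (omega) and AVE per facet
  semTools::htmt(model_4d, data = dat_employee)  # heterotrait-monotrait ratios among the four facets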

3.3.2. Dyadic Satisfaction

Dyadic satisfaction was measured from both employee and manager perspectives using two parallel three-item scales. The employee version asked respondents to rate their satisfaction with their supervisor (e.g., “I am satisfied with my collaboration with my supervisor”), while the manager version mirrored this for satisfaction with a specific employee (e.g., “I am satisfied with my collaboration with this employee”). The scales followed best practices for brief, psychometrically robust measures (Hinkin, 1998; DeVellis & Thorpe, 2022). Both versions showed high internal consistency (employee sample: α = 0.88, ω = 0.89; manager sample: α = 0.88, ω = 0.89). A CFA confirmed a clear two-factor structure with good fit, although the employee-sample RMSEA was somewhat elevated (employee sample: CFI = 0.984, TLI = 0.970, RMSEA = 0.084, SRMR = 0.025; manager sample: CFI ≈ 1.00, TLI ≈ 1.00, RMSEA = 0.00, SRMR = 0.023). All item loadings were strong (λ = 0.71–0.93).
Because subjective readiness and satisfaction are both evaluative constructs, a high correlation between them was expected, especially in the manager-reported ratings. Accordingly, additional analyses were pre-specified to test the discriminant validity of the two constructs. In particular, the Fornell–Larcker criterion was applied by comparing the average variance extracted for each construct with the squared correlation between readiness and satisfaction. Additionally, a one-factor versus two-factor CFA model comparison was conducted for the manager-rated readiness and satisfaction data to verify that these constructs were empirically distinguishable.
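A simplified version of these pre-specified checks is sketched below for the manager data; the indicators d1–d4 (readiness facet scores) and sat_1–sat_3 (satisfaction items) are hypothetical stand-ins for the actual variables, and default maximum-likelihood estimation is used for brevity.

  # Distinctness checks for manager-rated readiness vs. satisfaction (sketch).
  library(lavaan); library(semTools)

  two_factor <- '
    Readiness    =~ d1 + d2 + d3 + d4
    Satisfaction =~ sat_1 + sat_2 + sat_3
  '
  one_factor <- ' General =~ d1 + d2 + d3 + d4 + sat_1 + sat_2 + sat_3 '

  fit2 <- cfa(two_factor, data = dat_manager)
  fit1 <- cfa(one_factor, data = dat_manager)
  lavTestLRT(fit1, fit2)                    # does a two-factor model fit better than one factor?

  # Fornell-Larcker criterion: AVE of each construct vs. their squared latent correlation
  semTools::reliability(fit2)["avevar", ]
  lavInspect(fit2, "cor.lv")^2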

3.3.3. Team Adaptability

Team adaptability was assessed using a short three-item questionnaire completed by employees, which measured responsiveness and flexibility in the face of change (Pulakos et al., 2000). The items (e.g., “Our team adapts quickly when circumstances change”) were rated on the same five-point Likert scale described above. Importantly, this scale reflects employees’ perceptions of their team’s adaptability, representing a subjective, self-reported assessment rather than an objective performance metric. The measure showed acceptable internal consistency (α = 0.82).

3.3.4. Indicators of Readiness

Finally, three traditional indicators of readiness were collected: organizational tenure and training completion as objective indicators, and self-rated performance as a subjective indicator. Organizational tenure was coded into three categories (0 for 0–4.9 years, 1 for 5–9.9 years, 2 for ≥10 years of tenure). Training completion was recorded as 0 = none, 1 = partial or in-progress, 2 = completed. Self-rated performance was measured with a single item on a 1–5 scale. Implausible values (e.g., tenure > 50 years) were removed before analysis. These indicators function as proxy measures for the traditional SLT components of competence and commitment rather than as direct assessments of readiness. They capture structural background characteristics rather than motivational or relational qualities. Accordingly, their predictive value is expected to be limited relative to the 4D dimensions, and this constraint is taken into account when evaluating their role in subsequent analyses.
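This coding scheme can be applied with a few lines of base R, as in the hypothetical sketch below (the raw column names tenure_years, training_status, and self_perf are placeholders).

  # Recoding sketch for the traditional readiness indicators (hypothetical raw columns).
  dat_employee <- subset(dat_employee, tenure_years <= 50)          # drop implausible tenure values

  dat_employee$tenure_cat <- cut(dat_employee$tenure_years,
                                 breaks = c(-Inf, 5, 10, Inf),
                                 labels = c(0, 1, 2), right = FALSE)  # 0-4.9, 5-9.9, >= 10 years
  dat_employee$tenure_cat <- as.numeric(as.character(dat_employee$tenure_cat))

  dat_employee$training <- ifelse(dat_employee$training_status == "completed", 2,
                           ifelse(dat_employee$training_status == "partial",   1, 0))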

3.4. Data Analysis

All analyses were conducted in R (version 4.3.2). For Hypotheses 1 and 2, Pearson product–moment correlations were first calculated to assess the bivariate association between the overall 4D readiness index and dyadic satisfaction (in H1, employee-rated readiness versus employee’s satisfaction with their supervisor; in H2, manager-rated readiness vs. manager’s satisfaction with their employee). Multiple linear regressions were then estimated including all four 4D dimensions simultaneously as predictors to identify each dimension’s unique contribution to satisfaction (Field, 2018). Because all measures used in H1 and H2 are subjective and evaluative, the resulting associations should be regarded as perceptual linkages rather than causal effects, and—particularly for H2—may be influenced by shared evaluative content due to single-source measurement.
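In R, the H1 analysis amounts to a bivariate correlation followed by a simultaneous regression of satisfaction on the four facets, as in the hypothetical sketch below (the outcome variable sat_supervisor and the facet scores are placeholder names); the H2 analysis is analogous in the manager data.

  # H1 sketch: bivariate association, then all four facets entered simultaneously.
  cor.test(dat_employee$readiness_4d, dat_employee$sat_supervisor)

  m_h1 <- lm(sat_supervisor ~ drive + dare + decode + dialogue, data = dat_employee)
  summary(m_h1)   # unique contribution of each facet and the model R^2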
For Hypothesis 3, a hierarchical multiple regression approach was applied. In Block 1 (Step 1), the three traditional readiness indicators (tenure, training completion, self-rated performance) were entered; in Block 2 (Step 2), the four subjective 4D readiness dimensions were added. The change in explained variance (ΔR²) from Block 1 to Block 2 was examined to determine whether the addition of the 4D model provided a significant incremental contribution. This design allowed for a direct test of the added value of the new 4D readiness framework beyond the traditional indicators. Given the cross-sectional and perceptual nature of the 4D measures, any incremental variance explained in the hierarchical models reflects associative improvements in predicting attitudinal outcomes rather than causal or performance-based explanatory power. For the manager sample, the hierarchical results should be regarded as exploratory due to the small sample size and the high conceptual proximity between readiness and satisfaction.
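A minimal R sketch of this hierarchical step is shown below, again with hypothetical variable names; the ΔR² and its F test come from comparing the nested models.

  # H3 sketch: Block 1 = traditional indicators, Block 2 = adding the four 4D facets.
  block1 <- lm(sat_supervisor ~ tenure_cat + training + self_perf, data = dat_employee)
  block2 <- update(block1, . ~ . + drive + dare + decode + dialogue)

  summary(block2)$r.squared - summary(block1)$r.squared   # Delta R^2 from adding the 4D facets
  anova(block1, block2)                                   # F test of the increment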
The statistical assumptions of the regression analyses were examined. Residuals showed slight departures from normality (Shapiro–Wilk test, p < 0.05), so each model was re-estimated using a robust regression estimator (the rlm function in the MASS package; Venables & Ripley, 2002) as a comparison. Heteroscedasticity-consistent HC3 standard errors were reported alongside the ordinary least-squares estimates to enhance robustness. Linearity was evaluated through both visual inspection and formal tests. Residuals-versus-fitted plots and component-plus-residual (partial residual) plots did not reveal obvious nonlinear patterns, and Box–Tidwell tests for continuous predictors indicated no significant deviations from linearity. Models with spline-transformed predictors were also estimated as a sensitivity check, yielding the same substantive conclusions and supporting the suitability of the linear specification.
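The robustness checks described here can be reproduced along the following lines (a sketch using the hypothetical m_h1 model from above); MASS supplies the M-estimator, and sandwich/lmtest supply the HC3 standard errors.

  # Robustness sketch: robust M-estimation and HC3 standard errors for the OLS model.
  library(MASS); library(sandwich); library(lmtest)

  m_rob <- rlm(sat_supervisor ~ drive + dare + decode + dialogue, data = dat_employee)
  summary(m_rob)                                       # robust regression comparison

  coeftest(m_h1, vcov = vcovHC(m_h1, type = "HC3"))    # OLS coefficients with HC3 SEs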
Heteroscedasticity was assessed with the Breusch–Pagan and White tests. When either test indicated heteroscedasticity (see Appendix A), HC3 standard errors were used in the analyses. Multicollinearity was examined via the full predictor correlation matrix (with significance levels and sample sizes reported in Appendix A) and Belsley’s condition index diagnostics. The maximum condition index was 13.11, and no index approached the conventional threshold of ~30 that would signal severe multicollinearity with multiple variables. Additionally, all variance inflation factor (VIF) values were below 3.5, reaffirming that multicollinearity was not a serious concern. Influential outliers were checked using Cook’s distance (with a threshold of 4/N). Removing a small number of high Cook’s distance cases did not materially change the results, indicating that no single observation unduly influenced the findings (Cook, 1977).
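Typical R functions for these diagnostics are illustrated below, applied to the hypothetical m_h1 model; the Belsley condition indices reported in the text require a dedicated routine and are omitted from this sketch.

  # Diagnostic sketch: heteroscedasticity, collinearity, and influence checks.
  library(lmtest); library(car)

  bptest(m_h1)                                          # Breusch-Pagan test
  vif(m_h1)                                             # variance inflation factors
  which(cooks.distance(m_h1) > 4 / nrow(dat_employee))  # flag influential cases
  shapiro.test(residuals(m_h1))                         # normality of residuals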
Because all key constructs in this study were measured via the same survey source, there was a risk of common method variance (CMV) influencing the results. Formal diagnostics for CMV were therefore conducted by comparing three confirmatory factor analysis models based on item parcels. The models tested were: (1) a Harman single-factor model (all items load onto one general factor), which had poor fit (CFI = 0.612, TLI = 0.552, RMSEA = 0.192, SRMR = 0.166; χ²(104) = 1103.41); (2) the baseline four-factor measurement model with no common factor, which showed moderate fit (CFI = 0.830, TLI = 0.731, RMSEA = 0.149, SRMR = 0.136; χ²(76) = 514.48); and (3) the same four-factor model with an added orthogonal common latent factor (CLF) to account for a general method effect, which exhibited a much improved fit (CFI = 0.945, TLI = 0.890, RMSEA = 0.095, SRMR = 0.039; χ²(60) = 201.75). The inclusion of the CLF markedly improved model fit (ΔCFI ≈ +0.115; ΔSRMR ≈ −0.097) and substantially altered some item loadings (maximum |Δλ| ≈ 1.045), indicating that a non-trivial amount of CMV was present. This finding suggests that certain relationships among the self-reported variables may have been inflated by CMV. The main analyses were therefore reported based on the standard measurement model (without the CLF). However, sensitivity analyses including the CLF to control for CMV indicated that the key conclusions remained the same. For instance, the incremental variance explained by the 4D readiness dimensions in predicting outcomes (Hypothesis 3) can be viewed as an upper-bound estimate due to CMV, yet the significance and pattern of effects did not materially change after accounting for the common method factor. Full details of these factor models (fit indices, factor loadings, and changes in loadings with the CLF) are provided in Appendix A.
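The structure of these CMV diagnostics can be sketched in lavaan as below; the parcel names p1–p8 are hypothetical and far fewer than in the reported models, so the code illustrates the model-comparison logic rather than reproducing the published fit statistics.

  # CMV sketch: Harman single factor vs. four factors vs. four factors + orthogonal CLF.
  library(lavaan)

  harman      <- ' G =~ p1 + p2 + p3 + p4 + p5 + p6 + p7 + p8 '
  four_factor <- '
    Drive    =~ p1 + p2
    Dare     =~ p3 + p4
    Decode   =~ p5 + p6
    Dialogue =~ p7 + p8
  '
  four_factor_clf <- paste(four_factor, '
    CLF =~ p1 + p2 + p3 + p4 + p5 + p6 + p7 + p8
    CLF ~~ 0*Drive
    CLF ~~ 0*Dare
    CLF ~~ 0*Decode
    CLF ~~ 0*Dialogue
  ')

  fits <- lapply(list(harman = harman, four = four_factor, clf = four_factor_clf),
                 cfa, data = dat_employee)
  sapply(fits, fitMeasures, fit.measures = c("chisq", "df", "cfi", "tli", "rmsea", "srmr"))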
Effect sizes (e.g., correlation coefficients, R² values) were interpreted according to Cohen’s (1988) suggested benchmarks for small, medium, and large effects. Statistical significance for all tests was evaluated using a two-tailed α = 0.05 criterion, unless otherwise specified. Given the combination of cross-sectional data, subjective measures, and evidence of common method variance, effect size patterns provide a more meaningful basis for evaluating the results than strict reliance on statistical significance.

4. Results

Consistent with Hypothesis 1, employees’ overall 4D readiness was moderately and positively correlated with satisfaction with the supervisor (r = 0.53, p < 0.001, n = 169). Multiple regression including all four readiness dimensions explained 32.9% of the variance in satisfaction (R² = 0.329; F(4, 164) = 20.09, p < 0.001). Because both readiness and satisfaction are perceptual self-reported constructs, and CMV tests indicated a non-trivial method factor, these associations should be interpreted as upper-bound estimates rather than precise effect sizes. Residuals deviated from normality (Shapiro–Wilk p < 0.001), so robust regression results are reported. In this model, Drive (β = 0.38, p < 0.01) and Dialogue (β = 0.51, p < 0.01) were significant positive correlates, while Dare and Decode were not. Given the Drive facet’s weaker measurement reliability in the employee sample, all Drive-related effects should be viewed as tentative; analyses using a shortened two-item Drive scale (excluding the weakly loading item) yielded a very similar pattern of results (see Appendix A Table A6).
Dare’s regression coefficient was negative when controlling for the other 4D facets, even though its zero-order correlation with satisfaction was near zero (slightly positive). Diagnostic checks indicated that this negative partial effect was not an artifact of multicollinearity or model misspecification: collinearity was low (maximum condition index ≈ 13, with no severe dimensions), linearity and homoscedasticity assumptions were satisfied, and the negative coefficient persisted under robust estimators (HC3, WLS) and after removing outliers. In other words, Dare showed virtually no association with satisfaction on its own, but once Drive and Dialogue were accounted for, higher Dare was associated with lower satisfaction, a pattern consistent with statistical suppression.
Consistent with Hypothesis 2, managers’ satisfaction with collaboration was strongly correlated with employees’ overall 4D readiness (r = 0.80, p < 0.001, n = 91). Follow-up validity checks indicated that this extremely high correlation reflects substantial conceptual overlap between managers’ readiness ratings and their satisfaction judgments: the shared variance (r² ≈ 0.64) exceeded the average variance extracted for readiness, and a two-factor CFA did not clearly separate readiness and satisfaction as distinct latent constructs (see Appendix A Table A12). A regression model including the four 4D facets explained a substantial 69.9% of the variance in manager satisfaction (R² = 0.699; F(4, 86) = 49.83, p < 0.001), though this estimate is likely inflated by conceptual overlap, shared response format, and the modest manager sample size. Accordingly, the findings should be interpreted as convergent rather than predictive evidence.
Robust regression again showed Drive (β = 0.20, p < 0.05), Dare (β = 0.20, p < 0.05), and especially Dialogue (β = 0.61, p < 0.01) as positive correlates. Given the evaluative similarity of the constructs, these associations reflect shared impressions rather than independent predictive effects. Diagnostic tests showed no concerning multicollinearity (all VIFs < 3.4), and excluding four influential outlier cases did not materially change the model (final R² ≈ 0.698).
To evaluate incremental validity (Hypothesis 3), hierarchical regression analyses were conducted to compare traditional readiness markers with the 4D readiness dimensions. In Step 1, the three traditional indicators—tenure, training completion, and self-rated performance—accounted for 24.4% of the variance in employee satisfaction and 15.9% of the variance in perceived team adaptability. In Step 2, adding the four subjective 4D dimensions significantly increased the explained variance to 38.8% for satisfaction (ΔR² = 0.144, ΔF(4, 157) = 9.25, p < 0.001) and to 26.6% for adaptability (ΔR² = 0.107, ΔF(4, 157) = 5.67, p < 0.001). Given the presence of a common method factor in the measurement model, these incremental R² values should be viewed as upper-bound estimates rather than precise effect sizes. In the final combined model (using robust estimation), Drive (β = 0.29, p < 0.01), Dialogue (β = 0.43, p < 0.01), and Dare (β = −0.29, p < 0.05) emerged as significant unique correlates of employee satisfaction (with self-rated performance also remaining significant). Supplementary analyses using the shortened two-item Drive subscale yielded a comparable pattern, but the Drive results should still be interpreted with caution in light of this facet’s weaker psychometric performance for employees. For perceived team adaptability, Decode (β = 0.26, p < 0.05) and Dialogue (β = 0.24, p < 0.05) emerged as significant positive correlates. No concerning multicollinearity was evident (all VIFs < 2.7), and all other model diagnostics fell within acceptable limits.
Table 2, Table 3, Table 4, Table 5 and Table 6 report the full regression coefficients and diagnostic statistics for these analyses.

5. Discussion

Overall, the findings provide support for our hypotheses, albeit with some nuanced patterns across the 4D facets. Hypothesis 1, which proposed that followers’ subjective readiness would predict their satisfaction with superiors, was confirmed: facets reflecting intrinsic motivation and two-way communication (Drive and Dialogue) showed significant positive links with employees’ satisfaction. The overall 4D model explained roughly one-third of the variance in satisfaction, indicating a moderate effect size. However, because the data were cross-sectional self-reports and the Drive facet had weaker measurement reliability, these results should be considered upper-bound estimates rather than strong evidence of prediction. In essence, employees were most satisfied when they felt autonomously motivated and had clear, supportive communication with their leader—a pattern that aligns with self-determination theory’s emphasis on fulfilling autonomy and relatedness needs (Deci & Ryan, 2000).
By contrast, Dare (D2) and Decode (D3) did not make significant unique contributions to employee satisfaction. In routine day-to-day work, employees may prioritize feeling energized and understood over autonomy or complex sensemaking, valuing the “will” (Drive) and the “way” (Dialogue) more than independent decision confidence or contextual interpretation. This interpretation is consistent with work indicating that supportive and clarifying leadership behaviors are more effective than autonomy-focused approaches for sustaining engagement in routine tasks (Ferrari, 2024). In this cross-sectional context, all four readiness facets remain conceptually important, but Drive and Dialogue were the primary correlates of satisfaction, whereas Dare and Decode were comparatively less influential.
Hypothesis 2, which examined managers’ satisfaction with their employees, was supported (though, as noted, the result is based on a small sample and should be viewed as exploratory). Managers’ 4D readiness ratings were extremely strongly associated with their satisfaction with employees (around 70% shared variance), but this is likely inflated by conceptual overlap since the same individuals provided both ratings. Thus, the result is better viewed as convergent evidence of aligned perceptions than as an independent test of predictive power. Even so, Dialogue stood out as the strongest facet influencing managers’ satisfaction, with Drive and Dare also showing positive (if lesser) effects, while Decode had no notable unique impact. The prominence of open dialogue aligns with transformational leadership theories that stress high-quality communication in driving positive leader–member evaluations (Bass, 1990; Thompson & Glasø, 2018; Palupi, 2020).
Hypothesis 3 was supported: adding the four subjective 4D facets to traditional readiness metrics yielded a modest but significant boost in explained variance for both employee satisfaction and team adaptability (on the order of a 10–15% increase). This suggests that perceptual readiness factors capture motivational and relational qualities beyond what structural indicators alone reflect, consistent with prior evidence that attitudinal measures provide added value in explaining outcomes (Huselid, 1995; Lynch et al., 2011; Bonini et al., 2024). In the full models, Drive and Dialogue remained significant positive predictors of employees’ satisfaction, while Decode (with some contribution from Dialogue) was the strongest predictor of perceived team adaptability. Dare’s role was more complex: it showed no benefit for employees’ satisfaction (even a slight suppression effect when other facets were controlled) despite relating positively to managers’ satisfaction. Because this minor negative effect for employee satisfaction appeared only under statistical controls and not in simple correlations, we interpret it as a tentative artifact rather than a true negative influence.
Overall, the pattern indicates that in situations requiring adaptability, employees value shared sensemaking and open communication (Decode and Dialogue), whereas under routine conditions their satisfaction depends more on intrinsic motivation and supportive communication (Drive and Dialogue). This aligns with theoretical perspectives that different leadership behaviors matter for different outcomes—for example, situational leadership frameworks highlighting task-versus-relationship focus depending on context (Wang et al., 2024) and transformational leadership’s call for intellectual stimulation to foster adaptability versus relational support to sustain satisfaction (Bass, 1990).
If the negative association between Dare and employee satisfaction were to generalize, one plausible explanation is that bold or risk-taking behavior without corresponding relational support may be perceived as overconfident or disruptive, reducing others’ satisfaction with the relationship. From a self-determination perspective, high-Dare behavior lacking autonomy-supportive cues can appear controlling rather than empowering; without the relational warmth conveyed through Dialogue, “daring” initiatives may undermine connectedness and psychological safety (Deci & Ryan, 2000; A. Edmondson, 1999). Research on assertiveness similarly suggests an inverted-U pattern, with moderate risk-taking viewed positively but excessive or unmodulated assertiveness perceived negatively (Ames & Flynn, 2007). Cultural context may further modulate these reactions: in high power-distance or high uncertainty-avoidance settings, unrestrained risk-taking may be interpreted as challenging hierarchy or creating instability, thereby eliciting stronger negative responses to unmitigated Dare (Rockstuhl et al., 2011; Taras et al., 2010). Overall, these considerations suggest that Dare’s impact is likely context-dependent and, given the present data, should be interpreted cautiously rather than as a robust negative effect.
Across the three outcomes, the 4D model explained a moderate proportion of variance (R² values of roughly 0.16–0.39), which is typical for organizational behavior research where many factors jointly shape results. In theory-driven work, the pattern and consistency of individual effects (such as the recurring contributions of Drive and Dialogue) are often more informative than the absolute R² values themselves (Abelson, 1985; Shmueli, 2010). Nevertheless, the evidence is cross-sectional and based entirely on self-reports, so the associations observed here should be viewed as suggestive rather than causal and may be inflated by common method variance.
Although the sample spanned multiple industries, it did not explicitly account for cross-cultural differences, so any cultural interpretations remain speculative. Cross-cultural frameworks suggest that societal values could shift the relative importance of the facets (Hofstede, 2001). In more collectivist environments, Dialogue (D4) may exert an even stronger influence on satisfaction and adaptability because of the emphasis on harmony and group cohesion, whereas in more individualistic contexts Dare (D2) may be evaluated more positively, in line with norms favoring autonomy and initiative (Fernandez & Vecchio, 1997). The sensemaking captured by Decode (D3) may also differ, with collectivist teams leaning toward collective interpretation and consensus and individualistic teams relying more on individual cognitive processing (Yeo et al., 2025). In the present data, Dialogue remained the most consistent predictor across subgroups, hinting that high-quality communication may be broadly important, while Decode was more strongly tied to perceived adaptability than to dyadic satisfaction, potentially reflecting sample heterogeneity or unmeasured contextual factors.
Finally, our findings on team adaptability underscore that effective leadership requires not only matching one’s style to follower readiness levels, as emphasized in traditional SLT, but also flexibly shifting between different modes as situations evolve. The fact that both Decode and Dialogue predicted adaptability implies that leaders may need to combine “exploratory” behaviors (encouraging new interpretations and sensemaking) with “exploitative” behaviors (providing stable guidance and clear, supportive communication) to help teams respond to change. This view is consistent with accounts of adaptive and ambidextrous leadership, which emphasize flexible, context-responsive behavior rather than rigid adherence to a single style (Zacher & Rosing, 2015). Overall, the 4D framework appears compatible with established leadership theories and offers a concise way of articulating motivational, cognitive, and relational aspects of readiness, while recognizing that the present evidence is correlational and based on self-reported perceptions rather than objective performance.

6. Conclusions

This study suggests that subjective readiness, as measured through the 4D model (Drive, Dare, Decode, Dialogue), is associated with several key organizational outcomes. In particular, higher 4D readiness was associated with greater employee satisfaction with their supervisors (H1) and higher manager satisfaction with their employees (H2). Moreover, the 4D dimensions explained additional variance in both employee satisfaction and team adaptability beyond that explained by objective readiness indicators (H3). Because all variables were assessed via self-report and CMV diagnostics indicated non-trivial method effects, the associations observed here should be interpreted as indicative rather than as strong predictive or causal evidence. Taken together, the findings suggest that the 4D model offers a useful multidimensional lens for understanding perceived relational and team dynamics, while still requiring refinement—particularly for the Drive facet in the employee sample and for clearer differentiation between readiness and satisfaction in manager ratings.

6.1. Theoretical Implications

At a theoretical level, this study underscores the critical importance of intrinsic motivation and open communication in effective leader–follower relationships. By incorporating these elements into the definition of “readiness,” the 4D model extends Situational Leadership Theory beyond its traditional emphasis on follower competence and commitment to include motivational and relational dimensions that are often overlooked. Defining Drive and Dialogue as distinct facets of readiness is a key conceptual contribution—it addresses a gap in prior SLT research, where such psychological processes were treated as peripheral rather than central. That said, given the weaker reliability of the Drive measure in our employee sample, any conclusions about Drive’s role should be viewed as tentative until this facet is refined and validated in future research.
Moreover, the strong link observed between managers’ readiness ratings and their own satisfaction aligns with transformational leadership’s emphasis on open, empowering communication with subordinates—even if part of this link was likely inflated by the common source of the ratings. Likewise, our finding that 4D subjective readiness accounted for more variance in satisfaction and adaptability than objective credentials highlights the limitation of relying only on tenure or formal qualifications; it demonstrates the added value of including employees’ attitudinal indicators in readiness models. Together, these insights argue for a holistic approach to evaluating readiness: subjective readiness should complement (not replace) objective measures when assessing employee capabilities. However, since our evidence comes from same-source perceptions and lacks behavioural criteria, the influence of subjective readiness should still be interpreted with appropriate caution.

6.2. Managerial Implications

An important practical implication of our findings is the need for leaders to foster open two-way communication (Dialogue) with their followers, as this facet showed the strongest positive association with both employee and manager satisfaction and was also linked to better perceived team adaptability. Leaders who emphasize transparent communication, frequent feedback, and trust-building interactions may therefore see improvements in these outcomes. In practice, organizations can cultivate such dialogue through measures like regular debriefings, brief team huddles, 360-degree feedback systems, and open-door policies that normalize continuous leader–employee communication.
Organizations may also benefit from integrating the 4D model into training, development, and performance-management systems. By assessing and developing each readiness facet (Drive, Dare, Decode, Dialogue), managers can more holistically gauge and strengthen employees’ motivation, confidence, adaptability, and communication skills. Interventions might, for example, use scenario-based role plays to build Decode (sensemaking under change) and communication workshops to enhance Dialogue when customer or stakeholder interaction is central.
Cultivating employees’ intrinsic motivation (Drive) is generally beneficial for their satisfaction and retention, but our findings on this facet were not especially robust. Organizations should still promote intrinsic motivation by providing meaningful development opportunities, granting employees more autonomy in how they do their work, and designing roles that allow skill growth—all while keeping in mind that our evidence for Drive’s impact is preliminary.
Finally, our results indicate that conventional readiness metrics (like tenure or credentials) have limited value by themselves, whereas adding a structured subjective assessment (the 4D model) provides a more holistic view of readiness by capturing employees’ perceived motivation, confidence, sensemaking, and communication. Organizations should use the 4D framework to complement—not replace—those traditional indicators, so that both objective and attitudinal perspectives inform leadership decisions. Moreover, because our outcomes were all self-reported, any use of the 4D assessment in practice should be paired with monitoring of actual performance indicators to ensure that improvements in perceived readiness correspond to real, tangible results.

6.3. Limitations and Future Research

Although this study provides initial evidence for the 4D model’s explanatory value in a situational leadership context, several limitations warrant consideration. The sample size was relatively small (169 employees; 91 managers), which restricted generalizability, particularly for the manager subgroup. Accordingly, the manager results—especially the high variance explained in H2—should be regarded as exploratory. Replication with larger and more diverse manager samples is needed to determine whether these patterns are robust.
The Drive facet showed low reliability in the employee sample, indicating a measurement weakness for this dimension. Future research should refine or revalidate the Drive items for employee populations before drawing strong theoretical or practical conclusions about this facet. It would also be valuable to include more objective indicators (e.g., behavioural performance metrics, production indicators, professional certifications) alongside subjective readiness measures and attitudinal outcomes, especially given that team adaptability in this study was assessed via employees’ perceptions rather than objective performance.
Another limitation is that the sample, although spanning multiple sectors, was not stratified by cultural dimensions. Because the 4D framework draws on theoretical traditions developed primarily in Western, individualistic contexts, its applicability in collectivist or high power-distance settings remains uncertain. Future work should examine whether cultural values shape how readiness facets are interpreted and how strongly they relate to satisfaction and perceived adaptability to test the model’s generalizability across cultural contexts. In addition, the cross-sectional design means that changes in 4D readiness or its relationships with outcomes over time could not be observed; longitudinal designs are needed to examine temporal dynamics and potential causal ordering.
All measures in this study relied on the same method (self-reported surveys), raising the risk of common method variance (CMV) inflating associations. Post hoc analysis using a common latent factor indicated that CMV was present to a non-trivial degree, so the observed effects should be interpreted as indicative rather than strongly causal or predictive. To strengthen validity, future studies should adopt designs that reduce same-source bias, for example, by collecting predictor and outcome data from different informants (e.g., supervisors rating outcomes when employees rate their own readiness) and by incorporating multiple measurement waves. Additional procedural remedies—such as randomizing item order (including reverse-worded items) and including marker variables to detect response-style bias—would further help to diagnose and mitigate method effects.
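As a rough illustration of the kind of screening discussed above, the sketch below shows a Harman-type single-factor check in Python. It is not the common-latent-factor analysis reported in Appendix A (which requires a full SEM); the file name and column layout are hypothetical and are used only to show the logic.

```python
# Minimal sketch of a Harman-type single-factor screen for common method
# variance (CMV). This is NOT the common-latent-factor model in Table A11,
# which requires a full SEM; it is the simpler screen often run first.
# The file name is hypothetical (one column per Likert item).
import pandas as pd
from sklearn.decomposition import PCA

items = pd.read_csv("survey_items.csv")          # hypothetical item-level data
X = items.dropna().to_numpy(dtype=float)

pca = PCA(n_components=1)                        # unrotated first component
pca.fit(X)
first_component_share = pca.explained_variance_ratio_[0]

# A first component absorbing the majority of variance (e.g., > .50) is a
# warning sign that a single method factor may be driving responses.
print(f"Variance explained by first component: {first_component_share:.3f}")
```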

Author Contributions

Conceptualization, D.G., N.D. and M.F.; methodology, D.G., N.D. and M.F.; software, D.G., N.D. and M.F.; validation, D.G., N.D. and M.F.; formal analysis, D.G., N.D. and M.F.; investigation, D.G., N.D. and M.F.; resources, D.G.; data curation, D.G., N.D. and M.F.; writing—original draft preparation, D.G., N.D. and M.F.; writing—review and editing, D.G., N.D. and M.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Algebra Bernays University Ethical Committee (Approval Code: Class 004-01/25-01/01 no. 785-03-01-25-01; Approval Date: 2 April 2025).

Informed Consent Statement

All participants provided electronic informed consent via Prolific prior to commencing the survey. Prolific’s platform ensures that participants are fully informed about study objectives, procedures, data usage, storage duration, confidentiality protection, and the legal basis for processing—requirements aligned with GDPR—before they agree to participate. Participation was entirely voluntary; respondents could withdraw at any time by ending the survey, and any submitted data from withdrawing participants was deleted without penalty.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to privacy.

Acknowledgments

During the preparation of this manuscript, the authors used ChatGPT 4.5 and 5 pro, Gemini 2.5 Pro, and Grammarly for the purposes of language editing and sentence rephrasing. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SLT: Situational Leadership Theory
4D: Drive, Dare, Decode, and Dialogue (the four dimensions of subjective readiness)
CEO: Chief Executive Officer
LMX: Leader-Member Exchange
OCB: Organizational Citizenship Behavior
SDT: Self-Determination Theory
GDPR: General Data Protection Regulation

Appendix A

Table A1. CVI results employees and managers.
Group | Dimension | S-CVI/Ave | S-CVI/UA | k (items)
Employees | Dare | 0.925 | 0.5 | 4
Employees | Decode | 0.9 | 0.5 | 4
Employees | Dialogue | 1 | 1 | 3
Employees | Drive | 1 | 1 | 1
Employees | Overall | 0.942 | 0.667 | 12
Managers | Dare | 0.867 | 0.333 | 3
Managers | Decode | 0.867 | 0 | 3
Managers | Dialogue | 1 | 1 | 3
Managers | Drive | 0.967 | 0.667 | 3
Managers | Overall | 0.925 | 0.5 | 12
Note. I-CVI = proportion of raters who rated 3–4 (“mostly/very relevant”) for an item; S-CVI/Ave = average I-CVI across items; S-CVI/UA = proportion of items with universal agreement (all ratings 3–4). For panels of 6–8 experts, the item acceptance threshold is approximately I-CVI ≥ 0.78 (otherwise ≈ 0.80).
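For readers who wish to reproduce these indices, the short sketch below computes I-CVI, S-CVI/Ave, and S-CVI/UA from a matrix of expert relevance ratings. The ratings shown are hypothetical and do not reproduce Table A1; only the computational logic follows the definitions in the note above.

```python
# Illustrative computation of content validity indices from expert relevance
# ratings on a 1-4 scale (rows = items, columns = experts).
# The ratings below are hypothetical.
import numpy as np

ratings = np.array([
    [4, 4, 3, 4, 4, 3],   # item 1
    [3, 4, 4, 4, 3, 4],   # item 2
    [4, 2, 4, 3, 4, 4],   # item 3
])

relevant = ratings >= 3                      # rated 3-4 = "mostly/very relevant"
i_cvi = relevant.mean(axis=1)                # I-CVI: proportion of relevant ratings per item
s_cvi_ave = i_cvi.mean()                     # S-CVI/Ave: mean I-CVI across items
s_cvi_ua = (i_cvi == 1.0).mean()             # S-CVI/UA: share of items with universal agreement

print("I-CVI per item:", np.round(i_cvi, 3))
print(f"S-CVI/Ave = {s_cvi_ave:.3f}, S-CVI/UA = {s_cvi_ua:.3f}")
```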
Table A2. Model fit (CFA).
Group | CFI | TLI | RMSEA | SRMR
Employees | 1.000 | 1.007 | 0.000 | 0.083
Managers | 1.000 | 1.003 | 0.000 | 0.038
Note. CFA fit indicators: CFI = Comparative Fit Index; TLI = Tucker–Lewis Index; RMSEA = Root Mean Square Error of Approximation; SRMR = Standardized Root Mean Square Residual. Recommended guidelines (Hu & Bentler, 1999): CFI/TLI ≥ 0.95, RMSEA ≤ 0.06, SRMR ≤ 0.08.
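For reference, the standard (non-scaled) definitions of these indices are given below; the robust/scaled versions reported for WLSMV apply corrections to the χ² values, and some programs use N rather than N − 1 in the RMSEA denominator. Because the TLI is not bounded above by 1, values slightly above 1.000 (as in Table A2) can occur when the model χ² falls below its degrees of freedom.

```latex
% Standard definitions, with \chi^2_M, df_M for the target model and
% \chi^2_B, df_B for the baseline (independence) model:
\[
\mathrm{CFI} = 1 - \frac{\max(\chi^2_M - df_M,\, 0)}{\max(\chi^2_B - df_B,\ \chi^2_M - df_M,\, 0)},
\qquad
\mathrm{TLI} = \frac{\chi^2_B/df_B - \chi^2_M/df_M}{\chi^2_B/df_B - 1},
\]
\[
\mathrm{RMSEA} = \sqrt{\frac{\max(\chi^2_M - df_M,\, 0)}{df_M\,(N-1)}}.
\]
```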
Table A3. Standardized loadings (λ), employees.
Item | F1 (Drive) | F2 (Dare) | F3 (Decode) | F4 (Dialogue)
EMP_D1_1 | 0.684 | 0.000 | 0.000 | 0.000
EMP_D1_2 | 0.405 | 0.000 | 0.000 | 0.000
EMP_D1_3 | 0.798 | 0.000 | 0.000 | 0.000
EMP_D2_1 | 0.000 | 0.822 | 0.000 | 0.000
EMP_D2_2 | 0.000 | 0.807 | 0.000 | 0.000
EMP_D2_3 | 0.000 | 0.723 | 0.000 | 0.000
EMP_D3_1 | 0.000 | 0.000 | 0.872 | 0.000
EMP_D3_2 | 0.000 | 0.000 | 0.681 | 0.000
EMP_D3_3 | 0.000 | 0.000 | 0.707 | 0.000
EMP_D4_1 | 0.000 | 0.000 | 0.000 | 0.904
EMP_D4_2 | 0.000 | 0.000 | 0.000 | 0.715
EMP_D4_3 | 0.000 | 0.000 | 0.000 | 0.940
Notes. Standardized factor loadings (λ) from the four-factor CFA model (F1 = Drive, F2 = Dare, F3 = Decode, F4 = Dialogue). Estimator: WLSMV; indicators treated as ordered; factors freely correlated. Cross-loadings are fixed at 0 according to the CFA specification, so they are shown as 0.000. Sample sizes: N_employees = 169. Fit indices in Table A2.
Table A4. Standardized loadings (λ), managers.
Item | F1 (Drive) | F2 (Dare) | F3 (Decode) | F4 (Dialogue)
MAN_D1_1 | 0.935 | 0.000 | 0.000 | 0.000
MAN_D1_2 | 0.798 | 0.000 | 0.000 | 0.000
MAN_D1_3 | 0.928 | 0.000 | 0.000 | 0.000
MAN_D2_1 | 0.000 | 0.831 | 0.000 | 0.000
MAN_D2_2 | 0.000 | 0.952 | 0.000 | 0.000
MAN_D2_3 | 0.000 | 0.848 | 0.000 | 0.000
MAN_D3_1 | 0.000 | 0.000 | 0.890 | 0.000
MAN_D3_2 | 0.000 | 0.000 | 0.838 | 0.000
MAN_D3_3 | 0.000 | 0.000 | 0.830 | 0.000
MAN_D4_1 | 0.000 | 0.000 | 0.000 | 0.943
MAN_D4_2 | 0.000 | 0.000 | 0.000 | 0.893
MAN_D4_3 | 0.000 | 0.000 | 0.000 | 0.935
Notes. Standardized factor loadings (λ) from the four-factor CFA model (F1 = Drive, F2 = Dare, F3 = Decode, F4 = Dialogue). Estimator: WLSMV; indicators treated as ordered; factors freely correlated. Cross-loadings are fixed at 0 according to the CFA specification, so they are shown as 0.000. Sample sizes: N_managers = 91. Fit indices in Table A2.
Table A5. Composite reliability (CR) and AVE, employees and managers.
Factor | CR | AVE
Drive (F1)—employee | 0.593 | 0.423
Dare (F2)—employee | 0.782 | 0.617
Decode (F3)—employee | 0.712 | 0.575
Dialogue (F4)—employee | 0.819 | 0.738
Drive (F1)—manager | 0.884 | 0.791
Dare (F2)—manager | 0.863 | 0.772
Decode (F3)—manager | 0.852 | 0.728
Dialogue (F4)—manager | 0.901 | 0.854
Note. CR = composite reliability (Raykov ρ); AVE = average variance extracted. Values are calculated from standardized loadings and residual variances of the four-factor CFA model.
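As a generic illustration of how CR and AVE follow from standardized loadings, the sketch below uses hypothetical loadings and assumes residual variance equal to 1 − λ²; the values in Table A5 were instead computed from the residual variances of the fitted WLSMV solution, so they will not be exactly reproduced by this simplification.

```python
# Minimal sketch of composite reliability (CR) and average variance extracted
# (AVE) from standardized loadings, assuming residual variance = 1 - lambda^2.
# The loadings below are hypothetical illustrations.
import numpy as np

def cr_ave(loadings):
    lam = np.asarray(loadings, dtype=float)
    resid = 1.0 - lam**2                                   # assumed residual variances
    cr = lam.sum()**2 / (lam.sum()**2 + resid.sum())       # Raykov-type composite reliability
    ave = np.mean(lam**2)                                  # average variance extracted
    return cr, ave

cr, ave = cr_ave([0.82, 0.78, 0.74])                       # hypothetical three-item facet
print(f"CR = {cr:.3f}, AVE = {ave:.3f}")
```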
Table A6. Comparison of Employee Drive Measurement Models (Three-Item vs. Two-Item Specification).
Item/Statistic | Original 3-Item Drive (Employee Sample) | Reduced 2-Item Drive (Employee Sample)
“I am highly motivated to perform my job tasks.” | 0.90 | 0.85
“I do not need external motivation to start working.” | 0.38 | -
“I feel excited when given new tasks or challenges.” | 0.76 | 0.80
Composite Reliability (CR) | 0.59 | 0.81
Average Variance Extracted (AVE) | 0.42 | 0.68
Note: “-” indicates that the item was excluded in the two-item model. The original three-item model’s CR and AVE fell below recommended thresholds, whereas the two-item model meets or exceeds them.
Table A7. Correlations between factors (ψ), employees.
Factor | F1 | F2 | F3 | F4
F1 | 1.000 | 0.839 | 0.664 | 0.743
F2 | 0.839 | 1.000 | 0.873 | 0.685
F3 | 0.664 | 0.873 | 1.000 | 0.728
F4 | 0.743 | 0.685 | 0.728 | 1.000
Notes. Correlations among latent factors (ψ) from the CFA model.
Table A8. Correlations between factors (ψ), managers.
Factor | F1 | F2 | F3 | F4
F1 | 1.000 | 0.875 | 0.901 | 0.823
F2 | 0.875 | 1.000 | 0.897 | 0.862
F3 | 0.901 | 0.897 | 1.000 | 0.771
F4 | 0.823 | 0.862 | 0.771 | 1.000
Notes. Correlations among latent factors (ψ) from the CFA model.
Table A9. HTMT results, employees.
Factor | F1 | F2 | F3 | F4
F1 | - | 0.855 | 0.678 | 0.759
F2 | 0.855 | - | 0.876 | 0.687
F3 | 0.678 | 0.876 | - | 0.733
F4 | 0.759 | 0.687 | 0.733 | -
Note. HTMT is the ratio of average heterotrait-monotrait correlations; obtained from a standardized measurement model. Usual thresholds: HTMT < 0.85 (more stringent) or < 0.90 (more liberal) to confirm discriminant validity.
Table A10. HTMT results, managers.
Factor | F1 | F2 | F3 | F4
F1 | - | 0.876 | 0.902 | 0.824
F2 | 0.876 | - | 0.898 | 0.863
F3 | 0.902 | 0.898 | - | 0.772
F4 | 0.824 | 0.863 | 0.772 | -
Note. HTMT is the ratio of average heterotrait-monotrait correlations; obtained from a standardized measurement model. Usual thresholds: HTMT < 0.85 (more stringent) or < 0.90 (more liberal) to confirm discriminant validity.
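The sketch below illustrates the HTMT computation (Henseler et al., 2015) for two constructs from an item-level correlation matrix. The correlation matrix and the item-to-construct assignment are hypothetical; they do not correspond to the 4D items.

```python
# Minimal sketch of the HTMT ratio for two constructs, computed from an
# item-level correlation matrix. Matrix and item assignment are hypothetical.
import numpy as np

def htmt(corr, idx_a, idx_b):
    corr = np.abs(np.asarray(corr, dtype=float))
    hetero = corr[np.ix_(idx_a, idx_b)].mean()                          # heterotrait-heteromethod mean
    mono_a = corr[np.ix_(idx_a, idx_a)][np.triu_indices(len(idx_a), k=1)].mean()
    mono_b = corr[np.ix_(idx_b, idx_b)][np.triu_indices(len(idx_b), k=1)].mean()
    return hetero / np.sqrt(mono_a * mono_b)                            # geometric mean of monotrait means

# Hypothetical 6-item correlation matrix: items 0-2 = construct A, 3-5 = construct B.
R = np.array([
    [1.00, 0.62, 0.58, 0.41, 0.38, 0.44],
    [0.62, 1.00, 0.55, 0.37, 0.35, 0.40],
    [0.58, 0.55, 1.00, 0.39, 0.36, 0.42],
    [0.41, 0.37, 0.39, 1.00, 0.66, 0.61],
    [0.38, 0.35, 0.36, 0.66, 1.00, 0.59],
    [0.44, 0.40, 0.42, 0.61, 0.59, 1.00],
])
print(f"HTMT(A, B) = {htmt(R, [0, 1, 2], [3, 4, 5]):.3f}")
```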
Table A11. CMV diagnostics.
Model | CFI | TLI | RMSEA | SRMR | χ² | df
Harman_1F | 0.612 | 0.552 | 0.192 | 0.166 | 1103.405 | 104
Measurement | 0.83 | 0.731 | 0.149 | 0.136 | 514.478 | 76
With_CLF | 0.945 | 0.89 | 0.095 | 0.039 | 201.746 | 60
Table A12. Discriminant Validity Between Manager-Rated Readiness and Manager Satisfaction.
Statistic | Manager-Rated Readiness | Manager Satisfaction | Interpretation
AVE | 0.604 | 0.708 (from CFA satisfaction table) | AVE < r² indicates insufficient discriminant validity
Correlation (r) | - | 0.846 | Very high shared variance
Shared variance (r²) | - | 0.716 | Higher than AVE → constructs not empirically separable
CFA Model 1: One-factor model | CFI = 0.612, TLI = 0.552, RMSEA = 0.192, SRMR = 0.166 | - | Poor fit
CFA Model 2: Two-factor model | CFI = 0.830, TLI = 0.731, RMSEA = 0.149, SRMR = 0.136 | - | Better but not strongly separating constructs
Δχ² (Two-factor vs. One-factor) | - | Δχ²(1) = 12.47, p < 0.001 | Statistically better, but substantively limited improvement
Latent correlation (φ) | - | ~0.85 (near unity) | Indicates the two constructs reflect a largely shared evaluative factor
Note. AVE = Average Variance Extracted. CFA = Confirmatory Factor Analysis. Values are from manager sample only (n = 91). Together, these diagnostics show that readiness and manager satisfaction are not discriminant constructs in this dataset and reflect a common evaluative dimension.
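For reference, the p-value for the nested-model comparison above follows from the χ² difference and its degrees of freedom. With WLSMV estimation a scaled difference test computed within the SEM software is preferable; the sketch below only illustrates the reference distribution using the values reported in Table A12.

```python
# Illustrative p-value for a chi-square difference test between nested CFA
# models (two-factor vs. one-factor). With WLSMV, a scaled/adjusted difference
# test from the SEM software should be used; this only shows the naive test.
from scipy.stats import chi2

delta_chisq, delta_df = 12.47, 1           # values reported in Table A12
p_value = chi2.sf(delta_chisq, delta_df)   # upper-tail probability
print(f"Delta-chi2({delta_df}) = {delta_chisq}, p = {p_value:.4f}")
```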
Table A13. Reliability and convergent validity (short satisfaction scales).
Scale | k | α | ω | CR | AVE | n
Employees (EMP) | 3 | 0.88 | 0.89 | 0.934 | 0.702 | 169
Managers (MAN) | 3 | 0.88 | 0.89 | 0.935 | 0.708 | 91
Note. k = number of items in the scale; α = Cronbach’s alpha; ω = McDonald’s omega; CR = composite reliability (Raykov’s ρ); AVE = average variance extracted; n = sample size. α/ω are taken from the reliability summary.csv report; CR/AVE are calculated based on standardized CFA loadings and residual variances (lavaan, std.all). Thresholds: CR ≥ 0.70 and AVE ≥ 0.50.
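The sketch below shows how Cronbach's alpha can be computed from raw item scores; omega follows the same composite formula illustrated after Table A5. The data frame here is simulated and purely illustrative, not the study data.

```python
# Minimal sketch of Cronbach's alpha for a k-item scale from raw item scores
# (rows = respondents, columns = items). The data below are simulated.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    items = items.dropna()
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()        # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)          # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=200)                          # common satisfaction factor
noise = rng.normal(scale=0.6, size=(200, 3))
demo = pd.DataFrame(latent[:, None] + noise, columns=["SAT1", "SAT2", "SAT3"])
print(f"alpha = {cronbach_alpha(demo):.2f}")
```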
Table A14. CFA of the two-factor satisfaction model (WLSMV; robustly corrected values in parentheses).
Group | CFI | TLI | RMSEA | SRMR
Employees | 0.984 (0.993) | 0.970 (0.986) | 0.084 (0.042) | 0.025
Managers | 1.000 | 1.000 | 0.000 | 0.023
Note. Two-factor CFA model of satisfaction estimated with WLSMV estimator for ordinal indicators; factors are freely correlated. Robustly corrected values (scaled indices; e.g., SB/mean-variance corrections) are shown in parentheses. Guidelines for good fit (Hu & Bentler, 1999): CFI/TLI ≥ 0.95, RMSEA ≤ 0.06, SRMR ≤ 0.08.
Table A15. Standardized loadings (λ), employees and managers.
Factor | Item | λ (std.)
Satisfaction (EMP) | EMP_SAT1 | 0.855
Satisfaction (EMP) | EMP_SAT2 | 0.756
Satisfaction (EMP) | EMP_SAT3 | 0.932
Satisfaction (MAN) | MAN_SAT1 | 0.899
Satisfaction (MAN) | MAN_SAT2 | 0.835
Satisfaction (MAN) | MAN_SAT3 | 0.835
Notes. Standardized factor loadings (λ) from the two-factor CFA model of satisfaction for employees (EMP) and managers (MAN). Estimator: WLSMV; indicators treated as ordered; factors freely correlated.

References

  1. Abelson, R. P. (1985). A variance explanation paradox: When a little is a lot. Psychological Bulletin, 97(1), 129–133. [Google Scholar] [CrossRef]
  2. Akdol, B., & Arikboga, F. S. (2017). Leader member exchange as a mediator of the relationship between servant leadership and job satisfaction: A research on Turkish ICT companies. International Journal of Organizational Leadership, 6(4), 525. [Google Scholar] [CrossRef]
  3. Ames, D. R., & Flynn, F. J. (2007). What breaks a leader: The curvilinear relation between assertiveness and leadership. Journal of Personality and Social Psychology, 92(2), 307–324. [Google Scholar] [CrossRef]
  4. Avery, G., & Ryan, J. (2002). Applying situational leadership in Australia. Journal of Management Development, 21, 242–262. [Google Scholar] [CrossRef]
  5. Avolio, B. J., & Hannah, S. T. (2008). Developmental readiness: Accelerating leader development. Consulting Psychology Journal: Practice and Research, 60(4), 331–347. [Google Scholar] [CrossRef]
  6. Bandura, A. (1997). Self-efficacy: The exercise of control. W. H. Freeman; Times Books; Henry Holt & Co. [Google Scholar]
  7. Bartsch, S., Weber, E., Büttgen, M., & Huber, A. (2021). Leadership matters in crisis-induced digital transformation: How to lead service employees effectively during the COVID-19 pandemic. Journal of Service Management, 32(1), 71–85. [Google Scholar] [CrossRef]
  8. Bass, B. M. (1990). From transactional to transformational leadership: Learning to share the vision. Organizational Dynamics, 18(3), 19–31. [Google Scholar] [CrossRef]
  9. Bass, B. M., & Avolio, B. J. (Eds.). (1994). Improving organizational effectiveness through transformational leadership. Sage. [Google Scholar]
  10. Biscotti, A. M., Mafrolla, E., Giudice, M. D., & D’Amico, E. (2018). CEO turnover and the new leader propensity to open innovation: Agency-resource dependence view and social identity perspective. Management Decision, 56(6), 1348–1364. [Google Scholar] [CrossRef]
  11. Bonini, A., Panari, C., Caricati, L., & Mariani, M. G. (2024). The relationship between leadership and adaptive performance: A systematic review and meta-analysis. PLoS ONE, 19(10), e0304720. [Google Scholar] [CrossRef]
  12. Cairns, T., Hollenback, J., Preziosi, R., & Snow, W. (1998). Technical note: A study of Hersey and Blanchard’s situational leadership theory. Leadership & Organization Development Journal, 19, 113–116. [Google Scholar] [CrossRef]
  13. Carsten, M. K., & Uhl-Bien, M. (2012). Follower beliefs in the co-production of leadership. Zeitschrift für Psychologie, 220(4), 210–220. [Google Scholar] [CrossRef]
  14. Cho, Y., Jeong, S.-H., Kim, H.-S., & Kim, Y.-M. (2022). Effects of leadership styles of nursing managers on turnover intention of hospital nurses: A systematic review and meta-analysis. Journal of Korean Academy of Nursing, 52(5), e22039. [Google Scholar] [CrossRef]
  15. Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Lawrence Erlbaum. [Google Scholar]
  16. Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155–159. [Google Scholar] [CrossRef]
  17. Contreras, F., Baykal, E., & Abid, G. (2020). E-leadership and teleworking in times of COVID-19 and beyond: What we know and where do we go. Frontiers in Psychology, 11, 590271. [Google Scholar] [CrossRef]
  18. Cook, R. D. (1977). Detection of influential observation in linear regression. Technometrics, 19(1), 15–18. [Google Scholar] [CrossRef]
  19. Deci, E. L., & Ryan, R. M. (2000). The “what” and “why” of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11(4), 227–268. [Google Scholar] [CrossRef]
  20. Del Pino-Marchito, A., Galán-García, A., & Plaza-Mejía, M. d. l. Á. (2025). The Hersey and Blanchard’s situational leadership model revisited: Its role in sustainable organizational development. World, 6(2), 63. [Google Scholar] [CrossRef]
  21. DeVellis, R. F., & Thorpe, C. T. (2022). Scale development: Theory and applications (4th ed.). Sage. [Google Scholar]
  22. Dirani, K. M., Abadi, M., Alizadeh, A., Barhate, B., Garza, R. C., Gunasekara, N., Ibrahim, G., & Majzun, Z. (2020). Leadership competencies and the essential role of human resource development in times of crisis: A response to COVID-19 pandemic. Human Resource Development International, 23(4), 380–394. [Google Scholar] [CrossRef]
  23. Dulebohn, J. H., Bommer, W. H., Liden, R. C., Brouer, R. L., & Ferris, G. R. (2012). A meta-analysis of antecedents and consequences of leader–member exchange: Integrating the past with an eye toward the future. Journal of Management, 38(6), 1715–1759. [Google Scholar] [CrossRef]
  24. Edmondson, A. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350–383. [Google Scholar] [CrossRef]
  25. Edmondson, A. C., & Lei, Z. (2014). Psychological safety: The history, renaissance, and future of an interpersonal construct. Annual Review of Organizational Psychology and Organizational Behavior, 1(1), 23–43. [Google Scholar] [CrossRef]
  26. Egan, J. (2023). Situational leadership role-play and debrief: An exploration of useful components and limitations. Management Teaching Review, 9(2), 129–146. [Google Scholar] [CrossRef]
  27. Fernandez, C. F., & Vecchio, R. (1997). Situational leadership theory revisited: A test of an across-jobs perspective. Leadership Quarterly, 8(1), 67–84. [Google Scholar] [CrossRef]
  28. Ferrari, F. (2024). Path-goal theory and followers’ work engagement: An empirical exploration of the situational leadership approach. In Proceedings of the 19th European conference on management, leadership and governance. Academic Conferences International. [Google Scholar] [CrossRef]
  29. Field, A. (2018). Discovering statistics using IBM SPSS statistics (5th ed.). Sage. [Google Scholar]
  30. Fitriani, F., Betaubun, P., Pure, E. A. G., Tikson, D. T., Maturbongs, E., Cahyanti, T. W. A., & Waas, R. F. (2018). Relationship of employee ethnic background in validation of Situational Leadership Theory. Indian Journal of Public Health Research and Development, 9(4), 200–205. [Google Scholar] [CrossRef]
  31. Flora, D. B., & Curran, P. J. (2004). An empirical evaluation of alternative methods for estimation with categorical data. Psychological Methods, 9(4), 466–491. [Google Scholar] [CrossRef]
  32. Gordon, A., Tomczak, J., Adams, J., & Pickering, J. S. (2023). What over 1,000,000 participants tell us about online research protocols. Frontiers in Human Neuroscience, 17, 1228365. [Google Scholar] [CrossRef]
  33. Graeff, C. L. (1997). Evolution of situational leadership theory: A critical review. Leadership Quarterly, 8(2), 153–170. [Google Scholar] [CrossRef]
  34. Green, S. B. (1991). How many subjects does it take to do a regression analysis. Multivariate Behavioral Research, 26(3), 499–510. [Google Scholar] [CrossRef]
  35. Hamilton, K., Mancuso, V., Mohammed, S., Tesler, R., & McNeese, M. (2017). Skilled and unaware: The interactive effects of team cognition, team metacognition, and task confidence on team performance. Journal of Cognitive Engineering and Decision Making, 11(4), 382–395. [Google Scholar] [CrossRef]
  36. Hammond, M. M., Neff, N. L., Farr, J. L., Schwall, A. R., & Zhao, X. (2011). Predictors of individual-level innovation at work: A meta-analysis. Psychology of Aesthetics, Creativity, and the Arts, 5(1), 90–105. [Google Scholar] [CrossRef]
  37. Hayes, A. F., & Cai, L. (2007). Using heteroskedasticity-consistent standard error estimators in OLS regression: An introduction and software implementation. Behavior Research Methods, 39(4), 709–722. [Google Scholar] [CrossRef] [PubMed]
  38. Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43(1), 115–135. [Google Scholar] [CrossRef]
  39. Hersey, P., & Blanchard, K. H. (1969). Life cycle theory of leadership. Training and Development Journal, 23(5), 26–34. [Google Scholar]
  40. Hersey, P., & Blanchard, K. H. (1982). Management of organizational behavior: Utilizing human resources (4th ed.). Prentice-Hall. [Google Scholar]
  41. Hersey, P., Blanchard, K. H., & Johnson, D. E. (1996). Management of organizational behavior: Utilizing human resources (7th ed.). Prentice Hall. [Google Scholar]
  42. Hinkin, T. R. (1998). A brief tutorial on the development of measures for use in survey questionnaires. Organizational Research Methods, 1(1), 104–121. [Google Scholar] [CrossRef]
  43. Hocine, Z., Zhang, J., Song, Y., & Ye, L. (2014). Autonomy-supportive leadership behavior contents. Open Journal of Social Sciences, 2, 433–440. [Google Scholar] [CrossRef]
  44. Hofstede, G. (2001). Culture’s consequences: Comparing values, behaviors, institutions, and organizations across nations (2nd ed.). Sage. [Google Scholar]
  45. House, R. J., Hanges, P. J., Javidan, M., Dorfman, P. W., & Gupta, V. (Eds.). (2004). Culture, leadership, and organizations: The GLOBE study of 62 societies. Sage. [Google Scholar]
  46. Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1–55. [Google Scholar] [CrossRef]
  47. Humphrey, R. H. (2013). Effective leadership: Theory, cases, and applications. SAGE Publications. [Google Scholar]
  48. Huselid, M. A. (1995). The impact of human resource management practices on turnover, productivity, and corporate financial performance. Academy of Management Journal, 38(3), 635–672. [Google Scholar] [CrossRef]
  49. Islam, M. M., & Islam, S. (2025). An empirical study on the relationship between self-efficacy, employees’ creative performance, and thinking style in the Bangladeshi trade companies. Future Business Journal, 11, 65. [Google Scholar] [CrossRef]
  50. Jäppinen, A. K. (2017). Analysis of leadership dynamics in educational settings during times of external and internal change. Educational Research, 59(4), 460–477. [Google Scholar] [CrossRef]
  51. Jiang, W., Wang, L., Chu, Z., & Zheng, C. (2019). Does leader turnover intention hinder team innovation performance? The roles of leader self-sacrificial behavior and empathic concern. Journal of Business Research, 104, 261–270. [Google Scholar] [CrossRef]
  52. Johansen, B. (1990). Situational leadership: A review of the research. Human Resource Development Quarterly, 1, 73–85. [Google Scholar] [CrossRef]
  53. Kaifi, B. A., Noor, A. O., Nguyen, N. R., Aslami, W., & Khanfar, N. M. (2014). The importance of situational leadership in the workforce: A study based on gender, place of birth, and generational affiliation. Journal of Contemporary Management, 3(2), 29–40. [Google Scholar]
  54. Kashive, N., Khanna, V., & Powale, L. (2022). Virtual team performance: E-leadership roles in the era of COVID-19. Journal of Management Development, 41(5), 277–300. [Google Scholar] [CrossRef]
  55. Kim, H., & Kim, E. G. (2021). A meta-analysis on predictors of turnover intention of hospital nurses in South Korea (2000–2020). Nursing Open, 8(5), 2406–2418. [Google Scholar] [CrossRef]
  56. Lazarus, R. S. (1991). Emotion and adaptation. Oxford University Press. [Google Scholar]
  57. Lazarus, R. S., & Folkman, S. (1984). Stress, appraisal, and coping. Springer. [Google Scholar]
  58. Lee, A., Legood, A., Hughes, D., Tian, A. W., Newman, A., & Knight, C. (2019). Leadership, creativity and innovation: A meta-analytic review. European Journal of Work and Organizational Psychology, 29(1), 1–35. [Google Scholar] [CrossRef]
  59. Lynch, B. M., McCormack, B., & McCance, T. (2011). Development of a model of situational leadership in residential care for older people. Journal of Nursing Management, 19(8), 1058–1069. [Google Scholar] [CrossRef]
  60. Manyuchi, M., & Sukdeo, N. (2021, April 5–8). Application of the situational leadership model to achieve effective performance in mining organization teams. 2nd South American International Conference on Industrial Engineering and Operations Management, Sao Paulo, Brazil. [Google Scholar] [CrossRef]
  61. McLaurin, J. (2006). The role of situation in the leadership process: A review and application. Academy of Strategic Management Journal, 5, 97–112. [Google Scholar]
  62. Omilion-Hodges, L. M., & Ptacek, J. K. (2021). Leadership and context: Reading the room. In Leader-Member exchange and organizational communication. New perspectives in organizational communication (pp. 143–157). Palgrave Macmillan. [Google Scholar] [CrossRef]
  63. Palan, S., & Schitter, C. (2018). Prolific.ac—A subject pool for online experiments. Journal of Behavioral and Experimental Finance, 17, 22–27. [Google Scholar] [CrossRef]
  64. Palupi, M. (2020). Efforts to improve employee creativity through transformational leadership. Jurnal Manajemen Bisnis, 11(2), 224–232. [Google Scholar] [CrossRef]
  65. Papworth, M. A., Milne, D., & Boak, G. (2009). An exploratory content analysis of situational leadership. Journal of Management Development, 28(7), 593–606. [Google Scholar] [CrossRef]
  66. Pasaribu, S. B., Goestjahjanti, F. S., Srinita, S., Novitasari, D., & Haryanto, B. (2022). The role of situational leadership on job satisfaction, organizational citizenship behavior (OCB), and employee performance. Frontiers in Psychology, 13, 896539. [Google Scholar] [CrossRef] [PubMed]
  67. Peer, E., Rothschild, D., Gordon, A., Evernden, Z., & Damer, E. (2022). Data quality of platforms and panels for online behavioral research. Behavior Research Methods, 54(4), 1643–1662. [Google Scholar] [CrossRef]
  68. Polit, D. F., & Beck, C. T. (2006). The content validity index: Are you sure you know what’s being reported? Critique and recommendations. Research in Nursing & Health, 29(5), 489–497. [Google Scholar] [CrossRef]
  69. Pulakos, E. D., Arad, S., Donovan, M. A., & Plamondon, K. E. (2000). Adaptability in the workplace: Development of a taxonomy of adaptive performance. Journal of Applied Psychology, 85(4), 612–624. [Google Scholar] [CrossRef]
  70. Ramesh, P., Bhavikatti, V., Omnamasivaya, B., Chaitanya, G., Tejaswini, Hiremath, S., Gondesi, S. H., & Kameswari, J. (2024). Organizational adaptability: A study of the mediating role of leadership in the influence of strategies, complexity, and technology. International Journal of Innovation Management, 27(07n08), 2350036. [Google Scholar] [CrossRef]
  71. Ribeiro, N., Gomes, D., Singh, S., & Romão, S. (2022). The impact of leaders’ coaching skills on employees’ happiness and turnover intention. Administrative Sciences, 12(3), 84. [Google Scholar] [CrossRef]
  72. Rockstuhl, T., Seiler, S., Ang, S., Van Dyne, L., & Annen, H. (2011). Beyond general intelligence (IQ) and emotional intelligence (EQ): The role of cultural intelligence (CQ) on cross-border leadership effectiveness in a globalized world. Journal of Social Issues, 67(4), 825–840. [Google Scholar] [CrossRef]
  73. Rudolph, C. W., Allan, B., Clark, M., & Zacher, H. (2021). Pandemics: Implications for research and practice in industrial and organizational psychology. Industrial and Organizational Psychology, 14(1–2), 1–35. [Google Scholar] [CrossRef]
  74. Schulze, J. H., & Pinkow, F. (2020). Leadership for organisational adaptability: How enabling leaders create adaptive space. Administrative Sciences, 10(3), 37. [Google Scholar] [CrossRef]
  75. Serrat, O. (2021). Review of Situational Leadership® after 25 years—A retrospective (Blanchard, Zigarmi, & Nelson, 1993). In Leading solutions (pp. 41–45). Springer. [Google Scholar] [CrossRef]
  76. Shmueli, G. (2010). To explain or to predict? Statistical Science, 25(3), 289–310. [Google Scholar] [CrossRef]
  77. Silverthorne, C., & Wang, T. (2001). Situational leadership style as a predictor of success and productivity among Taiwanese business organizations. The Journal of Psychology, 135(4), 399–412. [Google Scholar] [CrossRef]
  78. Stajkovic, A. D., & Luthans, F. (1998). Self-efficacy and work-related performance: A meta-analysis. Psychological Bulletin, 124(2), 240–261. [Google Scholar] [CrossRef]
  79. Taras, V., Kirkman, B. L., & Steel, P. (2010). Examining the impact of Culture’s Consequences: A three-decade, multilevel, meta-analytic review of Hofstede’s cultural value dimensions. Journal of Applied Psychology, 95(3), 405–439. [Google Scholar] [CrossRef] [PubMed]
  80. Thompson, G., & Glasø, L. (2015). Situational leadership theory: A test from three perspectives. Leadership & Organization Development Journal, 36(5), 527–544. [Google Scholar] [CrossRef]
  81. Thompson, G., & Glasø, L. (2018). Situational leadership theory: A test from a leader-follower congruence approach. Leadership & Organization Development Journal, 39(5), 574–591. [Google Scholar] [CrossRef]
  82. Thompson, G., & Vecchio, R. P. (2009). Situational leadership theory: A test of three versions. The Leadership Quarterly, 20(5), 837–848. [Google Scholar] [CrossRef]
  83. Uzzaman, M., & Karim, A. K. M. (2016). The psychometric properties of school engagement scale in Bangladeshi culture. Journal of the Indian Academy of Applied Psychology, 42(1), 143–153. [Google Scholar]
  84. Vanovenberghe, C., Du Bois, M., Lauwerier, E., & Van Den Broeck, A. (2021). Does motivation predict return to work? A longitudinal analysis. Journal of Occupational Health, 63, e12284. [Google Scholar] [CrossRef]
  85. Vecchio, R. P. (1987). Situational leadership theory: An examination of a prescriptive theory. Journal of Applied Psychology, 72(3), 444–451. [Google Scholar] [CrossRef]
  86. Vecchio, R. P., Bullis, R. C., & Brazil, D. M. (2006). The utility of situational leadership theory. Small Group Research, 37(5), 407–424. [Google Scholar] [CrossRef]
  87. Venables, W. N., & Ripley, B. D. (2002). Modern applied statistics with S (4th ed.). Springer. [Google Scholar]
  88. Wang, X., Liu, Y., Peng, Z., Li, B., Liang, Q., Liao, S., & Liu, M. (2024). Situational leadership theory in nursing management: A scoping review. BMC Nursing, 23(1), 930. [Google Scholar] [CrossRef]
  89. Weick, K. E. (1995). Sensemaking in organizations. Sage. [Google Scholar]
  90. Westover, J. H. (2024). Leveraging situational leadership theory for transformation and excellence. Human Capital Leadership Review, 12(2). [Google Scholar] [CrossRef]
  91. White, H. (1980). A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica, 48(4), 817–838. [Google Scholar] [CrossRef]
  92. Yeo, Y. H., Lee, J. H., & Lee, K. H. (2025). A study of the effects of situational strength on self-efficacy and happiness: Comparing individualist and collectivist cultures. Frontiers in Psychology, 16, 1563643. [Google Scholar] [CrossRef] [PubMed]
  93. Yusof, M. Y. P. M., Teo, C. H., & Ng, C. J. (2022). Electronic informed consent criteria for research ethics review: A scoping review. BMC Medical Ethics, 23, 117. [Google Scholar] [CrossRef]
  94. Zacher, H., & Rosing, K. (2015). Ambidextrous leadership and team innovation. Leadership & Organization Development Journal, 36(1), 54–68. [Google Scholar] [CrossRef]
  95. Zhang, S., & Fjermestad, J. (2006). Bridging the gap between traditional leadership theories and virtual team leadership. International Journal of Technology, Policy and Management, 6(3), 274–289. [Google Scholar] [CrossRef]
  96. Zigarmi, D., & Roberts, T. P. (2017). A test of three basic assumptions of situational leadership® II model and their implications for HRD practitioners. European Journal of Training and Development, 41(3), 241–260. [Google Scholar] [CrossRef]
Table 1. Integration of the 4D Model with SLT and Added Theoretical Value.
SLT Readiness Component | 4D Dimension(s) | Theoretical Anchor(s) | Primary Psychological Mechanism | Key Outcomes | Added Value Beyond SLT
Competence (Ability) | Dare | Self-efficacy theory (Bandura, 1997) | Confidence in capability to act; initiative and perseverance | Higher individual performance; readiness to take responsibility | Specifies how competence is mobilized (agentic confidence), distinguishing action-readiness from mere skill possession.
Competence (Ability) | Decode | Sensemaking & cognitive appraisal theories (Weick, 1995; Lazarus, 1991) | Interpreting ambiguity; adaptive cognition | Enhanced adaptability; team learning | Expands competence to include cognitive flexibility and appraisal in uncertain contexts—absent in SLT’s static competence construct.
Commitment (Willingness) | Drive | Self-determination theory (Deci & Ryan, 2000) | Intrinsic motivation; fulfillment of autonomy and competence needs | Job satisfaction; sustained engagement | Deepens commitment by differentiating motivation quality (autonomous vs. controlled), not merely willingness.
Commitment (Willingness) | Dialogue | Transformational leadership, LMX, psychological safety (Bass, 1990; A. C. Edmondson & Lei, 2014) | Relational trust; open communication; voice behavior | Team cohesion; relational satisfaction | Makes the relational/emotional climate explicit (trust, safety, openness), which SLT treats only implicitly through “commitment.”
Table 2. Summary of Correlation and Regression Analyses for H1 and H2.
Hypothesis | Analysis | Variables | r / R² | F (df) | p-Value | N | Shapiro–Wilk (W, p-Value) | Max VIF
H1 | Pearson’s correlation | Total 4D vs. Employee Satisfaction | 0.531 | - | <0.001 | 169 | 0.966, <0.001 | -
H1 | Multiple regression | D1, D2, D3, D4 | 0.329 | 20.09 (4, 164) | <0.001 | 169 | 0.966, <0.001 | 2.62
H2 | Pearson’s correlation | Total 4D vs. Manager Satisfaction | 0.804 | - | <0.001 | 91 | 0.952, 0.002 | -
H2 | Multiple regression | D1, D2, D3, D4 | 0.699 | 49.83 (4, 86) | <0.001 | 91 | 0.952, 0.002 | 3.379
Note: Shapiro–Wilk test results indicate non-normal residuals, addressed via robust regression. Maximum VIF values confirm no multicollinearity (VIF < 5).
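The diagnostics described in this note can be reproduced with standard tooling; the sketch below runs a Shapiro–Wilk test on OLS residuals and computes variance inflation factors for the 4D predictors. The file name and column names are hypothetical stand-ins for the study variables.

```python
# Sketch of residual-normality and multicollinearity checks: Shapiro-Wilk on
# OLS residuals and variance inflation factors (VIF) for the 4D predictors.
# Data frame and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
from scipy.stats import shapiro
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("employee_data.csv")                       # hypothetical file
X = sm.add_constant(df[["D1_Drive", "D2_Dare", "D3_Decode", "D4_Dialogue"]])
y = df["Employee_Satisfaction"]

ols = sm.OLS(y, X).fit()
w_stat, p_val = shapiro(ols.resid)                          # non-normal residuals -> robust regression
print(f"Shapiro-Wilk W = {w_stat:.3f}, p = {p_val:.3f}")

vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}
print("VIF:", {k: round(v, 2) for k, v in vifs.items()})
```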
Table 3. Robust Regression Coefficients for H1 and H2.
Predictor | H1: Employee Satisfaction (β, SE, t) | H2: Manager Satisfaction (β, SE, t)
(Intercept) | 1.045, 0.438, 2.388 * | 0.457, 0.228, 2.004 *
D1 (Drive) | 0.378, 0.082, 4.602 ** | 0.203, 0.078, 2.603 *
D2 (Dare) | −0.159, 0.124, −1.280 | 0.195, 0.085, 2.302 *
D3 (Decode) | 0.038, 0.126, 0.305 | −0.078, 0.079, −0.992
D4 (Dialogue) | 0.508, 0.113, 4.494 ** | 0.612, 0.081, 7.542 **
Residual Std. Error | 0.664 | 0.269
Degrees of Freedom | 161 | 86
Note: Robust regression addressed non-normal residuals. * p < 0.05, ** p < 0.01.
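As a sketch of the kind of robust estimation summarized here, the snippet below fits an M-estimation regression with statsmodels. The article does not specify the robust estimator in this section, so the Huber M-estimator is an assumption chosen for illustration; file and column names are likewise hypothetical.

```python
# Sketch of a robust (M-estimation) regression analogous to Table 3.
# The Huber M-estimator is an assumption; column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("employee_data.csv")                       # hypothetical file
X = sm.add_constant(df[["D1_Drive", "D2_Dare", "D3_Decode", "D4_Dialogue"]])
y = df["Employee_Satisfaction"]

rlm = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()        # downweights outlying residuals
print(rlm.summary())                                        # coefficients, SEs, test statistics
```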
Table 4. Model Diagnostics for H1, H2, and H3.
Hypothesis | Model | Shapiro–Wilk (W, p) | VIF (Tenure Group, Training, Performance; D1_Drive, D2_Dare, D3_Decode, D4_Dialogue) | Max VIF
H1 | Employee Satisfaction | 0.966, <0.001 | -, -, -, 1.591, 2.410, 2.104, 1.553 | 2.410
H2 | Manager Satisfaction | 0.952, 0.002 | -, -, -, 3.301, 3.379, 3.204, 2.575 | 3.379
H3 | Employee Satisfaction | 0.968, <0.001 | 1.064, 1.089, 1.779, 1.724, 2.625, 2.169, 1.630 | 2.625
H3 | Team Adaptability | 0.974, 0.003 | 1.015, 1.043, 1.333, 1.312, 1.620, 1.472, 1.276 | 2.625
Note: Shapiro–Wilk tests indicate non-normal residuals, addressed via robust regression. VIF values confirm no multicollinearity (VIF < 5). For H3, VIF values include objective indicators (tenure, training), together with self-rated performance, and 4D predictors; maximum VIF is reported due to space constraints.
Table 5. Summary of Hierarchical Regression Models for H3.
Dependent Variable | Model | Predictors | R² | ΔR² | F (df) | p-Value
Employee Satisfaction | 1 | Traditional indicators | 0.244 | - | - | -
Employee Satisfaction | 2 | Traditional + 4D dimensions | 0.388 | 0.144 | 9.251 (4, 157) | <0.001
Team Adaptability | 1 | Traditional indicators | 0.159 | - | - | -
Team Adaptability | 2 | Traditional + 4D dimensions | 0.266 | 0.107 | 5.666 (4, 157) | <0.001
Note: Traditional indicators include Tenure_Group, Training_Numeric, and Self-Rated Performance; the 4D dimensions are Drive, Dare, Decode, and Dialogue.
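The incremental test behind Model 2 is the standard F-change statistic for a block of added predictors. The sketch below implements that formula; plugging in the rounded R² values from Table 5 reproduces the reported F statistics up to rounding error of the published R² values.

```python
# Sketch of the incremental F-test (F-change) used in hierarchical regression:
# does adding the 4D block improve R^2 beyond the traditional block?
def f_change(r2_reduced, r2_full, df_num, df_den):
    """F statistic for adding df_num predictors; df_den = residual df of the full model."""
    return ((r2_full - r2_reduced) / df_num) / ((1.0 - r2_full) / df_den)

# Rounded R^2 values from Table 5; results match the reported F(4, 157) up to rounding.
f_sat = f_change(r2_reduced=0.244, r2_full=0.388, df_num=4, df_den=157)
f_adapt = f_change(r2_reduced=0.159, r2_full=0.266, df_num=4, df_den=157)
print(f"Employee satisfaction: F(4, 157) ~= {f_sat:.2f}")
print(f"Team adaptability:     F(4, 157) ~= {f_adapt:.2f}")
```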
Table 6. Robust Regression Coefficients for H3.
Predictor | Employee Satisfaction (β, SE, t) | Team Adaptability (β, SE, t)
(Intercept) | 0.138, 0.482, 0.287 | 0.891, 0.420, 2.123 *
Tenure_Group (5–10 years) | 0.037, 0.131, 0.280 | −0.019, 0.114, −0.165
Tenure_Group (>10 years) | 0.001, 0.175, 0.003 | −0.050, 0.153, −0.326
Training_Numeric | 0.183, 0.089, 2.060 * | −0.010, 0.078, −0.132
Self-Rated Performance | 0.390, 0.115, 3.399 ** | 0.159, 0.100, 1.589
D1 (Drive) | 0.287, 0.086, 3.359 ** | 0.111, 0.075, 1.495
D2 (Dare) | −0.294, 0.130, −2.266 * | −0.001, 0.113, −0.011
D3 (Decode) | 0.072, 0.128, 0.560 | 0.261, 0.111, 2.346 *
D4 (Dialogue) | 0.429, 0.116, 3.694 ** | 0.244, 0.101, 2.415 *
Residual Std. Error | 0.642 | 0.532
Degrees of Freedom | 157 | 157
Note: Robust regression addressed non-normal residuals. * p < 0.05, ** p < 0.01.