1. Introduction
Higher education institutions increasingly rely on digitally integrated infrastructures to support teaching, research, and administrative coordination. Learning management systems, cloud-based platforms, and interconnected campus technologies now underpin routine academic operations. While this integration enhances institutional efficiency and connectivity, it simultaneously expands exposure to cybersecurity threats. Incidents involving unauthorized access, data breaches, and service disruptions indicate that institutional vulnerability cannot be explained solely by technical configurations. Rather, security performance is shaped by how individuals engage with digital systems in everyday practice. Empirical research consistently shows that organizational security failures often stem from behavioral inconsistencies rather than from the absence of technological safeguards [1,2,3,4].
Formal policies and monitoring mechanisms, though essential, do not automatically produce secure behavior. Security-related decisions are influenced by perceived norms, social expectations, and evaluations of the practicality and effectiveness of protective measures [5,6]. Even when awareness is high, individuals may bypass prescribed routines if security controls are perceived as burdensome or ineffective [7]. Systematic reviews further demonstrate that knowledge acquisition alone rarely leads to sustained behavioral change without motivational reinforcement and contextual alignment [8,9,10]. These findings suggest that cybersecurity effectiveness should be conceptualized as a behavioral outcome embedded within organizational contexts, rather than as a purely technical attribute.
Theoretical explanations of cybersecurity behavior commonly draw on established behavioral frameworks. The Theory of Planned Behavior (TPB) posits behavioral intention as the immediate antecedent of action, shaped by attitudes, subjective norms, and perceived behavioral control [11]. Social Cognitive Theory (SCT), in turn, emphasizes self-efficacy, defined as individuals’ perceived capability to perform protective actions under conditions of risk [12]. Contemporary research confirms that intention and self-efficacy are central predictors of cybersecurity compliance [2,13]. At the same time, studies of policy compliance underscore the influence of institutional governance structures in shaping behavioral responses beyond individual cognition [14]. Taken together, these perspectives indicate that cybersecurity outcomes emerge from the interaction of cognitive evaluation, perceived capability, social influence, and contextual reinforcement.
These dynamics are particularly salient in higher education. Universities operate as decentralized environments characterized by heterogeneous user populations and uneven levels of cybersecurity expertise [15,16]. Norms emphasizing openness, collaboration, and academic autonomy coexist with obligations to protect sensitive research data, intellectual property, and personal information. This structural tension complicates centralized enforcement and heightens the importance of user behavior in sustaining institutional security.
From a behavioral standpoint, information security effectiveness can be understood as the extent to which routine user actions preserve confidentiality, integrity, and availability (CIA). Rather than treating CIA as a static technical property of information systems, it may be conceptualized as an outcome enacted through structured behavioral processes. This study therefore examines how technological exposure and perceived social expectations shape cybersecurity knowledge, protective attitudes, self-efficacy, and behavioral intention, and how these factors collectively influence CIA-related outcomes. Structural equation modeling is employed to test an integrated framework grounded in TPB and SCT, clarifying the pathways through which awareness is translated into enacted protective behavior within smart university environments.
Despite the growing body of research on cybersecurity behavior, important limitations remain. Many studies apply TPB or SCT independently, without integrating motivational and capability-based mechanisms within a unified structural model [9,17]. Moreover, much of the literature centers on compliance intention without empirically modeling how behavioral processes translate into measurable CIA-based outcomes [1,18]. In higher education contexts specifically, research frequently emphasizes awareness or descriptive compliance indicators, while mediation mechanisms linking contextual exposure, cognitive processing, motivational alignment, and enacted security outcomes remain underdeveloped [2,16].
To address these gaps, this study integrates TPB and SCT within a single structural framework linking contextual antecedents, cognitive resources, motivational mechanisms, and CIA outcomes. By specifying both mediation and serial mediation pathways, the proposed model captures how cybersecurity knowledge and self-efficacy interact with attitudinal and normative influences to shape protective behavior. In doing so, the study advances behavioral cybersecurity scholarship by offering an outcome-oriented model tailored to decentralized smart university environments and by providing empirical evidence on how user-centered mechanisms sustain information protection in digitally intensive higher education institutions.
The remainder of this article is organized as follows. Section 2 reviews the relevant literature. Section 3 develops the research hypotheses. Section 4 describes the research design and analytical procedures. Section 5 presents the empirical results. Section 6 discusses theoretical and practical implications, and Section 7 concludes with key contributions and directions for future research.
2. Literature Review
2.1. From Smart Cities to Smart Universities
The concept of the smart university derives from the broader smart city paradigm, which emphasizes digitally integrated infrastructures, data ecosystems, and sustainability-oriented governance models [19,20]. Within higher education, this paradigm materializes in interconnected campus environments supported by Internet of Things (IoT) systems, analytics platforms, and adaptive digital services [21].
Recent scholarship has proposed structured frameworks to guide smart campus development, highlighting infrastructure integration, governance coordination, and performance evaluation as core dimensions [22,23]. Complementary technological surveys demonstrate how IoT architectures and data-driven platforms enhance operational efficiency and environmental sustainability in university settings [24,25].
While this body of literature provides strong architectural and strategic foundations, it gives comparatively limited attention to the cognitive and behavioral processes that shape secure system use. As digital infrastructures expand, cybersecurity effectiveness becomes inseparable from how users interact with these socio-technical ecosystems. Extending the smart university framework therefore requires integrating infrastructural perspectives with behavioral mechanisms governing the protection of confidentiality, integrity, and availability (CIA).
2.2. Behavioral Foundations of Cybersecurity
Contemporary cybersecurity research increasingly recognizes that organizational security effectiveness depends as much on human behavior as on technological safeguards [3,4]. Empirical studies consistently show that security incidents often originate in routine user practices, cognitive shortcuts, and informal workarounds—even in environments governed by formal policies [1,26]. Cybersecurity outcomes, in this view, are embedded in everyday organizational activity rather than confined to technical system design.
Systematic reviews identify knowledge, risk perception, social influence, and motivational dynamics as central behavioral determinants [8]. Recent empirical work further suggests that behavioral resilience—particularly in phishing detection—depends not only on awareness but also on evaluative judgment and situational interpretation [27]. Technology exposure has likewise been shown to shape awareness formation and security perceptions, indicating that digital environments influence cognitive risk processing [28].
Despite these advances, many studies examine determinants independently, without modeling how contextual exposure is cognitively processed and translated into measurable security outcomes. Research on cybersecurity culture similarly emphasizes normative reinforcement, organizational climate, and governance structures as drivers of compliance [6,9]. Yet the sequential pathways linking contextual conditions, cognitive appraisal, perceived capability, motivational intention, and enacted CIA protection remain insufficiently specified.
These limitations underscore the need for an integrated behavioral framework capable of capturing how contextual exposure, cognitive evaluation, self-efficacy, and intention jointly shape measurable cybersecurity outcomes.
2.3. Cybersecurity in Higher Education Contexts
Higher education institutions constitute a distinctive cybersecurity environment characterized by decentralization, heterogeneous user populations, and institutional norms that privilege openness, collaboration, and academic autonomy [15]. These structural features generate vulnerabilities that differ from those found in centralized corporate settings.
Universities face elevated exposure to cyber threats due to expanded digital infrastructures, cloud-based learning management systems, distributed access privileges, and widespread Bring-Your-Own-Device (BYOD) practices. The rapid digital acceleration during and following the COVID-19 pandemic further intensified these risks, expanding institutional attack surfaces and increasing dependence on remote platforms [16].
The pandemic fundamentally reshaped cybersecurity conditions in higher education. The abrupt transition to remote instruction and distributed administration amplified reliance on personal devices, home networks, and cloud-based authentication systems. Reports during and after this period document heightened phishing campaigns and ransomware incidents targeting university communities. These developments illustrate how cybersecurity vulnerabilities in universities are not only structurally embedded but also situationally amplified during periods of rapid digital transformation.
Existing research in higher education cybersecurity broadly falls into three streams. Awareness-oriented studies assess cybersecurity knowledge and digital competence among students and staff [15]. Although diagnostically valuable, such studies are often descriptive and do not model how contextual exposure leads to enacted security outcomes. Compliance-focused research examines adherence to security policies and institutional governance mechanisms [14,15], yet frequently treats intention as the terminal outcome rather than modeling CIA-based effectiveness. More recent human-factor and resilience-oriented studies investigate training effectiveness, phishing detection, and cognitive risk evaluation [28,29]. While these contributions advance understanding of behavioral determinants, they rarely integrate contextual exposure, knowledge formation, self-efficacy, and intention within a unified structural framework linked directly to CIA outcomes.
Consequently, despite expanding scholarship, two gaps remain salient: the limited specification of mediation pathways connecting contextual conditions to CIA-based outcomes, and the absence of integrated models combining motivational (TPB-based) and capability-based (SCT-based) mechanisms within smart university ecosystems.
The present study addresses these limitations by specifying and empirically testing a multi-stage behavioral pathway that connects technological exposure and social norms to cognitive development, motivational alignment, and measurable CIA outcomes in digitally intensive higher education environments.
2.4. Research Gap and Theoretical Positioning
Three interrelated limitations emerge from the preceding literature. First, behavioral cybersecurity research frequently applies the Theory of Planned Behavior (TPB) or Social Cognitive Theory (SCT) in isolation, limiting theoretical integration between intention formation and self-efficacy mechanisms. Second, many studies conceptualize compliance intention as the endpoint variable rather than modeling confidentiality, integrity, and availability (CIA)-based security outcomes. Third, structured mediation pathways linking contextual exposure, cognitive processing, and enacted security effectiveness remain underdeveloped in smart university contexts.
To advance the field, this study integrates TPB and SCT within a unified structural equation framework and empirically models CIA-based outcomes in digitally intensive higher education environments. By doing so, it repositions cybersecurity effectiveness as a structured behavioral outcome emerging from the interaction of contextual, cognitive, and motivational processes.
3. Hypotheses Development
3.1. Integrating TPB and Social Cognitive Perspectives
Explaining cybersecurity protection in smart university environments requires moving beyond isolated psychological predictors toward a more integrated understanding of how contextual exposure, cognitive resources, motivational evaluation, and perceived capability interact. Rather than treating behavioral determinants independently, the present framework conceptualizes protection as the outcome of a structured cognitive–motivational process.
The Theory of Planned Behavior (TPB) identifies Behavioral Intention (BI) as the most immediate antecedent of action, shaped primarily by Attitudes toward Cybersecurity Protection (ATT). Social Cognitive Theory (SCT) complements this perspective by emphasizing Self-Efficacy (SE) and knowledge-based competence as essential enablers of effective behavior. Whereas TPB explains motivational orientation, SCT clarifies how perceived capability conditions the translation of intention into action.
Integrating these perspectives, the proposed framework conceptualizes cybersecurity protection as a layered behavioral system. Technology exposure (TECH) and Social Norms & Influence (SN) function as contextual drivers shaping Cybersecurity Knowledge (KNOW) and Self-Efficacy (SE). These cognitive resources influence Attitudes (ATT) and Behavioral Intention (BI), with enacted protection ultimately reflected in confidentiality, integrity, and availability (CIA).
3.2. Technology (TECH) as a Driver of Knowledge and Capability
Technology (TECH) reflects the degree to which individuals are exposed to and interact with institutional digital systems. Repeated engagement with digital platforms facilitates experiential learning, increasing familiarity with security procedures, threat recognition, and system functionality.
Such exposure is expected to strengthen Cybersecurity Knowledge (KNOW) by deepening users’ understanding of risks and safeguards. At the same time, regular technological interaction may enhance confidence in managing digital environments, thereby reinforcing Self-Efficacy (SE). Individuals who routinely navigate digital systems are more likely to perceive themselves as capable of responding effectively to cybersecurity threats.
H1: Technology (TECH) positively influences Cybersecurity Knowledge (KNOW).
H2: Technology (TECH) positively influences Self-Efficacy (SE).
3.3. Social Norms & Influence (SN) as a Normative Reinforcement Mechanism
Social Norms & Influence (SN) capture perceived expectations and social pressures regarding appropriate cybersecurity conduct. In organizational settings, cybersecurity learning rarely occurs in isolation; it is shaped by peer observation, institutional signaling, and shared standards.
When secure behavior is visibly endorsed and reinforced, individuals are more inclined to seek information, engage in security-related discussions, and strengthen their Cybersecurity Knowledge (KNOW). Normative expectations also shape evaluative orientation, influencing Attitudes toward Cybersecurity Protection (ATT). When security is socially valued, it becomes cognitively and morally salient.
H3: Social Norms & Influence (SN) positively influence Cybersecurity Knowledge (KNOW).
H4: Social Norms & Influence (SN) positively influence Attitudes toward Cybersecurity Protection (ATT).
3.4. Cybersecurity Knowledge (KNOW) as a Foundational Cognitive Resource
Cybersecurity Knowledge (KNOW) enables individuals to recognize threats, interpret vulnerabilities, and evaluate appropriate responses. Although knowledge alone does not guarantee secure behavior, it provides the cognitive foundation upon which capability and motivation develop.
Greater knowledge is expected to reinforce Self-Efficacy (SE), as individuals who understand security principles are more likely to feel competent applying them. Knowledge also shapes Attitudes (ATT) by clarifying the value and necessity of protective behavior. Informed individuals are therefore more likely to form stronger Behavioral Intentions (BI) to act securely.
Beyond these mediated pathways, knowledge may also exert a direct influence on CIA outcomes by improving judgment accuracy in real-world decision contexts, particularly when rapid responses are required.
H5: Cybersecurity Knowledge (KNOW) positively influences Self-Efficacy (SE).
H6: Cybersecurity Knowledge (KNOW) positively influences Attitudes toward Cybersecurity Protection (ATT).
H7: Cybersecurity Knowledge (KNOW) positively influences Behavioral Intention (BI).
H8: Cybersecurity Knowledge (KNOW) positively influences CIA.
3.5. Self-Efficacy (SE) as a Motivational Enabler
Self-Efficacy (SE) refers to individuals’ perceived capability to perform cybersecurity protection behaviors under real-world conditions. Even when knowledge is sufficient, individuals may refrain from acting if they doubt their ability to implement protective measures effectively. Perceived capability therefore functions as a critical bridge between understanding and execution.
Higher levels of Self-Efficacy are expected to strengthen Attitudes (ATT), as individuals who feel capable of acting securely are more likely to evaluate protective behaviors as manageable and worthwhile. Self-Efficacy also reinforces Behavioral Intention (BI) by reducing uncertainty and increasing confidence in one’s ability to act.
In addition, efficacy beliefs may exert a direct influence on CIA outcomes. Individuals with stronger confidence in their protective abilities are more likely to implement security measures consistently and competently, thereby sustaining confidentiality, integrity, and availability.
H9: Self-Efficacy (SE) positively influences Attitudes toward Cybersecurity Protection (ATT).
H10: Self-Efficacy (SE) positively influences Behavioral Intention (BI).
H11: Self-Efficacy (SE) positively influences CIA.
3.6. Attitudes (ATT), Behavioral Intention (BI), and CIA
Attitudes toward Cybersecurity Protection (ATT) represent individuals’ overall evaluative orientation toward secure digital conduct. When protective behavior is perceived as important and beneficial, individuals are more likely to develop a strong Behavioral Intention (BI).
Behavioral Intention serves as the most proximal determinant of enacted behavior. Stronger intention is therefore expected to translate into improved CIA outcomes. In addition, favorable attitudes may contribute directly to CIA by encouraging consistent adherence to security standards, even in situations where intention is not explicitly deliberated.
H12: Attitudes toward Cybersecurity Protection (ATT) positively influence Behavioral Intention (BI).
H13: Attitudes toward Cybersecurity Protection (ATT) positively influence CIA.
H14: Behavioral Intention (BI) positively influences CIA.
3.7. Integrated Research Model
The integrated model conceptualizes cybersecurity protection as a progressive behavioral pathway (see Figure 1). Contextual drivers (TECH and SN) shape cognitive development (KNOW and SE), which influences motivational mechanisms (ATT and BI), culminating in CIA outcomes. By specifying both mediated and complementary direct effects, the framework captures cybersecurity protection as an interconnected cognitive–motivational system rather than a single-factor explanation.
4. Materials and Methods
4.1. Research Design
This study employed a quantitative, cross-sectional survey design to examine theoretically specified relationships among behavioral and cognitive constructs related to information security outcomes in higher education institutions. The objective was to evaluate structural associations among latent constructs derived from established behavioral theories, rather than to establish experimentally verified causality or temporal sequencing.
Structural equation modeling (SEM) was selected as the primary analytical technique because it enables the simultaneous estimation of complex relationships among multiple latent variables, including direct, indirect, and serial mediation effects. SEM further permits concurrent evaluation of measurement reliability, construct validity, and structural pathways within a unified latent-variable framework, making it particularly suitable for theory-driven behavioral research [30,31].
Although the hypothesized paths are theoretically directional, the cross-sectional design limits definitive causal inference. Accordingly, the findings should be interpreted as statistically supported, theory-consistent associations observed at a single point in time. Longitudinal or experimental designs may further clarify temporal dynamics among the constructs.
4.2. Population and Sample
Data were collected using an online questionnaire distributed through official communication systems of participating higher education institutions in Thailand. The survey link was disseminated via university-wide email lists and internal academic platforms at faculty and departmental levels. Institutional coordinators facilitated broad circulation across multiple universities. No material incentives were offered. The survey remained open for a defined period to ensure adequate participation.
The study employed an institutionally distributed, self-administered survey approach rather than a randomized probability sampling design. Although this strategy yielded a substantial and heterogeneous sample (N = 540), it does not constitute stratified or fully randomized sampling. The dataset should therefore be interpreted as institutionally disseminated rather than statistically representative.
Because participation was voluntary, self-selection bias cannot be entirely excluded. Individuals with greater cybersecurity awareness or engagement may have been more inclined to respond. Nevertheless, the inclusion of respondents from multiple institutions and the relatively large sample size enhance variability and strengthen analytical robustness.
4.3. Measurement Instruments
A structured questionnaire was developed to operationalize the constructs specified in the research model. Measurement items were adapted from validated instruments in cybersecurity and behavioral research. Minor wording adjustments ensured contextual clarity within higher education settings while preserving theoretical meaning.
The instrument assessed seven latent constructs: technological exposure (TECH), social norms and influence (SN), cybersecurity knowledge (KNOW), attitudes toward cybersecurity protection (ATT), self-efficacy (SE), behavioral intention (BI), and information security outcomes (CIA). All items were measured using a five-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree).
Information security outcomes were operationalized as perceived results reflecting the extent to which confidentiality, integrity, and availability were maintained during routine system interaction. This perceptual operationalization aligns with the study’s behavioral orientation, conceptualizing CIA as user-enacted outcomes rather than purely technical system attributes. CIA was modeled as a higher-order reflective construct comprising three first-order dimensions—confidentiality, integrity, and availability—thereby preserving dimensional distinctiveness while capturing an overarching security outcome. The full measurement instrument is provided in Appendix A.
4.4. Instrument Validation and Pilot Testing
Prior to full-scale data collection, the questionnaire underwent expert review by specialists in information security, information systems, and higher education administration. The review assessed content validity, construct alignment, and clarity. Minor revisions were implemented to enhance interpretability and contextual fit.
A pilot study was subsequently conducted to evaluate instrument functionality and detect potential ambiguities. Results indicated consistent item interpretation and smooth completion. Preliminary reliability analysis demonstrated acceptable internal consistency across constructs, supporting progression to the main survey.
4.5. Data Collection Procedure
The main survey was administered online through institutional communication channels. Participation was voluntary, and respondents were informed of the study’s purpose before completing the questionnaire. To reduce response bias and encourage candid participation, anonymity was guaranteed and no personally identifiable information was collected.
Only fully completed questionnaires were retained for analysis, eliminating the need for post hoc data imputation.
4.6. Data Analysis
A two-stage SEM procedure was implemented. First, descriptive statistics were computed to summarize respondent characteristics and examine distributional properties. The dataset was screened for missing values, extreme responses, and violations of normality assumptions. No critical issues were identified.
Second, measurement and structural models were estimated using Mplus Version 8.3. Confirmatory factor analysis (CFA) assessed measurement adequacy. Standardized factor loadings, Cronbach’s alpha, composite reliability (CR), and average variance extracted (AVE) were evaluated to establish internal consistency and convergent validity. Discriminant validity was examined using inter-construct correlations and AVE comparisons in accordance with established SEM guidelines [30].
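The reliability and convergent-validity statistics named above have standard closed-form definitions. The following is a minimal sketch of those formulas using illustrative loadings, not the study's actual estimates:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = loadings.sum() ** 2
    return s / (s + (1.0 - loadings ** 2).sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of the squared standardized loadings."""
    return float((loadings ** 2).mean())

# Hypothetical standardized loadings for one construct (not study values)
lam = np.array([0.78, 0.81, 0.74, 0.69])
print(f"CR = {composite_reliability(lam):.3f}, AVE = {average_variance_extracted(lam):.3f}")
```

Under the Fornell–Larcker criterion, discriminant validity holds when the square root of each construct's AVE exceeds that construct's correlations with all other constructs.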
Following confirmation of measurement validity, the structural model was estimated to test the fourteen hypothesized relationships (H1–H14). Both direct and mediated effects were assessed within the integrated TPB–SCT framework. Indirect and serial mediation effects were examined using bootstrapping with bias-corrected confidence intervals, thereby avoiding reliance on normality assumptions [31].
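The bootstrap logic can be illustrated outside Mplus. Below is a minimal percentile-bootstrap sketch for a single indirect effect on synthetic data; it simplifies the full model to one X → M → Y chain with no direct path, and the bias-corrected variant used in the study applies an additional z-based adjustment to the same resampled distribution. All values are illustrative, not study estimates:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300

# Synthetic X -> M -> Y chain with true a = 0.5, b = 0.4 (indirect = 0.20)
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(scale=0.8, size=n)
y = 0.4 * m + rng.normal(scale=0.8, size=n)

def ols_slope(pred: np.ndarray, out: np.ndarray) -> float:
    """Slope from a simple OLS regression of out on pred (with intercept)."""
    X = np.column_stack([np.ones_like(pred), pred])
    beta, *_ = np.linalg.lstsq(X, out, rcond=None)
    return float(beta[1])

def indirect(x, m, y) -> float:
    """Indirect effect a*b: slope(X->M) times slope(M->Y)."""
    return ols_slope(x, m) * ols_slope(m, y)

# Resample respondents with replacement, re-estimate the indirect effect
boot = np.array([
    indirect(x[idx], m[idx], y[idx])
    for idx in (rng.integers(0, n, n) for _ in range(2000))
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect = {indirect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A confidence interval excluding zero indicates a statistically supported indirect effect without assuming the product term is normally distributed.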
Model fit was evaluated using multiple indices, including χ², CFI, TLI, RMSEA, and SRMR. These indices were interpreted collectively according to established threshold recommendations [30].
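Two of these indices have simple closed forms computable directly from chi-square statistics. A small sketch with hypothetical values (not the study's fit results):

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root Mean Square Error of Approximation: sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2: float, df: int, chi2_base: float, df_base: int) -> float:
    """Comparative Fit Index relative to the baseline (null) model."""
    num = max(chi2 - df, 0.0)
    den = max(chi2_base - df_base, num, 1e-12)
    return 1.0 - num / den

# Hypothetical chi-square values for illustration only
print(round(rmsea(540.0, 300, 540), 3))
print(round(cfi(540.0, 300, 5000.0, 351), 3))
```

Conventional thresholds (RMSEA at or below roughly 0.06, CFI at or above roughly 0.95) are guidelines to be weighed jointly, which is why the study interprets the indices collectively rather than in isolation.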
4.7. Analytical Procedure (Pseudocode Summary)
To enhance methodological transparency, the analytical workflow is summarized in Table 1. The pseudocode representation outlines sequential steps from instrument validation to structural model estimation and mediation analysis.
4.8. Formal Specification of the Structural Model
The structural framework was specified as a system of equations linking contextual, cognitive, motivational, and outcome constructs. The latent variables include TECH (technological exposure), SN (social norms and influence), KNOW (cybersecurity knowledge), ATT (attitudes toward cybersecurity protection), SE (self-efficacy), BI (behavioral intention), and CIA (information security outcomes).
The structural component of the model is specified as follows:

KNOW = β₁·TECH + β₃·SN + ε₁
SE = β₂·TECH + β₅·KNOW + ε₂
ATT = β₄·SN + β₆·KNOW + β₉·SE + ε₃
BI = β₇·KNOW + β₁₀·SE + β₁₂·ATT + ε₄
CIA = β₈·KNOW + β₁₁·SE + β₁₃·ATT + β₁₄·BI + ε₅
Here, β represents standardized structural coefficients and ε denotes disturbance terms. This specification captures a partially mediated behavioral system in which contextual antecedents influence CIA through sequential cognitive and motivational mechanisms, while also allowing complementary direct effects.
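For readers replicating the analysis in open-source tools, the same structural component can be written in lavaan-style regression syntax (accepted, for example, by R's lavaan or Python's semopy). This is a sketch of the regression block only, not the study's actual Mplus input; a full replication would also include the measurement (item-loading) definitions:

```python
# Lavaan-style regression block mirroring the structural equations;
# each line regresses the left-hand construct on its antecedents.
STRUCTURAL_DESC = """
KNOW ~ TECH + SN
SE   ~ TECH + KNOW
ATT  ~ SN + KNOW + SE
BI   ~ KNOW + SE + ATT
CIA  ~ KNOW + SE + ATT + BI
"""
print(STRUCTURAL_DESC.strip())
```

Each `~` line corresponds to one equation of the system above, so the five regressions jointly encode all fourteen hypothesized paths.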
4.9. Ethical Considerations
Ethical considerations were rigorously addressed throughout the research process. The Institutional Review Board (IRB) of Mahasarakham University granted ethical approval under approval number 375-429/2025. The protocol underwent exemption review and was approved for the period from 16 June 2025 to 15 June 2026.
Participation was voluntary, with informed consent obtained electronically from all respondents prior to survey completion. Participants were informed of the study’s purpose, the voluntary nature of participation, their right to withdraw at any time without consequence, and the exclusive academic use of the collected data.
No personally identifiable information, IP addresses, device identifiers, or traceable metadata were collected. All responses were recorded anonymously and analyzed in aggregate to ensure confidentiality. The dataset was stored securely, retained in accordance with institutional data governance policies, and accessed exclusively by the research team for scholarly purposes. The study fully complied with institutional ethical standards and applicable research governance guidelines.
6. Discussion
This study advances behavioral cybersecurity research by demonstrating that information security effectiveness in smart universities cannot be reduced to technological deployment or compliance intention alone. Instead, confidentiality, integrity, and availability (CIA) emerge through a structured, multi-stage cognitive–motivational process. By modeling CIA as an outcome variable rather than stopping at behavioral intention, the present research extends prior compliance-focused frameworks and contributes a performance-oriented perspective to higher education cybersecurity.
Unlike earlier studies that primarily predict policy compliance intention, the structural model empirically verifies that contextual exposure (TECH, SN) influences measurable security outcomes through sequential mediation involving knowledge formation, efficacy beliefs, attitudinal alignment, and behavioral intention. This layered mechanism clarifies how socio-technical environments translate into enacted protection of institutional information assets.
6.1. Security Effectiveness and the Protection of Information Assets
A central contribution of the study lies in empirically linking Behavioral Intention (BI) directly to CIA-based outcomes. As demonstrated in the structural model, BI exhibited the strongest direct effect on CIA, confirming that information security effectiveness fundamentally depends on intentional commitment to protective practices. While prior TPB-based cybersecurity studies frequently conceptualize intention as the final dependent variable, the present findings show that intention serves as a proximal mechanism leading to measurable protection of confidentiality, integrity, and availability.
Importantly, BI mediated the effects of Cybersecurity Knowledge (KNOW), Self-Efficacy (SE), and Attitudes (ATT) on CIA, indicating that cognitive and motivational resources must consolidate into behavioral commitment to produce tangible outcomes. This finding extends the socio-behavioral cybersecurity literature by demonstrating that awareness and capability become operationally meaningful only when translated into enacted protective behavior.
Self-Efficacy (SE) demonstrated both direct and indirect effects on CIA, supporting Social Cognitive Theory’s proposition that perceived capability influences both intention and behavioral execution. This dual pathway suggests that efficacy beliefs not only motivate protective behavior but also enhance implementation competence—an insight particularly relevant in decentralized smart university environments.
By contrast, KNOW did not function as a dominant direct predictor of CIA; its influence was primarily indirect. This result reinforces contemporary evidence indicating that knowledge alone does not guarantee performance unless integrated with motivational and efficacy-based mechanisms. Thus, the findings refine existing behavioral models by distinguishing foundational cognitive resources from proximal behavioral drivers.
6.2. Technology and Social Influence as Enabling Conditions
The study further clarifies the role of contextual drivers. Technology (TECH) and Social Norms & Influence (SN) did not directly predict CIA; instead, their influence was fully mediated by cognitive and motivational constructs. This explains why investments in digital infrastructure or policy reinforcement do not automatically translate into improved security outcomes.
While digital transformation research frequently emphasizes technological capability as a performance enabler, the present findings demonstrate that technological exposure primarily serves as a learning environment rather than a direct security determinant. Similarly, social norms exerted a stronger influence on KNOW than TECH did, highlighting the importance of institutional culture and peer reinforcement in shaping cybersecurity cognition.
These findings contribute beyond model comparison by empirically specifying how contextual variables operate through internal psychological pathways. Rather than confirming isolated effects, the study models a sequential mechanism that integrates environmental, cognitive, and motivational dimensions within a unified SEM framework.
6.3. Theoretical Contribution: Integrating TPB and SCT Toward CIA Outcomes
The integration of the Theory of Planned Behavior (TPB) and Social Cognitive Theory (SCT) constitutes a key theoretical contribution. While previous research often applies these frameworks independently, the present study demonstrates their complementary explanatory power within a unified structural model.
Attitudes and Self-Efficacy jointly shaped Behavioral Intention, while Self-Efficacy additionally exerted a direct effect on CIA, validating SCT’s executional dimension. The serial pathway, running from contextual exposure through knowledge, efficacy, attitude, and intention to CIA, empirically demonstrates a multi-stage behavioral process rather than a single-factor causal relationship.
Most importantly, by modeling CIA-based performance outcomes rather than compliance intention alone, this research extends TPB–SCT cybersecurity scholarship toward an outcome-oriented, effectiveness-based paradigm. This shift addresses a persistent gap in higher education cybersecurity research, where measurable protection of information assets has rarely been incorporated into behavioral structural models.
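The multi-stage serial pathway described above can be sketched numerically. The chain, loadings, and sample size below are illustrative assumptions, not the study's estimates; the sketch shows the product-of-coefficients logic by which a serial indirect effect is formed from the successive links TECH -> KNOW -> SE -> ATT -> BI -> CIA.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Illustrative serial chain with assumed path coefficients of 0.5 per link.
tech = rng.normal(size=n)
know = 0.5 * tech + rng.normal(size=n)
se   = 0.5 * know + rng.normal(size=n)
att  = 0.5 * se   + rng.normal(size=n)
bi   = 0.5 * att  + rng.normal(size=n)
cia  = 0.5 * bi   + rng.normal(size=n)

def slope(x, y):
    """Simple-regression slope of y on x."""
    return np.polyfit(x, y, 1)[0]

# The serial indirect effect is the product of the successive path slopes;
# each additional stage attenuates the transmitted effect multiplicatively.
links = [(tech, know), (know, se), (se, att), (att, bi), (bi, cia)]
serial_indirect = float(np.prod([slope(x, y) for x, y in links]))
print(f"estimated serial indirect effect: {serial_indirect:.3f}")
```

The multiplicative attenuation visible here is why distal contextual variables such as TECH and SN show no direct effect on CIA yet remain consequential through the chain.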
6.4. Practical Implications for Smart Universities
From a practical standpoint, the findings suggest that strengthening cybersecurity in smart universities requires behavioral system design rather than isolated technical interventions.
Because Behavioral Intention emerged as the strongest predictor of CIA, institutions should prioritize strategies that enhance commitment to protective conduct. Training programs should move beyond awareness transmission to build Self-Efficacy through applied scenarios and capability reinforcement. Additionally, cultivating normative environments that visibly reward secure behavior may amplify knowledge internalization and motivational alignment.
These implications underscore that effective cybersecurity governance in higher education depends on coordinated cognitive, motivational, and contextual strategies rather than infrastructure investment alone.
6.5. Synthesis and Novelty Statement
Overall, the study contributes a structured, behaviorally grounded explanation of CIA protection in digitally intensive higher education ecosystems. The novelty of this research lies in (1) integrating TPB and SCT within a unified structural equation model, (2) modeling confidentiality, integrity, and availability as measurable performance outcomes, and (3) empirically specifying a multi-stage mediation pathway linking contextual exposure to enacted security effectiveness.
By moving beyond compliance intention and explicitly operationalizing CIA outcomes, the study advances a performance-based understanding of cybersecurity in smart universities and provides a theoretically integrated foundation for future research.
7. Conclusions
7.1. Theoretical Contributions
This research advances the behavioral cybersecurity literature by empirically validating a multi-layered Theory of Planned Behavior–Social Cognitive Theory (TPB–SCT) framework that links contextual conditions to confidentiality, integrity, and availability (CIA) outcomes. Conceptualizing CIA as behaviorally enacted outcomes reframes cybersecurity effectiveness as a structured cognitive and motivational process instead of a solely technical safeguard.
The findings elucidate the mediating role of Cybersecurity Knowledge (KNOW) and identify Self-Efficacy (SE) as a key mechanism bridging cognition and behavior. Additionally, the prominence of Behavioral Intention (BI) supports current behavioral models that emphasize commitment as the most immediate determinant of secure conduct [18,32].
7.2. Practical Implications
The results suggest that universities undergoing digital transformation should:
Strengthen normative reinforcement mechanisms;
Build user confidence through applied training;
Foster positive evaluative attitudes toward cybersecurity;
Explicitly target intention formation.
In decentralized academic environments, sustainable CIA protection depends on intentional, confident, and culturally supported user behavior.
7.3. Limitations and Future Research
Several limitations should be acknowledged. First, the cross-sectional design restricts causal inference and captures behavioral perceptions at a single point in time. Longitudinal or experimental research could provide deeper insight into how cybersecurity intentions and practices evolve within dynamic smart university environments.
Second, all constructs—including the CIA outcome variable—were measured using self-reported perceptions. Although Harman’s single-factor test indicated that common method variance was unlikely to be a serious threat, statistical diagnostics cannot entirely eliminate potential social desirability or perceptual inflation effects. The CIA construct therefore reflects perceived information security effectiveness rather than objective technical system indicators such as incident logs, vulnerability scans, or telemetry-based security metrics.
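The Harman's single-factor check mentioned above can be sketched as follows. The item matrix below is simulated for illustration (the loading, item count, and sample size are assumptions); the procedure itself, examining the variance share of the first unrotated factor of the pooled survey items, is the standard diagnostic.

```python
import numpy as np

rng = np.random.default_rng(2)
n_resp, n_items = 300, 12

# Illustrative item responses: a modest common factor plus item-specific noise
# (in practice, the full pooled survey item matrix is used instead).
common = rng.normal(size=(n_resp, 1))
items = 0.4 * common + rng.normal(size=(n_resp, n_items))

# Harman's single-factor check: variance share captured by the first
# unrotated component of the item correlation matrix.
corr = np.corrcoef(items, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]          # eigenvalues, descending
first_factor_share = eigvals[0] / eigvals.sum()
print(f"first factor explains {first_factor_share:.1%} of total variance")

# Common-method variance is typically flagged when a single factor
# accounts for the majority (> 50%) of the total variance.
```

A share well below 50%, as in this sketch, is the pattern the test treats as evidence against severe common method variance, subject to the known limitations of the diagnostic.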
Future research should integrate behavioral survey data with objective cybersecurity indicators to enhance methodological robustness and external validity. Multilevel modeling approaches may further explore institutional-level variation in cybersecurity culture, governance structures, and digital maturity across universities. Cross-national comparisons would also strengthen the generalizability of the proposed framework.
7.4. Final Remarks
Cybersecurity effectiveness in smart universities is fundamentally behavioral. By demonstrating how contextual exposure is translated into CIA outcomes through cognitive and motivational mechanisms, this study provides a structured foundation for advancing user-centered cybersecurity theory and practice in higher education.