Article

Behavioral and Cognitive Pathways to Information Security Outcomes in Smart Universities

Mahasarakham Business School, Mahasarakham University, Maha Sarakham 44150, Thailand
* Author to whom correspondence should be addressed.
Data 2026, 11(3), 46; https://doi.org/10.3390/data11030046
Submission received: 17 January 2026 / Revised: 24 February 2026 / Accepted: 25 February 2026 / Published: 26 February 2026

Abstract

Cybersecurity effectiveness in digitally intensive university environments depends not only on technological safeguards but also on how individuals enact protective behaviors within decentralized systems. While prior research has largely emphasized compliance intention, limited empirical attention has examined how behavioral mechanisms translate into measurable confidentiality, integrity, and availability (CIA) outcomes in smart universities. This study develops and tests an integrated structural model grounded in the Theory of Planned Behavior and Social Cognitive Theory to examine how contextual exposure, cognitive resources, and motivational processes jointly influence security outcomes. Data from 540 respondents across multiple higher education institutions were analyzed using structural equation modeling (SEM). Behavioral intention (β = 0.489) emerges as the strongest predictor of CIA, followed by self-efficacy (β = 0.190). Cybersecurity knowledge influences CIA indirectly through attitudes and intention rather than through a dominant direct path. Technological exposure (β = 0.250) and social norms (β = 0.540) primarily strengthen knowledge formation. The model demonstrates strong empirical fit (CFI = 0.997; RMSEA = 0.057; SRMR = 0.015). These findings show that CIA protection in smart universities emerges through structured cognitive–motivational pathways in which awareness is transformed into capability and intention, rather than through technological exposure alone.

1. Introduction

Higher education institutions increasingly rely on digitally integrated infrastructures to support teaching, research, and administrative coordination. Learning management systems, cloud-based platforms, and interconnected campus technologies now underpin routine academic operations. While this integration enhances institutional efficiency and connectivity, it simultaneously expands exposure to cybersecurity threats. Incidents involving unauthorized access, data breaches, and service disruptions indicate that institutional vulnerability cannot be explained solely by technical configurations. Rather, security performance is shaped by how individuals engage with digital systems in everyday practice. Empirical research consistently shows that organizational security failures often stem from behavioral inconsistencies rather than from the absence of technological safeguards [1,2,3,4].
Formal policies and monitoring mechanisms, though essential, do not automatically produce secure behavior. Security-related decisions are influenced by perceived norms, social expectations, and evaluations of the practicality and effectiveness of protective measures [5,6]. Even when awareness is high, individuals may bypass prescribed routines if security controls are perceived as burdensome or ineffective [7]. Systematic reviews further demonstrate that knowledge acquisition alone rarely leads to sustained behavioral change without motivational reinforcement and contextual alignment [8,9,10]. These findings suggest that cybersecurity effectiveness should be conceptualized as a behavioral outcome embedded within organizational contexts, rather than as a purely technical attribute.
Theoretical explanations of cybersecurity behavior commonly draw on established behavioral frameworks. The Theory of Planned Behavior (TPB) posits behavioral intention as the immediate antecedent of action, shaped by attitudes, subjective norms, and perceived behavioral control [11]. Social Cognitive Theory (SCT), in turn, emphasizes self-efficacy, defined as individuals’ perceived capability to perform protective actions under conditions of risk [12]. Contemporary research confirms that intention and self-efficacy are central predictors of cybersecurity compliance [2,13]. At the same time, studies of policy compliance underscore the influence of institutional governance structures in shaping behavioral responses beyond individual cognition [14]. Taken together, these perspectives indicate that cybersecurity outcomes emerge from the interaction of cognitive evaluation, perceived capability, social influence, and contextual reinforcement.
These dynamics are particularly salient in higher education. Universities operate as decentralized environments characterized by heterogeneous user populations and uneven levels of cybersecurity expertise [15,16]. Norms emphasizing openness, collaboration, and academic autonomy coexist with obligations to protect sensitive research data, intellectual property, and personal information. This structural tension complicates centralized enforcement and heightens the importance of user behavior in sustaining institutional security.
From a behavioral standpoint, information security effectiveness can be understood as the extent to which routine user actions preserve confidentiality, integrity, and availability (CIA). Rather than treating CIA as a static technical property of information systems, it may be conceptualized as an outcome enacted through structured behavioral processes. This study therefore examines how technological exposure and perceived social expectations shape cybersecurity knowledge, protective attitudes, self-efficacy, and behavioral intention, and how these factors collectively influence CIA-related outcomes. Structural equation modeling is employed to test an integrated framework grounded in TPB and SCT, clarifying the pathways through which awareness is translated into enacted protective behavior within smart university environments.
Despite the growing body of research on cybersecurity behavior, important limitations remain. Many studies apply TPB or SCT independently, without integrating motivational and capability-based mechanisms within a unified structural model [9,17]. Moreover, much of the literature centers on compliance intention without empirically modeling how behavioral processes translate into measurable CIA-based outcomes [1,18]. In higher education contexts specifically, research frequently emphasizes awareness or descriptive compliance indicators, while mediation mechanisms linking contextual exposure, cognitive processing, motivational alignment, and enacted security outcomes remain underdeveloped [2,16].
To address these gaps, this study integrates TPB and SCT within a single structural framework linking contextual antecedents, cognitive resources, motivational mechanisms, and CIA outcomes. By specifying both mediation and serial mediation pathways, the proposed model captures how cybersecurity knowledge and self-efficacy interact with attitudinal and normative influences to shape protective behavior. In doing so, the study advances behavioral cybersecurity scholarship by offering an outcome-oriented model tailored to decentralized smart university environments and by providing empirical evidence on how user-centered mechanisms sustain information protection in digitally intensive higher education institutions.
The remainder of this article is organized as follows. Section 2 reviews the relevant literature. Section 3 develops the research hypotheses. Section 4 describes the research design and analytical procedures. Section 5 presents the empirical results. The final sections discuss theoretical and practical implications and conclude with key contributions and directions for future research.

2. Literature Review

2.1. From Smart Cities to Smart Universities

The concept of the smart university derives from the broader smart city paradigm, which emphasizes digitally integrated infrastructures, data ecosystems, and sustainability-oriented governance models [19,20]. Within higher education, this paradigm materializes in interconnected campus environments supported by Internet of Things (IoT) systems, analytics platforms, and adaptive digital services [21].
Recent scholarship has proposed structured frameworks to guide smart campus development, highlighting infrastructure integration, governance coordination, and performance evaluation as core dimensions [22,23]. Complementary technological surveys demonstrate how IoT architectures and data-driven platforms enhance operational efficiency and environmental sustainability in university settings [24,25].
While this body of literature provides strong architectural and strategic foundations, it gives comparatively limited attention to the cognitive and behavioral processes that shape secure system use. As digital infrastructures expand, cybersecurity effectiveness becomes inseparable from how users interact with these socio-technical ecosystems. Extending the smart university framework therefore requires integrating infrastructural perspectives with behavioral mechanisms governing the protection of confidentiality, integrity, and availability (CIA).

2.2. Behavioral Foundations of Cybersecurity

Contemporary cybersecurity research increasingly recognizes that organizational security effectiveness depends as much on human behavior as on technological safeguards [3,4]. Empirical studies consistently show that security incidents often originate in routine user practices, cognitive shortcuts, and informal workarounds—even in environments governed by formal policies [1,26]. Cybersecurity outcomes, in this view, are embedded in everyday organizational activity rather than confined to technical system design.
Systematic reviews identify knowledge, risk perception, social influence, and motivational dynamics as central behavioral determinants [8]. Recent empirical work further suggests that behavioral resilience—particularly in phishing detection—depends not only on awareness but also on evaluative judgment and situational interpretation [27]. Technology exposure has likewise been shown to shape awareness formation and security perceptions, indicating that digital environments influence cognitive risk processing [28].
Despite these advances, many studies examine determinants independently, without modeling how contextual exposure is cognitively processed and translated into measurable security outcomes. Research on cybersecurity culture similarly emphasizes normative reinforcement, organizational climate, and governance structures as drivers of compliance [6,9]. Yet the sequential pathways linking contextual conditions, cognitive appraisal, perceived capability, motivational intention, and enacted CIA protection remain insufficiently specified.
These limitations underscore the need for an integrated behavioral framework capable of capturing how contextual exposure, cognitive evaluation, self-efficacy, and intention jointly shape measurable cybersecurity outcomes.

2.3. Cybersecurity in Higher Education Contexts

Higher education institutions constitute a distinctive cybersecurity environment characterized by decentralization, heterogeneous user populations, and institutional norms that privilege openness, collaboration, and academic autonomy [15]. These structural features generate vulnerabilities that differ from those found in centralized corporate settings.
Universities face elevated exposure to cyber threats due to expanded digital infrastructures, cloud-based learning management systems, distributed access privileges, and widespread Bring-Your-Own-Device (BYOD) practices. The rapid digital acceleration during and following the COVID-19 pandemic further intensified these risks, expanding institutional attack surfaces and increasing dependence on remote platforms [16].
The pandemic fundamentally reshaped cybersecurity conditions in higher education. The abrupt transition to remote instruction and distributed administration amplified reliance on personal devices, home networks, and cloud-based authentication systems. Reports during and after this period document heightened phishing campaigns and ransomware incidents targeting university communities. These developments illustrate how cybersecurity vulnerabilities in universities are not only structurally embedded but also situationally amplified during periods of rapid digital transformation.
Existing research in higher education cybersecurity broadly falls into three streams. Awareness-oriented studies assess cybersecurity knowledge and digital competence among students and staff [15]. Although diagnostically valuable, such studies are often descriptive and do not model how contextual exposure leads to enacted security outcomes. Compliance-focused research examines adherence to security policies and institutional governance mechanisms [14,15], yet frequently treats intention as the terminal outcome rather than modeling CIA-based effectiveness. More recent human-factor and resilience-oriented studies investigate training effectiveness, phishing detection, and cognitive risk evaluation [28,29]. While these contributions advance understanding of behavioral determinants, they rarely integrate contextual exposure, knowledge formation, self-efficacy, and intention within a unified structural framework linked directly to CIA outcomes.
Consequently, despite expanding scholarship, two gaps remain salient: the limited specification of mediation pathways connecting contextual conditions to CIA-based outcomes, and the absence of integrated models combining motivational (TPB-based) and capability-based (SCT-based) mechanisms within smart university ecosystems.
The present study addresses these limitations by specifying and empirically testing a multi-stage behavioral pathway that connects technological exposure and social norms to cognitive development, motivational alignment, and measurable CIA outcomes in digitally intensive higher education environments.

2.4. Research Gap and Theoretical Positioning

Three interrelated limitations emerge from the preceding literature. First, behavioral cybersecurity research frequently applies the Theory of Planned Behavior (TPB) or Social Cognitive Theory (SCT) in isolation, limiting theoretical integration between intention formation and self-efficacy mechanisms. Second, many studies conceptualize compliance intention as the endpoint variable rather than modeling confidentiality, integrity, and availability (CIA)-based security outcomes. Third, structured mediation pathways linking contextual exposure, cognitive processing, and enacted security effectiveness remain underdeveloped in smart university contexts.
To advance the field, this study integrates TPB and SCT within a unified structural equation framework and empirically models CIA-based outcomes in digitally intensive higher education environments. By doing so, it repositions cybersecurity effectiveness as a structured behavioral outcome emerging from the interaction of contextual, cognitive, and motivational processes.

3. Hypotheses Development

3.1. Integrating TPB and Social Cognitive Perspectives

Explaining cybersecurity protection in smart university environments requires moving beyond isolated psychological predictors toward a more integrated understanding of how contextual exposure, cognitive resources, motivational evaluation, and perceived capability interact. Rather than treating behavioral determinants independently, the present framework conceptualizes protection as the outcome of a structured cognitive–motivational process.
The Theory of Planned Behavior (TPB) identifies Behavioral Intention (BI) as the most immediate antecedent of action, shaped primarily by Attitudes toward Cybersecurity Protection (ATT). Social Cognitive Theory (SCT) complements this perspective by emphasizing Self-Efficacy (SE) and knowledge-based competence as essential enablers of effective behavior. Whereas TPB explains motivational orientation, SCT clarifies how perceived capability conditions the translation of intention into action.
Integrating these perspectives, the proposed framework conceptualizes cybersecurity protection as a layered behavioral system. Technology exposure (TECH) and Social Norms & Influence (SN) function as contextual drivers shaping Cybersecurity Knowledge (KNOW) and Self-Efficacy (SE). These cognitive resources influence Attitudes (ATT) and Behavioral Intention (BI), with enacted protection ultimately reflected in confidentiality, integrity, and availability (CIA).

3.2. Technology (TECH) as a Driver of Knowledge and Capability

Technology (TECH) reflects the degree to which individuals are exposed to and interact with institutional digital systems. Repeated engagement with digital platforms facilitates experiential learning, increasing familiarity with security procedures, threat recognition, and system functionality.
Such exposure is expected to strengthen Cybersecurity Knowledge (KNOW) by deepening users’ understanding of risks and safeguards. At the same time, regular technological interaction may enhance confidence in managing digital environments, thereby reinforcing Self-Efficacy (SE). Individuals who routinely navigate digital systems are more likely to perceive themselves as capable of responding effectively to cybersecurity threats.
H1: 
Technology (TECH) positively influences Cybersecurity Knowledge (KNOW).
H2: 
Technology (TECH) positively influences Self-Efficacy (SE).

3.3. Social Norms & Influence (SN) as a Normative Reinforcement Mechanism

Social Norms & Influence (SN) capture perceived expectations and social pressures regarding appropriate cybersecurity conduct. In organizational settings, cybersecurity learning rarely occurs in isolation; it is shaped by peer observation, institutional signaling, and shared standards.
When secure behavior is visibly endorsed and reinforced, individuals are more inclined to seek information, engage in security-related discussions, and strengthen their Cybersecurity Knowledge (KNOW). Normative expectations also shape evaluative orientation, influencing Attitudes toward Cybersecurity Protection (ATT). When security is socially valued, it becomes cognitively and morally salient.
H3: 
Social Norms & Influence (SN) positively influence Cybersecurity Knowledge (KNOW).
H4: 
Social Norms & Influence (SN) positively influence Attitudes toward Cybersecurity Protection (ATT).

3.4. Cybersecurity Knowledge (KNOW) as a Foundational Cognitive Resource

Cybersecurity Knowledge (KNOW) enables individuals to recognize threats, interpret vulnerabilities, and evaluate appropriate responses. Although knowledge alone does not guarantee secure behavior, it provides the cognitive foundation upon which capability and motivation develop.
Greater knowledge is expected to reinforce Self-Efficacy (SE), as individuals who understand security principles are more likely to feel competent applying them. Knowledge also shapes Attitudes (ATT) by clarifying the value and necessity of protective behavior. Informed individuals are therefore more likely to form stronger Behavioral Intentions (BI) to act securely.
Beyond these mediated pathways, knowledge may also exert a direct influence on CIA outcomes by improving judgment accuracy in real-world decision contexts, particularly when rapid responses are required.
H5: 
Cybersecurity Knowledge (KNOW) positively influences Self-Efficacy (SE).
H6: 
Cybersecurity Knowledge (KNOW) positively influences Attitudes toward Cybersecurity Protection (ATT).
H7: 
Cybersecurity Knowledge (KNOW) positively influences Behavioral Intention (BI).
H8: 
Cybersecurity Knowledge (KNOW) positively influences CIA.

3.5. Self-Efficacy (SE) as a Motivational Enabler

Self-Efficacy (SE) refers to individuals’ perceived capability to perform cybersecurity protection behaviors under real-world conditions. Even when knowledge is sufficient, individuals may refrain from acting if they doubt their ability to implement protective measures effectively. Perceived capability therefore functions as a critical bridge between understanding and execution.
Higher levels of Self-Efficacy are expected to strengthen Attitudes (ATT), as individuals who feel capable of acting securely are more likely to evaluate protective behaviors as manageable and worthwhile. Self-Efficacy also reinforces Behavioral Intention (BI) by reducing uncertainty and increasing confidence in one’s ability to act.
In addition, efficacy beliefs may exert a direct influence on CIA outcomes. Individuals with stronger confidence in their protective abilities are more likely to implement security measures consistently and competently, thereby sustaining confidentiality, integrity, and availability.
H9: 
Self-Efficacy (SE) positively influences Attitudes toward Cybersecurity Protection (ATT).
H10: 
Self-Efficacy (SE) positively influences Behavioral Intention (BI).
H11: 
Self-Efficacy (SE) positively influences CIA.

3.6. Attitudes (ATT), Behavioral Intention (BI), and CIA

Attitudes toward Cybersecurity Protection (ATT) represent individuals’ overall evaluative orientation toward secure digital conduct. When protective behavior is perceived as important and beneficial, individuals are more likely to develop a strong Behavioral Intention (BI).
Behavioral Intention serves as the most proximal determinant of enacted behavior. Stronger intention is therefore expected to translate into improved CIA outcomes. In addition, favorable attitudes may contribute directly to CIA by encouraging consistent adherence to security standards, even in situations where intention is not explicitly deliberated.
H12: 
Attitudes toward Cybersecurity Protection (ATT) positively influence Behavioral Intention (BI).
H13: 
Attitudes toward Cybersecurity Protection (ATT) positively influence CIA.
H14: 
Behavioral Intention (BI) positively influences CIA.

3.7. Integrated Research Model

The integrated model conceptualizes cybersecurity protection as a progressive behavioral pathway (see Figure 1). Contextual drivers (TECH and SN) shape cognitive development (KNOW and SE), which influences motivational mechanisms (ATT and BI), culminating in CIA outcomes. By specifying both mediated and complementary direct effects, the framework captures cybersecurity protection as an interconnected cognitive–motivational system rather than a single-factor explanation.

4. Materials and Methods

4.1. Research Design

This study employed a quantitative, cross-sectional survey design to examine theoretically specified relationships among behavioral and cognitive constructs related to information security outcomes in higher education institutions. The objective was to evaluate structural associations among latent constructs derived from established behavioral theories, rather than to establish experimentally verified causality or temporal sequencing.
Structural equation modeling (SEM) was selected as the primary analytical technique because it enables the simultaneous estimation of complex relationships among multiple latent variables, including direct, indirect, and serial mediation effects. SEM further permits concurrent evaluation of measurement reliability, construct validity, and structural pathways within a unified latent-variable framework, making it particularly suitable for theory-driven behavioral research [30,31].
Although the hypothesized paths are theoretically directional, the cross-sectional design limits definitive causal inference. Accordingly, the findings should be interpreted as statistically supported, theory-consistent associations observed at a single point in time. Longitudinal or experimental designs may further clarify temporal dynamics among the constructs.

4.2. Population and Sample

Data were collected using an online questionnaire distributed through official communication systems of participating higher education institutions in Thailand. The survey link was disseminated via university-wide email lists and internal academic platforms at faculty and departmental levels. Institutional coordinators facilitated broad circulation across multiple universities. No material incentives were offered. The survey remained open for a defined period to ensure adequate participation.
The study employed an institutionally distributed, self-administered survey approach rather than a randomized probability sampling design. Although this strategy yielded a substantial and heterogeneous sample (N = 540), it does not constitute stratified or fully randomized sampling. The dataset should therefore be interpreted as institutionally disseminated rather than statistically representative.
Because participation was voluntary, self-selection bias cannot be entirely excluded. Individuals with greater cybersecurity awareness or engagement may have been more inclined to respond. Nevertheless, the inclusion of respondents from multiple institutions and the relatively large sample size enhance variability and strengthen analytical robustness.

4.3. Measurement Instruments

A structured questionnaire was developed to operationalize the constructs specified in the research model. Measurement items were adapted from validated instruments in cybersecurity and behavioral research. Minor wording adjustments ensured contextual clarity within higher education settings while preserving theoretical meaning.
The instrument assessed seven latent constructs: technological exposure (TECH), social norms and influence (SN), cybersecurity knowledge (KNOW), attitudes toward cybersecurity protection (ATT), self-efficacy (SE), behavioral intention (BI), and information security outcomes (CIA). All items were measured using a five-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree).
Information security outcomes were operationalized as perceived results reflecting the extent to which confidentiality, integrity, and availability were maintained during routine system interaction. This perceptual operationalization aligns with the study’s behavioral orientation, conceptualizing CIA as user-enacted outcomes rather than purely technical system attributes. CIA was modeled as a higher-order reflective construct comprising three first-order dimensions—confidentiality, integrity, and availability—thereby preserving dimensional distinctiveness while capturing an overarching security outcome. The full measurement instrument is provided in Appendix A.

4.4. Instrument Validation and Pilot Testing

Prior to full-scale data collection, the questionnaire underwent expert review by specialists in information security, information systems, and higher education administration. The review assessed content validity, construct alignment, and clarity. Minor revisions were implemented to enhance interpretability and contextual fit.
A pilot study was subsequently conducted to evaluate instrument functionality and detect potential ambiguities. Results indicated consistent item interpretation and smooth completion. Preliminary reliability analysis demonstrated acceptable internal consistency across constructs, supporting progression to the main survey.

4.5. Data Collection Procedure

The main survey was administered online through institutional communication channels. Participation was voluntary, and respondents were informed of the study’s purpose before completing the questionnaire. To reduce response bias and encourage candid participation, anonymity was guaranteed and no personally identifiable information was collected.
Only fully completed questionnaires were retained for analysis, ensuring response completeness and minimizing the need for post hoc data imputation.

4.6. Data Analysis

A two-stage SEM procedure was implemented. First, descriptive statistics were computed to summarize respondent characteristics and examine distributional properties. The dataset was screened for missing values, extreme responses, and violations of normality assumptions. No critical issues were identified.
Second, measurement and structural models were estimated using Mplus Version 8.3. Confirmatory factor analysis (CFA) assessed measurement adequacy. Standardized factor loadings, Cronbach’s alpha, composite reliability (CR), and average variance extracted (AVE) were evaluated to establish internal consistency and convergent validity. Discriminant validity was examined using inter-construct correlations and AVE comparisons in accordance with established SEM guidelines [30].
Following confirmation of measurement validity, the structural model was estimated to test the fourteen hypothesized relationships (H1–H14). Both direct and mediated effects were assessed within the integrated TPB–SCT framework. Indirect and serial mediation effects were examined using bootstrapping with bias-corrected confidence intervals, thereby avoiding reliance on normality assumptions [31].
Model fit was evaluated using multiple indices, including χ2, CFI, TLI, RMSEA, and SRMR. These indices were interpreted collectively according to established threshold recommendations [30].
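The bootstrap mediation step described above can be sketched as follows. This is a minimal numpy illustration on simulated standardized data, not the study's dataset: the three-variable chain (KNOW → ATT → BI) and the path values 0.5 and 0.6 are assumptions chosen for demonstration, and the bias-corrected percentile interval follows the standard z0 adjustment.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(42)

# Simulated standardized data for a single mediation chain
# (illustrative only, not the survey data): true indirect effect = 0.5 * 0.6.
n = 540
know = rng.normal(size=n)
att = 0.5 * know + rng.normal(scale=(1 - 0.5**2) ** 0.5, size=n)
bi = 0.6 * att + rng.normal(scale=(1 - 0.6**2) ** 0.5, size=n)

def indirect_effect(x, m, y):
    """OLS a-path (x -> m) times b-path (m -> y, controlling for x)."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]
    return a * b

est = indirect_effect(know, att, bi)

# Nonparametric bootstrap of the indirect effect.
n_boot = 2000
boot = np.empty(n_boot)
for i in range(n_boot):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(know[idx], att[idx], bi[idx])

# Bias-corrected percentile interval: z0 adjusts the quantiles for
# median bias of the bootstrap distribution relative to the estimate.
nd = NormalDist()
z0 = nd.inv_cdf((boot < est).mean())
lo_q, hi_q = (nd.cdf(2 * z0 + nd.inv_cdf(q)) for q in (0.025, 0.975))
ci = np.quantile(boot, [lo_q, hi_q])
# Mediation is supported when the interval excludes zero.
```

Because the quantiles are taken from the empirical bootstrap distribution, no normality assumption is imposed on the indirect effect, which is the rationale given in Section 4.6 for preferring bootstrapped over delta-method intervals.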

4.7. Analytical Procedure (Pseudocode Summary)

To enhance methodological transparency, the analytical workflow is summarized in Table 1. The pseudocode representation outlines sequential steps from instrument validation to structural model estimation and mediation analysis.

4.8. Formal Specification of the Structural Model

The structural framework was specified as a system of equations linking contextual, cognitive, motivational, and outcome constructs. The latent variables include TECH (technological exposure), SN (social norms and influence), KNOW (cybersecurity knowledge), ATT (attitudes toward cybersecurity protection), SE (self-efficacy), BI (behavioral intention), and CIA (information security outcomes).
The structural component of the model is specified as follows:
KNOW = β1·TECH + β2·SN + ε1
SE = β3·TECH + β4·KNOW + ε2
ATT = β5·KNOW + β6·SE + β7·SN + ε3
BI = β8·ATT + β9·SE + β10·KNOW + ε4
CIA = β11·BI + β12·SE + β13·ATT + β14·KNOW + ε5
Here, β represents standardized structural coefficients and ε denotes disturbance terms. This specification captures a partially mediated behavioral system in which contextual antecedents influence CIA through sequential cognitive and motivational mechanisms, while also allowing complementary direct effects.
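Under this specification, the decomposition of effects follows mechanically from the coefficient matrix: for a recursive model, total effects among endogenous variables equal (I − B)⁻¹ − I. The sketch below uses placeholder coefficients (not the paper's estimates) to show how the total effect of KNOW on CIA separates into its direct path and the sum of all mediated paths.

```python
import numpy as np

# Illustrative standardized coefficients (placeholders, not the paper's
# estimates), indexed as in the structural equations.
# Endogenous order: 0=KNOW, 1=SE, 2=ATT, 3=BI, 4=CIA.
B = np.zeros((5, 5))
B[1, 0] = 0.40  # beta4:  KNOW -> SE
B[2, 0] = 0.30  # beta5:  KNOW -> ATT
B[2, 1] = 0.25  # beta6:  SE   -> ATT
B[3, 2] = 0.45  # beta8:  ATT  -> BI
B[3, 1] = 0.20  # beta9:  SE   -> BI
B[3, 0] = 0.15  # beta10: KNOW -> BI
B[4, 3] = 0.49  # beta11: BI   -> CIA
B[4, 1] = 0.19  # beta12: SE   -> CIA
B[4, 2] = 0.10  # beta13: ATT  -> CIA
B[4, 0] = 0.08  # beta14: KNOW -> CIA

# Total effects = (I - B)^-1 - I; exact here because B is strictly
# lower-triangular (recursive model), so the series I + B + B^2 + ... ends.
total = np.linalg.inv(np.eye(5) - B) - np.eye(5)

direct = B[4, 0]                  # KNOW -> CIA direct path
indirect = total[4, 0] - direct   # sum over every mediated path
```

With these placeholder values the mediated paths dominate the direct path, mirroring the partially mediated structure the specification is designed to capture.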

4.9. Ethical Considerations

Ethical considerations were rigorously addressed throughout the research process. The Institutional Review Board (IRB) of Mahasarakham University granted ethical approval under approval number 375-429/2025. The protocol underwent exemption review and was approved for the period from 16 June 2025 to 15 June 2026.
Participation was voluntary, with informed consent obtained electronically from all respondents prior to survey completion. Participants were informed of the study’s purpose, the voluntary nature of participation, their right to withdraw at any time without consequence, and the exclusive academic use of the collected data.
No personally identifiable information, IP addresses, device identifiers, or traceable metadata were collected. All responses were recorded anonymously and analyzed in aggregate to ensure confidentiality. The dataset was stored securely, retained in accordance with institutional data governance policies, and accessed exclusively by the research team for scholarly purposes. The study fully complied with institutional ethical standards and applicable research governance guidelines.

5. Results

5.1. Demographic Profile of Respondents

The final dataset comprised 540 valid responses. As summarized in Table 2, the sample was predominantly female (71.2%), with males accounting for 28.8%. Regarding age distribution, the majority of respondents were aged 20–29 years (53.5%), followed by participants under 20 years (38.5%). Individuals aged 30 years and above represented a smaller proportion of the sample.
With respect to cybersecurity training experience (see Table 2), 54.3% of respondents reported never attending formal cybersecurity training. Approximately 30.9% had attended training once, 12.6% participated at least annually, and only 2.2% attended more than twice per year. This distribution suggests limited exposure to structured cybersecurity education despite high digital engagement.
To assess potential common method bias, Harman’s single-factor test was conducted. The first unrotated factor accounted for 26.189% of total variance, which is below the conservative 40% threshold for serious bias. This result indicates that common method variance is unlikely to materially affect the study’s validity.
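Harman's test is commonly implemented by extracting the first unrotated factor from all items and checking its share of total variance. A minimal sketch, approximating the first factor by the first principal component of the item correlation matrix; the synthetic data here are an illustrative assumption, not the study's responses:

```python
import numpy as np

def harman_first_factor_share(data):
    """Share of total variance captured by the first unrotated
    component of the item correlation matrix."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)  # ascending order
    return eigvals[-1] / eigvals.sum()  # largest eigenvalue / total

# Synthetic responses: 540 cases, 6 items sharing one weak common factor
rng = np.random.default_rng(1)
common = rng.standard_normal((540, 1))
items = 0.4 * common + rng.standard_normal((540, 6))

share = harman_first_factor_share(items)
biased = share > 0.40  # conservative threshold applied in the study
```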

5.2. Measurement Model Evaluation

Prior to structural hypothesis testing, the measurement model was evaluated using Confirmatory Factor Analysis (CFA). As reported in Table 3, all global fit indices meet established SEM criteria. The relative chi-square (χ2/df = 2.0579) is below the recommended threshold of 3.00. The RMSEA value (0.044; 90% CI = 0.042–0.047), CFI (0.956), TLI (0.951), and SRMR (0.039) collectively indicate good model fit.
Reliability and convergent validity statistics are presented in Table 4. Composite reliability (CR) values range from 0.871 to 0.931, exceeding the recommended threshold of 0.70. Cronbach’s alpha values are all above 0.83, confirming internal consistency. Average Variance Extracted (AVE) values surpass 0.50 for all constructs, supporting convergent validity.
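CR and AVE are simple functions of the standardized loadings. A minimal sketch of the standard formulas, using hypothetical loadings for a five-item construct (the values below are not taken from the study):

```python
def composite_reliability(loadings):
    """CR = (sum λ)² / ((sum λ)² + sum(1 − λ²)),
    assuming standardized loadings and uncorrelated errors."""
    s = sum(loadings)
    error = sum(1.0 - l * l for l in loadings)
    return s * s / (s * s + error)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l * l for l in loadings) / len(loadings)

# Hypothetical five-item construct with uniform loadings of 0.80
lams = [0.80] * 5
cr = composite_reliability(lams)        # 16 / (16 + 1.8) ≈ 0.899
ave = average_variance_extracted(lams)  # 0.64
```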
The standardized factor loadings are visually displayed in Figure 2, which presents the measurement model with standardized coefficients (STDYX). All factor loadings are strong and statistically significant (p < 0.05), reinforcing the construct validity summarized in Table 3 and Table 4.

5.3. Structural Model Assessment and Hypothesis Testing

The structural model demonstrated satisfactory global fit. As shown in Table 5, the goodness-of-fit indices collectively indicate that the hypothesized structural model provides an adequate representation of the observed data (χ2/df = 2.7656; RMSEA = 0.057; CFI = 0.997; TLI = 0.989; SRMR = 0.015). The values of CFI and TLI exceed the recommended 0.95 threshold, while RMSEA and SRMR remain within acceptable limits, confirming that the model achieves both absolute and incremental fit standards.
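As a sanity check, the reported RMSEA values follow directly from the relative chi-square and sample size under the common formula RMSEA = sqrt(max(χ² − df, 0) / (df·(N − 1))); note that some software packages divide by N rather than N − 1:

```python
import math

def rmsea_from_ratio(chi2_df_ratio, n):
    """RMSEA = sqrt(max(chi2/df - 1, 0) / (N - 1))."""
    return math.sqrt(max(chi2_df_ratio - 1.0, 0.0) / (n - 1))

# Relative chi-square values reported for the measurement and
# structural models, with N = 540
rmsea_measurement = rmsea_from_ratio(2.0579, 540)  # ≈ 0.044
rmsea_structural = rmsea_from_ratio(2.7656, 540)   # ≈ 0.057
```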
The standardized structural path coefficients are summarized in Table 6, which reports each path’s magnitude (β), statistical significance (p-value), and the resulting hypothesis decision. The overall structural configuration and the relative strength of relationships among constructs are illustrated in Figure 3, which presents the integrated TPB–SCT framework in diagrammatic form.
With respect to contextual antecedents, both Technology (TECH) and Social Norms & Influence (SN) significantly predicted Cybersecurity Knowledge (KNOW), as detailed in Table 6. Social norms exerted the stronger influence (β = 0.540, p < 0.001), indicating that institutional and peer expectations play a more substantial role than technological exposure in shaping knowledge formation. Technology also demonstrated a direct positive effect on Self-Efficacy (SE) (β = 0.113, p = 0.002), while KNOW exhibited a strong effect on SE (β = 0.648, p < 0.001), suggesting that cognitive understanding substantially enhances perceived capability.
Attitudes toward Cybersecurity Protection (ATT) were significantly influenced by KNOW, SE, and SN, reflecting the combined impact of cognitive resources, efficacy beliefs, and normative pressures. Behavioral Intention (BI) was predicted by ATT (β = 0.341), SE (β = 0.398), and KNOW (β = 0.184), confirming that intention formation represents the convergence of evaluative, motivational, and knowledge-based mechanisms.
Regarding the outcome construct, Behavioral Intention (BI) exhibited the strongest direct effect on CIA (β = 0.489), as shown in Table 6 and visually emphasized in Figure 3. Self-Efficacy (SE), Attitudes (ATT), and Cybersecurity Knowledge (KNOW) also demonstrated significant direct effects on CIA. The pattern of coefficients indicates a partially mediated structural configuration: contextual factors influence CIA primarily through sequential cognitive and motivational pathways, while retaining complementary direct effects from efficacy, attitude, and knowledge.

5.4. Explanatory Power of the Structural Model

The coefficient of determination (R2) values for endogenous constructs are reported in Table 7. Cybersecurity Knowledge (KNOW) achieved R2 = 0.557, indicating moderate explanatory power. Self-Efficacy (SE) (R2 = 0.693), Attitudes (ATT) (R2 = 0.677), and Behavioral Intention (BI) (R2 = 0.736) demonstrate substantial explanatory strength. Notably, Information Security Outcomes (CIA) achieved R2 = 0.803, indicating strong predictive capability of the integrated framework.

5.5. Decomposed Effects on CIA

Total, direct, and indirect effects on CIA are presented in Table 8. Cybersecurity Knowledge (KNOW) exhibited the largest total effect (0.553), with the majority transmitted through indirect pathways (0.446), indicating mediation via SE, ATT, and BI.
Self-Efficacy (SE) demonstrated a total effect of 0.522, combining both direct (0.176) and indirect (0.346) influences. Technology (TECH) and Social Norms & Influence (SN) affected CIA exclusively through indirect pathways, with total effects of 0.203 and 0.430, respectively.
These decomposed effects, summarized in Table 8, confirm that contextual antecedents influence security outcomes through structured cognitive and motivational mediation rather than through immediate direct impact.
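In a recursive path model these decompositions follow from the direct-effect matrix B, since the total-effects matrix equals B + B² + … = (I − B)⁻¹ − I. The sketch below applies this identity to the rounded coefficients in Table 6; because the published decomposition rests on bootstrapped estimates of the unrounded parameters, values computed this way differ somewhat from those in Table 8:

```python
import numpy as np

# Direct-effect matrix B[i, j]: standardized path from variable j to i,
# ordered TECH, SN, KNOW, SE, ATT, BI, CIA (rounded Table 6 values)
B = np.zeros((7, 7))
B[2, 0], B[2, 1] = 0.250, 0.540                    # KNOW <- TECH, SN
B[3, 0], B[3, 2] = 0.113, 0.648                    # SE   <- TECH, KNOW
B[4, 2], B[4, 3], B[4, 1] = 0.124, 0.537, 0.229    # ATT  <- KNOW, SE, SN
B[5, 4], B[5, 3], B[5, 2] = 0.341, 0.398, 0.184    # BI   <- ATT, SE, KNOW
B[6, 5], B[6, 3], B[6, 4], B[6, 2] = 0.489, 0.190, 0.166, 0.119  # CIA

# The model is recursive, so B is nilpotent and the series is exact
total = np.linalg.inv(np.eye(7) - B) - np.eye(7)  # direct + all indirect paths
indirect = total - B

tech_total_on_cia = total[6, 0]  # TECH reaches CIA only through mediators
```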

6. Discussion

This study advances behavioral cybersecurity research by demonstrating that information security effectiveness in smart universities cannot be reduced to technological deployment or compliance intention alone. Instead, confidentiality, integrity, and availability (CIA) emerge through a structured, multi-stage cognitive–motivational process. By modeling CIA as an outcome variable rather than stopping at behavioral intention, the present research extends prior compliance-focused frameworks and contributes a performance-oriented perspective to higher education cybersecurity.
Unlike earlier studies that primarily predict policy compliance intention, the structural model empirically verifies that contextual exposure (TECH, SN) influences measurable security outcomes through sequential mediation involving knowledge formation, efficacy beliefs, attitudinal alignment, and behavioral intention. This layered mechanism clarifies how socio-technical environments translate into enacted protection of institutional information assets.

6.1. Security Effectiveness and the Protection of Information Assets

A central contribution of the study lies in empirically linking Behavioral Intention (BI) directly to CIA-based outcomes. As demonstrated in the structural model, BI exhibited the strongest direct effect on CIA, confirming that information security effectiveness fundamentally depends on intentional commitment to protective practices. While prior TPB-based cybersecurity studies frequently conceptualize intention as the final dependent variable, the present findings show that intention serves as a proximal mechanism leading to measurable protection of confidentiality, integrity, and availability.
Importantly, BI mediated the effects of Cybersecurity Knowledge (KNOW), Self-Efficacy (SE), and Attitudes (ATT) on CIA, indicating that cognitive and motivational resources must consolidate into behavioral commitment to produce tangible outcomes. This finding extends the socio-behavioral cybersecurity literature by demonstrating that awareness and capability become operationally meaningful only when translated into enacted protective behavior.
Self-Efficacy (SE) demonstrated both direct and indirect effects on CIA, supporting Social Cognitive Theory’s proposition that perceived capability influences both intention and behavioral execution. This dual pathway suggests that efficacy beliefs not only motivate protective behavior but also enhance implementation competence—an insight particularly relevant in decentralized smart university environments.
By contrast, KNOW did not function as a dominant direct predictor of CIA; its influence was primarily indirect. This result reinforces contemporary evidence indicating that knowledge alone does not guarantee performance unless integrated with motivational and efficacy-based mechanisms. Thus, the findings refine existing behavioral models by distinguishing foundational cognitive resources from proximal behavioral drivers.

6.2. Technology and Social Influence as Enabling Conditions

The study further clarifies the role of contextual drivers. Technology (TECH) and Social Norms & Influence (SN) did not directly predict CIA; instead, their influence was fully mediated by cognitive and motivational constructs. This explains why investments in digital infrastructure or policy reinforcement do not automatically translate into improved security outcomes.
While digital transformation research frequently emphasizes technological capability as a performance enabler, the present findings demonstrate that technological exposure primarily serves as a learning environment rather than a direct security determinant. Similarly, social norms exerted a stronger influence on KNOW than technological exposure did, highlighting the importance of institutional culture and peer reinforcement in shaping cybersecurity cognition.
These findings contribute beyond model comparison by empirically specifying how contextual variables operate through internal psychological pathways. Rather than confirming isolated effects, the study models a sequential mechanism that integrates environmental, cognitive, and motivational dimensions within a unified SEM framework.

6.3. Theoretical Contribution: Integrating TPB and SCT Toward CIA Outcomes

The integration of the Theory of Planned Behavior (TPB) and Social Cognitive Theory (SCT) constitutes a key theoretical contribution. While previous research often applies these frameworks independently, the present study demonstrates their complementary explanatory power within a unified structural model.
Attitudes and Self-Efficacy jointly shaped Behavioral Intention, while Self-Efficacy additionally exerted a direct effect on CIA, validating SCT’s executional dimension. The serial pathway—from contextual exposure to knowledge, efficacy, attitude, intention, and ultimately CIA—empirically demonstrates a multi-stage behavioral process rather than a single-factor causal relationship.
Most importantly, by modeling CIA-based performance outcomes rather than compliance intention alone, this research extends TPB–SCT cybersecurity scholarship toward an outcome-oriented, effectiveness-based paradigm. This shift addresses a persistent gap in higher education cybersecurity research, where measurable protection of information assets has rarely been incorporated into behavioral structural models.

6.4. Practical Implications for Smart Universities

From a practical standpoint, the findings suggest that strengthening cybersecurity in smart universities requires behavioral system design rather than isolated technical interventions.
Because Behavioral Intention emerged as the strongest predictor of CIA, institutions should prioritize strategies that enhance commitment to protective conduct. Training programs should move beyond awareness transmission to build Self-Efficacy through applied scenarios and capability reinforcement. Additionally, cultivating normative environments that visibly reward secure behavior may amplify knowledge internalization and motivational alignment.
These implications underscore that effective cybersecurity governance in higher education depends on coordinated cognitive, motivational, and contextual strategies rather than infrastructure investment alone.

6.5. Synthesis and Novelty Statement

Overall, the study contributes a structured, behaviorally grounded explanation of CIA protection in digitally intensive higher education ecosystems. The novelty of this research lies in (1) integrating TPB and SCT within a unified structural equation model, (2) modeling confidentiality, integrity, and availability as measurable performance outcomes, and (3) empirically specifying a multi-stage mediation pathway linking contextual exposure to enacted security effectiveness.
By moving beyond compliance intention and explicitly operationalizing CIA outcomes, the study advances a performance-based understanding of cybersecurity in smart universities and provides a theoretically integrated foundation for future research.

7. Conclusions

7.1. Theoretical Contributions

This research advances the behavioral cybersecurity literature by empirically validating a multi-layered Theory of Planned Behavior–Social Cognitive Theory (TPB–SCT) framework that links contextual conditions to confidentiality, integrity, and availability (CIA) outcomes. Conceptualizing CIA as behaviorally enacted outcomes reframes cybersecurity effectiveness as a structured cognitive and motivational process instead of a solely technical safeguard.
The findings elucidate the mediating role of Cybersecurity Knowledge (KNOW) and identify Self-Efficacy (SE) as a key mechanism bridging cognition and behavior. Additionally, the prominence of Behavioral Intention (BI) supports current behavioral models that emphasize commitment as the most immediate determinant of secure conduct [18,32].

7.2. Practical Implications

The results suggest that universities undergoing digital transformation should:
  • Strengthen normative reinforcement mechanisms;
  • Build user confidence through applied training;
  • Foster positive evaluative attitudes toward cybersecurity;
  • Explicitly target intention formation.
In decentralized academic environments, sustainable CIA protection depends on intentional, confident, and culturally supported user behavior.

7.3. Limitations and Future Research

Several limitations should be acknowledged. First, the cross-sectional design restricts causal inference and captures behavioral perceptions at a single point in time. Longitudinal or experimental research could provide deeper insight into how cybersecurity intentions and practices evolve within dynamic smart university environments.
Second, all constructs—including the CIA outcome variable—were measured using self-reported perceptions. Although Harman’s single-factor test indicated that common method variance was unlikely to be a serious threat, statistical diagnostics cannot entirely eliminate potential social desirability or perceptual inflation effects. The CIA construct therefore reflects perceived information security effectiveness rather than objective technical system indicators such as incident logs, vulnerability scans, or telemetry-based security metrics.
Future research should integrate behavioral survey data with objective cybersecurity indicators to enhance methodological robustness and external validity. Multilevel modeling approaches may further explore institutional-level variation in cybersecurity culture, governance structures, and digital maturity across universities. Cross-national comparisons would also strengthen the generalizability of the proposed framework.

7.4. Final Remarks

Cybersecurity effectiveness in smart universities is fundamentally behavioral. By demonstrating how contextual exposure is translated into CIA outcomes through cognitive and motivational mechanisms, this study provides a structured foundation for advancing user-centered cybersecurity theory and practice in higher education.

Author Contributions

C.P. contributed to the conceptualization of the study, research design, data collection, and statistical analysis. C.S. contributed to conceptualization, supervision, critical review, and overall coordination of the manuscript preparation and submission process. A.S. led the development of the manuscript, including theoretical integration, structural interpretation of the SEM results, validation of methodological rigor, and refinement of the academic presentation. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external grant funding. Publication support was provided by Mahasarakham Business School, Mahasarakham University.

Institutional Review Board Statement

The study was approved by the Mahasarakham University Ethics Committee for Research Involving Human Subjects, Approval Number 375-429/2025.

Data Availability Statement

The data presented in this study are available from the corresponding author upon reasonable request. The data are not publicly available due to ethical and privacy considerations related to the participating respondents.

Acknowledgments

This research project was financially supported by Mahasarakham Business School, Mahasarakham University, Thailand.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Research Questionnaire

Instructions
This questionnaire aims to assess cybersecurity awareness in higher education institutions to support intelligent security management. The questionnaire is divided into four parts:
  • Part 1: Respondent Demographics
  • Part 2: Evaluation of Factors Influencing Cybersecurity
  • Part 3: CIA Triad Practices
  • Part 4: Additional Comments and Suggestions
Please carefully read each question and mark ✔ in the box that best reflects your opinion or practice. All responses will be used strictly for academic research purposes. Your information will remain confidential, and no personal identifiers will be disclosed.
Part 1: Respondent Demographics
Gender
☐ Male
☐ Female
Age
☐ Under 20 years
☐ 20–29 years
☐ 30–39 years
☐ 40–49 years
☐ 50–59 years
☐ 60 years and above
Cybersecurity Training
☐ Never attended
☐ Attended once
☐ Attended at least once per year
☐ Attended more than twice per year
Part 2: Evaluation of Factors Influencing Cybersecurity
Please rate your agreement with the following statements:
1 = Strongly Disagree/Very Low Impact
2 = Disagree/Low Impact
3 = Moderate/Neutral
4 = Agree/High Impact
5 = Strongly Agree/Very High Impact
Technology (TECH)
1. The university provides secure authentication systems (e.g., multi-layer passwords or Two-Factor Authentication) and appropriate access control levels.
2. The university continuously implements and maintains intrusion detection and prevention systems (IDPS).
3. Computers in university laboratories are regularly updated with antivirus/anti-malware software.
4. The university maintains effective data backup systems and regularly tests recovery procedures.
5. When internet disruptions occur, assistance is available through accessible channels (e.g., Help Desk, online ticket system) with timely response.
Social Norms & Influence (SN)
6. People around me (e.g., colleagues, lecturers, peers) emphasize compliance with cybersecurity measures.
7. University leadership actively supports cybersecurity compliance.
8. University departments consistently coordinate cybersecurity implementation.
9. I feel social expectations or pressure to comply with cybersecurity policies.
10. The university organizes knowledge-sharing activities related to cyber threats and lessons learned.
Cybersecurity Knowledge (KNOW)
11. I can correctly identify phishing or spam emails.
12. I understand the risks of downloading files from unsafe sources and know preventive measures.
13. I know how to configure privacy settings and access controls within university systems.
14. I understand the proper channels for reporting suspected cyber incidents.
15. I understand national regulations and university cybersecurity policies at a basic level.
Attitude toward Cybersecurity Protection (ATT)
16. Following cybersecurity measures enhances system and network security.
17. Cybersecurity is an important issue requiring serious attention.
18. Clicking unknown or suspicious links poses cybersecurity risks.
19. Cybersecurity training is necessary for students and staff.
20. I have a positive attitude toward applying cybersecurity practices in daily system usage.
Self-Efficacy (SE)
21. I can independently configure privacy and access control settings securely.
22. I can properly follow the university’s cybersecurity procedures.
23. I can address basic security issues (e.g., disconnecting from networks when suspicious activity occurs).
24. I can effectively use university-provided security tools (e.g., antivirus, VPN, reporting systems).
25. I recognize that certain behaviors (e.g., password reuse, unsafe downloads) increase cybersecurity risks.
Behavioral Intention (BI)
26. I intend to use secure authentication (e.g., Two-Factor Authentication) for important accounts.
27. I intend to verify links or attachments before opening them.
28. I intend to back up my data regularly.
29. I intend to report suspicious activities through official university channels.
30. I intend to regularly review and update my privacy settings.
Part 3: CIA Triad Practices
Please rate your level of practice:
1 = Very Low
2 = Low
3 = Moderate
4 = High
5 = Very High
Confidentiality
1. I use complex and unique passwords for important accounts.
2. I avoid sending sensitive information through unsecured channels.
3. I do not share my personal passwords with others.
4. I enable multi-factor authentication for supported accounts.
5. I avoid using public Wi-Fi when handling sensitive information.
Integrity
6. I verify the accuracy of downloaded files before use.
7. I verify the credibility of electronic information sources.
8. I avoid installing software from unofficial sources.
9. I refrain from unauthorized modification or deletion of data.
10. I maintain records of file or data changes for transparency.
Availability
11. I regularly back up important data.
12. I know how to recover data if loss occurs.
13. I can access university systems when needed.
14. I avoid actions that may disrupt system accessibility.
15. I receive timely notifications when university systems experience downtime.
Part 4: Additional Comments
Please provide any additional suggestions or comments:
________________________________________________________________________________
________________________________________________________________________________
Thank you very much for your cooperation.

References

  1. Parsons, K.; Calic, D.; Pattinson, M.; Butavicius, M.; McCormac, A.; Zwaans, T. The Human Aspects of Information Security Questionnaire (HAIS-Q): Two further validation studies. Comput. Secur. 2017, 66, 40–51. [Google Scholar] [CrossRef]
  2. Khan, N.F.; Ikram, N.; Murtaza, H.; Javed, M. Evaluating protection motivation based cybersecurity awareness training on Kirkpatrick’s Model. Comput. Secur. 2023, 125, 103049. [Google Scholar] [CrossRef]
  3. Alshaikh, M. Developing cybersecurity culture to influence employee behavior: A practice perspective. Comput. Secur. 2020, 98, 102003. [Google Scholar] [CrossRef]
  4. Mikuletič, S.; Vrhovec, S.; Skela-Savič, B.; Žvanut, B. Security and privacy-oriented information security culture. Comput. Secur. 2024, 136, 103489. [Google Scholar] [CrossRef]
  5. Yazdanmehr, A.; Wang, J. Peers matter: The moderating role of social influence on information security policy compliance. Inf. Syst. J. 2020, 30, 791–844. [Google Scholar] [CrossRef]
  6. Chowdhury, N.; Adam, M.T.P.; Teubner, T. Time pressure in human cybersecurity behavior: Theoretical and empirical analysis. Comput. Secur. 2020, 97, 101963. [Google Scholar] [CrossRef]
  7. Alraja, M.N.; Butt, U.J.; Abbod, M.F. Information security policies compliance in a global setting: An employee’s perspective. Comput. Secur. 2023, 129, 103208. [Google Scholar] [CrossRef]
  8. Qawasmeh, S.A.-D.; AlQahtani, A.A.S.; Khan, M.K. Navigating cybersecurity training: A comprehensive review. Comput. Electr. Eng. 2025, 123, 110097. [Google Scholar] [CrossRef]
  9. Daengsi, T.; Pornpongtechavanich, P.; Wuttidittachotti, P. Cybersecurity awareness enhancement: A study of the effects of age and gender of Thai employees associated with phishing attacks. Educ. Inf. Technol. 2022, 27, 4729–4752. [Google Scholar] [CrossRef]
  10. Bada, M.; Sasse, A.M.; Nurse, J.R.C. Cyber Security Awareness Campaigns: Why Do They Fail to Change Behaviour? arXiv 2019, arXiv:1901.02672. [Google Scholar]
  11. Ajzen, I. The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 1991, 50, 179–211. [Google Scholar] [CrossRef]
  12. Bandura, A. Self-efficacy: Toward a unifying theory of behavioral change. Psychol. Rev. 1977, 84, 191–215. [Google Scholar] [CrossRef] [PubMed]
  13. Torres-Hernández, N.; Gallego-Arrufat, M.J. Indicators to assess preservice teachers’ digital competence in security: A systematic review. Educ. Inf. Technol. 2022, 27, 8583–8602. [Google Scholar] [CrossRef] [PubMed]
  14. Moody, G.D.; Siponen, M.; Pahnila, S. Toward a unified model of information security policy compliance. MIS Q. 2018, 42, 285–311. [Google Scholar] [CrossRef]
  15. Alzahrani, L.; Panwar Seth, K. The impact of organizational practices on information security management performance. Information 2021, 12, 398. [Google Scholar] [CrossRef]
  16. Hong, W.C.H.; Chi, C.; Liu, J.; Zhang, Y.; Lei, V.N.-L.; Xu, X. The influence of social education level on cybersecurity awareness and behavior. Educ. Inf. Technol. 2023, 28, 439–470. [Google Scholar] [CrossRef]
  17. Taj, I.; Waktole Dadi, S.; Samer Wazan, A.; Laborde, R.; Shoufan, A. CheatGuard: A cybersecurity inspired anti-cheating platform for higher education. Educ. Inf. Technol. 2025, 30, 23729–23759. [Google Scholar] [CrossRef]
  18. Sulaiman, N.S.; Fauzi, M.A.; Wider, W.; Rajadurai, J.; Hussain, S.; Harun, S.A. Cyber–information security compliance and violation behaviour in organisations: A systematic review. Soc. Sci. 2022, 11, 386. [Google Scholar] [CrossRef]
  19. Demigha, O.; Larguet, R. Hardware-based solutions for trusted cloud computing. Comput. Secur. 2021, 103, 102117. [Google Scholar] [CrossRef]
  20. Min-Allah, N.; Alrashed, S. Smart campus—A sketch. Sustain. Cities Soc. 2020, 59, 102231. [Google Scholar] [CrossRef]
  21. Carmo, J.E.S.; Lacerda, D.P.; Klingenberg, C.O.; Piran, F.A.S. Digital transformation in the management of higher education institutions. Sustain. Futures 2025, 9, 100692. [Google Scholar] [CrossRef]
  22. Uskov, V.L.; Bakken, J.P.; Howlett, R.J.; Jain, L.C. Smart Campus; Springer: Cham, Switzerland, 2017. [Google Scholar] [CrossRef]
  23. Polin, K.; Yigitcanlar, T.; Limb, M.; Washington, T. The making of smart campus: A review and conceptual framework. Buildings 2023, 13, 891. [Google Scholar] [CrossRef]
  24. Polin, K.; Yigitcanlar, T.; Limb, M.; Washington, T. Unpacking smart campus assessment: Developing a framework via narrative literature review. Sustainability 2024, 16, 2494. [Google Scholar] [CrossRef]
  25. Haggag, M.; Oulefki, A.; Amira, A.; Kurugollu, F.; Mushtaha, E.S.; Soudan, B.; Hamad, K.; Foufou, S. Integrating advanced technologies for sustainable smart campus development: A comprehensive survey of recent studies. Adv. Eng. Inform. 2025, 66, 103412. [Google Scholar] [CrossRef]
  26. Villegas-Ch, W.; Palacios-Pacheco, X.; Luján-Mora, S. Application of a smart city model to a traditional university campus with a big data architecture: A sustainable smart campus. Sustainability 2019, 11, 2857. [Google Scholar] [CrossRef]
  27. Kuo, K.-M.; Talley, P.C.; Huang, C.-H. A meta-analysis of the deterrence theory in security-compliant and security-risk behaviors. Comput. Secur. 2020, 96, 101928. [Google Scholar] [CrossRef]
  28. Prümmer, J.; van Steen, T.; van den Berg, B. Assessing the effect of cybersecurity training on end-users: A meta-analysis. Comput. Secur. 2025, 150, 104206. [Google Scholar] [CrossRef]
  29. Khadka, K.; Ullah, A.B. Human factors in cybersecurity: An interdisciplinary review and framework proposal. Int. J. Inf. Secur. 2025, 24, 119. [Google Scholar] [CrossRef]
  30. Hair, J.F.; Hult, G.T.M.; Ringle, C.M.; Sarstedt, M. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM), 3rd ed.; SAGE: London, UK, 2022. [Google Scholar]
  31. Hayes, A.F. Introduction to Mediation, Moderation, and Conditional Process Analysis, 3rd ed.; Guilford Press: New York, NY, USA, 2022. [Google Scholar]
  32. Tran, D.V.; Nguyen, P.V.; Le, L.P.; Nguyen, S.T.N. From awareness to behaviour: Understanding cybersecurity compliance in Vietnam. Int. J. Organ. Anal. 2025, 33, 209–229. [Google Scholar] [CrossRef]
Figure 1. Research Model of Cybersecurity Awareness and Information Security Outcomes.
Figure 2. Structural equation model with standardized path coefficients (STDYX). All displayed paths are statistically significant at p < 0.05.
Figure 3. Structural model with standardized path coefficients.
Table 1. Algorithmic Representation of the SEM Analytical Procedure.
Step | Procedure | Description
1 | Instrument Validation | Conduct expert review and pilot testing to ensure content validity and clarity.
2 | Data Screening | Remove incomplete responses; assess missing values, outliers, and normality assumptions.
3 | Descriptive Analysis | Compute demographic statistics and summarize respondent characteristics.
4 | Measurement Model Estimation | Perform Confirmatory Factor Analysis (CFA) using Mplus 8.3.
5 | Reliability Assessment | Evaluate Cronbach’s alpha, Composite Reliability (CR), and AVE.
6 | Discriminant Validity Check | Confirm construct distinctiveness using AVE comparisons and cross-loadings.
7 | Structural Model Estimation | Estimate hypothesized paths (H1–H14) using SEM.
8 | Direct Effect Testing | Examine standardized path coefficients and statistical significance.
9 | Mediation Analysis | Apply bootstrapping to test indirect and serial mediation effects.
10 | Model Fit Evaluation | Assess χ2/df, CFI, TLI, RMSEA, and SRMR according to established thresholds.
11 | Decomposition of Effects | Calculate direct, indirect, and total effects on CIA outcomes.
12 | Interpretation | Interpret results in relation to the TPB–SCT framework and CIA outcomes.
Table 2. Demographic Characteristics of Respondents (N = 540).
Characteristic | Category | Frequency (N) | Percentage (%)
Gender | Male | 155 | 28.8
 | Female | 385 | 71.2
 | Total | 540 | 100.0
Age Group | Under 20 years | 208 | 38.5
 | 20–29 years | 289 | 53.5
 | 30–39 years | 10 | 1.9
 | 40–49 years | 26 | 4.8
 | 50–59 years | 7 | 1.3
 | Total | 540 | 100.0
Cybersecurity Training Experience | Never attended training | 293 | 54.3
 | Attended once | 167 | 30.9
 | At least once per year | 68 | 12.6
 | More than twice per year | 12 | 2.2
 | Total | 540 | 100.0
Table 3. Measurement Model Fit Indices.
Fit Index | Observed Value | Recommended Threshold | Interpretation
χ2/df | 2.0579 | <3.00 | Good fit
RMSEA | 0.044 | ≤0.05 | Good fit
CFI | 0.956 | ≥0.95 | Good fit
TLI | 0.951 | ≥0.95 | Good fit
SRMR | 0.039 | ≤0.08 | Good fit
Table 4. Reliability and Convergent Validity.

| Construct | AVE | CR | Cronbach’s Alpha |
|---|---|---|---|
| Technology (TECH) | 0.593 | 0.879 | 0.833 |
| Social Norms & Influence (SN) | 0.691 | 0.918 | 0.915 |
| Cybersecurity Knowledge (KNOW) | 0.654 | 0.904 | 0.910 |
| Attitudes toward Cybersecurity Protection (ATT) | 0.720 | 0.928 | 0.929 |
| Self-Efficacy (SE) | 0.681 | 0.914 | 0.917 |
| Behavioral Intention (BI) | 0.729 | 0.931 | 0.930 |
| CIA—Confidentiality | 0.652 | 0.903 | 0.912 |
| CIA—Integrity | 0.674 | 0.912 | 0.921 |
| CIA—Availability | 0.576 | 0.871 | 0.905 |
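The CR and AVE figures in Table 4 follow the standard formulas from standardized factor loadings: AVE = Σλ²/k and CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)). A minimal sketch with hypothetical loadings, since item-level loadings are not reported in this excerpt:

```python
# Composite Reliability (CR) and Average Variance Extracted (AVE)
# from standardized factor loadings. The loadings below are
# hypothetical; the article does not list item loadings here.

def ave(loadings):
    # AVE = mean of squared standardized loadings
    return sum(l ** 2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum)^2 + sum of error variances),
    # where each item's error variance is 1 - loading^2.
    s = sum(loadings)
    errors = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + errors)

hypothetical = [0.78, 0.81, 0.74, 0.76, 0.77]  # five items, illustrative
print(round(ave(hypothetical), 3))                    # 0.597
print(round(composite_reliability(hypothetical), 3))  # 0.881
```

Loadings around 0.77 yield AVE ≈ 0.60 and CR ≈ 0.88, values in the same range as the TECH row of Table 4, which illustrates how the reported figures relate to underlying loadings.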
Table 5. Structural Model Fit Indices.

| Fit Index | Observed Value | Interpretation |
|---|---|---|
| χ²/df | 2.7656 | Acceptable |
| RMSEA | 0.057 | Acceptable |
| CFI | 0.997 | Excellent |
| TLI | 0.989 | Excellent |
| SRMR | 0.015 | Excellent |
Table 6. Structural Path Coefficients.

| Path | β | p-Value | Result |
|---|---|---|---|
| TECH → KNOW | 0.250 | 0.000 | Accepted |
| SN → KNOW | 0.540 | 0.000 | Accepted |
| TECH → SE | 0.113 | 0.002 | Accepted |
| KNOW → SE | 0.648 | 0.000 | Accepted |
| KNOW → ATT | 0.124 | 0.007 | Accepted |
| SE → ATT | 0.537 | 0.000 | Accepted |
| SN → ATT | 0.229 | 0.000 | Accepted |
| ATT → BI | 0.341 | 0.000 | Accepted |
| SE → BI | 0.398 | 0.000 | Accepted |
| KNOW → BI | 0.184 | 0.000 | Accepted |
| BI → CIA | 0.489 | 0.000 | Accepted |
| SE → CIA | 0.190 | 0.000 | Accepted |
| ATT → CIA | 0.166 | 0.000 | Accepted |
| KNOW → CIA | 0.119 | 0.001 | Accepted |
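The bootstrapped mediation tests (step 9 of the procedure) can be illustrated on synthetic data. This sketch uses a simple two-step OLS chain and percentile intervals rather than the full latent-variable model, so the variable names, effect sizes, and sample below are assumptions for illustration, not the study's data.

```python
import numpy as np

# Percentile-bootstrap test of a simple indirect effect (x -> m -> y),
# illustrating the logic behind the study's mediation analysis.
# Synthetic data; the study bootstrapped indirect effects in full SEM.
rng = np.random.default_rng(42)
n = 200
x = rng.normal(size=n)                        # e.g. KNOW (standardized)
m = 0.6 * x + rng.normal(scale=0.5, size=n)   # e.g. SE
y = 0.6 * m + rng.normal(scale=0.5, size=n)   # e.g. BI

def indirect(idx):
    a = np.polyfit(x[idx], m[idx], 1)[0]  # slope of x -> m
    b = np.polyfit(m[idx], y[idx], 1)[0]  # slope of m -> y
    return a * b                          # indirect effect = a * b

# Resample cases with replacement and take the 2.5th/97.5th percentiles.
boot = np.array([indirect(rng.integers(0, n, n)) for _ in range(1000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect = {indirect(np.arange(n)):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

An indirect effect is deemed significant when the bootstrap confidence interval excludes zero, which is the criterion typically applied in SEM mediation testing.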
Table 7. Coefficient of Determination (R²) for Endogenous Constructs.

| Endogenous Construct | R² | Interpretation |
|---|---|---|
| Cybersecurity Knowledge (KNOW) | 0.557 | Moderate |
| Self-Efficacy (SE) | 0.693 | Substantial |
| Attitudes toward Cybersecurity Protection (ATT) | 0.677 | Substantial |
| Behavioral Intention (BI) | 0.736 | Substantial |
| Information Security Outcomes (CIA) | 0.803 | Substantial |
Table 8. Direct, indirect, and total standardized effects on CIA.

| Path | Direct Effect | Indirect Effect | Total Effect |
|---|---|---|---|
| KNOW → CIA | 0.119 | 0.446 | 0.553 |
| SE → CIA | 0.176 | 0.346 | 0.522 |
| ATT → CIA | 0.157 | 0.157 | 0.314 |
| TECH → CIA | 0.000 | 0.203 | 0.203 |
| SN → CIA | 0.000 | 0.430 | 0.430 |
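The decomposition in Table 8 follows the standard identity for recursive path models: with structural coefficient matrix B, the matrix of total effects is (I − B)⁻¹ − I, and indirect = total − direct. The sketch below applies this to the Table 6 coefficients. Because the published decomposition comes from the fitted Mplus solution, values traced by hand from the rounded coefficients need not match Table 8 exactly.

```python
import numpy as np

# Total-effect decomposition for the recursive structural model,
# using the standardized path coefficients from Table 6.
names = ["TECH", "SN", "KNOW", "SE", "ATT", "BI", "CIA"]
B = np.zeros((7, 7))                        # B[i, j] = direct effect of j on i
B[2, 0], B[2, 1] = 0.250, 0.540                           # KNOW <- TECH, SN
B[3, 0], B[3, 2] = 0.113, 0.648                           # SE   <- TECH, KNOW
B[4, 1], B[4, 2], B[4, 3] = 0.229, 0.124, 0.537           # ATT  <- SN, KNOW, SE
B[5, 2], B[5, 3], B[5, 4] = 0.184, 0.398, 0.341           # BI   <- KNOW, SE, ATT
B[6, 2], B[6, 3], B[6, 4], B[6, 5] = 0.119, 0.190, 0.166, 0.489  # CIA

total = np.linalg.inv(np.eye(7) - B) - np.eye(7)  # sums every path product
indirect = total - B                              # indirect = total - direct
i, j = names.index("CIA"), names.index("KNOW")
print(f"KNOW -> CIA: direct {B[i, j]:.3f}, "
      f"indirect {indirect[i, j]:.3f}, total {total[i, j]:.3f}")
```

For example, tracing from the rounded Table 6 coefficients gives a KNOW → CIA total effect of about 0.615 versus the 0.553 reported in Table 8; small discrepancies of this kind are expected when reconstructing a decomposition from rounded published paths.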
Phakaedam, C.; Savithi, C.; Suttidee, A. Behavioral and Cognitive Pathways to Information Security Outcomes in Smart Universities. Data 2026, 11, 46. https://doi.org/10.3390/data11030046