Article

Building Trust and Cybersecurity Awareness in Saudi Arabia: Key Drivers of AI-Powered Smart Home Device Adoption

by Mohammad Mulayh Alshammari * and Yaser Hasan Al-Mamary *
Department of Management and Information Systems, College of Business Administration, University of Ha’il, Hail 81451, Saudi Arabia
* Authors to whom correspondence should be addressed.
Systems 2025, 13(10), 863; https://doi.org/10.3390/systems13100863
Submission received: 21 August 2025 / Revised: 27 September 2025 / Accepted: 28 September 2025 / Published: 30 September 2025
(This article belongs to the Section Artificial Intelligence and Digital Systems Engineering)

Abstract

Smart home technologies are increasingly powered by artificial intelligence (AI), offering convenience, energy efficiency, and security, but also raising serious concerns around privacy and cybersecurity. This study explores the factors that affect the adoption of AI-powered smart home devices by extending the Trust in Technology Model (TTM) to incorporate cybersecurity awareness. The objective is to better understand how users’ trust in technology, institutions, and specific devices, combined with their cybersecurity awareness, influences adoption behavior. A quantitative research design was used, and Structural Equation Modeling (SEM) was employed to examine the hypothesized relationships among the variables. The results confirm that propensity to trust technology in general significantly enhances institution-based trust, which in turn positively influences trust in specific technologies. Trust in specific technologies and cybersecurity awareness were both found to strongly increase users’ intention to adopt AI-powered smart home devices. Moreover, users’ intentions showed the strongest effect on deep structure use, highlighting that positive behavioral intention is a key driver of actual, advanced utilization of these technologies. These results highlight the importance of trust-building and awareness initiatives for fostering wider adoption. This research extends the current literature on technology adoption and provides a framework that helps explain users’ adoption of AI-powered smart home devices. Its originality lies in integrating cybersecurity awareness into the TTM, offering both theoretical contributions and practical implications for policymakers, developers, and marketers.

1. Introduction

Smart home technologies are changing how people interact with their living environments, offering attractive convenience and enhanced security. Among these technologies, AI-powered smart home solutions have demonstrated their capability to improve people’s lives by increasing home security, optimizing energy consumption, learning from user behavior, and more [1]. The functionalities of smart home devices are increasingly powered by AI [2], and these devices have transformed daily life by becoming essential household tools. They include smart home assistants (Alexa, Google, Siri), smart security systems, and other devices. According to a recent report by Statista [3], 77.6% of all households will own at least one smart device by 2025, and this figure is anticipated to increase to 92.5% by 2029. Additionally, the number of IoT devices is estimated to reach 55.7 billion units by 2025 [4], and the smart security devices market is expected to reach $7.89 billion by 2027 [5].
Although the services offered by these smart devices are substantial, the devices raise serious security challenges due to their integration with AI [6]. Paramount among these challenges are data privacy and unauthorized access. The integration of artificial intelligence (AI) into these smart solutions is essential for enhancing their functionalities and providing better services. However, it can also introduce new vulnerabilities open to exploitation, resulting in breaches of human privacy and other damage [7,8]. Owing to their high rate of adoption, these technologies have become a very valuable and attractive target for cyber threats [9]. This is demonstrated by the large number of attacks targeting IoT devices to date, a number that is expected to increase significantly [4]. Privacy is a particularly critical challenge posed by AI-powered smart home devices, such as voice assistants that store and process user data. Although privacy settings are available to give users control over their data, most users remain unaware of those options or find them complex to use [10]. This highlights the broader non-technical challenges associated with smart home adoption that might interfere with effective privacy management.
In the adoption of smart home devices, trust plays a critical role in overcoming the risks and uncertainties users associate with such technologies [11]. This challenge illustrates the importance of people’s cybersecurity awareness (CSA) in addressing these security concerns. The existing literature has broadly studied smart home technologies and cybersecurity practices; this study addresses a more specific area: the roles of CSA and trust in shaping user behavior in the context of AI-powered smart home devices.
Beyond individual adoption decisions, smart home technologies contribute to systemic properties such as resilience, adaptability, and stability within digital ecosystems. For example, AI-powered devices can autonomously adjust to emerging threats or changing user needs, highlighting their role in creating adaptive and sustainable households. Studying smart homes from this systemic perspective deepens our understanding of socio-technical ecosystems, where these properties are critical for sustainable digital transformation [12,13].
The purpose of the present study was to understand how varying levels of CSA influence users’ decisions to adopt smart technologies, their trust in the security of these devices, and their efforts to protect their data privacy. The results of this research offer significant contributions to the extant literature by extending the Trust in Technology Model (TTM) to include cybersecurity awareness (CSA), thus providing a more comprehensive framework for understanding the adoption of AI-powered smart homes. While TTM has previously explored trust in both general and specific technologies, it has overlooked the role of CSA in shaping user behavior. This study addresses the identified gap by examining how general trust, institution-based trust, and CSA influence users’ adoption decisions. In particular, this research offers valuable insights into how users behave in AI-powered smart homes. Based on real data from Saudi Arabia, the findings may also help explain similar changes occurring in other regions embracing digital technology.

2. Literature Review

2.1. AI-Powered Smart Home Technologies

In this study, AI-powered smart homes are defined as residential environments equipped with Internet of Things (IoT) devices and intelligent systems that utilize AI to automate tasks, optimize energy use, enhance security, and personalize user experiences [14,15]. These technologies include devices such as voice assistants, security systems, and smart appliances, all of which rely on AI algorithms for adaptive decision-making.
Smart home technology has seen significant adoption in recent years, driven by advances in the IoT, wireless communication, and AI. A smart home is an application of IoT that includes smart TVs, security cameras, smart locks, smart lighting, and much more [14]. By leveraging IoT, these technologies give households more control over their homes by connecting electronic devices. Additionally, the integration of AI into these smart technologies has further enhanced user experiences, because AI can analyze user behavior and thereby offer more personalized experiences [16] that match household preferences and habits. Unlike traditional technology, where users play a key role in its functionality, AI-powered technologies rely on machine learning (ML) algorithms. Notably, AI-powered security systems can enhance home security by detecting potential threats and alerting households to unusual activity. The integration of AI into home security systems enhances overall house security by such means as facial recognition and behavior analysis, which detect unusual activity and reduce false alarms.
In addition to security systems, another example of AI-powered smart home services is Voice Assistants, such as Amazon Echo (Alexa) and Apple HomePod (Siri), which are one of the most popular technologies in home ecosystems [2]. Beyond simple tasks, such as providing weather conditions, Alexa, for example, can serve as a centralized unit for smart home ecosystems, allowing users to automate many tasks, such as controlling lighting in the home, adjusting thermostats, and initiating routines (e.g., locking the door).
Smart home technologies continue to evolve, with the integration of AI becoming an integral part of the smart home environment, thus shaping its future. Concerning security, AI can enhance home security by analyzing security camera footage to help detect unusual activity. However, the vast amounts of personal data that AI-powered IoT devices generate and analyze daily, along with Internet technology in general, raise serious security concerns [17]. AI-powered IoT devices expose households to serious cybersecurity risks [18], leaving their systems vulnerable to malicious actors [19] and potentially putting their privacy at significant risk.
Within the wider AI ecosystem, smart homes provide a distinctive context because they embed AI and IoT directly into the household environment, shaping daily routines and personal security practices. In contrast, other AI domains, such as smart cities, healthcare, or autonomous vehicles, operate at larger infrastructural or institutional levels, often governed by stricter regulations and collective oversight. This comparison highlights both the uniqueness of smart homes, where individual trust and cybersecurity awareness are central, and their relevance to broader AI adoption challenges, where user confidence and risk management are equally critical [1,2,20,21].
From a systemic viewpoint, smart homes embody autonomous capabilities where devices interact dynamically with users and their environment. This interaction fosters adaptive responses to security threats, promotes household resilience, and stabilizes routines. By embedding AI within everyday infrastructures, smart homes reflect systemic qualities of flexibility and robustness, making their study especially relevant for understanding larger socio-technical transitions [15,22,23].
From a systems theory perspective, smart homes can be understood as autonomous socio-technical subsystems whose functionality depends on user trust and cybersecurity awareness. This perspective highlights their capacity for self-regulation and continuity under uncertainty, offering a theoretical lens for explaining how micro-level adoption drivers connect to macro-level systemic stability.

2.2. Human Trust in AI-Powered Technologies

In this study, human trust in AI is conceptualized as the willingness of individuals to rely on AI systems for decision-making and everyday functions, despite uncertainties about outcomes [24,25]. In the context of this research, human trust in AI refers specifically to users’ confidence in the reliability, security, and privacy of AI-powered smart home devices.
Trust influences both the intention behind and the ongoing use of technology [24]. It plays a crucial role in the adoption of smart home technology. Household trust perception tends to be a determining factor that influences the decision to adopt and use AI-powered IoT devices [26]. However, AI-powered technologies pose a threat to trust [27], threatening households’ security and privacy.
The literature on users’ acceptance of technology has emphasized that perceived usefulness, ease of use, trust, and privacy determine users’ behavior toward technology adoption [28,29]. Trust is seen as an essential factor in adopting new technology [30]. Drawing on the AI technologies literature, a recent systematic literature review on user trust in AI shows that users’ levels of knowledge about AI, prior experiences with it, demographic factors, and socio-ethical factors (fairness and transparency) positively influence users’ trust in AI-powered technologies [31]. However, many users lack knowledge about cyberattacks or about when their devices are being exploited.
From a different angle, the growing integration of IoT into daily life presents another trust challenge. That is, it brings about significant privacy and data security challenges, causing users to hesitate to share data that typically would not raise any privacy concerns [6]. From a technology perspective, this could limit the effectiveness of IoT and AI-powered services. This is because IoT heavily relies on the data shared by users to provide optimal services. From the human perspective, this might lead to missed enhanced services when users refuse to share essential data, which could also negatively impact IoT adoption. In fact, growing evidence from prior research suggests that the current AI-powered approach overlooks the broader implications of AI on individuals and society, such as ethical considerations and privacy, by focusing solely on a technical perspective [32,33]. Privacy concerns and users’ control over their personal information significantly influence their decisions about sharing data [34]. Trust, security, and privacy are among the top user concerns regarding the adoption of IoT [35].
Despite the growing popularity of IoT and smart home devices, they raise critical cybersecurity challenges due to numerous vulnerabilities [36]. These risks arise from the interaction of constant connectivity, cloud-based data processing, limited user control over the data, and security mismanagement [37]. While AI-powered IoT devices, such as those provided by Amazon, bring convenience to homeowners, they are vulnerable due to their continuous listening mode and reliance on internet-based services for functionality, exposing them to various cybersecurity risks [8]. This includes the potential for these devices to be exploited, thus putting homeowners at risk. For example, attackers can exploit insecure Wi-Fi configurations to gain unauthorized access to them and manipulate their functionality. The Mirai Botnet is a well-known case, where compromised IoT devices were hijacked and used to carry out a massive Distributed Denial-of-Service (DDoS) attack [38]. Another issue is that many of these devices are poorly secured (e.g., with default passwords), which makes them easier to exploit over time [2,36].
Concerning data privacy, voice assistant technology faces real security risks, such as voice spoofing, where commands can be issued to manipulate these devices without the user’s knowledge [39]. Additionally, resold or rented devices may also raise privacy concerns, as these may retain user data. A report on smart homes has shown that 40.8% of homeowners had at least one vulnerable device, which puts the entire home at risk [40]. As a consequence, many industrial and academic studies have discussed the importance of cybersecurity awareness in overcoming the challenges and concerns posed by AI-powered technology, thereby empowering IoT users with the knowledge necessary to protect their devices and data [41,42,43]. Cybersecurity awareness is a key solution for addressing users’ concerns by offering this knowledge.
Cybersecurity awareness must incorporate best security practices to mitigate the risks posed by these technologies. For example, with IoT, it is recommended that users change default passwords, enable encryption, update firewalls, and meet other requirements to ensure privacy and tackle security challenges.
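To make these recommendations concrete, the minimal sketch below shows how such practices could be audited programmatically. It is illustrative only: the Device fields and audit rules are hypothetical and simply mirror the best practices listed above (non-default passwords, encryption, up-to-date firmware).

```python
# A minimal, hypothetical audit of smart home security hygiene; the fields
# mirror the recommended practices above and are illustrative only.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    uses_default_password: bool
    encryption_enabled: bool
    firmware_up_to_date: bool

def audit(devices: list) -> list:
    """Return human-readable warnings for insecure settings."""
    warnings = []
    for d in devices:
        if d.uses_default_password:
            warnings.append(f"{d.name}: change the default password")
        if not d.encryption_enabled:
            warnings.append(f"{d.name}: enable encryption (e.g., WPA3 on Wi-Fi)")
        if not d.firmware_up_to_date:
            warnings.append(f"{d.name}: install pending firmware updates")
    return warnings

# Example usage with made-up devices
home = [Device("smart_lock", True, True, False),
        Device("camera", False, False, True)]
for warning in audit(home):
    print(warning)
```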
Trust also carries systemic implications. High levels of user trust strengthen the stability of the adoption ecosystem, while distrust may destabilize it. Trust can be seen as a systemic stabilizer that maintains continuity, resilience, and adaptation within socio-technical systems such as AI-powered smart homes [31,44,45,46].

2.3. Trust in Technology Model (TTM)

TTM was introduced by McKnight et al. [47] to explore the concept of trust in technology. The authors exemplified trust in the context of technology acceptance and risk, which is important for influencing individual behavior regarding technology usage. TTM was developed to gain an understanding of how individuals build trust in technology over time. The model is divided into three components, as explained below, with Figure 1 illustrating it.

2.3.1. Propensity to Trust (PTT)

Propensity to Trust (PTT) refers to the tendency to trust technology in general. In other words, it reflects one’s overall willingness to trust technologies based on personality or prior experiences [47]. Two perspectives are important for assessing PTT. First, there is one’s belief that using technology, in general, will lead to positive outcomes. For instance, when someone has strong trust in technology, they are more likely to continue using it confidently until it proves otherwise. Second, there is one’s perception of and belief in the overall quality and capability of technology. For example, individuals with high faith in technology may believe that smart home technologies are safe, efficient, and capable. Trust has been recognized as a fundamental driver of the relationship between users and AI-powered technologies [48]. According to Zarifis and Fu [49], propensity to trust represents how people naturally trust technology and technology systems based on their individual trusting nature and their general faith in technology. Their empirical findings demonstrate that people who trust others tend to develop institutional trust, although this relationship remains less powerful than other trust origins. People who trust technology more tend to trust institutions that protect users from harm and ensure system reliability. According to Wu et al. [50], individuals with a higher propensity to trust are more inclined to develop stronger faith in institutional safeguards and structural assurances that protect them from potential risks in online interactions. Their study demonstrated that propensity to trust significantly and positively impacts institution-based trust, which subsequently enhances users’ sense of perceived control and overall trust in platforms. This finding highlights that trust propensity, as an internal disposition, influences how individuals perceive and rely on institutional mechanisms to mitigate uncertainty. Therefore, we hypothesize:
H1. 
Propensity to Trust in General Technology Will Positively Affect Institution-Based Trust in Technology.

2.3.2. Institution-Based Trust in Technology (IBT)

The second component is Institution-Based Trust in Technology (IBT). Unlike PTT, which applies to various situations, IBT is tied to a specific context or class of technology [47]. It refers to the belief that technology will function successfully, because of supportive situations and structures (e.g., regulations and organizational support). IBT implies that trust is derived from two perspectives. Firstly, the belief that using technology is a typical and predictable activity in a specific context leads users to feel comfortable using it. For example, one may view using smart home technologies, such as smart lights or AI assistants, as part of modern living. Therefore, this may result in a comfortable feeling when using these technologies, which leads to trust in specific smart home systems. Secondly, there is the belief that external structures, such as infrastructure-supporting technology, make using technology reliable and secure. For example, if a user trusts smart home technologies, such as AI assistants (e.g., Amazon Alexa), they might do so because they trust that strong security measures (e.g., privacy regulations) are in place to protect their information and privacy. Trust in AI-powered technologies can be achieved by assuring that the risks associated with them can be managed effectively [51]. This trust might be strengthened by knowing that the company providing this technology is following privacy protocols by holding industry certification.
We believe that IBT is an important factor influencing users’ trust in the next component of Trust in a Specific Technology (TST): an AI-powered smart home in the context of this study. Consequently, we hypothesize:
H2. 
Institution-Based Trust in Technology will positively affect Trust in Specific Technology.

2.3.3. Trust in a Specific Technology (TST)

The third component is Trust in a Specific Technology (TST), which concerns a user’s relationship with a particular technology. McKnight et al. [47] define TST as trust in a technology’s reliability, functionality, and helpfulness. This trust is usually influenced by users’ direct experiences with a specific technology. Unlike broader trust, TST focuses on the technology itself and its ability to deliver the desired outcomes. TST, in the context of this study, is about how well the system (e.g., a surveillance system) protects personal data, ensures privacy, and provides users with control over their information. Any potential negative consequences of using AI-powered technologies are likely to negatively affect user trust [31].
The TTM suggests that users’ perceptions usually influence people’s trust in a particular technology. First, it refers to an individual’s belief about a technology’s ability to perform the required tasks. For example, it can pertain to users’ trust that an AI-powered smart home can manage their data accurately without compromising their privacy (e.g., only storing and sharing necessary data). The second perspective involves the degree to which technology provides effective assistance. For example, users trust that an AI-powered smart home will enable them to control their privacy settings, like scheduling recording times. Lastly, it refers to the extent of the reliability of a technology’s consistent and predictable operation. Users, for example, may trust that an AI-powered smart home effectively protects their data from unauthorized access.
In sum, trust in a specific technology in the context of AI-driven smart homes relies on its ability to safeguard user data, ensure privacy, and provide control over personal information. This trust is essential, as it most likely influences users’ decisions to utilize and depend on similar security solutions. Moreover, the model indicates that trust in a specific technology (TST) leads to two post-adoption behaviors: Intention to Explore (ITE) and Deep Structure Use (DSU).
ITE emphasizes the discovery aspect (willingness to try new features of that technology). In other words, ITE is not simply about curiosity, for it pertains to deliberate willingness and purpose to engage with and explore new technologies [52], while DSU relates to the actual use of those complex features. Sykes and Venkatesh [53] define DSU as being “a post-acceptance behavior that involves the integration of the system with the user’s tasks” (p. 919). According to Robert and Sykes [54], it “represents the degree to which a system is used for everyday activities and the extent to which a user is fully leveraging the capabilities of the system” (p. 86). In relation to the adoption of AI-powered smart home devices, unlike basic measures, such as asking a voice assistant for the weather, DSU refers to how effectively users leverage the advanced features of smart home technologies, user engagement, satisfaction, and the true value driven from smart technologies in the home. In alignment with the objective of this study, we will analyze the impact of TST on ITE. We hypothesize that:
H3. 
Trust in a specific technology will positively affect individuals’ intention to explore the technology in the adoption of AI-powered smart home technologies.

2.4. Role of Cybersecurity Awareness

Cybersecurity Awareness (CSA) is defined as an individual’s knowledge, perception, and proactive behavior regarding potential cyber risks and the protective measures required to mitigate them [55,56]. It encompasses both the cognitive understanding of security threats, such as privacy risks, data breaches, or unauthorized access, and behavioral practices, including updating software, using strong passwords, and adjusting device privacy settings. In the context of AI-powered smart homes, CSA refers specifically to household users’ ability to recognize vulnerabilities in smart devices and adopt preventive actions that safeguard personal data. Clarifying CSA in this way highlights its dual role as both a cognitive factor influencing trust and a behavioral driver of adoption.
When it comes to cybersecurity threats linked to IoT, numerous technological solutions have been developed and introduced to mitigate such risks. Several studies have examined this issue from a technical angle: Diro and Chilamkurti [57], for example, propose a detection system that uses deep learning techniques to safeguard the IoT environment, while others emphasize the importance of cryptographic mechanisms [58]. Data privacy is an essential component of IoT security. However, it is one of the major challenges associated with IoT [59,60], owing to the lack of household awareness of passive data collection and the difficulty of managing such matters [61]. Dang et al. [62] propose an authentication scheme to provide additional protection for the cloud server in IoT.
Concerning the security challenges associated with IoT, privacy is consistently linked to CSA. Privacy is defined as the user’s right to control access to their personal data [63]. However, privacy takes on a new dimension in AI-powered smart home devices, as these constantly collect and passively process personal information, such as voice recordings and behavior patterns, which can pose a threat to user privacy [10]. When users understand the extent and sensitivity of this data, their privacy concerns increase, leading them to explore security features and adopt safe practices. Thus, CSA influences how privacy concerns translate into actions. Privacy concerns drive users to put more effort into understanding and managing their data [64]. While both expert and non-expert users may worry about privacy, those with higher cybersecurity awareness are more likely to respond to technical signals, such as permission settings [65].
Another important aspect that influences the protection of the user’s privacy and the adoption of AI-powered home technologies is the role of information security policies (ISPs). Typically, these set the standard for how personal data is collected, stored, and shared. In a smart home, ISPs are included within the user agreement or app permissions. However, many users are often unaware of them [10], thus accepting them without fully understanding the implications for their privacy and security. This lack of awareness can negatively impact the level of trust and result in slow adoption. Awareness of ISPs is crucial for effective cybersecurity practices [66].
When AI-powered smart home technologies are introduced for users’ convenience, their integration with home networks introduces security risks, such as unauthorized surveillance or data breaches. Additionally, the vulnerabilities in smart homes, such as weak passwords or unsecured communication protocols, can often be exploited. Users’ understanding of potential vulnerabilities is essential for them to take preventive action [67]. In other words, user awareness and understanding of these vulnerabilities can strengthen their overall cybersecurity stance.
The rapid development of IoT makes it a prime target for cybercriminals and exposes it to additional security threats [68]. In addition to technical vulnerabilities, many studies have found that there are non-technical reasons behind cybersecurity attacks on IoT. These include a lack of cybersecurity awareness [69], as well as poor compliance with ISPs and recommendations [70]. The number of non-technical users of smart homes is on the rise, yet most of them lack an understanding of CSA and privacy [71]. While recent studies have shown that households may have CSA, they can lack knowledge of the risks they are exposed to by using IoT. In recognition of the increased security risk, the U.S. Federal Bureau of Investigation (FBI) has issued many public safety alerts warning households that cybercriminals exploit IoT device vulnerabilities, such as those of smart cameras and routers [72,73]. Households, especially those that are not technically inclined, need to pay extra attention to safeguarding their privacy and improving their overall security, particularly regarding CSA.
Theoretically, CSA has been incorporated into several theories to examine different phenomena. For example, Ng et al. [55] extended the Health Belief Model (HBM) by incorporating CSA as a factor in studying users’ computer security behavior in terms of adopting security measures. CSA was found to be significant in shaping users’ decisions and prompting proactive action, and they incorporated it into a broader framework relating CSA to human behavior in an information security context. Moreover, the Technology Acceptance Model (TAM) suggests that perceived usefulness and perceived ease of use drive individuals’ behavioral intention [28,74]. In this regard, CSA could enhance perceived ease of use and usefulness by providing users with the confidence to adopt secure systems.
Furthermore, the Unified Theory of Acceptance and Use of Technology (UTAUT) emphasizes that facilitating conditions are significant determinants of intention [75]. Awareness of cybersecurity aligns with facilitating conditions and potentially enhances users’ trust, while positively impacting both intention and actual behavior. Protection Motivation Theory (PMT) asserts that an individual’s awareness of threats, as well as the perceived efficacy of coping mechanisms, significantly influences their protective behavior [76,77]. This leads us to believe that cybersecurity awareness has a high potential impact on driving technology adoption.
From the above, it can be seen that users need to be aware of potential threats and employ safeguards in order to stay secure and reduce the probability and the impact of security incidents. According to CNSS [78], safeguards are “protective measures and controls prescribed to meet the security requirements … may include security features, management constraints, personnel security, and security of physical structures, areas, and devices” (p. 172). The applications of the security safeguards in smart homes involve both technical mechanisms (e.g., encryption) and intelligent user behavior (updating device firmware, reviewing privacy settings, and ensuring strong passwords).
We believe CSA is more likely to affect intention and actual behavior when adopting AI-powered smart home technologies. So, we are of the view that users with high CSA could be more critical of technologies that do not provide a strong security solution. Those users are more likely to act as savvy consumers who carefully assess the security and privacy features of technologies and whether these meet the desired standards before adoption, as their critical view reflects selective adoption, rather than full acceptance or rejection of those technologies. This assumption is supported by the idea that CSA significantly influences the users’ decision-making processes [56,79].
We hold that CSA could be a meaningful factor for TTM, as it offers a way to capture how users’ security knowledge affects their intention to use and their actual behavior in the adoption of AI-powered smart home technologies. Considering all the above, we argue that CSA is an important factor and a valuable addition to TTM, as it could influence the adoption of AI-powered smart homes. Each of TTM’s constructs (PTT, IBT, and TST) highlights different and important aspects of trust, while CSA adds an additional layer concerning security threats. For example, in the context of adopting smart home technology, a user with high PTT may adopt it without question, while one who relies on IBT might do so based on its certifications or compliance with regulations. Moreover, a user with high CSA would go beyond that by looking at how well those technologies handle potential cybersecurity risks. So, this leads us to believe that intention and actual behavior in adopting AI-powered smart home technologies may be influenced by CSA levels. Table 1 demonstrates the relation of cybersecurity awareness to TTM’s constructs. Hence, we hypothesize:
H4. 
Cybersecurity Awareness will positively affect individuals’ intention to explore the technology in the adoption of AI-powered smart home technologies.
The relationship between intention and deep structure use is influenced by the level of trust users develop in a specific technology. Those who find it functional, helpful, and reliable will develop an intention to explore its features. The intention to use the system leads to deep structure use, where users move beyond basic operations and engage in more advanced functionalities. They gain the necessary confidence to try complex features through their direct experience with the technology, which forms their trusting beliefs. Through this process, intention functions as a motivational link that converts trust into practical technology usage with value [47,80].
Dinger et al. [81] state that intention to explore and deep structure use are post-adoptive behaviors that are influenced by both emotional and cognitive factors. Users develop trusting cognitive beliefs about technology’s helpfulness, capability, and reliability through their emotional experiences of both positive and negative events. Positive emotions both build trust and boost user willingness to use the technology, which shows emotions can motivate users beyond logical evaluation. The effective use of information technology depends on the fundamental interaction between emotional and cognitive processes. Moqbel et al. [82] identify two essential components of technology engagement after adoption: user intention and deep structure use. The system user makes an intentional choice to continue using the system, which allows them to predict their future actions. The system’s advanced features enable deep structure use, which allows users to achieve meaningful outcomes that go beyond basic functionality. The willingness to use technology defines intention, whereas deep structure use demonstrates how well one uses the technology.
A strong intention often leads to deeper usage, as motivated users are more likely to explore and apply complex functionalities. These constructs, together, help explain both the readiness to use a technology and the extent of its actual use. Therefore, we hypothesize the following:
H5. 
Individuals’ positive behavioral intention significantly enhances deep structure use in the adoption of AI-powered smart home technologies.
Cybersecurity awareness enhances not only individual protection but also systemic resilience. By equipping users with knowledge and proactive behaviors, CSA reduces the overall vulnerability of interconnected devices, thereby contributing to the adaptive capacity and stability of smart home ecosystems [83,84].

2.5. Conceptual Model

In this study, the main drivers behind adopting AI-powered smart home devices are examined, focusing on trust and CSA. The Trust in Technology Model (TTM) by McKnight et al. [47] is utilized, incorporating Propensity to Trust Technology (PTT) and Institution-Based Trust (IBT). PTT reflects a general trust in technology, while IBT involves trust supported by institutions like laws and vendor policies [45].
Cybersecurity Awareness (CSA) is a key factor in measuring users’ knowledge of cyber threats and safe practices. It builds user trust by increasing their confidence in handling security concerns [55,71].
The model identifies two behavioral outcomes: Intention to Use (ITU) and Deep Structure Use (DSU). ITU covers initial adoption, while DSU involves advanced use, such as automation and AI-based security features [82]. Researchers such as Vardakis et al. [85], Slama and Mahmoud [86], and Popova and Zagulova [87] highlight the importance of these deeper usage patterns. The model demonstrates how trust, together with CSA and behavioral intention, influences both the adoption and continued use of AI-powered smart home devices. It presents adoption as an active process that develops through trust-building and behavioral encouragement, while filling theoretical gaps by adding CSA to existing adoption models. The proposed research model is presented in Figure 2.

3. Methodology

3.1. Research Design

The purpose of this survey study was to examine the main factors that enhance actual behavior in the adoption of AI-powered smart home technologies in Saudi Arabia. Data on propensity to trust technology, institution-based trust, trust in specific technology, intention to use, cybersecurity awareness, and deep structure use were collected through survey methods. The survey questions were adapted from instruments that have been validated in previous studies. The proposed research model was analyzed using the PLS-SEM method supported by SmartPLS software (v.4.1.1.2). This study contributes new knowledge to existing research on AI adoption by offering insights for technology providers and policymakers looking to improve user engagement and satisfaction with smart home technologies.
The choice of a quantitative survey and the use of PLS-SEM were guided by the objectives of this study. PLS-SEM is particularly suitable for examining complex theoretical frameworks with multiple constructs and indicators, while also being robust with smaller samples and non-normal data distributions. This methodological choice supports the study’s theoretical differentiation by extending the Trust in Technology Model (TTM) with CSA, thus ensuring practical contributions by offering insights that can guide policymakers and technology providers in fostering safer adoption of AI-powered smart home devices.
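To make the analytical setup concrete, the hypothesized paths (H1–H5) can be expressed in lavaan-style model syntax. The sketch below uses the open-source semopy package purely for illustration: the study itself used SmartPLS, and semopy fits covariance-based SEM rather than PLS-SEM, so this is an approximation, and the indicator names and data file are hypothetical placeholders for the 29 survey items.

```python
import pandas as pd
import semopy

# Lavaan-style description: measurement model (=~) and structural paths (~)
MODEL_DESC = """
PTT =~ ptt1 + ptt2 + ptt3
IBT =~ ibt1 + ibt2 + ibt3
TST =~ tst1 + tst2 + tst3
CSA =~ csa1 + csa2 + csa3
ITU =~ itu1 + itu2 + itu3
DSU =~ dsu1 + dsu2 + dsu3

IBT ~ PTT        # H1
TST ~ IBT        # H2
ITU ~ TST + CSA  # H3, H4
DSU ~ ITU        # H5
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical file of Likert scores
model = semopy.Model(MODEL_DESC)
model.fit(data)
print(model.inspect())  # path estimates, standard errors, p-values
```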

3.2. Research Instrument

The research instrument consisted of a 29-item tool that was adapted from previous studies to assess the variables of interest. The trust-related measures followed McKnight et al. [47], while intention and deep structure use were based on McKnight et al. [47] and Moqbel et al. [82]. The main goal was to evaluate users’ readiness for the adoption of technology and their continued meaningful use of it. The survey questions on security awareness were taken from Ng et al. [55] to gauge how users perceive the security risks linked to using technology.
The 29 items, as detailed in Table 2, were carefully selected to mirror the aspects of each factor and guarantee their relevance within the research setting. The survey tool was divided into five sections for ease of use and understanding. Section A gathered background information from participants, including their education level, age, and level of proficiency in using technology; these data helped put the results into perspective. Sections B through E measured the independent, dependent, mediating, and moderating variables of the study.
To ensure clarity and minimize potential ambiguity, important terms were clearly defined at the beginning of the questionnaire. This introductory section was designed to familiarize respondents with key concepts and terminology used throughout the survey, thereby enhancing the accuracy and consistency of their responses.
The questionnaire used in this study employed a five-point Likert scale ranging from 1 (“strongly disagree”) to 5 (“strongly agree”), thereby allowing participants to express the extent to which they agreed with specific statements related to trust, CSA, and technology adoption. The Propensity to Trust Technology (PTT) scale measured users’ general trust in AI smart home devices based on experience and perceived reliability. The Institution-Based Trust (IBT) scale looked at how factors like vendor reputation and privacy policies affect trust. The Trust in Specific Technology (TST) scale focused on users’ confidence in particular devices during daily tasks. The CSA component evaluated users’ security habits and understanding of cyber threats, including updating devices and using encryption. The behavioral measures, Intention to Use (ITU) and Deep Structure Use (DSU), tracked users’ readiness to use and engage deeply with AI smart home technologies. The survey was designed to align with the study model, thus ensuring precise data collection. Each part contributed to understanding user trust, security awareness, and behavior toward smart home AI devices.
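As a simple illustration of how such Likert responses are typically prepared for analysis, the sketch below averages each respondent’s 1–5 ratings within each construct; the item-to-construct mapping is hypothetical, since the study’s actual 29 items are those listed in Table 2.

```python
import pandas as pd

# Hypothetical mapping of questionnaire items to constructs
CONSTRUCTS = {
    "PTT": ["ptt1", "ptt2", "ptt3"],
    "IBT": ["ibt1", "ibt2", "ibt3"],
    "TST": ["tst1", "tst2", "tst3"],
    "CSA": ["csa1", "csa2", "csa3"],
    "ITU": ["itu1", "itu2", "itu3"],
    "DSU": ["dsu1", "dsu2", "dsu3"],
}

def construct_scores(responses: pd.DataFrame) -> pd.DataFrame:
    """Average each respondent's 1-5 Likert answers within each construct."""
    return pd.DataFrame({name: responses[items].mean(axis=1)
                         for name, items in CONSTRUCTS.items()})
```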
The pilot test served to validate the clarity and suitability of the questionnaire. Analysis of the responses highlighted several issues. For example, some items produced inconsistent answers across respondents, and a few contained terms that were not well understood in the local context. Some questions, for instance, used the term AI-enabled while others used AI-powered, which left respondents uncertain about whether these referred to the same concept or to different ones. We therefore adopted the single term AI consistently, as it was more widely understood. These patterns indicated ambiguity or misalignment with the study setting. To address them, we simplified the wording of the complex items and replaced culturally unfamiliar terms with context-appropriate ones. These revisions improved the instrument’s clarity and ensured that each item measured the intended construct. The final version was then administered to the full study sample.

3.3. Sampling and Data Collection

Data was collected by distributing an online questionnaire link to individuals in Saudi Arabia who utilize AI-powered home technology, resulting in 230 valid participants. To ensure that responses came only from relevant users, the survey included a screening question at the beginning, asking whether the respondent had experience using AI-powered smart home devices; only those who answered “Yes” were allowed to proceed. The survey link was distributed through LinkedIn and WhatsApp groups to reach people who probably would have had experience with AI-powered smart home technologies. Demographic information about educational background, age, and self-assessed technological proficiency was collected to achieve granular results and to reveal adoption patterns across different user segments. The demographic distribution of the final sample (see Table 3) confirmed that responses represented different groups in terms of educational background, age, and technological proficiency.
Our final sample is aligned with established guidelines for SEM. Kline [88] recommends a minimum of 200 participants for such analysis, and our final sample of 230 exceeded this threshold. It also satisfied the 10-times rule [89], which indicates that the minimum sample size should be 10 times the maximum number of predictors pointing to a latent variable in the model. Finally, according to the inverse square root and gamma-exponential methods of Kock and Hadaya [90], a model with two predictors requires substantially fewer than 230 cases, and thus, our sample clearly exceeded the minimum requirement. Together with the demographic distribution reported in Table 3, these criteria confirm both the adequacy and diversity of the sample.
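As a rough illustration of the inverse square root method [90], the sketch below computes the minimum sample size implied by the smallest path coefficient a researcher expects to detect; the constant 2.486 corresponds to the conventional 5% significance level with 80% power, and the example β is illustrative rather than taken from this study.

```python
import math

def min_sample_inverse_sqrt(beta_min: float, z: float = 2.486) -> int:
    """Kock & Hadaya's inverse square root method: minimum n needed to
    detect the smallest expected path coefficient at p < .05, 80% power."""
    return math.ceil((z / abs(beta_min)) ** 2)

# If the weakest expected path were around 0.30, roughly 69 cases would
# suffice, so the study's n = 230 comfortably exceeds the requirement.
print(min_sample_inverse_sqrt(0.30))  # -> 69
```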

4. Results

4.1. Demographic Profile

Table 3 provides an overview of the demographic characteristics of the participants in the study. The vast majority of the participants (83.9%) held a tertiary qualification, thus indicating a well-educated sample. The age distribution reveals that most were under 30 years old (86.1%, n = 198), followed by those aged 31–40 (8.3%, n = 19), 41–50 (5.2%, n = 12), and a minimal representation of participants older than 50 years (0.4%, n = 1). Regarding proficiency in using technology, the majority rated it as medium (61.7%, n = 142), with a significant portion identifying themselves as being highly proficient (33.9%, n = 78) and only a small number considering themselves to have low proficiency (4.3%, n = 10).
This suggests they were likely to be more familiar with and receptive to emerging technologies. The greater number of participants under 30 indicates that the sample consisted mainly of young people. At the same time, the high percentage of medium to high self-assessed skill levels indicates that most felt comfortable with digital tools. However, this measure may not capture individual variations in attitudes, motivation, or job-related passion that could influence technology adoption. Therefore, future research should incorporate behavioral studies across different categories of users to provide a more comprehensive understanding of technology adoption.
Overall, the demographic profile reveals a sample primarily composed of younger, well-educated individuals with moderate to high levels of technological proficiency, which provides insight into the respondents’ ability to engage with and understand technological advancements and related practices.
Table 3. Demographic Profile.
Items | Scale | Frequency | Percent
Education Level | High School | 37 | 16.1
 | Tertiary Qualified | 193 | 83.9
Age | Less than 30 | 198 | 86.1
 | 31–40 | 19 | 8.3
 | 41–50 | 12 | 5.2
 | More than 50 | 1 | 0.4
Level of Proficiency in Using Technology | High | 78 | 33.9
 | Medium | 142 | 61.7
 | Low | 10 | 4.3
Total |  | 230 | 100

4.2. Measurement Model

Partial Least Squares Structural Equation Modeling (PLS-SEM) with SmartPLS (v.4.1.1.2) software was employed for data analysis in order to examine the relationships between the underlying constructs and the observed variables, or indicators. The measurement model provided a systematic way of checking how accurately the indicators captured the theoretical constructs, ensuring that the data collected from the indicators properly represented the theoretical concepts.
PLS-SEM was employed because it is flexible enough to handle complex models with numerous variables and indicators. Through the measurement model, the study ensured that each construct was adequately measured by the selected indicators. This analysis provided a solid foundation for assessing the relationships postulated in the structural model. Figure 3 shows the outcomes of the PLS-SEM algorithm analysis, in which the factor loadings and the connections between the observed and latent variables are depicted. These results confirm that the measurement model is well conceived and reliable, laying a strong foundation for the analysis of the structural model.

4.3. Reliability and Validity of a Measurement Model

4.3.1. Convergent Validity

Convergent validity was assessed using the following measures: factor loadings, Cronbach’s alpha, composite reliability, Average Variance Extracted (AVE), and the Inner Variance Inflation Factor (VIF). First, the factor loadings for each construct were examined to ensure that all were greater than 0.7 [89,91], which would indicate that the indicators had a good correlation with the corresponding constructs.
To establish the reliability of the findings, Cronbach’s alpha and composite reliability were calculated. All of these measures reached or surpassed the recommended level of 0.7 [89], indicating that the constructs had a good level of internal consistency. The Average Variance Extracted (AVE) was also calculated for each construct, and all values were greater than the recommended threshold of 0.5, meaning that each construct explains more than 50% of the variance in its indicators and thus supporting the convergent validity of the model. Lastly, the inner variance inflation factor (VIF) was checked to determine whether there was multicollinearity between the constructs. All VIF values were below the suggested threshold of 5, and thus there was no issue of multicollinearity [89].
When all the findings were considered together, they showed evidence of the adequacy of the measurement model and the conceptualization of the constructs, which provided a solid foundation for the analysis of the structural model. A summary of the findings from the validation of the measurement model is shown in Table 4.
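For readers who wish to reproduce these checks outside SmartPLS, the sketch below computes Cronbach’s alpha, composite reliability, and AVE from their standard formulas; the loadings and item scores shown are hypothetical values, not the figures in Table 4.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Alpha from a respondents-by-indicators matrix of raw item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum(l))^2 / ((sum(l))^2 + sum(1 - l^2)), standardized loadings."""
    s = loadings.sum() ** 2
    return s / (s + (1 - loadings ** 2).sum())

def ave(loadings: np.ndarray) -> float:
    """Average Variance Extracted: mean of squared standardized loadings."""
    return float((loadings ** 2).mean())

lam = np.array([0.82, 0.79, 0.88])        # hypothetical standardized loadings
print((lam > 0.7).all())                  # loadings above 0.7 -> True
print(composite_reliability(lam) >= 0.7)  # CR above 0.7       -> True
print(ave(lam) > 0.5)                     # AVE above 0.5      -> True

scores = np.array([[4, 5, 4], [3, 4, 3], [5, 5, 4], [2, 3, 2]], dtype=float)
print(cronbach_alpha(scores))             # high alpha for these toy scores
```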

4.3.2. Discriminant Validity

Discriminant validity, i.e., whether each construct in the model is empirically distinct from the others, was tested using the Fornell–Larcker Criterion. This criterion compares the square root of the Average Variance Extracted (AVE) for each construct with the correlations between that construct and all the others: the square root of the AVE for each factor should be greater than its correlations with the other factors, which implies that the factor is well conceptualized and does not share excessive variance with the rest. The results of applying the Fornell–Larcker Criterion are presented in Table 5.
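A minimal sketch of this check appears below: the square root of each construct’s AVE is compared with that construct’s correlations with every other construct. The AVE values and correlation matrix are hypothetical, not the figures reported in Table 5.

```python
import numpy as np

constructs = ["PTT", "IBT", "TST"]       # illustrative subset of constructs
ave_vals = np.array([0.69, 0.66, 0.71])  # hypothetical AVEs
corr = np.array([[1.00, 0.58, 0.49],     # hypothetical inter-construct
                 [0.58, 1.00, 0.55],     # correlation matrix
                 [0.49, 0.55, 1.00]])

sqrt_ave = np.sqrt(ave_vals)
for i, name in enumerate(constructs):
    others = np.delete(corr[i], i)   # correlations with the other constructs
    passed = sqrt_ave[i] > np.abs(others).max()
    print(f"{name}: sqrt(AVE)={sqrt_ave[i]:.3f}, "
          f"max |r|={np.abs(others).max():.2f}, pass={passed}")
```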

4.4. Structural Model

In order to understand the relationships between the variables and to assess the paths laid out in the research, the structural model was employed. This model represents the conceptual framework in which the causal relationships are depicted: the independent variables influence the dependent variables, partly through the mediating ones. The analysis of the structural model was performed using path coefficients, which represent the magnitude and direction of the relationships between the variables.
The structural model analysis made clear and meaningful contributions to the understanding of the research questions: it identified the relationships among the key constructs, supported the hypothesized relationships, and provided a solid basis for the study’s conclusions by revealing insightful information about how the constructs relate to one another. The path coefficients are shown in Figure 4, and the hypothesis test results are in Table 6.
The path coefficients and hypothesis testing results are presented in Table 6, which demonstrates the relationships between the variables used in the study. All five hypotheses are supported with significant path coefficients, large T-statistics, and p-values of 0.001 or less, thus providing strong support for the postulated relationships.
H1 examines the relationship between Propensity to Trust Technology (PTT) and Institution-Based Trust in Technology (IBT). The results show a path coefficient of 0.834, thus indicating a strong positive relationship. The coefficient is statistically significant at the 0.05 level, as evidenced by a T-statistic of 29.169 and a p-value of 0.000. These findings provide strong support for H1, confirming that consumers’ general propensity to trust technology positively influences their institution-based trust in technology.
H2 examines the relationship between Institution-Based Trust (IBT) and Trust in Specific Technology (TST). The path coefficient of 0.713 shows a strong positive relationship. This is also supported by the T-statistic of 18.327 and the p-value of 0.000, confirming that institutional factors, such as policies and regulations, positively influence trust in specific technologies.
H3 examines the relationship between Trust in Specific Technologies (TST) and Intention to Use (ITU). The path coefficient of 0.469 and a T-statistic of 7.511 indicate a highly significant and positive relationship. This means that trust in AI-enabled devices has a positive influence on users’ intentions to adopt and use such devices.
H4 examines the relationship between Cybersecurity Awareness CSA and ITU. The path coefficient is 0.396, and the T-statistic is 6.602, which is significant and positive. This result also reinforces the notion that users’ awareness of cybersecurity risks is crucial in determining their adoption intentions.
H5 tests the impact of ITU on DSU. This relationship has the highest path coefficient of 0.834, a T-statistic of 30.851, and a p-value of 0.000, which shows that it is highly significant. This means that users’ intentions to use AI-enabled devices are strongly linked with actual usage behaviors.
Thus, the results show that trust (PTT, IBT, TST), CSA, and the intention to use are crucial factors that affect the adoption and the level of engagement with AI-enabled smart home devices. This finding is consistent with the theoretical framework of the study and provides useful recommendations for encouraging technology adoption.
Table 7 presents the R-squared (R2) and the adjusted R-squared (R2 adjusted) values for the dependent variables in the study, which show the ability of the independent variables to explain the variances in the dependent variables [89]. For Deep Structure Use (DSU), the R2 is 0.696, which means that 69.6% of the variance in DSU can be attributed to the variables included in the model. The adjusted R2 is slightly lower at 0.695, which also establishes that the explanatory power of the model is still quite high even after adjusting for the number of independent variables. This high value indicates that the variables that have an influence over DSU, including ITU, are adequately covered in the model.
For ITU, the R2 is 0.597, which means that its independent variables can predict 59.7% of the variance in ITU. The adjusted R2 value of 0.593 suggests that there is only a small decrease in the R-squared value, thus indicating that the predictors used in the model are still relevant and the model is not overfitted. This indicates that trust in a specific technology and CSA is critical in determining users’ intentions. In the case of Trust in Specific Technology (TST), the R2 is 0.509, indicating that 50.9% of the variance in TST can be attributed to the independent variables. The adjusted R2 value of 0.507 shows that the predictors have a high level of consistency in accounting for the variance of TST. This indicates that there are strong relationships between variables like propensity to trust and institution-based trust in the users’ trust in specific technologies. In the case of Institution-Based Trust in Technology (IBT), the R2 value is 0.695, meaning that 69.5% of the variance in this variable can be explained by the independent variables. The adjusted R2 value of 0.694 further confirms the model’s robustness, suggesting that the predictors demonstrate a high level of consistency in explaining the variance of IBT.
In summary, the R2 and adjusted R2 values indicate that the model has moderate to high predictive power for all the dependent variables, showing that it is effective in explaining DSU, ITU, TST, and IBT. The close proximity of the R2 and adjusted R2 values further indicates that the model is not overfitted and captures the essential relationships well. This strengthens the conceptual model put forward to explicate the adoption of AI-enabled smart home devices.
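The adjusted values reported above can be verified directly from the standard formula R2_adj = 1 - (1 - R2)(n - 1)/(n - k - 1), as the short sketch below shows; it assumes n = 230 and takes k as the number of structural predictors of each endogenous construct (two for ITU, one each for DSU, TST, and IBT).

```python
def adjusted_r2(r2: float, n: int, k: int) -> float:
    """Standard adjusted R-squared for n observations and k predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

print(round(adjusted_r2(0.696, 230, 1), 3))  # DSU -> 0.695
print(round(adjusted_r2(0.597, 230, 2), 3))  # ITU -> 0.593
print(round(adjusted_r2(0.509, 230, 1), 3))  # TST -> 0.507
print(round(adjusted_r2(0.695, 230, 1), 3))  # IBT -> 0.694
```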

5. Discussion

Hypothesis 1 is strongly supported by the results, which demonstrate a positive relationship between Propensity to Trust Technology (PTT) and Institution-Based Trust in Technology (IBT). The findings are consistent with previous studies by Wu et al. [50], Zarifis and Fu [49], and McKnight et al. [47], who emphasized that individuals with a higher propensity to trust are more likely to rely on institutional safeguards and structural assurances, which, in turn, strengthen their trust in technology. The hypothesis that PTT will positively affect IBT is particularly pertinent in the current context of Saudi Arabia, where there is growing use of AI-powered smart home devices and other technological advancements in line with the Vision 2030 goals to enhance citizens’ quality of life. The analysis indicates that participants who have a general trust in technology also extend this trust to institutional mechanisms that ensure the reliability, safety, and accountability of systems, including AI-based home assistants and security devices. This is especially important in Saudi Arabia, where a young and tech-savvy population is accustomed to adopting new technologies in everyday life. This general willingness to trust provides a strong foundation for IBT, as positive experiences with general technologies, including smartphones and IoT platforms, encourage users to place confidence not only in the technologies themselves, but also in the institutional structures that regulate and safeguard them.
Hypothesis 2 is strongly supported, demonstrating a positive relationship between Institution-Based Trust (IBT) and Trust in a Specific Technology (TST). The findings of this study are in line with previous studies by McKnight et al. [47] and Lu et al. [92]. The argument that IBT positively affects TST is highly relevant in the Saudi Arabian context, where institutional factors, such as third-party assurances, laws, institution-based standards, and certification, play a key role in shaping trust in AI-powered smart home devices. Saudi Arabia's technology programs under Vision 2030 have strengthened the institutional framework for technology adoption, and legal and regulatory requirements have been put in place to secure data and reinforce users' confidence in AI. AI-based home systems, including smart home assistants and security cameras, are considered safe because they are endorsed by well-established companies, comply with privacy policies, and are protected by strong cybersecurity measures.
For Saudi users, this institution-based trust helps to address the most pressing concerns about the privacy and security of AI-powered solutions, enabling them to use such systems comfortably. For instance, knowing that smart home devices are developed and maintained by renowned companies with a reputation for quality enhances trust in the performance of these devices. In addition, the perception of government oversight and certification provides further reassurance, since individuals believe that such technologies are regulated by a strict set of rules. This institutional assurance plays a major role in fostering the uptake of AI-powered smart home devices, offering a form of protection that makes people comfortable enough to accept these technologies. The research shows how societal institutions in Saudi Arabia shape people's behavior and how such factors are key in determining the level of trust that people have and, consequently, their willingness to embrace new technologies.
Hypothesis 3 is strongly supported, showing a positive relationship between trust in a specific technology and the intention to use AI-powered smart home devices. The findings are in line with previous studies by McKnight et al. [47] and Kuen et al. [93]. The proposition that trust in a specific technology positively influences an individual's intention to use it is relevant to the Saudi Arabian context, specifically to the adoption of AI-powered smart home devices based on users' direct experience with, and trust in, the technology. Trust in a specific technology can be defined as the level of confidence a user has in its ability to perform as expected, meet the user's needs, and remain safe. As the study findings show, users in Saudi Arabia who believe in the capabilities of AI-powered devices, such as smart security systems or virtual assistants, are likely to try out other options and integrate these new technologies into their everyday lives. This trust stems from users' perception that such systems have improved household convenience and security, which encourages further usage.
In the Saudi context, interest in adopting AI-powered smart home technologies is also driven by societal development and the young generation. The findings also reveal that when users believe that smart home devices will protect their privacy, they are likely to use more features. This pattern is in line with Saudi Arabia's broader technological advancement and demonstrates increasing trust in the application of AI in homes.
Hypothesis 4 is strongly supported, given the positive relationship that emerged between CSA and the intention to use AI-powered smart home devices. The findings are in line with previous studies by Grassegger and Nedbal [94] and Hwang et al. [95]. The proposition that CSA positively affects an individual's intention to use the technology is particularly crucial in the Saudi context, where there are concerns about the privacy and security of AI in smart home devices. CSA refers to users' awareness of the risks involved in the use of their data and devices. The findings show that those who are well informed about the cybersecurity risks associated with AI-based smart home devices are more open to trying out the new features available. This awareness gives people the confidence to use these systems and to take advantage of the features offered, while assuring them that their data is protected from intrusions such as hacking.
This relationship is especially important in the Saudi context, given that the country is striving to ensure the safe and secure use of technology as part of the Vision 2030 plan. Cybersecurity campaigns and educational programs are central to building user confidence and reducing fear and skepticism about adopting AI-powered smart home systems. Users who grasp the value of encryption, secure configuration settings, and regular updates tend to explore advanced features, including real-time monitoring and remote access, because they trust these technologies. The research indicates that increased CSA can heighten user anxiety, but it simultaneously boosts interest in smart home technology adoption, which supports the main objective of building trust and promoting adoption.
The research findings support Hypothesis 5: the intention to use AI-powered smart home devices is positively related to deep structure use. The results match the findings of Popova and Zagulova [87]. This shows that positive behavioral intention leads to actual behavior when people use AI-powered smart home technologies, because the intention–behavior relationship is strong. The results also show that people who plan to use AI-powered smart home devices are more likely to put those plans into action. Growing awareness of the advantages of AI-powered devices, including camera-based security and energy efficiency, has made this possible. For instance, those who plan to use features such as remote monitoring or energy management are likely to upgrade to devices with such features and integrate them into their daily routines, which reflects the level of trust they place in the technology.
The Saudi Arabian market benefits from ongoing initiatives, including Vision 2030, which promotes technology integration to enhance quality of life. The research findings demonstrate that as people develop more positive intentions, they become more likely to adopt AI-powered smart home technologies deeply and persistently. This translation of intention into actual behavior is consistent with the region's technological advancement and the maturation of smart home technology. The outcomes highlight the role of positive user intentions, which can be fostered through trust-building actions, user education, and technology advocacy, since such intentions are the major predictors of actual adoption and continued use of AI-powered devices in Saudi Arabia.
The findings highlight the systemic relevance of AI-powered smart homes. Users' trust and cybersecurity awareness, together with intention and deep structure use, shape how households integrate technology into evolving routines and interactions. These dynamics suggest that adoption drivers extend beyond individual behavior to influence resilience, stability, and adaptability at the systemic level. For example, users who are more aware of cybersecurity risks take fewer risks, which reduces vulnerabilities in IoT ecosystems and makes them more resilient.
Specifically, the findings show that: (1) propensity to trust supports systemic stability through confidence in institutions; (2) cybersecurity awareness enhances systemic resilience by reducing vulnerabilities; and (3) intention and deep structure use foster systemic adaptation by embedding AI into household routines. Together, these links highlight how micro-level adoption factors map onto systemic properties.

6. Implications

6.1. Practical Implications

The research results deliver several useful implications for technology companies, marketing professionals, and government officials, who need to account for Saudi Arabia's current technological progress. The findings stress that, in addition to providing functional and innovative products, the developers and manufacturers of AI-powered smart home devices need to pay significant attention to security and privacy issues. The outcomes reveal that, in Saudi Arabia and other emerging markets, users are highly aware of the security threats that come with using AI-powered devices. This study also provides evidence that institution-based trust, that is, the belief that smart home devices are backed by well-known companies that have put proper cybersecurity measures in place, is a key factor in the adoption process. Hence, manufacturers should focus on aspects of data security such as providing clear privacy policies, encrypting networks, and meeting the cybersecurity requirements of the specific country. Features that explain how user data is protected, the steps taken to ensure privacy, and the security measures put in place will help increase users' confidence and make them more willing to adopt AI-powered devices in their homes.
In addition, product guarantees and certifications are crucial instruments for enhancing institutional trust. In the Saudi market, where consumers may be new to AI technologies, third-party stamps of approval or government certification of product safety and security can go a long way toward putting consumers' minds at ease. Technology providers should work with regulators to ensure that the products they develop meet the required standards, and this should be communicated to consumers through simple guarantees that their data and devices are safe. This will not only boost the adoption rate of AI-powered smart home devices but also help companies establish a strong brand identity in the current market and in the future.
Additionally, marketers need to develop targeted approaches that build trust in technology in general as well as in particular applications. These initiatives should demonstrate how AI-powered smart home devices deliver concrete advantages, including better security, convenience, and energy efficiency, while actively addressing consumer worries. Marketing communications should focus on new products and services, highlighting the features that encourage continued use of smart home technologies.
From a policy perspective, the findings highlight the significance of educational awareness programs that inform users about the threats posed by IoT and AI, thereby enhancing knowledge of the associated risks. Policymakers should build institutional trust by enforcing laws that protect users' information and ensuring that manufacturers comply with them. The education of new AI technology users about security practices depends heavily on such awareness initiatives. These programs can be delivered to the public through various channels, including online forums, training workshops, and university collaborations, providing information about safe and effective smart home device usage.
Another implication of the results is the need for coordinated efforts between the public and private sectors to establish national guidelines for smart home cybersecurity standards. The study findings highlight how trust is strengthened by visible institutional support. A centralized certification system or a ‘Saudi Smart Home Security Label’ could provide consumers with assurance that a product meets national safety benchmarks, much like energy efficiency labels do today. This standardization would also encourage healthy competition among vendors to meet higher security thresholds.
Furthermore, academic institutions and training centers should integrate smart home technology and cybersecurity education into their digital literacy programs. Government-sponsored awareness campaigns and community engagement initiatives, especially in rural and underserved regions, can help bridge the knowledge gap and accelerate safe and confident adoption. Moreover, the research results can help Saudi Arabia develop future strategic plans under Vision 2030 by showing how public perception and behavioral factors affect digital transformation. The findings support the creation of secure, user-friendly technologies and national strategies to build digital resilience in society. Through the focus on trust, CSA, and user behavior, the study findings emphasize the importance of building both propensity to trust and institution-based trust to increase user confidence in adopting new technologies.
The findings also provide guidance for designing strategies that bridge the gap between intention and actual adoption behavior, particularly in cultural contexts like Saudi Arabia. Understanding that positive behavioral intention can lead to actual use, organizations should develop targeted interventions that reinforce user motivation and facilitate ease of use. Furthermore, the findings can inform similar initiatives in other developing countries and the Middle Eastern region undergoing digital transformation. That is, the theoretical framework developed in this study can help stakeholders in such contexts to assess and address trust and security concerns to accelerate the adoption of smart technologies.

6.2. Theoretical Implications

This study makes important theoretical contributions to the existing research on the adoption of AI-enabled smart home technology, with a focus on trust, CSA, and user behavior. First, it extends the Trust in Technology Model (TTM) by showing how propensity to trust influences institution-based trust in technology (IBT). Second, it shows how IBT, in turn, influences users' trust in specific technologies. Through the empirical analysis of the adoption of AI-enabled smart home devices in Saudi Arabia, the study findings support the notion that general trust in technology is a foundational factor underpinning users' IBT. The findings also expand the understanding of the role of CSA in users' plans to adopt AI-enabled smart home technology, emphasizing the need to incorporate security knowledge into accounts of users' adoption behavior. This theoretical extension helps to explain how trust and security behaviors influence the adoption of technology, particularly where data privacy and security are a concern.
Theoretically, this study connects micro-level adoption drivers to macro-level systemic properties such as resilience, stability, and adaptation. By showing how trust and CSA interact to strengthen user confidence and behavioral continuity, the study demonstrates that adoption models can also capture systemic features of socio-technical ecosystems, not only individual attitudes.
In addition, the study findings enhance the knowledge of the relationship between intention and behavior in the adoption of AI technologies, especially in the cultural context of Saudi Arabia. The outcomes provide strong evidence that positive behavioral intention results in actual technology adoption. The use of Saudi Arabia as the research context also extends technology acceptance theory by revealing how these factors may affect the adoption of technology in developing countries. The result is a theoretical framework that can be applied to regions, such as other Middle Eastern countries, that are undergoing technological advancement and digital change.

7. Limitations and Directions for Future Research

The present study has some limitations that should be mentioned. First, the research took place in Saudi Arabia, and while the results deliver valuable insights about AI adoption in this specific country, they do not provide enough evidence to generalize these findings to users in other Gulf countries. The model needs further testing across different countries of the region to verify its applicability in various cultural and institutional settings. The research findings may also have been affected by Saudi Arabia's specific circumstances. For instance, the country has invested heavily in smart technologies, digital infrastructure, and AI integration as part of its Vision 2030 initiative. This rapid technological progress has made Saudi Arabia one of the top smart home technology adopters in the Middle East. Notably, the adoption of AI-powered devices remains higher among urban, affluent, and tech-savvy populations, because average consumers face challenges such as high device costs. Saudi Arabia occupies a transitional position globally: it has more developed infrastructure and policy support than many developing nations, but its household adoption rates remain lower than those of the United States, Germany, and the Republic of Korea. The study delivers valuable insights that could apply to comparable emerging markets undergoing digital transformation, but researchers should exercise caution when extending these findings to lower-income or less digitally advanced countries. Future comparative studies across diverse economies would provide a broader understanding of how national context influences trust, CSA, and the adoption of smart home technologies.
Second, the cross-sectional nature of the study provides only a snapshot of users' intentions and behaviors at a single point in time. Future research should therefore employ longitudinal designs to explore the relationships between trust, CSA, and institutional support in the adoption of AI-powered home technologies over time, under changing conditions, and as new technologies and policies are introduced.
Third, the sample composition could restrict how well the research findings represent the population. The research participants consisted mainly of young people who were tech-savvy and well-educated. That is, the demographic profile did not represent the entire population of smart home technology users, especially older adults and those with lower levels of digital literacy, who may have different trust concerns or adoption behaviors. Future research should include a more diverse and stratified sample to improve the external validity of the findings.
Fourth, another limitation of this research lies in its reliance on a single theoretical framework, namely the Trust in Technology Model (TTM). Despite this model providing valuable insights into the roles of trust and CSA, it represents a relatively narrow perspective on technology adoption. Broader models, such as the Theory of Planned Behavior (TPB) and the Technology Acceptance Model (TAM), capture additional psychological, social, and behavioral determinants that may enrich our understanding of user adoption. Future studies should, therefore, integrate or compare these models with TTM to provide a more comprehensive theoretical foundation. Such an approach would not only deepen the critical reflection on adoption factors but also support the development of more advanced and well-rounded research instruments, including questionnaires that capture a wider scope of influences on user behavior.
Finally, the research concentrated on AI-powered smart home devices, which have distinctive features and operational risks compared with other AI applications (e.g., AI in healthcare or transportation). Hence, the results should be applied with care when making predictions about wider AI adoption scenarios. These limitations notwithstanding, this study makes a number of theoretical and practical contributions that will be useful for informing future research as well as for the effective adoption of AI-powered smart home technologies in Saudi Arabia and other similar markets.

8. Conclusions

This study aimed to identify the factors influencing the adoption of AI-powered smart home devices in Saudi Arabia, with emphasis on general trust in technology, institution-based trust, cybersecurity awareness, and trust in specific technologies. All five hypotheses were confirmed, indicating the importance of general trust in technology and institution-based trust in shaping trust in specific technologies. In addition, this study identified the roles of trust in specific technologies and cybersecurity awareness in determining users' intentions toward adopting AI-powered smart home devices. Intentions, in turn, showed a very significant relationship with deep structure use. The findings also support the notion that those who trust technology in general will most likely trust particular technologies, including AI-powered smart home devices.
Furthermore, respondents with better cybersecurity knowledge were found to be more likely to try out these devices' advanced settings and options. Institution-based trust in technology, underpinned by appropriate laws and regulations, further enhances users' trust. The research also revealed that positive behavioral intentions toward adoption translate directly into practice, underlining the role of trust and intent in the adoption process.
This research used Structural Equation Modeling (SEM) with SmartPLS software (v.4.1.1.2) to examine the relationships among these variables and thus provide support for the proposed model. The outcomes of this study can be useful to technology developers, marketers, and policymakers, especially in Saudi Arabia, where digital transformation is one of the main objectives of Vision 2030. To this end, stakeholders should endeavor to build trust among users by implementing clear and conspicuous security measures, strengthening institutions, and running awareness campaigns on cybersecurity. The research findings demonstrate that trust stands as the primary factor determining user intentions and actual use, whether in the form of general trust in technology, institution-based trust, or trust in specific AI-powered smart home devices. In particular, the trust between users and their devices grows stronger through cybersecurity awareness, because it provides the understanding needed to use sophisticated device capabilities. This trust-and-awareness dynamic should be a strategic focus for policymakers and technology providers who want to accelerate the adoption of AI-powered smart home devices in Saudi Arabia. In sum, this paper identifies the factors that, if addressed by technology providers and regulators, would facilitate the development of a secure and enabling environment for the adoption of AI-powered smart home devices.

Author Contributions

Conceptualization, M.M.A. and Y.H.A.-M.; methodology, M.M.A. and Y.H.A.-M.; software, M.M.A. and Y.H.A.-M.; validation, M.M.A. and Y.H.A.-M.; formal analysis, M.M.A. and Y.H.A.-M.; investigation, M.M.A. and Y.H.A.-M.; resources, M.M.A. and Y.H.A.-M.; data curation, M.M.A. and Y.H.A.-M.; writing—original draft preparation, M.M.A. and Y.H.A.-M.; writing—review and editing, M.M.A. and Y.H.A.-M.; visualization, M.M.A. and Y.H.A.-M.; supervision, Y.H.A.-M.; project administration, M.M.A.; funding acquisition, M.M.A. and Y.H.A.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Institutional Review Board Statement

This study was reviewed and approved by the Research Ethics Committee (REC) at the University of Ha’il (approval no. H-2025-576, dated 20 January 2025).

Informed Consent Statement

Informed consent for participation was obtained from all subjects involved in the study.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Marikyan, D.; Papagiannidis, S.; Alamanos, E. Cognitive dissonance in technology adoption: A study of smart home users. Inf. Syst. Front. 2023, 25, 1101–1123. [Google Scholar] [CrossRef]
  2. Vasalou, A.; Benton, L.; Serta, A.; Gauthier, A.; Besevli, C.; Turner, S.; Gill, R.; Payler, R.; Roesch, E.; McAreavey, K.; et al. Doing cybersecurity at home: A human-centred approach for mitigating attacks in AI-enabled home devices. Comput. Secur. 2025, 148, 104112. [Google Scholar] [CrossRef]
  3. Statista, Smart Home: Market Data & Analysis. 2024. Available online: https://www.statista.com/outlook/cmo/smart-home/worldwide (accessed on 1 October 2024).
  4. Bovenzi, G.; Aceto, G.; Ciuonzo, D.; Montieri, A.; Persico, V.; Pescapé, A. Network Anomaly Detection Methods in IoT Environments via Deep Learning: A Fair Comparison of Performance and Robustness. Comput. Secur. 2023, 128, 103167. [Google Scholar] [CrossRef]
  5. Allen, A.; Mylonas, A.; Vidalis, S.; Gritzalis, D. Smart Homes under Siege: Assessing the Robustness of Physical Security against Wireless Network Attacks. Comput. Secur. 2024, 139, 103687. [Google Scholar] [CrossRef]
  6. Cichy, P.; Salge, T.O.; Kohli, R. Privacy Concerns and Data Sharing in the Internet of Things: Mixed Methods Evidence from Connected Cars. MIS Q. 2021, 45, 1863–1891. [Google Scholar] [CrossRef]
  7. Zeng, P.; Pan, B.; Choo, K.-K.R.; Liu, H. Multidimensional and Multidirectional Data Aggregation for Edge Computing-Enhanced IoT. J. Syst. Arch. 2020, 106, 101713. [Google Scholar] [CrossRef]
  8. Liu, Y.-L.; Huang, L.; Yan, W.; Wang, X.; Zhang, R. Privacy in AI and the IoT: The Privacy Concerns of Smart Speaker Users and the Personal Information Protection Law in China. Telecommun. Policy 2022, 46, 102334. [Google Scholar] [CrossRef]
  9. Heartfield, R.; Loukas, G.; Bezemskij, A.; Panaousis, E. Self-Configurable Cyber-Physical Intrusion Detection for Smart Homes Using Reinforcement Learning. IEEE Trans. Inf. Forensics Secur. 2020, 16, 1720–1735. [Google Scholar] [CrossRef]
  10. Acosta, L.H.; Reinhardt, D. “Alexa, How Do You Protect My Privacy?” A Quantitative Study of User Preferences and Requirements About Smart Speaker Privacy Settings. Comput. Secur. 2025, 151, 104302. [Google Scholar] [CrossRef]
  11. Liu, Y.; Gan, Y.; Song, Y.; Liu, J. What Influences the Perceived Trust of a Voice-Enabled Smart Home System: An Empirical Study. Sensors 2021, 21, 2037. [Google Scholar] [CrossRef]
  12. Chrysikou, E.; Biddulph, J.P.; Loizides, F.; Savvopoulou, E.; Rehn-Groenendijk, J.; Jones, N.; Dennis-Jones, A.; Nandi, A.; Tziraki, C. Creating resilient smart homes with a heart: Sustainable, technologically advanced housing across the lifespan and frailty through inclusive design for people and their robots. Sustainability 2024, 16, 5837. [Google Scholar] [CrossRef]
  13. Li, W.; Yigitcanlar, T.; Erol, I.; Liu, A. Motivations, barriers and risks of smart home adoption: From systematic literature review to conceptual framework. Energy Res. Soc. Sci. 2021, 80, 102211. [Google Scholar] [CrossRef]
  14. Nikou, S. Factors Driving the Adoption of Smart Home Technology: An Empirical Assessment. Telemat. Inform. 2019, 45, 101283. [Google Scholar] [CrossRef]
  15. Torres-Hernandez, C.M.; Garduño-Aparicio, M.; Rodriguez-Resendiz, J. Smart Homes: A Meta-Study on Sense of Security and Home Automation. Technologies 2025, 13, 320. [Google Scholar] [CrossRef]
  16. Xu, Y.; Liu, X.; Cao, X.; Huang, C.; Liu, E.; Qian, S.; Liu, X.; Wu, Y.; Dong, F.; Qiu, C.-W.; et al. Artificial Intelligence: A Powerful Paradigm for Scientific Research. Innovation 2021, 2, 100179. [Google Scholar] [CrossRef]
  17. Morris, D.; Madzudzo, G.; Perez, A.G. Cybersecurity and the Auto Industry: The Growing Challenges Presented by Connected. Int. J. Automot. Technol. Manag. 2018, 18, 105. [Google Scholar] [CrossRef]
  18. Urquhart, L.; McAuley, D. Avoiding the Internet of Insecure Industrial Things. Comput. Law Secur. Rev. 2018, 34, 450–466. [Google Scholar] [CrossRef]
  19. Humayun, M.; Tariq, N.; Alfayad, M.; Zakwan, M.; Alwakid, G.; Assiri, M. Securing the Internet of Things in Artificial Intelligence Era: A Comprehensive Survey. IEEE Access 2024, 12, 25469–25490. [Google Scholar] [CrossRef]
  20. Li, S.; Yuan, F.; Liu, J. Smart city VR landscape planning and user virtual entertainment experience based on artificial intelligence. Entertain. Comput. 2024, 51, 100743. [Google Scholar] [CrossRef]
  21. Lim, H.S.M.; Taeihagh, A. Autonomous vehicles for smart and sustainable cities: An in-depth exploration of privacy and cybersecurity implications. Energies 2018, 11, 1062. [Google Scholar] [CrossRef]
  22. Roche, R.; Berthold, F.; Gao, F.; Wang, F.; Ravey, A.; Williamson, S. A model and strategy to improve smart home energy resilience during outages using vehicle-to-home. In Proceedings of the 2014 IEEE International Electric Vehicle Conference (IEVC), Florence, Italy, 17–19 December 2014; pp. 1–6. [Google Scholar]
  23. Wilson, C.; Hargreaves, T.; Hauxwell-Baldwin, R. Smart homes and their users: A systematic analysis and key challenges. Pers. Ubiquitous Comput. 2015, 19, 463–476. [Google Scholar] [CrossRef]
  24. Lankton, N.K.; McKnight, D.H.; Tripp, J. Technology, Humanness, and Trust: Rethinking Trust in Technology. J. Assoc. Inf. Syst. 2015, 16, 880–918. [Google Scholar] [CrossRef]
  25. Omrani, N.; Rivieccio, G.; Fiore, U.; Schiavone, F.; Agreda, S.G. To trust or not to trust? An assessment of trust in AI-based systems: Concerns, ethics and contexts. Technol. Forecast. Soc. Change 2022, 181, 121763. [Google Scholar] [CrossRef]
  26. Cheng, X.; Zhang, X.; Yang, B.; Fu, Y. An Investigation on Trust in AI-Enabled Collaboration: Application of AI-Driven Chatbot in Accommodation-Based Sharing Economy. Electron. Commer. Res. Appl. 2020, 54, 101164. [Google Scholar] [CrossRef] [PubMed]
  27. Jagatheesaperumal, S.K.; Pham, Q.-V.; Ruby, R.; Yang, Z.; Xu, C.; Zhang, Z. Explainable AI over the Internet of Things (IoT): Overview, State-of-the-Art and Future Directions. IEEE Open J. Commun. Soc. 2022, 3, 2106–2136. [Google Scholar] [CrossRef]
  28. Dhagarra, D.; Goswami, M.; Kumar, G. Impact of Trust and Privacy Concerns on Technology Acceptance in Healthcare: An Indian Perspective. Int. J. Med. Inform. 2020, 141, 104164. [Google Scholar] [CrossRef]
  29. Singh, S.; Sahni, M.M.; Kovid, R.K. What Drives Fintech Adoption? A Multi-Method Evaluation Using an Adapted Technology Acceptance Model. Manag. Decis. 2020, 58, 1675–1697. [Google Scholar] [CrossRef]
  30. Söllner, M.; Benbasat, I.; Gefen, D.; Leimeister, J.M.; Pavlou, P.A. Trust: An MIS Quarterly Research Curation. MIS Q. 2016, 1–9. Available online: http://misq.org/research-curations (accessed on 1 October 2024).
  31. Bach, T.A.; Khan, A.; Hallock, H.; Beltrão, G.; Sousa, S. A Systematic Literature Review of User Trust in AI-Enabled Systems: An HCI Perspective. Int. J. Hum.-Comput. Interact. 2022, 40, 1251–1266. [Google Scholar] [CrossRef]
  32. Shneiderman, B. Human-Centered Artificial Intelligence: Three Fresh Ideas. AIS Trans. Hum.-Comput. Interact. 2020, 12, 109–124. [Google Scholar] [CrossRef]
  33. Xu, W. Toward Human-Centered AI: A Perspective from Human-Computer Interaction. Interactions 2019, 26, 42–46. [Google Scholar] [CrossRef]
  34. Sah, J.; Jun, S. The Role of Consumers’ Privacy Awareness in the Privacy Calculus for IoT Services. Int. J. Hum.-Comput. Interact. 2024, 40, 3173–3184. [Google Scholar] [CrossRef]
  35. Padyab, A.; Habibipour, A.; Rizk, A.; Ståhlbröst, A. Adoption Barriers of IoT in Large Scale Pilots. Information 2020, 11, 23. [Google Scholar] [CrossRef]
  36. Hammi, B.; Zeadally, S.; Khatoun, R.; Nebhen, J. Survey on Smart Homes: Vulnerabilities, Risks, and Countermeasures. Comput. Secur. 2022, 117, 102677. [Google Scholar] [CrossRef]
  37. Rentz, P. OWASP Releases Latest Top 10 IoT Vulnerabilities. 2019. Available online: https://www.techwell.com/techwell-insights/2019/01/owasp-releases-latest-top-10-iot-vulnerabilities (accessed on 1 October 2024).
  38. Allifah, N.M.; Zualkernan, I.A. Ranking Security of IoT-Based Smart Home Consumer Devices. IEEE Access 2022, 10, 18352–18369. [Google Scholar] [CrossRef]
  39. Zheng, S.; Apthorpe, N.; Chetty, M.; Feamster, N. User Perceptions of Smart Home IoT Privacy. Proc. ACM Hum.-Comput. Interact. 2018, 2, 200. [Google Scholar] [CrossRef]
  40. Avast. Avast Smart Home Security Report 2019. 2019. Available online: https://cdn2.hubspot.net/hubfs/486579/avast_smart_home_report_feb_2019.pdf (accessed on 1 October 2024).
  41. NIST. Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile. 2024. Available online: https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf (accessed on 1 October 2024).
  42. ENISA. Threat Landscape for Artificial Intelligence. 2020. Available online: https://futurium.ec.europa.eu/sites/default/files/2020-12/ENISA%20Report%20-%20Artificial%20Intelligence%20Cybersecurity%20Challenges.pdf (accessed on 1 October 2024).
  43. Dosumu, O.S.; Uwayo, S.M. Modelling the Adoption of Internet of Things (IoT) for Sustainable Construction in a Developing Economy. Built Environ. Proj. Asset Manag. 2023, 13, 394–411. [Google Scholar] [CrossRef]
  44. Choudhury, A.; Shamszare, H. Investigating the impact of user trust on the adoption and use of ChatGPT: Survey analysis. J. Med. Internet Res. 2023, 25, e47184. [Google Scholar] [CrossRef]
  45. Albayaydh, W.; Flechais, I.; Zhao, R.; Albayaydh, J. AI For Privacy in Smart Homes: Exploring How Leveraging AI-Powered Smart Devices Enhances Privacy Protection. arXiv 2025, arXiv:2509.14050. [Google Scholar] [CrossRef]
  46. Büscher, C.; Sumpf, P. “Trust” and “confidence” as socio-technical problems in the transformation of energy systems. Energy Sustain. Soc. 2015, 5, 34. [Google Scholar] [CrossRef]
  47. McKnight, D.H.; Carter, M.; Thatcher, J.B.; Clay, P.F. Trust in a Specific Technology: An Investigation of Its Components and Measures. ACM Trans. Manag. Inf. Syst. 2011, 2, 12. [Google Scholar] [CrossRef]
  48. Shin, D. The Effects of Explainability and Causability on Perception, Trust, and Acceptance: Implications for Explainable AI. Int. J. Hum.-Comput. Stud. 2021, 146, 102551. [Google Scholar] [CrossRef]
  49. Zarifis, A.; Fu, S. Re-evaluating trust and privacy concerns when purchasing a mobile app: Re-calibrating for the increasing role of artificial intelligence. Digital 2023, 3, 286–299. [Google Scholar] [CrossRef]
  50. Wu, J.; Wang, Z.; Huang, L. The relationship among propensity to trust, institution-based trust, perceived control, and trust in platform. In Proceedings of the 2010 IEEE 2nd Symposium on Web Society (SWS), Beijing, China, 16–17 August 2010; pp. 424–428. [Google Scholar]
  51. Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds Mach. 2018, 28, 689–707. [Google Scholar] [CrossRef] [PubMed]
  52. Imran, A.; Gregor, S. Conceptualising an IT Mindset and Its Relationship to IT Knowledge and Intention to Explore IT in the Workplace. Inf. Technol. People 2019, 32, 1536–1563. [Google Scholar] [CrossRef]
  53. Sykes, T.A.; Venkatesh, V. Explaining Post-Implementation Employee System Use and Job Performance. MIS Q. 2017, 41, 917–936. Available online: https://www.jstor.org/stable/26635019 (accessed on 1 October 2024). [CrossRef]
  54. Robert, L.P.; Sykes, T.A. Extending the Concept of Control Beliefs: Integrating the Role of Advice Networks. Inf. Syst. Res. 2017, 28, 84–96. [Google Scholar] [CrossRef]
  55. Ng, B.-Y.; Kankanhalli, A.; Xu, Y. Studying Users’ Computer Security Behavior: A Health Belief Perspective. Decis. Support Syst. 2009, 46, 815–825. [Google Scholar] [CrossRef]
  56. Vishwanath, A.; Neo, L.S.; Goh, P.; Lee, S.; Khader, M.; Ong, G.; Chin, J. Cyber Hygiene: The Concept, Its Measure, and Its Initial Tests. Decis. Support Syst. 2020, 128, 113160. [Google Scholar] [CrossRef]
  57. Diro, A.A.; Chilamkurti, N. Distributed Attack Detection Scheme Using Deep Learning Approach for Internet of Things. Future Gener. Comput. Syst. 2018, 82, 761–768. [Google Scholar] [CrossRef]
  58. Zeadally, S.; Das, A.K.; Sklavos, N. Cryptographic Technologies and Protocol Standards for Internet of Things. Internet Things 2021, 14, 100075. [Google Scholar] [CrossRef]
  59. Perez, A.J.; Zeadally, S. Privacy Issues and Solutions for Consumer Wearables. IT Prof. 2018, 20, 46–56. [Google Scholar] [CrossRef]
  60. Moqurrab, S.A.; Anjum, A.; Tariq, N.; Srivastava, G. Instant_Anonymity: A Lightweight Semantic Privacy Guarantee for 5G-Enabled IIoT. IEEE Trans. Ind. Inform. 2023, 19, 951–959. [Google Scholar] [CrossRef]
  61. Jun, Y.; Craig, A.; Shafik, W.; Sharif, L. Artificial Intelligence Application in Cybersecurity and Cyberdefense. Wirel. Commun. Mob. Comput. 2021, 2021, 3329581. [Google Scholar] [CrossRef]
  62. Dang, T.K.; Pham, C.D.; Nguyen, T.L. A Pragmatic Elliptic Curve Cryptography-Based Extension for Energy-Efficient Device-to-Device Communications in Smart Cities. Sustain. Cities Soc. 2020, 56, 102097. [Google Scholar] [CrossRef]
  63. Solove, D.J. A Taxonomy of Privacy. Univ. Pa. Law Rev. 2006, 154, 477–564. [Google Scholar] [CrossRef]
  64. Dinev, T.; Hart, P. An Extended Privacy Calculus Model for E-Commerce Transactions. Inf. Syst. Res. 2006, 17, 61–80. [Google Scholar] [CrossRef]
  65. Jorgensen, Z.; Chen, J.; Gates, C.S.; Li, N.; Proctor, R.W.; Yu, T. Dimensions of Risk in Mobile Applications: A User Study. In Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, San Antonio, TX, USA, 2–4 March 2015; pp. 49–60. [Google Scholar]
  66. Siponen, M.; Vance, A. Neutralization: New Insights into the Problem of Employee Information Systems Security Policy Violations. MIS Q. 2010, 34, 487–502. [Google Scholar] [CrossRef]
  67. NIST. Automation Support for Security Control Assessments. 2017. Available online: https://nvlpubs.nist.gov/nistpubs/ir/2017/NIST.IR.8011-1.pdf (accessed on 1 October 2024).
  68. Tariq, N.; Asim, M.; Al-Obeidat, F.; Zubair Farooqi, M.; Baker, T.; Hammoudeh, M.; Ghafir, I. The Security of Big Data in Fog-Enabled IoT Applications Including Blockchain: A Survey. Sensors 2019, 19, 1788. [Google Scholar] [CrossRef]
  69. Lee, L. Internet of Things (IoT) Cybersecurity: Literature Review and IoT Cyber Risk Management. Future Internet 2020, 12, 157. [Google Scholar] [CrossRef]
  70. Jeremiah, P.; Samy, G.N.; Shanmugam, B.; Ponkoodalingam, K.; Perumal, S. Potential Measures to Enhance Information Security Compliance in the Healthcare Internet of Things. In Proceedings of the 3rd International Conference of Reliable Information and Communication Technology, Kuala Lumpur, Malaysia, 23–24 June 2018. [Google Scholar]
  71. Haney, J.M.; Furman, S.M.; Acar, Y. Smart Home Security and Privacy Mitigations: Consumer Perceptions, Practices, and Challenges. In HCI for Cybersecurity, Privacy and Trust; HCII 2020; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
  72. FBI. Cyber Actors Use Internet of Things Devices as Proxies for Anonymity and Pursuit of Malicious Cyber Activities. 2018. Available online: https://www.ic3.gov/PSA/2018/PSA180802 (accessed on 1 October 2024).
  73. FBI. Tech Tuesday: Internet of Things (IoT). 2019. Available online: https://www.fbi.gov/contact-us/field-offices/portland/news/press-releases/tech-tuesday-internet-of-things-iot (accessed on 1 October 2024).
  74. Chau, P.Y.K. An Empirical Assessment of a Modified Technology Acceptance Model. J. Manag. Inf. Syst. 1996, 13, 185–204. [Google Scholar] [CrossRef]
  75. Venkatesh, V.; Thong, J.Y.; Xu, X. Consumer Acceptance and Use of Information Technology: Extending the Unified Theory of Acceptance and Use of Technology. MIS Q. 2012, 36, 157–178. [Google Scholar] [CrossRef]
  76. Rogers, R.W. A Protection Motivation Theory of Fear Appeals and Attitude Change. J. Psychol. 1975, 91, 93–114. [Google Scholar] [CrossRef] [PubMed]
  77. Yoon, C.; Hwang, J.W.; Kim, R. Exploring Factors That Influence Students’ Behaviors in Information Security. J. Inf. Syst. Educ. 2012, 23, 407–416. [Google Scholar]
  78. CNSS. Committee on National Security Systems (CNSS) Glossary. 2022. Available online: https://www.cnss.gov/CNSS/openDoc.cfm?a=r8l8KlRtJBLFYlVB%2BpcmYQ%3D%3D&b=AF4667DC57D94898ABC34379649234E4CD698C669EC3FC4CBFB8B31BBB7CC38A753A208BDF8BBF08BE408C8A1A502E53 (accessed on 1 October 2024).
  79. Kő, A.; Tarján, G.; Mitev, A. Information Security Awareness Maturity: Conceptual and Practical Aspects in Hungarian Organizations. Inf. Technol. People 2023, 36, 174–195. [Google Scholar] [CrossRef]
  80. Alshammari, M.M.; Al-Mamary, Y.H. Bridging Policy and Practice: Integrated Model for Investigating Behavioral Influences on Information Security Policy Compliance. Systems 2025, 13, 630. [Google Scholar] [CrossRef]
  81. Dinger, M.; Wade, J.T.; Dinger, S.; Carter, M.; Thatcher, J.B. Affect and information technology use: The impact of state affect on cognitions and IT use. Internet Res. 2024, 34, 265–293. [Google Scholar] [CrossRef]
  82. Moqbel, M.; Hewitt, B.; Nah, F.F.-H.; McLean, R.M. Sustaining patient portal continuous use intention and enhancing deep structure usage: Cognitive dissonance effects of health professional encouragement and security concerns. Inf. Syst. Front. 2021, 24, 1483–1496. [Google Scholar] [CrossRef]
  83. Vardakis, G.; Hatzivasilis, G.; Koutsaki, E.; Papadakis, N. Review of smart-home security using the internet of things. Electronics 2024, 13, 3343. [Google Scholar] [CrossRef]
  84. Schiller, T.; Caulkins, B.; Wu, A.S.; Mondesire, S. Security Awareness in Smart Homes and Internet of Things Networks through Swarm-Based Cybersecurity Penetration Testing. Information 2023, 14, 536. [Google Scholar] [CrossRef]
  85. Vardakis, G.; Tsamis, G.; Koutsaki, E.; Haridimos, K.; Papadakis, N. Smart home: Deep learning as a method for machine learning in recognition of face, silhouette and human activity in the service of a safe home. Electronics 2022, 11, 1622. [Google Scholar] [CrossRef]
  86. Slama, S.B.; Mahmoud, M. A deep learning model for intelligent home energy management system using renewable energy. Eng. Appl. Artif. Intell. 2023, 123, 106388. [Google Scholar] [CrossRef]
  87. Popova, Y.; Zagulova, D. UTAUT Model for Smart City Concept Implementation: Use of Web Applications by Residents for Everyday Operations. Informatics 2022, 9, 27. [Google Scholar] [CrossRef]
  88. Kline, R.B. Principles and Practice of Structural Equation Modeling; Guilford: New York, NY, USA, 2011. [Google Scholar]
  89. Hair, J.F.; Ringle, C.M.; Sarstedt, M. PLS-SEM: Indeed a Silver Bullet. J. Mark. Theory Pract. 2011, 19, 139–152. [Google Scholar] [CrossRef]
  90. Kock, N.; Hadaya, P. Minimum Sample Size Estimation in PLS-SEM: The Inverse Square Root and Gamma-Exponential Methods. Inf. Syst. J. 2016, 28, 227–261. [Google Scholar] [CrossRef]
  91. Al-Mamary, Y.H.; Alfalah, A.A.; Shamsuddin, A.; Abubakar, A.A. Artificial intelligence powering education: ChatGPT’s impact on students’ academic performance through the lens of technology-to-performance chain theory. J. Appl. Res. High. Educ. 2024. ahead-of-print. [Google Scholar] [CrossRef]
  92. Lu, B.; Fan, W.; Zhou, M. Social Presence, Trust, and Social Commerce Purchase Intention: An Empirical Research. Comput. Hum. Behav. 2016, 55, 225–237. [Google Scholar] [CrossRef]
  93. Kuen, L.; Westmattelmann, D.; Bruckes, M.; Schewe, G. Who Earns Trust in Online Environments? A Meta-Analysis of Trust in Technology and Trust in Provider for Technology Acceptance. Electron. Mark. 2023, 33, 61. [Google Scholar] [CrossRef]
  94. Grassegger, T.; Nedbal, D. The Role of Employees’ Information Security Awareness on the Intention to Resist Social Engineering. Procedia Comput. Sci. 2021, 181, 59–66. [Google Scholar] [CrossRef]
  95. Hwang, J.; Choe, J.Y.; Kim, H.M.; Kim, J.J. Human Baristas and Robot Baristas: How Does Brand Experience Affect Brand Satisfaction, Brand Attitude, Brand Attachment, and Brand Loyalty? Int. J. Hosp. Manag. 2021, 99, 103050. [Google Scholar] [CrossRef]
Figure 1. Trust in Technology Model (TTM). Source: McKnight et al. [47].
Figure 2. Research Model.
Figure 3. PLS-SEM Measurement Results.
Figure 4. Path Coefficient Results.
Table 1. Summary of the Main Variables of the Proposed Conceptual Model.

Variables | Focus | Basis of Trust
Propensity to Trust Technology | General tendency to trust technology | An individual’s natural tendency to trust
Institution-Based Trust | Laws, regulations, or organization policy | External factors (law and policy)
Trust in a Specific Technology | Confidence in the performance of a specific technology | Direct experiences
Cybersecurity Awareness | Understanding of security risks | Specifically shapes trust in the security aspects (e.g., data protection) of a technology, rather than trust built over time or through external factors such as training or certifications
Table 2. Measurement Items.

Construct | Items | Sources
Propensity to Trust Technology
  • I believe that most AI-powered smart home technologies are effective in improving daily living.
  • A majority of smart home devices I use demonstrate excellent performance.
  • Most AI-powered smart home technologies include the necessary features for their intended functions.
  • I think these technologies generally enable me to manage my home efficiently and securely.
  • I typically trust new smart home technologies until they show me reasons not to trust them.
McKnight et al. [47]
Institution-Based Trust in Technology
  • I feel confident using AI-powered smart home devices when they are supported by well-established companies.
  • My trust in smart home devices increases when they are protected by strong security measures and privacy regulations.
  • Product guarantees make it feel all right to use these technologies.
  • I trust that the legal protections in place will ensure the secure use of smart home devices.
  • Vendor guarantees and data privacy laws give me confidence when using AI-powered devices in my home.
McKnight et al. [47]
Trust in Specific Technology (AI-Powered Smart Home Devices)
  • AI-powered smart home devices are reliable and meet my expectations consistently.
  • I trust that AI-powered smart home devices do not fail me in critical tasks.
  • AI-powered smart home devices are dependable in their performance.
  • These smart home devices have the necessary functionality to meet my daily needs.
  • I feel confident that AI-powered smart home devices operate without frequent malfunctions or disruptions.
McKnight et al. [47]
Security Awareness
  • I regularly read cybersecurity bulletins or newsletters related to AI-powered smart home devices.
  • I am concerned about potential security incidents with smart home devices and take preventive actions to enhance device security.
  • I actively seek information about cybersecurity measures for protecting my smart home devices.
  • I am consistently mindful of security practices when using and managing AI-powered smart home devices.
Ng et al. [55]
Intention to Use AI-Powered Smart Home Devices
  • I intend to continue using AI-powered smart home devices to enhance my home’s security in the future.
  • I will always make an effort to use AI-powered smart home devices as part of my daily home security practices.
  • I plan to frequently use AI-powered smart home devices to monitor and secure my home.
  • I intend to continually upgrade my smart home devices to utilize the latest advancements in AI technology.
  • I am motivated to experiment with additional functionalities to enhance my home automation.
McKnight et al. [47] & Moqbel et al. [82]
Deep Structure Use of AI-Powered Smart Home Devices
  • When I use my AI-powered smart home device, I utilize features that help me monitor my home’s security status in real-time.
  • I use features on my AI-powered smart home device that allow me to set alerts or notifications for unusual activities.
  • I take advantage of advanced features on my smart home devices to improve overall home security.
  • I use my smart home technologies to monitor and analyze security-related data for better protection.
  • I rely on the AI functionalities of my devices to test and adjust settings to improve home security and prevent unauthorized access.
McKnight et al. [47] & Moqbel et al. [82]
Table 4. Measurement Model Validation Summary.

Construct | Item | Loading | Cronbach’s Alpha (>=0.7) | Composite Reliability (>=0.7) | Average Variance Extracted (>=0.5) | Inner VIF
Propensity to Trust Technology (PTT) | PTT1 | 0.840 | 0.856 | 0.856 | 0.698 | 1.000
 | PTT2 | 0.817
 | PTT3 | 0.856
 | PTT4 | 0.828
 | PTT5 | Deleted
Institution-Based Trust (IBT) | IBT1 | 0.851 | 0.901 | 0.902 | 0.717 | 1.000
 | IBT2 | 0.842
 | IBT3 | 0.837
 | IBT4 | 0.850
 | IBT5 | 0.853
Trust in Specific Technology (TST) | TST1 | 0.837 | 0.900 | 0.909 | 0.715 | 1.544
 | TST2 | 0.856
 | TST3 | 0.869
 | TST4 | 0.866
 | TST5 | 0.796
Cybersecurity Awareness (CA) | CA1 | 0.833 | 0.880 | 0.894 | 0.733 | 1.544
 | CA2 | 0.843
 | CA3 | 0.883
 | CA4 | 0.865
Intention to Use (ITU) | ITU1 | 0.890 | 0.921 | 0.922 | 0.760 | 1.000
 | ITU2 | 0.849
 | ITU3 | 0.882
 | ITU4 | 0.880
 | ITU5 | 0.857
Deep Structure Use (DSU) | DSU1 | 0.897 | 0.921 | 0.926 | 0.761 | —
 | DSU2 | 0.877
 | DSU3 | 0.904
 | DSU4 | 0.818
 | DSU5 | 0.863
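As a quick plausibility check on Table 4, note that the AVE of a reflective construct is simply the mean of its squared standardized loadings. The short Python sketch below is illustrative only (it is not part of the original SmartPLS analysis); the loadings are transcribed from the PTT rows above, and it reproduces the reported AVE of 0.698:

```python
# Illustrative AVE check: AVE = mean of squared standardized indicator loadings.
# Loadings transcribed from the PTT rows of Table 4 (PTT5 was deleted).
ptt_loadings = [0.840, 0.817, 0.856, 0.828]

ave = sum(loading ** 2 for loading in ptt_loadings) / len(ptt_loadings)
print(f"AVE(PTT) = {ave:.3f}")  # prints AVE(PTT) = 0.698, matching Table 4
```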
Table 5. Discriminant Validity Assessment Using the Fornell–Larcker Criterion.

 | PTT | IBT | TST | CA | ITU | DSU
PTT | 0.835
IBT | 0.834 | 0.847
TST | 0.730 | 0.713 | 0.845
CA | 0.673 | 0.659 | 0.593 | 0.856
ITU | 0.788 | 0.783 | 0.704 | 0.674 | 0.872
DSU | 0.740 | 0.753 | 0.654 | 0.684 | 0.834 | 0.872
The diagonal values represent the square root of the AVE.
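The Fornell–Larcker criterion reported in Table 5 can be verified mechanically: each diagonal element (the square root of a construct’s AVE) must exceed every correlation in its row and column. A minimal standalone sketch (Python with NumPy; not part of the original analysis) using the values transcribed from Table 5:

```python
import numpy as np

# Lower triangle of Table 5; the diagonal holds sqrt(AVE) for each construct.
labels = ["PTT", "IBT", "TST", "CA", "ITU", "DSU"]
m = np.array([
    [0.835, 0.000, 0.000, 0.000, 0.000, 0.000],
    [0.834, 0.847, 0.000, 0.000, 0.000, 0.000],
    [0.730, 0.713, 0.845, 0.000, 0.000, 0.000],
    [0.673, 0.659, 0.593, 0.856, 0.000, 0.000],
    [0.788, 0.783, 0.704, 0.674, 0.872, 0.000],
    [0.740, 0.753, 0.654, 0.684, 0.834, 0.872],
])
corr = m + m.T - np.diag(np.diag(m))  # mirror into a full symmetric matrix

for i, name in enumerate(labels):
    others = np.delete(corr[i], i)  # all correlations involving this construct
    print(f"{name}: sqrt(AVE) = {corr[i, i]:.3f}, "
          f"max correlation = {others.max():.3f}, "
          f"criterion met = {corr[i, i] > others.max()}")
```

Running this confirms that the criterion holds for every construct, although only narrowly for PTT (0.835 against its 0.834 correlation with IBT).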
Table 6. Path Coefficients and Hypothesis Testing Results.

Hypothesis | Relationship | Path Coefficient | Sample Mean (M) | Standard Deviation (STDEV) | T Statistics (|O/STDEV|) | p Values | Decision
H1 | PTT → IBT | 0.834 | 0.834 | 0.029 | 29.196 | 0.000 | Supported
H2 | IBT → TST | 0.713 | 0.713 | 0.039 | 18.327 | 0.000 | Supported
H3 | TST → ITU | 0.469 | 0.468 | 0.062 | 7.511 | 0.000 | Supported
H4 | CA → ITU | 0.396 | 0.398 | 0.060 | 6.602 | 0.000 | Supported
H5 | ITU → DSU | 0.834 | 0.834 | 0.027 | 30.851 | 0.000 | Supported
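For reference, the T statistics in Table 6 are the usual bootstrap ratios: each is the original path estimate divided by the bootstrap standard deviation,

\[ t = \frac{|O|}{\mathrm{STDEV}}, \qquad \text{e.g., for H1: } t \approx \frac{0.834}{0.029} \approx 28.8, \]

with the small discrepancy against the tabulated 29.196 arising because the coefficient and standard deviation shown here are rounded to three decimals, whereas the software computes the ratio from unrounded values.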
Table 7. Overview of R2 and R2 Adjusted Values.

Construct | R2 | R2 Adjusted
DSU | 0.696 | 0.695
ITU | 0.597 | 0.593
TST | 0.509 | 0.507
IBT | 0.695 | 0.694
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
