Sensors
  • Article
  • Open Access

7 April 2023

A Dynamic Trust-Related Attack Detection Model for IoT Devices and Services Based on the Deep Long Short-Term Memory Technique

1 Department of Information Technology, College of Computer, Qassim University, Qassim 51452, Saudi Arabia
2 Faculty of Engineering and Information Technology, Taiz University, Taiz 6803, Yemen
* Author to whom correspondence should be addressed.
This article belongs to the Section Internet of Things

Abstract

The integration of cloud and Internet of Things (IoT) technology has resulted in a significant rise in futuristic technology that ensures the long-term development of IoT applications, such as intelligent transportation, smart cities, smart healthcare, and other applications. The explosive growth of these technologies has contributed to a significant rise in threats with catastrophic and severe consequences. These consequences affect IoT adoption for both users and industry owners. Trust-related attacks are the weapon of choice for malicious purposes in the IoT context, either by leveraging established vulnerabilities to pose as trusted devices or by exploiting specific features of emerging technologies (i.e., heterogeneity, dynamic nature, and the large number of linked objects). Consequently, developing more efficient trust management techniques for IoT services has become urgent in this community. Trust management is regarded as a viable solution for IoT trust problems. Such solutions have been used in recent years to improve security, aid decision-making processes, detect suspicious behavior, isolate suspicious objects, and redirect functionality to trusted zones. However, these solutions remain ineffective when dealing with large amounts of data and constantly changing behaviors. As a result, this paper proposes a dynamic trust-related attack detection model for IoT devices and services based on the deep long short-term memory (LSTM) technique. The proposed model aims to identify untrusted entities in IoT services and isolate untrusted devices. The effectiveness of the proposed model was evaluated using data samples of different sizes. The experimental results showed that, in the normal situation without trust-related attacks, the proposed model achieved 99.87% accuracy and a 99.76% F-measure. Furthermore, the model effectively detected trust-related attacks, achieving 99.28% accuracy and a 99.28% F-measure.

1. Introduction

The information technology (IT) industry has recently seen rapid growth as it has become an integral part of our daily lives. The Internet of Things (IoT) is a modern technology used in many aspects of life, including agriculture, education, water management, home security, smart grids, and others. As a result, more and more objects are becoming connected daily [1,2,3]. According to [4], the number of objects connected to the IoT reached 50 billion in 2020, and this is expected to increase three-fold by 2025, as shown in Figure 1.
Figure 1. Growth in IoT devices.
The IoT infrastructure supports several cutting-edge services (IoT services), where many heterogeneous objects collaborate to achieve a shared goal. IoT services have received increasing attention in recent years across various industries. The characteristics of the IoT, such as the diversity of shared data, dynamicity, and device heterogeneity, present entirely new challenges to IoT services and devices. These challenges are addressed mainly by focusing on general security issues rather than assessing the subjective risks associated with IoT entities and service fields [5]. Furthermore, it might result in catastrophic harm and unknown dangers if the information were used for malicious purposes. The trust principle in the IoT can be viewed as a critical feature for establishing trustworthy and reliable service provisions between different objects [6]. As a result, trust is one of the essential requirements for achieving security.
Trust is an amorphous concept with varying definitions depending on both the participants and the situation, and it is influenced by measurable and non-measurable variables [6]. This demonstrates that trust is a very complex concept. Other factors contributing to trust include an object’s abilities, strength, reliability, goodness, availability, and other characteristics [7]. As a result, trust management is more complex than security itself, particularly in emerging information technology fields such as the IoT [8].
Trust management is regarded as a viable solution for IoT trust problems. Such solutions have improved security, aided decision-making processes, detected suspicious behavior, isolated suspicious objects, and redirected functionality to trusted zones [9]. Researchers have devised several strategies to address trust concerns, such as those in [10,11,12,13]. These solutions, however, are still unable to fully address trust issues and face several difficulties, including inefficiency in handling large amounts of data and constantly changing behaviors, difficulty in quantifying uncertainty for untrusted behaviors and selecting the best trust model components, and dealing with the heterogeneity and dynamic nature of the IoT while concentrating on a single and unique attack.
This paper proposes a dynamic trust-related attack detection model for IoT devices and services based on the LSTM technique to address the concerns above. This model is capable of detecting untrustworthy behaviors and taking appropriate action. The primary contributions of this paper are (1) creating an intelligent solution based on the LSTM technique that combats the continuing change in behavior and is compatible with big data, and (2) evaluating the model under different conditions (three scenarios of trust-related attacks and different sizes of datasets).
The remainder of this paper is structured as follows: Section 2 presents the background on the concept of trust and its related attacks. Section 3 investigates existing related works to highlight their shortcomings that need to be considered. Section 4 describes the proposed model and the underlying technique used. Section 5 reports on the experimental investigation and model evaluation. Section 6 concludes the paper and suggests some future research directions.

2. Trust Concept and Trust-Related Attacks

Trust has been extensively studied in the fields of economics, social sciences, cyberspace, and philosophy [9]. The concept of trust is complex and remains unclear because it depends on the participant’s context and view. For instance, the study in [14] defined trust as the willingness, within an interdependent relationship between the trustor and the trustee, to rely on the trustee to satisfy the obligations and expectations promised in a particular context, irrespective of the capacity to monitor or control the trustee.
In addition, the study conducted in [15] defined trust in the context of the IoT in terms of entity trust, device trust, and data trust, among others, where entity trust refers to participants’ expected behavior with respect to aspects such as individual preferences or services. Device trust can also be used for trusted computing and for developing computational trust. Moreover, trusted data may be extracted via aggregation from untrusted sources or generated from IoT services where trust assessments are necessary [16].
The study conducted in [17] defined trust as a service mechanism that automatically navigates through a wide array of barriers to enable proper decision making based on trust between both parties.
The authors of [18] defined trust as the quality of data generated from IoT systems that flow between sensors and devices.
Another study defined trust as the edge that connects the technological ecosystem with intelligent objects. This definition is based on IoT devices’ capacity to conduct various measurements in smart environments, such as humidity, temperature, and pressure measurements and fire detection, to enable administrator decision making and instant reactions. The suggestions in this study emphasize the need to trust the involved device(s) to properly assess and highlight the relationship between individuals by leveraging trustee feedback to develop appropriate reactionary measures [5].
Based on the previous studies, trust encompasses multiple concepts, such as confidence, expectation, dependency, reliability, comfort, vulnerability, context specificity, attitude to risk, utility, and lack of control. This breadth has contributed to the lack of a clear definition of trust. Nonetheless, the main objectives of trust management are to strengthen security, enable decision-making processes, detect untrusted behavior, isolate untrusted entities, and redirect IoT functionality to trustworthy zones.
The explosive development of IoT devices has contributed to a significant rise in threats, since IoT environments integrate many heterogeneous sensors to provide various wide-ranging intelligent services. These sensors may be inaccurate or may carry out trust-related attacks. Moreover, some can enable their allies to collaboratively target a specific service provider to undermine its credibility and improve their allies’ reputations [19]. The most common forms of trust-related attacks are good-mouthing, ballot-stuffing, on–off, bad-mouthing, and discrimination [20]. Table 1 summarizes these attacks.
Table 1. Trust-related attacks.
This paper covers two types of trust-related attacks (bad-mouthing and good-mouthing) as part of the experimental evaluation.

4. The Proposed Model

The proposed model is divided into four main stages: data collection, data preparation, trust prediction, and evaluation, as illustrated in Figure 2.
Figure 2. The proposed model.
  • Stage 1. Data Collection. The packet captures and patches provided in [38] were used in this investigation. The data were gathered from smart home activities, covering IP (such as Ethernet, Wi-Fi, and PPP), Bluetooth, Z-Wave, RF869, and ZigBee protocols, over 10 days. Table 3 and Table 4 show details of the devices and the number of captures and patches.
  • Stage 2. Data Preparation. In this stage, three sub-stages are present: feature engineering, normalization, and data cleaning.
Table 3. Device deployment locations [38].
Device Type | Protocol | Placement
Motion sensor | Zigbee | Living room
Motion sensor | Zigbee | Kitchen
Motion sensor | Zigbee | Bathroom
Motion sensor | Zigbee | Bedroom
Door sensor | Zigbee | Entrance door
Door sensor | Zigbee | Dishwasher
Weight scale | Bluetooth | Nearby the gateway
Blood pressure meter | Bluetooth | Nearby the gateway
Gateway | Bluetooth | Office
Gateway | Zigbee | Office
Table 4. Number of packets and patches for each protocol [38].
Protocol | Packet Captures | Patches
Zigbee | 73,876 | 27,385
Bluetooth | 541,544 | 22,202
(a) Feature Engineering
The primary goal of feature engineering is to create new features from existing data [39]. This sub-stage creates new features (e.g., packet loss, delay, and throughput), as achieved in [35].
- Packet loss refers to a packet’s inability to reach its intended target.
- Delay is the lag time that results from transmission from one point to another.
- Throughput refers to the real bandwidth measured for the purpose of moving files of a certain size at a specific time and under a specified set of network conditions.
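As an illustration only, these three features could be derived from a per-packet log along the following lines; the column names and toy values here are assumptions for demonstration, not the actual schema of the dataset in [38]:

```python
import pandas as pd

# Toy per-packet log; column names and values are illustrative assumptions.
packets = pd.DataFrame({
    "device":   ["motion_1"] * 4,
    "sent":     [1, 1, 1, 1],                 # packets transmitted
    "received": [1, 0, 1, 1],                 # 1 = packet reached its target
    "t_sent":   [0.00, 1.00, 2.00, 3.00],     # transmission time (s)
    "t_recv":   [0.05, None, 2.04, 3.06],     # arrival time (s); None = lost
    "bytes":    [512, 512, 1024, 1024],
})

# Per-packet transmission lag and delivered payload.
packets["lag"] = packets["t_recv"] - packets["t_sent"]
packets["delivered_bytes"] = packets["bytes"].where(packets["received"] == 1, 0)

g = packets.groupby("device")
features = pd.DataFrame({
    # Packet loss: fraction of packets that never reached the target.
    "packet_loss": 1 - g["received"].sum() / g["sent"].sum(),
    # Delay: mean lag of the packets that were delivered.
    "delay": g["lag"].mean(),
    # Throughput: delivered bytes per second of the observation window.
    "throughput": g["delivered_bytes"].sum()
                  / (g["t_sent"].max() - g["t_sent"].min()),
})
print(features)
```

In this toy log, one of four packets is lost (packet loss 0.25), and the throughput counts only the bytes that actually arrived.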
(b) Normalization
To produce an accurate result, the data are scaled to values ranging from 0 to 1. This step is required to transform the dataset’s numeric column values so they can be used on a common scale without distorting the variation in value ranges or losing data [40]. Normalization is performed using Equation (1):
z_i = (x_i − min(x)) / (max(x) − min(x))   (1)
where x_i is the dataset’s ith value, min(x) is the dataset’s minimum value, and max(x) is the dataset’s maximum value.
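Equation (1) is standard min-max scaling; a minimal sketch:

```python
import numpy as np

def min_max_normalize(x):
    """Min-max scaling per Equation (1): z_i = (x_i - min(x)) / (max(x) - min(x))."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

delays = [0.02, 0.05, 0.11, 0.08]
print(min_max_normalize(delays))  # smallest value maps to 0.0, largest to 1.0
```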
(c) Data Cleaning
This sub-stage cleans the data by ensuring the validity of dataset samples, such as by removing null and negative values from records.
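A minimal sketch of such a cleaning pass, assuming the prepared data sit in a pandas DataFrame; the column names and values are illustrative, not from the dataset in [38]:

```python
import pandas as pd

# Toy records containing the kinds of defects removed in this sub-stage.
df = pd.DataFrame({
    "delay":      [0.05, None, 0.04, -1.0],   # None = missing, -1.0 = invalid
    "throughput": [850.0, 900.0, None, 870.0],
})

clean = df.dropna()                      # drop records with null values
clean = clean[(clean >= 0).all(axis=1)]  # drop records with negative values
print(clean)
```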
  • Stage 3. Trust Prediction. This stage uses the LSTM technique, which has recently piqued the scientific community’s interest. LSTM has produced remarkable results when used to solve complex problems such as language translation, text generation, and automatic image captioning, among other applications [41]. The method has also been extensively used in recent years to address security-related concerns, as seen in [42,43]. This paper uses LSTM to identify suspicious actions that may indicate trust problems. The LSTM consists of three gates, as in [44].
(a) Forget gate
In the first step of the LSTM, the sigmoid unit of the forget gate determines which information must be deleted from the LSTM memory. A value of 0 indicates that the item is completely discarded, whereas a value of 1 indicates that it is entirely retained [41]. The value is calculated using Equation (2):
f_t = σ(W_f · [h_(t−1), x_t] + b_f)   (2)
where σ is the sigmoid function, W_f is the forget gate’s weight matrix, h_(t−1) is the previous hidden state, x_t is the current input, and b_f is the bias term.
(b) Input gate
The second step of the LSTM is to use the input gate to determine what information to store in the LSTM memory based on the cell state [41]. The values of the input gate are calculated using Equations (3)–(5):
i_t = σ(W_i · [h_(t−1), x_t] + b_i)   (3)
c̄_t = tanh(W_c · [h_(t−1), x_t] + b_c)   (4)
C_t = f_t ∗ C_(t−1) + i_t ∗ c̄_t   (5)
where i_t specifies whether or not the value should be updated, c̄_t is the vector of new candidate values, C_t is the updated cell state, and f_t is the forget gate output, with a value between 0 and 1, where 0 indicates complete removal and 1 indicates complete retention.
(c) Output gate
Following the cell state update, it is necessary to determine which parts of the cell state are output, depending on the inputs h_(t−1) and x_t [41]. Equations (6) and (7) are used in this gate’s computation:
O_t = σ(W_o · [h_(t−1), x_t] + b_o)   (6)
h_t = O_t ∗ tanh(C_t)   (7)
where O_t denotes the output gate value and h_t denotes the new hidden state, whose values lie between −1 and 1. This step decides which portion of the cell state will be passed to the next cell or time step (Figure 3).
Figure 3. LSTM single cell.
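To make the gate equations concrete, a single LSTM step per Equations (2)–(7) can be hand-rolled as below; the weights are random placeholders, not trained parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x_t, h_prev, C_prev, W, b):
    """One LSTM step following Equations (2)-(7)."""
    concat = np.concatenate([h_prev, x_t])      # [h_(t-1), x_t]
    f_t = sigmoid(W["f"] @ concat + b["f"])     # Eq. (2): forget gate
    i_t = sigmoid(W["i"] @ concat + b["i"])     # Eq. (3): input gate
    c_bar = np.tanh(W["c"] @ concat + b["c"])   # Eq. (4): candidate values
    C_t = f_t * C_prev + i_t * c_bar            # Eq. (5): updated cell state
    o_t = sigmoid(W["o"] @ concat + b["o"])     # Eq. (6): output gate
    h_t = o_t * np.tanh(C_t)                    # Eq. (7): new hidden state
    return h_t, C_t

# Random placeholder weights: 3 inputs, 4 hidden units.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = {k: rng.standard_normal((n_hid, n_hid + n_in)) for k in "fico"}
b = {k: np.zeros(n_hid) for k in "fico"}

h_t, C_t = lstm_cell(rng.standard_normal(n_in), np.zeros(n_hid), np.zeros(n_hid), W, b)
print(h_t.shape, C_t.shape)  # (4,) (4,)
```

Because h_t = O_t ∗ tanh(C_t), each element of the hidden state stays strictly between −1 and 1, as the text notes.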
  • Stage 4. Evaluation. In this stage, the proposed model is evaluated using various performance measures routinely used in the literature. Two scenarios are evaluated, namely the proposed models without and with the existence of trust-related attacks.
Five metrics are used to measure the proposed model’s performance: accuracy, loss rate, precision, recall, and F-measure.
Accuracy is the level of agreement between predicted and actual values. It is one of the most widely used classification performance measures, defined as the proportion of correctly classified samples to all samples [45], and is computed using Equation (8).
Accuracy = (TP + TN) / (TP + TN + FP + FN)   (8)
The loss rate is a function that quantifies the difference between the actual and predicted outputs during training to facilitate learning. Additionally, it helps to reduce error and evaluate model performance [42]. The loss rate is calculated using Equation (9).
Loss = −[Y · log(Y_pred) + (1 − Y) · log(1 − Y_pred)]   (9)
Precision describes a classification model’s capacity to select data points from a particular class. It is determined by dividing the quantity of correctly recovered samples by the total quantity of samples retrieved [46] and is given in Equation (10).
Precision = TP / (TP + FP)   (10)
Recall is a classification model’s ability to identify each data point in a relevant class. It computes the ratio of the number of actual samples successfully retrieved to the total number of correct samples [47] and is defined in Equation (11).
Recall = TP / (TP + FN)   (11)
Another measure, known as the F-Measure, which reflects the behavior of two measures, is obtained from precision and recall [38]. It is computed in Equation (12).
F-Measure = 2 · (Precision · Recall) / (Precision + Recall)   (12)
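Equations (8) and (10)–(12) follow directly from the confusion-matrix counts, as this small sketch shows (the counts are hypothetical):

```python
def classification_metrics(tp, tn, fp, fn):
    """Equations (8) and (10)-(12) computed from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)                  # Eq. (8)
    precision = tp / (tp + fp)                                  # Eq. (10)
    recall = tp / (tp + fn)                                     # Eq. (11)
    f_measure = 2 * precision * recall / (precision + recall)   # Eq. (12)
    return accuracy, precision, recall, f_measure

# Hypothetical counts: 95 TP, 90 TN, 5 FP, 10 FN.
acc, prec, rec, f1 = classification_metrics(tp=95, tn=90, fp=5, fn=10)
print(f"accuracy={acc:.4f} precision={prec:.4f} recall={rec:.4f} F-measure={f1:.4f}")
```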

Trust-Related Attack Modeling Scenarios

Trust-related attacks are those committed by an untrusted entity, including offering an inadequate service or providing the trustor with unfavorable recommendations about the trustees [22,30]. Accordingly, this section’s purpose is to describe three scenarios of trust-related attacks.
(a) First Scenario (Bad-mouthing Attack). This attack aims to harm the reputation of good nodes to decrease their chances of being chosen as service providers [22]. Therefore, in this scenario, the reputation of trusted devices is manipulated to make them appear untrusted. As illustrated in Figure 4, the trust value is manipulated by harming the devices’ reputation; consequently, the devices will be identified as untrusted, despite being trustworthy.
Figure 4. Sample of dataset before and after applying a bad-mouthing attack.
(b) Second Scenario (Good-mouthing Attack). This attack aims to improve the reputation of bad nodes to increase the probability of them being chosen as service providers [30]. Therefore, in this scenario, the reputation of untrusted devices is manipulated to make them appear trusted. As illustrated in Figure 5, the trust value is manipulated by enhancing the devices’ reputation; consequently, the devices will be identified as trusted, despite being untrustworthy.
Figure 5. Sample of dataset before and after applying a good-mouthing attack.
(c) Third Scenario (Combination of Scenario 1 and Scenario 2). This scenario applies both bad-mouthing and good-mouthing attacks at the same time to the same dataset, as illustrated in Figure 6.
Figure 6. Sample of dataset before and after applying good-mouthing and bad-mouthing attacks.
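The three scenarios can be emulated on a toy trust table as sketched below; the field names and the simple reputation flip are illustrative assumptions, not the exact manipulation applied to the dataset in [38]:

```python
import pandas as pd

# Toy trust table; field names and values are illustrative assumptions.
df = pd.DataFrame({
    "device":     ["d1", "d2", "d3", "d4"],
    "reputation": [0.9, 0.8, 0.2, 0.1],
    "trusted":    [1, 1, 0, 0],
})

def bad_mouthing(records):
    """Scenario 1: harm trusted devices' reputation so they appear untrusted."""
    out = records.copy()
    mask = out["trusted"] == 1
    out.loc[mask, "reputation"] = 1 - out.loc[mask, "reputation"]
    out.loc[mask, "trusted"] = 0
    return out

def good_mouthing(records):
    """Scenario 2: inflate untrusted devices' reputation so they appear trusted."""
    out = records.copy()
    mask = out["trusted"] == 0
    out.loc[mask, "reputation"] = 1 - out.loc[mask, "reputation"]
    out.loc[mask, "trusted"] = 1
    return out

def combined_attack(records):
    """Scenario 3: both attacks applied at once to the same dataset."""
    out = records.copy()
    out["reputation"] = 1 - out["reputation"]
    out["trusted"] = 1 - out["trusted"]
    return out

print(combined_attack(df))
```

The manipulated tables then serve as the attack datasets against which the detection model is evaluated.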

5. Experiments and Results

The experiment was conducted on Google CoLab with Python library packages such as Pandas, Numpy, Scikit-Learn, Matplotlib, and Keras [48]. Table 5 describes the details of the model setup.
Table 5. Model setup.
The dataset was split into training and test sets at a ratio of 70:30. To avoid overfitting and underfitting, the data were randomly split several times to verify that the test set represented unseen behaviors.
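A 70:30 split of this kind can be sketched with scikit-learn (listed among the packages above); the synthetic arrays below merely stand in for the prepared dataset:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the prepared dataset: 1000 samples, 5 features,
# binary trust labels. Shapes and names are illustrative only.
rng = np.random.default_rng(42)
X = rng.standard_normal((1000, 5))
y = rng.integers(0, 2, size=1000)

# 70:30 split with shuffling; stratifying on y keeps the class ratio
# similar in both sets, so the test set fairly represents unseen behavior.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=42, shuffle=True, stratify=y
)
print(len(X_train), len(X_test))  # 700 300
```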

5.1. Trust-Related Attack Modeling Scenarios

This subsection explores the results of the proposed model without and with trust-related attacks.

5.1.1. Results of the Model without Trust-Related Attacks

This section analyzes the model using dataset samples of various sizes (25%, 50%, and 100%) and iteration counts (50 and 100) to assess the model’s efficacy under different dataset conditions. After 100 iterations, the model reached 99.37% accuracy, a 0.018 loss rate, 100% recall, 99.93% precision, and a 99.65% F-measure in 420 s. Both 100 and 50 iterations provided comparable results for the 25% and 50% samples, with a minor difference in the detection rate. Figure 7, Figure 8 and Figure 9 depict the accuracy and loss rate for each iteration count with varying sample size, and Table 6 presents a detailed evaluation of the model.
Figure 7. Results of the 25% sample size: (a) loss at 50 iterations; (b) accuracy at 50 iterations; (c) loss at 100 iterations; and (d) accuracy at 100 iterations.
Figure 8. Results of the 50% sample size: (a) loss at 50 iterations; (b) accuracy at 50 iterations; (c) loss at 100 iterations; and (d) accuracy at 100 iterations.
Figure 9. Results of the 100% sample size: (a) loss at 50 iterations; (b) accuracy at 50 iterations; (c) loss at 100 iterations; and (d) accuracy at 100 iterations.
Table 6. Experimental results.
Table 6 indicates that the proposed model successfully diagnosed behavior deviation using LSTM cells, yielding positive outcomes for various data sample sizes. Changing the sample size in the experiment revealed the suggested model’s adaptability to different circumstances. The minor variation in findings is normal and indicative of the proposed model’s capacity to handle small or large data samples. The model’s performance is improved with a more extensive test sample size. In addition, expanding the number of iterations has minimal effects on accuracy, recall, precision, and F-measure, but significant effects on time and loss rate, indicating that the model is refined with each iteration.

5.1.2. Results of the Model with Trust-Related Attacks

In this section, the model is tested on two datasets: the normal dataset (the same dataset used in Section 5.1.1) and the dataset after applying trust-related attacks (bad-mouthing and good-mouthing), with different numbers of iterations (50 and 100) and dataset sizes (25%, 50%, and 100%), to demonstrate the effectiveness of the proposed model in identifying such attacks.
First Scenario Results
Table 7 indicates that the suggested model for a sample size of 100% yields positive results after 50 iterations, achieving 99.96% accuracy, a 0.0130 loss rate, 99.92% recall, 100% precision, and a 99.96% F-measure in 15 min. After 20 min and 100 iterations, the model achieved 99.98% accuracy, a 0.0723 loss rate, 99.98% recall, 100% precision, and a 99.98% F-measure. Similar findings were obtained with 100 and 50 iterations for sample sizes of 25% and 50%, with a minor change in the detection time. Figure 10, Figure 11 and Figure 12 depict the accuracy and the loss rate for each iteration count with different sample sizes for this scenario.
Table 7. Experimental results of the first scenario.
Figure 10. Results of the first scenario with 25% sample size: (a) loss at 50 iterations; (b) accuracy at 50 iterations; (c) loss at 100 iterations; and (d) accuracy at 100 iterations.
Figure 11. Results of the first scenario with 50% sample size: (a) loss at 50 iterations; (b) accuracy at 50 iterations; (c) loss at 100 iterations; and (d) accuracy at 100 iterations.
Figure 12. Results of the first scenario with 100% sample size: (a) loss at 50 iterations; (b) accuracy at 50 iterations; (c) loss at 100 iterations; and (d) accuracy at 100 iterations.
According to the results shown in Table 7, it is clear that the proposed model can detect the manipulations of the trust value caused by a bad-mouthing attack. Compared to the proposed model without an attack, the results demonstrate an increased detection time under attack: the detection time ranged from 5 min to 20 min for all dataset sizes used, whereas the model without an attack had a detection time ranging from less than 1 min to 2 min. In the presence of a bad-mouthing attack, detection effectiveness increased with the size of the data used: with increasing sample size, recall, precision, and the F-measure increased and the loss rate decreased.
Second Scenario Results
Table 8 shows that the proposed model for the total sample size exhibited better results in 50 iterations, as it achieved 99.96% accuracy, 0.0174 loss rate, 99.92% recall, 100% precision, and 99.96% F-measure in 18 min. After 100 iterations, the model shows some improvement, having achieved 99.98% accuracy, 0.0182 loss rate, 99.98% recall, 100% precision, and 99.98% F-measure in 20 min. Figure 13, Figure 14 and Figure 15 illustrate each iteration’s accuracy and loss rate with varying sample sizes.
Table 8. Experimental results of the second scenario.
Figure 13. Results of the second scenario with a 25% sample size: (a) loss at 50 iterations; (b) accuracy at 50 iterations; (c) loss at 100 iterations; and (d) accuracy at 100 iterations.
Figure 14. Results of the second scenario with 50% sample size: (a) loss at 50 iterations; (b) accuracy at 50 iterations; (c) loss at 100 iterations; and (d) accuracy at 100 iterations.
Figure 15. Results of the second scenario with 100% sample size: (a) loss at 50 iterations; (b) accuracy at 50 iterations; (c) loss at 100 iterations; and (d) accuracy at 100 iterations.
Based on the results reported in Table 8 and Figure 13, Figure 14 and Figure 15, it is evident that the LSTM method employed in the suggested model could spot changes in the trust value resulting from a good-mouthing attack. Compared to the model without an attack, the findings demonstrated a longer detection time in the presence of an attack: the model without an attack had a detection time of less than 1 min to 2 min, whereas the detection time for all dataset sizes used in this phase varied from 5 to 20 min. The efficacy of detecting a good-mouthing attack grew as the volume of data utilized increased. The loss rate declined as the sample size increased, whereas accuracy, recall, precision, and F-measure all increased.
Table 9. Experimental results of the third scenario.
Third Scenario Results
Table 9 shows that the proposed model for the total sample size exhibited better results at 50 iterations, achieving 99.33% accuracy, a 0.0182 loss rate, 98.67% recall, 100% precision, and a 99.96% F-measure in 18 min. After 100 iterations, the model achieved 99.28% accuracy, a 0.0182 loss rate, 98.59% recall, 100% precision, and a 99.29% F-measure in 20 min. Figure 16, Figure 17 and Figure 18 illustrate each iteration count’s accuracy and loss rate with varying sample size.
Figure 16. Results of the third scenario with 25% sample size: (a) loss at 50 iterations; (b) accuracy at 50 iterations; (c) loss at 100 iterations; and (d) accuracy at 100 iterations.
Figure 17. Results of the third scenario with 50% sample size: (a) loss at 50 iterations; (b) accuracy at 50 iterations; (c) loss at 100 iterations; and (d) accuracy at 100 iterations.
Figure 18. Results of the third scenario with 100% sample size: (a) loss at 50 iterations; (b) accuracy at 50 iterations; (c) loss at 100 iterations; and (d) accuracy at 100 iterations.
Based on the results reported in Table 9 and Figure 16, Figure 17 and Figure 18, the model was able to identify untrusted entities with high effectiveness, at the cost of an increase in detection time. The outcomes demonstrated a clear correlation between accuracy, recall, and F-measure. In addition, the number of iterations and the loss rate are inversely related. In the presence of more than one attack, the proposed model required a longer detection time than without attacks or with a single attack, with times ranging from 15 to 20 min. The results show that the proposed model can detect untrusted entities during trust-related attacks.
In conclusion, the results of the proposed model revealed that the LSTM technique can deal with different data sizes and effectively learn the complex behavioral patterns of IoT devices. As a result, the proposed model can improve the reliability of IoT services and devices while also adapting to complex and unknown trust-related behaviors.

5.1.3. Comparison with Existing Deep Learning Models

Deep learning models have been adapted to the field of detection. Three deep learning architectures have been used in the literature (LSTM, MLP, and ANN) to detect untrusted entities in IoT devices and services. It is important to consider the nature of the data that will be tested in order to achieve the goal for which the model was designed. Despite the fact that deep learning is frequently employed to handle issues with vast amounts of data and continuous changes in behavior, each model is developed with a particular goal in mind depending on the dataset.
Comparing the proposed model with other models used in related works reveals that LSTM is frequently utilized for complex learning tasks, such as prediction, spotting behavioral changes [41], machine translation [49], and handwriting generation [50], whereas ANNs are frequently used for image processing, character recognition, and forecasting, and MLPs are typically employed for image processing tasks [51]. The data used in this study come from IoT device actions; hence, they are behavioral pattern data. As a result, the proposed model is more suitable for identifying changes between trustworthy and untrusted behaviors and can consequently counter trust-related attacks.

6. Conclusions

Managing trust is an issue with far-reaching consequences for artificial societies, such as those with IoT devices. Increased dependence on IoT devices and services has aggravated this issue recently. Existing models are no longer effective in the age of big data and with IoT devices with a dynamic nature and heterogeneity. The proposed research in this paper suggested a model for trust management in IoT devices and services based on the LSTM technique. The proposed model aimed to identify untrusted entities in IoT services and isolate untrusted devices. The effectiveness of the proposed model was evaluated using different data samples with different sizes. The LSTM technique was used to detect changes in behavior with high accuracy. The experimental results demonstrated the model’s ability to recognize untrusted entities when there are trust-related attacks. As a result, the suggested model can improve IoT device and service reliability and adapt to complicated and unknowable behaviors. Future work will consider more features to calculate the trust value (e.g., energy consumption). In addition, more trust-related attack scenarios will be considered (e.g., on–off and discriminatory attacks).

Author Contributions

Conceptualization, M.A.R.; Methodology, Y.A.; Software, Y.A.; Validation, Y.A.; Formal analysis, Y.A.; Resources, M.A.R.; Data curation, Y.A.; Writing–original draft, Y.A.; Writing–review & editing, M.A.R.; Supervision, M.A.R.; Project administration, M.A.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors gratefully acknowledge Qassim University, represented by the Deanship of Scientific Research, for the financial support of this research under number (COC-2022-1-1-J-25537) during the academic year 1444 AH/2022 AD.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shafique, K.; Khawaja, B.A.; Sabir, F.; Qazi, S.; Mustaqim, M. Internet of things (IoT) for next-generation smart systems: A review of current challenges, future trends and prospects for emerging 5G-IoT scenarios. IEEE Access 2020, 8, 23022–23040. [Google Scholar] [CrossRef]
  2. Atzori, L.; Iera, A.; Morabito, G. The internet of things: A survey. Comput. Netw. 2010, 54, 2787–2805. [Google Scholar] [CrossRef]
  3. Rajesh, G.; Raajini, X.M.; Vinayagasundaram, B. Fuzzy trust-based aggregator sensor node election in internet of things. Int. J. Internet Protoc. Technol. 2016, 9, 151–160. [Google Scholar] [CrossRef]
  4. Bera, A. 80 Wicked & Insightful IoT Statistics. SafeAtLast, 2020. Available online: https://www.statista.com/statistics/1183457/iot-connected-devices-worldwide/ (accessed on 27 February 2023).
  5. Sfar, A.R.; Natalizio, E.; Challal, Y.; Chtourou, Z. A roadmap for security challenges in the Internet of Things. Digit. Commun. Netw. 2018, 4, 118–137. [Google Scholar] [CrossRef]
  6. Jayasinghe, U.; Lee, G.M.; Um, T.W.; Shi, Q. Machine learning based trust computational model for IoT services. IEEE Trans. Sustain. Comput. 2018, 4, 39–52. [Google Scholar] [CrossRef]
  7. Najib, W.; Sulistyo, S. Survey on trust calculation methods in Internet of Things. Procedia Comput. Sci. 2019, 161, 1300–1307. [Google Scholar] [CrossRef]
  8. Yan, Z.; Zhang, P.; Vasilakos, A.V. A survey on trust management for Internet of Things. J. Netw. Comput. Appl. 2014, 42, 120–134. [Google Scholar] [CrossRef]
  9. Djedjig, N.; Tandjaoui, D.; Romdhani, I.; Medjek, F. Trust management in the internet of things. In Security and Privacy in Smart Sensor Networks; IGI Global: Hershey, PA, USA, 2018; pp. 122–146. [Google Scholar] [CrossRef]
  10. Chen, R.; Guo, J.; Bao, F. Trust management for SOA-based IoT and its application to service composition. IEEE Trans. Serv. Comput. 2014, 9, 482–495. [Google Scholar] [CrossRef]
  11. Mendoza, C.V.; Kleinschmidt, J.H. Mitigating on-off attacks in the internet of things using a distributed trust management scheme. Int. J. Distrib. Sens. Netw. 2015, 11, 859731. [Google Scholar] [CrossRef]
  12. Khalil, A.; Mbarek, N.; Togni, O. Fuzzy Logic based security trust evaluation for IoT environments. In Proceedings of the IEEE/ACS 16th International Conference on Computer Systems and Applications (AICCSA), Abu Dhabi, United Arab Emirates, 3–7 November 2019; IEEE: New York, NY, USA, 2019; pp. 1–8. [Google Scholar] [CrossRef]
  13. Asiri, S.; Miri, A. An IoT trust and reputation model based on recommender systems. In Proceedings of the 14th Annual Conference on Privacy, Security and Trust. (PST), Auckland, New Zealand, 12–14 December 2016; IEEE: New York, NY, USA, 2016; pp. 561–568. [Google Scholar] [CrossRef]
  14. Aljazzaf, Z.M.; Perry, M.; Capretz, M.A. Online trust: Definition and principles. In Proceedings of the Fifth International Multi-conference on Computing in the Global Information Technology, Valencia, Spain, 20–25 September 2010; IEEE: New York, NY, USA, 2010; pp. 163–168. [Google Scholar] [CrossRef]
  15. Daubert, J.; Wiesmaier, A.; Kikiras, P. A view on privacy & trust in IoT. In Proceedings of the IEEE International Conference on Communication Workshop (ICCW), London, UK, 8–12 June 2015; IEEE: New York, NY, USA, 2015; pp. 2665–2670. [Google Scholar] [CrossRef]
  16. Thierer, A.D. Privacy and Security Implications of the Internet of Things. 2013. Available online: https://ssrn.com/abstract=2273031 (accessed on 27 February 2023).
  17. Wang, J.P.; Bin, S.; Yu, Y.; Niu, X.X. Distributed trust management mechanism for the internet of things. In Applied Mechanics and Materials; Trans Tech Publ.: Stafa-Zurich, Switzerland, 2013; pp. 2463–2467. [Google Scholar] [CrossRef]
  18. Truong, N.B.; Lee, H.; Askwith, B.; Lee, G.M. Toward a trust evaluation mechanism in the social internet of things. Sensors 2017, 17, 1346. [Google Scholar] [CrossRef]
  19. Azzedin, F.; Ghaleb, M. Internet-of-Things and information fusion: Trust. perspective survey. Sensors 2019, 19, 1929. [Google Scholar] [CrossRef]
  20. Farahbakhsh, B.; Fanian, A.; Manshaei, M.H. TGSM: Towards trustworthy group-based service management for social IoT. Internet Things 2021, 13, 100312. [Google Scholar] [CrossRef]
  21. Masmoudi, M.; Abdelghani, W.; Amous, I.; Sèdes, F. Deep Learning for Trust-Related Attacks Detection in Social. Internet of Things. In Proceedings of the International Conference on e-Business Engineering, Shanghai, China, 12–13 October 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 389–404. [Google Scholar] [CrossRef]
  22. Abdelghani, W.; Zayani, C.A.; Amous, I.; Sèdes, F. User-centric IoT: Challenges and perspectives. In Proceedings of the UBICOMM 2018: The Twelfth International Conference on Mobile Ubiquitous Computing, Systems, Services and Technologies, Athens, Greece, 18–22 November 2019. [Google Scholar] [CrossRef]
  23. Chae, Y.; DiPippo, L.C.; Sun, Y.L. Trust management for defending on-off attacks. IEEE Trans. Parallel Distrib. Syst. 2014, 26, 1178–1191. [Google Scholar] [CrossRef]
  24. Bao, F.; Chen, R.; Guo, J. Scalable, adaptive and survivable trust management for community of interest based internet of things systems. In Proceedings of the IEEE Eleventh International Symposium on Autonomous Decentralized Systems (ISADS), Mexico City, Mexico, 6–8 March 2013; IEEE: New York, NY, USA, 2013; pp. 1–7. [Google Scholar] [CrossRef]
  25. Che, S.; Feng, R.; Liang, X.; Wang, X. A lightweight trust management based on Bayesian and Entropy for wireless sensor networks. Secur. Commun. Netw. 2015, 8, 168–175. [Google Scholar] [CrossRef]
  26. Din, I.U.; Guizani, M.; Kim, B.S.; Hassan, S.; Khan, M.K. Trust. management techniques for the Internet of Things: A survey. IEEE Access 2018, 7, 29763–29787. [Google Scholar] [CrossRef]
  27. Alshehri, M.D.; Hussain, F.K. A centralized trust management mechanism for the internet of things (CTM-IoT). In Advances on Broad-Band Wireless Computing, Communication and Applications, Proceedings of the 12th International Conference on Broad-Band Wireless Computing, Communication and Applications (BWCCA-2017), Barcelona, Spain, 8–10 November 2017; Springer: Berlin/Heidelberg, Germany, 2018; pp. 533–543. [Google Scholar] [CrossRef]
  28. Sethi, P.; Sarangi, S.R. Internet of things: Architectures, protocols, and applications. J. Electr. Comput. Eng. 2017, 2017, 9324035. [Google Scholar] [CrossRef]
  29. Alaba, F.A.; Othman, M.; Hashem, I.A.T.; Alotaibi, F. Internet of Things security: A survey. J. Netw. Comput. Appl. 2017, 88, 10–28. [Google Scholar] [CrossRef]
  30. Ba-hutair, M.N.; Bouguettaya, A.; Neiat, A.G. Multi-perspective trust management framework for crowdsourced IoT services. IEEE Trans. Serv. Comput. 2021, 15, 2396–2409. [Google Scholar] [CrossRef]
  31. Babar, S.; Mahalle, P. Trust Management Approach for Detection of Malicious Devices in SIoT. Teh. Glas. 2021, 15, 43–50. [Google Scholar] [CrossRef]
  32. Zheng, G.; Gong, B.; Zhang, Y. Dynamic network security mechanism based on trust management in wireless sensor networks. Wirel. Commun. Mob. Comput. 2021, 2021, 6667100. [Google Scholar] [CrossRef]
  33. Lingda, K.; Feng, Z.; Yingjie, Z.; Nan, Q.; Dashuai, L.; Shaotang, C. Evaluation method of trust degree of distribution IoT terminal equipment based on information entropy. J. Phys. Conf. Ser. 2021, 1754, 012108. [Google Scholar] [CrossRef]
  34. Rizwanullah, M.; Singh, S.; Kumar, R.; Alrayes, F.S.; Alharbi, A.; Alnfiai, M.M.; Chaurasia, P.K.; Agrawal, A. Development of a Model. for Trust. Management in the Social. Internet of Things. Electronics 2022, 12, 41. [Google Scholar] [CrossRef]
  35. Alghofaili, Y.; Rassam, M.A. A trust management model for IoT devices and services based on the multi-criteria decision-making approach and deep long short-term memory technique. Sensors 2022, 22, 634. [Google Scholar] [CrossRef] [PubMed]
  36. Yue, Y.; Li, S.; Legg, P.; Li, F. Deep Learning-Based Security Behaviour Analysis in IoT Environments: A Survey. Secur. Commun. Netw. 2021, 2021, 1–13. [Google Scholar] [CrossRef]
  37. Mohammadi, M.; Al-Fuqaha, A.; Sorour, S.; Guizani, M. Deep learning for IoT big data and streaming analytics: A survey. IEEE Commun. Surv. Tutor. 2018, 20, 2923–2960. [Google Scholar] [CrossRef]
  38. Anagnostopoulos, M.; Spathoulas, G.; Viaño, B.; Augusto-Gonzalez, J. Tracing your smart-home devices conversations: A real world IoT traffic data-set. Sensors 2020, 20, 6600. [Google Scholar] [CrossRef]
  39. Crawford, M.; Khoshgoftaar, T.M.; Prusa, J.D.; Richter, A.N.; Al Najada, H. Survey of review spam detection using machine learning techniques. J. Big Data 2015, 2, 1–24. [Google Scholar] [CrossRef]
  40. Zach, Normailzation in Statology 2021, Zach: Statology. Available online: https://www.statology.org/z-score-normalization/ (accessed on 27 February 2023).
  41. Mekruksavanich, S.; Jitpattanakul, A. Biometric user identification based on human activity recognition using wearable sensors: An. experiment using deep learning models. Electronics 2021, 10, 308. [Google Scholar] [CrossRef]
  42. Zhao, Z.; Xu, C.; Li, B. A LSTM-Based Anomaly Detection Model. for Log. Analysis. J. Signal. Process. Syst. 2021, 93, 745–751. [Google Scholar] [CrossRef]
  43. Kim, T.-Y.; Cho, S.-B. Web traffic anomaly detection using C-LSTM neural networks. Expert Syst. Appl. 2018, 106, 66–76. [Google Scholar] [CrossRef]
  44. Lu, B.; Luktarhan, N.; Ding, C.; Zhang, W. ICLSTM: Encrypted Traffic Service Identification Based on Inception-LSTM Neural Network. Symmetry 2021, 13, 1080. [Google Scholar] [CrossRef]
  45. Tharwat, A. Classification assessment methods. Appl. Comput. Inform. 2020, 17, 168–192. [Google Scholar] [CrossRef]
  46. Dalianis, H. Evaluation metrics and evaluation. In Clinical Text. Mining; Springer: Berlin/Heidelberg, Germany, 2018; pp. 45–53. [Google Scholar] [CrossRef]
  47. Fayyaz, Z.; Ebrahimian, M.; Nawara, D.; Ibrahim, A.; Kashef, R. Recommendation Systems: Algorithms, Challenges, Metrics, and Business Opportunities. Appl. Sci. 2020, 10, 7748. [Google Scholar] [CrossRef]
  48. Colab. Available online: https://colab.research.google.com (accessed on 27 February 2023).
  49. Sutskever, I.; Vinyals, O.; Le, Q.V. Sequence to sequence learning with neural networks. Advances in neural information processing systems. arXiv 2014, arXiv:1409.3215. [Google Scholar] [CrossRef]
  50. Graves, A. Generating sequences with recurrent neural networks. arXiv 2013, arXiv:1308.0850. [Google Scholar] [CrossRef]
  51. Shawky, O.A.; Hagag, A.; El-Dahshan, E.S.A.; Ismail, M.A. Remote. sensing image scene classification using CNN-MLP with data augmentation. Optik 2020, 221, 165356. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
