
Special Issue "Emerging Sensor Communication Network-Based AI/ML Driven Intelligent IoT"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: 30 April 2023 | Viewed by 25706

Special Issue Editors

Dr. Bhisham Sharma
Guest Editor
Department of Computer Science Engineering, Chitkara University, Himachal Pradesh, Baddi, India
Interests: wireless communication; wireless sensor networks; wireless mesh networks; next generation networking; network security; internet of things; UAV; medical image processing; edge/fog computing
Dr. Deepika Koundal
Guest Editor
Department of Computer Science, University of Petroleum & Energy Studies, Dehradun 248007, India
Interests: medical image processing; pattern recognition; computer vision; deep learning
Dr. Rabie A. Ramadan
Guest Editor
Computer Engineering Department, College of Computer Science and Engineering, Hail University, Hail 81481, Saudi Arabia
Interests: Internet of Things (IoT); computational intelligence; security; brain-computer interface (BCI); big data; artificial intelligence; optimization; deep learning
Prof. Dr. Juan M. Corchado
Guest Editor
BISITE Research Group, Edificio Multiusos I+D+i, University of Salamanca, 37007 Salamanca, Spain
Interests: artificial intelligence; machine learning; edge computing; distributed computing; blockchain; consensus models; smart cities; smart grid

Special Issue Information

Dear Colleagues,

The Internet of Things (IoT) is currently among the fastest-growing fields for the application of Artificial Intelligence (AI) and Machine Learning (ML) techniques, and the global IoT market is projected to reach roughly 24 billion devices by 2030. Supported by these advances in AI and ML, the IoT has empowered a wide range of real-life applications, including industry, e-healthcare, smart cities, smart utilities, smart transportation, and smart homes. AI and ML have become the most prominent topics as these technologies progressively find their way into everything, everywhere, from pioneering healthcare systems and novel quantum computing to personal assistants and consumer electronics. As IoT devices proliferate across heterogeneous applications, there is a need to design scalable, resource-efficient sensor-based systems that perform well in different types of wireless networks. ML techniques are advantageous in communication systems, whereas deep learning techniques are widely used in big-data analysis for prediction and performance improvement. The use of AI and ML is increasingly intertwined with the IoT: AI, ML, and deep learning are now being applied to make IoT services and devices smarter and more secure.

In the modern context, one IT megatrend is hyperautomation, which builds on AI, ML, and robotic process automation. The COVID-19 pandemic accelerated this concept, in which anything within an organization that can be automated is automated, also known as digital/intelligent business process automation. Such automated processes can adapt to fluctuating situations and respond to unanticipated circumstances. Other trends include providing security and connectivity to diverse types of IoT devices. There is therefore a need to develop automated, efficient, and scalable strategies that can identify, classify, apply, and monitor policies to ensure appropriate functionality without affecting other services on the network. Moreover, to unlock the potential of AI in business, AI processing and its data have moved from the cloud to the edge and the fog of the network. This in turn requires networks that provide dynamic performance, low-latency communication, and end-to-end bandwidth. Intent-based networking is a new networking paradigm that leverages these capabilities to meet business goals: the network uses natural language processing (NLP) to accept requests from any line of business and translates them into sets of policies that support automatic decision-making. Many application drawbacks, such as unplanned downtime, can thus be avoided by predicting equipment failure with data analytics and scheduling maintenance accordingly; predictive maintenance mitigates the heavy economic cost of unexpected interruptions.
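The predictive-maintenance idea described above can be illustrated with a minimal sketch: flag a sensor reading as a likely precursor of failure when it drifts far from its recent rolling statistics. The window size, threshold, and readings below are illustrative assumptions, not taken from any cited system.

```python
from collections import deque

def make_failure_detector(window=20, n_sigmas=3.0):
    """Flag a reading as anomalous when it drifts more than
    n_sigmas standard deviations from the recent rolling mean."""
    history = deque(maxlen=window)

    def check(reading):
        if len(history) < window:
            history.append(reading)
            return False  # still warming up
        mean = sum(history) / len(history)
        var = sum((x - mean) ** 2 for x in history) / len(history)
        std = var ** 0.5
        anomalous = std > 0 and abs(reading - mean) > n_sigmas * std
        history.append(reading)
        return anomalous

    return check

check = make_failure_detector(window=20)
readings = [10.0 + 0.1 * (i % 5) for i in range(40)] + [25.0]  # sudden spike
flags = [check(r) for r in readings]  # only the final spike is flagged
```

A real deployment would replace the rolling z-score with a trained model, but the scheduling logic (act before the flagged component fails) is the same.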

Operational efficiency can be increased by predicting working conditions and identifying the factors that must be adjusted on the fly to maintain ideal outcomes. AI also enables services such as NLP interfaces for speaking with machinery, fleet management, AI-enabled robots, and drones. By integrating AI and the IoT, risk management can be enhanced: various types of risk can be predicted in advance and a rapid response can be automated. AI has become a standard companion to IoT operations, improving them and offering a competitive edge in business performance.

Developments in the IoT are playing a significant role in our daily lives. In the IoT, huge numbers of devices such as actuators and sensors are deployed and connected to collect many kinds of data in domains including healthcare, transportation, public safety, energy, manufacturing, and smart-city infrastructure. At the same time, ML/DL has shown substantial success in transforming massive, complex datasets into precise insight, which can significantly facilitate intelligence, analysis, automation, and decision-making. Integrated with advances in big-data analytics, networking technologies, and large-scale computing, ML has provided a means of performing modeling and intelligence at scale, achieving remarkable results in diverse areas. Despite these achievements, leveraging machine learning in the IoT faces significant challenges on the way to an AI-enabled Internet of controllable and dependable things, and we must take into account the outstanding requirements for latency, connectivity, accessibility, scalability, resiliency, and security. The fusion of ML into the IoT consequently creates opportunities that necessitate interdisciplinary endeavors and novel research to solve these challenges.

This Special Issue focuses on research results in the above-mentioned domains. Contributions are invited on AI/ML models for IoT devices and deployed networks, on big-data analytics and decision-making approaches, and on new practices and concepts in AI/ML-automated systems. Authors from both academia and industry working on the application of AI/ML techniques to computer systems are invited to submit original articles on the design, optimization, and implementation of protocols, models, and optimization methods.

This Special Issue includes the following topics of interest:

  • Machine learning for theoretical foundation and models for IoT;
  • Machine learning for IoT system deployment and operation;
  • Machine learning for IoT assisted industrial automation;
  • Machine learning-enabled real-time IoT data analytics;
  • Machine learning-enabled sensing and decision-making for IoT;
  • Machine learning-enabled cloud/edge computing systems for IoT;
  • Evaluation platforms and hardware-in-the-loop testbeds for machine learning-enabled IoT;
  • Machine learning-assisted intrusion and malware detection for IoT;
  • Machine learning for access congestion management in edge computing IoT networks;
  • AI/DL-based IoT-cloud convergent algorithms/applications for healthcare;
  • ML-driven prediction of long-term pandemic risk;
  • AI/DL-empowered data fusion for healthcare;
  • Sensor-based human-centric AI for IoT systems;
  • Explainable AI (XAI) and predictive data analytics for healthcare;
  • DL techniques for handling the post-COVID-19 crisis;
  • IoT-cloud healthcare big-data storage, processing, and analysis using ML/DL techniques;
  • Adversarial attacks, threats, and defenses for DL-enabled healthcare;
  • Protocols and algorithms for intelligent IoT systems;
  • 5G/6G technology-enabled AIoT.

Dr. Bhisham Sharma
Dr. Deepika Koundal
Dr. Rabie A. Ramadan
Prof. Dr. Juan M. Corchado
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (15 papers)


Research


Article
Machine Learning-Enabled Smart Industrial Automation Systems Using Internet of Things
Sensors 2023, 23(1), 324; https://doi.org/10.3390/s23010324 - 28 Dec 2022
Viewed by 877
Abstract
Industrial automation uses robotics and software to operate equipment and procedures across industries. Many applications integrate IoT, machine learning, and other technologies to provide smart features that improve the user experience. The use of such technology offers businesses and people tremendous assistance in successfully achieving commercial and noncommercial requirements. Organizations are expected to automate industrial processes owing to the significant risk management and inefficiency of conventional processes. Hence, we developed an elaborative stepwise stacked artificial neural network (ESSANN) algorithm to greatly improve automation industries in controlling and monitoring the industrial environment. Initially, an industrial dataset provided by KLEEMANN Greece was used. The collected data were then preprocessed. Principal component analysis (PCA) was used to extract features, and feature selection was based on least absolute shrinkage and selection operator (LASSO). Subsequently, the ESSANN approach is proposed to improve automation industries. The performance of the proposed algorithm was also examined and compared with that of existing algorithms. The key factors compared with existing technologies are delay, network bandwidth, scalability, computation time, packet loss, operational cost, accuracy, precision, recall, and mean absolute error (MAE). Compared to traditional algorithms for industrial automation, our proposed techniques achieved high results, such as a delay of approximately 52%, network bandwidth accomplished at 97%, scalability attained at 96%, computation time acquired at 59 s, packet loss achieved at a minimum level of approximately 53%, an operational cost of approximately 59%, accuracy of 98%, precision of 98.95%, recall of 95.02%, and MAE of 80%. By analyzing the results, it can be seen that the proposed system was effectively implemented. Full article
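As a rough illustration of the preprocessing pipeline this abstract describes (PCA for feature extraction followed by LASSO-based selection), here is a minimal NumPy sketch; the synthetic dataset, hyperparameters, and the simple coordinate-descent LASSO are illustrative stand-ins, not the authors' implementation.

```python
import numpy as np

def pca(X, k):
    """Project X onto its top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def lasso_cd(X, y, alpha=0.1, iters=200):
    """LASSO via cyclic coordinate descent (soft-thresholding)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        for j in range(d):
            r = y - X @ w + X[:, j] * w[j]      # residual excluding feature j
            rho = X[:, j] @ r / n
            z = (X[:, j] ** 2).sum() / n
            w[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / z
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
# Only features 0 and 1 actually drive the target
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.01 * rng.normal(size=200)
Z = pca(X, k=5)                               # compressed representation
w = lasso_cd(X, y, alpha=0.1)
selected = np.flatnonzero(np.abs(w) > 1e-3)   # surviving features
```

In the paper's pipeline the selected features would then feed the stacked ANN; here they simply recover the two informative columns.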

Article
Best Fit DNA-Based Cryptographic Keys: The Genetic Algorithm Approach
Sensors 2022, 22(19), 7332; https://doi.org/10.3390/s22197332 - 27 Sep 2022
Cited by 2 | Viewed by 536
Abstract
DNA (Deoxyribonucleic Acid) cryptography has revolutionized information security by combining rigorous biological and mathematical concepts to encode original information as a DNA sequence. Such schemes depend crucially on the corresponding DNA-based cryptographic keys. However, owing to redundancy or observable patterns, some of the keys are rendered weak, as they are prone to intrusions. This paper proposes a Genetic Algorithm-inspired method to strengthen weak keys obtained from random DNA-based key generators instead of completely discarding them. Fitness functions and genetic operators have been chosen and modified to suit the fundamentals of DNA cryptography, in contrast to fitness functions for traditional cryptographic schemes. The crossover and mutation rates are reduced with each new population as more keys pass the fitness tests and need not be strengthened. Moreover, as the size of the initial key population increases, the key space becomes highly exhaustive and less prone to brute-force attacks. The paper demonstrates that out of an initial 25 × 25 population of DNA keys, 14 keys are rendered weak. Complete results and calculations of how each weak key can be strengthened by generating 4 new populations are illustrated. The analysis of the proposed scheme for different initial populations shows that a maximum of 8 new populations has to be generated to strengthen all 500 weak keys of a 500 × 500 initial population. Full article
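The evolutionary loop the abstract describes can be sketched roughly as follows; the fitness function, decay schedule, and toy keys are illustrative assumptions, not the paper's actual operators.

```python
import random

BASES = "ACGT"

def fitness(key):
    """Crude fitness: penalize base imbalance and repeated runs,
    both of which make a DNA key predictable."""
    balance = -sum(abs(key.count(b) / len(key) - 0.25) for b in BASES)
    runs = -sum(key[i] == key[i + 1] for i in range(len(key) - 1)) / len(key)
    return balance + runs

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(key, rate):
    return "".join(random.choice(BASES) if random.random() < rate else c
                   for c in key)

def strengthen(keys, threshold=-0.5, generations=4, rate=0.2):
    """Evolve weak keys toward the fitness threshold; the mutation
    rate decays each generation, mirroring the paper's description."""
    for _ in range(generations):
        weak = [k for k in keys if fitness(k) < threshold]
        if not weak:
            break
        strong = [k for k in keys if fitness(k) >= threshold] or weak
        keys = [k if fitness(k) >= threshold
                else mutate(crossover(k, random.choice(strong)), rate)
                for k in keys]
        rate *= 0.5
    return keys

random.seed(1)
pop = ["AAAAAAAAAAAAAAAA", "ACGTACGTACGTACGT", "GGGGCCCCAAAATTTT"]
out = strengthen(pop)  # weak keys are recombined with strong ones
```

Keys already above the threshold pass through unchanged, which is the paper's point: strengthen weak keys rather than discard them.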

Article
Towards High Accuracy Pedestrian Detection on Edge GPUs
Sensors 2022, 22(16), 5980; https://doi.org/10.3390/s22165980 - 10 Aug 2022
Cited by 4 | Viewed by 651
Abstract
Despite the rapid development of pedestrian detection algorithms, the balance between detection accuracy and efficiency is still far from being achieved, because edge GPUs (with low computing power) limit the parameters of the model. To address this issue, we propose YOLOv4-TP-Tiny, based on the YOLOv4 model, which mainly includes two modules: two-dimensional attention (TA) and a pedestrian-based feature extraction module (PFM). First, we integrate the TA mechanism into the backbone network, which increases the network's attention to the visible area of pedestrians and improves the accuracy of pedestrian detection. Then, the PFM is used to replace the original spatial pyramid pooling (SPP) structure in YOLOv4 to obtain the YOLOv4-TP algorithm, which can adapt to people of different sizes to obtain higher detection accuracy. To maintain detection speed, we replaced the normal convolution with a ghost network with a TA mechanism, resulting in more feature maps with fewer parameters. We constructed a one-way multi-scale feature fusion structure to replace the down-sampling process, thereby reducing network parameters, to obtain the YOLOv4-TP-Tiny model. The experimental results show that YOLOv4-TP-Tiny achieves 58.3% AP at 31 FPS on the WiderPerson pedestrian dataset. Under the same hardware conditions and dataset, the AP of YOLOv4-Tiny is 55.9% at 29 FPS. Full article

Article
Hybrid SFNet Model for Bone Fracture Detection and Classification Using ML/DL
Sensors 2022, 22(15), 5823; https://doi.org/10.3390/s22155823 - 04 Aug 2022
Cited by 3 | Viewed by 904
Abstract
An expert performs bone fracture diagnosis using an X-ray image manually, which is a time-consuming process. The development of machine learning (ML) and deep learning (DL) has set a new path in medical image diagnosis. In this study, we proposed a novel multi-scale feature fusion of a convolutional neural network (CNN) and an improved Canny edge algorithm that segregates fractured and healthy bone images. The hybrid scale fracture network (SFNet) is a novel two-scale sequential DL model. This model is highly efficient for bone fracture diagnosis and takes less computation time than other state-of-the-art deep CNN models. The innovation behind this research is that it works with an improved Canny edge algorithm to obtain edges in the images that localize the fracture region. After that, grey images and their corresponding Canny edge images are fed to the proposed hybrid SFNet for training and evaluation. Furthermore, the performance is compared with state-of-the-art deep CNN models on a bone image dataset. Our results showed that SFNet with Canny (SFNet + Canny) achieved the highest accuracy, F1-score, and recall, of 99.12%, 99%, and 100%, respectively, for bone fracture diagnosis. This shows that using a Canny edge algorithm improves the performance of a CNN. Full article
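To illustrate the two-stream input the abstract describes (a grey image paired with its edge map), here is a minimal sketch that uses a plain Sobel gradient-magnitude detector as a simplified stand-in for the paper's improved Canny algorithm; the threshold and toy image are illustrative.

```python
import numpy as np

def sobel_edges(img, thresh=1.0):
    """Gradient-magnitude edge map; a simplified stand-in for the
    improved Canny detector used to localize fracture regions."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    mag = np.hypot(gx, gy)
    return (mag > thresh).astype(np.float32)

def two_channel_input(img):
    """Stack the grey image with its edge map, mirroring the paper's
    grey + Canny two-stream input to the hybrid SFNet."""
    core = img[1:-1, 1:-1]           # crop to match the valid-convolution size
    return np.stack([core, sobel_edges(img)], axis=0)

img = np.zeros((8, 8))
img[:, 4:] = 1.0                     # vertical step edge
x = two_channel_input(img)           # shape (2, 6, 6)
```

The stacked array is exactly the kind of multi-channel tensor a CNN front end would consume.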

Article
An Efficient and Privacy-Preserving Scheme for Disease Prediction in Modern Healthcare Systems
Sensors 2022, 22(15), 5574; https://doi.org/10.3390/s22155574 - 26 Jul 2022
Cited by 2 | Viewed by 764
Abstract
With the Internet of Things (IoT), mobile healthcare applications can now offer a variety of dimensionalities and online services. Disease Prediction Systems (DPS) increase the speed and accuracy of diagnosis, improving the quality of healthcare services. However, privacy is garnering an increasing amount of attention these days, especially concerning personal healthcare data, which are sensitive. A variety of privacy-preservation techniques for disease prediction already exist. Nonetheless, medical users may be affected by numerous disparate diseases, so it is vital to consider multi-label instances, which might decrease accuracy. Thus, this paper proposes an efficient privacy-preserving (PP) scheme for patient healthcare data collected from IoT devices, aimed at disease prediction in the modern Health Care System (HCS). The proposed system utilizes Log of Round value-based Elliptic Curve Cryptography (LR-ECC) to enhance the security level during data transfer after the initial authentication phase. The authorized healthcare staff can securely download the patient data on the hospital side. Utilizing the Herding Genetic Algorithm-based Deep Learning Neural Network (EHGA-DLNN), these data can be tested with the trained system to predict diseases. The experimental results demonstrate that the proposed approach improves prediction accuracy, privacy, and security compared to the existing methods. Full article

Article
Game Theory-Based Authentication Framework to Secure Internet of Vehicles with Blockchain
Sensors 2022, 22(14), 5119; https://doi.org/10.3390/s22145119 - 07 Jul 2022
Cited by 7 | Viewed by 1111
Abstract
The Internet of Vehicles (IoV) is a new paradigm for vehicular networks. Using diverse access methods, the IoV enables vehicles to connect with their surroundings. However, without data security, IoV settings can be hazardous. Because of the IoV's openness and self-organization, it is prone to malevolent attacks. To overcome this problem, this paper proposes a blockchain-enabled, game theory-based authentication mechanism for securing IoVs. A three-layer multi-trusted-authority solution is provided in which vehicles can be authenticated without delay, from initial entry through movement into different trusted authorities' areas, first through Physical Unclonable Functions (PUFs) and later through duel gaming and a dynamic Proof-of-Work (dPoW) consensus mechanism. Formal and informal security analyses justify the framework's credibility in depth, with mathematical proofs. A rigorous comparative study demonstrates that the suggested framework achieves greater security and functionality and incurs lower transaction and computation overhead than many of the available solutions, which have never considered the prime concerns of physical cloning and side-channel attacks; the framework in this paper handles both, along with all the other security attacks addressed by previous work. Finally, the suggested framework has been subjected to a blockchain implementation to demonstrate its efficacy with duel gaming in achieving authentication, as well as its ability to use a lighter-weight blockchain at the physical layer, which current blockchain-based authentication models for IoVs do not support. Full article
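The dynamic proof-of-work component can be illustrated with a minimal hash-puzzle sketch in which the difficulty parameter can be tuned per node; the message format and difficulty value are illustrative, and this is not the paper's dPoW construction.

```python
import hashlib

def solve_pow(message, difficulty):
    """Find a nonce whose SHA-256(message:nonce) digest starts with
    `difficulty` zero hex digits. Raising or lowering `difficulty`
    per node is the essence of a dynamic proof-of-work (dPoW)."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

def verify_pow(message, nonce, difficulty):
    """Verification is a single hash, however hard the puzzle was."""
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce, digest = solve_pow("vehicle-auth-v17", difficulty=3)
ok = verify_pow("vehicle-auth-v17", nonce, 3)
```

The asymmetry between solving (many hashes) and verifying (one hash) is what makes such puzzles usable for lightweight authentication at resource-constrained nodes.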

Article
Artificial Intelligence Algorithm-Based Economic Denial of Sustainability Attack Detection Systems: Cloud Computing Environments
Sensors 2022, 22(13), 4685; https://doi.org/10.3390/s22134685 - 21 Jun 2022
Cited by 4 | Viewed by 1017
Abstract
Cloud computing is currently the most cost-effective means of providing commercial and consumer IT services online. However, it is prone to new flaws. An economic denial of sustainability (EDoS) attack specifically leverages the pay-per-use paradigm by building up resource demands over time, culminating in unanticipated usage charges to the cloud customer. We present an effective approach to mitigating EDoS attacks in cloud computing. To mitigate such distributed attacks, methods for detecting them on different cloud computing smart grids have been suggested, including hard-threshold, machine learning, and deep learning algorithms: support vector machine (SVM), K-nearest neighbors (KNN), and random forest (RF) trees, as well as convolutional neural network (CNN) and long short-term memory (LSTM) models. These algorithms have greater accuracy and lower false alarm rates and are essential for improving the cloud computing service provider's security system. The dataset of nine injection attacks for testing the machine and deep learning algorithms was obtained from the Cyber Range Lab at the University of New South Wales (UNSW), Canberra. The experiments were conducted in two categories: binary classification, which included normal and attack datasets, and multi-classification, which included nine classes of attack data. The results showed that the RF approach achieved 98% accuracy with binary classification, whereas the SVM model achieved 97.54% accuracy with multi-classification. Moreover, statistical analyses, such as the mean square error (MSE), Pearson correlation coefficient (R), and root mean square error (RMSE), were applied in evaluating the prediction errors between the input data and the prediction values from the different machine and deep learning algorithms. The RF tree algorithm achieved a very low prediction error (MSE = 0.01465) and an R2 (R squared) level of 92.02% with the binary classification dataset, whereas it attained an R2 level of 89.35% with the multi-classification dataset. The findings of the proposed system were compared with different existing EDoS attack detection systems, and the proposed mitigation algorithms, developed based on artificial intelligence, outperformed them. The goal of this research is to enable the detection and effective mitigation of EDoS attacks. Full article
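As a toy illustration of one of the detectors listed above, here is a from-scratch K-nearest-neighbours classifier applied to synthetic traffic features; the feature choice and data are invented for illustration and bear no relation to the UNSW dataset.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """Plain k-nearest-neighbours classifier (majority vote)."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)       # Euclidean distances
        nearest = y_train[np.argsort(d)[:k]]          # labels of k closest
        preds.append(np.bincount(nearest).argmax())   # majority vote
    return np.array(preds)

rng = np.random.default_rng(42)
# Synthetic traffic features: [requests/s, mean payload KB]
normal = rng.normal([50, 4], [10, 1], size=(100, 2))
edos = rng.normal([400, 4], [60, 1], size=(100, 2))   # slow resource build-up
X = np.vstack([normal, edos])
y = np.array([0] * 100 + [1] * 100)                   # 0 = benign, 1 = EDoS
test = np.array([[55, 4.2], [390, 3.8]])
pred = knn_predict(X, y, test)
```

Real EDoS features would span far more dimensions (billing rate, autoscaling events, per-tenant quotas), but the classification step is structurally the same.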

Article
A Cloud Computing-Based Modified Symbiotic Organisms Search Algorithm (AI) for Optimal Task Scheduling
Sensors 2022, 22(4), 1674; https://doi.org/10.3390/s22041674 - 21 Feb 2022
Viewed by 1106
Abstract
The search algorithm based on symbiotic organisms' interactions is a relatively recent bio-inspired swarm intelligence algorithm for solving numerical optimization problems. It optimizes applications by simulating the symbiotic relationships among distinct species in an ecosystem. The task scheduling problem is NP-complete, which makes it hard to obtain an exact solution, especially for large-scale tasks. This paper proposes a modified symbiotic organisms search-based scheduling algorithm for the efficient mapping of heterogeneous tasks to cloud resources of different capacities. The significant contribution of this technique is the simplified representation of the algorithm's mutualism process, which uses equity as a measure of the relationship characteristics or efficiency of species in the current ecosystem when moving to the next generation. These relational characteristics are achieved by replacing the original mutual vector, which uses an arithmetic mean to measure the mutual characteristics, with a geometric mean that enhances the survival advantage of two distinct species. The modified symbiotic organisms search algorithm (G_SOS) aims to minimize task execution time (makespan), cost, response time, and degree of imbalance, and to improve the convergence speed toward an optimal solution in an IaaS cloud. The performance of the proposed technique was evaluated using the CloudSim toolkit simulator; the improvement of G_SOS over classical SOS and PSO-SA in terms of makespan minimization ranges between 0.61–20.08% and 1.92–25.68%, respectively, across large-scale tasks spanning 100 to 1000 Million Instructions (MI). The solutions are found to be better than those of the existing standard SOS technique and PSO. Full article
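The geometric-mean modification to the mutualism phase can be sketched as follows; the benefit factors, candidate vectors, and update form are a simplified reading of standard SOS, not the authors' exact formulation.

```python
import math
import random

def mutualism_step(xi, xj, best, use_geometric=True):
    """One SOS mutualism update for two candidate solutions xi, xj.
    G_SOS replaces the arithmetic-mean mutual vector with a geometric
    mean of the paired components (values assumed positive)."""
    bf1, bf2 = random.choice([1, 2]), random.choice([1, 2])  # benefit factors
    new_i, new_j = [], []
    for a, b, g in zip(xi, xj, best):
        mv = math.sqrt(a * b) if use_geometric else (a + b) / 2  # mutual vector
        r = random.random()
        new_i.append(a + r * (g - mv * bf1))
        new_j.append(b + r * (g - mv * bf2))
    return new_i, new_j

random.seed(3)
xi, xj = [4.0, 9.0], [1.0, 4.0]
best = [2.0, 5.0]
ni, nj = mutualism_step(xi, xj, best)
```

Since the geometric mean of unequal positive numbers is always smaller than their arithmetic mean, the modified mutual vector pulls candidates differently, which is the source of the reported convergence change.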

Article
ABCanDroid: A Cloud Integrated Android App for Noninvasive Early Breast Cancer Detection Using Transfer Learning
Sensors 2022, 22(3), 832; https://doi.org/10.3390/s22030832 - 22 Jan 2022
Cited by 11 | Viewed by 2012
Abstract
Many patients affected by breast cancer die every year because of improper diagnosis and treatment. In recent years, applications of deep learning algorithms in the field of breast cancer detection have proved to be quite efficient. However, the application of such techniques still has much scope for improvement: major work has been done in this field, but it can be made more efficient through transfer learning. In the proposed approach, a Convolutional Neural Network (CNN) is complemented with transfer learning to increase the efficiency and accuracy of early breast cancer detection for better diagnosis. The approach uses a pre-trained model, which already has weights assigned, rather than building the complete model from scratch. This paper mainly focuses on a ResNet101-based transfer learning model paired with the ImageNet dataset. The proposed framework provided an accuracy of 99.58%. Extensive experiments and tuning of hyperparameters were performed to acquire the best possible classification results. The proposed framework aims to be an efficient tool for doctors and society as a whole, helping users in the early detection of breast cancer. Full article
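The core transfer-learning recipe (freeze the pre-trained feature extractor, train only the classification head) can be sketched without any deep learning framework; the random "backbone" and toy labels below are illustrative stand-ins for ResNet101 and real imaging data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "pre-trained backbone": a fixed projection whose weights are
# frozen, just as ResNet101's convolutional layers are in the paper.
W_frozen = rng.normal(size=(8, 4))

def backbone(X):
    return np.tanh(X @ W_frozen)               # frozen feature extractor

def train_head(X, y, lr=0.5, epochs=300):
    """Train only the logistic classification head on frozen features,
    which is the core of the transfer-learning recipe."""
    F = backbone(X)
    w = np.zeros(F.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # sigmoid prediction
        grad_w = F.T @ (p - y) / len(y)
        grad_b = (p - y).mean()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy labels aligned with the first frozen feature, so only the head
# needs training (a deliberately easy illustration).
X = rng.normal(size=(120, 8))
y = (backbone(X)[:, 0] > 0).astype(float)
w, b = train_head(X, y)
p = 1.0 / (1.0 + np.exp(-(backbone(X) @ w + b)))
acc = ((p > 0.5) == y).mean()
```

The payoff is that only a handful of head parameters are optimized; everything upstream keeps the weights learned on the large source dataset.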

Article
A Low-Cost Multi-Sensor Data Acquisition System for Fault Detection in Fused Deposition Modelling
Sensors 2022, 22(2), 517; https://doi.org/10.3390/s22020517 - 10 Jan 2022
Cited by 12 | Viewed by 2366
Abstract
Fused deposition modelling (FDM)-based 3D printing is a trending technology in the era of Industry 4.0 that manufactures products layer by layer. It offers remarkable benefits such as rapid prototyping, cost-effectiveness, flexibility, and a sustainable manufacturing approach. Along with these advantages, a few defects occur in FDM products during the printing stage, and diagnosing them is a challenging task. Proper data acquisition and monitoring systems need to be developed for effective fault diagnosis. In this paper, the authors propose a low-cost multi-sensor data acquisition system (DAQ) for detecting various faults in 3D-printed products. The data acquisition system was developed using an Arduino micro-controller that collects real-time multi-sensor signals using vibration, current, and sound sensors. Various fault conditions were introduced into the 3D-printed products to analyze their effect on the captured sensor data. Time- and frequency-domain analyses were performed on the captured data to create feature vectors, and the chi-square method was used to select the most significant features for training the CNN model. The K-means clustering algorithm was used for data clustering, and the bell curve (normal distribution) was used to define individual sensor threshold values under normal conditions. The CNN model was used to classify normal and fault-condition data, achieving an accuracy of around 94%, with model performance evaluated on recall, precision, and F1 score. Full article
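The per-sensor bell-curve thresholding the abstract mentions can be sketched as a mean ± n·sigma band per channel; the sensor names, units, and numbers are illustrative.

```python
import numpy as np

def sensor_thresholds(baseline, n_sigmas=3.0):
    """Per-sensor normal-operation band (mean ± n·sigma), i.e. the
    'bell curve' thresholding applied to each sensor channel."""
    mu = baseline.mean(axis=0)
    sigma = baseline.std(axis=0)
    return mu - n_sigmas * sigma, mu + n_sigmas * sigma

def flag_faults(sample, lo, hi):
    """True where a sensor reading leaves its normal band."""
    return (sample < lo) | (sample > hi)

rng = np.random.default_rng(7)
# Columns: vibration (g), current (A), sound (dB) under normal printing
baseline = rng.normal([0.2, 1.5, 60.0], [0.02, 0.1, 2.0], size=(500, 3))
lo, hi = sensor_thresholds(baseline)
fault = flag_faults(np.array([0.21, 1.48, 75.0]), lo, hi)  # noisy print head
```

In the paper this simple band test complements the CNN: the bands give a cheap per-sensor alarm, while the CNN classifies the combined signal pattern.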
Article
Attacks to Automatous Vehicles: A Deep Learning Algorithm for Cybersecurity
Sensors 2022, 22(1), 360; https://doi.org/10.3390/s22010360 - 04 Jan 2022
Cited by 20 | Viewed by 3034
Abstract
Rapid technological development has drastically changed the automotive industry. Network communication has improved, helping vehicles transition from completely machine-controlled to software-controlled technologies. The autonomous vehicle network is controlled by the controller area network (CAN) bus protocol. Nevertheless, the autonomous vehicle network still has cybersecurity weaknesses, because the complexity of its data and traffic behaviors facilitates unauthorized intrusion into the CAN bus and several types of attacks. Developing systems that rapidly detect message attacks on the CAN bus is therefore a major challenge. This study presents a high-performance, artificial-intelligence-based system that protects the vehicle network from cyber threats. The system secures the autonomous vehicle from intrusions by using deep learning approaches. The proposed security system was verified using a real automotive vehicle network dataset that includes spoofing, flooding, and replay attacks as well as benign packets. Preprocessing was applied to convert the categorical data into numerical form. The dataset was processed using a convolutional neural network (CNN) and a hybrid model combining a CNN with long short-term memory (CNN-LSTM) to identify attack messages. The results revealed that the models achieved high performance, as evaluated by precision, recall, F1 score, and accuracy, with the proposed system reaching 97.30% accuracy. Along with this empirical demonstration, the proposed system improved detection and classification accuracy compared with existing systems and showed superior performance for real-time CAN bus security.
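The categorical-to-numerical preprocessing the abstract mentions can be sketched as below. The field layout and the two sample frames are hypothetical, not taken from the paper's dataset; they only illustrate turning hex CAN-bus fields into the integer vectors a CNN or CNN-LSTM would consume.

```python
# Hypothetical sketch: encode a hex CAN arbitration ID and a
# space-separated hex payload into a flat integer feature vector.
def encode_frame(can_id: str, payload: str) -> list[int]:
    """Map one CAN frame's categorical hex fields to integers."""
    features = [int(can_id, 16)]
    features.extend(int(b, 16) for b in payload.split())
    return features

frames = [
    ("0x316", "05 21 68 09 21 21 00 6f"),   # benign-looking frame
    ("0x000", "00 00 00 00 00 00 00 00"),   # flooding-style frame (ID 0x000)
]
encoded = [encode_frame(cid, pl) for cid, pl in frames]
print(encoded[0])   # [790, 5, 33, 104, 9, 33, 33, 0, 111]
```

In a real pipeline these vectors would additionally be normalized and windowed into sequences before being fed to the CNN-LSTM.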

Article
An Aggregated Mutual Information Based Feature Selection with Machine Learning Methods for Enhancing IoT Botnet Attack Detection
Sensors 2022, 22(1), 185; https://doi.org/10.3390/s22010185 - 28 Dec 2021
Cited by 3 | Viewed by 1474
Abstract
Due to the wide availability and use of connected devices in Internet of Things (IoT) networks, the number of attacks on these networks is continually increasing. A particularly serious and dangerous type of attack in the IoT environment is the botnet attack, in which attackers take control of IoT systems to build enormous networks of "bot" devices that generate malicious activity. To detect this type of attack, several intrusion detection systems (IDSs) based on machine learning and deep learning methods have been proposed for IoT networks. As the main characteristics of IoT systems include limited battery power and processor capacity, maximizing the efficiency of intrusion detection for IoT networks remains a research challenge: methods must achieve high detection rates at low computational cost. This paper proposes an aggregated mutual-information-based feature selection approach combined with machine learning methods to enhance the detection of IoT botnet attacks. The N-BaIoT benchmark dataset, containing real traffic gathered from nine commercial IoT devices, was used to detect botnet attack types in both binary and multi-class settings. The feature selection method incorporates the Mutual Information (MI) technique, Principal Component Analysis (PCA), and the ANOVA f-test at a finely granulated detection level to select the features most relevant to improving the performance of IoT botnet classifiers. In the classification step, several ensemble and individual classifiers were used, including Random Forest (RF), XGBoost (XGB), Gaussian Naïve Bayes (GNB), k-Nearest Neighbor (k-NN), Logistic Regression (LR), and Support Vector Machine (SVM). The experimental results showed the efficiency and effectiveness of the proposed approach, which outperformed other techniques across various evaluation metrics.
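The idea of aggregating several selection criteria can be sketched as follows. This is a loose illustration on synthetic data: the rank-averaging rule combining MI and ANOVA f-test scores is an assumed aggregation scheme for demonstration, not the authors' exact formula, and PCA is omitted for brevity.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, f_classif

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))
# Make feature 0 strongly informative for the synthetic label.
y = (X[:, 0] + 0.1 * rng.normal(size=300) > 0).astype(int)

# Score every feature under each criterion.
mi = mutual_info_classif(X, y, random_state=0)
f_scores, _ = f_classif(X, y)

# Rank features under each criterion (rank 0 = best), then average.
mi_rank = np.argsort(np.argsort(-mi))
f_rank = np.argsort(np.argsort(-f_scores))
agg_rank = (mi_rank + f_rank) / 2

# Keep the three features with the best aggregated rank.
top3 = np.argsort(agg_rank)[:3]
print(top3)
```

Since feature 0 drives the label, it should appear among the selected indices; the rest of the top-3 is noise-dependent.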

Review
Review
Energy System 4.0: Digitalization of the Energy Sector with Inclination towards Sustainability
Sensors 2022, 22(17), 6619; https://doi.org/10.3390/s22176619 - 01 Sep 2022
Cited by 4 | Viewed by 1627
Abstract
The United Nations’ sustainable development goals emphasize implementing sustainability to ensure environmental security for the future. Affordable and clean energy and innovation in infrastructure are the sustainable development goals most relevant to the energy sector. At present, digital technologies have significant capability to realize the target of sustainability in energy. With this motivation, the study discusses the significance of digital technologies such as the Internet of Things (IoT), artificial intelligence (AI), edge computing, blockchain, and big data, and their implementation at the different stages of energy: generation, distribution, transmission, the smart grid, and energy trading. The study also reviews the different architectures that previous studies have implemented for smart grid computing, and additionally addresses IoT-based microgrids, IoT services in electrical equipment, and blockchain-based energy trading. Finally, the article discusses the challenges of, and recommendations for, effectively implementing digital technologies in the energy sector to meet sustainability goals. Big data for energy analytics, digital twins for smart grid modeling, virtual power plants with the Metaverse, and green IoT are the key recommendations discussed in this study for future enhancement.

Review
Context-Aware Edge-Based AI Models for Wireless Sensor Networks—An Overview
Sensors 2022, 22(15), 5544; https://doi.org/10.3390/s22155544 - 25 Jul 2022
Viewed by 1538
Abstract
Recent advances in sensor technology are expected to lead to greater use of wireless sensor networks (WSNs) in industry, logistics, healthcare, and other domains. At the same time, advances in artificial intelligence (AI), machine learning (ML), and deep learning (DL) are becoming the dominant means of processing the large amounts of data produced by heterogeneous edge sensors and drawing accurate conclusions with a better understanding of the situation. Integrating the two areas, WSNs and AI, has resulted in more accurate measurements and in context-aware analysis and prediction useful for smart sensing applications. This paper provides a comprehensive overview of the latest developments in context-aware intelligent systems that use sensor technology. It also discusses the areas in which such systems are applied, the related challenges, and the motivations for adopting AI solutions, focusing on edge computing, i.e., sensor-side AI techniques, along with an analysis of existing research gaps. Another contribution of this study is the use of a semantic-aware approach to extract survey-relevant subjects; this approach identifies eleven main research topics supported by the articles included in the work, which are analyzed from various angles to answer five main research questions. Finally, potential future research directions are discussed.

Review
A Systematic Review on Machine Learning and Deep Learning Models for Electronic Information Security in Mobile Networks
Sensors 2022, 22(5), 2017; https://doi.org/10.3390/s22052017 - 04 Mar 2022
Cited by 9 | Viewed by 3765
Abstract
Today’s advancements in wireless communication technologies have resulted in a tremendous volume of generated data. Most of our information is part of a widespread network that connects various devices across the globe. The capabilities of electronic devices are also increasing day by day, leading to more generation and sharing of information. Similarly, as mobile network topologies become more diverse and complicated, the incidence of security breaches has increased. This has hampered the uptake of smart mobile apps and services, a problem accentuated by the large variety of platforms that provide data, storage, computation, and application services to end-users. In such scenarios it becomes necessary to protect data and to check its use and misuse. According to the research, to deal with such a complicated network, an artificial-intelligence-based security model should assure the secrecy, integrity, and authenticity of the system, its equipment, and the protocols that control the network, independent of its generation. The open difficulties that mobile networks still face, such as unauthorised network scanning and fraudulent links, are thoroughly examined, and numerous ML and DL techniques that can be utilised to create a secure environment are discussed alongside various cybersecurity threats. We address the necessity of developing new approaches to provide high security of electronic data in mobile networks, because the possibilities for increasing mobile network security are inexhaustible.
