Topic Editors

Dr. Dawid Połap
Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
Dr. Robertas Damasevicius
1. Department of Applied Informatics, Vytautas Magnus University, K. Donelaičio g. 58, 44248 Kaunas, Lithuania
2. Institute of Mathematics, Silesian University of Technology, Akademicka 2A, 44-100 Gliwice, Poland
Dr. Hafiz Tayyab Rauf
Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent, UK

Machine Learning in Internet of Things

Abstract submission deadline: closed (31 October 2023)
Manuscript submission deadline: closed (31 December 2023)
Viewed by 41935


Topic Information

Dear Colleagues,

Technological development has led to enormous amounts of data being generated by intelligent robots, sensors, cameras, and similar devices. This, in turn, has brought new challenges in analyzing and understanding these data, challenges that are especially important because of their practical applications. Making practical use of such data requires processing, classifying, and understanding them quickly and effectively.

At the same time, attention should be paid to the diversity of data and their suitability for different problems. Data are not always clean or available in large quantities; hence, augmentation methods and generative adversarial networks (GANs) find their applications here. Even so, high accuracy cannot always be achieved, and new solutions in machine learning and data analysis continue to emerge for this purpose. Moreover, the training process is often demanding because of parameter selection, the amount of training data required, or simply the training time. Recent years have brought the idea of federated learning, which enables a single model to be trained across many clients while preserving data privacy. However, putting such solutions into practice raises issues of transmission security, data-poisoning attacks, and the choice of model aggregation method.
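
To make the local-training and aggregation steps mentioned above concrete, here is a minimal federated averaging (FedAvg) sketch in Python/NumPy; the logistic-regression clients and synthetic data are illustrative assumptions, not a reference implementation of any system discussed in this Topic.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local step: logistic regression trained by gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))       # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)          # gradient of the log-loss
        w -= lr * grad
    return w

def fedavg(global_w, client_data):
    """Aggregate client models, weighted by their local sample counts."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Synthetic example: three clients, ten features each.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 10)), rng.integers(0, 2, 50)) for _ in range(3)]
w = np.zeros(10)
for _ in range(20):                                # communication rounds
    w = fedavg(w, clients)
```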

Improving existing methods and proposing new solutions automates various processes and analyses, and such methods underpin intelligent solutions across many disciplines and the Internet of Things (IoT). Hence, machine learning, optimization techniques, data processing and analysis, and, above all, the use of artificial intelligence methods in practical IoT solutions form the core of this multidisciplinary Topic. Both theoretical and practical aspects of applying intelligent solutions are needed to advance the current state of knowledge; contributions on machine learning, security, and data mining in such systems are therefore welcome.

Dr. Dawid Połap
Dr. Robertas Damasevicius
Dr. Hafiz Tayyab Rauf
Topic Editors

Keywords

  •  6G
  •  artificial intelligence
  •  augmented reality or virtual reality
  •  bioinformatics, biosensors, biomarkers
  •  computational intelligence
  •  data augmentation, data fusion and data mining
  •  decision support systems and theory
  •  drone applications
  •  edge computing
  •  explainable AI
  •  federated learning
  •  transfer learning
  •  fuzzy logic
  •  swarm intelligence
  •  hybrid systems
  •  mobile applications

Participating Journals

Journal Name | Impact Factor | CiteScore | Launched Year | First Decision (median) | APC
AI (ai) | 3.1 | 7.2 | 2020 | 17.6 Days | CHF 1600
Electronics (electronics) | 2.6 | 5.3 | 2012 | 16.8 Days | CHF 2400
IoT (IoT) | - | 8.5 | 2020 | 15.9 Days | CHF 1200
Journal of Sensor and Actuator Networks (jsan) | 3.3 | 7.9 | 2012 | 22.6 Days | CHF 2000
Remote Sensing (remotesensing) | 4.2 | 8.3 | 2009 | 24.7 Days | CHF 2700
Sensors (sensors) | 3.4 | 7.3 | 2001 | 16.8 Days | CHF 2600

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to enjoy the benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea from being stolen with this time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (15 papers)

19 pages, 4499 KiB  
Article
Automated Classification of User Needs for Beginner User Experience Designers: A Kano Model and Text Analysis Approach Using Deep Learning
by Zhejun Zhang, Huiying Chen, Ruonan Huang, Lihong Zhu, Shengling Ma, Larry Leifer and Wei Liu
AI 2024, 5(1), 364-382; https://doi.org/10.3390/ai5010018 - 2 Feb 2024
Viewed by 2458
Abstract
This study introduces a novel tool for classifying user needs in user experience (UX) design, specifically tailored for beginners, with potential applications in education. The tool employs the Kano model, text analysis, and deep learning to classify user needs efficiently into four categories. The data for the study were collected through interviews and web crawling, yielding 19 user needs from Generation Z users (born between 1995 and 2009) of LEGO toys (Billund, Denmark). These needs were then categorized into must-be, one-dimensional, attractive, and indifferent needs through a Kano-based questionnaire survey. A dataset of over 3000 online comments was created through preprocessing and annotating, which was used to train and evaluate seven deep learning models. The most effective model, the Recurrent Convolutional Neural Network (RCNN), was employed to develop a graphical text classification tool that accurately outputs the corresponding category and probability of user input text according to the Kano model. A usability test compared the tool’s performance to the traditional affinity diagram method. The tool outperformed the affinity diagram method in six dimensions and in three qualities of the User Experience Questionnaire (UEQ), indicating a superior UX. The tool also demonstrated a lower perceived workload, as measured using the NASA Task Load Index (NASA-TLX), and received a positive Net Promoter Score (NPS) of 23 from the participants. These findings underscore the potential of this tool as a valuable educational resource in UX design courses. It offers students a more efficient, engaging, and less burdensome learning experience while seamlessly integrating artificial intelligence into UX design education. This study provides UX design beginners with a practical and intuitive tool, facilitating a deeper understanding of user needs and innovative design strategies. Full article
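
As a rough illustration of the recurrent convolutional text classifier described in the abstract, the sketch below builds a small BiLSTM-plus-Conv1D model in Keras with a four-class softmax output (matching the four Kano categories); the vocabulary size, sequence length, and layer sizes are assumptions, not the authors' configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, SEQ_LEN, NUM_CLASSES = 20000, 128, 4   # must-be, one-dimensional, attractive, indifferent

def build_rcnn():
    """Recurrent-convolutional classifier: BiLSTM context encoding followed by Conv1D n-gram features."""
    inputs = layers.Input(shape=(SEQ_LEN,), dtype="int32")
    x = layers.Embedding(VOCAB_SIZE, 128)(inputs)
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
    x = layers.Conv1D(128, 3, activation="relu")(x)
    x = layers.GlobalMaxPooling1D()(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

model = build_rcnn()
model.summary()
# model.fit(token_ids, kano_labels, validation_split=0.1, epochs=5)  # token_ids: (N, SEQ_LEN) integer array
```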

15 pages, 9761 KiB  
Article
Proximity-Based Optical Camera Communication with Multiple Transmitters Using Deep Learning
by Muhammad Rangga Aziz Nasution, Herfandi Herfandi, Ones Sanjerico Sitanggang, Huy Nguyen and Yeong Min Jang
Sensors 2024, 24(2), 702; https://doi.org/10.3390/s24020702 - 22 Jan 2024
Cited by 1 | Viewed by 1518
Abstract
In recent years, optical camera communication (OCC) has garnered attention as a research focus. OCC uses optical light to transmit data by scattering the light in various directions. Although this can be advantageous in multiple-transmitter scenarios, there are situations in which only a single transmitter is permitted to communicate. Therefore, the proposed method fulfills the latter requirement by using 2D object size to calculate the proximity of objects through an AI object detection model. This approach enables prioritization among transmitters based on their proximity to the receiver, facilitating alternating communication with multiple transmitters. The image processing employed when receiving the signals from transmitters enables communication to be performed without the need to modify the camera parameters. During the implementation, the distance between the transmitter and receiver varied between 1.0 and 5.0 m, and the system demonstrated a maximum data rate of 3.945 kbps with a minimum BER of 4.2 × 10^−3. Additionally, the system achieved high accuracy from the refined YOLOv8 detection algorithm, reaching 0.98 mAP at a 0.50 IoU. Full article
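
A minimal sketch of the proximity idea described above, assuming the ultralytics YOLOv8 package: transmitters are detected in a frame and the one with the largest 2D bounding box is treated as the nearest. The blank frame and pretrained weights are placeholders, not the authors' refined model or camera setup.

```python
from ultralytics import YOLO
import numpy as np

model = YOLO("yolov8n.pt")                        # generic pretrained weights; the paper refines its own model
frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a camera frame showing the LED transmitters

results = model(frame)[0]                         # run detection on the frame
boxes = results.boxes.xyxy.cpu().numpy()          # (N, 4) boxes as x1, y1, x2, y2

# Proxy for proximity: a nearer transmitter projects to a larger 2D bounding box.
areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
nearest = int(areas.argmax()) if len(areas) else None
print("index of transmitter selected for decoding:", nearest)
```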

20 pages, 2336 KiB  
Article
A Novel Internet of Things-Based System for Ten-Pin Bowling
by Ilias Zosimadis and Ioannis Stamelos
IoT 2023, 4(4), 514-533; https://doi.org/10.3390/iot4040022 - 31 Oct 2023
Viewed by 2671
Abstract
Bowling is a target sport that is popular among all age groups, with both professional and amateur players. Delivering an accurate and consistent throw into the lane requires proper motion technique. Consequently, this research presents a novel IoT Cloud-based system for providing real-time monitoring and coaching services to bowling athletes. The system includes two inertial measurement unit (IMU) sensors for capturing motion data, a mobile application, and a Cloud server for processing the data. First, the quality of each phase of a throw is assessed using a Dynamic Time Warping (DTW)-based algorithm. Second, an on-device technique is proposed to identify common bowling errors. Finally, an SVM classification model is employed to assess the skill level of the bowler. We recruited nine right-handed bowlers to perform 50 throws while wearing the two sensors and using the proposed system. The results of our experiments suggest that the proposed system can effectively and efficiently assess the quality of the throw, detect common bowling errors, and classify the skill level of the bowler. Full article
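
The sketch below computes a plain dynamic time warping (DTW) distance between a recorded throw phase and a reference template, which is the general idea behind the quality assessment described above; the signals are synthetic and the scoring rule is an assumption, not the authors' algorithm.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D signals."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Compare one phase of a recorded throw against a reference template (synthetic signals here).
reference = np.sin(np.linspace(0, np.pi, 100))                   # idealised swing-phase profile
throw = np.sin(np.linspace(0, np.pi, 120)) + 0.05 * np.random.default_rng(0).normal(size=120)
print("phase quality score (lower = closer to reference):", round(dtw_distance(reference, throw), 2))
```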

18 pages, 3154 KiB  
Article
A Methodology and Open-Source Tools to Implement Convolutional Neural Networks Quantized with TensorFlow Lite on FPGAs
by Dorfell Parra, David Escobar Sanabria and Carlos Camargo
Electronics 2023, 12(20), 4367; https://doi.org/10.3390/electronics12204367 - 21 Oct 2023
Cited by 4 | Viewed by 1544
Abstract
Convolutional neural networks (CNNs) are used for classification, as they can extract complex features from input data. The training and inference of these networks typically require platforms with CPUs and GPUs. To execute the forward propagation of neural networks in low-power devices with limited resources, TensorFlow introduced TFLite. This library enables the inference process on microcontrollers by quantizing the network parameters and utilizing integer arithmetic. A limitation of TFLite is that it does not support CNNs to perform inference on FPGAs, a critical need for embedded applications that require parallelism. Here, we present a methodology and open-source tools for implementing CNNs quantized with TFLite on FPGAs. We developed a customizable accelerator for AXI-Lite-based systems on chips (SoCs), and we tested it on a Digilent Zybo-Z7 board featuring the XC7Z020 FPGA and an ARM processor at 667 MHz. Moreover, we evaluated this approach by employing CNNs trained to identify handwritten characters using the MNIST dataset and facial expressions with the JAFFE database. We validated the accelerator results with TFLite running on a laptop with an AMD 16-thread CPU running at 4.2 GHz and 16 GB RAM. The accelerator’s power consumption was 11× lower than the laptop while keeping a reasonable execution time. Full article
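
For readers unfamiliar with the TFLite step the methodology builds on, below is a minimal post-training full-integer quantization sketch using the standard TensorFlow Lite converter; the toy CNN and random calibration data are assumptions and are unrelated to the accelerator itself.

```python
import numpy as np
import tensorflow as tf

# A small CNN stands in for the networks in the paper (MNIST-like 28x28 grayscale input).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

def representative_data():
    """Calibration samples used by TFLite to choose integer quantization ranges."""
    for _ in range(100):
        yield [np.random.rand(1, 28, 28, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8      # fully integer model, as needed for MCU/FPGA targets
converter.inference_output_type = tf.int8
with open("cnn_int8.tflite", "wb") as f:
    f.write(converter.convert())
```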

21 pages, 3352 KiB  
Article
IoT Intrusion Detection System Based on Machine Learning
by Bayi Xu, Lei Sun, Xiuqing Mao, Ruiyang Ding and Chengwei Liu
Electronics 2023, 12(20), 4289; https://doi.org/10.3390/electronics12204289 - 17 Oct 2023
Cited by 5 | Viewed by 3951
Abstract
With the rapid development of the Internet of Things (IoT), the number of IoT devices is increasing dramatically, making it increasingly important to identify intrusions on these devices. Researchers are using machine learning techniques to design effective intrusion detection systems. In this study, we propose a novel intrusion detection system that efficiently detects anomalous network traffic. To reduce the feature dimensions of the data, we employ the binary grey wolf optimizer (BGWO) heuristic algorithm and recursive feature elimination (RFE) to select the most relevant feature subset for the target variable. The synthetic minority oversampling technique (SMOTE) is used to oversample the minority class and mitigate the impact of data imbalance on the classification results. The preprocessed data are then classified using XGBoost, and the hyperparameters of the model are optimized using Bayesian optimization with a tree-structured Parzen estimator (BO-TPE) to achieve the highest detection performance. To validate the effectiveness of the proposed method, we conduct binary and multiclass experiments on five commonly used IoT datasets. The results show that our proposed method outperforms state-of-the-art methods on four out of the five datasets. It is noteworthy that our proposed method achieves perfect accuracy, precision, recall, and an F1 score of 1.0 on the BoT-IoT and WUSTL-IIOT-2021 datasets, further validating the effectiveness of our approach. Full article
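
A simplified sketch of the preprocessing and tuning pipeline described above (RFE feature selection, SMOTE oversampling, XGBoost, and TPE-based hyperparameter search), using scikit-learn, imbalanced-learn, xgboost, and hyperopt on synthetic data; the BGWO step and the real IoT datasets are omitted, and all parameter values are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.feature_selection import RFE
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier
from hyperopt import fmin, tpe, hp

# Synthetic imbalanced data stands in for the IoT traffic datasets.
X, y = make_classification(n_samples=3000, n_features=30, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# 1) Select features with RFE; 2) oversample the minority class with SMOTE.
rfe = RFE(XGBClassifier(n_estimators=50, eval_metric="logloss"), n_features_to_select=15).fit(X_tr, y_tr)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(rfe.transform(X_tr), y_tr)

# 3) Tune XGBoost with a tree-structured Parzen estimator (TPE) search.
def objective(params):
    clf = XGBClassifier(max_depth=int(params["max_depth"]), learning_rate=params["lr"],
                        n_estimators=200, eval_metric="logloss")
    return -cross_val_score(clf, X_bal, y_bal, cv=3, scoring="f1").mean()

space = {"max_depth": hp.quniform("max_depth", 3, 10, 1), "lr": hp.uniform("lr", 0.01, 0.3)}
best = fmin(objective, space, algo=tpe.suggest, max_evals=20)
print("best hyperparameters found:", best)
```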

29 pages, 2175 KiB  
Systematic Review
A Comprehensive Overview of IoT-Based Federated Learning: Focusing on Client Selection Methods
by Naghmeh Khajehali, Jun Yan, Yang-Wai Chow and Mahdi Fahmideh
Sensors 2023, 23(16), 7235; https://doi.org/10.3390/s23167235 - 17 Aug 2023
Cited by 11 | Viewed by 3818
Abstract
The integration of the Internet of Things (IoT) with machine learning (ML) is revolutionizing how services and applications impact our daily lives. In traditional ML methods, data are collected and processed centrally. However, modern IoT networks face challenges in implementing this approach due to their vast amount of data and privacy concerns. To overcome these issues, federated learning (FL) has emerged as a solution. FL allows ML methods to achieve collaborative training by transferring model parameters instead of client data. One of the significant challenges of federated learning is that IoT devices as clients usually have different computation and communication capacities in a dynamic environment. At the same time, their network availability is unstable, and their data quality varies. To achieve high-quality federated learning and handle these challenges, designing a proper client selection process and suitable methods is essential; this involves selecting suitable clients from the candidates. This study presents a comprehensive systematic literature review (SLR) that focuses on the challenges of client selection (CS) in the context of federated learning (FL). The objective of this SLR is to facilitate future research and development of CS methods in FL. Additionally, a detailed and in-depth overview of the CS process is provided, encompassing its abstract implementation and essential characteristics. This comprehensive presentation enables the application of CS in diverse domains. Furthermore, various CS methods are thoroughly categorized and explained based on their key characteristics and their ability to address specific challenges. This categorization offers valuable insights into the current state of the literature while also providing a roadmap for prospective investigations in this area of research. Full article
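
As a toy illustration of the client selection (CS) step the review surveys, the sketch below scores candidate clients by resource, link, and data-quality attributes and picks the top ones; the attributes and weights are invented for illustration, and the surveyed methods are considerably more sophisticated.

```python
import numpy as np

def select_clients(candidates, num_select):
    """Score candidates by compute, link quality and data quality; unavailable clients are excluded."""
    scores = np.array([
        0.4 * c["compute"] + 0.3 * c["link_quality"] + 0.3 * c["data_quality"]
        if c["available"] else -np.inf
        for c in candidates
    ])
    return list(np.argsort(scores)[::-1][:num_select])

rng = np.random.default_rng(0)
candidates = [
    {"compute": rng.random(), "link_quality": rng.random(),
     "data_quality": rng.random(), "available": rng.random() > 0.2}
    for _ in range(10)
]
print("clients chosen for this training round:", select_clients(candidates, num_select=3))
```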

29 pages, 8338 KiB  
Article
Application Layer-Based Denial-of-Service Attacks Detection against IoT-CoAP
by Sultan M. Almeghlef, Abdullah AL-Malaise AL-Ghamdi, Muhammad Sher Ramzan and Mahmoud Ragab
Electronics 2023, 12(12), 2563; https://doi.org/10.3390/electronics12122563 - 6 Jun 2023
Cited by 7 | Viewed by 2282
Abstract
The Internet of Things (IoT) is a massive network based on tiny devices connected internally and to the internet. Each connected device is uniquely identified in this network through a dedicated IP address and can share information with other devices. In contrast to its alternatives, IoT consumes less power and fewer resources; however, this makes its devices more vulnerable to different types of attacks, as they cannot execute heavy security protocols. Moreover, the heavy protocols traditionally used for web-based communication, such as the Hypertext Transfer Protocol (HTTP), are quite costly to execute on IoT devices, so specially designed lightweight protocols, such as the Constrained Application Protocol (CoAP), are employed instead. However, while CoAP remains widely used, it is also susceptible to attacks, such as the Distributed Denial-of-Service (DDoS) attack, which aims to overwhelm the resources of the target and make them unavailable to legitimate users. While protocols such as Datagram Transport Layer Security (DTLS) and the Lightweight and Secure Protocol for Wireless Sensor Networks (LSPWSN) can help in securing CoAP against DDoS attacks, they also have their limitations: DTLS is not designed for constrained devices and is considered a heavy protocol, while LSPWSN operates on the network layer, in contrast to CoAP, which operates on the application layer. This paper presents a machine learning model, trained on the CIDAD dataset (created on 11 July 2022), that can detect DDoS attacks against CoAP with an accuracy of 98%. Full article

22 pages, 6604 KiB  
Article
Energy-Efficient AP Selection Using Intelligent Access Point System to Increase the Lifespan of IoT Devices
by Seungjin Lee, Jaeeun Park, Hyungwoo Choi and Hyeontaek Oh
Sensors 2023, 23(11), 5197; https://doi.org/10.3390/s23115197 - 30 May 2023
Cited by 5 | Viewed by 1814
Abstract
With the emergence of various Internet of Things (IoT) technologies, energy-saving schemes for IoT devices have been rapidly developed. To enhance the energy efficiency of IoT devices in crowded environments with multiple overlapping cells, the selection of access points (APs) for IoT devices should consider energy conservation by reducing unnecessary packet transmission activities caused by collisions. Therefore, in this paper, we present a novel energy-efficient AP selection scheme using reinforcement learning to address the problem of unbalanced load that arises from biased AP connections. Our proposed method utilizes the Energy and Latency Reinforcement Learning (EL-RL) model for energy-efficient AP selection that takes into account the average energy consumption and the average latency of IoT devices. In the EL-RL model, we analyze the collision probability in Wi-Fi networks to reduce the number of retransmissions that induces more energy consumption and higher latency. According to the simulation, the proposed method achieves a maximum improvement of 53% in energy efficiency, 50% in uplink latency, and a 2.1-times longer expected lifespan of IoT devices compared to the conventional AP selection scheme. Full article
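
Below is a toy epsilon-greedy reinforcement learning sketch in which an IoT device learns which AP minimizes a combined energy and latency cost; the per-AP statistics and the cost weighting are invented for illustration and do not reproduce the EL-RL model.

```python
import numpy as np

rng = np.random.default_rng(1)
NUM_APS = 3
true_energy = np.array([5.0, 3.0, 4.0])      # hypothetical mean energy per packet (mJ), unknown to the agent
true_latency = np.array([20.0, 35.0, 15.0])  # hypothetical mean uplink latency (ms), unknown to the agent

q = np.zeros(NUM_APS)            # running cost estimate per AP (lower is better)
epsilon, alpha = 0.1, 0.1        # exploration rate and learning rate

for step in range(2000):
    ap = rng.integers(NUM_APS) if rng.random() < epsilon else int(q.argmin())
    # Observed cost combines noisy energy and latency measurements.
    cost = 0.5 * (true_energy[ap] + rng.normal(0, 0.5)) \
         + 0.5 * (true_latency[ap] + rng.normal(0, 2.0)) / 10.0
    q[ap] += alpha * (cost - q[ap])

print("estimated costs per AP:", q.round(2), "-> preferred AP:", int(q.argmin()))
```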

26 pages, 6946 KiB  
Article
Designing Aquaculture Monitoring System Based on Data Fusion through Deep Reinforcement Learning (DRL)
by Wen-Tsai Sung, Indra Griha Tofik Isa and Sung-Jung Hsiao
Electronics 2023, 12(9), 2032; https://doi.org/10.3390/electronics12092032 - 27 Apr 2023
Cited by 6 | Viewed by 2484
Abstract
The aquaculture production sector is one of the suppliers meeting global food consumption needs. Countries that have a large amount of water contribute to aquaculture production, especially in the freshwater fisheries sector. Indonesia has a large number of large bodies of water and is a top-five producer of aquaculture production. Technology and engineering continue to be developed to improve the quality and quantity of aquaculture production. One aspect that can be observed is whether the fish pond water is healthy and supports fish growth. Various studies have been conducted on aquaculture monitoring systems, but the question is how effective they are in terms of the accuracy of the resulting output, implementation, and costs. In this research, data fusion (DF) and deep reinforcement learning (DRL) were implemented in an aquaculture monitoring system with temperature, turbidity, and pH parameters to produce valid and accurate output. The process begins with testing sensor accuracy as part of sensor quality validation, then integrating the sensors with wireless sensor networks (WSNs) so they can be accessed in real time. The implemented DF is divided into three layers: first, the signal layer consists of the WSNs and their components; second, the feature layer consists of DRL combined with deep learning (DL); third, the decision layer determines whether the condition of the fish pond is “normal” or “not normal”. The analysis and testing of this system consider several factors, i.e., (1) the accuracy of the sensors used; (2) the performance of the models implemented; (3) the comparison of DF-DRL-based systems with rule-based algorithm systems; and (4) the cost effectiveness compared to labor costs. Across these four factors, the DF-DRL-based aquaculture monitoring system performs better and is a low-cost alternative for an accurate aquaculture monitoring system. Full article

20 pages, 4255 KiB  
Article
Federated Learning for Medical Imaging Segmentation via Dynamic Aggregation on Non-IID Data Silos
by Liuyan Yang, Juanjuan He, Yue Fu and Zilin Luo
Electronics 2023, 12(7), 1687; https://doi.org/10.3390/electronics12071687 - 3 Apr 2023
Cited by 1 | Viewed by 2964
Abstract
A large number of mobile devices, smart wearable devices, and medical and health sensors continue to generate massive amounts of data, making edge devices’ data explode and making it possible to implement data-driven artificial intelligence. However, the “data silos” and other issues still exist and need to be solved. Fortunately, federated learning (FL) can deal with “data silos” in the medical field, facilitating collaborative learning across multiple institutions without sharing local data and avoiding user concerns about data privacy. However, it encounters two main challenges in the medical field. One is statistical heterogeneity, also known as non-IID (non-independent and identically distributed) data, i.e., data being non-IID between clients, which leads to model drift. The second is limited labeling because labels are hard to obtain due to the high cost and expertise requirement. Most existing federated learning algorithms only allow for supervised training settings. In this work, we proposed a novel federated learning framework, MixFedGAN, to tackle the above issues in federated networks with dynamic aggregation and knowledge distillation. A dynamic aggregation scheme was designed to reduce the impact of current low-performing clients and improve stability. Knowledge distillation was introduced into the local generator model with a new distillation regularization loss function to prevent essential parameters of the global generator model from significantly changing. In addition, we considered two scenarios under this framework: complete annotated data and limited labeled data. An experimental analysis on four heterogeneous COVID-19 infection segmentation datasets and three heterogeneous prostate MRI segmentation datasets verified the effectiveness of the proposed federated learning method. Full article
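
A minimal sketch of performance-weighted aggregation of client model parameters, the general mechanism behind the dynamic aggregation scheme mentioned above, written in PyTorch; the toy convolution layer and the per-client validation scores are assumptions, not the MixFedGAN implementation.

```python
import torch
import torch.nn as nn

def weighted_aggregate(client_states, scores):
    """Average client parameters, down-weighting low-performing clients."""
    weights = torch.tensor(scores, dtype=torch.float32)
    weights = weights / weights.sum()
    global_state = {}
    for key in client_states[0]:
        stacked = torch.stack([s[key].float() for s in client_states])
        global_state[key] = (weights.view(-1, *[1] * (stacked.dim() - 1)) * stacked).sum(dim=0)
    return global_state

# Toy example: three clients share a small segmentation-style convolution layer.
clients = [nn.Conv2d(1, 4, 3) for _ in range(3)]
dice_scores = [0.82, 0.55, 0.78]          # hypothetical per-client validation scores
new_state = weighted_aggregate([c.state_dict() for c in clients], dice_scores)
global_model = nn.Conv2d(1, 4, 3)
global_model.load_state_dict(new_state)
```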

20 pages, 2951 KiB  
Article
Spatial Visualization Based on Geodata Fusion Using an Autonomous Unmanned Vessel
by Marta Włodarczyk-Sielicka, Dawid Połap, Katarzyna Prokop, Karolina Połap and Andrzej Stateczny
Remote Sens. 2023, 15(7), 1763; https://doi.org/10.3390/rs15071763 - 25 Mar 2023
Cited by 9 | Viewed by 1607
Abstract
The visualization of riverbeds and surface facilities on the banks is crucial for systems that analyze conditions, safety, and changes in this environment. Hence, in this paper, we propose collecting and processing data from a variety of sensors—sonar, LiDAR, multibeam echosounder (MBES), and camera—to create a visualization for further analysis. For this purpose, we took measurements from sensors installed on an autonomous, unmanned hydrographic vessel and then proposed a data fusion mechanism to create a visualization using modules below and above the water. The fusion comprises key-point analysis of camera images and sonar data, augmentation/reduction of point clouds, data fitting, and mesh creation. We also propose an analysis module that can be used to compare and extract information from the created visualizations. The analysis module is based on artificial intelligence tools for classification tasks, which helps in further comparison to archival data. The model was tested using various techniques to achieve the fastest and most accurate visualizations possible in simulation and real case studies. Full article

21 pages, 634 KiB  
Article
Intelligent Random Access for Massive-Machine Type Communications in Sliced Mobile Networks
by Bei Yang, Fengsheng Wei, Xiaoming She, Zheng Jiang, Jianchi Zhu, Peng Chen and Jianxiu Wang
Electronics 2023, 12(2), 329; https://doi.org/10.3390/electronics12020329 - 8 Jan 2023
Cited by 5 | Viewed by 2180
Abstract
With the emerging Internet of Things paradigm, massive Machine-Type Communication (mMTC) has been identified as one of the prominent services that enables a broad range of applications with various Quality of Service (QoS) requirements for 5G-and-beyond networks. However, it is very difficult to employ a monolithic physical network to support various mMTC applications with differentiated QoS requirements. Moreover, in ultra-dense mobile networks, the scarcity of the preamble and Physical Downlink Control CHannel (PDCCH) resources may easily lead to resource collisions when a large number of devices access the network simultaneously. To tackle these issues, in this paper, we propose a network slicing-enabled intelligent random access framework for mMTC. First, by tailoring a gigantic physical network into multiple lightweight network slices, fine-grained QoS provisioning can be accomplished, and the collision domain of Random Access (RA) can be effectively reduced. In addition, we propose a novel concept of sliced preambles (sPreambles), based on which the transitional RA procedure is optimized, and the issue of preamble shortage is effectively relieved. Furthermore, with the aim of alleviating PDCCH resource shortage and improving transmission efficiency, we propose a learning-based resource-sharing scheme that can intelligently multiplex the PDCCH resources in the naturally dynamic environment. Simulation results show that the proposed framework can efficiently allocate resources to individual mMTC devices while guaranteeing their QoS requirements in random access processes. Full article

17 pages, 3997 KiB  
Article
Estimation of Occupancy Using IoT Sensors and a Carbon Dioxide-Based Machine Learning Model with Ventilation System and Differential Pressure Data
by Jehyun Kim, JongIl Bang, Anseop Choi, Hyeun Jun Moon and Minki Sung
Sensors 2023, 23(2), 585; https://doi.org/10.3390/s23020585 - 4 Jan 2023
Cited by 8 | Viewed by 3187
Abstract
Infectious disease outbreaks such as the COVID-19 pandemic have necessitated preventive measures against the spread of indoor infections, and there has been increasing interest in indoor air quality (IAQ) management. Air quality can be managed simply by alleviating the source of infection or pollution, but the people within a space can themselves be that source, which necessitates an estimation of the exact number of people occupying the space. Generally, management plans for mitigating the spread of infections and maintaining the IAQ, such as ventilation, are based on the number of people occupying the space. In this study, carbon dioxide (CO2)-based machine learning was used to estimate the number of people occupying a space. For machine learning, the CO2 concentration, ventilation system operation status, and indoor–outdoor and indoor–corridor differential pressure data were used. In the random forest (RF) and artificial neural network (ANN) models, where the CO2 concentration and ventilation system operation modes were input, the accuracy was highest at 0.9102 and 0.9180, respectively. When the CO2 concentration and differential pressure data were included, the accuracy was lowest at 0.8916 and 0.8936, respectively. In future work, differential pressure data will be associated with changes in the CO2 concentration to increase the accuracy of occupancy estimation. Full article
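
The sketch below illustrates the general approach of estimating occupancy from CO2 and ventilation-mode features with a random forest, using scikit-learn on a synthetic sensor log; the feature relationships, labels, and resulting accuracy are invented for illustration and are not the study's data or results.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic sensor log: CO2 concentration (ppm) and ventilation mode stand in for the study's inputs.
rng = np.random.default_rng(0)
n = 2000
occupants = rng.integers(0, 6, n)                        # ground-truth head count (label)
vent_mode = rng.integers(0, 2, n)                        # 0 = ventilation off, 1 = on
co2 = 420 + 60 * occupants - 40 * vent_mode + rng.normal(0, 15, n)

X = pd.DataFrame({"co2_ppm": co2, "vent_mode": vent_mode})
X_tr, X_te, y_tr, y_te = train_test_split(X, occupants, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("occupancy estimation accuracy:", round(accuracy_score(y_te, rf.predict(X_te)), 3))
```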

19 pages, 864 KiB  
Article
Differential Optimization Federated Incremental Learning Algorithm Based on Blockchain
by Xuebin Chen, Changyin Luo, Wei Wei, Jingcheng Xu and Shufen Zhang
Electronics 2022, 11(22), 3814; https://doi.org/10.3390/electronics11223814 - 20 Nov 2022
Cited by 2 | Viewed by 1801
Abstract
Federated learning is an area of intense interest in the field of privacy protection. Open issues remain, including local model parameters that are difficult to integrate, poor model timeliness, and the security of local model training. This paper proposes a blockchain-based differential optimization federated incremental learning algorithm. First, we apply differential privacy to the weighted random forest and optimize the parameters in the weighted forest to reduce the impact of adding differential privacy on the accuracy of the local model. Using different ensemble algorithms to integrate the local model parameters can improve the accuracy of the global model while reducing the risk of data leakage caused by gradient updates; then, incremental learning is applied to the federated learning framework to improve the timeliness of the model; finally, the model parameters from the training phase are uploaded to the blockchain and synchronized quickly, which reduces the cost of data storage and model parameter transmission. The experimental results show that the accuracy of the stacking ensemble model in each period is above 83.5% and the variance is lower than 10^−4 when training on the public data set. The accuracy of the model is improved, as are its security and privacy. Full article
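
As background for the differential privacy component mentioned above, here is a minimal Laplace-mechanism sketch: calibrated noise is added to a single released statistic. It is a generic illustration, not the paper's weighted-random-forest or blockchain scheme.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng):
    """Release a statistic with epsilon-differential privacy by adding Laplace noise."""
    return value + rng.laplace(0.0, sensitivity / epsilon)

rng = np.random.default_rng(0)
true_count = 128     # e.g. a class count used when building one tree of the forest
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5, rng=rng)
print("noisy count shared with the aggregator:", round(private_count, 1))
```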

17 pages, 5047 KiB  
Article
Multi-Agent Multi-View Collaborative Perception Based on Semi-Supervised Online Evolutive Learning
by Di Li and Liang Song
Sensors 2022, 22(18), 6893; https://doi.org/10.3390/s22186893 - 13 Sep 2022
Cited by 4 | Viewed by 2070
Abstract
In the edge intelligence environment, multiple sensing devices perceive and recognize the current scene in real time to provide specific user services. However, the generalizability of the fixed recognition model will gradually weaken due to the time-varying perception scene. To ensure the stability of the perception and recognition service, each edge model/agent needs to continuously learn from the new perception data unassisted to adapt to the perception environment changes and jointly build the online evolutive learning (OEL) system. The generalization degradation problem can be addressed by deploying the semi-supervised learning (SSL) method on multi-view agents and continuously tuning each discriminative model by collaborative perception. This paper proposes a multi-view agent’s collaborative perception (MACP) semi-supervised online evolutive learning method. First, each view model will be initialized based on self-supervised learning methods, and each initialized model can learn differentiated feature-extraction patterns with certain discriminative independence. Then, through the discriminative information fusion of multi-view model predictions on the unlabeled perceptual data, reliable pseudo-labels are obtained for the consistency regularization process of SSL. Moreover, we introduce additional critical parameter constraints to continuously improve the discriminative independence of each view model during training. We compare our method with multiple representative multi-model and single-model SSL methods on various benchmarks. Experimental results show the superiority of the MACP in terms of convergence efficiency and performance. Meanwhile, we construct an ideal multi-view experiment to demonstrate the application potential of MACP in practical perception scenarios. Full article
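
A small sketch of multi-view pseudo-label fusion, the step in which per-view predictions on unlabeled data are combined and only confident labels are kept for consistency training; the view predictions and the confidence threshold are synthetic assumptions rather than the MACP method itself.

```python
import numpy as np

def fuse_pseudo_labels(view_probs, threshold=0.9):
    """Fuse per-view softmax outputs and keep only confident pseudo-labels."""
    mean_probs = np.mean(view_probs, axis=0)        # (num_samples, num_classes)
    labels = mean_probs.argmax(axis=1)
    confidence = mean_probs.max(axis=1)
    return labels, confidence >= threshold          # mask marks samples kept for training

# Toy example: three views, five unlabeled samples, four classes.
rng = np.random.default_rng(0)
views = rng.dirichlet(np.ones(4), size=(3, 5))      # stand-in softmax predictions per view
labels, keep = fuse_pseudo_labels(views, threshold=0.5)
print("pseudo-labels:", labels, "| kept for consistency training:", keep)
```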
