Artificial Intelligence and Edge Computing in IoT-Based Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: 15 January 2026 | Viewed by 5243

Special Issue Editor

Dr. Fangmin Xu
School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, 100876 Beijing, China
Interests: IoT; future network; TSN; edge computing

Special Issue Information

Dear Colleagues,

The integration of edge computing and artificial intelligence technologies has great potential to gather and process large amounts of IoT data, enable rapid, real-time data analysis and intelligent decision-making, and deliver a variety of low-latency, reliable, intelligent, and time-sensitive services.

This Special Issue focuses on all aspects of research relevant to artificial intelligence and edge computing technologies in IoT applications. It invites research and review papers that report novel contributions with respect to cloud–edge collaboration, federated learning, the utilization of large language models (LLMs), and related innovative applications within the IoT.

Dr. Fangmin Xu
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Internet of Things
  • artificial intelligence
  • network architecture
  • resource management
  • cyber security
  • edge computing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)

Research

20 pages, 2328 KB  
Article
FedPSFV: Personalized Federated Learning via Prototype Sharing for Finger Vein Recognition
by Haoyan Xu, Yuyang Guo, Yunzan Qu, Jian Guo and Hengyi Ren
Sensors 2025, 25(21), 6790; https://doi.org/10.3390/s25216790 - 6 Nov 2025
Abstract
Finger vein recognition algorithms based on deep learning techniques are widely used in many fields. However, the training of finger vein recognition models is hindered by privacy issues and the scarcity of public datasets. Although applying federated learning techniques to finger vein recognition can effectively address privacy concerns, data heterogeneity across clients limits the performance of the models, especially on small datasets. To address these problems, in this paper, we propose a new federated finger vein recognition algorithm (FedPSFV). The algorithm is based on the federated learning framework, which increases the interclass distance of each dataset by sharing the prototypes among clients to solve the data heterogeneity problem. The algorithm also integrates and improves the margin-based loss function, which advances the feature differentiation ability of the model. Comparative experiments based on six public datasets (SDUMLA, MMCBNU, USM, UTFVP, VERA, and NUPT) show that FedPSFV has better accuracy and generalizability; the TAR@FAR = 0.01 is improved by 5.00–11.25%, and the EER is reduced by 81.48–90.22% compared to the existing methods.
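
As a rough illustration of the prototype-sharing idea summarized above (a generic sketch with assumed names such as local_prototypes and prototype_margin_loss, not the authors' FedPSFV code), each client can compute a mean embedding per identity, the server can average the prototypes shared by clients, and a margin term can pull an embedding toward its own class prototype while pushing it away from the nearest other class:

    import numpy as np

    def local_prototypes(embeddings, labels):
        # One mean embedding ("prototype") per identity on a single client.
        return {c: embeddings[labels == c].mean(axis=0) for c in np.unique(labels)}

    def aggregate_prototypes(all_client_prototypes):
        # Server side: average the prototypes shared by all clients, class by class.
        pooled = {}
        for protos in all_client_prototypes:
            for c, p in protos.items():
                pooled.setdefault(c, []).append(p)
        return {c: np.mean(vecs, axis=0) for c, vecs in pooled.items()}

    def prototype_margin_loss(embedding, label, global_protos, margin=1.0):
        # Pull toward the own-class prototype; push away from the nearest other class.
        d_own = np.linalg.norm(embedding - global_protos[label])
        d_other = min(np.linalg.norm(embedding - p)
                      for c, p in global_protos.items() if c != label)
        return d_own + max(0.0, margin - d_other)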

21 pages, 2935 KB  
Article
Efficient and Privacy-Preserving Power Distribution Analytics Based on IoT
by Ruichen Xu, Jiayi Xu, Xuhao Ren and Haotian Deng
Sensors 2025, 25(21), 6677; https://doi.org/10.3390/s25216677 - 1 Nov 2025
Viewed by 222
Abstract
The increasing global demand for electricity has heightened the need for stable and reliable power distribution systems. Disruptions in power distribution can cause substantial economic losses and societal impact, underscoring the importance of accurate, timely, and scalable monitoring. The integration of Internet of Things (IoT) technologies into smart grids offers promising capabilities for real-time data collection and intelligent control. However, the application of IoT has created new challenges such as high communication overhead and insufficient user privacy protection due to the continuous exchange of sensitive data. In this paper, we propose a method for power distribution analytics in smart grids based on IoT called PSDA. PSDA collects real-time power usage data from IoT sensor nodes distributed across different grid regions. The collected data is spatially organized using Hilbert curves to preserve locality and enable efficient encoding for subsequent processing. Meanwhile, we adopt a dual-server architecture and distributed point functions (DPF) to ensure efficient data transmission and privacy protection for power usage data. Experimental results indicate that the proposed approach is capable of accurately analyzing power distribution, thereby facilitating prompt responses within smart grid management systems. Compared with traditional methods, our scheme offers significant advantages in privacy protection and real-time processing, providing an innovative IoT-integrated solution for the secure and efficient operation of smart grids.
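
For readers unfamiliar with the Hilbert-curve step mentioned in the abstract, the sketch below shows the textbook mapping from 2-D grid coordinates to a 1-D Hilbert index, which keeps spatially nearby sensor regions close together in the encoded order; it is a generic illustration, not the PSDA implementation:

    def hilbert_index(n, x, y):
        # Map grid cell (x, y) on an n x n grid (n a power of two) to its
        # 1-D Hilbert-curve index; nearby cells tend to get nearby indices.
        d = 0
        s = n // 2
        while s > 0:
            rx = 1 if (x & s) > 0 else 0
            ry = 1 if (y & s) > 0 else 0
            d += s * s * ((3 * rx) ^ ry)
            # Rotate the quadrant so the recursion stays consistent.
            if ry == 0:
                if rx == 1:
                    x = s - 1 - x
                    y = s - 1 - y
                x, y = y, x
            s //= 2
        return d

    # Example: order sensor readings from a 16 x 16 grid of regions by Hilbert index.
    ordered_cells = sorted((hilbert_index(16, x, y), (x, y))
                           for x in range(16) for y in range(16))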

39 pages, 29667 KB  
Article
Frugal Self-Optimization Mechanisms for Edge–Cloud Continuum
by Zofia Wrona, Katarzyna Wasielewska-Michniewska, Maria Ganzha, Marcin Paprzycki and Yutaka Watanobe
Sensors 2025, 25(21), 6556; https://doi.org/10.3390/s25216556 - 24 Oct 2025
Viewed by 387
Abstract
The increasing complexity of the Edge–Cloud Continuum (ECC), driven by the rapid expansion of the Internet of Things (IoT) and data-intensive applications, necessitates implementing innovative methods for automated and efficient system management. In this context, recent studies focused on the utilization of self-* capabilities that can be used to enhance system autonomy and increase operational proactiveness. Separately, anomaly detection and adaptive sampling techniques have been explored to optimize data transmission and improve systems’ reliability. The integration of those techniques within a single, lightweight, and extendable self-optimization module is the main subject of this contribution. The module was designed to be well suited for distributed systems, composed of highly resource-constrained operational devices (e.g., wearable health monitors, IoT sensors in vehicles, etc.), where it can be utilized to self-adjust data monitoring and enhance the resilience of critical processes. The focus is put on the implementation of two core mechanisms, derived from the current state-of-the-art: (1) density-based anomaly detection in real-time resource utilization data streams, and (2) a dynamic adaptive sampling technique, which employs Probabilistic Exponential Weighted Moving Average. The performance of the proposed module was validated using both synthetic and real-world datasets, which included a sample collected from the target infrastructure. The main goal of the experiments was to showcase the effectiveness of the implemented techniques in different, close to real-life scenarios. Moreover, the results of the performed experiments were compared with other state-of-the-art algorithms in order to examine their advantages and inherent limitations. With the emphasis put on frugality and real-time operation, this contribution offers a novel perspective on resource-aware autonomic optimization for next-generation ECC.
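
The Probabilistic Exponential Weighted Moving Average mentioned in the abstract adapts its smoothing weight to how surprising each sample is, so outliers do not drag the running baseline; a minimal sketch of the standard PEWMA update (with assumed parameter values, not the module's actual code) is shown below. The returned |z| score can drive both anomaly flagging and how often the next sample is taken.

    import math

    class PEWMA:
        # Probabilistic EWMA: surprising samples receive less weight in the
        # running mean/std, which keeps anomalies from corrupting the baseline.

        def __init__(self, alpha=0.98, beta=0.5):
            self.alpha, self.beta = alpha, beta
            self.s1 = self.s2 = None  # running first and second moments

        def update(self, x):
            if self.s1 is None:                # initialize on the first sample
                self.s1, self.s2 = x, x * x
                return 0.0
            mean = self.s1
            std = math.sqrt(max(self.s2 - self.s1 ** 2, 1e-12))
            z = (x - mean) / std
            p = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
            a = self.alpha * (1.0 - self.beta * p)   # probability-weighted smoothing
            self.s1 = a * self.s1 + (1.0 - a) * x
            self.s2 = a * self.s2 + (1.0 - a) * x * x
            return abs(z)   # large |z|: candidate anomaly / sample more frequently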

21 pages, 912 KB  
Article
UAV-Enabled Maritime IoT D2D Task Offloading: A Potential Game-Accelerated Framework
by Baiyi Li, Jian Zhao and Tingting Yang
Sensors 2025, 25(18), 5820; https://doi.org/10.3390/s25185820 - 18 Sep 2025
Viewed by 436
Abstract
Maritime Internet of Things (IoT) with unmanned surface vessels (USVs) faces tight onboard computing and sparse wireless links. Compute-intensive vision and sensing workloads often exceed latency budgets, which undermines timely decisions. In this paper, we propose a novel distributed computation offloading framework for maritime IoT scenarios. By leveraging the limited computational resources of USVs within a device-to-device (D2D)-assisted edge network and the mobility advantages of UAV-assisted edge computing, we design a breadth-first search (BFS)-based distributed computation offloading game. Building upon this, we formulate a global latency minimization problem that jointly optimizes UAV hovering coordinates and arrival times. This problem is solved by decomposing it into subproblems addressed via a joint Alternating Direction Method of Multipliers (ADMM) and Successive Convex Approximation (SCA) approach, effectively reducing the time between UAV arrivals and hovering coordinates. Extensive simulations verify the effectiveness of our framework, demonstrating up to a 49.6% latency reduction compared with traditional offloading schemes.
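
The distributed offloading game described above can be pictured as best-response dynamics: each USV repeatedly re-selects the target (itself, a D2D neighbour, or the UAV) that currently minimizes its own latency, and in a potential game such updates reach a pure Nash equilibrium after finitely many rounds. The sketch below is a generic illustration with placeholder tasks, candidates, and latency inputs, not the paper's BFS-based algorithm:

    def best_response_offloading(tasks, candidates, latency, max_rounds=100):
        # tasks: {usv: task_size}; candidates: {usv: [itself, D2D neighbours, UAV]}
        # latency(usv, target, load_on_target): caller-supplied delay estimate
        assignment = {u: u for u in tasks}          # everyone starts executing locally
        for _ in range(max_rounds):
            changed = False
            for u, size in tasks.items():
                def load(t):                        # work already placed on t by others
                    return sum(tasks[v] for v, tgt in assignment.items()
                               if tgt == t and v != u)
                best = min(candidates[u], key=lambda t: latency(u, t, load(t) + size))
                if best != assignment[u]:
                    assignment[u] = best
                    changed = True
            if not changed:                         # no USV wants to deviate: equilibrium
                break
        return assignment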

16 pages, 1124 KB  
Article
Hybrid CNN-GRU Model for Real-Time Blood Glucose Forecasting: Enhancing IoT-Based Diabetes Management with AI
by Reem Ibrahim Alkanhel, Hager Saleh, Ahmed Elaraby, Saleh Alharbi, Hela Elmannai, Saad Alaklabi, Saeed Hamood Alsamhi and Sherif Mostafa
Sensors 2024, 24(23), 7670; https://doi.org/10.3390/s24237670 - 30 Nov 2024
Cited by 3 | Viewed by 3473
Abstract
For people with diabetes, controlling blood glucose level (BGL) is a significant issue since the disease affects how the body metabolizes food, which makes careful insulin regulation necessary. Patients have to manually check their blood sugar levels, which can be laborious and inaccurate. Many variables affect BGL changes, making accurate prediction challenging. To anticipate BGL many steps ahead, we propose a novel hybrid deep learning model framework based on Gated Recurrent Units (GRUs) and Convolutional Neural Networks (CNNs), which can be integrated into the Internet of Things (IoT)-enabled diabetes management systems, improving prediction accuracy and timeliness by allowing real-time data processing on edge devices. While the GRU layer records temporal relationships and sequence information, the CNN layer analyzes the incoming data to extract significant features. Using a publicly accessible type 1 diabetes dataset, the hybrid model’s performance is compared to that of the standalone Long Short-Term Memory (LSTM), CNN, and GRU models. The findings show that the hybrid CNN-GRU model performs better than the single models, indicating its potential to significantly improve real-time BGL forecasting in IoT-based diabetes management systems.
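
As a rough sketch of the kind of architecture the abstract describes, a 1-D convolutional front end feeding a GRU and a multi-step output head can be written in a few lines of Keras; the window and horizon sizes below are assumptions for illustration, not the authors' exact configuration:

    import tensorflow as tf

    WINDOW, HORIZON = 24, 6   # assumed: 24 past glucose readings in, 6 future steps out

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW, 1)),
        tf.keras.layers.Conv1D(64, kernel_size=3, padding="causal",
                               activation="relu"),   # local feature extraction
        tf.keras.layers.GRU(64),                      # temporal dependencies across the window
        tf.keras.layers.Dense(HORIZON),               # multi-step BGL forecast
    ])
    model.compile(optimizer="adam", loss="mse")
    # model.fit(x_train, y_train, ...) with x_train shaped (samples, WINDOW, 1)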
