Search Results (326)

Search Parameters:
Keywords = fog architecture

20 pages, 5241 KB  
Article
Integrating a Fast and Reliable Robotic Hooking System for Enhanced Stamping Press Processes in Smart Manufacturing
by Yen-Chun Chen, Fu-Yao Chang and Chin-Feng Lai
Automation 2025, 6(4), 55; https://doi.org/10.3390/automation6040055 - 12 Oct 2025
Viewed by 313
Abstract
To cope with increasingly diverse market demands, industry must move towards Industry 4.0, and smart manufacturing rests on its two key concepts: cyber-physical systems (CPSs) and digital twins (DTs). In this paper, we propose a smart manufacturing system for stamping press processes based on the CPS concept and use a DT to establish a robot guidance generation model at the manufacturing end. In the proposed system, fog nodes connect three major architectures: device health diagnosis, manufacturing devices, and material traceability. In addition, a special hook end point is designed, and a lightweight visual guidance generation model is established for it to improve production efficiency at the manufacturing end. Full article
(This article belongs to the Section Robotics and Autonomous Systems)

24 pages, 943 KB  
Review
A Review on AI Miniaturization: Trends and Challenges
by Bin Tang, Shengzhi Du and Antonie Johan Smith
Appl. Sci. 2025, 15(20), 10958; https://doi.org/10.3390/app152010958 - 12 Oct 2025
Viewed by 552
Abstract
Artificial intelligence (AI) often suffers from high energy consumption and complex deployment in resource-constrained environments, leading to a structural mismatch between capability and deployability. This review takes two representative scenarios—energy-first and performance-first—as the main thread, systematically comparing cloud, edge, and fog/cloudlet/mobile edge computing (MEC)/micro data center (MDC) architectures. Based on a standardized literature search and screening process, three categories of miniaturization strategies are distilled: redundancy compression (e.g., pruning, quantization, and distillation), knowledge transfer (e.g., distillation and parameter-efficient fine-tuning), and hardware–software co-design (e.g., neural architecture search (NAS), compiler-level, and operator-level optimization). The purposes of this review are threefold: (1) to unify the “architecture–strategy–implementation pathway” from a system-level perspective; (2) to establish technology–budget mapping with verifiable quantitative indicators; and (3) to summarize representative pathways for energy- and performance-prioritized scenarios, while highlighting current deficiencies in data disclosure and device-side validation. The findings indicate that, compared with single techniques, cross-layer combined optimization better balances accuracy, latency, and power consumption. Therefore, AI miniaturization should be regarded as a proactive method of structural reconfiguration for large-scale deployment. Future efforts should advance cross-scenario empirical validation and standardized benchmarking, while reinforcing hardware–software co-design. Compared with existing reviews that mostly focus on a single dimension, this review proposes a cross-level framework and design checklist, systematizing scattered optimization methods into reusable engineering pathways. Full article
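The redundancy-compression strategies this review distills (pruning, quantization, distillation) can be illustrated with a minimal sketch of uniform affine post-training quantization. The bit width, value range, and function names below are illustrative, not taken from the review:

```python
# Affine-quantize a list of float weights to signed integers and back,
# the basic operation behind int8 post-training quantization.

def quantize(weights, num_bits=8):
    """Map floats onto a signed integer grid via scale and zero point."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / (qmax - qmin) or 1.0  # guard zero range
    zero_point = round(qmin - w_min / scale)
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the integer grid."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.2, -0.3, 0.0, 0.7, 1.5]
q, s, z = quantize(weights)
recovered = dequantize(q, s, z)
max_err = max(abs(a - b) for a, b in zip(weights, recovered))
```

The reconstruction error is bounded by the quantization step `s`, which is the accuracy/footprint trade-off the review's technology–budget mapping quantifies.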

28 pages, 5254 KB  
Article
IoT-Enabled Fog-Based Secure Aggregation in Smart Grids Supporting Data Analytics
by Hayat Mohammad Khan, Farhana Jabeen, Abid Khan, Muhammad Waqar and Ajung Kim
Sensors 2025, 25(19), 6240; https://doi.org/10.3390/s25196240 - 8 Oct 2025
Viewed by 686
Abstract
The Internet of Things (IoT) has transformed multiple industries, providing significant potential for automation, efficiency, and enhanced decision-making. The incorporation of IoT and data analytics in the smart grid represents a groundbreaking opportunity for the energy sector, delivering substantial advantages in efficiency, sustainability, and customer empowerment. This integration enables smart grids to autonomously monitor energy flows and adjust to fluctuations in energy demand and supply in a flexible and real-time fashion. Statistical analytics, as a fundamental component of data analytics, provides the tools and techniques needed to uncover patterns, trends, and insights within datasets. Nevertheless, privacy and security issues must be addressed to fully realize the potential of data analytics in smart grids. This paper makes several contributions to the literature on secure, privacy-aware aggregation schemes in smart grids. First, we introduce a Fog-enabled Secure Data Analytics Operations (FESDAO) scheme, which offers a distributed architecture incorporating robust security features such as secure aggregation, authentication, fault tolerance, and resilience against insider threats. The scheme achieves privacy during data aggregation through a modified Boneh-Goh-Nissim cryptographic scheme along with other mechanisms. Second, FESDAO supports statistical analytics on metering data at both the cloud control center and the fog node level, and ensures reliable aggregation and accurate analytical results even when smart meters fail to report data, preserving both computation accuracy and latency. Third, we provide a comprehensive security analysis demonstrating that the proposed approach supports data privacy, source authentication, and fault tolerance, and resists insider collusion, replay, and false data injection (FDI) attacks. Lastly, we offer a thorough performance evaluation illustrating the efficiency of the proposed scheme in comparison to current state-of-the-art schemes, considering encryption, computation, aggregation, decryption, and communication costs. Full article
(This article belongs to the Section Internet of Things)
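FESDAO's privacy guarantee rests on a modified Boneh-Goh-Nissim cryptosystem; as a much simpler stand-in, the idea that an aggregator can recover a sum without seeing individual readings can be sketched with pairwise one-time masks (the modulus, pairing scheme, and variable names here are illustrative, not FESDAO's actual construction):

```python
import random

# Toy additive secure aggregation via pairwise masking: each pair of
# meters (i, j) shares a random mask that i adds and j subtracts, so
# the masks cancel in the sum but hide every individual reading.

M = 2 ** 32  # public modulus; all arithmetic is mod M

def mask_readings(readings):
    """Return masked readings whose sum mod M equals the true sum."""
    n = len(readings)
    masked = list(readings)
    for i in range(n):
        for j in range(i + 1, n):
            r = random.randrange(M)
            masked[i] = (masked[i] + r) % M
            masked[j] = (masked[j] - r) % M
    return masked

readings = [17, 42, 8, 23]          # private smart-meter values
masked = mask_readings(readings)    # what the fog node actually receives
aggregate = sum(masked) % M         # pairwise masks cancel in the sum
```

Unlike this toy scheme, a homomorphic construction such as Boneh-Goh-Nissim also tolerates meters that fail to report, which is the fault-tolerance property the abstract emphasizes.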

31 pages, 9679 KB  
Article
Weather-Corrupted Image Enhancement with Removal-Raindrop Diffusion and Mutual Image Translation Modules
by Young-Ho Go and Sung-Hak Lee
Mathematics 2025, 13(19), 3176; https://doi.org/10.3390/math13193176 - 3 Oct 2025
Viewed by 363
Abstract
Artificial intelligence-based image processing is critical for sensor fusion and image transformation in mobility systems. Advanced driver assistance functions such as forward monitoring and digital side mirrors are essential for driving safety. Degradation due to raindrops, fog, and high-dynamic range (HDR) imbalance caused by lighting changes impairs visibility and reduces object recognition and distance estimation accuracy. This paper proposes a diffusion framework to enhance visibility under multi-degradation conditions. The denoising diffusion probabilistic model (DDPM) offers more stable training and high-resolution restoration than the generative adversarial networks. The DDPM relies on large-scale paired datasets, which are difficult to obtain in raindrop scenarios. This framework applies the Palette diffusion model, comprising data augmentation and raindrop-removal modules. The data augmentation module generates raindrop image masks and learns inpainting-based raindrop synthesis. Synthetic masks simulate raindrop patterns and HDR imbalance scenarios. The raindrop-removal module reconfigures the Palette architecture for image-to-image translation, incorporating the augmented synthetic dataset for raindrop removal learning. Loss functions and normalization strategies improve restoration stability and removal performance. During inference, the framework operates with a single conditional input, and an efficient sampling strategy is introduced to significantly accelerate the process. In post-processing, tone adjustment and chroma compensation enhance visual consistency. The proposed method preserves fine structural details and outperforms existing approaches in visual quality, improving the robustness of vision systems under adverse conditions. Full article
(This article belongs to the Special Issue Deep Learning in Image Processing and Scientific Computing)

26 pages, 13551 KB  
Article
Hybrid Cloud–Edge Architecture for Real-Time Cryptocurrency Market Forecasting: A Distributed Machine Learning Approach with Blockchain Integration
by Mohammed M. Alenazi and Fawwad Hassan Jaskani
Mathematics 2025, 13(18), 3044; https://doi.org/10.3390/math13183044 - 22 Sep 2025
Viewed by 673
Abstract
The volatile nature of cryptocurrency markets demands real-time analytical capabilities that traditional centralized computing architectures struggle to provide. This paper presents a novel hybrid cloud–edge computing framework for cryptocurrency market forecasting, leveraging distributed systems to enable low-latency prediction models. Our approach integrates machine learning algorithms across a distributed network: edge nodes perform real-time data preprocessing and feature extraction, while the cloud infrastructure handles deep learning model training and global pattern recognition. The proposed architecture uses a three-tier system comprising edge nodes for immediate data capture, fog layers for intermediate processing and local inference, and cloud servers for comprehensive model training on historical blockchain data. A federated learning mechanism allows edge nodes to contribute to a global prediction model while preserving data locality and reducing network latency. The experimental results show a 40% reduction in prediction latency compared to cloud-only solutions while maintaining comparable accuracy in forecasting Bitcoin and Ethereum price movements. The system processes over 10,000 transactions per second and delivers real-time insights with sub-second response times. Integration with blockchain ensures data integrity and provides transparent audit trails for all predictions. Full article
(This article belongs to the Special Issue Recent Computational Techniques to Forecast Cryptocurrency Markets)
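The federated learning mechanism described above, where edge nodes contribute to a global model without moving raw data, is canonically realized as federated averaging. This is a hypothetical sketch (the paper's actual model and weighting are not given in this listing); node counts and weight vectors are made up:

```python
# Federated-averaging step: the cloud combines per-edge-node model
# weights into a global model, weighted by each node's sample count,
# so raw market data never leaves the edge.

def federated_average(local_weights, sample_counts):
    """Sample-count-weighted average of per-node weight vectors."""
    total = sum(sample_counts)
    dim = len(local_weights[0])
    global_w = [0.0] * dim
    for w, n in zip(local_weights, sample_counts):
        for k in range(dim):
            global_w[k] += (n / total) * w[k]
    return global_w

# Three edge nodes with different amounts of local transaction data.
edge_models = [[0.2, 1.0], [0.4, 0.8], [0.3, 1.2]]
counts = [100, 300, 600]
global_model = federated_average(edge_models, counts)
```

Weighting by sample count means the node holding the most transactions (here the third) pulls the global model hardest, which is the usual FedAvg design choice.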

30 pages, 6751 KB  
Article
Web System for Solving the Inverse Kinematics of 6DoF Robotic Arm Using Deep Learning Models: CNN and LSTM
by Mayra A. Torres-Hernández, Teodoro Ibarra-Pérez, Eduardo García-Sánchez, Héctor A. Guerrero-Osuna, Luis O. Solís-Sánchez and Ma. del Rosario Martínez-Blanco
Technologies 2025, 13(9), 405; https://doi.org/10.3390/technologies13090405 - 5 Sep 2025
Viewed by 903
Abstract
This work presents the development of a web system using deep learning (DL) neural networks to solve the inverse kinematics problem of the Quetzal robotic arm, designed for academic and research purposes. Two architectures, LSTM and CNN, were designed, trained, and evaluated using data generated through the Denavit–Hartenberg (D-H) model, considering the robot’s workspace. The evaluation employed the mean squared error (MSE) as the loss metric and mean absolute error (MAE) and accuracy as performance metrics. The CNN model, featuring four convolutional layers and an input of 4 timesteps, achieved the best overall performance (95.9% accuracy, MSE of 0.003, and MAE of 0.040), significantly outperforming the LSTM model in training time. A hybrid web application was implemented, allowing offline training and real-time online inference under one second via an interactive interface developed with Streamlit 1.16. The solution integrates tools such as TensorFlow™ 2.15, Python 3.10, and Anaconda Distribution 2023.03-1, ensuring portability to fog or cloud computing environments. The proposed system stands out for its fast response times (1 s), low computational cost, and high scalability to collaborative robotics environments. It is a viable alternative for applications in educational or research settings, particularly in projects focused on industrial automation. Full article
(This article belongs to the Special Issue AI Robotics Technologies and Their Applications)
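The training data above is generated with the Denavit–Hartenberg model, i.e. forward kinematics via chained homogeneous transforms. The Quetzal arm's D-H table is not given in this listing, so this sketch uses a two-link planar arm purely as a sanity check:

```python
import math

# Standard D-H forward kinematics: one 4x4 homogeneous transform per
# joint, chained to give the end-effector pose.

def dh_matrix(theta, d, a, alpha):
    """Standard D-H transform for one joint (theta, d, a, alpha)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def mat_mul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(joint_angles, dh_params):
    """Chain per-joint transforms; return end-effector (x, y, z)."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = mat_mul(T, dh_matrix(theta, d, a, alpha))
    return T[0][3], T[1][3], T[2][3]

# Two-link planar arm, both links of length 1: joint angles
# (pi/2, -pi/2) should place the end effector at (1, 1, 0).
params = [(0.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
x, y, z = forward_kinematics([math.pi / 2, -math.pi / 2], params)
```

Sampling joint angles over the workspace and recording the resulting poses yields exactly the kind of (pose, angles) dataset the CNN and LSTM inverse-kinematics models are trained on.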

20 pages, 5187 KB  
Article
IceSnow-Net: A Deep Semantic Segmentation Network for High-Precision Snow and Ice Mapping from UAV Imagery
by Yulin Liu, Shuyuan Yang, Guangyang Zhang, Minghui Wu, Feng Xiong, Pinglv Yang and Zeming Zhou
Remote Sens. 2025, 17(17), 2964; https://doi.org/10.3390/rs17172964 - 27 Aug 2025
Viewed by 795
Abstract
Accurate monitoring of snow and ice cover is essential for climate research and disaster management, but conventional remote sensing methods often struggle in complex terrain and fog-contaminated conditions. To address the challenges of high-resolution UAV-based snow and ice segmentation—including visual similarity, fragmented spatial distributions, and terrain shadow interference—we introduce IceSnow-Net, a U-Net-based architecture enhanced with three key components: (1) a ResNet50 backbone with atrous convolutions to expand the receptive field, (2) an Atrous Spatial Pyramid Pooling (ASPP) module for multi-scale context aggregation, and (3) an auxiliary path loss for deep supervision to enhance boundary delineation and training stability. The model was trained and validated on UAV-captured orthoimagery from Ganzi Prefecture, Sichuan, China. The experimental results demonstrate that IceSnow-Net achieved excellent performance compared to other models, attaining a mean Intersection over Union (mIoU) of 98.74%, while delivering 27% higher computational efficiency than U-Mamba. Ablation studies further validated the individual contributions of each module. Overall, IceSnow-Net provides an effective and accurate solution for cryosphere monitoring in topographically complex environments using UAV imagery. Full article
(This article belongs to the Special Issue Recent Progress in UAV-AI Remote Sensing II)
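The headline metric above, mean Intersection over Union (mIoU), is the per-class ratio of true positives to the union of prediction and ground truth, averaged over classes. A minimal sketch on flat label lists (the tiny example labels are invented for illustration):

```python
# mIoU over flat label lists: per class, IoU = TP / (TP + FP + FN),
# then average over the classes that appear.

def mean_iou(pred, truth, num_classes):
    """Mean Intersection over Union for integer class labels."""
    ious = []
    for c in range(num_classes):
        tp = sum(1 for p, t in zip(pred, truth) if p == c and t == c)
        fp = sum(1 for p, t in zip(pred, truth) if p == c and t != c)
        fn = sum(1 for p, t in zip(pred, truth) if p != c and t == c)
        if tp + fp + fn:
            ious.append(tp / (tp + fp + fn))
    return sum(ious) / len(ious)

# Two classes: 0 = background, 1 = snow/ice.
truth = [0, 0, 1, 1, 1, 0]
pred  = [0, 1, 1, 1, 0, 0]
score = mean_iou(pred, truth, 2)
```

In practice the counts are accumulated in a confusion matrix over all pixels of the validation set rather than per-list, but the ratio is the same.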

25 pages, 5957 KB  
Article
Benchmarking IoT Simulation Frameworks for Edge–Fog–Cloud Architectures: A Comparative and Experimental Study
by Fatima Bendaouch, Hayat Zaydi, Safae Merzouk and Saliha Assoul
Future Internet 2025, 17(9), 382; https://doi.org/10.3390/fi17090382 - 26 Aug 2025
Viewed by 896
Abstract
Current IoT systems are structured around Edge, Fog, and Cloud layers to manage data and resource constraints more effectively. Although several studies have examined IoT simulators from a functional angle, few have combined technical comparisons with experimental validation under realistic conditions. This lack of integration limits the practical value of prior results and complicates tool selection for distributed architectures. This work introduces a selection and evaluation methodology for simulators that explicitly represent the Edge–Fog–Cloud continuum. Thirteen open-source tools are analyzed based on functional, technical, and operational features. Among them, iFogSim2 and FogNetSim++ are selected for a detailed experimental comparison on their support of mobility, resource allocation, and energy modeling across all layers. A shared hybrid IoT scenario is simulated using eight key metrics: execution time, application loop delay, CPU processing time per tuple, energy consumption, cloud execution cost, network usage, scalability, and robustness. The analysis reveals distinct modeling strategies: FogNetSim++ reduces loop latency by 48% and maintains stable performance at scale but shows high data loss under overload. In contrast, iFogSim2 consumes up to 80% less energy and preserves message continuity in stressful conditions, albeit with longer execution times. These outcomes reflect the trade-offs between modeling granularity, performance stability, and system resilience. Full article

49 pages, 1694 KB  
Review
Analysis of Deep Reinforcement Learning Algorithms for Task Offloading and Resource Allocation in Fog Computing Environments
by Endris Mohammed Ali, Jemal Abawajy, Frezewd Lemma and Samira A. Baho
Sensors 2025, 25(17), 5286; https://doi.org/10.3390/s25175286 - 25 Aug 2025
Viewed by 1595
Abstract
Fog computing is increasingly preferred over cloud computing for processing tasks from Internet of Things (IoT) devices with limited resources. However, placing tasks and allocating resources in distributed and dynamic fog environments remains a major challenge, especially when trying to meet strict Quality of Service (QoS) requirements. Deep reinforcement learning (DRL) has emerged as a promising solution to these challenges, offering adaptive, data-driven decision-making in real-time and uncertain conditions. While several surveys have explored DRL in fog computing, most focus on traditional centralized offloading approaches or emphasize reinforcement learning (RL) with limited integration of deep learning. To address this gap, this paper presents a comprehensive and focused survey on the full-scale application of DRL to the task offloading problem in fog computing environments involving multiple user devices and multiple fog nodes. We systematically analyze and classify the literature based on architecture, resource allocation methods, QoS objectives, offloading topology and control, optimization strategies, DRL techniques used, and application scenarios. We also introduce a taxonomy of DRL-based task offloading models and highlight key challenges, open issues, and future research directions. This survey serves as a valuable resource for researchers by identifying unexplored areas and suggesting new directions for advancing DRL-based solutions in fog computing. For practitioners, it provides insights into selecting suitable DRL techniques and system designs to implement scalable, efficient, and QoS-aware fog computing applications in real-world environments. Full article
(This article belongs to the Section Sensor Networks)

24 pages, 4538 KB  
Article
CNN–Transformer-Based Model for Maritime Blurred Target Recognition
by Tianyu Huang, Chao Pan, Jin Liu and Zhiwei Kang
Electronics 2025, 14(17), 3354; https://doi.org/10.3390/electronics14173354 - 23 Aug 2025
Viewed by 528
Abstract
In maritime blurred image recognition, ship collision accidents frequently result from three primary blur types: (1) motion blur from vessel movement in complex sea conditions, (2) defocus blur due to water vapor refraction, and (3) scattering blur caused by sea fog interference. This paper proposes a dual-branch recognition method specifically designed for motion blur, which represents the most prevalent blur type in maritime scenarios. Conventional approaches exhibit constrained computational efficiency and limited adaptability across different modalities. To overcome these limitations, we propose a hybrid CNN–Transformer architecture: the CNN branch captures local blur characteristics, while the enhanced Transformer module models long-range dependencies via attention mechanisms. The CNN branch employs a lightweight ResNet variant, in which conventional residual blocks are substituted with Multi-Scale Gradient-Aware Residual Blocks (MSG-ARBs). This architecture employs learnable gradient convolution for explicit local gradient feature extraction and utilizes gradient content gating to strengthen blur-sensitive region representation, significantly improving computational efficiency compared to conventional CNNs. The Transformer branch incorporates a Hierarchical Swin Transformer (HST) framework with Shifted Window-based Multi-head Self-Attention for global context modeling. The proposed method incorporates blur-invariant Positional Encoding (PE) to enhance blur spectrum modeling capability, while employing a DyT (Dynamic Tanh) module with learnable α parameters to replace traditional normalization layers. This architecture achieves a significant reduction in computational costs while preserving feature representation quality. Moreover, it efficiently computes long-range image dependencies using a compact 16 × 16 window configuration. The proposed feature fusion module synergistically integrates CNN-based local feature extraction with Transformer-enabled global representation learning, achieving comprehensive feature modeling across different scales. To evaluate the model’s performance and generalization ability, we conducted comprehensive experiments on four benchmark datasets: VAIS, GoPro, Mini-ImageNet, and Open Images V4. Experimental results show that our method achieves superior classification accuracy compared to state-of-the-art approaches, while simultaneously enhancing inference speed and reducing GPU memory consumption. Ablation studies confirm that the DyT module effectively suppresses outliers and improves computational efficiency, particularly when processing low-quality input data. Full article
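The DyT (Dynamic Tanh) module named in this abstract is, in the form described in the literature, an element-wise tanh with a learnable input scale α plus an affine transform, used in place of normalization layers. A minimal sketch under that assumption (the parameter values are illustrative defaults, not the paper's):

```python
import math

# Dynamic Tanh layer: out = gamma * tanh(alpha * x) + beta.
# tanh saturates smoothly, bounding outliers to (-1, 1) without
# computing any batch or layer statistics.

class DyT:
    def __init__(self, dim, alpha=0.5):
        self.alpha = alpha          # learnable scalar during training
        self.gamma = [1.0] * dim    # learnable per-channel scale
        self.beta = [0.0] * dim     # learnable per-channel shift

    def __call__(self, x):
        return [g * math.tanh(self.alpha * xi) + b
                for xi, g, b in zip(x, self.gamma, self.beta)]

layer = DyT(dim=4)
out = layer([0.0, 1.0, -1.0, 100.0])  # the outlier 100.0 is squashed
```

This bounding behavior is consistent with the ablation finding quoted above that DyT suppresses outliers in low-quality inputs.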

22 pages, 15242 KB  
Article
A Modality Alignment and Fusion-Based Method for Around-the-Clock Remote Sensing Object Detection
by Yongjun Qi, Shaohua Yang, Jiahao Chen, Meng Zhang, Jie Zhu, Xin Liu and Hongxing Zheng
Sensors 2025, 25(16), 4964; https://doi.org/10.3390/s25164964 - 11 Aug 2025
Cited by 1 | Viewed by 937
Abstract
Cross-modal remote sensing object detection holds significant potential for around-the-clock applications. However, the modality differences between cross-modal data and the degradation of feature quality under adverse weather conditions limit detection performance. To address these challenges, this paper presents a novel cross-modal remote sensing object detection framework designed to overcome two critical challenges in around-the-clock applications: (1) significant modality disparities between visible light, infrared, and synthetic aperture radar data, and (2) severe feature degradation in adverse weather conditions such as fog and in nighttime scenarios. Our primary contributions are as follows: First, we develop a multi-scale feature extraction module that employs a hierarchical convolutional architecture to capture both fine-grained details and contextual information, effectively compensating for missing or blurred features in degraded visible-light images. Second, we introduce an innovative feature interaction module that utilizes cross-attention mechanisms to establish long-range dependencies across modalities while dynamically suppressing noise interference through adaptive feature selection. Third, we propose a feature correction fusion module that performs spatial alignment of object boundaries and channel-wise optimization of global feature consistency, enabling robust fusion of complementary information from different modalities. The proposed framework is validated on visible light, infrared, and SAR modalities. Extensive experiments on three challenging datasets (LLVIP, OGSOD, and Drone Vehicle) demonstrate our framework’s superior performance, achieving state-of-the-art mean average precision scores of 66.3%, 58.6%, and 71.7%, respectively, representing significant improvements over existing methods in scenarios with modality differences or extreme weather conditions. The proposed solution not only advances the technical frontier of cross-modal object detection but also provides practical value for mission-critical applications such as 24/7 surveillance systems, military reconnaissance, and emergency response operations where reliable around-the-clock detection is essential. Full article
(This article belongs to the Section Remote Sensors)

20 pages, 9888 KB  
Article
WeatherClean: An Image Restoration Algorithm for UAV-Based Railway Inspection in Adverse Weather
by Kewen Wang, Shaobing Yang, Zexuan Zhang, Zhipeng Wang, Limin Jia, Mengwei Li and Shengjia Yu
Sensors 2025, 25(15), 4799; https://doi.org/10.3390/s25154799 - 4 Aug 2025
Viewed by 662
Abstract
UAV-based inspections are an effective way to ensure railway safety and have gained significant attention. However, images captured during complex weather conditions, such as rain, snow, or fog, often suffer from severe degradation, affecting image recognition accuracy. Existing algorithms for removing rain, snow, and fog have two main limitations: they do not adaptively learn features under varying weather complexities and struggle with managing complex noise patterns in drone inspections, leading to incomplete noise removal. To address these challenges, this study proposes a novel framework for removing rain, snow, and fog from drone images, called WeatherClean. This framework introduces a Weather Complexity Adjustment Factor (WCAF) in a parameterized adjustable network architecture to process weather degradation of varying degrees adaptively. It also employs a hierarchical multi-scale cropping strategy to enhance the recovery of fine noise and edge structures. Additionally, it incorporates a degradation synthesis method based on atmospheric scattering physical models to generate training samples that align with real-world weather patterns, thereby mitigating data scarcity issues. Experimental results show that WeatherClean outperforms existing methods by effectively removing noise particles while preserving image details. This advancement provides more reliable high-definition visual references for drone-based railway inspections, significantly enhancing inspection capabilities under complex weather conditions and ensuring the safety of railway operations. Full article
(This article belongs to the Section Sensing and Imaging)
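The degradation synthesis step above is based on the standard atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)) with transmission t(x) = exp(−β·d(x)). A per-pixel sketch, with β, airlight, and depth values chosen purely for illustration:

```python
import math

# Atmospheric scattering model for fog synthesis: each clear pixel J
# is blended toward the global airlight A according to its scene
# depth, since transmission t decays exponentially with depth.

def synthesize_fog(pixel, depth, beta=0.8, airlight=0.9):
    """Return the fogged intensity of one pixel."""
    t = math.exp(-beta * depth)          # transmission falls with depth
    return pixel * t + airlight * (1.0 - t)

clear_row = [0.2, 0.5, 0.8]              # clear-scene intensities J(x)
depths = [1.0, 2.0, 5.0]                 # per-pixel scene depth d(x)
foggy_row = [synthesize_fog(j, d) for j, d in zip(clear_row, depths)]
```

Distant pixels converge to the airlight, which is why synthesized samples from this model mimic real haze well enough to mitigate the training-data scarcity the abstract mentions.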

18 pages, 1214 KB  
Article
Predictive Maintenance System to RUL Prediction of Li-Ion Batteries and Identify the Fault Type of Brushless DC Electric Motor from UAVs
by Dragos Alexandru Andrioaia
Sensors 2025, 25(15), 4782; https://doi.org/10.3390/s25154782 - 3 Aug 2025
Cited by 2 | Viewed by 978
Abstract
Unmanned Aerial Vehicles have started to be used more and more due to the benefits they bring. Failure of Unmanned Aerial Vehicle components may result in loss of control, which may cause property damage or personal injury. In order to increase the operational safety of the Unmanned Aerial Vehicle, the implementation of a Predictive Maintenance system using the Internet of Things is required. In this paper, the authors propose a new architecture of Predictive Maintenance system for Unmanned Aerial Vehicles that is able to identify the fault type of Brushless DC electric motor and determine the Remaining Useful Life of the Li-ion batteries. In order to create the Predictive Maintenance system within the Unmanned Aerial Vehicle, an architecture based on Fog Computing was proposed and Machine Learning was used to extract knowledge from the data. The proposed architecture was practically validated. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)

28 pages, 2465 KB  
Article
Latency-Aware and Energy-Efficient Task Offloading in IoT and Cloud Systems with DQN Learning
by Amina Benaboura, Rachid Bechar, Walid Kadri, Tu Dac Ho, Zhenni Pan and Shaaban Sahmoud
Electronics 2025, 14(15), 3090; https://doi.org/10.3390/electronics14153090 - 1 Aug 2025
Viewed by 1724
Abstract
The exponential proliferation of the Internet of Things (IoT) and optical IoT (O-IoT) has introduced substantial challenges concerning computational capacity and energy efficiency. IoT devices generate vast volumes of aggregated data and require intensive processing, often resulting in elevated latency and excessive energy consumption. Task offloading has emerged as a viable solution; however, many existing strategies fail to adequately optimize both latency and energy usage. This paper proposes a novel task-offloading approach based on deep Q-network (DQN) learning, designed to intelligently and dynamically balance these critical metrics. The proposed framework continuously refines real-time task offloading decisions by leveraging the adaptive learning capabilities of DQN, thereby substantially reducing latency and energy consumption. To further enhance system performance, the framework incorporates optical networks into the IoT–fog–cloud architecture, capitalizing on their high-bandwidth and low-latency characteristics. This integration facilitates more efficient distribution and processing of tasks, particularly in data-intensive IoT applications. Additionally, we present a comparative analysis between the proposed DQN algorithm and the optimal strategy. Through extensive simulations, we demonstrate the superior effectiveness of the proposed DQN framework across various IoT and O-IoT scenarios, achieving improvements over the BAT and DJA approaches of 35% and 50% in energy consumption and 30% and 40% in latency, respectively. These findings underscore the significance of selecting an appropriate offloading strategy tailored to the specific requirements of IoT and O-IoT applications, particularly with regard to environmental stability and performance demands. Full article

17 pages, 5189 KB  
Article
YOLO-Extreme: Obstacle Detection for Visually Impaired Navigation Under Foggy Weather
by Wei Wang, Bin Jing, Xiaoru Yu, Wei Zhang, Shengyu Wang, Ziqi Tang and Liping Yang
Sensors 2025, 25(14), 4338; https://doi.org/10.3390/s25144338 - 11 Jul 2025
Viewed by 1428
Abstract
Visually impaired individuals face significant challenges in navigating safely and independently, particularly under adverse weather conditions such as fog. To address this issue, we propose YOLO-Extreme, an enhanced object detection framework based on YOLOv12, specifically designed for robust navigation assistance in foggy environments. The proposed architecture incorporates three novel modules: the Dual-Branch Bottleneck Block (DBB) for capturing both local spatial and global semantic features, the Multi-Dimensional Collaborative Attention Module (MCAM) for joint spatial-channel attention modeling to enhance salient obstacle features and reduce background interference in foggy conditions, and the Channel-Selective Fusion Block (CSFB) for robust multi-scale feature integration. Comprehensive experiments conducted on the Real-world Task-driven Traffic Scene (RTTS) foggy dataset demonstrate that YOLO-Extreme achieves state-of-the-art detection accuracy and maintains high inference speed, outperforming existing dehazing-and-detect and mainstream object detection methods. To further verify the generalization capability of the proposed framework, we also performed cross-dataset experiments on the Foggy Cityscapes dataset, where YOLO-Extreme consistently demonstrated superior detection performance across diverse foggy urban scenes. The proposed framework significantly improves the reliability and safety of assistive navigation for visually impaired individuals under challenging weather conditions, offering practical value for real-world deployment. Full article
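The internals of the Channel-Selective Fusion Block are not detailed in the abstract. As a rough illustration of the general idea of channel-selective fusion, the sketch below gates two same-shaped feature maps with per-channel softmax weights derived from global average pooling (in the spirit of squeeze-and-excitation selection); it is a hypothetical numpy construction, not the paper's block.

```python
import numpy as np

def channel_selective_fusion(feat_a, feat_b):
    """Fuse two (C, H, W) feature maps using per-channel softmax weights
    computed from each branch's global-average-pooled descriptor."""
    desc = np.stack([feat_a.mean(axis=(1, 2)),
                     feat_b.mean(axis=(1, 2))])                 # (2, C)
    weights = np.exp(desc) / np.exp(desc).sum(axis=0, keepdims=True)  # softmax over branches
    return (weights[0][:, None, None] * feat_a
            + weights[1][:, None, None] * feat_b)

# Demo: fusing a feature map with itself returns it unchanged
# (the two branch weights sum to 1 per channel).
x = np.arange(24, dtype=float).reshape(2, 3, 4) / 24.0
fused = channel_selective_fusion(x, x)
```

Because the weights form a convex combination per channel, the fused map always lies between the two inputs, which is what makes such gating robust for multi-scale integration.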
(This article belongs to the Section Navigation and Positioning)
