Search Results (447)

Search Parameters:
Keywords = multi-focus fusion

18 pages, 3506 KiB  
Review
A Review of Spatial Positioning Methods Applied to Magnetic Climbing Robots
by Haolei Ru, Meiping Sheng, Jiahui Qi, Zhanghao Li, Lei Cheng, Jiahao Zhang, Jiangjian Xiao, Fei Gao, Baolei Wang and Qingwei Jia
Electronics 2025, 14(15), 3069; https://doi.org/10.3390/electronics14153069 (registering DOI) - 31 Jul 2025
Abstract
Magnetic climbing robots hold significant value for operations in complex industrial environments, particularly for the inspection and maintenance of large-scale metal structures. High-precision spatial positioning is the foundation for enabling autonomous and intelligent operations in such environments. However, the existing literature lacks a systematic and comprehensive review of spatial positioning techniques tailored to magnetic climbing robots. This paper addresses this gap by categorizing and evaluating current spatial positioning approaches. Initially, single-sensor-based methods are analyzed with a focus on external sensor approaches. Then, multi-sensor fusion methods are explored to overcome the shortcomings of single-sensor-based approaches. Multi-sensor fusion methods include simultaneous localization and mapping (SLAM), integrated positioning systems, and multi-robot cooperative positioning. To address non-uniform noise and environmental interference, both analytical and learning-based approaches are reviewed. Common analytical methods include Kalman-type filtering, particle filtering, and correlation filtering, while typical learning-based approaches involve deep reinforcement learning (DRL) and neural networks (NNs). Finally, challenges and future development trends are discussed. Multi-sensor fusion and lightweight design are the future trends in the advancement of spatial positioning technologies for magnetic climbing robots. Full article
(This article belongs to the Special Issue Advancements in Robotics: Perception, Manipulation, and Interaction)
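The Kalman-type filtering this review names as a common analytical method can be illustrated with a minimal scalar sketch: fusing noisy 1-D position readings into a smoothed estimate. The state model, noise variances, and measurement values below are illustrative assumptions, not values from the paper.

```python
def kalman_1d(measurements, q=1e-3, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter: random-walk state, one noisy position sensor.

    q: assumed process-noise variance, r: assumed measurement-noise variance.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q                 # predict: uncertainty grows between measurements
        k = p / (p + r)        # Kalman gain: trust in the new measurement
        x += k * (z - x)       # update state with the measurement residual
        p *= (1.0 - k)         # posterior uncertainty shrinks after the update
        estimates.append(x)
    return estimates

# Toy readings of a robot sitting near position 1.0 on a metal wall
est = kalman_1d([1.2, 0.9, 1.1, 1.0, 1.05])
```

The estimate converges toward the underlying position while damping sensor noise, which is the behavior the review contrasts against raw single-sensor readings.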

20 pages, 10161 KiB  
Article
HybridFilm: A Mixed-Reality History Tool Enabling Interoperability Between Screen Space and Immersive Environments
by Lisha Zhou, Meng Zhang, Yapeng Liu and Dongliang Guo
Appl. Sci. 2025, 15(15), 8489; https://doi.org/10.3390/app15158489 (registering DOI) - 31 Jul 2025
Abstract
History tools facilitate iterative data analysis by allowing users to view, retrieve, and revisit visualization states. However, traditional history tools are constrained by screen space limitations, which restrict the user’s ability to fully understand historical states and make it challenging to provide an intuitive preview of these states. Most immersive history tools, in contrast, operate independently of screen space and fail to consider their integration. This paper proposes HybridFilm, an innovative mixed-reality history tool that seamlessly integrates screen space and immersive reality. First, it expands the user’s understanding of historical states through a multi-source spatial fusion approach. Second, it proposes a “focus + context”-based multi-source spatial historical data visualization and interaction scheme. Furthermore, we assessed the usability and utility of HybridFilm through experimental evaluation. In comparison to traditional history tools, HybridFilm offers a more intuitive and immersive experience while maintaining a comparable level of interaction comfort and fluency. Full article
(This article belongs to the Special Issue Virtual and Augmented Reality: Theory, Methods, and Applications)

19 pages, 3720 KiB  
Article
Improved YOLOv8-n Algorithm for Steel Surface Defect Detection
by Qingqing Xiang, Gang Wu, Zhiqiang Liu and Xudong Zeng
Metals 2025, 15(8), 843; https://doi.org/10.3390/met15080843 - 28 Jul 2025
Abstract
To address the limitations in multi-scale feature processing and illumination sensitivity of existing steel surface defect detection algorithms, we proposed ADP-YOLOv8-n, enhancing accuracy and computational efficiency through advanced feature fusion and optimized network architecture. Firstly, an adaptive weighted down-sampling (ADSConv) module was proposed, which improves detector adaptability to diverse defects via the weighted fusion of down-sampled feature maps. Next, the C2f_DWR module was proposed, integrating optimized C2F architecture with a streamlined DWR design to enhance feature extraction efficiency while reducing computational complexity. Then, a Multi-Scale-Focus Diffusion Pyramid was designed to adaptively handle multi-scale object detection by dynamically adjusting feature fusion, thus reducing feature redundancy and information loss while maintaining a balance between detailed and global information. Experiments demonstrate that the proposed ADP-YOLOv8-n detection algorithm achieves superior performance, effectively balancing detection accuracy, inference speed, and model compactness. Full article
(This article belongs to the Special Issue Nondestructive Testing Methods for Metallic Material)
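The adaptive weighted down-sampling idea behind ADSConv — weighted fusion of down-sampled feature maps — can be sketched with fixed weights on max- and average-pooled maps. In the paper the weights are learned inside the network; the fixed weights and toy feature map below are assumptions for illustration only.

```python
def downsample(fmap, pool):
    """2x2 down-sampling of a 2-D feature map with the given pooling function."""
    h, w = len(fmap) // 2, len(fmap[0]) // 2
    return [[pool([fmap[2 * r][2 * c], fmap[2 * r][2 * c + 1],
                   fmap[2 * r + 1][2 * c], fmap[2 * r + 1][2 * c + 1]])
             for c in range(w)] for r in range(h)]

def weighted_downsample(fmap, w_max=0.6, w_avg=0.4):
    """Weighted fusion of max- and average-pooled maps (ADSConv-style idea;
    the real module learns these weights, here they are fixed assumptions)."""
    mx = downsample(fmap, max)
    av = downsample(fmap, lambda xs: sum(xs) / len(xs))
    return [[w_max * m + w_avg * a for m, a in zip(mr, ar)]
            for mr, ar in zip(mx, av)]

fm = [[1.0, 2.0], [3.0, 4.0]]   # toy 2x2 feature map
out = weighted_downsample(fm)   # single fused value per 2x2 patch
```

Blending the two pooled views keeps both the strongest activation (useful for small, sharp defects) and the local average (useful for diffuse ones), which is the adaptability argument the abstract makes.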

25 pages, 27219 KiB  
Article
KCUNET: Multi-Focus Image Fusion via the Parallel Integration of KAN and Convolutional Layers
by Jing Fang, Ruxian Wang, Xinglin Ning, Ruiqing Wang, Shuyun Teng, Xuran Liu, Zhipeng Zhang, Wenfeng Lu, Shaohai Hu and Jingjing Wang
Entropy 2025, 27(8), 785; https://doi.org/10.3390/e27080785 - 24 Jul 2025
Abstract
Multi-focus image fusion (MFIF) is an image-processing method that aims to generate fully focused images by integrating source images from different focal planes. However, the defocus spread effect (DSE) often leads to blurred or jagged focus/defocus boundaries in fused images, which affects the quality of the image. To address this issue, this paper proposes a novel model that embeds the Kolmogorov–Arnold network with convolutional layers in parallel within the U-Net architecture (KCUNet). This model keeps the spatial dimensions of the feature map constant to maintain high-resolution details while progressively increasing the number of channels to capture multi-level features at the encoding stage. In addition, KCUNet incorporates a content-guided attention mechanism to enhance edge information processing, which is crucial for DSE reduction and edge preservation. The model’s performance is optimized through a hybrid loss function that evaluates in several aspects, including edge alignment, mask prediction, and image quality. Finally, comparative evaluations against 15 state-of-the-art methods demonstrate KCUNet’s superior performance in both qualitative and quantitative analyses. Full article
(This article belongs to the Section Signal and Data Analysis)

35 pages, 1231 KiB  
Review
Toward Intelligent Underwater Acoustic Systems: Systematic Insights into Channel Estimation and Modulation Methods
by Imran A. Tasadduq and Muhammad Rashid
Electronics 2025, 14(15), 2953; https://doi.org/10.3390/electronics14152953 - 24 Jul 2025
Abstract
Underwater acoustic (UWA) communication supports many critical applications but still faces several physical-layer signal processing challenges. In response, recent advances in machine learning (ML) and deep learning (DL) offer promising solutions to improve signal detection, modulation adaptability, and classification accuracy. These developments highlight the need for a systematic evaluation to compare various ML/DL models and assess their performance across diverse underwater conditions. However, most existing reviews on ML/DL-based UWA communication focus on isolated approaches rather than integrated system-level perspectives, which limits cross-domain insights and reduces their relevance to practical underwater deployments. Consequently, this systematic literature review (SLR) synthesizes 43 studies (2020–2025) on ML and DL approaches for UWA communication, covering channel estimation, adaptive modulation, and modulation recognition across both single- and multi-carrier systems. The findings reveal that models such as convolutional neural networks (CNNs), long short-term memory networks (LSTMs), and generative adversarial networks (GANs) enhance channel estimation performance, achieving error reductions and bit error rate (BER) gains ranging from 10⁻³ to 10⁻⁶. Adaptive modulation techniques incorporating support vector machines (SVMs), CNNs, and reinforcement learning (RL) attain classification accuracies exceeding 98% and throughput improvements of up to 25%. For modulation recognition, architectures like sequence CNNs, residual networks, and hybrid convolutional–recurrent models achieve up to 99.38% accuracy with latency below 10 ms. These performance metrics underscore the viability of ML/DL-based solutions in optimizing physical-layer tasks for real-world UWA deployments. 
Finally, the SLR identifies key challenges in UWA communication, including high complexity, limited data, fragmented performance metrics, deployment realities, energy constraints and poor scalability. It also outlines future directions like lightweight models, physics-informed learning, advanced RL strategies, intelligent resource allocation, and robust feature fusion to build reliable and intelligent underwater systems. Full article
(This article belongs to the Section Artificial Intelligence)

41 pages, 2824 KiB  
Review
Assessing Milk Authenticity Using Protein and Peptide Biomarkers: A Decade of Progress in Species Differentiation and Fraud Detection
by Achilleas Karamoutsios, Pelagia Lekka, Chrysoula Chrysa Voidarou, Marilena Dasenaki, Nikolaos S. Thomaidis, Ioannis Skoufos and Athina Tzora
Foods 2025, 14(15), 2588; https://doi.org/10.3390/foods14152588 - 23 Jul 2025
Abstract
Milk is a nutritionally rich food and a frequent target of economically motivated adulteration, particularly through substitution with lower-cost milk types. Over the past decade, significant progress has been made in the authentication of milk using advanced proteomic and chemometric approaches, with a focus on the discovery and application of protein and peptide biomarkers for species differentiation and fraud detection. Recent innovations in both top-down and bottom-up proteomics have markedly improved the sensitivity and specificity of detecting key molecular targets, including caseins and whey proteins. Peptide-based methods are especially valuable in processed dairy products due to their thermal stability and resilience to harsh treatment, although their species specificity may be limited when sequences are conserved across related species. Robust chemometric approaches are increasingly integrated with proteomic pipelines to handle high-dimensional datasets and enhance classification performance. Multivariate techniques, such as principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA), are frequently employed to extract discriminatory features and model adulteration scenarios. Despite these advances, key challenges persist, including the lack of standardized protocols, variability in sample preparation, and the need for broader validation across breeds, geographies, and production systems. Future progress will depend on the convergence of high-resolution proteomics with multi-omics integration, structured data fusion, and machine learning frameworks, enabling scalable, specific, and robust solutions for milk authentication in increasingly complex food systems. Full article
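The multivariate step this abstract highlights — PCA extracting discriminatory features from high-dimensional proteomic data — reduces, in the two-feature case, to the closed-form leading eigenvector of a 2x2 covariance matrix. The toy data below are assumptions, not measurements from any study.

```python
import math

def pca_2d(data):
    """First principal component of 2-D data via the closed-form
    leading eigenvector of the 2x2 covariance matrix."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    sxx = sum((x - mx) ** 2 for x, _ in data) / n
    syy = sum((y - my) ** 2 for _, y in data) / n
    sxy = sum((x - mx) * (y - my) for x, y in data) / n
    # Leading eigenvalue of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    lam = tr / 2 + math.sqrt(tr * tr / 4 - det)
    vx, vy = sxy, lam - sxx            # unnormalized eigenvector
    norm = math.hypot(vx, vy)
    return (vx / norm, vy / norm)

# Toy peptide-intensity pairs that co-vary along y = x
data = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]
v = pca_2d(data)
```

Projecting samples onto this axis is the dimensionality reduction that PCA-based milk-authentication pipelines perform before classification (e.g. with PLS-DA).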

25 pages, 2727 KiB  
Review
AI-Powered Next-Generation Technology for Semiconductor Optical Metrology: A Review
by Weiwang Xu, Houdao Zhang, Lingjing Ji and Zhongyu Li
Micromachines 2025, 16(8), 838; https://doi.org/10.3390/mi16080838 - 22 Jul 2025
Abstract
As semiconductor manufacturing advances into the angstrom-scale era characterized by three-dimensional integration, conventional metrology technologies face fundamental limitations regarding accuracy, speed, and non-destructiveness. Although optical spectroscopy has emerged as a prominent research focus, its application in complex manufacturing scenarios continues to confront significant technical barriers. This review establishes three concrete objectives: To categorize AI–optical spectroscopy integration paradigms spanning forward surrogate modeling, inverse prediction, physics-informed neural networks (PINNs), and multi-level architectures; to benchmark their efficacy against critical industrial metrology challenges including tool-to-tool (T2T) matching and high-aspect-ratio (HAR) structure characterization; and to identify unresolved bottlenecks for guiding next-generation intelligent semiconductor metrology. By categorically elaborating on the innovative applications of AI algorithms—such as forward surrogate models, inverse modeling techniques, physics-informed neural networks (PINNs), and multi-level network architectures—in optical spectroscopy, this work methodically assesses the implementation efficacy and limitations of each technical pathway. Through actual application case studies involving J-profiler software 5.0 and associated algorithms, this review validates the significant efficacy of AI technologies in addressing critical industrial challenges, including tool-to-tool (T2T) matching. The research demonstrates that the fusion of AI and optical spectroscopy delivers technological breakthroughs for semiconductor metrology; however, persistent challenges remain concerning data veracity, insufficient datasets, and cross-scale compatibility. 
Future research should prioritize enhancing model generalization capability, optimizing data acquisition and utilization strategies, and balancing algorithm real-time performance with accuracy, thereby catalyzing the transformation of semiconductor manufacturing towards an intelligence-driven advanced metrology paradigm. Full article
(This article belongs to the Special Issue Recent Advances in Lithography)

18 pages, 2549 KiB  
Article
A Multi-Fusion Early Warning Method for Vehicle–Pedestrian Collision Risk at Unsignalized Intersections
by Weijing Zhu, Junji Dai, Xiaoqin Zhou, Xu Gao, Rui Cheng, Bingheng Yang, Enchu Li, Qingmei Lü, Wenting Wang and Qiuyan Tan
World Electr. Veh. J. 2025, 16(7), 407; https://doi.org/10.3390/wevj16070407 - 21 Jul 2025
Abstract
Traditional collision risk warning methods primarily focus on vehicle-to-vehicle collisions, neglecting conflicts between vehicles and vulnerable road users (VRUs) such as pedestrians, while the difficulty in predicting pedestrian trajectories further limits the accuracy of collision warnings. To address this problem, this study proposes a vehicle-to-everything-based (V2X) multi-fusion vehicle–pedestrian collision warning method, aiming to enhance the traffic safety protection for VRUs. First, Unmanned Aerial Vehicle aerial imagery combined with the YOLOv7 and DeepSort algorithms is utilized to achieve target detection and tracking at unsignalized intersections, thereby constructing a vehicle–pedestrian interaction trajectory dataset. Subsequently, key foundational modules for collision warning are developed, including the vehicle trajectory module, the pedestrian trajectory module, and the risk detection module. The vehicle trajectory module is based on a kinematic model, while the pedestrian trajectory module adopts an Attention-based Social GAN (AS-GAN) model that integrates a generative adversarial network with a soft attention mechanism, enhancing prediction accuracy through a dual-discriminator strategy involving adversarial loss and displacement loss. The risk detection module applies an elliptical buffer zone algorithm to perform dynamic spatial collision determination. Finally, a collision warning framework based on the Monte Carlo (MC) method is developed. Multiple sampled pedestrian trajectories are generated by applying Gaussian perturbations to the predicted mean trajectory and combined with vehicle trajectories and collision determination results to identify potential collision targets. Furthermore, the driver perception–braking time (TTM) is incorporated to estimate the joint collision probability and assist in warning decision-making. 
Simulation results show that the proposed warning method achieves an accuracy of 94.5% at unsignalized intersections, outperforming traditional Time-to-Collision (TTC) and braking distance models, and effectively reducing missed and false warnings, thereby improving pedestrian traffic safety at unsignalized intersections. Full article
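The core of the warning framework — Gaussian-perturbing the predicted pedestrian trajectory, checking each sample against an elliptical buffer zone around the vehicle, and counting hits Monte Carlo style — can be sketched as follows. The ellipse semi-axes, noise scale, sample count, and trajectories are all illustrative assumptions, not the paper's parameters.

```python
import math
import random

def in_ellipse(px, py, cx, cy, a, b):
    """Point inside an axis-aligned elliptical buffer centered on the vehicle."""
    return ((px - cx) / a) ** 2 + ((py - cy) / b) ** 2 <= 1.0

def collision_probability(ped_mean, veh, a=2.0, b=1.0, sigma=0.3, n=2000, seed=0):
    """Monte Carlo estimate: perturb the predicted pedestrian trajectory with
    Gaussian noise and count the fraction of samples that ever enter the
    vehicle's elliptical buffer at the same time step."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        if any(
            in_ellipse(px + rng.gauss(0, sigma), py + rng.gauss(0, sigma),
                       vx, vy, a, b)
            for (px, py), (vx, vy) in zip(ped_mean, veh)
        ):
            hits += 1
    return hits / n

ped = [(5.0, 0.0), (4.0, 0.0), (3.0, 0.0)]   # predicted mean pedestrian path
veh = [(0.0, 0.0), (1.5, 0.0), (3.0, 0.0)]   # vehicle path (same time steps)
p = collision_probability(ped, veh)           # paths meet at the last step
```

A warning decision would then compare this joint probability (further gated by the driver perception–braking time) against a threshold, which is the role the TTM term plays in the paper's framework.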

40 pages, 16352 KiB  
Review
Surface Protection Technologies for Earthen Sites in the 21st Century: Hotspots, Evolution, and Future Trends in Digitalization, Intelligence, and Sustainability
by Yingzhi Xiao, Yi Chen, Yuhao Huang and Yu Yan
Coatings 2025, 15(7), 855; https://doi.org/10.3390/coatings15070855 - 20 Jul 2025
Abstract
As vital material carriers of human civilization, earthen sites are experiencing continuous surface deterioration under the combined effects of weathering and anthropogenic damage. Traditional surface conservation techniques, due to their poor compatibility and limited reversibility, struggle to address the compound challenges of micro-scale degradation and macro-scale deformation. With the deep integration of digital twin technology, spatial information technologies, intelligent systems, and sustainable concepts, earthen site surface conservation technologies are transitioning from single-point applications to multidimensional integration. However, challenges remain in terms of the insufficient systematization of technology integration and the absence of a comprehensive interdisciplinary theoretical framework. Based on the dual-core databases of Web of Science and Scopus, this study systematically reviews the technological evolution of surface conservation for earthen sites between 2000 and 2025. CiteSpace 6.2 R4 and VOSviewer 1.6 were used for bibliometric visualization analysis, which was innovatively combined with manual close reading of the key literature and GPT-assisted semantic mining (error rate < 5%) to efficiently identify core research themes and infer deeper trends. 
The results reveal the following: (1) technological evolution follows a three-stage trajectory—from early point-based monitoring technologies, such as remote sensing (RS) and the Global Positioning System (GPS), to spatial modeling technologies, such as light detection and ranging (LiDAR) and geographic information systems (GIS), and, finally, to today’s integrated intelligent monitoring systems based on multi-source fusion; (2) the key surface technology system comprises GIS-based spatial data management, high-precision modeling via LiDAR, 3D reconstruction using oblique photogrammetry, and building information modeling (BIM) for structural protection, while cutting-edge areas focus on digital twin (DT) and the Internet of Things (IoT) for intelligent monitoring, augmented reality (AR) for immersive visualization, and blockchain technologies for digital authentication; (3) future research is expected to integrate big data and cloud computing to enable multidimensional prediction of surface deterioration, while virtual reality (VR) will overcome spatial–temporal limitations and push conservation paradigms toward automation, intelligence, and sustainability. This study, grounded in the technological evolution of surface protection for earthen sites, constructs a triadic framework of “intelligent monitoring–technological integration–collaborative application,” revealing the integration needs between DT and VR for surface technologies. It provides methodological support for addressing current technical bottlenecks and lays the foundation for dynamic surface protection, solution optimization, and interdisciplinary collaboration. Full article

22 pages, 32971 KiB  
Article
Spatial-Channel Multiscale Transformer Network for Hyperspectral Unmixing
by Haixin Sun, Qiuguang Cao, Fanlei Meng, Jingwen Xu and Mengdi Cheng
Sensors 2025, 25(14), 4493; https://doi.org/10.3390/s25144493 - 19 Jul 2025
Abstract
In recent years, deep learning (DL) has demonstrated remarkable capabilities in hyperspectral unmixing (HU) due to its powerful feature representation ability. Convolutional neural networks (CNNs) are effective in capturing local spatial information, but limited in modeling long-range dependencies. In contrast, transformer architectures extract global contextual features via multi-head self-attention (MHSA) mechanisms. However, most existing transformer-based HU methods focus only on spatial or spectral modeling at a single scale, lacking a unified mechanism to jointly explore spatial and channel-wise dependencies. This limitation is particularly critical for multiscale contextual representation in complex scenes. To address these issues, this article proposes a novel Spatial-Channel Multiscale Transformer Network (SCMT-Net) for HU. Specifically, a compact feature projection (CFP) module is first used to extract shallow discriminative features. Then, a spatial multiscale transformer (SMT) and a channel multiscale transformer (CMT) are sequentially applied to model contextual relations across spatial dimensions and long-range dependencies among spectral channels. In addition, a multiscale multi-head self-attention (MMSA) module is designed to extract rich multiscale global contextual and channel information, enabling a balance between accuracy and efficiency. An efficient feed-forward network (E-FFN) is further introduced to enhance inter-channel information flow and fusion. Experiments conducted on three real hyperspectral datasets (Samson, Jasper and Apex) and one synthetic dataset showed that SCMT-Net consistently outperformed existing approaches in both abundance estimation and endmember extraction, demonstrating superior accuracy and robustness. Full article
(This article belongs to the Section Sensor Networks)
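The MHSA mechanism underpinning SCMT-Net's transformer blocks reduces, per head, to scaled dot-product attention. A single-head sketch with toy values (all numbers below are illustrative assumptions):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """Scaled dot-product attention, the core of one MHSA head:
    each query attends to all keys and mixes the values accordingly."""
    d = len(K[0])
    out = []
    for q in Q:
        scores = softmax([sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                          for k in K])
        out.append([sum(w * v[j] for w, v in zip(scores, V))
                    for j in range(len(V[0]))])
    return out

# One query aligned with the first key: its value dominates the output mix
out = attention([[1.0, 0.0]],
                [[1.0, 0.0], [0.0, 1.0]],
                [[10.0, 0.0], [0.0, 10.0]])
```

A multiscale, multi-head variant (as in the MMSA module) runs several such heads over differently scaled token sets and concatenates the results; this sketch shows only the shared attention primitive.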

24 pages, 824 KiB  
Article
MMF-Gait: A Multi-Model Fusion-Enhanced Gait Recognition Framework Integrating Convolutional and Attention Networks
by Kamrul Hasan, Khandokar Alisha Tuhin, Md Rasul Islam Bapary, Md Shafi Ud Doula, Md Ashraful Alam, Md Atiqur Rahman Ahad and Md. Zasim Uddin
Symmetry 2025, 17(7), 1155; https://doi.org/10.3390/sym17071155 - 19 Jul 2025
Abstract
Gait recognition is a reliable biometric approach that uniquely identifies individuals based on their natural walking patterns. It is widely used to recognize individuals who are challenging to camouflage and do not require a person’s cooperation. The general face-based person recognition system often fails to determine the offender’s identity when they conceal their face by wearing helmets and masks to evade identification. In such cases, gait-based recognition is ideal for identifying offenders, and most existing work leverages a deep learning (DL) model. However, a single model often fails to capture a comprehensive selection of refined patterns in input data when external factors are present, such as variation in viewing angle, clothing, and carrying conditions. In response to this, this paper introduces a fusion-based multi-model gait recognition framework that leverages the potential of convolutional neural networks (CNNs) and a vision transformer (ViT) in an ensemble manner to enhance gait recognition performance. Here, CNNs capture spatiotemporal features, and ViT features multiple attention layers that focus on a particular region of the gait image. The first step in this framework is to obtain the Gait Energy Image (GEI) by averaging a height-normalized gait silhouette sequence over a gait cycle, which can handle the left–right gait symmetry of the gait. After that, the GEI image is fed through multiple pre-trained models and fine-tuned precisely to extract the depth spatiotemporal feature. Later, three separate fusion strategies are conducted, and the first one is decision-level fusion (DLF), which takes each model’s decision and employs majority voting for the final decision. The second is feature-level fusion (FLF), which combines the features from individual models through pointwise addition before performing gait recognition. Finally, a hybrid fusion combines DLF and FLF for gait recognition. 
The performance of the multi-model fusion-based framework was evaluated on three publicly available gait databases: CASIA-B, OU-ISIR D, and the OU-ISIR Large Population dataset. The experimental results demonstrate that the fusion-enhanced framework achieves superior performance. Full article
(This article belongs to the Special Issue Symmetry and Its Applications in Image Processing)
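Three of the operations this abstract describes have simple definitions: the GEI (pixel-wise mean of a silhouette sequence), decision-level fusion (majority vote), and feature-level fusion (pointwise addition). A minimal sketch with toy silhouettes and hypothetical identity labels:

```python
from collections import Counter

def gait_energy_image(silhouettes):
    """GEI: pixel-wise mean of a height-normalized silhouette sequence
    over one gait cycle."""
    n = len(silhouettes)
    rows, cols = len(silhouettes[0]), len(silhouettes[0][0])
    return [[sum(f[r][c] for f in silhouettes) / n for c in range(cols)]
            for r in range(rows)]

def decision_level_fusion(predictions):
    """DLF: majority vote over the per-model identity decisions."""
    return Counter(predictions).most_common(1)[0][0]

def feature_level_fusion(feature_vectors):
    """FLF: pointwise addition of the per-model feature vectors."""
    return [sum(vals) for vals in zip(*feature_vectors)]

gei = gait_energy_image([[[0, 1], [1, 1]],
                         [[1, 1], [0, 1]]])        # two toy 2x2 silhouettes
who = decision_level_fusion(["id_7", "id_3", "id_7"])  # hypothetical labels
fused = feature_level_fusion([[0.1, 0.4], [0.2, 0.1], [0.3, 0.2]])
```

The paper's hybrid fusion combines both strategies (voting over decisions made on fused features); this sketch shows only the two primitives it is built from.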

28 pages, 7608 KiB  
Article
A Forecasting Method for COVID-19 Epidemic Trends Using VMD and TSMixer-BiKSA Network
by Yuhong Li, Guihong Bi, Taonan Tong and Shirui Li
Computers 2025, 14(7), 290; https://doi.org/10.3390/computers14070290 - 18 Jul 2025
Abstract
The spread of COVID-19 is influenced by multiple factors, including control policies, virus characteristics, individual behaviors, and environmental conditions, exhibiting highly complex nonlinear dynamic features. The time series of new confirmed cases shows significant nonlinearity and non-stationarity. Traditional prediction methods that rely solely on one-dimensional case data struggle to capture the multi-dimensional features of the data and are limited in handling nonlinear and non-stationary characteristics. Their prediction accuracy and generalization capabilities remain insufficient, and most existing studies focus on single-step forecasting, with limited attention to multi-step prediction. To address these challenges, this paper proposes a multi-module fusion prediction model—TSMixer-BiKSA network—that integrates multi-feature inputs, Variational Mode Decomposition (VMD), and a dual-branch parallel architecture for 1- to 3-day-ahead multi-step forecasting of new COVID-19 cases. First, variables highly correlated with the target sequence are selected through correlation analysis to construct a feature matrix, which serves as one input branch. Simultaneously, the case sequence is decomposed using VMD to extract low-complexity, highly regular multi-scale modal components as the other input branch, enhancing the model’s ability to perceive and represent multi-source information. The two input branches are then processed in parallel by the TSMixer-BiKSA network model. Specifically, the TSMixer module employs a multilayer perceptron (MLP) structure to alternately model along the temporal and feature dimensions, capturing cross-time and cross-variable dependencies. The BiGRU module extracts bidirectional dynamic features of the sequence, improving long-term dependency modeling. The KAN module introduces hierarchical nonlinear transformations to enhance high-order feature interactions. 
Finally, the SA attention mechanism enables the adaptive weighted fusion of multi-source information, reinforcing inter-module synergy and enhancing the overall feature extraction and representation capability. Experimental results based on COVID-19 case data from Italy and the United States demonstrate that the proposed model significantly outperforms existing mainstream methods across various error metrics, achieving higher prediction accuracy and robustness. Full article
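The first step of this pipeline — selecting variables highly correlated with the target case sequence to build the feature matrix — can be sketched with Pearson correlation. The threshold and the toy series below are illustrative assumptions, not the paper's values.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def select_features(features, target, threshold=0.8):
    """Keep features whose |correlation| with the target passes a threshold
    (the threshold is an illustrative assumption)."""
    return [name for name, vals in features.items()
            if abs(pearson(vals, target)) >= threshold]

target = [1.0, 2.0, 3.0, 4.0]                  # toy new-case counts
feats = {"mobility": [2.0, 4.0, 6.0, 8.0],     # strongly correlated
         "noise":    [5.0, 1.0, 4.0, 2.0]}     # weakly correlated
kept = select_features(feats, target)
```

The retained columns would form one input branch of the model, alongside the VMD modal components of the case sequence itself.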

43 pages, 1035 KiB  
Review
A Review of Internet of Things Approaches for Vehicle Accident Detection and Emergency Notification
by Mohammad Ali Sahraei and Said Ramadhan Mubarak Al Mamari
Sustainability 2025, 17(14), 6510; https://doi.org/10.3390/su17146510 - 16 Jul 2025
Viewed by 778
Abstract
This research addresses the growing need to improve road safety through the application of Internet of Things (IoT) systems. Although several investigations have explored IoT-based accident detection, recent research remains fragmented, often concentrating on outdated technology or narrow use cases. This study fills that gap by systematically reviewing and comparing 101 peer-reviewed articles published between 2008 and 2025, with a focus on IoT systems for accident detection. The review categorizes approaches by the sensors used, integration frameworks, and detection techniques, examining Global System for Mobile Communications/Global Positioning System (GSM/GPS) modules, accelerometers, vibration sensors, and other advanced sensors. It highlights the constraints and advantages of existing techniques, emphasizing the importance of multi-sensor fusion in improving detection precision and reliability. Findings indicate that, although substantial progress has been made in IoT-based accident detection, problems such as high implementation costs, adverse weather conditions, and data-precision issues persist. The research also identifies deficiencies in standardization and the need for robust communication systems to improve the responsiveness of emergency services. Accordingly, the study proposes a roadmap for future development centered on IoT-enabled infrastructure, sensor-fusion approaches, and artificial intelligence. By offering a comprehensive perspective on IoT-based accident detection, this study provides insights for future research and suggests policies to facilitate implementation, ultimately enhancing road safety worldwide. Full article
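As a toy illustration of the multi-sensor fusion the review advocates, the sketch below flags an accident only when two independent sensors agree. The thresholds, readings, and function name are invented for the example; real systems calibrate such values per vehicle:

```python
import math

# Hypothetical thresholds for illustration only.
G_THRESHOLD = 4.0      # crash-level acceleration magnitude, in g
VIB_THRESHOLD = 0.7    # normalized vibration-sensor reading

def accident_detected(accel_xyz, vibration):
    """Fuse two sensors: flag an accident only when the acceleration
    magnitude AND the vibration reading both exceed their thresholds,
    reducing false alarms from any single noisy sensor."""
    magnitude = math.sqrt(sum(a * a for a in accel_xyz))
    return magnitude > G_THRESHOLD and vibration > VIB_THRESHOLD

print(accident_detected((0.1, 0.2, 1.0), 0.1))   # normal driving -> False
print(accident_detected((5.5, 2.0, 1.0), 0.9))   # hard impact -> True
```

Requiring agreement between sensors is the simplest form of the fusion logic the review credits with improving detection precision and dependability.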

26 pages, 27107 KiB  
Article
MSFUnet: A Semantic Segmentation Network for Crop Leaf Growth Status Monitoring
by Zhihan Cheng and He Yan
AgriEngineering 2025, 7(7), 238; https://doi.org/10.3390/agriengineering7070238 - 15 Jul 2025
Viewed by 373
Abstract
Monitoring the growth status of crop leaves is an integral part of agricultural management, supporting tasks such as leaf-shape analysis and area calculation, and accurate leaf segmentation is a critical step toward this goal. The task is challenging, however, because crop leaf images often feature substantial overlap that hinders precise differentiation of individual leaf edges; moreover, existing segmentation methods fail to preserve fine edge details, compromising precise morphological analysis. To overcome these challenges, we introduce MSFUnet, a semantic segmentation network that integrates a multi-path feature fusion (MFF) mechanism and an edge-detail focus (EDF) module. The MFF module fuses multi-scale features to improve the model's capacity to distinguish overlapping leaf areas, while the EDF module employs dilated convolution to accurately capture fine edge details. Together, these modules enable MSFUnet to achieve high-precision individual-leaf segmentation. In addition, standard image augmentations (e.g., contrast and brightness adjustments) were applied to mitigate the impact of variable lighting conditions on leaf appearance, improving model robustness. Experimental results indicate that MSFUnet attains an MIoU of 93.35%, outperforming conventional segmentation methods and highlighting its effectiveness in crop leaf growth monitoring. Full article
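The MIoU metric reported for MSFUnet is the standard mean Intersection-over-Union across classes; the sketch below shows the generic computation on toy masks and is not code from the paper:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection-over-Union: per-class IoU averaged over
    the classes present in either mask."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                 # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x2 masks with background (0) and leaf (1) classes.
pred   = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, target, num_classes=2))  # (1/2 + 2/3) / 2 = 0.5833...
```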

16 pages, 2721 KiB  
Article
An Adapter and Segmentation Network-Based Approach for Automated Atmospheric Front Detection
by Xinya Ding, Xuan Peng, Yanguang Xue, Liang Zhang, Tianying Wang and Yunpeng Zhang
Appl. Sci. 2025, 15(14), 7855; https://doi.org/10.3390/app15147855 - 14 Jul 2025
Viewed by 155
Abstract
This study presents AD-MRCNN, an advanced deep learning framework for automated atmospheric front detection that addresses two critical limitations of existing methods: current approaches feed raw meteorological data directly into the model without optimizing feature compatibility, which can hinder performance, and they typically provide only frontal category information without identifying individual frontal systems. Our solution integrates two key innovations. First, an intelligent adapter module performs adaptive feature fusion, automatically weighting and combining multi-source meteorological inputs (including temperature, wind fields, and humidity data) to maximize their synergistic effects while minimizing feature conflicts; this adapter yields an average improvement of over 4% across various metrics. Second, an enhanced instance segmentation network based on the Mask R-CNN architecture simultaneously achieves (1) precise frontal-type classification (cold/warm/stationary/occluded), (2) accurate spatial localization, and (3) identification of distinct frontal systems. Comprehensive evaluation using ERA5 reanalysis data (2009–2018) demonstrates significant improvements, including an 85.1% F1-score, outperforming traditional methods (TFP: 63.1%) and deep learning approaches (Unet: 83.3%), and a 31% reduction in false alarms compared with semantic segmentation methods. The framework's modular design allows potential application to other meteorological feature detection tasks; future work will focus on incorporating temporal dynamics for frontal evolution prediction. Full article
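The adapter's adaptive weighted fusion can be approximated as a softmax-weighted sum of the input fields. The shapes, fixed logits, and field names below are assumptions for illustration; in AD-MRCNN the weights would be learned during training rather than fixed:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D array of logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse_sources(sources, logits):
    """Weight each meteorological field (temperature, wind, humidity, ...)
    by a softmax-normalized scalar and sum them into one fused field."""
    w = softmax(np.asarray(logits, dtype=float))
    return sum(wi * s for wi, s in zip(w, sources))

temp = np.ones((4, 4))          # stand-in temperature grid
wind = 2 * np.ones((4, 4))      # stand-in wind-speed grid
hum  = 3 * np.ones((4, 4))      # stand-in humidity grid
fused = fuse_sources([temp, wind, hum], logits=[0.0, 0.0, 0.0])
print(fused[0, 0])  # equal logits -> plain average = 2.0
```

Because the weights are softmax-normalized, a conflicting or low-quality source can be driven toward zero weight, which is the feature-conflict behavior the adapter is designed to suppress.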
