Search Results (2,078)

Search Parameters:
Keywords = multi-time fusion

23 pages, 1659 KB  
Article
A Multi-View-Based Federated Learning Approach for Intrusion Detection
by Jia Yu, Guoqiang Wang, Nianfeng Shi, Raghav Saxena and Brian Lee
Electronics 2025, 14(21), 4166; https://doi.org/10.3390/electronics14214166 - 24 Oct 2025
Abstract
Intrusion detection aims to identify unauthorized activities within computer networks or systems by classifying events as normal or abnormal. As modern scenarios often involve multi-source data, multi-view fusion deep learning methods are employed to leverage diverse viewpoints for enhancing security threat detection. This paper introduces a novel intrusion detection approach using multi-view fusion within a federated learning framework, proposing an integrated AE Neural SVM (AE-NSVM) model that combines auto-encoder (AE) multi-view feature extraction with Support Vector Machine (SVM) classification. The approach simultaneously learns representative features from multiple views and classifies network samples into normal traffic or seven attack categories, while employing federated learning across clients to ensure adaptability and robustness in diverse network environments. Experimental results on two benchmark datasets validate its superiority: on TON_IoT, the CAE-NSVM model achieves the highest F1-measure of 0.792 (1.4% higher than traditional pipeline systems); on UNSW-NB15, it delivers an F1-score of 0.829 with 73% less training time and 89% faster inference than baseline models. These results demonstrate the advantages of multi-view fusion in federated learning for balancing accuracy and efficiency in distributed intrusion detection systems.
(This article belongs to the Special Issue Advances in Data Security: Challenges, Technologies, and Applications)
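As a concrete illustration of the AE-plus-SVM pattern this abstract describes, the sketch below trains one auto-encoder per view and feeds the concatenated latent codes to an SVM. It is a minimal single-client sketch on synthetic data; the view split, layer sizes, and training loop are assumptions, not the authors' federated configuration.

```python
# Minimal sketch: per-view auto-encoder features + SVM classifier.
# Assumptions: two synthetic "views", one hidden layer per encoder,
# single client (no federated averaging). Not the authors' exact model.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

class ViewAE(nn.Module):
    def __init__(self, d_in, d_lat=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_lat), nn.ReLU())
        self.dec = nn.Linear(d_lat, d_in)
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

def fit_ae(ae, x, epochs=50):
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    for _ in range(epochs):
        recon, _ = ae(x)
        loss = nn.functional.mse_loss(recon, x)
        opt.zero_grad(); loss.backward(); opt.step()

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20)).astype("float32")   # 200 flows, 20 features
y = rng.integers(0, 2, 200)                        # normal vs. attack
views = [torch.tensor(X[:, :10]), torch.tensor(X[:, 10:])]

latents = []
for v in views:                       # one auto-encoder per view
    ae = ViewAE(v.shape[1])
    fit_ae(ae, v)
    latents.append(ae.enc(v).detach().numpy())

Z = np.concatenate(latents, axis=1)   # fused multi-view representation
clf = SVC().fit(Z, y)                 # downstream SVM classifier
print("train accuracy:", clf.score(Z, y))
```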

15 pages, 10961 KB  
Article
Research on Visual Target Detection Method for Smart City Unmanned Aerial Vehicles Based on Transformer
by Bo Qi, Hang Shi, Bocheng Zhao, Rongjun Mu and Mingying Huo
Aerospace 2025, 12(11), 949; https://doi.org/10.3390/aerospace12110949 - 24 Oct 2025
Abstract
Unmanned aerial vehicles play a significant role in the automated inspection of future smart cities, helping safeguard residents' lives and property and keep the city running normally. However, small targets in drone images can be difficult to detect, and detection becomes unreliable when targets resemble their surroundings. To address these problems, this paper proposes a real-time, Transformer-based target detection method for unmanned aerial vehicle images. To compensate for the weak visual features of small targets, a feature fusion module is designed that lets features at different levels interact and fuse, improving the feature representation of small targets. To handle the discontinuous features that arise when a target resembles its environment, a Transformer-based multi-head attention algorithm is designed that extracts the target's context information and improves recognition of environment-like targets. On a target image dataset collected by unmanned aerial vehicles in smart cities, the proposed method reaches a detection accuracy of 85.9%.
(This article belongs to the Section Aeronautics)
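The context-aggregation idea described above rests on standard multi-head self-attention over feature-map tokens. A minimal sketch with torch.nn.MultiheadAttention follows; the shapes and head count are illustrative assumptions, not the paper's architecture.

```python
# Standard multi-head self-attention over a flattened feature map, as a
# stand-in for the paper's context module (shapes and head count assumed).
import torch
import torch.nn as nn

B, C, H, W = 1, 64, 32, 32                  # batch, channels, spatial dims
feat = torch.randn(B, C, H, W)              # backbone feature map
tokens = feat.flatten(2).permute(0, 2, 1)   # (B, H*W, C): one token per pixel

mha = nn.MultiheadAttention(embed_dim=C, num_heads=8, batch_first=True)
ctx, attn = mha(tokens, tokens, tokens)     # self-attention gathers context
ctx = ctx.permute(0, 2, 1).view(B, C, H, W) # back to a feature map
print(ctx.shape)                            # torch.Size([1, 64, 32, 32])
```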

18 pages, 3445 KB  
Article
Underwater Object Detection Algorithm Based on YOLOv8-Improved Multimodality Image Fusion Technology
by Yage Qie, Chao Fang, Jinghua Huang, Donghao Wu and Jian Jiang
Machines 2025, 13(11), 982; https://doi.org/10.3390/machines13110982 - 24 Oct 2025
Abstract
The field of underwater robotics is experiencing rapid growth, and accurate object detection is a fundamental component of it. Given the prevalence of false alarms and missed detections caused by intricate subaquatic conditions and substantial image noise, this study introduces an enhanced detection framework that combines the YOLOv8 architecture with multimodal visual fusion. To counter degraded detection performance in complex environments such as low illumination, features from visible-light images are fused with the thermal distribution features of infrared images, yielding more comprehensive image information. Furthermore, to focus precisely on crucial target regions, a Multi-Scale Cross-Axis Attention mechanism (MSCA) is introduced, which significantly enhances detection accuracy. Finally, to keep the model lightweight, an Efficient Shared Convolution Head (ESC_Head) is designed. Experimental findings reveal that the YOLOv8-FUSED framework attains a mean average precision (mAP) of 82.1%, an 8.7% improvement over the baseline YOLOv8 architecture, while outperforming existing techniques and satisfying the real-time requirements of underwater object detection.
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
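For readers wanting a starting point for visible-infrared fusion of the kind described above, the simplest baseline is channel-level early fusion: stack the RGB and thermal channels and learn a 1x1 projection. This sketch is an illustrative stand-in, not the paper's fusion module.

```python
# Channel-level early fusion of visible and infrared inputs: concatenate
# the RGB and thermal channels, then project back to three channels so a
# standard detector backbone can consume them. Illustrative baseline only.
import torch
import torch.nn as nn

rgb = torch.randn(1, 3, 640, 640)        # visible-light image
ir = torch.randn(1, 1, 640, 640)         # single-channel thermal image

fused = torch.cat([rgb, ir], dim=1)      # (1, 4, 640, 640)
proj = nn.Conv2d(4, 3, kernel_size=1)    # learnable 1x1 fusion
out = proj(fused)
print(out.shape)                         # torch.Size([1, 3, 640, 640])
```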

37 pages, 14970 KB  
Article
Research on Strawberry Visual Recognition and 3D Localization Based on Lightweight RAFS-YOLO and RGB-D Camera
by Kaixuan Li, Xinyuan Wei, Qiang Wang and Wuping Zhang
Agriculture 2025, 15(21), 2212; https://doi.org/10.3390/agriculture15212212 - 24 Oct 2025
Abstract
Improving the accuracy and real-time performance of strawberry recognition and localization algorithms remains a major challenge in intelligent harvesting. To address this, the study presents an integrated approach for strawberry maturity detection and 3D localization that combines a lightweight deep learning model with an RGB-D camera. Built upon the YOLOv11 framework, an enhanced RAFS-YOLO model is developed, incorporating three core modules to strengthen multi-scale feature fusion and spatial modeling. Specifically, the CRA module enhances spatial relationship perception through cross-layer attention, the HSFPN module performs hierarchical semantic filtering to suppress redundant features, and the DySample module dynamically optimizes the upsampling process to improve computational efficiency. By combining the trained model with RGB-D depth data, the method achieves precise 3D localization of strawberries through coordinate mapping based on detection-box centers. Experimental results indicate that RAFS-YOLO surpasses YOLOv11n, improving precision, recall, and mAP@50 by 4.2%, 3.8%, and 2.0%, respectively, while reducing parameters by 36.8% and computational cost by 23.8%. The 3D localization attains millimeter-level precision, with average RMSE values of 0.21 to 0.31 cm across all axes. Overall, the proposed approach balances detection accuracy, model efficiency, and localization precision, providing a reliable perception framework for intelligent strawberry-picking robots.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
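The coordinate-mapping step the abstract mentions is the standard pinhole back-projection of a detection-box center using the depth value and the camera intrinsics. A minimal sketch follows; the intrinsic values are placeholders, not the paper's calibration.

```python
# Back-project a detection-box center to camera coordinates with the
# pinhole model. Use the RGB-D camera's calibrated fx, fy, cx, cy.
def pixel_to_camera_xyz(u, v, depth_m, fx, fy, cx, cy):
    """Map pixel (u, v) with depth (meters) to camera-frame (X, Y, Z)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m

# Example: box center at (412, 305) px, 0.48 m away (illustrative numbers).
print(pixel_to_camera_xyz(412, 305, 0.48, fx=606.0, fy=606.0, cx=320.0, cy=240.0))
```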

24 pages, 5556 KB  
Article
Efficient Wearable Sensor-Based Activity Recognition for Human–Robot Collaboration in Agricultural Environments
by Sakorn Mekruksavanich and Anuchit Jitpattanakul
Informatics 2025, 12(4), 115; https://doi.org/10.3390/informatics12040115 - 23 Oct 2025
Abstract
This study focuses on human awareness, a critical component of human–robot interaction, particularly in agricultural environments where interactions are enriched by complex contextual information. The main objective is to identify human activities during collaborative harvesting tasks involving humans and robots. To achieve this, we propose a novel, lightweight deep learning model, 1D-ResNeXt, designed explicitly for recognizing activities in agriculture-related human–robot collaboration. The model is an end-to-end architecture incorporating feature fusion and a multi-kernel convolutional block strategy; it uses residual connections and a split–transform–merge mechanism to mitigate performance degradation and limit the number of trainable parameters. Sensor data were collected from twenty individuals wearing five devices placed on different body parts, each embedding tri-axial accelerometers, gyroscopes, and magnetometers. Under real field conditions, the participants performed sub-tasks common in agricultural labor, such as lifting and carrying loads. Before classification, the raw sensor signals were pre-processed to eliminate noise, and the cleaned time series were fed into the proposed network for sequential pattern recognition. Experimental results showed that the chest-mounted sensor achieved the highest F1-score of 99.86%, outperforming other sensor placements and combinations. An analysis of temporal window sizes (0.5, 1.0, 1.5, and 2.0 s) showed that the 0.5 s window gave the best recognition performance, indicating that key agricultural activity features can be captured over short intervals. A comprehensive evaluation of sensor modalities revealed that multimodal fusion of accelerometer, gyroscope, and magnetometer data yielded the best accuracy at 99.92%, while the accelerometer and gyroscope combination offered an optimal compromise, achieving 99.49% accuracy with lower system complexity. These findings highlight the importance of strategic sensor placement and data fusion in improving activity recognition while reducing data and computational requirements, contributing to intelligent, efficient, and adaptive collaborative systems for agriculture and beyond, with improved safety, cost-efficiency, and real-time operational capability.
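The window-size analysis above relies on fixed-length segmentation of the wearable streams. A minimal windowing sketch follows; the 50 Hz sampling rate and hop length are assumptions, not the paper's acquisition settings.

```python
# Fixed-length window segmentation of tri-axial sensor streams. At an
# assumed 50 Hz sampling rate, a 0.5 s window is 25 samples.
import numpy as np

def sliding_windows(signal, win_s=0.5, hop_s=0.25, fs=50):
    """signal: (T, channels) array -> (n_windows, win, channels)."""
    win, hop = int(win_s * fs), int(hop_s * fs)
    starts = range(0, signal.shape[0] - win + 1, hop)
    return np.stack([signal[s:s + win] for s in starts])

stream = np.random.randn(1000, 9)      # acc + gyro + mag, 3 axes each
print(sliding_windows(stream).shape)   # (n_windows, 25, 9)
```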

24 pages, 3366 KB  
Article
Study of the Optimal YOLO Visual Detector Model for Enhancing UAV Detection and Classification in Optoelectronic Channels of Sensor Fusion Systems
by Ildar Kurmashev, Vladislav Semenyuk, Alberto Lupidi, Dmitriy Alyoshin, Liliya Kurmasheva and Alessandro Cantelli-Forti
Drones 2025, 9(11), 732; https://doi.org/10.3390/drones9110732 - 23 Oct 2025
Abstract
The rapid spread of unmanned aerial vehicles (UAVs) has created new challenges for airspace security, as drones are increasingly used for surveillance, smuggling, and potentially attacks near critical infrastructure. A key difficulty lies in reliably distinguishing UAVs from visually similar birds in electro-optical surveillance channels, where complex backgrounds and visual noise often increase false alarms. To address this, we investigated recent YOLO architectures and developed an enhanced model, YOLOv12-ADBC, incorporating an adaptive hierarchical feature integration mechanism to strengthen multi-scale spatial fusion. This architectural refinement improves sensitivity to subtle inter-class differences between drones and birds. A dedicated dataset of 7291 images was used to train and evaluate five YOLO versions (v8–v12) together with the proposed YOLOv12-ADBC. Comparative experiments showed that YOLOv12-ADBC achieved the best overall performance, with precision of 0.892, recall of 0.864, mAP50 of 0.881, mAP50–95 of 0.633, and per-class accuracy of 96.4% for drones and 80% for birds. In inference tests on three video sequences simulating realistic monitoring conditions, YOLOv12-ADBC consistently outperformed the baselines, achieving detection accuracy of 92.1–95.5% and confidence levels up to 88.6% while maintaining real-time processing at 118–135 frames per second (FPS). These results demonstrate that YOLOv12-ADBC not only surpasses previous YOLO models but also offers strong potential as the optical module in multi-sensor fusion frameworks; its integration with radar, RF, and acoustic channels is expected to further enhance system-level robustness, providing a practical path toward reliable UAV detection in modern airspace protection systems.

23 pages, 3312 KB  
Article
Automatic Picking Method for the First Arrival Time of Microseismic Signals Based on Fractal Theory and Feature Fusion
by Huicong Xu, Kai Li, Pengfei Shan, Xuefei Wu, Shuai Zhang, Zeyang Wang, Chenguang Liu, Zhongming Yan, Liang Wu and Huachuan Wang
Fractal Fract. 2025, 9(11), 679; https://doi.org/10.3390/fractalfract9110679 - 23 Oct 2025
Abstract
Microseismic signals induced by mining activities often have low signal-to-noise ratios, and traditional picking methods are easily affected by noise, making accurate identification of P-wave arrivals difficult. To address this, the study proposes an adaptive denoising algorithm based on wavelet-threshold-enhanced Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and develops an automatic P-wave arrival picking method incorporating fractal box-dimension features, along with a corresponding accuracy evaluation framework. The raw microseismic signals are decomposed with the improved CEEMDAN method; high-frequency intrinsic mode functions (IMFs) are wavelet-threshold denoised, while low- and mid-frequency IMFs are retained for reconstruction, effectively suppressing background noise and enhancing signal clarity. The fractal box dimension characterizes waveform complexity over short and long time windows, and by introducing fractal derivatives and short–long window differences, abrupt changes in local-to-global complexity at P-wave arrivals are revealed. Energy mutation features are extracted using the short-term/long-term average (STA/LTA) energy ratio, and noise segments are standardized via Z-score processing. A multi-feature weighted fusion scoring function then yields robust identification of P-wave arrivals. Evaluation metrics, including picking error, mean absolute error, and success rate, assess the method's temporal deviation, statistical consistency, and robustness. Case studies on microseismic data from a mining site show that the method accurately identifies P-wave arrivals under different signal-to-noise conditions, with automatic picks highly consistent with manual labels, mean errors within the sampling interval (2–4 ms), and a picking success rate exceeding 95%. The method provides a reliable tool for seismic source localization and dynamic hazard prediction in mining microseismic monitoring.
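Of the picker's ingredients, the STA/LTA energy ratio is the most standard and easiest to sketch. The snippet below computes it on a synthetic trace; the window lengths are illustrative, and the fractal-dimension and fusion-scoring stages are not reproduced.

```python
# Classic STA/LTA energy-ratio trigger, one ingredient of the picker above.
import numpy as np

def sta_lta(x, n_sta, n_lta):
    """Short-term / long-term average of signal energy; peaks mark onsets."""
    energy = x.astype(float) ** 2
    csum = np.cumsum(energy)
    sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta
    lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta
    n = min(len(sta), len(lta))          # align the trailing samples
    return sta[-n:] / (lta[-n:] + 1e-12)

sig = np.random.randn(4000)
sig[2500:] += 5 * np.random.randn(1500)  # synthetic "arrival"
ratio = sta_lta(sig, n_sta=50, n_lta=500)
print("max STA/LTA:", ratio.max())
```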

15 pages, 7019 KB  
Article
SDO-YOLO: A Lightweight and Efficient Road Object Detection Algorithm Based on Improved YOLOv11
by Peng Ji and Zonglin Jiang
Appl. Sci. 2025, 15(21), 11344; https://doi.org/10.3390/app152111344 - 22 Oct 2025
Abstract
Background: In autonomous driving, existing object detection algorithms still face excessive parameter counts and insufficient accuracy, particularly for dense targets, occlusions, distant small targets, and variable backgrounds in complex road scenes, where balancing real-time performance and accuracy remains difficult. Methods: This study introduces SDO-YOLO, an enhancement of YOLOv11n. First, to cut the parameter count while preserving feature representation, spatial-channel reconstruction convolution is used to enhance the HGNetv2 network, streamlining redundant computation in feature extraction. Then, a large-kernel separable attention mechanism is introduced, decoupling two-dimensional convolutions into cascaded one-dimensional dilated convolutions, which expands the receptive field while reducing computational complexity. Next, to improve detection accuracy, a reparameterized generalized feature pyramid network is constructed, incorporating CSPStage structures and dynamic channel regulation to optimize multi-scale feature fusion at inference. Results: On the KITTI dataset, SDO-YOLO achieves a 2.8% increase in mAP@0.5 over the baseline, alongside reductions of 7.9% in parameters and 6.3% in computation. Generalization tests on the BDD100K and UA-DETRAC datasets yield mAP@0.5 improvements of 1.9% and 3.7%, respectively. Conclusions: SDO-YOLO improves both accuracy and efficiency, demonstrating strong robustness across diverse scenarios and adaptability across datasets.
(This article belongs to the Special Issue AI in Object Detection)
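The large-kernel separable attention described above decomposes a 2D kernel into cascaded 1D depthwise convolutions plus dilated 1D pairs, then forms an attention map with a 1x1 convolution. The sketch below follows that general pattern; the kernel sizes and dilation are assumptions, not the SDO-YOLO configuration.

```python
# Separable large-kernel attention: a 2D kernel decomposed into cascaded
# 1D depthwise convolutions, plus dilated 1D pairs for a wider receptive
# field; a 1x1 conv then forms the attention map. Sizes assumed.
import torch
import torch.nn as nn

class SeparableLargeKernelAttn(nn.Module):
    def __init__(self, ch, k=5, k_dil=7, dilation=3):
        super().__init__()
        p, pd = k // 2, (k_dil // 2) * dilation
        self.h = nn.Conv2d(ch, ch, (1, k), padding=(0, p), groups=ch)
        self.v = nn.Conv2d(ch, ch, (k, 1), padding=(p, 0), groups=ch)
        self.hd = nn.Conv2d(ch, ch, (1, k_dil), padding=(0, pd),
                            dilation=dilation, groups=ch)
        self.vd = nn.Conv2d(ch, ch, (k_dil, 1), padding=(pd, 0),
                            dilation=dilation, groups=ch)
        self.proj = nn.Conv2d(ch, ch, 1)
    def forward(self, x):
        attn = self.proj(self.vd(self.hd(self.v(self.h(x)))))
        return x * attn                  # attention-weighted features

x = torch.randn(1, 32, 40, 40)
print(SeparableLargeKernelAttn(32)(x).shape)  # torch.Size([1, 32, 40, 40])
```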

30 pages, 11870 KB  
Article
Early Mapping of Farmland and Crop Planting Structures Using Multi-Temporal UAV Remote Sensing
by Lu Wang, Yuan Qi, Juan Zhang, Rui Yang, Hongwei Wang, Jinlong Zhang and Chao Ma
Agriculture 2025, 15(21), 2186; https://doi.org/10.3390/agriculture15212186 - 22 Oct 2025
Abstract
Fine-grained identification of crop planting structures provides key data for precision agriculture, supporting scientific production and evidence-based policy making. This study selected a representative experimental farmland in Qingyang, Gansu Province, and acquired Unmanned Aerial Vehicle (UAV) multi-temporal data (six epochs) from multiple sensors (multispectral [visible–NIR], thermal infrared, and LiDAR). By fusing 59 feature indices, we achieved high-accuracy extraction of cropland and planting structures and identified the key feature combinations that discriminate among crops. The results show that (1) multi-source UAV data from April + June can effectively delineate cropland and enable accurate plot segmentation; (2) July is the optimal time window for fine-scale extraction of all planting-structure types in the area (legumes, millet, maize, buckwheat, wheat, sorghum, maize–legume intercropping, and vegetables), with a cumulative importance of 72.26% for the top ten features, while the April + June combination retains most of the separability (67.36%), enabling earlier but slightly less precise mapping; and (3) on July imagery, the SAM (Segment Anything Model) segmentation + RF (Random Forest) classification approach, using the RF-selected top 10 of the 59 features, achieved an overall accuracy of 92.66% with a Kappa of 0.9163, a 7.57% improvement over the contemporaneous SAM + CNN (Convolutional Neural Network) method. This work establishes a basis for UAV-based recognition of typical crops in the Qingyang sector of the Loess Plateau and, by deriving optimal recognition timelines and feature combinations from multi-epoch data, offers useful guidance for satellite-based mapping of planting structures across the Loess Plateau following multi-scale data fusion.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
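The "RF-selected top 10 of the 59 features" step can be reproduced in outline with scikit-learn's impurity-based importances: rank features with a fitted forest, then refit on the ten best. Synthetic data stands in for the 59 UAV feature indices here.

```python
# Random-forest feature ranking and a top-10 refit, mirroring the
# RF-based feature selection described above. Data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 59))   # 59 spectral/thermal/LiDAR indices
y = rng.integers(0, 8, 500)      # 8 planting-structure classes

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
top10 = np.argsort(rf.feature_importances_)[::-1][:10]
print("cumulative importance of top 10:",
      rf.feature_importances_[top10].sum())

rf_small = RandomForestClassifier(n_estimators=300, random_state=0)
rf_small.fit(X[:, top10], y)     # classifier used for the final map
```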

23 pages, 4442 KB  
Article
Efficient and Lightweight LD-SAGE Model for High-Accuracy Leaf Disease Segmentation in Understory Ginseng
by Yanlei Xu, Ziyuan Yu, Dongze Wang, Chao Liu, Zhen Lu, Chen Zhao and Yang Zhou
Agronomy 2025, 15(11), 2450; https://doi.org/10.3390/agronomy15112450 - 22 Oct 2025
Abstract
Understory ginseng, of superior quality compared to field-cultivated varieties, is highly susceptible to diseases that reduce both its yield and quality. This paper therefore proposes a lightweight, high-precision leaf-spot segmentation model, Lightweight DeepLabv3+ with a StarNet Backbone and Attention-guided Gaussian Edge Enhancement (LD-SAGE). The study first introduces StarNet into the DeepLabv3+ framework to replace the Xception backbone, reducing the parameter count and computational complexity. Second, the Gaussian-Edge Channel Fusion module uses multi-scale Gaussian convolutions to smooth blurry areas, combining Scharr edge-enhanced features with a lightweight channel attention mechanism for efficient integration of edge and semantic features. Finally, the proposed Multi-scale Attention-guided Context Modulation module replaces the traditional Atrous Spatial Pyramid Pooling, integrating Multi-scale Grouped Dilated Convolution, Convolutional Multi-Head Self-Attention, and dynamic modulation fusion; this reduces computational cost and improves the model's ability to capture contextual information and texture details in diseased areas. Experimental results show that LD-SAGE achieves an mIoU of 92.48%, outperforming other models in precision and recall, with a parameter count only 4.6% of the original and GFLOPs reduced to 22.1% of the baseline. Deployment experiments on a Jetson Orin Nano device further confirm the method's real-time frame-rate advantage, supporting the diagnosis of leaf diseases in understory ginseng.
(This article belongs to the Section Pest and Disease Management)
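Two raw ingredients of the Gaussian-Edge Channel Fusion module, multi-scale Gaussian smoothing and Scharr edge magnitudes, can be sketched with OpenCV as below; the fusion and channel-attention steps themselves are not reproduced, and the kernel sizes are assumptions.

```python
# Multi-scale Gaussian smoothing followed by Scharr edge magnitudes.
# Requires opencv-python; the input here is a random placeholder image.
import cv2
import numpy as np

img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)

scales = [cv2.GaussianBlur(img, (k, k), 0) for k in (3, 5, 7)]
edges = [cv2.magnitude(cv2.Scharr(s, cv2.CV_32F, 1, 0),
                       cv2.Scharr(s, cv2.CV_32F, 0, 1)) for s in scales]
stack = np.stack(edges)      # (3, H, W) multi-scale edge maps
print(stack.shape)
```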

18 pages, 2797 KB  
Article
DW-YOLO: A Model for Identifying Surface Characteristics and Distinguishing Grades of Graphite Ore
by Xin Zhang, Xueyu Huang and Yuxing Yu
Appl. Sci. 2025, 15(21), 11321; https://doi.org/10.3390/app152111321 - 22 Oct 2025
Abstract
Graphite's critical role in modern industries necessitates efficient ore-grade detection to optimize production costs and resource utilization. To overcome the limitations of traditional inspection systems on heterogeneous graphite ore samples with varying carbon content, we propose DW-YOLOv8, a YOLOv8s-based framework enhanced through three core innovations: (1) a WIoU loss for dynamic anchor prioritization, (2) a C2f_UniRepLKNetBlock for multi-scale feature extraction, and (3) a PAFPN for adaptive feature fusion. Evaluated on a dataset collected from the China Minmetals Heilongjiang graphite mine, the model achieves 93.88% mAP50, surpassing the baseline YOLOv8s by 9.6 percentage points. By balancing precision (a 9.6% improvement) against computational efficiency (9.4% fewer parameters), DW-YOLOv8 demonstrates robust deployment readiness for real-time industrial applications.

24 pages, 11432 KB  
Article
MRDAM: Satellite Cloud Image Super-Resolution via Multi-Scale Residual Deformable Attention Mechanism
by Liling Zhao, Zichen Liao and Quansen Sun
Remote Sens. 2025, 17(21), 3509; https://doi.org/10.3390/rs17213509 - 22 Oct 2025
Abstract
High-resolution meteorological satellite cloud imagery plays a crucial role in diagnosing and forecasting sudden, localized severe convective weather such as tropical cyclones. However, constrained by imaging principles and various internal and external interferences during data acquisition, current satellite imagery often fails to meet the spatiotemporal resolution requirements for fine-scale monitoring of these weather systems; in particular, existing spatial resolutions are insufficient for real-time tracking of tropical cyclone genesis and evolution and for capturing detailed cloud-structure variations within cyclone cores. Software-based super-resolution techniques for meteorological satellite cloud imagery therefore hold significant application potential. This paper proposes a Multi-scale Residual Deformable Attention Model (MRDAM) based on Generative Adversarial Networks (GANs), designed specifically for satellite cloud image super-resolution given the morphological diversity and non-rigid deformation characteristics of clouds. The generator incorporates two key components: a Multi-scale Feature Progressive Fusion Module (MFPFM), which enhances texture-detail preservation and spectral consistency in reconstructed images, and a Deformable Attention Additive Fusion Module (DAAFM), which captures irregular cloud-pattern features through adaptive spatial-attention mechanisms. Comparative experiments against multiple GAN-based super-resolution baselines show that MRDAM achieves superior performance in both objective metrics (PSNR/SSIM) and subjective visual quality.
(This article belongs to the Special Issue Neural Networks and Deep Learning for Satellite Image Processing)

25 pages, 1741 KB  
Article
Event-Aware Multimodal Time-Series Forecasting via Symmetry-Preserving Graph-Based Cross-Regional Transfer Learning
by Shu Cao and Can Zhou
Symmetry 2025, 17(11), 1788; https://doi.org/10.3390/sym17111788 - 22 Oct 2025
Abstract
Forecasting real-world time series in domains with strong event sensitivity and regional variability poses unique challenges, as predictive models must account for sudden disruptions, heterogeneous contextual factors, and structural differences across locations. In tackling these challenges, we draw on the concept of symmetry, here meaning the balance and invariance patterns across temporal, multimodal, and structural dimensions that help reveal consistent relationships and recurring patterns in complex systems. The study is based on two multimodal datasets covering 12 tourist regions and more than 3 years of records, ensuring the robustness and practical relevance of the results. In many applications, such as monitoring economic indicators, assessing operational performance, or predicting demand patterns, short-term fluctuations are often triggered by discrete events, policy changes, or external incidents, which conventional statistical and deep learning approaches struggle to model effectively. To address these limitations, we propose an event-aware multimodal time-series forecasting framework with graph-based regional transfer, built on an enhanced PatchTST backbone. The framework unifies multimodal feature extraction, event-sensitive temporal reasoning, and graph-based structural adaptation. Unlike Informer, Autoformer, FEDformer, or PatchTST, our model explicitly addresses naive multimodal fusion, event-agnostic modeling, and weak cross-regional transfer by introducing an event-aware Multimodal Encoder, a Temporal Event Reasoner, and a Multiscale Graph Module. Experiments on diverse multi-region multimodal datasets demonstrate substantial improvements over eight state-of-the-art baselines in forecasting accuracy, event response modeling, and transfer efficiency: our model achieves a 15.06% improvement in the event recovery index, a 15.1% reduction in MAE, and a 19.7% decrease in event response error compared to PatchTST, highlighting its empirical impact on tourism event economics forecasting.

16 pages, 1300 KB  
Article
Multi-Class Segmentation and Classification of Intestinal Organoids: YOLO Stand-Alone vs. Hybrid Machine Learning Pipelines
by Luana Conte, Giorgio De Nunzio, Giuseppe Raso and Donato Cascio
Appl. Sci. 2025, 15(21), 11311; https://doi.org/10.3390/app152111311 - 22 Oct 2025
Abstract
Background: The automated analysis of intestinal organoids in microscopy images is essential for high-throughput morphological studies, enabling precision and scalability; traditional manual analysis is time-consuming and subject to observer bias, whereas Machine Learning (ML) approaches have recently demonstrated superior performance. Purpose: This study evaluates YOLO (You Only Look Once) for organoid segmentation and classification, comparing its standalone performance with a hybrid pipeline that couples DL-based feature extraction with ML classifiers. Methods: The dataset, consisting of 840 light-microscopy images and over 23,000 annotated intestinal organoids, was divided into training (756 images) and validation (84 images) sets. Organoids were categorized into four morphological classes: cystic non-budding organoids (Org0), early organoids (Org1), late organoids (Org3), and spheroids (Sph). YOLO version 10 (YOLOv10) was trained as a segmenter-classifier for detecting and classifying organoids; its standalone performance was measured by Average Precision (AP), mean AP at 50% overlap (mAP50), and the confusion matrix on the validation set. In the hybrid pipeline, the trained YOLOv10 supplied segmented bounding boxes, and features extracted from these regions with YOLOv10 and ResNet50 were classified by ML algorithms including Logistic Regression, Naive Bayes, K-Nearest Neighbors (KNN), Random Forest, eXtreme Gradient Boosting (XGBoost), and Multi-Layer Perceptrons (MLP). Classifier performance was assessed using the Receiver Operating Characteristic (ROC) curve and its Area Under the Curve (AUC), precision, F1 score, and confusion-matrix metrics. Principal Component Analysis (PCA) was applied to reduce feature dimensionality while retaining 95% of cumulative variance, and an ensemble based on AUC-weighted probability fusion was implemented to combine predictions across classifiers. Results: Standalone YOLOv10 achieved an overall mAP50 of 0.845, with high AP across all four classes (range 0.797–0.901). In the hybrid pipeline, features extracted with ResNet50 outperformed those extracted with YOLO, with multiple classifiers achieving AUC scores of 0.71–0.98 on the validation set. Logistic Regression was the best-performing classifier, achieving the highest AUC scores across multiple classes (0.93–0.98). PCA-based feature selection did not improve classification performance, while the AUC-weighted ensemble further enhanced it, leveraging the strengths of multiple classifiers, as shown by improved ROC-AUC scores across all organoid classes (0.92–0.98). Conclusions: This study demonstrates the effectiveness of YOLOv10 as a standalone model and the robustness of hybrid pipelines combining ResNet50 feature extraction with ML classifiers, ensuring reproducible, automated, and precise morphological analysis with significant potential for high-throughput organoid studies and live-imaging applications.
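Two steps of the hybrid pipeline are easy to make concrete: PCA retaining 95% of cumulative variance, and AUC-weighted fusion of per-classifier probabilities. The sketch below uses synthetic binary-label data (the study has four classes) and only two of the paper's classifiers.

```python
# PCA at 95% retained variance, plus AUC-weighted probability fusion of
# two classifiers. Synthetic stand-in data; binary labels for brevity.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 128))      # deep features per bounding box
y = rng.integers(0, 2, 600)
Xtr, Xva, ytr, yva = train_test_split(X, y, random_state=0)

Xtr_p = PCA(n_components=0.95).fit(Xtr).transform(Xtr)  # keep 95% variance
print("reduced dims:", Xtr_p.shape[1])

models = [LogisticRegression(max_iter=1000), KNeighborsClassifier()]
probs, aucs = [], []
for m in models:
    p = m.fit(Xtr, ytr).predict_proba(Xva)[:, 1]
    probs.append(p)
    aucs.append(roc_auc_score(yva, p))

w = np.array(aucs) / np.sum(aucs)    # AUC-weighted probability fusion
fused = np.dot(w, np.vstack(probs))
print("ensemble AUC:", roc_auc_score(yva, fused))
```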

19 pages, 2186 KB  
Article
A Range Query Method with Data Fusion in Two-Layer Wireless Sensor Networks
by Shouxue Chen, Yun Deng and Xiaohui Cheng
Symmetry 2025, 17(11), 1784; https://doi.org/10.3390/sym17111784 - 22 Oct 2025
Abstract
Wireless sensor networks play a crucial role in IoT applications, but traditional range query methods have limitations in multi-sensor data fusion and energy efficiency. This paper proposes a new range query method for two-layer wireless sensor networks that supports data fusion operations directly on storage nodes, significantly reducing communication costs between sink nodes and storage nodes. Reverse Z-O coding optimizes the encoding process by focusing only on the most valuable data, shortening both encoding time and encoding length. Data security is ensured by the Paillier homomorphic encryption algorithm, and a comparison chain for the most valuable data is generated using Reverse Z-O coding and HMAC, so that storage nodes can perform multi-sensor data fusion under encryption. Experiments on Raspberry Pi 2B+ and NVIDIA TX2 platforms evaluated fusion efficiency, query dimensions, and data volume. The results demonstrate secure and efficient multi-sensor data fusion with lower energy consumption, outperforming existing approaches in communication and computational costs.
(This article belongs to the Special Issue Symmetry and Asymmetry in Embedded Systems)
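The fusion-under-encryption property comes from Paillier's additive homomorphism: ciphertexts can be summed without decryption, so a storage node can aggregate readings it cannot read. A minimal sketch with the python-paillier (`phe`) package follows; it illustrates the homomorphic sum only, not the paper's full comparison-chain protocol.

```python
# Additive homomorphic fusion with Paillier: readings are summed under
# encryption on the storage node; only the sink holds the private key.
# Uses the `phe` package (pip install phe).
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

readings = [21.5, 22.1, 20.8]                    # sensor values
ciphertexts = [public_key.encrypt(r) for r in readings]

enc_sum = sum(ciphertexts[1:], ciphertexts[0])   # fusion on the storage node
print("decrypted sum:", private_key.decrypt(enc_sum))  # 64.4 at the sink
```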
