Search Results (423)

Search Parameters:
Keywords = novel decoding approach

27 pages, 10194 KB  
Article
Multi-Robot Task Allocation with Spatiotemporal Constraints via Edge-Enhanced Attention Networks
by Yixiang Hu, Daxue Liu, Jinhong Li, Junxiang Li and Tao Wu
Appl. Sci. 2026, 16(2), 904; https://doi.org/10.3390/app16020904 - 15 Jan 2026
Abstract
Multi-Robot Task Allocation (MRTA) with spatiotemporal constraints presents significant challenges in environmental adaptability. Existing learning-based methods often overlook environmental spatial constraints, leading to spatial information distortion. To address this, we formulate the problem as an asynchronous Markov Decision Process over a directed heterogeneous graph and propose a novel heterogeneous graph neural network named the Edge-Enhanced Attention Network (E2AN). This network integrates a specialized encoder, the Edge-Enhanced Heterogeneous Graph Attention Network (E2HGAT), with an attention-based decoder. By incorporating edge attributes to effectively characterize path costs under spatial constraints, E2HGAT corrects spatial distortion. Furthermore, our approach supports flexible extension to diverse payload scenarios via node attribute adaptation. Extensive experiments conducted in simulated environments with obstructed maps demonstrate that the proposed method outperforms baseline algorithms in task success rate. Remarkably, the model maintains its advantages in generalization tests on unseen maps as well as in scalability tests across varying problem sizes. Ablation studies further validate the critical role of the proposed encoder in capturing spatiotemporal dependencies. Additionally, real-time performance analysis confirms the method’s feasibility for online deployment. Overall, this study offers an effective solution for MRTA problems with complex constraints.
(This article belongs to the Special Issue Motion Control for Robots and Automation)
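As a rough illustration of the edge-enhanced attention idea (attention logits conditioned on edge attributes such as inter-node path costs), the following minimal PyTorch sketch uses GAT-style scoring over source, target, and edge embeddings. The class name, dimensions, and scoring form are assumptions for illustration, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeEnhancedAttention(nn.Module):
    """GAT-style layer whose attention logits also condition on edge
    attributes (e.g., path costs between robots and tasks)."""
    def __init__(self, node_dim, edge_dim, hidden_dim):
        super().__init__()
        self.w_node = nn.Linear(node_dim, hidden_dim, bias=False)
        self.w_edge = nn.Linear(edge_dim, hidden_dim, bias=False)
        # Scoring vector over [source || target || edge] embeddings.
        self.attn = nn.Linear(3 * hidden_dim, 1, bias=False)

    def forward(self, h, e):
        # h: (N, node_dim) node features; e: (N, N, edge_dim) edge attributes.
        n = h.size(0)
        z = self.w_node(h)                        # (N, hidden)
        ze = self.w_edge(e)                       # (N, N, hidden)
        zi = z.unsqueeze(1).expand(n, n, -1)      # source copies
        zj = z.unsqueeze(0).expand(n, n, -1)      # target copies
        logits = self.attn(torch.cat([zi, zj, ze], dim=-1)).squeeze(-1)
        alpha = F.softmax(F.leaky_relu(logits), dim=-1)  # (N, N) weights
        return alpha @ z                          # aggregated node embeddings

layer = EdgeEnhancedAttention(node_dim=8, edge_dim=4, hidden_dim=16)
out = layer(torch.randn(5, 8), torch.randn(5, 5, 4))
print(out.shape)  # torch.Size([5, 16])
```
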
24 pages, 5237 KB  
Article
DCA-UNet: A Cross-Modal Ginkgo Crown Recognition Method Based on Multi-Source Data
by Yunzhi Guo, Yang Yu, Yan Li, Mengyuan Chen, Wenwen Kong, Yunpeng Zhao and Fei Liu
Plants 2026, 15(2), 249; https://doi.org/10.3390/plants15020249 - 13 Jan 2026
Abstract
Wild ginkgo, as an endangered species, holds significant value for genetic resource conservation, yet its practical applications face numerous challenges. Traditional field surveys are inefficient in mountainous mixed forests, while satellite remote sensing is limited by spatial resolution. Current deep learning approaches relying on single-source data or simple multi-source fusion fail to fully exploit the available information, leading to suboptimal recognition performance. This study presents a multimodal ginkgo crown dataset, comprising RGB and multispectral images acquired by a UAV platform. To achieve precise crown segmentation with this data, we propose a novel dual-branch dynamic weighting fusion network, termed dual-branch cross-modal attention-enhanced UNet (DCA-UNet). We design a dual-branch encoder (DBE) with a two-stream architecture for independent feature extraction from each modality. We further develop a cross-modal interaction fusion module (CIF), employing cross-modal attention and learnable dynamic weights to boost multi-source information fusion. Additionally, we introduce an attention-enhanced decoder (AED) that combines progressive upsampling with a hybrid channel-spatial attention mechanism, thereby effectively utilizing multi-scale features and enhancing boundary semantic consistency. Evaluation on the ginkgo dataset demonstrates that DCA-UNet achieves a segmentation performance of 93.42% IoU (Intersection over Union), 96.82% PA (Pixel Accuracy), 96.38% Precision, and 96.60% F1-score. These results outperform the differential feature attention fusion network (DFAFNet) by 12.19%, 6.37%, 4.62%, and 6.95%, respectively, and surpass the single-modality baselines (RGB or multispectral) in all metrics. Superior performance on cross-flight-altitude data further validates the model’s strong generalization capability and robustness in complex scenarios. These results demonstrate the superiority of DCA-UNet in UAV-based multimodal ginkgo crown recognition, offering a reliable and efficient solution for monitoring wild endangered tree species.
(This article belongs to the Special Issue Advanced Remote Sensing and AI Techniques in Agriculture and Forestry)
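A minimal sketch of what a cross-modal fusion step with learnable dynamic weights could look like, assuming channel-wise cross-attention from RGB queries to multispectral keys/values and a softmax-normalized pair of stream weights (the module name and shapes are hypothetical; the paper's CIF module is not reproduced here):

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Fuses RGB and multispectral feature maps via channel cross-attention
    plus a learnable softmax-normalized weighting of the two streams."""
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.mix = nn.Parameter(torch.zeros(2))  # dynamic stream weights

    def forward(self, rgb, ms):
        b, c, h, w = rgb.shape
        # RGB queries attend to multispectral keys/values over channels.
        q = self.q(rgb).flatten(2)               # (B, C, HW)
        k = self.k(ms).flatten(2)
        v = self.v(ms).flatten(2)
        attn = torch.softmax(q @ k.transpose(1, 2) / (h * w) ** 0.5, dim=-1)
        enhanced = (attn @ v).view(b, c, h, w)   # (B, C, H, W)
        w1, w2 = torch.softmax(self.mix, dim=0)  # learned fusion weights
        return w1 * rgb + w2 * enhanced

fuse = CrossModalFusion(32)
out = fuse(torch.randn(2, 32, 16, 16), torch.randn(2, 32, 16, 16))
print(out.shape)  # torch.Size([2, 32, 16, 16])
```
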

19 pages, 5302 KB  
Article
LSSCC-Net: Integrating Spatial-Feature Aggregation and Adaptive Attention for Large-Scale Point Cloud Semantic Segmentation
by Wenbo Wang, Xianghong Hua, Cheng Li, Pengju Tian, Yapeng Wang and Lechao Liu
Symmetry 2026, 18(1), 124; https://doi.org/10.3390/sym18010124 - 8 Jan 2026
Abstract
Point cloud semantic segmentation is a key technology for applications such as autonomous driving, robotics, and virtual reality. Current approaches rely heavily on local relative coordinates and simplistic attention mechanisms to aggregate neighborhood information. This often leads to an ineffective joint representation of geometric perturbations and feature variations, coupled with a lack of adaptive selection of salient features during context fusion. To address these issues, we propose LSSCC-Net, a novel segmentation framework based on LACV-Net. First, a spatial-feature dynamic aggregation module is designed to fuse offset information through symmetric interaction between spatial positions and feature channels, thus supplementing local structural information. Second, a dual-dimensional attention mechanism (spatial and channel) is introduced, symmetrically deploying attention modules in both the encoder and decoder to prioritize salient information extraction. Finally, Lovász-Softmax Loss is used as an auxiliary loss to optimize the training objective. The proposed method is evaluated on two public benchmark datasets, reaching an mIoU of 83.6% on Toronto3D and 65.2% on S3DIS. Compared with the baseline LACV-Net, LSSCC-Net shows notable improvements in challenging categories: the IoU for “road mark” and “fence” on Toronto3D increased by 3.6% and 8.1%, respectively. These results indicate that LSSCC-Net more accurately characterizes complex boundaries and fine-grained structures, enhancing segmentation capabilities for small-scale targets and category boundaries.
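The dual-dimensional (channel plus spatial) attention described above can be sketched roughly as follows for per-point features; this is a generic CBAM-like construction under assumed shapes, not the paper's exact module:

```python
import torch
import torch.nn as nn

class DualDimAttention(nn.Module):
    """Channel attention followed by spatial (per-point) attention over
    features shaped (B, N, C), in the spirit of a dual-dimensional mechanism."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial_fc = nn.Linear(channels, 1)

    def forward(self, x):
        # x: (B, N, C) point features
        ch = torch.sigmoid(self.channel_mlp(x.mean(dim=1)))  # (B, C)
        x = x * ch.unsqueeze(1)                              # reweight channels
        sp = torch.sigmoid(self.spatial_fc(x))               # (B, N, 1)
        return x * sp                                        # reweight points

attn = DualDimAttention(64)
print(attn(torch.randn(2, 1024, 64)).shape)  # torch.Size([2, 1024, 64])
```
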

26 pages, 8271 KB  
Article
Enhancing EEG Decoding with Selective Augmentation Integration
by Jianbin Ye, Yanjie Sun, Man Xiao, Bo Liu and Kele Xu
Sensors 2026, 26(2), 399; https://doi.org/10.3390/s26020399 - 8 Jan 2026
Abstract
Deep learning holds considerable promise for electroencephalography (EEG) analysis but faces challenges due to scarce and noisy EEG data, and the limited generality of existing data augmentation techniques. To address these issues, we propose an end-to-end EEG augmentation framework with an adaptive mechanism. This approach utilizes contrastive learning to mitigate representational distortions caused by augmentation, thereby strengthening the encoder’s feature learning. A selective augmentation strategy is further incorporated to dynamically determine optimal augmentation combinations based on performance. We also introduce NeuroBrain, a novel neural architecture specifically designed for auditory EEG decoding. It effectively captures both local and global dependencies within EEG signals. Comprehensive evaluations on the SparrKULee and WithMe datasets confirm the superiority of our proposed framework and architecture, demonstrating a 29.42% performance gain over HappyQuokka and a 5.45% accuracy improvement compared to EEGNet. These results validate our method’s efficacy in tackling key challenges in EEG analysis and advancing the state of the art.
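A selective augmentation strategy of the kind described (dynamically preferring augmentation combinations that score well) could be sketched as a simple epsilon-greedy selector over candidate combinations; the op names and the exponential-moving-average update below are illustrative assumptions only, not the paper's mechanism:

```python
import random

# Placeholder augmentation ops (hypothetical; the paper's ops are not listed here).
def jitter(x): return x
def scale(x): return x
def time_mask(x): return x

OPS = {"jitter": jitter, "scale": scale, "time_mask": time_mask}

class SelectiveAugmenter:
    """Keeps a running score per augmentation combination and picks the
    best-performing combo with occasional epsilon-greedy exploration."""
    def __init__(self, combos, epsilon=0.2):
        self.scores = {c: 0.0 for c in combos}
        self.epsilon = epsilon

    def pick(self):
        if random.random() < self.epsilon:             # explore
            return random.choice(list(self.scores))
        return max(self.scores, key=self.scores.get)   # exploit best combo

    def update(self, combo, val_metric, momentum=0.9):
        # Smooth the observed validation metric into the running score.
        self.scores[combo] = momentum * self.scores[combo] + (1 - momentum) * val_metric

    def apply(self, x, combo):
        for name in combo:
            x = OPS[name](x)
        return x

aug = SelectiveAugmenter([("jitter",), ("scale", "time_mask")])
combo = aug.pick()
aug.update(combo, val_metric=0.73)
```
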

20 pages, 3699 KB  
Article
Monitoring Rice Blast Disease Progression Through the Fusion of Time-Series Hyperspectral Imaging and Deep Learning
by Wenjuan Wang, Yufen Zhang, Haoyi Huang, Tao Liu, Minyue Zeng, Youqiang Fu, Hua Shu, Jianyuan Yang and Long Yu
Agronomy 2026, 16(1), 136; https://doi.org/10.3390/agronomy16010136 - 5 Jan 2026
Abstract
Rice blast, caused by Magnaporthe oryzae, is a devastating disease that jeopardizes global rice production and food security. Precision agriculture demands timely and accurate monitoring tools to enable targeted intervention. This study introduces a novel deep learning framework that fuses time-series hyperspectral imaging with an advanced Autoformer model (AutoMSD) to dynamically track rice blast progression. The proposed AutoMSD model integrates multi-scale convolution and adaptive sequence decomposition, effectively decoding complex spatio-temporal patterns associated with disease development. When deployed on a 7-day hyperspectral dataset, AutoMSD achieved 86.67% prediction accuracy using only 3 days of historical data, surpassing conventional approaches. This accuracy at an early infection stage underscores the model’s strong potential for practical field deployment. Our work provides a scalable and robust decision-support tool that paves the way for site-specific disease management, reduced pesticide usage, and enhanced sustainability in rice cultivation systems.
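The adaptive sequence decomposition that AutoMSD inherits from Autoformer is typically a moving-average split of each series into trend and seasonal parts; a minimal sketch follows (kernel size and shapes are assumed for illustration):

```python
import torch
import torch.nn as nn

class SeriesDecomp(nn.Module):
    """Autoformer-style decomposition: moving-average trend plus seasonal residual."""
    def __init__(self, kernel_size=3):
        super().__init__()
        self.avg = nn.AvgPool1d(kernel_size, stride=1, padding=kernel_size // 2,
                                count_include_pad=False)

    def forward(self, x):
        # x: (B, T, C) time series of band reflectances
        trend = self.avg(x.transpose(1, 2)).transpose(1, 2)
        return x - trend, trend  # (seasonal, trend)

seasonal, trend = SeriesDecomp()(torch.randn(4, 7, 16))  # 7-day spectral sequences
print(seasonal.shape, trend.shape)  # both torch.Size([4, 7, 16])
```
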

20 pages, 5104 KB  
Article
A Novel Ultra-Short-Term PV Power Forecasting Method Based on a Temporal Attention-Variable Parallel Fusion Encoder Network
by Jinman Zhang, Zengbao Zhao, Rongmei Guo, Xue Hu, Tonghui Qu, Chang Ge and Jie Yan
Energies 2026, 19(1), 274; https://doi.org/10.3390/en19010274 - 5 Jan 2026
Abstract
Accurate photovoltaic (PV) power forecasting is critical for the stable operation of power systems. Existing methods rely solely on historical data, and their forecasting accuracy declines significantly 3–4 h ahead. To address this problem, a novel ultra-short-term PV power forecasting method based on a temporal attention-variable parallel fusion encoder network is proposed; it enhances the stability of forecasting results by incorporating Numerical Weather Prediction (NWP) data to correct temporal predictions. Specifically, independent encoding modules are constructed for both historical power sequences and future NWP sequences, enabling deep feature extraction of their respective temporal characteristics. During the decoding phase, a two-stage coupled decoding strategy is employed: for 1–8-step predictions, the model relies solely on temporal features, while for 9–16-step horizons, it dynamically fuses encoded information from historical power data and future NWP inputs. This approach allows accurate characterization of future trend dynamics. Experimental results demonstrate that, compared with conventional methods, the proposed model reduces the average normalized root mean square error (NRMSE) at the 4th hour of the ultra-short-term horizon by 0.50–5.20% and improves R2 by 0.047–0.362, validating the effectiveness of the proposed approach.
(This article belongs to the Section A: Sustainable Energy)
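A horizon-dependent, two-stage coupled decoder of the kind described (history-only for steps 1-8, gated history/NWP fusion for steps 9-16) might be sketched as follows; the gating form, dimensions, and the pooled-encoding interface are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class TwoStageDecoder(nn.Module):
    """Steps 1-8 use only the historical-power encoding; steps 9-16 gate in
    the NWP encoding, mirroring the two-stage strategy described above."""
    def __init__(self, dim, horizon=16, switch=8):
        super().__init__()
        self.switch = switch
        self.head_hist = nn.Linear(dim, horizon)
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.head_fused = nn.Linear(dim, horizon)

    def forward(self, h_hist, h_nwp):
        # h_hist, h_nwp: (B, dim) pooled encoder outputs
        near = self.head_hist(h_hist)                      # history-only branch
        g = self.gate(torch.cat([h_hist, h_nwp], dim=-1))  # per-feature gate
        far = self.head_fused(g * h_hist + (1 - g) * h_nwp)
        # Stitch: first `switch` steps from the near branch, rest from fused.
        return torch.cat([near[:, :self.switch], far[:, self.switch:]], dim=-1)

dec = TwoStageDecoder(dim=64)
print(dec(torch.randn(2, 64), torch.randn(2, 64)).shape)  # torch.Size([2, 16])
```
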

16 pages, 1561 KB  
Article
TSAformer: A Traffic Flow Prediction Model Based on Cross-Dimensional Dependency Capture
by Haoning Lv, Xi Chen and Weijie Xiu
Electronics 2026, 15(1), 231; https://doi.org/10.3390/electronics15010231 - 4 Jan 2026
Abstract
Accurate multivariate traffic flow forecasting is critical for intelligent transportation systems yet remains challenging due to the complex interplay of temporal dynamics and spatial interactions. While Transformer-based models have shown promise in capturing long-range temporal dependencies, most existing approaches compress multidimensional observations into flattened sequences, thereby neglecting explicit modeling of cross-dimensional (i.e., spatial or inter-variable) relationships, which are essential for capturing traffic propagation, network-wide congestion, and node-specific behaviors. To address this limitation, we propose TSAformer, a novel Transformer architecture that explicitly preserves and jointly models time and dimension as dual structural axes. TSAformer begins with a multimodal input embedding layer that encodes raw traffic values alongside temporal context (time-of-day and day-of-week) and node-specific positional features, ensuring rich semantic representation. The core of TSAformer is the Two-Stage Attention (TSA) module, which first models intra-dimensional temporal evolution via time-axis self-attention, then captures inter-dimensional spatial interactions through a lightweight routing mechanism, avoiding quadratic complexity while enabling all-to-all cross-node communication. Built upon TSA, a hierarchical encoder–decoder (HED) structure further enhances forecasting by modeling traffic patterns across multiple temporal scales, from fine-grained fluctuations to macroscopic trends, and fusing predictions via cross-scale attention. Extensive experiments on three real-world traffic datasets, including urban road networks and highway systems, demonstrate that TSAformer consistently outperforms state-of-the-art baselines across short-term and long-term forecasting horizons. Notably, it achieves top-ranked performance in 36 out of 58 critical evaluation scenarios, including peak-hour and event-driven congestion prediction. By explicitly modeling both temporal and dimensional dependencies without structural compromise, TSAformer provides a scalable, interpretable, and high-performance solution for spatiotemporal traffic forecasting.
(This article belongs to the Special Issue Artificial Intelligence for Traffic Understanding and Control)
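The Two-Stage Attention idea (time-axis self-attention per node, then cross-node exchange through a few router tokens to avoid quadratic cost over nodes) can be sketched with standard attention primitives; the router count and shapes here are illustrative assumptions, not TSAformer's actual configuration:

```python
import torch
import torch.nn as nn

class TwoStageAttention(nn.Module):
    """Stage 1: self-attention along the time axis per node.
    Stage 2: cross-node interaction via a small set of router tokens,
    giving all-to-all communication at O(N * R) instead of O(N^2)."""
    def __init__(self, dim, heads=4, n_routers=8):
        super().__init__()
        self.time_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.routers = nn.Parameter(torch.randn(n_routers, dim))
        self.collect = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.dispatch = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        # x: (B, N, T, D) with N nodes and T time steps
        b, n, t, d = x.shape
        xt = x.reshape(b * n, t, d)
        xt, _ = self.time_attn(xt, xt, xt)                 # temporal stage
        xs = xt.reshape(b, n, t, d).transpose(1, 2).reshape(b * t, n, d)
        r = self.routers.unsqueeze(0).expand(b * t, -1, -1)
        r, _ = self.collect(r, xs, xs)                     # routers gather from nodes
        xs, _ = self.dispatch(xs, r, r)                    # nodes read back from routers
        return xs.reshape(b, t, n, d).transpose(1, 2)      # (B, N, T, D)

tsa = TwoStageAttention(dim=32)
print(tsa(torch.randn(2, 10, 12, 32)).shape)  # torch.Size([2, 10, 12, 32])
```
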

22 pages, 1755 KB  
Article
Knowledge-Augmented Adaptive Mechanism for Radiology Report Generation
by Shuo Yang and Hengliang Tan
Mathematics 2026, 14(1), 173; https://doi.org/10.3390/math14010173 - 2 Jan 2026
Abstract
Radiology report generation, which aims to relieve the heavy workload of radiologists and reduce the risks of misdiagnosis and overlooked diagnoses, is of great significance in current clinical medicine. Most existing methods formulate radiology report generation as a problem similar to image captioning. Nevertheless, in the medical domain, these data-driven methods are plagued by two key issues: insufficient utilization of expert knowledge and visual–textual biases. To solve these problems, this study presents a novel knowledge-augmented adaptive mechanism (KAM) for radiology report generation. In detail, our KAM first introduces two distinct types of medical knowledge: prior knowledge, which is input-independent and reflects the accumulated expertise of radiologists, and posterior knowledge, which is input-dependent and mimics the process of identifying abnormalities, thereby mitigating the issue of visual–textual bias. To optimize the utilization of both types of knowledge, the mechanism integrates the visual characteristics of radiological images with prior and posterior knowledge into the decoding process. Experimental evaluations on the publicly accessible IU X-ray and MIMIC-CXR datasets indicate that our approach is on par with current mainstream methods.
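One plausible reading of the knowledge-augmented adaptive mechanism is a learned gate that mixes visual features with prior and posterior knowledge embeddings before decoding; the following sketch assumes pooled vector inputs and a softmax gate, which may differ from the paper's design:

```python
import torch
import torch.nn as nn

class KnowledgeGate(nn.Module):
    """Adaptively mixes visual features with prior (input-independent) and
    posterior (input-dependent) knowledge embeddings at each decoding step."""
    def __init__(self, dim):
        super().__init__()
        self.gates = nn.Linear(3 * dim, 3)

    def forward(self, visual, prior, posterior):
        # each input: (B, dim)
        w = torch.softmax(self.gates(torch.cat([visual, prior, posterior], -1)), -1)
        # Convex combination of the three information sources.
        ctx = w[:, 0:1] * visual + w[:, 1:2] * prior + w[:, 2:3] * posterior
        return ctx  # context vector fed to the report decoder

gate = KnowledgeGate(256)
print(gate(torch.randn(2, 256), torch.randn(2, 256), torch.randn(2, 256)).shape)
```
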

36 pages, 630 KB  
Article
Semantic Communication Unlearning: A Variational Information Bottleneck Approach for Backdoor Defense in Wireless Systems
by Sümeye Nur Karahan, Merve Güllü, Mustafa Serdar Osmanca and Necaattin Barışçı
Future Internet 2026, 18(1), 17; https://doi.org/10.3390/fi18010017 - 28 Dec 2025
Abstract
Semantic communication systems leverage deep neural networks to extract and transmit essential information, achieving superior performance in bandwidth-constrained wireless environments. However, their vulnerability to backdoor attacks poses critical security threats, where adversaries can inject malicious triggers during training to manipulate system behavior. This paper introduces Selective Communication Unlearning (SCU), a novel defense mechanism based on Variational Information Bottleneck (VIB) principles. SCU employs a two-stage approach: (1) joint unlearning to remove backdoor knowledge from both encoder and decoder while preserving legitimate data representations, and (2) contrastive compensation to maximize feature separation between poisoned and clean samples. Extensive experiments on the RML2016.10a wireless signal dataset demonstrate that SCU achieves 629.5 ± 191.2% backdoor mitigation (5-seed average; 95% CI: [364.1%, 895.0%]), with peak performance of 1486% under optimal conditions, while maintaining only 11.5% clean performance degradation. This represents an order-of-magnitude improvement over detection-based defenses and fundamentally outperforms existing unlearning approaches that achieve near-zero or negative mitigation. We validate SCU across seven signal processing domains, four adaptive backdoor types, and varying SNR conditions, demonstrating unprecedented robustness and generalizability. The framework achieves a 243 s unlearning time, making it practical for resource-constrained edge deployments in 6G networks.
(This article belongs to the Special Issue Future Industrial Networks: Technologies, Algorithms, and Protocols)
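The Variational Information Bottleneck principle underlying SCU combines a task loss with a KL penalty that compresses the latent code. A minimal sketch of the standard VIB objective with reparameterized sampling follows; the beta value and the 11-class output (matching RML2016.10a's modulation classes) are illustrative, and SCU's unlearning and contrastive stages are not shown:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def vib_loss(logits, target, mu, logvar, beta=1e-3):
    """VIB objective: task loss plus a KL penalty that compresses the
    latent code z ~ N(mu, sigma^2) toward a standard normal prior."""
    ce = F.cross_entropy(logits, target)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return ce + beta * kl

# Reparameterized sampling of the bottleneck code:
mu, logvar = torch.randn(8, 32), torch.randn(8, 32)
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
logits = nn.Linear(32, 11)(z)   # 11 modulation classes in RML2016.10a
loss = vib_loss(logits, torch.randint(0, 11, (8,)), mu, logvar)
print(loss.item())
```
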

24 pages, 4080 KB  
Article
An Unsupervised Situation Awareness Framework for UAV Sensor Data Fusion Enabled by a Stabilized Deep Variational Autoencoder
by Anxin Guo, Zhenxing Zhang, Rennong Yang, Ying Zhang, Liping Hu and Leyan Li
Sensors 2026, 26(1), 111; https://doi.org/10.3390/s26010111 - 24 Dec 2025
Abstract
Effective situation awareness relies on the robust processing of high-dimensional data streams generated by onboard sensors. However, the application of deep generative models to extract features from complex UAV sensor data (e.g., GPS, IMU, and radar feeds) faces two fundamental challenges: critical training instability and the difficulty of representing the multi-modal distributions inherent in dynamic flight maneuvers. To address these challenges, this paper proposes a novel unsupervised sensor data processing framework. Our core innovation is a deep generative model, VAE-WRBM-MDN, specifically engineered for stable feature extraction from non-linear time-series sensor data. We demonstrate that while standard Variational Autoencoders (VAEs) often struggle to converge on this task, our introduction of Weighted-uncertainty Restricted Boltzmann Machines (WRBM) for layer-wise pre-training ensures stable learning. Furthermore, the integration of a Mixture Density Network (MDN) enables the decoder to accurately reconstruct the complex, multi-modal conditional distributions of sensor readings. Comparative experiments validate our approach, achieving 95.69% classification accuracy in identifying situational patterns. The results confirm that our framework provides robust enabling technology for real-time intelligent sensing and raw data interpretation in autonomous systems.
(This article belongs to the Section Intelligent Sensors)
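A Mixture Density Network decoder head of the kind described outputs mixture weights, means, and variances per output dimension and is trained by negative log-likelihood, which lets it represent multi-modal sensor distributions; a minimal sketch under assumed dimensions (the 6-axis IMU example is hypothetical):

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class MDNHead(nn.Module):
    """Decoder head producing a K-component diagonal Gaussian mixture,
    so multi-modal conditional distributions can be reconstructed."""
    def __init__(self, in_dim, out_dim, k=5):
        super().__init__()
        self.k, self.out_dim = k, out_dim
        self.pi = nn.Linear(in_dim, k)
        self.mu = nn.Linear(in_dim, k * out_dim)
        self.log_sigma = nn.Linear(in_dim, k * out_dim)

    def nll(self, h, y):
        # h: (B, in_dim) latent code; y: (B, out_dim) sensor reading target
        b = h.size(0)
        log_pi = F.log_softmax(self.pi(h), dim=-1)             # (B, K)
        mu = self.mu(h).view(b, self.k, self.out_dim)
        log_sigma = self.log_sigma(h).view(b, self.k, self.out_dim)
        # Per-component diagonal Gaussian log-density, summed over dimensions.
        comp = -0.5 * (((y.unsqueeze(1) - mu) / log_sigma.exp()) ** 2
                       + 2 * log_sigma + math.log(2 * math.pi)).sum(-1)
        return -torch.logsumexp(log_pi + comp, dim=-1).mean()

head = MDNHead(64, 6)  # e.g., 6-axis IMU channels (assumption)
print(head.nll(torch.randn(8, 64), torch.randn(8, 6)).item())
```
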

26 pages, 7801 KB  
Article
Enhancing Sustainable Intelligent Transportation Systems Through Lightweight Monocular Depth Estimation Based on Volume Density
by Xianfeng Tan, Chengcheng Wang, Ziyu Zhang, Zhendong Ping, Jieying Pan, Hao Shan, Ruikai Li, Meng Chi and Zhiyong Cui
Sustainability 2025, 17(24), 11271; https://doi.org/10.3390/su172411271 - 16 Dec 2025
Abstract
Depth estimation is a critical enabling technology for sustainable intelligent transportation systems (ITSs), as it supports essential functions such as obstacle detection, navigation, and traffic management. However, existing Neural Radiance Field (NeRF)-based monocular depth estimation methods often suffer from high computational costs and poor performance in occluded regions, limiting their applicability in real-world, resource-constrained environments. To address these challenges, this paper proposes a lightweight monocular depth estimation framework that integrates a novel capacity redistribution strategy and an adaptive occlusion-aware training mechanism. By shifting computational load from resource-intensive multi-layer perceptrons (MLPs) to efficient separable convolutional encoder–decoder networks, our method significantly reduces memory usage to 234 MB while maintaining competitive accuracy. Furthermore, a divide-and-conquer training strategy explicitly handles occluded regions, improving reconstruction quality in complex urban scenarios. Experimental evaluations on the KITTI and V2X-Sim datasets demonstrate that our approach not only achieves superior depth estimation performance but also supports real-time operation on edge devices. This work contributes to the sustainable development of ITS by offering a practical, efficient, and scalable solution for environmental perception, with potential benefits for energy efficiency, system affordability, and large-scale deployment.
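The capacity shift from MLPs to separable convolutions rests on the standard depthwise-plus-pointwise factorization; a minimal sketch showing the parameter savings (block name and channel counts are illustrative):

```python
import torch
import torch.nn as nn

class SeparableConv(nn.Module):
    """Depthwise + pointwise convolution: the cheap building block that
    capacity is shifted into, away from heavy per-sample MLPs."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.pointwise(self.depthwise(x)))

block = SeparableConv(32, 64)
print(block(torch.randn(1, 32, 64, 64)).shape)  # torch.Size([1, 64, 64, 64])
# Parameters: 32*9 + 32 (depthwise) + 32*64 + 64 (pointwise) = 2,432,
# versus 32*64*9 + 64 = 18,496 for a standard 3x3 conv: about 7.6x fewer.
```
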

21 pages, 1667 KB  
Article
Advanced Retinal Lesion Segmentation via U-Net with Hybrid Focal–Dice Loss and Automated Ground Truth Generation
by Ahmad Sami Al-Shamayleh, Mohammad Qatawneh and Hany A. Elsalamony
Algorithms 2025, 18(12), 790; https://doi.org/10.3390/a18120790 - 14 Dec 2025
Abstract
An early and accurate detection of retinal lesions is imperative to intercept the course of sight-threatening ailments such as Diabetic Retinopathy (DR) and Age-related Macular Degeneration (AMD). Manual expert annotation of all such lesions is time-consuming and subject to interobserver variability, especially in large screening projects. This work introduces an end-to-end deep learning pipeline for automated retinal lesion segmentation, tailored to datasets without expert pixel-level reference annotations. The pipeline first applies a novel multi-stage automated ground truth mask generation method, based on colour space analysis, entropy filtering, and morphological operations, to create reliable pseudo-labels from raw retinal images. These pseudo-labels then serve as training input for a U-Net, a convolutional encoder–decoder architecture for biomedical image segmentation. To address the inherent class imbalance often encountered in medical imaging, we employ and thoroughly evaluate a novel hybrid loss function combining Focal Loss and Dice Loss. The proposed pipeline was rigorously evaluated on the ‘Eye Image Dataset’ from Kaggle, achieving state-of-the-art segmentation performance with a Dice Similarity Coefficient of 0.932, Intersection over Union (IoU) of 0.865, Precision of 0.913, and Recall of 0.897. This work demonstrates the feasibility of achieving high-quality retinal lesion segmentation even in resource-constrained environments where extensive expert annotations are unavailable, thus paving the way for more accessible and scalable ophthalmological diagnostic tools.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
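A hybrid Focal-Dice loss of the kind evaluated here is commonly a weighted sum of the binary focal loss and a soft Dice loss; a minimal sketch with assumed weighting and hyperparameters (the paper's exact values are not given in this listing):

```python
import torch
import torch.nn.functional as F

def hybrid_focal_dice(logits, target, alpha=0.25, gamma=2.0, w=0.5, eps=1e-6):
    """Hybrid loss for imbalanced binary segmentation: the focal term
    focuses on hard pixels, the Dice term optimizes region overlap."""
    prob = torch.sigmoid(logits)
    # Binary focal loss.
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_t = prob * target + (1 - prob) * (1 - target)
    a_t = alpha * target + (1 - alpha) * (1 - target)
    focal = (a_t * (1 - p_t) ** gamma * bce).mean()
    # Soft Dice loss over all pixels.
    inter = (prob * target).sum()
    dice = 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)
    return w * focal + (1 - w) * dice

logits = torch.randn(2, 1, 128, 128)
target = (torch.rand(2, 1, 128, 128) > 0.9).float()  # sparse lesion mask
print(hybrid_focal_dice(logits, target).item())
```
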

26 pages, 815 KB  
Review
Advances in Quantitative Techniques for Mapping RNA Modifications
by Ling Tian, Bharathi Vallabhaneni and Yie-Hwa Chang
Life 2025, 15(12), 1888; https://doi.org/10.3390/life15121888 - 10 Dec 2025
Abstract
RNA modifications are essential regulators of gene expression and cellular function, modulating RNA stability, splicing, translation, and localization. Dysregulation of these modifications has been linked to cancer, neurodegenerative disorders, viral infections, and other diseases. Precise quantification and mapping of RNA modifications are crucial for understanding their biological roles. This review summarizes current and emerging methodologies for RNA modification analysis, including mass spectrometry, antibody-based and non-antibody-based approaches, PCR- and NMR-based detection, chemical- and enzyme-assisted sequencing, and nanopore direct RNA sequencing. We also highlight advanced techniques for single-cell and single-molecule imaging, enabling the study of modification dynamics and cellular heterogeneity. The advantages, limitations, and challenges of each method are discussed, providing a framework for selecting appropriate analytical strategies. Future perspectives emphasize high-throughput, multiplexed, and single-cell approaches, integrating multiple technologies to decode the epitranscriptome. These approaches form a robust toolkit for uncovering RNA modification functions, discovering biomarkers, and developing novel therapeutic strategies.
(This article belongs to the Section Genetics and Genomics)

24 pages, 4080 KB  
Article
MCRBM–CNN: A Hybrid Deep Learning Framework for Robust SSVEP Classification
by Depeng Gao, Yuhang Zhao, Jieru Zhou, Haifei Zhang and Hongqi Li
Sensors 2025, 25(24), 7456; https://doi.org/10.3390/s25247456 - 8 Dec 2025
Abstract
The steady-state visual evoked potential (SSVEP), a non-invasive EEG modality, is a prominent approach for brain–computer interfaces (BCIs) due to its high signal-to-noise ratio and minimal user training. However, its practical utility is often hampered by susceptibility to noise, artifacts, and concurrent brain activities, complicating signal decoding. To address this, we propose a novel hybrid deep learning model that integrates a multi-channel restricted Boltzmann machine (RBM) with a convolutional neural network (CNN). The framework comprises two main modules: a feature extraction module and a classification module. The former employs a multi-channel RBM to learn latent feature representations from multi-channel EEG data in an unsupervised manner, effectively capturing inter-channel correlations to enhance feature discriminability. The latter leverages convolutional operations to further extract spatiotemporal features, constructing a deep discriminative model for the automatic recognition of SSVEP signals. Comprehensive evaluations on multiple public datasets demonstrate that our proposed method achieves competitive performance compared to various benchmarks, particularly exhibiting superior effectiveness and robustness in short-time-window scenarios.
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications—3rd Edition)
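The unsupervised RBM pre-training stage can be illustrated by a single contrastive-divergence (CD-1) update for a Bernoulli RBM; the dimensions and learning rate below are assumptions, and the paper's multi-channel extension is omitted:

```python
import torch

def cd1_step(v0, W, a, b, lr=1e-3):
    """One CD-1 update for a Bernoulli RBM over an EEG feature vector.
    W: (visible, hidden) weights; a, b: visible/hidden biases."""
    ph0 = torch.sigmoid(v0 @ W + b)               # hidden probabilities
    h0 = torch.bernoulli(ph0)                     # sample hidden states
    pv1 = torch.sigmoid(h0 @ W.t() + a)           # reconstruct visible units
    ph1 = torch.sigmoid(pv1 @ W + b)              # hidden probs of reconstruction
    # Gradient approximation: positive minus negative phase statistics.
    W += lr * (v0.t() @ ph0 - pv1.t() @ ph1) / v0.size(0)
    a += lr * (v0 - pv1).mean(0)
    b += lr * (ph0 - ph1).mean(0)
    return ((v0 - pv1) ** 2).mean()               # reconstruction error

v = torch.rand(16, 64)                            # batch of 64-dim EEG features
W, a, b = torch.randn(64, 32) * 0.01, torch.zeros(64), torch.zeros(32)
print(cd1_step(v, W, a, b).item())
```
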

26 pages, 55777 KB  
Article
DELTA-SoyStage: A Lightweight Detection Architecture for Full-Cycle Soybean Growth Stage Monitoring
by Abdellah Lakhssassi, Yasser Salhi, Naoufal Lakhssassi, Khalid Meksem and Khaled Ahmed
Sensors 2025, 25(23), 7303; https://doi.org/10.3390/s25237303 - 1 Dec 2025
Abstract
The accurate identification of soybean growth stages is critical for optimizing agricultural interventions, where mistimed treatments can result in yield losses ranging from 2.5% to 40%. Existing deep learning approaches remain limited in scope, targeting isolated developmental phases rather than providing comprehensive phenological coverage. This paper presents DELTA-SoyStage, a novel object detection architecture combining an EfficientNet backbone with a lightweight ChannelMapper neck and a newly proposed DELTA (Denoising Enhanced Lightweight Task Alignment) detection head for soybean growth stage classification. We introduce a dataset of 17,204 labeled RGB images spanning nine growth stages from emergence (VE) through full maturity (R8), collected under controlled greenhouse conditions with diverse imaging angles and lighting variations. DELTA-SoyStage achieves 73.9% average precision with only 24.4 GFLOPs computational cost, requiring 4.2× fewer FLOPs than the best-performing baseline (DINO-Swin: 74.7% AP, 102.5 GFLOPs) with only a 0.8% accuracy difference. The lightweight DELTA head combined with the efficient ChannelMapper neck requires only 8.3 M parameters, a 43.5% reduction compared to standard architectures, while maintaining competitive accuracy. Extensive ablation studies validate key design choices including task alignment mechanisms, multi-scale feature extraction strategies, and encoder–decoder depth configurations. The proposed model’s computational efficiency makes it suitable for deployment on resource-constrained edge devices in precision agriculture applications, enabling timely decision-making without reliance on cloud infrastructure.
(This article belongs to the Special Issue Application of Sensors Technologies in Agricultural Engineering)
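A ChannelMapper-style neck is essentially a set of 1x1 convolutions projecting multi-scale backbone features to one common channel width, which is what keeps it lightweight relative to an FPN-style neck; a minimal sketch follows (the EfficientNet stage widths shown are an assumption, not the paper's configuration):

```python
import torch
import torch.nn as nn

class ChannelMapperNeck(nn.Module):
    """Maps multi-scale backbone feature maps to a common channel width
    with 1x1 convolutions plus normalization."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Sequential(nn.Conv2d(c, out_channels, 1),
                          nn.GroupNorm(32, out_channels))
            for c in in_channels])

    def forward(self, feats):
        # feats: list of (B, C_i, H_i, W_i) maps from the backbone stages
        return [conv(f) for conv, f in zip(self.convs, feats)]

neck = ChannelMapperNeck([40, 112, 320], 256)  # assumed EfficientNet stage widths
feats = [torch.randn(1, c, s, s) for c, s in [(40, 64), (112, 32), (320, 16)]]
print([f.shape for f in neck(feats)])  # all mapped to 256 channels
```
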
