Search Results (29,954)

Search Parameters:
Keywords = dataset effect

20 pages, 4244 KB  
Article
UG-Net: An Unsupervised-Guided Framework for Railway Foreign Object Detection
by Zhuowen Tian and Jinbai Zou
Appl. Sci. 2026, 16(2), 689; https://doi.org/10.3390/app16020689 (registering DOI) - 9 Jan 2026
Abstract
Foreign object intrusion severely threatens railway safety. Existing methods struggle with open-set categories, high annotation costs, and poor label-efficient generalization. To address these issues, we propose UG-Net, an unsupervised-guided label-efficient detection framework. The core idea is a two-stage strategy: first, a masked autoencoder (MAE) learns “normality” priors from unlabeled data and generates a spatial attention mask via a deep feature difference strategy; then, this mask is fused as a fourth channel into a lightweight YOLOv8n detector. This approach effectively alleviates reliance on manual annotations. On a self-constructed railway dataset, UG-Net achieved 94.56% mAP@0.5 using only 200 labeled samples, significantly outperforming the YOLOv8n baseline (86.91%). The framework provides a label-efficient solution for industrial anomaly detection. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
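The fourth-channel fusion described in the abstract above can be illustrated with a short, hypothetical PyTorch sketch (not the authors' code): here the attention map is approximated as a normalized MAE reconstruction error and concatenated to the RGB image, so the downstream detector must accept four input channels.

```python
# Minimal sketch, assuming a normalized MAE reconstruction error stands in for
# the paper's deep-feature-difference attention mask; not the authors' code.
import torch

def attention_mask_from_mae(image: torch.Tensor, reconstruction: torch.Tensor) -> torch.Tensor:
    """Per-pixel reconstruction error, averaged over channels and scaled to [0, 1]."""
    diff = (image - reconstruction).abs().mean(dim=1, keepdim=True)   # (B, 1, H, W)
    return (diff - diff.amin()) / (diff.amax() - diff.amin() + 1e-8)

def fuse_fourth_channel(image: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Concatenate the mask as an extra channel: (B, 3, H, W) -> (B, 4, H, W).
    The detector's first convolution must be widened to 4 input channels."""
    return torch.cat([image, mask], dim=1)

img, recon = torch.rand(1, 3, 640, 640), torch.rand(1, 3, 640, 640)
x4 = fuse_fourth_channel(img, attention_mask_from_mae(img, recon))
print(x4.shape)  # torch.Size([1, 4, 640, 640])
```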

307 KB  
Proceeding Paper
Quantifying Risk Factors of Violence in Maritime Piracy Incidents Using Categorical Association Measures
by Sonia Rozbiewska
Environ. Earth Sci. Proc. 2026, 41(1), 1; https://doi.org/10.3390/eesp2026041001 (registering DOI) - 8 Jan 2026
Abstract
Maritime piracy remains a persistent security challenge across several global regions, with violent incidents posing the greatest threat to crew safety and vessel operations. This study investigates the relationship between violent escalation in piracy incidents and a set of contextual and operational variables using classical categorical data statistics. A dataset comprising reported maritime piracy and armed robbery events from 2015–2024 was compiled from IMB, OBP, and IMO sources and analysed through chi-square tests of independence, followed by Cramér’s V to quantify the strength of association. The results demonstrate that violence is not randomly distributed across incident characteristics. Geographic region exhibits the strongest measurable association with violent outcomes, reflecting the influence of regional security dynamics and the presence of organized criminal networks. Attack type and weapon type show additional, though weaker, associations, indicating that close-range engagement and the presence of firearms increase the likelihood of escalation. Vessel type, flag state, and seasonal timing display only marginal effects. Overall, the findings highlight that the probability of violence during piracy events is primarily shaped by spatial context and tactical execution. The study confirms that chi-square and Cramér’s V offer a transparent, interpretable framework for identifying key risk factors and can serve as a foundation for operational threat assessments and maritime security planning. Full article
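The association analysis named in this abstract (chi-square test of independence followed by Cramér's V) is standard and can be sketched as follows; the `incidents` DataFrame and its columns are hypothetical toy data, not the compiled IMB/OBP/IMO dataset.

```python
# Minimal sketch of a chi-square test of independence plus Cramér's V,
# using hypothetical toy data in place of the compiled incident records.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(x: pd.Series, y: pd.Series) -> float:
    table = pd.crosstab(x, y)                     # contingency table
    chi2, p, dof, _ = chi2_contingency(table)
    n = table.to_numpy().sum()
    r, k = table.shape
    return float(np.sqrt((chi2 / n) / (min(r, k) - 1)))

incidents = pd.DataFrame({
    "region":  ["Gulf of Guinea", "Southeast Asia", "Gulf of Guinea", "Horn of Africa"] * 25,
    "violent": ["yes", "no", "yes", "no"] * 25,
})
print(cramers_v(incidents["region"], incidents["violent"]))  # association strength in [0, 1]
```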
20 pages, 4142 KB  
Article
Selective Multi-Source Transfer Learning and Ensemble Learning for Piezoelectric Actuator Feedforward Control
by Yaqian Hu, Herong Jin, Xiangcheng Chu and Ran Cao
Actuators 2026, 15(1), 45; https://doi.org/10.3390/act15010045 (registering DOI) - 8 Jan 2026
Abstract
Transfer learning enables knowledge acquired from other piezoelectric actuators (PEAs) to be leveraged for the positioning control of a target PEA. However, blind knowledge transfer from datasets irrelevant to the target PEA often leads to degraded displacement control performance. To address this challenge, this study proposes a transfer learning method termed selective multi-source ensemble transfer learning (SMETL). SMETL adopts a multi-source transfer learning framework integrated with Proxy A-distance (PAD)-based multi-source domain selection and a greedy ensemble transfer learning strategy. Fine-tuned GRU-CNN feedforward control models are screened into the ensemble only when they improve performance on the target-domain validation set. The outputs of the retained ensemble models are averaged to generate the final prediction. Comparative experimental results demonstrate that SMETL achieves superior control performance across all evaluation metrics, confirming its capability to effectively leverage multi-source domain knowledge and mitigate the risk of introducing irrelevant data. Full article
(This article belongs to the Section Actuator Materials)
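The greedy screening step described above (a fine-tuned candidate is admitted only if it improves target-domain validation performance, and the retained models are averaged) can be roughly sketched as below; the model interface and the MSE criterion are illustrative assumptions, not the paper's implementation.

```python
# Rough sketch of greedy ensemble screening under assumed interfaces:
# each candidate exposes .predict(x) -> np.ndarray; validation MSE is the criterion.
import numpy as np

def greedy_ensemble(candidates, val_x, val_y):
    ensemble, best_err = [], np.inf
    for model in candidates:
        trial = ensemble + [model]
        pred = np.mean([m.predict(val_x) for m in trial], axis=0)  # averaged output
        err = float(np.mean((pred - val_y) ** 2))
        if err < best_err:              # keep the candidate only if it helps
            ensemble, best_err = trial, err
    return ensemble

def ensemble_predict(ensemble, x):
    return np.mean([m.predict(x) for m in ensemble], axis=0)
```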

20 pages, 1731 KB  
Article
An Architecture-Feature-Enhanced Decision Framework for Deep Learning-Based Prediction of Extreme and Imbalanced Precipitation
by Wenjiu Yu, Yingna Sun, Zhicheng Yue, Zhinan Li and Yujia Liu
Water 2026, 18(2), 176; https://doi.org/10.3390/w18020176 (registering DOI) - 8 Jan 2026
Abstract
Accurate precipitation forecasting is paramount for water security and disaster mitigation, yet it remains formidable due to atmospheric stochasticity and the inherent class imbalance in rainfall datasets. This study proposes an integrated “architecture-feature-augmentation” framework to circumvent these limitations. Through a systematic evaluation of CNN-LSTM and Transformer architectures, we delineate distinct performance profiles: The Transformer model, when coupled with feature engineering and physics-informed augmentation, yields a peak F1-score of 0.1429, marking the optimal configuration for harmonizing precision and recall. Conversely, CNN-LSTM demonstrates superior robustness in extreme event detection, consistently maintaining high recall rates (up to 0.90) across diverse scenarios. We identify feature engineering as a critical performance modulator, substantially bolstering CNN-LSTM’s baseline metrics while enabling the Transformer to realize its maximum predictive capacity. Although synthetic oversampling techniques—such as SMOTE and GAN—effectively extend the detection range for heavy precipitation, physics-informed augmentation provides the most consistent performance gains, particularly in multi-class contexts. We conclude that the Transformer, augmented by physical constraints, is the optimal candidate for high-precision requirements, whereas CNN-LSTM, integrated with synthetic augmentation, offers a more sensitive alternative for early warning systems prioritizing recall. These findings provide empirical guidance for advancing extreme weather preparedness and strategic water resource management. Full article
(This article belongs to the Section Hydrology)
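Of the augmentation techniques compared in this abstract, synthetic minority oversampling is the most standardized; a minimal usage sketch with imbalanced-learn is shown below, using random placeholder features rather than the study's rainfall data.

```python
# Minimal SMOTE usage sketch on placeholder data (imbalanced-learn),
# standing in for oversampling the rare heavy-precipitation class.
import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                  # placeholder meteorological features
y = (rng.random(1000) < 0.05).astype(int)       # roughly 5% "heavy precipitation" labels

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y), Counter(y_res))               # classes are balanced after resampling
```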
18 pages, 8939 KB  
Article
Research on the Temporal and Spatial Evolution Patterns of Vegetation Cover in Zhaogu Mining Area Based on kNDVI
by Congying Liu, Hebing Zhang, Zhichao Chen, He Qin, Xueqing Liu and Yiheng Jiao
Appl. Sci. 2026, 16(2), 681; https://doi.org/10.3390/app16020681 (registering DOI) - 8 Jan 2026
Abstract
Extensive coal mining activities can exert substantial negative impacts on surface ecosystems. Vegetation indices are widely recognized as effective indicators of land ecological conditions and provide valuable insights into long-term ecological changes in mining areas. In this study, the Zhaogu mining area of the Jiaozuo Coalfield was selected as the study site. Using the Google Earth Engine (GEE) platform, the Kernel Normalized Difference Vegetation Index (kNDVI) was constructed to generate a vegetation dataset covering the period from 2010 to 2024. The temporal dynamics and future trends of vegetation coverage were analyzed using Theil–Sen median trend analysis, the Mann–Kendall test, the Hurst index, and residual analysis. Furthermore, the relative contributions of climatic factors and human activities to vegetation changes were quantitatively assessed. The results indicate that: (1) vegetation coverage in the Zhaogu mining area exhibits an overall improving trend, affecting approximately 77.1% of the study area, while slight degradation is mainly concentrated in the southeastern region, accounting for about 15.2%; (2) vegetation dynamics are predominantly characterized by low and relatively low fluctuations, covering approximately 78.5% of the region, whereas areas with high fluctuations are limited and mainly distributed in zones with intensive mining activities; although the current vegetation trend is generally increasing, future projections suggest a potential decline in approximately 55.8% of the area; and (3) vegetation changes in the Zhaogu mining area are jointly influenced by climatic factors and human activities, with climatic factors promoting vegetation growth in approximately 70.6% of the study area, while human activities exert inhibitory effects in about 24.2%, particularly in regions affected by mining operations and urban expansion. Full article
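The kernel NDVI used in this study has a widely used closed form: with the kernel length scale set to sigma = 0.5 (NIR + Red), kNDVI reduces to tanh(NDVI^2). A minimal NumPy sketch is given below with illustrative band values (the paper computes the index on Google Earth Engine).

```python
# Minimal sketch of kNDVI with the common choice sigma = 0.5*(NIR + Red),
# which reduces the index to tanh(NDVI**2); band values are illustrative.
import numpy as np

def kndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    ndvi = (nir - red) / (nir + red + 1e-12)
    return np.tanh(ndvi ** 2)

nir = np.array([0.45, 0.60, 0.30])
red = np.array([0.10, 0.08, 0.25])
print(kndvi(nir, red))
```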

26 pages, 8271 KB  
Article
Enhancing EEG Decoding with Selective Augmentation Integration
by Jianbin Ye, Yanjie Sun, Man Xiao, Bo Liu and Kele Xu
Sensors 2026, 26(2), 399; https://doi.org/10.3390/s26020399 (registering DOI) - 8 Jan 2026
Abstract
Deep learning holds considerable promise for electroencephalography (EEG) analysis but faces challenges due to scarce and noisy EEG data, and the limited generality of existing data augmentation techniques. To address these issues, we propose an end-to-end EEG augmentation framework with an adaptive mechanism. This approach utilizes contrastive learning to mitigate representational distortions caused by augmentation, thereby strengthening the encoder’s feature learning. A selective augmentation strategy is further incorporated to dynamically determine optimal augmentation combinations based on performance. We also introduce NeuroBrain, a novel neural architecture specifically designed for auditory EEG decoding. It effectively captures both local and global dependencies within EEG signals. Comprehensive evaluations on the SparrKULee and WithMe datasets confirm the superiority of our proposed framework and architecture, demonstrating a 29.42% performance gain over HappyQuokka and a 5.45% accuracy improvement compared to EEGNet. These results validate our method’s efficacy in tackling key challenges in EEG analysis and advancing the state of the art. Full article

21 pages, 3352 KB  
Article
DHAG-Net: A Small Object Semantic Segmentation Network Integrating Edge Supervision and Dense Hybrid Dilated Convolution
by Qin Qin, Huyuan Shen, Qing Wang, Qun Yang and Xin Wang
Appl. Sci. 2026, 16(2), 684; https://doi.org/10.3390/app16020684 (registering DOI) - 8 Jan 2026
Abstract
Small-object semantic segmentation remains challenging in urban driving scenes due to limited pixel occupancy, blurred boundaries, and the constraints imposed by lightweight deployment. To address these issues, this paper presents a lightweight semantic segmentation framework that enhances boundary awareness and contextual representation while maintaining computational efficiency. The proposed method integrates an edge-supervised boundary gating module to emphasize object boundaries, an efficient multi-scale context aggregation strategy to mitigate scale variation, and a lightweight feature enhancement mechanism for effective feature fusion. Edge supervision is introduced as an auxiliary regularization signal and does not require additional manual annotations. Extensive experiments conducted on multiple benchmark datasets, including Cityscapes, CamVid, PASCAL VOC 2012, and IDDA, demonstrate that the proposed framework consistently improves segmentation performance, particularly for small-object categories, while preserving a favorable balance between accuracy and efficiency. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
27 pages, 7553 KB  
Article
Deep Learning Applied to Spaceborne SAR Interferometry for Detecting Sinkhole-Induced Land Subsidence Along the Dead Sea
by Gali Dekel, Ran Novitsky Nof, Ron Sarafian and Yinon Rudich
Remote Sens. 2026, 18(2), 211; https://doi.org/10.3390/rs18020211 (registering DOI) - 8 Jan 2026
Abstract
The Dead Sea (DS) region has experienced a sharp increase in sinkhole formation in recent years, posing environmental and infrastructure risks. The Geological Survey of Israel (GSI) employs Interferometric Synthetic Aperture Radar (InSAR) to monitor sinkhole activity and manually map land subsidence along the western shore of the DS. This process is both time-consuming and prone to human error. Automating detection with Deep Learning (DL) offers a transformative opportunity to enhance monitoring precision, scalability, and real-time decision-making. DL segmentation architectures such as UNet, Attention UNet, SAM, TransUNet, and SegFormer have shown effectiveness in learning geospatial deformation patterns in InSAR and related remote sensing data. This study provides a first comprehensive evaluation of a DL segmentation model applied to InSAR data for detecting land subsidence areas that occur as part of the sinkhole-formation process along the western shores of the DS. Unlike image-based tasks, our new model learns interferometric phase patterns that capture subtle ground deformations rather than direct visual features. As the ground truth in the supervised learning process, we use subsidence areas delineated on the phase maps by the GSI team over the years as part of the operational subsidence surveillance and monitoring activities. This unique data poses challenges for annotation, learning, and interpretability, making the dataset both non-trivial and valuable for advancing research in applied remote sensing and its application in the DS. We train the model across three partition schemes, each representing a different type and level of generalization, and introduce object-level metrics to assess its detection ability. Our results show that the model effectively identifies and generalizes subsidence areas in InSAR data across different setups and temporal conditions and shows promising potential for geographical generalization in previously unseen areas. Finally, large-scale subsidence trends are inferred by reconstructing smaller-scale patches and evaluated for different confidence thresholds. Full article
31 pages, 22609 KB  
Article
From Sparse to Refined Samples: Iterative Enhancement-Based PDLCM for Multi-Annual 10 m Rice Mapping in the Middle-Lower Yangtze
by Lingbo Yang, Jiancong Dong, Cong Xu, Jingfeng Huang, Yichen Wang, Huiqin Ma, Zhongxin Chen, Limin Wang and Jingcheng Zhang
Remote Sens. 2026, 18(2), 209; https://doi.org/10.3390/rs18020209 (registering DOI) - 8 Jan 2026
Abstract
Accurate mapping of rice cultivation is vital for ensuring food security, reducing greenhouse gas emissions, and achieving sustainable development goals. However, large-scale deep learning–based crop mapping remains limited due to the demand for vast, uniformly distributed, high-quality samples. To address this challenge, we propose a Progressive Deep Learning Crop Mapping (PDLCM) framework for national-scale, high-resolution rice mapping. Beginning with a small set of localized rice and non-rice samples, PDLCM progressively refines model performance through iterative enhancement of positive and negative samples, effectively mitigating sample scarcity and spatial heterogeneity. By combining time-series Sentinel-2 optical data with Sentinel-1 synthetic aperture radar imagery, the framework captures distinctive phenological characteristics of rice while resolving spatiotemporal inconsistencies in large datasets. Applying PDLCM, we produced 10 m rice maps from 2022 to 2024 across the middle and lower Yangtze River Basin, covering more than one million square kilometers. The results achieved an overall accuracy of 96.8% and an F1 score of 0.88, demonstrating strong spatial and temporal generalization. All datasets and source codes are publicly accessible, supporting SDG 2 and providing a transferable paradigm for operational large-scale crop mapping. Full article
18 pages, 19599 KB  
Article
A Semi-Supervised Approach to Microseismic Source Localization with Masked Pre-Training and Residual Convolutional Autoencoder
by Zhe Wang, Xiangbo Gong, Qiao Cheng, Zhuo Xu, Zhiyu Cao and Xiaolong Li
Appl. Sci. 2026, 16(2), 683; https://doi.org/10.3390/app16020683 (registering DOI) - 8 Jan 2026
Abstract
Microseismic monitoring is extensively applied in hydraulic fracturing and mineral extraction, with accurate event localization being a critical component. Recently, deep learning approaches have shown promise for microseismic event localization; however, most of these supervised methods depend on large, labeled datasets, which are costly and challenging to acquire. To mitigate this issue, we propose a semi-supervised approach based on a residual convolutional autoencoder (RCAE) for automated microseismic localization, designed to leverage limited labeled data effectively and improve source localization accuracy even with small sample sizes. Our method employs pre-training by masking and reconstructing unlabeled seismic records, while integrating residual connections within the encoder to enhance feature extraction from seismic signals. This enables high localization accuracy with minimal labeled data, resulting in significant cost savings. Experimental results indicate that our method surpasses purely supervised approaches on both a 2D salt dome model and a 3D homogeneous half-space model, validating its effectiveness in microseismic localization. Further comparisons with baseline models highlight the method’s advantages, providing an innovative solution for improving cost-efficiency in practical applications. Full article
(This article belongs to the Special Issue Machine Learning Applications in Seismology: 2nd Edition)
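The masked pre-training step (randomly hiding parts of an unlabeled seismic record and training the autoencoder to reconstruct the original) can be sketched as below; the mask ratio, patch width, and tensor layout are illustrative assumptions rather than the authors' configuration.

```python
# Rough sketch of masked pre-training on seismic records (PyTorch); the mask
# ratio, patch width, and (batch, channel, receiver, time) layout are assumptions.
import torch

def random_time_mask(record: torch.Tensor, mask_ratio: float = 0.5, patch: int = 16) -> torch.Tensor:
    """Zero out a random subset of fixed-width time patches in each record."""
    masked = record.clone()
    n_patches = record.shape[-1] // patch
    n_hide = int(mask_ratio * n_patches)
    for b in range(record.shape[0]):
        for p in torch.randperm(n_patches)[:n_hide].tolist():
            masked[b, ..., p * patch:(p + 1) * patch] = 0.0
    return masked

x = torch.randn(2, 1, 32, 256)          # two synthetic shot records
x_masked = random_time_mask(x)
# Pre-training objective: a reconstruction loss such as F.mse_loss(autoencoder(x_masked), x)
print(x_masked.shape)
```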
26 pages, 3482 KB  
Article
MBS: A Modality-Balanced Strategy for Multimodal Sample Selection
by Yuntao Xu, Bing Chen, Feng Hu, Jiawei Liu, Changjie Zhao and Hongtao Wu
Mach. Learn. Knowl. Extr. 2026, 8(1), 17; https://doi.org/10.3390/make8010017 - 8 Jan 2026
Abstract
With the rapid development of applications such as edge computing, the Internet of Things (IoT), and embodied intelligence, massive multimodal data are continuously generated on end devices in a streaming manner. To maintain model adaptability and robustness in dynamic environments, incremental learning has gradually become the core training paradigm on edge devices. However, edge devices are constrained by limited computational, storage, and communication resources, making it infeasible to retain and process all data samples over time. This necessitates efficient data selection strategies to reduce redundancy and improve training efficiency. Existing sample selection methods primarily focus on overall sample difficulty or gradient contribution, but they overlook the heterogeneity of multimodal data in terms of information content and discriminative power. This often leads to modality imbalance, causing the model to over-rely on a single modality and suffer performance degradation. To address this issue, this paper proposes a multimodal sample selection strategy based on the Modality Balance Score (MBS). The method computes confidence scores at the modality level for each sample and further quantifies the contribution differences across modalities. In the selection process, samples with balanced modality contributions are prioritized, thereby improving training efficiency while alleviating modality bias. Experiments conducted on two benchmark datasets, CREMA-D and AVE, demonstrate that compared with existing approaches, the MBS strategy achieves the most stable performance under medium-to-high selection ratios (0.25–0.4), yielding superior results in both accuracy and robustness. These findings validate the effectiveness of the proposed strategy in resource-constrained scenarios, providing both theoretical insights and practical guidance for multimodal sample selection in learning tasks. Full article
(This article belongs to the Section Learning)
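A rough sketch of the selection idea is given below: per-sample confidence is computed for each modality, and samples whose modalities contribute most evenly are prioritized. The concrete score used here (the absolute confidence gap between an audio and a visual branch) is an illustrative assumption, not the paper's exact Modality Balance Score.

```python
# Rough sketch: prioritize samples whose unimodal confidences are most balanced.
# The absolute audio/visual confidence gap is an assumed stand-in for the MBS.
import numpy as np

def select_balanced(audio_conf: np.ndarray, visual_conf: np.ndarray, ratio: float) -> np.ndarray:
    gap = np.abs(audio_conf - visual_conf)   # smaller gap = more balanced contribution
    k = int(ratio * gap.size)
    return np.argsort(gap)[:k]               # indices of the selected samples

rng = np.random.default_rng(0)
audio, visual = rng.random(1000), rng.random(1000)
selected = select_balanced(audio, visual, ratio=0.3)
print(selected.size)                          # 300 samples kept at a 0.3 selection ratio
```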
24 pages, 6072 KB  
Article
Atrial Fibrillation Detection from At-Rest PPG Signals Using an SDOF-TF Method
by Mamun Hasan and Zhili Hao
Sensors 2026, 26(2), 416; https://doi.org/10.3390/s26020416 - 8 Jan 2026
Abstract
At-rest PPG signals have been explored for detecting atrial fibrillation (AF), yet current signal-processing techniques do not achieve perfect accuracy even under low-motion artifact (MA) conditions. This study evaluates the effectiveness of a single-degree-of-freedom time–frequency (SDOF-TF) method in analyzing at-rest PPG signals for AF detection. The method leverages the influence of MA on the instant parameters of each harmonic, which is identified using an SDOF model in which the tissue–contact–sensor (TCS) stack is treated as an SDOF system. In this model, MA induces baseline drift and time-varying system parameters. The SDOF-TF method enables the quantification and removal of MA and noise, allowing for the accurate extraction of the arterial pulse waveform, heart rate (HR), heart rate variability (HRV), respiration rate (RR), and respiration modulation (RM). Using data from the MIMIC PERform AF dataset, the method achieved 100% accuracy in distinguishing AF from non-AF cases based on three features: (1) RM, (2) HRV derived from instant frequency and instant initial phase, and (3) standard deviation of HR across harmonics. Compared with non-AF, the RM for each harmonic was increased by AF. RM exhibited an increasing trend with harmonic order in non-AF subjects, whereas this trend was diminished in AF subjects. Full article
39 pages, 3295 KB  
Article
EODE-PFA: A Multi-Strategy Enhanced Pathfinder Algorithm for Engineering Optimization and Feature Selection
by Meiyan Li, Chuxin Cao and Mingyang Du
Biomimetics 2026, 11(1), 57; https://doi.org/10.3390/biomimetics11010057 - 8 Jan 2026
Abstract
The Pathfinder Algorithm (PFA) is a bionic swarm intelligence optimization algorithm inspired by simulating the cooperative movement of animal groups in nature to search for prey. Based on fitness, the algorithm classifies search individuals into leaders and followers. However, PFA fails to effectively balance the optimization capabilities of leaders and followers, leading to problems such as insufficient population diversity and slow convergence speed in the original algorithm. To address these issues, this paper proposes an enhanced pathfinder algorithm based on multi-strategy (EODE-PFA). Through the synergistic effects of multiple improved strategies, it effectively solves the balance problem between global exploration and local optimization of the algorithm. To verify the performance of EODE-PFA, this paper applies it to CEC2022 benchmark functions, three types of complex engineering optimization problems, and six sets of feature selection problems, respectively, and compares it with eight mature optimization algorithms. Experimental results show that in three different scenarios, EODE-PFA has significant advantages and competitiveness in both convergence speed and solution accuracy, fully verifying its engineering practicality and scenario universality. To highlight the synergistic effects and overall gains of multiple improved strategies, ablation experiments are conducted on key strategies. To further verify the statistical significance of the experimental results, the Wilcoxon signed-rank test is performed in this study. In addition, for feature selection problems, this study selects UCI real datasets with different real-world scenarios and dimensions, and the results show that the algorithm can still effectively balance exploration and exploitation capabilities in discrete scenarios. Full article
30 pages, 588 KB  
Article
Comparative Performance Analysis of Large Language Models for Structured Data Processing: An Evaluation Framework Applied to Bibliometric Analysis
by Maryam Abbasi, Paulo Váz, José Silva, Filipe Cardoso, Filipe Sá and Pedro Martins
Appl. Sci. 2026, 16(2), 669; https://doi.org/10.3390/app16020669 - 8 Jan 2026
Abstract
The proliferation of Large Language Models (LLMs) has transformed natural language processing (NLP) applications across diverse domains. This paper presents a comprehensive comparative analysis of three state-of-the-art language models—GPT-4o, Claude-3, and Julius AI—evaluating their performance across systematic NLP tasks using standardized datasets and evaluation frameworks. We introduce a reusable evaluation methodology incorporating five distinct prompt engineering techniques (Prefix, Cloze, Anticipatory, Heuristic, and Chain of Thought) applied to three categories of linguistic challenges: data extraction, aggregation, and contextual reasoning. Using a bibliometric analysis use case as our evaluation domain, we demonstrate the framework's application to structured data processing tasks common in academic research, business intelligence, and data analytics applications. Our experimental design utilized a curated Scopus bibliographic dataset containing 3212 academic publications to ensure reproducible and objective comparisons, representing structured data processing tasks. The results demonstrated significant performance variations across models and tasks, with GPT-4o achieving 89.3% average accuracy, Julius AI reaching 85.7%, and Claude-3 demonstrating 72.1%; Claude-3 also showed notably high prompt sensitivity (consistency score: 74.3%, compared with GPT-4o: 91.2% and Julius AI: 86.7%). This study revealed critical insights into prompt sensitivity, contextual understanding limitations, and the effectiveness of different prompting strategies for specific task categories. Statistical analysis using repeated measures ANOVA and pairwise t-tests with Bonferroni's correction confirmed significant differences between models (F(2, 132) = 142.3, p < 0.001), with effect sizes ranging from 0.51 to 1.33. Response time analysis showed task-dependent latency patterns: for data extraction tasks, Claude-3 averaged 1.9 s (fastest), GPT-4o 2.1 s, and Julius AI 2.8 s; for contextual reasoning tasks, latency rose to 3.8 s for Claude-3, 4.5 s for GPT-4o, and 5.8 s for Julius AI. Overall averages were 3.2 s for GPT-4o, 4.1 s for Julius AI, and 2.8 s for Claude-3. While specific performance metrics reflect current model versions (GPT-4o: gpt-4o-2024-05-13; Claude-3 Opus: 20240229; Julius AI: v2.1.4), the evaluation framework provides a reusable methodology for ongoing LLM assessment as new versions emerge. These findings provide practical guidance for researchers and practitioners in selecting appropriate LLMs for domain-specific applications and highlight areas requiring further development in language model capabilities. While demonstrated on bibliometric data, this evaluation framework is generalizable to other structured data processing domains. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
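The pairwise comparison with Bonferroni correction reported in this abstract follows a standard recipe, sketched below on toy per-task accuracy arrays rather than the study's measurements.

```python
# Minimal sketch of pairwise paired t-tests with a Bonferroni-corrected alpha,
# using toy per-task accuracies in place of the study's measurements.
from itertools import combinations
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
scores = {                                   # per-task accuracy (toy values)
    "GPT-4o":    rng.normal(0.89, 0.05, 45),
    "Julius AI": rng.normal(0.86, 0.05, 45),
    "Claude-3":  rng.normal(0.72, 0.05, 45),
}
pairs = list(combinations(scores, 2))
alpha = 0.05 / len(pairs)                    # Bonferroni correction
for a, b in pairs:
    t, p = ttest_rel(scores[a], scores[b])
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4g}, significant = {p < alpha}")
```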

20 pages, 1480 KB  
Article
Adaptive Cross-Modal Denoising: Enhancing LiDAR–Camera Fusion Perception in Adverse Circumstances
by Muhammad Arslan Ghaffar, Kangshuai Zhang, Nuo Pan and Lei Peng
Sensors 2026, 26(2), 408; https://doi.org/10.3390/s26020408 - 8 Jan 2026
Abstract
Autonomous vehicles (AVs) rely on LiDAR and camera sensors to perceive their environment. However, adverse weather conditions, such as rain, snow, and fog, negatively affect these sensors, reducing their reliability by introducing unwanted noise. Effective denoising of multimodal sensor data is crucial for safe and reliable AV operation in such circumstances. Existing denoising methods primarily focus on unimodal approaches, addressing noise in individual modalities without fully leveraging the complementary nature of LiDAR and camera data. To enhance multimodal perception in adverse weather, we propose a novel Adaptive Cross-Modal Denoising (ACMD) framework, which leverages modality-specific self-denoising encoders, followed by an Adaptive Bridge Controller (ABC) to evaluate residual noise and guide the direction of cross-modal denoising. Following this, the Cross-Modal Denoising (CMD) module is introduced, which selectively refines the noisier modality using semantic guidance from the cleaner modality. Synthetic noise was added to both sensors’ data during training to simulate real-world noisy conditions. Experiments on the WeatherKITTI dataset show that ACMD surpasses traditional unimodal denoising methods (Restormer, PathNet, BM3D, PointCleanNet) by 28.2% in PSNR and 33.3% in CD, and outperforms state-of-the-art fusion models by 16.2% in JDE. The ACMD framework enhances AV reliability in adverse weather conditions, supporting safe autonomous driving. Full article
(This article belongs to the Section Vehicular Sensing)
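PSNR, one of the metrics used to report the denoising gains above, has a simple explicit form; the sketch below assumes images scaled to [0, 1] and uses random placeholder arrays.

```python
# Minimal PSNR sketch for images in [0, 1]; arrays are random placeholders.
import numpy as np

def psnr(reference: np.ndarray, estimate: np.ndarray, max_val: float = 1.0) -> float:
    mse = float(np.mean((reference - estimate) ** 2))
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.random((256, 256, 3))
denoised = np.clip(clean + 0.05 * rng.normal(size=clean.shape), 0.0, 1.0)
print(f"PSNR: {psnr(clean, denoised):.2f} dB")
```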