Search Results (122)

Search Parameters:
Keywords = noisy data space

18 pages, 3524 KB  
Article
Transformer-Embedded Task-Adaptive-Regularized Prototypical Network for Few-Shot Fault Diagnosis
by Mingkai Xu, Huichao Pan, Siyuan Wang and Shiying Sun
Electronics 2025, 14(19), 3838; https://doi.org/10.3390/electronics14193838 - 27 Sep 2025
Abstract
Few-shot fault diagnosis (FSFD) seeks to build accurate models from scarce labeled data, a frequent challenge in industrial settings with noisy measurements and varying operating conditions. Conventional metric-based meta-learning (MBML) often assumes task-invariant, class-separable feature spaces, which rarely hold in heterogeneous environments. To address this, we propose a Transformer-embedded Task-Adaptive-Regularized Prototypical Network (TETARPN). A tailored Transformer-based Temporal Encoder Module is integrated into MBML to capture long-range dependencies and global temporal correlations in industrial time series. In parallel, a task-adaptive prototype regularization dynamically adjusts constraints according to task difficulty, enhancing intra-class compactness and inter-class separability. This combination improves both adaptability and robustness in FSFD. Experiments on bearing benchmark datasets show that TETARPN consistently outperforms state-of-the-art methods under diverse fault types and operating conditions, demonstrating its effectiveness and potential for real-world deployment. Full article
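The metric-based meta-learning core that TETARPN builds on is the prototypical-network episode: average each class's support embeddings into a prototype and classify queries by nearest prototype. A minimal sketch with random stand-in embeddings (the paper's Transformer encoder and task-adaptive regularizer are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy episode: 3 fault classes, 5 support samples each, 64-d embeddings.
# In the paper these would come from the Transformer-based temporal
# encoder; here they are random stand-ins with a per-class mean shift.
n_way, k_shot, dim = 3, 5, 64
support = rng.normal(size=(n_way, k_shot, dim)) + np.arange(n_way)[:, None, None]
query = support.mean(axis=1) + 0.1 * rng.normal(size=(n_way, dim))

# Class prototypes: mean of each class's support embeddings.
prototypes = support.mean(axis=1)              # (n_way, dim)

# Classify queries by nearest prototype (squared Euclidean distance).
d2 = ((query[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
pred = d2.argmin(axis=1)
```

The task-adaptive regularization described in the abstract would add a penalty on intra-class spread and inter-prototype proximity on top of this distance-based classifier.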

20 pages, 1837 KB  
Article
Unlabeled Insight, Labeled Boost: Contrastive Learning and Class-Adaptive Pseudo-Labeling for Semi-Supervised Medical Image Classification
by Jing Yang, Mingliang Chen, Qinhao Jia and Shuxian Liu
Entropy 2025, 27(10), 1015; https://doi.org/10.3390/e27101015 - 27 Sep 2025
Abstract
The medical imaging domain frequently encounters the dual challenges of annotation scarcity and class imbalance. A critical issue lies in effectively extracting information from limited labeled data while mitigating the dominance of head classes. The existing approaches often overlook in-depth modeling of sample relationships in low-dimensional spaces, while rigid or suboptimal dynamic thresholding strategies in pseudo-label generation are susceptible to noisy label interference, leading to cumulative bias amplification during the early training phases. To address these issues, we propose a semi-supervised medical image classification framework combining labeled data-contrastive learning with class-adaptive pseudo-labeling (CLCP-MT), comprising two key components: the semantic discrimination enhancement (SDE) module and the class-adaptive pseudo-label refinement (CAPR) module. The former incorporates supervised contrastive learning on limited labeled data to fully exploit discriminative information in latent structural spaces, thereby significantly amplifying the value of sparse annotations. The latter dynamically calibrates pseudo-label confidence thresholds according to real-time learning progress across different classes, effectively reducing head-class dominance while enhancing tail-class recognition performance. These synergistic modules collectively achieve breakthroughs in both information utilization efficiency and model robustness, demonstrating superior performance in class-imbalanced scenarios. Extensive experiments on the ISIC2018 skin lesion dataset and Chest X-ray14 thoracic disease dataset validate CLCP-MT’s efficacy. With only 20% labeled and 80% unlabeled data, our framework achieves a 10.38% F1-score improvement on ISIC2018 and a 2.64% AUC increase on Chest X-ray14 compared to the baselines, confirming its effectiveness and superiority under annotation-deficient and class-imbalanced conditions. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
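The class-adaptive pseudo-labeling idea (CAPR) can be illustrated with a per-class confidence threshold scaled by each class's learning progress, so that tail classes face a lower bar. The scaling rule and names below are assumptions, not the paper's exact formula:

```python
import numpy as np

def class_adaptive_mask(probs, base_tau=0.95, progress=None):
    """Keep a pseudo-label only if its confidence clears a per-class
    threshold.  Classes learned less well (lower `progress` in [0, 1])
    get a lower threshold, so tail classes are not starved of labels.
    This linear scaling is an illustrative assumption."""
    conf = probs.max(axis=1)
    pseudo = probs.argmax(axis=1)
    if progress is None:
        progress = np.ones(probs.shape[1])
    tau = base_tau * progress[pseudo]          # per-sample threshold
    return pseudo, conf >= tau

probs = np.array([[0.90, 0.10],    # head class, below 0.95 -> dropped
                  [0.10, 0.90],    # tail class, lower bar -> kept
                  [0.97, 0.03]])   # head class, above 0.95 -> kept
pseudo, keep = class_adaptive_mask(probs, progress=np.array([1.0, 0.6]))
```

Dynamically calibrating the thresholds from real-time per-class accuracy, as the abstract describes, would amount to updating `progress` each training round.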

26 pages, 2590 KB  
Article
IoT-Based Unsupervised Learning for Characterizing Laboratory Operational States to Improve Safety and Sustainability
by Bibars Amangeldy, Timur Imankulov, Nurdaulet Tasmurzayev, Baglan Imanbek, Gulmira Dikhanbayeva and Yedil Nurakhov
Sustainability 2025, 17(18), 8340; https://doi.org/10.3390/su17188340 - 17 Sep 2025
Abstract
Laboratory buildings represent some of the highest energy-consuming infrastructure due to stringent environmental requirements and the continuous operation of specialized equipment. Ensuring both energy efficiency and indoor air quality (IAQ) in such spaces remains a central challenge for sustainable building design and operation. Recent advances in Internet of Things (IoT) systems allow for real-time monitoring of multivariate environmental parameters, including CO2, total volatile organic compounds (TVOC), PM2.5, temperature, humidity, and noise. However, these datasets are often noisy or incomplete, complicating conventional monitoring approaches. Supervised anomaly detection methods are ill-suited to such contexts due to the lack of labeled data. In contrast, unsupervised machine learning (ML) techniques can autonomously detect patterns and deviations without annotations, offering a scalable alternative. The challenge of identifying anomalous environmental conditions and latent operational states in laboratory environments is addressed through the application of unsupervised models to 1808 hourly observations collected over four months. Anomaly detection was conducted using Isolation Forest (300 trees, contamination = 0.05) and One-Class Support Vector Machine (One-Class SVM) (RBF kernel, ν = 0.05, γ auto-scaled). Standardized six-dimensional feature vectors captured key environmental and energy-related variables. K-means clustering (k = 3) revealed three persistent operational states: Empty/Cool (42.6%), Experiment (37.6%), and Crowded (19.8%). Detected anomalies included CO2 surges above 1800 ppm, TVOC concentrations exceeding 4000 ppb, and compound deviations in noise and temperature. The models demonstrated sensitivity to both abrupt and structural anomalies. Latent states were shown to correspond with occupancy patterns, experimental activities, and inactive system operation, offering interpretable environmental profiles. 
The methodology supports integration into adaptive heating, ventilation, and air conditioning (HVAC) frameworks, enabling real-time, label-free environmental management. Findings contribute to intelligent infrastructure development, particularly in resource-constrained laboratories, and advance progress toward sustainability targets in energy, health, and automation. Full article
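The abstract names its models and hyperparameters explicitly (Isolation Forest with 300 trees and contamination = 0.05, One-Class SVM with an RBF kernel and ν = 0.05, K-means with k = 3), so the pipeline can be sketched directly with scikit-learn on synthetic stand-in data; the real six-dimensional feature vectors are not public:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)

# Synthetic stand-in for the six-dimensional feature vectors
# (CO2, TVOC, PM2.5, temperature, humidity, noise).
X = rng.normal(size=(500, 6))
X[:5] += 8.0                     # a few gross anomalies

Xs = StandardScaler().fit_transform(X)

# Hyperparameters as reported in the abstract; gamma="scale" stands in
# for the "auto-scaled" gamma.
iso = IsolationForest(n_estimators=300, contamination=0.05,
                      random_state=0).fit(Xs)
svm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(Xs)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(Xs)

iso_flags = iso.predict(Xs) == -1    # True = flagged as anomaly
svm_flags = svm.predict(Xs) == -1
states = km.labels_                  # three latent operational states
```

On the real data the three K-means clusters would correspond to the Empty/Cool, Experiment, and Crowded states the paper reports.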

21 pages, 8247 KB  
Article
Energy Minimization for Underwater Multipath Time-Delay Estimation
by Miao Feng, Shiliang Fang, Liang An, Chuanqi Zhu, Shuxia Huang, Qing Fan and Yifan Zhou
J. Mar. Sci. Eng. 2025, 13(9), 1764; https://doi.org/10.3390/jmse13091764 - 12 Sep 2025
Abstract
To address the multipath delay estimation problem in distributed hydrophone passive localization systems, a global energy minimization-based method is proposed in this paper. In this method, correlation pulses are treated as tracking targets, and their trajectories are estimated from correlograms formed by multiple frames. Specifically, an energy function is designed to jointly encode pulse similarity, motion continuity, trajectory persistence, data fidelity, and regularization, thereby reformulating multipath delay estimation as a global optimization problem. To balance the discreteness of observations and the continuity of trajectories, the optimization is implemented by alternating between discrete association (solved via α-expansion) and continuous trajectory fitting (using weighted cubic splines). Furthermore, a dynamic hypothesis space expansion strategy based on trajectory merging and splitting is introduced to improve robustness while accelerating convergence. By exploiting both the intrinsic characteristics of correlation pulses in multi-frame processing and the physical properties of motion trajectories, the proposed method achieves higher tracking accuracy without requiring prior knowledge of the number of delay trajectories in a noisy environment. Numerical simulations under various noise conditions and sea trial results validate the superiority of the proposed multipath delay estimation method. Full article
(This article belongs to the Section Ocean Engineering)
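The flavor of the energy function (data fidelity against observed correlation peaks plus a motion-continuity term) can be shown with a deliberately simplified scalar version; the α-expansion and weighted-spline machinery is omitted, and the terms and weights below are illustrative assumptions:

```python
import numpy as np

def trajectory_energy(traj, observations, lam_smooth=1.0, lam_data=1.0):
    """Toy global energy for one delay trajectory.  `traj[t]` is the
    delay estimate at frame t; `observations[t]` is an array of
    candidate correlation-peak delays in that frame.  The terms are
    illustrative, not the paper's exact formulation."""
    traj = np.asarray(traj, dtype=float)
    # Data fidelity: squared distance to the nearest observed peak.
    data = sum(np.min((obs - traj[t]) ** 2) for t, obs in enumerate(observations))
    # Motion continuity: penalize curvature (second differences).
    smooth = np.sum(np.diff(traj, 2) ** 2)
    return lam_data * data + lam_smooth * smooth

obs = [np.array([0.9, 5.0]), np.array([2.1, 7.0]), np.array([3.0])]
straight = [1.0, 2.0, 3.0]       # near-linear trajectory through peaks
jagged = [1.0, 6.9, 3.0]         # jumps to a spurious peak
```

Minimizing such an energy prefers the smooth trajectory even though both candidates pass close to observed peaks in every frame.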

24 pages, 7601 KB  
Article
Network Intrusion Detection Integrating Feature Dimensionality Reduction and Transfer Learning
by Hui Wang, Wei Jiang, Junjie Yang, Zitao Xu and Boxin Zhi
Technologies 2025, 13(9), 409; https://doi.org/10.3390/technologies13090409 - 10 Sep 2025
Abstract
In the Internet era, malicious network intrusions occur frequently and network intrusion detection is increasingly in demand. Addressing the challenges of high-dimensional, nonlinear, and noisy network traffic data, a network intrusion detection model is proposed in this paper. First, a hybrid multi-model feature selection and kernel-based dimensionality reduction algorithm is proposed to map high-dimensional features to a low-dimensional space, achieving feature dimensionality reduction and enhancing nonlinear differentiability. Semantic feature mapping is then introduced to convert the low-dimensional features into color images that represent distinct data characteristics. To classify these images, an integrated convolutional neural network is constructed. Moreover, sub-model fine-tuning is performed through transfer learning and weights are assigned to improve the performance of multi-classification detection. Experiments on the UNSW-NB15 and CICIDS 2017 datasets show that the proposed model achieves accuracies of 99.99% and 99.96%, with F1-scores of 99.98% and 99.91%, respectively. Full article
(This article belongs to the Section Information and Communication Technologies)
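The "map high-dimensional features to a low-dimensional space while enhancing nonlinear differentiability" step is characteristic of kernel PCA. The paper's exact hybrid algorithm is not reproduced here; a generic RBF kernel PCA serves as a stand-in:

```python
import numpy as np

def rbf_kernel_pca(X, n_components=2, gamma=0.1):
    """Generic RBF kernel PCA: a stand-in for a kernel-based
    dimensionality reduction step (not the paper's exact algorithm)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                      # RBF Gram matrix
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # center in feature space
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]  # leading components
    # Project training points; columns scale as sqrt(eigenvalue).
    return Kc @ (vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12)))

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 20))
Z = rbf_kernel_pca(X, n_components=3)
```

In the paper's pipeline the reduced features would then be rendered as color images for the CNN ensemble.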

31 pages, 6007 KB  
Article
Geometry and Topology Preservable Line Structure Construction for Indoor Point Cloud Based on the Encoding and Extracting Framework
by Haiyang Lyu, Hongxiao Xu, Donglai Jiao and Hanru Zhang
Remote Sens. 2025, 17(17), 3033; https://doi.org/10.3390/rs17173033 - 1 Sep 2025
Abstract
The line structure is an efficient form of representation and modeling for LiDAR point clouds, while the Line Structure Construction (LSC) method aims to extract complete and coherent line structures from complex 3D point clouds, thereby providing a foundation for geometric modeling, scene understanding, and downstream applications. However, traditional LSC methods often fall short in preserving both the geometric integrity and topological connectivity of line structures derived from such datasets. To address this issue, we propose the Geometry and Topology Preservable Line Structure Construction (GTP-LSC) method, based on the Encoding and Extracting Framework (EEF). First, in the encoding phase, point cloud features related to line structures are mapped into a high-dimensional feature space. A 3D U-Net is then employed to compute Subsets with Structure feature of Line (SSL) from the dense, unstructured, and noisy indoor LiDAR point cloud data. Next, in the extraction phase, the SSL is transformed into a 3D field enriched with line features. Initially extracted line structures are then constructed based on Morse theory, effectively preserving the topological relationships. In the final step, these line structures are optimized using RANdom SAmple Consensus (RANSAC) and Constructive Solid Geometry (CSG) to ensure geometric completeness. This step also facilitates the generation of complex entities, enabling an accurate and comprehensive representation of both geometric and topological aspects of the line structures. Experiments were conducted using the Indoor Laser Scanning Dataset, focusing on the parking garage (D1), the corridor (D2), and the multi-room structure (D3). The results demonstrated that the proposed GTP-LSC method outperformed existing approaches in terms of both geometric integrity and topological connectivity. 
To evaluate the performance of different LSC methods, the IoU Buffer Ratio (IBR) was used to measure the overlap between the actual and constructed line structures. The proposed method achieved IBR scores of 92.5% (D1), 94.2% (D2), and 90.8% (D3) for these scenes. Additionally, Precision, Recall, and F-Score were calculated to further assess the LSC results. The F-Score of the proposed method was 0.89 (D1), 0.92 (D2), and 0.89 (D3), demonstrating superior performance in both visual analysis and quantitative results compared to other methods. Full article
(This article belongs to the Special Issue Point Cloud Data Analysis and Applications)
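The final RANSAC-based optimization step can be sketched for a single 3D line: sample two points, count points within a tolerance of the implied line, and keep the best model. This is a minimal generic RANSAC, not the paper's full CSG-coupled refinement:

```python
import numpy as np

def ransac_line_3d(pts, n_iter=200, tol=0.05, rng=None):
    """Fit one 3D line with RANSAC: repeatedly pick two points, count
    inliers within `tol` of the implied line, keep the best model."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        d = pts[j] - pts[i]
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        # Point-to-line distance: reject (p - pts[i]) onto the direction.
        v = pts - pts[i]
        dist = np.linalg.norm(v - np.outer(v @ d, d), axis=1)
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

rng = np.random.default_rng(3)
t = rng.uniform(0, 1, 80)
line_pts = np.outer(t, [1.0, 2.0, 0.5]) + 0.01 * rng.normal(size=(80, 3))
noise_pts = rng.uniform(-2, 2, size=(20, 3))
pts = np.vstack([line_pts, noise_pts])
inliers = ransac_line_3d(pts)
```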

30 pages, 1831 KB  
Article
Integrating Cacao Physicochemical-Sensory Profiles via Gaussian Processes Crowd Learning and Localized Annotator Trustworthiness
by Juan Camilo Lugo-Rojas, Maria José Chica-Morales, Sergio Leonardo Florez-González, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Foods 2025, 14(17), 2961; https://doi.org/10.3390/foods14172961 - 25 Aug 2025
Abstract
Understanding the intricate relationship between sensory perception and physicochemical properties of cacao-based products is crucial for advancing quality control and driving product innovation. However, effectively integrating these heterogeneous data sources poses a significant challenge, particularly when sensory evaluations are derived from low-quality, subjective, and often inconsistent annotations provided by multiple experts. We propose a comprehensive framework that leverages a correlated chained Gaussian processes model for learning from crowds, termed MAR-CCGP, specifically designed for a customized Casa Luker database that integrates sensory and physicochemical data on cacao-based products. By formulating sensory evaluations as regression tasks, our approach enables the estimation of continuous perceptual scores from physicochemical inputs, while concurrently inferring the latent, input-dependent reliability of each annotator. To address the inherent noise, subjectivity, and non-stationarity in expert-generated sensory data, we introduce a three-stage methodology: (i) construction of an integrated database that unifies physicochemical parameters with corresponding sensory descriptors; (ii) application of a MAR-CCGP model to infer the underlying ground truth from noisy, crowd-sourced, and non-stationary sensory annotations; and (iii) development of a novel localized expert trustworthiness approach, also based on MAR-CCGP, which dynamically adjusts for variations in annotator consistency across the input space. Our approach provides a robust, interpretable, and scalable solution for learning from heterogeneous and noisy sensory data, establishing a principled foundation for advancing data-driven sensory analysis and product optimization in the food science domain. 
We validate the effectiveness of our method through a series of experiments on both semi-synthetic data and a novel real-world dataset developed in collaboration with Casa Luker, which integrates sensory evaluations with detailed physicochemical profiles of cacao-based products. Compared to state-of-the-art learning-from-crowds baselines, our framework consistently achieves superior predictive performance and more precise annotator reliability estimation, demonstrating its efficacy in multi-annotator regression settings. Of note, our unique combination of a novel database, robust noisy-data regression, and input-dependent trust scoring sets MAR-CCGP apart from existing approaches. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) and Machine Learning for Foods)
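The notion of localized (input-dependent) annotator trustworthiness can be illustrated with a much simpler proxy than the paper's chained Gaussian-process model: kernel-weighted agreement between each annotator and the crowd mean around a query input. Everything below is an illustrative stand-in, not MAR-CCGP:

```python
import numpy as np

def local_reliability(x, annotations, x_query, bandwidth=0.5):
    """Input-dependent trust score: kernel-weighted squared disagreement
    between each annotator and the crowd mean near `x_query`, mapped to
    (0, 1].  A deliberately simple stand-in for a GP-based model."""
    w = np.exp(-0.5 * ((x - x_query) / bandwidth) ** 2)
    consensus = annotations.mean(axis=1)           # crowd mean per item
    err = (annotations - consensus[:, None]) ** 2  # per-annotator error
    mse = (w[:, None] * err).sum(axis=0) / w.sum() # locally weighted MSE
    return 1.0 / (1.0 + mse)                       # higher = more trusted

rng = np.random.default_rng(4)
x = np.linspace(0, 10, 200)
truth = np.sin(x)
good1 = truth + 0.05 * rng.normal(size=200)
good2 = truth + 0.05 * rng.normal(size=200)
drifty = truth + np.where(x > 5, 2.0, 0.05) * rng.normal(size=200)
A = np.column_stack([good1, good2, drifty])

trust_left = local_reliability(x, A, x_query=2.0)   # all annotators fine
trust_right = local_reliability(x, A, x_query=8.0)  # annotator 3 noisy here
```

The point the paper makes is precisely that reliability varies across the input space: annotator 3 is trustworthy on the left half of the domain and not on the right.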

29 pages, 12228 KB  
Article
Conditional Domain Adaptation with α-Rényi Entropy Regularization and Noise-Aware Label Weighting
by Diego Armando Pérez-Rosero, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Mathematics 2025, 13(16), 2602; https://doi.org/10.3390/math13162602 - 14 Aug 2025
Abstract
Domain adaptation is a key approach to ensure that artificial intelligence models maintain reliable performance when facing distributional shifts between training (source) and testing (target) domains. However, existing methods often struggle to simultaneously preserve domain-invariant representations and discriminative class structures, particularly in the presence of complex covariate shifts and noisy pseudo-labels in the target domain. In this work, we introduce Conditional Rényi α-Entropy Domain Adaptation, named CREDA, a novel deep learning framework for domain adaptation that integrates kernel-based conditional alignment with a differentiable, matrix-based formulation of Rényi’s quadratic entropy. The proposed method comprises three main components: (i) a deep feature extractor that learns domain-invariant representations from labeled source and unlabeled target data; (ii) an entropy-weighted approach that down-weights low-confidence pseudo-labels, enhancing stability in uncertain regions; and (iii) a class-conditional alignment loss, formulated as a Rényi-based entropy kernel estimator, that enforces semantic consistency in the latent space. We validate CREDA on standard benchmark datasets for image classification, including Digits, ImageCLEF-DA, and Office-31, showing competitive performance against both classical and deep learning-based approaches. Furthermore, we employ nonlinear dimensionality reduction and class activation map visualizations to provide interpretability, revealing meaningful alignment in feature space and offering insights into the relevance of individual samples and attributes. Experimental results confirm that CREDA improves cross-domain generalization while promoting accuracy, robustness, and interpretability. Full article
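The matrix-based Rényi entropy the abstract refers to has a standard estimator (Sánchez Giraldo et al.): for a trace-normalized kernel Gram matrix A, the order-2 entropy is S₂(A) = −log₂ tr(A²). A minimal sketch of that estimator alone; the class-conditional weighting in CREDA's loss is omitted:

```python
import numpy as np

def renyi_quadratic_entropy(X, sigma=1.0):
    """Matrix-based Rényi entropy of order alpha = 2:
    S_2(A) = -log2 tr(A^2) for the trace-normalized RBF Gram matrix A.
    Low for near-identical samples, up to log2(n) for well-spread ones."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    A = K / np.trace(K)                     # trace-normalized Gram matrix
    return -np.log2(np.trace(A @ A))

rng = np.random.default_rng(5)
tight = 0.01 * rng.normal(size=(50, 4))     # nearly identical samples
spread = 5.0 * rng.normal(size=(50, 4))     # well-separated samples
H_tight = renyi_quadratic_entropy(tight)
H_spread = renyi_quadratic_entropy(spread)
```

Because the estimator is differentiable in the Gram matrix, it can be used directly as a regularizer in a deep learning loss, which is how the abstract describes its role.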

25 pages, 7802 KB  
Article
A Hybrid Ensemble Equilibrium Optimizer Gene Selection Algorithm for Microarray Data
by Peng Su, Yuxin Zhao, Xiaobo Li, Zhendi Ma and Hui Wang
Biomimetics 2025, 10(8), 523; https://doi.org/10.3390/biomimetics10080523 - 10 Aug 2025
Abstract
As modern medical technology advances, the utilization of gene expression data has proliferated across diverse domains, particularly in cancer diagnosis and prognosis monitoring. However, gene expression data is often characterized by high dimensionality and a prevalence of redundant and noisy information, prompting the need for effective strategies to mitigate issues like the curse of dimensionality and overfitting. This study introduces a novel hybrid ensemble equilibrium optimizer gene selection algorithm in response. In the first stage, a hybrid approach, combining multiple filters and gene correlation-based methods, is used to select an optimal subset of genes, which is achieved by evaluating the redundancy and complementary relationships among genes to obtain a subset with maximal information content. In the second stage, an equilibrium optimizer algorithm incorporating Gaussian Barebone and a novel gene pruning strategy is employed to further search for the optimal gene subset within the candidate gene space selected in the first stage. To demonstrate the superiority of the proposed method, it was compared with nine feature selection techniques on 15 datasets. The results indicate that the ensemble filtering method in the first stage exhibits strong stability and effectively reduces the search space of the gene selection algorithms. The improved equilibrium optimizer algorithm enhances the prediction accuracy while significantly reducing the number of selected features. These findings highlight the effectiveness of the proposed method as a valuable approach for gene selection. Full article
(This article belongs to the Special Issue Exploration of Bio-Inspired Computing: 2nd Edition)
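The first-stage idea, scoring every gene with several cheap filters and aggregating the ranks to shrink the search space for the wrapper, can be sketched as follows. The specific filters chosen here (variance, Fisher score, point-biserial correlation) are illustrative, not the paper's exact ensemble:

```python
import numpy as np

def ensemble_filter(X, y, top_k=10):
    """First-stage sketch: score every gene with several cheap filters,
    aggregate the ranks, and keep the top_k genes as the candidate
    subset for the second-stage wrapper search."""
    var = X.var(axis=0)
    mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    s0, s1 = X[y == 0].var(axis=0), X[y == 1].var(axis=0)
    fisher = (mu0 - mu1) ** 2 / (s0 + s1 + 1e-12)
    corr = np.abs(np.corrcoef(X.T, y)[:-1, -1])
    # Average rank across filters (rank 0 = best score).
    ranks = sum(np.argsort(np.argsort(-s)) for s in (var, fisher, corr))
    return np.argsort(ranks)[:top_k]

rng = np.random.default_rng(6)
y = rng.integers(0, 2, 120)
X = rng.normal(size=(120, 500))             # 500 mostly noise "genes"
X[:, 7] += 3.0 * y                          # gene 7 is informative
selected = ensemble_filter(X, y, top_k=10)
```

The equilibrium-optimizer search in the second stage would then operate only over this reduced candidate subset.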

14 pages, 2178 KB  
Article
State-of-the-Art Document Image Binarization Using a Decision Tree Ensemble Trained on Classic Local Binarization Algorithms and Image Statistics
by Nicolae Tarbă, Costin-Anton Boiangiu and Mihai-Lucian Voncilă
Appl. Sci. 2025, 15(15), 8374; https://doi.org/10.3390/app15158374 - 28 Jul 2025
Abstract
Image binarization algorithms reduce the original color space to only two values, black and white. They are an important preprocessing step in many computer vision applications. Image binarization is typically performed using a threshold value by classifying the pixels into two categories: lower and higher than the threshold. Global thresholding uses a single threshold value for the entire image, whereas local thresholding uses different values for the different pixels. Although slower and more complex than global thresholding, local thresholding can better classify pixels in noisy areas of an image by considering not only the pixel’s value, but also its surrounding neighborhood. This study introduces a local thresholding method that uses the results of several local thresholding algorithms and other image statistics to train a decision tree ensemble. Through cross-validation, we demonstrate that the model is robust and performs well on new data. We compare the results with state-of-the-art solutions and reveal significant improvements in the average F-measure for all DIBCO datasets, obtaining an F-measure of 95.8%, whereas the previous high score was 93.1%. The proposed solution significantly outperformed the previous state-of-the-art algorithms on the DIBCO 2019 dataset, obtaining an F-measure of 95.8%, whereas the previous high score was 73.8%. Full article
(This article belongs to the Special Issue Statistical Signal Processing: Theory, Methods and Applications)
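Two of the classic local algorithms such a decision-tree ensemble would consume are Niblack (T = m + k·s) and Sauvola (T = m·(1 + k·(s/R − 1))), both computed from local window statistics. A sketch of those per-pixel thresholds on a toy "document"; feeding the resulting masks plus statistics into a tree ensemble is the paper's contribution and is not reproduced:

```python
import numpy as np

def local_stats(img, r=7):
    """Mean and std over (2r+1)^2 windows via padded sliding windows.
    Simple O(n * window) version for clarity, not speed."""
    p = np.pad(img.astype(float), r, mode="edge")
    wins = np.lib.stride_tricks.sliding_window_view(p, (2 * r + 1, 2 * r + 1))
    return wins.mean(axis=(2, 3)), wins.std(axis=(2, 3))

def niblack(img, r=7, k=-0.2):
    m, s = local_stats(img, r)
    return img < m + k * s                     # True = black (ink)

def sauvola(img, r=7, k=0.2, R=128.0):
    m, s = local_stats(img, r)
    return img < m * (1 + k * (s / R - 1))

# Toy "document": dark stroke (40) on a brighter noisy background (~200).
rng = np.random.default_rng(7)
img = 200 + 10 * rng.normal(size=(64, 64))
img[20:24, 8:56] = 40                          # a horizontal stroke
nb, sv = niblack(img), sauvola(img)
```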

17 pages, 1467 KB  
Article
Confidence-Based Knowledge Distillation to Reduce Training Costs and Carbon Footprint for Low-Resource Neural Machine Translation
by Maria Zafar, Patrick J. Wall, Souhail Bakkali and Rejwanul Haque
Appl. Sci. 2025, 15(14), 8091; https://doi.org/10.3390/app15148091 - 21 Jul 2025
Abstract
The transformer-based deep learning approach represents the current state-of-the-art in machine translation (MT) research. Large-scale pretrained transformer models produce state-of-the-art performance across a wide range of MT tasks for many languages. However, such deep neural network (NN) models are often data-, compute-, space-, power-, and energy-hungry, typically requiring powerful GPUs or large-scale clusters to train and deploy. As a result, they are often regarded as “non-green” and “unsustainable” technologies. Distilling knowledge from large deep NN models (teachers) to smaller NN models (students) is a widely adopted sustainable development approach in MT as well as in broader areas of natural language processing (NLP), including speech and image processing. However, distilling large pretrained models presents several challenges. First, training time and cost scale with the volume of data used to train a student model. This can pose a challenge for translation service providers (TSPs), as they may have limited budgets for training. Moreover, CO2 emissions generated during model training are typically proportional to the amount of data used, contributing to environmental harm. Second, when querying teacher models, including encoder–decoder models such as NLLB, the translations they produce for low-resource languages may be noisy or of low quality. This can undermine sequence-level knowledge distillation (SKD), as student models may inherit and reinforce errors from inaccurate labels. In this study, the teacher model’s confidence estimation is employed to filter from the distilled training data those instances for which the teacher exhibits low confidence. We tested our methods on a low-resource Urdu-to-English translation task operating within a constrained training budget in an industrial translation setting.
Our findings show that confidence estimation-based filtering can significantly reduce the cost and CO2 emissions associated with training a student model without a drop in translation quality, making it a practical and environmentally sustainable solution for TSPs. Full article
(This article belongs to the Special Issue Deep Learning and Its Applications in Natural Language Processing)
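The filtering step itself is simple to sketch: keep a distilled (source, hypothesis) pair only when the teacher's average token log-probability clears a threshold. The data layout, threshold value, and toy strings below are assumptions; the paper derives its confidence scores from its own teacher model:

```python
def filter_by_confidence(triples, tau=-1.0):
    """Keep a distilled (source, hypothesis) pair only when the
    teacher's average token log-probability is at least `tau`.
    `triples` holds (source, hypothesis, token_logprobs) tuples."""
    kept = []
    for src, hyp, token_logprobs in triples:
        avg = sum(token_logprobs) / len(token_logprobs)
        if avg >= tau:
            kept.append((src, hyp))
    return kept

# Hypothetical distilled pairs with per-token teacher log-probs.
triples = [
    ("jumla ek", "sentence one", [-0.2, -0.3, -0.1]),      # confident
    ("jumla do", "garbled output", [-2.5, -3.0, -4.0]),    # low confidence
    ("jumla teen", "sentence three", [-0.8, -0.9, -0.7]),  # confident enough
]
kept = filter_by_confidence(triples, tau=-1.0)
```

Dropping the low-confidence pairs shrinks the student's training set, which is exactly where the reported cost and CO2 savings come from.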

29 pages, 8563 KB  
Article
A Bridge Crack Segmentation Algorithm Based on Fuzzy C-Means Clustering and Feature Fusion
by Yadong Yao, Yurui Zhang, Zai Liu and Heming Yuan
Sensors 2025, 25(14), 4399; https://doi.org/10.3390/s25144399 - 14 Jul 2025
Abstract
In response to the limitations of traditional image processing algorithms, such as high noise sensitivity and threshold dependency in bridge crack detection, and the extensive labeled data requirements of deep learning methods, this study proposes a novel crack segmentation algorithm based on fuzzy C-means (FCM) clustering and multi-feature fusion. A three-dimensional feature space is constructed using B-channel pixels and fuzzy clustering with c = 3, justified by the distinct distribution patterns of these three regions in the image, enabling effective preliminary segmentation. To enhance accuracy, connected domain labeling combined with a circularity threshold is introduced to differentiate linear cracks from granular noise. Furthermore, a 5 × 5 neighborhood search strategy, based on crack pixel amplitude, is designed to restore the continuity of fragmented cracks. Experimental results on the Concrete Crack and SDNET2018 datasets demonstrate that the proposed algorithm achieves an accuracy of 0.885 and a recall rate of 0.891, outperforming DeepLabv3+ by 4.2%. Notably, with a processing time of only 0.8 s per image, the algorithm balances high accuracy with real-time efficiency, effectively addressing challenges, such as missed fine cracks and misjudged broken cracks in noisy environments by integrating geometric features and pixel distribution characteristics. This study provides an efficient unsupervised solution for bridge damage detection. Full article
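Two ingredients of the pipeline are easy to sketch: a minimal fuzzy C-means loop with c = 3 (matching the paper's three-region choice), and the circularity measure 4πA/P² used to separate linear cracks from granular noise. The feature construction below is simplified to raw values and the initialization is chosen for a stable toy demo:

```python
import numpy as np

def fcm(X, c=3, m=2.0, n_iter=50):
    """Minimal fuzzy C-means on feature vectors X of shape (n, d)."""
    centers = X[:: max(1, len(X) // c)][:c].astype(float)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        w = d ** (-2.0 / (m - 1.0))            # u_ik proportional to d^(-2/(m-1))
        U = w / w.sum(axis=1, keepdims=True)   # fuzzy memberships
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
    return U, centers

def circularity(area, perimeter):
    """4*pi*A / P^2: ~1 for a disk, near 0 for elongated crack-like
    shapes, so thresholding it separates cracks from granular noise."""
    return 4 * np.pi * area / perimeter ** 2

rng = np.random.default_rng(8)
vals = np.concatenate([np.full(50, 0.1), np.full(50, 0.5), np.full(50, 0.9)])
X = (vals + 0.01 * rng.normal(size=150))[:, None]
U, centers = fcm(X)
```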

18 pages, 300 KB  
Article
Applications of Complex Uncertain Sequences via Lacunary Almost Statistical Convergence
by Xiu-Liang Qiu, Kuldip Raj, Sanjeev Verma, Samrati Gorka, Shixiao Xiao and Qing-Bo Cai
Axioms 2025, 14(7), 526; https://doi.org/10.3390/axioms14070526 - 10 Jul 2025
Abstract
We explore the realm of uncertainty theory by investigating diverse notions of convergence and statistical convergence concerning complex uncertain sequences. Complex uncertain variables can be described as measurable functions mapping from an uncertainty space to the set of complex numbers. They are employed to represent and model complex uncertain quantities. We introduce the concept of lacunary almost statistical convergence of order α (0 < α ≤ 1) for complex uncertain sequences, examining various aspects of uncertainty such as distribution, mean, measure, uniformly almost sure convergence and almost sure convergence. Additionally, we establish connections between the constructed sequence spaces by providing illustrative instances. Importantly, lacunary almost statistical convergence provides a flexible framework for handling sequences with irregular behavior, which often arise in uncertain environments with imprecise data. This makes our approach particularly useful in practical fields such as engineering, data modeling and decision-making, where traditional deterministic methods are not always applicable. Our approach offers a more flexible and realistic framework for approximating functions in uncertain environments where classical convergence may not apply. Thus, this study contributes to approximation theory by extending its tools to settings involving imprecise or noisy data. Full article
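For reference, the classical template the paper's order-α notion builds on (a standard statement from the lacunary statistical convergence literature, not quoted from this article; the paper additionally works with uncertain-measure and "almost" variants):

```latex
% Lacunary sequence: k_0 = 0, h_r = k_r - k_{r-1} \to \infty,
% with intervals I_r = (k_{r-1}, k_r].
% x = (x_k) is lacunary statistically convergent of order \alpha to L if,
% for every \varepsilon > 0,
\lim_{r \to \infty} \frac{1}{h_r^{\alpha}}
\left| \left\{ k \in I_r : \lVert x_k - L \rVert \ge \varepsilon \right\} \right| = 0,
\qquad 0 < \alpha \le 1 .
```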

23 pages, 3677 KB  
Article
HG-Mamba: A Hybrid Geometry-Aware Bidirectional Mamba Network for Hyperspectral Image Classification
by Xiaofei Yang, Jiafeng Yang, Lin Li, Suihua Xue, Haotian Shi, Haojin Tang and Xiaohui Huang
Remote Sens. 2025, 17(13), 2234; https://doi.org/10.3390/rs17132234 - 29 Jun 2025
Viewed by 837
Abstract
Deep learning has demonstrated significant success in hyperspectral image (HSI) classification by effectively leveraging spatial–spectral feature learning. However, current approaches encounter three challenges: (1) high spectral redundancy and the presence of noisy bands, which impair the extraction of discriminative features; (2) limited spatial receptive fields inherent in convolutional operations; and (3) unidirectional context modeling that inadequately captures bidirectional dependencies in non-causal HSI data. To address these challenges, this paper proposes HG-Mamba, a novel hybrid geometry-aware bidirectional Mamba network for HSI classification. The proposed HG-Mamba synergistically integrates convolutional operations, geometry-aware filtering, and bidirectional state-space models (SSMs) to achieve robust spectral–spatial representation learning. The proposed framework comprises two stages. The first stage, termed spectral compression and discrimination enhancement, employs multi-scale spectral convolutions alongside a spectral bidirectional Mamba (SeBM) module to suppress redundant bands while modeling long-range spectral dependencies. The second stage, designated spatial structure perception and context modeling, incorporates a Gaussian Distance Decay (GDD) mechanism to adaptively reweight spatial neighbors based on geometric distances, coupled with a spatial bidirectional Mamba (SaBM) module for comprehensive global context modeling. The GDD mechanism facilitates boundary-aware feature extraction by prioritizing spatially proximate pixels, while the bidirectional SSMs mitigate unidirectional bias through parallel forward–backward state transitions. Extensive experiments on the Indian Pines, Houston2013, and WHU-Hi-LongKou datasets demonstrate the superior performance of HG-Mamba, achieving overall accuracies of 94.91%, 98.41%, and 98.67%, respectively. Full article
(This article belongs to the Special Issue AI-Driven Hyperspectral Remote Sensing of Atmosphere and Land)
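The Gaussian-distance reweighting idea behind the GDD mechanism can be illustrated with a minimal sketch. This is not the paper's implementation: the patch size, the σ parameter, and the normalization are illustrative assumptions, showing only how geometric distance from the center pixel can produce decaying neighbor weights.

```python
import numpy as np

def gaussian_distance_decay(patch_size: int, sigma: float = 1.0) -> np.ndarray:
    """Gaussian decay weights for a square spatial patch.

    Neighbors closer to the center pixel receive larger weights,
    mimicking the boundary-aware reweighting idea described above.
    """
    center = (patch_size - 1) / 2.0
    ys, xs = np.mgrid[0:patch_size, 0:patch_size]
    dist_sq = (ys - center) ** 2 + (xs - center) ** 2
    weights = np.exp(-dist_sq / (2.0 * sigma ** 2))
    return weights / weights.sum()  # normalize so weights sum to 1

w = gaussian_distance_decay(5, sigma=1.5)
print(w[2, 2] == w.max())  # center pixel carries the largest weight
```

In a full network these weights would modulate the contribution of each neighbor's features before the spatial Mamba module aggregates context.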

16 pages, 4334 KB  
Article
Dynamic Monitoring of a Bridge from GNSS-RTK Sensor Using an Improved Hybrid Denoising Method
by Chunbao Xiong, Zhi Shang, Meng Wang and Sida Lian
Sensors 2025, 25(12), 3723; https://doi.org/10.3390/s25123723 - 13 Jun 2025
Viewed by 483
Abstract
This study focused on monitoring a bridge using a global navigation satellite system real-time kinematic (GNSS-RTK) sensor. An improved hybrid denoising method was developed to enhance the GNSS-RTK's accuracy, combining the improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN), detrended fluctuation analysis (DFA), and an improved wavelet threshold denoising method. A stability experiment demonstrated the superiority of the improved wavelet threshold denoising method in reducing GNSS-RTK noise. A noisy simulated signal was created to assess the performance of the proposed method; compared with the ICEEMDAN and CEEMDAN-WT methods, the proposed method achieves a lower RMSE and a higher SNR, and its output closely matches the original signal. GNSS-RTK was then used to monitor a bridge, located in Tianjin, China, during maintenance and rehabilitation construction. The monitoring experiment lasted four hours (owing to space limitations, only a representative 600 s of data is displayed in the paper). The original displacement ranges are −14.9~19.3 in the north–south direction, −26.9~24.7 in the east–west direction, and −46.7~52.3 in the vertical direction; after processing with the proposed method, they narrow to −12.3~17.2, −24.6~24.1, and −46.7~51.1, respectively. This narrowing indicates that the proposed method significantly reduces noise when monitoring the bridge with the GNSS-RTK sensor. The average sixth-order frequency from the PSD is 1.0043 Hz, differing from the FEA result by only 0.99%, so the sixth-order frequencies from the PSD and the FEA agree closely, while the lower-mode natural frequencies from the PSD are smaller than those from the FEA. This illustrates that, during the repair process, the missing load-bearing rods reduced the bridge's stiffness and strength. The reduced natural frequencies of the bridge, the complex construction environment, the diversity of workers' operations, and unforeseen circumstances arising during construction all pose risks to the safety of the bridge. More attention should be paid to dynamic monitoring of bridges during construction, so that the structural status can be understood in time to prevent accidents. Full article
(This article belongs to the Section Intelligent Sensors)
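The abstract's RMSE/SNR comparison rests on thresholding transform-domain coefficients of a noisy signal. A minimal sketch of that idea follows; it is not the authors' ICEEMDAN–DFA–wavelet pipeline but a simplified soft-threshold denoiser using the FFT as a stand-in transform domain, with the signal, noise level, and threshold rule all illustrative assumptions.

```python
import numpy as np

def soft_threshold(x: np.ndarray, t: float) -> np.ndarray:
    """Soft-threshold operator: shrink magnitudes toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def rmse(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.sqrt(np.mean((a - b) ** 2)))

def snr_db(clean: np.ndarray, estimate: np.ndarray) -> float:
    """SNR of an estimate relative to the clean reference, in dB."""
    err = clean - estimate
    return float(10.0 * np.log10(np.sum(clean ** 2) / np.sum(err ** 2)))

rng = np.random.default_rng(42)
t_axis = np.linspace(0.0, 1.0, 2000, endpoint=False)
clean = np.sin(2.0 * np.pi * 5.0 * t_axis)            # 5 Hz "structural" motion
noisy = clean + 0.3 * rng.standard_normal(t_axis.size)

# Shrink small Fourier coefficient magnitudes; the median magnitude
# estimates the noise floor because the signal energy sits in very
# few coefficients.
coeffs = np.fft.rfft(noisy)
mag = np.abs(coeffs)
threshold = 3.0 * np.median(mag)
shrunk_mag = soft_threshold(mag, threshold)
coeffs_dn = coeffs * shrunk_mag / np.maximum(mag, 1e-12)
denoised = np.fft.irfft(coeffs_dn, n=noisy.size)

print(snr_db(clean, denoised) > snr_db(clean, noisy))  # denoising raises SNR
```

The same RMSE/SNR criteria can then compare denoisers, which is how the abstract benchmarks the hybrid method against ICEEMDAN and CEEMDAN-WT.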
