Search Results (243)

Search Parameters:
Keywords = multiplicative dimensional reduction method

19 pages, 14851 KB  
Article
Investigation on the Evolution Mechanism of the Mechanical Performance of Road Tunnel Linings Under Reinforcement Corrosion
by Jianyu Hong, Xuezeng Liu, Dexing Wu and Jiahui Fu
Buildings 2025, 15(20), 3723; https://doi.org/10.3390/buildings15203723 - 16 Oct 2025
Abstract
To clarify the influence of reinforcement corrosion on the mechanical performance of road tunnel linings, localized tests on reinforcement-induced concrete expansion are conducted to identify cracking patterns and their effects on load-bearing behavior. Refined three-dimensional finite element models of localized concrete and the entire tunnel are developed using the concrete damaged plasticity model and the extended finite element method and validated against experimental results. The mechanical response and crack evolution of the lining under corrosion are analyzed. Results show that in single-reinforcement specimens, cracks propagate perpendicular to the reinforcement axis, whereas in multiple-reinforcement specimens, interacting cracks coalesce to form a π-shaped pattern. The cover-layer crack width exhibits a linear relationship with the corrosion rate. Corrosion leads to a reduction in the stiffness and load-bearing capacity of the local concrete. At the tunnel scale, however, its influence remains highly localized, and the additional deflection exhibits little correlation with the initial deflection. Local corrosion causes a decrease in bending moment and an increase in axial force in adjacent linings; when the corrosion rate exceeds about 15%, stiffness damage and internal force distribution tend to stabilize. Damage and cracks initiate around corroded reinforcement holes, extend toward the cover layer, and connect longitudinally, forming potential spalling zones. Full article
(This article belongs to the Section Building Structures)

15 pages, 2232 KB  
Article
Image-Based Deep Learning for Brain Tumour Transcriptomics: A Benchmark of DeepInsight, Fotomics, and Saliency-Guided CNNs
by Ali Alyatimi, Vera Chung, Muhammad Atif Iqbal and Ali Anaissi
Mach. Learn. Knowl. Extr. 2025, 7(4), 119; https://doi.org/10.3390/make7040119 - 15 Oct 2025
Abstract
Classifying brain tumour transcriptomic data is crucial for precision medicine but remains challenging due to high dimensionality and limited interpretability of conventional models. This study benchmarks three image-based deep learning approaches, DeepInsight, Fotomics, and a novel saliency-guided convolutional neural network (CNN), for transcriptomic classification. DeepInsight utilises dimensionality reduction to spatially arrange gene features, while Fotomics applies Fourier transforms to encode expression patterns into structured images. The proposed method transforms each single-cell gene expression profile into an RGB image using PCA, UMAP, or t-SNE, enabling CNNs such as ResNet to learn spatially organised molecular features. Gradient-based saliency maps are employed to highlight gene regions most influential in model predictions. Evaluation is conducted on two biologically and technologically different datasets: single-cell RNA-seq from glioblastoma GSM3828672 and bulk microarray data from medulloblastoma GSE85217. Outcomes demonstrate that image-based deep learning methods, particularly those incorporating saliency guidance, provide a robust and interpretable framework for uncovering biologically meaningful patterns in complex high-dimensional omics data. For instance, ResNet-18 achieved the highest accuracy of 97.25% on the GSE85217 dataset and 91.02% on GSM3828672, outperforming other baseline models across multiple metrics. Full article
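The abstract above describes gradient-based saliency maps, which score each input feature by the magnitude of the model's output gradient with respect to it. A minimal sketch of that idea, using a hypothetical linear scorer and a finite-difference gradient rather than the paper's CNN back-propagation:

```python
# Sketch of gradient-based saliency on a toy model (NOT the paper's CNN).
# saliency_i = |d score / d x_i|, approximated by finite differences.

def toy_score(x, weights):
    """Stand-in for a trained model's class score; weights are hypothetical."""
    return sum(w * v for w, v in zip(weights, x))

def saliency(x, weights, eps=1e-6):
    """Numerical |d score / d x_i| for each input feature."""
    base = toy_score(x, weights)
    out = []
    for i in range(len(x)):
        nudged = list(x)
        nudged[i] += eps
        out.append(abs((toy_score(nudged, weights) - base) / eps))
    return out

expr = [0.2, 1.5, 0.7]   # toy "gene expression" profile
w = [3.0, -0.5, 0.0]     # hypothetical learned weights
sal = saliency(expr, w)  # largest value marks the most influential gene
```

For a linear scorer the saliency of feature *i* is simply |w_i|, so the first gene dominates here; a CNN's saliency map applies the same per-input derivative idea to image pixels.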

35 pages, 3978 KB  
Article
A Dynamic Surrogate-Assisted Hybrid Breeding Algorithm for High-Dimensional Imbalanced Feature Selection
by Yujun Ma, Binjing Liao and Zhiwei Ye
Symmetry 2025, 17(10), 1735; https://doi.org/10.3390/sym17101735 - 14 Oct 2025
Abstract
With the growing complexity of high-dimensional imbalanced datasets in critical fields such as medical diagnosis and bioinformatics, feature selection has become essential to reduce computational costs, alleviate model bias, and improve classification performance. DS-IHBO, a dynamic surrogate-assisted feature selection algorithm integrating relevance-based redundant feature filtering and an improved hybrid breeding algorithm, is presented in this paper. Departing from traditional surrogate-assisted approaches that use static approximations, DS-IHBO employs a dynamic surrogate switching mechanism capable of adapting to diverse data distributions and imbalance ratios through multiple surrogate units built via clustering. It enhances the hybrid breeding algorithm with asymmetric stratified population initialization, adaptive differential operators, and t-distribution mutation strategies to strengthen its global exploration and convergence accuracy. Tests on 12 real-world imbalanced datasets (4–98% imbalance) show that DS-IHBO achieves a 3.48% improvement in accuracy, a 4.80% improvement in F1 score, and an 83.85% reduction in computational time compared with leading methods. These results demonstrate its effectiveness for high-dimensional imbalanced feature selection and strong potential for real-world applications. Full article
(This article belongs to the Section Computer)
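One building block named in the abstract is relevance-based redundant-feature filtering. A simplified, model-free sketch (the threshold and toy columns are illustrative, not DS-IHBO's actual criterion): greedily keep a feature only if its absolute Pearson correlation with every already-kept feature stays below a threshold.

```python
# Minimal redundant-feature filter: drop any column that is nearly collinear
# with a column already kept. Threshold 0.95 is an arbitrary illustration.

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def filter_redundant(features, threshold=0.95):
    """features: list of columns; returns indices of columns kept."""
    kept = []
    for i, col in enumerate(features):
        if all(abs(pearson(col, features[j])) < threshold for j in kept):
            kept.append(i)
    return kept

cols = [[1, 2, 3, 4], [2, 4, 6, 8], [4, 1, 3, 2]]  # col 1 duplicates col 0
keep = filter_redundant(cols)  # col 1 is filtered out as redundant
```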

20 pages, 4760 KB  
Article
Hybrid Supervised–Unsupervised Fusion Clustering for Intelligent Classification of Horizontal Gas Wells Leveraging Integrated Dynamic–Static Parameters
by Han Gao, Jia Wang, Tao Liu, Siyu Lai, Bo Wang, Ling Guo, Zhao Zhang, Guowei Wang and Ruiquan Liao
Processes 2025, 13(10), 3278; https://doi.org/10.3390/pr13103278 - 14 Oct 2025
Abstract
To address the decision-making requirements for drainage gas recovery in horizontal gas wells within low-permeability tight reservoirs, this study proposes an intelligent classification approach that integrates supervised and unsupervised learning techniques. Initially, the static and dynamic performance characteristics of gas wells are characterized across multiple dimensions, including static performance, liquid production intensity, liquid drainage capacity, and liquid carrying efficiency. These features are then quantitatively categorized using Linear Discriminant Analysis (LDA). Subsequently, a hybrid classification framework is developed by integrating LDA with the K-means clustering algorithm. The effectiveness of this supervised–unsupervised fusion method is validated through comparative analysis against direct K-means clustering, demonstrating enhanced classification accuracy and interpretability. Key findings are summarized as follows: (1) Classification based on individual dynamic or static parameters exhibits low consistency, indicating that single-parameter approaches are insufficient to fully capture the complexity of actual production conditions. (2) By incorporating both dynamic and static parameters and applying a strategy combining LDA-based dimensionality reduction with K-means clustering, gas wells are precisely classified into five distinct categories. (3) Tailored optimization strategies are proposed for each well type, including production allocation optimization, continuous production (without the need for drainage gas production measures), mandatory drainage measures, foam-assisted drainage, and optimal tubing or plunger lift systems. The methodologies and findings of this study offer theoretical insights and technical guidance applicable to the classification and management of horizontal gas wells in other unconventional reservoirs, such as shale gas formations. Full article
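The clustering half of the paper's LDA + K-means pipeline is standard Lloyd's iteration. A compact sketch with made-up 2D points (the LDA projection step that precedes clustering in the paper is omitted):

```python
# A compact Lloyd's K-means in plain Python. Points and initial centers
# below are toy values, not the paper's gas-well features.

def kmeans(points, centers, iters=20):
    """points: list of (x, y) tuples; centers: initial list of (x, y) tuples."""
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        groups = [[] for _ in centers]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            groups[dists.index(min(dists))].append(p)
        # Update step: move each center to the mean of its group.
        new_centers = []
        for g, c in zip(groups, centers):
            if g:
                new_centers.append(tuple(sum(v) / len(g) for v in zip(*g)))
            else:
                new_centers.append(c)  # keep an empty cluster's center as-is
        centers = new_centers
    return centers

pts = [(0, 0), (0, 1), (10, 10), (10, 11)]
final = kmeans(pts, centers=[(0, 0), (10, 10)])  # two well-separated groups
```

In the hybrid scheme, the LDA step supplies a low-dimensional, class-aware embedding, so K-means operates on coordinates where the five well categories are already more separable.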

18 pages, 6555 KB  
Article
Bioinformatics Analysis of Tumor-Associated Macrophages in Hepatocellular Carcinoma and Establishment of a Survival Model Based on Transformer
by Zhuo Zeng, Shenghua Rao and Jiemeng Zhang
Int. J. Mol. Sci. 2025, 26(19), 9825; https://doi.org/10.3390/ijms26199825 - 9 Oct 2025
Viewed by 283
Abstract
Hepatocellular carcinoma (HCC) ranks among the most prevalent malignancies globally. Although treatment strategies have improved, the prognosis for patients with advanced HCC remains unfavorable. Tumor-associated macrophages (TAMs) play a dual role, exhibiting both anti-tumor and pro-tumor functions. In this study, we analyzed single-cell RNA sequencing data from 10 HCC tumor cores and 8 adjacent non-tumor liver tissues available in the dataset GSE149614. Using dimensionality reduction and clustering approaches, we identified six major cell types and nine distinct TAM subtypes. We employed Monocle2 for cell trajectory analysis, hdWGCNA for co-expression network analysis, and CellChat to investigate functional communication between TAMs and other components of the tumor microenvironment. Furthermore, we estimated TAM abundance in TCGA-LIHC samples using CIBERSORT and observed that the relative proportions of specific TAM subtypes were significantly correlated with patient survival. To identify TAM-related genes influencing patient outcomes, we developed a high-dimensional, gene-based transformer survival model. This model achieved superior concordance index (C-index) values across multiple datasets, including TCGA-LIHC, OEP000321, and GSE14520, outperforming other methods. Our results emphasize the heterogeneity of tumor-associated macrophages in hepatocellular carcinoma and highlight the practicality of our deep learning framework in survival analysis. Full article
(This article belongs to the Section Molecular Informatics)
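The survival model above is evaluated with the concordance index (C-index). A minimal sketch of the metric, assuming the common convention that higher predicted risk should pair with earlier events (toy times, events, and risks are illustrative):

```python
# C-index sketch: among comparable pairs (patient i had the event before
# patient j's observed time), count the fraction where the model assigned
# i the higher risk; ties in risk count as half.

def c_index(times, events, risks):
    """times: observed times; events: 1 = event, 0 = censored; risks: scores."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

score = c_index(times=[2, 4, 6], events=[1, 1, 0], risks=[0.9, 0.5, 0.1])
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which is why the paper reports it across TCGA-LIHC, OEP000321, and GSE14520.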

22 pages, 7067 KB  
Article
New Evaluation System for Extra-Heavy Oil Viscosity Reducer Effectiveness: From 1D Static Viscosity Reduction to 3D SAGD Chemical–Thermal Synergy
by Hongbo Li, Enhui Pei, Chao Xu and Jing Yang
Energies 2025, 18(19), 5307; https://doi.org/10.3390/en18195307 - 8 Oct 2025
Viewed by 360
Abstract
To overcome the production bottleneck induced by the high viscosity of extra-heavy oil and resolve the issues of limited efficiency in traditional thermal oil recovery methods (including cyclic steam stimulation (CSS), steam flooding, and steam-assisted gravity drainage (SAGD)) as well as the fragmentation of existing viscosity reducer evaluation systems, this study establishes a multi-dimensional evaluation system for the effectiveness of viscosity reducers, with stage-averaged remaining oil saturation as the core benchmark. A “1D static → 2D dynamic → 3D synergistic” progressive sequential experimental design was adopted. In the 1D static experiments, multi-gradient concentration tests were conducted to analyze the variation law of the viscosity reduction rate of viscosity reducers, thereby screening out the optimal adapted concentration for subsequent experiments. For the 2D dynamic experiments, sand-packed tubes were used as the experimental carrier to compare the oil recovery efficiencies of ultimate steam flooding, viscosity reducer flooding with different concentrations, and the composite process of “steam flooding → viscosity reducer flooding → secondary steam flooding”, which clarified the functional value of viscosity reducers in dynamic displacement. In the 3D synergistic experiments, slab cores were employed to simulate the SAGD development process after multiple rounds of cyclic steam stimulation, aiming to explore the regulatory effect of viscosity reducers on residual oil distribution and oil recovery factor. This novel evaluation system clearly elaborates the synergistic mechanism of viscosity reducers, i.e., “chemical empowerment (emulsification and viscosity reduction, wettability alteration) + thermal amplification (steam carrying and displacement, steam chamber expansion)”.
It fills the gap in the existing evaluation chain, which previously lacked a connection from static performance to dynamic displacement and further to multi-process synergistic adaptation. Moreover, it provides quantifiable and implementable evaluation criteria for steam–chemical composite flooding of extra-heavy oil, effectively releasing the efficiency-enhancing potential of viscosity reducers. This study holds critical supporting significance for promoting the efficient and economical development of extra-heavy oil resources. Full article

13 pages, 379 KB  
Article
Nyström-Based 2D DOA Estimation for URA: Bridging Performance–Complexity Trade-Offs
by Liping Yuan, Ke Wang and Fengkai Luan
Mathematics 2025, 13(19), 3198; https://doi.org/10.3390/math13193198 - 6 Oct 2025
Viewed by 216
Abstract
To address the computational efficiency challenges in two-dimensional (2D) direction-of-arrival (DOA) estimation, a two-stage framework integrating the Nyström approximation with subspace decomposition techniques is proposed in this paper. The methodology strategically integrates the Nyström approximation with subspace decomposition techniques to bridge the critical performance–complexity trade-off inherent in high-resolution parameter estimation scenarios. In the first stage, the Nyström method is applied to approximate the signal subspace while simultaneously enabling construction of a reduced rank covariance matrix, which effectively reduces the computational complexity compared with eigenvalue decomposition (EVD) or singular value decomposition (SVD). This innovative approach efficiently derives two distinct signal subspaces that closely approximate those obtained from the full-dimensional covariance matrix but at substantially reduced computational cost. The second stage employs a sophisticated subspace-based estimation technique that leverages the principal singular vectors associated with these approximated subspaces. This process incorporates an iterative refinement mechanism to accurately resolve the paired azimuth and elevation angles comprising the 2D DOA solution. With the use of the Nyström approximation and reduced rank framework, the entire DOA estimation process completely circumvents traditional EVD/SVD operations. This elimination constitutes the core mechanism enabling substantial computational savings without compromising estimation accuracy. Comprehensive numerical simulations rigorously demonstrate that the proposed framework maintains performance competitive with conventional high-complexity estimators while achieving significant complexity reduction. 
The evaluation benchmarks the method against multiple state-of-the-art DOA estimation techniques across diverse operational scenarios, confirming both its efficacy and robustness under varying signal conditions. Full article
(This article belongs to the Section E2: Control Theory and Mechanics)
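The core trick in the abstract is the Nyström approximation: rebuild a large PSD matrix K from a few landmark columns as K ≈ C W⁻¹ Cᵀ, where C = K[:, idx] and W = K[idx][:, idx], avoiding a full EVD/SVD. A toy single-landmark sketch (for a rank-1 K the reconstruction is exact, which makes the mechanism easy to check by hand; this is not the paper's full two-stage estimator):

```python
# Nyström approximation with one landmark column: K ≈ C W⁻¹ Cᵀ,
# where W is the 1x1 block K[idx][idx] and C is column idx of K.

def nystrom_rank1(K, idx):
    """Approximate the PSD matrix K from the single landmark column `idx`."""
    n = len(K)
    c = [K[i][idx] for i in range(n)]   # C: the chosen column
    w_inv = 1.0 / K[idx][idx]           # inverse of the 1x1 landmark block
    return [[c[i] * w_inv * c[j] for j in range(n)] for i in range(n)]

x = [1.0, 2.0, 3.0]
K = [[a * b for b in x] for a in x]     # rank-1 PSD matrix x xᵀ
K_hat = nystrom_rank1(K, idx=0)         # exact here because rank(K) = 1
```

With m landmarks out of n, forming C W⁻¹ Cᵀ costs O(nm²) instead of the O(n³) of a full eigendecomposition, which is the complexity saving the paper exploits for the covariance matrix.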

24 pages, 3701 KB  
Article
Optimization of Genomic Breeding Value Estimation Model for Abdominal Fat Traits Based on Machine Learning
by Hengcong Chen, Dachang Dou, Min Lu, Xintong Liu, Cheng Chang, Fuyang Zhang, Shengwei Yang, Zhiping Cao, Peng Luan, Yumao Li and Hui Zhang
Animals 2025, 15(19), 2843; https://doi.org/10.3390/ani15192843 - 29 Sep 2025
Viewed by 257
Abstract
Abdominal fat is a key indicator of chicken meat quality. Excessive deposition not only reduces meat quality but also decreases feed conversion efficiency, making the breeding of low-abdominal-fat strains economically important. Genomic selection (GS) uses information from genome-wide association studies (GWASs) and high-throughput sequencing data. It estimates genomic breeding values (GEBVs) from genotypes, which enables early and precise selection. Given that abdominal fat is a polygenic trait controlled by numerous small-effect loci, this study combined population genetic analyses with machine learning (ML)-based feature selection. Relevant single-nucleotide polymorphisms (SNPs) were first identified using a combined GWAS and linkage disequilibrium (LD) approach, followed by a two-stage feature selection process—Lasso for dimensionality reduction and recursive feature elimination (RFE) for refinement—to generate the model input set. We evaluated multiple machine learning models for predicting GEBVs. The results showed that linear models and certain nonlinear models achieved higher accuracy and were well suited as base learners for ensemble methods. Building on these findings, we developed a Dynamic Adaptive Weighted Stacking Ensemble Learning Framework (DAWSELF), which applies dynamic weighting and voting to heterogeneous base learners and integrates them layer by layer, with Ridge serving as the meta-learner. In three independent validation populations, DAWSELF consistently outperformed individual models and conventional stacking frameworks in prediction accuracy. This work establishes an efficient GEBV prediction framework for complex traits such as chicken abdominal fat and provides a reusable SNP feature selection strategy, offering practical value for enhancing the precision of poultry breeding and improving product quality. Full article
(This article belongs to the Section Animal Genetics and Genomics)
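The second stage of the paper's feature selection is recursive feature elimination (RFE): repeatedly drop the feature with the weakest importance score until the target count remains. A hedged sketch — the paper ranks SNPs with a fitted model, whereas here absolute correlation with the phenotype stands in as a simple, model-free importance score, and the SNP names are invented:

```python
# RFE sketch: iteratively delete the least important feature.
# Importance here is |Pearson correlation with the target| — a stand-in
# for the model-derived importances used in the paper.

def importance(col, y):
    n = len(col)
    mc, my = sum(col) / n, sum(y) / n
    num = sum((a - mc) * (b - my) for a, b in zip(col, y))
    den = (sum((a - mc) ** 2 for a in col) * sum((b - my) ** 2 for b in y)) ** 0.5
    return abs(num / den) if den else 0.0

def rfe(features, y, n_keep):
    """features: dict name -> column. Drop the weakest until n_keep remain."""
    feats = dict(features)
    while len(feats) > n_keep:
        weakest = min(feats, key=lambda k: importance(feats[k], y))
        del feats[weakest]
    return sorted(feats)

data = {
    "snp_a": [1, 2, 3, 4],   # tracks the phenotype perfectly
    "snp_b": [1, 1, 2, 2],   # partially informative
    "snp_c": [5, 5, 5, 5],   # constant: zero importance
}
selected = rfe(data, y=[2, 4, 6, 8], n_keep=2)
```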

17 pages, 3854 KB  
Article
Denoising and Mosaicking Methods for Radar Images of Road Interiors
by Changrong Li, Zhiyong Huang, Bo Zang and Huayang Yu
Appl. Sci. 2025, 15(19), 10485; https://doi.org/10.3390/app151910485 - 28 Sep 2025
Viewed by 260
Abstract
Three-dimensional ground-penetrating radar can quickly visualize the internal condition of the road; however, it faces challenges such as data-splicing difficulties and image noise interference. Differences between scanning antennas and lane sizes, together with equipment and environmental interference, make radar images difficult to interpret, which affects defect identification accuracy. For this reason, this paper focuses on road radar image splicing and noise reduction. The primary research includes the following: (1) Back-projection imaging algorithms are used to visualize the internal information of the road, combined with a high-precision positioning system, splicing of multi-lane data, and bilinear interpolation to make the three-dimensional radar data uniformly distributed. (2) To address the low computational efficiency of the traditional adaptive median filter's sliding-window search, a Deep Q-learning algorithm is introduced to construct a reward-and-punishment mechanism, whose feedback reward function quickly determines the filter window size. The results show that the method markedly improves the peak signal-to-noise ratio, raising denoising performance by a factor of 2–7 over the traditional algorithm. It effectively suppresses multiple noise types while precisely preserving fine details such as 0.1–0.5 mm microcrack edges, significantly enhancing image clarity. After processing, images were automatically recognized using YOLOv8x. The detection rate for transverse cracks improved from being undetectable in noisy and original images to exceeding 90%. This validates the critical role of denoising in enhancing the automatic interpretation of internal road cracks. Full article
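Denoising quality in the abstract is scored with the peak signal-to-noise ratio (PSNR). A minimal sketch of the standard definition, PSNR = 10·log₁₀(MAX²/MSE), with made-up pixel values:

```python
# PSNR sketch: higher is better; identical images give infinite PSNR.

import math

def psnr(clean, denoised, max_val=255.0):
    """clean, denoised: flat lists of pixel intensities on the same scale."""
    mse = sum((a - b) ** 2 for a, b in zip(clean, denoised)) / len(clean)
    if mse == 0:
        return float("inf")  # images are identical
    return 10.0 * math.log10(max_val ** 2 / mse)

ref = [100, 110, 120, 130]
noisy = [101, 109, 121, 129]  # every pixel off by 1 -> MSE = 1
value = psnr(ref, noisy)      # about 48.13 dB for 8-bit images
```

A "2–7×" denoising improvement, as reported, would show up as a correspondingly higher PSNR for the Deep Q-learning-tuned filter than for the fixed-window baseline on the same image pairs.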

11 pages, 6412 KB  
Article
High-Throughput Evaluation of Mechanical Exfoliation Using Optical Classification of Two-Dimensional Materials
by Anthony Gasbarro, Yong-Sung D. Masuda and Victor M. Lubecke
Micromachines 2025, 16(10), 1084; https://doi.org/10.3390/mi16101084 - 25 Sep 2025
Viewed by 395
Abstract
Mechanical exfoliation remains the most common method for producing high-quality two-dimensional (2D) materials, but its inherently low yield requires screening large numbers of samples to identify usable flakes. Efficient optimization of the exfoliation process demands scalable methods to analyze deposited material across extensive datasets. While machine learning clustering techniques have demonstrated ~95% accuracy in classifying 2D material thicknesses from optical microscopy images, current tools are limited by slow processing speeds and heavy reliance on manual user input. This work presents an open-source, GPU-accelerated software platform that builds upon existing classification methods to enable high-throughput analysis of 2D material samples. By leveraging parallel computation, optimizing core algorithms, and automating preprocessing steps, the software can quantify flake coverage and thickness across uncompressed optical images at scale. Benchmark comparisons show that this implementation processes over 200× more pixel data with a 60× reduction in processing time relative to the original software. Specifically, a full dataset of 2916 uncompressed images can be classified in 35 min, compared to an estimated 32 h required by the baseline method using compressed images. This platform enables rapid evaluation of exfoliation results across multiple trials, providing a practical tool for optimizing deposition techniques and improving the yield of high-quality 2D materials. Full article

36 pages, 35564 KB  
Article
Enhancing Soundscape Characterization and Pattern Analysis Using Low-Dimensional Deep Embeddings on a Large-Scale Dataset
by Daniel Alexis Nieto Mora, Leonardo Duque-Muñoz and Juan David Martínez Vargas
Mach. Learn. Knowl. Extr. 2025, 7(4), 109; https://doi.org/10.3390/make7040109 - 24 Sep 2025
Viewed by 447
Abstract
Soundscape monitoring has become an increasingly important tool for studying ecological processes and supporting habitat conservation. While many recent advances focus on identifying species through supervised learning, there is growing interest in understanding the soundscape as a whole while considering patterns that extend beyond individual vocalizations. This broader view requires unsupervised approaches capable of capturing meaningful structures related to temporal dynamics, frequency content, spatial distribution, and ecological variability. In this study, we present a fully unsupervised framework for analyzing large-scale soundscape data using deep learning. We applied a convolutional autoencoder (Soundscape-Net) to extract acoustic representations from over 60,000 recordings collected across a grid-based sampling design in the Rey Zamuro Reserve in Colombia. These features were initially compared with other audio characterization methods, showing superior performance in multiclass classification, with accuracies of 0.85 for habitat cover identification and 0.89 for time-of-day classification across 13 days. For the unsupervised study, optimized dimensionality reduction methods (Uniform Manifold Approximation and Projection and Pairwise Controlled Manifold Approximation and Projection) were applied to project the learned features, achieving trustworthiness scores above 0.96. Subsequently, clustering was performed using KMeans and Density-Based Spatial Clustering of Applications with Noise (DBSCAN), evaluated using metrics such as the silhouette score; values above 0.45 support the robustness of the discovered latent acoustic structures.
To interpret and validate the resulting clusters, we combined multiple strategies: spatial mapping through interpolation, analysis of acoustic index variance to understand the cluster structure, and graph-based connectivity analysis to identify ecological relationships between the recording sites. Our results demonstrate that this approach can uncover both local and broad-scale patterns in the soundscape, providing a flexible and interpretable pathway for unsupervised ecological monitoring. Full article
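The silhouette score cited above measures how well each point sits in its assigned cluster: s(i) = (b − a) / max(a, b), with a the mean distance to the point's own cluster and b the lowest mean distance to any other cluster. A small sketch on toy 2D points (the sketch assumes every cluster has at least two members):

```python
# Silhouette sketch: scores near 1 mean tight, well-separated clusters;
# scores near 0 mean overlapping clusters.

def dist(p, q):
    return sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5

def silhouette(points, labels):
    clusters = {l: [p for p, m in zip(points, labels) if m == l]
                for l in set(labels)}
    scores = []
    for p, l in zip(points, labels):
        own = [dist(p, q) for q in clusters[l] if q != p]
        a = sum(own) / len(own)                      # mean intra-cluster distance
        b = min(sum(dist(p, q) for q in c) / len(c)  # nearest other cluster
                for m, c in clusters.items() if m != l)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

pts = [(0, 0), (0, 1), (5, 5), (5, 6)]
score = silhouette(pts, labels=[0, 0, 1, 1])  # two compact, distant clusters
```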

25 pages, 17562 KB  
Article
SGFNet: Redundancy-Reduced Spectral–Spatial Fusion Network for Hyperspectral Image Classification
by Boyu Wang, Chi Cao and Dexing Kong
Entropy 2025, 27(10), 995; https://doi.org/10.3390/e27100995 - 24 Sep 2025
Viewed by 341
Abstract
Hyperspectral image classification (HSIC) involves analyzing high-dimensional data that contain substantial spectral redundancy and spatial noise, which increases the entropy and uncertainty of feature representations. Reducing such redundancy while retaining informative content in spectral–spatial interactions remains a fundamental challenge for building efficient and accurate HSIC models. Traditional deep learning methods often rely on redundant modules or lack sufficient spectral–spatial coupling, limiting their ability to fully exploit the information content of hyperspectral data. To address these challenges, we propose SGFNet, a spectral-guided fusion network designed from an information-theoretic perspective to reduce feature redundancy and uncertainty. First, we designed a Spectral-Aware Filtering Module (SAFM) that suppresses noisy spectral components and reduces redundant entropy, encoding the raw pixel-wise spectrum into a compact spectral representation accessible to all encoder blocks. Second, we introduced a Spectral–Spatial Adaptive Fusion (SSAF) module, which strengthens spectral–spatial interactions and enhances the discriminative information in the fused features. Finally, we developed a Spectral Guidance Gated CNN (SGGC), a lightweight gated convolutional module that uses spectral guidance to more effectively extract spatial representations while avoiding unnecessary sequence modeling overhead. We conducted extensive experiments on four widely used hyperspectral benchmarks and compared SGFNet with eight state-of-the-art models. The results demonstrate that SGFNet consistently achieves superior performance across multiple metrics. From an information-theoretic perspective, SGFNet implicitly balances redundancy reduction and information preservation, providing an efficient and effective solution for HSIC. Full article
(This article belongs to the Section Multidisciplinary Applications)
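The gating idea behind SGGC — spectral information modulating spatial features — can be sketched in its simplest form: a sigmoid gate computed from a spectral descriptor scales feature channels elementwise. This is a greatly simplified illustration of the general mechanism, not the paper's architecture; all values and shapes are invented.

```python
# Toy spectral-guided gating: gate = sigmoid(spectral logit), applied
# elementwise to spatial feature channels. Large positive logits pass
# a channel through; large negative logits suppress it.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(spatial, spectral_logits):
    """spatial: channel values; spectral_logits: per-channel gate inputs."""
    gates = [sigmoid(z) for z in spectral_logits]
    return [g * v for g, v in zip(gates, spatial)]

feat = gated_fusion(spatial=[2.0, 2.0], spectral_logits=[100.0, -100.0])
# gate ~1 passes the first channel; gate ~0 suppresses the second
```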

23 pages, 3339 KB  
Article
Study on Maximum Temperature Under Multi-Factor Influence of Tunnel Fire Based on Machine Learning
by Yuanyi Xie, Guanghui Yao and Zhongyuan Yuan
Buildings 2025, 15(18), 3401; https://doi.org/10.3390/buildings15183401 - 19 Sep 2025
Viewed by 342
Abstract
This study proposes a machine learning framework utilizing physical feature dimensionality reduction to address the problem of predicting the maximum excess temperature beneath the tunnel ceiling under the influence of multiple factors. First, theoretical analysis is used to systematically explore the impacts of various factors on the maximum excess temperature, including the heat release rate of the fire source, tunnel height, slope, and ambient air pressure. Physical relationships are established to identify key factors, remove redundant features, and construct a simplified feature vector set. Five typical machine learning models are selected: Random Forest (RF), Support Vector Regression (SVR), Fully Connected Neural Network (FCNN), Multi-Layer Perceptron (MLP), and Bayesian Neural Network (BNN). A hybrid data collection strategy combining scale model tests and CFD numerical simulations constructs a small-sample structured dataset with physical backgrounds. The models are evaluated regarding prediction accuracy, stability, and generalization ability. Results show that the Bayesian Neural Network (BNN), optimized via random-search parameter tuning and Bayesian regularization, significantly outperforms the other comparative models in evaluation indices such as root mean square error (RMSE), mean absolute error (MAE), and coefficient of determination (R²), making it the optimal model and algorithm combination for such tasks. This study provides a reliable quantitative analysis method for tunnel fire safety assessment and offers a new methodological reference for the research on fire dynamics in underground spaces. Full article
(This article belongs to the Section Building Structures)
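The model-comparison step this abstract describes can be sketched as follows. This is a minimal illustration only: the synthetic features (heat release rate, tunnel height, slope, ambient pressure), their ranges, and the toy target relation are assumptions for demonstration, not the paper's dataset, and the BNN/FCNN variants are omitted in favor of the three models with direct scikit-learn equivalents.

```python
# Sketch: comparing RF, SVR, and MLP regressors on a small structured dataset
# with RMSE, MAE, and R2 metrics, in the spirit of the framework above.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([
    rng.uniform(5, 50, n),      # heat release rate (MW) -- illustrative range
    rng.uniform(4, 9, n),       # tunnel height (m)
    rng.uniform(0, 0.05, n),    # slope
    rng.uniform(60, 101, n),    # ambient pressure (kPa)
])
# Toy target loosely shaped like a ceiling excess-temperature correlation,
# plus measurement noise; NOT the paper's physical model.
y = 17.5 * (X[:, 0] ** (2 / 3)) / (X[:, 1] ** (5 / 3)) * (1 - X[:, 2]) \
    * (X[:, 3] / 101) ** 0.3 + rng.normal(0, 1.0, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "SVR": SVR(C=100.0, gamma="scale"),
    "MLP": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
}
scores = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    scores[name] = {
        "RMSE": mean_squared_error(y_te, pred) ** 0.5,
        "MAE": mean_absolute_error(y_te, pred),
        "R2": r2_score(y_te, pred),
    }
for name, s in scores.items():
    print(f"{name}: RMSE={s['RMSE']:.2f}  MAE={s['MAE']:.2f}  R2={s['R2']:.3f}")
```

In practice each model's hyperparameters would be tuned (the paper uses random search for the BNN) before the metrics are compared.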
28 pages, 45524 KB  
Article
A Comparative Analysis of U-Net Architectures with Dimensionality Reduction for Agricultural Crop Classification Using Hyperspectral Data
by Georgios Dimitrios Gkologkinas, Konstantinos Ntouros, Eftychios Protopapadakis and Ioannis Rallis
Algorithms 2025, 18(9), 588; https://doi.org/10.3390/a18090588 - 17 Sep 2025
Abstract
The inherent high dimensionality of hyperspectral imagery presents both opportunities and challenges for agricultural crop classification. This study offers a rigorous comparative evaluation of three U-Net-based architectures, i.e., U-Net, U-Net++, and Atrous U-Net, applied to EnMAP hyperspectral data over the heterogeneous agricultural region of Lake Vegoritida, Greece. To address spectral redundancy, we integrated multiple dimensionality-reduction strategies, including Linear Discriminant Analysis, SHAP-based model-driven feature selection, and unsupervised clustering approaches. Results reveal that model performance is contingent on (a) the network architecture and (b) the feature space provided by band selection. While U-Net++ consistently excels when the full spectrum or ACS-derived subsets are employed, standard U-Net performs strongly under LDA reduction, and Atrous U-Net benefits from SHAP-driven compact representations. Importantly, band selection methods such as ACS and SHAP substantially reduce spectral dimensionality without sacrificing accuracy, with the U-Net++–ACS configuration delivering the highest F1-score (0.77). These findings demonstrate that effective hyperspectral crop classification requires joint optimization of architecture and spectral representation, underscoring the potential of compact, interpretable pipelines for scalable and operational precision agriculture. Full article
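The LDA reduction step named in this abstract can be sketched on a per-pixel basis as follows. The random "hyperspectral" pixels, the class means, and the k-NN classifier are illustrative assumptions; in the paper the reduced spectra feed U-Net segmentation networks, not the toy classifier shown here.

```python
# Sketch: supervised band reduction with Linear Discriminant Analysis, which
# projects spectra onto at most (n_classes - 1) discriminant axes, then a
# simple classifier scored with macro F1.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)
n_pixels, n_bands, n_classes = 600, 224, 4   # ~224 bands, as for EnMAP
# Each crop class gets its own mean spectrum; pixels scatter around it.
means = rng.normal(0, 1, (n_classes, n_bands))
labels = rng.integers(0, n_classes, n_pixels)
X = means[labels] + rng.normal(0, 0.8, (n_pixels, n_bands))

X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.3, random_state=1, stratify=labels)

# LDA collapses 224 bands down to n_classes - 1 = 3 discriminant features.
lda = LinearDiscriminantAnalysis(n_components=n_classes - 1)
Z_tr = lda.fit_transform(X_tr, y_tr)
Z_te = lda.transform(X_te)
print("reduced shape:", Z_te.shape)

clf = KNeighborsClassifier(n_neighbors=5).fit(Z_tr, y_tr)
f1 = f1_score(y_te, clf.predict(Z_te), average="macro")
print(f"macro F1 after LDA reduction: {f1:.2f}")
```

The same fit/transform pattern applies when the reduced bands are fed to a segmentation network instead of a pixel-wise classifier.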
27 pages, 12061 KB  
Article
Ultrasonic Localization of Transformer Patrol Robot Based on Wavelet Transform and Narrowband Beamforming
by Hongxin Ji, Zijian Tang, Jiaqi Li, Chao Zheng, Xinghua Liu and Liqing Liu
Sensors 2025, 25(18), 5723; https://doi.org/10.3390/s25185723 - 13 Sep 2025
Abstract
The large size and metal-enclosed casings of oil-immersed power transformers present significant challenges for patrol robots attempting to accurately locate their position within the transformer. Therefore, this paper proposes a three-dimensional spatial localization method for transformer patrol robots using a nine-element ultrasonic array, based on wavelet decomposition and weighted filter beamforming (WD-WFB) algorithms. To address strong noise interference in the field, the ultrasonic localization signals are adaptively decomposed into wavelet coefficients at different frequencies and scales. An improved semi-soft thresholding function is applied to the decomposed wavelet coefficients to reduce noise and reconstruct the localization signals, yielding localization signals with low distortion and a high signal-to-noise ratio (SNR). To overcome the limitations of traditional beamforming algorithms in interference resistance and signal resolution, this paper presents an improved WFB algorithm: by obtaining the energy distribution of the scanning area and determining the position of the maximum energy point, the spatial position of the transformer patrol robot can be determined. Test results show that the proposed semi-soft threshold function outperforms traditional threshold functions in denoising. Compared with the soft threshold function, it achieves improvements of 15.32% in SNR and 15.57% in normalized correlation coefficient (NCC), along with a 48.91% reduction in root mean square error (RMSE). Compared with the hard threshold function, the improvement is even more significant: SNR improves by 60.55%, NCC by 24.90%, and RMSE is reduced by 58.77%. In a 1200 mm × 1000 mm × 1000 mm transformer test box, the improved WFB algorithm was used to localize the transformer patrol robot at multiple positions after denoising the field signals with the semi-soft threshold function. The maximum relative localization error was 3.47%, and the absolute error was within 2.6 cm, meeting engineering application requirements. Full article
(This article belongs to the Section Sensors and Robotics)
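The denoising stage this abstract describes can be sketched as follows. The paper's improved semi-soft function is not specified here, so this uses the classic firm (semi-soft) threshold, which acts like hard thresholding for large coefficients and shrinks smaller ones toward zero; the one-level Haar transform and the test signal are illustrative stand-ins for a full wavelet decomposition of the array signals.

```python
# Sketch: wavelet-coefficient denoising with a firm (semi-soft) threshold,
# then SNR comparison before and after reconstruction.
import numpy as np

def haar_forward(x):
    """One-level Haar transform: (approximation, detail) coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_inverse(a, d):
    """Exact inverse of haar_forward."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def semi_soft_threshold(c, t1, t2):
    """Firm threshold: zero below t1, identity above t2, linear blend between."""
    out = np.where(np.abs(c) <= t1, 0.0, c)
    mid = (np.abs(c) > t1) & (np.abs(c) <= t2)
    return np.where(mid, np.sign(c) * t2 * (np.abs(c) - t1) / (t2 - t1), out)

rng = np.random.default_rng(2)
n = 1024
t = np.arange(n)
clean = np.sin(2 * np.pi * t / 64)            # stand-in localization waveform
noisy = clean + rng.normal(0, 0.3, n)

a, d = haar_forward(noisy)
sigma = np.median(np.abs(d)) / 0.6745          # robust noise-level estimate
d_den = semi_soft_threshold(d, sigma, 2.5 * sigma)
denoised = haar_inverse(a, d_den)

def snr_db(ref, sig):
    return 10 * np.log10(np.sum(ref ** 2) / np.sum((ref - sig) ** 2))

print(f"SNR before: {snr_db(clean, noisy):.1f} dB, "
      f"after: {snr_db(clean, denoised):.1f} dB")
```

A field implementation would use a multi-level decomposition with a suitable wavelet basis and apply the threshold at each detail level before beamforming.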