Search Results (2,233)

Search Parameters:
Keywords = high-dimensional solutions

30 pages, 2418 KB  
Article
Probabilistic Safety Guarantees for Learned Control Barrier Functions: Theory and Application to Multi-Objective Human–Robot Collaborative Optimization
by Claudio Urrea
Mathematics 2026, 14(3), 516; https://doi.org/10.3390/math14030516 - 31 Jan 2026
Abstract
Designing provably safe controllers for high-dimensional nonlinear systems with formal guarantees represents a fundamental challenge in control theory. While control barrier functions (CBFs) provide safety certificates through forward invariance, manually crafting these barriers for complex systems becomes intractable. Neural network approximation offers expressiveness but traditionally lacks formal guarantees on approximation error and Lipschitz continuity essential for safety-critical applications. This work establishes rigorous theoretical foundations for learned barrier functions through explicit probabilistic bounds relating neural approximation error to safety failure probability. The framework integrates Lipschitz-constrained neural networks trained via PAC learning within multi-objective model predictive control. Three principal results emerge: a probabilistic forward invariance theorem establishing P(violation) ≤ Tδ_local + exp(−h_min^2/(2L^2Tσ^2)), explicitly connecting network parameters to failure probability; sample complexity analysis proving O(N^(1/4)) safe set expansion; and computational complexity bounds of O(H^3m^3) enabling 50 Hz real-time control. An experimental validation across 648,000 time steps demonstrates a 99.8% success rate with zero violations, a measured approximation error of σ = 0.047 m, a matching theoretical bound of σ ≤ 0.05 m, and a 16.2 ms average solution time. The framework achieves a 52% conservatism reduction compared to manual barriers and a 21% improvement in multi-objective Pareto hypervolume while maintaining formal safety guarantees. Full article
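The probabilistic bound quoted above can be evaluated directly once the barrier margin, Lipschitz constant, horizon length, and approximation error are fixed. A minimal sketch of that arithmetic, assuming the grouping of the reconstructed inequality as read from the abstract and purely illustrative parameter values (none of the numbers below are taken from the paper):

```python
import math

def violation_bound(h_min, L, T, sigma, delta_local):
    """Evaluate P(violation) <= T*delta_local + exp(-h_min^2 / (2 * L^2 * T * sigma^2)).

    h_min:       minimum barrier margin maintained along the trajectory
    L:           Lipschitz constant of the learned barrier network
    T:           number of time steps in the horizon
    sigma:       approximation error of the learned barrier
    delta_local: per-step local failure probability
    """
    return T * delta_local + math.exp(-h_min**2 / (2 * L**2 * T * sigma**2))

# Illustrative values only (not the paper's experimental settings).
print(violation_bound(h_min=1.0, L=1.0, T=50, sigma=0.05, delta_local=1e-6))
```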
20 pages, 1275 KB  
Article
QEKI: A Quantum–Classical Framework for Efficient Bayesian Inversion of PDEs
by Jiawei Yong and Sihai Tang
Entropy 2026, 28(2), 156; https://doi.org/10.3390/e28020156 - 30 Jan 2026
Abstract
Solving Bayesian inverse problems efficiently stands as a major bottleneck in scientific computing. Although Bayesian Physics-Informed Neural Networks (B-PINNs) have introduced a robust way to quantify uncertainty, the high-dimensional parameter spaces inherent in deep learning often lead to prohibitive sampling costs. Addressing this, our work introduces Quantum-Encodable Bayesian PINNs trained via Classical Ensemble Kalman Inversion (QEKI), a framework that pairs Quantum Neural Networks (QNNs) with Ensemble Kalman Inversion (EKI). The core advantage lies in the QNN’s ability to act as a compact surrogate for PDE solutions, capturing complex physics with significantly fewer parameters than classical networks. By adopting the gradient-free EKI for training, we mitigate the barren plateau issue that plagues quantum optimization. Through several benchmarks on 1D and 2D nonlinear PDEs, we show that QEKI yields precise inversions and substantial parameter compression, even in the presence of noise. While large-scale applications are constrained by current quantum hardware, this research outlines a viable hybrid framework for including quantum features within Bayesian uncertainty quantification. Full article
(This article belongs to the Special Issue Quantum Computation, Quantum AI, and Quantum Information)
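Ensemble Kalman Inversion, the gradient-free trainer paired with the quantum surrogate above, updates an ensemble of parameter vectors using only forward-model evaluations. A minimal NumPy sketch of one EKI iteration on a toy linear inverse problem (the forward map, noise level, and ensemble size below are assumptions for illustration, not the paper's PDE setting):

```python
import numpy as np

def eki_step(theta, G, y, Gamma, rng):
    """One Ensemble Kalman Inversion update (perturbed-observation form).

    theta: (J, p) ensemble of parameter vectors
    G:     forward map, parameter vector -> predicted data of dimension d
    y:     (d,) observed data
    Gamma: (d, d) observation noise covariance
    """
    J = theta.shape[0]
    g = np.array([G(t) for t in theta])               # (J, d) forward evaluations
    theta_mean, g_mean = theta.mean(0), g.mean(0)
    C_tg = (theta - theta_mean).T @ (g - g_mean) / J  # empirical cross-covariance (p, d)
    C_gg = (g - g_mean).T @ (g - g_mean) / J          # empirical data covariance (d, d)
    K = C_tg @ np.linalg.inv(C_gg + Gamma)            # Kalman-like gain
    # Perturb observations so the ensemble spread reflects the noise level.
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), Gamma, size=J)
    return theta + (y_pert - g) @ K.T

# Toy linear inverse problem: recover theta_true from noisy A @ theta observations.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))
theta_true = np.array([1.0, -2.0, 0.5])
Gamma = 0.01 * np.eye(5)
y = A @ theta_true + rng.multivariate_normal(np.zeros(5), Gamma)
theta = rng.normal(size=(50, 3))                      # initial ensemble
for _ in range(20):
    theta = eki_step(theta, lambda t: A @ t, y, Gamma, rng)
print(theta.mean(0))                                  # approaches theta_true
```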
17 pages, 1324 KB  
Article
Classification of Heart Sound Recordings (PCG) via Recurrence Plot-Derived Features and Machine Learning Techniques
by Abdulmajeed M. Almosained, Turky N. Alotaiby, Rawad A. Alqahtani and Hanan S. Murayshid
Electronics 2026, 15(3), 601; https://doi.org/10.3390/electronics15030601 - 29 Jan 2026
Abstract
Early and reliable detection of cardiac disease is crucial for preventing complications and enhancing patient outcomes. Phonocardiogram (PCG) signals, which encode rich information about cardiac function, offer a non-invasive and cost-effective way to identify abnormalities such as valvular disorders, arrhythmias, and other heart pathologies. This study investigates advanced diagnostic methods for heart sound analysis to improve the detection and classification of cardiac abnormalities. In the proposed framework, recurrence plots (RPs) are used for feature extraction, while machine learning algorithms are applied for classification, creating a diagnostic model that can recognize cardiac conditions from composite acoustic signals. This method serves as an efficient alternative to more computationally intensive deep learning methods and other high-dimensional ML-based solutions. Experimental results demonstrate that the multiclass classification task achieves up to 98.4% accuracy, and the binary classification reaches 99.5% accuracy using 2 s signal segments. The techniques assessed in this research demonstrate the potential of automated heart sound analysis as a screening tool in both clinical and remote healthcare settings. Overall, the findings highlight the significance of machine learning in heart sound classification and its potential to facilitate timely, accessible, and cost-effective cardiovascular care. Full article
(This article belongs to the Section Artificial Intelligence)
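A recurrence plot is obtained by time-delay embedding a signal and thresholding pairwise distances; simple statistics of the resulting binary matrix, such as the recurrence rate, can then feed a standard classifier. A minimal sketch with assumed embedding and threshold parameters (the paper's exact RP feature set and classifiers are not reproduced here):

```python
import numpy as np

def recurrence_plot(x, dim=3, tau=2, eps=None):
    """Binary recurrence matrix of a 1D signal via time-delay embedding."""
    n = len(x) - (dim - 1) * tau
    emb = np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)  # (n, dim)
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)          # pairwise distances
    if eps is None:
        eps = 0.2 * d.std()          # assumed heuristic threshold
    return (d < eps).astype(np.uint8)

def recurrence_rate(rp):
    """Fraction of recurrent points, excluding the main diagonal."""
    n = rp.shape[0]
    return (rp.sum() - n) / (n * (n - 1))

# Toy example: a noisy periodic segment standing in for a 2 s PCG window.
t = np.linspace(0, 2, 400)
pcg = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
print(recurrence_rate(recurrence_plot(pcg)))
```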
37 pages, 4647 KB  
Review
Multi-Camera Simultaneous Localization and Mapping for Unmanned Systems: A Survey
by Guoyan Wang, Likun Wang, Jun He, Yanwen Jiang, Qiming Qi and Yueshang Zhou
Electronics 2026, 15(3), 602; https://doi.org/10.3390/electronics15030602 - 29 Jan 2026
Abstract
Autonomous navigation in unmanned systems increasingly relies on robust perception and mapping capabilities under large-scale, dynamic, and unstructured environments. Multi-camera simultaneous localization and mapping (MCSLAM) has emerged as a promising solution due to its improved field-of-view coverage, redundancy, and robustness compared to single-camera systems. However, the deployment of MCSLAM introduces several technical challenges that remain insufficiently addressed in existing literature. These challenges include the high-dimensional nature of multi-view visual data, the computational cost associated with multi-view geometry and large-scale bundle adjustment, and the strict requirements on camera calibration, temporal synchronization, and geometric consistency across heterogeneous viewpoints. This survey provides a comprehensive review of recent advances in MCSLAM for unmanned systems, categorizing existing approaches based on system configuration, field-of-view overlap, calibration strategies, and optimization frameworks. We further analyze common failure modes, evaluate representative algorithms, and identify emerging research trends toward scalable, real-time, and uncertainty-aware MCSLAM in complex operational environments. Full article
28 pages, 8359 KB  
Article
Intelligent Evolutionary Optimisation Method for Ventilation-on-Demand Airflow Augmentation in Mine Ventilation Systems Based on JADE
by Gengxin Niu and Cunmiao Li
Buildings 2026, 16(3), 568; https://doi.org/10.3390/buildings16030568 - 29 Jan 2026
Abstract
For mine ventilation-on-demand (VOD) scenarios, conventional joint optimisation of airflow augmentation and energy saving in mine ventilation systems is often constrained in practical engineering applications by shrinkage of the feasible region, limited adjustable resistance margins, and strongly multi-modal objective functions. These factors tend to result in low solution efficiency, pronounced sensitivity to initial values and insufficient solution robustness. In response to these challenges, a two-layer intelligent evolutionary optimisation framework, termed ES–Hybrid JADE with Competitive Niching, is developed in this study. In the outer layer, four classes of evolutionary algorithms—CMAES, DE, ES, and GA—are comparatively assessed over 50 repeated test runs, with a combined ranking based on convergence speed and solution quality adopted as the evaluation metric. ES, with a rank_mean of 2.0, is ultimately selected as the global hyper-parameter self-adaptive regulator. In the inner layer, four algorithms—COBYLA, JADE, PSO and TPE—are compared. The results indicate that JADE achieves the best overall performance in terms of terminal objective value, multi-dimensional performance trade-offs and robustness across random seeds. Furthermore, all four inner-layer algorithms attain feasible solutions with a success rate of 1.0 under the prescribed constraints, thereby ensuring that the entire optimisation process remains within the feasible domain. The proposed framework is applied to an exhaust-type dual-fan ventilation system in a coal mine in Shaanxi Province as an engineering case study. By integrating GA-based automatic ventilation network drawing (longest-path/connected-path) with roadway sensitivity analysis and maximum resistance increment assessment, two solution schemes—direct optimisation and composite optimisation—are constructed and compared. The results show that, within the airflow augmentation interval [0.40, 0.55], the two schemes are essentially equivalent in terms of the optimal augmentation effect, whereas the computation time of the composite optimisation scheme is reduced significantly from approximately 29 min to about 13 s, and a set of multi-modal elite solutions can be provided to support dispatch and decision-making. Under global constraints, a maximum achievable airflow increment of approximately 0.66 m³·s⁻¹ is obtained for branch 10, and optimal dual-branch and triple-branch cooperative augmentation combinations, together with the corresponding power projections, are further derived. To the best of our knowledge, prior VOD airflow-augmentation studies have not combined feasibility-region contraction (via sensitivity- and resistance-margin gating) with a two-layer ES-tuned JADE optimiser equipped with Competitive Niching to output multiple feasible optima. This work provides new insight that the constrained airflow-augmentation problem is intrinsically multimodal, and that retaining multiple basins of attraction yields dispatch-ready elite solutions while achieving orders-of-magnitude runtime reduction through prediction-based constraints. The study demonstrates that the proposed two-layer intelligent evolutionary framework combines fast convergence with high solution stability under strict feasibility constraints, and can be employed as an engineering algorithmic core for energy-efficiency co-ordination in mine VOD control. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)
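JADE, the inner-layer optimiser selected above, extends differential evolution with the current-to-pbest/1 mutation and per-individual sampling of F and CR. A simplified single-generation sketch on a generic objective (no external archive and no adaptation of the location parameters; the population size, the sphere test function, and all settings below are illustrative assumptions, not the paper's ES-tuned two-layer framework with Competitive Niching):

```python
import numpy as np

def jade_generation(pop, fit, obj, mu_F=0.5, mu_CR=0.5, p=0.1, rng=None):
    """One generation of simplified JADE-style DE (current-to-pbest/1/bin)."""
    rng = rng or np.random.default_rng()
    n, d = pop.shape
    order = np.argsort(fit)                            # minimisation
    pbest = pop[order[: max(1, int(p * n))]]           # top-p% individuals
    new_pop, new_fit = pop.copy(), fit.copy()
    for i in range(n):
        F = abs(rng.standard_cauchy() * 0.1 + mu_F)    # Cauchy-sampled scale factor
        CR = float(np.clip(rng.normal(mu_CR, 0.1), 0, 1))  # Gaussian-sampled crossover rate
        r1, r2 = rng.choice(n, size=2, replace=False)
        xp = pbest[rng.integers(len(pbest))]
        v = pop[i] + F * (xp - pop[i]) + F * (pop[r1] - pop[r2])  # mutation
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True                  # ensure at least one mutated gene
        u = np.where(cross, v, pop[i])                 # binomial crossover
        fu = obj(u)
        if fu < fit[i]:                                # greedy selection
            new_pop[i], new_fit[i] = u, fu
    return new_pop, new_fit

# Toy run on the sphere function (illustrative only).
rng = np.random.default_rng(1)
pop = rng.uniform(-5, 5, size=(30, 10))
sphere = lambda x: float(np.sum(x * x))
fit = np.array([sphere(x) for x in pop])
for _ in range(100):
    pop, fit = jade_generation(pop, fit, sphere, rng=rng)
print(fit.min())
```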
18 pages, 5229 KB  
Article
HF-EdgeFormer: A Hybrid High-Order Focus and Transformer-Based Model for Oral Ulcer Segmentation
by Dragoș-Ciprian Cornoiu and Călin-Adrian Popa
Electronics 2026, 15(3), 595; https://doi.org/10.3390/electronics15030595 - 29 Jan 2026
Abstract
Precise medical segmentation of oral ulcers is crucial for early diagnosis, but it remains a very challenging task due to rich backgrounds, overexposed or underexposed lesions, and complex surrounding areas. To address this challenge, this paper introduces HF-EdgeFormer, a novel hybrid model for oral ulcer segmentation on the AutoOral dataset. Based on publicly available models, this U-shaped, transformer-like architecture is the second documented solution for oral ulcer segmentation, and it explicitly integrates high-order frequency interactions using multi-dimensional edge cues. At the encoding stage, an HFConv (High-order Focus Convolution) module divides the feature channels into local and global streams, performing learnable filtering via FFT and depth-wise convolutions, and then fuses them through stacks of focal transformers and attention gates. In addition to the HFConv block, there are two edge-aware units: the EdgeAware Localization module (which uses eight-direction Sobel filters) and a new Precision EdgeEnhance module (channel-wise Sobel fusion), both used to reinforce boundaries. Skip connections employ Multi-dilated Attention Gates, accompanied by a Spatial-Channel Attention Bridge to accentuate lesion-consistent activations. Moreover, the architecture employs a lightweight vision transformer-based bottleneck consisting of four SegFormerBlock modules at the network’s deepest point, providing global relational modeling exactly where the receptive field is largest. The model is trained on the AutoOral dataset (introduced by the same team that developed the HF-UNet architecture); because of the limited number of available images, the dataset was extended with extensive geometric and photometric augmentations (such as RandomAffine, flips, and rotations). The architecture achieves a test Dice score of nearly 82% and a sensitivity of just over 85% while maintaining high precision and specificity, which are highly valuable in medical segmentation. These results surpass prior HF-UNet baselines while keeping the model lightweight, with only a minimal increase in inference memory. Full article
(This article belongs to the Special Issue Artificial Intelligence and Deep Learning Techniques for Healthcare)
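The edge-aware units above rely on eight-direction Sobel filtering; the eight compass kernels can be generated by rotating the border entries of the horizontal Sobel kernel in 45° steps. A minimal single-channel NumPy/SciPy sketch of that idea (a fixed-filter illustration, not the paper's learnable modules):

```python
import numpy as np
from scipy.ndimage import convolve

def eight_direction_sobel(img):
    """Max absolute response over eight compass-direction Sobel kernels."""
    base = np.array([[-1, 0, 1],
                     [-2, 0, 2],
                     [-1, 0, 1]], dtype=float)
    # Border cells listed clockwise; rolling their values by one step rotates the kernel 45 degrees.
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    vals = np.array([base[r] for r in ring])
    responses = []
    for k in range(8):
        kernel = np.zeros((3, 3))
        for (r, c), v in zip(ring, np.roll(vals, k)):
            kernel[r, c] = v
        responses.append(convolve(img, kernel, mode="nearest"))
    return np.max(np.abs(np.stack(responses)), axis=0)

# Toy example: edge map of a synthetic bright square on a dark background.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
print(eight_direction_sobel(img).max())
```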
18 pages, 4967 KB  
Article
An Analytical Model for High-Velocity Impacts of Flaky Projectile on Woven Composite Plates
by Chao Hang, Xiaochuan Liu, Yonghui Chen and Tao Suo
Aerospace 2026, 13(2), 126; https://doi.org/10.3390/aerospace13020126 - 28 Jan 2026
Abstract
Three-dimensional (3D) woven composites have good impact resistance and are expected to become the fan casing material for the next generation of turbofan engines. Conducting research on the performance of woven composite plates under high-velocity impact of flaky projectiles is of great significance for the containment design of the fan casing. Based on the principle of energy conservation, an analytical model for the high-velocity impact of flaky projectiles on carbon fiber woven composite plates was established for three typical failure modes: shear plugging, fiber failure, and momentum transfer. A segmented solution method combining analytical and numerical calculations was developed for the model. The critical penetration velocity of the plate obtained by the analytical method at different roll angles of the projectile is in good agreement with the experimental results, which verifies the accuracy of the analytical model. Moreover, the analytical results indicate that the critical penetration velocity of the plate increases first and then decreases with the roll angle of the projectile. Further energy conversion analysis points out that shear plugging is the main form of energy dissipation for woven composite plates, and the energy dissipation of shear plugging at a roll angle of 30° is higher than that at 0° and 60°. This elucidates the mechanism by which the roll angle of the projectile affects the critical penetration velocity of the plate from the perspective of energy dissipation. Full article
18 pages, 4050 KB  
Article
Pore-Scale Evolution of Effective Properties in Porous Rocks During Dissolution/Erosion and Precipitation
by Xiaoyu Wang, Songqing Zheng, Yingfu He, Yujie Wang, Enhao Liu, Yandong Zhang, Fengchang Yang and Bowen Ling
Appl. Sci. 2026, 16(3), 1287; https://doi.org/10.3390/app16031287 - 27 Jan 2026
Abstract
Reactive transport in porous media is ubiquitous in natural and industrial systems—reformation of geological energy repositories, carbon dioxide (CO₂) sequestration, CO₂ storage via mineralization, and soil remediation are just some examples where geo-/bio-chemical reactions play a key role. Reactive transport models are expected to provide assessments of (1) the effective property variation and (2) the reaction capability. However, the synergy among flow, solute transport, and reaction undermines the predictability of existing models. In recent decades, the Micro-Continuum Approach (MCA) has demonstrated advantages for modeling pore-scale reactive transport and high accuracy when compared with experiments. In this study, we present an MCA-based numerical framework that simulates dissolution/erosion or precipitation in digital rocks. The framework imports two- or three-dimensional digital rock samples, conducts reactive transport simulations, and evaluates dynamic changes in porosity, surface area, permeability tensor, tortuosity, mass change, and reaction rate. The results show that samples with similar effective properties, e.g., porosity or permeability, may exhibit different reaction abilities, suggesting that the pore-scale geometry has a strong impact on reactive transport. Additionally, the numerical framework demonstrates the advantage of conducting multiple reaction studies on the same sample, in contrast to reality, where there is often only one physical experiment. This advantage enables the identification of the optimal condition, quantified by the dimensionless Péclet number and Damköhler number, to reach the maximum reaction. We believe that the newly developed framework serves as a toolbox for evaluating reactivity capacity and predicting effective properties of digital samples. Full article
(This article belongs to the Special Issue Geochemistry and Geochronology of Rocks)
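The optimal-condition search above is parameterised by the Péclet and Damköhler numbers. One common convention takes Pe as the ratio of advective to diffusive transport and Da (first kind) as the ratio of reaction to advection; definitions vary between studies, so treat the sketch below as an illustrative convention rather than the paper's exact one:

```python
def peclet(u, L, D):
    """Pe = u * L / D for characteristic velocity u [m/s], length L [m],
    and molecular diffusivity D [m^2/s]."""
    return u * L / D

def damkohler(k, L, u):
    """Da (first kind) = k * L / u for a first-order volumetric
    reaction-rate constant k [1/s]."""
    return k * L / u

# Illustrative pore-scale values (not taken from the paper).
u, L, D, k = 1e-4, 1e-3, 1e-9, 1e-2    # m/s, m, m^2/s, 1/s
print(peclet(u, L, D), damkohler(k, L, u))   # Pe = 100.0, Da = 0.1
```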
26 pages, 8779 KB  
Article
TAUT: A Remote Sensing-Based Terrain-Adaptive U-Net Transformer for High-Resolution Spatiotemporal Downscaling of Temperature over Southwest China
by Zezhi Cheng, Jiping Guan, Li Xiang, Jingnan Wang and Jie Xiang
Remote Sens. 2026, 18(3), 416; https://doi.org/10.3390/rs18030416 - 27 Jan 2026
Abstract
High-precision temperature prediction is crucial for dealing with extreme weather events against the background of global warming. However, due to limited computing resources, it is difficult for numerical weather prediction (NWP) models to directly provide high spatio-temporal resolution data that meets the specific application requirements of a given region. This problem is particularly prominent in areas with complex terrain. Remote sensing data, especially high-resolution terrain data, provide key information for understanding and simulating land–atmosphere interactions over complex terrain, making the integration of remote sensing and NWP outputs for high-precision downscaling of meteorological elements a core challenge. To address the challenge of temperature downscaling in the complex terrain of Southwest China, this paper proposes a novel deep learning model, the Terrain-Adaptive U-Net Transformer (TAUT). The model uses the encoder–decoder structure of U-Net as its backbone, integrates the global attention mechanism of the Swin Transformer with the local spatio-temporal feature extraction of three-dimensional convolution, and introduces a multi-branch terrain-adaptive module (MBTA). This enables adaptive integration of terrain remote sensing data with meteorological fields such as temperature and wind. The result is high-resolution spatio-temporal downscaling of 2 m temperature over the complex terrain of Southwest China (from 0.1° to 0.01° in space and from 3 h to 1 h intervals in time). The experimental results show that, within the 48 h downscaling window, the TAUT model outperforms comparison models such as bilinear interpolation, SRCNN, U-Net, and EDVR on all evaluation metrics (MAE, RMSE, COR, ACC, PSNR, SSIM). A systematic ablation study verified the independent contributions and synergistic effects of the Swin Transformer module, the 3D convolution module, and the MBTA module. In addition, regional terrain verification shows that the model is adaptable and stable across different terrain types (mountains, plateaus, basins). In high-temperature extreme weather in particular, it more precisely recovers the terrain-affected temperature distribution details and spatial textures, confirming the significant impact of terrain remote sensing data on downscaling accuracy. The core contribution of this study is a hybrid architecture that jointly leverages the local feature extraction of CNNs and the global context modeling of Transformers, and effectively integrates key terrain remote sensing data through dedicated modules. TAUT offers an effective deep learning solution for precise temperature prediction in complex terrain and provides a reference framework for integrating remote sensing data with numerical model data in deep learning models. Full article
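The bilinear-interpolation baseline listed among the comparison models corresponds to a plain 10× spatial upsampling from the 0.1° grid to 0.01°. A minimal sketch of that baseline on synthetic stand-in data (the TAUT network itself is not reproduced here):

```python
import numpy as np
from scipy.ndimage import zoom

# Coarse 2 m temperature field on a 0.1-degree grid (synthetic stand-in values in degrees C).
rng = np.random.default_rng(0)
t2m_coarse = 15.0 + 5.0 * rng.random((20, 20))         # a 2 x 2 degree tile of 0.1-degree cells

# order=1 gives bilinear interpolation; zoom factor 10 takes 0.1 deg -> 0.01 deg.
t2m_fine = zoom(t2m_coarse, zoom=10, order=1)
print(t2m_coarse.shape, "->", t2m_fine.shape)          # (20, 20) -> (200, 200)
```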
20 pages, 731 KB  
Perspective
Reinforcement Learning-Driven Control Strategies for DC Flexible Microgrids: Challenges and Future
by Jialu Shi, Wenping Xue and Kangji Li
Energies 2026, 19(3), 648; https://doi.org/10.3390/en19030648 - 27 Jan 2026
Abstract
The increasing penetration of photovoltaic (PV) generation, energy storage systems, and flexible loads within modern buildings demands advanced control strategies capable of harnessing dynamic assets while maintaining grid reliability. This Perspective article presents a comprehensive overview of reinforcement learning-driven (RL-driven) control methods for DC flexible microgrids—focusing in particular on building-integrated systems that shift from AC microgrid architectures to true PV–Energy storage–DC flexible (PEDF) systems. We examine the structural evolution from traditional AC microgrids through DC microgrids to PEDF architectures, highlight core system components (PV arrays, battery storage, DC bus networks, and flexible demand interfaces), and elucidate their coupling within building clusters and urban energy networks. We then identify key challenges for RL applications in this domain—including high-dimensional state and action spaces, safety-critical constraints, sample efficiency, and real-time deployment in building energy systems—and propose future research directions, such as multi-agent deep RL, transfer learning across building portfolios, and real-time safety assurance frameworks. By synthesizing recent developments and mapping open research avenues, this work aims to guide researchers and practitioners toward robust, scalable control solutions for next-generation DC flexible microgrids. Full article
23 pages, 2393 KB  
Article
Information-Theoretic Intrinsic Motivation for Reinforcement Learning in Combinatorial Routing
by Ruozhang Xi, Yao Ni and Wangyu Wu
Entropy 2026, 28(2), 140; https://doi.org/10.3390/e28020140 - 27 Jan 2026
Abstract
Intrinsic motivation provides a principled mechanism for driving exploration in reinforcement learning when external rewards are sparse or delayed. A central challenge, however, lies in defining meaningful novelty signals in high-dimensional and combinatorial state spaces, where observation-level density estimation and prediction-error heuristics often become unreliable. In this work, we propose an information-theoretic framework for intrinsically motivated reinforcement learning grounded in the Information Bottleneck principle. Our approach learns compact latent state representations by explicitly balancing the compression of observations and the preservation of predictive information about future state transitions. Within this bottlenecked latent space, intrinsic rewards are defined through information-theoretic quantities that characterize the novelty of state–action transitions in terms of mutual information, rather than raw observation dissimilarity. To enable scalable estimation in continuous and high-dimensional settings, we employ neural mutual information estimators that avoid explicit density modeling and contrastive objectives based on the construction of positive–negative pairs. We evaluate the proposed method on two representative combinatorial routing problems, the Travelling Salesman Problem and the Split Delivery Vehicle Routing Problem, formulated as Markov decision processes with sparse terminal rewards. These problems serve as controlled testbeds for studying exploration and representation learning under long-horizon decision making. Experimental results demonstrate that the proposed information bottleneck-driven intrinsic motivation improves exploration efficiency, training stability, and solution quality compared to standard reinforcement learning baselines. Full article
(This article belongs to the Special Issue The Information Bottleneck Method: Theory and Applications)
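In Information Bottleneck terms, the trade-off described above (compressing observations while preserving predictive information about future transitions) is usually written as a Lagrangian. The notation below is a generic statement of that objective, not the paper's exact formulation; Z is the latent encoding of observation X, S' stands for the next-state information to be preserved, and β weights the trade-off:

```latex
% Generic Information Bottleneck Lagrangian for a stochastic encoder p(z|x):
% compress the observation X into Z while preserving information about S'.
\min_{p(z \mid x)} \; I(X; Z) \;-\; \beta \, I(Z; S')
```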
24 pages, 9506 KB  
Article
An SBAS-InSAR Analysis and Assessment of Landslide Deformation in the Loess Plateau, China
by Yan Yang, Rongmei Liu, Liang Wu, Tao Wang and Shoutao Jiao
Remote Sens. 2026, 18(3), 411; https://doi.org/10.3390/rs18030411 - 26 Jan 2026
Abstract
This study conducts a landslide deformation assessment in Tianshui, Gansu Province, on the Chinese Loess Plateau, utilizing the Small Baseline Subset InSAR (SBAS-InSAR) method integrated with velocity direction conversion and Z-score clustering. The Chinese Loess Plateau is one of the most landslide-prone regions in China due to frequent rains, strong topographical gradients and severe soil erosion. By constructing subsets of interferograms, SBAS-InSAR can mitigate the influence of decorrelation to a certain extent, making it a highly effective technique for monitoring regional surface deformation and identifying landslides. To overcome the limitations of the satellite’s one-dimensional Line-of-Sight (LOS) measurements and the challenge of distinguishing true landslide signals from noise, two optimization strategies were implemented. First, LOS velocities were projected onto the local steepest slope direction, assuming translational movement parallel to the slope. Second, a Z-score clustering algorithm was employed to aggregate measurement points with consistent kinematic signatures, enhancing identification robustness, with a slight trade-off in spatial completeness. Based on 205 Sentinel-1 Single-Look Complex (SLC) images acquired from 2014 to 2024, the integrated workflow identified 69 “active, very slow” and 63 “active, extremely slow” landslides. These results were validated through high-resolution historical optical imagery. Time series analysis reveals that creep deformation in this region is highly sensitive to seasonal rainfall patterns. This study demonstrates that the SBAS-InSAR post-processing framework provides a cost-effective, millimeter-scale solution for updating landslide inventories and supporting regional risk management and early warning systems in loess-covered terrains, with the exception of densely forested areas. Full article
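Projecting LOS velocities onto the local steepest-slope direction amounts to dividing the LOS rate by the cosine of the angle between the LOS unit vector and the downslope unit vector, under the translational slope-parallel motion assumption stated above. A minimal sketch in which the LOS vector is supplied as a ready-made ENU unit vector, because sign and geometry conventions differ between processing chains; the numeric example values are assumptions:

```python
import numpy as np

def los_to_slope(v_los, slope_deg, aspect_deg, e_los, min_cos=0.2):
    """Project LOS velocities onto the downslope direction.

    v_los:      LOS velocity (sign convention assumed: positive toward the satellite)
    slope_deg:  terrain slope angle from a DEM
    aspect_deg: azimuth of the downslope direction (clockwise from north)
    e_los:      ENU unit vector pointing from ground to satellite
    min_cos:    mask out geometries where the projection blows up
    """
    s, a = np.radians(slope_deg), np.radians(aspect_deg)
    # Downslope unit vector in ENU coordinates.
    e_slope = np.stack([np.cos(s) * np.sin(a),        # east
                        np.cos(s) * np.cos(a),        # north
                        -np.sin(s)], axis=-1)         # up component (points downhill)
    c = e_slope @ np.asarray(e_los)                   # cosine between slope and LOS directions
    return np.where(np.abs(c) > min_cos, v_los / c, np.nan)

# Single-point example with an assumed Sentinel-1-like LOS geometry (values illustrative).
print(los_to_slope(v_los=-12.0, slope_deg=25.0, aspect_deg=110.0,
                   e_los=np.array([-0.6, -0.12, 0.79])))
```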
21 pages, 6374 KB  
Article
Identification of Microseismic Signals in Coal Mine Rockbursts Based on Hybrid Feature Selection and a Transformer
by Jizhi Zhang, Hongwei Wang and Tianwei Shi
Appl. Sci. 2026, 16(3), 1241; https://doi.org/10.3390/app16031241 - 26 Jan 2026
Abstract
Deep learning algorithms are pivotal in the identification and classification of microseismic signals in mines subjected to impact pressure. However, conventional machine learning techniques often struggle to balance interpretability, computational efficiency, and accuracy. To address these challenges, this paper presents a hybrid feature selection and Transformer-based model for microseismic signal classification. The proposed model employs a hybrid feature selection method for data preprocessing, followed by an enhanced Transformer for signal classification. The study first outlines the underlying principles of the method, then extracts key seismic features—such as zero-crossing rate, maximum amplitude, and dominant frequency—from various microseismic signal types. These features undergo importance and correlation analyses to facilitate dimensionality reduction. Finally, a Transformer-based classification framework is developed and compared against several traditional deep learning models. The results reveal significant differences in the waveforms and spectra of different microseismic signal types. The selected feature parameters exhibit high representativeness and stability. The proposed model achieves an accuracy of 90.86%, outperforming traditional deep learning approaches such as CNN (85.2%) and LSTM (83.7%) by a considerable margin. This approach provides a reliable and efficient solution for the rapid identification of microseismic events in rockburst-prone mines. Full article
(This article belongs to the Special Issue Advanced Technology and Data Analysis in Seismology)
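The three named waveform features (zero-crossing rate, maximum amplitude, dominant frequency) can be computed directly from a windowed signal. A minimal NumPy sketch on a synthetic test signal (the paper's full hybrid feature-selection pipeline and Transformer classifier are not reproduced):

```python
import numpy as np

def basic_features(x, fs):
    """Zero-crossing rate, maximum amplitude, and dominant frequency of a waveform."""
    zcr = np.count_nonzero(np.signbit(x[1:]) != np.signbit(x[:-1])) / len(x)  # crossings per sample
    max_amp = float(np.max(np.abs(x)))
    spec = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    dom_freq = float(freqs[np.argmax(spec)])
    return zcr, max_amp, dom_freq

# Synthetic microseismic-like test signal: a decaying 40 Hz burst plus noise at fs = 1 kHz.
fs = 1000
t = np.arange(0, 1, 1 / fs)
x = np.exp(-5 * t) * np.sin(2 * np.pi * 40 * t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
print(basic_features(x, fs))
```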
28 pages, 3390 KB  
Article
Enhancing Multi-Agent Reinforcement Learning via Knowledge-Embedded Modular Framework for Online Basketball Games
by Junhyuk Kim, Jisun Park and Kyungeun Cho
Mathematics 2026, 14(3), 419; https://doi.org/10.3390/math14030419 - 25 Jan 2026
Abstract
High sample complexity presents a major challenge in applying multi-agent reinforcement learning (MARL) to dynamic, high-dimensional sports such as basketball. To address this problem, we proposed the knowledge-embedded modular framework (KEMF), which partitions the environment into offense, defense, and loose-ball modules. Each module employs specialized policies and a knowledge-based observation layer enriched with basketball-specific metrics such as shooting success and defensive accuracy. These metrics are also incorporated into a dynamic and dense reward scheme that offers more direct and situation-specific feedback than sparse win/loss signals. We integrated these components into a multi-agent proximal policy optimization (MAPPO) algorithm to enhance training speed and improve sample efficiency. Evaluations using the commercial basketball game Freestyle indicate that KEMF outperformed previous methods in terms of the average points, winning rate, and overall training efficiency. An ablation study confirmed the synergistic effects of modularity, knowledge-embedded observations, and dense rewards. Moreover, a real-world deployment in 1457 live matches demonstrated the robustness of the framework, with trained agents achieving a 52.43% win rate against experienced human players. These results underscore the promise of the KEMF to enable efficient, adaptive, and strategically coherent MARL solutions in complex sporting environments. Full article
(This article belongs to the Special Issue Applications of Intelligent Game and Reinforcement Learning)
17 pages, 566 KB  
Article
AE-CTGAN: Autoencoder–Conditional Tabular GAN for Multi-Omics Imbalanced Class Handling and Cancer Outcome Prediction
by Ibrahim Al-Hurani, Sara H. ElFar, Abedalrhman Alkhateeb and Salama Ikki
Algorithms 2026, 19(2), 95; https://doi.org/10.3390/a19020095 - 25 Jan 2026
Abstract
The rapid advancement of sequencing technologies has led to the generation of complex multi-omics data, which are often high-dimensional, noisy, and imbalanced, posing significant challenges for traditional machine learning methods. The novelty of this work resides in the architecture-level integration of autoencoders with Generative Adversarial Network (GAN) and Conditional Tabular Generative Adversarial Network (CTGAN) models, where the autoencoder is employed for latent feature extraction and noise reduction, while GAN-based models are used for realistic sample generation and class imbalance mitigation in multi-omics cancer datasets. This study proposes a novel framework that combines an autoencoder for dimensionality reduction and a CTGAN for generating synthetic samples to balance underrepresented classes. The process starts with selecting the most discriminative features, then extracting latent representations for each omic type, merging them, and generating new minority samples. Finally, all samples are used to train a neural network to predict specific cancer outcomes, defined here as clinically relevant biomarkers or patient characteristics. In this work, the considered outcome in the bladder cancer is Tumor Mutational Burden (TMB), while the breast cancer outcome is menopausal status, a key factor in treatment planning. Experimental results show that the proposed model achieves high precision, with an average precision of 0.9929 for TMB prediction in bladder cancer and 0.9748 for menopausal status in breast cancer, and reaches perfect precision (1.000) for the positive class in both cases. In addition, the proposed AE–CTGAN framework consistently outperformed an autoencoder combined with a standard GAN across all evaluation metrics, achieving average accuracies of 0.9929 and 0.9748, recall values of 0.9846 and 0.9777, and F1-scores of 0.9922 for bladder and breast cancer datasets, respectively. A comparative fidelity analysis in the latent space further demonstrated the superiority of CTGAN, reducing the average Euclidean distance between real and synthetic samples by approximately 72% for bladder cancer and by up to 84% for breast cancer compared to a standard GAN. These findings confirm that CTGAN generates high-fidelity synthetic samples that preserve the structural characteristics of real multi-omics data, leading to more reliable class balancing and improved predictive performance. Overall, the proposed framework provides an effective and robust solution for handling class imbalance in multi-omics cancer data and enhances the accuracy of clinically relevant outcome prediction. Full article
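The latent-space fidelity comparison described above reduces to an average Euclidean distance between real and synthetic latent vectors. A minimal sketch of one such metric, here the nearest-real-neighbour distance averaged over synthetic samples, which is only one reasonable reading since the paper's exact pairing scheme is not stated in the abstract:

```python
import numpy as np

def avg_latent_distance(real_z, synth_z):
    """Mean Euclidean distance from each synthetic latent vector to its
    nearest real latent vector (lower = higher fidelity)."""
    d = np.linalg.norm(synth_z[:, None, :] - real_z[None, :, :], axis=-1)  # (n_synth, n_real)
    return float(d.min(axis=1).mean())

# Toy latent spaces standing in for autoencoder embeddings of multi-omics samples.
rng = np.random.default_rng(0)
real_z = rng.normal(size=(200, 16))
good_synth = real_z[:80] + 0.1 * rng.normal(size=(80, 16))   # close to the real manifold
poor_synth = rng.normal(loc=3.0, size=(80, 16))              # far from the real manifold
print(avg_latent_distance(real_z, good_synth), avg_latent_distance(real_z, poor_synth))
```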