
Search Results (6,683)

Search Parameters:
Keywords = high dimensional data

22 pages, 2918 KB  
Article
MV-RiskNet: Multi-View Attention-Based Deep Learning Model for Regional Epidemic Risk Prediction and Mapping
by Beyzanur Okudan and Abdullah Ammar Karcioglu
Appl. Sci. 2026, 16(4), 2135; https://doi.org/10.3390/app16042135 (registering DOI) - 22 Feb 2026
Abstract
Regional epidemic risk prediction requires holistic modeling of heterogeneous data sources such as demographic structure, health capacity, geographical features, and human mobility. In this study, a unique multi-modal epidemiological dataset integrating demographic, health, geographic, and mobility indicators of Türkiye and its neighboring countries (Greece, Bulgaria, Georgia, Armenia, Iran, and Iraq) was collected. This dataset, created by combining raw data from Türkiye and these neighbors, provides a comprehensive regional representation that allows both quantitative classification and spatial mapping of epidemiological risk. To address class imbalance, a Conditional GAN (CGAN), a class-conditional synthetic-example generation approach that enhances the representation of high-risk categories, was used. We propose a multi-view deep learning model, MV-RiskNet, which models the multi-dimensional data structure by processing each view in an independent subnetwork and integrating the representations with an attention-based fusion mechanism for regional epidemic risk prediction. In experiments, the model was compared against Multi-Layer Perceptron (MLP), Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), Autoencoder-classifier, and Graph Convolutional Network (GCN) baselines. The proposed MV-RiskNet with CGAN achieved the best results, with 97.22% accuracy and a 97.40% F1-score. The generated risk maps reveal regional clustering patterns in a spatially consistent manner, and attention analyses show that demographic and geographic features are the dominant determinants, with mobility playing a complementary role, especially in high-risk regions. Full article
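The attention-based fusion step described in this abstract can be sketched generically. A minimal sketch follows, assuming a tanh-scoring attention form; the dimensions, parameter shapes, and values are illustrative assumptions, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_fuse(views, W, w):
    """Fuse per-view embeddings with scalar attention scores.

    views : list of (d,) embeddings, one per view (e.g., demographic,
            health, geographic, mobility)
    W     : (h, d) projection, w : (h,) scoring vector -- stand-ins for
            learned parameters.
    Returns the fused (d,) vector and the per-view attention weights.
    """
    scores = np.array([w @ np.tanh(W @ h) for h in views])
    alpha = softmax(scores)                      # weights sum to 1
    fused = sum(a * h for a, h in zip(alpha, views))
    return fused, alpha

# Four hypothetical view embeddings of dimension d = 8.
d, h = 8, 16
views = [rng.normal(size=d) for _ in range(4)]
W, w = rng.normal(size=(h, d)), rng.normal(size=h)
fused, alpha = attention_fuse(views, W, w)
```

The attention weights also give the kind of per-view importance readout the abstract mentions (dominant demographic/geographic views, complementary mobility).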
25 pages, 1060 KB  
Article
Kernel-Based Optimal Subspaces (KOS): A Method for Data Classification
by Lakhdar Remaki
Mach. Learn. Knowl. Extr. 2026, 8(2), 52; https://doi.org/10.3390/make8020052 (registering DOI) - 22 Feb 2026
Abstract
Support Vector Machine (SVM) is a popular kernel-based method for data classification that has demonstrated high efficiency across a wide range of practical applications. However, SVM suffers from several limitations, including the potential failure of the optimization process, especially in high-dimensional spaces; the inherently high computational cost; the lack of a systematic approach to multi-class classification; difficulties in handling imbalanced classes; and the prohibitive cost of real-time or dynamic classification. This paper proposes an alternative method, referred to as Kernel-based Optimal Subspaces (KOS), which belongs to the family of kernel subspace methods. Mathematically similar to Kernel PCA (KPCA), KOS achieves performance comparable to SVM while addressing the aforementioned weaknesses. The method is based on computing the minimum distance to optimal feature subspaces of the mapped data. Because no optimization process is required, KOS is robust, fast, and easy to implement. The optimal subspaces are constructed independently, enabling high parallelizability and making the approach well-suited for dynamic classification and real-time applications. Furthermore, the issue of imbalanced classes is naturally handled by subdividing large classes into smaller sub-classes, thereby creating appropriately sized sub-subspaces within the feature space. Full article
(This article belongs to the Section Data)
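The core KOS idea, classifying by minimum residual distance to per-class feature-space subspaces, can be sketched with a generic KPCA-style subspace classifier. This is a sketch of the general kernel-subspace technique, not the paper's exact construction; the RBF kernel, component count, and all names are assumptions:

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    # Pairwise RBF kernel: k(a, b) = exp(-gamma * ||a - b||^2).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelSubspaceClassifier:
    """Per-class kernel subspace classifier (KPCA-style).

    Each class is represented by the span of its leading kernel principal
    components; a test point is assigned to the class whose feature-space
    subspace is nearest (minimum residual distance). No optimization loop
    is needed, and each class subspace is built independently.
    """

    def __init__(self, n_components=3, gamma=0.5):
        self.n_components, self.gamma = n_components, gamma

    def fit(self, X, y):
        self.models_ = {}
        for c in np.unique(y):
            Xc = X[y == c]
            n = len(Xc)
            K = rbf(Xc, Xc, self.gamma)
            one = np.full((n, n), 1.0 / n)
            Kc = K - one @ K - K @ one + one @ K @ one  # double centering
            lam, A = np.linalg.eigh(Kc)
            top = np.argsort(lam)[::-1][: self.n_components]
            self.models_[c] = (Xc, K, A[:, top] / np.sqrt(lam[top]))
        return self

    def _dist2(self, X, Xc, K, A):
        Kx = rbf(X, Xc, self.gamma)                       # (m, n)
        Kxc = Kx - Kx.mean(1, keepdims=True) - K.mean(0) + K.mean()
        norm2 = 1.0 - 2.0 * Kx.mean(1) + K.mean()         # k(x, x) = 1 for RBF
        proj = Kxc @ A                                    # component coordinates
        return norm2 - (proj ** 2).sum(1)                 # residual distance^2

    def predict(self, X):
        classes = sorted(self.models_)
        D = np.stack([self._dist2(X, *self.models_[c]) for c in classes])
        return np.asarray(classes)[np.argmin(D, axis=0)]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(5.0, 0.3, (20, 2))])
y = np.repeat([0, 1], 20)
clf = KernelSubspaceClassifier().fit(X, y)
```

Because each class model is just an eigendecomposition of its own Gram matrix, the subspaces can be fit in parallel and updated independently, which is the property the abstract highlights for dynamic classification.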
18 pages, 56175 KB  
Article
Enhanced Three-Dimensional Double Random Phase Encryption: Overcoming Phase Information Loss in Zero-Amplitude Singularities for Simultaneous Two Primary Data
by Myungjin Cho and Min-Chul Lee
Electronics 2026, 15(4), 896; https://doi.org/10.3390/electronics15040896 (registering DOI) - 22 Feb 2026
Abstract
This paper proposes an advanced three-dimensional optical encryption technique based on double random phase encryption for the simultaneous encryption of two primary datasets. While conventional double random phase encryption offers high-speed encryption, it suffers from low data efficiency. To address this issue, the proposed method assigns the first primary dataset to the amplitude and the second to the phase. However, this approach faces a critical limitation: the phase information becomes undefined or lost when the amplitude is zero. Therefore, we introduce a biased amplitude encoding scheme for double random phase encryption to ensure the mathematical recoverability of the phase component. In the proposed method, a bias value ϵ is added to the amplitude part during the double random phase encryption process and subsequently subtracted from the decrypted data to recover the two primary datasets. To verify the effectiveness of our approach, we employ synthetic aperture integral imaging and volumetric computational reconstruction. The experimental results show that while the first dataset remains lossless, the lossy characteristics of the second dataset are significantly mitigated. Full article
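The biased amplitude encoding can be illustrated numerically: adding a bias ϵ keeps the modulus strictly positive, so the phase channel survives zero-amplitude pixels. The normalization of both datasets to [0, 1) and the value of ϵ are illustrative assumptions, and this toy omits the random phase masks of full DRPE:

```python
import numpy as np

def encode(amplitude, phase_data, eps=0.1):
    """Pack two real-valued datasets into one complex field.

    The bias eps keeps |field| > 0 everywhere, so the phase stays
    mathematically recoverable even where `amplitude` is zero.
    """
    return (amplitude + eps) * np.exp(2j * np.pi * phase_data)

def decode(field, eps=0.1):
    amplitude = np.abs(field) - eps                       # subtract the bias
    phase_data = (np.angle(field) % (2 * np.pi)) / (2 * np.pi)
    return amplitude, phase_data

rng = np.random.default_rng(0)
a = rng.random((4, 4)); a[0, 0] = 0.0                     # a zero-amplitude pixel
p = rng.random((4, 4)) * 0.99
rec_a, rec_p = decode(encode(a, p))
```

Without the bias, the pixel with `a[0, 0] == 0` would have an undefined angle and its phase payload would be lost, which is exactly the failure mode the paper targets.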
15 pages, 1369 KB  
Article
A Hybrid-Driven Fault Diagnosis Method for Railway Freight Car Braking System
by Yanhui Bai, Honghui Li, Guoliang Gong, Nahao Shen and Yi Xu
Electronics 2026, 15(4), 895; https://doi.org/10.3390/electronics15040895 (registering DOI) - 21 Feb 2026
Abstract
With the increasing demand for heavy-haul railway freight, both the number and volume of heavy-haul freight cars continue to grow. As the core system of railway freight transportation, the reliable operation of the brake system is fundamental to ensuring train safety. Existing freight car braking system fault diagnosis models rely on historical data and fail to account for changes in braking curves when locomotives are coupled with different vehicles, which is the main reason early failures of the braking system go undiagnosed. Consequently, real-time monitoring of the freight car braking system and early fault diagnosis have emerged as a pivotal technical challenge that must be resolved within the framework of the railway freight maintenance reform. This paper proposes a novel hybrid-driven prediction method that effectively combines Convolutional Neural Networks (CNNs), Adaptive Radial Basis Function (RBF) Neural Networks, and Extreme Learning Machines (ELMs), abbreviated CARE. To achieve comprehensive fault feature extraction, building on CNN-based classification of the image data, the K-means clustering algorithm is introduced to adaptively initialize the radial basis centers of the RBF network and recalculate the radial basis radii. Moreover, to improve the real-time performance and accuracy of fault diagnosis, the network layers are expanded, and the ELM algorithm is employed to construct an optimization strategy for high-dimensional data processing in the network layers. The experimental results demonstrate that, when considering the coupling of different vehicles in the railway freight car, the proposed CARE model exhibits faster convergence and significantly improves the effectiveness and real-time performance of fault diagnosis in the railway freight car braking system. Full article
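The K-means initialization of RBF centers and radii can be sketched as follows. The mean-distance radius rule below is one plausible choice under stated assumptions; the paper's exact recalculation rule is not given here:

```python
import numpy as np
from sklearn.cluster import KMeans

def rbf_layer_params(X, n_centers=4, seed=0):
    """Initialize RBF centers via K-means and set each radius adaptively.

    Centers are the K-means centroids; each radius is the mean distance
    from a centroid to its own cluster members (an illustrative rule).
    """
    km = KMeans(n_clusters=n_centers, n_init=10, random_state=seed).fit(X)
    centers = km.cluster_centers_
    radii = np.empty(n_centers)
    for j in range(n_centers):
        pts = X[km.labels_ == j]
        radii[j] = np.linalg.norm(pts - centers[j], axis=1).mean() + 1e-12
    return centers, radii

def rbf_features(X, centers, radii):
    # Gaussian activations: phi_j(x) = exp(-||x - c_j||^2 / (2 r_j^2)).
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * radii ** 2))

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (40, 2)), rng.normal(6, 0.5, (40, 2))])
centers, radii = rbf_layer_params(X)
Phi = rbf_features(X, centers, radii)
```

The resulting feature matrix `Phi` would then feed a downstream layer (in CARE, the ELM stage) instead of hand-tuned RBF parameters.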
22 pages, 4040 KB  
Article
Data-Driven Design of Epoxy–Granite Machine Foundations: Bayesian Optimization for Enhanced Compressive Strength and Vibration Damping
by Mohammed Y. Abdellah, Osama M. Irfan and Hanafy M. Omar
Polymers 2026, 18(4), 532; https://doi.org/10.3390/polym18040532 (registering DOI) - 21 Feb 2026
Abstract
Epoxy–granite (EG) composites, comprising granite quarry waste and low-cost epoxy, present a sustainable alternative to cast iron for machine tool foundations. This study develops a data-driven simulation framework to enhance the mechanical properties of epoxy–granite systems by integrating published experimental data with Gaussian Process Regression (GPR) surrogate modeling and Bayesian optimization (BO). The objective is to maximize compressive strength and vibration damping—both critical factors for machining accuracy and dynamic stability. Experimental results from composites with 12–25 wt% epoxy and varied aggregate gradations demonstrate compressive strengths up to 76.8 MPa and flexural strengths reaching 35.4 MPa. The peak damping ratio of 0.0202 was observed at intermediate epoxy content. Mixtures enriched with fine particles also exhibited enhanced fracture toughness and low water absorption, outperforming cementitious concretes, polymer concretes, and natural granite. To address the limitations of experimental coverage, a GPR-based simulation model was employed to explore the four-dimensional design space defined by epoxy content and aggregate fractions. Integrated with BO under realistic manufacturing constraints, the framework identifies optimal formulations comprising 22–26 wt% epoxy and 55–70% fine aggregates. These compositions yield predicted compressive strengths of 78–85 MPa and damping ratios approaching 0.022, indicating significant improvement in overall mechanical properties. Bayesian Weibull analysis further quantifies reliability, revealing shape parameters α ≈ 2.4–2.9, which indicate consistent performance with moderate variability. This work presents the first reported application of an integrated GPR-BO-Bayesian Weibull simulation framework to epoxy–granite composites, enabling simultaneous optimization of conflicting objectives and probabilistic reliability assessment of key mechanical properties. 
The approach reduces experimental effort by over 70% and supports the circular economy through valorization of granite waste in high-value manufacturing. Nonetheless, predictive uncertainty remains high in under-sampled regions (e.g., damping with n = 2). Future experimental validation—comprising at least 10–15 data points across varied epoxy ratios and gradations—is essential to corroborate the predicted optimum. Full article
(This article belongs to the Section Artificial Intelligence in Polymer Science)
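A minimal GPR-surrogate Bayesian optimization loop of the kind described (fit a GP, maximize expected improvement, evaluate, repeat) might look like this. The 1-D toy objective, kernel length scale, and candidate grid are illustrative stand-ins for the paper's four-dimensional mixture design space:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expected_improvement(mu, sigma, best, xi=0.01):
    # EI for maximization; xi trades exploration against exploitation.
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best - xi) / sigma
    return (mu - best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

def bayes_opt(f, bounds=(0.0, 1.0), n_init=3, n_iter=8, seed=0):
    """Minimal BO loop: GP surrogate + expected-improvement acquisition."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(*bounds, size=(n_init, 1))
    y = np.array([f(x[0]) for x in X])
    cand = np.linspace(*bounds, 201).reshape(-1, 1)
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-6,
                                      normalize_y=True).fit(X, y)
        mu, sigma = gp.predict(cand, return_std=True)
        x_next = cand[np.argmax(expected_improvement(mu, sigma, y.max()))]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next[0]))
    return X[np.argmax(y), 0], y.max()

# Toy objective standing in for a strength/damping response surface.
x_best, y_best = bayes_opt(lambda x: -(x - 0.6) ** 2)
```

For competing objectives such as compressive strength versus damping, the scalar objective would typically be replaced by a weighted combination or a multi-objective acquisition, as the framework in the abstract implies.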
18 pages, 502 KB  
Article
Construction of an Evaluation System for Big Food Concept Education and Its Behavioral Impact Mechanism Among College Students—An Empirical Study Based on a Survey of Students
by Yong He, Ruirui Tang, Minlun Hu, Fang Chen, Xiaoqian Gao, Dandan Li and Yaowen Liu
Foods 2026, 15(4), 776; https://doi.org/10.3390/foods15040776 (registering DOI) - 21 Feb 2026
Abstract
Education on the Big Food Concept, as a strategic framework for ensuring national food security and promoting high-quality agricultural development, represents a key nexus between ideological and political education and quality-oriented education for college students. Based on survey data from 1268 students across six provinces in China, this study utilized the Delphi method, the analytic hierarchy process (AHP), and structural equation modeling (SEM) to develop a four-dimensional evaluation system encompassing cognitive, affective, value, and behavioral dimensions. It examined the relationship and underlying mechanism through which Big Food Concept education influences student behavior. The results indicate that college students' overall understanding of the Big Food Concept remains at a moderate level, with particularly limited awareness of diversified food supply systems. The weights of the dimensions in the educational evaluation system were as follows: behavioral dimension (0.342) > cognitive dimension (0.287) > value dimension (0.221) > affective dimension (0.150). Big Food Concept education shapes student behavior through the sequential pathway of cognitive enlightenment, affective resonance, and value internalization, with value internalization demonstrating the strongest mediating effect (β = 0.413, p < 0.001). The evaluation system developed in this study is a practical tool for assessing the effectiveness of Big Food Concept education in higher education institutions, while the identified mechanism provides a theoretical basis for implementing targeted educational practices. Full article
(This article belongs to the Section Sensory and Consumer Sciences)
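The AHP step, deriving dimension weights from a pairwise comparison matrix via the principal eigenvector, can be sketched as follows. The comparison values below are invented for illustration and are not the survey's data:

```python
import numpy as np

def ahp_weights(P):
    """Priority weights from an AHP pairwise comparison matrix.

    P[i, j] states how much more important criterion i is than j
    (a positive reciprocal matrix on the Saaty scale). The weights are
    the normalized principal (Perron) eigenvector.
    """
    vals, vecs = np.linalg.eig(P)
    k = np.argmax(vals.real)          # Perron root has the largest real part
    w = np.abs(vecs[:, k].real)
    return w / w.sum()

# Hypothetical 4x4 comparison of behavioral / cognitive / value / affective
# dimensions (illustrative numbers only).
P = np.array([
    [1.0, 2.0, 2.0, 3.0],
    [0.5, 1.0, 2.0, 2.0],
    [0.5, 0.5, 1.0, 2.0],
    [1/3, 0.5, 0.5, 1.0],
])
w = ahp_weights(P)
```

A consistency check (Saaty's consistency ratio from the principal eigenvalue) would normally accompany this step before the weights are accepted.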
20 pages, 1420 KB  
Article
High-Level Synthesis (HLS)-Enabled Field-Programmable Gate Array (FPGA) Algorithms for Latency-Critical Plasma Diagnostics and Neural Trigger Prototyping in Next-Generation Energy Projects
by Radosław Cieszewski, Krzysztof Poźniak, Ryszard Romaniuk and Maciej Linczuk
Energies 2026, 19(4), 1091; https://doi.org/10.3390/en19041091 (registering DOI) - 21 Feb 2026
Abstract
Large-scale advanced energy systems, including fusion devices, high-power plasma sources, and accelerator-driven energy platforms, increasingly depend on real-time, hardware-level data processing for diagnostics, control, and protection. In such installations, ultra-low latency, deterministic throughput, and multi-decade operational lifetimes are not optional design goals but strict system-level requirements. While similar timing constraints exist in high-energy physics infrastructures, energy applications place a stronger emphasis on long-term stability, maintainability, and reproducibility of digital signal processing pipelines. This work investigates whether high-level synthesis (HLS) provides a practical and sustainable design methodology for implementing both classical pattern-based and compact neural network (NN) trigger logic on Field-Programmable Gate Arrays (FPGAs) under realistic energy-system constraints. Using representative commercial toolchains (Intel HLS and hls4ml) as reference workflows, we demonstrate the capabilities of fixed-point, fully pipelined streaming architectures, while also identifying critical shortcomings of pragma-driven HLS approaches in terms of architecture transparency, long-term portability, and systematic multi-objective design-space exploration, all of which are crucial for long-lived energy projects and plasma diagnostic systems. These limitations directly motivate the development of a custom, vendor-agnostic, extensible HLS framework (PyHLS), specifically oriented toward deterministic latency, reproducibility, and physics-grade verification demands of advanced energy infrastructures. Gas Electron Multipliers (GEMs) are modern gaseous detectors increasingly employed in plasma diagnostics, radiation monitoring, and high-power energy experiments, where high rate capability, fine spatial resolution, and radiation tolerance are required. 
Their massively parallel signal structure and continuous data streams make GEMs a representative and demanding benchmark for FPGA-based real-time trigger and preprocessing systems in energy-related environments. The primary objective of this study is to establish a pragmatic technological baseline, demonstrating that contemporary HLS workflows can reliably support both template-based and neural inference-based trigger architectures within strict timing, resource, and power constraints typical for advanced energy installations. Furthermore, we outline a scalable development path toward multi-channel and two-dimensional (pixelated) GEM readout architectures, directly applicable to fusion diagnostics, plasma accelerators, beam–plasma interaction studies, and radiation-hard energy monitoring platforms. Although the proposed methodology remains fully transferable to large-scale physics trigger systems, its principal relevance is directed toward real-time diagnostics and protection layers in next-generation energy systems. Full article
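The fixed-point representation such HLS flows rely on can be illustrated with a simple quantizer. The Q2.6 width below is an arbitrary example for illustration, not a recommendation from the paper:

```python
import numpy as np

def to_fixed(x, int_bits=2, frac_bits=6):
    """Quantize to signed fixed-point (ap_fixed-style: round to nearest,
    saturate) -- the kind of representation fixed-point pipelined NN
    triggers use on FPGAs. Widths here are illustrative (Q2.6, 8 bits).
    """
    scale = 2.0 ** frac_bits
    lo = -(2.0 ** (int_bits - 1))                 # most negative code
    hi = (2.0 ** (int_bits - 1)) - 1.0 / scale    # most positive code
    return np.clip(np.round(x * scale) / scale, lo, hi)

w = np.array([0.1234, -1.5, 0.999, 3.7])   # example weights
wq = to_fixed(w)                           # 8-bit Q2.6 values
err = np.abs(w[:3] - wq[:3]).max()         # in-range quantization error
```

In-range values land within half an LSB (1/128 here), while out-of-range values saturate, which is why choosing integer/fractional widths per layer is a central design-space question in flows like hls4ml.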
25 pages, 101353 KB  
Article
A Metaheuristic Optimization Algorithm for Task Clustering in Collaborative Multi-Cluster Systems
by Meixuan Li, Yongping Hao, Hui Zhang and Jiulong Xu
Sensors 2026, 26(4), 1364; https://doi.org/10.3390/s26041364 - 20 Feb 2026
Abstract
To address the task-grouping problem for air–ground integrated Unmanned Aerial Vehicle (UAV) swarm missions in three-dimensional (3D) environments, this study proposes a data-preprocessing and hybrid initialization clustering method based on 3D spatial features. A dual-modal prototype meta-heuristic optimization model, Dual-Prototype Metaheuristic K-Means (DPM-Kmeans), is constructed accordingly. First, to overcome spatial information loss in high-dimensional task allocation, a 3D spatial task data preprocessing technique and a hybrid initialization strategy based on the golden spiral distribution are designed. This ensures the diversity and environmental adaptability of the initial solutions. Second, a dual-modal prototype optimization framework incorporating row prototypes (local refinement) and column prototypes (global combination) was constructed using meta-heuristics and clustering algorithms. The prototype-driven replacement update mechanism simultaneously performs global and local search, balancing the algorithm’s exploration and exploitation capabilities while expanding the solution space. This effectively addresses premature convergence issues in complex search spaces. Simultaneously, a collaborative multi-constraint, dynamically weighted optimization model was constructed, incorporating task requirements and flight distance constraints to ensure that the grouping scheme approximates the global optimum. Simulation results demonstrate that compared to traditional K-means and mainstream meta-heuristic optimization algorithms, DPM-Kmeans achieves an overall improvement of 2–10% in Sum of Squared Errors (SSE), Silhouette Coefficient (SC), and Davies–Bouldin Index (DB) metrics. It exhibits superior convergence speed and solution quality, proving the method’s excellent scalability and robustness in multi-constraint, large-scale 3D scenarios. Full article
(This article belongs to the Section Sensors and Robotics)
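The golden-spiral initialization can be sketched with the Fibonacci sphere lattice, a common golden-spiral construction for spreading points evenly in 3D. Scaling the lattice into the data's bounding box and the shrink factor are assumptions; the paper's hybrid strategy is not reproduced here:

```python
import numpy as np
from sklearn.cluster import KMeans

def golden_spiral_init(X, k):
    """Spread k initial centroids over a Fibonacci (golden-spiral) sphere
    lattice, then scale them into the 3D bounding box of the task data."""
    i = np.arange(k)
    y = 1.0 - 2.0 * (i + 0.5) / k                 # uniform heights in [-1, 1]
    r = np.sqrt(1.0 - y * y)
    theta = np.pi * (3.0 - np.sqrt(5.0)) * i      # golden-angle increments
    unit = np.stack([r * np.cos(theta), y, r * np.sin(theta)], axis=1)
    lo, hi = X.min(0), X.max(0)
    center, half = (lo + hi) / 2.0, (hi - lo) / 2.0
    return center + 0.5 * half * unit             # shrink to stay inside

rng = np.random.default_rng(0)
# Synthetic 3D task positions clustered near cube corners.
X = rng.normal(size=(300, 3)) + rng.choice([-4.0, 4.0], size=(300, 3))
init = golden_spiral_init(X, 8)
km = KMeans(n_clusters=8, init=init, n_init=1).fit(X)
```

Diverse, well-spread initial centroids of this kind are what the abstract credits for the initial solutions' diversity and environmental adaptability.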
31 pages, 12352 KB  
Review
MXene- and MOF-Based Hydrogels: Emerging Platforms for Electrochemical Biosensing and Health Monitoring
by Kandaswamy Theyagarajan, Sairaman Saikrithika and Young-Joon Kim
Micromachines 2026, 17(2), 267; https://doi.org/10.3390/mi17020267 - 20 Feb 2026
Abstract
Smart healthcare is rapidly emerging as a transformative paradigm, enabling simultaneous health monitoring, therapeutic intervention, and early prediction of disease onset. In this context, electrochemical monitoring systems have attracted growing interest due to their cost-effectiveness, ease of operation, miniaturization and compatibility with wearable platforms. Accordingly, conductive hydrogel-based electrochemical (bio)sensors have gained significant attention for health monitoring owing to their soft mechanical properties, high water content, excellent biocompatibility, and ability to form intimate, conformal interfaces with biological tissues. Their three-dimensional polymeric networks facilitate efficient ion transport and mechanical flexibility, making them particularly suitable for wearable and noninvasive sensing and monitoring applications. However, the intrinsically limited conductivity and catalytic activity of pristine hydrogels often constrain their electrochemical performance. To overcome these limitations, functional nanomaterials such as metal–organic frameworks (MOFs) and MXene (MX) nanosheets have been increasingly integrated into hydrogel matrices to enhance conductivity and electrochemical activity. This review provides a comprehensive and critical comparison of recent advances in MOF- and MX-integrated conductive hydrogels for electrochemical health monitoring. In addition to material design strategies and sensing performance, emerging trends in data-driven sensing aimed at improving signal interpretation and multi-analyte discrimination are systematically discussed. Key challenges related to long-term stability, biocompatibility, scalability, and intelligent system integration are critically assessed, and the future potential of these platforms within closed-loop architectures is highlighted, paving the way for next-generation conductive hydrogel-based electrochemical sensors in smart healthcare applications. Full article
(This article belongs to the Special Issue Bioelectronics and Its Limitless Possibilities)
23 pages, 2389 KB  
Article
Spatiotemporal Evolution Monitoring of Small Water Body Coverage Associated with Land Subsidence Using SAR Data: A Case Study in Geleshan, Chongqing, China
by Tianhao Jiang, Faming Gong, Qiankun Kong and Kui Zhang
Remote Sens. 2026, 18(4), 644; https://doi.org/10.3390/rs18040644 - 19 Feb 2026
Abstract
Monitoring small water body coverage spatiotemporal evolution in karst areas of complex hydrogeology is pivotal for water resource management and disaster assessment. With recent infrastructure expansion, intensive tunnel excavation has occurred in Chongqing’s Geleshan, a typical karst region with fragile aquifers. It has disrupted hydrogeological systems, triggering ground subsidence, groundwater leakage, and subsequent reservoir desiccation, as well as threatening regional water security and ecology. Thus, monitoring reservoir coverage evolution is critical to clarify dynamics and driving mechanisms. Synthetic Aperture Radar (SAR) is ideal for water body mapping, enabling data acquisition independent of illumination and weather. However, traditional SAR-based water extraction methods are hampered by low-scatter noise and poor adaptability to hydrological fluctuations. To address this, a two-stage dual-polarization SAR clustering algorithm (TSDPS-Clus) was developed using 452 time-series Sentinel-1 images (7 February 2017–24 August 2025). Specifically, the Kolmogorov–Smirnov test via pixel-wise time-series statistics screened core water areas, built candidate regions, and mitigated noise. Subsequently, dual-polarization and positional features were fused via singular value decomposition (SVD) to generate a high-discrimination low-dimensional feature set, followed by the Iterative Self-Organizing Data Analysis Techniques Algorithm (ISODATA) clustering for high-precision extraction. Results demonstrate that the algorithm suits reservoir storage-desiccation dynamics; dual-polarization complementarity boosts accuracy and clarifies six reservoirs’ spatiotemporal evolution. Notably, post-2023, tunnel excavation-induced land subsidence increased drying frequency and duration, with a 24-month maximum cumulative desiccation period. Full article
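The two-stage idea, statistical screening of candidate water pixels followed by SVD fusion of dual-polarization features, can be sketched as follows. The KS comparison against a known water seed series, the significance level, and the synthetic backscatter values are illustrative assumptions (a generic clusterer such as K-means would stand in for ISODATA downstream):

```python
import numpy as np
from scipy.stats import ks_2samp

def screen_water_pixels(stack, seed_series, alpha=0.01):
    """Stage 1: keep pixels whose backscatter time series is statistically
    indistinguishable (two-sample KS test) from a known water seed series.
    `stack` is (T, N): T acquisition dates by N pixels."""
    return np.array([ks_2samp(stack[:, j], seed_series).pvalue > alpha
                     for j in range(stack.shape[1])])

def fuse_dual_pol(vv, vh, n_dim=2):
    """Stage 2: fuse VV/VH features via SVD into a low-dimensional,
    high-discrimination feature set ready for clustering."""
    F = np.column_stack([vv, vh, vv - vh])
    F = F - F.mean(0)
    U, s, Vt = np.linalg.svd(F, full_matrices=False)
    return U[:, :n_dim] * s[:n_dim]

rng = np.random.default_rng(0)
water = rng.normal(-18.0, 1.0, size=(50, 30))   # low, stable backscatter (dB)
land = rng.normal(-8.0, 2.0, size=(50, 30))
stack = np.hstack([water, land])
keep = screen_water_pixels(stack, rng.normal(-18.0, 1.0, size=50))
```

On this toy stack the screen retains the water-like pixels and rejects the land-like ones, mirroring the role of the first stage in suppressing noise before clustering.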
21 pages, 2437 KB  
Article
Evaluating SWIR Spectral Data and Random Forest Models for Copper Mineralization Discrimination in the Zhunuo Porphyry Deposit
by Jiale Cao, Lifang Wang, Xiaofeng Liu and Song Wu
Minerals 2026, 16(2), 213; https://doi.org/10.3390/min16020213 - 19 Feb 2026
Abstract
In recent years, with the widespread application of shortwave infrared (SWIR) spectroscopy in mineral identification and hydrothermal alteration studies, an increasing number of studies have attempted to integrate SWIR spectral data with machine learning approaches to fully exploit mineralization-related discriminative information embedded in high-dimensional spectral datasets. In this study, the Zhunuo porphyry copper deposit in Tibet was selected as the research target. SWIR drill core spectral data were systematically acquired, and a random forest (RF) machine learning model was applied to full-band SWIR spectra (1300–2500 nm) to conduct integrated analyses of copper grade regression and mineralization discrimination. A total of 2140 drill core samples were measured, with three replicate measurements per sample, yielding 6420 spectra. After standardized preprocessing and interpolation resampling, a unified spectral feature dataset was constructed for regression and classification analyses. SWIR spectral data are characterized by a large number of bands, strong inter-band correlations, and relatively limited sample sizes; under such conditions, model generalization ability and stability become critical factors in method selection. Based on ensemble learning, the random forest model constructs multiple decision trees and aggregates their predictions through voting or averaging, effectively reducing model variance and mitigating overfitting, and is therefore well suited for high-dimensional, small-sample, and highly correlated geological spectral datasets. In porphyry copper systems, the spectral characteristics of hydrothermal alteration minerals and mineralization intensity commonly exhibit complex nonlinear relationships, which can be effectively captured by random forest models without requiring predefined functional forms. The regression results indicate that accurate quantitative prediction of copper grade based solely on SWIR spectral data remains limited. 
In contrast, when a threshold-based binary classification was introduced using an industrial cutoff grade of 0.2% Cu, the model achieved an overall accuracy of 75%, an F1 score of 0.69, and an area under the ROC curve (AUC) of 0.80, demonstrating strong mineralization discrimination capability and stability. Overall, the integration of SWIR spectroscopy with machine learning methods provides an efficient, reliable, and geologically interpretable technical approach for early-stage exploration and detailed drill core interpretation in porphyry copper deposits. Full article
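The threshold-based binary classification with a random forest can be sketched on synthetic, highly correlated "spectra". The data generator below is invented and only mimics the regime the abstract describes (many correlated bands, limited samples, a 0.2% Cu cutoff):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for SWIR spectra: 600 samples x 120 correlated bands,
# where a shared latent factor carries the mineralization signal.
n, bands = 600, 120
base = rng.normal(size=(n, 1))
X = base + 0.3 * rng.normal(size=(n, bands))     # strong inter-band correlation
grade = 0.15 + 0.1 * base[:, 0] + 0.02 * rng.normal(size=n)
y = (grade >= 0.2).astype(int)                   # 0.2% Cu cutoff -> binary labels

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
acc = rf.score(Xte, yte)
```

Bagging over many trees keeps variance low on this high-dimensional, correlated input, which is the rationale the abstract gives for choosing random forests over regression of the raw grade.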
30 pages, 1973 KB  
Article
Human-Centered AI Perception Prediction in Construction: A Regularized Machine Learning Approach for Industry 5.0
by Annamária Behúnová, Matúš Pohorenec, Tomáš Mandičák and Marcel Behún
Appl. Sci. 2026, 16(4), 2057; https://doi.org/10.3390/app16042057 - 19 Feb 2026
Abstract
Industry 5.0 emphasizes human-centered integration of artificial intelligence in industrial contexts, yet successful adoption depends critically on workforce perception and acceptance. This research develops and validates a machine learning framework for predicting AI-related perceptions and expected impacts in the construction industry under small sample constraints typical of specialized industrial surveys. Specifically, the study aims to develop and empirically validate a predictive AI decision support model that estimates the expected impact of AI adoption in the construction sector based on digital competencies, ICT utilization, AI training and experience, and AI usage at both individual and organizational levels, operationalized through a composite AI Impact Index and two process-oriented outcomes (perceived task automation and perceived cost reduction). Using a dataset of 51 survey responses from Slovak construction professionals collected in 2025, we implement a methodologically rigorous approach specifically designed for limited-data regimes. The framework encompasses ordinal target simplification from five to three classes, dimensionality reduction through theoretically grounded composite indices reducing features from 15 to 7, exclusive deployment of low variance regularized models, and leave-one-out cross-validation for unbiased performance estimation. The optimal model (Lasso regression with recursive feature elimination) predicts cost reduction perception with R2 = 0.501, MAE = 0.551, and RMSE = 0.709, while six classification targets achieve weighted F1 = 0.681, representing statistically optimal performance given sample constraints and perception measurement variability. Comparative evaluation confirms regularized models outperform high variance alternatives: random forest (R2 = 0.412) and gradient boosting (R2 = 0.292) exhibit substantially lower generalization performance, empirically validating the bias-variance trade-off rationale. 
Key methodological contributions include explicit bias-variance optimization preventing overfitting, feature selection via RFE reducing input space to six predictors (personal AI usage, AI impact on budgeting, ICT utilization, AI training, company size, and age), and demonstration that principled statistical approaches achieve meaningful predictions without requiring large-scale datasets or complex architectures. The framework provides a replicable blueprint for perception and impact prediction in data-constrained Industry 5.0 contexts, enabling targeted interventions, including customized training programs, strategic communication prioritization, and resource allocation for change management initiatives aligned with predicted adoption patterns. Full article
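The abstract's exact pipeline (Lasso with recursive feature elimination, evaluated by leave-one-out cross-validation) is not reproduced here; as a minimal sketch of the small-sample evaluation strategy it describes, the following NumPy snippet wraps leave-one-out cross-validation around a closed-form ridge regressor on synthetic data shaped like the study (51 respondents, 7 composite features). All names, coefficients, and numbers are illustrative, not the paper's.

```python
import numpy as np

def ridge_fit(X, y, alpha):
    """Closed-form ridge regression: w = (X^T X + alpha I)^-1 X^T y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

def loocv_mae(X, y, alpha):
    """Leave-one-out CV: hold out each sample once; return mean absolute error."""
    n = len(y)
    errors = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        w = ridge_fit(X[mask], y[mask], alpha)
        errors[i] = abs(X[i] @ w - y[i])
    return errors.mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(51, 7))            # 51 respondents, 7 composite indices
w_true = np.array([1.0, -0.5, 0.8, 0.0, 0.0, 0.3, 0.0])
y = X @ w_true + 0.3 * rng.normal(size=51)

for alpha in (0.1, 1.0, 10.0):
    print(alpha, round(loocv_mae(X, y, alpha), 3))
```

With n = 51, LOOCV fits 51 models per configuration, which is cheap for linear models and gives a nearly unbiased estimate of generalization error, the property the study exploits.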
13 pages, 274 KB  
Article
Penalized Likelihood Estimation of Continuation Ratio Models for Ordinal Response and Its Application in CGSS Data
by Huihui Sun and Yemin Cui
Stats 2026, 9(1), 20; https://doi.org/10.3390/stats9010020 - 19 Feb 2026
Viewed by 126
Abstract
The continuation ratio model is a crucial tool for analyzing ordinal response data. However, its explanatory power diminishes under high-dimensional settings where the number of covariates p is large. To address this, we introduce, for the first time, the smoothly clipped absolute deviation (SCAD) penalty into the forward continuation ratio model framework. We propose a corresponding penalized likelihood estimation method that performs simultaneous variable selection and parameter estimation and provides an efficient algorithm for its implementation. Numerical simulations demonstrate the favorable properties of the SCAD penalty: it precisely identifies significant variables while more aggressively shrinking the coefficients of irrelevant ones to zero, outperforming alternative penalties like Lasso and elastic net in selection accuracy. Finally, we illustrate the practical utility of our method through an empirical application using data from the Chinese General Social Survey (CGSS). Full article
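For readers unfamiliar with the SCAD penalty, a minimal NumPy sketch of Fan and Li's piecewise definition (with the conventional a = 3.7) shows the property the abstract relies on: it matches the Lasso near zero but flattens for large coefficients, so significant variables are not over-shrunk. This is only the penalty function, not the paper's full penalized-likelihood estimator for the continuation ratio model.

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty, applied elementwise to a coefficient vector.

    Linear (Lasso-like) for |b| <= lam, quadratic taper for
    lam < |b| <= a*lam, and constant at lam^2*(a+1)/2 beyond a*lam,
    so large coefficients incur no extra shrinkage.
    """
    b = np.abs(beta)
    small = b <= lam
    mid = (b > lam) & (b <= a * lam)
    large = b > a * lam
    pen = np.empty_like(b, dtype=float)
    pen[small] = lam * b[small]
    pen[mid] = (2 * a * lam * b[mid] - b[mid] ** 2 - lam**2) / (2 * (a - 1))
    pen[large] = lam**2 * (a + 1) / 2
    return pen

lam = 1.0
betas = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
print(scad_penalty(betas, lam))   # flattens at lam^2*(a+1)/2 = 2.35
print(lam * betas)                # the Lasso penalty keeps growing linearly
```

The flat tail is what makes SCAD nearly unbiased for large effects while still setting small coefficients exactly to zero, in contrast to the Lasso's uniform shrinkage.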
21 pages, 1504 KB  
Article
A Data-Driven Reduced-Order Model for Rotary Kiln Temperature Field Prediction Using Autoencoder and TabPFN
by Ya Mao, Yuhang Li, Yanhui Lai and Fangshuo Fan
Appl. Sci. 2026, 16(4), 2029; https://doi.org/10.3390/app16042029 - 18 Feb 2026
Viewed by 95
Abstract
The accurate reconstruction of the internal temperature field in rotary kilns is critical for optimizing the clinker calcination process and ensuring energy efficiency. In this study, a rapid and high-fidelity surrogate modeling framework is proposed, utilizing snapshot ensembles generated by full-order Computational Fluid Dynamics (CFD) simulations to reconstruct the temperature field of the axial center section. The framework incorporates a symmetric Autoencoder (AE) coupled with a TabPFN network as its core components. Capitalizing on the kiln’s strong axial symmetry, this reduction–regression system efficiently maps the high-dimensional nonlinear thermodynamic topology of the central section into a compact low-dimensional latent manifold via AE, while utilizing TabPFN to establish a robust mapping between operating boundary conditions and these latent features. By leveraging the In-Context Learning (ICL) mechanism for prior-data fitting, TabPFN effectively overcomes the data scarcity inherent in high-cost CFD sampling. Predictive results demonstrate that the model achieves a coefficient of determination (R2) of 0.897 for latent feature regression, outperforming traditional algorithms by 6.53%. In terms of field reconstruction on the test set, the model yields an average temperature error of 15.31 K. Notably, 93.83% of the nodal errors are confined within a narrow range of 0–50 K, and the reconstructed distributions exhibit high consistency with the CFD benchmarks. Furthermore, compared to the hours required for full-scale simulations, the inference time is reduced to 0.45 s, representing a speedup of four orders of magnitude. Consequently, the predictive system demonstrates excellent accuracy and efficiency, serving as an effective substitute for traditional models to realize online monitoring and intelligent optimization. Full article
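The reduction–regression idea can be sketched with a linear stand-in: POD (truncated SVD) in place of the autoencoder, and ordinary least squares in place of TabPFN, mapping operating boundary conditions to latent snapshot coefficients. Everything below, including the shapes, variable names, and the toy field generator, is illustrative and not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: 40 CFD "snapshots" of a 500-node temperature field,
# each driven by 3 operating boundary conditions.
n_snap, n_nodes, n_bc, n_latent = 40, 500, 3, 5
bc = rng.uniform(size=(n_snap, n_bc))
modes_true = rng.normal(size=(n_bc, n_nodes))
fields = bc @ modes_true + 0.01 * rng.normal(size=(n_snap, n_nodes))

# 1) Reduction: POD (SVD of mean-centred snapshots), a linear analogue of the AE.
mean_field = fields.mean(axis=0)
U, s, Vt = np.linalg.svd(fields - mean_field, full_matrices=False)
basis = Vt[:n_latent]                      # latent "decoder" modes
latents = (fields - mean_field) @ basis.T  # latent code per snapshot

# 2) Regression: boundary conditions -> latent codes (least squares,
#    standing in for TabPFN's in-context regression).
A = np.hstack([bc, np.ones((n_snap, 1))])  # affine design matrix
W, *_ = np.linalg.lstsq(A, latents, rcond=None)

# 3) Predict a new operating point and reconstruct its full field.
bc_new = np.array([[0.4, 0.6, 0.5, 1.0]])  # three conditions + bias term
field_pred = bc_new @ W @ basis + mean_field
print(field_pred.shape)
```

Once the offline stage (snapshots, SVD, regression fit) is done, each online prediction is a couple of matrix products, which is why such reduced-order models achieve the orders-of-magnitude speedups over full CFD reported in the abstract.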
(This article belongs to the Special Issue Fuel Cell Technologies in Power Generation and Energy Recovery)
18 pages, 4470 KB  
Article
DDES-Informed Development of a Helicity-Based Turbulence Model: Validation on Corner Separation and Aeronautical Flows
by Wei Sun, Haijin Yan, Bangmeng Xue, Feng Feng and Zhouteng Ye
Aerospace 2026, 13(2), 197; https://doi.org/10.3390/aerospace13020197 - 18 Feb 2026
Viewed by 93
Abstract
Accurate prediction of separated flows remains a critical challenge for Reynolds-Averaged Navier–Stokes (RANS) simulations, primarily due to the tendency of standard turbulence models to overpredict separation. To address this limitation, this study develops and validates a helicity-augmented variant of Menter’s Shear Stress Transport (SST) model within a high-fidelity, data-guided framework. First, a scale-resolving database, capturing the physics of corner separation, is established via an improved Delayed Detached Eddy Simulation (DDES) of a linear compressor cascade. Insights from this database directly inform the integration of a normalized helicity parameter into the SST formulation, enabling dynamic modulation of the turbulent eddy viscosity to account for non-equilibrium turbulence and energy backscatter in three-dimensional (3D) vortical flows. The enhanced SST model is subsequently validated against experimental data for two benchmark aerodynamic configurations: ARA M100 wing–fuselage and DLR-F6 aircraft models. Results demonstrate that the proposed correction significantly improves the prediction of separation topology and aerodynamic coefficients, delays the predicted onset of stall, and achieves closer agreement with measurements. These findings confirm the DDES-guided helicity correction as an effective strategy for enhancing the predictive fidelity of RANS models in simulating the complex separated flows encountered in practical aeronautical applications. Full article
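The normalized helicity the correction is built on is a standard quantity: the cosine of the angle between velocity and vorticity, bounded in [-1, 1]. A minimal NumPy sketch follows; the viscosity modulation shown is purely illustrative, since the abstract does not give the paper's actual functional form for coupling helicity into the SST eddy viscosity.

```python
import numpy as np

def normalized_helicity(u, omega, eps=1e-12):
    """Normalized helicity h = (u . omega) / (|u| |omega|), in [-1, 1].

    u, omega: arrays of shape (..., 3) holding velocity and vorticity
    vectors at each grid point; eps guards against division by zero.
    """
    num = np.sum(u * omega, axis=-1)
    den = np.linalg.norm(u, axis=-1) * np.linalg.norm(omega, axis=-1) + eps
    return num / den

def damped_viscosity(nu_t, h, c=0.2):
    """Illustrative placeholder: modulate eddy viscosity by |h|.

    NOT the paper's formulation -- it only shows where a helicity-based
    factor would enter a RANS closure.
    """
    return nu_t * (1.0 - c * np.abs(h))

# Aligned velocity/vorticity (streamwise vortex) vs orthogonal (2D shear):
u = np.array([[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
omega = np.array([[2.0, 0.0, 0.0], [0.0, 3.0, 0.0]])
print(normalized_helicity(u, omega))
```

In corner-separation regions, |h| tends toward 1 along streamwise vortices, which is exactly where standard SST misjudges the turbulence state; a helicity-dependent factor lets the model respond to that 3D vortical structure.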
(This article belongs to the Section Aeronautics)