Search Results (74,354)

Search Parameters:
Keywords = model validation

24 pages, 1131 KB  
Article
Comparative Analysis of the Effectiveness of Three Proposed Network Screening Methods for Safety Improvement Sites on Rural Highways
by Bishal Dhakal and Ahmed Al-Kaisy
Sustainability 2026, 18(4), 2008; https://doi.org/10.3390/su18042008 (registering DOI) - 15 Feb 2026
Abstract
Effective network screening methods play a significant role in highway safety management programs and contribute to sustainable mobility by facilitating the reduction in all crashes, including fatalities and injuries across the transportation system. This study presents a comprehensive analysis comparing the effectiveness of three new network screening techniques for pinpointing safety improvement locations on rural roads. The proposed methods are the Global Risk Scoring (GRS), the Crash Risk Index (CRI), and the Predicted Empirical Bayes (P-EB) methods. The analysis utilized 10 years of roadway geometry, traffic, and crash data from state-owned rural highways in Oregon, with the first five years (2011–2015) used for model development and the subsequent five years (2016–2020) for validation. Comparative tests assessed consistency with historical crash rankings and temporal stability across observation periods. The analysis revealed distinct strengths among the screening methods. The GRS method demonstrated a high level of consistency with historical crash data, while the P-EB method exhibited superior consistency across different time periods, suggesting its value for long-term safety planning. The CRI method demonstrated reasonable consistency in performance, irrespective of the test carried out. While no single method outperforms the others in all scenarios, each has unique advantages and data requirements that can better suit the agency’s needs, given available resources. This research provides actionable insights for improving safety management strategies and advancing sustainable mobility. Full article
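The P-EB method named above builds on the Empirical Bayes adjustment that is standard in network screening. A minimal sketch of that baseline idea, following the conventional Highway Safety Manual weighting form (not the authors' implementation; the counts, prediction, and overdispersion parameter `k` below are made-up illustrative values):

```python
def empirical_bayes_estimate(observed, predicted, k):
    """Blend an observed crash count with an SPF prediction.

    observed:  crashes recorded at the site over the study period
    predicted: crashes predicted by a safety performance function (SPF)
    k:         overdispersion parameter of the SPF's negative binomial model
    """
    w = 1.0 / (1.0 + k * predicted)          # weight on the SPF prediction
    return w * predicted + (1.0 - w) * observed

# Example: a site with 8 observed crashes but an SPF prediction of 3
# is pulled toward the prediction, tempering regression-to-the-mean.
print(empirical_bayes_estimate(observed=8, predicted=3.0, k=0.5))  # → 6.0
```

Screening then ranks sites by this blended estimate rather than by raw crash counts.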
22 pages, 4598 KB  
Article
Deep Learning Based Correction Algorithms for 3D Medical Reconstruction in Computed Tomography and Macroscopic Imaging
by Tomasz Les, Tomasz Markiewicz, Malgorzata Lorent, Miroslaw Dziekiewicz and Krzysztof Siwek
Appl. Sci. 2026, 16(4), 1954; https://doi.org/10.3390/app16041954 (registering DOI) - 15 Feb 2026
Abstract
This paper introduces a hybrid two-stage registration framework for reconstructing three-dimensional (3D) kidney anatomy from macroscopic slices, using CT-derived models as the geometric reference standard. The approach addresses the data-scarcity and high-distortion challenges typical of macroscopic imaging, where fully learning-based registration (e.g., VoxelMorph) often fails to generalize due to limited training diversity and large nonrigid deformations that exceed the capture range of unconstrained convolutional filters. In the proposed pipeline, the Optimal Cross-section Matching (OCM) algorithm first performs constrained global alignment—translation, rotation, and uniform scaling—to establish anatomically consistent slice initialization. Next, a lightweight deep-learning refinement network, inspired by VoxelMorph, predicts residual local deformations between consecutive slices. The core novelty of this architecture lies in its hierarchical decomposition of the registration manifold: the OCM acts as a deterministic geometric anchor that neutralizes high-amplitude variance, thereby constraining the learning task to a low-dimensional residual manifold. This hybrid OCM + DL design integrates explicit geometric priors with the flexible learning capacity of neural networks, ensuring stable optimization and plausible deformation fields even with few training examples. Experiments on an original dataset of 40 kidneys demonstrated that the OCM + DL method achieved the highest registration accuracy across all evaluated metrics: NCC = 0.91, SSIM = 0.81, Dice = 0.90, IoU = 0.81, HD95 = 1.9 mm, and volumetric agreement DCVol = 0.89. Compared to single-stage baselines, this represents an average improvement of approximately 17% over DL-only and 14% over OCM-only, validating the synergistic contribution of the proposed hybrid strategy over standalone iterative or data-driven methods. 
The pipeline maintains physical calibration via Hough-based grid detection and employs Bézier-based contour smoothing for robust meshing and volume estimation. Although validated on kidney data, the proposed framework generalizes to other soft-tissue organs reconstructed from optical or photographic cross-sections. By decoupling interpretable global optimization from data-efficient deep refinement, the method advances the precision, reproducibility, and anatomical realism of multimodal 3D reconstructions for surgical planning, morphological assessment, and medical education. Full article
(This article belongs to the Special Issue Engineering Applications of Hybrid Artificial Intelligence Tools)
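The registration metrics this entry reports (NCC, Dice) are standard similarity measures; a small sketch of how they are computed, with toy arrays standing in for image slices and segmentation masks:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally shaped images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

def dice(mask_a, mask_b):
    """Dice coefficient between two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

a = np.array([[0.0, 1.0], [2.0, 3.0]])
print(ncc(a, a))   # identical images correlate perfectly → 1.0
m = np.array([[True, True], [False, False]])
n = np.array([[True, False], [True, False]])
print(dice(m, n))  # one shared pixel, masks of two pixels each → 0.5
```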
28 pages, 1862 KB  
Article
Digital Transformation and Sustainable Education: A Framework for Integrating Multimodal VR into TVET
by Lucheng Li, Chen Kim Lim, Zi Yan and Ridzwan Che Rus
Sustainability 2026, 18(4), 2007; https://doi.org/10.3390/su18042007 (registering DOI) - 15 Feb 2026
Abstract
In the current era of educational digitalization, Learning Management Systems (LMS) serve as the critical backbone of online learning for content delivery, administration, and communication. This study addresses key limitations in delivering hands-on training for online Technical and Vocational Education and Training (TVET), using Malaysian automotive programs as a case. It develops and provides an initial validation for a sustainable Virtual Reality-LMS (VR-LMS) framework to explore its potential to enhance immersive learning, engagement, and skill assessment. This study firstly triangulated literature, stakeholder interviews, and national data to define the problem and quantitatively evaluated a VR intervention with 100 automotive engineering students using an extended Unified Theory of Acceptance and Use of Technology (UTAUT) model; further designed a validated multimodal VR-LMS conceptual model; and finally developed a sustainable implementation strategy. Results show high training performance (M = 92.55) and examination achievement (M = 89.78). Structural Equation Modeling indicated that Performance Expectancy (β = 0.78), Hedonic Motivation (β = 0.25), and Effort Expectancy (β = 0.45) are significant predictors, with the model explaining 66.3% of the variance in learning outcomes (R2 = 0.663). The findings provide integrated empirical evidence that embedding multimodal VR into an LMS can contribute to creating a more sustainable and effective educational model by fostering engagement, practical competence, and instructional effectiveness, which offers a promising sustainable solution framework for TVET institutions, educators, and policymakers, aligning with Malaysia’s digital transformation and workforce development agendas. Full article
28 pages, 17682 KB  
Article
Causal-Enhanced Spatio-Temporal Markov Graph Convolutional Network for Traffic Flow Prediction
by Jing Hu and Shuhua Mao
Symmetry 2026, 18(2), 366; https://doi.org/10.3390/sym18020366 (registering DOI) - 15 Feb 2026
Abstract
Traffic flow prediction is a pivotal task in intelligent transportation systems. The primary challenge lies in accurately modeling the dynamically evolving and directional spatio-temporal dependencies inherent in road networks. Existing graph neural network-based methods suffer from three main limitations: (1) symmetric adjacency matrices fail to capture the causal propagation of traffic flow from upstream to downstream; (2) the serial combination of graph and temporal convolutions lacks an explicit modeling of joint spatio-temporal state transition probabilities; (3) the inherent low-pass filtering property of temporal convolutional networks tends to smooth high-frequency abrupt signals, thereby weakening responsiveness to sudden events. To address these issues, this paper proposes a causal-enhanced spatio-temporal Markov graph convolutional network (CSHGCN). At the spatial modeling level, we construct an asymmetric causal adjacency matrix by decoupling source and target node embeddings to learn directional traffic flow influences. At the spatio-temporal joint modeling level, we design a spatio-temporal Markov transition module (STMTM) based on spatio-temporal Markov chain theory, which explicitly learns conditional transition patterns through temporal dependency encoders, spatial dependency encoders, and a joint transition network. At the temporal modeling level, we introduce differential feature enhancement and high-frequency residual compensation mechanisms to preserve key abrupt change information through frequency-domain complementarity. Experiments on four datasets—PEMS03, PEMS04, PEMS07, and PEMS08—demonstrate that CSHGCN outperforms existing baselines in terms of MAE, RMSE, and MAPE, with ablation studies validating the effectiveness of each module. Full article
(This article belongs to the Section Computer)
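The asymmetric causal adjacency built from decoupled source and target node embeddings can be sketched as below. This is an illustrative reconstruction with random embeddings (the learned matrices in the paper are trained end to end); the key property shown is that the resulting graph is directed:

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 5, 8

# Decoupled embeddings: E_src models "influencer" roles, E_tgt "influenced".
E_src = rng.normal(size=(n_nodes, dim))
E_tgt = rng.normal(size=(n_nodes, dim))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Row-normalized directed adjacency; A[i, j] = influence of node j on node i.
A = softmax(np.maximum(E_src @ E_tgt.T, 0.0))

print(A.shape)                         # (5, 5)
print(np.allclose(A.sum(axis=1), 1.0)) # rows sum to 1
print(np.allclose(A, A.T))             # False: the graph is asymmetric
```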
24 pages, 2150 KB  
Article
Non-Destructive Freshness Assessment of Atlantic Salmon (Salmo salar) via Hyperspectral Imaging and an SPA-Enhanced Transformer Framework
by Zhongquan Jiang, Yu Li, Mincheng Xie, Hanye Zhang, Haiyan Zhang, Guangxin Yang, Peng Wang, Tao Yuan and Xiaosheng Shen
Foods 2026, 15(4), 725; https://doi.org/10.3390/foods15040725 (registering DOI) - 15 Feb 2026
Abstract
Monitoring the freshness of Salmo salar within cold chain logistics is paramount for ensuring food safety. However, conventional physicochemical and microbiological assays are impeded by inherent limitations, including destructiveness and significant time latency, rendering them inadequate for the real-time, non-invasive inspection demands of modern industry. Here, we present a novel detection framework synergizing hyperspectral imaging (400–1000 nm) with the Transformer deep learning architecture. Through a rigorous comparative analysis of twelve preprocessing protocols and four feature wavelength selection algorithms (Lasso, Genetic Algorithm, Successive Projections Algorithm, and Random Frog), prediction models for Total Volatile Basic Nitrogen (TVB-N) and Total Viable Count (TVC) were established. Furthermore, the capacity of the Transformer to capture long-range spectral dependencies was systematically investigated. Experimental results demonstrate that the model integrating Savitzky-Golay (SG) smoothing with the Transformer yielded optimal performance across the full spectrum, achieving determination coefficients (R2) of 0.9716 and 0.9721 for the Prediction Sets of TVB-N and TVC, respectively. Following the extraction of 30 characteristic wavelengths via the Successive Projections Algorithm (SPA), the streamlined model retained exceptional predictive precision (R2 ≥ 0.95) while enhancing computational efficiency by a factor of approximately six. This study validates the superiority of attention-mechanism-based deep learning algorithms in hyperspectral data analysis. These findings provide a theoretical foundation and technical underpinning for the development of cost-effective, high-efficiency portable multispectral sensors, thereby facilitating the intelligent transformation of the aquatic product supply chain. Full article
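The Savitzky-Golay preprocessing step named in this abstract is available off the shelf as `scipy.signal.savgol_filter`; a sketch on a synthetic noisy spectrum (the window length and polynomial order here are illustrative assumptions, not the paper's settings):

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic "spectrum": a smooth signal plus noise over 400–1000 nm.
wavelengths = np.linspace(400, 1000, 301)
rng = np.random.default_rng(1)
spectrum = np.sin(wavelengths / 80.0) + rng.normal(scale=0.05,
                                                   size=wavelengths.size)

# Savitzky-Golay: fit a local polynomial (order 2) in a sliding window.
smoothed = savgol_filter(spectrum, window_length=15, polyorder=2)

# Smoothing should reduce point-to-point roughness.
roughness = lambda x: np.abs(np.diff(x)).mean()
print(roughness(smoothed) < roughness(spectrum))  # True
```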
23 pages, 623 KB  
Article
Radiomics-Driven Hybrid Deep Learning for MRI-Based Prediction of Glioma Grade and 1p/19q Codeletion
by Abdullah Bin Sawad and Muhammad Binsawad
Tomography 2026, 12(2), 25; https://doi.org/10.3390/tomography12020025 (registering DOI) - 15 Feb 2026
Abstract
Background: Correct preoperative evaluation of glioma grade and molecular profile is a prerequisite for tailored treatment strategies. Specifically, the 1p/19q codeletion status represents a major prognostic and therapeutic marker in low-grade gliomas (LGGs). Nevertheless, its assessment is presently performed through invasive histopathological and genetic studies, thus underlining the need for non-invasive alternative approaches. Methods: We introduce a non-invasive radiomics framework that combines quantitative MRI features with sophisticated ML and DL approaches for glioma grading and 1p/19q codeletion status prediction. High-dimensional radiomic features characterizing tumor geometry, intensity, and texture were derived from preoperative MRI-based tumor delineations. Features were normalized and optimized using correlation-based feature selection. Several traditional ML classifiers were compared and contrasted with DL models, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and a CNN-Long Short-Term Memory (LSTM) hybrid model tailored to exploit both spatial feature hierarchies and feature correlations. Model validation was conducted using five-fold cross-validation and an independent test dataset, with accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) metrics. Results: Among all the models tested, the hybrid CNN-LSTM model performed the best, with an accuracy of 88.1% and an AUC of 0.93, outperforming conventional ML approaches and single-model DL architectures. Explainability analysis showed that the radiomic features of tumor heterogeneity and morphology had the most prominent impact on model performance. Conclusions: These findings indicate that the combination of radiomic features with hybrid DL models is capable of making non-invasive predictions of glioma grade and 1p/19q codeletion status. 
The new computational model has the potential to be used as a supplementary approach in precision neuro-oncology. Full article
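The five-fold cross-validation protocol this abstract describes for model validation looks roughly like the following in scikit-learn; the classifier and synthetic features below are placeholders, not the paper's radiomics pipeline:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Stand-in for radiomic feature vectors with a binary label
# (e.g. 1p/19q codeleted vs intact).
X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           random_state=0)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=cv, scoring="roc_auc")
print(scores.shape)         # (5,) — one AUC per fold
print(scores.mean() > 0.5)  # better than chance on this separable toy data
```

The held-out independent test set mentioned in the abstract would be split off before this loop ever sees the data.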
26 pages, 7718 KB  
Article
Automated Dynamic Adjustment of Runoff Threshold in Ungauged Basins Using Remote Sensing Data
by Laura D. Pachón-Acuña, Jorge López-Rebollo, Junior A. Calvo-Montañez, Susana Del Pozo and Diego González-Aguilera
Remote Sens. 2026, 18(4), 616; https://doi.org/10.3390/rs18040616 (registering DOI) - 15 Feb 2026
Abstract
Accurate runoff estimation in ungauged basins is critical for water resource management but often relies on static parameters like the runoff threshold (P0), derived from the Soil Conservation Service Curve Number method, which fail to capture spatiotemporal soil moisture variability. This study proposes an automated methodology utilising Google Earth Engine to dynamically adjust P0 by integrating daily soil moisture data from SMAP L4, land cover from MODIS, and precipitation from GSMaP. Unlike traditional approaches that use antecedent precipitation as a proxy, this method classifies moisture conditions using historical percentiles to update the threshold daily. The methodology was validated in two sub-basins within the Guadiana River basin (Spain). The results highlight a stark contrast between methods: while static regulatory values remained invariant (36 and 48 mm), the proposed dynamic model revealed significant fluctuations, with P0 values ranging from over 50 mm in dry periods down to less than 14 mm during saturation. Conversely, the proposed dynamic method effectively captures real-time soil saturation, exhibiting adaptability with reductions in P0 of up to 72% immediately following rainfall events. This satellite-based approach provides a scalable, physically consistent alternative for assessing runoff potential in data-scarce regions, significantly enhancing the reliability of hydrological modelling compared to conventional regulatory standards. Full article
(This article belongs to the Special Issue Remote Sensing in Natural Resource and Water Environment II)
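The static runoff threshold P0 that this study replaces comes from the SCS Curve Number method, where P0 is the initial abstraction (conventionally 0.2·S). A sketch of that baseline computation, with illustrative CN values; the dynamic method in the paper effectively varies CN (and hence P0) with observed soil moisture:

```python
def scs_runoff(P, CN, ia_ratio=0.2):
    """SCS Curve Number direct runoff (mm) for a storm depth P (mm).

    S is the potential maximum retention; the runoff threshold P0
    (initial abstraction) is taken as ia_ratio * S, conventionally 0.2·S.
    """
    S = 25400.0 / CN - 254.0        # retention in mm for dimensionless CN
    P0 = ia_ratio * S
    if P <= P0:
        return 0.0                  # storm never exceeds the threshold
    return (P - P0) ** 2 / (P - P0 + S)

# A wetter antecedent condition (higher effective CN) lowers P0 and raises
# runoff for the same storm — the fluctuation the dynamic method captures.
print(scs_runoff(P=50.0, CN=70))   # drier soil, less runoff
print(scs_runoff(P=50.0, CN=85))   # wetter soil, more runoff
```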
24 pages, 16509 KB  
Article
Lithology Identification via MSC-Transformer Network with Time-Frequency Feature Fusion
by Shiyi Xu, Sheng Wang, Jun Bai, Kun Lai, Jie Zhang, Qingfeng Wang and Jie Zhang
Appl. Sci. 2026, 16(4), 1949; https://doi.org/10.3390/app16041949 (registering DOI) - 15 Feb 2026
Abstract
Real-time lithology identification during drilling faces challenges such as indistinct boundaries and difficulties in feature extraction. To address these, this study proposes the MSC-Transformer, a novel model integrating time-frequency features with a deep neural network. A series of drilling experiments were conducted using an intelligent drilling platform, during which triaxial vibration signals were collected from five types of rock specimens: anthracite, granite, bituminous coal, sandstone, and shale. Short-time Fourier Transform (STFT) was applied to generate multi-channel power spectral density (PSD) maps, which were then fused into a three-channel tensor to preserve directional frequency information and used as inputs to the model. The proposed MSC-Transformer combines a multi-scale convolutional (MSC) module with a lightweight Transformer encoder to jointly capture local texture patterns and global dependency features, thereby enabling accurate classification of complex lithologies. Experimental results demonstrate that the model achieves an average accuracy of 98.21 ± 0.49% on the test set, outperforming convolutional neural networks (CNNs), visual geometry group (VGG), residual network (ResNet), and bidirectional long short-term memory (Bi-LSTM) by 5.93 ± 0.90%, 2.54 ± 1.11%, 6.38 ± 2.63%, and 10.56 ± 3.11%, respectively, with statistically significant improvements (p < 0.05). Ablation studies and visualization analyses further validate the effectiveness and interpretability of the model architecture. These findings indicate that lithology recognition based on time-frequency representations of vibration signals is both stable and generalizable, offering technical support for real-time intelligent lithology identification during drilling operations. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
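The STFT-to-PSD preprocessing this entry describes can be sketched with `scipy.signal.spectrogram`; the sampling rate, tone frequency, and single synthetic axis below are assumptions standing in for the triaxial vibration recordings:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1000.0                        # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(2)

# Stand-in for one vibration axis: a drilling-frequency tone plus noise.
axis_signal = np.sin(2 * np.pi * 120 * t) + 0.3 * rng.normal(size=t.size)

# Power spectral density map via STFT; one map per vibration axis, then the
# three axes are stacked as a three-channel tensor for the network input.
f, tau, psd = spectrogram(axis_signal, fs=fs, nperseg=256)
tensor = np.stack([psd, psd, psd])  # placeholder for the x, y, z channels

print(tensor.shape[0])                   # 3 channels
print(f[np.argmax(psd.mean(axis=1))])    # dominant frequency near 120 Hz
```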
17 pages, 1365 KB  
Article
A Transfer-Learning Approach for Detection of Multiclass Synthetic Skin Cancer Images Generated by Deep Generative Models to Prevent Medical Insurance Fraud
by Osama Tariq, Muhammad Asad Arshed, Muhammad Kabir, Khalid Ijaz, Ştefan Cristian Gherghina and Hafiza Bukhtawer Batool
Math. Comput. Appl. 2026, 31(1), 31; https://doi.org/10.3390/mca31010031 (registering DOI) - 15 Feb 2026
Abstract
Artificial Intelligence is advancing rapidly, raising critical concerns about the integrity of digital content, particularly in sensitive domains such as medical imaging. Recent AI techniques, such as Generative Adversarial Networks (GANs) and diffusion models, can generate highly realistic synthetic medical images, posing risks of misdiagnosis, inappropriate treatment, and other adverse outcomes. This paper presents a deep learning-based approach to distinguish between authentic and synthetic images of skin malignancies generated by DCGAN, Wasserstein GAN (WGAN), and Stable Diffusion. A comprehensive dataset was constructed using authentic malignant skin images from an open-source Kaggle repository, alongside artificially generated images. Multiple deep learning models were trained and evaluated, with DenseNet169 achieving the highest performance, reaching 99.67% training accuracy, 97.50% validation accuracy, and 98.50% test accuracy—along with substantial precision, recall, and F1 scores across all classes. These results demonstrate the model’s efficacy in identifying both real and fake medical images. This work contributes to the emerging field of medical image forensics, highlighting its potential integration into clinical and insurance workflows to prevent fraud, strengthen trust, and mitigate risks. Furthermore, it lays the groundwork for future studies involving larger datasets, additional Deepfake generation methods, and real-time clinical applications. Full article
17 pages, 5756 KB  
Article
An Incorporating Pore Water Pressure Constitutive Model for Overconsolidated Clay and Calibration of Transient FE Parameters
by Yu Jiang, Zewei Xu and Run Liu
J. Mar. Sci. Eng. 2026, 14(4), 376; https://doi.org/10.3390/jmse14040376 (registering DOI) - 15 Feb 2026
Abstract
The simulation accuracy of triaxial tests for overconsolidated clay in transient finite element analysis is affected by the soil constitutive model, permeability coefficient, overconsolidation ratio, shear rate, and mesh size. This study introduces the concepts of overconsolidation parameters, potential strength, and hardening parameters from the unified hardening model into the modified Cam-Clay model. By integrating the generation mechanism of pore water pressure, a constitutive model for overconsolidated clay incorporating pore water pressure was developed, and its accuracy was validated through triaxial tests. By invoking the UMAT subroutine, accurate simulation of the undrained triaxial tests of overconsolidated clay was achieved in the static/general analysis in Abaqus. Based on this, model parameters for simulating triaxial tests of overconsolidated clay in transient analysis (Soils) were calibrated. The relationships between shear rate, mesh size, and soil parameters were quantified, providing a reference for similar engineering numerical simulations. Full article
(This article belongs to the Special Issue Advances in Marine Geotechnical Engineering—2nd Edition)
61 pages, 10422 KB  
Article
Hybrid Computational Framework Integrating Ensemble Learning, Molecular Docking, and Dynamics for Predicting Antimalarial Efficacy of Malaria Box Compounds
by Martín Moreno, Sebastián A. Cuesta, José R. Mora, Edgar A. Márquez Brazon, José L. Paz, Guillermin Agüero-Chapin, Noel Pérez-Pérez and César R. García-Jacas
Int. J. Mol. Sci. 2026, 27(4), 1875; https://doi.org/10.3390/ijms27041875 (registering DOI) - 15 Feb 2026
Abstract
The emergence of drug-resistant strains of Plasmodium falciparum continues to challenge global malaria control efforts, underscoring the urgent need for novel therapeutic strategies. In this study, we present an integrative computational framework that combines ensemble machine learning, molecular docking, and molecular dynamics simulations to predict and characterize the antimalarial activity of compounds from the Malaria Box database. Initially, topographical and quantum mechanical descriptors were used to construct regression models for predicting pEC50 values, but due to the limited predictive performance in the global regression, a classification strategy was adopted, categorizing compounds into “active” and “very active” classes. The best ensemble classifier achieved robust performance (Acc10-fold = 0.738, Accext = 0.675), with good sensitivity and specificity over individual models. Subsequent regression modeling within each class yielded high predictive accuracy, with ensemble models reaching Q210-fold values of 0.810 and 0.793 for the very active and active classes, respectively. To explore potential mechanisms of action, molecular docking was performed against P. falciparum Cytochrome B, revealing strong binding affinities for most compounds, particularly those forming π–π stacking and hydrogen bonds with Glu272. Molecular dynamics simulations over 200 ns confirmed the stability of several ligand–protein complexes, including unexpected behavior from compound M31, which demonstrated stable binding despite poor docking scores, suggesting a possible competitive inhibition mechanism. Binding free energy calculations further validated these findings, highlighting several promising candidates for future experimental evaluation. This integrative approach offers a powerful platform for accelerating antimalarial drug discovery by combining predictive modeling with mechanistic insights. Full article
(This article belongs to the Section Molecular Informatics)
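The ensemble classification stage this abstract evaluates can be illustrated with a soft-voting ensemble in scikit-learn; the base learners and synthetic descriptors below are placeholders, not the paper's model selection:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in for descriptor vectors labelled "active" vs "very active".
X, y = make_classification(n_samples=300, n_features=15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Soft voting: average predicted class probabilities of diverse base models,
# trading off their individual sensitivity/specificity profiles.
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("dt", DecisionTreeClassifier(random_state=0))],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
print(ensemble.score(X_te, y_te) > 0.5)  # True: better than chance
```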
24 pages, 1131 KB  
Article
A Dynamic Model for Adjusting Online Ratings Based on Consumer Distrust Perception
by José Ignacio Peláez, Gustavo F. Vaccaro, Félix Infante León and David Santo
Appl. Sci. 2026, 16(4), 1948; https://doi.org/10.3390/app16041948 (registering DOI) - 15 Feb 2026
Abstract
Online reputation systems display aggregated ratings derived from numerical scores and textual reviews of real consumer experiences. These ratings serve as operational estimates of a product or service’s value and are used by consumers and organizations as a direct reference for decision-making. However, when suspicious review patterns emerge, such as repetition, extreme ratings, temporal concentration, or low diversity, the perceived value is systematically altered, and the aggregated score no longer reflects the practical evaluation used by users. This perceptual dimension of reputational value has not been modeled in conventional reputation indices. This paper proposes a soft-computing-based reputation adjustment model that quantifies this perceptual change. The model does not replace or reorder the original reputation index (ORI); instead, it introduces a continuous correction layer operating on the displayed rating, modeling the mapping between the aggregated score and the value internalized by users through entropy-weighted indicators of informational disorder. Experimental validation was conducted on 60 participants’ product evaluations across eight products. Results show that the conventional rating exhibits a systematic upward bias relative to perceived trust (mean absolute error = 1.27), whereas the adjusted index significantly reduces this bias (mean absolute error = 0.12; paired t-test, p < 0.001). The proposed model corrects perceptual overestimation while preserving the original reputation signal, improving alignment between displayed ratings and effective user trust. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
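The entropy-weighted correction layer described in the abstract can be sketched in a few lines. This is a minimal illustration only: the 1–5 star scale, the linear shrink toward the scale floor, and all function names are assumptions of this sketch, not the paper's published model.

```python
import math
from collections import Counter

def normalized_entropy(ratings, levels=5):
    """Shannon entropy of the rating distribution, normalized to [0, 1].
    Low entropy (repetitive or extreme scores) is one indicator of the
    informational disorder that erodes perceived trust."""
    n = len(ratings)
    h = -sum((c / n) * math.log2(c / n) for c in Counter(ratings).values())
    return h / math.log2(levels)

def adjusted_index(ori, ratings, floor=1.0):
    """Correction layer over the displayed rating: the original reputation
    index (ORI) is kept intact, and only the displayed value is shrunk
    toward the scale floor as disorder rises. Linear form is illustrative."""
    w = normalized_entropy(ratings)
    return floor + w * (ori - floor)

# A burst of identical 5-star reviews carries zero entropy and is
# discounted heavily; a diverse, organic mix stays near the ORI.
suspicious = adjusted_index(4.8, [5] * 40)
organic = adjusted_index(4.2, [5, 4, 4, 3, 5, 2, 4, 3, 5, 4])
```

The correction is continuous, so small amounts of repetition produce small discounts rather than a hard penalty.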
26 pages, 11740 KB  
Article
Towards Cost-Optimal Zero-Defect Manufacturing in Injection Molding: An Explainable and Transferable Machine Learning Framework
by Lucas Greif, Jonas Ortner, Peer Kummert, Andreas Kimmig, Simon Kreuzwieser, Jakob Bönsch and Jivka Ovtcharova
Sustainability 2026, 18(4), 2001; https://doi.org/10.3390/su18042001 - 15 Feb 2026
Abstract
In the era of Industry 4.0, Zero-Defect Manufacturing is critical for injection molding but faces three major hurdles: severe class imbalance, the “black-box” nature of AI models, and the lack of scalability across machines. This study presents a comprehensive framework addressing these challenges. Using industrial datasets, we evaluated state-of-the-art supervised algorithms. Results show that CatBoost outperforms other architectures. Crucially, we demonstrate that maximizing accuracy is insufficient; instead, we introduce a cost-sensitive threshold optimization that minimizes economic risk, identifying an optimal classification threshold significantly lower than the standard. To enhance trust, SHAP analysis reveals that motor power and specific nozzle temperatures are the primary defect drivers. Finally, we validate a transfer learning approach using LightGBM, proving that models can be adapted to new datasets with minimal retraining. The implementation of cost-sensitive thresholding reduces total failure costs by over 75% compared to standard classification, while the transfer learning approach cuts the data requirements for new machine adaptation by more than half, providing a high-impact, scalable solution for sustainable smart manufacturing.
(This article belongs to the Special Issue Smart Technologies for Sustainable Production)
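Cost-sensitive threshold optimization of the kind the abstract describes can be sketched as a simple scan over candidate thresholds. The toy data, cost values, and function name below are assumptions of this illustration, not the paper's actual datasets or cost figures.

```python
import numpy as np

def optimal_threshold(y_true, y_prob, cost_fn, cost_fp):
    """Scan candidate thresholds and pick the one that minimizes total
    failure cost: each missed defect (false negative) costs cost_fn, each
    false alarm (false positive) costs cost_fp. When cost_fn >> cost_fp,
    the optimum falls well below the standard 0.5 cut-off."""
    thresholds = np.linspace(0.01, 0.99, 99)
    costs = []
    for t in thresholds:
        pred = y_prob >= t
        fn = np.sum((y_true == 1) & ~pred)   # shipped defects
        fp = np.sum((y_true == 0) & pred)    # needlessly scrapped parts
        costs.append(fn * cost_fn + fp * cost_fp)
    i = int(np.argmin(costs))
    return float(thresholds[i]), float(costs[i])

# Toy hold-out set; in practice y_prob would come from the trained
# CatBoost (or LightGBM) defect classifier.
y_true = np.array([0, 0, 0, 0, 1, 1])
y_prob = np.array([0.10, 0.20, 0.30, 0.40, 0.35, 0.90])
best, cost = optimal_threshold(y_true, y_prob, cost_fn=100.0, cost_fp=1.0)
```

Because a missed defect here costs 100 times a false alarm, the minimum-cost threshold lands well below 0.5, flagging the borderline defective part that a standard classifier would pass.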
33 pages, 7637 KB  
Article
Revisiting Thermal Performance of Shallow Ground-Heat Exchangers Based on Response Factor Methods and Dimension Reduction Algorithms
by Wentan Wang, Haoran Cheng, Jiangtao Wen, Xi Wang, Kui Yin, Xin Wang, Weiwei Liu and Yongqiang Luo
Processes 2026, 14(4), 672; https://doi.org/10.3390/pr14040672 - 15 Feb 2026
Abstract
Geothermal energy assumes an increasingly crucial role in advancing carbon neutrality. However, heat transfer calculations for shallow ground-heat exchangers (GHEs) face challenges, including large computational loads for pipe arrays and insufficient long-term operational analysis. This study proposes two key innovations: first, the Response Factor Method (RFM), which accelerates long-term heat-transfer calculations by constructing a coefficient matrix library; second, a dimension-reduction algorithm for large-scale pipe arrays (LADR), balancing simulation speed and accuracy. The simulation model is developed and validated experimentally: the simulated outlet temperature shows a 0.2% average relative error compared with measured values, and simulation time is reduced 20-fold compared with the original method. Moreover, LADR reduces the calculation load to only two or three boreholes, with negligible errors that do not affect the numerical results. The study found that heat extraction increases with borehole depth, though with diminishing returns. Increasing pipe diameter and spacing enhances heat extraction, while overloading reduces reliability. Intermittent operation significantly boosts the load-bearing capacity of individual pipes. The thermal effect radius during the transitional period is larger than that during the heating/cooling periods. We observed and explained ground heat accumulation in a thermally balanced system for the first time. Additionally, thermal performance differs across borehole locations within the array, along with a load transfer effect. This research provides valuable insights for optimizing shallow ground-source heat pump (GSHP) systems.
(This article belongs to the Section Energy Systems)
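A response-factor calculation of this kind amounts to temporal superposition: each load increment contributes its precomputed step response at the elapsed lag. The sketch below uses made-up factor values and a hypothetical function name; it illustrates the role a coefficient matrix library plays, not the paper's actual library.

```python
import numpy as np

def wall_temperature(loads, response_factors, t_ground):
    """Borehole-wall temperature after each time step, by superposing load
    increments (kW) against precomputed unit-step response factors (K/kW).
    Precomputing the factors is what turns a long-term simulation into a
    cheap dot product per step."""
    dq = np.diff(loads, prepend=0.0)  # load increment at each step
    temps = np.empty(len(loads))
    for i in range(len(loads)):
        # Each past increment contributes its response at the elapsed lag.
        temps[i] = t_ground + np.dot(dq[: i + 1], response_factors[i::-1])
    return temps

# Constant 10 kW load against illustrative response factors and a
# 12 degC undisturbed ground temperature:
temps = wall_temperature(np.array([10.0, 10.0, 10.0]),
                         np.array([0.1, 0.2, 0.3]), t_ground=12.0)
```

With a constant load only the first increment is nonzero, so the wall temperature simply traces the step-response curve scaled by the load.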
22 pages, 1268 KB  
Article
Vector-Guided Post-Earthquake Damaged Road Extraction Using Diffusion-Augmented Remote Sensing Imagery
by Chenyao Qu, Jinxiang Jiang, Zhimin Wu, Talha Hassan, Wei Wang, Zelang Miao, Hong Tang, Kun Liu and Lixin Wu
Remote Sens. 2026, 18(4), 613; https://doi.org/10.3390/rs18040613 - 15 Feb 2026
Abstract
Destructive earthquakes frequently sever transportation lifelines, significantly impeding the progress of emergency rescue and post-disaster reconstruction efforts. The automated identification of road damage utilizing high-resolution remote sensing imagery is severely constrained by the scarcity of post-disaster labeled samples and the morphological complexity of road networks. Consequently, model segmentation results frequently suffer from discontinuities in topological connectivity and confusion between background features and damaged roads. To address these challenges, this study proposes a road damage detection framework that integrates generative artificial intelligence with vector prior knowledge. A data simulation pipeline utilizing a stable diffusion model was constructed, employing topologically constrained masking to generate high-fidelity synthetic damage samples based on the DeepGlobe dataset, thereby mitigating the data deficit. The proposed Vector-Guided Damaged Road Segmentation Network (VRD-U2Net) employs wavelet convolutions (WTConv) to decouple high-frequency noise from low-frequency structural components and utilizes a Multi-Scale Residual Attention (MSRA) module to align visual features with vector priors. Furthermore, a vector-prior-driven dynamic upsampling mechanism is introduced to enforce geometric constraints on model predictions. Experimental results demonstrate that the method achieves an mIoU of 0.884 on the synthetic dataset. In validation using real-world imagery from the 2023 Turkey earthquake, the model attained an F1-score of 65.3% and recall of 72.3% without fine-tuning, exhibiting robust generalization capabilities to support manual damage assessment in data-scarce emergency scenarios.
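For reference, the reported mIoU metric is the per-class intersection-over-union averaged across classes. A minimal sketch with toy masks and a hypothetical function name:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes, the standard
    segmentation metric reported above (mIoU = 0.884 on synthetic data)."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (target == c))
        union = np.sum((pred == c) | (target == c))
        if union > 0:  # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

# Tiny 2x2 example: background = 0, road = 1.
pred = np.array([[0, 0], [1, 1]])
target = np.array([[0, 1], [1, 1]])
score = mean_iou(pred, target, num_classes=2)
```

Averaging per class keeps the dominant background from masking poor performance on the thin, sparse road class.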