Search Results (11,187)

Search Parameters:
Keywords = learning observer

23 pages, 4041 KB  
Article
Detection of Phosphorus Deficiency Using Hyperspectral Imaging for Early Characterization of Asymptomatic Growth and Photosynthetic Symptoms in Maize
by Sutee Kiddee, Chalongrat Daengngam, Surachet Wongarrayapanich, Jing Yi Lau, Acga Cheng and Lompong Klinnawee
Agronomy 2026, 16(8), 772; https://doi.org/10.3390/agronomy16080772 (registering DOI) - 8 Apr 2026
Abstract
Phosphorus (P) deficiency severely limits maize growth and yield, yet early detection remains challenging, as visible symptoms appear only after prolonged starvation. This study evaluated the capability of hyperspectral imaging (HSI) combined with machine learning to detect P deficiency in maize seedlings at both symptomatic and pre-symptomatic stages. Two greenhouse experiments were conducted: a long-term pot system under high and low P conditions and a short-term hydroponic experiment with three P concentrations of 500, 100, and 0 μmol/L phosphate (Pi). After long-term P deficiency, significant reductions in shoot biomass and Pi content were observed, while root biomass increased and nutrient profiles were altered. Hyperspectral signatures revealed distinct wavelength-specific differences across visible, red-edge, and near-infrared (NIR) regions, with P-deficient leaves showing lower reflectance in green and NIR regions but higher reflectance in the red band. A multilayer perceptron machine learning model achieved 99.65% accuracy in discriminating between P treatments. In the short-term experiment, P deficiency significantly reduced tissue Pi content within one week without affecting pigment composition or photosynthetic parameters. Despite the absence of visible symptoms, hyperspectral measurements detected subtle spectral changes, particularly in older leaves, enabling classification accuracies of 80.71–84.56% in the first week and 85.88–90.98% in the second week of P treatment. Conventional vegetation indices showed weak correlations with Pi content and failed to detect early P deficiency. These findings demonstrate that HSI combined with machine learning can effectively detect P deficiency before visible symptoms emerge, offering a non-destructive, rapid diagnostic tool for precision nutrient management in maize production systems. Full article
(This article belongs to the Special Issue Nutrient Enrichment and Crop Quality in Sustainable Agriculture)
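
The abstract above reports a multilayer perceptron discriminating P treatments from hyperspectral reflectance. A minimal sketch of that kind of pipeline on synthetic spectra (the band positions, shift magnitudes, and network size are illustrative assumptions, not the paper's settings):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_samples, n_bands = 400, 200
labels = rng.integers(0, 2, size=n_samples)        # 1 = P-deficient

# Deficient leaves: lower green and NIR reflectance, higher red, per the abstract.
shift = np.zeros(n_bands)
shift[40:70] -= 0.05    # "green" bands
shift[80:110] += 0.05   # "red" bands
shift[130:] -= 0.05     # "NIR" bands
X = rng.normal(0.4, 0.05, size=(n_samples, n_bands)) + labels[:, None] * shift

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)                        # held-out accuracy
```
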

27 pages, 5739 KB  
Article
Baseline-Conditioned Spatial Heterogeneity in Ensemble-Learning Correction for Global Hourly Sea-Level Reconstruction
by Yu Hao, Yixuan Tang, Wen Du, Yang Li and Min Xu
J. Mar. Sci. Eng. 2026, 14(8), 697; https://doi.org/10.3390/jmse14080697 (registering DOI) - 8 Apr 2026
Abstract
This study examines how assessments of coastal extreme sea levels depend on the separability and reconstructability of the astronomical tide in hourly sea-level records. Using a global tide-gauge network, it proposes an ensemble-learning correction framework that integrates a physical-baseline threshold with multi-criteria consistency testing to determine whether machine-learning enhancement is genuinely effective across stations and time windows. The analysis uses hourly records from 528 UHSLC tide gauges, with 31-day short sequences used to reconstruct 180-day sea-level variability. Taking the physical tidal model as the baseline, residuals are corrected using Extremely Randomized Trees, Random Forest, and Gradient Boosting. To avoid false improvement driven solely by error reduction, a hierarchical decision framework is established. Baseline model quality is first screened using NSE and the coefficient of determination, after which mathematical artefacts are identified through diagnostics of peak suppression and variance shrinkage. A five-level classification is then derived from the convergent evidence of twelve performance metrics and four statistical significance tests. The results show a consistent global pattern across all three algorithms. Approximately 57% of stations meet the criterion for genuine improvement, whereas about 42% are associated with an unreliable physical baseline, indicating that the dominant source of failure arises not from the ensemble-learning algorithms themselves, but from spatially varying limitations in the underlying physical baseline. Spatially, the credibility of machine-learning correction is strongly conditioned by baseline quality: stations with effective correction are more continuous along the eastern North Atlantic and European coasts, whereas stations with ineffective correction are more concentrated in the Gulf of Mexico, the Caribbean, and the marginal seas and archipelagic regions of the western Pacific. These results indicate that the observed spatial heterogeneity primarily reflects geographically varying physical and dynamical conditions that control baseline reliability and residual learnability, rather than a standalone difference in the intrinsic capability of ensemble learning itself. Full article
(This article belongs to the Special Issue AI-Enhanced Dynamics and Reliability Analysis of Marine Structures)
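
The baseline-screening step above uses the Nash–Sutcliffe efficiency (NSE). A minimal numpy sketch of the metric on an idealized tidal signal (the M2 period and constant bias are illustrative, not the study's data):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; <= 0 means the
    simulation is no better than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

t = np.arange(0, 48, 1.0)                      # 48 hourly samples
obs = np.sin(2 * np.pi * t / 12.42)            # idealized M2 tide
biased = obs + 0.05                            # baseline with a small offset
flat = np.full_like(obs, obs.mean())           # "predict the mean" baseline
good_score, flat_score = nse(obs, biased), nse(obs, flat)
```

A tracking baseline scores near 1, while the flat mean prediction scores exactly 0, which is why NSE works as a screen for unreliable physical baselines.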

22 pages, 5849 KB  
Article
Multi-Scale Fourier Temporal Network for Multi-Source Precipitation Nowcasting
by Jing Huang, Shanmin Yang, Xiaojie Li and Xi Wu
Sensors 2026, 26(8), 2303; https://doi.org/10.3390/s26082303 - 8 Apr 2026
Abstract
Accurate precipitation nowcasting plays an important role in disaster prevention and hydrometeorological applications, yet it remains highly challenging due to the complex spatiotemporal variability and multi-scale structural characteristics of precipitation systems. Existing deep learning methods are largely data-driven and often struggle to effectively exploit multi-source observations or learn physically meaningful representations. To address these limitations, this study proposes a Multi-Scale Frequency–Temporal Network (MS-FTNet) for precipitation nowcasting. The framework leverages Fourier transform-based frequency-domain modeling to achieve an interpretable multi-scale decomposition of precipitation dynamics. Specifically, low-frequency components capture large-scale stratiform patterns and their temporal evolution, while high-frequency components represent localized convective structures and abrupt variations. Building on this, a Global Feature Collaboration (GFC) module integrates global frequency-domain representations with multi-scale convolutional features, and an Adaptive Temporal Fusion (ATF) module enhances temporal dependency modeling. Experiments on the SEVIR dataset demonstrate that MS-FTNet consistently outperforms representative baseline models in terms of MSE, CSI, and LPIPS, particularly for heavy precipitation events and longer forecast lead times. Full article
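
The frequency-domain decomposition described above can be sketched with a plain FFT mask (the cutoff and the random field are illustrative assumptions; MS-FTNet's actual modules are learned):

```python
import numpy as np

def frequency_split(field, cutoff=0.1):
    """Split a 2-D field into low- and high-frequency components.
    Low frequencies stand in for large-scale stratiform structure,
    high frequencies for localized convective cells."""
    F = np.fft.fft2(field)
    fy = np.fft.fftfreq(field.shape[0])[:, None]
    fx = np.fft.fftfreq(field.shape[1])[None, :]
    low_mask = np.hypot(fy, fx) <= cutoff
    low = np.fft.ifft2(F * low_mask).real
    high = np.fft.ifft2(F * ~low_mask).real
    return low, high

rng = np.random.default_rng(0)
field = rng.random((64, 64))                   # stand-in precipitation field
low, high = frequency_split(field)
# The two components sum back to the original field (disjoint masks).
```
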

21 pages, 4572 KB  
Article
Development of a Control System for a Hydraulic Injection Molding Machine Using an AFC Controller and Utilization of Learning Parameters
by Takahiro Shinpuku, Takumi Kobayashi, Shota Yabui, Kento Fujita, Yusuke Uematsu, Shota Suzuki and Yusuke Uchiyama
Polymers 2026, 18(8), 911; https://doi.org/10.3390/polym18080911 (registering DOI) - 8 Apr 2026
Abstract
Maintaining stable molding quality in hydraulic injection molding machines is difficult because the internal state of molten resin cannot be directly observed and varies with material properties and operating conditions. This difficulty is intensified by variations in hydraulic characteristics caused by oil temperature changes. This study proposes an adaptive feedforward control (AFC) framework that improves injection velocity tracking while utilizing AFC learning parameters as indicators of resin state. AFC is implemented as a multi-frequency feedforward controller whose parameters are updated through repetitive injection cycles. To overcome the limited learning duration within a single injection shot, a shot-to-shot compensation mechanism accumulates and transfers learning results across consecutive shots. Experiments are conducted on a hydraulic injection molding machine using polypropylene materials with different viscosities. The results show that the converged AFC learning parameters vary systematically with material changes and correspond to differences in molded product appearance. Furthermore, by adjusting the cylinder temperature of another material, the AFC parameters converge to values close to those of a reference material, resulting in similar molded products. These findings demonstrate that AFC learning parameters reflect variations in resin state and can serve as practical state indicators for aligning molding conditions. Full article
(This article belongs to the Special Issue Advances in Polymer Processing Technologies: Injection Molding)
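
The shot-to-shot learning described above can be sketched as a simple iterative-learning update on a toy first-order plant (the plant coefficients, learning gain, and velocity profile are assumptions, not the machine's dynamics):

```python
import numpy as np

T = 50
ref = np.clip(np.linspace(0.0, 2.0, T), 0.0, 1.0)   # ramp-and-hold velocity profile

def run_shot(u, a=0.8, b=0.5):
    """One injection shot of a toy first-order 'hydraulic' plant."""
    y = np.zeros(T)
    for t in range(T - 1):
        y[t + 1] = a * y[t] + b * u[t]
    return y

u = np.zeros(T)                                      # learned feedforward input
errors = []
for shot in range(40):                               # learning across shots
    e = ref - run_shot(u)
    errors.append(np.abs(e).mean())
    u[:-1] += 0.4 * e[1:]                            # shot-to-shot update
```

The tracking error shrinks over consecutive shots, which is the mechanism the abstract exploits when it accumulates and transfers learning results between shots.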

32 pages, 716 KB  
Article
Adaptive Sensitivity-Aware Differential Privacy Accounting for Federated Smart-Meter Theft Detection
by Diego Labate, Dipanwita Thakur and Giancarlo Fortino
Big Data Cogn. Comput. 2026, 10(4), 113; https://doi.org/10.3390/bdcc10040113 - 8 Apr 2026
Abstract
Smart-meter theft detection requires learning from fine-grained electricity consumption data, whose centralized processing poses significant privacy risks. Federated learning (FL) mitigates these risks by decentralizing training, but providing rigorous user-level differential privacy (DP) under non-IID data and heterogeneous client behavior remains challenging. Existing DP-FL approaches rely on fixed global clipping bounds for client updates, which substantially overestimate sensitivity when privacy loss is composed using Rényi Differential Privacy (RDP), zero-Concentrated DP (zCDP), or Moments Accountant (MA) frameworks, leading to excessive noise and degraded utility. This work proposes an adaptive clipping-based RDP accountant that incorporates empirical, round-wise update magnitudes into privacy accounting by rescaling each round’s RDP contribution according to the observed clipping ratio. The method is optimizer-agnostic and is evaluated with FedAvg, FedProx, and SCAFFOLD on the SGCC smart-meter theft dataset under IID and Dirichlet non-IID partitions. Experimental results show consistently tighter privacy bounds and improved model utility compared to classical DP accountants, demonstrating the effectiveness of sensitivity-aware privacy accounting for practical differentially private FL. Full article
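
The sensitivity-aware idea above can be sketched for one round of the Gaussian mechanism. The RDP term α·C²/(2σ²) is the standard one for sensitivity C; rescaling it by the observed squared clipping ratio is a simplified reading of the abstract, not the paper's exact accountant:

```python
import numpy as np

rng = np.random.default_rng(0)

def round_rdp(updates, C=1.0, sigma=1.0, alpha=8.0):
    norms = np.linalg.norm(updates, axis=1)
    clipped = updates * np.minimum(1.0, C / norms)[:, None]   # clip to bound C
    observed = np.linalg.norm(clipped, axis=1).max()          # empirical sensitivity
    fixed = alpha * C**2 / (2 * sigma**2)      # classical accounting at order alpha
    adaptive = fixed * (observed / C) ** 2     # rescaled by squared clipping ratio
    return fixed, adaptive

updates = rng.normal(scale=0.05, size=(10, 50))    # client updates well below C
fixed_eps, adaptive_eps = round_rdp(updates)
```

When client updates sit well below the fixed clipping bound, the adaptive charge is strictly smaller, illustrating why fixed-bound accounting overestimates sensitivity.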
26 pages, 17314 KB  
Article
An AESRGAN Remote Sensing Super-Resolution Model for Accurate Water Extraction
by Hongjie Liu, Wenlong Song, Juan Lv, Yizhu Lu, Long Chen, Yutong Zhao, Shaobo Linghu, Yifan Duan, Pengyu Chen, Tianshi Feng and Rongjie Gui
Remote Sens. 2026, 18(8), 1108; https://doi.org/10.3390/rs18081108 - 8 Apr 2026
Abstract
Accurate monitoring of water spatiotemporal dynamics is critical for hydrological process analysis and climate impact assessment. While remote sensing enables effective water monitoring, public satellite imagery is limited by mixed-pixel effects that hinder small river detection, and high-resolution commercial data suffers from low temporal frequency and restricted coverage. To address these limitations, this study proposes a deep learning-based super-resolution (SR) framework for multispectral remote sensing imagery. This paper constructs a matched dataset for GF2 and Sentinel-2 imagery and develops an Attention Enhanced Super Resolution Generative Adversarial Network (AESRGAN). By integrating attention mechanisms and a spectral-structural loss design, the network is optimized to adapt to the characteristics of multispectral remote sensing imagery. Experimental results demonstrate that AESRGAN achieves strong reconstruction performance, with a Peak Signal-to-Noise Ratio (PSNR) of 33.83 dB and a Structural Similarity Index Measure (SSIM) of 0.882. Water extraction based on the reconstructed imagery using the U-Net++ model achieved an overall accuracy of 0.97 and a Kappa coefficient of 0.92. In addition, the reconstructed imagery improved the estimation accuracy of river length, width, and area by 0.34%, 3.28%, and 8.51%, respectively. The proposed framework provides an effective solution for multi-source remote sensing data fusion and high-precision surface water monitoring, offering new potential for long-term hydrological observation using medium-resolution satellite imagery. Full article
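
The PSNR figure reported above is computed as 10·log10(MAX²/MSE); a minimal numpy version on a synthetic image pair (the image and noise level are invented for illustration):

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return 10.0 * np.log10(max_val**2 / mse)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
degraded = np.clip(ref + rng.normal(scale=0.01, size=ref.shape), 0.0, 1.0)
score = psnr(ref, degraded)        # ~40 dB for ~1% additive noise
```
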

33 pages, 736 KB  
Article
Analysis of Chip Electronic Components’ Typical Yield in Taping Process Based on Virtual Metrology
by Shiqi Zhang, Lizhen Chen, Jiangcheng Fu, Chenghu Yang and Guangli Chen
Sensors 2026, 26(8), 2292; https://doi.org/10.3390/s26082292 - 8 Apr 2026
Abstract
This study addresses virtual metrology for the taping process of chip electronic components, in which partial observability, unmeasured disturbances, and severe label imbalance make direct batch-wise yield prediction unstable. Rather than proposing a new standalone learning algorithm, we develop a data-centric VM framework that reformulates the task as the prediction of operating-condition-level typical yield. First, physically relevant features are retained based on process knowledge and analyzed using Pearson correlation, Spearman correlation, and mutual information. We then perform multidimensional equal-frequency binning to partition the observable feature space into locally homogeneous operating condition groups, and define the within-bin median yield as the typical yield, thereby constructing an operating condition dictionary. Based on this dictionary-based representation, low-yield-oriented sample weighting is combined with nested cross-validation and Bayesian optimization for model comparison and hyperparameter tuning. Using desensitized production data from an electronic component taping process, the results under this representation show more stable prediction than direct modeling on unbinned batch samples while also improving tail-oriented fitting relative to unweighted baselines. These findings suggest that, for partially observable manufacturing data, operating condition stratification provides a practical basis for stabilizing VM prediction, while low-yield-oriented sample weighting further improves sensitivity to the low-yield tail, supporting picture yield early warning and process-level decision making. Full article
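
The operating-condition dictionary described above (multidimensional equal-frequency binning, within-bin median as the typical yield) can be sketched with pandas. The column names and the yield model are invented for illustration:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "feed_speed": rng.normal(10, 2, n),      # hypothetical observable features
    "tape_tension": rng.normal(5, 1, n),
})
df["yield"] = (0.99 - 0.002 * np.abs(df["feed_speed"] - 10)
               + rng.normal(scale=0.003, size=n))

# Equal-frequency (quantile) bins per feature, then within-bin median yield.
df["speed_bin"] = pd.qcut(df["feed_speed"], 4)
df["tension_bin"] = pd.qcut(df["tape_tension"], 4)
typical_yield = (df.groupby(["speed_bin", "tension_bin"], observed=True)["yield"]
                   .median())                # the operating-condition dictionary
```

Each of the 4×4 condition groups gets one typical-yield entry, which is the stabilized target the abstract predicts instead of raw batch-wise yield.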

25 pages, 7549 KB  
Article
Unseen-Crop Plant Disease Classification via Disentangled Representation Learning
by Zhenzhen Wu, Jianli Guo, Wei Hou, Kun Zhou, Kerang Cao and Hoekyung Jung
Electronics 2026, 15(8), 1553; https://doi.org/10.3390/electronics15081553 - 8 Apr 2026
Abstract
Deep learning has accelerated progress in plant disease recognition, providing strong technical support for early diagnosis and precision management. However, models often lack robustness and generalization when confronted with novel crops absent from the training set, leading to a marked performance drop in cross-unseen-crop scenarios. Cross-crop generalization for plant disease recognition requires models to identify known disease categories in crop domains never observed during training. A central challenge is that disease symptoms are strongly coupled with crop-specific appearance cues, which severely degrades generalization. Here, TDC (Text-guided feature Disentanglement Contrast) is introduced as a feature-disentanglement framework for cross-crop plant disease recognition. The proposed method employs a dual-branch visual encoder to separately capture disease semantic representations and crop-domain representations, and it leverages a frozen CLIP text encoder to use disease and crop prompts for text-guided semantic anchoring. A semantic-anchor-only contrastive disentanglement strategy is further formulated under a hybrid label space, where crop-branch features are incorporated as stop-gradient hard negatives to suppress semantic–domain information leakage and strengthen the intra-class aggregation of the same disease across crops. Residual domain-discriminative cues are mitigated via domain-adversarial learning. During inference, only the disease branch is retained for classification, improving generalization while reducing deployment overhead. Experiments demonstrate that under the PlantVillage cross-crop setting, the method achieves 98.04% and 74.29% Top-1 accuracy on seen and unseen crop domains, respectively. Moreover, it attains 81.99% on a real-world field dataset of strawberry powdery mildew and 76.31% on a low-illumination degradation set, validating robustness under realistic imaging distribution shifts. Full article
(This article belongs to the Special Issue Advances in Data-Driven Artificial Intelligence, 2nd Edition)

25 pages, 16852 KB  
Article
The Impact of Noise on Machine Learning-Based Lake Ice Detection on Lake Śniardwy Using Sentinel-1 SAR Data
by Augustyn Crane and Mariusz Sojka
Water 2026, 18(8), 890; https://doi.org/10.3390/w18080890 - 8 Apr 2026
Abstract
Lake ice monitoring is critical for assessing climate change, but in-situ observations are often limited. Sentinel-1 Synthetic Aperture Radar (SAR) data is a strong method for ice detection because it is not restricted by cloud cover and it is readily available. However, SAR-based classification can be affected by atmospheric and surface-related noise. This study examines the impact of noise on machine learning-based lake ice detection over Lake Śniardwy, Poland, using Sentinel-1 Vertical-Vertical (VV) and Vertical-Horizontal (VH) backscatter data. Binary logistic regression models were trained on scenes with strong class separability between ice and water and then validated on separate low- and high-noise datasets. The models achieved high accuracy under low-noise scenes, reaching up to 96.9%, but performed poorly on high-noise scenes. The results show that wind-related surface roughness and associated atmospheric conditions can significantly reduce classification reliability. Comparison with backscatter from a nearby coniferous forest confirmed that the main disturbances were concentrated over the lake surface. The study highlights the importance of careful scene selection and noise assessment in SAR-based lake ice classification. Full article
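
The classification setup above can be sketched with scikit-learn on synthetic VV/VH backscatter (the dB levels assumed here for ice, calm water, and wind-roughened water are illustrative, not Lake Śniardwy statistics):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
water = np.column_stack([rng.normal(-22, 1.5, n), rng.normal(-28, 1.5, n)])  # VV, VH (dB)
ice = np.column_stack([rng.normal(-14, 1.5, n), rng.normal(-20, 1.5, n)])
X = np.vstack([water, ice])
y = np.r_[np.zeros(n), np.ones(n)]                 # 0 = water, 1 = ice

clf = LogisticRegression().fit(X, y)               # binary logistic regression
clean_acc = clf.score(X, y)

# Wind-roughened water backscatter rises toward ice levels; accuracy drops,
# mirroring the high-noise scenes in the study.
rough = np.column_stack([rng.normal(-16, 2.0, n), rng.normal(-22, 2.0, n)])
rough_acc = clf.score(rough, np.zeros(n))
```
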

21 pages, 2215 KB  
Article
Machine Learning Approaches for Probabilistic Prediction of Coastal Freak Waves
by Dong-Jiing Doong, Wei-Cheng Chen, Fan-Ju Lin, Chi Pan and Cheng-Han Tsai
J. Mar. Sci. Eng. 2026, 14(8), 689; https://doi.org/10.3390/jmse14080689 - 8 Apr 2026
Abstract
Coastal freak waves (CFWs) are sudden and hazardous wave events that occur near shorelines and can pose serious threats to coastal visitors and infrastructure. Due to the complex interactions among coastal bathymetry, wave dynamics, and environmental conditions, the mechanisms governing CFW formation remain poorly understood, making reliable prediction difficult. This study investigates the feasibility of applying machine learning techniques to predict CFW occurrences using observational environmental data. Three machine learning algorithms, the Random Forest (RF), Support Vector Machine (SVM), and Artificial Neural Network (ANN), were developed to generate probability-based predictions of CFW events. Environmental variables derived from buoy observations, including wave characteristics, wind conditions, swell parameters, wave grouping indicators, and nonlinear wave interaction indices, were used as model inputs. Hyperparameters were optimized using grid search combined with k-fold cross-validation. The results show that all three models achieved comparable predictive performance, with AUC values close to 0.80 and overall prediction accuracy around 74%. The ANN model achieved the highest recall, indicating strong capability in detecting CFW events, while the RF and SVM models showed more balanced precision and recall. Analysis of high-probability prediction events suggests that CFW occurrences are associated with swell-dominated conditions, strong wave grouping behavior, and enhanced nonlinear wave interactions. These results demonstrate that machine learning provides a promising framework for probabilistic prediction of coastal freak waves and has potential applications in coastal hazard assessment and early warning systems. Full article
(This article belongs to the Special Issue Coastal Disaster Assessment and Response—2nd Edition)
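
The tuning scheme described above (grid search with k-fold cross-validation, probability outputs) can be sketched for the Random Forest case; the features, grid, and labels are synthetic stand-ins for the buoy data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))                      # stand-in buoy-derived features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, None]},
    cv=5, scoring="roc_auc",                       # k-fold CV with AUC, as above
)
search.fit(X, y)
event_proba = search.predict_proba(X[:5])[:, 1]    # probability-based predictions
```
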

18 pages, 5511 KB  
Article
Exploring the Application of Large Language Models (LLMs) in Data Structure Instruction: An Empirical Analysis of Student Learning Outcomes in Computer Science
by Hongzhi Li, Lijun Xiao, Kezhong Lu, Dun Li, Zheqing Zhang and Qishou Xia
Information 2026, 17(4), 353; https://doi.org/10.3390/info17040353 - 8 Apr 2026
Abstract
Recent advancements in Large Language Models (LLMs), including ChatGPT, DeepSeek, and Claude, have facilitated their growing integration into computer science education, including data structure courses. Despite their widespread adoption, the association between sustained and informal LLM usage and students’ learning outcomes remains insufficiently understood. This study seeks to address this gap by empirically examining the association between LLM usage and undergraduate performance in data structure education. We conduct a twelve-week empirical study involving fifty-four undergraduate students, in which LLMs were made freely accessible but neither explicitly encouraged nor discouraged during coursework and assignments. Students’ LLM usage patterns are analyzed in relation to their academic performance across different task types. Findings reveal a significant negative association between extensive reliance on LLMs for cognitively demanding tasks and overall learning outcomes. Additionally, an inverse associative trend is observed between the frequency of LLM usage across some learning activities and academic performance. In contrast, the use of LLMs for supplementary purposes, including conceptual clarification and theoretical understanding, exhibits a notably positive association with final performance. These findings suggest a task-dependent associative relationship between LLM usage and learning outcomes: LLM usage for conceptual learning shows a positive association with the mastery of relevant knowledge when used as a supplementary learning tool, while excessive LLM usage shows a negative association with the development of fundamental analytical and problem-solving skills. This study highlights the importance of carefully integrating LLMs into data structure education to support learning while preserving students’ independent cognitive engagement. Full article
(This article belongs to the Special Issue AI Technology-Enhanced Learning and Teaching)

31 pages, 4926 KB  
Article
Interpretable Optimized Extreme Gradient Boosting for Prediction of Higher Heating Value from Elemental Composition of Coal Resource to Energy Conversion
by Paulino José García-Nieto, Esperanza García-Gonzalo, José Pablo Paredes-Sánchez and Luis Alfonso Menéndez-García
Big Data Cogn. Comput. 2026, 10(4), 112; https://doi.org/10.3390/bdcc10040112 - 7 Apr 2026
Abstract
The higher heating value (HHV), sometimes referred to as the gross calorific value, is a crucial metric for determining a fuel’s primary energy potential in energy production systems. By combining extreme gradient boosting (XGBoost) with the differential evolution (DE) optimizer, an innovative machine learning-based model was created in this study to forecast the HHV (dependent variable). As input variables, the model included the constituents of the coal’s ultimate analysis: carbon (C), oxygen (O), hydrogen (H), nitrogen (N), and sulfur (S). For comparative purposes, random forest regression (RFR), M5 model tree, multivariate linear regression (MLR), and previously reported empirical correlations were also applied to the experimental dataset. The results showed that the XGBoost strategy produced the most accurate predictions. An initial XGBoost analysis was carried out to identify the relative contribution of the input variables to coal HHV prediction. In particular, for coal HHV estimates reliant on experimental samples, the XGBoost regression produced a correlation coefficient of 0.9858 and a coefficient of determination of 0.9691. The excellent agreement between observed and anticipated values shows that the DE/XGBoost-based approximation performed satisfactorily. Lastly, a synopsis of the investigation’s key conclusions is provided. Full article
(This article belongs to the Special Issue Smart Manufacturing in the AI Era)
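
A hedged sketch of the regression task above, with scikit-learn's gradient boosting standing in for the paper's DE-tuned XGBoost, and HHV synthesized from a Dulong-type correlation rather than the study's coal samples:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
C = rng.uniform(60, 90, n)                         # wt% carbon
H = rng.uniform(3, 6, n)                           # wt% hydrogen
N = rng.uniform(0.5, 2, n)                         # wt% nitrogen
S = rng.uniform(0.3, 3, n)                         # wt% sulfur
O = np.clip(100 - C - H - N - S - rng.uniform(5, 10, n), 1.0, None)  # remainder
hhv = (0.3383 * C + 1.422 * (H - O / 8) + 0.0942 * S
       + rng.normal(scale=0.3, size=n))            # MJ/kg, Dulong-type + noise

X = np.column_stack([C, H, N, O, S])
X_tr, X_te, y_tr, y_te = train_test_split(X, hhv, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)                       # coefficient of determination
```
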
24 pages, 3925 KB  
Article
Personal Identification Using Eye Movements During Manga Reading: Effects of Stimulus Variation and Template Aging
by Yuichi Wada
Appl. Sci. 2026, 16(7), 3601; https://doi.org/10.3390/app16073601 - 7 Apr 2026
Abstract
Eye movements are difficult to observe and replicate, making them a promising yet understudied modality for behavioral biometrics. This study is the first to examine the feasibility of using eye movement patterns during manga reading as a biometric identifier, leveraging the medium’s rich behavioral data from diverse reading behaviors. Eye movement data from 59 participants were recorded while they read two manga works on a screen. A comprehensive set of gaze features was extracted and evaluated using five machine learning classifiers, among which Random Forest (RF) consistently achieved the best performance. Under constrained experimental conditions, the RF classifier achieved a Rank-1 identification rate of 95.0% and an equal error rate (EER) of 1.9%. Furthermore, this study systematically investigated two critical challenges for practical deployment: stimulus dependency and template aging. Cross-stimulus evaluation revealed substantial performance degradation when training and testing used different manga works, and template aging analysis over an approximately 90-day interval demonstrated notable declines in identification accuracy. These results provide preliminary evidence supporting the potential of natural reading behaviors for biometric continuous authentication systems while highlighting the need for further research into cross-stimulus generalization and temporal stability. Full article
(This article belongs to the Special Issue Eye Tracking Technology and Its Applications)
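The Rank-1 identification rate and equal error rate (EER) quoted in this abstract are standard biometric evaluation metrics. A minimal sketch of how both are computed from similarity scores; the function names and score layout here are illustrative assumptions, not taken from the study:

```python
def rank1_rate(score_matrix, labels):
    # Rank-1 identification: fraction of probes whose highest-scoring
    # enrolled identity matches the true identity.
    # score_matrix[i][j] = similarity of probe i to enrolled identity j.
    hits = 0
    for scores, true_id in zip(score_matrix, labels):
        best = max(range(len(scores)), key=lambda j: scores[j])
        hits += (best == true_id)
    return hits / len(labels)

def equal_error_rate(genuine, impostor):
    # Scan candidate thresholds t; the EER is where the false accept
    # rate (impostor scores >= t) crosses the false reject rate
    # (genuine scores < t).
    best_gap, eer = None, None
    for t in sorted(genuine + impostor):
        far = sum(s >= t for s in impostor) / len(impostor)
        frr = sum(s < t for s in genuine) / len(genuine)
        if best_gap is None or abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```

With perfectly separated score distributions the EER is 0; overlapping genuine and impostor scores push it upward, which is what cross-stimulus and template-aging degradation would show up as.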
26 pages, 1802 KB  
Article
Integrating Generative AI and Cultural Storytelling to Enhance Geometry Learning in Vietnamese Primary Classrooms: A Quasi-Experimental Study
by Nguyen Huu Hau, Pham Sy Nam, Trinh Cong Son, Dao Chung Lan Anh, Nguyen Thuy Van, Pham Thi Thanh Tu, Tran Thuy Nga and Vo Xuan Mai
Educ. Sci. 2026, 16(4), 588; https://doi.org/10.3390/educsci16040588 - 7 Apr 2026
Abstract
In Vietnamese primary mathematics education, geometry instruction often emphasizes rote calculation and formula memorization rather than meaningful contextualization, leaving students disconnected from abstract concepts and lacking opportunities to connect learning with cultural identity. This quasi-experimental study investigates how integrating generative AI tools (ChatGPT, DALL·E, Canva) with the culturally grounded Vietnamese folktale Bánh Chưng—Bánh Giầy can support Grade 5 students' understanding of circle geometry. Employing a mixed-methods design with 30 students divided into experimental (AI + storytelling) and control (traditional instruction) groups, the study measured cognitive and affective learning outcomes through pre/post-tests, a validated 25-item questionnaire, interviews, and classroom observations. Quantitative results revealed significant improvements in the experimental group across all measured dimensions: learning interest, attentional focus, conceptual understanding, mathematics passion, and cultural preservation awareness, with large effect sizes. Qualitative findings confirmed enhanced engagement, multimodal conceptual clarity, and cultural affective resonance. The study demonstrates that low-cost, teacher-mediated generative AI can effectively support learning in resource-constrained primary settings when anchored in local narratives. Implications for ethical AI integration and teacher professional development in Vietnamese contexts are discussed. Full article
24 pages, 648 KB  
Article
Intuitive Risk Equation for Post-Transplant Bloodstream Infection Prediction: A Symbolic Regression Approach
by Sungsu Oh, Jeogin Jang, Yunseong Ko, Hyunsu Lee and Seungjin Lim
Biomedicines 2026, 14(4), 840; https://doi.org/10.3390/biomedicines14040840 - 7 Apr 2026
Abstract
Background: Liver transplant recipients are highly susceptible to infectious complications due to surgical invasiveness and immunosuppressive therapy, and post-transplant bloodstream infection is associated with substantial morbidity and mortality. Although several prediction models for bloodstream infection have been proposed, most focus on emergency department or general ward populations and rely on black-box approaches. This limits their applicability and clinical interpretability in liver transplant settings. Therefore, this study aimed to develop predictive models for post-transplant bloodstream infection using preoperative and perioperative clinical data and to derive an interpretable risk equation through symbolic regression. Methods: We conducted a retrospective observational study including 245 adult liver transplant recipients treated at a single tertiary center. Clinical and laboratory variables were extracted from electronic medical records and analyzed using standard statistical methods. For prediction tasks, multiple conventional machine learning models were developed and compared with a symbolic regression-based model. Predictive performance and model interpretability were evaluated using discrimination metrics and Shapley Additive Explanations. Results: Post-transplant bloodstream infection occurred in 82 patients (33.4%). In the test set, conventional machine learning models showed modest discriminative performance (area under the curve, 0.53–0.64). The symbolic regression model achieved comparable discrimination (area under the curve, 0.63) while providing transparent, threshold-based risk equations. While conventional models primarily relied on laboratory variables, symbolic regression additionally identified perioperative clinical factors and viral serologic markers as important predictors. Discussion: Although overall predictive performance was modest, symbolic regression highlighted viral serologic markers as potential indicators of immunologic vulnerability, extending beyond standard laboratory predictors. Conclusions: This interpretability-focused approach may inform future risk stratification models incorporating richer perioperative data. Full article
(This article belongs to the Section Microbiology in Human Health and Disease)
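The discrimination metric this abstract reports (area under the curve, 0.53–0.64) can be computed directly from classifier scores as the probability that a randomly chosen positive case outranks a randomly chosen negative one (the Mann–Whitney formulation of ROC AUC). A minimal sketch with illustrative scores, not data from the study:

```python
def auc(pos_scores, neg_scores):
    # AUC = probability that a random positive (infected) case
    # receives a higher risk score than a random negative case;
    # tied scores count as half a win.
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            wins += 1.0 if p > n else 0.5 if p == n else 0.0
    return wins / (len(pos_scores) * len(neg_scores))
```

An AUC of 0.5 corresponds to chance-level ranking, which is why values of 0.53–0.64 are described as modest: the models rank infected above non-infected patients only slightly better than random.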
