Search Results (22,546)

Search Parameters:
Keywords = detail model

29 pages, 2800 KB  
Article
An Automotive Fault Diagnosis Framework Based on Knowledge Graphs and Large Language Models
by Weikun Lin and Kehua Miao
Electronics 2025, 14(21), 4180; https://doi.org/10.3390/electronics14214180 (registering DOI) - 26 Oct 2025
Abstract
In recent years, the rapid advancement of large language models (LLMs) has driven significant breakthroughs in artificial intelligence. Leveraging LLMs in conjunction with domain-specific knowledge to develop intelligent assistants can reduce operational costs and facilitate industrial upgrading. In the field of automotive fault diagnosis, traditional methods rely heavily on technicians’ experience, resulting in limitations in both efficiency and accuracy. Misdiagnosis or insufficient expertise can lead to repair delays, while information asymmetry may cause trust issues between service providers and customers. To address these challenges, we propose a vehicle fault diagnosis framework based on knowledge graphs and large language models. Unlike traditional retrieval-augmented generation (RAG) methods, our framework actively queries for missing information and delivers precise repair recommendations. Experimental evaluations demonstrate that our framework achieves a diagnosis accuracy of 77.3%, representing a 46.1% improvement over direct diagnosis using a pretrained LLM (GPT-3.5) and over a 14% increase compared to other existing frameworks. Ablation studies confirm the effectiveness of each module, and our findings are further illustrated through detailed charts and visualizations. Overall, this study highlights the potential of integrating knowledge graphs with large language models for automotive fault diagnosis, with promising applicability to other traditional industries. Full article
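The "actively queries for missing information" behavior this abstract describes can be pictured with a toy knowledge-graph lookup: a fault node lists the evidence it requires, and the assistant asks a follow-up question whenever a required slot is absent. All fault names, slots, and advice strings below are invented for illustration and are not from the paper.

```python
# Toy knowledge graph: fault -> required evidence and a repair recommendation.
KG = {
    "battery_failure": {
        "requires": {"engine_cranks": False, "dash_lights": False},
        "advice": "Test battery voltage; replace if below 12.0 V.",
    },
    "starter_failure": {
        "requires": {"engine_cranks": False, "dash_lights": True},
        "advice": "Inspect starter motor and solenoid.",
    },
}

def diagnose(observations: dict):
    """Return (advice, follow_up_question); exactly one is non-None."""
    for fault, node in KG.items():
        required = node["requires"]
        missing = [k for k in required if k not in observations]
        if missing:
            # Actively query for missing information instead of guessing.
            return None, f"Please check: {missing[0]}?"
        if all(observations[k] == v for k, v in required.items()):
            return node["advice"], None
    return None, "No matching fault; please describe more symptoms."
```

With full observations the lookup returns a recommendation; with none, it returns a question, which is the contrast with passive retrieval-augmented generation that the abstract draws.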

36 pages, 20321 KB  
Article
Spatial Bias Correction of ERA5_Ag Reanalysis Precipitation Using Machine Learning Models in Semi-Arid Region of Morocco
by Achraf Chakri, Sana Abakarim, João C. Antunes Rodrigues, Nour-Eddine Laftouhi, Hassan Ibouh, Lahcen Zouhri and Elena Zaitseva
Atmosphere 2025, 16(11), 1234; https://doi.org/10.3390/atmos16111234 (registering DOI) - 26 Oct 2025
Abstract
Accurate precipitation data are essential for effective water resource management. This study aimed to correct precipitation values from the ERA5_Ag reanalysis dataset using observational data from 20 meteorological stations located in the Tensift basin, Morocco. Five machine learning models were evaluated: MLP, XGBoost, CatBoost, LightGBM, and Random Forest. Model performance was assessed using RMSE, MAE, R², and bias metrics, enabling the selection of the best-performing model to apply the correction. The results showed significant improvements in the accuracy of precipitation estimates, with R² ranging between 0.80 and 0.90 in most stations. The best model was subsequently used to correct and generate raster maps of corrected precipitation over 42 years, providing a spatially detailed tool of great value for water resource management. This study is particularly important in semi-arid regions such as the Tensift basin, where water scarcity demands more accurate and informed decision-making. Full article
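The fit-on-stations, apply-to-grid workflow this abstract describes can be sketched in a few lines. A plain least-squares linear fit stands in here for the ML regressors the paper evaluates (MLP, XGBoost, CatBoost, LightGBM, Random Forest), and the data are synthetic; only the flow is the point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "truth" at 20 stations and a biased reanalysis estimate of it.
station_obs = rng.gamma(shape=2.0, scale=10.0, size=20)        # observed mm
reanalysis = 0.7 * station_obs + 5.0 + rng.normal(0, 1, 20)    # biased copy

# Fit the correction observed = a * reanalysis + b on station pairs.
a, b = np.polyfit(reanalysis, station_obs, deg=1)

def correct(x):
    """Apply the fitted correction to any reanalysis value or grid."""
    return a * x + b

corrected = correct(reanalysis)
rmse_before = np.sqrt(np.mean((reanalysis - station_obs) ** 2))
rmse_after = np.sqrt(np.mean((corrected - station_obs) ** 2))
```

The same `correct` function would then be mapped over gridded reanalysis fields to produce corrected raster maps, as the study does over its 42-year record.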
16 pages, 6905 KB  
Article
A Hybrid Fuzzy-PSO Framework for Multi-Objective Optimization of Stereolithography Process Parameters
by Mohanned M. H. AL-Khafaji, Abdulkader Ali Abdulkader Kadauw, Mustafa Mohammed Abdulrazaq, Hussein M. H. Al-Khafaji and Henning Zeidler
Micromachines 2025, 16(11), 1218; https://doi.org/10.3390/mi16111218 (registering DOI) - 26 Oct 2025
Abstract
Additive manufacturing is driving a significant change in industry, extending beyond prototyping to the inclusion of printed parts in final designs. Stereolithography (SLA) is a polymerization technique valued for producing highly detailed parts with smooth surface finishes. This study presents a hybrid intelligent framework for modeling and optimizing the process parameters of an SLA 3D printer for Acrylonitrile Butadiene Styrene (ABS) photopolymer parts. The nonlinear relationships between the process parameters (Orientation, Lifting Speed, Lifting Distance, Exposure Time) and multiple performance characteristics (ultimate tensile strength, yield strength, modulus of elasticity, Shore D hardness, and surface roughness) were investigated. A Taguchi L18 orthogonal array was employed as an efficient experimental design. A novel hybrid fuzzy logic–Particle Swarm Optimization (PSO) algorithm, ARGOS (Adaptive Rule Generation with Optimized Structure), was developed to automatically generate high-accuracy Mamdani-type fuzzy inference systems (FISs) from experimental data. The algorithm starts by customizing Modified Learn From Example (MLFE) to create an initial FIS. Subsequently, the generated FIS is tuned using PSO to enhance predictive accuracy. The ARGOS models performed excellently, achieving correlation coefficients (R²) exceeding 0.9999 for all five output responses. Once the FISs were tuned, a multi-objective optimization was carried out based on the weighted sum method. This step helped to identify a well-balanced set of parameters that optimizes the key qualities of the printed parts, ensuring that the results are not just mathematically ideal, but also genuinely helpful for real-world manufacturing. The results showed that the proposed hybrid approach is a robust and highly accurate method for the modeling and multi-objective optimization of the SLA 3D printing process.
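PSO, the tuning half of the hybrid framework above, is compact enough to sketch in full. The objective here is a toy sphere function standing in for the fuzzy system's prediction error; the inertia and acceleration coefficients are conventional textbook values, not the paper's.

```python
import numpy as np

def pso(objective, dim, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimizer: returns (best_position, best_value)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                         # particle velocities
    pbest = x.copy()                             # per-particle best positions
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()       # swarm-wide best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Inertia + cognitive pull toward pbest + social pull toward gbest.
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

best_x, best_f = pso(lambda p: np.sum(p ** 2), dim=3)
```

In the paper's setting the objective would instead score a candidate FIS parameterization against the experimental data.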

18 pages, 3092 KB  
Article
Adverse-Weather Image Restoration Method Based on VMT-Net
by Zhongmin Liu, Xuewen Yu and Wenjin Hu
J. Imaging 2025, 11(11), 376; https://doi.org/10.3390/jimaging11110376 (registering DOI) - 26 Oct 2025
Abstract
To address global semantic loss, local detail blurring, and spatial–semantic conflict during image restoration under adverse weather conditions, we propose an image restoration network that integrates Mamba with Transformer architectures. We first design a Vision-Mamba–Transformer (VMT) module that combines the long-range dependency modeling of Vision Mamba with the global contextual reasoning of Transformers, facilitating the joint modeling of global structures and local details, thus mitigating information loss and detail blurring during restoration. Second, we introduce an Adaptive Content Guidance (ACG) module that employs dynamic gating and spatial–channel attention to enable effective inter-layer feature fusion, thereby enhancing cross-layer semantic consistency. Finally, we embed the VMT and ACG modules into a U-Net backbone, achieving efficient integration of multi-scale feature modeling and cross-layer fusion, significantly improving reconstruction quality under complex weather conditions. The experimental results show that on Snow100K-S/L, VMT-Net improves PSNR over the baseline by approximately 0.89 dB and 0.36 dB, with SSIM gains of about 0.91% and 0.11%, respectively. On Outdoor-Rain and Raindrop, it performs similarly to the baseline and exhibits superior detail recovery in real-world scenes. Overall, the method demonstrates robustness and strong detail restoration across diverse adverse-weather conditions. Full article
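PSNR, the metric in which the abstract reports its gains (about 0.89 dB on Snow100K-S), has a one-line definition worth keeping at hand; this is the standard formula for images scaled to [0, 1], not anything specific to VMT-Net.

```python
import numpy as np

def psnr(clean, restored, peak=1.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((np.asarray(clean, float) - np.asarray(restored, float)) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

A uniform error of 0.1 on a [0, 1] image gives exactly 20 dB, which helps calibrate what a sub-1 dB improvement means.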

26 pages, 4803 KB  
Article
Fatigue Life Evaluation of Suspended Monorail Track Beams Using Scaled Testing and FE Analysis
by Xu Han, Longsheng Bao, Baoxian Li and Tongfeng Zhao
Buildings 2025, 15(21), 3862; https://doi.org/10.3390/buildings15213862 (registering DOI) - 25 Oct 2025
Abstract
Suspended monorail systems are increasingly adopted in urban rail transit due to their small land requirements and environmental benefits. However, welded details in track beams are prone to fatigue cracking under repeated service loads, posing risks to long-term structural safety. This study investigates the fatigue performance of suspended monorail track beams through 1:4 scaled fatigue experiments and finite element (FE) simulations. Critical fatigue-sensitive locations were identified at the mid-span longitudinal stiffener–bottom flange weld toe and the mid-span web–bottom flange weld toe. Under the most unfavorable operating condition (train speed of 30 km/h), the corresponding hot-spot stresses were 28.48 MPa and 27.54 MPa, respectively. Stress deviations between scaled and full-scale models were within 7%, verifying the feasibility of using scaled models for fatigue studies. Fatigue life predictions based on the IIW hot-spot stress method and Eurocode S–N curves showed that the critical details exceeded the 100-year design requirement, with estimated fatigue lives of 2.39 × 10⁸ and 5.95 × 10⁸ cycles. Furthermore, a modified damage equivalent coefficient method that accounts for traffic volume and train speed was proposed, yielding coefficients of 2.54 and 3.06 for the two fatigue-prone locations. The results provide a theoretical basis and practical reference for fatigue life evaluation, design optimization, and code development of suspended monorail track beam structures. Full article
(This article belongs to the Section Building Structures)
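The S-N life estimate behind numbers like those above follows a simple power law. The FAT class and slope below (FAT 90, m = 3) are generic assumptions for a welded detail, not the exact curve the paper uses, so the resulting cycle counts will differ from its 2.39 × 10⁸ and 5.95 × 10⁸ figures.

```python
def fatigue_life_cycles(stress_range_mpa, fat_class=90.0, m=3.0):
    """Eurocode-style S-N estimate: N = 2e6 * (FAT / delta_sigma)^m cycles."""
    return 2.0e6 * (fat_class / stress_range_mpa) ** m

# Hot-spot stress ranges reported for the two critical weld toes.
for ds in (28.48, 27.54):
    print(f"delta_sigma = {ds} MPa -> N = {fatigue_life_cycles(ds):.2e} cycles")
```

The cubic slope is why a sub-1 MPa difference between the two weld toes still shifts predicted life noticeably.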

15 pages, 750 KB  
Review
Computational Modeling Approaches for Optimizing Microencapsulation Processes: From Molecular Dynamics to CFD and FEM Techniques
by Karen Isela Vargas-Rubio, Efrén Delgado, Cristian Patricia Cabrales-Arellano, Claudia Ivette Gamboa-Gómez and Damián Reyes-Jáquez
Biophysica 2025, 5(4), 49; https://doi.org/10.3390/biophysica5040049 (registering DOI) - 25 Oct 2025
Abstract
Microencapsulation is a fundamental technology for protecting active compounds from environmental degradation by factors such as light, heat, and oxygen. This process significantly improves their stability, bioavailability, and shelf life by entrapping an active core within a protective matrix. Therefore, a thorough understanding of the physicochemical interactions between these components is essential for developing stable and efficient delivery systems. The composition of the microcapsule and the encapsulation method are key determinants of system stability and the retention of encapsulated materials. Recently, the application of computational tools to predict and optimize microencapsulation processes has emerged as a promising area of research. In this context, molecular dynamics (MD) simulation has become an indispensable computational technique. By solving Newton’s equations of motion, MD simulations enable a detailed study of the dynamic behavior of atoms and molecules in a simulated environment. For example, MD-based analyses have quantitatively demonstrated that optimizing polymer–core interaction energies can enhance encapsulation efficiency by over 20% and improve the thermal stability of active compounds. This approach provides invaluable insights into the molecular interactions between the core material and the matrix, ultimately facilitating the rational design of optimized microstructures for diverse applications, including pharmaceuticals, thereby opening new avenues for innovation in the field. Ultimately, the integration of computational modeling into microencapsulation research not only represents a methodological advancement but also a pivotal opportunity to accelerate innovation, optimize processes, and develop more effective and sustainable therapeutic systems. Full article

29 pages, 23790 KB  
Article
Tone Mapping of HDR Images via Meta-Guided Bayesian Optimization and Virtual Diffraction Modeling
by Deju Huang, Xifeng Zheng, Jingxu Li, Ran Zhan, Jiachang Dong, Yuanyi Wen, Xinyue Mao, Yufeng Chen and Yu Chen
Sensors 2025, 25(21), 6577; https://doi.org/10.3390/s25216577 (registering DOI) - 25 Oct 2025
Abstract
This paper proposes a novel image tone-mapping framework that incorporates meta-learning, a psychophysical model, Bayesian optimization, and light-field virtual diffraction. First, we formalize the virtual diffraction process as a mathematical operator defined in the frequency domain to reconstruct high-dynamic-range (HDR) images through phase modulation, enabling the precise control of image details and contrast. In parallel, we apply the Stevens power law to simulate the nonlinear luminance perception of the human visual system, thereby adjusting the overall brightness distribution of the HDR image and improving the visual experience. Unlike existing methods that primarily emphasize structural fidelity, the proposed method strikes a balance between perceptual fidelity and visual naturalness. Secondly, an adaptive parameter tuning system based on Bayesian optimization is developed to conduct optimization of the Tone Mapping Quality Index (TMQI), quantifying uncertainty using probabilistic models to approximate the global optimum with fewer evaluations. Furthermore, we propose a task-distribution-oriented meta-learning framework: a meta-feature space based on image statistics is constructed, and task clustering is combined with a gated meta-learner to rapidly predict initial parameters. This approach significantly enhances the robustness of the algorithm in generalizing to diverse HDR content and effectively mitigates the cold-start problem in the early stage of Bayesian optimization, thereby accelerating the convergence of the overall optimization process. Experimental results demonstrate that the proposed method substantially outperforms state-of-the-art tone-mapping algorithms across multiple benchmark datasets, with an average improvement of up to 27% in naturalness. Furthermore, the meta-learning-guided Bayesian optimization achieves two- to five-fold faster convergence. 
In the trade-off between computational time and performance, the proposed method consistently dominates the Pareto frontier, achieving high-quality results and efficient convergence with a low computational cost. Full article
(This article belongs to the Section Sensing and Imaging)
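The Stevens-power-law brightness adjustment this abstract mentions reduces to a power curve on normalized luminance: perceived brightness grows roughly as luminance raised to a fractional exponent. The exponent and normalization below are illustrative; the paper combines this step with virtual diffraction and Bayesian-optimized parameters.

```python
import numpy as np

def stevens_tonemap(hdr, alpha=0.33):
    """Compress HDR luminance into [0, 1] with a power-law curve."""
    hdr = np.asarray(hdr, dtype=float)
    normed = hdr / hdr.max()          # scale to [0, 1]
    return normed ** alpha            # perceptual compression (alpha < 1)

hdr = np.array([0.01, 0.1, 1.0, 10.0, 100.0])   # five decades of luminance
ldr = stevens_tonemap(hdr)
```

Because alpha < 1, the curve lifts shadow detail while leaving the peak at 1.0, which is the nonlinearity of human luminance perception the abstract invokes.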
19 pages, 321 KB  
Article
Entropy Production and Irreversibility in the Linearized Stochastic Amari Neural Model
by Dario Lucente, Giacomo Gradenigo and Luca Salasnich
Entropy 2025, 27(11), 1104; https://doi.org/10.3390/e27111104 (registering DOI) - 25 Oct 2025
Abstract
One of the most intriguing results coming from the application of statistical mechanics to the study of the brain is the understanding that it, as a dynamical system, is inherently out of equilibrium. In the realm of non-equilibrium statistical mechanics and stochastic processes, the standard observable computed to determine whether a system is at equilibrium or not is the entropy produced along the dynamics. For this reason, we present here a detailed calculation of the entropy production in the Amari model, a coarse-grained model of the brain neural network, consisting of an integro-differential equation for the neural activity field, when stochasticity is added to the original dynamics. Since the way to add stochasticity is always to some extent arbitrary, particularly for coarse-grained models, there is no general prescription to do so. We precisely investigate the interplay between noise properties and the original model features, discussing in which cases the stationary state is in thermal equilibrium and in which cases it is out of equilibrium, providing explicit and simple formulae. Following the derivation for the particular case considered, we also show how the entropy production rate is related to the variation in time of the Shannon entropy of the system. Full article
(This article belongs to the Section Non-equilibrium Phenomena)
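The relation mentioned at the end of this abstract, between the entropy production rate and the time variation of the Shannon entropy, takes a standard form in stochastic thermodynamics; the notation below is the generic one, not necessarily the paper's.

```latex
\dot{S}(t) = \Pi(t) - \Phi(t),
\qquad
S(t) = -\int p(x,t)\,\ln p(x,t)\,\mathrm{d}x ,
```

where $\Pi(t) \ge 0$ is the entropy production rate and $\Phi(t)$ is the entropy flux into the environment. In any stationary state $\dot{S} = 0$, so $\Pi = \Phi$: both vanish at thermal equilibrium (detailed balance holds), while $\Pi = \Phi > 0$ signals a non-equilibrium steady state, which is the distinction the abstract draws for the stochastic Amari model.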
23 pages, 7485 KB  
Article
Deep Learning-Driven Automatic Segmentation of Weeds and Crops in UAV Imagery
by Jianghan Tao, Qian Qiao, Jian Song, Shan Sun, Yijia Chen, Qingyang Wu, Yongying Liu, Feng Xue, Hao Wu and Fan Zhao
Sensors 2025, 25(21), 6576; https://doi.org/10.3390/s25216576 (registering DOI) - 25 Oct 2025
Abstract
Accurate segmentation of crops and weeds is essential for enhancing crop yield, optimizing herbicide usage, and mitigating environmental impacts. Traditional weed management practices, such as manual weeding or broad-spectrum herbicide application, are labor-intensive, environmentally harmful, and economically inefficient. In response, this study introduces a novel precision agriculture framework integrating Unmanned Aerial Vehicle (UAV)-based remote sensing with advanced deep learning techniques, combining Super-Resolution Reconstruction (SRR) and semantic segmentation. This study is the first to integrate UAV-based SRR and semantic segmentation for tobacco fields, systematically evaluate recent Transformer and Mamba-based models alongside traditional CNNs, and release an annotated dataset that not only ensures reproducibility but also provides a resource for the research community to develop and benchmark future models. Initially, SRR enhanced the resolution of low-quality UAV imagery, significantly improving detailed feature extraction. Subsequently, to identify the optimal segmentation model for the proposed framework, semantic segmentation models incorporating CNN, Transformer, and Mamba architectures were used to differentiate crops from weeds. Among evaluated SRR methods, RCAN achieved the optimal reconstruction performance, reaching a Peak Signal-to-Noise Ratio (PSNR) of 24.98 dB and a Structural Similarity Index (SSIM) of 69.48%. In semantic segmentation, the ensemble model integrating Transformer (DPT with DINOv2) and Mamba-based architectures achieved the highest mean Intersection over Union (mIoU) of 90.75%, demonstrating superior robustness across diverse field conditions. Additionally, comprehensive experiments quantified the impact of magnification factors, Gaussian blur, and Gaussian noise, identifying an optimal magnification factor of 4× and showing that the method is robust to common environmental disturbances at these settings.
Overall, this research established an efficient, precise framework for crop cultivation management, offering valuable insights for precision agriculture and sustainable farming practices. Full article
(This article belongs to the Special Issue Smart Sensing and Control for Autonomous Intelligent Unmanned Systems)
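Mean IoU, the score by which the abstract ranks its segmentation models, is a short computation. The class IDs in this sketch are illustrative (0 = soil, 1 = crop, 2 = weed), not the dataset's actual label map.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Average per-class intersection-over-union, skipping absent classes."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                 # ignore classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

pred   = np.array([[0, 0, 1], [1, 2, 2]])   # model output (toy 2x3 mask)
target = np.array([[0, 0, 1], [1, 1, 2]])   # ground-truth annotation
```

Averaging per class rather than per pixel is what keeps a rare class like weed from being drowned out by the soil background.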
21 pages, 3381 KB  
Article
Aero-Engine Ablation Defect Detection with Improved CLR-YOLOv11 Algorithm
by Yi Liu, Jiatian Liu, Yaxi Xu, Qiang Fu, Jide Qian and Xin Wang
Sensors 2025, 25(21), 6574; https://doi.org/10.3390/s25216574 (registering DOI) - 25 Oct 2025
Abstract
Aero-engine ablation detection is a critical task in aircraft health management, yet existing rotation-based object detection methods often face challenges of high computational complexity and insufficient local feature extraction. This paper proposes an improved YOLOv11 algorithm incorporating Context-guided Large-kernel attention and Rotated detection head, called CLR-YOLOv11. The model achieves synergistic improvement in both detection efficiency and accuracy through dual structural optimization, with its innovations primarily embodied in the following three tightly coupled strategies: (1) Targeted Data Preprocessing Pipeline Design: To address challenges such as limited sample size, low overall image brightness, and noise interference, we designed an ordered data augmentation and normalization pipeline. This pipeline is not a mere stacking of techniques but strategically enhances sample diversity through geometric transformations (random flipping, rotation), hybrid augmentations (Mixup, Mosaic), and pixel-value transformations (histogram equalization, Gaussian filtering). All processed images subsequently undergo Z-Score normalization. This order-aware pipeline design effectively improves the quality, diversity, and consistency of the input data. (2) Context-Guided Feature Fusion Mechanism: To overcome the limitations of traditional Convolutional Neural Networks in modeling long-range contextual dependencies between ablation areas and surrounding structures, we replaced the original C3k2 layer with the C3K2CG module. This module adaptively fuses local textural details with global semantic information through a context-guided mechanism, enabling the model to more accurately understand the gradual boundaries and spatial context of ablation regions. 
(3) Efficiency-Oriented Large-Kernel Attention Optimization: To expand the receptive field while strictly controlling the additional computational overhead introduced by rotated detection, we replaced the C2PSA module with the C2PSLA module. By employing large-kernel decomposition and a spatial selective focusing strategy, this module significantly reduces computational load while maintaining multi-scale feature perception capability, ensuring the model meets the demands of real-time applications. Experiments on a self-built aero-engine ablation dataset demonstrate that the improved model achieves 78.5% mAP@0.5:0.95, representing a 4.2% improvement over the YOLOv11-obb model without the specialized data augmentation. This study provides an effective solution for high-precision real-time aviation inspection tasks. Full article
(This article belongs to the Special Issue Advanced Neural Architectures for Anomaly Detection in Sensory Data)
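The preprocessing pipeline above ends with Z-Score normalization of every image; a minimal version of that final step is below. Computing statistics per image, as done here, is an assumption for illustration; dataset-level statistics are an equally common choice.

```python
import numpy as np

def zscore(image, eps=1e-8):
    """Zero-mean, unit-variance normalization of an image array."""
    image = np.asarray(image, dtype=float)
    return (image - image.mean()) / (image.std() + eps)   # eps avoids /0
```

Applied after the geometric, hybrid, and pixel-value augmentations, this step gives every input the consistent scale the abstract's "order-aware pipeline" requires.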

20 pages, 3084 KB  
Article
Decoding Construction Accident Causality: A Decade of Textual Reports Analyzed
by Yuelin Wang and Patrick X. W. Zou
Buildings 2025, 15(21), 3859; https://doi.org/10.3390/buildings15213859 (registering DOI) - 25 Oct 2025
Abstract
Analyzing accident reports to absorb past experiences is crucial for construction site safety. Current methods of processing textual accident reports are time-consuming and labor-intensive. This research applied the LDA topic model to analyze construction accident reports, successfully identifying five main types of accidents: Falls from Height (23.5%), Struck-by and Contact Injuries (22.4%), Slips, Trips, and Falls (21.8%), Hot Work & Vehicle Hazards (18.1%), and Lifting and Machinery Accidents (14.2%). By mining the rich contextual details within unstructured textual descriptions, this research revealed that environmental factors constituted the most prevalent category of contributing causes, followed by human factors. Further analysis traced the root causes to deficiencies in management systems, particularly poor task planning and inadequate training. The LDA model demonstrated superior effectiveness in extracting interpretable topics directly mappable to engineering knowledge and uncovering these latent factors from large-scale, decade-spanning textual data at low computational cost. The findings offer transformative perspectives for improving construction site safety by prioritizing environmental control and management system enhancement. The main theoretical contributions of this research are threefold. First, it demonstrates the efficacy of LDA topic modeling as a powerful tool for extracting interpretable and actionable knowledge from large-scale, unstructured textual safety data, aligning with the growing interest in data-driven safety management in the construction sector. Second, it provides large-scale, empirical evidence that challenges the traditional dogma of “human factor dominance” by systematically quantifying the critical role of environmental and managerial root causes. Third, it presents a transparent, data-driven protocol for transitioning from topic identification to causal analysis, moving from assertion to evidence. 
Future work should focus on integrating multi-dimensional data for comprehensive accident analysis. Full article
(This article belongs to the Special Issue Digitization and Automation Applied to Construction Safety Management)

20 pages, 2753 KB  
Article
Evaluation of the Accuracy and Reliability of Responses Generated by Artificial Intelligence Related to Clinical Pharmacology
by Michal Ordak, Julia Adamczyk, Agata Oskroba, Michal Majewski and Tadeusz Nasierowski
J. Clin. Med. 2025, 14(21), 7563; https://doi.org/10.3390/jcm14217563 (registering DOI) - 25 Oct 2025
Abstract
Background/Objectives: Artificial intelligence (AI) is gaining importance in clinical pharmacology, supporting therapeutic decisions and the prediction of drug interactions, although its applications have significant limitations. The aim of the study was to evaluate the accuracy of the responses of four large language models (LLMs), namely ChatGPT-4o, ChatGPT-3.5, Gemini Advanced 2.0, and DeepSeek, in the field of clinical pharmacology and drug interactions, as well as to analyze the impact of prompting and of questions from the National Specialization Examination for Pharmacists (PESF) on the results. Methods: Three datasets were used in the analysis: 20 case reports of successful pharmacotherapy, 20 reports of drug–drug interactions, and 240 test questions from the PESF (spring 2018 and autumn 2019 sessions). The responses generated by the models were compared with source data and the official examination key and were independently evaluated by experts in clinical pharmacotherapy. Additionally, the impact of prompting techniques was analyzed by expanding the queries with detailed clinical and organizational elements to assess their influence on the accuracy of the obtained recommendations. Results: The analysis revealed differences in response accuracy between the examined AI tools (p < 0.001), with ChatGPT-4o achieving the highest effectiveness and Gemini Advanced 2.0 the lowest. Responses generated by Gemini were more often imprecise and less consistent, which was reflected in a significantly lower level of substantive accuracy (p < 0.001). The analysis of more precisely formulated questions demonstrated a significant main effect of the AI tool (p < 0.001), with Gemini Advanced 2.0 performing significantly worse than all other models (p < 0.001). An additional analysis comparing responses to simple and extended questions, which incorporated additional clinical factors and the mode of source presentation, did not reveal significant differences either between AI tools or within individual models (p = 0.34). In the area of drug interactions, ChatGPT-4o likewise achieved higher response accuracy than the other tools (p < 0.001). On the PESF examination questions, all models achieved similar results, ranging between 83% and 86% correct answers, and the differences between them were not statistically significant (p = 0.67). Conclusions: AI models demonstrate potential in the analysis of clinical pharmacology; however, their limitations require further refinement and cautious application in practice.
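The pairwise accuracy comparisons reported above (e.g., 83–86% correct on the 240 PESF questions) can be illustrated with a minimal two-proportion z-test. This is a generic sketch, not the study's actual statistical analysis, and the correct-answer counts below are hypothetical:

```python
import math

def two_proportion_z_test(k1, n1, k2, n2):
    """Two-sided z-test for a difference between two proportions.

    k1, k2: correct answers; n1, n2: questions attempted.
    Returns (z statistic, two-sided p-value).
    """
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 206/240 (~86%) vs. 199/240 (~83%) correct.
z, p = two_proportion_z_test(206, 240, 199, 240)
```

With counts of this size on 240 questions, the p-value lands well above 0.05, consistent with the paper's observation that differences in the 83–86% range were not statistically significant.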
19 pages, 1116 KB  
Article
Education, Sex, and Age Shape Rey Complex Figure Performance in Cognitively Normal Adults: An Interpretable Machine Learning Study
by Albert J. B. Lee, Benjamin Zhao, James J. Lah, Samantha E. John, David W. Loring and Cassie S. Mitchell
J. Clin. Med. 2025, 14(21), 7562; https://doi.org/10.3390/jcm14217562 - 25 Oct 2025
Abstract
Background: Demographic factors such as education, sex, and age can significantly influence cognitive test performance, yet their impact on the Montreal Cognitive Assessment (MoCA) and Rey Complex Figure (CF) test has not been fully characterized in large, cognitively normal samples. Understanding these effects is critical for refining normative standards and improving the clinical interpretation of neuropsychological assessments. Methods: Data from 926 cognitively healthy adults (MoCA ≥ 24) were analyzed using supervised machine learning classifiers and complementary statistical models to identify the most predictive MoCA and CF features associated with education, sex, and age, while including race as a covariate. Feature importance analyses were conducted to quantify the relative contributions of accuracy-based and time-based measures after adjusting for demographic confounding. Results: Distinct patterns emerged across demographic groups. Higher educational attainment was associated with longer encoding times and improved recall performance, suggesting more deliberate encoding strategies. Sex differences were most apparent in the recall of visuospatial details and language-related subtests, with women showing relative advantages in fine detail reproduction and verbal fluency. Age-related differences were primarily reflected in slower task completion and reduced spatial memory accuracy. Conclusions: Leveraging one of the largest reported samples of cognitively healthy adults, this study demonstrates that education, sex, and age systematically influence MoCA and CF performance. These findings highlight the importance of incorporating demographic factors into normative frameworks to enhance diagnostic precision and the interpretability of cognitive assessments.
(This article belongs to the Section Clinical Neurology)
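Feature-importance analyses of the kind described above are often computed by permutation: shuffle one feature's values and measure the resulting drop in accuracy. The sketch below is a self-contained toy illustration under that assumption, not the study's actual pipeline; the data and the threshold "classifier" are hypothetical stand-ins:

```python
import random

def accuracy(model, X, y):
    """Fraction of rows where the model's prediction matches the label."""
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx, n_repeats=20, seed=0):
    """Importance of one feature = mean accuracy drop after shuffling it."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    drops = []
    for _ in range(n_repeats):
        col = [row[feature_idx] for row in X]
        rng.shuffle(col)
        X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                  for row, v in zip(X, col)]
        drops.append(base - accuracy(model, X_perm, y))
    return sum(drops) / n_repeats

# Hypothetical data: the label depends only on feature 0; feature 1 is noise.
data_rng = random.Random(1)
X = [[data_rng.random(), data_rng.random()] for _ in range(200)]
y = [int(row[0] > 0.5) for row in X]
model = lambda row: int(row[0] > 0.5)  # stand-in for a trained classifier

imp0 = permutation_importance(model, X, y, 0)  # informative feature
imp1 = permutation_importance(model, X, y, 1)  # noise feature
```

Shuffling the informative feature costs roughly half the accuracy, while shuffling the ignored noise feature costs nothing, which is exactly the contrast permutation importance is designed to surface.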
16 pages, 14135 KB  
Article
Underwater Image Enhancement with a Hybrid U-Net-Transformer and Recurrent Multi-Scale Modulation
by Zaiming Geng, Jiabin Huang, Xiaotian Wang, Yu Zhang, Xinnan Fan and Pengfei Shi
Mathematics 2025, 13(21), 3398; https://doi.org/10.3390/math13213398 - 25 Oct 2025
Abstract
The quality of underwater imagery is inherently degraded by light absorption and scattering, a challenge that severely limits its application in critical domains such as marine robotics and archeology. While existing enhancement methods, including recent hybrid models, attempt to address this, they often struggle to restore fine-grained details without introducing visual artifacts. To overcome this limitation, this work introduces a novel hybrid U-Net-Transformer (UTR) architecture that synergizes local feature extraction with global context modeling. The core innovation is a Recurrent Multi-Scale Feature Modulation (R-MSFM) mechanism, which, unlike prior recurrent refinement techniques, employs a gated modulation strategy across multiple feature scales within the decoder to iteratively refine textural and structural details with high fidelity. This approach effectively preserves spatial information during upsampling. Extensive experiments demonstrate the superiority of the proposed method. On the EUVP dataset, UTR achieves a PSNR of 28.347 dB, a significant gain of +3.947 dB over the state-of-the-art UWFormer. Moreover, it attains a top-ranking UIQM score of 3.059 on the UIEB dataset, underscoring its robustness. The results confirm that UTR provides a computationally efficient and highly effective solution for underwater image enhancement.
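PSNR, the metric quoted above, is computed directly from the mean squared error between a reference and a restored image. A minimal sketch, assuming flat lists of pixel intensities and an 8-bit range:

```python
import math

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-sized images,
    given as flat lists of pixel intensities."""
    mse = sum((r - s) ** 2 for r, s in zip(reference, restored)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

# A uniform error of 10 intensity levels on an 8-bit image:
ref = [100] * 64
out = [110] * 64
value = psnr(ref, out)  # about 28.13 dB
```

A uniform 10-level error already yields roughly 28 dB, which gives some intuition for the scale on which the reported 28.347 dB on EUVP sits: a small average per-pixel deviation.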
20 pages, 944 KB  
Article
Predicting Corrosion Behaviour of Magnesium Alloy Using Machine Learning Approaches
by Tülay Yıldırım and Hüseyin Zengin
Metals 2025, 15(11), 1183; https://doi.org/10.3390/met15111183 - 24 Oct 2025
Abstract
The primary objective of this study is to develop a machine learning-based predictive model using corrosion rate data for magnesium alloys compiled from the literature. Corrosion rates measured under different deformation rates and heat treatment parameters were analyzed using artificial intelligence algorithms. Variables such as chemical composition, heat treatment temperature and time, deformation state, pH, test method, and test duration were used as inputs in the dataset. Various regression algorithms were compared with the PyCaret AutoML library, and the models with the highest accuracy scores were analyzed with Gradient Extra Trees and AdaBoost regression methods. The findings of this study demonstrate that modelling corrosion behaviour by integrating chemical composition with experimental conditions and processing parameters substantially enhances predictive accuracy. The regression models, developed using the PyCaret library, achieved high accuracy scores, producing corrosion rate predictions that are remarkably consistent with experimental values reported in the literature. Detailed tables and figures confirm that the most influential factors governing corrosion were successfully identified, providing valuable insights into the underlying mechanisms. These results highlight the potential of AI-assisted decision systems as powerful tools for material selection and experimental design, and, when supported by larger databases, for predicting the corrosion life of magnesium alloys and guiding the development of new alloys.
(This article belongs to the Section Computation and Simulation on Metals)
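AutoML comparisons like the PyCaret workflow described above boil down to scoring candidate models with cross-validation and keeping the best one. The sketch below mimics that selection loop with two deliberately simple regressors on hypothetical single-feature data; it illustrates the comparison step only and is not the paper's PyCaret setup:

```python
import random

def k_fold_mae(model_factory, X, y, k=5, seed=0):
    """Cross-validated mean absolute error.

    model_factory(X_train, y_train) must return a predict(x) function."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    abs_errors = []
    for fold in folds:
        held_out = set(fold)
        X_tr = [X[i] for i in idx if i not in held_out]
        y_tr = [y[i] for i in idx if i not in held_out]
        predict = model_factory(X_tr, y_tr)
        abs_errors += [abs(predict(X[i]) - y[i]) for i in fold]
    return sum(abs_errors) / len(abs_errors)

def mean_baseline(X_train, y_train):
    """Predicts the training-set mean regardless of input."""
    mean = sum(y_train) / len(y_train)
    return lambda x: mean

def one_nearest_neighbour(X_train, y_train):
    """Predicts the target of the closest training point."""
    def predict(x):
        j = min(range(len(X_train)), key=lambda i: abs(X_train[i] - x))
        return y_train[j]
    return predict

# Hypothetical single-feature data: a corrosion rate rising with, say,
# chloride concentration, plus measurement noise.
data_rng = random.Random(2)
X = [data_rng.uniform(0.0, 1.0) for _ in range(100)]
y = [2.0 * xi + data_rng.gauss(0.0, 0.05) for xi in X]

scores = {"mean baseline": k_fold_mae(mean_baseline, X, y),
          "1-NN": k_fold_mae(one_nearest_neighbour, X, y)}
best = min(scores, key=scores.get)  # lowest cross-validated MAE wins
```

The final `min(scores, key=scores.get)` plays the role of PyCaret's `compare_models`: the 1-NN regressor, which tracks the underlying trend, beats the constant baseline on cross-validated MAE.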