Search Results (34,819)

Search Parameters:
Keywords = model performance test

18 pages, 2954 KiB  
Article
A Multi-Objective Decision-Making Method for Optimal Scheduling Operating Points in Integrated Main-Distribution Networks with Static Security Region Constraints
by Kang Xu, Zhaopeng Liu and Shuaihu Li
Energies 2025, 18(15), 4018; https://doi.org/10.3390/en18154018 - 28 Jul 2025
Abstract
With the increasing penetration of distributed generation (DG), integrated main-distribution networks (IMDNs) face challenges in rapidly and effectively performing comprehensive operational risk assessments under multiple uncertainties. As a result, the traditional hierarchical economic scheduling method struggles to locate the optimal scheduling operating point accurately. To address this problem, this paper proposes a multi-objective dispatch decision-making optimization model for the IMDN with static security region (SSR) constraints. First, non-sequential Monte Carlo sampling is employed to generate diverse operational scenarios, and the key risk characteristics are then extracted to construct risk assessment index systems for the transmission and distribution grids, respectively. Second, a hyperplane model of the SSR is developed for the IMDN based on alternating current power flow equations and line current constraints. Third, a risk assessment matrix is constructed through optimal power flow calculations across multiple load levels, with the index weights determined via principal component analysis (PCA). Subsequently, a scheduling optimization model is formulated to minimize both the system generation costs and the comprehensive risk, where the adaptive grid density-improved multi-objective particle swarm optimization (AG-MOPSO) algorithm is employed to efficiently generate Pareto-optimal operating point solutions. A membership matrix of the solution set is then established using fuzzy comprehensive evaluation to identify the optimal compromise operating point for dispatch decision support. Finally, the effectiveness and superiority of the proposed method are validated using an integrated IEEE 9-bus and IEEE 33-bus test system.
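
The final decision step described here — selecting a compromise point from the Pareto set by fuzzy comprehensive evaluation — can be illustrated with a minimal sketch. The linear membership function and normalized-sum selection rule below are common choices in the multi-objective dispatch literature and are assumptions, not necessarily the authors' exact formulation; the example Pareto points are invented.

```python
import numpy as np

def fuzzy_compromise(pareto_objectives: np.ndarray) -> int:
    """Pick a compromise solution from a Pareto front of minimization objectives.

    Each objective value of solution i is mapped to a linear membership mu in
    [0, 1] (1 = best observed value, 0 = worst); the solution with the largest
    normalized membership sum is returned.
    """
    f = np.asarray(pareto_objectives, dtype=float)       # (n_solutions, n_objectives)
    f_min, f_max = f.min(axis=0), f.max(axis=0)
    span = np.where(f_max > f_min, f_max - f_min, 1.0)   # avoid division by zero
    mu = (f_max - f) / span                              # membership per solution/objective
    score = mu.sum(axis=1) / mu.sum()                    # normalized membership of each solution
    return int(np.argmax(score))

# Hypothetical Pareto front: (generation cost, comprehensive risk index)
pareto = np.array([[120.0, 0.80], [135.0, 0.55], [150.0, 0.40], [180.0, 0.35]])
best = fuzzy_compromise(pareto)
print("compromise operating point:", best, pareto[best])
```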

12 pages, 3213 KiB  
Article
Improving Laser Direct Writing Overlay Precision Based on a Deep Learning Method
by Guohan Gao, Jiong Wang, Xin Liu, Junfeng Du, Jiang Bian and Hu Yang
Micromachines 2025, 16(8), 871; https://doi.org/10.3390/mi16080871 - 28 Jul 2025
Abstract
This study proposes a deep learning-based method to improve overlay alignment precision in laser direct writing systems. Alignment errors arise from multiple sources in nanoscale processes, including optical aberrations, mechanical drift, and fiducial mark imperfections. A significant portion of the residual alignment error stems from the interpretation of mark coordinates by the vision system and algorithms. Here, we developed a convolutional neural network (CNN) model to predict the coordinates calculation error of 66,000 sets of computer-generated defective crosshair marks (simulating real fiducial mark imperfections). We compared 14 neural network architectures (8 CNN variants and 6 feedforward neural network (FNN) configurations) and found a well-performing, simple CNN structure achieving a mean squared error (MSE) of 0.0011 on the training sets and 0.0016 on the validation sets, demonstrating 90% error reduction compared to the FNN structure. Experimental results on test datasets showed the CNN’s capability to maintain prediction errors below 100 nm in both X/Y coordinates, significantly outperforming traditional FNN approaches. The proposed method’s success stems from the CNN’s inherent advantages in local feature extraction and translation invariance, combined with a simplified network architecture that prevents overfitting while maintaining computational efficiency. This breakthrough establishes a new paradigm for precision enhancement in micro–nano optical device fabrication. Full article
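
As a rough illustration of the kind of model this abstract describes — a CNN that regresses coordinate errors from images of defective crosshair marks and is trained with an MSE loss — the PyTorch sketch below uses an assumed 64×64 grayscale input, arbitrary layer sizes, and random tensors in place of the 66,000-sample dataset; it is not the paper's architecture.

```python
import torch
import torch.nn as nn

class MarkErrorCNN(nn.Module):
    """Small CNN mapping a grayscale fiducial-mark image to an (x, y) error estimate."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)  # predicted (dx, dy) coordinate error

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = MarkErrorCNN()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic stand-in for the labeled defective-mark dataset.
images = torch.randn(128, 1, 64, 64)
targets = torch.randn(128, 2)

for epoch in range(3):  # short demo training loop
    optimizer.zero_grad()
    loss = criterion(model(images), targets)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: MSE = {loss.item():.4f}")
```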

13 pages, 634 KiB  
Article
Cross-Cultural Adaptation and Psychometric Validation of the YFAS 2.0 for Assessing Food Addiction in the Mexican Adult Population
by Haydee Alejandra Martini-Blanquel, Indira Rocío Mendiola-Pastrana, Rubí Gisela Hernández-López, Daniela Guzmán-Covarrubias, Luisa Fernanda Romero-Henríquez, Carlos Alonso Rivero-López and Geovani López-Ortiz
Behav. Sci. 2025, 15(8), 1023; https://doi.org/10.3390/bs15081023 - 28 Jul 2025
Abstract
Food addiction is characterized by compulsive consumption and impaired control over highly palatable foods, with neurobiological mechanisms analogous to substance use disorders. The Yale Food Addiction Scale 2.0 (YFAS 2.0) is the most widely used instrument to assess these symptoms; however, its psychometric properties have not been validated in Mexican adults. This study aimed to cross-culturally adapt the YFAS 2.0 and validate its psychometric properties for identifying food addiction in the Mexican adult population. A cross-sectional study was conducted in 500 Mexican adults aged 20 years or older. Participants completed the cross-culturally adapted YFAS 2.0. Exploratory and hierarchical factor analyses were conducted. Reliability was assessed using Cronbach’s alpha and omega coefficients, and model fit was evaluated through global fit indices. The scale showed high internal consistency (α = 0.88; ωt = 0.87; ωh = 0.89). The Kaiser–Meyer–Olkin index was 0.815 and Bartlett’s test was significant (χ2 = 4367.88; df = 595; p < 0.001). Exploratory factor analysis supported a unidimensional structure, with the first factor explaining 21.3% of the total variance. In the hierarchical model, all items loaded substantially onto the general factor. Fit indices indicated excellent model fit (CFI = 0.99; TLI = 0.98; RMSEA = 0.001; RMR = 0.004). The YFAS 2.0 is a valid and reliable instrument for identifying food addiction symptoms in Mexican adults. It may be useful in clinical practice and research on eating disorders.
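
The internal-consistency figure quoted above (α = 0.88) comes from the standard Cronbach's alpha formula; a minimal numpy sketch is given below. The 35-item width matches Bartlett's df = 595 reported above (35·34/2 = 595), but the simulated responses are random placeholders, not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 1))                      # shared "food addiction" trait
responses = latent + 2.0 * rng.normal(size=(500, 35))   # 35 noisy, correlated items
print(f"alpha = {cronbach_alpha(responses):.2f}")
```
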
21 pages, 3633 KiB  
Article
Shear Mechanism of Precast Segmental Concrete Beam Prestressed with Unbonded Tendons
by Wu-Tong Yan, Lei Yuan, Yong-Hua Su and Zi-Wei Song
Buildings 2025, 15(15), 2668; https://doi.org/10.3390/buildings15152668 - 28 Jul 2025
Abstract
The shear tests are conducted on six precast segmental concrete beams (PSCBs) in this paper. A new specimen design scheme is presented to compare the effects of segmental joints on the shear performance of PSCBs. The failure modes, shear strength, structural deflection, stirrup strain, and tendon stress are recorded. The factors of shear span ratio, the position of segmental joints, and hybrid tendon ratio are focused on, and their effects on the shear behaviors are compared. Based on the measured responses, the shear contribution proportions of concrete segments, prestressed tendons, and stirrups are decomposed and quantified. With the observed failure modes, the truss–arch model is employed to clarify the shear mechanism of PSCBs, and simplified equations are further developed for predicting the shear strength. Using the collected test results of 30 specimens, the validity of the proposed equations is verified with a mean ratio of calculated-to-test values of 0.96 and a standard deviation of 0.11. Furthermore, the influence mechanism of shear span ratio, segmental joints, prestressing force, and hybrid tendon ratio on the shear strength is clarified. The increasing shear span ratio decreases the inclined angle of the arch ribs, thereby reducing the shear resistance contribution of the arch action. The open joints reduce the number of stirrups passing through the diagonal cracks, lowering the shear contribution of the truss action. The prestressing force can reduce the inclination of diagonal cracks, improving the contribution of truss action. The external unbonded tendon will decrease the height of the arch rib due to the second-order effects, causing lower shear strength than PSCBs with internal tendons. Full article
(This article belongs to the Special Issue Advances in Steel-Concrete Composite Structure—2nd Edition)
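
The verification statistic reported for the proposed shear-strength equations — a mean calculated-to-test ratio of 0.96 with a standard deviation of 0.11 — is simply the summary of per-specimen ratios, as in the short sketch below; the strength values shown are hypothetical, not the 30 collected test results.

```python
import numpy as np

# Hypothetical (calculated, measured) shear strengths in kN for a few specimens.
calculated = np.array([410.0, 388.0, 512.0, 295.0, 640.0])
measured = np.array([430.0, 402.0, 505.0, 330.0, 655.0])

ratio = calculated / measured
print(f"mean ratio = {ratio.mean():.2f}, std = {ratio.std(ddof=1):.2f}")
```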

27 pages, 3985 KiB  
Article
Classical Paal-Knorr Cyclization for Synthesis of Pyrrole-Based Aryl Hydrazones and In Vitro/In Vivo Evaluation on Pharmacological Models of Parkinson’s Disease
by Maya Georgieva, Martin Sharkov, Emilio Mateev, Diana Tzankova, Georgi Popov, Vasil Manov, Alexander Zlatkov, Rumyana Simeonova and Magdalena Kondeva-Burdina
Molecules 2025, 30(15), 3154; https://doi.org/10.3390/molecules30153154 - 28 Jul 2025
Abstract
Some studies performed in our laboratory on pyrrole and its derivatives pointed towards the enrichment of the evaluations of these promising chemical structures for the potential treatment of neurodegenerative conditions in general and Parkinson’s disease in particular. A classical Paal-Knorr cyclization approach is applied to synthesize the basic hydrazine used for the formation of the designed series of hydrazones (15a15g). The potential neurotoxic and neuroprotective effects of the newly synthesized derivatives were investigated in vitro using different models of induced oxidative stress at three subcellular levels (rat brain synaptosomes, mitochondria, and microsomes). The results identified as the least neurotoxic molecules, 15a, 15d, and 15f applied at a concentration of 100 µM to the isolated fractions. In addition, the highest statistically significant neuroprotection was observed for 15a and 15d at a concentration of 100 µM using three different injury models on subcellular fractions, including 6-hydroxydopamine in rat brain synaptosomes, tert-butyl hydroperoxide in brain mitochondria, and non-enzyme-induced lipid peroxidation in brain microsomes. The hMAOA/MAOB inhibitory activity of the new compounds was studied at a concentration of 1 µM. The lack of a statistically significant hMAOA inhibitory effect was observed for all tested compounds, except for 15f, which showed 40% inhibitory activity. The most prominent statistically significant hMAOB inhibitory effect was determined for 15a, 15d, and 15f, comparable to that of selegiline. The corresponding selectivity index defined 15f as a non-selective MAO inhibitor and all other new hydrazones as selective hMAOB inhibitors, with 15d indicating the highest selectivity index of > 471. The most active and least toxic representative (15d) was evaluated in vivo on Rotenone based model of Parkinson’s disease. The results revealed no microscopically visible alterations in the ganglion and glial cells in the animals treated with rotenone in combination with 15d. Full article
(This article belongs to the Special Issue Small-Molecule Targeted Drugs)
22 pages, 825 KiB  
Article
Conformal Segmentation in Industrial Surface Defect Detection with Statistical Guarantees
by Cheng Shen and Yuewei Liu
Mathematics 2025, 13(15), 2430; https://doi.org/10.3390/math13152430 - 28 Jul 2025
Abstract
Detection of surface defects can significantly elongate mechanical service time and mitigate potential risks during safety management. Traditional defect detection methods predominantly rely on manual inspection, which suffers from low efficiency and high costs. Some machine learning algorithms and artificial intelligence models for defect detection, such as Convolutional Neural Networks (CNNs), present outstanding performance, but they are often data-dependent and cannot provide guarantees for new test samples. To this end, we construct a detection model by combining Mask R-CNN, selected for its strong baseline performance in pixel-level segmentation, with Conformal Risk Control. The former evaluates the distribution that discriminates defects from all samples based on probability. The detection model is improved by retraining with calibration data that is assumed to be independent and identically distributed (i.i.d) with the test data. The latter constructs a prediction set on which a given guarantee for detection will be obtained. First, we define a loss function for each calibration sample to quantify detection error rates. Subsequently, we derive a statistically rigorous threshold by optimization of error rates and a given guarantee significance as the risk level. With the threshold, defective pixels with high probability in test images are extracted to construct prediction sets. This methodology ensures that the expected error rate on the test set remains strictly bounded by the predefined risk level. Furthermore, our model shows robust and efficient control over the expected test set error rate when calibration-to-test partitioning ratios vary. Full article
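
The calibration step described — picking a statistically rigorous threshold so the expected error rate stays below a chosen risk level — follows the generic conformal risk control recipe. The sketch below assumes the per-image loss (fraction of missed defective pixels) is monotone in the threshold parameter and uses toy arrays in place of Mask R-CNN probability maps; the grid, loss definition, and loss bound B = 1 are illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def crc_threshold(cal_probs, cal_masks, alpha=0.1, B=1.0, grid=None):
    """Return the smallest lambda whose adjusted calibration risk is <= alpha.

    A pixel enters the prediction set when its predicted defect probability is
    >= 1 - lambda; the per-image loss is the fraction of true defective pixels
    the set misses, which is non-increasing in lambda.
    """
    n = len(cal_probs)
    grid = np.linspace(0.0, 1.0, 101) if grid is None else grid
    for lam in grid:
        losses = []
        for probs, mask in zip(cal_probs, cal_masks):
            pred_set = probs >= 1.0 - lam
            defect = mask.astype(bool)
            missed = np.logical_and(defect, ~pred_set).sum()
            losses.append(missed / max(defect.sum(), 1))
        risk = (n / (n + 1)) * np.mean(losses) + B / (n + 1)  # upper-bound correction
        if risk <= alpha:
            return lam
    return 1.0

# Toy calibration data standing in for pixel probability maps and ground-truth masks.
rng = np.random.default_rng(1)
masks = [rng.random((32, 32)) < 0.1 for _ in range(50)]
probs = [np.clip(m * 0.8 + rng.random((32, 32)) * 0.3, 0, 1) for m in masks]
print("calibrated lambda:", crc_threshold(probs, masks, alpha=0.1))
```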

25 pages, 945 KiB  
Article
Short-Term Forecasting of the JSE All-Share Index Using Gradient Boosting Machines
by Mueletshedzi Mukhaninga, Thakhani Ravele and Caston Sigauke
Economies 2025, 13(8), 219; https://doi.org/10.3390/economies13080219 - 28 Jul 2025
Abstract
This study applies Gradient Boosting Machines (GBMs) and principal component regression (PCR) to forecast the closing price of the Johannesburg Stock Exchange (JSE) All-Share Index (ALSI), using daily data from 2009 to 2024, sourced from the Wall Street Journal. The models are evaluated under three training–testing split ratios to assess short-term forecasting performance. Forecast accuracy is assessed using standard error metrics: mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE), and mean absolute scaled error (MASE). Across all test splits, the GBM consistently achieves lower forecast errors than PCR, demonstrating superior predictive accuracy. To validate the significance of this performance difference, the Diebold–Mariano (DM) test is applied, confirming that the forecast errors from the GBM are statistically significantly lower than those of PCR at conventional significance levels. These findings highlight the GBM’s strength in capturing nonlinear relationships and complex interactions in financial time series, particularly when using features such as the USD/ZAR exchange rate, oil, platinum, and gold prices, the S&P 500 index, and calendar-based variables like month and day. Future research should consider integrating additional macroeconomic indicators and exploring alternative or hybrid forecasting models to improve robustness and generalisability across different market conditions. Full article
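
A minimal sketch of the evaluation loop described — fitting a gradient boosting regressor on a chronological train/test split and reporting MAE, RMSE, and MAPE — is shown below, with synthetic data standing in for the ALSI series and its predictors; the hyperparameters and feature stand-ins are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(42)
n = 1000
X = rng.normal(size=(n, 5))   # stand-ins for USD/ZAR, oil, gold, S&P 500, calendar features
y = 60000 + 500 * X[:, 0] - 300 * X[:, 1] + 200 * rng.normal(size=n)  # synthetic "ALSI close"

split = int(0.8 * n)          # chronological 80/20 split (no shuffling for time series)
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

gbm = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
gbm.fit(X_train, y_train)
pred = gbm.predict(X_test)

mae = mean_absolute_error(y_test, pred)
rmse = np.sqrt(mean_squared_error(y_test, pred))
mape = np.mean(np.abs((y_test - pred) / y_test)) * 100
print(f"MAE={mae:.1f}  RMSE={rmse:.1f}  MAPE={mape:.2f}%")
```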

21 pages, 2822 KiB  
Article
Deep Learning-Based Rooftop PV Detection and Techno Economic Feasibility for Sustainable Urban Energy Planning
by Ahmet Hamzaoğlu, Ali Erduman and Ali Kırçay
Sustainability 2025, 17(15), 6853; https://doi.org/10.3390/su17156853 - 28 Jul 2025
Abstract
Accurate estimation of available rooftop areas for PV power generation at the city scale is critical for sustainable energy planning and policy development. In this study, using publicly available high-resolution satellite imagery, rooftop solar energy potential in urban, rural, and industrial areas is estimated using deep learning models. In order to identify roof areas, high-resolution open-source images were manually labeled, and the training dataset was trained with DeepLabv3+ architecture. The developed model performed roof area detection with high accuracy. Model outputs are integrated with a user-friendly interface for economic analysis such as cost, profitability, and amortization period. This interface automatically detects roof regions in the bird’s-eye -view images uploaded by users, calculates the total roof area, and classifies according to the potential of the area. The system, which is applied in 81 provinces of Turkey, provides sustainable energy projections such as PV installed capacity, installation cost, annual energy production, energy sales revenue, and amortization period depending on the panel type and region selection. This integrated system consists of a deep learning model that can extract the rooftop area with high accuracy and a user interface that automatically calculates all parameters related to PV installation for energy users. The results show that the DeepLabv3+ architecture and the Adam optimization algorithm provide superior performance in roof area estimation with accuracy between 67.21% and 99.27% and loss rates between 0.6% and 0.025%. Tests on 100 different regions yielded a maximum roof estimation accuracy IoU of 84.84% and an average of 77.11%. In the economic analysis, the amortization period reaches the lowest value of 4.5 years in high-density roof regions where polycrystalline panels are used, while this period increases up to 7.8 years for thin-film panels. In conclusion, this study presents an interactive user interface integrated with a deep learning model capable of high-accuracy rooftop area detection, enabling the assessment of sustainable PV energy potential at the city scale and easy economic analysis. This approach is a valuable tool for planning and decision support systems in the integration of renewable energy sources. Full article
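
The economic quantities listed (installation cost, annual energy production, sales revenue, amortization period) reduce to a simple payback calculation, sketched below with made-up unit costs, tariffs, and yields rather than the study's per-province Turkish figures.

```python
# Hypothetical figures; the study's per-province costs and tariffs are not reproduced here.
roof_area_m2 = 500.0
panel_efficiency = 0.20            # polycrystalline assumption
specific_yield_kwh_per_kwp = 1500  # assumed annual yield per installed kWp
cost_per_kwp = 900.0               # assumed installation cost per kWp
tariff_per_kwh = 0.10              # assumed energy sales price per kWh

installed_kwp = roof_area_m2 * panel_efficiency      # 1 kW/m2 irradiance at STC
annual_energy_kwh = installed_kwp * specific_yield_kwh_per_kwp
annual_revenue = annual_energy_kwh * tariff_per_kwh
capex = installed_kwp * cost_per_kwp
payback_years = capex / annual_revenue               # amortization period
print(f"{installed_kwp:.0f} kWp installed, payback ≈ {payback_years:.1f} years")
```
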
25 pages, 2518 KiB  
Article
An Efficient Semantic Segmentation Framework with Attention-Driven Context Enhancement and Dynamic Fusion for Autonomous Driving
by Jia Tian, Peizeng Xin, Xinlu Bai, Zhiguo Xiao and Nianfeng Li
Appl. Sci. 2025, 15(15), 8373; https://doi.org/10.3390/app15158373 - 28 Jul 2025
Abstract
In recent years, a growing number of real-time semantic segmentation networks have been developed to improve segmentation accuracy. However, these advancements often come at the cost of increased computational complexity, which limits their inference efficiency, particularly in scenarios such as autonomous driving, where strict real-time performance is essential. Achieving an effective balance between speed and accuracy has thus become a central challenge in this field. To address this issue, we present a lightweight semantic segmentation model tailored for the perception requirements of autonomous vehicles. The architecture follows an encoder–decoder paradigm, which not only preserves the capability for deep feature extraction but also facilitates multi-scale information integration. The encoder leverages a high-efficiency backbone, while the decoder introduces a dynamic fusion mechanism designed to enhance information interaction between different feature branches. Recognizing the limitations of convolutional networks in modeling long-range dependencies and capturing global semantic context, the model incorporates an attention-based feature extraction component. This is further augmented by positional encoding, enabling better awareness of spatial structures and local details. The dynamic fusion mechanism employs an adaptive weighting strategy, adjusting the contribution of each feature channel to reduce redundancy and improve representation quality. To validate the effectiveness of the proposed network, experiments were conducted on a single RTX 3090 GPU. The Dynamic Real-time Integrated Vision Encoder–Segmenter Network (DriveSegNet) achieved a mean Intersection over Union (mIoU) of 76.9% and an inference speed of 70.5 FPS on the Cityscapes test dataset, 74.6% mIoU and 139.8 FPS on the CamVid test dataset, and 35.8% mIoU with 108.4 FPS on the ADE20K dataset. The experimental results demonstrate that the proposed method achieves an excellent balance between inference speed, segmentation accuracy, and model size. Full article
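
The dynamic fusion mechanism described — adaptively weighting the contribution of each feature channel when merging branches — can be illustrated with a squeeze-and-excitation-style gate, as in the PyTorch sketch below. The gate design, channel count, and reduction ratio are assumptions for illustration, not DriveSegNet's actual module.

```python
import torch
import torch.nn as nn

class DynamicFusion(nn.Module):
    """Fuse two feature branches with learned per-channel weights (illustrative sketch)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                                     # squeeze spatial dims
            nn.Conv2d(2 * channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, 2 * channels, 1),
        )

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        w = self.gate(torch.cat([a, b], dim=1))            # (N, 2C, 1, 1) channel logits
        wa, wb = torch.sigmoid(w).chunk(2, dim=1)          # per-channel weight for each branch
        return wa * a + wb * b                             # weighted sum damps redundant channels

fusion = DynamicFusion(channels=64)
low_level = torch.randn(2, 64, 32, 32)   # e.g. spatial-detail branch
context = torch.randn(2, 64, 32, 32)     # e.g. attention/context branch
print(fusion(low_level, context).shape)  # torch.Size([2, 64, 32, 32])
```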

27 pages, 11177 KiB  
Article
Robust Segmentation of Lung Proton and Hyperpolarized Gas MRI with Vision Transformers and CNNs: A Comparative Analysis of Performance Under Artificial Noise
by Ramtin Babaeipour, Matthew S. Fox, Grace Parraga and Alexei Ouriadov
Bioengineering 2025, 12(8), 808; https://doi.org/10.3390/bioengineering12080808 - 28 Jul 2025
Abstract
Accurate segmentation in medical imaging is essential for disease diagnosis and monitoring, particularly in lung imaging using proton and hyperpolarized gas MRI. However, image degradation due to noise and artifacts—especially in hyperpolarized gas MRI, where scans are acquired during breath-holds—poses challenges for conventional segmentation algorithms. This study evaluates the robustness of deep learning segmentation models under varying Gaussian noise levels, comparing traditional convolutional neural networks (CNNs) with modern Vision Transformer (ViT)-based models. Using a dataset of proton and hyperpolarized gas MRI slices from 56 participants, we trained and tested Feature Pyramid Network (FPN) and U-Net architectures with both CNN (VGG16, VGG19, ResNet152) and ViT (MiT-B0, B3, B5) backbones. Results showed that ViT-based models, particularly those using the SegFormer backbone, consistently outperformed CNN-based counterparts across all metrics and noise levels. The performance gap was especially pronounced in high-noise conditions, where transformer models retained higher Dice scores and lower boundary errors. These findings highlight the potential of ViT-based architectures for deployment in clinically realistic, low-SNR environments such as hyperpolarized gas MRI, where segmentation reliability is critical. Full article
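
The robustness protocol — degrading inputs with increasing Gaussian noise and comparing Dice scores — is sketched below. The thresholding "segmenter", toy image, and noise levels are placeholders for the trained FPN/U-Net models and MRI slices.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

def add_gaussian_noise(img: np.ndarray, sigma: float, rng) -> np.ndarray:
    return img + rng.normal(scale=sigma, size=img.shape)

rng = np.random.default_rng(0)
image = rng.random((128, 128))      # stand-in for an MRI slice, intensities in [0, 1]
truth = image > 0.5                 # toy ground-truth mask

for sigma in (0.0, 0.1, 0.3, 0.5):  # increasing noise levels
    noisy = add_gaussian_noise(image, sigma, rng)
    pred = noisy > 0.5              # placeholder "segmenter": simple thresholding
    print(f"sigma={sigma:.1f}  Dice={dice_score(pred, truth):.3f}")
```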

22 pages, 4695 KiB  
Article
Application of Extra-Trees Regression and Tree-Structured Parzen Estimators Optimization Algorithm to Predict Blast-Induced Mean Fragmentation Size in Open-Pit Mines
by Madalitso Mame, Shuai Huang, Chuanqi Li and Jian Zhou
Appl. Sci. 2025, 15(15), 8363; https://doi.org/10.3390/app15158363 - 28 Jul 2025
Abstract
Blasting is an effective technique for fragmenting rock in open-pit mining operations. Blasting operations produce either boulders or fine fragments, both of which increase costs and pose environmental risks. As a result, predicting the mean fragmentation size (MFS) distribution of rock is critical for assessing blasting operations’ quality and mitigating risks. Due to the limitations of empirical and statistical models, several researchers are turning to artificial intelligence (AI)-based techniques to predict the MFS distribution of rock. Thus, this study uses three AI tree-based algorithms—extra trees (ET), gradient boosting (GB), and random forest (RF)—to predict the MFS distribution of rock. The prediction accuracy of the models is optimized utilizing the tree-structured Parzen estimators (TPEs) algorithm, which results in three models: TPE-ET, TPE-GB, and TPE-RF. The dataset used in this study was collected from the published literature and through the data augmentation of a large-scale dataset of 3740 blast samples. Among the evaluated models, the TPE-ET model exhibits the best performance with a coefficient of determination (R2), root mean squared error (RMSE), mean absolute error (MAE), and max error of 0.93, 0.04, 0.03, and 0.25 during the testing phase. Moreover, the block size (XB, m) and modulus of elasticity (E, GPa) parameters are identified as the most influential parameters for predicting the MFS distribution of rock. Lastly, an interactive web application has been developed to assist engineers with the timely prediction of MFS. The predictive model developed in this study is a reliable intelligent model because it combines high accuracy with a strong, explainable AI tool for predicting MFS. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
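
A minimal sketch of tuning an extra-trees regressor with a tree-structured Parzen estimator is shown below, using Optuna's TPESampler and a synthetic regression set in place of the blast-fragmentation data; the search ranges and trial count are illustrative.

```python
import optuna
from sklearn.datasets import make_regression
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the 3740-sample blast dataset (features such as XB, E, etc.).
X, y = make_regression(n_samples=500, n_features=6, noise=0.1, random_state=0)

def objective(trial: optuna.Trial) -> float:
    model = ExtraTreesRegressor(
        n_estimators=trial.suggest_int("n_estimators", 50, 400),
        max_depth=trial.suggest_int("max_depth", 3, 20),
        min_samples_leaf=trial.suggest_int("min_samples_leaf", 1, 10),
        random_state=0,
    )
    # 5-fold cross-validated R^2 as the objective to maximize
    return cross_val_score(model, X, y, cv=5, scoring="r2").mean()

study = optuna.create_study(direction="maximize", sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=30)
print("best R^2:", round(study.best_value, 3), "params:", study.best_params)
```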

13 pages, 1058 KiB  
Article
A Machine Learning-Based Guide for Repeated Laboratory Testing in Pediatric Emergency Departments
by Adi Shuchami, Teddy Lazebnik, Shai Ashkenazi, Avner Herman Cohen, Yael Reichenberg and Vered Shkalim Zemer
Diagnostics 2025, 15(15), 1885; https://doi.org/10.3390/diagnostics15151885 - 28 Jul 2025
Abstract
Background/Objectives: Laboratory tests conducted in community settings are occasionally repeated within hours of presentation to pediatric emergency departments (PEDs). Reducing unnecessary repetitions can ease child discomfort and alleviate the healthcare burden without compromising the diagnostic process or quality of care. The aim of this study was to develop a decision tree (DT) model to guide physicians in minimizing unnecessary repeat blood tests in PEDs. The minimal decision tree (MDT) algorithm was selected for its interpretability and capacity to generate optimally pruned classification trees. Methods: Children aged 3 months to 18 years with community-based complete blood count (CBC), electrolyte (ELE), and C-reactive protein (CRP) measurements obtained between 2016 and 2023 were included. Repeat tests performed in the pediatric emergency department within 12 h were evaluated by comparing paired measurements, with tests considered justified when values transitioned from normal to abnormal ranges or changed by ≥20%. Additionally, sensitivity analyses were conducted for absolute change thresholds of 10% and 30% and for repeat intervals of 6, 18, and 24 h. Results: Among 7813 children visits in this study, 6044, 1941, and 2771 underwent repeated CBC, ELE, and CRP tests, respectively. The mean ages of patients undergoing CRP, ELE, and CBC testing were 6.33 ± 5.38, 7.91 ± 5.71, and 5.08 ± 5.28 years, respectively. The majority were of middle socio-economic class, with 66.61–71.24% living in urban areas. Pain was the predominant presented complaint (83.69–85.99%), and in most cases (83.69–85.99%), the examination was conducted by a pediatrician. The DT model was developed and evaluated on training and validation cohorts, and it demonstrated high accuracy in predicting the need for repeat CBC and ELE tests but not CRP. Performance of the DT model significantly exceeded that of the logistic regression model. Conclusions: The data-driven guide derived from the DT model provides clinicians with a practical, interpretable tool to minimize unnecessary repeat laboratory testing, thereby enhancing patient care and optimizing healthcare resource utilization. Full article
(This article belongs to the Special Issue Artificial Intelligence for Health and Medicine)
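
The study's criterion for a "justified" repeat test — the value moved between normal and abnormal ranges or changed by at least 20% — maps to a small rule, sketched below. The CRP reference range is a placeholder, and this is the labeling criterion only, not the published decision tree model.

```python
def repeat_justified(first: float, second: float,
                     low: float, high: float,
                     rel_change: float = 0.20) -> bool:
    """True when a repeat result is 'justified' under the study's criteria:
    the value moved between normal and abnormal ranges, or changed by >= rel_change."""
    first_normal = low <= first <= high
    second_normal = low <= second <= high
    crossed_range = first_normal != second_normal
    changed = abs(second - first) >= rel_change * abs(first) if first != 0 else second != 0
    return crossed_range or changed

# Hypothetical CRP values (mg/L) with a placeholder reference range of 0-5.
print(repeat_justified(3.0, 12.0, low=0.0, high=5.0))   # True: crossed into the abnormal range
print(repeat_justified(40.0, 44.0, low=0.0, high=5.0))  # False: still abnormal, only +10% change
```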

18 pages, 4456 KiB  
Article
Study on the Filling and Plugging Mechanism of Oil-Soluble Resin Particles on Channeling Cracks Based on Rapid Filtration Mechanism
by Bangyan Xiao, Jianxin Liu, Feng Xu, Liqin Fu, Xuehao Li, Xianhao Yi, Chunyu Gao and Kefan Qian
Processes 2025, 13(8), 2383; https://doi.org/10.3390/pr13082383 - 27 Jul 2025
Abstract
Channeling in cementing causes interlayer interference, severely restricting oilfield recovery. Existing channeling plugging agents, such as cement and gels, often lead to reservoir damage or insufficient strength. Oil-soluble resin (OSR) particles show great potential in selective plugging of channeling fractures due to their excellent oil solubility, temperature/salt resistance, and high strength. However, their application is limited by the efficient filling and retention in deep fractures. This study innovatively combines the OSR particle plugging system with the mature rapid filtration loss plugging mechanism in drilling, systematically exploring the influence of particle size and sorting on their filtration, packing behavior, and plugging performance in channeling fractures. Through API filtration tests, visual fracture models, and high-temperature/high-pressure (100 °C, salinity 3.0 × 105 mg/L) core flow experiments, it was found that well-sorted large particles preferentially bridge in fractures to form a high-porosity filter cake, enabling rapid water filtration from the resin plugging agent. This promotes efficient accumulation of OSR particles to form a long filter cake slug with a water content <20% while minimizing the invasion of fine particles into matrix pores. The slug thermally coalesces and solidifies into an integral body at reservoir temperature, achieving a plugging strength of 5–6 MPa for fractures. In contrast, poorly sorted particles or undersized particles form filter cakes with low porosity, resulting in slow water filtration, high water content (>50%) in the filter cake, insufficient fracture filling, and significantly reduced plugging strength (<1 MPa). Finally, a double-slug strategy is adopted: small-sized OSR for temporary plugging of the oil layer injection face combined with well-sorted large-sized OSR for main plugging of channeling fractures. This strategy achieves fluid diversion under low injection pressure (0.9 MPa), effectively protects reservoir permeability (recovery rate > 95% after backflow), and establishes high-strength selective plugging. This study clarifies the core role of particle size and sorting in regulating the OSR plugging effect based on rapid filtration loss, providing key insights for developing low-damage, high-performance channeling plugging agents and scientific gradation of particle-based plugging agents. Full article
(This article belongs to the Section Chemical Processes and Systems)

18 pages, 5492 KiB  
Article
A Novel Variable Stiffness Torque Sensor with Adjustable Resolution
by Zhongyuan Mao, Yuanchang Zhong, Xuehui Zhao, Tengfei He and Sike Duan
Micromachines 2025, 16(8), 868; https://doi.org/10.3390/mi16080868 - 27 Jul 2025
Abstract
In rotating machinery, the demands for torque sensor resolution and range in various torque measurements are becoming increasingly stringent. This paper presents a novel variable stiffness torque sensor designed to meet the demands for high resolution or a large range under varying measurement conditions. Unlike traditional strain gauge-based torque sensors, this sensor combines the advantages of torsion springs and magnetorheological fluid (MRF) to achieve dynamic adjustments in both resolution and range. Specifically, the stiffness of the elastic element is adjusted by altering the shear stress of the MRF via an applied magnetic field while simultaneously harnessing the high sensitivity of the torsion spring. The stiffness model is established and validated for accuracy through finite element analysis. A screw modulation-based angle measurement method is proposed for the first time, offering high non-contact angle measurement accuracy and resolving eccentricity issues. The performance of the sensor prototype is evaluated using a self-developed power-closed torque test bench. The experimental results demonstrate that the sensor exhibits excellent linearity, hysteresis, and repeatability while effectively achieving dynamic continuous adjustment of resolution and range. Full article

26 pages, 2330 KiB  
Article
Enhanced Dung Beetle Optimizer-Optimized KELM for Pile Bearing Capacity Prediction
by Bohang Chen, Mingwei Hai, Gaojian Di, Bin Zhou, Qi Zhang, Miao Wang and Yanxiu Guo
Buildings 2025, 15(15), 2654; https://doi.org/10.3390/buildings15152654 - 27 Jul 2025
Abstract
The safety associated with the bearing capacity of pile foundations is intrinsically linked to the overall safety, stability, and economic viability of structural systems. In response to the need for rapid and precise predictions of pile bearing capacity, this study introduces a kernel extreme learning machine (KELM) prediction model optimized through a multi-strategy improved beetle optimization algorithm (IDBO), referred to as the IDBO-KELM model. The model utilizes the pile length, pile diameter, average effective vertical stress, and undrained shear strength as input variables, with the bearing capacity serving as the output variable. Initially, experimental data on pile bearing capacity was gathered from the existing literature and subsequently normalized to facilitate effective integration into the model training process. A detailed introduction of the multi-strategy improved beetle optimization algorithm (IDBO) is provided, with its superior performance validated through 23 benchmark functions. Furthermore, the Wilcoxon rank sum test was employed to statistically assess the experimental outcomes, confirming the IDBO algorithm’s superiority over other prevalent metaheuristic algorithms. The IDBO algorithm was then utilized to optimize the hyperparameters of the KELM model for predicting pile bearing capacity. In conclusion, the statistical metrics for the IDBO-KELM model demonstrated a root mean square error (RMSE) of 4.7875, a coefficient of determination (R2) of 0.9313, and a mean absolute percentage error (MAPE) of 10.71%. In comparison, the baseline KELM model exhibited an RMSE of 6.7357, an R2 of 0.8639, and an MAPE of 18.47%. This represents an improvement exceeding 35%. These findings suggest that the IDBO-KELM model surpasses the KELM model across all evaluation metrics, thereby confirming its superior accuracy in predicting pile bearing capacity. Full article
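
The kernel extreme learning machine at the core of the IDBO-KELM model has a closed-form solution, β = (K + I/C)⁻¹ y, sketched below with an RBF kernel and synthetic pile data; C and the kernel width γ are the hyperparameters the IDBO step would tune, here simply fixed by hand.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELM:
    """Kernel extreme learning machine: beta = (K + I/C)^(-1) y, f(x) = k(x, X) beta."""
    def __init__(self, C=100.0, gamma=0.5):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = np.asarray(X, float)
        K = rbf_kernel(self.X, self.X, self.gamma)
        self.beta = np.linalg.solve(K + np.eye(len(X)) / self.C, np.asarray(y, float))
        return self

    def predict(self, X):
        return rbf_kernel(np.asarray(X, float), self.X, self.gamma) @ self.beta

# Synthetic stand-in: (pile length, diameter, vertical stress, undrained shear strength) -> capacity
rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = 50 * X[:, 0] + 30 * X[:, 1] + 20 * X[:, 2] * X[:, 3] + rng.normal(0, 1, 200)
model = KELM(C=100.0, gamma=0.5).fit(X[:160], y[:160])
pred = model.predict(X[160:])
print(f"test RMSE = {np.sqrt(np.mean((pred - y[160:]) ** 2)):.3f}")
```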
