Search Results (815)

Search Parameters:
Keywords = tuning parameter selection

28 pages, 7001 KB  
Article
Thermal Intelligence for Hydro-Generators: Data-Driven Prediction of Stator Winding Temperature Under Real Operating Conditions
by Zangpo, Munira Batool and Imtiaz Madni
Energies 2026, 19(7), 1671; https://doi.org/10.3390/en19071671 (registering DOI) - 28 Mar 2026
Abstract
Hydropower remains one of the primary sources of power generation. It can be operated as either a base-load or peak-load plant due to its rapid, easy start-up and shut-down capability. However, power plants, old or new, need to be operated and maintained optimally to meet energy demand and maximise economic returns. While older plants without digital controls such as Supervisory Control and Data Acquisition (SCADA) systems are unable to leverage evolving technologies including big data and Artificial Intelligence (AI), newer plants, or plants that already have some form of data acquisition system, can leverage these platforms for efficient operation, monitoring and fault diagnosis. Thus, an Artificial Neural Network (ANN), a machine learning (ML) algorithm, was chosen for this case study to predict the generator’s operational stator temperature from six parameters that could potentially affect it. Real data from the 336 MW Chhukha Hydropower Plant (CHP) in Bhutan were used to train the ANN. Temperature prediction with the ANN in MATLAB® yielded an R² (coefficient of determination) of 96.8%, which is impressive but could be further improved through various optimisation and tuning methods with increased data volume and complexity. The ANN’s predictive performance was validated against other regression models and found to outperform them. This demonstrates its capability to predict and detect generator temperature faults before failures occur, thereby enhancing hydropower operation and maintenance (O&M) efficiency. Model interpretation was also performed using SHapley Additive exPlanations (SHAP). Full article
(This article belongs to the Section F: Electrical Engineering)
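As a rough illustration of the approach (not the authors' MATLAB model), the sketch below trains a one-hidden-layer network on synthetic stand-in data with six input features and scores it with the coefficient of determination (R²), the metric the abstract reports. All data and network sizes here are assumptions for demonstration only.

```python
import numpy as np

# Synthetic stand-in for the six selected operating parameters and the
# stator-temperature target; NOT the plant's real data.
rng = np.random.default_rng(0)
N, D, H = 400, 6, 32
X = rng.uniform(-1.0, 1.0, size=(N, D))
y = 3*X[:, 0] + 2*X[:, 1]*X[:, 2] - X[:, 3] + 0.5*np.sin(3*X[:, 4])
y = (y - y.mean()) / y.std()                      # standardise the target

# One hidden tanh layer, trained with full-batch gradient descent on MSE.
W1 = rng.normal(0, 0.5, size=(D, H)); b1 = np.zeros(H)
w2 = rng.normal(0, 0.5, size=H);      b2 = 0.0
lr = 0.1
for _ in range(4000):
    Hid = np.tanh(X @ W1 + b1)
    pred = Hid @ w2 + b2
    err = pred - y                                # dL/dpred for MSE loss
    g2 = Hid.T @ err / N
    gb2 = err.mean()
    dHid = np.outer(err, w2) * (1 - Hid**2)       # back-prop through tanh
    g1 = X.T @ dHid / N
    gb1 = dHid.mean(axis=0)
    W1 -= lr*g1; b1 -= lr*gb1; w2 -= lr*g2; b2 -= lr*gb2

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred)**2)
    ss_tot = np.sum((y_true - y_true.mean())**2)
    return 1 - ss_res / ss_tot

r2 = r2_score(y, np.tanh(X @ W1 + b1) @ w2 + b2)
```

On real plant data one would of course hold out a validation set before reporting R², as the study does when comparing against other regression models.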

18 pages, 460 KB  
Article
Lower Bounds for the Asymptotic Relative Efficiency of Huber Regression
by Xiaoyi Wang and Le Zhou
Mathematics 2026, 14(7), 1138; https://doi.org/10.3390/math14071138 (registering DOI) - 28 Mar 2026
Abstract
Huber regression serves as a prominent robust alternative to ordinary least squares (OLS), particularly in the presence of heavy-tailed error distributions. While the asymptotic relative efficiency (ARE) of Huber regression is well documented for the standard normal distribution, its worst-case efficiency across the class of all continuous and symmetric error distributions remains an important theoretical question. In this paper, we establish positive lower bounds for the ARE of Huber regression relative to OLS. By strategically selecting the robustification parameter based on the moments or quantiles of the error distribution, we first prove that the ARE is uniformly bounded away from zero across all continuous and symmetric error distributions. This result guarantees a baseline level of efficiency for Huber regression, sharing a similar theoretical spirit with the celebrated lower bound of the Wilcoxon rank estimator. Utilizing the empirical process theory, we further establish that the relative efficiency of Huber regression remains unchanged if the theoretical tuning parameter is replaced by an estimator with a suitable convergence rate. Simulation studies are conducted to examine the performance of Huber regression under the proposed tuning strategies. Full article
(This article belongs to the Special Issue Computational Statistics and Data Analysis, 3rd Edition)
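A minimal numerical sketch of the setting studied above: Huber regression fit by iteratively reweighted least squares, with the robustification parameter set from a quantile of the residuals (the abstract's quantile-based tuning), compared against OLS under heavy-tailed errors. The solver, data, and quantile level are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.standard_t(df=1.5, size=n)   # heavy-tailed noise

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# IRLS for the Huber loss: weights w_i = min(1, c / |r_i|), with the
# threshold c re-tuned each pass from the 0.8-quantile of |residuals|.
beta = beta_ols.copy()
for _ in range(50):
    r = y - X @ beta
    c = np.quantile(np.abs(r), 0.8)
    w = np.minimum(1.0, c / np.maximum(np.abs(r), 1e-12))
    Xw = X * w[:, None]                              # rows scaled by weights
    beta = np.linalg.solve(Xw.T @ X, Xw.T @ y)       # weighted normal equations
beta_huber = beta
```

Under t(1.5) errors OLS has unbounded variance, while the quantile-tuned Huber fit stays close to the true coefficients, which is the behaviour the paper's lower bounds formalise.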
29 pages, 2733 KB  
Article
Productivity Prediction in Tight Oil Reservoirs: A Stacking Ensemble Approach with Hybrid Feature Selection
by Zhengyang Kang, Yong Zheng, Tianyang Zhang, Haoyu Chen, Xiaoyan Zhou, Quanyu Cai and Yiran Sun
Processes 2026, 14(7), 1089; https://doi.org/10.3390/pr14071089 - 27 Mar 2026
Abstract
To address the challenges of low accuracy and complex influencing factors in predicting horizontal well fracturing productivity during the development of unconventional oil and gas resources such as tight oil, this paper proposes a productivity prediction framework based on an improved feature selection method and an ensemble learning model. This study employs a fusion analysis using the entropy weight method to combine Pearson correlation analysis and improved gray relational analysis (IGRA) for feature selection. Thirteen machine learning models were tested with six distinct parameter combinations to construct a Stacking-based ensemble learning model, with base models including Random Forest (RF), Ridge Regression (RR), and Artificial Neural Network (ANN). Particle Swarm Optimization (PSO) was employed to optimize hyperparameters, followed by interpretability analysis using SHapley Additive exPlanations (SHAP). The results indicate that the model with fused weights demonstrated optimal performance. The Stacking model achieved significantly improved accuracy after PSO optimization, with the coefficient of determination increasing by 4.9%, outperforming all comparison models. Engineering guidance is provided: Under current geological conditions, sand ratio and displacement fluid volume require fine-tuning to prevent over-treatment. Fracturing design should implement differentiated strategies based on the target sand body thickness. This study not only delivers a high-precision production prediction tool but also offers decision support for efficient unconventional oil and gas field development through its exceptional interpretability. Full article
(This article belongs to the Section Petroleum and Low-Carbon Energy Process Engineering)
42 pages, 2464 KB  
Article
Energy-Aware Multilingual Evaluation of Large Language Models
by I. de Zarzà, Mauro Liz, J. de Curtò and Carlos T. Calafate
Electronics 2026, 15(7), 1395; https://doi.org/10.3390/electronics15071395 - 27 Mar 2026
Abstract
The rapid deployment of Large Language Models (LLMs) in multilingual, production-scale systems has made inference-time energy consumption a critical yet systematically under-evaluated dimension of model quality. While accuracy-centric benchmarks dominate current evaluation practice, they fail to capture the energy cost of reasoning, particularly across languages and task complexities where consumption profiles diverge substantially. In this work, we present a comprehensive energy–performance evaluation of five instruction-tuned LLMs, spanning Transformer, Grouped-Query Attention, and State Space Model architectures, across thirteen typologically diverse languages and multiple task difficulty levels under controlled GPU-level energy measurement on NVIDIA H200 hardware. Our analysis encompasses 65 model–language configurations totaling over 5100 individual inference runs, supported by rigorous non-parametric statistical testing (Friedman tests, pairwise Wilcoxon signed-rank with Holm correction, and paired Cohen’s d effect sizes). We report four principal findings. First, energy consumption varies up to threefold across models under identical workloads (χ² = 49.42, p = 4.78×10⁻¹⁰, Friedman test), stratifying into three distinct energy regimes driven by architecture and generation dynamics rather than parameter count. Second, energy expenditure and reasoning performance are only weakly coupled, as confirmed by Spearman rank correlation analysis (rₛ = 0.109, p = 0.386). Third, task category and difficulty level introduce substantial and model-dependent variation in both energy demand and performance, with cross-lingual performance variance amplifying at higher difficulty levels. Fourth, language choice acts as a measurable deployment parameter: Romance languages on average achieve lower energy consumption than English across multiple models, while model efficiency rankings shift across languages, yielding language-dependent Pareto-optimal frontiers.
We formalize these trade-offs through multi-objective Pareto analysis and introduce a composite AI Energy Score metric that captures reasoning quality per unit of energy. Of the 65 evaluated configurations, only four are Pareto-optimal, three Mistral-7B configurations at the low-energy extreme and one Phi-4-mini-instruct configuration at the high-performance end, while three of the five models are entirely dominated across all language configurations. These findings provide actionable guidelines for energy-aware model selection in multilingual deployments and support the integration of AI Energy Scores as a standard complementary criterion in LLM evaluation frameworks. Full article
(This article belongs to the Special Issue Data-Related Challenges in Machine Learning: Theory and Application)
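The Pareto screening the abstract describes (65 configurations, 4 non-dominated) reduces to a simple dominance check: a configuration is Pareto-optimal if no other configuration has both lower energy and a higher score. A minimal sketch, with made-up placeholder values rather than the paper's measurements:

```python
def pareto_optimal(configs):
    """configs: list of (name, energy, score); minimise energy, maximise score."""
    front = []
    for name, e, s in configs:
        dominated = any(e2 <= e and s2 >= s and (e2 < e or s2 > s)
                        for _, e2, s2 in configs)
        if not dominated:
            front.append(name)
    return front

# Placeholder (energy, score) pairs: "B" is dominated by "A" (more energy,
# lower score); the other three trade energy against score.
demo = [("A", 10.0, 0.60), ("B", 12.0, 0.55),
        ("C", 20.0, 0.80), ("D", 15.0, 0.70)]
```

Here `pareto_optimal(demo)` keeps A, C, and D, mirroring how the paper's frontier keeps only low-energy and high-performance extremes plus genuine trade-off points.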

24 pages, 2457 KB  
Article
An Enhanced ABC Algorithm with Hybrid Initialization and Stagnation-Guided Search for Parameter-Efficient Text Summarization
by Yun Liu, Yingjing Yao, Wenyu Pei, Mengqi Liu and Hao Gao
Mathematics 2026, 14(7), 1120; https://doi.org/10.3390/math14071120 - 27 Mar 2026
Abstract
The digital transformation of oil and gas pipeline networks has generated substantial volumes of unstructured maintenance documentation from communication systems, creating an urgent need for automated summarization to improve operational efficiency. However, domain-specific text summarization for pipeline communication maintenance remains challenging due to scarce labeled data and the high computational cost of fine-tuning large pretrained models. Parameter-efficient fine-tuning alleviates this issue, but its effectiveness strongly depends on appropriate hyperparameter selection. This paper proposes a unified framework that combines weight-decomposed low-rank adaptation with an enhanced Artificial Bee Colony algorithm for automated hyperparameter optimization. The enhanced algorithm addresses two specific limitations of the standard Artificial Bee Colony algorithm: uninformed random initialization that ignores promising regions, and premature abandonment of stagnated solutions that discards partially useful search directions. These two components represent principled design choices, each targeting a distinct bottleneck in applying swarm intelligence search to high-dimensional mixed-type hyperparameter spaces. The method introduces a hybrid initialization strategy to exploit prior knowledge and a stagnation-guided local search mechanism to refine stagnated solutions instead of discarding them, achieving a better balance between exploration and exploitation. Experimental results on a public Chinese summarization benchmark and an industrial oil and gas pipeline communication maintenance corpus show that the proposed approach consistently outperforms full fine-tuning, manually tuned parameter-efficient methods, and several evolutionary optimization baselines in terms of ROUGE metrics. 
The automated search introduces modest additional computational overhead compared to manual tuning while eliminating expert-dependent hyperparameter configuration and achieving consistent performance gains across both datasets. Overall, the proposed framework provides an efficient and robust solution for adapting large language models to specialized summarization tasks in the context of pipeline communication system maintenance. Full article
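To make the two modified components concrete, here is a toy standard ABC optimizer on a 5-D sphere function. Half the food sources are seeded near a hypothetical prior guess (a crude stand-in for the paper's hybrid initialisation), and a stagnation counter triggers scout resets (which the paper refines rather than discards). This is a generic ABC sketch, not the authors' enhanced variant or their hyperparameter search space.

```python
import numpy as np

rng = np.random.default_rng(3)
dim, n_food, limit, iters = 5, 20, 10, 200
lo, hi = -5.0, 5.0
def f(x): return float(np.sum(x**2))             # toy objective to minimise

prior = np.full(dim, 0.5)                        # hypothetical prior knowledge
foods = rng.uniform(lo, hi, size=(n_food, dim))
foods[: n_food // 2] = prior + rng.normal(0, 0.5, size=(n_food // 2, dim))
fits = np.array([f(x) for x in foods])
trials = np.zeros(n_food, dtype=int)             # stagnation counters

for _ in range(iters):
    probs = 1.0 / (1.0 + fits); probs /= probs.sum()
    # Employed phase visits every source; onlookers revisit fitness-
    # proportionally selected sources with the same neighbour move.
    for i in list(range(n_food)) + list(rng.choice(n_food, n_food, p=probs)):
        k = (i + 1 + rng.integers(n_food - 1)) % n_food   # partner != i
        j = rng.integers(dim)
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        cand = np.clip(cand, lo, hi)
        fc = f(cand)
        if fc < fits[i]:
            foods[i], fits[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1
    # Scout phase: standard ABC randomly resets stagnated sources; the
    # paper instead applies a stagnation-guided local search to them.
    for i in np.where(trials > limit)[0]:
        foods[i] = rng.uniform(lo, hi, size=dim)
        fits[i], trials[i] = f(foods[i]), 0

best = fits.min()
```

In the paper the "position" would be a mixed-type hyperparameter vector for the low-rank adaptation, and the objective a validation ROUGE score rather than a sphere function.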

28 pages, 8120 KB  
Article
Genetic Programming Algorithm Evolving Robust Unary Costs for Efficient Graph Cut Segmentation
by Reem M. Mostafa, Emad Mabrouk, Ahmed Ayman, Hamdy Z. Zidan and Abdelmonem M. Ibrahim
Algorithms 2026, 19(4), 256; https://doi.org/10.3390/a19040256 - 27 Mar 2026
Abstract
Accurate cell and nuclei segmentation remains challenging due to the sensitivity of classical graph-cut methods to parameter tuning. While deep learning models like U-Net offer strong performance, they require large annotated datasets and substantial GPU resources. This work presents a cost-effective alternative: a genetic programming (GP) framework that jointly optimizes unary cost functions and regularization parameters for graph-cut segmentation, coupled with automatic seed selection. Evaluation is conducted under two distinct protocols: (1) oracle-guided per-image optimization, establishing upper-bound performance (mean Dice 0.822, IoU 0.733), and (2) true generalization via train/test split, where expressions learned on 50 images are applied to 50 unseen images (mean Dice 0.695, IoU 0.588). The fixed-model generalization still significantly outperforms the baseline graph cut (+0.158 Dice, p<0.001). Cross-dataset validation on MoNuSeg (H&E histopathology) achieves a Dice score of 0.823 with the fixed GP model, significantly outperforming the baseline (+0.272). This result uses a single fixed model—the best-performing expression from BBBC038 training—applied in a zero-shot manner to MoNuSeg without any retraining or domain adaptation. All 100 images showed non-negative improvement under oracle optimization in the experiments. The method requires no GPU training, runs in 550 s per image for oracle search, and offers interpretable symbolic cost functions. Code and annotations are provided to ensure reproducibility. This approach offers a practical, interpretable alternative in resource-constrained biomedical imaging settings. Full article
(This article belongs to the Special Issue Bio-Inspired Algorithms: 2nd Edition)
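The Dice and IoU figures quoted above are the standard overlap metrics for binary segmentation masks; a small reference sketch (not the paper's code):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient 2|A∩B| / (|A|+|B|) for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

def iou(a, b):
    """Intersection over union |A∩B| / |A∪B| for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

pred = np.array([[1, 1, 0], [0, 1, 0]])          # toy predicted mask
gt   = np.array([[1, 0, 0], [0, 1, 1]])          # toy ground-truth mask
```

The two metrics are monotonically related (IoU = D / (2 − D)), which is why papers often report both from the same evaluation.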

20 pages, 8342 KB  
Article
The State of Health Prediction of Li-Ion Batteries Based on ISMA-HKELM
by Yao Jiang, Yuanzhao Deng, Yan Ai and Yuesheng Zhu
Energies 2026, 19(7), 1627; https://doi.org/10.3390/en19071627 - 26 Mar 2026
Abstract
Accurately predicting the State of Health (SOH) of lithium-ion batteries (LIBs) is essential to ensure their long-term stable and safe operation. This paper proposes a novel model, the ISMA-HKELM, which is an Improved Slime Mould Algorithm (ISMA)-optimized Hybrid Kernel Extreme Learning Machine (HKELM), designed for high-precision SOH estimation. We first selected the equal voltage rise time and equal voltage drop time as indirect health indicators, and their validity was rigorously confirmed through Pearson and Spearman correlation tests. Subsequently, the ISMA was utilized to effectively tune the key parameters of the HKELM model. Experimental results demonstrate that the ISMA-HKELM model exhibits superior prediction performance across multiple public datasets, achieving an average R2 value of more than 0.99. Furthermore, the model shows significantly lower Mean Absolute Error (MAE), Mean Bias Error (MBE), and Root Mean Square Error (RMSE) compared to other control models. These results fully prove the advancement and validity of the ISMA-HKELM model for LIB SOH estimation. Full article
(This article belongs to the Section F5: Artificial Intelligence and Smart Energy)
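A hybrid-kernel ELM of the kind named above admits a compact closed form: mix two kernels, then solve a ridge-regularised linear system for the output weights. The sketch below assumes a convex RBF + polynomial mixture and uses a synthetic smooth target rather than battery health indicators; the mixing weight, kernel parameters, and regularisation C are exactly what the paper's ISMA would tune.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 200
X = rng.uniform(-1, 1, size=(N, 2))
y = np.sin(3*X[:, 0]) + X[:, 1]**2               # smooth stand-in target

def hybrid_kernel(A, B, w=0.6, gamma=2.0, degree=2, c0=1.0):
    """Convex mixture of an RBF and a polynomial kernel (assumed form)."""
    sq = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    rbf = np.exp(-gamma * sq)
    poly = (A @ B.T + c0)**degree
    return w * rbf + (1 - w) * poly

C = 100.0                                        # regularisation strength
K = hybrid_kernel(X, X)
beta = np.linalg.solve(K + np.eye(N) / C, y)     # closed-form output weights

def predict(Xq):
    return hybrid_kernel(Xq, X) @ beta

pred = predict(X)
r2 = 1 - ((y - pred)**2).sum() / ((y - y.mean())**2).sum()
```

Because the output weights are a single linear solve, evaluating one hyperparameter candidate is cheap, which is what makes metaheuristic tuning (here, the ISMA) practical for this model class.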

32 pages, 5732 KB  
Article
Multi-Objective Optimization of the Grinding Process in a Spring-Rotor Mill Using Regression-Based Modeling
by Aidos Baigunusov, Bekbolat Moldakhanov, Alina Kim, Mikhail Doudkin, Vladimir Yakovlev, Piotr Stryczek and Tadeusz Lesniewski
Machines 2026, 14(3), 356; https://doi.org/10.3390/machines14030356 - 23 Mar 2026
Abstract
This study addresses the problem of improving the efficiency of fine grinding of bulk materials in a spring-rotor mill. The objective is to determine technologically sound operating parameters based on mathematical modeling, design of experiments, and multi-objective optimization. The methodology relies on a full-factorial experimental design according to the Hartley plan, with five control factors: rotor rotational speed, material loading ratio, overlap of the working zones, grinding chamber clearance, and grinding duration. The analyzed responses include grinding fineness, throughput, power consumption, specific energy consumption, and specific metal intensity. Based on experimental data, adequate second-order polynomial regression models were obtained for all response variables using the least-squares method. Statistical analysis showed that grinding time and rotational speed had the most significant influence on the process. Multi-objective optimization using the weighted-sum method enabled the identification of optimal operating conditions that balance product quality, throughput, and energy consumption. Verification experiments confirmed the adequacy of the developed models. Practical implementation of the optimized regimes increases throughput by 15–20% while simultaneously reducing energy consumption by 8–12% compared with empirically selected operating conditions. The proposed models and recommendations provide a quantitative basis for tuning and controlling grinding equipment in processing industries. Full article
(This article belongs to the Section Material Processing Technology)
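The weighted-sum step can be sketched as scalarising two fitted response surfaces, one to maximise and one to minimise, and searching the factor space. The quadratics below are made-up stand-ins for the paper's second-order regression models (raw responses are weighted directly here; in practice incommensurate responses would be normalised first):

```python
import numpy as np

def fineness(x):
    """Toy response surface: higher is better."""
    return 1.0 - (x[0] - 0.6)**2 - 0.5*(x[1] - 0.4)**2

def energy(x):
    """Toy response surface: lower is better."""
    return 0.2 + (x[0] - 0.2)**2 + (x[1] - 0.7)**2

# Weighted-sum scalarisation over normalised factors in [0, 1]^2:
# minimise  -w * fineness + (1 - w) * energy  on a grid.
w = 0.5                                          # trade-off weight
def scalarised(x): return -w*fineness(x) + (1 - w)*energy(x)

grid = [(a, b) for a in np.linspace(0, 1, 101) for b in np.linspace(0, 1, 101)]
best = min(grid, key=scalarised)
```

For these toy quadratics the scalarised optimum is at (0.4, 0.6), exactly between the two objectives' individual optima; sweeping `w` traces out the compromise regimes the study reports.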

27 pages, 4265 KB  
Article
Condition Monitoring Model Development for Belt Systems Using Hybrid CNN–BiLSTM Deep-Learning Techniques
by Mortda Mohammed Sahib, Philipp Plänitz, Matthias Hackert-Oschätzchen and Christoph Lerez
Machines 2026, 14(3), 348; https://doi.org/10.3390/machines14030348 - 19 Mar 2026
Abstract
Predictive maintenance aims to monitor equipment conditions through data-driven analysis and estimate failures in advance, which enables the scheduling of maintenance prior to equipment breakdown. In this work, a deep-learning neural network is used to predict the condition of a belt-drive system. A combined Convolutional Neural Network with Bi-directional Long Short-Term Memory (CNN-BiLSTM) model processes the operational parameters along with vibration signals to predict belt-drive system conditions in two separate binary classifications: faulty or healthy, and unbalanced or balanced. The data flow in the CNN-BiLSTM model begins with the CNN, which extracts features from the vibration signals and performs essential pattern detection. Subsequently, the BiLSTM captures long-term temporal relationships that cannot be captured by the CNN alone. To predict the targeted conditions, a fully connected layer with a classifier is placed at the BiLSTM outputs. For efficient model training, the data is preprocessed through denoising, augmentation, and normalization. Additionally, hyperparameter tuning is conducted to explore different model configurations and select the optimal ones on the basis of relevant performance. An ablation study investigating the CNN and BiLSTM models individually confirms that combining both components is essential for accurate classification. Moreover, cross-validation is used to assess the proposed model's generality by evaluating it on unseen data across rotational speeds, demonstrating robust performance under varying operating conditions. The key added value of this study lies in integrating deep-learning techniques to address a knowledge gap by using raw vibration signals to establish intelligent monitoring systems, which represents a new scientific contribution to the predictive maintenance of belt-drive systems. Full article
(This article belongs to the Special Issue AI-Driven Reliability Analysis and Predictive Maintenance)

21 pages, 30483 KB  
Article
Preliminary Assessment of ICON-LAM Performance in Romania: Sensitivity Studies
by Amalia Iriza-Burcă, Ioan-Ştefan Gabrian, Ştefan Dinicilă, Mihaela Silvana Neacşu and Rodica Claudia Dumitrache
Atmosphere 2026, 17(3), 315; https://doi.org/10.3390/atmos17030315 - 19 Mar 2026
Abstract
The Earth system model ICON (ICOsahedral Nonhydrostatic general circulation) is a flexible framework that can be configured and tuned for various applications such as weather forecasting, simulations of aerosols and trace gases, and climate modelling. The numerical weather prediction component of ICON is used in limited area mode (ICON-LAM) in Romania to obtain realistic weather simulations that support operational forecasting activities. The sensitivity of ICON-LAM is preliminarily evaluated for the geographical area of Romania. Numerical simulations using two parameterization schemes for radiation processes, two convection settings and different values for the laminar resistance of heat transfer from the surface to the air are evaluated against a control run employed for operational forecasts at the National Meteorological Administration. The validation focuses on the precipitation field and continuous surface parameters. All configurations were integrated for a short period in summer when forecasted precipitation was strongly overestimated. Selected configurations were then evaluated for winter cases. The experiment with shallow convection only, the ecRad radiation parameterization, and a laminar resistance value of 10 emerged as the best fit for Romania. This configuration (considered optimal) was evaluated alongside the operational control run for August 2022. Overall results indicate that the selected optimal configuration generally outperforms the control run both with regard to precipitation and in forecasting surface parameters. This experiment has been adapted and implemented in the operational workflow. Full article
(This article belongs to the Section Meteorology)

30 pages, 4114 KB  
Article
TricP: A Novel Approach for Human Activity Recognition Using Tricky Predator Optimization Based on Inception and LSTM
by Palak Girdhar, Muslem Al-Saidi, Prashant Johri, Deepali Virmani, Hussein Taha and Oday Ali Hassen
Telecom 2026, 7(2), 32; https://doi.org/10.3390/telecom7020032 - 19 Mar 2026
Abstract
Human Activity Recognition (HAR) is a pivotal research area for applications such as automated surveillance, smart homes, security, healthcare, and human behavior analysis. Traditional machine-learning approaches often rely on manual feature engineering, which can limit generalization. Although deep learning has improved HAR through automatic representation learning, achieving high detection performance under computational constraints remains challenging. This paper proposes an efficient HAR framework that combines deep learning with hybrid optimization. Surveillance videos are first decomposed into frames, and a keyframe selection stage identifies distinctive frames to reduce redundancy and computational cost while preserving informative content. Motion and appearance features are then extracted using Histogram of Oriented Optical Flow (HOOF) and a ResNet-101 model, respectively, and concatenated into a unified feature representation. Classification is performed using an Inception-based Long Short-Term Memory (Incept-LSTM) network, which is fine-tuned via the proposed Tricky Predator Optimization (TricP) over a restricted, low-dimensional parameter vector. TricP is inspired by predator poaching behavior and the social dynamics of Latrans to enhance exploration and exploitation during search. Experiments on the UCF-Crime dataset show that the proposed method achieves 96.84% specificity, 92.16% sensitivity, and 93.62% accuracy. Full article
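The HOOF motion feature mentioned above is a simple descriptor: flow vectors are binned by orientation, weighted by magnitude, and the histogram is L1-normalised. A generic sketch with synthetic flow (not the paper's exact variant, which may differ in binning and reflection handling):

```python
import numpy as np

def hoof(flow_x, flow_y, n_bins=8):
    """Histogram of oriented optical flow, magnitude-weighted, L1-normalised."""
    mag = np.hypot(flow_x, flow_y).ravel()
    ang = np.arctan2(flow_y, flow_x).ravel()      # orientations in (-pi, pi]
    bins = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.clip(np.digitize(ang, bins) - 1, 0, n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, idx, mag)                     # magnitude-weighted voting
    total = hist.sum()
    return hist / total if total > 0 else hist

rng = np.random.default_rng(5)
fx, fy = rng.normal(size=(16, 16)), rng.normal(size=(16, 16))   # synthetic flow
h = hoof(fx, fy)
```

In the paper this motion histogram is concatenated with ResNet-101 appearance features before classification.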

21 pages, 1823 KB  
Article
Machine Learning-Based Models for Identifying Learning Disabilities
by Wun-Tsong Chaou, Yu-Hui Liu, Ying-Lei Lin, Chao-Chien Huang and Ping-Feng Pai
Electronics 2026, 15(6), 1278; https://doi.org/10.3390/electronics15061278 - 18 Mar 2026
Abstract
Timely and accurate identification of learning disability (LD) severity is critical for early screening and for guiding appropriate clinical and educational interventions. This study developed a machine learning model with feature selection and hyperparameter optimization (MLFSHO) architecture to predict the severity of LD using heterogeneous clinical data with clinical expert labeling. Four machine learning models, including eXtreme Gradient Boosting (XGB), Categorical Boosting (CAT), Light Gradient Boosting Machine (LGBM), and Multi-Layer Perceptron (MLP), were implemented within the MLFSHO architecture, which integrates HSIC-based feature selection and Optuna-based joint optimization of feature-related parameters and model hyperparameters. Experimental results indicated that all machine learning-based (ML-based) models achieved an average accuracy of more than 85%. In addition, hyperparameter optimization consistently improved predictive performance in most cases. Joint optimization of feature-related parameters and model hyperparameters achieved the best overall performance across models. These findings suggest that treating feature selection and hyperparameter tuning as a unified optimization problem can improve the reliability of severity prediction in learning disabilities and support early screening in clinical settings. The proposed MLFSHO architecture provides a systematic approach for modeling heterogeneous clinical data and improves the performance of LD severity prediction. Full article
(This article belongs to the Special Issue Feature Papers in "Computer Science & Engineering", 2nd Edition)
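The HSIC criterion behind the feature selection above measures (possibly nonlinear) dependence between a feature and the label via centred kernel matrices. A sketch of the biased estimator with RBF kernels on synthetic stand-in data (kernel bandwidths and data are assumptions, not the study's configuration):

```python
import numpy as np

def rbf_gram(v, gamma=1.0):
    sq = (v[:, None] - v[None, :])**2
    return np.exp(-gamma * sq)

def hsic(x, y, gamma=1.0):
    """Biased HSIC estimate: tr(K H L H) / (n-1)^2 with centring matrix H."""
    n = len(x)
    K, L = rbf_gram(x, gamma), rbf_gram(y, gamma)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1)**2

rng = np.random.default_rng(6)
n = 200
x = rng.normal(size=n)
y_dep = x**2 + 0.1*rng.normal(size=n)            # nonlinearly dependent on x
y_ind = rng.normal(size=n)                       # independent of x
```

A feature with a strong nonlinear relationship to the label (like `y_dep` here) scores markedly higher than an independent one, which is what lets HSIC rank features that linear correlation would miss.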

28 pages, 5762 KB  
Article
Optimization of Technological Parameters of the Working Process of a Spring–Rotor Grinder Based on Mathematical Modeling
by Bekbolat Moldakhanov, Alina Kim, Aidos Baigunusov, Mikhail Doudkin, Vladimir Yakovlev, Piotr Stryczek and Tadeusz Lesniewski
Appl. Sci. 2026, 16(6), 2900; https://doi.org/10.3390/app16062900 - 17 Mar 2026
Abstract
This study addresses the problem of improving the efficiency of fine grinding of bulk materials in an original-design double spring–rotor grinder equipped with a separating diaphragm with a variable discharge orifice. The purpose of the work is to determine rational operating parameters that ensure a balanced trade-off between grinding quality, throughput, and energy consumption. The methodology is based on a full-factorial experimental design (Hartley plan) with five controllable parameters—rotational speed, material filling ratio, overlap of the working zones, grinding chamber clearance, and grinding duration—followed by response surface modeling and multi-objective optimization. The main responses included grinding fineness, throughput, drive power, specific energy consumption, and specific metal intensity. Adequate second-order regression models were obtained (R2 > 0.93), and analysis of variance confirmed the statistical significance of the main effects and interactions. Multi-objective optimization enabled the identification of operating regimes that increase throughput by 15–20% while reducing specific energy consumption by 8–12% compared with empirical settings. The proposed approach provides a quantitative basis for selecting compromise operating conditions and can be applied to the tuning and control of spring–rotor grinding equipment in processing industries. Full article
(This article belongs to the Section Mechanical Engineering)
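As an illustration of the response-surface step the abstract describes, the sketch below fits a full second-order regression model (intercept, linear, interaction, and pure quadratic terms) by ordinary least squares and reports R². This is a minimal, generic sketch with synthetic two-factor data; the function names, factor count, and data are illustrative assumptions, not taken from the paper, which used a five-factor Hartley plan.

```python
import numpy as np

def quadratic_design_matrix(X):
    """Build a second-order design matrix: intercept, linear,
    pairwise-interaction, and pure quadratic columns."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

def fit_response_surface(X, y):
    """Least-squares fit of the second-order model; returns the
    coefficient vector and the coefficient of determination R^2."""
    A = quadratic_design_matrix(X)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    residuals = y - A @ beta
    r2 = 1.0 - residuals @ residuals / np.sum((y - y.mean()) ** 2)
    return beta, r2

# Illustrative use on synthetic data generated from a known quadratic.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(60, 2))
y = 2 + X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 0] * X[:, 1] + X[:, 0] ** 2
beta, r2 = fit_response_surface(X, y)
```

Once such models are fitted for each response (fineness, throughput, power, etc.), multi-objective optimization amounts to searching the factor space for settings that trade the fitted surfaces off against one another.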
17 pages, 1013 KB  
Article
Can Eretmocerus eremicus Assess Oviposition Sites with Varying Host Densities and Predation Risks, and Make Decisions Based on Scent Cues?
by Luis Enrique Chavarín-Gómez, Víctor Parra-Tabla, Lizette Cicero, Carla Vanessa Sánchez-Hernández, Paola Andrea Palmeros-Suárez and Ricardo Ramírez-Romero
Insects 2026, 17(3), 329; https://doi.org/10.3390/insects17030329 - 17 Mar 2026
Abstract
Parasitoids use different signals to locate their hosts, and these signals can modulate their behavioral decisions. Thus, patch selection and foraging in patches with different characteristics depend on their ability to gather and use such information efficiently. In this study, we evaluated whether the parasitoid Eretmocerus eremicus (Hymenoptera: Aphelinidae), a natural enemy of Trialeurodes vaporariorum (Hemiptera: Aleyrodidae) on tomato plants (Solanum lycopersicum), uses scent cues to select and forage in patches that differ in host density and predation risk. Using choice bioassays in a wind tunnel under a continuous airflow, we recorded patch selection and selection time, as well as foraging parameters, including residence time, oviposition events, and attacks. Our results show that E. eremicus discriminated between sites with and without hosts using scent cues, but discrimination between patches with different host numbers was not detected under our assay conditions. It also distinguished between patches with maximum risk and those without risk, but not between subtle differences in risk. These findings suggest that E. eremicus responded mainly to contrasting olfactory cues rather than to subtle odor differences. From an applied standpoint, our results motivate deeper investigation into how host- and predator-associated olfactory cues could fine-tune parasitoid deployment in biological control. Full article
42 pages, 1179 KB  
Article
Towards Reliable LLM Grading Through Self-Consistency and Selective Human Review: Higher Accuracy, Less Work
by Luke Korthals, Emma Akrong, Gali Geller, Hannes Rosenbusch, Raoul Grasman and Ingmar Visser
Mach. Learn. Knowl. Extr. 2026, 8(3), 74; https://doi.org/10.3390/make8030074 - 16 Mar 2026
Abstract
Large language models (LLMs) show promise for grading open-ended assessments but still exhibit inconsistent accuracy, systematic biases, and limited reliability across assignments. To address these concerns, we introduce SURE (Selective Uncertainty-based Re-Evaluation), a human-in-the-loop pipeline that combines repeated LLM prompting, uncertainty-based flagging, and selective human regrading. Three LLMs—gpt-4.1-nano, gpt-5-nano, and the open-source gpt-oss-20b—graded answers of 46 students to 130 open questions and coding exercises across five assignments. Each student answer was scored 20 times to derive majority-voted predictions and self-consistency-based certainty estimates. We simulated human regrading by flagging low-certainty cases and replacing them with scores from four human graders. We used the first assignment as a training set for tuning certainty thresholds and to explore LLM output diversification via sampling parameters, rubric shuffling, varied personas, multilingual prompts, and post hoc ensembles. We then evaluated the effectiveness and efficiency of SURE on the other four assignments using a fixed certainty threshold. Across assignments, fully automated grading with a single prompt resulted in substantial underscoring, and majority-voting based on 20 prompts improved but did not eliminate this bias. Low certainty (i.e., high output diversity) was diagnostic of incorrect LLM scores, enabling targeted human regrading that improved grading accuracy while reducing manual grading time by 40–90%. Aggregating responses from all three LLMs in an ensemble improved certainty-based flagging and most consistently approached human-level accuracy, with 70–90% of the grades students would receive falling inside human-grader ranges. A reanalysis based on outputs from a more diversified LLM ensemble comprised of gpt-5, codestral-25.01, and llama-3.3-70b-instruct replicated these findings but also suggested that large reasoning models such as gpt-5 might eliminate the need for human oversight of LLM grading entirely. These findings demonstrate that self-consistency-based uncertainty estimation and selective human oversight can substantially improve the reliability and efficiency of AI-assisted grading. Full article
(This article belongs to the Section Learning)
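The core of the pipeline described above — majority voting over repeated scores plus a self-consistency certainty estimate used to flag answers for human review — can be sketched in a few lines. This is a generic illustration under assumed conventions (certainty as the fraction of repeats agreeing with the majority, a single fixed threshold); the function names and threshold value are hypothetical, not SURE's actual implementation.

```python
from collections import Counter

def majority_and_certainty(scores):
    """Majority-voted score and self-consistency certainty,
    defined here as the fraction of repeated scores that
    agree with the majority."""
    counts = Counter(scores)
    score, n = counts.most_common(1)[0]
    return score, n / len(scores)

def flag_for_review(all_scores, threshold=0.8):
    """For each answer's list of repeated LLM scores, return the
    majority-voted finals plus the indices whose certainty falls
    below the threshold and should go to a human grader."""
    finals, flagged = [], []
    for i, scores in enumerate(all_scores):
        score, certainty = majority_and_certainty(scores)
        finals.append(score)
        if certainty < threshold:
            flagged.append(i)
    return finals, flagged

# Illustrative use: three answers, five repeated scores each.
finals, flagged = flag_for_review(
    [[1, 1, 1, 1, 1], [2, 2, 1, 0, 2], [3, 3, 3, 3, 3]]
)
```

In this toy example only the second answer (three of five repeats agreeing, certainty 0.6) would be routed to a human, mirroring how low output diversity lets most answers stay fully automated.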