Search Results (1,474)

Search Parameters:
Keywords = automated model selection

31 pages, 6908 KB  
Article
Parametric Characterization and Multi-Objective Optimization of Low-Pressure Abrasive Water Jets for Biofouling Removal from Net Cages Using Response Surface Methodology and the Entropy Method
by Yingjie Wu, Yongqiang Tu, Bin Deng, Hui Li, Guohong Xiao and Hu Chen
Sustainability 2026, 18(1), 215; https://doi.org/10.3390/su18010215 - 24 Dec 2025
Abstract
Deep-sea cages are highly susceptible to biofouling due to long-term seawater immersion, which promotes the attachment and growth of marine organisms on nets, significantly reducing fish survival. To address this issue, this study explores the use of low-pressure abrasive water jets (LPAWJs) for cage fouling removal through numerical simulation. Based on a Box–Behnken response surface design, nozzle inlet pressure X1, nozzle outlet diameter X2, and target distance X3 were selected as optimization parameters. The peak jet impact force Z1, stable jet impact force Z2, peak abrasive water jet velocity Z3, and peak abrasive particle velocity Z4 were chosen as evaluation indicators to characterize the jet’s instantaneous impact ability, sustained action ability, and dynamic particle behavior. Using the entropy method, weights for each indicator were determined, and the jet’s overall removal capability was calculated. A regression model was developed by integrating numerical simulation with the response surface methodology (RSM), and the optimal parameter combination was identified as X1 = 4.5 MPa, X2 = 10 mm, and X3 = 205.396 mm. Compared with the poorest experimental condition (Condition 1), the jet’s overall removal capability obtained under the optimal parameter combination increases by 101.35%. Experimental validation further confirms that the optimized parameters yield the best oyster-removal performance of the low-pressure abrasive jet, with the average removal rate improving by 100.55% relative to Condition 1. The methodology and results of this study provide a theoretical foundation and technical reference for the design and optimization of automated net-cleaning systems or net-cleaning robots equipped with low-pressure abrasive jets. By integrating the proposed model and operating parameters, future robotic systems will be able to predict and dynamically adjust jet conditions according to fouling characteristics, thereby improving the efficiency, cost-effectiveness, and sustainability of maintenance operations in marine aquaculture. Full article
(This article belongs to the Section Sustainable Oceans)
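For readers unfamiliar with the entropy weighting step mentioned in this abstract, the sketch below shows the standard entropy-weight calculation on a small, invented indicator matrix; the Z1–Z4 values are placeholders, not data from the paper.

```python
import numpy as np

def entropy_weights(X):
    """Standard entropy-weight method: X is an (alternatives x indicators)
    matrix of benefit-type indicator values (larger is better)."""
    # Normalize each indicator column so its entries sum to 1.
    P = X / X.sum(axis=0)
    # Shannon entropy of each indicator, scaled by 1/ln(n alternatives).
    k = 1.0 / np.log(X.shape[0])
    E = -k * (P * np.log(P)).sum(axis=0)
    # Degree of divergence of each indicator -> normalized weights.
    d = 1.0 - E
    return d / d.sum()

# Hypothetical values of the four indicators (Z1..Z4) for three jet conditions.
Z = np.array([
    [120.0,  85.0, 42.0, 35.0],
    [150.0,  95.0, 48.0, 40.0],
    [180.0, 110.0, 55.0, 46.0],
])
w = entropy_weights(Z)
overall = (Z / Z.max(axis=0)) @ w   # simple weighted score per condition
print("weights:", np.round(w, 3), "overall scores:", np.round(overall, 3))
```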
24 pages, 20267 KB  
Review
Artificial Intelligence-Aided Microfluidic Cell Culture Systems
by Muhammad Sohail Ibrahim and Minseok Kim
Biosensors 2026, 16(1), 16; https://doi.org/10.3390/bios16010016 - 24 Dec 2025
Abstract
Microfluidic cell culture systems and organ-on-a-chip platforms provide powerful tools for modeling physiological processes, disease progression, and drug responses under controlled microenvironmental conditions. These technologies rely on diverse cell culture methodologies, including 2D and 3D culture formats, spheroids, scaffold-based systems, hydrogels, and organoid models, to recapitulate tissue-level functions and generate rich, multiparametric datasets through high-resolution imaging, integrated sensors, and biochemical assays. The heterogeneity and volume of these data introduce substantial challenges in pre-processing, feature extraction, multimodal integration, and biological interpretation. Artificial intelligence (AI), particularly machine learning and deep learning, offers solutions to these analytical bottlenecks by enabling automated phenotyping, predictive modeling, and real-time control of microfluidic environments. Recent advances also highlight the importance of technical frameworks such as dimensionality reduction, explainable feature selection, spectral pre-processing, lightweight on-chip inference models, and privacy-preserving approaches that support robust and deployable AI–microfluidic workflows. AI-enabled microfluidic and organ-on-a-chip systems now span a broad application spectrum, including cancer biology, drug screening, toxicity testing, microbial and environmental monitoring, pathogen detection, angiogenesis studies, nerve-on-a-chip models, and exosome-based diagnostics. These platforms also hold increasing potential for precision medicine, where AI can support individualized therapeutic prediction using patient-derived cells and organoids. As the field moves toward more interpretable and autonomous systems, explainable AI will be essential for ensuring transparency, regulatory acceptance, and biological insight. Recent AI-enabled applications in cancer modeling, drug screening, etc., highlight how deep learning can enable precise detection of phenotypic shifts, classify therapeutic responses with high accuracy, and support closed-loop regulation of microfluidic environments. These studies demonstrate that AI can transform microfluidic systems from static culture platforms into adaptive, data-driven experimental tools capable of enhancing assay reproducibility, accelerating drug discovery, and supporting personalized therapeutic decision-making. This narrative review synthesizes current progress, technical challenges, and future opportunities at the intersection of AI, microfluidic cell culture platforms, and advanced organ-on-a-chip systems, highlighting their emerging role in precision health and next-generation biomedical research. Full article
(This article belongs to the Collection Microsystems for Cell Cultures)
24 pages, 3622 KB  
Article
Deep Learning-Based Intelligent Monitoring of Petroleum Infrastructure Using High-Resolution Remote Sensing Imagery
by Nannan Zhang, Hang Zhao, Pengxu Jing, Yan Gao, Song Liu, Jinli Shen, Shanhong Huang, Qihong Zeng, Yang Liu and Miaofen Huang
Processes 2026, 14(1), 28; https://doi.org/10.3390/pr14010028 - 20 Dec 2025
Viewed by 100
Abstract
The rapid advancement of high-resolution remote sensing technology has significantly expanded observational capabilities in the oil and gas sector, enabling more precise identification of petroleum infrastructure. Remote sensing now plays a critical role in providing real-time, continuous monitoring. Manual interpretation remains the predominant approach, yet it is plagued by multiple limitations. To overcome the limitations of manual interpretation in large-scale monitoring of upstream petroleum assets, this study develops an end-to-end, deep learning-driven framework for intelligent extraction of key oilfield targets from high-resolution remote sensing imagery. Specific aims are as follows: (1) To leverage temporal diversity in imagery to construct a representative training dataset. (2) To automate multi-class detection of well sites, production discharge pools, and storage facilities with high precision. This study proposes an intelligent monitoring framework based on deep learning for the automatic extraction of petroleum-related features from high-resolution remote sensing imagery. Leveraging the temporal richness of multi-temporal satellite data, a geolocation-based sampling strategy was adopted to construct a dedicated petroleum remote sensing dataset. The dataset comprises over 8000 images and more than 30,000 annotated targets across three key classes: well pads, production ponds, and storage facilities. Four state-of-the-art object detection models were evaluated—two-stage frameworks (Faster R-CNN, Mask R-CNN) and single-stage algorithms (YOLOv3, YOLOv4)—with the integration of transfer learning to improve accuracy, generalization, and robustness. Experimental results demonstrate that two-stage detectors significantly outperform their single-stage counterparts in terms of mean Average Precision (mAP). Specifically, the Mask R-CNN model, enhanced through transfer learning, achieved an mAP of 89.2% across all classes, exceeding the best-performing single-stage model (YOLOv4) by 11 percentage points. This performance gap highlights the trade-off between speed and accuracy inherent in single-shot detection models, which prioritize real-time inference at the expense of precision. Additionally, comparative analysis among similar architectures confirmed that newer versions (e.g., YOLOv4 over YOLOv3) and the incorporation of transfer learning consistently yield accuracy improvements of 2–4%, underscoring its effectiveness in remote sensing applications. Three oilfield areas were selected for practical application. The results indicate that the constructed model can automatically extract multiple target categories simultaneously, with average detection accuracies of 84% for well sites and 77% for production ponds. For multi-class targets over an area of 100 square kilometers, interpretation that previously required a full day of manual work is now completed in about one hour.
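The transfer-learning setup described above (a COCO-pretrained two-stage detector with new heads for the three oilfield classes) can be sketched with torchvision roughly as follows; the dataset loading, optimizer settings, and training loop here are assumptions, not the authors' configuration.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 4  # background + well pad + production pond + storage facility

# Start from COCO-pretrained weights (transfer learning), then swap the heads.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, NUM_CLASSES)

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad],
    lr=0.005, momentum=0.9, weight_decay=5e-4,
)

def train_step(images, targets):
    """One (hypothetical) training step: images is a list of CHW tensors,
    targets a list of dicts with "boxes", "labels", and "masks"."""
    model.train()
    loss_dict = model(images, targets)   # detection + mask losses
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```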

21 pages, 3674 KB  
Article
scSelector: A Flexible Single-Cell Data Analysis Assistant for Biomedical Researchers
by Xiang Gao, Peiqi Wu, Jiani Yu, Xueying Zhu, Shengyao Zhang, Hongxiang Shao, Dan Lu, Xiaojing Hou and Yunqing Liu
Genes 2026, 17(1), 2; https://doi.org/10.3390/genes17010002 - 19 Dec 2025
Viewed by 146
Abstract
Background: Standard single-cell RNA sequencing (scRNA-seq) analysis workflows face significant limitations, particularly the rigidity of clustering-dependent methods that can obscure subtle cellular heterogeneity and the potential loss of biologically meaningful cells during stringent quality control (QC) filtering. This study aims to develop scSelector (v1.0), an interactive software toolkit designed to empower researchers to flexibly select and analyze cell populations directly from low-dimensional embeddings, guided by their expert biological knowledge. Methods: scSelector was developed using Python, relying on core dependencies such as Scanpy (v1.9.0), Matplotlib (v3.4.0), and NumPy (v1.20.0). It integrates an intuitive lasso selection tool with backend analytical modules for differential expression and functional enrichment analysis. Furthermore, it incorporates Large Language Model (LLM) assistance via API integration (DeepSeek/Gemini) to provide automated, contextually informed cell-type and state prediction reports. Results: Validation across multiple public datasets demonstrated that scSelector effectively resolves functional heterogeneity within broader cell types, such as identifying distinct alpha-cell subpopulations with unique remodeling capabilities in pancreatic tissue. It successfully characterized rare populations, including platelets in PBMCs and extremely low-abundance endothelial cells in liver tissue (as few as 53 cells). Additionally, scSelector revealed that cells discarded by standard QC can represent biologically functional subpopulations, and it accurately dissected the states of outlier cells, such as proliferative NK cells. Conclusions: scSelector provides a flexible, researcher-centric platform that moves beyond the constraints of automated pipelines. By combining interactive selection with AI-assisted interpretation, it enhances the precision of scRNA-seq analysis and facilitates the discovery of novel cell types and complex cellular behaviors. Full article
(This article belongs to the Section Bioinformatics)
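Since scSelector builds on Scanpy, the underlying pattern (select a cell subset from a low-dimensional embedding, then run differential expression against the remaining cells) can be sketched as below; the file name, selection rectangle, and thresholds are hypothetical, and the interactive lasso and LLM reporting features are not reproduced here.

```python
import numpy as np
import pandas as pd
import scanpy as sc

# Hypothetical input: a preprocessed AnnData object that already has a UMAP embedding.
adata = sc.read_h5ad("pbmc_processed.h5ad")

# Stand-in for scSelector's interactive lasso: select cells falling inside a
# rectangle of the UMAP plane (coordinates are arbitrary placeholders).
umap = adata.obsm["X_umap"]
inside = (umap[:, 0] > 2.0) & (umap[:, 0] < 5.0) & (umap[:, 1] > -1.0) & (umap[:, 1] < 2.0)
adata.obs["selection"] = pd.Categorical(np.where(inside, "selected", "rest"))

# Differential expression of the selected population against all remaining cells.
sc.tl.rank_genes_groups(adata, groupby="selection", groups=["selected"],
                        reference="rest", method="wilcoxon")
top = sc.get.rank_genes_groups_df(adata, group="selected").head(20)
print(top[["names", "logfoldchanges", "pvals_adj"]])
```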

32 pages, 6078 KB  
Article
Optimization of Metro-Based Underground Logistics Network Based on Bi-Level Programming Model: A Case Study of Beijing
by Han Zhang, Yongbo Lv, Feng Jiang and Yanhui Wang
Sustainability 2026, 18(1), 7; https://doi.org/10.3390/su18010007 - 19 Dec 2025
Viewed by 151
Abstract
Characterized by zero-carbon, congestion-free, and high-capacity features, the utilization of metro systems for collaborative passenger-and-freight transport (the metro-based underground logistics system, M-ULS) has been recognized as a favorable alternative to facilitate automated freight transport in future megacities. This article constructs a three-echelon M-ULS network and establishes a multi-objective bilevel programming model, considering the interests of both government investment departments and transport enterprises. The overall goal of the study is to establish a transportation network with the lowest construction cost, lowest operating cost, and highest facility utilization rate, taking into account factors such as population density, transportation conditions, land resources, logistics demand, and metro station location, under given cost parameters and demand conditions. The upper-level model takes the government investor as the decision maker and aims to minimize total cost, establishing a capacity-constrained optimization model for location selection, allocation, and routing; the lower-level model aims to minimize the generalized cost for freight enterprises by simulating the competition between traditional transportation and the M-ULS mode. In addition, a bi-level programming model solving framework was established, and a multi-stage precise-heuristic hybrid algorithm based on the adaptive immune clone selection algorithm (AICSA) and the improved plant growth simulation algorithm (IPGSA) was designed for the upper-level model. Finally, taking the central urban area of Beijing as an example, four network scales are set up for numerical simulation research to verify the reliability and superiority of the model and algorithm. By analyzing and setting key indicators, an optimal network configuration scheme is proposed, providing a feasible path for cities to improve logistics efficiency and reduce the impact of logistics externalities under limited land resources, further strengthening the strategic role of subway logistics systems in urban sustainable development.
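As a toy illustration of the bi-level structure (upper level: which metro hubs to open; lower level: shippers choosing between road haulage and the M-ULS), the sketch below enumerates hub subsets on invented data; brute-force enumeration stands in for the AICSA/IPGSA hybrid, and all costs and coordinates are placeholders.

```python
from itertools import combinations
import math

# Hypothetical candidate metro stations: (x, y, fixed opening cost).
stations = {"A": (0.0, 0.0, 40.0), "B": (6.0, 1.0, 35.0), "C": (3.0, 7.0, 30.0)}
# Hypothetical freight demand points: (x, y, volume).
demands = [(1.0, 1.0, 10.0), (5.0, 2.0, 8.0), (4.0, 6.0, 12.0), (8.0, 8.0, 6.0)]

ROAD_COST, ULS_COST, ACCESS_COST = 5.0, 1.5, 2.0   # cost per unit volume-distance
DEPOT = (0.0, 0.0)                                 # hypothetical city-edge depot

def lower_level(open_hubs):
    """Lower level: each freight enterprise picks the cheaper of road haulage or
    the M-ULS route via its best open hub (generalized-cost minimization)."""
    total = 0.0
    for x, y, vol in demands:
        road = ROAD_COST * vol * math.hypot(x - DEPOT[0], y - DEPOT[1])
        uls = float("inf")
        for h in open_hubs:
            sx, sy, _ = stations[h]
            uls = min(uls, vol * (ACCESS_COST * math.hypot(x - sx, y - sy)
                                  + ULS_COST * math.hypot(sx - DEPOT[0], sy - DEPOT[1])))
        total += min(road, uls)
    return total

# Upper level: open the hub set whose construction cost plus the operating cost
# induced by the lower-level response is minimal.
best = None
for k in range(len(stations) + 1):
    for subset in combinations(stations, k):
        cost = sum(stations[s][2] for s in subset) + lower_level(subset)
        if best is None or cost < best[0]:
            best = (cost, subset)
print("open hubs:", best[1], "total cost:", round(best[0], 1))
```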

48 pages, 5217 KB  
Article
AutoML-Based Prediction of Unconfined Compressive Strength of Stabilized Soils: A Multi-Dataset Evaluation on Worldwide Experimental Data
by Romulo Murucci Oliveira, Deivid Campos, Katia Vanessa Bicalho, Bruno da S. Macêdo, Matteo Bodini, Camila Martins Saporetti and Leonardo Goliatt
Forecasting 2025, 7(4), 80; https://doi.org/10.3390/forecast7040080 - 18 Dec 2025
Viewed by 320
Abstract
Unconfined Compressive Strength (UCS) of stabilized soils is commonly used for evaluating the effectiveness of soil improvement techniques. Achieving target UCS values through conventional trial-and-error approaches requires extensive laboratory experiments, which are time-consuming and resource-intensive. Automated Machine Learning (AutoML) frameworks offer a promising alternative by enabling automated, reproducible, and accessible predictive modeling of UCS values from more readily obtainable index and physical soil and stabilizer properties, reducing the reliance on experimental testing and empirical relationships, and allowing systematic exploration of multiple models and configurations. This study evaluates the predictive performance of five state-of-the-art AutoML frameworks (i.e., AutoGluon, AutoKeras, FLAML, H2O, and TPOT) using analyses of results from 10 experimental datasets comprising 2083 samples from laboratory experiments spanning diverse soil types, stabilizers, and experimental conditions across many countries worldwide. Comparative analyses revealed that FLAML achieved the highest overall performance (average PI score of 0.7848), whereas AutoKeras exhibited lower accuracy on complex datasets; AutoGluon, H2O, and TPOT also demonstrated strong predictive capabilities, with performance varying with dataset characteristics. Despite the promising potential of AutoML, prior research has shown that fully automated frameworks have limited applicability to UCS prediction, highlighting a gap in end-to-end pipeline automation. The findings provide practical guidance for selecting AutoML tools based on dataset characteristics and research objectives, and suggest avenues for future studies, including expanding the range of AutoML frameworks and integrating interpretability techniques, such as feature importance analysis, to deepen understanding of soil–stabilizer interactions. Overall, the results indicate that AutoML frameworks can effectively accelerate UCS prediction, reduce laboratory workload, and support data-driven decision-making in geotechnical engineering.
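A minimal FLAML run on a UCS regression task, in the spirit of the comparison above, might look like this; the CSV file and column names are hypothetical, and the time budget and metric are not those used in the study.

```python
import pandas as pd
from flaml import AutoML
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Hypothetical dataset of soil/stabilizer index properties with a UCS target column.
df = pd.read_csv("stabilized_soils.csv")
X, y = df.drop(columns=["UCS_kPa"]), df["UCS_kPa"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

automl = AutoML()
automl.fit(
    X_train=X_train, y_train=y_train,
    task="regression",      # searches regressors and hyperparameters automatically
    metric="r2",
    time_budget=120,        # seconds of search
    seed=42,
)
print("best estimator:", automl.best_estimator)
print("holdout R2:", round(r2_score(y_test, automl.predict(X_test)), 3))
```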

33 pages, 9875 KB  
Article
An Adaptive Optimization Method for Moored Buoy Site Selection Integrating Ontology Reasoning and Numerical Computation
by Miaomiao Song, Haihui Song, Shixuan Liu, Xiao Fu, Bin Miao, Wenqing Li, Keke Zhang, Wei Hu and Xingkun Yan
J. Mar. Sci. Eng. 2025, 13(12), 2401; https://doi.org/10.3390/jmse13122401 - 18 Dec 2025
Viewed by 93
Abstract
With the growing diversity and complexity of marine monitoring requirements, the scientific deployment of moored buoys has attracted increasing attention. To address the limitations of traditional methods—such as inconsistent knowledge representation, insufficient logical reasoning capacity, and poor adaptability to dynamic marine environments—this study proposes an adaptive optimization method for moored buoy site selection integrating ontology reasoning and numerical computation. The proposed approach constructs an ontology model covering key concepts such as buoy specifications, monitoring objectives, and deployment requirements, and further defines formalized reasoning rules to enable automated judgment of deployment feasibility, sensor configuration, and spatial conflict resolution for moored buoy siting. Based on this semantic framework, a spatio-temporal comprehensive variation index (STCVI) is established by integrating temperature, salinity, and current velocity to characterize dynamic oceanographic conditions. Furthermore, a coverage-first greedy algorithm is designed to determine buoy deployment locations, enabling dynamic optimization and environmental adaptability of the buoy station layout. To verify the feasibility and adaptability of the proposed method, simulation experiments are conducted in the Beibu Gulf. Two layout scenarios—an appending layout with existing buoys and an independent layout without existing buoys—are designed to test the method’s adaptability under different deployment conditions. By combining Voronoi spatial partitioning and nearest-neighbor distance analysis, the optimized results are quantitatively evaluated in terms of spatial uniformity and observational effectiveness. The results indicate that the proposed method effectively enhances the spatial rationality and monitoring efficiency of buoy deployment, demonstrating strong generality and scalability. Full article
(This article belongs to the Section Ocean Engineering)
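The coverage-first greedy idea can be sketched on a gridded candidate field as below; the variation-index values, coverage radius, and buoy count are invented stand-ins for the STCVI-based inputs, and the ontology-reasoning stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 20x20 grid of candidate sites with a precomputed variation index
# (stand-in for the STCVI combining temperature, salinity, and current velocity).
ny, nx = 20, 20
stcvi = rng.random((ny, nx))
radius = 3          # grid cells each buoy is assumed to "cover"
n_buoys = 5

covered = np.zeros((ny, nx), dtype=bool)
yy, xx = np.mgrid[0:ny, 0:nx]
chosen = []

for _ in range(n_buoys):
    # Coverage-first: score each candidate by the uncovered variation it would add.
    best_site, best_gain = None, -1.0
    for cy in range(ny):
        for cx in range(nx):
            mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
            gain = stcvi[mask & ~covered].sum()
            if gain > best_gain:
                best_site, best_gain = (cy, cx), gain
    cy, cx = best_site
    covered |= (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    chosen.append(best_site)

print("selected buoy sites:", chosen)
print("covered fraction:", round(float(covered.mean()), 3))
```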

19 pages, 1457 KB  
Article
Practical Test-Time Domain Adaptation for Industrial Condition Monitoring by Leveraging Normal-Class Data
by Payman Goodarzi and Andreas Schütze
Sensors 2025, 25(24), 7614; https://doi.org/10.3390/s25247614 - 15 Dec 2025
Viewed by 269
Abstract
Machine learning has driven significant advancements across diverse domains. However, models often experience performance degradation when applied to data distributions that differ from those encountered during training, a challenge known as domain shift. This issue is particularly relevant in industrial condition monitoring, where data originate from heterogeneous sensors operating under varying conditions, hardware configurations, or environments. Domain adaptation is a well-known way to address this problem; however, most previously proposed methods are not directly applicable in real-world condition monitoring scenarios. This study addresses such challenges by introducing a Normal-Class Test-Time Domain Adaptation (NC-TTDA) framework tailored for condition monitoring applications. The proposed framework detects distributional shifts in sensor data and adapts pretrained models to new operating conditions by exploiting readily available normal-class samples, without requiring labeled target data. Furthermore, it integrates seamlessly with automated machine learning (AutoML) workflows to support hyperparameter optimization, model selection, and test-time adaptation within an end-to-end pipeline. Experiments conducted on six publicly available condition monitoring datasets demonstrate that the proposed approach achieves robust generalization under domain shift, yielding average AUROC scores above 99% and low false positive rates across all target domains. This work emphasizes the need for practical solutions to address domain adaptation in condition monitoring and highlights the effectiveness of NC-TTDA for real-world industrial monitoring applications.
(This article belongs to the Special Issue Condition Monitoring in Manufacturing with Advanced Sensors)
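One simple way to exploit unlabeled normal-class target data at test time is to re-estimate the input normalization on the new domain while keeping a pretrained anomaly detector fixed, as sketched below; this only illustrates the general idea, not the authors' NC-TTDA procedure, and all data are simulated.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Hypothetical source-domain training data: healthy-machine feature vectors.
X_source_normal = rng.normal(0.0, 1.0, size=(500, 8))
scaler = StandardScaler().fit(X_source_normal)
detector = IsolationForest(random_state=0).fit(scaler.transform(X_source_normal))

# Target domain: same machine type, but a sensor/operating-condition shift,
# simulated here as an offset and rescaling of the features.
X_target_normal = rng.normal(0.6, 1.4, size=(200, 8))       # unlabeled, assumed normal
X_target_test = np.vstack([
    rng.normal(0.6, 1.4, size=(50, 8)),                      # normal samples
    rng.normal(3.0, 1.4, size=(50, 8)),                      # faulty samples
])

# Test-time adaptation using only normal-class target data: refit the input
# normalization on the new domain, keep the pretrained detector unchanged.
adapted_scaler = StandardScaler().fit(X_target_normal)

before = detector.score_samples(scaler.transform(X_target_test))
after = detector.score_samples(adapted_scaler.transform(X_target_test))
print("mean score (normal vs faulty), before:", round(before[:50].mean(), 3), round(before[50:].mean(), 3))
print("mean score (normal vs faulty), after: ", round(after[:50].mean(), 3), round(after[50:].mean(), 3))
```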

28 pages, 1813 KB  
Article
Econometric and Python-Based Forecasting Tools for Global Market Price Prediction in the Context of Economic Security
by Dmytro Zherlitsyn, Volodymyr Kravchenko, Oleksiy Mints, Oleh Kolodiziev, Olena Khadzhynova and Oleksandr Shchepka
Econometrics 2025, 13(4), 52; https://doi.org/10.3390/econometrics13040052 - 15 Dec 2025
Viewed by 384
Abstract
Debate persists over whether classical econometric or modern machine learning (ML) approaches provide superior forecasts for volatile monthly price series. Despite extensive research, no systematic cross-domain comparison exists to guide model selection across diverse asset types. In this study, we compare traditional econometric models with classical ML baselines and hybrid approaches across financial assets, futures, commodities, and market index domains. Universal Python-based forecasting tools include month-end preprocessing, automated ARIMA order selection, Fourier terms for seasonality, circular terms, and ML frameworks for forecasting and residual corrections. Performance is assessed via anchored rolling-origin backtests with expanding windows and a fixed 12-month horizon. MAPE comparisons show that ARIMA-based models provide stable, transparent benchmarks but often fail to capture the nonlinear structure of high-volatility series. ML tools can enhance accuracy in these cases, but they are susceptible to instability and overfitting on monthly histories. The most accurate and reliable forecasts come from models that combine ARIMA-based methods with Fourier terms and a light machine learning residual correction. ARIMA-based approaches achieve about 30% lower forecast errors than pure ML (18.5% vs. 26.2% average MAPE and 11.6% vs. 16.8% median MAPE), with hybrid models offering only marginal gains (0.1 pp median improvement) at significantly higher computational cost. This work demonstrates the domain-specific nature of model performance, clarifying when hybridization is effective and providing reproducible Python pipelines suited for economic security applications.
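The hybrid recipe (an ARIMA-type model with Fourier exogenous terms, plus a machine learning correction of its residuals) can be sketched as below; the series is synthetic, the fixed (1,1,1) order stands in for the automated order search, and the choice of residual regressor is an assumption.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)

# Hypothetical monthly price series with a trend, yearly seasonality, and noise.
n = 120
t = np.arange(n)
y = pd.Series(50 + 0.1 * t + 5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1.5, n))

def fourier_terms(t, period=12, K=2):
    """Sine/cosine pairs for the 12-month cycle, used as exogenous regressors."""
    cols = {}
    for k in range(1, K + 1):
        cols[f"sin{k}"] = np.sin(2 * np.pi * k * t / period)
        cols[f"cos{k}"] = np.cos(2 * np.pi * k * t / period)
    return pd.DataFrame(cols)

X = fourier_terms(t)
train, h = 108, 12
arima = SARIMAX(y.iloc[:train], exog=X.iloc[:train], order=(1, 1, 1)).fit(disp=False)
base_fc = arima.forecast(steps=h, exog=X.iloc[train:])

# ML residual correction: learn the in-sample ARIMA residuals from the same features.
resid_model = GradientBoostingRegressor(random_state=0).fit(X.iloc[:train], arima.resid)
hybrid_fc = base_fc + resid_model.predict(X.iloc[train:])

mape = lambda a, f: np.mean(np.abs((a - f) / a)) * 100
print("ARIMA MAPE:  %.2f%%" % mape(y.iloc[train:].to_numpy(), base_fc.to_numpy()))
print("Hybrid MAPE: %.2f%%" % mape(y.iloc[train:].to_numpy(), hybrid_fc.to_numpy()))
```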

28 pages, 16312 KB  
Article
PS-InSAR Monitoring Integrated with a Bayesian-Optimized CNN–LSTM for Predicting Surface Subsidence in Complex Mining Goafs Under a Symmetry Perspective
by Tianlong Su, Linxin Zhang, Xuzhao Yuan, Xiaoquan Li, Xuefeng Li, Xuxing Huang, Zheng Huang and Danhua Zhu
Symmetry 2025, 17(12), 2152; https://doi.org/10.3390/sym17122152 - 14 Dec 2025
Viewed by 249
Abstract
Mine-induced surface subsidence threatens infrastructure and can trigger cascading geohazards, so accurate and computationally efficient monitoring and forecasting are essential for early warning. We integrate Persistent Scatterer InSAR (PS-InSAR) time series with a Bayesian-optimized CNN–LSTM designed for spatiotemporal prediction. The CNN extracts spatial deformation patterns, the LSTM models temporal dependence, and Bayesian optimization selects the architecture, training hyperparameters, and the most informative exogenous drivers. Groundwater level and backfilling intensity are encoded as multichannel inputs. Endpoint anchoring with affine calibration aligns the historical series and the forward projections. PS-InSAR indicates a maximum subsidence rate of 85.6 mm/yr, and the estimates are corroborated against nearby leveling benchmarks and FLAC3D simulations. Cross-site comparisons show acceleration followed by deceleration after backfilling and groundwater recovery, which is consistent with geological engineering conditions. A symmetry-aware preprocessing step exploits axial regularities of the deformation field through mirroring augmentation and documents symmetry-breaking hotspots linked to geological heterogeneity. These choices improve generalization to shifted and oscillatory patterns in both the spatial CNN and the temporal LSTM branches. Short-term forecasts from the BO–CNN–LSTM indicate subsequent stabilization with localized rebound, highlighting its practical value for operational planning and risk mitigation. The framework combines automated hyperparameter search with physically consistent objectives, reduces manual tuning, enhances reproducibility and generalizability, and provides a transferable quantitative workflow for forecasting mine-induced deformation in complex goaf systems.
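A bare-bones PyTorch version of the CNN–LSTM pattern described above might look like this; the channel layout, window length, and hyperparameters are assumptions, and the Bayesian optimization loop and symmetry-aware preprocessing are not shown.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Sketch: Conv1d feature extractor over the time axis followed by an LSTM."""
    def __init__(self, n_channels=3, hidden=64):
        super().__init__()
        # Channels: deformation series + groundwater level + backfilling intensity.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)            # next-step subsidence

    def forward(self, x):                           # x: (batch, channels, time)
        feats = self.cnn(x)                         # (batch, 32, time)
        out, _ = self.lstm(feats.transpose(1, 2))   # (batch, time, hidden)
        return self.head(out[:, -1, :]).squeeze(-1)

model = CNNLSTM()
x = torch.randn(8, 3, 24)                           # 8 PS points, 3 drivers, 24 epochs
y = torch.randn(8)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
print("prediction shape:", model(x).shape, "loss:", float(loss))
```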

27 pages, 1614 KB  
Article
Comparative Analysis of Neural Network Models for Predicting Peach Maturity on Tabular Data
by Dejan Ljubobratović, Marko Vuković, Marija Brkić Bakarić, Tomislav Jemrić and Maja Matetić
Computers 2025, 14(12), 554; https://doi.org/10.3390/computers14120554 - 13 Dec 2025
Viewed by 177
Abstract
Peach maturity at harvest is a critical factor influencing fruit quality and postharvest life. Traditional destructive methods for maturity assessment, although effective, compromise fruit integrity and are unsuitable for practical implementation in modern production. This study presents a machine learning approach for non-destructive peach maturity prediction using tabular data collected from 701 ‘Redhaven’ peaches. Three neural network models suitable for small tabular datasets (TabNet, SAINT, and NODE) were applied and evaluated using classification metrics, including accuracy, F1-score, and AUC. The models demonstrated consistently strong performance across several feature configurations, with TabNet achieving the highest accuracy when all non-destructive measurements were available and providing the most robust and practical performance on the comprehensive non-destructive subset and in optimized minimal-feature settings. These findings indicate that non-destructive sensing methods, particularly when combined with modern neural architectures, can reliably predict maturity and offer potential for real-time, automated fruit selection after harvest. The integration of such models into autonomous harvesting systems, for instance, through drone-based platforms equipped with appropriate sensors, could significantly improve efficiency and fruit quality management in horticultural peach production.
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)
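For the TabNet baseline, a minimal pytorch-tabnet fit on a synthetic stand-in for the tabular peach measurements could look like this; the features, labels, and training settings are placeholders, not the study's data or configuration.

```python
import numpy as np
from pytorch_tabnet.tab_model import TabNetClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the tabular peach measurements (e.g., mass, firmness proxy,
# skin colour indices); two maturity classes.
X = rng.normal(size=(701, 8)).astype(np.float32)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, 701) > 0).astype(np.int64)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)

clf = TabNetClassifier(seed=42, verbose=0)
clf.fit(
    X_tr, y_tr,
    eval_set=[(X_te, y_te)],
    eval_metric=["accuracy"],
    max_epochs=50, patience=10,
)
pred = clf.predict(X_te)
print("accuracy:", round(accuracy_score(y_te, pred), 3),
      "F1:", round(f1_score(y_te, pred), 3))
```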

36 pages, 6448 KB  
Article
Toward Smart Agriculture: AI-Optimized Prototype Conceptual Design for Lentil Seed Germination with UV-C and Spirulina
by Pedro Ponce, Claudia Hernandez-Aguilar, Mario Rojas, Juana Isabel Méndez, David Balderas, Flavio Arturo Dominguez-Pacheco and Alfredo Diaz-Lara
Processes 2025, 13(12), 4030; https://doi.org/10.3390/pr13124030 - 12 Dec 2025
Viewed by 222
Abstract
This study introduces an adaptable, intelligent prototype designed to optimize lentil seed germination and biomass accumulation via controlled UV-C radiation and Spirulina supplementation. Building on earlier experiments that separately and jointly assessed these treatments, the work presents a novel seed-treatment chamber that combines environmental sensing, real-time delivery mechanisms, and a machine-learning decision engine. The system automatically selects among three operational modes, Fast Germination, High Biomass, and Flavonoid Enrichment, each targeting a specific agronomic goal. To uncover the most influential treatment factors, the authors applied Analysis of Variance (ANOVA) and Principal Component Analysis (PCA), revealing key response patterns that inform mode definitions. A regression-based AI model was then trained on experimental data to predict treatment outcomes and dynamically adjust parameters. Model performance metrics demonstrate high predictive fidelity, with a Mean Absolute Error (MAE) of 2.1267%, indicating an average deviation of just over two percentage points between predicted and observed germination rates. In comparison, a Mean Squared Error (MSE) of 6.4598 and a corresponding Root Mean Squared Error (RMSE) of 2.5416% confirm consistently low squared deviations. An R2 score of 0.8702 indicates that the model accounts for approximately 87% of the variance in germination outcomes, underscoring the robustness of the regression approach. Importantly, the specific treatment ranges illustrated in this study are not direct replications of prior data, but rather representative values drawn from earlier research to demonstrate the framework’s applicability. By abstracting treatment parameters into realistic ranges, the paper shows how the chamber can accommodate various empirical datasets. The principal contribution lies in offering a generalizable methodology for designing AI-enhanced seed-treatment systems. This conceptual framework can be tailored to multiple crops and cultivation environments, paving the way for scalable, precision agriculture solutions that integrate automated monitoring, intelligent control, and real-time optimization. Full article
(This article belongs to the Section Sustainable Processes)
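The reported error metrics are related in a simple way (RMSE is the square root of MSE); the snippet below computes MAE, MSE, RMSE, and R2 with scikit-learn on invented germination figures, just to make the relationship concrete.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Hypothetical observed vs. predicted germination rates (%) for a handful of runs.
y_true = np.array([82.0, 90.0, 75.0, 88.0, 95.0, 70.0])
y_pred = np.array([80.5, 92.0, 77.5, 85.0, 93.5, 73.0])

mae = mean_absolute_error(y_true, y_pred)
mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)                      # RMSE is simply the square root of MSE
r2 = r2_score(y_true, y_pred)
print(f"MAE={mae:.4f}%  MSE={mse:.4f}  RMSE={rmse:.4f}%  R2={r2:.4f}")
```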

33 pages, 353 KB  
Article
Integration of Artificial Intelligence into Criminal Procedure Law and Practice in Kazakhstan
by Gulzhan Nusupzhanovna Mukhamadieva, Akynkozha Kalenovich Zhanibekov, Nurdaulet Mukhamediyaruly Apsimet and Yerbol Temirkhanovich Alimkulov
Laws 2025, 14(6), 98; https://doi.org/10.3390/laws14060098 - 12 Dec 2025
Viewed by 526
Abstract
Legal regulation and practical implementation of artificial intelligence (AI) in Kazakhstan’s criminal procedure are considered within the context of judicial digital transformation. Risks arise for fundamental procedural principles, including the presumption of innocence, adversarial process, and protection of individual rights and freedoms. Legislative mechanisms ensuring lawful and rights-based application of AI in criminal proceedings are required to maintain procedural balance. Comparative legal analysis, formal legal research, and a systemic approach reveal gaps in existing legislation: absence of clear definitions, insufficient regulation, and lack of accountability for AI use. Legal recognition of AI and the establishment of procedural safeguards are essential. The novelty of the study lies in the development of concrete approaches to the introduction of artificial intelligence technologies into criminal procedure, taking into account Kazakhstan’s practical experience with the digitalization of criminal case management. Unlike existing research, which examines AI in the legal profession primarily from a theoretical perspective, this work proposes detailed mechanisms for integrating models and algorithms into the processing of criminal cases. The implementation of AI in criminal justice enhances the efficiency, transparency, and accuracy of case handling by automating document preparation, data analysis, and monitoring compliance with procedural deadlines. At the same time, several constraints persist, including dependence on the quality of training datasets, the impossibility of fully replacing human legal judgment, and the need to uphold the principles of the presumption of innocence, the right to privacy, and algorithmic transparency. The findings of the study underscore the potential of AI, provided that procedural safeguards are strictly observed and competent authorities exercise appropriate oversight. Two potential approaches are outlined: selective amendments to the Criminal Procedure Code concerning rights protection, privacy, and judicial powers; or adoption of a separate provision on digital technologies and AI. Implementation of these measures would create a balanced legal framework that enables effective use of AI while preserving core procedural guarantees. Full article
(This article belongs to the Special Issue Criminal Justice: Rights and Practice)
22 pages, 4060 KB  
Article
High-Performance Concrete Strength Regression Based on Machine Learning with Feature Contribution Visualization
by Lei Zhen, Chang Qu, Man-Lai Tang and Junping Yin
Mathematics 2025, 13(24), 3965; https://doi.org/10.3390/math13243965 - 12 Dec 2025
Viewed by 191
Abstract
Concrete compressive strength is a fundamental indicator of the mechanical properties of High-Performance Concrete (HPC) with multiple components. Traditionally, it is measured through laboratory tests, which are time-consuming and resource-intensive. Therefore, this study develops a machine learning-based regression framework to predict compressive strength, aiming to reduce experimental costs and resource usage. Under three different data preprocessing strategies—raw data, standard score, and Box–Cox transformation—a selected set of high-performance ensemble models demonstrates excellent predictive capacity, with both the coefficient of determination (R2) and explained variance score (EVS) exceeding 90% across all datasets, indicating high accuracy in compressive strength prediction. In particular, stacking ensemble (R2 = 0.920, EVS = 0.920), XGBoost regression (R2 = 0.920, EVS = 0.920), and HistGradientBoosting regression (R2 = 0.913, EVS = 0.914) based on Box–Cox transformation data show strong generalization capability and stability. Additionally, tree-based and boosting methods demonstrate high effectiveness in capturing complex feature interactions. Furthermore, this study presents an analytical workflow that enhances feature interpretability through visualization techniques—including Partial Dependence Plots (PDP), Individual Conditional Expectation (ICE), and SHapley Additive exPlanations (SHAP). These methods clarify the contribution of each feature and quantify the direction and magnitude of its impact on predictions. Overall, this approach supports automated concrete quality control, optimized mixture proportioning, and more sustainable construction practices.
(This article belongs to the Special Issue Advanced Computational Mechanics)
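The XGBoost-plus-SHAP part of this workflow can be sketched as follows; the data are synthetic and the hyperparameters are assumptions, so this illustrates the pattern rather than the paper's models.

```python
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_regression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the HPC mixture features (cement, slag, fly ash, water, ...).
X, y = make_regression(n_samples=1000, n_features=8, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = xgb.XGBRegressor(n_estimators=400, max_depth=4, learning_rate=0.05,
                         random_state=0)
model.fit(X_tr, y_tr)
print("hold-out R2:", round(r2_score(y_te, model.predict(X_te)), 3))

# SHAP values quantify the direction and magnitude of each feature's contribution.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
print("mean |SHAP| per feature:", np.round(np.abs(shap_values).mean(axis=0), 2))
# shap.summary_plot(shap_values, X_te)   # optional bee-swarm visualization
```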

19 pages, 1071 KB  
Article
AI-Driven Clinical Decision Support System for Automated Ventriculomegaly Classification from Fetal Brain MRI
by Mannam Subbarao, Simi Surendran, Seena Thomas, Hemanth Lakshman, Vinjanampati Goutham, Keshagani Goud and Suhas Udayakumaran
J. Imaging 2025, 11(12), 444; https://doi.org/10.3390/jimaging11120444 - 12 Dec 2025
Viewed by 307
Abstract
Fetal ventriculomegaly (VM) is a condition characterized by abnormal enlargement of the cerebral ventricles of the fetal brain that often causes developmental disorders in children. Manual segmentation and classification of ventricular structures from brain MRI scans are time-consuming and require clinical expertise. To address this challenge, we develop an automated pipeline for ventricle segmentation, ventricular width estimation, and VM severity classification using a publicly available dataset. An adaptive slice selection strategy extracts the most informative 2D slices from 3D MRI volumes, which are then segmented to isolate the lateral ventricles and deep gray matter. Ventricular width is automatically estimated to assign severity levels based on clinical thresholds, generating labeled data for training a deep learning classifier. Finally, an explainability module using a large language model integrates the MRI slices, segmentation masks, and predicted severity to provide interpretable clinical reasoning. Experimental results demonstrate that the proposed decision support system delivers robust performance, achieving Dice scores of 89% and 87.5% for the 2D and 3D segmentation models, respectively. In addition, the classification network attains an accuracy of 86% and an F1-score of 0.84 in VM analysis.
(This article belongs to the Section AI in Imaging)
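The Dice score used to evaluate the segmentation models is straightforward to compute from binary masks, as the sketch below shows on a small synthetic example (the masks are invented, not from the dataset).

```python
import numpy as np

def dice_score(pred_mask, true_mask, eps=1e-7):
    """Dice similarity coefficient between two binary masks (2D or 3D arrays)."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

# Tiny synthetic example: a predicted ventricle mask shifted by one pixel.
true = np.zeros((64, 64), dtype=np.uint8)
true[20:40, 25:45] = 1
pred = np.zeros_like(true)
pred[21:41, 26:46] = 1
print("Dice:", round(float(dice_score(pred, true)), 3))
```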
