Search Results (1,671)

Search Parameters:
Keywords = K-cross validation

27 pages, 8200 KB  
Article
Few-Shot Bearing Fault Diagnosis Based on Multi-Layer Feature Fusion and Similarity Measurement
by Changyong Deng, Dawei Dong, Sipeng Wang, Hongsheng Zhang and Li Feng
Lubricants 2026, 14(4), 172; https://doi.org/10.3390/lubricants14040172 - 17 Apr 2026
Abstract
The running reliability of rolling bearings depends on the effective lubrication state, and poor lubrication will induce abnormal vibration. Therefore, vibration-based fault diagnosis is an important means to evaluate the health of bearings through vibration characteristics. However, the lack of fault samples in actual working conditions seriously restricts the generalization ability and accuracy of an intelligent diagnosis model. A novel few-shot diagnosis method integrating multi-layer feature fusion and adaptive similarity measurement is proposed. This method adopts a meta-learning framework to simulate sample scarcity through numerous N-way K-shot diagnostic tasks. An efficient feature extractor with a cross-task feature stitching mechanism is designed to fuse features from support and query sets. To overcome the limitation of fixed-distance metrics in existing meta-learners, a learnable similarity scheduler adaptively generates optimal pseudo-distance functions. In particular, a multi-layer feature fusion strategy is introduced to compute adaptive similarities at multiple network depths, which significantly enhances feature robustness against operational variations. Experimental results demonstrate the method achieves stable diagnostic accuracy above 90% under extremely few-shot conditions and maintains over 90% accuracy when transferring from laboratory-simulated faults to natural operational faults, validating its strong potential for practical industrial applications where annotated fault data is scarce. Full article
(This article belongs to the Special Issue Advances in Wear Life Prediction of Bearings)
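The N-way K-shot episodes that the meta-learning framework above trains on can be sketched with a simple task sampler. This is a hypothetical helper under common few-shot conventions, not the paper's actual sampler: each episode draws N classes, then K support and Q query samples per class.

```python
import random

def sample_episode(data_by_class, n_way=3, k_shot=2, q_query=2, seed=0):
    """Build one N-way K-shot episode: a support set and a query set.

    data_by_class: dict mapping class label -> list of samples.
    Returns two lists of (sample, label) pairs.
    """
    rng = random.Random(seed)
    classes = rng.sample(sorted(data_by_class), n_way)   # pick N classes
    support, query = [], []
    for c in classes:
        picks = rng.sample(data_by_class[c], k_shot + q_query)
        support += [(x, c) for x in picks[:k_shot]]      # K shots per class
        query += [(x, c) for x in picks[k_shot:]]        # Q queries per class
    return support, query
```

In a prototypical-style meta-learner, the support set would be embedded into class prototypes and the query set scored against them with the learned similarity.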
14 pages, 806 KB  
Article
TRIDENT: Efficient Small-Large Model Collaboration via Heterogeneous Expert Decoupling
by Guangyu Dai, Siliang Tang and Yueting Zhuang
Electronics 2026, 15(8), 1699; https://doi.org/10.3390/electronics15081699 - 17 Apr 2026
Abstract
The burgeoning scale of Pre-trained Large Models (PLMs) has intensified the demand for efficient inference without compromising performance. While existing large-model collaborative frameworks have shown promise, they often suffer from functional redundancy among experts and limited robustness in complex cross-domain scenarios. In this paper, we propose Tri-gate Routing for Inference via Decoupled Efficient Network Technologies (TRIDENT), a highly efficient and robust heterogeneous collaborative inference framework. TRIDENT leverages the complementary inductive biases of MLP (for statistical patterns) and KAN (for symbolic logic) to maximize reasoning potential with minimal parametric overhead. To address feature homogenization in traditional distillation, we introduce Orthogonal Feature Decoupling Distillation, utilizing an orthogonality loss L_orth for functional decoupling and a reconstruction loss L_recon to anchor decoupled features to the PLM knowledge manifold. During inference, a Dual-Threshold Arbiter effectively detects expert hallucinations by integrating individual confidence τ_con and heterogeneous consistency τ_agree. Extensive experiments on CIFAR-100-LT, XNLI, and GSM8K demonstrate that TRIDENT significantly reduces the Invocation Rate (IR) of PLMs while maintaining high accuracy. Our findings reveal a distinct Pareto-optimal balance and validate the spontaneous division of labor between heterogeneous experts. By transcending the limitations of single-architecture systems, TRIDENT provides a robust and interpretable pathway for efficient collaborative intelligence. Full article
(This article belongs to the Section Artificial Intelligence)
13 pages, 1146 KB  
Technical Note
Observations of Atmospheric Temperature in the Mesopause Region Using a Na Doppler Lidar and Comparison with SABER Satellite Data over Qingdao, China
by Xianxin Li, Li Wang, Zhangjun Wang, Chao Ban, Chao Chen, Quanfeng Zhuang, Ruijie Hua, Zhi Qin, Xiufen Wang, Hui Li, Xin Pan, Fei Gao and Dengxin Hua
Remote Sens. 2026, 18(8), 1201; https://doi.org/10.3390/rs18081201 - 16 Apr 2026
Abstract
Accurate measurement of atmospheric temperature profiles in the mesopause region is crucial for understanding atmospheric dynamics and climate processes. To address this challenge, a sodium Doppler lidar based on the resonance fluorescence scattering mechanism was recently developed to precisely detect atmospheric temperatures in the mesopause region in Qingdao (36.1°N, 120.1°E), China. For the first time, high-resolution observations of atmospheric temperature in the mesopause region (80–105 km) were achieved by the self-developed Na Doppler lidar in Qingdao under the complex atmospheric conditions of the mid-latitude coastal zone. A systematic cross-validation between the self-developed lidar and SABER satellite observations was conducted, and the temperature bias between the two detection methods in the mesopause region and its altitude-dependent characteristics were quantitatively assessed. The temperature profiles measured by lidar exhibited good agreement with the satellite data, yielding an RMSE of 9.2 K and a mean absolute deviation of 7.3 K over the 80–100 km altitude range. A correlation analysis between the lidar temperature data and satellite data showed that the closer the satellite overpass was to Qingdao, the stronger the correlation. For the closest comparison data the correlation coefficient reaches 0.86, indicating that the self-developed lidar system in Qingdao has a good ability to detect temperature profiles in the middle and upper atmosphere. The nocturnal evolution details and short-period fluctuations of the temperature field in the mesopause region over Qingdao were observed, revealing the local temperature structural characteristics under the complex atmospheric conditions at the land–sea interface in the Qingdao area. Full article
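The profile comparison above reduces to three standard statistics. A minimal sketch of how RMSE, mean absolute deviation, and the Pearson correlation coefficient are computed between two co-located temperature profiles (the study's actual co-location and averaging procedure is not reproduced):

```python
import math

def compare_profiles(lidar, sat):
    """RMSE, mean absolute deviation, and Pearson r between two
    equal-length temperature profiles (lists of values in K)."""
    n = len(lidar)
    diffs = [a - b for a, b in zip(lidar, sat)]
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    mad = sum(abs(d) for d in diffs) / n
    ml, ms = sum(lidar) / n, sum(sat) / n
    cov = sum((a - ml) * (b - ms) for a, b in zip(lidar, sat))
    denom = math.sqrt(sum((a - ml) ** 2 for a in lidar)
                      * sum((b - ms) ** 2 for b in sat))
    return rmse, mad, cov / denom
```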
28 pages, 4578 KB  
Article
Feature Engineering Approach for sEMG Signal Classification in Combat Sport Athletes: A Comparative Study of Machine Learning Algorithms
by Kudratjon Zohirov, Feruz Ruziboev, Sardor Boykobilov, Markhabo Shukurova, Mirjakhon Temirov, Mamadiyor Sattorov, Gulrukh Sherboboyeva, Gulbanbegim Jamolova, Zavqiddin Temirov and Rashid Nasimov
Appl. Sci. 2026, 16(8), 3873; https://doi.org/10.3390/app16083873 - 16 Apr 2026
Abstract
Surface electromyography (sEMG) signals are important for assessing muscle activity, neuromuscular behavior, and movement stability. sEMG signals are widely used in athlete performance monitoring and human–machine interface applications. However, existing methods have limitations in classification accuracy and generalization across users. In this study, a real-world dataset was generated from 30 professional wrestlers using an 8-channel system based on 10 physical movements and technical elements. Nine time-domain and energy features, mean absolute value (MAV), integrated EMG (IEMG), root mean square (RMS), simple square integral (SSI), fourth power (4POW), wavelength (WL), difference absolute standard deviation (DASDV), variance (VAR), and average amplitude change (AAC), were systematically evaluated separately and in combination. Five classifiers were compared: Logistic Regression (LR), Support Vector Machine (SVM), Random Forest (RF), k-Nearest Neighbor (KNN), and Neural Networks (NNs). The models were evaluated for accuracy, sensitivity, specificity, positive predictive value, and F1-score. The generalization ability was analyzed through cross-subject (24/6) and cross-session validation protocols. The nine-feature combination achieved the highest classification accuracy of 97.8% with the RF algorithm. The proposed approach can serve as a practical basis for real-time muscle activity monitoring, movement classification, and rehabilitation systems. Full article
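Several of the time-domain features listed above follow standard closed-form definitions. A minimal sketch for one channel window, using the conventional formulas for MAV, RMS, WL, SSI, VAR, and AAC (the study's exact windowing and filtering are not reproduced here):

```python
def td_features(x):
    """Standard time-domain sEMG features for one window x (list of floats)."""
    n = len(x)
    mav = sum(abs(v) for v in x) / n                      # mean absolute value
    ssi = sum(v * v for v in x)                           # simple square integral
    rms = (ssi / n) ** 0.5                                # root mean square
    wl = sum(abs(x[i + 1] - x[i]) for i in range(n - 1))  # waveform length
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / (n - 1)       # sample variance
    aac = wl / (n - 1)                                    # average amplitude change
    return {"MAV": mav, "RMS": rms, "WL": wl, "SSI": ssi, "VAR": var, "AAC": aac}
```

Feature vectors built this way per channel and window would then feed the RF, SVM, or KNN classifiers compared in the study.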
19 pages, 19416 KB  
Article
Identification of Prognostic Factors in Esophageal Cancer Using Machine Learning: A Retrospective Study Based on the SEER Database
by Piman Pocasap, Sarinya Kongpetch, Auemduan Prawan, Karnchanok Kaimuangpak and Laddawan Senggunprai
J. Clin. Med. 2026, 15(8), 3049; https://doi.org/10.3390/jcm15083049 - 16 Apr 2026
Abstract
Background: Esophageal cancer (EC) is an aggressive malignancy with low survival rates, making accurate prognosis critical for guiding treatment decisions. Traditional prognostic methods, while essential, often lack precision and comprehensive data insights. This study aims to apply machine learning (ML) approaches to investigate EC prognosis by identifying key factors associated with 5-year survival. Methods: Multiple ML algorithms—Random Forest (RF), Artificial Neural Networks (ANN), K-Nearest Neighbors (KNN), AdaBoost, and Naïve Bayes—were applied to a dataset from the SEER database. Model development included exploratory data analysis, internal validation, and 5-fold cross-validation. Traditional survival analysis methods, such as Cox regression and Kaplan–Meier (KM) analysis, were integrated to further explore relationships between key predictor and outcome variables. Additionally, time-series analysis was conducted to examine survival trends over time and identify influencing factors. Results: RF demonstrated the highest predictive performance among the models tested. Key prognostic factors identified included surgery, summary stage, tumor size, metastasis, AJCC M stage, and age. An exploratory analysis of temporal trends further showed changes in survival outcomes across diagnosis years. Conclusions: The findings highlight the potential of ML approaches to analyze prognostic patterns in EC. Integrating ML models with traditional statistical analyses helped identify key prognostic factors such as surgery, summary stage, and metastasis, while the exploratory temporal analysis provided additional context regarding survival trends over time. While promising, further external validation and addressing time-series challenges are necessary. Overall, this study demonstrates the potential of ML to support the identification of prognostic factors in EC and may contribute to more informed clinical decision-making. Full article
(This article belongs to the Section Gastroenterology & Hepatopancreatobiliary Medicine)
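The 5-fold cross-validation protocol used above can be sketched with a seeded index splitter, a minimal stdlib stand-in for scikit-learn's `KFold` (the study's actual pipeline is not published here):

```python
import random

def kfold_indices(n, k=5, seed=0):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation
    over n samples, after a seeded shuffle."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]          # k roughly equal folds
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test
```

Each model (RF, ANN, KNN, AdaBoost, Naïve Bayes) would be fit on `train` and scored on `test`, and the k scores averaged.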
14 pages, 5203 KB  
Article
Machine Learning Prediction of Listeria monocytogenes Serogroups and Biofilm Formation from Infrared Spectra: A Comparative Study with Genomic Analysis
by Martine Denis, Stéphanie Bougeard, Virginie Allain, Mélanie Guy, Emmanuelle Houard, Arnaud Felten, Jean Lagarde, Benoit Gassilloud, Evelyne Boscher and Pierre-Emmanuel Douarre
Appl. Microbiol. 2026, 6(4), 54; https://doi.org/10.3390/applmicrobiol6040054 - 16 Apr 2026
Abstract
This study evaluated the performance of Fourier-transform infrared (FTIR) spectroscopy for identifying spectral signatures associated with two key traits of Listeria monocytogenes: serogroup classification and biofilm-forming capacity. A total of 100 strains, previously serogrouped by PCR and categorized as high, intermediate, or low biofilm producers, were analyzed. Whole-genome sequencing was performed, and comparative genomics was conducted at core-genome, pangenome, and whole-genome (k-mer) levels to determine which genomic representation best reflected the phenotypes. Strains were typed using the FTIR Biotyper® system (Bruker Daltonics GmbH and Co., Bremen, Germany) with five technical replicates. Spectral data from the polysaccharide region (1300–800 cm⁻¹) were extracted and used to train twelve statistical models within a machine learning pipeline combined with cross-validation to predict four serogroups and three biofilm clusters from 501 spectral variables. Genomic analyses showed strong concordance between population structure and serogroup, whereas biofilm formation displayed only weak genomic association, explaining less than 0.1% of genomic variance (PERMANOVA R² ≤ 0.001). Penalized discriminant analysis achieved the highest performance for serogroup prediction (overall accuracy 97.2%), while the k-nearest neighbor model performed best for biofilm prediction (74.8%). Two dedicated R Shiny applications were developed to facilitate model use. Overall, FTIR spectroscopy coupled with machine learning can provide a rapid and cost-effective alternative to PCR, genomic analyses, and in vitro assays for phenotypic trait prediction. Full article
31 pages, 7153 KB  
Article
Balancing Accuracy and Efficiency in the Temporal Resampling of Met-Ocean Data
by Sara Ramos-Marin and C. Guedes Soares
Oceans 2026, 7(2), 35; https://doi.org/10.3390/oceans7020035 - 16 Apr 2026
Abstract
Harmonising heterogeneous met-ocean time series to a common temporal resolution is a prerequisite for integrated marine renewable energy assessments. Such datasets often differ in their sampling frequency, statistical distribution, and non-stationarity, complicating joint analysis. This study presents a practical multi-criteria framework for selecting temporal interpolation strategies for met-ocean datasets, explicitly balancing prediction accuracy and computational efficiency. Six environmental variables relevant to offshore renewable energy—wind speed, significant wave height, energy period, peak period, global horizontal irradiance, and upper-ocean thermal gradients—are analysed using ten-year reanalysis datasets for the Madeira Archipelago. Six commonly used deterministic time-domain interpolation methods are evaluated within a unified validation framework combining training–test splits, k-fold cross-validation, and Monte Carlo resampling. Their performances are quantified using the relative root mean square error and computational time, integrated through a composite performance score. The results show that makima interpolation provides the most consistent compromise between accuracy and efficiency for most variables in dense, regularly sampled met-ocean datasets, while spline-based approaches perform better for highly skewed solar irradiance. Preprocessing steps, such as detrending and distribution normalisation, yield only marginal improvements for dense, regularly sampled datasets, and method rankings remain stable under moderate changes in accuracy–speed weightings. Rather than proposing a universal interpolator, this work delivers a reproducible decision-support workflow for temporal resampling of multi-variable met-ocean datasets, supporting early-stage marine renewable energy assessments. Full article
(This article belongs to the Special Issue Offshore Renewable Energy and Related Environmental Science)
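A composite score that integrates accuracy and runtime, as described above, can be sketched as a min-max normalised weighted sum. The weights and normalisation here are illustrative assumptions; the paper's exact scoring scheme is not reproduced:

```python
def composite_score(rrmse, runtime, w_acc=0.7, w_time=0.3):
    """Combine relative RMSE and runtime per method into one score
    (lower is better). rrmse, runtime: dicts method -> value."""
    def minmax(d):
        lo, hi = min(d.values()), max(d.values())
        span = (hi - lo) or 1.0
        return {m: (v - lo) / span for m, v in d.items()}
    ne, nt = minmax(rrmse), minmax(runtime)
    return {m: w_acc * ne[m] + w_time * nt[m] for m in rrmse}
```

Ranking methods by this score makes the accuracy-vs-speed trade-off explicit and lets one check whether rankings stay stable under moderate changes to the weights, as the study reports.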
14 pages, 2208 KB  
Article
Data-Driven Identification of Operating Thresholds for Cycling Reduction in Chiller Systems
by Shiue-Der Lu, Chin-Tsung Hsieh, Hwa-Dong Liu and Shao-Tang Xu
Processes 2026, 14(8), 1266; https://doi.org/10.3390/pr14081266 - 15 Apr 2026
Abstract
Chiller systems account for a substantial proportion of building energy consumption, where their operational efficiency and start–stop cycling frequency directly influence overall system energy use and equipment lifespan. In practical applications, load fluctuations and improper control settings often cause chillers to experience frequent cycling, leading to decreased efficiency and increased mechanical wear. While existing studies predominantly focus on real-time control or model predictive approaches, fewer investigations systematically identify stable operating regions and optimal control thresholds using historical operational data. This study proposes a data-driven method for identifying an operational threshold. Long-term historical data are analyzed to establish a start–stop event detection mechanism. A normalized power index is introduced, and multi-scenario classification—incorporating seasonal conditions and peak/off-peak periods—is employed to evaluate system behavior across different contexts. Furthermore, a quantile scanning approach combined with hysteresis simulation is utilized to identify optimal operational threshold intervals. Stability evaluation indices, based on cycling frequency, power variation rate, and load deviation magnitude, are constructed to quantify stability performance. To verify the robustness of these thresholds, K-fold cross-validation is performed. Results indicate that the identified thresholds effectively reduce cycling frequency and power fluctuations, thereby enhancing system stability. Specifically, the start–stop cycling frequency is reduced by approximately 75–90%, and the power variation rate decreases by up to 85% across various scenarios. This study provides an offline decision-support framework to assist operators in optimizing control parameters and strategies. These outcomes serve as a reference for chiller energy management and provide empirical evidence for the future design of control strategies. Full article
(This article belongs to the Section Energy Systems)
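The hysteresis simulation above amounts to counting start events under a two-threshold rule: a unit starts when power rises above an on-threshold and stops only when it falls below a lower off-threshold. A minimal sketch (thresholds and the power trace are illustrative; the paper's quantile-scan tuning is not reproduced):

```python
def count_starts(power, on_th, off_th):
    """Count chiller start events in a power trace using hysteresis.
    off_th < on_th widens the dead band and suppresses short cycling."""
    running = power[0] > on_th
    starts = 0
    for p in power[1:]:
        if not running and p > on_th:
            running = True
            starts += 1
        elif running and p < off_th:
            running = False
    return starts
```

On a noisy trace, widening the gap between the two thresholds reduces the start count, which is exactly the cycling-frequency reduction the threshold scan optimises for.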
19 pages, 1064 KB  
Article
Machine Learning-Driven Kinetic Optimization of Hydroxylamine-Modified Transition Metal Oxide/Peroxymonosulfate System for Antibiotic Degradation
by Zhixuan Li, Jianwei Li, Ao Zeng, Xi Lian, Wenjun Zhou, Shuyi Xie and Pengjun Wu
Water 2026, 18(8), 945; https://doi.org/10.3390/w18080945 - 15 Apr 2026
Abstract
Hydroxylamine-modified transition-metal oxides (HA-TMOs) represent a promising class of catalysts for activating peroxymonosulfate (PMS) to degrade antibiotics. However, identifying energy-efficient operational conditions remains challenging. This study established a comprehensive dataset encompassing 600 experimental records from both in-house experiments and the literature and systematically compared 12 machine learning algorithms for predicting the antibiotic degradation efficiency (%) in hydroxylamine-modified transition metal oxide/peroxymonosulfate (HA-TMO/PMS) systems. Among these models, CatBoost delivered the best generalization (test-set R² = 0.8110, RMSE = 8.92, MAE = 6.15) across repeated 80/20 stratified splits with 5-fold cross-validation, outperforming other ensembles as confirmed by cumulative distribution plots and error-metric analyses. Moreover, the permutation importance analysis identified PMS dosage, HA level, pH, initial pollutant concentration, and catalyst loading as the dominant drivers governing the pollutant removal performance. The partial-dependence plots, incorporating two-variable interactions, elucidated the response surfaces and enabled the discovery of operating windows that jointly maximize degradation efficiency and minimize electrical energy per order (EE/O). ML-guided optimization yielded optimal conditions, which were experimentally verified with sulfamethoxazole (SMZ). The HA-Co₃O₄/PMS system achieved the highest degradation rate constant (k = 0.11 min⁻¹) and the lowest EE/O value (6.84 kWh·m⁻³·order⁻¹), markedly improving kinetics and reducing energy consumption compared with non-optimized runs. This work establishes an interpretable ML framework that connects catalyst properties and reaction conditions to degradation kinetics and mechanisms, providing a practical strategy for the screening and scale-up of energy-efficient HA-TMO/PMS-based advanced oxidation processes (AOPs). Full article
9 pages, 247 KB  
Article
Adherence to Treatment, Quality of Life, and Level of Knowledge in Patients on Anticoagulant Therapy with Vitamin K Antagonists
by Adolfo Romero-Arana, Nerea Romero-Sibajas, Juan Gómez-Salgado, María Isabel Ruiz-Moreno, Víctor Manuel Cotta-Luque, Lucía Rojas-Suárez, Luis El Khoury-Moreno, Julio Torrejón-Martínez and Adolfo Romero-Ruiz
Healthcare 2026, 14(8), 1042; https://doi.org/10.3390/healthcare14081042 - 15 Apr 2026
Abstract
Background: In Spain, the number of patients anticoagulated with vitamin K antagonists (VKAs) is high. Among them, poor adherence is common, which may be explained by a low level of knowledge, and could affect their quality of life. We analyzed treatment adherence, health-related quality of life, and knowledge level about treatment, and evaluated the possible influence of these factors on patients' time in the therapeutic range while also studying potential differences between patients under routine monitoring or self-monitoring. Methodology: A cross-sectional descriptive study was conducted using three validated and cross-culturally adapted questionnaires to study therapeutic adherence, health-related quality of life, and knowledge level about VKA treatment in a sample of anticoagulated patients. Additionally, it was assessed whether they were self-monitoring or not; the Rosendaal time in therapeutic range (TTRr) was also calculated for each patient at the time of recruitment. Descriptive analysis of all variables was performed, and a logistic regression model was constructed to evaluate the possible interaction of variables. Results: Ninety-eight patients participated and were selected sequentially from those attending the oral anticoagulation clinic at Hospital Universitario Virgen de la Victoria in Malaga. Of these, 39 were men and 59 were women. The mean age of these participants was 60.62 years (SD 11.67). Sixty-six were under conventional monitoring and thirty-two followed the self-monitoring program. The DecaMIRT had a mean score of 39.22 (SD 8.57), the SF-12 mean score was 31.73 (SD 6.21), and the knowledge questionnaire's mean score was 14.2 (SD 2.6). The mean TTRr value was 63.88 (SD 22.99). Self-monitored patients showed better results in DecaMIRT and knowledge scores.
Discussion: Overall, patients included in the sample presented satisfactory values in these three questionnaires, which seems to indicate that this was a treatment-compliant group with a good quality of life, adequately informed about their treatment. Conclusions: The work of nurses responsible for these aspects appears crucial in achieving these results. We aim to extend this study by focusing on groups with poorer results to design specific activities that allow for improvement in care and, as much as possible, homogenize outcomes. For this purpose, we intend to use all available tools, including those derived from the use of health-oriented artificial intelligence. Full article
(This article belongs to the Section Chronic Care)
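The Rosendaal TTR mentioned above linearly interpolates INR between clinic visits and reports the percentage of interpolated time spent inside the therapeutic range (typically 2.0–3.0 for VKAs). A fine-grid numeric sketch of that method (the clinic's actual computation may differ in grid and edge handling):

```python
def rosendaal_ttr(days, inr, low=2.0, high=3.0, per_day=24):
    """Rosendaal time in therapeutic range (%) from visit days and INR values.
    Linearly interpolates INR between consecutive visits, sampling
    per_day points per day so longer gaps carry proportionally more weight."""
    in_range = total = 0
    visits = list(zip(days, inr))
    for (d0, v0), (d1, v1) in zip(visits, visits[1:]):
        n = max(1, int(round((d1 - d0) * per_day)))
        for s in range(n):
            t = (s + 0.5) / n                 # midpoint of each time slice
            v = v0 + (v1 - v0) * t            # interpolated INR
            total += 1
            if low <= v <= high:
                in_range += 1
    return 100.0 * in_range / total
```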
26 pages, 5353 KB  
Article
Beyond Infrastructure: A Capability-Conversion Diagnostic Framework for Rural Water Access Inequality in Morocco
by Youness Boudrik, Rachid Hasnaoui, Achraf Touil, Abdellah Oulakhmis and Nawfal Aissaoui
Water 2026, 18(8), 936; https://doi.org/10.3390/w18080936 - 14 Apr 2026
Abstract
Rural water access in developing countries remains deeply unequal, even among communes with comparable infrastructure. This paradox motivates a shift from infrastructure-centered analysis toward a framework that explicitly accounts for governance and local conversion factors. We develop a Capability-Conversion Diagnostic Framework, grounded in Sen's Capability Approach, that decomposes water access variance into three components: measurable infrastructure, provincial governance context, and commune-level unmeasured conversion factors. Applied to 1324 rural communes from Morocco's 2024 General Census (RGPH 2024), the framework combines k-means capability segmentation (k = 3, selected via a composite validation criterion), cross-validated infrastructure-to-water prediction using OLS with engineered features (cross-validated R² = 0.274, out-of-fold), conversion residual extraction, and spatial decomposition. The central finding is a three-way variance partition: infrastructure explains 27.4%, provincial context 34.1%, and commune-level unmeasured factors 38.5%—the latter representing an upper bound that includes omitted variables and measurement noise alongside conversion factors. Theil decomposition confirms that 89.9% of water access inequality occurs within capability tiers, consistent with Sen's emphasis on conversion factors. A six-archetype policy matrix classifies communes into differentiated intervention strategies: 19.9% require comprehensive multi-sector transformation, 10.0% need governance reform despite adequate infrastructure, and 20.0% need targeted bottleneck interventions. The framework offers planners a diagnostic tool that identifies not only where infrastructure is lacking, but where it fails to deliver outcomes and what complementary interventions are needed. Full article
(This article belongs to the Section Water Resources Management, Policy and Governance)
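The within/between split reported above comes from the standard decomposition of the Theil T index, which is exactly additive across groups. A minimal sketch over positive values grouped by tier (illustrative data, not the census variables):

```python
import math

def theil_decompose(groups):
    """Theil T index split into within- and between-group components.
    groups: list of lists of positive values (e.g. access rates by tier).
    Returns (total, within, between); total == within + between."""
    allv = [v for g in groups for v in g]
    n, mu = len(allv), sum(allv) / len(allv)
    total = sum((v / mu) * math.log(v / mu) for v in allv) / n
    within = between = 0.0
    for g in groups:
        mg = sum(g) / len(g)
        share = len(g) * mg / (n * mu)        # group's share of the total
        tg = sum((v / mg) * math.log(v / mg) for v in g) / len(g)
        within += share * tg
        between += share * math.log(mg / mu)
    return total, within, between
```

A within share near 90%, as the study finds, means `within / total` dominates: most inequality sits inside capability tiers rather than between them.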
22 pages, 9363 KB  
Article
Detecting Objects in Aerial Imagery Using Drones and a YOLO-C3 Hybrid Approach
by Salvatore Calcagno, Alessandro Midolo, Erika Scaletta, Emiliano Tramontana and Gabriella Verga
Future Internet 2026, 18(4), 204; https://doi.org/10.3390/fi18040204 - 13 Apr 2026
Abstract
Drones have proven effective for acquiring aerial imagery, and when equipped with onboard analysis tools, they can automatically identify objects of interest. Neural-network methods for image analysis typically require large training datasets and substantial computational resources. By contrast, algorithmic techniques can detect objects using simple features, such as pixel colors, thereby reducing the need for extensive training and computational resources. Once trained, both types of system can analyze images in a short time. In our experiments, each approach has distinct strengths. The YOLO-based detector is more accurate for complex-shaped objects, such as trees, whereas the pixel-color approach performs better on sparser objects. This paper proposes YOLO-C3, a hybrid system designed for onboard drone image processing. By leveraging the strengths of both YOLO-based and pixel-based approaches, YOLO-C3 balances detection accuracy with estimation confidence. Trained on a Mediterranean imagery dataset, the system is optimized for identifying natural objects, including citrus groves and trees. To assess the robustness of the image classifier, K-fold cross-validation is performed. Compared to existing models, YOLO-C3 detects a wider range of natural objects with high accuracy and minimal latency, achieving a processing speed of 0.01 s per image. By performing object detection locally, drones can adapt their trajectories to support emergency response, helping to map safe corridors and locate buildings where people may be awaiting rescue after a natural disaster. Full article
20 pages, 702 KB  
Article
Tree Height Prediction Using a Double Hidden-Layer Neural Network and a Mixed-Effects Model
by Jianbo Shen, Xiangdong Lei, Yutang Li, Yuehong Pan and Gongming Wang
Plants 2026, 15(8), 1176; https://doi.org/10.3390/plants15081176 - 10 Apr 2026
Abstract
The double hidden-layer neural network has increasingly been applied in tree height modeling due to its superior performance. To improve the precision of tree height estimation, this study compared the performance of a double hidden-layer neural network with that of a nonlinear mixed-effects model, aiming to provide a new method for tree height prediction. Using Larix olgensis plantations in Jilin Province as the study object, a double hidden-layer back-propagation (BP) neural network was established for tree height prediction, adopting trial-and-error, k-fold cross-validation, and near-domain optimization strategies. In constructing the nonlinear mixed-effects model, the overall and local differences in forest growth data, as well as the autocorrelation among the various levels of data, were considered. Accordingly, after determining the base model, random effects were introduced, the variance–covariance matrix was calculated, and random parameters were estimated to compare the predictive performance of the two models. For the mixed-effects model, the coefficient of determination R2 was 0.8590, the root mean square error (RMSE) was 1.6230, and the mean absolute error (MAE) was 2.2658. For the double hidden-layer BP neural network, the R2 reached 0.9068 (an increase of 5.56%), the RMSE was 1.3197 (a decrease of 18.69%), and the MAE was 1.2736 (a decrease of 43.79%). The results demonstrate that the double hidden-layer BP neural network is superior to the nonlinear mixed-effects model for tree height prediction and thus provides a more accurate method. Full article
(This article belongs to the Special Issue AI-Driven Machine Vision Technologies in Plant Science)
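The R², RMSE, and MAE figures compared in the abstract follow from standard definitions. A minimal sketch computing all three from paired observations and predictions is shown below; the helper name `regression_metrics` and the toy height values are illustrative assumptions, not the study's code:

```python
import math

def regression_metrics(y_true, y_pred):
    """Return (R2, RMSE, MAE) for paired observations and predictions."""
    n = len(y_true)
    mean_y = sum(y_true) / n
    # residual and total sums of squares
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot
    rmse = math.sqrt(ss_res / n)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    return r2, rmse, mae

# toy tree heights (m) vs. model predictions
r2, rmse, mae = regression_metrics([20.0, 22.0, 25.0, 27.0],
                                   [19.5, 22.5, 24.5, 27.5])
```

Note that by construction RMSE is never smaller than MAE on the same residuals, which is a useful sanity check when reporting both.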
14 pages, 1766 KB  
Article
Beyond Static Assessment: A Proof-of-Concept Evaluation of Functional Data Analysis for Assessing Physiological Responses to High-Intensity Effort
by Adrian Odriozola, Cristina Tirnauca, Adriana González, Francesc Corbi and Jesús Álvarez-Herms
J. Funct. Morphol. Kinesiol. 2026, 11(2), 151; https://doi.org/10.3390/jfmk11020151 - 10 Apr 2026
Abstract
Background: Conventional analyses of physiological recovery often rely on discrete metrics that assume independence across time points, thereby ignoring intrinsic temporal continuity and masking substantial interindividual heterogeneity. This proof-of-concept study assesses the efficacy of Functional Data Analysis (FDA) as a promising framework for characterizing individual response dynamics following a functional threshold power (FTP) test. Methods: Physiological time-series data (including blood lactate, heart rate, blood pressure, and glucose levels) collected from 21 trained cyclists (10 professionals, 11 amateurs) were represented as functional objects using FDataGrid on the original sampling grid (0, 3, 5, 10, 20 min), without basis expansion or smoothing. We conducted unsupervised functional clustering (K-means; Fuzzy K-means) and supervised classification (Maximum Depth with Modified Band Depth, K-Nearest Neighbors, Nearest Centroid, functional QDA with parametric Gaussian covariance). Model performance was estimated via Repeated Stratified 5-Fold Cross-Validation with 10 repetitions (50 folds), reporting accuracy, balanced accuracy (mean ± SD), 95% CIs, permutation p-values, and sensitivity/specificity from aggregated confusion matrices. Results: Lactate (CL) and diastolic blood pressure (DBP) provided useful and statistically significant discrimination across several classifiers (e.g., KNN, Nearest Centroid, functional QDA), whereas heart rate showed modest discriminative value and glucose intermediate performance. Unsupervised analyses revealed distinct lactate recovery profiles and graded membership for hemodynamic/metabolic variables, supporting the value of FDA for resolving heterogeneity beyond group-average trends. Conclusions: FDA offers a feasible and informative approach for classifying recovery phenotypes while preserving temporal structure. 
Findings are promising but should be interpreted with caution due to the small sample size, sparse time points, and the need for external validation in larger, independent cohorts before translation into routine decision-making. Full article
(This article belongs to the Special Issue Physiological and Biomechanical Foundations of Strength Training)
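The sensitivity, specificity, and balanced accuracy reported from the aggregated confusion matrices follow directly from the 2x2 cell counts. A minimal sketch is given below; the function name `binary_rates` and the pooled counts are illustrative assumptions, not the study's code:

```python
def binary_rates(tp, fn, fp, tn):
    """Sensitivity, specificity, and balanced accuracy from a
    2x2 confusion matrix (tp/fn/fp/tn cell counts)."""
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    balanced_accuracy = (sensitivity + specificity) / 2.0
    return sensitivity, specificity, balanced_accuracy

# e.g. a matrix pooled over the 50 CV folds (illustrative counts)
sens, spec, bacc = binary_rates(tp=40, fn=10, fp=5, tn=45)
```

Balanced accuracy averages the two class-wise rates, so it is robust to the class imbalance one gets with 10 professionals vs. 11 amateurs, unlike raw accuracy.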
25 pages, 2972 KB  
Article
Application of Machine Learning Models (ANN vs. RF) in Optimizing the Fermentation of Sweet-Potato Waste in the Japanese Shochu Industry for Nutritional Enhancement
by Yukun Zhang, Manabu Ishikawa, Shunsuke Koshio, Saichiro Yokoyama, Na Jiang, Jiayi Chen, Yiwen Tong and Xiaoxiao Zhang
Fermentation 2026, 12(4), 191; https://doi.org/10.3390/fermentation12040191 - 9 Apr 2026
Abstract
To address the challenge of depleting traditional feed resources, this study aimed to biovalorize sweet potato waste (SPW), a major byproduct of the Japanese shochu industry, into a high-value functional animal feed. An innovative two-stage solid-state fermentation (SSF) was employed, featuring an initial aerobic stage with Aspergillus oryzae for substrate degradation, followed by an anaerobic stage with Lactobacillus plantarum for nutritional enhancement. To optimize this complex, multi-variable process, the predictive performance of Artificial Neural Network (ANN) and Random Forest (RF) machine learning models was compared based on an augmented experimental dataset (N = 80). To ensure statistical robustness and prevent data leakage, a repeated k-fold cross-validation strategy was implemented. The RF model demonstrated significantly better accuracy and reliability than the ANN model, particularly in predicting the primary metric, crude protein (R2 = 0.61 ± 0.04 vs. R2 = 0.12 ± 0.15). Subsequently, the validated RF model was integrated with a Constrained Differential Evolution (CDE) algorithm for global parameter optimization. The optimized process was predicted to yield a final product with a crude protein content of 25.0%, alongside significant increases of 114.1% in total amino acids and 123.9% in essential amino acids. These projections were experimentally validated in vitro, confirming the model’s accuracy with a relative error of less than 5%. Furthermore, comprehensive biochemical assays demonstrated substantial degradation of anti-nutritional factors and significant enhancements in total phenolic content and antioxidant activity. This study provides a scientifically validated, data-driven framework for the valorization of SPW. It confirms the superior efficacy of ensemble learning methods for optimizing complex bioprocesses with limited data, contributing to the development of a circular bioeconomy and sustainable feed resources. Full article
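Repeated k-fold cross-validation, as used here to obtain mean ± SD scores such as R2 = 0.61 ± 0.04, reshuffles the data before each round of k folds and aggregates all per-fold scores. A pure-Python sketch follows; the function name `repeated_kfold_scores` and the toy scoring function are illustrative assumptions, not the study's pipeline:

```python
import random
import statistics

def repeated_kfold_scores(score_fn, n_samples, k=5, repeats=3, seed=0):
    """Evaluate score_fn(train_idx, val_idx) over k folds, repeated with
    fresh shuffles; return (mean, sample SD) of all per-fold scores."""
    rng = random.Random(seed)
    scores = []
    for _ in range(repeats):
        idx = list(range(n_samples))
        rng.shuffle(idx)                 # new partition each repeat
        for i in range(k):
            val = idx[i::k]
            val_set = set(val)
            train = [j for j in idx if j not in val_set]
            scores.append(score_fn(train, val))
    return statistics.mean(scores), statistics.stdev(scores)

# toy score: fraction of even-numbered samples in each validation fold
mean, sd = repeated_kfold_scores(
    lambda tr, va: sum(j % 2 == 0 for j in va) / len(va), 80)
```

Reporting the SD across folds, as the abstract does, exposes how stable a model's score is under resampling, which is what separated RF (±0.04) from ANN (±0.15) here.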