Search Results (2,901)

Search Parameters:
Keywords = calibration coefficients

18 pages, 815 KB  
Article
GA-SVR Optimized Surface-Enhanced Raman Spectroscopy for Rapid Detection of Ciprofloxacin Residues in Chicken Blood
by Gaoliang Zhang, Zihan Ma, Chao Yang, Yang Liu, Tianyan You and Jinhui Zhao
Biosensors 2026, 16(5), 259; https://doi.org/10.3390/bios16050259 - 1 May 2026
Abstract
Ciprofloxacin residues in chicken blood pose a potential food safety risk; however, rapid detection methods for complex chicken blood matrices are lacking. This study aimed to establish a surface-enhanced Raman spectroscopy (SERS) method for the rapid detection of ciprofloxacin in chicken blood using gold colloid as the SERS substrate. Gold colloid was synthesized via the Frens method with slight modification, and key SERS detection conditions were systematically optimized to maximize SERS intensities at 1265 cm−1, including the amount of trisodium citrate solution, the electrolyte type, the amount of gold colloid, the amount of NaCl solution, and the adsorption time. Raw SERS spectra were pretreated with adaptive iteratively reweighted penalized least squares (air-PLS) combined with Savitzky–Golay (SG) smoothing. A genetic algorithm (GA) was used to extract characteristic Raman shifts, and a GA-SVR prediction model with radial basis function (RBF) as the kernel was constructed, with its performance compared with multivariate linear regression (MLR) and partial least squares regression (PLSR) models. The GA-SVR model exhibited the best performance, with a coefficient of determination for the calibration set (Rc2) value of 0.9893 and for the prediction set (Rp2) value of 0.9874. The root mean square error of calibration (RMSEC) and prediction (RMSEP) were 1.2953 and 1.8617, respectively, outperforming the MLR and PLSR models. These results demonstrate that the SERS method combined with GA-SVR enables rapid quantitative detection of ciprofloxacin residues in chicken blood, providing a technical reference for monitoring veterinary drug residues in livestock and poultry products. Full article
(This article belongs to the Section Optical and Photonic Biosensors)
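The GA-SVR modelling step described in this abstract can be sketched with scikit-learn. This is a minimal illustration on synthetic data, not the authors' pipeline: the genetic-algorithm feature selection and the air-PLS/SG pretreatment are omitted, and the data shapes and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)

# Synthetic stand-in for GA-selected SERS intensities (rows: samples,
# columns: characteristic Raman shifts) and known concentrations.
X = rng.normal(size=(120, 8))
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=120)

X_cal, X_pred, y_cal, y_pred = train_test_split(X, y, test_size=0.3, random_state=0)

# SVR with a radial basis function kernel, as in the paper; C and gamma
# would normally be tuned by the genetic algorithm rather than fixed.
model = SVR(kernel="rbf", C=10.0, gamma="scale").fit(X_cal, y_cal)

rc2 = r2_score(y_cal, model.predict(X_cal))              # calibration-set R^2
rp2 = r2_score(y_pred, model.predict(X_pred))            # prediction-set R^2
rmsec = mean_squared_error(y_cal, model.predict(X_cal)) ** 0.5
rmsep = mean_squared_error(y_pred, model.predict(X_pred)) ** 0.5
print(rc2, rp2, rmsec, rmsep)
```

The same four statistics (Rc2, Rp2, RMSEC, RMSEP) are the ones the abstract reports for model comparison.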

20 pages, 2396 KB  
Article
Cross-Regional Hyperspectral Estimation of Soil Organic Carbon in Eurasian Black Soils Using an Optimal Spectral Feature Set
by Aonan Zhang, Shengbo Chen, Zhengyuan Xu, Xitong Xu and Zibo Wang
Appl. Sci. 2026, 16(9), 4433; https://doi.org/10.3390/app16094433 - 1 May 2026
Abstract
Soil organic carbon (SOC) plays a critical role in the global carbon cycle and agroecosystem productivity. However, existing hyperspectral inversion models often exhibit significant predictive biases when applied across large geographic scales, primarily due to the spatial heterogeneity of pedogenic environments and background mineralogy. This study proposes a cross-regional SOC prediction method based on an optimal spectral feature set (SOC-OSFS). Leveraging laboratory hyperspectral and SOC data from 17,730 samples collected across the black soil regions of Northeast China and Europe, a core spectral feature set comprising 31 diagnostic bands was extracted using the competitive adaptive reweighted sampling (CARS) algorithm combined with the successive projections algorithm (SPA). Although this SOC-OSFS accounts for merely 1.55% of the original full-spectrum dimensionality (31 out of 2000 bands), it demonstrated robust analytical capability in local modeling across all study regions, yielding coefficients of determination (R2 = 0.6714–0.8854). When transferring the prediction model calibrated in the core source domain (n = 10,000) to the other seven independent typical black soil target domains, the direct cross-regional prediction consistently reduced the root mean square error (RMSE) by over 15% compared to that of the full-spectrum models. By further incorporating 20% of the local background samples for intercept correction, the cross-regional predictive accuracy was substantially improved; the goodness-of-fit for the Northeast China target domains increased sharply (maximum R2 = 0.8567), and the European target domains, which feature substantially different pedogenic environments, were successfully corrected from negative to positive linear fits. 
This study validates the efficacy of extracting physiochemically meaningful spectral bands in mitigating the interference caused by spatial heterogeneity, thereby providing a mechanistically grounded and practically viable framework for large-scale SOC estimation via remote sensing. Full article
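The band-selection component can be sketched as a greedy successive-projections procedure. `spa_select` is a hypothetical helper assuming the standard SPA formulation (pick the band least collinear with those already chosen); the CARS pre-screening stage and real spectra are omitted.

```python
import numpy as np

def spa_select(X, k, start=0):
    """Simplified successive projections algorithm (SPA): greedily pick
    k bands whose spectral columns are minimally collinear.
    X: (n_samples, n_bands) spectra; returns selected band indices."""
    Xp = X.astype(float).copy()
    selected = [start]
    for _ in range(k - 1):
        xj = Xp[:, selected[-1]]
        # Project every column onto the orthogonal complement of the
        # most recently selected band's column.
        proj = Xp - np.outer(xj, xj @ Xp) / (xj @ xj)
        proj[:, selected] = 0.0          # never re-pick a chosen band
        nxt = int(np.argmax(np.linalg.norm(proj, axis=0)))
        selected.append(nxt)
        Xp = proj
    return selected

rng = np.random.default_rng(1)
spectra = rng.normal(size=(50, 200))     # toy stand-in for full-spectrum data
bands = spa_select(spectra, k=5)
print(bands)
```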

25 pages, 2496 KB  
Article
Multi-Dimensional Method Innovation and System Construction for Synergistic Damage Assessment of Multi-Media Pollution
by Zhengda Lin, Jifeng Wang, Bingjie Yan, Jun Zhang, Yu Wang, Lingling Fan and Caoqingqing Li
Water 2026, 18(9), 1068; https://doi.org/10.3390/w18091068 - 29 Apr 2026
Abstract
To address issues existing in current multi-media pollution assessment, such as data mismatch, parameter conflicts, and inadequate characterization of nonlinear effects, this study developed a multi-factor synergistic assessment methodological system encompassing “data preprocessing-parameter calibration-damage quantification-model coupling”. A three-stage parameter calibration system of “inheritance-linkage-sensitivity screening” was established to achieve cross-media parameter synergy; an Environmental Damage Entropy (EDE) model was constructed based on information entropy to quantify the nonlinear coupled damage of multiple factors; and the optimal governance threshold was determined by combining the coupling theory of marginal damage and governance cost. Taking a multi-media pollution incident (atmosphere-soil-surface water-groundwater) caused by a chemical plant explosion as a case study, pollution chain identification, damage quantification, ecological risk cascading effect analysis, and health risk assessment were conducted. The results show that this method can accurately identify key pollution pathways. Based on the calculation of Environmental Damage Entropy (EDE = 0.604) and the synergy coefficient (δ = 1.32), the comprehensive damage value was quantified as 8.21 million yuan. Additionally, the threshold exceedance characteristics of various media were identified, reflecting the cumulative and lagging nature of ecological risk cascading effects. The method proposed in this study can accurately identify key pollution pathways and quantify comprehensive damage as well as ecological risks, providing scientific support for the allocation of multi-media pollution governance responsibilities and precise prevention and control. Full article
(This article belongs to the Section Water Quality and Contamination)
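The abstract does not give the Environmental Damage Entropy formula, so the following shows only the generic information-entropy ingredient such a model builds on: a normalized Shannon entropy over damage shares across media. The damage shares are hypothetical and the function name is an illustration, not the paper's EDE definition.

```python
import math

def shannon_entropy(shares):
    """Normalized Shannon entropy of damage shares across media
    (0 = damage concentrated in one medium, 1 = evenly spread)."""
    total = sum(shares)
    p = [s / total for s in shares if s > 0]
    h = -sum(pi * math.log(pi) for pi in p)
    return h / math.log(len(shares))

# Hypothetical damage shares for atmosphere, soil, surface water, groundwater.
damage = [0.45, 0.25, 0.20, 0.10]
ede_like = shannon_entropy(damage)
print(round(ede_like, 3))
```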
30 pages, 8060 KB  
Article
Modeling and Optimization of Deep and Machine Learning Methods for Credit Card Fraud Risk Management
by Slavi Georgiev, Maya Markova, Vesela Mihova and Venelin Todorov
Mathematics 2026, 14(9), 1496; https://doi.org/10.3390/math14091496 - 29 Apr 2026
Abstract
As digital payment infrastructures expand, the incidence of card-not-present fraud has become a major source of operational and financial risk for banks, payment processors, and merchants. In response, financial institutions increasingly rely on data-driven decision systems, yet fraudsters continuously adapt their strategies to evade conventional rule-based controls. A promising way to strengthen risk management is to model transactional data so as to uncover non-trivial, high-dimensional patterns characteristic of fraudulent behavior and to embed these models into real-time decision pipelines. In this work, we develop and compare a suite of learning-based fraud detectors, including a convolutional neural network and several machine learning classifiers, within a unified quantitative risk-management framework. The problem is formulated as a supervised classification task within a quantitative risk management framework, where the cost of missed fraud is particularly critical. The mathematical contribution is methodological rather than architectural: we design a leakage-safe and prevalence-faithful evaluation protocol for extremely imbalanced binary classification, combine cross-validated hyperparameter optimization with risk-aligned model selection based on metrics such as recall and Matthews correlation coefficient, and quantify uncertainty by bootstrap confidence intervals and paired McNemar tests. In addition, we connect statistical evaluation with deployment-time decisioning through a decision-theoretic, cost-sensitive threshold rule, showing how institution-specific false-positive and false-negative costs determine the operating point of the classifier. Because fraudulent transactions constitute only a small proportion of the total volume, we employ resampling strategies to mitigate severe class imbalance and systematically calibrate the models via cross-validated hyperparameter optimization. 
The empirical analysis on real transaction data shows that carefully tuned deep and ensemble methods can achieve strong fraud-detection performance, while the proposed framework clarifies which performance differences are statistically meaningful and which operating points are most suitable under institution-specific false-positive and false-negative costs. Full article
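The decision-theoretic, cost-sensitive threshold rule mentioned in this abstract has a standard closed form: flag a transaction as fraud when p·c_fn > (1 − p)·c_fp, i.e. when the predicted fraud probability exceeds c_fp / (c_fp + c_fn). The cost figures below are illustrative assumptions, not values from the paper.

```python
def decision_threshold(cost_fp, cost_fn):
    """Bayes-optimal probability threshold for a cost-sensitive binary
    decision: flag fraud when p * cost_fn > (1 - p) * cost_fp."""
    return cost_fp / (cost_fp + cost_fn)

# A missed fraud (false negative) assumed 50x costlier than a false alarm.
t = decision_threshold(cost_fp=1.0, cost_fn=50.0)
print(t)   # very low threshold: expensive misses push the operating point down

def flag(p):
    return p > t

print(flag(0.05), flag(0.001))
```

This makes explicit how institution-specific false-positive and false-negative costs determine the classifier's operating point.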

37 pages, 13630 KB  
Article
Data-Driven Probabilistic Forecasting of Voltage Quality in Distribution Transformers Using Gaussian Processes
by Efraín Mondragón-García, Ángel Marroquín de Jesús, Raúl García-García, Yuri Salazar-Flores, Adán Díaz-Hernández and Emmanuel Vallejo-Castañeda
Energies 2026, 19(9), 2133; https://doi.org/10.3390/en19092133 - 29 Apr 2026
Abstract
A probabilistic data-driven framework for voltage quality forecasting in distribution transformers based on Gaussian process regression and high-resolution field measurements is presented. Voltage time series acquired under real operating conditions were modeled using composite covariance functions designed to capture long-term trends and stochastic multi-scale fluctuations. The proposed approach enables simultaneous prediction and uncertainty quantification, allowing direct compliance assessment with voltage quality standards. The additive Gaussian process models achieved coefficients of determination above 0.75 and produced statistically uncorrelated residuals, indicating an adequate representation of the intrinsic temporal structure. However, the predictive intervals exhibit a certain level of undercoverage, indicating that, while uncertainty is effectively quantified, there is still room for improvement in calibration. The selected kernel structures revealed distinct physical regimes in the voltage dynamics, including smooth steady operation, moderately irregular behavior associated with localized disturbances, and multi-scale stochastic variability. For benchmarking purposes, results were compared with those obtained from a stochastic damped harmonic oscillator with restoring force, a naive model, a seasonal naive model and an Autoregressive Integrated Moving Average model. The oscillator model, the naive model, the seasonal naive model, and the Autoregressive Integrated Moving Average model generated strongly autocorrelated residuals, whereas the Gaussian process models yielded consistent white-noise residuals that outperformed all the other models. These findings demonstrate that probabilistic Gaussian process modeling provides an interpretable, scalable, and uncertainty-aware alternative for predictive voltage quality assessment in modern distribution systems. Full article
(This article belongs to the Section F1: Electrical Power System)
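An additive composite-kernel Gaussian process of the kind described (slow trend + periodic component + noise) can be sketched with scikit-learn. The kernel choices, time scale, and synthetic voltage series below are assumptions, not the authors' configuration; the empirical interval coverage mirrors the undercoverage check the abstract mentions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

rng = np.random.default_rng(2)
t = np.linspace(0, 4, 160)[:, None]                # time in days (toy scale)
v = 120 + 2 * np.sin(2 * np.pi * t[:, 0]) + 0.3 * rng.normal(size=160)

# Additive composite kernel: long-term trend + daily cycle + measurement noise.
kernel = (RBF(length_scale=2.0)
          + ExpSineSquared(length_scale=1.0, periodicity=1.0)
          + WhiteKernel(noise_level=0.1))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, v)

mean, std = gp.predict(t, return_std=True)          # prediction + uncertainty
coverage = np.mean(np.abs(v - mean) <= 1.96 * std)  # empirical 95%-band coverage
print(coverage)
```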

23 pages, 4224 KB  
Article
Physics-Informed Active Learning for Calibrating Mesoscopic Dynamic Parameters of Multiphase Concrete in DEM Simulations
by Jinyuan Huang, Zhongyuan Li and Tingting Zhao
Buildings 2026, 16(9), 1713; https://doi.org/10.3390/buildings16091713 - 27 Apr 2026
Abstract
The discrete element method (DEM) is widely used to simulate concrete failure, but calibrating its mesoscopic dynamic parameters is computationally expensive due to the high-dimensional parameter space. This study proposes a physics-informed active learning framework to autonomously calibrate these parameters under impact loads. An FDM-DEM coupled split Hopkinson pressure bar model is established to simulate macroscopic dynamic compressive responses. Subsequently, a Plackett–Burman experimental design reduces the parameter optimization space from 16 to 8 core dimensions. A multi-layer perceptron surrogate model is then constructed. By comparing two heuristic active sampling strategies, results indicate that a parameter priority-guided strategy incorporating physical priors significantly outperforms a mid-value exploration strategy. The proposed approach achieves coefficients of determination exceeding 0.9 for predicting multiple macroscopic dynamic indicators on an independent testing set. Building upon this forward mapping, a robust inverse parameter prediction mechanism is established, achieving a closed-loop reconstruction of 0.8662. This framework provides a reliable, data-efficient, and automated pathway for calibrating complex multiphase particulate systems. Full article
(This article belongs to the Section Building Materials, and Repair & Renovation)
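The forward surrogate mapping (screened mesoscopic parameters to macroscopic dynamic indicators) can be sketched with a multi-layer perceptron. The dimensions (8 inputs after Plackett-Burman screening, 3 outputs) follow the abstract, but the synthetic data are assumptions and the active-learning sampling loop is omitted.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)

# Toy stand-in: 8 screened mesoscopic DEM parameters -> 3 macroscopic
# dynamic indicators (e.g. peak stress, failure strain, absorbed energy).
P = rng.uniform(size=(400, 8))
Y = np.column_stack([P @ rng.normal(size=8) for _ in range(3)])

P_tr, P_te, Y_tr, Y_te = train_test_split(P, Y, test_size=0.25, random_state=0)
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(P_tr, Y_tr)

r2 = r2_score(Y_te, surrogate.predict(P_te))   # test-set goodness of fit
print(r2)
```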

25 pages, 1872 KB  
Article
Contactless Microwave-Based Estimation of Complex Permittivity of Masonry Materials: A Frequency-Domain Approach
by Zenon Szczepaniak, Paweł Juszczyński, Waldemar Susek, Krzysztof Tabiś and Zbigniew Suchorab
Sensors 2026, 26(9), 2693; https://doi.org/10.3390/s26092693 - 26 Apr 2026
Abstract
This article concerns the issue of contactless estimation of the complex electrical permittivity of masonry materials by means of a microwave technique in the frequency domain. The main aim of the study was to develop a method enabling the determination of the real part of relative permittivity and the electrical conductivity of ceramic building materials using microwave reflection measurements, as well as to assess the applicability of the proposed approach for moisture diagnostics in porous media. The research was performed using a reflection-mode measuring setup comprising a vector network analyser and a broadband horn antenna, while measurements were carried out in the frequency range from 1 to 6 GHz on samples of solid ceramic brick with six gravimetric moisture levels. A one-dimensional model of electromagnetic wave propagation in the material was developed, considering complex permittivity, impedance transformation, and a calibration procedure compensating for the influence of the antenna and free-space propagation. Based on the fitting of the magnitude and phase characteristics of the reflection coefficient, the electrical parameters of the tested samples were estimated. The results obtained showed an increase in both permittivity and conductivity with increasing moisture content and revealed very good agreement with the reference values determined using the time-domain method. It can be concluded that the frequency-domain microwave approach may be effectively applied for contactless and non-destructive diagnostics and estimation of the dielectric properties and moisture content in ceramic materials. Full article
(This article belongs to the Section Physical Sensors)
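The core quantity of the one-dimensional propagation model, the normal-incidence reflection coefficient of a lossy dielectric half-space, can be computed directly from the complex intrinsic impedance. The brick parameter values are illustrative assumptions, and the antenna/free-space calibration step described in the abstract is omitted.

```python
import cmath
import math

EPS0 = 8.8541878128e-12    # vacuum permittivity, F/m
MU0 = 4e-7 * math.pi       # vacuum permeability, H/m

def reflection_coefficient(f_hz, eps_r, sigma):
    """Normal-incidence reflection coefficient at an air/material
    interface, from the material's relative permittivity eps_r and
    conductivity sigma (S/m), assuming a non-magnetic half-space."""
    w = 2 * math.pi * f_hz
    eta_air = math.sqrt(MU0 / EPS0)
    eta_mat = cmath.sqrt(1j * w * MU0 / (sigma + 1j * w * EPS0 * eps_r))
    return (eta_mat - eta_air) / (eta_mat + eta_air)

# Dry vs moist brick at 3 GHz (illustrative values, not the paper's data):
# higher permittivity and conductivity give a stronger reflection.
g_dry = reflection_coefficient(3e9, eps_r=4.0, sigma=0.01)
g_wet = reflection_coefficient(3e9, eps_r=12.0, sigma=0.15)
print(abs(g_dry), abs(g_wet))
```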
31 pages, 7149 KB  
Article
Nationwide Solar Radiation Zoning and Performance Comparison of Empirical and Deep Learning Models
by Bing Hui, Qian Zhang, Lei Hou, Yan Zhang, Qinghua Shi, Guoqing Chen and Junhui Wang
Appl. Sci. 2026, 16(9), 4229; https://doi.org/10.3390/app16094229 - 26 Apr 2026
Abstract
Accurate solar radiation estimation is critical for optimizing solar energy applications. This study divided 819 meteorological stations in China into six solar radiation zones using k-means, hierarchical, and bisecting k-means clustering based on daily relative sunshine duration. Correlation analysis and feature importance evaluation were conducted to quantify the contributions of key meteorological variables. A comparison of models considering regional heterogeneity was performed. Six sunshine-based empirical models, three machine learning models (Random Forest, Support Vector Machine, and Extreme Gradient Boosting), and two deep learning models (Long Short-Term Memory and Gated Recurrent Unit) were systematically evaluated across 98 stations with observed solar radiation data. Model performance was assessed using the coefficient of determination (R2), mean absolute error (MAE), root mean square error (RMSE), and normalized RMSE (NRMSE). Results showed that k-means clustering outperformed the other two methods and was adopted for final zoning. The correlation analysis identified sunshine duration (S), extraterrestrial radiation (Ra), temperature difference (ΔT), and maximum temperature (Tmax) as the dominant influencing factors, with clear regional heterogeneity. The deep learning models, particularly LSTM (R2 = 0.939, RMSE = 1.702 MJ m−2 d−1, MAE = 1.319 MJ m−2 d−1, NRMSE = 0.046), achieved the highest accuracy, followed by GRU, XGB, SVM, and RF. Among the empirical models, Model 5 performed best in Zones 1, 3, 4, and 5, while Model 6 was optimal in Zones 2 and 6. The key novelty of the study is an integrated zoning–prediction framework for regional solar radiation estimation, combining clustering validation, correlation analysis, empirical model calibration, and deep learning benchmarking, with enhanced physical interpretability and prediction accuracy. Full article
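The zoning step can be sketched with k-means on station sunshine-duration profiles. The 300 synthetic stations and 12-dimensional monthly profiles below are assumptions standing in for the study's 819 stations and daily data.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)

# Toy stand-in for station-level relative sunshine duration profiles
# (rows: stations, columns: monthly means), grouped into 6 zones.
profiles = np.vstack([rng.normal(loc=m, scale=0.05, size=(50, 12))
                      for m in np.linspace(0.3, 0.8, 6)])

km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(profiles)
zones = km.labels_                      # zone assignment per station
print(np.bincount(zones))               # stations per zone
```

In the study, the resulting zoning would be validated against the hierarchical and bisecting k-means alternatives before being adopted.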
25 pages, 8307 KB  
Article
A Physics–Data Hybrid Framework Using Uncalibrated Consumer CMOS Vision: Pilot Study on Monocular Automatic TUG Assessment Towards Early Parkinson’s Disease Risk Screening
by Yuxiang Qiu, Xiaodong Sun, Fan Yang, Jarred Fastier-Wooller, Shun Muramatsu, Michitaka Yamamoto and Toshihiro Itoh
Micromachines 2026, 17(5), 523; https://doi.org/10.3390/mi17050523 - 25 Apr 2026
Abstract
The Timed Up and Go (TUG) test is a clinical gold standard for assessing elderly mobility, yet its automated deployment in home-monitoring and resource-limited areas is hindered by high hardware costs and expert calibration requirements. This study introduces a Physics–Data Hybrid framework specifically designed for uncalibrated consumer-grade CMOS cameras, enabling a “plug-and-play” solution for early Parkinson’s disease (PD) risk screening. The proposed pipeline integrates learning-based pose perception with a self-evolving physics model to recover absolute metric-scale motion without manual checkerboard calibration. A noise-adaptive fusion strategy is implemented to reconcile 2D pixel dynamics with 3D kinematic consistency, overcoming the inherent scale ambiguity of monocular vision. Crucially, this framework enables the extraction of high-dimensional spatiotemporal parameters—such as stride length coefficient of variation and mean gait velocity—which provide a finer diagnostic resolution for capturing subtle motor fluctuations than conventional timing-only systems. Results from our pilot study with a cohort of 10 subjects demonstrate that these extracted metric features serve as decisive markers for risk staging simulated by dual-task-induced cognitive-motor-interference, achieving 98% screening accuracy and an overall classification accuracy of 87.32%. This framework provides a robust, low-cost tool for ubiquitous telehealth, potentially supporting early PD risk assessment in underserved populations. Full article
13 pages, 331 KB  
Article
Impact of Trait Measurement Error on Quantitative Genetic Analysis of Computer Vision-Derived Traits
by Ye Bi, Yijian Huang, Haipeng Yu and Gota Morota
Genes 2026, 17(5), 506; https://doi.org/10.3390/genes17050506 - 24 Apr 2026
Abstract
Background: Quantitative genetic analysis of image- or video-derived phenotypes is increasingly being performed for a wide range of traits. Pig body weight values estimated by a conventional approach or a computer vision system can be considered two different measurements of the same trait but with different sources of phenotyping error. Previous studies have shown that trait measurement error, defined as the difference between manually collected phenotypes and image-derived phenotypes, can be influenced by genetics, suggesting that the error is systematic rather than random and is more likely to lead to misleading quantitative genetic analysis results. Therefore, we investigated the effect of trait measurement error on the genetic analysis of pig body weight (BW). Results: Calibrated scale-based and image-based BW showed high coefficients of determination and goodness of fit. Genomic heritability estimates for scale-based and image-based BW were mostly identical across growth periods. Genomic heritability estimates for trait measurement error were consistently negligible, regardless of the choice of computer vision algorithm. In addition, genome-wide association analysis revealed no overlap between the top markers identified for scale-based BW and those associated with trait measurement error. Overall, the deep learning-based regressions outperformed the adaptive thresholding segmentation methods. Conclusion: This study showed that manually measured scale-based and image-based BW phenotypes yielded the same quantitative genetic results. We found no evidence that BW trait measurement error could be influenced, at least in part, by genetic factors. This suggests that trait measurement error in pig BW does not contain systematic errors that could bias downstream genetic analysis. Full article
(This article belongs to the Section Animal Genetics and Genomics)

26 pages, 4696 KB  
Article
Exploring Variable Influences on the Compressive Strength of Alkali-Activated Concrete Using Ensemble Tree, Deep Learning Methods and SHAP-Based Interpretation
by Musa Adamu, Mahmud M. Jibril, Abdurra’uf M. Gora, Yasser E. Ibrahim and Hani Alanazi
Eng 2026, 7(5), 192; https://doi.org/10.3390/eng7050192 - 24 Apr 2026
Abstract
Growing concerns about global climate change and its negative consequences for communities have put immense pressure on the building industry, which is one of the primary sources of greenhouse gas emissions. Due to the environmental issues associated with the manufacture of sustainable construction materials, alkali-activated concrete (AAC) has emerged as a competitive alternative to cement. To predict the compressive strength (CS) of AAC, four machine learning (ML) models, namely, Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), Random Forest (RF), and Extreme Gradient Boosting (XGBoost), were employed in this study using 193 data points. The input variables include Precursor “P” (kg/m3), Blast Furnace Slag “BFS ratio”, Sodium hydroxide “Na” (kg/m3), silicate modulus “Ms”, water content “W” (kg/m3), fine aggregate “FA” (kg/m3), coarse aggregate “A” (kg/m3), and curing time “CT” (day), with CS (MPa) as the output variable. The dataset was checked for stationarity and then normalized to decrease data redundancy and increase integrity. Furthermore, three model combinations were developed based on the relationship between the input and target variables. The XGB-M3 model outperformed all other models with a high degree of accuracy, according to the study’s findings. Specifically, the Pearson correlation coefficient (PCC) was 0.9577, and the mean absolute percentage error (MAPE) was 14.95% during the calibration phase. SHAP, an explainable AI approach that provides interpretable insights into complex AI systems by assigning feature importance to model predictions, was employed. Results suggest the higher predictions from the XGB-M3 and RF-M3 models were largely driven by curing time (CT). Full article
(This article belongs to the Special Issue Artificial Intelligence for Engineering Applications, 2nd Edition)
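The two headline metrics in this abstract, the Pearson correlation coefficient (PCC) and the mean absolute percentage error (MAPE), can be computed directly. The observed/predicted compressive-strength values below are illustrative, not the study's data.

```python
import numpy as np

def pcc(y_true, y_pred):
    """Pearson correlation coefficient between observed and predicted CS."""
    return float(np.corrcoef(y_true, y_pred)[0, 1])

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

observed = np.array([30.0, 42.5, 55.0, 38.0, 61.0])    # MPa, illustrative
predicted = np.array([28.5, 44.0, 52.0, 39.5, 63.0])

print(pcc(observed, predicted), mape(observed, predicted))
```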
29 pages, 1984 KB  
Article
A Smart Agro-Modelling Framework for Maize Growth and Yield Assessment in a Mediterranean Climate
by Sofia Silva, Cassio Miguel Ferrazza, João Rolim, Maria do Rosário Cameira and Paula Paredes
Water 2026, 18(9), 1015; https://doi.org/10.3390/w18091015 - 24 Apr 2026
Abstract
Accurate estimation of crop development, water use and yield is essential for improving irrigation management in Mediterranean agricultural systems under increasing climate variability. However, many crop models require extensive input data and technical expertise, limiting their operational use by farmers and technicians. This study proposes an integrated agro-modelling framework that combines thermal time modelling, satellite-derived vegetation indices and simplified yield estimation approaches to assess maize phenology, crop water use and productivity under real farming conditions. A key component of the framework is the use of the Sentinel-2 Normalized Difference Vegetation Index (NDVI) time series to dynamically identify crop growth stages and derive actual basal crop coefficients (Kcb act), enabling the estimation of actual crop transpiration (Tc act). These NDVI-based estimates of actual Kcb and Tc were evaluated against simulations from the previously calibrated soil water balance model SIMDualKc. The results showed that the temporal profiles of the NDVI successfully captured the progression of the maize growth stages, although some discrepancies were observed during early stages of development due to the effects of the soil background and the satellite revisit intervals. An empirical relationship between the NDVI and Kcb was developed using multi-year observations and model simulations, improving crop transpiration estimation under field conditions. The NDVI-based approach adequately reproduced daily transpiration dynamics with good agreement with SIMDualKc simulations, yielding RMSE values of 0.11–0.69 mm d−1 and errors generally below 21% of the mean transpiration rate. Seasonal transpiration estimates showed stronger agreement once canopy cover reached its maximum. 
The integrated AEZ–Stewart modelling framework incorporating NDVI-based transpiration estimations provided accurate yield predictions, with RMSE values of 1.7–2.3 t ha−1 (representing less than 14% of the observed yields). Overall, the proposed framework demonstrates strong potential as a practical and scalable decision-support tool for irrigation management and yield assessment in Mediterranean maize systems. Its novelty lies in the operational integration of NDVI-derived crop development and transpiration estimates within a simplified yield modelling structure, offering a transferable approach applicable to other regions and cropping systems where satellite data are available. Full article
(This article belongs to the Special Issue Use of Remote Sensing Technologies for Water Resources Management)
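The framework's core operational step, turning an NDVI time series into daily transpiration, can be sketched in a few lines. This is a minimal sketch: the linear Kcb–NDVI coefficients, the clipping range, and all numeric series below are illustrative placeholders, not the relationship or data fitted in the study.

```python
import numpy as np

# Hypothetical linear NDVI-to-Kcb relation, Kcb_act = a * NDVI + b.
# The coefficients are illustrative placeholders, not the study's values.
A, B = 1.36, -0.18

def kcb_from_ndvi(ndvi):
    """Estimate the actual basal crop coefficient from NDVI,
    clipped to a plausible range for maize."""
    return np.clip(A * np.asarray(ndvi, dtype=float) + B, 0.0, 1.15)

def transpiration(ndvi, eto):
    """Actual crop transpiration Tc_act = Kcb_act * ETo (mm/day),
    neglecting any water-stress reduction for simplicity."""
    return kcb_from_ndvi(ndvi) * np.asarray(eto, dtype=float)

# Example: a short NDVI series with daily reference ET (mm/day)
ndvi = np.array([0.25, 0.55, 0.80, 0.85])
eto = np.array([4.0, 5.5, 6.5, 6.0])
tc_act = transpiration(ndvi, eto)

# Hypothetical SIMDualKc transpiration for the same days, used to
# compute an RMSE the same way the framework is evaluated.
tc_sim = np.array([0.8, 3.0, 5.8, 6.1])
rmse = float(np.sqrt(np.mean((tc_act - tc_sim) ** 2)))
```

In the paper's workflow the comparison series would come from the calibrated SIMDualKc model rather than the placeholder array used here.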
14 pages, 332 KB  
Article
QSAR Models for Sweetness: Can They Shape the Future of Nutritional Safety?
by Alla P. Toropova, Andrey A. Toropov, Ivan Raška, Maria Raškova and Patnala Ganga Raju Achary
Foods 2026, 15(9), 1481; https://doi.org/10.3390/foods15091481 - 23 Apr 2026
Abstract
Food safety, nutrition, and public health are pressing economic and medical problems. Sweetness is an important property in food technology. Models for the sweetness of organic compounds used in the food industry are proposed. The models are built using the CORAL software. Two new statistical coefficients of predictive potential are studied: the index of ideality of correlation (IIC) and the correlation intensity index (CII). The effectiveness of the IIC and CII was tested by modelling sweetness via Monte Carlo optimization of correlation weights for molecular features extracted from Simplified Molecular Input Line Entry System (SMILES) strings. Both criteria were shown to improve the models' statistical quality on the calibration and validation sets; however, this improvement is accompanied by a decrease in the statistical quality on the training sets.
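The IIC criterion mentioned in the abstract can be sketched as follows. This is a minimal reading of the published definition (the correlation coefficient scaled by the balance of mean absolute errors over negative and non-negative residuals); the function name and test values are our own.

```python
import numpy as np

def iic(obs, pred):
    """Index of Ideality of Correlation (sketch):
        IIC = r * min(MAE-, MAE+) / max(MAE-, MAE+)
    where MAE- and MAE+ are the mean absolute errors over observations
    with negative and non-negative residuals (obs - pred)."""
    obs = np.asarray(obs, dtype=float)
    pred = np.asarray(pred, dtype=float)
    resid = obs - pred
    r = np.corrcoef(obs, pred)[0, 1]
    neg, pos = resid[resid < 0], resid[resid >= 0]
    mae_neg = np.mean(np.abs(neg)) if neg.size else 0.0
    mae_pos = np.mean(np.abs(pos)) if pos.size else 0.0
    hi = max(mae_neg, mae_pos)
    # A well-correlated but systematically one-sided model is penalized:
    # the more unbalanced the residuals, the smaller the ratio.
    return r * (min(mae_neg, mae_pos) / hi if hi > 0 else 1.0)
```

For example, a model whose errors are all on one side of the observations (a constant offset) has a high correlation coefficient but an IIC of zero, which is the behavior that distinguishes IIC from plain r.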
14 pages, 1200 KB  
Technical Note
Consideration of Correlations in Radiometric Measurements of the Environment
by Steven W. Brown, Maritoni A. Litorja, Julia K. Marrs and David W. Allen
Remote Sens. 2026, 18(9), 1286; https://doi.org/10.3390/rs18091286 - 23 Apr 2026
Abstract
Vicarious calibration is a technique that uses radiometrically stable targets such as dry lakebeds, desert sites, and open grasslands for the post-launch calibration of a satellite sensor. Top-of-the-atmosphere radiances or reflectances derived from those sites are provided for the calibration of the sensor. The reflectance of a remote sensing vicarious calibration site is measured by ratioing the signal from a ground target to the signal from a reference target, often a white panel made of PTFE whose reflectance is known. When physically mapping a vicarious calibration site prior to a satellite overflight, the elapsed time between the two measurements can be as long as 10 min. The solar illumination can vary on time scales comparable to this delay, increasing the variance in the measured reflectance. In this work, we explore the impact of a temporal delay between two outdoor measurements on the Type A uncertainties in their ratio. The Coefficient of Variation of the ratio was a factor of 3 lower when the two measurements were taken simultaneously rather than sequentially with delays on the order of 10 min. Implications for the protocols used to measure surface reflectance at sites used for the vicarious calibration of aircraft and satellite sensors are discussed.
(This article belongs to the Section Environmental Remote Sensing)
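The effect reported in this abstract follows from first-order uncertainty propagation for a ratio of two correlated signals. The sketch below uses illustrative numbers, not the paper's data, and a hypothetical correlation coefficient for the simultaneous case.

```python
import math

def ratio_cv(cv_target, cv_ref, rho):
    """First-order propagation of the coefficient of variation of a
    ratio R = S_target / S_ref when both signals share a common
    illumination fluctuation with correlation coefficient rho:
        CV_R^2 ~ CV_t^2 + CV_r^2 - 2 * rho * CV_t * CV_r
    """
    var = cv_target**2 + cv_ref**2 - 2.0 * rho * cv_target * cv_ref
    return math.sqrt(max(var, 0.0))

# Illustrative numbers: 3 % illumination-driven CV on each signal.
cv = 0.03
cv_sequential = ratio_cv(cv, cv, rho=0.0)    # ~10-min delay: fluctuations decorrelated
cv_simultaneous = ratio_cv(cv, cv, rho=0.9)  # same-time: mostly common-mode
reduction = cv_sequential / cv_simultaneous  # = sqrt(10), about 3.2
```

With equal per-signal CVs, the reduction factor depends only on the assumed correlation, sqrt(2 / (2 - 2*rho)); a rho near 0.9 reproduces the factor-of-3 scale of improvement the paper reports for simultaneous measurement.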
24 pages, 5686 KB  
Article
3D Simulation Study for a Pneumatic Nozzle–Cylindrical Flapper System
by Peimin Xu, Kazuaki Inaba and Toshiharu Kagawa
Sensors 2026, 26(9), 2578; https://doi.org/10.3390/s26092578 - 22 Apr 2026
Abstract
With the increasing demand for higher efficiency in semiconductor machining, air spindles with compensation systems have attracted growing attention. The pneumatic nozzle–cylindrical flapper is a promising sensing approach due to its high precision and suitability for displacement measurement of high-speed rotating bodies. However, its complex three-dimensional flow behavior leads to significant deviations from conventional nozzle–flat flapper models, limiting its practical application. This study aims to clarify the flow mechanisms governing the nozzle–cylindrical flapper system and to establish a reliable framework for predicting its static characteristics. A computational fluid dynamics model is developed to analyze gas flow within the micron-scale clearance under varying gap sizes and angular orientations, and the results are validated against experimental measurements. The analysis shows that curvature plays a dominant role in the flow behavior. Increasing curvature enhances inertia-driven acceleration and weakens viscous effects while simultaneously inducing strong recirculation due to flow wrapping around the cylindrical surface. These competing mechanisms explain the observed deviations from conventional models and cannot be captured by two-dimensional approaches. Based on the numerical results, a mass flow rate compensation coefficient is introduced and correlated with the momentum compensation coefficient. A quadratic relationship between the two coefficients is identified, indicating a common recirculation-driven mechanism. These findings support previous semi-empirical assumptions and provide a basis for predicting static characteristics with reduced reliance on experimental calibration.
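A quadratic relationship between two coefficients, as identified here, can be recovered from a handful of paired values by a least-squares fit. The (momentum, mass-flow) pairs and the underlying law below are synthetic placeholders, not the study's CFD results.

```python
import numpy as np

# Hypothetical (momentum, mass-flow) compensation-coefficient pairs,
# as might be extracted from a set of CFD runs; the values follow a
# synthetic law y = 0.4*x^2 + 0.1*x + 0.3 chosen for illustration.
momentum = np.array([0.55, 0.65, 0.75, 0.85, 0.95])
massflow = 0.4 * momentum**2 + 0.1 * momentum + 0.3

# Least-squares quadratic fit, as one would do to identify the
# relationship between the two coefficients from simulation output.
a, b, c = np.polyfit(momentum, massflow, deg=2)
predict = np.poly1d([a, b, c])  # evaluate the fitted law at new points
```

With noisy CFD or experimental data the fitted coefficients would of course only approximate the generating law, and the residuals would indicate whether a quadratic form is adequate.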