Search Results (42)

Search Parameters:
Keywords = Mahalanobis distance squared

14 pages, 1169 KB  
Article
Putting DOAC Doubts to Bed(Side): Preliminary Evidence of Comparable Functional Outcomes in Anticoagulated and Non-Anticoagulated Stroke Patients Using Point-of-Care ClotPro® Testing
by Jessica Seetge, Balázs Cséke, Zsófia Nozomi Karádi, Edit Bosnyák, Eszter Johanna Jozifek and László Szapáry
J. Clin. Med. 2025, 14(15), 5476; https://doi.org/10.3390/jcm14155476 - 4 Aug 2025
Viewed by 583
Abstract
Background/Objectives: Direct oral anticoagulants (DOACs) are now the guideline-recommended alternative to vitamin K antagonists (VKAs) for long-term anticoagulation in patients with non-valvular atrial fibrillation. However, accurately assessing their impact on ischemic stroke outcomes remains challenging, primarily due to uncertainty regarding anticoagulation status at the time of hospital admission. This preliminary study addresses this gap by using point-of-care testing (POCT) to confirm DOAC activity at bedside, allowing for a more accurate comparison of 90-day functional outcomes between anticoagulated and non-anticoagulated stroke patients. Methods: We conducted a retrospective cohort study of 786 ischemic stroke patients admitted to the University of Pécs between February 2023 and February 2025. Active DOAC therapy was confirmed using the ClotPro® viscoelastic testing platform, with ecarin Clotting Time (ECT) employed for thrombin inhibitors and Russell’s Viper Venom (RVV) assays for factor Xa inhibitors. Patients were categorized as non-anticoagulated (n = 767) or DOAC-treated with confirmed activity (n = 19). Mahalanobis distance-based matching was applied to account for confounding variables including age, sex, pre-stroke modified Rankin Scale (mRS), and National Institutes of Health Stroke Scale (NIHSS) scores at admission and 72 h post-stroke. The primary outcome was the change in mRS from baseline to 90 days. Statistical analysis included ordinary least squares (OLS) regression and principal component analysis (PCA). Results: After matching, 90-day functional outcomes were comparable between groups (mean mRS-shift: 2.00 in DOAC-treated vs. 1.78 in non-anticoagulated; p = 0.745). OLS regression showed no significant association between DOAC status and recovery (p = 0.599). In contrast, NIHSS score at 72 h (p = 0.004) and age (p = 0.015) were significant predictors of outcome. 
PCA supported these findings, identifying stroke severity as the primary driver of outcome. Conclusions: This preliminary analysis suggests that ischemic stroke patients with confirmed active DOAC therapy at admission may achieve 90-day functional outcomes comparable to those of non-anticoagulated patients. The integration of bedside POCT enhances the reliability of anticoagulation assessment and underscores its clinical value for real-time management in acute stroke care. Larger prospective studies are needed to validate these findings and to further refine treatment strategies. Full article
(This article belongs to the Section Clinical Neurology)
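The Mahalanobis distance-based matching described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the covariate columns (age, sex, pre-stroke mRS, NIHSS) are stand-ins for the study's matching variables:

```python
import numpy as np

def mahalanobis_match(treated, control):
    """Match each treated row to its nearest control row by squared
    Mahalanobis distance over the covariates (e.g., age, sex, pre-stroke
    mRS, admission NIHSS). Returns one control index per treated row."""
    pooled = np.vstack([treated, control])
    cov_inv = np.linalg.pinv(np.cov(pooled, rowvar=False))  # pooled covariance
    matches = []
    for t in treated:
        diff = control - t                                   # (n_control, p)
        d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)   # squared MD to each control
        matches.append(int(np.argmin(d2)))
    return matches
```

Unlike Euclidean matching, the inverse-covariance weighting makes the distance invariant to the scale of each covariate, so age (years) and NIHSS (points) contribute comparably.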

24 pages, 37475 KB  
Article
Synergistic WSET-CNN and Confidence-Driven Pseudo-Labeling for Few-Shot Aero-Engine Bearing Fault Diagnosis
by Shiqian Wu, Lifei Yang and Liangliang Tao
Processes 2025, 13(7), 1970; https://doi.org/10.3390/pr13071970 - 22 Jun 2025
Viewed by 424
Abstract
Reliable fault diagnosis in aero-engine bearing systems is essential for maintaining process stability and safety. However, acquiring fault samples in aerospace applications is costly and difficult, resulting in severely limited data for model training. Traditional methods often perform poorly under such constraints, lacking the ability to extract discriminative features or effectively correlate observed signal changes with underlying process faults. To address this challenge, this study presents a process-oriented framework—WSET-CNN-OOA-LSSVM—designed for effective fault recognition in small-sample scenarios. The framework begins with Wavelet Synchroextracting Transform (WSET), enhancing time–frequency resolution and capturing energy-concentrated fault signatures that reflect degradation along the process timeline. A tailored CNN with asymmetric pooling and progressive dropout preserves temporal dynamics while preventing overfitting. To compensate for limited labels, confidence-based pseudo-labeling is employed, guided by Mahalanobis distance and adaptive thresholds to ensure reliability. Classification is finalized using an Osprey Optimization Algorithm (OOA)-enhanced Least Squares SVM, which adapts decision boundaries to reflect subtle process state transitions. Validated on both test bench and real aero-engine data, the framework achieves 93.4% accuracy with only five fault samples per class and 100% in full-scale scenarios, outperforming eight existing methods. Therefore, the experimental results confirm that the proposed framework can effectively overcome the data scarcity challenge in aerospace bearing fault diagnosis, demonstrating its practical viability for few-shot learning applications in industrial condition monitoring. Full article
(This article belongs to the Section Process Control and Monitoring)
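The confidence-based pseudo-labeling step can be illustrated as below: a hedged sketch assuming per-class Gaussian statistics estimated from the few labeled samples, with a fixed `threshold` standing in for the paper's adaptive thresholds:

```python
import numpy as np

def pseudo_label(unlabeled, class_means, class_cov_invs, threshold):
    """Give each unlabeled feature vector the label of its nearest class
    centroid in squared Mahalanobis distance, but only when that distance
    falls below `threshold`; otherwise return -1 (rejected as unreliable)."""
    labels = []
    for x in unlabeled:
        d2 = [(x - mu) @ ci @ (x - mu)
              for mu, ci in zip(class_means, class_cov_invs)]
        best = int(np.argmin(d2))
        labels.append(best if d2[best] < threshold else -1)
    return labels
```

Rejected samples stay unlabeled, so only high-confidence pseudo-labels augment the handful of real fault samples per class.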

17 pages, 1601 KB  
Article
Application of Portable Near-Infrared Spectroscopy for Quantitative Prediction of Protein Content in Torreya grandis Kernels Under Different States
by Yuqi Gu, Haosheng Zhong, Jianhua Wu, Kaixuan Li, Yu Huang, Huimin Fang, Muhammad Hassan, Lijian Yao and Chao Zhao
Foods 2025, 14(11), 1847; https://doi.org/10.3390/foods14111847 - 22 May 2025
Cited by 1 | Viewed by 832
Abstract
Protein content is a key quality indicator in nuts, influencing their color, taste, storage, and processing properties. Traditional methods for protein quantification, such as the Kjeldahl nitrogen method, are time-consuming and destructive, highlighting the need for rapid, convenient alternatives. This study explores the feasibility of using portable near-infrared spectroscopy (NIRS) for the quantitative prediction of protein content in Torreya grandis (T. grandis) kernels by comparing different sample states (with shell, without shell, and granules). Spectral data were acquired using a portable NIR spectrometer, and the protein content was determined via the Kjeldahl nitrogen method as a reference. Outlier detection was performed using principal component analysis combined with Mahalanobis distance (PCA-MD) and concentration residual analysis. Various spectral preprocessing techniques and partial least squares regression (PLSR) were applied to develop protein prediction models. The results demonstrated that portable NIRS could effectively predict protein content in T. grandis kernels, with the best performance being achieved using granulated samples. The optimized model (1Der-SNV-PLSR-G) significantly outperformed models based on whole kernels (with or without shell), with determination coefficients for the calibration set (Rc2) and prediction set (Rp2) of 0.92 and 0.86, respectively, indicating that the sample state critically influenced prediction accuracy. This study confirmed the potential of portable NIRS as a rapid and convenient tool for protein quantification in nuts, offering a practical alternative to conventional methods. The findings also suggested its broader applicability for quality assessment in other nuts and food products, contributing to advancements in food science and agricultural technology. Full article
(This article belongs to the Special Issue Food Proteins: Innovations for Food Technologies)
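The PCA-MD outlier screen used for the spectral data can be sketched as follows. This is an assumed implementation, not the authors'; the 3-sigma cutoff is illustrative:

```python
import numpy as np

def pca_md_outliers(X, n_components=2, alpha=3.0):
    """Project samples (e.g., NIR spectra) onto the leading principal
    components, then flag any sample whose Mahalanobis distance in PC
    space exceeds mean + alpha * std of all distances.
    Returns a boolean outlier mask."""
    Xc = X - X.mean(axis=0)
    # PCA via SVD of the centered data matrix
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T
    cov_inv = np.linalg.pinv(np.cov(scores, rowvar=False))
    d = np.sqrt(np.einsum("ij,jk,ik->i", scores, cov_inv, scores))
    return d > d.mean() + alpha * d.std()
```

Flagged samples would then be cross-checked against concentration residuals, as the abstract describes, before removal from the calibration set.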

26 pages, 9328 KB  
Article
Global Optical and SAR Image Registration Method Based on Local Distortion Division
by Bangjie Li, Dongdong Guan, Yuzhen Xie, Xiaolong Zheng, Zhengsheng Chen, Lefei Pan, Weiheng Zhao and Deliang Xiang
Remote Sens. 2025, 17(9), 1642; https://doi.org/10.3390/rs17091642 - 6 May 2025
Viewed by 1159
Abstract
Variations in terrain elevation cause images acquired under different imaging modalities to deviate from a linear mapping relationship. This effect is particularly pronounced between optical and SAR images, where the range-based imaging mechanism of SAR sensors leads to significant local geometric distortions, such as perspective shrinkage and occlusion. As a result, it becomes difficult to represent the spatial correspondence between optical and SAR images using a single geometric model. To address this challenge, we propose a global optical-SAR image registration method that leverages local distortion characteristics. Specifically, we introduce a Superpixel-based Local Distortion Division (SLDD) method, which defines superpixel region features and segments the image into local distortion and normal regions by computing the Mahalanobis distance between superpixel features. We further design a Multi-Feature Fusion Capsule Network (MFFCN) that integrates shallow salient features with deep structural details, reconstructing the dimensions of digital capsules to generate feature descriptors encompassing texture, phase, structure, and amplitude information. This design effectively mitigates the information loss and feature degradation problems caused by pooling operations in conventional convolutional neural networks (CNNs). Additionally, a hard negative mining loss is incorporated to further enhance feature discriminability. Feature descriptors are extracted separately from regions with different distortion levels, and corresponding transformation models are built for local registration. Finally, the local registration results are fused to generate a globally aligned image. 
Experimental results on public datasets demonstrate that the proposed method achieves superior performance over state-of-the-art (SOTA) approaches in terms of Root Mean Squared Error (RMSE), Correct Match Number (CMN), Distribution of Matched Points (Scat), Edge Fidelity (EF), and overall visual quality. Full article
(This article belongs to the Special Issue Temporal and Spatial Analysis of Multi-Source Remote Sensing Images)

13 pages, 543 KB  
Article
Fitting Geometric Shapes to Fuzzy Point Cloud Data
by Vincent B. Verhoeven, Pasi Raumonen and Markku Åkerblom
J. Imaging 2025, 11(1), 7; https://doi.org/10.3390/jimaging11010007 - 3 Jan 2025
Cited by 2 | Viewed by 1302
Abstract
This article describes procedures and thoughts regarding the reconstruction of geometry-given data and its uncertainty. The data are considered as a continuous fuzzy point cloud, instead of a discrete point cloud. Shape fitting is commonly performed by minimizing the discrete Euclidean distance; however, we propose the novel approach of using the expected Mahalanobis distance. The primary benefit is that it takes both the different magnitude and orientation of uncertainty for each data point into account. We illustrate the approach with laser scanning data of a cylinder and compare its performance with that of the conventional least squares method with and without random sample consensus (RANSAC). Our proposed method fits the geometry more accurately, albeit generally with greater uncertainty, and shows promise for geometry reconstruction with laser-scanned data. Full article
(This article belongs to the Special Issue Geometry Reconstruction from Images (2nd Edition))
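The idea of fitting under per-point anisotropic uncertainty can be reduced to its simplest case, estimating a single location parameter: minimizing the summed squared Mahalanobis distance then has a closed form, the inverse-covariance-weighted mean. This is a simplified sketch of the principle, not the paper's cylinder-fitting procedure:

```python
import numpy as np

def fuzzy_centroid(points, covs):
    """Fit a location parameter mu to uncertain points by minimizing
    sum_i (p_i - mu)^T Cov_i^{-1} (p_i - mu), i.e. the summed squared
    Mahalanobis distance. Closed form: generalized least squares."""
    dim = points.shape[1]
    A = np.zeros((dim, dim))
    b = np.zeros(dim)
    for p, C in zip(points, covs):
        Ci = np.linalg.inv(C)   # per-point inverse covariance (precision)
        A += Ci
        b += Ci @ p
    return np.linalg.solve(A, b)
```

A point known precisely in x but not in y pulls the fit strongly in x only, which is exactly the behaviour a plain Euclidean objective cannot express.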

17 pages, 382 KB  
Article
MODE: Minimax Optimal Deterministic Experiments for Causal Inference in the Presence of Covariates
by Shaohua Xu, Songnan Liu and Yongdao Zhou
Entropy 2024, 26(12), 1023; https://doi.org/10.3390/e26121023 - 26 Nov 2024
Viewed by 939
Abstract
Data-driven decision-making has become crucial across various domains. Randomization and re-randomization are standard techniques employed in controlled experiments to estimate causal effects in the presence of numerous pre-treatment covariates. This paper quantifies the worst-case mean squared error of the difference-in-means estimator as a generalized discrepancy of covariates between treatment and control groups. We demonstrate that existing randomized or re-randomized experiments utilizing Monte Carlo methods are sub-optimal in minimizing this generalized discrepancy. To address this limitation, we introduce a novel optimal deterministic experiment based on quasi-Monte Carlo techniques, which effectively minimizes the generalized discrepancy in a model-independent manner. We provide a theoretical proof indicating that the difference-in-means estimator derived from the proposed experiment converges more rapidly than those obtained from completely randomized or re-randomized experiments using Mahalanobis distance. Simulation results illustrate that the proposed experiment significantly reduces covariate imbalances and estimation uncertainties when compared to existing randomized and deterministic approaches. In summary, the proposed experiment serves as a reliable and effective framework for controlled experimentation in causal inference. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
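For context, the re-randomization baseline this paper compares against can be sketched as follows: redraw a balanced assignment until the Mahalanobis distance between group covariate means is acceptable. An illustrative implementation; the threshold choice is an assumption:

```python
import numpy as np

def rerandomize(X, threshold, rng, max_tries=10000):
    """Re-randomization with a Mahalanobis balance criterion: redraw a
    50/50 assignment until M = (n/4) * diff^T S^{-1} diff < threshold,
    where diff is the treatment-minus-control covariate mean difference
    and S is the sample covariance of the covariates."""
    n, p = X.shape
    S_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    for _ in range(max_tries):
        treat = rng.permutation(n) < n // 2          # exactly half treated
        diff = X[treat].mean(axis=0) - X[~treat].mean(axis=0)
        M = (n / 4) * diff @ S_inv @ diff
        if M < threshold:
            return treat, M
    raise RuntimeError("no assignment met the balance threshold")
```

Under pure randomization M is approximately chi-square with p degrees of freedom, so the threshold directly controls the acceptance rate; the paper's point is that a deterministic quasi-Monte Carlo design drives this imbalance down faster than repeated sampling can.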

20 pages, 3598 KB  
Article
Multi-Site Wind Speed Prediction Based on Graph Embedding and Cyclic Graph Isomorphism Network (GIN-GRU)
by Hongshun Wu and Hui Chen
Energies 2024, 17(14), 3516; https://doi.org/10.3390/en17143516 - 17 Jul 2024
Cited by 3 | Viewed by 1356
Abstract
Accurate and reliable wind speed prediction is conducive to improving the power generation efficiency of electrical systems. Due to the lack of adequate consideration of spatial feature extraction, the existing wind speed prediction models have certain limitations in capturing the rich neighborhood information of multiple sites. To address the previously mentioned constraints, our study introduces a graph isomorphism-based gated recurrent unit (GIN-GRU). Initially, the model utilizes a hybrid mechanism of random forest and principal component analysis (PCA-RF) to process the feature data from different sites. This process not only preserves the primary features but also extracts critical information by performing dimensionality reduction on the residual features. Subsequently, the model constructs graph networks by integrating graph embedding techniques with the Mahalanobis distance metric to synthesize the correlation information among features from multiple sites. This approach effectively consolidates the interrelated feature data and captures the complex interactions across multiple sites. Ultimately, the graph isomorphism network (GIN) delves into the intrinsic relationships within the graph networks and the gated recurrent unit (GRU) integrates these relationships with temporal correlations to address the challenges of wind speed prediction effectively. The experiments conducted on wind farm datasets for offshore California in 2019 have demonstrated that the proposed model has higher prediction accuracy compared to comparative models such as CNN-LSTM and GAT-LSTM. Specifically, by modifying the network layers, we achieved higher precision, with the mean square error (MSE) and root mean square error (RMSE) of wind speed at a height of 10 m being 0.8457 m/s and 0.9196 m/s, respectively. Full article
(This article belongs to the Topic Advances in Power Science and Technology)
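The graph-construction step, pairing graph embedding with a Mahalanobis metric, can be illustrated with a k-nearest-neighbour adjacency over site feature vectors. A hedged sketch; the model's actual graph construction may differ:

```python
import numpy as np

def mahalanobis_adjacency(site_features, k=2):
    """Connect each site to its k nearest neighbours under the Mahalanobis
    metric computed from the pooled site feature matrix, yielding the
    symmetric adjacency a GIN-style graph network could consume."""
    S_inv = np.linalg.pinv(np.cov(site_features, rowvar=False))
    n = len(site_features)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        d = site_features - site_features[i]
        d2 = np.einsum("ij,jk,ik->i", d, S_inv, d)   # squared MD to every site
        d2[i] = np.inf                                # no self-loops
        for j in np.argsort(d2)[:k]:
            A[i, j] = A[j, i] = 1
    return A
```

Using the Mahalanobis metric here means correlated site features do not double-count when deciding which sites are "close", which plain Euclidean k-NN would not guarantee.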

28 pages, 2100 KB  
Article
Damage Detection with Data-Driven Machine Learning Models on an Experimental Structure
by Yohannes L. Alemu, Tom Lahmer and Christian Walther
Eng 2024, 5(2), 629-656; https://doi.org/10.3390/eng5020036 - 17 Apr 2024
Cited by 5 | Viewed by 4103
Abstract
Various techniques have been employed to detect damage in civil engineering structures. Apart from the model-based approach, which demands the frequent updating of its corresponding finite element method (FEM)-built model, data-driven methods have gained prominence. Environmental and operational effects significantly affect damage detection due to the presence of damage-related trends in their analyses. Time-domain approaches such as autoregression and metrics such as the Mahalanobis squared distance have been utilized to mitigate these effects. In the realm of machine learning (ML) models, their effectiveness relies heavily on the type and quality of the extracted features, making this aspect a focal point of attention. The objective of this work is therefore to deploy and observe potential feature extraction approaches used as input in training fully data-driven damage detection machine learning models. The most damage-sensitive segment (MDSS) feature extraction technique, which potentially treats signals under multiple conditions, is also proposed and deployed. It identifies potential segments for each feature coefficient under a defined criterion. Therefore, 680 signals, each consisting of 8192 data points, are recorded using accelerometer sensors at the Los Alamos National Laboratory in the USA. The data are obtained from a three-story 3D building frame and are utilized in this research for a mainly data-driven damage detection task. Three approaches are implemented to replace four missing signals with the generated ones. In this paper, multiple fast Fourier and wavelet-transformed features are employed to evaluate their performance. Most importantly, a power spectral density (PSD)-based feature extraction approach that considers the maximum variability criterion to identify the most sensitive segments is developed and implemented. 
The performance of the MDSS selection technique, proposed in this work, surpasses that of all 18 trained neural networks (NN) and recurrent neural network (RNN) models, achieving more than 80% prediction accuracy on an unseen prediction dataset. It also significantly reduces the feature dimension. Furthermore, a sensitivity analysis is conducted on signal segmentation, overlapping, the treatment of a training dataset imbalance, and principal component analysis (PCA) implementation across various combinations of features. Binary and multiclass classification models are employed to primarily detect and additionally locate and identify the severity class of the damage. The collaborative approach of feature extraction and machine learning models effectively addresses the impact of environmental and operational effects (EOFs), suppressing their influences on the damage detection process. Full article
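The Mahalanobis squared distance mentioned in this abstract is typically used for novelty detection in structural health monitoring: learn baseline statistics from the undamaged state, then flag departures. A minimal sketch, where the 99th-percentile threshold is an assumption rather than the paper's choice:

```python
import numpy as np

class MSDDetector:
    """Novelty detection with the Mahalanobis squared distance (MSD):
    learn mean/covariance of features from the undamaged baseline, then
    flag any observation whose MSD exceeds a baseline-derived threshold."""

    def fit(self, baseline):
        self.mu = baseline.mean(axis=0)
        self.S_inv = np.linalg.pinv(np.cov(baseline, rowvar=False))
        d2 = self._msd(baseline)
        self.threshold = np.percentile(d2, 99)   # 99th-percentile baseline MSD
        return self

    def _msd(self, X):
        d = X - self.mu
        return np.einsum("ij,jk,ik->i", d, self.S_inv, d)

    def predict(self, X):
        return self._msd(X) > self.threshold     # True = flagged as damaged
```

Because the threshold is set from baseline data alone, the detector needs no damaged-state training examples, which is what makes the metric attractive for mitigating environmental and operational effects.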

28 pages, 2631 KB  
Article
Preliminary Nose Landing Gear Digital Twin for Damage Detection
by Lucio Pinello, Omar Hassan, Marco Giglio and Claudio Sbarufatti
Aerospace 2024, 11(3), 222; https://doi.org/10.3390/aerospace11030222 - 12 Mar 2024
Cited by 5 | Viewed by 2532
Abstract
An increase in aircraft availability and readiness is one of the most desired characteristics of aircraft fleets. Unforeseen failures cause additional expenses and are particularly critical when thinking about combat jets and Unmanned Aerial Vehicles (UAVs). For instance, these systems are used under extreme conditions, and there can be situations where standard maintenance procedures are impractical or unfeasible. Thus, it is important to develop a Health and Usage Monitoring System (HUMS) that relies on diagnostic and prognostic algorithms to minimise maintenance downtime, improve safety and availability, and reduce maintenance costs. In particular, within the realm of aircraft structures, landing gear emerges as one of the most intricate systems, comprising several elements, such as actuators, shock absorbers, and structural components. Therefore, this work aims to develop a preliminary digital twin of a nose landing gear and implement diagnostic algorithms within the framework of the Health and Usage Monitoring System (HUMS). In this context, a digital twin can be used to build a database of signals acquired under healthy and faulty conditions on which damage detection algorithms can be implemented and tested. In particular, two algorithms have been implemented: the first is based on the Root-Mean-Square Error (RMSE), while the second relies on the Mahalanobis distance (MD). The algorithms were tested for three nose landing gear subsystems, namely, the steering system, the retraction/extraction system, and the oleo-pneumatic shock absorber. A comparison is made between the two algorithms using the ROC curve and accuracy, assuming equal weight for missed detections and false alarms. The algorithm that uses the Mahalanobis distance demonstrated superior performance, with a lower false alarm rate and higher accuracy compared to the other algorithm. Full article
(This article belongs to the Special Issue Aircraft Structural Health Monitoring and Digital Twin)

18 pages, 3783 KB  
Article
Forest Canopy Height Estimation by Integrating Structural Equation Modeling and Multiple Weighted Regression
by Hongbo Zhu, Bing Zhang, Weidong Song, Qinghua Xie, Xinyue Chang and Ruishan Zhao
Forests 2024, 15(2), 369; https://doi.org/10.3390/f15020369 - 16 Feb 2024
Cited by 4 | Viewed by 2195
Abstract
As an important component of forest parameters, forest canopy height is of great significance to the study of forest carbon stocks and carbon cycle status. There is an increasing interest in obtaining large-scale forest canopy height quickly and accurately. Therefore, many studies have aimed to address this issue by proposing machine learning models that accurately invert forest canopy height. However, most of these approaches treat PolSAR observations from a data-driven viewpoint in the feature selection part of the machine learning model, without taking into account the intrinsic mechanisms of PolSAR polarization observation variables. In this work, we evaluated the correlations between eight polarization observation variables, namely, T11, T22, T33, total backscattered power (SPAN), radar vegetation index (RVI), the surface scattering component (Ps), dihedral angle scattering component (Pd), and body scattering component (Pv) of Freeman-Durden three-component decomposition, and the height of the forest canopy. On this basis, a weighted inversion method for determining forest canopy height within the framework of structural equation modeling was proposed. In this study, the direct and indirect contributions of the above eight polarization observation variables to the forest canopy height inversion task were estimated based on structural equation modeling. Among them, the indirect contributions were generated by the interactions between the variables and ultimately had an impact on the forest canopy height inversion. In this study, the covariance matrix between polarization variables and forest canopy height was calculated based on structural equation modeling, the weights of the variables were calculated in combination with the Mahalanobis distance, and the weighted inversion of forest canopy height was carried out using PSO-SVR. 
In this study, some experiments were carried out using three Gaofen-3 satellite (GF-3) images and ICESat-2 forest canopy height data for some forest areas of Gaofeng Ridge, Baisha Lizu Autonomous County, Hainan Province, China. The results showed that T11, T33, and total backscattered power (SPAN) are highly correlated with forest canopy height. In addition, this study showed that determining the weights of different polarization observation variables contributes positively to the accurate estimation of forest canopy height. The forest canopy height-weighted inversion method proposed in this paper was shown to be superior to the multiple regression model, with a 26% improvement in r and a 0.88 m reduction in the root-mean-square error (RMSE). Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)

18 pages, 4392 KB  
Article
Assessing Algorithms Used for Constructing Confidence Ellipses in Multidimensional Scaling Solutions
by Panos Nikitas and Efthymia Nikita
Algorithms 2023, 16(12), 535; https://doi.org/10.3390/a16120535 - 24 Nov 2023
Cited by 1 | Viewed by 2218
Abstract
This paper assesses algorithms proposed for constructing confidence ellipses in multidimensional scaling (MDS) solutions and proposes a new approach to interpreting these confidence ellipses via hierarchical cluster analysis (HCA). It is shown that the most effective algorithm for constructing confidence ellipses involves the generation of simulated distances based on the original multivariate dataset and then the creation of MDS maps that are scaled, reflected, rotated, translated, and finally superimposed. For this algorithm, the stability measure of the average areas tends to zero with increasing sample size n following the power model, An−B, with positive B values ranging from 0.7 to 2 and high R-squared fitting values around 0.99. This algorithm was applied to create confidence ellipses in the MDS plots of squared Euclidean and Mahalanobis distances for continuous and binary data. It was found that plotting confidence ellipses in MDS plots offers a better visualization of the distance map of the populations under study compared to plotting single points. However, the confidence ellipses cannot eliminate the subjective selection of clusters in the MDS plot based simply on the proximity of the MDS points. To overcome this subjective selection, we should quantify the formation of clusters of proximal samples. Thus, in addition to the algorithm assessment, we propose a new approach that estimates all possible cluster probabilities associated with the confidence ellipses by applying HCA using distance matrices derived from these ellipses. Full article
(This article belongs to the Special Issue Algorithms in Data Classification)
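A confidence ellipse for a group of 2-D MDS coordinates is obtained from the eigendecomposition of their sample covariance. This sketch assumes Gaussian scatter and uses the 95% chi-square quantile with 2 degrees of freedom (5.991); it illustrates the ellipse geometry, not the paper's simulation-based construction:

```python
import numpy as np

def confidence_ellipse(points2d, chi2_q=5.991):
    """Parameters of the ~95% confidence ellipse for 2-D MDS coordinates:
    centre, semi-axis lengths, and major-axis orientation from the
    eigendecomposition of the sample covariance
    (5.991 = 95% quantile of chi-square with 2 dof)."""
    centre = points2d.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(points2d, rowvar=False))
    order = np.argsort(evals)[::-1]                 # major axis first
    semi_axes = np.sqrt(chi2_q * evals[order])
    angle = np.arctan2(*evecs[:, order[0]][::-1])   # orientation of major axis
    return centre, semi_axes, angle
```

Superimposing such ellipses for each population, rather than plotting single mean points, is what gives the visualization the paper evaluates.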

13 pages, 1613 KB  
Article
A Sustainable Way to Determine the Water Content in Torreya grandis Kernels Based on Near-Infrared Spectroscopy
by Jiankai Xiang, Yu Huang, Shihao Guan, Yuqian Shang, Liwei Bao, Xiaojie Yan, Muhammad Hassan, Lijun Xu and Chao Zhao
Sustainability 2023, 15(16), 12423; https://doi.org/10.3390/su151612423 - 16 Aug 2023
Cited by 9 | Viewed by 1690
Abstract
Water content is an important parameter of Torreya grandis (T. grandis) kernels that affects their quality, processing and storage. The traditional drying method for water content determination is time-consuming and laborious. Water content detection based on modern analytical techniques such as spectroscopy is accomplished in a fast, accurate, nondestructive, and sustainable way. The aim of this study was to realize the rapid detection of the water content in T. grandis kernels using near-infrared spectroscopy. The water content of T. grandis kernels was measured by the traditional drying method. Meanwhile, the corresponding near-infrared spectra of these samples were collected. A quantitative water content model of T. grandis kernels was established using the full spectrum after 10 outlier samples were removed by the Mahalanobis distance method and concentration residual analysis. The results showed that the prediction model developed from the partial least squares regression (PLS) method after the spectra were pretreated by the standard normal variate transform (SNV) achieved optimal performance. The correlation coefficient of the calibration set (R2c) and the cross-validation set (R2cv) were 0.9879 and 0.9782, respectively, and the root mean square error of the calibration set (RMSEC) and the root mean square error of the cross-validation set (RMSECV) were 0.0029 and 0.0039, respectively. Thus, near-infrared spectroscopy is feasible for the rapid nondestructive detection of the water content in T. grandis seeds. Detecting the water content of agricultural and forestry products in such an environmentally friendly manner is conducive to the sustainable development of agriculture. Full article
(This article belongs to the Special Issue Sustainable Technology in Agricultural Engineering)
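The Mahalanobis-distance outlier screening described in the abstract above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function name `mahalanobis_outliers`, the use of a PCA score space (needed because the spectral covariance matrix is singular when samples are fewer than wavelengths), and the distance threshold of 3 are all assumptions.

```python
import numpy as np

def mahalanobis_outliers(X, n_components=5, threshold=3.0):
    """Flag samples whose Mahalanobis distance, computed in a
    low-dimensional PCA score space, exceeds `threshold`.

    X : (n_samples, n_wavelengths) spectral matrix.
    Returns a boolean mask that is True for suspected outliers.
    """
    Xc = X - X.mean(axis=0)
    # PCA via SVD: spectra usually have far more wavelengths than
    # samples, so the full covariance matrix is not invertible.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T           # (n_samples, n_components)
    inv_cov = np.linalg.inv(np.cov(scores, rowvar=False))
    diff = scores - scores.mean(axis=0)
    d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)  # squared distances
    return np.sqrt(d2) > threshold
```

In practice the flagged samples would be dropped before fitting the PLS model, with the threshold tuned alongside a concentration-residual check as in the study.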
18 pages, 6068 KB  
Article
Power-Weighted Prediction of Photovoltaic Power Generation in the Context of Structural Equation Modeling
by Hongbo Zhu, Bing Zhang, Weidong Song, Jiguang Dai, Xinmei Lan and Xinyue Chang
Sustainability 2023, 15(14), 10808; https://doi.org/10.3390/su151410808 - 10 Jul 2023
Cited by 9 | Viewed by 1954
Abstract
With the popularization of solar energy development and utilization, photovoltaic power generation is widely used in countries around the world and is increasingly becoming an important part of new energy generation. However, it cannot be ignored that changes in solar radiation and meteorological conditions cause volatility and intermittency in power generation, which, in turn, affect the stability and security of the power grid. Therefore, many studies aim to solve this problem by constructing accurate power prediction models for PV plants. However, most studies focus on adjusting the structure and parameters of the prediction model to achieve a high prediction accuracy. Few studies have examined how the various parameters affect the output of photovoltaic power plants, or how significantly these factors influence the forecast accuracy. In this study, we evaluate the correlations between global horizontal irradiance (GHI), atmospheric density (ρ), cloudiness (CC), wind speed (WS), relative humidity (RH), and ambient temperature (T) and the output power of a photovoltaic power station using a Pearson correlation analysis, and we remove the factors that show little correlation. The direct and indirect effects of the five factors other than wind speed (WS) on the photovoltaic power station are then estimated based on structural equation modeling; the indirect effects arise from interactions between the variables and ultimately affect the power of the photovoltaic power station. Particle swarm optimization-based support vector regression (PSO-SVR) with variable weights derived from the Mahalanobis distance was used to estimate the short-term power of the photovoltaic power station, based on the contributions of the various solar radiation and climatic elements. Experiments were conducted on measured data from a distributed photovoltaic power station in Changzhou, Jiangsu Province, China.
The results demonstrate that the short-term power of a photovoltaic power station is significantly influenced by global horizontal irradiance (GHI), ambient temperature (T), and atmospheric density (ρ). Furthermore, the results demonstrate that accounting for the relative importance of the various contributing factors improves the accuracy of short-term power estimates. The weighted regression model described in this study outperforms the standard PSO-SVR regression model: it yields a 7.2% increase in R2, a 10.7% decrease in the sum of squared errors (SSE), a 2.2% decrease in the root mean square error (RMSE), and a 2.06% decrease in the continuous ranked probability score (CRPS). Full article
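One way to realize the Mahalanobis-distance-based variable weighting mentioned above is to weight training samples by their Mahalanobis distance to the query-day meteorological conditions before fitting a regressor. The sketch below substitutes a locally weighted least-squares fit for the paper's PSO-SVR model; the function names and the Gaussian weighting kernel are assumptions, not the authors' formulation.

```python
import numpy as np

def mahalanobis_weights(X_train, x_query, bandwidth=1.0):
    """Sample weights that decay with the Mahalanobis distance
    between each training sample and the query conditions."""
    inv_cov = np.linalg.inv(np.cov(X_train, rowvar=False))
    diff = X_train - x_query
    d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
    return np.exp(-0.5 * d2 / bandwidth**2)

def weighted_linear_predict(X_train, y_train, x_query, bandwidth=1.0):
    """Locally weighted least squares as a stand-in for the
    distance-weighted PSO-SVR model; predicts power at x_query."""
    w = mahalanobis_weights(X_train, x_query, bandwidth)
    A = np.hstack([X_train, np.ones((len(X_train), 1))])  # add intercept
    sw = np.sqrt(w)
    theta, *_ = np.linalg.lstsq(A * sw[:, None], y_train * sw, rcond=None)
    return np.append(x_query, 1.0) @ theta
```

The design choice is that days with conditions (irradiance, temperature, density) close to the query day dominate the fit, mirroring the paper's idea that factor similarity should drive the short-term estimate.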
23 pages, 5370 KB  
Article
State-of-Health Estimation and Anomaly Detection in Li-Ion Batteries Based on a Novel Architecture with Machine Learning
by Junghwan Lee, Huanli Sun, Yuxia Liu, Xue Li, Yixin Liu and Myungjun Kim
Batteries 2023, 9(5), 264; https://doi.org/10.3390/batteries9050264 - 8 May 2023
Cited by 11 | Viewed by 5319
Abstract
Variations across cells, modules, packs, and vehicles can cause significant errors in the state estimation of lithium-ion batteries (LIBs) using machine learning algorithms, especially when the models are trained with small datasets. Training with large datasets that account for all variations is often impractical due to resource and time constraints at initial product release. To address this issue, we propose a novel architecture that leverages electronic control units, edge computers, and the cloud to detect unrevealed variations and abnormal degradation in LIBs. The architecture comprises a generalized deep neural network (DNN) for generalizability, a personalized DNN for accuracy within a vehicle, and a detector. We emphasize that a generalized DNN trained with small datasets must show reasonable estimation accuracy during cross-validation, which is critical for real applications before online training. We demonstrated the feasibility of the architecture by conducting experiments on 65 DNN models with distinct hyperparameter configurations. The results showed that the personalized DNN achieves a root mean square error (RMSE) of 0.33%, while the generalized DNN achieves an RMSE of 4.6%. Finally, the Mahalanobis distance was applied to the state-of-health (SOH) differences between the generalized and personalized DNNs to detect abnormal degradation. Full article
(This article belongs to the Special Issue Advances in Battery Management Systems)
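The final detection step of the abstract above, flagging abnormal degradation from the difference between the generalized and personalized SOH estimates, can be sketched as a Mahalanobis-distance test against residuals collected during normal operation. The function names and the fixed squared-distance threshold (9.0, roughly the one-dimensional 3-sigma rule) are assumptions, not the authors' calibrated detector.

```python
import numpy as np

def fit_residual_model(residuals):
    """Fit the mean and inverse covariance of normal-operation
    residuals (generalized-minus-personalized SOH estimates)."""
    mu = residuals.mean(axis=0)
    cov = np.atleast_2d(np.cov(residuals, rowvar=False))
    return mu, np.linalg.inv(cov)

def is_abnormal(sample, mu, inv_cov, threshold=9.0):
    """Return True when the squared Mahalanobis distance of a new
    residual exceeds `threshold`, i.e., the SOH estimates disagree
    far more than seen during normal operation."""
    diff = np.atleast_1d(sample - mu)
    return float(diff @ inv_cov @ diff) > threshold
```

In a deployment along the lines of the proposed architecture, the residual statistics would be fitted in the cloud from fleet data and the per-sample test would run on the edge computer.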

27 pages, 18377 KB  
Article
Short-Training Damage Detection Method for Axially Loaded Beams Subject to Seasonal Thermal Variations
by Marta Berardengo, Francescantonio Lucà, Marcello Vanali and Gianvito Annesi
Sensors 2023, 23(3), 1154; https://doi.org/10.3390/s23031154 - 19 Jan 2023
Cited by 7 | Viewed by 2263
Abstract
Vibration-based damage features are widely adopted in the field of structural health monitoring (SHM), and particularly in the monitoring of axially loaded beams, due to their high sensitivity to damage-related changes in structural properties. However, changes in environmental and operating conditions often cause damage-feature variations that can mask any change due to damage, strongly reducing the effectiveness of the monitoring strategy. Most approaches proposed to tackle this problem rely on the availability of a wide training dataset accounting for most of the damage-feature variability due to environmental and operating conditions. These approaches are reliable when a complete training set is available, which represents a significant limitation in applications where only a short training set can be used. This often occurs when SHM systems monitor the health state of an already existing, possibly already damaged structure (e.g., tie-rods in historical buildings), or systems that can undergo rapid deterioration. To overcome this limit, this work proposes a new damage index that is not affected by environmental conditions and is able to detect damage reliably even with a short training set. The proposed index is based on the principal component analysis (PCA) of vibration-based damage features. PCA is shown to allow a simple filtering procedure of the operating and environmental effects on the damage feature, thus avoiding any dependence on the extent of the training set. The effectiveness of the proposed index is shown through both simulated and experimental case studies of an axially loaded beam-like structure, and it is compared with a Mahalanobis squared distance-based index as a reference.
The obtained results highlight the capability of the proposed index to filter out temperature effects on a multivariate damage feature composed of eigenfrequencies, for both short and long training sets. Moreover, the proposed PCA-based strategy is shown to outperform the benchmark, both in terms of temperature dependency and damage sensitivity. Full article
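The PCA filtering idea above, projecting eigenfrequency features onto the subspace orthogonal to the leading environment-driven principal components and using the residual as a damage index, can be sketched as follows. The function name, the choice of a single environmental component, and the synthetic data in the usage example are assumptions, not the authors' algorithm.

```python
import numpy as np

def pca_filtered_index(F_train, F_test, n_env=1):
    """Damage index from the PCA residual of eigenfrequency features.

    The leading `n_env` principal components of the training features
    are assumed to capture temperature-driven variation and are
    projected out; the norm of what remains responds to damage.
    """
    mu = F_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(F_train - mu, full_matrices=False)
    V_env = Vt[:n_env]                        # environmental subspace
    Xc = F_test - mu
    resid = Xc - Xc @ V_env.T @ V_env         # remove env. component
    return np.linalg.norm(resid, axis=1)      # one index per observation
```

Because the environmental subspace is estimated from the dominant direction of variation, even a short training set that spans part of a seasonal cycle can fix `V_env`, which is the property the abstract emphasizes.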
