
Search Results (27)

Search Parameters:
Keywords = Mahalanobis distance space

17 pages, 4080 KiB  
Article
Defining and Analyzing Nervousness Using AI-Based Facial Expression Recognition
by Hyunsoo Seo, Seunghyun Kim and Eui Chul Lee
Mathematics 2025, 13(11), 1745; https://doi.org/10.3390/math13111745 - 25 May 2025
Viewed by 983
Abstract
Nervousness is a complex emotional state characterized by high arousal and ambiguous valence, often triggered in high-stress environments. This study presents a mathematical and computational framework for defining and classifying nervousness using facial expression data projected onto a valence–arousal (V–A) space. A statistical approach employing the Minimum Covariance Determinant (MCD) estimator is used to construct 90% and 99% confidence ellipses for nervous and non-nervous states, respectively, using Mahalanobis distance. These ellipses form the basis for binary labeling of the AffectNet dataset. We apply a deep learning model trained via knowledge distillation, with EmoNet as the teacher and MobileNetV2 as the student, to efficiently classify nervousness. The experimental results on the AffectNet dataset show that our proposed method achieves a classification accuracy of 81.08%, improving over the baseline by approximately 6%. These results are obtained by refining the valence–arousal distributions and applying knowledge distillation from EmoNet to MobileNetV2. We use accuracy and F1-score as evaluation metrics to validate the performance. Furthermore, we perform a qualitative analysis using action unit (AU) activation graphs to provide deeper insight into nervous facial expressions. The proposed method demonstrates how mathematical tools and deep learning can be integrated for robust affective state modeling. Full article
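The ellipse construction described above can be sketched with standard tools: scikit-learn's MinCovDet for the robust MCD statistics and a chi-squared quantile for the ellipse boundary. This is a minimal sketch; the synthetic valence-arousal points stand in for real facial-expression annotations.

```python
# Sketch: robust confidence ellipses in a 2-D valence-arousal space via the
# Minimum Covariance Determinant (MCD) estimator. Synthetic V-A points are
# illustrative stand-ins for AffectNet annotations.
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
va_points = rng.multivariate_normal([0.1, 0.6], [[0.05, 0.01], [0.01, 0.08]], 500)

mcd = MinCovDet(random_state=0).fit(va_points)
d2 = mcd.mahalanobis(va_points)            # squared Mahalanobis distances

# Points inside the 90% (resp. 99%) ellipse satisfy d^2 <= chi2.ppf(q, df=2).
inside_90 = d2 <= chi2.ppf(0.90, df=2)
inside_99 = d2 <= chi2.ppf(0.99, df=2)
print(f"90% ellipse: {inside_90.mean():.2%}, 99% ellipse: {inside_99.mean():.2%}")
```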

22 pages, 5973 KiB  
Article
Environmental Factors in Structural Health Monitoring—Analysis and Removal of Effects from Resonance Frequencies
by Rims Janeliukstis, Lasma Ratnika, Liga Gaile and Sandris Rucevskis
J. Sens. Actuator Netw. 2025, 14(2), 33; https://doi.org/10.3390/jsan14020033 - 20 Mar 2025
Viewed by 938
Abstract
Strategically important objects, such as dams, tunnels, and bridges, require long-term structural health monitoring programs in order to preserve their structural integrity with minimal downtime and financial expense and increased safety for civilians. The current study focuses on developing a damage detection methodology that is applicable to the long-term monitoring of such structures. It is based on identifying resonant frequencies from operational modal analysis, removing the effect of environmental factors on the resonant frequencies through support vector regression with optimized hyperparameters, and, finally, classifying the global structural state as either healthy or damaged using the Mahalanobis distance. The novelty lies in two additional steps that supplement this procedure, namely, the nonlinear estimation of the relative effects of various environmental factors, such as temperature, humidity, and ambient loads, on the resonant frequencies, and the selection of the most informative resonant frequency features using a non-parametric neighborhood component analysis algorithm. This methodology is validated on a wooden two-story truss structure with different artificial structural modifications that simulate damage in a non-destructive manner. It is found that, firstly, of all the environmental factors, temperature has the dominant decreasing effect on resonance frequencies, followed by humidity, wind speed, and precipitation. Secondly, selecting only a handful of the most informative resonance frequency features not only reduces the feature space but also increases the classification performance, albeit with a trade-off between false alarms and missed damage detection. The proposed approach effectively minimizes false alarms and ensures consistent damage detection under varying environmental conditions, offering tangible benefits for long-term SHM applications. Full article
(This article belongs to the Special Issue Fault Diagnosis in the Internet of Things Applications)
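The two core steps can be sketched under simplified assumptions: SVR regresses the resonance frequencies on environmental variables, and the Mahalanobis distance of the residuals from the healthy baseline scores new observations. The names `env` and `freqs` and all data are illustrative, not the paper's dataset.

```python
# Sketch: remove environmental effects with SVR, then score structural health
# by the Mahalanobis distance of the frequency residuals.
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(1)
env = rng.uniform([-5, 30], [25, 90], size=(300, 2))          # temperature, humidity
freqs = 10.0 - 0.02 * env[:, :1] + rng.normal(0, 0.02, (300, 3))  # 3 resonance freqs

model = MultiOutputRegressor(SVR(C=10.0, epsilon=0.01)).fit(env, freqs)
resid = freqs - model.predict(env)                             # env effects removed

mu = resid.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(resid, rowvar=False))

def health_index(f_new, env_new):
    """Squared Mahalanobis distance of a new observation's residual."""
    r = f_new - model.predict(env_new[None, :])[0] - mu
    return float(r @ cov_inv @ r)

print(health_index(freqs[0], env[0]))     # small for a healthy observation
```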

18 pages, 4243 KiB  
Article
An Optimal Spatio-Temporal Hybrid Model Based on Wavelet Transform for Early Fault Detection
by Jingyang Xing, Fangfang Li, Xiaoyu Ma and Qiuyue Qin
Sensors 2024, 24(14), 4736; https://doi.org/10.3390/s24144736 - 21 Jul 2024
Cited by 3 | Viewed by 1496
Abstract
An optimal spatio-temporal hybrid model (STHM) based on wavelet transform (WT) is proposed to improve the sensitivity and accuracy of detecting slowly evolving faults that occur in the early stage and are easily submerged in noise in complex industrial production systems. Specifically, a WT is performed to denoise the original data, thus reducing the influence of background noise. Then, a principal component analysis (PCA) and the sliding window algorithm are used to acquire the nearest neighbors in both the spatial and time dimensions. Subsequently, the cumulative sum (CUSUM) and the Mahalanobis distance (MD) are used to reconstruct the hybrid statistic with spatial and temporal sequences. This helps to enhance the correlation between high-frequency temporal dynamics and space and improves fault detection precision. Moreover, the kernel density estimation (KDE) method is used to estimate the upper threshold of the hybrid statistic so as to optimize the fault detection process. Finally, simulations are conducted by applying the WT-based optimal STHM to early fault detection in the Tennessee Eastman (TE) process, with the aim of proving that the proposed fault detection method has a high fault detection rate (FDR) and a low false alarm rate (FAR) and can improve both production safety and product quality. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
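The detection chain can be sketched end to end under simplifying assumptions: a synthetic signal in place of TE data, a plain mean-referenced one-sided CUSUM, wavelet denoising with PyWavelets, a PCA projection, a Mahalanobis statistic, and a KDE-estimated control limit.

```python
# Sketch of a WT -> PCA -> Mahalanobis/CUSUM -> KDE-threshold pipeline.
import numpy as np
import pywt
from scipy.stats import gaussian_kde
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 8))                        # normal operating data

def wt_denoise(x, wavelet="db4", level=3):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))        # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

Xd = np.column_stack([wt_denoise(X[:, j]) for j in range(X.shape[1])])
scores = PCA(n_components=3).fit_transform(Xd)

cov_inv = np.linalg.inv(np.cov(scores, rowvar=False))
md2 = np.einsum("ij,jk,ik->i", scores, cov_inv, scores)  # Mahalanobis statistic

k = md2.mean()                                       # CUSUM reference value
s, cusum = 0.0, []
for d in md2:                                        # one-sided CUSUM on md2
    s = max(0.0, s + d - k)
    cusum.append(s)
cusum = np.asarray(cusum)

grid = np.linspace(0.0, cusum.max() * 2 + 1e-9, 1000)
cdf = np.cumsum(gaussian_kde(cusum)(grid))
limit = grid[np.searchsorted(cdf / cdf[-1], 0.99)]   # KDE-based control limit
print(f"upper control limit: {limit:.2f}")
```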

16 pages, 7196 KiB  
Article
3D Object Detection under Urban Road Traffic Scenarios Based on Dual-Layer Voxel Features Fusion Augmentation
by Haobin Jiang, Junhao Ren and Aoxue Li
Sensors 2024, 24(11), 3267; https://doi.org/10.3390/s24113267 - 21 May 2024
Cited by 3 | Viewed by 1607
Abstract
To enhance the accuracy of detecting objects in front of intelligent vehicles in urban road scenarios, this paper proposes a dual-layer voxel feature fusion augmentation network (DL-VFFA). It aims to address object misrecognition caused by local occlusion or a limited field of view. The network employs a point cloud voxelization architecture, utilizing the Mahalanobis distance to associate similar point clouds within neighborhood voxel units. It integrates local and global information through weight sharing to extract boundary point information within each voxel unit. The relative position encoding of voxel features is computed using an improved attention Gaussian deviation matrix in point cloud space to focus on the relative positions of different voxel sequences within channels. During the fusion of point cloud and image features, learnable weight parameters are designed to decouple fine-grained regions, enabling two-layer feature fusion from voxel to voxel and from point cloud to image. Extensive experiments on the KITTI dataset demonstrate the strong performance of DL-VFFA. Compared to the baseline network Second, DL-VFFA performs better in medium- and high-difficulty scenarios. Furthermore, compared to the voxel fusion module in MVX-Net, the voxel feature fusion results in this paper are more accurate, effectively capturing fine-grained object features post-voxelization. Through ablation experiments, we conducted in-depth analyses of the three voxel fusion modules in DL-VFFA to enhance the performance of the baseline detector and achieved superior results. Full article
(This article belongs to the Section Radar Sensors)
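Only one ingredient of the network is simple enough to sketch here: grouping a point cloud into voxels and using the Mahalanobis distance to keep each voxel's most mutually similar points. The voxel size and the mock point cloud are illustrative assumptions, not the DL-VFFA implementation.

```python
# Sketch: Mahalanobis-based association of similar points within voxel units.
import numpy as np

rng = np.random.default_rng(3)
points = rng.normal(size=(2000, 3)) * [10.0, 10.0, 1.0]   # mock LiDAR points
voxel_size = 2.0
keys = np.floor(points / voxel_size).astype(int)

_, inverse = np.unique(keys, axis=0, return_inverse=True)
kept = 0
for v in range(inverse.max() + 1):
    pts = points[inverse == v]
    if len(pts) < 5:                                       # skip sparse voxels
        continue
    cov = np.cov(pts, rowvar=False) + 1e-6 * np.eye(3)     # regularized covariance
    diff = pts - pts.mean(axis=0)
    d2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)
    kept += int((d2 <= np.median(d2)).sum())               # most similar half
print(f"{kept} of {len(points)} points kept as similar within their voxels")
```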

30 pages, 6106 KiB  
Article
Detecting IoT Anomalies Using Fuzzy Subspace Clustering Algorithms
by Mohamed Shenify, Fokrul Alom Mazarbhuiya and A. S. Wungreiphi
Appl. Sci. 2024, 14(3), 1264; https://doi.org/10.3390/app14031264 - 2 Feb 2024
Cited by 4 | Viewed by 1460
Abstract
There are many applications of anomaly detection in the Internet of Things domain. IoT technology consists of a large number of interconnected digital devices that not only generate huge amounts of data continuously but also perform real-time computations. Since IoT devices are highly exposed through the Internet, they frequently face illegitimate access in the form of intrusions, anomalies, fraud, etc. Identifying these illegitimate accesses is an important research problem. In numerous applications, fuzzy clustering, rough set theory, or both have been successfully employed. As the data generated in IoT domains are high-dimensional, clustering methods designed for lower-dimensional data cannot be applied efficiently. Also, the few methods proposed for such applications to date have limited efficacy, so the problem still needs to be addressed. In this article, mixed approaches consisting of nano topology and fuzzy clustering techniques are proposed for anomaly detection in the IoT domain. The methods first use the nano topology of rough set theory to generate CORE as a subspace and then employ two well-known fuzzy clustering techniques on it for the detection of anomalies. Because anomalies are detected in the lower-dimensional space and fuzzy clustering algorithms are involved, the proposed approaches perform comparatively better. The effectiveness of the methods is evaluated using time-complexity analysis and experimental studies with a synthetic dataset and a real-life dataset. Experimentally, it has been found that the proposed approaches outperform the traditional fuzzy clustering algorithms in terms of detection rates, accuracy rates, false alarm rates, and computation times. Furthermore, the nano topological and common Mahalanobis distance-based fuzzy c-means algorithm (NT-CM-FCM) is the best among all traditional or nano topology-based algorithms, as it has accuracy rates of 84.02% and 83.21%, detection rates of 80.54% and 75.37%, and false alarm rates of 7.89% and 9.09% with the KDDCup'99 dataset and Kitsune Network Attack Dataset, respectively. Full article
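A rough sketch of the Mahalanobis-distance fuzzy c-means core (a shared covariance metric, in the spirit of NT-CM-FCM) follows; the nano-topology CORE subspace reduction is assumed to have already happened, and anomalies would be flagged from the resulting memberships.

```python
# Sketch: fuzzy c-means with a common (shared) Mahalanobis metric.
import numpy as np

def mahalanobis_fcm(X, c=2, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))    # one metric for all clusters
    U = rng.dirichlet(np.ones(c), size=len(X))          # initial fuzzy memberships
    for _ in range(iters):
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]          # prototype update
        diff = X[:, None, :] - V[None, :, :]            # shape (n, c, p)
        d2 = np.einsum("ncp,pq,ncq->nc", diff, cov_inv, diff) + 1e-12
        inv_d = d2 ** (-1.0 / (m - 1))                  # standard FCM membership update
        U = inv_d / inv_d.sum(axis=1, keepdims=True)
    return U, V

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (150, 4)), rng.normal(5, 1, (150, 4))])
U, V = mahalanobis_fcm(X)
# A low maximum membership would mark a point as an anomaly candidate.
print(np.bincount(U.argmax(axis=1)), V.round(1))
```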

26 pages, 9562 KiB  
Article
Hyperspectral Anomaly Detection with Auto-Encoder and Independent Target
by Shuhan Chen, Xiaorun Li and Yunfeng Yan
Remote Sens. 2023, 15(22), 5266; https://doi.org/10.3390/rs15225266 - 7 Nov 2023
Cited by 3 | Viewed by 3075
Abstract
As an unsupervised data representation neural network, the auto-encoder (AE) has shown great potential in denoising, dimensionality reduction, and data reconstruction. Many AE-based background (BKG) modeling methods have been developed for hyperspectral anomaly detection (HAD). However, their performance is subject to their unbiased reconstruction of BKG and target pixels. This article presents a rather different low rank and sparse matrix decomposition (LRaSMD) method based on AE, named auto-encoder and independent target (AE-IT), for hyperspectral anomaly detection. First, the encoder weight matrix, obtained by a designed AE network, is utilized to construct a projector for generating a low-rank component in the encoder subspace. By adaptively and reasonably determining the number of neurons in the latent layer, the designed AE-based method can promote the reconstruction of BKG. Second, to ensure independence and representativeness, the component in the encoder's orthogonal subspace is sphered, and unsupervised targets are then found to construct an anomaly space. In order to mitigate the influence of noise on anomaly detection, a sparse cardinality (SC) constraint is enforced on the component in the anomaly space to obtain the sparse anomaly component. Finally, an anomaly detector is constructed by combining the Mahalanobis distance with multiple components, including the encoder component and the sparse anomaly component, to detect anomalies. The experimental results demonstrate that AE-IT performs competitively compared to LRaSMD-based models and AE-based approaches. Full article
(This article belongs to the Section Remote Sensing Image Processing)
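In its simplest form, the Mahalanobis part of such a detector reduces to the classical RX score; the sketch below shows only that core on a mock hyperspectral cube, with the AE and LRaSMD stages omitted.

```python
# Sketch: RX-style Mahalanobis anomaly score on a synthetic hyperspectral cube.
import numpy as np

rng = np.random.default_rng(5)
cube = rng.normal(size=(64, 64, 30))        # H x W x bands, synthetic
cube[32, 32] += 5.0                          # implanted anomaly pixel

pixels = cube.reshape(-1, cube.shape[-1])
mu = pixels.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(pixels, rowvar=False))
diff = pixels - mu
scores = np.einsum("ij,jk,ik->i", diff, cov_inv, diff).reshape(64, 64)
print(np.unravel_index(scores.argmax(), scores.shape))   # -> (32, 32)
```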

19 pages, 3953 KiB  
Article
Finding Misclassified Natura 2000 Habitats by Applying Outlier Detection to Sentinel-1 and Sentinel-2 Data
by David Moravec, Vojtěch Barták and Petra Šímová
Remote Sens. 2023, 15(18), 4409; https://doi.org/10.3390/rs15184409 - 7 Sep 2023
Viewed by 1363
Abstract
The monitoring of Natura 2000 habitats (Habitat Directive 92/43/EEC) is a key activity ensuring the sufficient protection of European biodiversity. Reporting on the status of Natura 2000 habitats is required every 6 years. Although field mapping is still an indispensable source of data on the status of Natura 2000 habitats, and very good field-based data exist in some countries, keeping the field-based habitat maps up to date can be an issue. Remote sensing techniques represent an excellent alternative. Here, we present a new method for detecting habitats that were likely misclassified during the field mapping or that have changed since then. The method identifies the possible habitat mapping errors as the so-called “attribute outliers”, i.e., outlying observations in the feature space of all relevant (spectral and other) characteristics of an individual habitat patch. We used the Czech Natura 2000 Habitat Layer as field-based habitat data. To prepare the feature space of habitat characteristics, we used a fusion of Sentinel-1 and Sentinel-2 satellite data along with a Digital Elevation Model. We compared outlier ratings using the robust Mahalanobis distance and Local Outlier Factor using three different thresholds (Tukey rule, histogram-based Scott’s rule, and 95% quantiles in χ2 distribution). The Mahalanobis distance thresholded by the 95% χ2 quantile achieved the best results, and, because of its high specificity, appeared as a promising tool for identifying erroneously mapped or changed habitats. The presented method can, therefore, be used as a guide to target field updates of Natura 2000 habitat maps or for other habitat/land cover mapping activities where the detection of misclassifications or changes is needed. Full article
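The best-performing rating reported above is easy to sketch: robust Mahalanobis distances from MinCovDet thresholded at the 95% chi-squared quantile, shown alongside the Local Outlier Factor for comparison. The habitat-patch feature vectors are simulated stand-ins.

```python
# Sketch: robust Mahalanobis outlier rating (95% chi2 threshold) vs. LOF.
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(6)
features = rng.normal(size=(400, 6))          # spectral/terrain features per patch
features[:8] += 4.0                           # a few "misclassified" patches

d2 = MinCovDet(random_state=0).fit(features).mahalanobis(features)
md_outliers = d2 > chi2.ppf(0.95, df=features.shape[1])

lof_outliers = LocalOutlierFactor(n_neighbors=20).fit_predict(features) == -1
print(md_outliers.sum(), lof_outliers.sum())
```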

23 pages, 4402 KiB  
Article
Spectral Clustering Approach with K-Nearest Neighbor and Weighted Mahalanobis Distance for Data Mining
by Lifeng Yin, Lei Lv, Dingyi Wang, Yingwei Qu, Huayue Chen and Wu Deng
Electronics 2023, 12(15), 3284; https://doi.org/10.3390/electronics12153284 - 31 Jul 2023
Cited by 11 | Viewed by 3032
Abstract
This paper proposes a spectral clustering method using k-means and a weighted Mahalanobis distance (referred to as MDLSC) to enhance the degree of correlation between data points and improve the clustering accuracy of the Laplacian matrix eigenvectors. First, we used the correlation coefficient as the weight of the Mahalanobis distance to calculate the weighted Mahalanobis distance between any two data points and constructed the weighted Mahalanobis distance matrix of the data set; then, based on this matrix, we used the k-nearest neighbor (KNN) algorithm to construct the similarity matrix. Secondly, the regularized Laplacian matrix was calculated from the similarity matrix, normalized and decomposed, and the feature space for clustering was obtained. This method fully considers the degree of linear correlation between data points and their spatial structure and achieves accurate clustering. Finally, various spectral clustering algorithms were used to conduct multi-angle comparative experiments on artificial and UCI data sets. The experimental results show that MDLSC has certain advantages in each clustering index and that its clustering quality is better. The distribution of the eigenvectors also shows that the similarity matrix calculated by MDLSC is more reasonable, and the calculation of the eigenvectors of the Laplacian matrix maximizes the retention of the distribution characteristics of the original data, thereby improving the accuracy of the clustering algorithm. Full article
(This article belongs to the Section Computer Science & Engineering)
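A compact sketch of the pipeline as read from the abstract: a Mahalanobis distance matrix, a KNN similarity graph built from it, and k-means on the eigenvectors of the normalized Laplacian. The abstract does not fully specify the correlation-coefficient weighting, so the plain Mahalanobis metric is used here as an assumption.

```python
# Sketch: Mahalanobis-distance spectral clustering with a KNN similarity graph.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(5, 1, (100, 4))])

VI = np.linalg.inv(np.cov(X, rowvar=False))       # Mahalanobis metric
D = squareform(pdist(X, metric="mahalanobis", VI=VI))

sigma = D.mean()
S = np.exp(-(D ** 2) / (2 * sigma ** 2))          # Gaussian kernel similarity
idx = np.argsort(D, axis=1)[:, 1:11]              # 10 nearest neighbors (excl. self)
knn = np.zeros_like(D)
np.put_along_axis(knn, idx, 1.0, axis=1)
W = S * np.maximum(knn, knn.T)                    # symmetrized KNN similarity

deg = W.sum(axis=1)
L_sym = np.eye(len(X)) - W / np.sqrt(np.outer(deg, deg))   # normalized Laplacian
_, eigvecs = np.linalg.eigh(L_sym)
emb = eigvecs[:, :2]                               # smallest eigenvectors
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb)
print(np.bincount(labels))
```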

16 pages, 5647 KiB  
Article
Research on Pedestrian Detection and DeepSort Tracking in Front of Intelligent Vehicle Based on Deep Learning
by Xuewen Chen, Yuanpeng Jia, Xiaoqi Tong and Zirou Li
Sustainability 2022, 14(15), 9281; https://doi.org/10.3390/su14159281 - 28 Jul 2022
Cited by 34 | Viewed by 4003
Abstract
To address tracking failures caused by small-target pedestrians and partially occluded pedestrians in dense crowds in complex environments, a pedestrian detection and tracking method for an intelligent vehicle was proposed based on deep learning. On the basis of the YOLO detection model, a channel attention module and a spatial attention module were introduced and appended to the backbone network Darknet-53 in order to amplify the weights of important feature information in the channel and spatial dimensions and improve the model's ability to represent important features. Based on the improved YOLO network, the flow of the DeepSort pedestrian tracking method was designed, and the Kalman filter algorithm was used to estimate the pedestrian motion state. The Mahalanobis distance and appearance features were used to calculate the similarity between the detection frame and the predicted pedestrian trajectory, and the Hungarian algorithm was used to achieve the optimal matching of pedestrian targets. Finally, the improved YOLO pedestrian detection model and the DeepSort pedestrian tracking method were verified in the same experimental environment. The verification results showed that the improved model can increase the detection accuracy for small-target pedestrians, effectively deal with target occlusion, reduce the rates of missed and false pedestrian detections, and mitigate the tracking failures caused by occlusion. Full article
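The association step described above is a standard pattern worth showing: squared Mahalanobis distances between Kalman-predicted tracks and new detections, a chi-squared gate, and the Hungarian algorithm. The Kalman filter itself is omitted; `track_mean` and `track_cov` are assumed stand-ins for its predicted means and innovation covariances.

```python
# Sketch: Mahalanobis-gated detection-to-track assignment (DeepSort-style).
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.stats import chi2

track_mean = np.array([[10.0, 5.0], [40.0, 8.0]])         # predicted (x, y)
track_cov = np.array([np.eye(2) * 2.0, np.eye(2) * 3.0])   # innovation covariances
detections = np.array([[11.0, 5.5], [39.0, 7.0], [80.0, 2.0]])

cost = np.zeros((len(track_mean), len(detections)))
for i, (m, S) in enumerate(zip(track_mean, track_cov)):
    diff = detections - m
    cost[i] = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(S), diff)

# 95% gate for a 2-D state; DeepSort itself uses 9.4877 for its 4-D measurement.
GATE = chi2.ppf(0.95, df=2)
cost[cost > GATE] = 1e6                     # forbid implausible matches
rows, cols = linear_sum_assignment(cost)
matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] < 1e6]
print(matches)          # -> [(0, 0), (1, 1)]; detection 2 would start a new track
```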

21 pages, 3789 KiB  
Article
A Design and Optimization of a CGK-Based Fuzzy Granular Model Based on the Generation of Rational Information Granules
by Chan-Uk Yeom and Keun-Chang Kwak
Appl. Sci. 2022, 12(14), 7226; https://doi.org/10.3390/app12147226 - 18 Jul 2022
Cited by 1 | Viewed by 1844
Abstract
This study proposes an optimized context-based Gustafson–Kessel (CGK)-based fuzzy granular model based on the generation of rational information granules, as well as an optimized CGK-based fuzzy granular model with an aggregated structure. Conventional context-based fuzzy c-means (CFCM) clustering generates clusters considering the input and output spaces. However, its prediction performance decreases when data points with specific geometric features are used. CGK clustering resolves this by generating valid clusters that consider the geometric attributes of the data in the input and output spaces with the aid of the Mahalanobis distance. However, it is necessary to generate rational information granules (IGs), because performance changes significantly according to the context generated in the output space and the shape, size, and number of clusters generated in the input space. Accordingly, rational IGs are obtained by considering the relationship between the coverage and specificity of an IG using a genetic algorithm (GA). The optimized CGK-based fuzzy granular models with the aggregated structure are then designed based on the rational IGs. The prediction performance was compared using two databases to verify the validity of the proposed method. Finally, the experiments revealed that the performance of the proposed method is higher than that of the previous model. Full article
(This article belongs to the Special Issue Fuzzy Systems and Fuzzy Neural Networks: Theory and Applications)
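The coverage-specificity criterion for a rational IG can be sketched for a 1-D interval granule. The definitions below follow the usual justifiable-granularity formulation (maximize coverage times specificity, here without the GA); the paper's exact objective may differ.

```python
# Sketch: coverage vs. specificity trade-off for a 1-D interval granule.
import numpy as np

rng = np.random.default_rng(8)
data = rng.normal(0.0, 1.0, 500)
data_range = data.max() - data.min()

def granule_quality(lo, hi):
    coverage = np.mean((data >= lo) & (data <= hi))   # fraction of data inside
    specificity = 1.0 - (hi - lo) / data_range        # 1 = maximally specific
    return coverage * specificity

# Wider granules cover more but say less; the product trades the two off.
for half_width in (0.5, 1.0, 2.0):
    print(half_width, round(granule_quality(-half_width, half_width), 3))
```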

26 pages, 9200 KiB  
Article
A Design of CGK-Based Granular Model Using Hierarchical Structure
by Chan-Uk Yeom and Keun-Chang Kwak
Appl. Sci. 2022, 12(6), 3154; https://doi.org/10.3390/app12063154 - 19 Mar 2022
Cited by 1 | Viewed by 2598
Abstract
In this paper, we propose context-based GK clustering and design a CGK-based granular model and a hierarchical CGK-based granular model. Existing fuzzy clustering generates clusters using Euclidean distances. However, performance decreases when a cluster is created from data with strong nonlinearity. To improve on this, GK clustering is used. GK clustering creates clusters using the Mahalanobis distance. In this paper, we propose context-based GK (CGK) clustering, which adds to existing GK clustering a method that considers the output space, so that the generated clusters account for both the input and the output space. Based on the proposed CGK clustering, a CGK-based granular model and a hierarchical CGK-based granular model were designed. Since the output of the CGK-based granular model is in the form of a context, it has the advantage of expressing the prediction result verbally, and the CGK-based granular model with a hierarchical structure can generate high-dimensional information granules, so meaningful information granules with high abstraction value can be created. To verify the validity of the methods proposed in this paper, experiments using the concrete compressive strength database confirmed that the proposed methods show superior performance to existing granular models. Full article
(This article belongs to the Special Issue Novel Hybrid Intelligence Techniques in Engineering)
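The GK distance that replaces the Euclidean one is sketched below for a single cluster: a fuzzy covariance matrix yields a volume-normalized Mahalanobis metric. CGK's context weighting over the output space is omitted, and crisp memberships are assumed for illustration.

```python
# Sketch: Gustafson-Kessel (GK) distance for one cluster.
import numpy as np

def gk_distance2(X, v, U_i, m=2.0):
    """Squared GK distance of each row of X to prototype v for one cluster."""
    w = U_i ** m
    diff = X - v
    F = (w[:, None, None] * np.einsum("ij,ik->ijk", diff, diff)).sum(0) / w.sum()
    p = X.shape[1]
    A = np.linalg.det(F) ** (1.0 / p) * np.linalg.inv(F)   # volume-normalized metric
    return np.einsum("ij,jk,ik->i", diff, A, diff)

rng = np.random.default_rng(9)
X = rng.normal(size=(100, 2)) @ np.array([[3.0, 0.0], [0.0, 0.3]])  # elongated cluster
v = X.mean(axis=0)
U_i = np.ones(len(X))                     # crisp memberships for illustration
print(gk_distance2(X, v, U_i)[:3])
```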

46 pages, 8890 KiB  
Article
Impact of Image-Processing Routines on Mapping Glacier Surface Facies from Svalbard and the Himalayas Using Pixel-Based Methods
by Shridhar D. Jawak, Sagar F. Wankhede, Alvarinho J. Luis and Keshava Balakrishna
Remote Sens. 2022, 14(6), 1414; https://doi.org/10.3390/rs14061414 - 15 Mar 2022
Cited by 14 | Viewed by 4764
Abstract
Glacier surface facies are valuable indicators of changes experienced by a glacial system. The interplay of accumulation and ablation facies, followed by intermixing with dust and debris, as well as the local climate, all induce observable and mappable changes on the supraglacial terrain. In the absence or lag of continuous field monitoring, remote sensing observations become vital for maintaining a constant supply of measurable data. However, remote satellite observations suffer from atmospheric effects, resolution disparity, and use of a multitude of mapping methods. Efficient image-processing routines are, hence, necessary to prepare and test the derivable data for mapping applications. The existing literature provides an application-centric view for selection of image processing schemes. This can create confusion, as it is not clear which method of atmospheric correction would be ideal for retrieving facies spectral reflectance, nor are the effects of pansharpening examined on facies. Moreover, with a variety of supervised classifiers and target detection methods now available, it is prudent to test the impact of variations in processing schemes on the resultant thematic classifications. In this context, the current study set its experimental goals. Using very-high-resolution (VHR) WorldView-2 data, we aimed to test the effects of three common atmospheric correction methods, viz. Dark Object Subtraction (DOS), Quick Atmospheric Correction (QUAC), and Fast Line-of-Sight Atmospheric Analysis of Hypercubes (FLAASH); and two pansharpening methods, viz. Gram–Schmidt (GS) and Hyperspherical Color Sharpening (HCS), on thematic classification of facies using 12 supervised classifiers. The conventional classifiers included: Mahalanobis Distance (MHD), Maximum Likelihood (MXL), Minimum Distance to Mean (MD), Spectral Angle Mapper (SAM), and Winner Takes All (WTA). The advanced/target detection classifiers consisted of: Adaptive Coherence Estimator (ACE), Constrained Energy Minimization (CEM), Matched Filtering (MF), Mixture-Tuned Matched Filtering (MTMF), Mixture-Tuned Target-Constrained Interference-Minimized Filter (MTTCIMF), Orthogonal Space Projection (OSP), and Target-Constrained Interference-Minimized Filter (TCIMF). This experiment was performed on glaciers at two test sites, Ny-Ålesund, Svalbard, Norway; and Chandra–Bhaga basin, Himalaya, India. The overall performance suggested that the FLAASH correction delivered realistic reflectance spectra, while DOS delivered the least realistic. Spectra derived from HCS-sharpened subsets seemed to match the average reflectance trends, whereas GS reduced the overall reflectance. WTA classification of the DOS subsets achieved the highest overall accuracy (0.81). MTTCIMF classification of the FLAASH subsets yielded the lowest overall accuracy of 0.01. However, FLAASH consistently provided better performance (less variable and generally accurate) than DOS and QUAC, making it the more reliable and hence recommended algorithm. While HCS-pansharpened classification achieved a lower error rate (0.71) in comparison to GS pansharpening (0.76), neither significantly improved accuracy nor efficiency. The Ny-Ålesund glacier facies were best classified using MXL (error rate = 0.49) and WTA classifiers (error rate = 0.53), whereas the Himalayan glacier facies were best classified using MD (error rate = 0.61) and WTA (error rate = 0.45). The final comparative analysis of classifiers based on the total error rate across all atmospheric corrections and pansharpening methods yielded the following reliability order: MXL > WTA > MHD > ACE > MD > CEM = MF > SAM > MTMF = TCIMF > OSP > MTTCIMF. The findings of the current study suggested that for VHR visible near-infrared (VNIR) mapping of facies, FLAASH was the best atmospheric correction, while MXL may deliver reliable thematic classification. Moreover, an extensive account of the varying exertions of each processing scheme is discussed, and could be transferable when compared against other VHR VNIR mapping methods. Full article
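One of the twelve classifiers, the Mahalanobis Distance (MHD) classifier, is minimal enough to sketch: each pixel is assigned to the class whose training statistics give the smallest Mahalanobis distance. The band values and class names below are synthetic stand-ins for WorldView-2 facies samples.

```python
# Sketch: minimal Mahalanobis distance (MHD) classifier for multispectral pixels.
import numpy as np

rng = np.random.default_rng(10)
train = {                                     # class -> training pixels (n, bands)
    "snow": rng.normal(0.8, 0.05, (200, 8)),
    "ice": rng.normal(0.5, 0.07, (200, 8)),
    "debris": rng.normal(0.2, 0.06, (200, 8)),
}
stats = {c: (p.mean(0), np.linalg.inv(np.cov(p, rowvar=False)))
         for c, p in train.items()}

def classify(pixel):
    d2 = {c: (pixel - mu) @ vi @ (pixel - mu) for c, (mu, vi) in stats.items()}
    return min(d2, key=d2.get)                # smallest Mahalanobis distance wins

print(classify(rng.normal(0.5, 0.07, 8)))     # -> most likely "ice"
```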

13 pages, 1872 KiB  
Article
Selecting Milk Spectra to Develop Equations to Predict Milk Technological Traits
by Maria Frizzarin, Isobel Claire Gormley, Alessandro Casa and Sinéad McParland
Foods 2021, 10(12), 3084; https://doi.org/10.3390/foods10123084 - 11 Dec 2021
Viewed by 2698
Abstract
Including all available data when developing equations to relate midinfrared spectra to a phenotype may be suboptimal for poorly represented spectra. Here, an alternative local changepoint approach was developed to predict six milk technological traits from midinfrared spectra. Neighbours were objectively identified for each predictand as those most similar to the predictand using the Mahalanobis distances between the spectral principal components, and subsequently used in partial least square regression (PLSR) analyses. The performance of the local changepoint approach was compared to that of PLSR using all spectra (global PLSR) and another LOCAL approach, whereby a fixed number of neighbours was used in the prediction according to the correlation between the predictand and the available spectra. Global PLSR had the lowest RMSEV for five traits. The local changepoint approach had the lowest RMSEV for one trait; however, it outperformed the LOCAL approach for four traits. When the 5% of the spectra with the greatest Mahalanobis distance from the centre of the global principal component space were analysed, the local changepoint approach outperformed the global PLSR and the LOCAL approach in two and five traits, respectively. The objective selection of neighbours improved the prediction performance compared to utilising a fixed number of neighbours; however, it generally did not outperform the global PLSR. Full article
(This article belongs to the Special Issue Advances in Application of Spectral Analysis in Dairy Products)
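The neighbour-selection idea can be sketched as follows, with the changepoint rule simplified to a fixed neighbour count: Mahalanobis distances to the predictand in principal-component space pick the local set, and a PLS regression is fitted on it. All data here are simulated, and the 100-neighbour cutoff is an illustrative assumption.

```python
# Sketch: local PLSR with Mahalanobis neighbour selection in PCA score space.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA

rng = np.random.default_rng(11)
spectra = rng.normal(size=(500, 60))                 # training MIR spectra
trait = spectra[:, :5].sum(axis=1) + rng.normal(0, 0.1, 500)
query = rng.normal(size=60)                          # predictand spectrum

pca = PCA(n_components=10).fit(spectra)
scores, q = pca.transform(spectra), pca.transform(query[None, :])[0]
vi = np.linalg.inv(np.cov(scores, rowvar=False))
d2 = np.einsum("ij,jk,ik->i", scores - q, vi, scores - q)

local = np.argsort(d2)[:100]                         # nearest 100 as neighbours
pls = PLSRegression(n_components=5).fit(spectra[local], trait[local])
print(round(float(pls.predict(query[None, :])[0, 0]), 3))
```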

21 pages, 4141 KiB  
Article
Towards a Resilience to Stress Index Based on Physiological Response: A Machine Learning Approach
by Ramon E. Diaz-Ramos, Daniela A. Gomez-Cravioto, Luis A. Trejo, Carlos Figueroa López and Miguel Angel Medina-Pérez
Sensors 2021, 21(24), 8293; https://doi.org/10.3390/s21248293 - 11 Dec 2021
Cited by 6 | Viewed by 5287
Abstract
This study proposes a new index to measure the resilience of an individual to stress, based on the changes of specific physiological variables. These variables include electromyography, which is the muscle response, blood volume pulse, breathing rate, peripheral temperature, and skin conductance. We measured the data with a biofeedback device from 71 individuals subjected to a 10-min psychophysiological stress test. The data exploration revealed that features' variability among test phases could be observed in a two-dimensional space with Principal Component Analysis (PCA). In this work, we demonstrate that the values of each feature within a phase are well organized in clusters. The new index we propose, the Resilience to Stress Index (RSI), is based on this observation. To compute the index, we used unsupervised machine learning methods to calculate the inter-cluster distances, specifically using the following four methods: Euclidean Distance of PCA, Mahalanobis Distance, Cluster Validity Index Distance, and Euclidean Distance of Kernel PCA. While there was no statistically significant difference (p > 0.01) among the methods, we recommend using the Mahalanobis distance, since it provides a higher monotonic association with the Resilience in Mexicans (RESI-M) scale. The results are encouraging, since we demonstrated that the computation of a reliable RSI is possible. To validate the new index, we undertook two tasks: a comparison of the RSI against the RESI-M, and a Spearman correlation between phases one and five to determine whether the behavior is resilient or not. The computation of an individual's RSI has a broader aim in mind: to understand and support mental health. The benefits of having a metric that measures resilience to stress are multiple; for instance, to the extent that individuals can track their resilience to stress, they can improve their everyday life. Full article
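One of the four inter-cluster distances is easy to sketch: the Mahalanobis distance between two phase clusters under a pooled covariance. The physiological feature matrices below are simulated.

```python
# Sketch: inter-cluster Mahalanobis distance between two test-phase clusters.
import numpy as np

rng = np.random.default_rng(12)
phase1 = rng.normal(0.0, 1.0, (70, 5))     # baseline-phase features
phase5 = rng.normal(0.8, 1.0, (70, 5))     # recovery-phase features

pooled = (np.cov(phase1, rowvar=False) + np.cov(phase5, rowvar=False)) / 2
diff = phase1.mean(axis=0) - phase5.mean(axis=0)
d = np.sqrt(diff @ np.linalg.inv(pooled) @ diff)
print(f"inter-cluster Mahalanobis distance: {d:.2f}")
```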

14 pages, 1322 KiB  
Article
Deep Coupling Recurrent Auto-Encoder with Multi-Modal EEG and EOG for Vigilance Estimation
by Kuiyong Song, Lianke Zhou and Hongbin Wang
Entropy 2021, 23(10), 1316; https://doi.org/10.3390/e23101316 - 9 Oct 2021
Cited by 16 | Viewed by 2682
Abstract
Vigilance estimation for drivers is an active research field in traffic safety. Wearable devices can monitor information regarding the driver's state in real time, which is then analyzed by a data analysis model to provide an estimate of vigilance. The accuracy of the data analysis model directly affects the quality of vigilance estimation. In this paper, we propose a deep coupling recurrent auto-encoder (DCRA) that combines electroencephalography (EEG) and electrooculography (EOG). This model uses a coupling layer to connect two single-modal auto-encoders and construct a joint objective loss function optimization model, which consists of a single-modal loss and a multi-modal loss. The single-modal loss is measured by the Euclidean distance, and the multi-modal loss is measured by a Mahalanobis distance obtained through metric learning, which can effectively reflect the distance between data of different modalities so that it can be described more accurately in the new feature space induced by the metric matrix. In order to ensure gradient stability in the long-sequence learning process, a multi-layer gated recurrent unit (GRU) auto-encoder model was adopted. The DCRA integrates data feature extraction and feature fusion. Comparative experiments show that the DCRA outperforms single-modal methods and the latest multi-modal fusion approaches: it achieves a lower root mean square error (RMSE) and a higher Pearson correlation coefficient (PCC). Full article
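The multi-modal loss idea can be sketched in PyTorch as a learnable Mahalanobis metric M = L L^T (positive semi-definite by construction) between EEG and EOG latent codes. The dimensions and names are illustrative assumptions, not the DCRA architecture.

```python
# Sketch: learnable Mahalanobis-distance loss between two modal latent codes.
import torch

class MahalanobisLoss(torch.nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.L = torch.nn.Parameter(torch.eye(dim))   # M = L @ L.T stays PSD

    def forward(self, z_eeg, z_eog):
        diff = (z_eeg - z_eog) @ self.L               # (batch, dim)
        return (diff ** 2).sum(dim=1).mean()          # mean squared Mahalanobis dist.

loss_fn = MahalanobisLoss(dim=32)
z_eeg, z_eog = torch.randn(8, 32), torch.randn(8, 32)
print(loss_fn(z_eeg, z_eog).item())                   # trainable via autograd
```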
