
Search Results (44)

Search Parameters:
Keywords = isomap

23 pages, 649 KB  
Article
Manifold Causal Conditional Deep Networks for Heterogeneous Treatment Effect Estimation and Policy Evaluation
by Jong-Min Kim
Mathematics 2026, 14(4), 738; https://doi.org/10.3390/math14040738 - 22 Feb 2026
Cited by 1 | Viewed by 390
Abstract
We present a comprehensive framework for estimating heterogeneous treatment effects and evaluating decision-making policies in high-dimensional settings. Our approach combines nonlinear manifold learning techniques—UMAP, t-SNE, and Isomap—with a Causal Conditional Deep Network (CCDN) to model complex nonlinear interactions among covariates, treatments, and outcomes. Within this framework, we assess five treatment assignment policies—Greedy, Thompson Sampling, Epsilon-Greedy, Random, and a novel LLM-guided Thompson policy—across simulated and real-world datasets, including Adult, Wine Quality, and Boston Housing. Empirical results reveal a fundamental trade-off: exploitative policies like Greedy minimize cumulative regret but underperform in recovering heterogeneous treatment effects, whereas exploratory policies, particularly Random and LLM-Thompson, achieve a lower Conditional Average Treatment Effect Root Mean Squared Error (CATE RMSE) by providing broader coverage of the action–covariate space. Notably, LLM-Thompson consistently delivers strong performance across noisy, real-world datasets, highlighting the advantage of uncertainty-aware exploration in capturing treatment heterogeneity. Overall, the framework demonstrates that integrating manifold-informed deep networks with principled exploration strategies enhances both policy optimization and individualized treatment effect estimation in high-dimensional, complex environments.
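The regret/exploration trade-off the abstract reports can be reproduced on a toy two-armed bandit. The sketch below uses invented arm means and noise, not the paper's CCDN setup; it only illustrates why a purely Random policy (epsilon = 1) accumulates far more regret than Greedy (epsilon = 0):

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.3, 0.7])  # hypothetical arm rewards, not from the paper

def run_policy(eps, steps=2000):
    """Epsilon-greedy bandit: explore with probability eps, otherwise pull the
    empirically best arm. eps=1.0 behaves like the Random policy, eps=0.0 is Greedy."""
    counts = np.ones(2)                               # each arm seeded with one pull
    sums = np.array([rng.normal(m, 0.05) for m in true_means])
    regret = 0.0
    for _ in range(steps):
        if rng.random() < eps:
            arm = int(rng.integers(2))                # explore uniformly
        else:
            arm = int(np.argmax(sums / counts))       # exploit current estimates
        sums[arm] += rng.normal(true_means[arm], 0.05)
        counts[arm] += 1
        regret += true_means.max() - true_means[arm]  # cumulative regret
    return regret

greedy_regret = run_policy(0.0)
random_regret = run_policy(1.0)
```

As in the abstract, the flip side (better coverage of the action space under exploration) is what exploitative policies give up.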

52 pages, 10804 KB  
Article
Silhouette-Based Evaluation of PCA, Isomap, and t-SNE on Linear and Nonlinear Data Structures
by Mostafa Zahed and Maryam Skafyan
Stats 2025, 8(4), 105; https://doi.org/10.3390/stats8040105 - 3 Nov 2025
Cited by 2 | Viewed by 1842
Abstract
Dimensionality reduction is fundamental for analyzing high-dimensional data, supporting visualization, denoising, and structure discovery. We present a systematic, large-scale benchmark of three widely used methods—Principal Component Analysis (PCA), Isometric Mapping (Isomap), and t-Distributed Stochastic Neighbor Embedding (t-SNE)—evaluated by average silhouette scores to quantify cluster preservation after embedding. Our full factorial simulation varies sample size n ∈ {100, 200, 300, 400, 500}, noise variance σ² ∈ {0.25, 0.5, 0.75, 1, 1.5, 2}, and feature count p ∈ {20, 50, 100, 200, 300, 400} under four generative regimes: (1) a linear Gaussian mixture, (2) a linear Student-t mixture with heavy tails, (3) a nonlinear Swiss-roll manifold, and (4) a nonlinear concentric-spheres manifold, each replicated 1000 times per condition. Beyond empirical comparisons, we provide mathematical results that explain the observed rankings: under standard separation and sampling assumptions, PCA maximizes silhouettes for linear, low-rank structure, whereas Isomap dominates on smooth curved manifolds; t-SNE prioritizes local neighborhoods, yielding strong local separation but less reliable global geometry. Empirically, PCA consistently achieves the highest silhouettes for linear structure (Isomap second, t-SNE third); on manifolds the ordering reverses (Isomap > t-SNE > PCA). Increasing σ² and adding uninformative dimensions (larger p) degrade all methods, while larger n improves levels and stability. To our knowledge, this is the first integrated study combining a comprehensive factorial simulation across linear and nonlinear regimes with distribution-based summaries (density and violin plots) and supporting theory that predicts method orderings. The results offer clear, practice-oriented guidance: prefer PCA when structure is approximately linear; favor manifold learning—especially Isomap—when curvature is present; and use t-SNE for the exploratory visualization of local neighborhoods. Complete tables and replication materials are provided to facilitate method selection and reproducibility.
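The silhouette criterion this benchmark scores methods by is easy to state in code. Below is a minimal NumPy implementation run on two synthetic, well-separated clusters (toy data, not the paper's factorial design):

```python
import numpy as np

def average_silhouette(X, labels):
    """Mean silhouette: s_i = (b_i - a_i) / max(a_i, b_i), where a_i is the mean
    intra-cluster distance of point i and b_i its mean distance to the nearest
    other cluster."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    n = len(X)
    s = np.empty(n)
    for i in range(n):
        same = (labels == labels[i]) & (np.arange(n) != i)
        a = D[i, same].mean()
        b = min(D[i, labels == c].mean() for c in set(labels) - {labels[i]})
        s[i] = (b - a) / max(a, b)
    return s.mean()

rng = np.random.default_rng(1)
# two well-separated toy clusters in 3-D: the score should approach 1
X = np.vstack([rng.normal(0, 0.1, (20, 3)), rng.normal(5, 0.1, (20, 3))])
labels = np.array([0] * 20 + [1] * 20)
score = average_silhouette(X, labels)
```

In the paper's setting the same score is computed on the 2-D embeddings produced by PCA, Isomap, or t-SNE rather than on raw coordinates.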

33 pages, 3983 KB  
Article
Real-Time EEG Decoding of Motor Imagery via Nonlinear Dimensionality Reduction (Manifold Learning) and Shallow Classifiers
by Hezzal Kucukselbes and Ebru Sayilgan
Biosensors 2025, 15(10), 692; https://doi.org/10.3390/bios15100692 - 13 Oct 2025
Cited by 2 | Viewed by 1557
Abstract
This study introduces a real-time processing framework for decoding motor imagery EEG signals by integrating manifold learning techniques with shallow classifiers. EEG recordings were obtained from six healthy participants performing five distinct wrist and hand motor imagery tasks. To address the challenges of high dimensionality and inherent nonlinearity in EEG data, five nonlinear dimensionality reduction methods, t-SNE, ISOMAP, LLE, Spectral Embedding, and MDS, were comparatively evaluated. Each method was combined with three shallow classifiers (k-NN, Naive Bayes, and SVM) to investigate performance across binary, ternary, and five-class classification settings. Among all tested configurations, the t-SNE + k-NN pairing achieved the highest accuracies, reaching 99.7% (two-class), 99.3% (three-class), and 89.0% (five-class). ISOMAP and MDS also delivered competitive results, particularly in multi-class scenarios. The presented approach builds upon our previous work involving EEG datasets from individuals with spinal cord injury (SCI), where the same manifold techniques were examined extensively. Comparative findings between healthy and SCI groups reveal consistent advantages of t-SNE and ISOMAP in preserving class separability, despite higher overall accuracies in healthy subjects due to improved signal quality. The proposed pipeline demonstrates low-latency performance, completing signal processing and classification in approximately 150 ms per trial, thereby meeting real-time requirements for responsive BCI applications. These results highlight the potential of nonlinear dimensionality reduction to enhance real-time EEG decoding, offering a low-complexity yet high-accuracy solution applicable to both healthy users and neurologically impaired individuals in neurorehabilitation and assistive technology contexts.
(This article belongs to the Section Wearable Biosensors)
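The shallow k-NN classifier paired with t-SNE here takes only a few lines. The sketch below runs on made-up 2-D "embedded" features rather than real EEG, to show the classification stage in isolation:

```python
import numpy as np

def knn_predict(train_X, train_y, test_X, k=3):
    """Majority vote among the k nearest training points (Euclidean distance)."""
    preds = []
    for x in test_X:
        idx = np.argsort(np.linalg.norm(train_X - x, axis=1))[:k]
        preds.append(np.bincount(train_y[idx]).argmax())
    return np.array(preds)

rng = np.random.default_rng(2)
# stand-in for 2-D embedded motor-imagery features: two separable classes
train_X = np.vstack([rng.normal(-1, 0.2, (30, 2)), rng.normal(1, 0.2, (30, 2))])
train_y = np.array([0] * 30 + [1] * 30)
test_X = np.array([[-1.0, -1.0], [1.0, 1.0]])
pred = knn_predict(train_X, train_y, test_X)
```

The appeal for real-time BCI use is that inference cost is a single distance computation per trial, with no training phase at all.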

23 pages, 3314 KB  
Article
Optimization of Manifold Learning Using Differential Geometry for 3D Reconstruction in Computer Vision
by Yawen Wang
Mathematics 2025, 13(17), 2771; https://doi.org/10.3390/math13172771 - 28 Aug 2025
Viewed by 3093
Abstract
Manifold learning is a significant computer vision task used to describe high-dimensional visual data in lower-dimensional manifolds without sacrificing the intrinsic structural properties required for 3D reconstruction. Isomap, Locally Linear Embedding (LLE), Laplacian Eigenmaps, and t-SNE are helpful in data topology preservation but are typically indifferent to the intrinsic differential geometric characteristics of the manifolds, thus leading to deformation of spatial relations and reconstruction accuracy loss. This research proposes an Optimization of Manifold Learning using Differential Geometry Framework (OML-DGF) to overcome the drawbacks of current manifold learning techniques in 3D reconstruction. The framework employs intrinsic geometric properties—like curvature preservation, geodesic coherence, and local–global structure correspondence—to produce structurally correct and topologically consistent low-dimensional embeddings. The model utilizes a Riemannian metric-based neighborhood graph, approximations of geodesic distances with shortest path algorithms, and curvature-sensitive embedding from second-order derivatives in local tangent spaces. A curvature-regularized objective function is derived to steer the embedding toward facilitating improved geometric coherence. Principal Component Analysis (PCA) reduces initial dimensionality and modifies LLE with curvature weighting. Experiments on the ModelNet40 dataset show an impressive improvement in reconstruction quality, with accuracy gains of up to 17% and better structure preservation than traditional methods. These findings confirm the advantage of employing intrinsic geometry as an embedding to improve the accuracy of 3D reconstruction. The suggested approach is computationally light and scalable and can be utilized in real-time contexts such as robotic navigation, medical image diagnosis, digital heritage reconstruction, and augmented/virtual reality systems in which strong 3D modeling is a critical need.
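The geodesic-distance approximation via shortest paths that such frameworks rely on can be sketched as a k-NN graph plus Floyd-Warshall. The quarter-circle data below is purely illustrative, not the paper's ModelNet40 pipeline:

```python
import numpy as np

def geodesic_distances(X, k=2):
    """Isomap-style geodesics: build a Euclidean k-NN graph, then take
    all-pairs shortest paths with Floyd-Warshall."""
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):
        for j in np.argsort(D[i])[1:k + 1]:      # connect i to its k neighbors
            G[i, j] = G[j, i] = D[i, j]
    for m in range(n):                           # Floyd-Warshall relaxation
        G = np.minimum(G, G[:, [m]] + G[[m], :])
    return G

# points on a quarter circle: the geodesic (arc) is longer than the chord
t = np.linspace(0, np.pi / 2, 10)
X = np.c_[np.cos(t), np.sin(t)]
G = geodesic_distances(X, k=2)
chord = np.linalg.norm(X[0] - X[-1])
```

The recovered graph distance between the endpoints approximates the arc length π/2 rather than the straight-line chord, which is exactly the property curvature-aware embeddings try to preserve.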

14 pages, 10156 KB  
Article
Seismic Waveform Feature Extraction and Reservoir Prediction Based on CNN and UMAP: A Case Study of the Ordos Basin
by Lifu Zheng, Hao Yang and Guichun Luo
Appl. Sci. 2025, 15(13), 7377; https://doi.org/10.3390/app15137377 - 30 Jun 2025
Cited by 1 | Viewed by 1270
Abstract
Seismic waveform feature extraction is a critical task in seismic exploration, as it directly impacts reservoir prediction and geological interpretation. However, large-scale seismic data and nonlinear relationships between seismic signals and reservoir properties are challenging for traditional machine learning methods. To address these limitations, this paper proposes a novel framework combining Convolutional Neural Network (CNN) and Uniform Manifold Approximation and Projection (UMAP) for seismic waveform feature extraction and analysis. The UMAP-CNN framework leverages the strengths of manifold learning and deep learning, enabling multi-scale feature extraction and dimensionality reduction while preserving both local and global data structures. The evaluation experiments, which considered runtime, receiver operating characteristic (ROC) curves, embedding distribution maps, and other quantitative assessments, illustrated that the UMAP-CNN outperformed t-distributed stochastic neighbor embedding (t-SNE), locally linear embedding (LLE) and isometric feature mapping (Isomap). A case study in the Ordos Basin further demonstrated that UMAP-CNN offers a high degree of accuracy in predicting coal seam thickness. Furthermore, our framework exhibited superior computational efficiency and robustness in handling large-scale datasets.
(This article belongs to the Special Issue Current Advances and Future Trend in Enhanced Oil Recovery)
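One of the quantitative assessments mentioned, the ROC curve, reduces to a rank statistic when summarized as AUC. A minimal computation on hypothetical scores (not the paper's data):

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC as the probability that a random positive outscores a random
    negative (ties count half) -- equal to the area under the ROC curve."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

labels = np.array([0, 0, 1, 1, 1, 0])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.9, 0.2])
auc = roc_auc(labels, scores)  # 8 of 9 positive/negative pairs ranked correctly
```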

14 pages, 3516 KB  
Article
Deep-Learning-Based Identification of Broad-Absorption Line Quasars
by Sen Pang, Hoiio Kong, Zijun Li, Weibo Kao and Yanxia Zhang
Appl. Sci. 2025, 15(3), 1024; https://doi.org/10.3390/app15031024 - 21 Jan 2025
Cited by 3 | Viewed by 1777
Abstract
The accurate classification of broad-absorption line (BAL) quasars and non-broad-absorption line (non-BAL) quasars is key in understanding active galactic nuclei (AGN) and the evolution of the universe. With the rapid accumulation of data from large-scale spectroscopic survey projects (e.g., LAMOST, SDSS, and DESI), traditional manual classification methods face limitations. In this study, we propose a new method based on deep learning techniques to achieve an accurate distinction between BAL quasars and non-BAL quasars. We use a convolutional neural network (CNN) as the core model, in combination with various dimensionality reduction techniques, including principal component analysis (PCA), t-distributed stochastic neighbor embedding (t-SNE), and isometric mapping (ISOMAP). These dimensionality reduction methods help extract meaningful features from high-dimensional spectral data while reducing model complexity. We employ quasar spectra from the 16th data release (DR16) of the Sloan Digital Sky Survey (SDSS) and obtain classification labels from the DR16Q quasar catalogues to train and evaluate our model. Through extensive experiments and comparisons, the combination of PCA and CNN achieves a test accuracy of 99.11%, demonstrating the effectiveness of deep learning for classifying the spectral data. Additionally, we explore other dimensionality reduction methods and machine learning models, providing valuable insights for future research in this field.
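The PCA front end that performs best here can be sketched via SVD. The "spectra" below are synthetic (two latent factors plus small noise), not SDSS data, so nearly all variance lands in two components:

```python
import numpy as np

def pca(X, n_components):
    """PCA via SVD of the centered data matrix; returns the projected data and
    the explained-variance ratio of the kept components."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    ratio = S**2 / (S**2).sum()
    return Xc @ Vt[:n_components].T, ratio[:n_components]

rng = np.random.default_rng(3)
# toy "spectra": 100 samples x 50 bins driven by 2 latent factors plus noise
basis = rng.normal(size=(2, 50))
spectra = rng.normal(size=(100, 2)) @ basis + 0.01 * rng.normal(size=(100, 50))
reduced, explained = pca(spectra, n_components=2)
```

In the paper's pipeline the reduced representation feeds a CNN classifier; here the point is only that the projection step itself is a few lines of linear algebra.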

24 pages, 3462 KB  
Article
Underutilized Feature Extraction Methods for Burn Severity Mapping: A Comprehensive Evaluation
by Linh Nguyen Van and Giha Lee
Remote Sens. 2024, 16(22), 4339; https://doi.org/10.3390/rs16224339 - 20 Nov 2024
Cited by 5 | Viewed by 2107
Abstract
Wildfires increasingly threaten ecosystems and infrastructure, making accurate burn severity mapping (BSM) essential for effective disaster response and environmental management. Machine learning (ML) models utilizing satellite-derived vegetation indices are crucial for assessing wildfire damage; however, incorporating many indices can lead to multicollinearity, reducing classification accuracy. While principal component analysis (PCA) is commonly used to address this issue, its effectiveness relative to other feature extraction (FE) methods in BSM remains underexplored. This study aims to enhance ML classifier accuracy in BSM by evaluating various FE techniques that mitigate multicollinearity among vegetation indices. Using composite burn index (CBI) data from the 2014 Carlton Complex fire in the United States as a case study, we extracted 118 vegetation indices from seven Landsat-8 spectral bands. We applied and compared 13 different FE techniques—including linear and nonlinear methods such as PCA, t-distributed stochastic neighbor embedding (t-SNE), linear discriminant analysis (LDA), Isomap, uniform manifold approximation and projection (UMAP), factor analysis (FA), independent component analysis (ICA), multidimensional scaling (MDS), truncated singular value decomposition (TSVD), non-negative matrix factorization (NMF), locally linear embedding (LLE), spectral embedding (SE), and neighborhood components analysis (NCA). The performance of these techniques was benchmarked against six ML classifiers to determine their effectiveness in improving BSM accuracy. Our results show that alternative FE techniques can outperform PCA, improving classification accuracy and computational efficiency. Techniques like LDA and NCA effectively capture nonlinear relationships critical for accurate BSM. The study contributes to the existing literature by providing a comprehensive comparison of FE methods, highlighting the potential benefits of underutilized techniques in BSM.
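The multicollinearity the study sets out to mitigate can be detected directly from the index correlation matrix before any FE step. A toy sketch with an artificially near-duplicated column (invented data, not the 118-index Landsat set):

```python
import numpy as np

rng = np.random.default_rng(4)
# toy index matrix: column 2 is nearly a copy of column 0 (multicollinear)
n = 200
base = rng.normal(size=(n, 2))
indices = np.column_stack([base[:, 0], base[:, 1],
                           base[:, 0] + 0.01 * rng.normal(size=n)])

corr = np.corrcoef(indices, rowvar=False)
# flag index pairs whose absolute correlation exceeds a threshold
collinear = [(i, j) for i in range(3) for j in range(i + 1, 3)
             if abs(corr[i, j]) > 0.95]
```

FE methods such as PCA or NCA then replace such redundant columns with a smaller set of decorrelated features.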

31 pages, 4735 KB  
Article
Enhanced Neonatal Brain Tissue Analysis via Minimum Spanning Tree Segmentation and the Brier Score Coupled Classifier
by Tushar Hrishikesh Jaware, Chittaranjan Nayak, Priyadarsan Parida, Nawaf Ali, Yogesh Sharma and Wael Hadi
Computers 2024, 13(10), 260; https://doi.org/10.3390/computers13100260 - 11 Oct 2024
Cited by 1 | Viewed by 2180
Abstract
Automatic assessment of brain regions in an MR image has emerged as a pivotal tool in advancing diagnosis and continual monitoring of neurological disorders through different phases of life. Nevertheless, current solutions often exhibit specificity to particular age groups, thereby constraining their utility in observing brain development from infancy to late adulthood. In our research, we introduce a novel approach for segmenting and classifying neonatal brain images. Our methodology capitalizes on minimum spanning tree (MST) segmentation employing the Manhattan distance, complemented by a shrunken centroid classifier empowered by the Brier score. This fusion enhances the accuracy of tissue classification, effectively addressing the complexities inherent in age-specific segmentation. Moreover, we propose a novel threshold estimation method utilizing the Brier score, further refining the classification process. The proposed approach yields a competitive Dice similarity index of 0.88 and a Jaccard index of 0.95. This approach marks a significant step toward neonatal brain tissue segmentation, showcasing the efficacy of our proposed methodology in comparison to the latest cutting-edge methods.
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain 2024)
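MST construction under the Manhattan distance, the core of the segmentation step, can be illustrated with Prim's algorithm on four toy points (not MR voxel data):

```python
import numpy as np

def mst_edges(points):
    """Prim's algorithm over Manhattan (L1) distances; returns the MST edge list."""
    n = len(points)
    D = np.abs(points[:, None, :] - points[None, :, :]).sum(-1)  # L1 distances
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        w, i, j = min(((D[a, b], a, b) for a in in_tree for b in range(n)
                       if b not in in_tree), key=lambda e: e[0])
        edges.append((i, j))
        in_tree.add(j)
    return edges

# four toy "voxels": two tight pairs joined by one long bridge edge
pts = np.array([[0, 0], [0, 1], [5, 5], [5, 6]])
edges = mst_edges(pts)
```

In MST-based segmentation, cutting long bridge edges like (1, 2) here is what separates the point set into its natural clusters.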

14 pages, 3914 KB  
Article
Hybrid Lithology Identification Method Based on Isometric Feature Mapping Manifold Learning and Particle Swarm Optimization-Optimized LightGBM
by Guo Wang, Song Deng, Shuguo Xu, Chaowei Li, Wan Wei, Haolin Zhang, Changsheng Li, Wenhao Gong and Haoyu Pan
Processes 2024, 12(8), 1593; https://doi.org/10.3390/pr12081593 - 29 Jul 2024
Cited by 2 | Viewed by 1901
Abstract
Accurate identification of lithology in petroleum engineering is very important for oil and gas reservoir evaluation, drilling decisions, and petroleum geological exploration. Using a cross-plot to identify lithology considers only two logging parameters, so its identification accuracy is insufficient. With the continuous development of artificial intelligence technology, machine learning has become an important means of identifying lithology. In this study, cutting logging data from the Junggar Basin were collected as lithologic samples, and identification of argillaceous siltstone, mudstone, gravel mudstone, silty mudstone, and siltstone was established from logging parameters at corresponding depths. To address the class imbalance in the lithologic data, this paper proposes using balanced accuracy to evaluate the model. Manifold learning is used to reduce the logging parameters to three dimensions. Based on balanced accuracy, four dimensionality-reduction methods, isometric feature mapping (ISOMAP), principal component analysis (PCA), independent component analysis (ICA), and non-negative matrix factorization (NMF), are compared. ISOMAP is found to improve the balanced accuracy of the LightGBM model to 0.829, showing that it can effectively handle unbalanced lithologic data. In addition, the particle swarm optimization (PSO) algorithm is used to automatically optimize the hyperparameters of the Light Gradient Boosting Machine (LightGBM) model, which effectively improves the balanced accuracy and generalization ability of the lithology identification model and provides strong support for fast and accurate lithology identification.
(This article belongs to the Section AI-Enabled Process Engineering)
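Balanced accuracy, the evaluation metric proposed for the imbalanced lithology classes, is the mean per-class recall. The toy example below shows why it exposes a majority-class classifier that plain accuracy flatters:

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean per-class recall -- robust to class imbalance, unlike plain accuracy."""
    classes = np.unique(y_true)
    return float(np.mean([(y_pred[y_true == c] == c).mean() for c in classes]))

# imbalanced toy labels: 90 samples of class 0, 10 of class 1
y_true = np.array([0] * 90 + [1] * 10)
y_pred = np.zeros(100, dtype=int)              # a classifier that always says 0
plain = (y_pred == y_true).mean()              # 0.90: looks good
balanced = balanced_accuracy(y_true, y_pred)   # 0.50: reveals the failure
```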

21 pages, 86652 KB  
Article
Toward Unbiased High-Quality Portraits through Latent-Space Evaluation
by Doaa Almhaithawi, Alessandro Bellini and Tania Cerquitelli
J. Imaging 2024, 10(7), 157; https://doi.org/10.3390/jimaging10070157 - 28 Jun 2024
Viewed by 2808
Abstract
Images, texts, voices, and signals can be synthesized by latent spaces in a multidimensional vector, which can be explored without the hurdles of noise or other interfering factors. In this paper, we present a practical use case that demonstrates the power of latent space in exploring complex realities such as image space. We focus on DaVinciFace, an AI-based system that explores the StyleGAN2 space to create a high-quality portrait for anyone in the style of the Renaissance genius Leonardo da Vinci. The user enters one of their portraits and receives the corresponding Da Vinci-style portrait as an output. Since most of Da Vinci’s artworks depict young and beautiful women (e.g., “La Belle Ferroniere”, “Beatrice de’ Benci”), we investigate the ability of DaVinciFace to account for other social categorizations, including gender, race, and age. The experimental results evaluate the effectiveness of our methodology on 1158 portraits acting on the vector representations of the latent space to produce high-quality portraits that retain the facial features of the subject’s social categories, and conclude that sparser vectors have a greater effect on these features. To objectively evaluate and quantify our results, we solicited human feedback via a crowd-sourcing campaign. Analysis of the human feedback showed a high tolerance for the loss of important identity features in the resulting portraits when the Da Vinci style is more pronounced, with some exceptions, including Africanized individuals.
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)

15 pages, 2997 KB  
Article
Overcoming Dimensionality Constraints: A Gershgorin Circle Theorem-Based Feature Extraction for Weighted Laplacian Matrices in Computer Vision Applications
by Sahaj Anilbhai Patel and Abidin Yildirim
J. Imaging 2024, 10(5), 121; https://doi.org/10.3390/jimaging10050121 - 15 May 2024
Cited by 1 | Viewed by 2485
Abstract
In graph theory, the weighted Laplacian matrix is the most utilized technique for interpreting the local and global properties of a complex graph structure within computer vision applications. However, as the number of graph nodes increases, the Laplacian matrix’s dimensionality increases accordingly, giving rise to the “curse of dimensionality”. In response to this challenge, this paper introduces a new approach to reducing the dimensionality of the weighted Laplacian matrix by utilizing the Gershgorin circle theorem: the weighted Laplacian matrix is transformed into a strictly diagonally dominant form, and rough eigenvalue inclusion regions of the matrix are then estimated. The estimated inclusions are represented as reduced features, termed GC features. The proposed Gershgorin circle feature extraction (GCFE) method was evaluated using three publicly accessible computer vision datasets, varying image patch sizes, and three different graph types, and was compared with eight distinct studies. The GCFE demonstrated a notable positive Z-score compared to other feature extraction methods such as I-PCA, kernel PCA, and spectral embedding. Specifically, it achieved an average Z-score of 6.953 with the 2D grid graph type and 4.473 with the pairwise graph type, particularly on the E_Balanced dataset. Furthermore, while the accuracy of most major feature extraction methods declined with smaller image patch sizes, the GCFE maintained consistent accuracy across all tested image patch sizes. When applied to the E_MNIST dataset using the K-NN graph type, the GCFE method confirmed its consistent accuracy, evidenced by a low standard deviation (SD) of 0.305, notably lower than other methods such as Isomap (SD 1.665) and LLE (SD 1.325). The GCFE outperformed most feature extraction methods in terms of classification accuracy and computational efficiency, and it requires fewer training parameters for deep-learning models than the traditional weighted Laplacian method, establishing its potential for more effective and efficient feature extraction in computer vision tasks.
(This article belongs to the Section Computer Vision and Pattern Recognition)
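Gershgorin's theorem, the basis of the GC features, bounds each eigenvalue by a disc computed from a single matrix row. A small graph-Laplacian example (not the paper's image graphs) verifying the inclusion property:

```python
import numpy as np

def gershgorin_discs(A):
    """Every eigenvalue of A lies in at least one disc centered at A[i, i]
    with radius equal to the sum of |off-diagonal| entries in row i."""
    centers = np.diag(A).astype(float)
    radii = np.abs(A).sum(axis=1) - np.abs(centers)
    return centers, radii

# a small weighted graph Laplacian (rows sum to zero)
L = np.array([[ 2., -1., -1.],
              [-1.,  1.,  0.],
              [-1.,  0.,  1.]])
centers, radii = gershgorin_discs(L)
eigvals = np.linalg.eigvalsh(L)
covered = all(any(abs(ev - c) <= r + 1e-9 for c, r in zip(centers, radii))
              for ev in eigvals)
```

The appeal as a feature extractor is that the disc centers and radii cost one pass over the matrix, versus a full eigendecomposition.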

40 pages, 21076 KB  
Article
A Study on Dimensionality Reduction and Parameters for Hyperspectral Imagery Based on Manifold Learning
by Wenhui Song, Xin Zhang, Guozhu Yang, Yijin Chen, Lianchao Wang and Hanghang Xu
Sensors 2024, 24(7), 2089; https://doi.org/10.3390/s24072089 - 25 Mar 2024
Cited by 16 | Viewed by 3252
Abstract
With the rapid advancement of remote-sensing technology, the spectral information obtained from hyperspectral remote-sensing imagery has become increasingly rich, facilitating detailed spectral analysis of Earth’s surface objects. However, the abundance of spectral information presents certain challenges for data processing, such as the “curse of dimensionality” leading to the “Hughes phenomenon”, “strong correlation” due to high resolution, and “nonlinear characteristics” caused by varying surface reflectances. Consequently, dimensionality reduction of hyperspectral data emerges as a critical task. This paper begins by elucidating the principles and processes of hyperspectral image dimensionality reduction based on manifold theory and learning methods, in light of the nonlinear structures and features present in hyperspectral remote-sensing data, and formulates a dimensionality reduction process based on manifold learning. Subsequently, this study explores the capabilities of feature extraction and low-dimensional embedding for hyperspectral imagery using manifold learning approaches, including principal components analysis (PCA), multidimensional scaling (MDS), and linear discriminant analysis (LDA) for linear methods; and isometric mapping (Isomap), locally linear embedding (LLE), Laplacian eigenmaps (LE), Hessian locally linear embedding (HLLE), local tangent space alignment (LTSA), and maximum variance unfolding (MVU) for nonlinear methods, based on the Indian Pines hyperspectral dataset and Pavia University dataset. Furthermore, the paper investigates the optimal neighborhood computation time and overall algorithm runtime for feature extraction in hyperspectral imagery, varying by the choice of neighborhood k and intrinsic dimensionality d values across different manifold learning methods. Based on the outcomes of feature extraction, the study examines the classification experiments of various manifold learning methods, comparing and analyzing the variations in classification accuracy and Kappa coefficient with different selections of neighborhood k and intrinsic dimensionality d values. Building on this, the impact of selecting different bandwidths t for the Gaussian kernel in the LE method and different Lagrange multipliers λ for the MVU method on classification accuracy, given varying choices of neighborhood k and intrinsic dimensionality d, is explored. Through these experiments, the paper investigates the capability and effectiveness of different manifold learning methods in feature extraction and dimensionality reduction within hyperspectral imagery, as influenced by the selection of neighborhood k and intrinsic dimensionality d values, identifying the optimal neighborhood k and intrinsic dimensionality d value for each method. A comparison of classification accuracies reveals that the LTSA method yields superior classification results compared to other manifold learning approaches. The study demonstrates the advantages of manifold learning methods in processing hyperspectral image data, providing an experimental reference for subsequent research on hyperspectral image dimensionality reduction using manifold learning methods.
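The Laplacian eigenmaps (LE) method with Gaussian bandwidth t, one of the nonlinear methods studied, fits in a few lines. The sketch below embeds points sampled from a trivial 1-D manifold (toy data, not Indian Pines or Pavia University):

```python
import numpy as np

def laplacian_eigenmap(X, d=1, t=1.0):
    """Minimal Laplacian eigenmaps: Gaussian-kernel weights with bandwidth t,
    unnormalized Laplacian L = D - W, embedding from the eigenvectors of the
    d smallest nonzero eigenvalues (the constant eigenvector is skipped)."""
    sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    W = np.exp(-sq / t)
    np.fill_diagonal(W, 0.0)
    Lap = np.diag(W.sum(axis=1)) - W
    vals, vecs = np.linalg.eigh(Lap)
    return vecs[:, 1:d + 1]

# a 1-D manifold embedded in 3-D: points along a straight line
pos = np.linspace(0, 1, 8)
X = np.outer(pos, np.array([1.0, 2.0, 2.0]))
Y = laplacian_eigenmap(X, d=1, t=1.0)
# the 1-D embedding should track position along the line (up to sign)
r = np.corrcoef(Y[:, 0], pos)[0, 1]
```

The bandwidth t plays the role studied in the paper: it controls how quickly edge weights decay with distance, and hence how local the preserved structure is.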

14 pages, 1931 KB  
Article
Automatic Active Lesion Tracking in Multiple Sclerosis Using Unsupervised Machine Learning
by Jason Uwaeze, Ponnada A. Narayana, Arash Kamali, Vladimir Braverman, Michael A. Jacobs and Alireza Akhbardeh
Diagnostics 2024, 14(6), 632; https://doi.org/10.3390/diagnostics14060632 - 16 Mar 2024
Cited by 8 | Viewed by 3336
Abstract
Background: Identifying active lesions in magnetic resonance imaging (MRI) is crucial for the diagnosis and treatment planning of multiple sclerosis (MS). Active lesions on MRI are identified following the administration of Gadolinium-based contrast agents (GBCAs). However, recent studies have reported that repeated administration [...] Read more.
Background: Identifying active lesions in magnetic resonance imaging (MRI) is crucial for the diagnosis and treatment planning of multiple sclerosis (MS). Active lesions on MRI are identified following the administration of gadolinium-based contrast agents (GBCAs). However, recent studies have reported that repeated administration of GBCAs results in the accumulation of Gd in tissues. In addition, GBCA administration increases healthcare costs. Thus, reducing or eliminating GBCA administration for active lesion detection is important for improved patient safety and reduced healthcare costs. Current state-of-the-art methods for identifying active lesions in brain MRI without GBCA administration rely on data-intensive deep learning. Objective: To implement nonlinear dimensionality reduction (NLDR) methods, locally linear embedding (LLE) and isometric feature mapping (Isomap), which are less data-intensive, for automatically identifying active lesions on brain MRI in MS patients without the administration of contrast agents. Materials and Methods: Fluid-attenuated inversion recovery (FLAIR), T2-weighted, proton density-weighted, and pre- and post-contrast T1-weighted images were included in the multiparametric MRI dataset used in this study. Subtracted pre- and post-contrast T1-weighted images were labeled by experts as active lesions (ground truth). The unsupervised methods LLE and Isomap were used to reconstruct the multiparametric brain MR images into a single embedded image. Active lesions were identified on the embedded images and compared with ground-truth lesions. The performance of the NLDR methods was evaluated by calculating the Dice similarity (DS) index between the observed and identified active lesions in the embedded images. Results: LLE and Isomap were applied to 40 MS patients, achieving median DS scores of 0.74 ± 0.1 and 0.78 ± 0.09, respectively, outperforming current state-of-the-art methods. Conclusions: The NLDR methods Isomap and LLE are viable options for identifying active MS lesions on non-contrast images and could potentially be used as a clinical decision tool. Full article
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
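The pipeline described above — embedding multiparametric images into a single image and scoring against ground truth with the Dice index — can be sketched roughly as follows. The toy 4-channel image, simulated lesion patch, and percentile threshold are stand-ins for illustration, not the paper's actual data or detection rule:

```python
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)

# Toy multiparametric "image": 4 channels (e.g., FLAIR, T2, PD, T1) per
# voxel, flattened to an (n_voxels, 4) matrix; a bright patch plays the
# active lesion.
img = rng.normal(0.0, 0.1, size=(16, 16, 4))
img[5:8, 5:8, :] += 1.0                      # simulated active lesion
truth = np.zeros((16, 16), dtype=bool)
truth[5:8, 5:8] = True

X = img.reshape(-1, 4)
embedded = Isomap(n_neighbors=10, n_components=1).fit_transform(X).ravel()
embedded = embedded.reshape(16, 16)

# Threshold the embedded image; the sign of the Isomap axis is arbitrary,
# so orient it so the lesion region is bright first.
if embedded[truth].mean() < embedded[~truth].mean():
    embedded = -embedded
mask = embedded > np.percentile(embedded, 90)

# Dice similarity between the detected mask and the ground truth.
dice = 2 * (mask & truth).sum() / (mask.sum() + truth.sum())
print("Dice: %.2f" % dice)
```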
26 pages, 6668 KB  
Article
Supervised Manifold Learning Based on Multi-Feature Information Discriminative Fusion within an Adaptive Nearest Neighbor Strategy Applied to Rolling Bearing Fault Diagnosis
by Hongwei Wang, Linhu Yao, Haoran Wang, Yu Liu, Zhiyuan Li, Di Wang, Ren Hu and Lei Tao
Sensors 2023, 23(24), 9820; https://doi.org/10.3390/s23249820 - 14 Dec 2023
Cited by 4 | Viewed by 2127
Abstract
Rolling bearings are a key component for ensuring the safe and smooth operation of rotating machinery and are very prone to failure. Intelligent fault diagnosis of rolling bearings has therefore become a crucial task in the field of mechanical fault diagnosis. This paper proposes a rolling bearing fault diagnosis method based on an adaptive nearest neighbor strategy and the discriminative fusion of multi-feature information using supervised manifold learning (AN-MFIDFS-Isomap). Firstly, an adaptive nearest neighbor strategy is proposed that uses the Euclidean distance and cosine similarity to optimize the selection of neighboring points. Secondly, three feature space transformation and feature information extraction methods are proposed, among which an innovative exponential linear kernel function is introduced to provide new feature descriptions of the data, enhancing feature sensitivity. Finally, under the adaptive nearest neighbor strategy, the novel AN-MFIDFS-Isomap algorithm is proposed for rolling bearing fault diagnosis, discriminatively fusing multiple types of feature information with label information before classification. The proposed AN-MFIDFS-Isomap algorithm is validated on the CWRU open dataset and on our own experimental dataset. The experiments show that the proposed method outperforms other traditional manifold learning methods in terms of data clustering and fault diagnosis. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
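One hypothetical way to blend Euclidean distance with cosine similarity when ranking candidate neighbors, in the spirit of the adaptive strategy described above — the weighting rule below is invented for illustration and is not the paper's AN-MFIDFS-Isomap rule:

```python
import numpy as np

def adaptive_neighbors(X, k_max=10, alpha=0.5):
    """Rank candidate neighbors by a blend of normalized Euclidean
    distance and cosine dissimilarity; alpha weights the two terms.
    Illustrative stand-in, not the paper's exact selection rule."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    eucl = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    cos_dissim = 1.0 - (X @ X.T) / (norms * norms.T + 1e-12)
    # Both terms scaled to a comparable range before blending.
    score = (alpha * eucl / (eucl.max() + 1e-12)
             + (1.0 - alpha) * cos_dissim / 2.0)
    np.fill_diagonal(score, np.inf)       # a point is not its own neighbor
    return np.argsort(score, axis=1)[:, :k_max]

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 8))
nbrs = adaptive_neighbors(X, k_max=5)
print(nbrs.shape)  # (50, 5)
```

The resulting index matrix can be fed to any graph-based embedding (Isomap, LLE) in place of a plain k-nearest-neighbors graph.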
20 pages, 5623 KB  
Article
Manifold Learning in Electric Power System Transient Stability Analysis
by Petar Sarajcev and Dino Lovric
Energies 2023, 16(23), 7810; https://doi.org/10.3390/en16237810 - 27 Nov 2023
Cited by 1 | Viewed by 1887
Abstract
This paper examines the use of manifold learning in the context of electric power system transient stability analysis. Since wide-area monitoring systems (WAMSs) introduced a big data paradigm into power system operation, manifold learning can be seen as a means of condensing these high-dimensional data into an appropriate low-dimensional representation (i.e., embedding) that preserves as much information as possible. In this paper, we consider several embedding methods (principal component analysis (PCA) and its variants, singular value decomposition, Isomap and spectral embedding, locally linear embedding (LLE) and its variants, multidimensional scaling (MDS), and others) and apply them to a dataset derived from transient simulations of the IEEE New England 39-bus power system. We found that PCA with a radial basis function kernel is well suited to this type of power system data (where features are instances of three-phase phasor values). We also found that LLE (including its variants) did not produce a good embedding with this particular kind of data. Furthermore, we found that a support vector machine, trained on top of the embedding produced by several different methods, was able to detect power system disturbances from WAMS data. Full article
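The RBF-kernel-PCA-plus-SVM pipeline the authors found effective can be sketched with scikit-learn. The synthetic "phasor" features and every parameter value below are placeholders, not the paper's IEEE 39-bus data or tuned settings:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for WAMS snapshots: each row holds phasor-like
# features; class 1 marks a "disturbance" with shifted statistics.
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(200, 30))
fault = rng.normal(0.8, 1.5, size=(200, 30))
X = np.vstack([normal, fault])
y = np.array([0] * 200 + [1] * 200)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)

# RBF kernel PCA produces the low-dimensional embedding; the SVM is
# trained on top of it, mirroring the setup described in the abstract.
clf = make_pipeline(KernelPCA(n_components=10, kernel="rbf", gamma=0.01),
                    SVC())
clf.fit(Xtr, ytr)
acc = accuracy_score(yte, clf.predict(Xte))
print("test accuracy: %.2f" % acc)
```

Swapping `KernelPCA` for `Isomap` or `SpectralEmbedding` in the same pipeline reproduces the kind of method comparison the paper reports.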
