Search Results (18)

Search Parameters:
Keywords = manifold space ranking

29 pages, 430 KiB  
Article
Advanced Manifold–Metric Pairs
by Pierros Ntelis
Mathematics 2025, 13(15), 2510; https://doi.org/10.3390/math13152510 - 4 Aug 2025
Abstract
This article presents a novel mathematical formalism for advanced manifold–metric pairs, enhancing the frameworks of geometry and topology. We construct various D-dimensional manifolds and their associated metric spaces using functional methods, with a focus on integrating concepts from mathematical physics, field theory, topology, algebra, probability, and statistics. Our methodology employs rigorous mathematical construction proofs and logical foundations to develop generalized manifold–metric pairs, including homogeneous and isotropic expanding manifolds, as well as probabilistic and entropic variants. Key results include the establishment of metrizability for topological manifolds via the Urysohn Metrization Theorem, the formulation of higher-rank tensor metrics, and the exploration of complex and quaternionic codomains with applications to cosmological models like the expanding spacetime. By combining spacetime generalized sets with information-theoretic and probabilistic approaches, we achieve a unified framework that advances the understanding of manifold–metric interactions and their physical implications. Full article
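As a concrete point of reference for the homogeneous and isotropic expanding manifolds mentioned above, a standard textbook example of a manifold–metric pair (not drawn from the article itself) is the spatially flat expanding spacetime:

```latex
% A standard example of a D = 4 manifold–metric pair (M, g): a homogeneous,
% isotropic, expanding spacetime with scale factor a(t) (spatially flat FLRW).
\[
  \mathcal{M} = \mathbb{R}_{t} \times \mathbb{R}^{3}, \qquad
  ds^{2} = g_{\mu\nu}\, dx^{\mu} dx^{\nu}
         = -\,dt^{2} + a^{2}(t)\left( dx^{2} + dy^{2} + dz^{2} \right).
\]
```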
24 pages, 379 KiB  
Article
Involutive Symmetries and Langlands Duality in Moduli Spaces of Principal G-Bundles
by Álvaro Antón-Sancho
Symmetry 2025, 17(6), 819; https://doi.org/10.3390/sym17060819 - 24 May 2025
Viewed by 392
Abstract
Let $X$ be a compact Riemann surface of genus $g \geq 2$, $G$ be a complex semisimple Lie group, and $\mathcal{M}_G(X)$ be the moduli space of stable principal $G$-bundles. This paper studies the fixed point set of involutions on $\mathcal{M}_G(X)$ induced by an anti-holomorphic involution $\tau$ on $X$ and a Cartan involution $\theta$ of $G$, producing an involution $\sigma = \theta \circ \tau$. These fixed points are shown to correspond to stable $G_{\mathbb{R}}$-bundles over the real curve $(X^{\tau}, \tau)$, where $G_{\mathbb{R}}$ is the real form associated with $\theta$. The fixed point set $\mathcal{M}_G(X)^{\sigma}$ consists of exactly $2^{r}$ connected components, each a smooth complex manifold of dimension $\frac{(g-1)\dim G}{2}$, where $r$ is the rank of the fundamental group of the compact form of $G$. A cohomological obstruction in $H^{2}(X^{\tau}, \pi_1(G_{\mathbb{R}}))$ characterizes which bundles are fixed. A key result establishes a derived equivalence between coherent sheaves on $\mathcal{M}_G(X)^{\sigma}$ and on the fixed point set of the dual involution on the moduli space of $^{L}G$-local systems, where $^{L}G$ denotes the Langlands dual of $G$. This provides an extension of the Geometric Langlands Correspondence to settings with involutions. An application to the Chern–Simons theory on real curves interprets $\mathcal{M}_G(X)^{\sigma}$ as a $(B,B,B)$-brane, mirror to an $(A,A,A)$-brane in the Hitchin system, revealing new links between real structures, quantization, and mirror symmetry. Full article
(This article belongs to the Special Issue Symmetry in Integrable Systems: Topics and Advances)
20 pages, 468 KiB  
Article
Geometric Aspects of Mixed Quantum States Inside the Bloch Sphere
by Paul M. Alsing, Carlo Cafaro, Domenico Felice and Orlando Luongo
Quantum Rep. 2024, 6(1), 90-109; https://doi.org/10.3390/quantum6010007 - 6 Feb 2024
Cited by 5 | Viewed by 4144
Abstract
When studying the geometry of quantum states, it is acknowledged that mixed states can be distinguished by infinitely many metrics. Unfortunately, this freedom causes metric-dependent interpretations of physically significant geometric quantities such as the complexity and volume of quantum states. In this paper, we present an insightful discussion on the differences between the Bures and the Sjöqvist metrics inside a Bloch sphere. First, we begin with a formal comparative analysis between the two metrics by critically discussing three alternative interpretations for each metric. Second, we explicitly illustrate the distinct behaviors of the geodesic paths on each one of the two metric manifolds. Third, we compare the finite distances between an initial state and the final mixed state when calculated with the two metrics. Interestingly, in analogy with what happens when studying the topological aspects of real Euclidean spaces equipped with distinct metric functions (for instance, the usual Euclidean metric and the taxicab metric), we observe that the relative ranking based on the concept of a finite distance between mixed quantum states is not preserved when comparing distances determined with the Bures and the Sjöqvist metrics. Finally, we conclude with a brief discussion on the consequences of this violation of a metric-based relative ranking on the concept of the complexity and volume of mixed quantum states. Full article
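As a minimal illustration of the kind of comparison discussed above, the sketch below computes the Bures distance between two single-qubit mixed states given by their Bloch vectors. It is a generic NumPy/SciPy example, not the authors' code, and the Sjöqvist metric is not implemented here.

```python
# Minimal sketch: Bures distance between two mixed states inside the Bloch sphere,
# via the Uhlmann fidelity F(rho, sigma) = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2
# and D_B = sqrt(2 * (1 - sqrt(F))).
import numpy as np
from scipy.linalg import sqrtm

def bloch_state(r):
    """Density matrix for a Bloch vector r = (rx, ry, rz) with |r| <= 1."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    return 0.5 * (np.eye(2) + r[0] * sx + r[1] * sy + r[2] * sz)

def bures_distance(rho, sigma):
    s = sqrtm(rho)
    fidelity = np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2
    return np.sqrt(max(0.0, 2.0 * (1.0 - np.sqrt(fidelity))))

rho = bloch_state([0.0, 0.0, 0.6])     # mixed state on the z-axis
sigma = bloch_state([0.5, 0.0, 0.0])   # mixed state on the x-axis
print(bures_distance(rho, sigma))
```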

22 pages, 23761 KiB  
Article
Robust Ranking Kernel Support Vector Machine via Manifold Regularized Matrix Factorization for Multi-Label Classification
by Heping Song, Yiming Zhou, Ebenezer Quayson, Qian Zhu and Xiangjun Shen
Appl. Sci. 2024, 14(2), 638; https://doi.org/10.3390/app14020638 - 11 Jan 2024
Cited by 1 | Viewed by 1450
Abstract
Multi-label classification has been extensively researched and utilized for several decades. However, the performance of existing methods is highly susceptible to noisy data samples, resulting in a significant decrease in accuracy when noise levels are high. To address this issue, we propose a robust ranking support vector machine (Rank-SVM) method that incorporates manifold-regularized matrix factorization. Unlike traditional Rank-SVM methods, our approach integrates feature selection and multi-label learning into a unified framework. Within this framework, we employ matrix factorization to learn a low-rank robust subspace within the input space, thereby enhancing the robustness of the data representation under high-noise conditions. Additionally, we incorporate manifold structure regularization into the framework to preserve manifold relationships among the low-rank samples, which further improves the robustness of the low-rank representation. Leveraging this robust low-rank representation, we extract resilient low-rank features and employ them to construct a more effective classifier. Finally, the proposed framework is extended to derive a kernelized ranking approach for the creation of nonlinear multi-label classifiers. To effectively solve this non-convex kernelized method, we employ the augmented Lagrangian multiplier (ALM) and alternating direction method of multipliers (ADMM) techniques to obtain the optimal solution. Experimental evaluations conducted on various datasets demonstrate that our framework achieves superior classification results and significantly enhances performance in high-noise scenarios. Full article
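The low-rank step described above can be illustrated with a small sketch: a factorization $X \approx UV$ with a graph-Laplacian penalty $\mathrm{tr}(V L V^{T})$ that keeps the low-rank codes consistent with a kNN graph built on the samples. The data shapes, parameters, and alternating solver are illustrative assumptions; this is not the authors' Rank-SVM framework or its ALM/ADMM solver.

```python
# Minimal sketch of manifold-regularized matrix factorization: X ~ U V with a
# graph-Laplacian penalty tr(V L V^T); alternating updates, the V-step being a
# Sylvester equation (U^T U) V + V (lam L) = U^T X.
import numpy as np
from scipy.linalg import solve_sylvester
from sklearn.neighbors import kneighbors_graph

def manifold_mf(X, rank=10, lam=0.1, k=5, iters=30, seed=0):
    n_features, n_samples = X.shape
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((n_features, rank))
    V = rng.standard_normal((rank, n_samples))
    # Symmetric kNN affinity on the samples (columns of X) and its Laplacian.
    W = kneighbors_graph(X.T, n_neighbors=k, mode="connectivity").toarray()
    W = np.maximum(W, W.T)
    L = np.diag(W.sum(axis=1)) - W
    for _ in range(iters):
        U = X @ V.T @ np.linalg.inv(V @ V.T + 1e-6 * np.eye(rank))            # U-step
        V = solve_sylvester(U.T @ U + 1e-6 * np.eye(rank), lam * L, U.T @ X)  # V-step
    return U, V

X = np.random.default_rng(1).standard_normal((50, 200))  # features x samples
U, V = manifold_mf(X)
print(U.shape, V.shape)
```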

14 pages, 4120 KiB  
Article
Saliency Detection Based on Low-Level and High-Level Features via Manifold-Space Ranking
by Xiaoli Li, Yunpeng Liu and Huaici Zhao
Electronics 2023, 12(2), 449; https://doi.org/10.3390/electronics12020449 - 15 Jan 2023
Cited by 2 | Viewed by 3356
Abstract
Saliency detection, as an active research direction in image understanding and analysis, has been studied extensively. In this paper, to improve the accuracy of saliency detection, we propose an efficient unsupervised salient object detection method. In the first step, we segment the image at different scales and extract local low-level features for each superpixel, which helps to locate the approximate positions of salient objects. Then, we use convolutional neural networks to extract high-level, semantically rich features as complementary features of each superpixel; the low-level and high-level features of each superpixel are combined into a new feature vector used to measure the distance between different superpixels. In the last step, we use a manifold space-ranking method to calculate the saliency of each superpixel. Extensive experiments on four challenging datasets indicate that the proposed method surpasses state-of-the-art methods and is closer to the ground truth. Full article
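The manifold space-ranking step itself is compact enough to sketch. The example below ranks superpixels from generic feature vectors using the standard closed form $f^{*} = (I - \alpha S)^{-1} y$ with $S = D^{-1/2} W D^{-1/2}$; the random features, query choice, and parameters are illustrative assumptions rather than the paper's exact pipeline.

```python
# Minimal sketch of manifold ranking over superpixel features: build an affinity
# matrix W, normalize it symmetrically, and solve f* = (I - alpha S)^{-1} y,
# where y marks the query superpixels.
import numpy as np

def manifold_ranking(features, queries, alpha=0.99, sigma=0.1):
    n = features.shape[0]
    d2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2 * sigma ** 2))          # Gaussian affinities
    np.fill_diagonal(W, 0.0)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(W.sum(axis=1) + 1e-12))
    S = d_inv_sqrt @ W @ d_inv_sqrt             # symmetric normalization
    y = np.zeros(n)
    y[queries] = 1.0                            # indicator of the query nodes
    return np.linalg.solve(np.eye(n) - alpha * S, y)

feats = np.random.default_rng(0).random((300, 64))   # 300 superpixels, 64-D features
scores = manifold_ranking(feats, queries=[0, 1, 2])  # rank against a few seed superpixels
print(scores.shape)
```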

16 pages, 1592 KiB  
Article
Neural Subspace Learning for Surface Defect Detection
by Bin Liu, Weifeng Chen, Bo Li and Xiuping Liu
Mathematics 2022, 10(22), 4351; https://doi.org/10.3390/math10224351 - 19 Nov 2022
Viewed by 1952
Abstract
Surface defect inspection is a key technique in industrial product assessments. Compared with other visual applications, industrial defect inspection suffers from a small-sample problem and a lack of labeled data. Therefore, conventional deep-learning methods, which depend on large numbers of labeled samples, cannot be directly applied to this task. To deal with the lack of labeled data, unsupervised subspace learning provides more clues for the task of defect inspection. However, conventional subspace learning methods focus on studying the linear subspace structure. In order to explore the nonlinear manifold structure, a novel neural subspace learning algorithm is proposed by substituting linear operators with nonlinear neural networks. The low-rank property of the latent space is approximated by limiting the dimensions of the encoded feature, and the sparse coding property is simulated by quantized autoencoding. To overcome the small-sample problem, a novel data augmentation strategy called thin-plate-spline deformation is proposed. Compared with the rigid transformation methods used in the previous literature, our strategy can generate more reliable training samples. Experiments on real-world datasets demonstrate that our method achieves state-of-the-art performance compared with unsupervised methods. More importantly, the proposed method is competitive with supervised deep-learning methods and has a better generalization capability. Full article
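A minimal sketch of the neural-subspace idea, under assumed inputs (flattened, defect-free training patches) and a plain PyTorch autoencoder: the narrow bottleneck plays the role of the low-rank latent subspace, and the reconstruction error of a test patch can serve as a defect score. The quantized autoencoding and thin-plate-spline augmentation described above are not reproduced here.

```python
# Minimal sketch: autoencoder with a narrow bottleneck as a nonlinear subspace model;
# high reconstruction error on a test patch suggests a defect.
import torch
import torch.nn as nn

class SubspaceAutoencoder(nn.Module):
    def __init__(self, in_dim=1024, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = SubspaceAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
patches = torch.randn(512, 1024)                 # stand-in for defect-free patches
for _ in range(20):
    loss = nn.functional.mse_loss(model(patches), patches)
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    test_patch = torch.randn(1, 1024)
    defect_score = nn.functional.mse_loss(model(test_patch), test_patch)
print(float(defect_score))                       # higher -> more likely defective
```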

19 pages, 6708 KiB  
Article
A New Low-Rank Structurally Incoherent Algorithm for Robust Image Feature Extraction
by Hongmei Ge and Aibo Song
Mathematics 2022, 10(19), 3648; https://doi.org/10.3390/math10193648 - 5 Oct 2022
Cited by 3 | Viewed by 1322
Abstract
To address the problem that structurally incoherent low-rank non-negative matrix decomposition (SILR-NMF) algorithms only consider the non-negativity of the data and ignore the manifold distribution of high-dimensional data, a new structurally incoherent low-rank two-dimensional local discriminant graph embedding (SILR-2DLDGE) method is proposed in this paper. The algorithm consists of three parts. First, since it is vital to preserve the intrinsic relationships between data points, we introduce the graph embedding (GE) framework to preserve locality information. Second, the algorithm alleviates the impact of noise and corruption through low-rank learning, using the $L_1$ norm as a constraint. Finally, the algorithm improves discriminant ability by exploiting the structurally incoherent parts of the data. We also establish the theoretical basis of the algorithm and analyze its computational cost and convergence. Experimental results and discussions on several image databases show that the proposed algorithm is more effective than the SILR-NMF algorithm. Full article

10 pages, 302 KiB  
Article
On the Geometry in the Large of Einstein-like Manifolds
by Josef Mikeš, Lenka Rýparová, Sergey Stepanov and Irina Tsyganok
Mathematics 2022, 10(13), 2208; https://doi.org/10.3390/math10132208 - 24 Jun 2022
Viewed by 1694
Abstract
Gray has presented the invariant orthogonal irreducible decomposition of the space of all covariant tensors of rank 3, obeying only the identities of the gradient of the Ricci tensor. This decomposition introduced the seven classes of Einstein-like manifolds, the Ricci tensors of which fulfill the defining condition of each subspace. The large-scale geometry of such manifolds has been studied by many geometers using the classical Bochner technique. However, the scope of this method is limited to compact Riemannian manifolds. In the present paper, we prove several Liouville-type theorems for certain classes of Einstein-like complete manifolds. This represents an illustration of the new possibilities of geometric analysis. Full article
(This article belongs to the Special Issue Differential Geometry of Spaces with Special Structures)
18 pages, 2468 KiB  
Article
Multi-Modal Feature Selection with Feature Correlation and Feature Structure Fusion for MCI and AD Classification
by Zhuqing Jiao, Siwei Chen, Haifeng Shi and Jia Xu
Brain Sci. 2022, 12(1), 80; https://doi.org/10.3390/brainsci12010080 - 5 Jan 2022
Cited by 37 | Viewed by 5048
Abstract
Feature selection for multiple types of data has been widely applied in mild cognitive impairment (MCI) and Alzheimer’s disease (AD) classification research. Combining multi-modal data for classification can better realize the complementarity of valuable information. In order to improve the classification performance of feature selection on multi-modal data, we propose a multi-modal feature selection algorithm using feature correlation and feature structure fusion (FC2FS). First, we construct feature correlation regularization by fusing a similarity matrix between multi-modal feature nodes. Then, based on manifold learning, we employ feature matrix fusion to construct feature structure regularization and learn the local geometric structure of the feature nodes. Finally, the two regularizations are embedded in a multi-task learning model that introduces a low-rank constraint; the multi-modal features are selected, and the final features are linearly fused and input into a support vector machine (SVM) for classification. Different controlled experiments were set up to verify the validity of the proposed method, which was applied to MCI and AD classification. The classification accuracies for normal controls versus Alzheimer’s disease, normal controls versus late mild cognitive impairment, normal controls versus early mild cognitive impairment, and early mild cognitive impairment versus late mild cognitive impairment reach 91.85 ± 1.42%, 85.33 ± 2.22%, 78.29 ± 2.20%, and 77.67 ± 1.65%, respectively. This method compensates for the shortcomings of traditional subject-based multi-modal feature selection and fully considers the relationships between feature nodes and the local geometric structure of the feature space. Our study not only enhances the interpretability of feature selection but also improves the classification performance, which provides a useful reference for the identification of MCI and AD. Full article
(This article belongs to the Special Issue New Insight into Cellular and Molecular Bases of Brain Disorders)

15 pages, 316 KiB  
Article
Heat Kernels Estimates for Hermitian Line Bundles on Manifolds of Bounded Geometry
by Yuri A. Kordyukov
Mathematics 2021, 9(23), 3060; https://doi.org/10.3390/math9233060 - 28 Nov 2021
Cited by 1 | Viewed by 1921
Abstract
We consider a family of semiclassically scaled second-order elliptic differential operators on high tensor powers of a Hermitian line bundle (possibly, twisted by an auxiliary Hermitian vector bundle of arbitrary rank) on a Riemannian manifold of bounded geometry. We establish an off-diagonal Gaussian upper bound for the associated heat kernel. The proof is based on some tools from the theory of operator semigroups in a Hilbert space, results on Sobolev spaces adapted to the current setting, and weighted estimates with appropriate exponential weights. Full article
(This article belongs to the Special Issue Asymptotics for Differential Equations)
17 pages, 489 KiB  
Article
Principal Bundle Structure of Matrix Manifolds
by Marie Billaud-Friess, Antonio Falcó and Anthony Nouy
Mathematics 2021, 9(14), 1669; https://doi.org/10.3390/math9141669 - 15 Jul 2021
Cited by 2 | Viewed by 2543
Abstract
In this paper, we introduce a new geometric description of the manifolds of matrices of fixed rank. The starting point is a geometric description of the Grassmann manifold $\mathrm{Gr}_r(\mathbb{R}^k)$ of linear subspaces of dimension $r < k$ in $\mathbb{R}^k$, which avoids the use of equivalence classes. The set $\mathrm{Gr}_r(\mathbb{R}^k)$ is equipped with an atlas, which provides it with the structure of an analytic manifold modeled on $\mathbb{R}^{(k-r)\times r}$. Then, we define an atlas for the set $\mathcal{M}_r(\mathbb{R}^{k\times r})$ of full rank matrices and prove that the resulting manifold is an analytic principal bundle with base $\mathrm{Gr}_r(\mathbb{R}^k)$ and typical fibre $\mathrm{GL}_r$, the general linear group of invertible matrices in $\mathbb{R}^{r\times r}$. Finally, we define an atlas for the set $\mathcal{M}_r(\mathbb{R}^{n\times m})$ of non-full rank matrices and prove that the resulting manifold is an analytic principal bundle with base $\mathrm{Gr}_r(\mathbb{R}^n)\times \mathrm{Gr}_r(\mathbb{R}^m)$ and typical fibre $\mathrm{GL}_r$. The atlas of $\mathcal{M}_r(\mathbb{R}^{n\times m})$ is indexed on the manifold itself, which allows a natural definition of a neighbourhood for a given matrix, this neighbourhood being proved to possess the structure of a Lie group. Moreover, the set $\mathcal{M}_r(\mathbb{R}^{n\times m})$ equipped with the topology induced by the atlas is proven to be an embedded submanifold of the matrix space $\mathbb{R}^{n\times m}$ equipped with the subspace topology. The proposed geometric description then results in a description of the matrix space $\mathbb{R}^{n\times m}$, seen as the union of manifolds $\mathcal{M}_r(\mathbb{R}^{n\times m})$, as an analytic manifold equipped with a topology for which the matrix rank is a continuous map. Full article
(This article belongs to the Special Issue Differential Geometry: Structures on Manifolds and Their Applications)

22 pages, 2248 KiB  
Article
Unified Low-Rank Subspace Clustering with Dynamic Hypergraph for Hyperspectral Image
by Jinhuan Xu, Liang Xiao and Jingxiang Yang
Remote Sens. 2021, 13(7), 1372; https://doi.org/10.3390/rs13071372 - 2 Apr 2021
Cited by 4 | Viewed by 2615
Abstract
Low-rank representation with hypergraph regularization has achieved great success in hyperspectral imagery, as it can explore the global structure and further incorporate local information. However, existing hypergraph learning methods either construct the hypergraph from a fixed similarity matrix or adaptively optimize it in the original feature space; they do not update the hypergraph in the learned subspace. In addition, the clustering performance of existing k-means-based methods is unstable, as k-means is sensitive to the initialization of the cluster centers. To address these issues, we propose a novel unified low-rank subspace clustering method with a dynamic hypergraph for hyperspectral images (HSIs). In our method, the hypergraph is adaptively learned from the low-rank subspace feature, which can capture a more complex manifold structure effectively. In addition, we introduce a rotation matrix to simultaneously learn continuous and discrete clustering labels without any relaxation-induced information loss. The unified model jointly learns the hypergraph and the discrete clustering labels, with the subspace feature adaptively learned by considering the optimal dynamic hypergraph with the self-taught property. The experimental results on real HSIs show that the proposed method achieves better performance than eight state-of-the-art clustering methods. Full article
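A small sketch of the hypergraph machinery involved, under generic assumptions (a kNN-based hypergraph with uniform edge weights, which is not the paper's dynamically updated hypergraph or its unified model): each sample and its k nearest neighbors form a hyperedge, and the normalized hypergraph Laplacian $L = I - D_v^{-1/2} H W D_e^{-1} H^{T} D_v^{-1/2}$ can then be used to regularize or cluster the data.

```python
# Minimal sketch: kNN-based hypergraph and its normalized Laplacian (Zhou et al.).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def hypergraph_laplacian(X, k=5):
    n = X.shape[0]
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    H = np.zeros((n, n))              # one hyperedge per vertex: itself + k neighbors
    for e in range(n):
        H[idx[e], e] = 1.0
    w = np.ones(n)                    # uniform hyperedge weights
    Dv = H @ w                        # vertex degrees
    De = H.sum(axis=0)                # hyperedge degrees
    Dv_is = np.diag(1.0 / np.sqrt(Dv))
    Theta = Dv_is @ H @ np.diag(w / De) @ H.T @ Dv_is
    return np.eye(n) - Theta

X = np.random.default_rng(0).random((200, 30))   # 200 pixels with 30 spectral bands
L = hypergraph_laplacian(X)
print(L.shape)
```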

19 pages, 7014 KiB  
Review
Locality Sensitive Discriminative Unsupervised Dimensionality Reduction
by Yun-Long Gao, Si-Zhe Luo, Zhi-Hao Wang, Chih-Cheng Chen and Jin-Yan Pan
Symmetry 2019, 11(8), 1036; https://doi.org/10.3390/sym11081036 - 12 Aug 2019
Cited by 6 | Viewed by 4044
Abstract
Graph-based embedding methods receive much attention due to their use of graph and manifold information. However, conventional graph-based embedding methods may not be effective when the data are high-dimensional and have complex distributions. First, the similarity matrix only considers local distance measurement in the original space, which cannot reflect a wide variety of data structures. Second, separating graph construction from dimensionality reduction means that the similarity matrix cannot be fully relied on, because the original data usually contain many noisy samples and features. In this paper, we address these problems by constructing two adjacency graphs that represent the similarity and diversity structure of the original data, and then imposing a rank constraint on the corresponding Laplacian matrix, yielding a novel adaptive graph learning method, namely locality sensitive discriminative unsupervised dimensionality reduction (LSDUDR). As a result, the learned graph shows a clear block-diagonal structure, so that the clustering structure of the data is preserved. Experimental results on synthetic datasets and real-world benchmark datasets demonstrate the effectiveness of our approach. Full article
(This article belongs to the Special Issue Selected Papers from IIKII 2019 conferences in Symmetry)
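The rank constraint mentioned above rests on a standard fact: the Laplacian $L$ of an $n$-node graph with exactly $c$ connected components satisfies $\mathrm{rank}(L) = n - c$, i.e., it has exactly $c$ zero eigenvalues. The toy check below (illustrative only, not the LSDUDR optimization) verifies this on a synthetic block-diagonal affinity matrix.

```python
# Minimal sketch: count the zero eigenvalues of a graph Laplacian to recover the
# number of connected components (blocks) of a block-diagonal affinity matrix.
import numpy as np

rng = np.random.default_rng(0)
W = np.zeros((60, 60))
for i in range(3):                                       # 3 clusters of 20 nodes
    B = rng.random((20, 20))
    S = (B + B.T) / 2
    np.fill_diagonal(S, 0.0)
    W[i * 20:(i + 1) * 20, i * 20:(i + 1) * 20] = S      # block-diagonal affinities

L = np.diag(W.sum(axis=1)) - W                           # unnormalized graph Laplacian
eigvals = np.linalg.eigvalsh(L)
n_components = int(np.sum(eigvals < 1e-8))
print(n_components)                                      # 3 zero eigenvalues = 3 blocks
```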

22 pages, 707 KiB  
Article
State-Space Models on the Stiefel Manifold with a New Approach to Nonlinear Filtering
by Yukai Yang and Luc Bauwens
Econometrics 2018, 6(4), 48; https://doi.org/10.3390/econometrics6040048 - 12 Dec 2018
Cited by 2 | Viewed by 9722
Abstract
We develop novel multivariate state-space models wherein the latent states evolve on the Stiefel manifold and follow a conditional matrix Langevin distribution. The latent states correspond to time-varying reduced rank parameter matrices, like the loadings in dynamic factor models and the parameters of cointegrating relations in vector error-correction models. The corresponding nonlinear filtering algorithms are developed and evaluated by means of simulation experiments. Full article
(This article belongs to the Special Issue Filtering)
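A minimal sketch of one building block such filters rely on (a generic utility, not the authors' nonlinear filtering algorithm): projecting an arbitrary parameter matrix onto the Stiefel manifold via its polar factor, which is also the mode of a matrix Langevin distribution with that parameter matrix.

```python
# Minimal sketch: project a matrix onto the Stiefel manifold
# St(k, p) = {X in R^{k x p} : X^T X = I_p} via its polar factor (thin SVD).
import numpy as np

def project_to_stiefel(F):
    """Closest matrix (in Frobenius norm) with orthonormal columns."""
    U, _, Vt = np.linalg.svd(F, full_matrices=False)
    return U @ Vt

F = np.random.default_rng(0).standard_normal((10, 3))
X = project_to_stiefel(F)
print(np.allclose(X.T @ X, np.eye(3)))   # True: X lies on St(10, 3)
```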

23 pages, 3083 KiB  
Article
Hyperspectral Mixed Denoising via Spectral Difference-Induced Total Variation and Low-Rank Approximation
by Le Sun, Tianming Zhan, Zebin Wu, Liang Xiao and Byeungwoo Jeon
Remote Sens. 2018, 10(12), 1956; https://doi.org/10.3390/rs10121956 - 5 Dec 2018
Cited by 28 | Viewed by 3618
Abstract
Exploration of multiple priors on observed signals has been demonstrated to be an effective way of recovering underlying signals. In this paper, a new spectral difference-induced total variation and low-rank approximation (termed SDTVLA) method is proposed for hyperspectral mixed denoising. The spectral difference transform, which projects data into the spectral difference space (SDS), has been proven to be powerful at changing the structure of noise (especially sparse noise with a specific pattern, e.g., stripes or dead lines present at the same position in a series of bands) in the original hyperspectral image (HSI), thus allowing low-rank techniques to remove mixed noise more efficiently without treating it as a low-rank feature. In addition, because neighboring pixels are highly correlated and the spectra of homogeneous objects in a hyperspectral scene always lie in the same low-dimensional manifold, we are inspired to combine total variation and the nuclear norm to simultaneously exploit the local piecewise smoothness and the global low-rankness in the SDS for mixed noise reduction of HSIs. Finally, the alternating direction method of multipliers (ADMM) is employed to effectively solve the SDTVLA model. Extensive experiments on three simulated and two real HSI datasets demonstrate that, in terms of quantitative metrics (i.e., the mean peak signal-to-noise ratio (MPSNR), the mean structural similarity index (MSSIM) and the mean spectral angle (MSA)), the proposed SDTVLA method achieves MPSNR values that are, on average, 1.5 dB higher than those of the competing methods, as well as better visual quality. Full article
(This article belongs to the Section Remote Sensing Image Processing)
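Two of the building blocks named above are easy to sketch in isolation: the spectral difference transform (a first-order difference along the band dimension) and singular value thresholding, the proximal operator of the nuclear norm used for the low-rank term. The array shapes and the threshold are illustrative assumptions; this is not the SDTVLA ADMM solver itself.

```python
# Minimal sketch: spectral difference transform and one singular value
# thresholding (SVT) step, i.e., the proximal operator of the nuclear norm.
import numpy as np

def spectral_difference(cube):
    """First-order difference along the spectral axis of an (H, W, B) cube."""
    return np.diff(cube, axis=2)

def singular_value_threshold(M, tau):
    """Shrink the singular values of M by tau (prox of tau * nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

hsi = np.random.default_rng(0).random((64, 64, 30))      # toy hyperspectral cube
sds = spectral_difference(hsi)                           # (64, 64, 29) in the SDS
casorati = sds.reshape(-1, sds.shape[2])                 # pixels x (bands - 1) matrix
low_rank = singular_value_threshold(casorati, tau=0.5)   # one SVT step
print(low_rank.shape)
```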
