Search Results (43)

Search Parameters:
Keywords = Hessian analysis

33 pages, 20009 KB  
Article
Fractal Waves and Caustic Signatures in a Superdeterministic Framework: Benchmarking PINNs and PI-GNNs for the Fractional Klein–Gordon Equation
by Luis Rojas and José Garcia
Fractal Fract. 2026, 10(5), 287; https://doi.org/10.3390/fractalfract10050287 - 24 Apr 2026
Viewed by 149
Abstract
While superdeterministic and fractal spacetime models offer compelling alternative perspectives on quantum foundations, the simulation and validation of effective wave dynamics in such non-differentiable, deterministic settings remain computationally and theoretically challenging. To address this, a framework built around the Fractional Nonlinear Klein–Gordon Equation (FNKGE), defined through the spectral fractional Laplacian, was developed. This equation was solved and benchmarked through a comparative study between Physics-Informed Neural Networks (PINNs) with Fourier features and Physics-Informed Graph Neural Networks (PI-GNNs). Additionally, detection patterns were simulated via deterministic agents, and theoretical links between fractal geometry, computational irreducibility, and deviations from statistical independence were formalized. Regarding the computational evaluation, superior accuracy was achieved by the PI-GNNs, yielding a mean relative error of 0.5% (ε̄ = 0.005), alongside faster convergence and a more well-conditioned Hessian spectrum compared to PINNs. Crucially, a continuous power-law decay (S(k_y) ∝ k_y^(−1.8)) was revealed by the spectral analysis of the simulated detection patterns, confirming the emergence of classical optical caustics rather than discrete quantum-interference peaks. Furthermore, a modified dispersion relation that accurately predicts linear instability regimes was derived, and specific boundary artifacts in non-periodic domains were identified. Taken together, the FNKGE is validated by these results as a viable effective model for fractal wave phenomenology and as a robust benchmark for physics-informed learning architectures. Full article
(This article belongs to the Section Engineering)
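The power-law diagnostic reported in this abstract can be illustrated with a short sketch: fit the exponent of S(k) ∝ k^α from a signal's power spectrum by least squares in log-log space. The synthetic signal, seed, and fitting choices below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def spectral_exponent(signal, dx=1.0):
    """Fit alpha in S(k) ~ k**alpha by least squares in log-log space."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal))**2
    k = np.fft.rfftfreq(n, d=dx)
    mask = k > 0                                  # drop the DC bin before the log
    slope, _ = np.polyfit(np.log(k[mask]), np.log(spectrum[mask]), 1)
    return slope

# Synthetic check: build a real signal whose power spectrum decays like k**(-1.8)
rng = np.random.default_rng(0)
n = 4096
k = np.fft.rfftfreq(n)
amp = np.zeros_like(k)
amp[1:] = k[1:]**(-0.9)                           # amplitude k**-0.9 -> power k**-1.8
phases = rng.uniform(0.0, 2.0*np.pi, len(k))
phases[0] = phases[-1] = 0.0                      # keep DC and Nyquist bins real
signal = np.fft.irfft(amp * np.exp(1j*phases), n=n)
alpha = spectral_exponent(signal)
print(alpha)                                      # close to -1.8
```

A recovered slope near −1.8 (rather than isolated spectral peaks) is the kind of evidence the abstract cites for caustic-like, non-interference behavior.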
19 pages, 1112 KB  
Article
Trade-In and Cash-Out Strategies from Perspective of Dynamic Pricing Model
by Xiang Li and Jiqiong Liu
Mathematics 2026, 14(8), 1340; https://doi.org/10.3390/math14081340 - 16 Apr 2026
Viewed by 184
Abstract
In recent years, scientific and technological development has made trade-in programs for innovative electronic products increasingly popular. Many innovative companies that continually launch new products offer trade-in and cash-out sales strategies to stimulate purchases. This paper studies when a company should launch these two sales strategies and how to set optimal prices that maximize profit, taking into account the degree of consumer strategic behavior, the degree of new-product innovation, the residual value of old products, and cost. We construct a two-period dynamic pricing joint optimization model with four core decision variables and derive the closed-form optimal solution through strict mathematical derivation, including Hessian matrix analysis and KKT condition verification. We adopt a dynamic pricing strategy that conforms to actual market conditions. The results provide new mathematical insights for dynamic pricing research and reveal the substantive rule that companies are more likely to gain greater benefits when the degree of product innovation is not high and the degree of consumer strategic behavior is moderate. Full article
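The Hessian step in such a derivation checks that the profit function is jointly concave in the prices, so the first-order conditions give a maximum. A minimal numerical sketch, using a hypothetical quadratic two-period profit (not the paper's model):

```python
import numpy as np

# Hypothetical profit in prices (p1, p2): linear demand in own price, with
# second-period demand gaining from the first-period installed base.
def profit(p, a=10.0, b=1.0, c=0.4):
    p1, p2 = p
    return p1*(a - b*p1) + p2*(a - b*p2 + c*(a - b*p1))

def numerical_hessian(f, x, h=1e-4):
    """Central-difference Hessian of a scalar function f at point x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i]*h, np.eye(n)[j]*h
            H[i, j] = (f(x+ei+ej) - f(x+ei-ej) - f(x-ei+ej) + f(x-ei-ej)) / (4*h*h)
    return H

H = numerical_hessian(profit, np.array([3.0, 3.0]))
eigs = np.linalg.eigvalsh(H)
print(eigs)   # both eigenvalues negative -> jointly concave, interior optimum exists
```

For this quadratic the Hessian is constant, [[-2b, -cb], [-cb, -2b]], so negative eigenvalues certify a unique interior optimum before any KKT verification.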

27 pages, 23751 KB  
Article
A Mathematical Framework for Retinal Vessel Segmentation: Fractional Hessian-Based Curvature Analysis
by Priyanka Harjule, Mukesh Delu, Rajesh Kumar and Pilani Nkomozepi
Fractal Fract. 2026, 10(4), 246; https://doi.org/10.3390/fractalfract10040246 - 8 Apr 2026
Viewed by 292
Abstract
This study proposes an improved retinal blood vessel segmentation method to enhance the diagnosis of microvascular retinal complications. The proposed method extracts local shape features from retinal images utilizing a fractional Hessian matrix, which models blood vessels as surface structures characterized by ridges and valleys resulting from variations in curvature. The methodology integrates adaptive principal curvature estimation with a new framework leveraging the fractional Hessian matrix with nonsingular and nonlocal kernels. The effectiveness of the suggested method is assessed using publicly accessible datasets, including DRIVE, HRF, STARE, and some real images obtained from a local hospital. The proposed segmentation achieves 96.77% accuracy and 98.82% specificity on the DRIVE database, 96.91% accuracy and 98.69% specificity on STARE, and 95.90% accuracy and 98.36% specificity on the HRF database. Optimal parameters for the fractional order and Gaussian standard deviation were empirically determined by maximizing segmentation accuracy. Our findings show that the proposed approach achieves competitive performance compared to the listed methods, including several deep learning approaches, while maintaining significant computational efficiency. The output of the suggested method can be further utilized with deep learning techniques, which will be applied in the clinical context of diabetic retinopathy and glaucoma to identify abnormalities likely related to disease progression and different stages. Full article
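The classical (integer-order) version of the Hessian ridge idea can be sketched briefly: bright tubular structures are where the smaller Hessian eigenvalue is strongly negative. The fractional-kernel variant in the paper replaces these Gaussian derivatives; everything below is the standard baseline, with illustrative parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_ridge_measure(img, sigma=2.0):
    """Scale-normalized Hessian ridge response: large where the more negative
    eigenvalue flags a bright ridge (vessel-like structure)."""
    Hxx = gaussian_filter(img, sigma, order=(0, 2)) * sigma**2
    Hyy = gaussian_filter(img, sigma, order=(2, 0)) * sigma**2
    Hxy = gaussian_filter(img, sigma, order=(1, 1)) * sigma**2
    # Closed-form eigenvalues of the 2x2 symmetric Hessian at every pixel
    tr = Hxx + Hyy
    root = np.sqrt(((Hxx - Hyy) / 2)**2 + Hxy**2)
    lam_small = tr / 2 - root            # strongly negative on bright ridges
    return -lam_small

# Synthetic check: one bright horizontal line on a dark background
img = np.zeros((64, 64))
img[32, :] = 1.0
resp = hessian_ridge_measure(gaussian_filter(img, 1.0))
print(resp[32, 32] > resp[16, 32])       # the ridge row responds most strongly
```

Thresholding such a response (or a Frangi-style combination of both eigenvalues) gives a binary vessel map; the paper's contribution is replacing the singular Gaussian kernel with a nonsingular, nonlocal fractional one.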

26 pages, 427 KB  
Article
On Dimension-Free Stochastic Surrogates and Estimators of Cross-Partial Derivatives and the Hessian Matrix
by Matieyendou Lamboni
Stats 2026, 9(2), 36; https://doi.org/10.3390/stats9020036 - 29 Mar 2026
Viewed by 287
Abstract
This study introduces stochastic surrogates of all the cross-partial derivatives of functions using L evaluations of functions at randomized points. Such randomized points are constructed using the class of ℓp-spherical distributions or equivalent distributions. For the cross-partial derivatives of a given order |u| ∈ {2, …, d}, the proposed surrogates and the corresponding estimators of cross-partial derivatives enjoy the parametric rate of convergence and dimension-free mean squared errors when d ≤ p, leading to breaking down the curse of dimensionality. Imposing p ≤ d allows one to break down the curse of dimensionality only for the cross-partial derivatives of orders |u| ≥ 1 + d/(2 log(d)). Also, the L-point-based Hessian surrogate and estimator are proposed, including the convergence analysis. A particular choice of p allows the dimension-free mean squared errors to be achieved. Analytical examples and simulations are provided to show the efficiency of such surrogates and estimators. Full article
(This article belongs to the Section Computational Statistics)
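A simple finite-sample analogue of such evaluation-only derivative surrogates: probe v^T H v along random unit directions with a central second difference, then recover the symmetric Hessian by least squares. This is a generic randomized estimator for illustration, not the paper's ℓp-spherical construction.

```python
import numpy as np

def hessian_from_random_directions(f, x, n_dirs=200, h=1e-3, seed=0):
    """Estimate the Hessian of f at x from function evaluations only."""
    rng = np.random.default_rng(seed)
    d = len(x)
    idx = np.triu_indices(d)
    rows, rhs = [], []
    for _ in range(n_dirs):
        v = rng.standard_normal(d)
        v /= np.linalg.norm(v)
        quad = (f(x + h*v) - 2*f(x) + f(x - h*v)) / h**2   # ~ v^T H v
        # v^T H v = sum_{i<=j} c_ij H_ij, with c_ij = v_i v_j (doubled off-diagonal)
        coef = np.outer(v, v)[idx] * np.where(idx[0] == idx[1], 1.0, 2.0)
        rows.append(coef)
        rhs.append(quad)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    H = np.zeros((d, d))
    H[idx] = sol
    return H + np.triu(H, 1).T

# Check on a quadratic with a known Hessian
A = np.array([[2.0, 0.5, 0.0], [0.5, 3.0, 1.0], [0.0, 1.0, 1.5]])
f = lambda x: 0.5 * x @ A @ x
H = hessian_from_random_directions(f, np.zeros(3))
print(np.max(np.abs(H - A)))   # small reconstruction error
```

For a quadratic, the central difference is exact, so the least-squares solve recovers A up to rounding; the paper's contribution is choosing the sampling distribution so that the error stays dimension-free.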
18 pages, 636 KB  
Article
Directional Quaternion Step Differentiation and a Bicomplex Double-Step Calculus for Cancellation-Free First and Second Derivatives
by Ji Eun Kim
Mathematics 2026, 14(4), 728; https://doi.org/10.3390/math14040728 - 20 Feb 2026
Viewed by 375
Abstract
Accurate derivative information is central to sensitivity analysis and optimization, yet standard finite differences can lose many digits when the step size is small because of subtractive cancellation. Complex-step differentiation largely resolves this issue for first derivatives, but robust second derivatives and mixed partials remain delicate: several practical complex-step variants for f″ still subtract nearly equal quantities, and quaternion-step rules are often presented as separate constructions. We develop a unified slice-based framework that extracts first and second derivatives from a single evaluation by projecting algebraic coefficients in commutative subalgebras of the complexified quaternions. First, we formulate a directional quaternion-step rule parameterized by an arbitrary unit pure quaternion u and provide an explicit projection operator that makes the underlying complex slice C_u ≅ C transparent; the resulting first-derivative formula is rotation invariant and recovers classical j-step and planar (j,k)-step rules as special cases. Second, we construct a bicomplex double-step calculus in the commuting imaginary units i and u and show that one evaluation at z+(i+u)h separates derivative information into distinct coefficients, with the iu-component equal to h²f″(z)+O(h⁴), giving a subtraction-free O(h²) approximation of f″. For bivariate analytic functions we additionally derive one-shot identities for f_x, f_y, and f_xy from f(x+uh, y+ih) and supply practical extraction identities, step-size guidance for h²-scaled coefficients, and branch-consistency diagnostics for non-entire functions. The “cancellation-free” property here refers to avoiding the subtraction of nearly equal real quantities at the level of the differentiation formula; in floating-point arithmetic, coefficient extraction and the 1/h² scaling for second-order quantities still interact with roundoff, and we quantify the resulting stable regimes numerically. Full article
(This article belongs to the Special Issue New Advances in Complex Analysis and Functional Analysis)
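The first-derivative baseline the abstract generalizes is the classical complex-step rule, f′(x) = Im f(x + ih)/h + O(h²), which involves no subtraction of nearly equal numbers and so tolerates tiny h. A minimal sketch, using the standard Squire–Trapp test function:

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-20):
    """Classical complex-step rule: f'(x) ~ Im(f(x + ih)) / h, cancellation-free."""
    return np.imag(f(x + 1j*h)) / h

# Standard test function from the complex-step literature
f = lambda x: np.exp(x) / np.sqrt(np.sin(x)**3 + np.cos(x)**3)
x0 = 1.5
cs = complex_step_derivative(f, x0)
fd = (f(x0 + 1e-8) - f(x0 - 1e-8)) / 2e-8   # central difference for comparison
print(cs, fd)                                # agree to roughly single-precision level
```

Note that h = 1e-20 would be hopeless for the central difference, which is exactly the cancellation problem the quaternion and bicomplex constructions extend to second derivatives and mixed partials.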

29 pages, 9470 KB  
Article
Dendro-AutoCount Enhanced Using Pith Localization and Peak Analysis Method for Anomalous Images
by Sumitra Nuanmeesri and Lap Poomhiran
Mathematics 2026, 14(1), 94; https://doi.org/10.3390/math14010094 - 26 Dec 2025
Viewed by 611
Abstract
Dendrochronology serves as a vital tool for analyzing the long-term interactions between commercial timber growth and environmental variables such as soil, water, and climate. This study presents Dendro-AutoCount, an innovative image processing framework designed for identifying obscured tree rings in cross-sectional images of Pinus taeda L. The methodology integrates Hessian-based ridge detection with a weighted radial voting gradient method to precisely locate the pith. Following pith detection, the system performs radial cropping to generate directional sub-images (north, east, south, west), where rings are identified via intensity profile analysis, signal smoothing, and peak detection. By filtering outliers and averaging directional counts, the system effectively mitigates common visual interference from black molds, fungus, structural cracks, buds, and knots. Experimental results confirm the high efficacy of Dendro-AutoCount in processing anomalous tree ring images. Full article
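The smoothing-plus-peak-counting step can be sketched on a synthetic radial intensity profile; the period, noise level, and peak-detection thresholds below are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np
from scipy.signal import find_peaks

# Hypothetical radial profile: rings appear as roughly periodic bright peaks plus noise
rng = np.random.default_rng(1)
r = np.arange(300)
profile = np.sin(2*np.pi * r / 25) + 0.2*rng.standard_normal(len(r))

# Smooth with a small moving average, then count peaks with a minimum spacing
kernel = np.ones(7) / 7
smoothed = np.convolve(profile, kernel, mode="same")
peaks, _ = find_peaks(smoothed, distance=15, prominence=0.5)
print(len(peaks))   # about 12 rings over 300 pixels with a 25-pixel period
```

Running this in each of the four directional sub-images and averaging the counts (after discarding outlier directions) is the robustness mechanism the abstract describes.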

18 pages, 1838 KB  
Article
Quantitative Modeling of Speculative Bubbles, Crash Dynamics, and Critical Transitions in the Stock Market Using the Log-Periodic Power-Law Model
by Avi Singh, Rajesh Mahadeva, Varun Sarda and Amit Kumar Goyal
Int. J. Financial Stud. 2025, 13(4), 195; https://doi.org/10.3390/ijfs13040195 - 17 Oct 2025
Cited by 2 | Viewed by 2293
Abstract
The global economy frequently experiences cycles of rapid growth followed by abrupt crashes, challenging economists and analysts in forecasting and risk management. Crashes like the dot-com bubble crash and the 2008 global financial crisis caused huge disruptions to the world economy. These crashes have been found to display somewhat similar characteristics, like rapid price inflation and speculation, followed by collapse. In search of these underlying patterns, the Log-Periodic Power-Law (LPPL) model has emerged as a promising framework, capable of capturing self-reinforcing dynamics and log-periodic oscillations. However, while log-periodic structures have been tested in developed and stable markets, they lack validation in volatile and developing markets. This study investigates the applicability of the LPPL framework for modeling financial crashes in the Brazilian stock market, which serves as a representative case of a volatile market, particularly through the Bovespa Index (IBOVESPA). In this study, daily data spanning 1993 to 2025 is analyzed to model pre-crash oscillations and speculative bubbles for five major market crashes. In addition to the traditional LPPL model, autoregressive residual analysis is incorporated to account for market noise and improve predictive accuracy. The results demonstrate that the enhanced LPPL model effectively captures pre-crash oscillations and critical transitions, with low error metrics. Eigenstructure analysis of the Hessian matrices highlights stiff and sloppy parameters, emphasizing the pivotal role of critical time and frequency parameters. Overall, these findings validate LPPL-based nonlinear modeling as an effective approach for anticipating speculative bubbles and crash dynamics in complex financial systems. Full article
(This article belongs to the Special Issue Stock Market Developments and Investment Implications)
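The LPPL fit function itself is compact enough to state directly; the parameter values below are illustrative, not estimates from IBOVESPA data.

```python
import numpy as np

def lppl(t, tc, m, omega, A, B, C, phi):
    """Log-Periodic Power-Law model for log-price before a crash at critical
    time tc:  A + B*(tc - t)**m * (1 + C*cos(omega*log(tc - t) + phi))."""
    dt = tc - t
    return A + B * dt**m * (1.0 + C * np.cos(omega * np.log(dt) + phi))

# Evaluate a synthetic bubble trajectory approaching tc = 1000 (B < 0 gives
# super-exponential growth decorated with log-periodic oscillations)
t = np.arange(0, 990)
y = lppl(t, tc=1000.0, m=0.5, omega=8.0, A=7.0, B=-0.05, C=0.1, phi=0.0)
print(y[0] < y[-1])   # log-price accelerates toward the critical time
```

Fitting tc, m, and omega to observed log-prices (e.g., with nonlinear least squares) and then inspecting the Hessian eigenstructure at the optimum is what separates the "stiff" parameters, such as tc, from the "sloppy" ones mentioned in the abstract.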

27 pages, 8900 KB  
Article
Pre-Dog-Leg: A Feature Optimization Method for Visual Inertial SLAM Based on Adaptive Preconditions
by Junyang Zhao, Shenhua Lv, Huixin Zhu, Yaru Li, Han Yu, Yutie Wang and Kefan Zhang
Sensors 2025, 25(19), 6161; https://doi.org/10.3390/s25196161 - 4 Oct 2025
Viewed by 1084
Abstract
To address the ill-posedness of the Hessian matrix in monocular visual-inertial SLAM (Simultaneous Localization and Mapping) caused by the unobservable depth of feature points, which leads to convergence difficulties and reduced robustness, this paper proposes a Pre-Dog-Leg feature optimization method based on an adaptive preconditioner. First, we propose a multi-candidate initialization method with robust characteristics. This method effectively circumvents erroneous depth initialization by introducing multiple depth assumptions and geometric consistency constraints. Second, we address the ill-conditioning of the Hessian matrix of the feature points by constructing a hybrid SPAI-Jacobi adaptive preconditioner, which is capable of identifying ill-conditioning and dynamically enabling preconditioning as a strategy. Finally, we construct a hybrid adaptive preconditioner for the traditional Dog-Leg numerical optimization method. To address the issue of degraded convergence performance when solving ill-conditioned problems, we map the ill-conditioned optimization problem from the original parameter space to a well-conditioned preconditioned space. The optimization equivalence is maintained by variable recovery. The experiments on the EuRoC dataset show that the method reduces the Hessian matrix condition number by a factor of 7.9, effectively suppresses outliers, and significantly reduces the overall convergence time. From the analysis of trajectory error, the absolute trajectory error is reduced by up to 16.48% relative to RVIO2 on the MH_01 sequence, 20.83% relative to VINS-mono on the MH_02 sequence, and up to 14.73% relative to VINS-mono and 34.0% relative to OpenVINS on the highly dynamic MH_05 sequence, indicating that the algorithm achieves higher localization accuracy and stronger system robustness. Full article
(This article belongs to the Section Navigation and Positioning)
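The Jacobi half of such a preconditioner is easy to demonstrate in isolation: when ill-conditioning comes from wildly different parameter scales (as with nearly unobservable depths), rescaling by diag(H)^(-1/2) collapses the condition number. A toy sketch with an assumed scale structure:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5.0*np.eye(5)                 # well-conditioned SPD core
S = np.diag([1e3, 10.0, 1.0, 0.1, 1e-3])    # wildly different parameter scales
H = S @ A @ S                               # ill-conditioned "Hessian"

# Jacobi (diagonal) preconditioner: symmetric rescaling by diag(H)**(-1/2)
M = np.diag(1.0 / np.sqrt(np.diag(H)))
H_pre = M @ H @ M

print(np.linalg.cond(H))      # huge
print(np.linalg.cond(H_pre))  # orders of magnitude smaller
```

Because H = S A S, the diagonal rescaling cancels S exactly, leaving only the conditioning of the normalized core; the paper's hybrid adds a sparse approximate inverse (SPAI) component for coupling that a diagonal alone cannot fix.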

28 pages, 1638 KB  
Article
Sign-Entropy Regularization for Personalized Federated Learning
by Koffka Khan
Entropy 2025, 27(6), 601; https://doi.org/10.3390/e27060601 - 4 Jun 2025
Cited by 1 | Viewed by 2569
Abstract
Personalized Federated Learning (PFL) seeks to train client-specific models across distributed data silos with heterogeneous distributions. We introduce Sign-Entropy Regularization (SER), a novel entropy-based regularization technique that penalizes excessive directional variability in client-local optimization. Motivated by Descartes’ Rule of Signs, we hypothesize that frequent sign changes in gradient trajectories reflect complexity in the local loss landscape. By minimizing the entropy of gradient sign patterns during local updates, SER encourages smoother optimization paths, improves convergence stability, and enhances personalization. We formally define a differentiable sign-entropy objective over the gradient sign distribution and integrate it into standard federated optimization frameworks, including FedAvg and FedProx. The regularizer is computed efficiently and applied post hoc per local round. Extensive experiments on three benchmark datasets (FEMNIST, Shakespeare, and CIFAR-10) show that SER improves both average and worst-case client accuracy, reduces variance across clients, accelerates convergence, and smooths the local loss surface as measured by Hessian trace and spectral norm. We also present a sensitivity analysis of the regularization strength ρ and discuss the potential for client-adaptive variants. Comparative evaluations against state-of-the-art methods (e.g., Ditto, pFedMe, momentum-based variants, Entropy-SGD) highlight that SER introduces an orthogonal and scalable mechanism for personalization. Theoretically, we frame SER as an information-theoretic and geometric regularizer that stabilizes learning dynamics without requiring dual-model structures or communication modifications. This work opens avenues for trajectory-based regularization and hybrid entropy-guided optimization in federated and resource-constrained learning settings. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
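The quantity SER penalizes can be sketched as the binary entropy of each parameter's gradient-sign distribution over local steps; the exact form below (mean per-parameter entropy) is an assumption for illustration, not the paper's definition.

```python
import numpy as np

def gradient_sign_entropy(grad_history):
    """Mean binary entropy of per-parameter gradient signs over local steps.
    grad_history has shape (steps, params)."""
    signs = grad_history > 0
    p = np.clip(signs.mean(axis=0), 1e-12, 1 - 1e-12)   # P(sign positive)
    ent = -(p*np.log2(p) + (1 - p)*np.log2(1 - p))
    return ent.mean()

rng = np.random.default_rng(0)
smooth = np.abs(rng.standard_normal((100, 10)))   # signs never flip
noisy = rng.standard_normal((100, 10))            # signs flip about half the time
print(gradient_sign_entropy(smooth))              # near 0: stable descent directions
print(gradient_sign_entropy(noisy))               # near 1: maximal variability
```

Adding ρ times this entropy to the local objective penalizes oscillatory gradient trajectories, which is the mechanism the abstract links to smoother loss landscapes and better personalization.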

24 pages, 3740 KB  
Article
Distributed Time-Varying Optimal Resource Management for Microgrids via Fixed-Time Multiagent Approach
by Tingting Zhou, Salah Laghrouche and Youcef Ait-Amirat
Energies 2025, 18(10), 2616; https://doi.org/10.3390/en18102616 - 19 May 2025
Cited by 1 | Viewed by 961
Abstract
This paper investigates the distributed time-varying (TV) resource management problem (RMP) for microgrids (MGs) within a multi-agent system (MAS) framework. A novel fixed-time (FXT) distributed optimization algorithm is proposed, capable of operating over switching communication graphs and handling both local inequality and global equality constraints. By incorporating a time-decaying penalty function, the algorithm achieves an FXT consensus on marginal costs and ensures asymptotic convergence to the optimal TV solution of the original RMP. Unlike the prior methods with centralized coordination, the proposed algorithm is fully distributed, scalable, and privacy-preserving, making it suitable for real-time deployment in dynamic MG environments. Rigorous theoretical analysis establishes FXT convergence under both identical and nonidentical Hessian conditions. Simulations on the IEEE 14-bus system validate the algorithm’s superior performance in convergence speed, plug-and-play adaptability, and robustness to switching topologies. Full article
(This article belongs to the Section A1: Smart Grids and Microgrids)
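The coordination core of distributed resource management is consensus on marginal costs. A discrete-time stand-in for the paper's fixed-time continuous dynamics (assumed ring topology and averaging weights, purely illustrative):

```python
import numpy as np

# Doubly stochastic averaging weights over a 5-agent ring graph
n = 5
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

marginal_cost = np.array([3.0, 7.0, 5.0, 1.0, 4.0])   # each agent's local estimate
x = marginal_cost.copy()
for _ in range(200):
    x = W @ x                                          # repeated local averaging

print(x)   # all agents converge to the network-wide average marginal cost
```

Asymptotic averaging like this needs unboundedly many rounds; the fixed-time algorithm in the paper instead guarantees consensus within a prescribed time regardless of initial disagreement, and layers the equality/inequality constraints of the RMP on top.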

16 pages, 7035 KB  
Article
An Explainable Scheme for Memorization of Noisy Instances by Downstream Evaluation
by Chun-Yi Tsai, Ping-Hsun Tsai and Yu-Wei Chung
Appl. Sci. 2025, 15(5), 2392; https://doi.org/10.3390/app15052392 - 24 Feb 2025
Viewed by 942
Abstract
Deep learning models are often perceived as black boxes, making it challenging to analyze the causal relationships between inputs and outputs. For this reason, the explainability of model learning has garnered increasing attention in recent years. Some previous studies proposed influence functions, which use mathematical analysis to evaluate how re-weighting data points impacts the model, thereby explaining how the model memorizes the data. This inspires us to suggest that, when data in an upstream task are affected by varying levels of noise interference, it is practical to set up a downstream model that applies a Taylor expansion in conjunction with the Hessian matrix to estimate the perturbation that each data point causes in the model. Additionally, using Integrated Gradients to compute the loss difference between the original data instances and a baseline instance that does not affect the model yields a memorization matrix that allows researchers to observe the changes in model reasoning before and after noise interference, helping to analyze the causes of erroneous inference. Full article
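The Taylor-plus-Hessian estimate behind influence functions reduces to a single linear solve in the simplest setting: up-weighting one training point by ε moves the optimum by roughly −ε H⁻¹∇L_z, and the induced test-loss change is the inner product with the test gradient. A toy numerical sketch with assumed gradients and Hessian:

```python
import numpy as np

H = np.array([[4.0, 1.0], [1.0, 3.0]])   # Hessian of the total training loss at w_hat
grad_z = np.array([0.5, -1.0])           # gradient of one training point's loss
eps = 0.01                               # small up-weighting of that point

dw = -eps * np.linalg.solve(H, grad_z)   # first-order parameter change (Taylor step)
grad_test = np.array([1.0, 2.0])         # test-loss gradient at w_hat
influence = grad_test @ dw               # predicted change in test loss
print(influence)
```

Tabulating this quantity per (training point, test point) pair, before and after noise injection, gives a matrix of exactly the memorization-style flavor the abstract describes.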

19 pages, 1024 KB  
Article
A Hessian-Based Deep Learning Preprocessing Method for Coronary Angiography Image Analysis
by Yanjun Li, Takaaki Yoshimura, Yuto Horima and Hiroyuki Sugimori
Electronics 2024, 13(18), 3676; https://doi.org/10.3390/electronics13183676 - 16 Sep 2024
Cited by 5 | Viewed by 2642
Abstract
Leveraging its high accuracy and stability, deep-learning-based coronary artery detection technology has been extensively utilized in diagnosing coronary artery diseases. However, traditional algorithms for localizing coronary stenosis often fall short when detecting stenosis in branch vessels, which can pose significant health risks due to factors like imaging angles and uneven contrast agent distribution. To tackle these challenges, we propose a preprocessing method that integrates Hessian-based vascular enhancement and image fusion as prerequisites for deep learning. This approach enhances fuzzy features in coronary angiography images, thereby increasing the neural network’s sensitivity to stenosis characteristics. We assessed the effectiveness of this method using the latest deep learning networks, such as YOLOv10, YOLOv9, and RT-DETR, across various evaluation metrics. Our results show that our method improves AP50 accuracy by 4.84% and 5.07% on RT-DETR R101 and YOLOv10-X, respectively, compared to images without special preprocessing. Furthermore, our analysis of different imaging angles on stenosis localization detection indicates that the left coronary artery 0° view is the most suitable for detecting stenosis, with an AP50 value of 90.5%. The experimental results reveal that the proposed method is effective as a preprocessing technique for deep-learning-based coronary angiography image processing and enhances the model’s ability to identify stenosis in small blood vessels. Full article
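The image-fusion half of such preprocessing can be sketched as a weighted blend of the original angiogram with its vessel-enhanced map; the blend weight and toy images below are assumptions, not the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
original = rng.uniform(0.3, 0.5, size=(8, 8))      # low-contrast background frame
enhanced = np.zeros((8, 8))
enhanced[4, :] = 1.0                               # vessel map from a Hessian filter

# Hypothetical fusion: convex blend, clipped back to the valid intensity range
alpha = 0.6
fused = np.clip(alpha*original + (1 - alpha)*enhanced, 0.0, 1.0)
print(fused[4, 0] > fused[0, 0])                   # vessel pixels gain contrast
```

Feeding the fused image (rather than the raw frame) to the detector is what raises the network's sensitivity to faint branch-vessel stenoses in the reported experiments.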

18 pages, 516 KB  
Article
Likelihood Inference for Factor Copula Models with Asymmetric Tail Dependence
by Harry Joe and Xiaoting Li
Entropy 2024, 26(7), 610; https://doi.org/10.3390/e26070610 - 19 Jul 2024
Viewed by 2160
Abstract
For multivariate non-Gaussian distributions involving copulas, likelihood inference is dominated by the data in the middle, and fitted models might not be very good for joint tail inference, such as assessing the strength of tail dependence. When preliminary data and likelihood analysis suggest asymmetric tail dependence, a method is proposed to improve extreme value inferences based on the joint lower and upper tails. A prior that uses previous information on tail dependence can be used in combination with the likelihood. With the combination of the prior and the likelihood (which in practice has some degree of misspecification) to obtain a tilted log-likelihood, inferences with suitably transformed parameters can be based on Bayesian computing methods or on numerical optimization of the tilted log-likelihood to obtain the posterior mode and the Hessian at this mode. Full article
(This article belongs to the Special Issue Bayesianism)
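The mode-plus-Hessian (Laplace-style) step generalizes a very small recipe: maximize log-likelihood plus log-prior, and read an approximate posterior standard error off the Hessian at the mode. A toy sketch with an assumed normal model and prior, not the copula likelihood:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=50)

def neg_tilted_loglik(theta):
    """Negative (log-likelihood + log-prior) for the mean of a unit-variance
    normal, with a weak N(0, 10^2) prior standing in for tail-dependence priors."""
    mu = theta[0]
    loglik = -0.5 * np.sum((data - mu)**2)
    log_prior = -0.5 * mu**2 / 10.0**2
    return -(loglik + log_prior)

res = minimize(neg_tilted_loglik, x0=[0.0])      # BFGS by default
mode = res.x[0]
hess_inv = res.hess_inv[0, 0]                    # inverse Hessian at the mode
print(mode, np.sqrt(hess_inv))                   # mode near sample mean, se near 1/sqrt(n)
```

For well-behaved (suitably transformed) parameters, sqrt of the inverse-Hessian diagonal approximates posterior standard deviations, which is how the tilted log-likelihood yields tail-dependence inferences without full MCMC.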

24 pages, 4565 KB  
Article
Modelling Rigid Body Potential of Small Celestial Bodies for Analyzing Orbit–Attitude Coupled Motions of Spacecraft
by Jinah Lee and Chandeok Park
Aerospace 2024, 11(5), 364; https://doi.org/10.3390/aerospace11050364 - 5 May 2024
Cited by 5 | Viewed by 2390
Abstract
The present study aims to propose a general framework of modeling rigid body potentials (RBPs) suitable for analyzing the orbit–attitude coupled motion of a spacecraft (S/C) near small celestial bodies, regardless of gravity estimation models. Here, ‘rigid body potential’ refers to the potential of a small celestial body integrated across the finite volume of an S/C, assuming that the mass of the S/C has no influence on the motion of the small celestial body. First proposed is a comprehensive formulation for modeling the RBP including its associated force, torque, and Hessian matrix, which is then applied to three gravity estimation models. The Hessian of the potential plays a crucial role in calculating the RBP. This study assesses the RBP via numerical simulations for the purpose of determining proper gravity estimation models and seeking modeling conditions. The gravity estimation models and the associated RBP are tested for eight small celestial bodies. In this study, we utilize distance units (DUs) instead of SI units, where the DU is defined as the mean radius of the given small celestial body. For a given specific distance in DUs, the relative error of the gravity estimation model at this distance has a similar value regardless of the small celestial body. However, the difference value between the potential and RBP depends on the DU; in other words, it depends on the size of the small celestial body. This implies that accurate gravity estimation models are imperative for conducting RBP analysis. The overall results can help develop a propagation system for orbit–attitude coupled motions of an S/C in the vicinity of small celestial bodies. Full article
(This article belongs to the Special Issue Deep Space Exploration)
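For the simplest gravity estimation model (a point mass), the potential, its gradient, and the Hessian that the RBP formulation integrates over the spacecraft volume all have closed forms. A minimal sketch with a normalized gravitational parameter:

```python
import numpy as np

mu = 1.0   # normalized gravitational parameter (illustrative units)

def potential(r):
    """Point-mass potential U = -mu / |r|."""
    return -mu / np.linalg.norm(r)

def gravity_hessian(r):
    """Hessian of U: mu * (I/|r|^3 - 3 r r^T / |r|^5); its trace is zero."""
    n = np.linalg.norm(r)
    return mu * (np.eye(3) / n**3 - 3.0 * np.outer(r, r) / n**5)

r = np.array([2.0, 0.0, 0.0])
H = gravity_hessian(r)

# Cross-check one entry by a central finite difference of the potential
h = 1e-4
e = np.array([h, 0.0, 0.0])
num = (potential(r + e) - 2*potential(r) + potential(r - e)) / h**2
print(H[0, 0], num)   # analytic and numeric Hessian entries agree
```

For higher-fidelity models (spherical harmonics, polyhedra), only `potential` and `gravity_hessian` change; the second-order expansion over the S/C volume that produces the RBP force and torque keeps the same structure, which is why the Hessian is central to the framework.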

40 pages, 21076 KB  
Article
A Study on Dimensionality Reduction and Parameters for Hyperspectral Imagery Based on Manifold Learning
by Wenhui Song, Xin Zhang, Guozhu Yang, Yijin Chen, Lianchao Wang and Hanghang Xu
Sensors 2024, 24(7), 2089; https://doi.org/10.3390/s24072089 - 25 Mar 2024
Cited by 16 | Viewed by 3366
Abstract
With the rapid advancement of remote-sensing technology, the spectral information obtained from hyperspectral remote-sensing imagery has become increasingly rich, facilitating detailed spectral analysis of Earth’s surface objects. However, the abundance of spectral information presents certain challenges for data processing, such as the “curse of dimensionality” leading to the “Hughes phenomenon”, “strong correlation” due to high resolution, and “nonlinear characteristics” caused by varying surface reflectances. Consequently, dimensionality reduction of hyperspectral data emerges as a critical task. This paper begins by elucidating the principles and processes of hyperspectral image dimensionality reduction based on manifold theory and learning methods, in light of the nonlinear structures and features present in hyperspectral remote-sensing data, and formulates a dimensionality reduction process based on manifold learning. Subsequently, this study explores the capabilities of feature extraction and low-dimensional embedding for hyperspectral imagery using manifold learning approaches, including principal components analysis (PCA), multidimensional scaling (MDS), and linear discriminant analysis (LDA) for linear methods; and isometric mapping (Isomap), locally linear embedding (LLE), Laplacian eigenmaps (LE), Hessian locally linear embedding (HLLE), local tangent space alignment (LTSA), and maximum variance unfolding (MVU) for nonlinear methods, based on the Indian Pines hyperspectral dataset and Pavia University dataset. Furthermore, the paper investigates the optimal neighborhood computation time and overall algorithm runtime for feature extraction in hyperspectral imagery, varying by the choice of neighborhood k and intrinsic dimensionality d values across different manifold learning methods. 
Based on the outcomes of feature extraction, the study examines the classification experiments of various manifold learning methods, comparing and analyzing the variations in classification accuracy and Kappa coefficient with different selections of neighborhood k and intrinsic dimensionality d values. Building on this, the impact of selecting different bandwidths t for the Gaussian kernel in the LE method and different Lagrange multipliers λ for the MVU method on classification accuracy, given varying choices of neighborhood k and intrinsic dimensionality d, is explored. Through these experiments, the paper investigates the capability and effectiveness of different manifold learning methods in feature extraction and dimensionality reduction within hyperspectral imagery, as influenced by the selection of neighborhood k and intrinsic dimensionality d values, identifying the optimal neighborhood k and intrinsic dimensionality d value for each method. A comparison of classification accuracies reveals that the LTSA method yields superior classification results compared to other manifold learning approaches. The study demonstrates the advantages of manifold learning methods in processing hyperspectral image data, providing an experimental reference for subsequent research on hyperspectral image dimensionality reduction using manifold learning methods. Full article
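The linear baseline in the comparison above (PCA) fits in a few lines and illustrates the shared interface of all these methods: map n-dimensional points to a chosen intrinsic dimensionality d. The synthetic plane-plus-noise data below is illustrative, not a hyperspectral cube.

```python
import numpy as np

def pca(X, d):
    """Project centered data onto the top-d eigenvectors of the covariance —
    the linear counterpart of Isomap, LLE, LE, HLLE, LTSA, and MVU."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    top = eigvecs[:, np.argsort(eigvals)[::-1][:d]]
    return Xc @ top

# 3-D points that actually live on a 2-D plane, plus small noise
rng = np.random.default_rng(0)
coords = rng.standard_normal((200, 2))
basis = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, -1.0]]) / np.sqrt(2)
X = coords @ basis + 0.01 * rng.standard_normal((200, 3))
Y = pca(X, d=2)
print(Y.shape)   # (200, 2): the chosen intrinsic dimensionality
```

The nonlinear methods differ in how they build the matrix that gets eigendecomposed (geodesic distances for Isomap, local reconstruction weights for LLE, local tangent spaces for LTSA, and so on), which is why the neighborhood size k becomes the critical extra parameter the study sweeps.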
