Search Results (14)

Search Parameters:
Keywords = Hessian regularization

20 pages, 24767 KB  
Article
VINA-SLAM: A Voxel-Based Inertial and Normal-Aligned LiDAR–IMU SLAM
by Ruyang Zhang and Bingyu Sun
Sensors 2026, 26(6), 1810; https://doi.org/10.3390/s26061810 - 13 Mar 2026
Viewed by 160
Abstract
Environments with sparse or repetitive geometric structures, such as long corridors and narrow stairwells, remain challenging for LiDAR–inertial simultaneous localization and mapping (LiDAR–IMU SLAM) due to insufficient geometric observability and unreliable data associations. To address these issues, we propose VINA-SLAM, a novel LiDAR–IMU SLAM framework that constructs a unified global voxel map to explicitly exploit structural consistency. VINA-SLAM continuously tracks surface normals stored in the global voxel map using a normal-guided correspondence strategy, enabling stable scan-to-map alignment in degenerate scenes. Furthermore, a tangent-space metric is introduced to supplement missing rotational constraints around planar regions, providing reliable initial pose estimates for local optimization. A tightly coupled sliding-window bundle adjustment is then formulated by jointly incorporating IMU factors, voxel normal consistency factors, and planar regularization terms. In particular, the minimum eigenvalue of each voxel’s covariance is used as a statistically principled planar constraint, improving the Hessian conditioning and cross-view geometric consistency. The proposed system directly aligns raw LiDAR scans to the voxelized map without explicit feature extraction or loop closure. Experiments on 25 sequences from the HILTI and MARS-LVIG datasets show that VINA-SLAM reduces ATE by 25–40% on average while maintaining real-time performance at 10 Hz in the evaluated geometrically degenerate environments. Full article
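The minimum-eigenvalue planar constraint described in the abstract can be illustrated with a short NumPy sketch (my own illustration, not the authors' code; `planarity_weight` is a hypothetical name):

```python
import numpy as np

def planarity_weight(points):
    """Minimum eigenvalue of a voxel's point covariance as a planarity measure.

    For points sampled from a plane, the covariance is (near-)rank-2, so its
    smallest eigenvalue approaches zero; penalizing it acts as a planar
    regularization term of the kind the abstract describes.
    """
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts.T)                  # 3x3 covariance of the voxel's points
    eigvals = np.linalg.eigvalsh(cov)    # ascending order for symmetric matrices
    return eigvals[0]                    # minimum eigenvalue ~ out-of-plane variance

# A perfectly planar patch (z = 0) has zero out-of-plane variance.
plane = [[x, y, 0.0] for x in range(4) for y in range(4)]
assert planarity_weight(plane) < 1e-12
```

In the paper this quantity additionally feeds the sliding-window bundle adjustment, where it reportedly improves Hessian conditioning; the sketch only shows the per-voxel statistic itself.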

31 pages, 2615 KB  
Article
Zeroth-Order Riemannian Adaptive Regularized Proximal Quasi-Newton Optimization Method
by Yinpu Ma, Cunlin Li, Zhichao Wang and Qian Li
Axioms 2026, 15(3), 203; https://doi.org/10.3390/axioms15030203 - 10 Mar 2026
Viewed by 206
Abstract
Recently, the adaptive regularized proximal quasi-Newton (ARPQN) method has demonstrated strong performance in solving composite optimization problems over the Stiefel manifold. However, its reliance on first-order information limits its applicability to scenarios where gradient and Hessian evaluations are unavailable or costly. In this paper, we propose a zeroth-order adaptive regularized proximal quasi-Newton method (ZO-ARPQN) for black-box composite optimization over Riemannian manifolds, particularly the Stiefel and symmetric positive definite (SPD) manifolds. The proposed method estimates the Riemannian gradient and curvature information through randomized one-point finite-difference approximations and adaptively updates a regularized quasi-Newton matrix to capture the local manifold geometry. Theoretically, we establish global convergence and complexity analyses under mild assumptions. More importantly, by incorporating curvature-aware regularization and random perturbations in the proximal quasi-Newton framework, we prove that ZO-ARPQN can escape strict saddle points with high probability. This guarantees convergence to a stationary point, even in the absence of explicit gradients. Extensive numerical experiments on manifold-constrained problems, including sparse PCA and robot stiffness tuning, demonstrate that ZO-ARPQN achieves competitive convergence behavior compared with state-of-the-art Riemannian optimization methods, while requiring only function evaluations. Full article
(This article belongs to the Section Geometry and Topology)
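The randomized finite-difference gradient estimate at the core of such zeroth-order methods can be sketched in its Euclidean form (a hedged illustration; the paper works on Riemannian manifolds and would additionally project onto tangent spaces, and `zo_gradient` is my own name):

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, num_samples=4000, rng=None):
    """Randomized finite-difference gradient estimate (Euclidean analogue).

    Averages directional differences (f(x + mu*u) - f(x)) / mu over random
    Gaussian directions u, which is unbiased for the gradient of a smoothed
    version of f. Only function evaluations are required.
    """
    rng = np.random.default_rng(rng)
    g = np.zeros_like(x)
    for _ in range(num_samples):
        u = rng.standard_normal(x.size)
        g += (f(x + mu * u) - f(x)) / mu * u
    return g / num_samples

# Sanity check on a quadratic: the true gradient of 0.5*||x||^2 is x.
x = np.array([1.0, -2.0, 0.5])
g = zo_gradient(lambda v: 0.5 * v @ v, x, rng=0)
assert np.allclose(g, x, atol=0.3)
```

The estimator's variance shrinks with the number of sampled directions; curvature (quasi-Newton) information can be accumulated from the same function evaluations.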

20 pages, 4545 KB  
Article
SRE-FMaps: A Sinkhorn-Regularized Elastic Functional Map Framework for Non-Isometric 3D Shape Matching
by Dan Zhang, Yue Zhang, Ning Wang and Dong Zhao
J. Imaging 2025, 11(12), 452; https://doi.org/10.3390/jimaging11120452 - 16 Dec 2025
Viewed by 505
Abstract
Precise 3D shape correspondence is a fundamental prerequisite for critical applications ranging from medical anatomical modeling to visual recognition. However, non-isometric 3D shape matching remains a challenging task due to the limited sensitivity of traditional Laplace–Beltrami (LB) bases to local geometric deformations such as stretching and bending. To address these limitations, this paper proposes a Sinkhorn-Regularized Elastic Functional Map framework (SRE-FMaps) that integrates entropy-regularized optimal transport with an elastic thin-shell energy basis. First, a sparse Sinkhorn transport plan is adopted to initialize a bijective correspondence with linear computational complexity. Then, a non-orthogonal elastic basis, derived from the Hessian of thin-shell deformation energy, is introduced to enhance high-frequency feature perception. Finally, correspondence stability is quantified through a cosine-based elastic distance metric, enabling retrieval and classification. Experiments on the SHREC2015, McGill, and Face datasets demonstrate that SRE-FMaps reduces the correspondence error by a maximum of 32% and achieves an average of 92.3% classification accuracy (with a peak of 94.74% on the Face dataset). Moreover, the framework exhibits superior robustness, yielding a recall of up to 91.67% and an F1-score of 0.94, effectively handling bending, stretching, and folding deformations compared with conventional LB-based functional map pipelines. The proposed framework provides a scalable solution for non-isometric shape correspondence in medical modeling, 3D reconstruction, and visual recognition. Full article
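The entropy-regularized transport plan used for initialization can be sketched with the classic Sinkhorn iteration (a minimal illustration under my own naming; the paper uses a sparse variant with linear complexity):

```python
import numpy as np

def sinkhorn(cost, a, b, eps=0.05, iters=200):
    """Entropy-regularized optimal transport via Sinkhorn iterations.

    Returns a transport plan P approximately minimizing <P, cost> - eps*H(P)
    subject to row marginals a and column marginals b; a plan of this kind
    initializes the shape correspondences in the abstract.
    """
    K = np.exp(-cost / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)            # scale to match column marginals
        u = a / (K @ v)              # scale to match row marginals
    return u[:, None] * K * v[None, :]

cost = np.array([[0.0, 1.0], [1.0, 0.0]])
a = b = np.array([0.5, 0.5])
P = sinkhorn(cost, a, b)
assert np.allclose(P.sum(axis=1), a, atol=1e-6)   # row marginals satisfied
assert P[0, 0] > P[0, 1]                          # mass prefers the cheap diagonal
```

Smaller `eps` sharpens the plan toward a bijective (permutation-like) correspondence at the price of slower convergence.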

15 pages, 759 KB  
Article
Efficiency and Convergence Insights in Large-Scale Optimization Using the Improved Inexact–Newton–Smart Algorithm and Interior-Point Framework
by Neda Bagheri Renani, Maryam Jaefarzadeh and Daniel Ševčovič
Mathematics 2025, 13(22), 3657; https://doi.org/10.3390/math13223657 - 14 Nov 2025
Viewed by 722
Abstract
We present a head-to-head evaluation of the Improved Inexact–Newton–Smart (INS) algorithm against a primal–dual interior-point framework for large-scale nonlinear optimization. On extensive synthetic benchmarks, the interior-point method converges with roughly one-third fewer iterations and about one-half the computation time relative to INS, while attaining marginally higher accuracy and meeting all primary stopping conditions. By contrast, INS succeeds in fewer cases under default settings but benefits markedly from moderate regularization and step-length control; in tuned regimes, its iteration count and runtime decrease substantially, narrowing yet not closing the gap. A sensitivity study indicates that interior-point performance remains stable across parameter changes, whereas INS is more affected by step length and regularization choice. Collectively, the evidence positions the interior-point method as a reliable baseline and INS as a configurable alternative when problem structure favors adaptive regularization. Full article

28 pages, 1638 KB  
Article
Sign-Entropy Regularization for Personalized Federated Learning
by Koffka Khan
Entropy 2025, 27(6), 601; https://doi.org/10.3390/e27060601 - 4 Jun 2025
Cited by 1 | Viewed by 2387
Abstract
Personalized Federated Learning (PFL) seeks to train client-specific models across distributed data silos with heterogeneous distributions. We introduce Sign-Entropy Regularization (SER), a novel entropy-based regularization technique that penalizes excessive directional variability in client-local optimization. Motivated by Descartes’ Rule of Signs, we hypothesize that frequent sign changes in gradient trajectories reflect complexity in the local loss landscape. By minimizing the entropy of gradient sign patterns during local updates, SER encourages smoother optimization paths, improves convergence stability, and enhances personalization. We formally define a differentiable sign-entropy objective over the gradient sign distribution and integrate it into standard federated optimization frameworks, including FedAvg and FedProx. The regularizer is computed efficiently and applied post hoc per local round. Extensive experiments on three benchmark datasets (FEMNIST, Shakespeare, and CIFAR-10) show that SER improves both average and worst-case client accuracy, reduces variance across clients, accelerates convergence, and smooths the local loss surface as measured by Hessian trace and spectral norm. We also present a sensitivity analysis of the regularization strength ρ and discuss the potential for client-adaptive variants. Comparative evaluations against state-of-the-art methods (e.g., Ditto, pFedMe, momentum-based variants, Entropy-SGD) highlight that SER introduces an orthogonal and scalable mechanism for personalization. Theoretically, we frame SER as an information-theoretic and geometric regularizer that stabilizes learning dynamics without requiring dual-model structures or communication modifications. This work opens avenues for trajectory-based regularization and hybrid entropy-guided optimization in federated and resource-constrained learning settings. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
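The sign-entropy quantity that SER penalizes can be sketched as follows (a simple empirical version under my own naming; the paper defines a differentiable objective integrated into local updates):

```python
import numpy as np

def sign_entropy(gradients):
    """Empirical entropy of gradient sign patterns across local steps.

    For each parameter, estimate p = P(sign > 0) over the recorded gradients
    and average the binary entropy. Frequent sign flips give p near 0.5 and
    hence high entropy, which SER penalizes to encourage smoother local
    optimization paths.
    """
    signs = np.sign(np.asarray(gradients))          # shape: (steps, params)
    p = np.clip((signs > 0).mean(axis=0), 1e-12, 1 - 1e-12)
    h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    return h.mean()

steady = [[1.0, 1.0], [2.0, 0.5], [0.7, 1.3]]      # no sign changes
flippy = [[1.0, -1.0], [-1.0, 1.0], [1.0, -1.0]]   # frequent sign flips
assert sign_entropy(steady) < 0.01
assert sign_entropy(flippy) > 0.9
```

A client would add a scaled version of this value (weight ρ in the abstract) to its local loss each round.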

10 pages, 836 KB  
Article
Solving the Adaptive Cubic Regularization Sub-Problem Using the Lanczos Method
by Zhi Zhu and Jingya Chang
Symmetry 2022, 14(10), 2191; https://doi.org/10.3390/sym14102191 - 18 Oct 2022
Viewed by 2380
Abstract
The adaptive cubic regularization method solves an unconstrained optimization model by using a third-order regularization term to approximate the objective function at each iteration. Similar to the trust-region method, the calculation of the sub-problem strongly affects computing efficiency. The Lanczos method is a useful tool for simplifying the objective function in the sub-problem. In this paper, we implement the adaptive cubic regularization method with the aid of the Lanczos method and analyze the error of the Lanczos approximation. We show that both the error between the Lanczos objective function and the original cubic term, and the error between the solution of the Lanczos approximation and the solution of the original cubic sub-problem, are bounded by the condition number of the optimal Hessian matrix. Furthermore, we compare the numerical performance of the adaptive cubic regularization algorithm with and without the Lanczos approximation on unconstrained optimization problems. Numerical experiments show that the Lanczos method remarkably improves the computational efficiency of the adaptive cubic method. Full article
(This article belongs to the Special Issue Tensors and Matrices in Symmetry with Applications)
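The Lanczos step that makes this simplification possible can be sketched as follows (a textbook tridiagonalization under my own naming, not the paper's implementation):

```python
import numpy as np

def lanczos(A, v, k):
    """k-step Lanczos tridiagonalization of a symmetric matrix A.

    Returns Q (n x k, orthonormal) and the tridiagonal T = Q^T A Q; the cubic
    sub-problem can then be solved in this small Krylov basis instead of R^n.
    """
    n = A.shape[0]
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    q, q_prev = v / np.linalg.norm(v), np.zeros(n)
    for j in range(k):
        Q[:, j] = q
        w = A @ q
        alpha[j] = q @ w
        w = w - alpha[j] * q - (beta[j - 1] * q_prev if j > 0 else 0)
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            q_prev, q = q, w / beta[j]
    return Q, np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6)); A = (M + M.T) / 2
Q, T = lanczos(A, rng.standard_normal(6), 4)
assert np.allclose(Q.T @ Q, np.eye(4), atol=1e-6)   # orthonormal basis
assert np.allclose(Q.T @ A @ Q, T, atol=1e-6)       # tridiagonal restriction
```

Because `T` is k×k with k much smaller than n, minimizing the cubic model restricted to the Krylov subspace is cheap; the paper's contribution is bounding the error of this restriction.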

22 pages, 14363 KB  
Article
Application of Particle Swarm Optimization with Simulated Annealing in MIT Regularization Image Reconstruction
by Dan Yang, Bin Xu, Bin Xu, Tian Lu and Xu Wang
Symmetry 2022, 14(2), 275; https://doi.org/10.3390/sym14020275 - 29 Jan 2022
Cited by 2 | Viewed by 3310
Abstract
Background and Objectives: Due to the soft-field effect of the electromagnetic field and the limits of detection, image reconstruction in magnetic induction tomography (MIT) must recover complex electrical characteristics from very few signals. The resulting problem is underdetermined, nonlinear, and ill-posed, and is therefore difficult to solve. Although many regularization-based image reconstruction methods exist, they are not well suited to MIT applications because of the difficulty of regularization parameter selection. The purpose of this paper is to study the principle of particle swarm optimization with simulated annealing (PSO-SA) and to propose a regularization method for reconstruction, providing a new way to solve MIT imaging problems. Methods and Models: First, the regularization principle of MIT image reconstruction is analyzed. Then a hybrid regularization algorithm, combining Tikhonov and NOSER regularization, is developed, using the dimension of the Hessian matrix as a penalty term reflecting prior knowledge. The PSO-SA algorithm is applied to obtain optimal regularization parameters. Finally, six typical numerical models and approximately symmetrical cerebral hemorrhage models are simulated in COMSOL, and the voltage signals obtained from the simulations are used to verify the proposed reconstruction method. Results: In the simulations, the proposed imaging method achieves average CC values of 0.9932 and 0.8286 and average RE values of 0.4982 and 0.8320 for simple and complex models, respectively. Moreover, when the SNR decreases from 55 dB to 35 dB, the CC value of the cerebral hemorrhage model drops by 0.1034. The results demonstrate the effectiveness and theoretical feasibility of the proposed method for MIT image reconstruction. Conclusions: This study indicates the potential of the PSO-SA algorithm in regularized imaging problems. Compared with traditional regularization imaging methods, the proposed method offers better accuracy, robustness, and noise resistance, showing application value in other similarly ill-posed imaging problems. Full article
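The Tikhonov component of such a hybrid penalty can be sketched in a few lines (a generic illustration; the paper's actual penalty also includes a NOSER term, and the regularization parameter is chosen by PSO-SA rather than fixed by hand — `tikhonov_solve` is my own name):

```python
import numpy as np

def tikhonov_solve(J, y, lam):
    """Tikhonov-regularized solution of the linearized problem J x = y.

    Minimizes ||J x - y||^2 + lam * ||x||^2; the regularization term keeps
    the solution bounded even when J is ill-conditioned, as in MIT.
    """
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ y)

# An ill-conditioned system: regularization keeps the solution bounded.
J = np.array([[1.0, 1.0], [1.0, 1.0001]])
y = np.array([2.0, 2.0])
x = tikhonov_solve(J, y, lam=1e-3)
assert np.linalg.norm(x) < 10            # damped despite near-singular J
assert np.linalg.norm(J @ x - y) < 0.1   # still fits the data
```

The quality of the reconstruction hinges on `lam`, which is exactly the parameter-selection difficulty the PSO-SA search is meant to address.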

13 pages, 7105 KB  
Article
Scaled in Cartesian Coordinates Ab Initio Molecular Force Fields of DNA Bases: Application to Canonical Pairs
by Igor Kochikov, Anna Stepanova and Gulnara Kuramshina
Molecules 2022, 27(2), 427; https://doi.org/10.3390/molecules27020427 - 10 Jan 2022
Viewed by 2141
Abstract
The model of Regularized Quantum Mechanical Force Field (RQMFF) was applied to the joint treatment of ab initio and experimental vibrational data of the four primary nucleobases using a new algorithm based on the scaling procedure in Cartesian coordinates. The matrix of scaling factors in Cartesian coordinates for the considered molecules includes diagonal elements for all atoms of the molecule and off-diagonal elements for bonded atoms and for some non-bonded atoms (1–3 and some 1–4 interactions). The choice of the model is based on the results of the second-order perturbation analysis of the Fock matrix for uncoupled interactions using Natural Bond Orbital (NBO) analysis. The scaling factors obtained within this model by solving the inverse problem (regularized Cartesian scale factors) for the adenine, cytosine, guanine, and thymine molecules were used to correct the Hessians of the canonical base pairs: adenine–thymine and cytosine–guanine. The proposed procedure is based on the block structure of the scaling matrix for molecular entities with non-covalent interactions, as in the case of DNA base pairs. It avoids introducing internal coordinates (or coordinates of symmetry, local symmetry, etc.) when scaling the force field of a structurally complex compound with non-covalent H-bonds. Full article

20 pages, 756 KB  
Article
Population Risk Improvement with Model Compression: An Information-Theoretic Approach
by Yuheng Bu, Weihao Gao, Shaofeng Zou and Venugopal V. Veeravalli
Entropy 2021, 23(10), 1255; https://doi.org/10.3390/e23101255 - 27 Sep 2021
Cited by 13 | Viewed by 3526
Abstract
It has been reported in many recent works on deep model compression that the population risk of a compressed model can be even better than that of the original model. In this paper, an information-theoretic explanation for this population risk improvement phenomenon is provided by jointly studying the decrease in the generalization error and the increase in the empirical risk that results from model compression. It is first shown that model compression reduces an information-theoretic bound on the generalization error, which suggests that model compression can be interpreted as a regularization technique to avoid overfitting. The increase in empirical risk caused by model compression is then characterized using rate distortion theory. These results imply that the overall population risk could be improved by model compression if the decrease in generalization error exceeds the increase in empirical risk. A linear regression example is presented to demonstrate that such a decrease in population risk due to model compression is indeed possible. Our theoretical results further suggest a way to improve a widely used model compression algorithm, i.e., Hessian-weighted K-means clustering, by regularizing the distance between the clustering centers. Experiments with neural networks are provided to validate our theoretical assertions. Full article
(This article belongs to the Special Issue Information Theory and Machine Learning)
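The Hessian-weighted K-means quantization that the abstract proposes to improve can be sketched in one dimension (a simplified illustration under my own naming; the paper's suggested variant additionally regularizes the distance between cluster centers):

```python
import numpy as np

def hessian_weighted_kmeans(w, h, k, iters=50):
    """1-D Hessian-weighted K-means for weight quantization.

    Clusters weights w with per-weight importance h (e.g., diagonal Hessian
    estimates): centers are Hessian-weighted means, so quantization error on
    loss-sensitive weights is penalized more heavily.
    """
    centers = np.quantile(w, np.linspace(0, 1, k))
    for _ in range(iters):
        assign = np.argmin(np.abs(w[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            mask = assign == j
            if mask.any():
                centers[j] = np.average(w[mask], weights=h[mask])
    return centers, assign

w = np.array([-1.0, -0.9, 0.9, 1.0])
h = np.array([1.0, 100.0, 100.0, 1.0])   # middle weights are loss-sensitive
centers, assign = hessian_weighted_kmeans(w, h, k=2)
# Each center is pulled toward its cluster's high-curvature weight.
assert abs(centers[assign[1]] - (-0.9)) < 0.05
assert abs(centers[assign[2]] - 0.9) < 0.05
```

Adding a penalty on inter-center distances, as the paper suggests, would act as the rate-distortion-motivated regularizer described in the abstract.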

18 pages, 371 KB  
Article
Randomized Simplicial Hessian Update
by Árpád Bűrmen, Tadej Tuma and Jernej Olenšek
Mathematics 2021, 9(15), 1775; https://doi.org/10.3390/math9151775 - 27 Jul 2021
Cited by 1 | Viewed by 2042
Abstract
Recently, a derivative-free optimization algorithm was proposed that utilizes a minimum Frobenius norm (MFN) Hessian update for estimating second-derivative information, which in turn is used to accelerate the search. The update formula relies only on computed function values and is a closed-form expression for a special case of a more general approach first published by Powell. This paper analyzes the convergence of the update formula under the assumption that the points from R^n where the function value is known are random. The analysis assumes that the N+2 points used by the update formula are obtained by adding N+1 vectors to a central point. The vectors are obtained by transforming a prototype set of N+1 vectors with a random orthogonal matrix from the Haar measure. The prototype set must positively span a subspace of dimension N ≤ n. Because the update is random by nature, we can estimate a lower bound on the expected improvement of the approximate Hessian. This lower bound was derived for a special case of the proposed update by Leventhal and Lewis. We generalize their result and show that the amount of improvement greatly depends on N as well as on the choice of the vectors in the prototype set. The obtained result is then used to analyze the performance of the update with various commonly used prototype sets. One result of this analysis is that a regular n-simplex is a bad choice for a prototype set because it does not guarantee any improvement of the approximate Hessian. Full article
(This article belongs to the Special Issue Optimization Theory and Applications)

34 pages, 7921 KB  
Article
A Multi-Strategy Marine Predator Algorithm and Its Application in Joint Regularization Semi-Supervised ELM
by Wenbiao Yang, Kewen Xia, Tiejun Li, Min Xie and Fei Song
Mathematics 2021, 9(3), 291; https://doi.org/10.3390/math9030291 - 1 Feb 2021
Cited by 22 | Viewed by 4067
Abstract
A novel semi-supervised learning method is proposed to better utilize labeled and unlabeled samples and improve classification performance. However, Laplace regularization in a semi-supervised extreme learning machine (SSELM) tends to yield poor generalization ability and ignores the role of labeled information. To address these problems, a Joint Regularized Semi-Supervised Extreme Learning Machine (JRSSELM) is proposed, which uses Hessian regularization instead of Laplace regularization and adds a supervised information regularization term. To overcome the slow convergence of the marine predator algorithm (MPA) and its tendency to fall into local optima, a multi-strategy marine predator algorithm (MSMPA) is proposed. It first uses a chaotic opposition learning strategy to generate a high-quality initial population, then uses adaptive inertia weights and an adaptive step control factor to improve exploration, exploitation, and convergence speed, and finally uses a neighborhood dimensional learning strategy to maintain population diversity. The parameters in JRSSELM are then optimized using MSMPA. MSMPA-JRSSELM is applied to well-logging oil formation identification. The experimental results show that MSMPA exhibits clear superiority and strong competitiveness in terms of convergence accuracy and convergence speed. The classification performance of MSMPA-JRSSELM is also better than that of other classification methods, and its practical value is notable. Full article

22 pages, 4284 KB  
Article
Deep Learning and Adaptive Graph-Based Growing Contours for Agricultural Field Extraction
by Matthias P. Wagner and Natascha Oppelt
Remote Sens. 2020, 12(12), 1990; https://doi.org/10.3390/rs12121990 - 21 Jun 2020
Cited by 29 | Viewed by 4568
Abstract
Field mapping and information on agricultural landscapes are of increasing importance for many applications. Monitoring schemes and national cadasters provide a rich source of information, but their maintenance and regular updating are costly and labor-intensive. Automated mapping of fields based on remote sensing imagery may aid in this task and allow faster and more regular observation. Although remote sensing has seen extensive use in agricultural research topics such as plant health monitoring, crop type classification, yield prediction, and irrigation, field delineation and extraction have seen comparatively little research interest. In this study, we present a field boundary detection technique based on deep learning and a variety of image features, and combine it with the graph-based growing contours (GGC) method to extract agricultural fields in a study area in northern Germany. The boundary detection step only requires red, green, and blue (RGB) data and is therefore largely independent of the sensor used. We compare different image features based on color and luminosity information and evaluate their usefulness for field boundary detection. A model based on texture metrics, gradient information, Hessian matrix eigenvalues, and local statistics showed good results, with accuracies up to 88.2%, an area under the ROC curve (AUC) of up to 0.94, and an F1 score of up to 0.88. The exclusive use of these universal image features may also facilitate transferability to other regions. We further present modifications to the GGC method intended to aid in upscaling through process acceleration with minimal effect on results. We combined the boundary detection results with the GGC method for field polygon extraction. Results were promising, with the new GGC version performing similarly to or better than the original while achieving an acceleration of 1.3× to 2.3× on different subsets and input complexities. Further research may explore other applications of the GGC method outside agricultural remote sensing and field extraction. Full article
(This article belongs to the Special Issue Deep Neural Networks for Remote Sensing Applications)
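The Hessian-eigenvalue image feature mentioned among the model inputs can be sketched with finite differences (a generic illustration under my own naming, not the authors' feature pipeline):

```python
import numpy as np

def hessian_eigen_features(img):
    """Per-pixel eigenvalues of the 2-D image Hessian via finite differences.

    Ridge-like structures such as field boundaries produce one large-magnitude
    eigenvalue (curvature across the ridge) and one near-zero eigenvalue
    (along the ridge), which makes these maps useful boundary features.
    """
    dy, dx = np.gradient(img.astype(float))
    dyy, _ = np.gradient(dy)
    dxy, dxx = np.gradient(dx)
    # Eigenvalues of the 2x2 symmetric matrix [[dxx, dxy], [dxy, dyy]].
    tr = dxx + dyy
    det = dxx * dyy - dxy * dxy
    disc = np.sqrt(np.maximum(tr * tr / 4 - det, 0))
    return tr / 2 - disc, tr / 2 + disc

img = np.zeros((9, 9))
img[4, :] = 1.0                     # a bright horizontal line (a "ridge")
lo, hi = hessian_eigen_features(img)
assert lo[4, 4] < -0.4              # strong negative curvature across the ridge
assert abs(hi[4, 4]) < 1e-9         # near-zero curvature along the ridge
```

In practice the image is usually Gaussian-smoothed at one or more scales before differentiation, as in vesselness-style filters.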

14 pages, 606 KB  
Article
ℓ2,1 Norm and Hessian Regularized Non-Negative Matrix Factorization with Discriminability for Data Representation
by Peng Luo, Jinye Peng and Jianping Fan
Appl. Sci. 2017, 7(10), 1013; https://doi.org/10.3390/app7101013 - 30 Sep 2017
Cited by 4 | Viewed by 5531
Abstract
Matrix factorization based methods have been widely used in data representation. Among them, Non-negative Matrix Factorization (NMF) is a promising technique owing to its psychological and physiological interpretation of naturally occurring data. On the one hand, although traditional Laplacian regularization can enhance the performance of NMF, it suffers from weak extrapolating ability. On the other hand, standard NMF disregards the discriminative information hidden in the data and cannot guarantee the sparsity of the factor matrices. In this paper, a novel algorithm called ℓ2,1 norm and Hessian Regularized Non-negative Matrix Factorization with Discriminability (ℓ2,1HNMFD) is developed to overcome these problems. In ℓ2,1HNMFD, Hessian regularization is introduced into the NMF framework to capture the intrinsic manifold structure of the data. ℓ2,1 norm constraints and approximate orthogonality constraints are added to ensure the group sparsity of the encoding matrix and to characterize the discriminative information of the data simultaneously. An efficient optimization scheme is developed to solve the objective function. Our experimental results on five benchmark data sets demonstrate that ℓ2,1HNMFD can learn better data representations and provide better clustering results. Full article
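The ℓ2,1 norm behind the group-sparsity constraint is simple to state in code (a minimal illustration; `l21_norm` is my own name):

```python
import numpy as np

def l21_norm(M):
    """The l2,1 norm: the sum of the l2 norms of the rows of M.

    Used as a penalty, it drives entire rows toward zero, which is what
    gives the encoding matrix its group sparsity.
    """
    return np.linalg.norm(M, axis=1).sum()

M = np.array([[3.0, 4.0],     # row norm 5
              [0.0, 0.0],     # an all-zero row contributes nothing
              [0.0, 2.0]])    # row norm 2
assert l21_norm(M) == 7.0
```

Unlike the Frobenius norm, which spreads shrinkage over all entries, this penalty is non-smooth at zero rows, so minimizers tend to zero out whole rows at once.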

6 pages, 184 KB  
Article
A Generalized Cross Validation Method for the Inverse Problem of 3-D Maxwell's Equation
by Liang Ding, Bo Han and Jiaqi Liu
Math. Comput. Appl. 2010, 15(5), 784-789; https://doi.org/10.3390/mca15050784 - 31 Dec 2010
Viewed by 1448
Abstract
The inverse problem of estimating the electrical conductivity in Maxwell's equations is considered and reformulated as a nonlinear equation. Generalized Cross Validation (GCV) is used to estimate the global regularization parameter, and the damped Gauss–Newton method is applied to impose local regularization. The damped Gauss–Newton method requires no calculation of the Hessian matrix, which is expensive in the traditional Newton method. The GCV method decreases the computational expense and mitigates the influence of nonlinearity and ill-posedness. Numerical simulations confirm that the method is efficient. Full article
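For a linearized, Tikhonov-regularized sub-problem, the GCV criterion for picking the global regularization parameter can be sketched as follows (a standard textbook form under my own naming, not the paper's implementation):

```python
import numpy as np

def gcv_score(J, y, lam):
    """Generalized cross-validation score for a Tikhonov-regularized problem.

    GCV(lam) = n * ||(I - A) y||^2 / trace(I - A)^2, where
    A = J (J^T J + lam I)^{-1} J^T is the influence matrix; the minimizer
    over lam is the GCV choice of regularization parameter.
    """
    n, p = J.shape
    A = J @ np.linalg.solve(J.T @ J + lam * np.eye(p), J.T)
    r = (np.eye(n) - A) @ y
    return n * (r @ r) / np.trace(np.eye(n) - A) ** 2

# Score a small grid of regularization parameters on a toy linear system.
rng = np.random.default_rng(0)
J = rng.standard_normal((20, 3))
y = J @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.standard_normal(20)
scores = {lam: gcv_score(J, y, lam) for lam in (1e-6, 1e-2, 1e2)}
assert scores[1e2] > scores[1e-6]   # heavy smoothing underfits this toy data
```

In practice the score is minimized over a logarithmic grid or by a 1-D search; the appeal of GCV is that it needs no estimate of the noise level.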