Article

Regularized Kaczmarz Solvers for Robust Inverse Laplace Transforms

by Marta González-Lázaro 1, Eduardo Viciana 2, Víctor Valdivieso 2, Ignacio Fernández 1,* and Francisco Manuel Arrabal-Campos 2,*
1 Department of Chemistry and Physics, Research Centre CIAIMBITAL, University of Almería, 04120 Almería, Spain
2 Department of Engineering, Research Centre CIAIMBITAL, University of Almería, 04120 Almería, Spain
* Authors to whom correspondence should be addressed.
Mathematics 2025, 13(13), 2166; https://doi.org/10.3390/math13132166
Submission received: 20 May 2025 / Revised: 18 June 2025 / Accepted: 26 June 2025 / Published: 2 July 2025

Abstract

Inverse Laplace transforms (ILTs) are fundamental to a wide range of scientific and engineering applications—from diffusion NMR spectroscopy to medical imaging—yet their numerical inversion remains severely ill-posed, particularly in the presence of noise or sparse data. The primary objective of this study is to develop robust and efficient numerical methods that improve the stability and accuracy of ILT reconstructions under challenging conditions. In this work, we introduce a novel family of Kaczmarz-based ILT solvers that embed advanced regularization directly into the iterative projection framework. We propose three algorithmic variants—Tikhonov–Kaczmarz, total variation (TV)–Kaczmarz, and Wasserstein–Kaczmarz—each incorporating a distinct penalty to stabilize solutions and mitigate noise amplification. The Wasserstein–Kaczmarz method, in particular, leverages optimal transport theory to impose geometric priors, yielding enhanced robustness for multi-modal or highly overlapping distributions. We benchmark these methods against established ILT solvers—including CONTIN, maximum entropy (MaxEnt), TRAIn, ITAMeD, and PALMA—using synthetic single- and multi-modal diffusion distributions contaminated with 1% controlled noise. Quantitative evaluation via mean squared error (MSE), Wasserstein distance, total variation, peak signal-to-noise ratio (PSNR), and runtime demonstrates that Wasserstein–Kaczmarz attains an optimal balance of speed (0.53 s per inversion) and accuracy (MSE = 4.7 × 10⁻⁸), while TRAIn achieves the highest fidelity (MSE = 1.5 × 10⁻⁸) at a modest computational cost. These results elucidate the inherent trade-offs between computational efficiency and reconstruction precision and establish regularized Kaczmarz solvers as versatile, high-performance tools for ill-posed inverse problems.

1. Introduction

Inverse Laplace transforms (ILTs) are fundamental mathematical transforms in many scientific and engineering fields. They facilitate the manipulation and solution of differential equations in diverse areas such as heat conduction [1,2], electrical circuit analysis [3], mechanical vibration studies [4], and control systems design [5]. In applications ranging from magnetic resonance imaging (MRI) [6] to industrial process control, the accuracy of ILT calculations is critical to ensuring reliable and robust results.
Despite their broad utility, obtaining accurate and efficient numerical ILT solutions remains challenging. Many traditional approaches struggle with precision and computational speed, especially for complex or high-frequency functions. These difficulties are further compounded when the underlying problems are ill-posed—a common occurrence in real-world scenarios where sparse or noisy data can induce instability [7]. Such ill-posedness is frequently encountered in
  • Image reconstruction: Recovering detailed images from incomplete or noisy measurements (e.g., in medical or astronomical imaging) [8].
  • Inverse heat conduction: Estimating temperature distributions or heat fluxes from limited thermal data [9].
  • Seismic inversion: Inferring subsurface properties from seismic signals [10].
  • Machine learning: Fitting models in the presence of overfitting or high-dimensional instability [11].
Robust and efficient inverse Laplace transform (ILT) solvers are critical in a wide range of scientific and engineering fields, where they play a central role in the analysis of real and complex data. For instance, in medical imaging, such as diffusion magnetic resonance imaging (MRI), they are indispensable for generating precise tissue diffusivity maps, which are fundamental for both diagnostics and research. In polymer science, these techniques facilitate the precise determination of molecular weight distributions, thereby directly impacting the prediction of material properties. Furthermore, their role in engineering applications, such as inverse heat conduction and seismology, where data are often incomplete or noisy, is indispensable. To mitigate these challenges, regularization techniques have been extensively developed and applied [12,13,14,15,16]. Landmark advances in computational regularization methods—such as those discussed in [17,18,19]—have provided stable frameworks for tackling ill-posed inverse problems. Yet, many of these methods require delicate parameter tuning and may still suffer from convergence issues in complex settings.
Potential strategies to address these challenges build on the Kaczmarz method, which has emerged as a highly promising approach [20]. The original algorithm was introduced by Stefan Kaczmarz in 1937 as an iterative method for solving consistent systems of linear equations by sequentially projecting the current iterate onto the hyperplanes defined by each equation. Although simple and easy to implement, its performance is highly sensitive to the ordering of the equations. In its classical (cyclic) form, convergence can be slow when successive rows are “nearly parallel” or when the system is large and ill-posed. The randomized Kaczmarz (RK) algorithm, proposed by Strohmer and Vershynin [21], was a major breakthrough. In their formulation, each row is selected at random with probability proportional to the square of its norm. This choice, which leads to exponential convergence in expectation, helps mitigate problems caused by cyclic misordering. More recently, Gower and Richtárik [22] developed a unifying framework for randomized iterative methods for linear systems that not only recovers the RK method as a special case but also shows that, if a positive definite “geometry” matrix and a sampling matrix are chosen appropriately, variants including coordinate descent and block methods can be derived. Their analysis provides a single convergence theorem from which many known complexity results follow, and it opens the door to novel algorithmic variants and preconditioning strategies. In the work of He, Dong, and Li [23], a new probability distribution is proposed for the RK algorithm, in which the probability of selecting a given row in the current iteration is proportional to the square of the sine of the angle between that row and the row chosen in the previous iteration. This angle-based selection is motivated by the observation that, when consecutive hyperplanes are nearly aligned, the benefit of an additional projection diminishes. The method avoids redundant projections and improves the speed of convergence by “rewarding” rows that are more orthogonal (i.e., form a greater angle) relative to the previously used row. The same authors also proposed acceleration strategies that, instead of projecting onto a single hyperplane, project onto the intersection of two hyperplanes. These dual-projection or intersection-based schemes can reduce the error even faster at each iteration, and the convergence analysis presented in their study shows that these accelerated variants enjoy better convergence rates than the classical RK method with uniform or norm-based probabilities.
This method has gained popularity for its simplicity, speed, robustness, and convergence properties. These attributes have led to its successful application in computed tomography [24], signal processing [25], and data reconciliation [26]. Its inherent adaptability and flexibility make it an attractive candidate for the ILT problem.
Nevertheless, the use of the Kaczmarz method for ILT remains largely unexplored, leaving its potential for solving ILTs an open question in numerical analysis. The primary objective of this work is to adapt the Kaczmarz method specifically for ILT, thereby extending its advantages to the domain of ill-posed inverse problems. By incorporating regularization directly into the iterative scheme (whether via Tikhonov, total variation, or more advanced approaches), our goal is to enhance both the computational efficiency and the noise resilience of ILT solvers.
Several established approaches underscore the current efforts to resolve ILT challenges:
  • MaxEnt (maximum entropy): Utilizes non-parametric solutions that integrate prior knowledge, ensuring stability and robustness [27,28,29].
  • TRAIn (trust-region algorithm): Dynamically adjusts the trust-region to reconstruct ILT with superlinear convergence [30,31].
  • ITAMeD (iterative thresholding algorithm for multiexponential decays): Combines ℓ1-based sparsity with fast iterative methods (e.g., FISTA) to resolve discrete ILT components [32].
  • PALMA: Merges ℓ1 regularization and maximum entropy priors to automatically balance smoothness and sparsity [33].
While each of these methods (see Appendix A for details of these algorithms) offers distinct advantages, they often require careful parameter tuning and may not guarantee robust convergence under challenging conditions. The Kaczmarz method, in contrast, provides an intuitive row-by-row projection strategy and can integrate seamlessly with a variety of regularization frameworks. Its iterative refinement process is similar to the Gram–Schmidt orthogonalization principle, reinforcing geometric insights that often translate into stable, noise-tolerant performance. By adapting the Kaczmarz method to ILT, we aim to exploit these strengths and develop an approach that is both computationally efficient and robust to the complexities of ill-posed Laplace inversion.
As a proof of concept, our investigation targets diffusion NMR (whether PFGSE—pulsed field gradient spin echo or DOSY—diffusion-ordered spectroscopy). In diffusion NMR, the measured signal attenuation due to progressively stronger magnetic field gradients results in a multi-exponential decay curve. This decay is mathematically equivalent to a Laplace transformation of the underlying distribution of diffusion coefficients. Traditional ILT approaches applied in this context are vulnerable to instability and inaccuracy. Nonetheless, the utilization of algebraic reconstruction for multi-exponential decays has previously been established in its most fundamental form. This has been achieved through the application of ILT reconstruction to molecular weight estimation in globular proteins and exopolysaccharides [34,35].
By adapting the Kaczmarz method for ILT and integrating robust regularization, our approach aims to recover complex diffusion profiles, accurately decompose overlapping components, and handle diverse sample behaviors (e.g., polydisperse systems). The enhanced numerical stability and efficiency of the proposed solver hold significant promise for advancing ILT applications in chemistry, biophysics, materials science, and chemical engineering.
This research has far-reaching implications for real-world applications of ILT. The development of an enhanced, reliable, and efficient Kaczmarz solver has the potential to substantially enhance the accuracy of problem-solving and system modelling across a broad spectrum of scientific and engineering domains. The present study has implemented several variations of the Kaczmarz algorithm, incorporating various regularization approaches, including Tikhonov, total variation, and Wasserstein regularization. The enhanced computational capabilities of such regularized Kaczmarz algorithms have the potential to transform numerous research and industrial domains.
Structure of this work. Section 2 details the Kaczmarz method for the ILT problem. Section 3 focuses on discretizing the Fredholm integral formulation and incorporating regularization. Section 4 presents a Wasserstein–Kaczmarz method for solving the inverse Laplace transform. Section 5 provides a comparative evaluation of the algorithm against established ILT methods in terms of accuracy, convergence speed, and noise robustness. Finally, Section 6 discusses practical limitations, potential extensions, and the broader implications of this work for solving ill-posed inverse problems.

Objectives and Contributions

The objectives and contributions of this work are as follows:
  • We adapt the Kaczmarz method to the specific requirements of ILT by discretizing the Fredholm integral formulation and integrating advanced regularization techniques.
  • We incorporate regularization methods (such as Tikhonov, total variation, and Wasserstein) directly into the Kaczmarz iterative process to effectively address the ill-posedness inherent in ILT problems.
  • We conduct an empirical evaluation of the proposed algorithm’s accuracy, convergence speed, and robustness against noise, benchmarking it against existing ILT methodologies.
  • We discuss the broader implications, limitations, and future directions for extending Kaczmarz-based solvers to other classes of inverse problems.

2. Kaczmarz Method for Solving Linear Systems

The Kaczmarz method is an iterative algorithm for solving systems of linear equations of the form
K x = Y,
where K ∈ R^{m×n} is the system matrix, x ∈ R^n is the unknown solution vector, and Y ∈ R^m is the right-hand side vector. First introduced by Stefan Kaczmarz in 1937, this algorithm has found applications in various fields such as image reconstruction, signal processing, and numerical optimization.
The core idea of the Kaczmarz method is to iteratively project the current estimate of the solution onto the hyperplane defined by one row of the system. This projection-based approach guarantees convergence to a solution, even for inconsistent systems, by minimizing the residual error in the least-squares sense. This concept of projection is closely related to the Gram–Schmidt orthogonalization process. While the Gram–Schmidt process orthogonalizes a set of vectors, the Kaczmarz method applies successive orthogonal projections to iteratively converge to the solution. This geometric insight underscores the method’s elegance and its potential for stable, noise-tolerant performance.
This Gram–Schmidt orthogonalization, a classical method in linear algebra, systematically constructs an orthonormal basis for a vector space by projecting vectors onto subspaces spanned by previously orthogonalized vectors. Drawing parallels to Gram–Schmidt orthogonalization helps elucidate the geometric and algebraic principles underlying the Kaczmarz method.

2.1. Gram–Schmidt Orthogonalization

The Gram–Schmidt process is a method for orthogonalizing a set of linearly independent vectors in a finite-dimensional vector space. Given a set of m linearly independent vectors v_1, v_2, …, v_m ∈ R^n, the algorithm constructs an orthonormal basis u_1, u_2, …, u_m ∈ R^n (see Algorithm 1).
Algorithm 1: Gram–Schmidt Orthogonalization
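As a minimal illustration of the process just described, the following NumPy sketch implements classical Gram–Schmidt orthonormalization; the column-vector convention and all names are illustrative choices of ours, not taken from the published pseudocode.

```python
import numpy as np

def gram_schmidt(V):
    """Orthonormalize the columns of V (assumed linearly independent).

    Returns U whose columns form an orthonormal basis of span(V).
    """
    V = np.asarray(V, dtype=float)
    n, m = V.shape
    U = np.zeros((n, m))
    for i in range(m):
        u = V[:, i].copy()
        # Subtract projections onto the previously orthonormalized vectors.
        for j in range(i):
            u -= np.dot(U[:, j], V[:, i]) * U[:, j]
        U[:, i] = u / np.linalg.norm(u)  # normalize the orthogonal residual
    return U

# Example: orthonormalize three vectors in R^3.
V = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]]).T
U = gram_schmidt(V)
print(np.round(U.T @ U, 10))  # ~ identity matrix
```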
The resulting set {u_1, u_2, …, u_m} forms an orthonormal basis for the subspace spanned by the original vectors {v_1, v_2, …, v_m}.
Each step in the Gram–Schmidt process involves projecting a vector v i onto the subspace spanned by the previously orthogonalized vectors. The residual vector, obtained after subtracting these projections, lies orthogonal to the subspace, ensuring orthogonality in the resulting basis.

2.2. Kaczmarz Algorithm as an Orthogonalization Process

The Kaczmarz method can be interpreted as a sequential orthogonalization process where the current estimate of the solution vector is projected onto the hyperplanes defined by the rows of K. Each row K_i of K represents a hyperplane in R^n, defined by the equation:
⟨K_i, x⟩ = Y_i.
The Kaczmarz method can be described step-by-step as follows (see Algorithm 2):
Algorithm 2: Kaczmarz Method for Solving Linear Systems
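A compact sketch of the cyclic Kaczmarz iteration in NumPy follows; the fixed sweep count and cyclic row ordering are illustrative defaults rather than prescriptions from the text.

```python
import numpy as np

def kaczmarz(K, Y, n_sweeps=100, x0=None):
    """Classical (cyclic) Kaczmarz: project the iterate onto each
    hyperplane <K_i, x> = Y_i in turn."""
    m, n = K.shape
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    row_norms2 = np.einsum('ij,ij->i', K, K)  # ||K_i||^2 for each row
    for _ in range(n_sweeps):
        for i in range(m):
            if row_norms2[i] == 0:
                continue
            residual = Y[i] - K[i] @ x
            x += (residual / row_norms2[i]) * K[i]  # orthogonal projection
    return x

# Small consistent system: the iterates converge to the exact solution.
K = np.array([[2.0, 1.0], [1.0, 3.0]])
Y = np.array([3.0, 5.0])
print(kaczmarz(K, Y))          # ~ [0.8, 1.4]
print(np.linalg.solve(K, Y))   # exact solution for comparison
```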
The algorithm successively reduces the distance d_i = ‖x_k − Proj_{K_i}(x_k)‖ to each hyperplane by projecting the current estimate onto the hyperplanes defined by the rows of K. After multiple iterations, these projections converge the estimate to a solution x* that minimizes the residual error in the least-squares sense:
x* = arg min_x ‖K x − Y‖².
In the Kaczmarz method, each update step, built from the projection Proj_{K_i}(x_k) = (⟨K_i, x_k⟩ / ‖K_i‖²) K_i, is analogous to the projection operation in Gram–Schmidt orthogonalization. However, rather than orthogonalizing vectors to construct an orthonormal basis, the method iteratively adjusts the solution vector x_k to satisfy the equation of each hyperplane. By iteratively combining these projections, the Kaczmarz method ensures convergence to the optimal solution that minimizes the residual error in the least-squares sense.

3. Adaption of the Kaczmarz Method with Regularization for Solving Inverse Laplace Transforms

In this section, we consider the Laplace transform written as a Fredholm integral equation of the first kind, defined as
L{x(s)}(t) = ∫_a^b e^{−ts} x(s) ds,
where t ≥ 0 and s ∈ [a, b]. We view x(s) as the unknown function we wish to recover (i.e., the “inverse Laplace transform” of f(t)), and the kernel k(t, s) = e^{−ts} is the classical Laplace transform kernel. To solve (1) via the Kaczmarz method, we first discretize both the integral domain in s and the function values in t. The goal is to recast the integral equation into a linear system of the form
K x = Y ,
where K approximates the continuous operator L{x(s)}(t) = ∫_a^b e^{−ts} x(s) ds, Y corresponds to sampled values of f(t), and x is the discretized representation of x(s).
Step 1:
Partition the Interval [ a , b ] .
Divide the interval [a, b] into n subintervals by points s_1, s_2, …, s_n. Often, a uniform spacing is chosen, i.e., Δs = (b − a)/n, but non-uniform partitions may be used if certain regions of [a, b] require finer resolution.
Step 2:
Approximate the Integral.
Replace the integral by a numerical quadrature:
∫_a^b e^{−ts} x(s) ds ≈ Σ_{j=1}^{n} w_j e^{−t s_j} x(s_j),
where w_j is the quadrature weight for the j-th subinterval. Common choices for w_j include the rectangle rule (w_j = Δs) or more accurate schemes such as trapezoidal or Gaussian quadrature.
Step 3:
Sample L { x ( s ) } ( t ) at Discrete Points.
Select m distinct values of t, say t_1, t_2, …, t_m. For each t_i, approximate L{x(s)}(t_i) by the given data or by sampling from the continuous function L{x(s)}(t).
Step 4:
Assemble the Linear System.
Define the matrix K ∈ R^{m×n} by
K_{i,j} = w_j e^{−t_i s_j}, i = 1, …, m, j = 1, …, n.
Next, form the vector Y ∈ R^m by
Y_i = L{x(s)}(t_i), i = 1, …, m.
Finally, let x ∈ R^n represent the unknown discrete values x_j = x(s_j). Thus, the approximate equation
∫_a^b e^{−t_i s} x(s) ds ≈ L{x(s)}(t_i)
becomes the matrix problem
K x = Y ,
where K ∈ R^{m×n} is the system matrix and Y ∈ R^m is the right-hand side. By sequentially projecting the current estimate x_k onto the hyperplanes defined by each row of K, the Kaczmarz method converges toward a solution that satisfies K x ≈ Y. In the presence of noise or ill-posedness, however, regularization may be needed to stabilize and improve the quality of the solution. Below, we consider two common forms of regularization:
  • Tikhonov regularization, which penalizes large norms of the solution.
  • Total variation (TV) regularization, which promotes piecewise-smooth solutions by penalizing large discrete gradients.
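Before examining these penalties in detail, the discretization in Steps 1–4 can be made concrete with a short sketch; the grids, the Gaussian ground truth, and the rectangle rule below are illustrative choices of ours, not prescriptions from the text.

```python
import numpy as np

def assemble_laplace_system(x_func, s_grid, t_grid):
    """Build K[i, j] = w_j * exp(-t_i * s_j) and Y_i = sum_j K[i, j] x(s_j)
    using the rectangle rule (Steps 1-4). x_func plays the role of a
    ground-truth distribution used here to synthesize data."""
    ds = s_grid[1] - s_grid[0]              # uniform spacing Delta s
    w = np.full(s_grid.size, ds)            # rectangle-rule weights
    K = w[None, :] * np.exp(-np.outer(t_grid, s_grid))
    x_true = x_func(s_grid)
    Y = K @ x_true                          # sampled L{x}(t_i)
    return K, Y, x_true

s = np.linspace(0.01, 10.0, 200)            # grid for the unknown x(s)
t = np.linspace(0.0, 5.0, 64)               # sampling points in t
gaussian = lambda s: np.exp(-0.5 * ((s - 3.0) / 0.4) ** 2)
K, Y, x_true = assemble_laplace_system(gaussian, s, t)
print(K.shape, Y.shape)                      # (64, 200) (64,)
```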

3.1. Tikhonov Regularization

Tikhonov regularization (sometimes called ℓ2-regularization) introduces a parameter λ > 0 to counteract noise amplification and large solution components. To efficiently incorporate Tikhonov regularization into our Kaczmarz-type iterations, we employ an incremental proximal-gradient (forward–backward) splitting scheme first popularized in signal processing [36,37]. Concretely, at each step k, we perform
d_i = ( (Y_i − ⟨K_i, x_k⟩) K_i − λ x_k ) / ( ‖K_i‖₂² + λ ),
followed by a relaxation x_{k+1/2} = x_k + ω d_i, where ω is a relaxation parameter (typically ω = 1), and the global ℓ2-proximal step x_{k+1} = prox_{ℓ2,λ}(x_{k+1/2}). This per-row approximation converges to the unique minimizer and is closely related to generalized forward–backward splitting [38]. Together, these steps implement an incremental proximal-gradient method whose fixed point satisfies
(Kᵀ K + λ I) x = Kᵀ Y,
the normal equations of the Tikhonov-regularized problem. Hence, it is fully consistent with classical Tikhonov regularization.
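A minimal sketch of this Tikhonov–Kaczmarz update follows; note that because λ enters every row update with a row-dependent denominator, a stationary point of a full sweep solves a row-weighted form of the normal equations above, and ω = 1 with the defaults below are illustrative.

```python
import numpy as np

def tikhonov_kaczmarz(K, Y, lam=1e-3, omega=1.0, n_sweeps=200):
    """Kaczmarz sweeps with a damped per-row step: each update is the
    scaled negative gradient of 0.5*(<K_i, x> - Y_i)^2 + 0.5*lam*||x||^2,
    so a stationary point of the sweep solves a row-weighted version of
    (K^T K + lam*I) x = K^T Y."""
    m, n = K.shape
    x = np.zeros(n)
    for _ in range(n_sweeps):
        for i in range(m):
            denom = K[i] @ K[i] + lam
            d = ((Y[i] - K[i] @ x) * K[i] - lam * x) / denom
            x += omega * d
    return x

# Tiny noisy example.
rng = np.random.default_rng(0)
K = rng.normal(size=(30, 50))
x_true = np.maximum(rng.normal(size=50), 0)
Y = K @ x_true + 0.01 * rng.normal(size=30)
x_hat = tikhonov_kaczmarz(K, Y, lam=1e-2)
print(np.linalg.norm(K @ x_hat - Y))  # residual norm
```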

3.2. Total Variation Regularization

Total Variation (TV) regularization is well suited for problems where the solution x is expected to be piecewise constant or smooth. Instead of penalizing the size of x directly, we penalize its discrete gradient. In a one-dimensional setting (i.e., each component x i for i = 1 , , n ), a simple finite-difference approximation of the TV penalty involves terms of the form
α (2x_j − x_{j−1} − x_{j+1}),
where α > 0 controls the strength of the regularization. Thus, the “distance and direction” step can be modified to
d_i = ( (Y_i − ⟨K_i, x_k⟩) / ‖K_i‖² ) K_i − α · (TV penalty term evaluated at x_k).
In practice, one updates each component x_i with information from its neighbors, promoting locally constant or smoothly varying solutions. Boundary conditions (e.g., for x_1 and x_n) may need special handling.
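A sketch of this TV–Kaczmarz variant using the smoothed finite-difference penalty given above; applying the TV correction once per sweep (rather than per row) and the replicated-edge boundary handling are implementation choices of ours.

```python
import numpy as np

def tv_kaczmarz(K, Y, alpha=1e-3, n_sweeps=200):
    """Kaczmarz projections followed by the smoothed TV correction
    alpha*(2*x_j - x_{j-1} - x_{j+1}) with replicated boundaries."""
    m, n = K.shape
    x = np.zeros(n)
    for _ in range(n_sweeps):
        for i in range(m):
            x += ((Y[i] - K[i] @ x) / (K[i] @ K[i])) * K[i]
        # TV-style smoothing step applied to the whole iterate.
        xp = np.pad(x, 1, mode='edge')            # replicate edge values
        x -= alpha * (2 * xp[1:-1] - xp[:-2] - xp[2:])
    return x

# Usage mirrors tikhonov_kaczmarz: x_hat = tv_kaczmarz(K, Y, alpha=1e-3)
```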

3.3. Algorithm: Kaczmarz Method with Regularization

This subsection presents the Kaczmarz method with regularization algorithm to solve the ILTs. In many applications (e.g., inverse problems or image reconstruction), the solution may also be constrained to be non-negative. The pseudocode below illustrates how to incorporate these features into each Kaczmarz iteration, and we subsequently discuss practical tips for parameter selection, boundary conditions, and stopping criteria (see Algorithm 3).
Algorithm 3: Kaczmarz method for solving linear systems (with optional regularization)
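One possible realization of Algorithm 3, combining the row updates above with an optional Tikhonov or TV correction, projection onto the positive orthant, and a stopping test on the iterate change; this is a sketch with illustrative defaults, not the authors' reference implementation.

```python
import numpy as np

def regularized_kaczmarz(K, Y, reg='tikhonov', lam=1e-3, alpha=1e-3,
                         omega=1.0, tol=1e-8, max_sweeps=500, nonneg=True):
    """Row-by-row Kaczmarz updates with optional regularization,
    non-negativity projection, and ||x_{k+1} - x_k|| stopping."""
    m, n = K.shape
    x = np.zeros(n)
    for _ in range(max_sweeps):
        x_old = x.copy()
        for i in range(m):
            denom = K[i] @ K[i] + (lam if reg == 'tikhonov' else 0.0)
            d = (Y[i] - K[i] @ x) * K[i]
            if reg == 'tikhonov':
                d -= lam * x                 # damped Tikhonov direction
            x += omega * d / denom
        if reg == 'tv':
            xp = np.pad(x, 1, mode='edge')   # smoothed TV correction
            x -= alpha * (2 * xp[1:-1] - xp[:-2] - xp[2:])
        if nonneg:
            x = np.maximum(x, 0.0)           # enforce x >= 0
        if np.linalg.norm(x - x_old) < tol:  # stopping criterion
            break
    return x

# Usage: x_hat = regularized_kaczmarz(K, Y, reg='tv', alpha=1e-3)
```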
In practice, it is essential to choose the regularization parameters carefully. Larger values of λ or α impose stronger regularization, leading to more stabilization but also potentially slower convergence and a biased estimate. Boundary conditions matter especially for the TV case, where one must decide how to handle edges (e.g., via forward/backward differences or reflection). If physical considerations dictate non-negativity (such as concentration profiles in inversion problems), projecting each iterate onto the positive orthant provides a simple and effective enforcement of x k 0 . In terms of stopping criteria, one can monitor x k + 1 x k , the residual norm K x k + 1 Y , or both. In summary, the Kaczmarz method furnishes a flexible row-by-row updating scheme; by directly embedding Tikhonov or total variation regularization into its “distance and direction” step, it effectively mitigates noise amplification while supporting structurally relevant constraints such as smoothness, sparsity, or non-negativity.

3.4. Discussion and Practical Considerations

  • Choice of parameters. Both Tikhonov ( λ ) and TV ( α ) regularization parameters significantly affect the trade-off between fitting the data and enforcing smoothness/small norms. These typically require some experimentation or cross-validation strategy to select optimal values.
  • Computational cost. Each inner iteration of the Kaczmarz method only accesses one row of K at a time, which can be advantageous for large problems. However, TV regularization can introduce additional computation because one must update each component x_i with information from its neighbors.
  • Convergence behavior. The Kaczmarz method often converges quickly in practice, especially if the rows of K are processed in a randomized or optimal order. Regularization may slow down convergence slightly but typically leads to more stable and physically meaningful solutions in noisy or ill-conditioned scenarios.
By integrating either Tikhonov or TV regularization into the classical Kaczmarz method, we obtain a family of algorithms capable of handling ill-posed and large-scale problems, making them suitable for applications in signal/image processing, tomography, and inverse problems where stability and interpretability are crucial.

4. Wasserstein–Kaczmarz Method for Solving Inverse Laplace Transforms

Numerical ILT inversion can become particularly difficult when the parameter s contains limited information, the signal-to-noise ratio is low, or the sampling in the t-domain is sparse. Such scenarios exacerbate the ill-posed nature of the inverse problem, rendering standard regularization approaches (L1 or L2 norms) insufficient. These traditional techniques often fail to capture the underlying distribution accurately, resulting in oversmoothed or excessively noisy reconstructions under sparse or noisy data conditions.
Wasserstein regularization, inspired by optimal transport theory, is a robust method for aligning the reconstructed x ( s ) with a reference distribution x ref ( s ) . This method is particularly effective for preserving the geometric structure of x ( s ) , even under high noise conditions. The p-Wasserstein distance between two distributions f and g is defined as
W_p(f, g) = ( inf_{γ ∈ Π(f,g)} ∫ |x − y|^p dγ(x, y) )^{1/p},
where Π ( f , g ) represents all transport plans γ ( x , y ) with marginals f and g.
The Wasserstein-2 distance ( W 2 ) is typically used. It provides a robust alternative by taking advantage of the geometry of the solution space. By aligning the reconstructed solution with a prior reference x ref , it compensates for missing information, preserves physical interpretability, and effectively handles sparse datasets. Moreover, the Wasserstein technique is particularly well-suited for sparse datasets because of its ability to incorporate geometric priors and maintain robustness against noise and missing data. The objective function is defined as
min_x ‖K x − Y‖² + λ W₂²(x, x_ref),
where
  • W₂²(x, x_ref): the Wasserstein-2 distance aligns the reconstructed x with the reference distribution x_ref.
  • x_ref: a prior distribution reflecting expected physical properties or sparsity patterns.
A proximal update is performed in the Wasserstein-2 metric (this equivalence, introduced by Jordan, Kinderlehrer, and Otto, inspired the so-called JKO scheme to approximate these diffusion processes via an implicit discretization of the gradient flow in the Wasserstein metric [39,40]). The Wasserstein-proximal subproblem at iterate x_k is
x_{k+1} = arg min_x (1/τ) W₂²(x, x_ref) + ‖x − x_k‖₂².
In scenarios where the reference distribution x_ref is unknown, Wasserstein regularization can be adapted to infer or estimate x_ref. A natural starting point is the uniform prior
x_ref = 1/N,
where N is the number of diffusion coefficients. This ensures no prior bias in the reconstruction. Alternatively, x_ref can be inferred dynamically during reconstruction: starting with x_ref = 1/N, update x_ref at each iteration based on the reconstructed x:
x_ref^{k+1} = α x^{k+1} + (1 − α) x_ref^k,
where α controls how quickly the reference adapts. Thus, this convex combination is precisely the proximal map of the squared-Wasserstein term, i.e., the first JKO time step of the Wasserstein gradient flow [41], and so faithfully regularizes toward x_ref in the W₂ metric. This Wasserstein–Kaczmarz method is presented in Algorithm 4.
Algorithm 4: Wasserstein–Kaczmarz method for solving inverse Laplace transforms
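A minimal sketch of Algorithm 4 following the convex-combination realization of the W₂ proximal step described above; applying the proximal pull once per sweep, the non-negativity clamp, and all parameter values are simplifications of ours.

```python
import numpy as np

def wasserstein_kaczmarz(K, Y, lam=0.1, alpha=0.05, n_sweeps=200):
    """Kaczmarz sweeps interleaved with (i) a proximal blend toward the
    reference distribution x_ref (the convex combination discussed in the
    text) and (ii) a running update of x_ref itself."""
    m, n = K.shape
    x = np.full(n, 1.0 / n)          # start from the uniform prior
    x_ref = np.full(n, 1.0 / n)      # x_ref = 1/N when no prior is known
    for _ in range(n_sweeps):
        for i in range(m):
            x += ((Y[i] - K[i] @ x) / (K[i] @ K[i])) * K[i]
        x = np.maximum(x, 0.0)                    # keep x non-negative
        x = (1 - lam) * x + lam * x_ref           # proximal pull to x_ref
        x_ref = alpha * x + (1 - alpha) * x_ref   # adapt the reference
    return x

# Usage: x_hat = wasserstein_kaczmarz(K, Y, lam=0.1, alpha=0.05)
```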
This Wasserstein–Kaczmarz algorithm effectively recovers the sought function x ( s ) in the presence of noise, sparse data, or limited prior information. The Wasserstein term aligns x with x ref to preserve geometrical or distributional characteristics even under challenging conditions, ensuring a more robust and physically interpretable solution compared to conventional methods.

5. Results

In this section, we present the performance results of the evaluated algorithms on synthetic datasets designed to assess their reconstruction accuracy and robustness under noise conditions. The methods compared include classical Kaczmarz, Tikhonov-regularized Kaczmarz, TV-regularized Kaczmarz, Wasserstein–Kaczmarz, CONTIN, maximum entropy (MaxEnt), TRAIn, ITAMeD, and PALMA.

5.1. Synthetic Data Generation and Evaluation Metrics

Synthetic datasets were generated by simulating Laplace-transformed signals from analytically prescribed “ground-truth” distributions—both single- and multi-component—employing Gaussian and exponential functional forms. In order to emulate experimental measurement variability, a controlled noise level of 1% was introduced. In the context of practical diffusion NMR experiments, the absolute value of the diffusion coefficient, denoted by D, and the reconstruction’s noise sensitivity are considered to be essential performance metrics. However, the explicit variation of these parameters is not a prerequisite for the mathematical demonstrations presented herein. The performance of each reconstruction method was quantitatively evaluated using the following metrics:
  • Total computation time (seconds): Measures the total runtime required for each algorithm to reach convergence.
  • Mean squared error (MSE): Represents the average squared difference between reconstructed and true distributions, quantifying reconstruction accuracy.
    MSE = (1/N) Σ_{i=1}^{N} (x_recon,i − x_true,i)².
  • Wasserstein distance: Measures the distributional difference between reconstructed and true distributions, indicating how closely the overall shapes match.
  • Total variation (TV): Quantifies the smoothness of the reconstructed distribution; lower TV indicates smoother solutions, while higher TV indicates sharper, potentially noisier solutions.
    TV(x) = Σ_{i=1}^{N−1} |x_{i+1} − x_i|.
  • Peak signal-to-noise ratio (PSNR): Provides a logarithmic measure of the reconstruction quality, with higher values corresponding to better accuracy.
    PSNR = 10 log₁₀( max(x_true)² / MSE ).
All algorithms were uniformly initialized, and regularization parameters were optimized through the L-curve method.
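For concreteness, the metrics above can be computed as in the sketch below; scipy.stats.wasserstein_distance is assumed for the one-dimensional distributional distance, and the non-negativity clamps with small epsilon guards are implementation details of ours.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def evaluate(x_recon, x_true, s_grid):
    """Compute MSE, 1-D Wasserstein distance, total variation, and PSNR
    for one reconstruction against the ground truth."""
    mse = np.mean((x_recon - x_true) ** 2)
    wd = wasserstein_distance(s_grid, s_grid,
                              u_weights=np.maximum(x_recon, 0) + 1e-15,
                              v_weights=np.maximum(x_true, 0) + 1e-15)
    tv = np.sum(np.abs(np.diff(x_recon)))
    psnr = 10 * np.log10(np.max(x_true) ** 2 / mse)
    return {'MSE': mse, 'Wasserstein': wd, 'TV': tv, 'PSNR': psnr}

# Usage: metrics = evaluate(x_hat, x_true, s)
```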

5.2. Single-Component Data Reconstruction

Table 1 summarizes the reconstruction results for single-component synthetic data at a 1% noise level, displaying total computation time, MSE, Wasserstein distance, total variation, and PSNR.
Figure 1 illustrates the underlying true distribution along with both the true signal and its noisy counterpart. This visualization provides a qualitative baseline that helps in assessing the effectiveness of the reconstruction algorithms in preserving the original signal structure while mitigating noise.
Figure 2 illustrates the performance of the various reconstruction algorithms on single-component synthetic data with 1% noise. The TRAIn algorithm attains the lowest mean squared error (MSE) of 1.5 × 10⁻⁸ and the smallest Wasserstein distance of 5.4 × 10⁻⁸, thereby signifying its superior reconstruction accuracy. However, it should be noted that this is accompanied by a higher computational cost (19.36 s). In contrast, the classical Kaczmarz method, despite its high processing speed (0.62 s), exhibits comparatively elevated errors with an MSE of 1.6 × 10⁻⁷ and a greater total variation. The Tikhonov–Kaczmarz and TV–Kaczmarz methods provide a balanced trade-off between accuracy and efficiency, with Tikhonov–Kaczmarz showing a particularly low MSE (2.4 × 10⁻⁸) and minimal total variation (3.7 × 10⁻⁷) at a moderate runtime (1.26 s). Additionally, MaxEnt delivers a high peak signal-to-noise ratio (35.3 dB), whereas PALMA, despite its extended computational time (261.88 s), exhibits the poorest accuracy, as evidenced by its high MSE and Wasserstein values. The results emphasize the inherent trade-offs between computational efficiency and reconstruction fidelity, guiding the selection of an appropriate algorithm based on specific application needs. The results in Table 1 indicate that the TRAIn and Wasserstein–Kaczmarz methods achieve the highest reconstruction accuracy by recording the lowest MSE and Wasserstein distance values, with competitive PSNR figures. Conversely, while the classical Kaczmarz method is the fastest in terms of computational time, its performance is compromised by higher errors. Tikhonov and TV regularizations achieve a balance between accuracy and computational cost, thus rendering them viable options when moderate reconstruction quality is deemed acceptable alongside efficiency.

5.3. Multi-Component Distribution Reconstruction

The best algorithms from the previous section were then evaluated further on multi-modal synthetic distributions characterized by closely overlapping diffusion peaks, emulating realistic and analytically challenging scenarios. Under such stringent conditions, Wasserstein–Kaczmarz and TRAIn demonstrated remarkable stability and precision, adeptly disentangling and retrieving individual distribution components in the face of substantial noise contamination. From a pragmatic standpoint, the minimum resolvable separation between adjacent diffusion coefficients (ΔD_min) is a pivotal parameter for experimental applicability. Nevertheless, it is not necessary to investigate ΔD_min explicitly within the purely mathematical framework presented herein.
Table 2 presents a quantitative comparison of the reconstruction algorithms’ performance on multi-component synthetic data contaminated with 1% noise. Metrics reported include mean squared error (MSE), Wasserstein distance, total variation (TV), Peak signal-to-noise ratio (PSNR), and computational time.
Figure 3 provides a visual representation of the true distribution, alongside the clean and noisy signals. This qualitative visualization underscores the severity of noise interference and highlights the necessity for robust reconstruction methods.
Figure 4 visually demonstrates the algorithms’ reconstruction fidelity, particularly highlighting the robustness and accuracy of Wasserstein–Kaczmarz. Table 2 reveals clear differences among the evaluated methods. TRAIn achieved the lowest MSE and Wasserstein distance, indicating superior accuracy in reconstructing the original distributions despite having a higher total variation. Wasserstein–Kaczmarz also displayed excellent results with slightly higher MSE but significantly reduced computational time, showcasing its strength as an efficient and accurate alternative. MaxEnt provided rapid reconstruction but compromised somewhat on Wasserstein distance accuracy. Lastly, Tikhonov–Kaczmarz, although highly effective at noise mitigation, lagged in computational speed. Based on these observations, the algorithms are ranked as follows:
  • TRAIn (best accuracy, balanced efficiency).
  • Wasserstein–Kaczmarz (excellent accuracy and fastest reconstruction).
  • MaxEnt (fast computation, moderate accuracy).
  • Tikhonov–Kaczmarz (good noise reduction, slower computational time).

5.4. Practical Considerations

Optimal parameter tuning was indispensable to each algorithm’s efficacy. Regularization parameters (e.g., λ and α for the Tikhonov and total-variation methods) were selected via L-curve analysis [42,43]. Furthermore, in practical diffusion NMR measurements, both the accurate recovery of absolute diffusion coefficient values and the reconstruction’s sensitivity to measurement noise critically affect robustness, while the minimum resolvable separation between adjacent diffusion coefficients (ΔD_min) represents a key benchmark for peak discrimination. The incorporation of non-negativity constraints further enhanced the physical plausibility of the reconstructed distributions. However, within the purely mathematical framework presented herein, the explicit investigation of noise sensitivity and ΔD_min was deemed unnecessary.
Figure 4. Reconstruction of multi-component synthetic data (1% noise level). Panel (a) Tikhonov–Kaczmarz shows effective noise suppression with minimal error; Panel (b) Wasserstein–Kaczmarz demonstrates superior accuracy and computational efficiency; Panel (c) maximum entropy (MaxEnt) provides good signal preservation with acceptable noise management; Panel (d) TRAIn highlights excellent reconstruction fidelity balanced with moderate processing time.

6. Discussion

This study highlights the significant potential of adapting the Kaczmarz method for solving inverse Laplace transform (ILT) problems, particularly when enhanced with robust regularization techniques. The evaluation across multiple synthetic scenarios demonstrates the substantial advantages of these adaptations in terms of accuracy, stability, and computational efficiency compared to traditional methods.
The proposed Wasserstein–Kaczmarz method, together with the other regularized variants developed here, emerged as particularly effective under challenging conditions, such as sparse, noisy, or multi-modal distributions. Wasserstein–Kaczmarz demonstrated superior computational efficiency and strong reconstruction accuracy, making it especially valuable for time-sensitive applications or computationally constrained scenarios. Meanwhile, TRAIn provided exceptional accuracy, albeit with a slightly increased computational load, reinforcing its suitability for applications requiring high-fidelity reconstruction.
While established algorithms such as MaxEnt, ITAMeD, and PALMA each possess distinct strengths—ranging from rapid computation to specific robustness features—they also exhibit limitations, particularly in complex or highly noisy conditions. For instance, MaxEnt, despite its rapid execution, showed a compromise in accuracy, particularly evident through increased Wasserstein distances. Similarly, Tikhonov–Kaczmarz, despite its strong performance in noise suppression, was limited by slower computational speeds.
From a practical laboratory standpoint, several additional considerations must guide implementation. First, the accurate determination of the absolute diffusion coefficient D is indispensable for meaningful physical interpretation, necessitating careful calibration and reference measurements. Second, the minimum resolvable separation between adjacent diffusion peaks ( Δ D m i n ) must be empirically quantified to ensure the reliable discrimination of closely spaced species. Finally, a more exhaustive investigation of signal-to-noise ratio (SNR) effects—in particular, how SNR degradation influences both peak recovery and parameter estimation—will be essential to translate these algorithms from simulation to experiment. Although such detailed experimental studies fall outside the strict mathematical framework presented here, they constitute the next steps for validating and optimizing Kaczmarz-based ILT solvers in real-world analytical contexts.
Beyond the specific findings of this study, the broader implications suggest significant potential for extending Kaczmarz-based solvers to a wide variety of other inverse problems, including tomography, seismic inversion, and medical imaging. Despite the clear benefits, limitations such as parameter tuning complexity and computational scalability need further exploration. Future research directions should focus on developing adaptive parameter selection methods and investigating hybrid regularization strategies that could further enhance performance and robustness across diverse classes of inverse problems.

7. Conclusions

The adapted Kaczmarz method, enhanced with advanced regularization strategies such as Wasserstein and trust-region techniques, represents a significant advancement in addressing the complexities associated with inverse Laplace transform problems. The results underscore the Wasserstein–Kaczmarz algorithm’s optimal balance between computational efficiency and reconstruction accuracy, positioning it as a strong candidate for real-world applications demanding rapid and accurate solutions.
The TRAIn algorithm also stands out by delivering unmatched accuracy under challenging conditions, demonstrating its value for applications prioritizing precise reconstructions despite higher computational demands.
The methodological advances presented herein not only establish new performance benchmarks for ILT solvers but also pave the way for the further investigation of iterative regularization techniques in addressing ill-posed inverse problems across a wide range of scientific and engineering applications. To ensure reproducibility and to support ongoing development, the full implementation of all algorithms described in this study (software version v1.0.0) is available at the project’s GitHub repository: https://github.com/fmarrabal/kaczmarz-ilt-solver (accessed on 25 June 2025).

Author Contributions

Conceptualization, M.G.-L., I.F. and F.M.A.-C.; methodology, M.G.-L., E.V. and V.V.; software, M.G.-L. and V.V.; validation, M.G.-L., E.V. and V.V.; formal analysis, M.G.-L.; investigation, M.G.-L., E.V. and V.V.; resources, F.M.A.-C.; data curation, M.G.-L.; writing—original draft preparation, M.G.-L.; writing—review and editing, M.G.-L., E.V., V.V., I.F. and F.M.A.-C.; visualization, M.G.-L.; supervision, I.F. and F.M.A.-C.; project administration, F.M.A.-C.; funding acquisition, I.F. and F.M.A.-C. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been funded by the State Research Agency of the Spanish Ministry of Science and Innovation (PID2021-126445OB-I00), and by the Gobierno de España MCIN/AEI/10.13039/501100011033 and Unión Europea “Next Generation EU”/PRTR (PDC2021-121248-I00, PLEC2021-007774 and CPP2022-009967).

Data Availability Statement

The data presented in this study are openly available at https://github.com/fmarrabal/kaczmarz-ilt-solver (accessed on 25 June 2025).

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A. Regularization Optimization for Ill-Posed Problems

Ill-posed problems are a class of mathematical problems that fail to meet one or more of the criteria for well-posedness established by Hadamard [7]. According to Hadamard, a problem is well-posed if it satisfies the three following conditions:
  • Existence: A solution to the problem exists.
  • Uniqueness: The solution is unique.
  • Stability: The solution’s behavior changes continuously with changes in the initial data.
If any of these criteria are not satisfied, the problem is termed ill-posed. Ill-posed problems commonly arise in inverse problems, integral equations, and partial differential equations, particularly in scenarios involving physical systems where data is noisy, incomplete, or ambiguous. Such problems are inherently unstable, meaning that small perturbations in the input data can lead to disproportionately large deviations in the solution.
These challenges necessitate additional techniques to stabilize the problem and obtain meaningful solutions. In the absence of stabilization, solutions to ill-posed problems may be non-unique, physically implausible, or overly sensitive to noise in the data.
The general form of an ill-posed problem can be expressed mathematically as
K x = f, K : X → Y,
where K is a compact operator mapping elements from the vector space X to Y, x ∈ X represents the unknown solution, and f ∈ Y is the data. Compact operators, by their nature, amplify noise, exacerbating the instability of the problem.
In practical scenarios, the observed data f is often corrupted by noise, which is represented as
f^δ = f + δ, ‖f − f^δ‖ < δ,
where δ denotes the noise level. The presence of noise makes it difficult to directly solve Equation (A1) because small errors in f can lead to large errors in x. This inherent instability underscores the need for regularization techniques to constrain the solution space and ensure stability.
Given the perturbed data f δ , the solution set can be expressed as
Q_δ := { x ∈ X : ‖K x − f^δ‖ ≤ δ }
This formulation defines a set of possible solutions that satisfy the noise-constrained condition. However, the dependence of the solution x^δ on the noisy data f^δ introduces significant challenges. The inequality ‖K x − f^δ‖ ≤ δ does not inherently ensure either stability or uniqueness of the solution. Small variations in f^δ can produce large variations in x^δ, leading to instability. Moreover, multiple values of x may satisfy the inequality, making the solution non-unique.
These issues highlight the core difficulties of solving ill-posed problems directly. Without further constraints or regularization, the resulting solutions are often impractical, unreliable, or physically meaningless. Regularization methods address these deficiencies by imposing additional structure or prior knowledge about the solution, which will be discussed in subsequent sections.
To address the instability inherent in ill-posed problems, regularization methods impose additional constraints or introduce prior knowledge about the solution. This process helps ensure that the solution is stable, unique, and physically meaningful. Regularization techniques are discussed in detail in the following sections.
To address these challenges, additional information is introduced in the form of regularization. Regularization imposes constraints or preferences on the solution, effectively narrowing the solution space to ensure stability and feasibility. This additional information can reflect physical, mathematical, or empirical considerations. For instance, in many applications, solutions are expected to be smooth or adhere to certain positivity constraints.
One widely accepted approach to regularization is Tikhonov regularization [19]. Tikhonov regularization stabilizes the solution by introducing a stabilizing functional, modifying the optimization problem to
min_{x ∈ X} ‖K x − f^δ‖² + λ ‖L x‖²,
where L is a linear operator that incorporates prior knowledge about the solution (e.g., smoothness or sparsity), and  λ > 0 is the regularization parameter that balances the trade-off between fidelity to the data and the stabilizing functional.
The choice of L and λ significantly impacts the solution. Common choices for L include the identity operator (standard Tikhonov regularization) or higher-order derivatives to enforce smoothness. Various methods exist to determine the optimal λ , such as the L-curve criterion [42], cross-validation, or generalized cross-validation [44].
The operator K in Equation (A1) is often a continuous compact operator. For many problems, K is expressed as a Fredholm integral equation of the first kind:
K x := ∫_a^b k(t, s) x(s) ds = f(t), K : X → Y, t ∈ (c, d)
Here, k ( t , s ) is the kernel function, which defines the relationship between the unknown function x ( s ) and the data f ( t ) . The kernel function k ( t , s ) itself lies in a Hilbert space, providing a structured framework for its analysis.
A Hilbert space is a complete vector space equipped with an inner product that induces a norm. The inner product, denoted by ⟨u, v⟩, is a bilinear form that satisfies the following properties for all u, v, w ∈ H and α ∈ R or C:
  • Linearity: ⟨αu + v, w⟩ = α⟨u, w⟩ + ⟨v, w⟩.
  • Symmetry: ⟨u, v⟩ = ⟨v, u⟩‾, where the bar denotes complex conjugation.
  • Positive definiteness: ⟨u, u⟩ ≥ 0, and ⟨u, u⟩ = 0 if and only if u = 0.
A norm is induced by the inner product as ‖u‖ = √⟨u, u⟩. Completeness means that every Cauchy sequence in the Hilbert space converges to an element within the space. Examples of Hilbert spaces include ℓ² (the space of square-summable sequences) and L²(a, b) (the space of square-integrable functions on [a, b]).
Compact operators such as K are characterized by their ability to map bounded sets in X to relatively compact sets in Y. This property exacerbates the ill-posedness of the problem, as small perturbations in f ( t ) can lead to disproportionately large changes in x ( s ) . Compactness implies that the image under K of any bounded sequence in X has a subsequence that converges in Y. The integral form of K highlights its dependence on the kernel function k ( t , s ) , which encapsulates the physics or underlying principles of the problem at hand.
Discretization of the Fredholm integral operator is commonly performed to solve the problem numerically. This process leads to the matrix formulation:
Y = K x
where Y represents the discrete approximation of f, x is the discrete solution, and  K is the discretized operator derived from k ( t , s ) . However, discretization introduces numerical challenges such as instability due to the ill-conditioning of K . Ill-conditioning means that small errors in f can result in large errors in x , making the system sensitive to perturbations. These challenges necessitate the use of regularization techniques to mitigate the effects of noise and stabilize the solution.
The Fredholm integral equation of the first kind provides a mathematical framework for many inverse problems. The Hilbert space setting and compact operator properties play a central role in the analysis and solution of such equations, while regularization techniques are crucial for addressing their inherent instability.
Herein, we delve into several widely used algorithms developed to solve such problems. These include CONTIN [45], Maximum Entropy [29], ITAMeD [32], TRAIn [31], and PALMA [33], among others. Each algorithm employs unique approaches to tackle the challenges posed by ill-posedness, noise, and instability in the context of Fredholm integral equations. We will explore their mathematical foundations, implementation strategies, and applications across various scientific fields.

Appendix A.1. The CONTIN Algorithm: A Constrained Regularization Approach

The CONTIN algorithm, developed by S.W. Provencher, is a powerful and flexible method for solving noisy linear operator equations, including Fredholm integral equations of the first kind. It is designed to handle the inherent ill-posedness of such problems by incorporating constraints and regularization techniques. CONTIN addresses equations of the form:
f(t_i) = ∫_a^b k(t_i, s) x(s) ds + ϵ_i, i = 1, …, N,
where f(t_i) are the observed data points, k(t, s) represents the kernel function, x(s) is the unknown solution, and ϵ_i accounts for noise. This formulation encompasses a wide range of experimental contexts, such as photon correlation spectroscopy and Laplace transforms. To solve Equation (A7), CONTIN discretizes it into a system of linear algebraic equations:
Y = K x + ϵ ,
where Y is the vector of observed data, K is the discretized kernel matrix, x is the solution vector, and  ϵ represents noise. The ill-conditioning of K necessitates regularization to ensure stable and meaningful solutions.
CONTIN minimizes the following functional to determine the optimal solution:
V(α) = ‖W(Y − K x)‖² + α² ‖R x‖²,
where W is a weighting matrix based on the covariance of the noise, R represents the regularization operator, and  α is the regularization parameter. The second term penalizes non-physical or oscillatory solutions, promoting stability and smoothness.
The functional V ( α ) comprises two main components:
  • Data fidelity term: ‖W(Y − K x)‖², which quantifies the deviation of the model solution from the observed data, weighted by the noise covariance matrix W.
  • Regularization term: α² ‖R x‖², where R encodes prior knowledge about the solution, such as smoothness or sparsity. The parameter α balances these two terms.
The minimization of V ( α ) is performed using numerical optimization techniques. The gradient of V ( α ) with respect to x is given by
∇V(α) = −2 Kᵀ Wᵀ W (Y − K x) + 2 α² Rᵀ R x.
Iterative solvers, such as conjugate gradient or quasi-Newton methods, are employed to find x , which minimizes V ( α ) under the imposed constraints.
The regularization parameter α plays a critical role in balancing the data fidelity and smoothness of the solution. CONTIN iteratively adjusts α using the following algorithm (Algorithm A1):
Algorithm A1: CONTIN algorithm for adaptive regularization
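As a sketch of one fixed-α CONTIN-style solve: the functional V(α) with a non-negativity constraint is equivalent to a non-negative least-squares problem on an augmented system. The second-difference choice for R and the identity default for W below are illustrative assumptions, not CONTIN's exact defaults.

```python
import numpy as np
from scipy.optimize import nnls

def contin_step(K, Y, alpha, W=None, R=None):
    """Minimize ||W(Y - Kx)||^2 + alpha^2 ||Rx||^2 subject to x >= 0 by
    stacking the augmented system [W K; alpha R] x ~ [W Y; 0] and solving
    with non-negative least squares."""
    m, n = K.shape
    W = np.eye(m) if W is None else W
    if R is None:                       # second-difference regularizer
        R = np.diff(np.eye(n), 2, axis=0)
    A = np.vstack([W @ K, alpha * R])
    b = np.concatenate([W @ Y, np.zeros(R.shape[0])])
    x, _ = nnls(A, b)
    return x

# Usage within the adaptive loop: x = contin_step(K, Y, alpha=0.1)
```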
This algorithm allows for the incorporation of equality and inequality constraints, which reflect prior knowledge about the solution. For instance, non-negativity constraints can be imposed to ensure that the solution x(s) remains physically meaningful:
D x ≥ d,
E x = e,
where D and E are user-defined matrices, and  d and e are vectors specifying the constraints.

Appendix A.2. The Maximum Entropy Algorithm: A Regularization Method for Ill-Posed Problems

The maximum entropy (MaxEnt) algorithm is a widely used method for solving ill-posed problems. MaxEnt relies on the principle of entropy maximization to find the most probable solution consistent with the observed data, incorporating prior knowledge while avoiding overfitting to noise. We assume an objective of the form
χ²(x) − α S(x),
or, explicitly,
J(x) = (1/2) Σ_{j=1}^{p} ( (Y_j − Σ_{i=1}^{n} K_{ji} x_i)² / σ_j² ) + α Σ_{i=1}^{n} x_i ln x_i,
where α > 0 is a Lagrange multiplier (or regularization parameter) controlling the balance between the data fidelity χ² and the entropy S(x).
For the entropy term
S(x) = −Σ_i x_i ln x_i,
its gradient with respect to x_i is −(1 + ln x_i). For the χ² term, we accumulate contributions from the residual Y − K x, scaled by 1/σ_j².
After exponentiating the negative gradient, solutions are often re-scaled to respect constraints such as Σ_i x_i = F (e.g., F = 1 for a probability distribution). This step is crucial for maintaining physically or probabilistically meaningful solutions.
To solve this ill-posed problem, MaxEnt maximizes the entropy functional S:
S = −Σ_{i=1}^{n} x_i log(x_i / F), F = Σ_{i=1}^{n} x_i,
where x_i is the discretized value of x(s) and F ensures normalization. The entropy maximization is subject to constraints that enforce consistency with the observed data. The optimization problem is formulated as
Q = S − λ χ²,
where λ is a Lagrange multiplier, and χ² quantifies the fit to the data:
χ² = Σ_{j=1}^{p} ( (Y_j − Σ_{i=1}^{n} K_{ji} x_i)² / σ_j² ),
where Y_j is the observed data, K_{ji} represents the integral kernel, x_i is the solution vector, and σ_j is the standard deviation of the noise. The optimization is performed iteratively (see Algorithm A2):
Algorithm A2: MaxEnt algorithm
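A sketch of the exponentiated-gradient iteration described above; the fixed step size and iteration count are illustrative, and production MaxEnt codes use more sophisticated step-size control.

```python
import numpy as np

def maxent(K, Y, sigma, alpha=0.1, F=1.0, step=1e-3, n_iter=2000):
    """Follow the gradient of J = 0.5*chi^2 + alpha*sum(x*ln x),
    exponentiate the negative gradient multiplicatively, and renormalize
    so that sum(x) = F, as described in the text."""
    n = K.shape[1]
    x = np.full(n, F / n)                  # flat (maximum-entropy) start
    for _ in range(n_iter):
        resid = (K @ x - Y) / sigma ** 2
        grad = K.T @ resid + alpha * (1.0 + np.log(x))  # gradient of J
        x = x * np.exp(-step * grad)       # multiplicative (entropy) update
        x *= F / x.sum()                   # enforce the normalization
    return x

# Usage: x_hat = maxent(K, Y, sigma=np.full(K.shape[0], 0.01))
```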

Appendix A.3. Trust-Region Algorithm: A Regularization Approach for Inversion Problems

The trust-region algorithm for the inversion (TRAIn) is a robust and iterative regularization technique. Traditional iterative methods often struggle to balance convergence with robustness, resulting in solutions that may either converge too slowly or become unstable in the presence of measurement errors.
To address these issues, this algorithm has been developed as a powerful iterative regularization method. TRAIn minimizes the standard least-squares objective
Φ(x) = ‖K x − Y‖₂²,
while imposing two key constraints: non-negativity of the solution (x ≥ 0) and a bound on the change between consecutive iterations (‖x^{(k+1)} − x^{(k)}‖₂ ≤ r^{(k)}), where r^{(k)} denotes the trust-region radius at iteration k. These constraints ensure that each iteration remains within a reliably stable region, effectively controlling the step size and preventing divergence.
At the heart of TRAIn lies the solution of a trust-region subproblem at each iteration. A trial step z is computed by approximately minimizing
min_z Φ(x^(k) + z)   subject to   ‖z‖_2 ≤ r^(k),
using the truncated conjugate gradient method. The quality of this trial step is then quantified by the ratio
ρ = ΔΦ_actual / ΔΦ_predicted,
which compares the actual reduction in the objective to the reduction predicted by the quadratic model. Based on the value of ρ, the trust-region radius is adjusted adaptively:
  • If ρ > μ_2, the radius is increased (r^(k+1) = ν_2 r^(k)).
  • If μ_1 ≤ ρ ≤ μ_2, the radius remains unchanged.
  • If ρ < μ_1, the radius is decreased (r^(k+1) = ν_1 r^(k)).
Typical parameter values are μ_1 = 0.25, μ_2 = 0.75, ν_1 = 0.5, and ν_2 = 2. The iterative process is terminated when the residual norm satisfies
‖K x^(k) − Y‖_2 ≤ τ ϵ,
where ϵ represents the noise level and τ is a constant slightly greater than 1 (e.g., τ = 1.02).
The following algorithm provides a detailed exposition of TRAIn, describing the adaptive strategies that guarantee both stability and convergence even in the presence of significant noise (see Algorithm A3).
Algorithm A3: Trust-region algorithm for the inversion (TRAIn)
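The following Python sketch reproduces the acceptance test and radius-update rules with the typical parameter values quoted above. For simplicity it uses a Cauchy-point (steepest-descent) trial step instead of the truncated conjugate gradient subproblem solver of the published method, so it should be read as an illustration of the trust-region logic rather than as TRAIn itself.

```python
import numpy as np

def train_sketch(K, Y, eps, tau=1.02, r0=1.0,
                 mu1=0.25, mu2=0.75, nu1=0.5, nu2=2.0, max_iter=500):
    """Simplified TRAIn-style loop: Cauchy-point trial steps clipped to the
    trust region, non-negativity projection, and adaptive radius control."""
    phi = lambda v: np.sum((K @ v - Y) ** 2)
    x = np.zeros(K.shape[1])
    r = r0
    for _ in range(max_iter):
        g = 2.0 * K.T @ (K @ x - Y)                  # gradient of phi at x
        t = (g @ g) / (2.0 * np.sum((K @ g) ** 2) + 1e-30)
        z = -t * g                                   # steepest-descent trial step
        nz = np.linalg.norm(z)
        if nz > r:                                   # clip the step to the region
            z *= r / nz
        x_trial = np.maximum(x + z, 0.0)             # enforce x >= 0
        actual = phi(x) - phi(x_trial)
        predicted = -(g @ z) - np.sum((K @ z) ** 2)  # quadratic-model decrease
        rho = actual / (predicted + 1e-30)
        if rho > mu2:                                # radius-update rules
            r *= nu2
        elif rho < mu1:
            r *= nu1
        if actual > 0:                               # accept improving steps
            x = x_trial
        if np.linalg.norm(K @ x - Y) <= tau * eps:   # discrepancy stopping rule
            break
    return x
```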

Appendix A.4. Iterative Thresholding Algorithm for Inversion Problems

The iterative thresholding algorithm for multiexponential decay (ITAMeD) is a robust and efficient regularization method: by combining sparsity-promoting regularization with the fast iterative shrinkage-thresholding algorithm (FISTA), it provides high-resolution reconstructions of the inverse Laplace transform. This task is notoriously ill-posed, highly sensitive to noise, and therefore requires regularization. The method formulates the problem as an ℓ_1-regularized least-squares minimization:
min_x (1/2) ‖K x − Y‖_2^2 + τ ‖x‖_1,
where
  • K is the compact kernel operator mapping elements of the solution space X to the data space Y;
  • x is the discretized unknown solution;
  • Y is the discrete approximation of the measured signal f;
  • ‖x‖_1 promotes sparsity in x;
  • τ is the regularization parameter.
FISTA is employed to efficiently solve the optimization problem. The algorithm iteratively updates the solution x as follows (see Algorithm A4):
Algorithm A4: Iterative update algorithm with proximal operator
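A compact, self-contained FISTA sketch for this ℓ_1 problem is shown below. The non-negativity projection inside the loop is an extra assumption (common for diffusion distributions) rather than part of the generic FISTA update, and the fixed iteration count stands in for a proper convergence test.

```python
import numpy as np

def itamed_fista(K, Y, tau, n_iter=1000):
    """FISTA sketch for min_x 0.5*||Kx - Y||_2^2 + tau*||x||_1."""
    L = np.linalg.norm(K, 2) ** 2            # Lipschitz constant of the gradient
    soft = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s, 0.0)
    x = np.zeros(K.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = K.T @ (K @ y - Y)             # gradient of the smooth term at y
        x_new = soft(y - grad / L, tau / L)  # proximal (soft-threshold) step
        x_new = np.maximum(x_new, 0.0)       # non-negativity (added assumption)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x
```

Each iteration costs one multiplication by K and one by its transpose, which is consistent with the modest runtimes reported for ITAMeD in Table 1.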

Appendix A.5. PALMA Algorithm: Hybrid Regularization for Inversion Problems

The PALMA algorithm, standing for “Proximal Algorithm for ℓ_1 combined with MAxEnt prior”, is a hybrid regularization method designed to address the challenges of inversion problems. It integrates the principles of sparsity and entropy maximization to provide robust and efficient solutions to the inverse Laplace transform problem. The PALMA algorithm solves the following constrained optimization problem:
min_{x ∈ ℝ^N} Ψ(x)   subject to   ‖K x − Y‖_2 ≤ η,
where:
  • Ψ ( x ) is the hybrid regularization function combining entropy and sparsity priors;
  • η is the tolerance related to the experimental noise level.
The hybrid regularization function is defined as
Ψ(x) = λ ent(x, a) + (1 − λ) ‖x‖_1,
where
  • ent(x, a) is the negative Shannon entropy with a flat prior a:
    ent(x, a) = ∑_{i=1}^{N} (x_i / a) log(x_i / a);
  • ‖x‖_1 = ∑_{i=1}^{N} |x_i| is the ℓ_1 norm promoting sparsity;
  • λ ∈ [0, 1] balances the trade-off between entropy and sparsity.
The PALMA algorithm employs a proximal optimization approach to solve the hybrid regularization problem. The key steps are summarized in the following algorithm (Algorithm A5):
Algorithm A5: Proximal and projection algorithm
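To illustrate the proximal machinery, the sketch below evaluates the elementwise prox of the hybrid penalty in closed form via the Lambert W function and embeds it in a plain forward-backward loop with a noise-level stopping rule. This is a simplification of PALMA's proximal-projection scheme (in particular, it does not perform the exact projection onto the ball ‖K x − Y‖_2 ≤ η), and all parameter defaults are assumptions for this example.

```python
import numpy as np
from scipy.special import lambertw

def prox_hybrid(v, gamma, lam, a):
    """Elementwise prox of gamma*[lam*ent(x, a) + (1 - lam)*||x||_1], x >= 0.
    The optimality condition x + (gamma*lam/a)*log(x/a) = b has the closed
    form x = a*k*W(exp(m/k)/k), with k and m as below (assumes lam > 0)."""
    c = gamma * lam / a
    b = v - c - gamma * (1.0 - lam)
    k, m = c / a, b / a
    arg = np.exp(np.clip(m / k, -700.0, 700.0)) / k   # guard against overflow
    return a * k * lambertw(arg).real

def palma_sketch(K, Y, eta, lam=0.1, a=1.0, n_iter=300):
    """Simplified forward-backward loop: gradient step on the misfit, prox
    step on the hybrid penalty, stopping once ||Kx - Y||_2 <= eta."""
    gamma = 1.0 / np.linalg.norm(K, 2) ** 2   # step size from the spectral norm
    x = np.full(K.shape[1], a)                # start from the flat prior
    for _ in range(n_iter):
        x = prox_hybrid(x - gamma * K.T @ (K @ x - Y), gamma, lam, a)
        if np.linalg.norm(K @ x - Y) <= eta:  # noise-tolerance stopping rule
            break
    return x
```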
The PALMA algorithm requires the careful selection of parameters. The entropy prior a is typically chosen as the area under the expected spectrum, providing a meaningful baseline for the algorithm. The noise tolerance η is estimated based on the noise level in the experimental data, ensuring the algorithm’s robustness against measurement inaccuracies. Lastly, the weight λ balances the influence of entropy and sparsity in the regularization process, with suggested values ranging between 0.01 and 0.5 to achieve an optimal trade-off between these competing factors.

References

  1. Carslaw, H.S.; Jaeger, J.C. Conduction of Heat in Solids, 2nd ed.; Clarendon Press: Oxford, UK, 1959.
  2. Monde, M. Analytical method in inverse heat transfer problem using Laplace transform technique. Int. J. Heat Mass Transf. 2000, 43, 3965–3975.
  3. Desoer, C.A.; Kuh, E.S. Basic Circuit Theory; McGraw-Hill: New York, NY, USA, 1969.
  4. Inman, D.J. Engineering Vibration, 2nd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2001.
  5. Ogata, K. Modern Control Engineering, 4th ed.; Prentice Hall PTR: Hoboken, NJ, USA, 2001.
  6. Ioannidis, G.S.; Nikiforaki, K.; Kalaitzakis, G.; Karantanas, A.; Marias, K.; Maris, T.G. Inverse Laplace transform and multiexponential fitting analysis of T2 relaxometry data: A phantom study with aqueous and fat containing samples. Eur. Radiol. Exp. 2020, 4, 28.
  7. Hadamard, J. Lectures on Cauchy’s Problem in Linear Partial Differential Equations; Mrs. Hepsa Ely Silliman Memorial Lectures; Yale University Press: New York, NY, USA, 1923.
  8. Bertero, M.; Boccacci, P. Introduction to Inverse Problems in Imaging; CRC Press: Boca Raton, FL, USA, 2020.
  9. Beck, J.V.; Clair, C.R.; Blackwell, B. Inverse Heat Conduction; John Wiley and Sons: New York, NY, USA, 1985.
  10. Ha, W.; Shin, C. Laplace-domain full-waveform inversion of seismic data lacking low-frequency information. Geophysics 2012, 77, R199–R206.
  11. Holt, S.; Qian, Z.; van der Schaar, M. Neural Laplace: Learning diverse classes of differential equations in the Laplace domain. Proc. Mach. Learn. Res. 2022, 162, 8811–8832.
  12. Calvetti, D.; Kaipio, J.P.; Somersalo, E. Inverse problems in the Bayesian framework. Inverse Probl. 2014, 30, 110301.
  13. Calvetti, D.; Somersalo, E. Hypermodels in the Bayesian imaging framework. Inverse Probl. 2008, 24, 34013.
  14. Calvetti, D.; Somersalo, E. Bayesian Scientific Computing; Applied Mathematical Sciences, Vol. 215; Springer International Publishing: Cham, Switzerland, 2023.
  15. Bioucas-Dias, J.M.; Figueiredo, M.A.T. A New Twist on Image Denoising and Restoration. IEEE Trans. Image Process. 2007, 16, 3343–3355.
  16. Jaynes, E.T. Probability Theory: The Logic of Science; Cambridge University Press: Cambridge, UK, 2002.
  17. Golub, G.H.; Van Loan, C.F. Matrix Computations, 3rd ed.; Johns Hopkins University Press: Baltimore, MD, USA, 1996.
  18. Hansen, P.C. Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1999.
  19. Tikhonov, A.N. On the solution of ill-posed problems and the method of regularization. Dokl. Akad. Nauk SSSR 1963, 151, 501–504.
  20. Kaczmarz, S. Angenäherte Auflösung von Systemen linearer Gleichungen (Approximate Solution for Systems of Linear Equations). Bull. Int. Acad. Pol. Sci. Lett. 1937, 35, 355–357.
  21. Strohmer, T.; Vershynin, R. A Randomized Kaczmarz Algorithm with Exponential Convergence. J. Fourier Anal. Appl. 2009, 15, 262–278.
  22. Gower, R.M.; Richtárik, P. Randomized iterative methods for linear systems. SIAM J. Matrix Anal. Appl. 2015, 36, 1660–1690.
  23. He, S.; Dong, Q.L.; Li, X. The randomized Kaczmarz algorithm with the probability distribution depending on the angle. Numer. Algorithms 2023, 93, 415–440.
  24. Gordon, R.; Bender, R.; Herman, G.T. Algebraic Reconstruction Techniques (ART) for three-dimensional electron microscopy and X-ray photography. J. Theor. Biol. 1970, 29, 471–481.
  25. Chi, Y.; Lu, Y.M. Kaczmarz Method for Solving Quadratic Equations. IEEE Signal Process. Lett. 2016, 23, 1183–1187.
  26. Kelly, J.D. Reconciliation of process data using other projection matrices. Comput. Chem. Eng. 1999, 23, 785–789.
  27. Gamboa, F.; Gassiat, E. Bayesian methods and maximum entropy for ill-posed inverse problems. Ann. Stat. 1997, 25, 328–350.
  28. Wright, K.M. Chapter 2: Maximum entropy methods in NMR data processing. In Data Handling in Science and Technology; Elsevier: Amsterdam, The Netherlands, 1996; pp. 25–43.
  29. Delsuc, M.A.; Malliavin, T.E. Maximum Entropy Processing of DOSY NMR Spectra. Anal. Chem. 1998, 70, 2146–2148.
  30. Coleman, T.F.; Li, Y. An interior trust region approach for nonlinear minimization subject to bounds. SIAM J. Optim. 1996, 6, 418–445.
  31. Xu, K.; Zhang, S. Trust-Region Algorithm for the Inversion of Molecular Diffusion NMR Data. Anal. Chem. 2014, 86, 592–599.
  32. Urbańczyk, M.; Bernin, D.; Koźmiński, W.; Kazimierczuk, K. Iterative Thresholding Algorithm for Multiexponential Decay Applied to PGSE NMR Data. Anal. Chem. 2013, 85, 1828–1833.
  33. Cherni, A.; Chouzenoux, E.; Delsuc, M.A. PALMA, an improved algorithm for DOSY signal processing. Analyst 2017, 142, 772–779.
  34. Arrabal-Campos, F.M.; Aguilera-Sáez, L.M.; Fernández, I. A diffusion NMR method for the prediction of the weight-average molecular weight of globular proteins in aqueous media of different viscosities. Anal. Methods 2019, 11, 142–147.
  35. Pessôa, L.C.; Attar, S.B.e.; Sánchez-Zurano, A.; Ciardi, M.; Morillas-España, A.; Ruiz-Martínez, C.; Fernández, I.; Arrabal-Campos, F.M.; Pontes, L.A.; Silva, J.; et al. Exopolysaccharides as bio-based rheology modifiers from microalgae produced on dairy industry waste: Towards a circular bioeconomy approach. Int. J. Biol. Macromol. 2024, 279, 135246.
  36. Combettes, P.L.; Pesquet, J.C. Proximal Splitting Methods in Signal Processing. In Springer Optimization and Its Applications; Springer: New York, NY, USA, 2011; Volume 49, pp. 185–212.
  37. Nedić, A.; Bertsekas, D.P. Incremental Subgradient Methods for Nondifferentiable Optimization. SIAM J. Optim. 2001, 12, 109–138.
  38. Raguet, H.; Fadili, J.; Peyré, G. A Generalized Forward-Backward Splitting. SIAM J. Imaging Sci. 2013, 6, 1199–1226.
  39. Jordan, R.; Kinderlehrer, D.; Otto, F. The Variational Formulation of the Fokker–Planck Equation. SIAM J. Math. Anal. 1998, 29, 1–17.
  40. Mokrov, P.; Korotin, A.; Li, L.; Genevay, A.; Solomon, J.; Burnaev, E. Large-Scale Wasserstein Gradient Flows. Adv. Neural Inf. Process. Syst. 2021, 19, 15243–15256.
  41. Santambrogio, F. Optimal Transport for Applied Mathematicians; Progress in Nonlinear Differential Equations and Their Applications, Vol. 87; Springer International Publishing: Cham, Switzerland, 2015.
  42. Hansen, P.C. The L-Curve and its Use in the Numerical Treatment of Inverse Problems. In Computational Inverse Problems in Electrocardiology; Johnston, P., Ed.; WIT Press: Billerica, MA, USA, 2000; Volume 4, pp. 119–142.
  43. Lloyd, J.J.; Taylor, C.J.; Lawson, R.S.; Shields, R.A. The use of the L-curve method in the inversion of diffusion battery data. J. Aerosol Sci. 1997, 28, 1251–1264.
  44. Golub, G.H.; Heath, M.; Wahba, G. Generalized Cross-Validation as a Method for Choosing a Good Ridge Parameter. Technometrics 1979, 21, 215–223.
  45. Provencher, S.W. CONTIN: A general purpose constrained regularization program for inverting noisy linear algebraic and integral equations. Comput. Phys. Commun. 1982, 27, 229–242.
Figure 1. Visual comparison of the true distribution, the true signal, and the noisy signal.
Figure 2. Comparison of reconstruction algorithms on single-component synthetic data (1% noise level). Panel (a) shows the Wasserstein–Kaczmarz method, which achieves robust noise mitigation with minimal reconstruction error as quantified by low MSE and Wasserstein distance. Panel (b) displays the TRAIn algorithm, noted for its superior accuracy and balanced performance between error reduction and computational efficiency. Panel (c) illustrates the maximum entropy (MaxEnt) approach, offering a compromise that preserves signal details while effectively suppressing noise. Panel (d) presents the Tikhonov–Kaczmarz method, which delivers efficient regularization with an optimal trade-off between reconstruction fidelity and processing time.
Figure 3. Comparison of true and noisy signals with the underlying true distribution.
Table 1. Algorithm performance comparison on single-component synthetic data (1% noise level).
| Algorithm            | MSE        | Wasserstein | TV         | PSNR (dB) | Time (s) |
|----------------------|------------|-------------|------------|-----------|----------|
| Classical Kaczmarz   | 1.6 × 10⁻⁷ | 5.6 × 10⁻⁵  | 3.5 × 10⁻² | 33.57     | 0.42     |
| Tikhonov–Kaczmarz    | 2.4 × 10⁻⁸ | 5.2 × 10⁻⁵  | 3.7 × 10⁻⁷ | 33.94     | 1.26     |
| TV–Kaczmarz          | 4.1 × 10⁻⁷ | 1.3 × 10⁻⁴  | 2.8 × 10⁻² | 29.5      | 3.06     |
| Wasserstein–Kaczmarz | 4.7 × 10⁻⁸ | 6.1 × 10⁻⁵  | 6.9 × 10⁻³ | 21.54     | 0.53     |
| CONTIN               | 1.6 × 10⁻⁷ | 9.0 × 10⁻⁵  | 3.2 × 10⁻² | 33.55     | 2.13     |
| MaxEnt               | 1.1 × 10⁻⁷ | 5.7 × 10⁻⁵  | 4.2 × 10⁻² | 35.3      | 1.46     |
| TRAIn                | 1.5 × 10⁻⁸ | 5.4 × 10⁻⁸  | 8.1 × 10⁻³ | 29.4      | 19.36    |
| ITAMeD               | 2.2 × 10⁻⁸ | 6.2 × 10⁻⁵  | 1.9 × 10⁻³ | 26.7      | 4.35     |
| PALMA                | 1.2 × 10⁻⁶ | 3.2 × 10⁻⁴  | 3.3 × 10⁻² | 24.9      | 4261.88  |
Table 2. Algorithm performance comparison on multi-component synthetic data (1% noise level).
| Algorithm            | MSE        | Wasserstein | TV         | PSNR (dB) | Time (s) |
|----------------------|------------|-------------|------------|-----------|----------|
| Tikhonov–Kaczmarz    | 5.4 × 10⁻⁷ | 3.5 × 10⁻⁴  | 1.2 × 10⁻² | 19.19     | 108.22   |
| Wasserstein–Kaczmarz | 3.6 × 10⁻⁷ | 2.6 × 10⁻⁴  | 1.4 × 10⁻² | 20.98     | 6.67     |
| MaxEnt               | 4.1 × 10⁻⁷ | 23.1 × 10⁻⁴ | 1.6 × 10⁻² | 20.42     | 2.08     |
| TRAIn                | 2.1 × 10⁻⁷ | 2.3 × 10⁻⁴  | 7.9 × 10⁻² | 19.49     | 22.35    |