Article

CLSP: Linear Algebra Foundations of a Modular Two-Step Convex Optimization-Based Estimator for Ill-Posed Problems

Faculty of International Relations, Prague University of Economics and Business, W. Churchill Sq. 1938/4, Žižkov, 130 67 Prague, Czech Republic
Mathematics 2025, 13(21), 3476; https://doi.org/10.3390/math13213476
Submission received: 24 September 2025 / Revised: 25 October 2025 / Accepted: 27 October 2025 / Published: 31 October 2025
(This article belongs to the Section D: Statistics and Operational Research)

Abstract

This paper develops the linear-algebraic foundations of the Convex Least Squares Programming (CLSP) estimator and constructs its modular two-step convex optimization framework, capable of addressing ill-posed and underdetermined problems. After reformulating a problem in its canonical form, $A^{(r)} z^{(r)} = b$, Step 1 yields an iterated (if $r > 1$) minimum-norm least-squares estimate $\hat z^{(r)} = (A_Z^{(r)})^{\dagger} b$ on a constrained subspace defined by a symmetric idempotent $Z$ (reducing to the Moore–Penrose pseudoinverse when $Z = I$). The optional Step 2 corrects $\hat z^{(r)}$ by solving a convex program, which penalizes deviations using a Lasso/Ridge/Elastic-Net-inspired scheme parameterized by $\alpha \in [0, 1]$ and yields $\hat z^{*}$. The second step guarantees a unique solution for $\alpha \in (0, 1]$ and coincides with the Minimum-Norm BLUE (MNBLUE) when $\alpha = 1$. This paper also proposes an analysis of numerical stability and CLSP-specific goodness-of-fit statistics, such as partial $R^2$, normalized RMSE (NRMSE), Monte Carlo t-tests for the mean of the NRMSE, and condition-number-based confidence bands. The three special CLSP problem cases are then tested in a 50,000-iteration Monte Carlo experiment and on simulated numerical examples. The estimator has a wide range of applications, including interpolating input–output tables and structural matrices.
MSC:
90C25; 65F22; 65F20; 65F35; 65C05
JEL Classification:
C13; C61; C63; C15

1. Introduction

As the existing literature on optimization—for example, Nocedal and Wright [1] (pp. 1–9) and Boyd and Vandenberghe [2] (pp. 1–2)—points out, constrained optimization plays an important role not only in natural sciences, where it was first developed by Euler, Fermat, Lagrange, Laplace, Gauss, Cauchy, Weierstrass, and others, but also in economics and its numerous applications, such as econometrics and operations research, including managerial planning at different levels, logistics optimization, and resource allocation. The history of modern constrained optimization began in the late 1940s to 1950s, when economics, already heavily reliant on calculus since the marginalist revolution of 1871–1874 [3] (pp. 43–168), and the emerging discipline of econometrics, standing on the shoulders of model fitting [4] (pp. 438–457), internalized a new array of quantitative methods referred to as programming after the U.S. Army’s logistic jargon. Programming—including classical, linear, quadratic, and dynamic variants—originated with Kantorovich in economic planning in the USSR and was independently discovered and developed into a mathematical discipline by Dantzig in the United States (who introduced the bulk of the terminology and the simplex algorithm) [5] (pp. 1–51, [6]). Since the pivotal proceedings published by Koopmans [7], programming has become an integral part of mathematical economics [8,9,10], in both microeconomic analysis (an overview is provided in Intriligator and Arrow [11] (pp. 76–91) and macroeconomic modeling, such as Leontief’s input–output framework outlined in Koopmans [7] (pp. 132–173) and, in an extended form, in Dorfman et al. [12] (pp. 204–264).
Calculus, model fitting, and programming are, however, subject to strong assumptions, including function smoothness, full matrix ranks, well-posedness, well-conditionedness, feasibility, non-degeneracy, and non-cycling (in the case of the simplex algorithm) as summarized by Nocedal and Wright [1] (pp. 304–354, 381–382) and Intriligator and Arrow [11] (pp. 15–76). These limitations have been partially circumvented by developing more strategy-based than analysis-based, computationally intensive constraint programming algorithms; consult Frühwirth and Abdennadher [13] and Rossi et al. [14] for an overview. Still, specific cases of constrained optimization problems in econometric applications, such as incorporating the textbook two-quarter business-cycle criterion into a general linear model [15] or solving underdetermined linear programs to estimate an investment matrix [16], remain unresolved within the existing methodologies and algorithmic frameworks. In response, this text develops a modular two-step convex programming methodology to formalize such problems as a linear system of the form A z = b , including linear(ized) constraints C x + S y * = b 1 : k , with k m , where A R m × n is a partitioned matrix, C R k × p and S R k × ( n p ) are its submatrices, b R m is a known vector, and z R n is the vector of unknowns, which includes both the target solution subvector x R p and an auxiliary slack-surplus subvector y * R n p , introduced to accommodate inequalities in constraints.
The methodology (Convex Least Squares Programming, or CLSP, framework) comprises selectively omittable and repeatable actions for enhanced flexibility—and consists of two steps: (a) the first, grounded in the theory of (constrained) pseudoinverses as summarized in Rao and Mitra [17], Ben-Israel and Greville [18], Lawson and Hanson [19], and Wang et al. [20], is iterable to refine the solution and serves as a mandatory baseline in cases where the second step becomes infeasible (due to mutually inconsistent constraints)—as exemplified by Whiteside et al. [21] for regression-derived constraints and by Blair [22,23] in systems with either too few or too many effective constraints; and (b) the second, grounded in the theory of regularization and convex programming as summarized in Tikhonov et al. [24], Gentle [4], Nocedal and Wright [1], and Boyd and Vandenberghe [2], provides an optional correction of the first step, drawing on the logic of Lasso, Ridge, and Elastic Net to address ill-posedness and compensate for constraint violations or residual approximation errors resulting from the first-step minimum-norm LS estimate. Hence, it can be claimed that the proposed framework is grounded in the general algorithmic logic of Wolfe [25] and Dax [26] as further formalized by Osborne [27], who were among the first to introduce an additional estimation step into simplex-based solvers, and is comparable to the algorithms of Übi [28,29,30,31,32], the closest and most recent elaborations on the topic.
The aim of this work is to define the linear algebra foundations (the theoretical base) of the CLSP framework, present supporting theorems and lemmas, develop tools for analyzing the numerical stability of matrix $A$, and discuss the goodness of fit of vector $\hat z^{*}$, illustrating them on simulated examples. Topics such as the maximum likelihood estimation of $\hat z^{*}$ and the corresponding information criteria (Akaike and Bayesian) are beyond its present scope. The text is organized as follows. Section 2 summarizes the historical development and recent advances in convex optimization, motivating the formulation of a new methodology. Section 3 presents the formalization of the estimator, while Section 4 develops a sensitivity analysis for $C$. Section 5 introduces goodness-of-fit statistics, including the normalized root mean square error (NRMSE) and its sample-mean test. Section 6 presents special cases, and Section 7 reports the results of a Monte Carlo experiment and solves simulated problems via CLSP-based Python 3 modules. Section 8 concludes with a general discussion. Throughout, the following notation is used: bold uppercase letters (e.g., $A$, $B$, $C$, $G$, $M$, and $S$) denote matrices; bold lowercase letters (e.g., $b$, $x$, $y$, and $z$) denote column vectors; italic letters (e.g., $i$, $j$, $k$, $m$, $n$, $p$, $\lambda$, $\omega$) denote scalars; the transpose of a matrix is denoted by the superscript $\top$ (e.g., $X^{\top}$); the inverse by $-1$ (e.g., $Q^{-1}$); generalized inverses are indexed using curly braces (e.g., $G^{\{1,2\}}$); the Moore–Penrose pseudoinverse is denoted by a dagger (e.g., $A^{\dagger}$); $\ell_p$ norms, where $1 \le p \le \infty$, are denoted by double bars (e.g., $\|\cdot\|_2$); and condition numbers by $\kappa(\cdot)$. All functions, scalars, vectors, and matrices are defined over the real numbers ($\mathbb{R}$).

2. Historical and Conceptual Background of the CLSP Framework

The methodology of convex optimization—formally, x * = arg   min x R n f ( x ) , subject to convex inequality constraints g i ( x ) 0 and affine equality constraints h j ( x ) = 0 as defined by Boyd and Vandenberghe [2] (pp. 7, 127–129)—has evolved over more than two centuries through several milestones, including generalized inverses and linear and (convex) quadratic programming, each relaxing the strong assumptions of its predecessors. As documented in the seminal works of Rao and Mitra [17] (pp. vii–viii) and Ben-Israel and Greville [18] (pp. 4–5, 370–374), the first pseudoinverses (original term, now typically reserved for A ) or generalized inverses (current general term) emerged in the theory of integral operators—introduced by Fredholm in 1903 and further developed by Hurwitz in 1912—and, implicitly, in the theory of differential operators by Hilbert in 1904, whose work was subsequently extended by Myller in 1906, Westfall in 1909, Bounitzky in 1909, Elliott in 1928, and Reid in 1931. The cited authors attribute their first application to matrices to Moore in 1920 under the term general reciprocals [33] (though some sources suggest he may have formulated the idea as early as 1906), with independent formulations by Siegel in 1937 and Rao in 1955, and generalizations to singular operators by Tseng in 1933, 1949, and 1956, Murray and von Neumann in 1936, and Atkinson in 1952–1953. A theoretical consolidation occurred with Bjerhammar’s [34] rediscovery of Moore’s formulation, followed by Penrose’s [35,36] introduction of four conditions defining the unique least-squares minimum-norm generalized inverse G A , which, such that G R n × m , can be expressed in Equation (1):
$$(\mathrm{i})\ A G A = A \qquad (\mathrm{ii})\ G A G = G \qquad (\mathrm{iii})\ (A G)^{\top} = A G \qquad (\mathrm{iv})\ (G A)^{\top} = G A$$
In econometrics, it led to conditionally unbiased (minimum bias) estimators [37] and a redefinition of the Gauss–Markov theorem [38] in the 1960s, superseding earlier efforts [39].
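To make the four conditions in Equation (1) concrete, the following minimal numpy sketch verifies them for the Moore–Penrose pseudoinverse of a randomly generated rank-deficient matrix; the matrix, seed, and tolerances are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal check of the four Penrose conditions (Equation (1)) for the
# Moore-Penrose pseudoinverse of an arbitrary (here rank-deficient) matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))  # rank <= 3
G = np.linalg.pinv(A)

checks = {
    "(i)   A G A = A": np.allclose(A @ G @ A, A),
    "(ii)  G A G = G": np.allclose(G @ A @ G, G),
    "(iii) (A G)' = A G": np.allclose((A @ G).T, A @ G),
    "(iv)  (G A)' = G A": np.allclose((G @ A).T, G @ A),
}
for name, ok in checks.items():
    print(name, ok)
```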
Further contributions to the theory, calculation, and diverse applications of G were made in the 1950s to early 1970s by Rao and Mitra [17,40] (a synthesis of the authors’ previous works from 1955–1969), Ben-Israel and Greville [18], Greville [41,42,43,44], Cline [45], Cline and Greville [46], Golub and Kahan [47], Ben-Israel and Cohen [48], and Lewis and Newman [49] (the most cited sources on the topic in the Web of Science database, although the list is not exhaustive; consult Rao and Mitra [17] (pp. vii–viii) for an extended one), which, among others, introduced the concept of a { · } -generalized inverse G A { · } , where A { · } R n × m is the space of the { · } -generalized inverses of A R m × n — hereafter, { i , j } -inverses for i > 1 , { i , j , k } -inverses, Drazin inverses, and higher-order G are disregarded because of their inapplicability to the CLSP framework—by satisfying a subset or all of the (i)–(iv) conditions in Equation (1) as described in Table 1. Formally, for G { 1 } A { 1 } , I n R n × n , I m R m × m , and arbitrary matrices U , V R n × m , any { 1 } -inverse, including all { 1 , j } -inverses and A , can be expressed as G = G { 1 } + ( I n G { 1 } A ) U + V ( I m A G { 1 } ) , out of which only G A and a constrained G G { 1 , 2 } R n × m —here, the Bott–Duffin inverse [50], expressed in modern notation as A S = P S L ( P S L A P S R ) P S R , where P S L R m × m and P S R R n × n are orthogonal (perpendicular) projectors onto the rows and columns defined by an orthonormal matrix S —can be uniquely defined ( A S under certain conditions) and qualify as minimum-norm unbiased estimators in the sense of Chipman [37] and Price [38].
Starting from the early 1960s, Cline [45], Rao and Mitra [17] (pp. 64–71), Meyer [52], Ben-Israel and Greville [18] (pp. 175–200), Hartwig [53], Campbell and Meyer [51] (pp. 53–61), Rao and Yanai [54], Tian [55], Rakha [56], Wang et al. [20] (pp. 193–210), and Baksalary and Trenkler [57], among others, extended the formulas for A to partitioned matrices, including the general row-wise case, A 1 R m 1 × n , A 2 R m 2 × n , m 1 + m 2 = m —and, equivalently, column-wise one, A 1 R m × n 1 , A 2 R m × n 2 , and n 1 + n 2 = n (Equation (2), used in the numerical stability analysis of the CLSP estimator for C canon = C S in the decomposition A = B C canon + R given A R m × n , C canon R k × n , and B = A C canon = A C S R m × k ):
$$\begin{bmatrix} A_1 & A_2 \end{bmatrix}^{\dagger} = \left( \begin{bmatrix} A_1 & A_2 \end{bmatrix}^{\top} \begin{bmatrix} A_1 & A_2 \end{bmatrix} \right)^{\dagger} \begin{bmatrix} A_1 & A_2 \end{bmatrix}^{\top} = \begin{bmatrix} A_1^{\top} A_1 & A_1^{\top} A_2 \\ A_2^{\top} A_1 & A_2^{\top} A_2 \end{bmatrix}^{\dagger} \begin{bmatrix} A_1^{\top} \\ A_2^{\top} \end{bmatrix}$$
where $\operatorname{rank}([A_1\ A_2]) \le \operatorname{rank}(A_1) + \operatorname{rank}(A_2)$, with equality iff $\mathcal{R}(A_1) \cap \mathcal{R}(A_2) = \{0\}$, and strict inequality iff $\mathcal{R}(A_1) \cap \mathcal{R}(A_2) \ne \{0\}$. For the original definitions and formulations, consult Rao and Mitra [17] (pp. 64–66) and Ben-Israel and Greville [18] (pp. 175–200).
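As a numerical illustration of the column-wise case in Equation (2), the following numpy sketch checks the identity $A^{\dagger} = (A^{\top}A)^{\dagger}A^{\top}$ with the Gram matrix written in its $2 \times 2$ block form; the random toy blocks are assumptions made for the sketch.

```python
import numpy as np

# Illustration of Equation (2): the pseudoinverse of a column-wise partitioned
# matrix A = [A1 A2] via A^+ = (A^T A)^+ A^T, with A^T A in 2x2 block form.
rng = np.random.default_rng(1)
A1 = rng.standard_normal((6, 2))
A2 = rng.standard_normal((6, 3))
A = np.hstack([A1, A2])

gram = np.block([[A1.T @ A1, A1.T @ A2],
                 [A2.T @ A1, A2.T @ A2]])            # (A^T A) in block form
A_pinv_block = np.linalg.pinv(gram) @ np.vstack([A1.T, A2.T])

print(np.allclose(A_pinv_block, np.linalg.pinv(A)))  # True
```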
To sum up, G (especially A ), SVD, and regularization techniques—namely, Lasso (based on the 1 -norm), Tikhonov regularization (hereafter, referred to as Ridge regression, based on the 2 -norm), and Elastic Net (a convex combination of 1 and 2 norms) [4] (pp. 477–479)—have become an integral part of estimation (model-fitting), where A can be defined as a minimum-norm best linear unbiased estimator (MNBLUE), reducing to the classical BLUE in (over)determined cases. These methods have been applied in (a) natural sciences since the 1940s [24] (pp. 68–156) [34,50] (pp. 85–172, [58]) [59,60,61], and (b) econometrics since the 1960s (pp. 36–157, 174–232, [19]) [37,38] (pp. 61–106, [62]) (pp. 37–338, [63]) in both unconstrained and equality-constrained forms, thereby relaxing previous rank restrictions.
The incorporation of inequality constraints into optimization problems during the 1930s–1950s (driven by the foundational contributions of Kantorovich [64], Koopmans [7], and Dantzig [6], along with the later authors [5]) marked the next milestone in the development of convex optimization under the emerging framework of programming, namely for linear cases (LP)—formally, $x^{*} = \arg\max_{x \in \mathbb{R}^n} c^{\top}x$ subject to $Ax \le b$, $x \ge 0$ (with the dual $\lambda^{*} = \arg\min_{\lambda \in \mathbb{R}_{+}^{m}} b^{\top}\lambda$ subject to $A^{\top}\lambda \ge c$), where $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, and $c \in \mathbb{R}^n$—and convex quadratic ones (QP)—formally, $x^{*} = \arg\min_{x \in \mathbb{R}^n} \tfrac12 x^{\top}Qx + c^{\top}x$ subject to $Ax \le b$ (with the dual $\lambda^{*} = \arg\max_{\lambda \in \mathbb{R}_{+}^{m}} -\tfrac12 (A^{\top}\lambda + c)^{\top} Q^{-1} (A^{\top}\lambda + c) - b^{\top}\lambda$), where $Q \in \mathbb{R}^{n \times n}$ is symmetric positive definite, $c \in \mathbb{R}^n$, $A \in \mathbb{R}^{m \times n}$, and $b \in \mathbb{R}^m$—[2] (pp. 146–213) (for a classification of programming, consult Figure 1) (pp. 355–597, [1]) (pp. 1–31, [6]), with Dantzig’s simplex and Karmarkar’s interior-point algorithms being efficient solvers [65,66].
In the late 1950s–1970s, the application of LP and QP extended to LS problems (which can be referred to as a major generalization of mainstream convex optimization methods; consult Boyd and Vandenberghe [2] (pp. 291–349) for a comprehensive modern overview) with the pioneering works of Wagner [67] and Sielken and Hartley [68], expanded by Kiountouzis [69] and Sposito [70], who first substituted LS with LP-based ($\ell_1$-norm) least absolute (LAD) and ($\ell_\infty$-norm) least maximum (LDP) deviations and derived unbiased ($\ell_p$-norm, $1 < p < \infty$) estimators with non-unique solutions, whilst, in the early 1990s, Stone and Tovey [66] demonstrated algorithmic equivalence between LP algorithms and iteratively reweighted LS. Judge and Takayama [71] and Mantel [72] reformulated (multiple) LS with inequality constraints as QP, and Lawson and Hanson [19] (pp. 144–147) introduced, among others, the non-negative least squares (NNLS) method. These and further developments are reflected in the second step of the CLSP framework, where $\hat z = Gb$ is corrected under a regularization scheme inspired by Lasso ($\alpha = 0$), Ridge ($\alpha = 1$), and Elastic Net ($0 < \alpha < 1$) regularization, where $\hat z^{*} = \arg\min_{z \in \mathbb{R}^n} (1-\alpha)\|z - \hat z\|_1 + \alpha\|z - \hat z\|_2^2$, subject to $Az = b$.
The final frontier that motivates the formulation of a new framework is the class of ill-posed problems in Hadamard’s sense [4] (pp. 143, 241), i.e., systems with no (i) feasibility, $\nexists\, x \in F := \{x \mid \forall i, j\colon g_i(x) \le 0,\ h_j(x) = 0\}$, (ii) uniqueness, $\exists\, x_1 \ne x_2\colon Ax_1 = Ax_2$, (iii) stability, $\exists\, \varepsilon > 0,\ \forall \delta > 0\colon \|b - b_\delta\| < \delta \wedge \|A^{\dagger}b - A^{\dagger}b_\delta\| \ge \varepsilon$, or formulation—as a well-posed LS, LP, or QP problem—of solution under mainstream assumptions, where $A \in \mathbb{R}^{m \times n}$, $x \in \mathbb{R}^n$, and $b \in \mathbb{R}^m$, such as the ones described in Bolotov [15], Bolotov [16]: namely, (a) underdetermined, $\operatorname{rank}(A) < \min(m, n)$, and/or ill-conditioned linear systems, $\kappa(A) = \|A\|_2 \|A^{\dagger}\|_2 \gg 1$, $A \in \mathbb{R}^{m \times n}$, in cases of LS and LP, where solutions are either non-unique or highly sensitive to perturbations [22,73]; and (b) degenerate, $\#\{\text{active constraints at } x\} > \dim(x) = n$, with cycling behavior, $c^{\top}x^{(k+1)} = c^{\top}x^{(k)}$ and $x^{(k+1)} \ne x^{(k)}$, $x \in \mathbb{R}^n$, in all LP and QP problems, where efficient methods, such as the simplex algorithm, may fail to converge [27,74]. Such problems have been addressed since the 1940s–1970s by (a) regularization (i.e., a method of perturbations) and (b) problem-specific algorithms [2] (pp. 455–630). For LS cases, the list includes, among others, Lasso, Ridge regularization, and Elastic Net [4] (pp. 477–479), constrained LS methods [75], restricted LS [76], LP-based sparse recovery approaches [73], Gröbner bases [77], as well as derivatives of eigenvectors and Nelson-type, BD-, QR- and SVD-based algorithms [78]. For LS and QP, among others, we have Wolfe’s regularization-based technique for simplex [25] and its modifications [27,74], Dax’s LS-based steepest-descent algorithm [26], a primal-dual NNLS-based algorithm (LSPD) [79], as well as modifications of the ellipsoid algorithm [80]. A major early attempt to unify (NN)LS, LP, and QP within a single framework to solve both well- and ill-posed convex optimization problems was undertaken by Übi [28,29,30,31,32], who proposed a series of problem-specific algorithms (VD, INE, QPLS, and LS1). Figure 2 compares all the mentioned methods by functionality.
To sum up, based on the citation and reference counts in modern literature mapping software (here, Litmaps), as well as in-group relevance (i.e., the number of citations within the collection) of the above-cited works (consult Figure 3), this text incorporates the seminal studies and an almost exhaustive representation of prior research on the topic in question.

3. Construction and Formalization of the CLSP Estimator

To accommodate the above-described problem classes, a modular two-step CLSP consists of a first (compulsory) step—minimum-norm LS estimation, based on A or A S (hereafter denoted as A Z to match z )—followed by a second (optional) step, a regularization-inspired correction. This structure ensures that CLSP is able to yield a unique solution under a strictly convex second-step correction and extends the classical LS framework in both scope and precision, providing a generalized approach that reduces the reliance on problem-specific algorithms [28,29,30,31,32]. The estimator’s algorithmic flow is illustrated in Figure 4 and formalized below.
The first process block in the CLSP algorithmic flow denotes the transformation of the initial problem— x ^ M * = Proj x M Q X x M x L R p : 1 i s g i ( x M , x L ) γ i , j h j ( x M , x L ) = η j , where Proj x M is the projection onto the x M coordinates, x M R p l is the vector of model (target) variables to be estimated ( x ^ M * ), x L R l , l 0 , is a vector of latent variables that appear only in the constraint functions g i ( · ) and h j ( · ) , such that Q X x M x L = x R p , Q X is the permutation matrix, and g i ( · ) γ i and h j ( · ) = η j are the linear(ized) inequality and equality constraints, i , j γ i , η j b —to the canonical form A ( r ) z ( r ) = b (a term first introduced in LP [6] (pp. 75–81)), where (a)  A ( r ) R m × n is the block design matrix consisting of a i g i ( · ) -and- j h j ( · ) constraints matrix C R k × p , a model matrix M = Q M L M partial 0 Q M R R ( m k ) × p , a sign slack matrix S = Q S L S s σ 1 , , σ s { 1 , + 1 } diag σ 1 , , σ s 0 Q S R R k × ( n p ) , and either a zero matrix 0 (if r = 1 ) or a reverse-sign slack matrix S residual = 0 diag sign ( b A ( r 1 ) z ^ ( r 1 ) ) k + 1 : m R ( m k ) × ( n p ) (if r > 1 ) (see Equation (3)):
$$A^{(r)} = \begin{bmatrix} C & S \\[2pt] M & \begin{cases} \mathbf{0}, & \text{initial iteration, } r = 1 \\ S_{\text{residual}}, & \text{subsequent iterations, } r > 1 \end{cases} \end{bmatrix}$$
and (b)  z ( r ) = x ( r ) y ( r ) * R n is the full vector of unknowns, comprising (model) x ( r ) R p and (slack) y ( r ) * = y ( r ) y residual ( r ) R n p , y i ( r ) 0 and y j ( r ) = 0 —an equivalent problem [2] (pp. 130–132), where g i ( · ) a = g i ( · ) + S i i y i ( r ) = 0 and h j ( · ) a = h j ( · ) . The constraint functions g i ( · ) a and h j ( · ) a must be linear(ized), so that C , S A ( r ) C x ( r ) + S y ( r ) * = b 1 : k , k m .
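For concreteness, the following sketch assembles the canonical blocks of Equation (3) for the initial iteration ($r = 1$); the particular matrices $C$, $S$, $M$ and the vector $b$ are invented for the illustration and do not come from the paper.

```python
import numpy as np

# Illustrative assembly of the canonical design matrix A^(1) of Equation (3)
# for a toy problem: 2 inequality constraints (hence 2 slack columns) and
# 3 model rows over p = 3 target variables.
C = np.array([[1., 1., 0.],
              [0., 1., 1.]])              # k x p constraints matrix
S = np.diag([1., -1.])                    # k x (n - p) sign slack matrix
M = np.eye(3)                             # (m - k) x p model matrix
Z0 = np.zeros((M.shape[0], S.shape[1]))   # zero block (r = 1)

A1 = np.block([[C, S],
               [M, Z0]])                  # A^(1), here m = n = 5
b = np.array([4., 3., 1., 2., 1.])        # stacked [b_C; b_M]
print(A1.shape, np.linalg.matrix_rank(A1))
```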
In practical implementations, for problems with ( m > m E ) ( n > n E ) , where E is an estimability limit, a block-wise estimation can be performed through the partitioning A ( r ) = A s u b s e t , i E , j E ( r ) , m subset = m reduced 1 , n subset = n reduced 1 , i E { 1 , , m m subset } , and j E { 1 , , n n subset } , leading to a total of m m subset · n n subset matrices A reduced ( r ) R m reduced × n reduced , composed of A subset ( r )  and a i , j ( r ) , i I subset { 1 , , m } , j J subset { 1 , , n } , and a total of m m subset vectors b r . R m reduced (as formalized in Equation (4)):
$$A_{\text{reduced}}^{(r)} = Q_{AI} \begin{bmatrix} A^{(r)}_{I_{\text{subset}},\, J_{\text{subset}}} & \sum_{j \notin J_{\text{subset}}} a^{(r)}_{i,j} \\[4pt] \sum_{i \notin I_{\text{subset}}} a^{(r)}_{i,j} & \sum_{i \notin I_{\text{subset}}} \sum_{j \notin J_{\text{subset}}} a^{(r)}_{i,j} \end{bmatrix} Q_{AJ}, \qquad b_{r.} = Q_{bI} \begin{bmatrix} b_{I_{\text{subset}}} \\[2pt] \sum_{i \notin I_{\text{subset}}} b_i \end{bmatrix}$$
a i , j ( r ) are then treated as slack rows/columns, i.e., incorporated into S , to act as balancing adjustments since the reduced model, by construction, tends to inflate the estimates a ^ i E , j E . However, full compensation of information loss, caused by aggregation, is infeasible; hence, estimable (smaller) full models are always preferred over reduced (per parts) estimation.
The second process block, i.e., the first step of the CLSP estimator, denotes obtaining an (iterated if r > 1 ) estimate z ^ ( r ) = G ( r ) b (alternatively, m m subset · n n subset -times z ^ reduced ( r ) = G reduced ( r ) b r . with z ^ i E , j E ( r ) = z ^ reduced , 1 : m subset , 1 : n subset ( r ) , ( i E , j E ) ) through the (constrained) Bott–Duffin inverse G ( r ) ( A Z ( r ) ) R n × m [50] (pp. 49–64, [20]), defined on a subspace specified by a symmetric idempotent matrix Z R n × n (regulating solution-space exclusion through binary entries 0/1, such that P Z L = Z restricts the domain of estimated variables to the selected subspace, while P Z R = I m leaves the data vector, i.e., the input b , unprojected). G ( r ) ( A Z ( r ) ) reduces to the Moore–Penrose pseudoinverse G ( r ) ( A ( r ) ) R n × m when Z = I n , and equals the standard inverse G ( r ) ( A ( r ) ) 1 iff Z = I n rank ( A ( r ) ) = m = n (the application of a left-sided Bott–Duffin inverse to A ( r ) z ( r ) = b is given by Equation (5)):
$$\hat z^{(r)} = \left(A_Z^{(r)}\right)^{\dagger} b = Z \left( (A^{(r)})^{\top} A^{(r)} Z \right)^{\dagger} Z (A^{(r)})^{\top} b, \qquad Z^2 = Z = Z^{\top}$$
The number of iterations (r) increases until a termination condition is met, such as error 1 m ( b A ( r ) z ^ ( r ) ) 2 1 m ( b A ( r 1 ) z ^ ( r 1 ) ) 2 < ε or r > r limit , whichever occurs first—implemented in CLSP-based software—although any goodness-of-fit metric can be employed. The described condition, maximizing the fit quality—especially under a missing second step—is, however, heuristic, with a formal proof of convergence lying beyond the scope of this work. In practical implementations, z ^ ( r ) is efficiently computed using the SVD (see Appendix A.1) with optional Ridge regularization to stabilize a nearly singular system.
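A minimal sketch of the first step follows, assuming numpy and computing the $Z$-restricted minimum-norm least-squares estimate through the SVD-based pseudoinverse of $A^{(r)}Z$, one standard route that reduces to the Moore–Penrose solution when $Z = I_n$; it is an illustration under these assumptions, not the reference CLSP implementation.

```python
import numpy as np

# Sketch of CLSP Step 1: a minimum-norm least-squares estimate restricted to
# the subspace selected by a symmetric idempotent (here binary diagonal) Z.
def clsp_step1(A, b, Z=None):
    n = A.shape[1]
    Z = np.eye(n) if Z is None else Z
    return np.linalg.pinv(A @ Z) @ b       # SVD-based pseudoinverse under the hood

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 6))            # underdetermined system (m < n)
b = rng.standard_normal(4)

z_full = clsp_step1(A, b)                  # Z = I: ordinary minimum-norm LS
Z = np.diag([1., 1., 1., 1., 0., 0.])      # exclude the last two unknowns
z_sub = clsp_step1(A, b, Z)

print(np.allclose(z_full, np.linalg.pinv(A) @ b))  # True
print(z_sub[-2:])                                  # excluded entries are (numerically) zero
```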
The third process block, i.e., the second step of the CLSP estimator, denotes obtaining a corrected final solution z ^ * (alternatively, m m subset · n n subset -times z ^ reduced * with z ^ i E , j E * = z ^ reduced , 1 : m subset , 1 : n subset * , ( i E , j E ) )—unless the algorithmic flow terminates after Step 1, in which case z ^ * = z ^ ( r ) —by penalizing deviations from the (iterated if r > 1 ) estimate z ^ ( r ) reflecting Lasso ( α = 0 z ^ * = z ^ ( r ) + u ^ , where u ^ = arg   min u u 1 , s . t . A ( r ) ( z ^ ( r ) + u ) = b ), Ridge ( α = 1 z ^ * = z ^ ( r ) + ( A ( r ) ) A ( r ) u ^ , where ( A ( r ) ) A ( r ) = P R ( A ) , a projector equal to I iff A ( r ) is of full column rank, and u ^ = arg   min u u 2 2 , s . t . A ( r ) ( z ^ ( r ) + u ) = b ), and Elastic Net ( 0 < α < 1 z ^ * = z ^ ( r ) + ( 1 α ) u ^ + α ( A ( r ) ) A ( r ) u ^ , where u ^ = arg   min u ( 1 α ) u 1 + α u 2 2 , s . t . A ( r ) ( z ^ ( r ) + u ) = b ) (pp. 306–310, [2]) (pp. 477–479, [4]) (the combination of Lasso, Ridge, and Elastic Net corrections of z ^ * is given by Equation (6)):
$$\hat z^{*} = \arg\min_{z \in \mathbb{R}^n} \; (1 - \alpha)\, \|z - \hat z^{(r)}\|_1 + \alpha\, \|z - \hat z^{(r)}\|_2^2, \qquad \alpha \in [0, 1], \qquad \text{s.t. } A^{(r)} z = b$$
where the parameter $\alpha$ may be selected arbitrarily or determined from the input $b$ using an appropriate criterion, such as minimizing prediction error via cross-validation, satisfying model-selection heuristics (e.g., AIC or BIC), or optimizing residual metrics under known structural constraints. Alternatively, $\alpha$ may be fixed at benchmark values, e.g., $\alpha \in \{0, \tfrac12, 1\}$, or based on an error rule, e.g., $\alpha = \min\left( \dfrac{ \tfrac1m \|b - A^{(r)} \hat z^{*}\|_2 \big|_{\alpha=0} }{ \tfrac1m \|b - A^{(r)} \hat z^{*}\|_2 \big|_{\alpha=0} + \tfrac1m \|b - A^{(r)} \hat z^{*}\|_2 \big|_{\alpha=1} + \varepsilon },\ 1 \right)$, $\varepsilon \to 0^{+}$—in CLSP-based software. In practical implementations, $\hat z^{*}$, obtained from solving a convex optimization problem, is efficiently computed with the help of numerical solvers.
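A sketch of the second step as a generic convex program is given below, assuming the cvxpy solver is available; the function name, data, and solver defaults are illustrative assumptions rather than the paper's reference code.

```python
import numpy as np
import cvxpy as cp

# Sketch of CLSP Step 2 (Equation (6)): an elastic-net-style correction of the
# Step-1 estimate under the equality constraints A z = b.
def clsp_step2(A, b, z_hat, alpha=0.5):
    z = cp.Variable(A.shape[1])
    objective = cp.Minimize((1 - alpha) * cp.norm1(z - z_hat)
                            + alpha * cp.sum_squares(z - z_hat))
    problem = cp.Problem(objective, [A @ z == b])
    problem.solve()
    return z.value

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 6))
b = rng.standard_normal(4)
z_hat = np.linalg.pinv(A) @ b                 # Step-1 estimate (already feasible here)

z_star = clsp_step2(A, b, z_hat, alpha=1.0)   # Ridge-type correction
print(np.allclose(A @ z_star, b, atol=1e-6))  # constraints hold
print(np.linalg.norm(z_star - z_hat))         # ~0 when z_hat is feasible
```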
To sum up, the definition of the process blocks ensures that the CLSP estimator yields a unique final solution z ^ * under regularization weight α ( 0 , 1 ] owing to the strict convexity of the objective function; see Theorem 1 (although Theorem 1 is mathematically straightforward, it is included for completeness because its corollaries establish the conditions for uniqueness and convergence that underlie special cases). For α = 0 , the solution remains convex but may not be unique unless additional structural assumptions are imposed. In the specific case of α = 1 , the estimator reduces to the Minimum-Norm Best Linear Unbiased Estimator (MNBLUE), or, alternatively, to a block-wise m subset × n subset -MNBLUE across m m subset · n n subset reduced problems (for a proof, consult Theorem 2 and its corollaries). It is, however, important to note that—when compared to the classical BLUE—the CLSP, as an MNBLUE estimator, by definition, is not constrained by the same Gauss–Markov conditions and remains well-defined under rank-deficient or underdetermined systems ( m < n ) by introducing the minimum-norm condition as a substitute for the homoscedasticity and full-rank assumptions of the original theorem. Therefore, the MNBLUE represents a generalized or, for ill-posed cases, a problem-class-specific extension of the BLUE, applicable to systems which violate one or more Gauss–Markov requirements while still preserving (a) linearity, (b) unbiasedness, and (c) minimum variance (as demonstrated in Theorem 2, although a formal theoretical treatment of the MNBLUE is beyond the scope of this work).
Theorem 1. 
Let z ^ ( r ) = ( A Z ( r ) ) b be the r-iterated estimate obtained in the first step of the CLSP algorithm, and let z ^ * be the second-step correction obtained through convex programming by solving z ^ * = arg   min z R n ( 1 α ) z z ^ ( r ) 1 + α z z ^ ( r ) 2 2 , α ( 0 , 1 ] , s . t . A ( r ) z = b (i.e., the regularization parameter excludes a pure Lasso-type correction), then the final solution z ^ * is unique.
Proof. 
Let z ^ ( r ) = ( A Z ( r ) ) b denote the linear estimator obtained via the Bott–Duffin inverse, defined on a subspace determined by a symmetric idempotent matrix Z , and producing a conditionally unbiased estimate over that subspace. The Bott–Duffin inverse ( A Z ( r ) ) is given by ( A Z ( r ) ) = Z ( A ( r ) ) A ( r ) Z Z ( A ( r ) ) , and the estimate z ^ ( r ) is unique if rank ( A ( r ) Z ) = dim ( R ( Z ) ) . In this case, z ^ ( r ) is the unique minimum-norm solution in R ( Z ) . In all other cases, the solution set is affine and given by z ^ ( r ) = ( A Z ( r ) ) b + ker ( A ( r ) Z ) R ( Z ) , where the null-space component represents degrees of freedom not fixed by the constraint A ( r ) Z , and hence z ^ ( r ) is not unique. At the same time, the second-step estimate z ^ * is obtained by minimizing the function ( 1 α ) z z ^ ( r ) 1 + α z z ^ ( r ) 2 2 over the affine (hence convex and closed) constraint set F ( r ) = z R n | A ( r ) z = b . Under α ( 0 , 1 ] , the quadratic term α z z ^ ( r ) 2 2 contributes a positive-definite Hessian 2 α I 0 to the objective function, making it a strictly convex parabola-like curvature, and, given F ( r ) , the minimizer z * exists and is unique. Therefore, the CLSP estimator with α ( 0 , 1 ] yields a unique z ^ * .    □
Corollary 1. 
Let the final solution be z ^ * = z ^ ( r ) = ( A Z ( r ) ) b , where ( A Z ( r ) ) denotes the Bott–Duffin inverse on a subspace defined by a symmetric idempotent matrix Z . Then (one-step) z ^ * is unique iff rank ( A ( r ) Z ) = dim ( R ( Z ) ) . Else, the solution set is affine and given by z ^ * ( A Z ( r ) ) b + ker ( A ( r ) Z ) R ( Z ) , which implies that the minimizer z * is not unique, i.e., # arg   min ( · ) > 1 .
Corollary 2. 
Let z ^ * be the final solution obtained in the second step of the CLSP estimator by solving z ^ * = arg   min z R n z z ^ ( r ) 1 , s . t . A ( r ) z = b (i.e., the regularization parameter is set to α = 0 , corresponding to a Lasso correction). Then the solution z ^ * exists and the problem is convex. The solution is unique iff the following two conditions are simultaneously satisfied: (1) the affine constraint set F ( r ) = z R n | A ( r ) z = b intersects the subdifferential of z z ^ ( r ) 1 at exactly one point, and (2) the objective function is strictly convex on F ( r ) , which holds when F ( r ) intersects the interior of at most one orthant in R n . In all other cases, the minimizer z * is not unique, and the set of optimal solutions forms a convex subset of the feasible region F ( r ) ; that is, the final solution z ^ * arg   min z F ( r ) z z ^ ( r ) 1 , where arg   min ( · ) is convex and # arg   min ( · ) > 1 .
Theorem 2. 
Let z ^ ( r ) = ( A Z ( r ) ) b be the r-iterated estimate obtained in the first step of the CLSP algorithm, and let z ^ * be the second-step correction obtained through convex programming by solving z ^ * = arg   min z R n z z ^ ( r ) 2 2 , s . t . A ( r ) z = b (i.e., the regularization parameter is α = 1 , corresponding to a Ridge correction), then z ^ * is the Minimum-Norm Best Linear Unbiased Estimator (MNBLUE) of z under the linear(ized) constraints set by the canonical form of the CLSP.
Proof. 
Let z ^ ( r ) = ( A Z ( r ) ) b denote the linear estimator obtained via the Bott–Duffin inverse, defined on a subspace determined by a symmetric idempotent matrix Z , and producing a conditionally unbiased estimate over that subspace. The Bott–Duffin inverse is ( A Z ( r ) ) = Z ( A ( r ) ) A ( r ) Z Z ( A ( r ) ) and the estimate z ^ ( r ) is unique if rank ( A ( r ) Z ) = dim ( R ( Z ) ) . Given the linear model derived from the canonical form b = A ( r ) z ( r ) + ε , where ε are residuals with E [ ε ] = 0 , it follows that E [ z ^ ( r ) ] = E [ ( A Z ( r ) ) b ] = ( A Z ( r ) ) E [ b ] = ( A Z ( r ) ) A ( r ) z ( r ) . Substituting ( A Z ( r ) ) yields E [ z ^ ( r ) ] = Z ( A ( r ) ) A ( r ) Z Z ( A ( r ) ) A ( r ) z ( r ) . By the generalized projection identity ( A Z ( r ) ) A ( r ) Z = Z , one obtains E [ z ^ ( r ) ] = Z z ( r ) , which proves conditional unbiasedness on R ( Z ) (and full unbiasedness if Z = I ). Subsequently, the second-step estimate z ^ * is obtained by minimizing the squared Euclidean distance between z ^ ( r ) and z , subject to the affine constraint A ( r ) z = b , which corresponds, in geometric and algebraic terms, to an orthogonal projection of z ^ ( r ) onto the affine (hence convex and closed) subspace F ( r ) = z R n | A ( r ) z = b . By Theorem 1, the unique minimizer z * exists and is given explicitly by z * = z ^ ( r ) + ( A ( r ) ) b A ( r ) z ^ ( r ) . This expression satisfies the first-order optimality condition of the convex program and ensures that z * is both feasible and closest (in the examined 2 -norm) to z ^ ( r ) among all solutions in F ( r ) . Therefore, z * is (1) linear (being an affine transformation of a linear estimator), (2) unbiased (restricted to the affine feasible space), (3) minimum-norm (by construction), and (4) unique (by strict convexity). By the generalized Gauss–Markov theorem (Chapter 7, Definition 4, p. 139, [17]) (Chapter 8, Section 3.2, Theorem 2, p. 287, [18]), it follows that z ^ * is the Minimum-Norm Best Linear Unbiased Estimator (MNBLUE) under the set of linear(ized) constraints A ( r ) z = b —i.e., the one with the smallest dispersion (in the Löwner sense) among the class of unbiased minimum second-norm least-squares estimators z ^ * = arg   min z b A z 2 2 = G { 1 , 3 , 4 } b + ε , where E ( ε ) = 0 and Var ( ε ) = 0 given the Ridge-inspired convex programming correction (residual sterilization), z ^ * = arg   min z R n z z ^ ( r ) 2 2 , s . t . A ( r ) z = b , ensuring strict satisfaction of the problem constraints A ( r ) z = b , under the assumption of a feasible Step 2.    □
Corollary 3. 
Let the canonical system A ( r ) z ( r ) = b be consistent and of full column rank, A ( r ) R m × n , m n , and rank ( A ( r ) ) = n . Suppose that the CLSP algorithm terminates after Step 1 and that Z = I n , such that z ^ * = z ^ ( r ) = ( A ( r ) ) b = ( ( A ( r ) ) A ( r ) ) 1 ( A ( r ) ) b . Then, provided the classical linear model b = A ( r ) z ( r ) + ε , where E [ ε ] = 0 and Cov ( ε ) = σ 2 I m , holds, the CLSP estimator is equivalent to OLS while z ^ * is the Best Linear Unbiased Estimator (BLUE).
Corollary 4. 
Let z ^ reduced ( r ) = A Z reduced , reduced b r . be the r-iterated estimate obtained via the Bott–Duffin inverse, defined on a subspace determined by a symmetric idempotent matrix Z reduced , and producing a conditionally unbiased estimate over that subspace. Subsequently, the estimate z ^ reduced * is obtained by minimizing the squared Euclidean distance between z ^ reduced ( r ) and z reduced , subject to the affine constraint A reduced ( r ) z reduced = b r . , such that z ^ i E , j E * = z ^ reduced , 1 : m subset , 1 : n subset * , ( i E , j E ) . Then each z ^ reduced * is the Minimum-Norm Best Linear Unbiased Estimator (MNBLUE) under the linear(ized) constraints defined by the canonical form of each reduced system of the CLSP, and z ^ * is a block-wise m subset × n subset -MNBLUE corresponding to each of the m m subset · n n subset reduced problems.

4. Numerical Stability of the Solutions z ^ ( r ) and z ^ *

The numerical stability of solutions, potentially affecting solver convergence in practical implementations, at each step of the CLSP estimator—both the (iterated if r > 1 ) first-step estimate z ^ ( r ) (alternatively, z ^ i E , j E ( r ) = z ^ reduced , 1 : m subset , 1 : n subset ( r ) , ( i E , j E ) ) and the final solution z ^ * (alternatively, z ^ i E , j E * = z ^ reduced , 1 : m subset , 1 : n subset * , ( i E , j E ) )—depends on the condition number of the design matrix, κ ( A ( r ) ) , which, given its block-wise formulation in Equation (3), can be analyzed as a function of the “canonical” constraints matrix C canon = C S R k × n as a fixed, r-invariant, part of A ( r ) . For both full and reduced problems, sensitivity analysis of κ ( A ( r ) ) with respect to constraints (i.e., rows) in C canon , especially focused on their reduction (i.e., dropping), is performed based on the decomposition (Equations (7) and (8)):
$$A^{(r)} = B^{(r)} C_{\text{canon}} + R^{(r)} = B^{(r)} \begin{bmatrix} C & S \end{bmatrix} + R^{(r)}, \qquad R^{(r)} = A^{(r)} \left( I - P_{\mathcal{R}([C\ S]^{\top})} \right)$$
where $B^{(r)} = A^{(r)} C_{\text{canon}}^{\dagger} \in \mathbb{R}^{m \times k}$ denotes the constraint-induced, $r$-variant, part of $A$, and, when the rows of $A^{(r)}$ lie entirely within the row space of $C_{\text{canon}}$, i.e., $A^{(r)} = A^{(r)} P_{\mathcal{R}([C\ S]^{\top})} \Leftrightarrow R^{(r)} = 0$,
$$\kappa(A^{(r)}) \le \kappa(B^{(r)}) \cdot \kappa(C_{\text{canon}}) \;\Longrightarrow\; \kappa(A^{(r)}) + \Delta\kappa_A \le \left( \kappa(B^{(r)}) + \Delta\kappa_B \right) \cdot \left( \kappa(C_{\text{canon}}) + \Delta\kappa_C \right) = \kappa(B^{(r)}) \cdot \kappa(C_{\text{canon}}) + \Delta\kappa_B \cdot \kappa(C_{\text{canon}}) + \kappa(B^{(r)}) \cdot \Delta\kappa_C + \Delta\kappa_B \cdot \Delta\kappa_C$$
i.e., a change in κ ( A ( r ) ) consists of a (lateral) constraint-side effect, κ ( B ( r ) ) · Δ κ C , induced by structural modifications in C canon , an (axial) model-side effect, Δ κ B · κ ( C canon ) , reflecting propagation into B ( r ) —where B ( r ) = A ( r ) C canon and A ( r ) = f C canon , M canon ( r ) , M canon ( r ) = g M , ε ( r 1 ) , and ε ( r 1 ) = b A ( r 1 ) z ^ ( r 1 ) (for a “partitioned” decomposition of B ( r ) given the structure of C canon , see Appendix A.2)—and the (non-linear) term, Δ κ B · Δ κ C .
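The decomposition and the condition numbers entering Equation (8) can be computed directly; the following numpy sketch uses random toy matrices and 2-norm condition numbers purely for illustration.

```python
import numpy as np

# Illustrative computation of the decomposition in Equation (7):
# B^(r) = A^(r) C_canon^+ and R^(r) = A^(r)(I - P), with P the projector onto
# the row space of C_canon, plus the condition numbers used in Equation (8).
rng = np.random.default_rng(4)
A = rng.standard_normal((6, 8))
C_canon = rng.standard_normal((3, 8))        # "canonical" constraints [C  S]

C_pinv = np.linalg.pinv(C_canon)
B = A @ C_pinv                               # constraint-induced part
P = C_pinv @ C_canon                         # projector onto the row space of C_canon
R = A @ (np.eye(8) - P)                      # residual part of the rows

print("kappa(A)       =", np.linalg.cond(A))
print("kappa(B)       =", np.linalg.cond(B))
print("kappa(C_canon) =", np.linalg.cond(C_canon))
print("||R||          =", np.linalg.norm(R))  # 0 iff the rows of A lie in the row space of C_canon
```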
The condition number of C canon R k × n , κ ( C canon ) , the change of which determines the constraint-side effect, κ ( B ( r ) ) · Δ κ C , can itself be analyzed with the help of pairwise angular measure between row vectors c i , c j C canon , where i , j { 1 , , k } . For any two c i , c j R 1 × n (alternatively, in a centered form, c ˜ i = c i c ¯ i , c ˜ j = c j c ¯ j R 1 × n , with c ¯ i = 1 n k = 1 n c i , k , c ¯ j = 1 n k = 1 n c j , k ), i j , the cosine of the angle between them cos ( θ i , j ) (alternatively, the Pearson correlation coefficient ρ = cos ( θ ˜ i , j ) ) captures their angular alignment—values close to ± 1 indicate near-collinearity and potential ill-conditioning, whereas values near 0 imply near-orthogonality and improved numerical stability (for the pairwise definition and its potential aggregated measures, consult Equations (9) and (10)):
$$\forall\, c_i, c_j \in C_{\text{canon}},\ i \ne j\colon \qquad \cos(\theta_{i,j}) = \frac{c_i \cdot c_j}{\|c_i\|_2 \cdot \|c_j\|_2} \in [-1, 1]$$
The pairwise angular measure can be aggregated across $j$ (i.e., for each row $c_i$) and jointly across $i, j$ (i.e., for the full set of $\binom{k}{2} = \tfrac{k(k-1)}{2}$ unique row pairs of $C_{\text{canon}}$), yielding a scalar summary statistic of angular alignment, e.g., the Root Mean Square Alignment (RMSA):
$$\mathrm{RMSA}_i = \sqrt{ \frac{1}{k-1} \sum_{\substack{j=1 \\ j \ne i}}^{k} \cos^2(\theta_{i,j}) }, \qquad \mathrm{RMSA} = \sqrt{ \frac{2}{k(k-1)} \sum_{i=1}^{k-1} \sum_{j=i+1}^{k} \cos^2(\theta_{i,j}) }$$
where RMSA i captures the average angular alignment of constraint i with the rest, while RMSA evaluates the overall constraint anisotropy of C canon . As with individual cosines cos ( θ i , j ) (alternatively, ρ = cos ( θ ˜ i , j ) ), values near 1 suggest collinearity and potential numerical instability, while values near 0 reflect angular dispersion and reduced condition number κ ( C canon ) . The use of squared cosines cos 2 ( θ i , j ) (alternatively, ρ 2 = cos 2 ( θ ˜ i , j ) ) in contrast with other metrics (e.g., absolute values) maintains consistency with the Frobenius norms in κ ( C canon ) and assigns greater penalization to near-collinear constraints in C canon . Furthermore, a combination of RMSA i , i { 1 , , k } , and the above-described marginal effects, obtained by excluding row i from C canon —namely, κ ( B ( r ) ) · Δ κ C ( i ) , Δ κ B ( i ) · κ ( C canon ) , and Δ κ B ( i ) · Δ κ C ( i ) if R ( r ) = 0 —together with the corresponding changes in the goodness-of-fit statistics from the solutions z ^ ( r ) ( i ) and z ^ * ( i ) (consult Section 5), can be visually presented over either all i, or a filtered subset, such that RMSA i threshold as depicted in Figure 5.
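A short numpy sketch of the diagnostics in Equations (9) and (10) follows, using invented constraint rows in which the third row is nearly collinear with the first; it is a toy illustration, not the paper's example.

```python
import numpy as np

# Pairwise cosines between the rows of C_canon, per-row RMSA_i, overall RMSA,
# and the condition number of C_canon (Equations (9)-(10)).
C_canon = np.array([[1., 1., 0., 0. ],
                    [0., 0., 1., 1. ],
                    [1., 1., 0.1, 0.]])     # third row nearly collinear with the first

norms = np.linalg.norm(C_canon, axis=1, keepdims=True)
cosines = (C_canon / norms) @ (C_canon / norms).T   # k x k matrix of cos(theta_ij)
np.fill_diagonal(cosines, 0.0)                      # ignore i = j pairs

k = C_canon.shape[0]
rmsa_i = np.sqrt((cosines ** 2).sum(axis=1) / (k - 1))
rmsa = np.sqrt((cosines ** 2)[np.triu_indices(k, 1)].mean())

print("RMSA_i:", np.round(rmsa_i, 3))
print("RMSA  :", round(float(rmsa), 3))
print("kappa(C_canon):", np.linalg.cond(C_canon))
```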
To sum up, in the CLSP estimator, a feasible solution z ^ * (in limiting cases, z ^ * z ^ ( r ) ), by the theoretical definition of the Bott–Duffin (Moore–Penrose) inverse (consult Section 2), always exists for the canonical form A ( r ) z ( r ) = b (therefore, there is no threshold value κ ( A ( r ) ) > κ ( A ( r ) ) threshold that would imply the nonexistence of a solution). Still, ill-conditioning of the design matrix A ( r ) might hinder the software-dependent computation of (a) the SVD in Step 1 and (b) the convex optimization problem in Step 2 on an ad hoc basis, thereby manifesting the ill-posedness of the problem. The proposed decomposition and RMSA i (RMSA) metric, quantifying the anisotropy of C canon , provide a means to analyze the origin of instability and optimize the constraint structure of A ( r ) —e.g., by mitigating the (numerical) issues discussed in Whiteside et al. [21] and Blair [22,23]—before re-estimation can be performed.

5. Goodness of Fit of the Solutions z ^ ( r ) and z ^ *

Given the hybrid structure of the CLSP estimator—comprising a least-squares-based (iterated if r > 1 ) initial estimate z ^ ( r ) (or z ^ i E , j E ( r ) = z ^ reduced , 1 : m subset , 1 : n subset ( r ) , ( i E , j E ) ), followed by a regularization-inspired final solution z ^ * (or z ^ i E , j E * = z ^ reduced , 1 : m subset , 1 : n subset * , ( i E , j E ) )—and the potentially ill-posed or underdetermined nature of the problems it addresses, standard goodness-of-fit methodology of classical regression—such as explained variance measures (e.g., the coefficient of determination, R 2 ), error-based metrics (e.g., Root Mean Square Error, RMSE, for comparability with the above-described RMSA), hypothesis tests (e.g., F-tests in the analysis of variance and t-tests at the coefficient level), or confidence intervals (e.g., based on Student’s t- and the normal distribution)—are not universally applicable (with the exception of RMSE, these measures are valid only under classical Gauss–Markov assumptions, i.e., for overdetermined problems where z ^ * z ^ ( r ) ). The same limitation applies to model-selection criteria, such as R adjusted 2 , AIC, and BIC, that presuppose the existence of a maximizable likelihood function and well-defined (i.e., greater than zero) degrees of freedom, not satisfied in the examined class of problems (equally, the formulation of a likelihood function for the two-step CLSP estimator lies beyond the scope of this work). This text, therefore, discusses the applicability of selected alternative statistics, most of which are robust to underdeterminedness: partial R 2 (i.e., the adaptation of R 2 to the CLSP structure), normalized RMSE, t-test for the mean of the NRMSE, and a diagnostic interval based on the condition number of A ( r ) —implemented in CLSP-based software (see Equations (11)–(18)).
For the explained variance measures, the block-wise formulation of the full vector of unknowns z ( r ) = x ( r ) y ( r ) * R n —obtained from x ( r ) = Q X x M ( r ) x L ( r ) R p (where x M ( r ) R p l is the vector of model (target) variables and x L ( r ) R l , l 0 , is a vector of latent variables present only in the constraints) and a vector of slack variables y ( r ) * R n p —the design matrix A ( r ) = C canon M canon ( r ) R m × n , where M canon ( r ) = M { 0 , r = 1 ; S residual , r > 1 } R ( m k ) × n is the “canonical” model matrix, and the input b = b C b M R m , all necessitate a “partial” R 2 (hereafter, the term “partial” will be reserved for statistics related to vector x * z ) to isolate the estimated variables x ^ * , where, given the vector of ones 1 = { 1 , , 1 } ,
$$R^2_{\text{partial}} = 1 - \frac{ \|b_M - M \hat x^{*}\|_2^2 }{ \|b_M - \bar b_M \mathbf{1}\|_2^2 }, \qquad \bar b_M = \frac{1}{m-k} \sum_{i=1}^{m-k} b_{M,i}$$
provided that $\operatorname{rank}(A^{(r)}) = n$, $m \ge n$, i.e., applicable strictly to (over)determined problems. If $C = \varnothing \Rightarrow k = 0$ and $y = \varnothing \Rightarrow n = p$, then $A^{(r)} \equiv M$ and $R^2_{\text{partial}}$ reduces to the classical $R^2$. Overall, $R^2_{\text{partial}}$ has limited use for the CLSP estimator and is provided for completeness.
Next, for the error-based metrics, independent of $\operatorname{rank}(A^{(r)})$ and thus more robust across underdetermined cases (in contrast to $R^2_{\text{partial}}$), a Root Mean Square Error (RMSE) can be defined for both (a) the full solution $\hat z^{*}$, via $\mathrm{RMSE} = \tfrac{1}{\sqrt{m}} \|b - A^{(r)} \hat z^{*}\|_2$, serving as an overall measure of fit, and (b) its variable component $\hat x^{*}$, via $\mathrm{RMSE}^{(r)}_{\text{partial}} = \tfrac{1}{\sqrt{m-k}} \|b_{k+1:m} - M \hat x^{*}\|_2$, quantifying the fit quality with respect to the estimated target variables only (in alignment with $R^2_{\text{partial}}$). Then, to enable cross-model comparisons and especially its use in hypothesis testing (see below), RMSE must be normalized—typically by the standard deviation of the reference input, $\sigma_b$ or $\sigma_{b_M}$—yielding the Normalized Root Mean Square Error (NRMSE) comparable across datasets and/or models (e.g., in the t-test for the mean of the NRMSE):
$$\mathrm{NRMSE} = \frac{ \tfrac{1}{\sqrt{m}} \|b - A^{(r)} \hat z^{*}\|_2 }{ \sigma_b }, \qquad \mathrm{NRMSE}_{\text{partial}} = \frac{ \tfrac{1}{\sqrt{m-k}} \|b_{k+1:m} - M \hat x^{*}\|_2 }{ \sigma_{b_M} }$$
The standard deviation is preferred in statistical methodology over the alternatives (e.g., max-min scaling or range-based metrics) because it expresses residuals in units of their variability, producing a scale-free measure analogous to the standard normal distribution. It is also more common in practical implementations (such as Python’s sklearn or MATLAB).
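A minimal sketch of the NRMSE of Equation (12) follows, assuming the $\tfrac{1}{\sqrt{m}}$ scaling of the residual norm and normalization by the standard deviation of the reference input; the data are toy values chosen for illustration.

```python
import numpy as np

# NRMSE as in Equation (12): residual 2-norm scaled by sqrt(m), then divided
# by the standard deviation of the reference input b.
def nrmse(b, A, z_hat):
    resid = b - A @ z_hat
    rmse = np.linalg.norm(resid) / np.sqrt(len(b))
    return rmse / np.std(b)

rng = np.random.default_rng(5)
A = rng.standard_normal((8, 5))
b = rng.standard_normal(8)
z_hat = np.linalg.pinv(A) @ b
print(round(nrmse(b, A, z_hat), 4))
```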
Also, for the hypothesis tests, classical ANOVA-based F-tests and coefficient-level t-tests—relying on variance decomposition and residual degrees of freedom—are applicable exclusively to the least-squares-based (iterated if r > 1 ) first-step estimate z ^ ( r ) and to the final solution z ^ * = z ^ ( r ) , given a strictly overdetermined A ( r ) z ( r ) = b , i.e., rank ( A ( r ) ) = n , m > n (an OLS case), and under the assumption of homoscedastic and normally distributed residuals. Then, in the case of the F-tests, the test statistics follow the distributions (a)  F F ( q , m n ) and (b)  F partial F ( q partial , m n ) , a Wald-type test on a linear restriction—with the degrees of freedom q = n 1 and q partial = p 1 if x ( r ) is a vector of coefficients from a linear(ized) model with an intercept and q = n and q partial = p otherwise—yielding (a)  H 0 : z ^ ( r ) = 0 and (b)  H 0 partial : x ^ ( r ) = R z ^ ( r ) = 0 , where R R p × n is a restriction matrix:
$$F = \frac{ \tfrac{1}{q} \|A^{(r)} \hat z^{(r)}\|_2^2 }{ \tfrac{1}{m-n} \|b - A^{(r)} \hat z^{(r)}\|_2^2 }, \qquad F_{\text{partial}} = \frac{ \tfrac{1}{q_{\text{partial}}} (R \hat z^{(r)})^{\top} \left[ R \left( (A^{(r)})^{\top} A^{(r)} \right)^{-1} R^{\top} \right]^{-1} (R \hat z^{(r)}) }{ \tfrac{1}{m-n} \|b - A^{(r)} \hat z^{(r)}\|_2^2 }$$
In the case of the t-tests, (a)  i { 1 , , n } t i t m n (n test statistics) and (b)  i { 1 , , p } t partial , i t i (p test statistics), given that the partial result x ^ ( r ) or x ^ * = x ^ ( r ) is a subvector of length p of z ^ ( r ) (also z ^ * = z ^ ( r ) ), yielding (a)  i { 1 , , n } H 0 : z ^ i ( r ) = 0 and (b)  i { 1 , , p } H 0 partial : x ^ i ( r ) = 0 :
$$\forall i \in \{1, \dots, n\}\colon \quad t_i = \frac{ \hat z_i^{(r)} }{ \sqrt{\tfrac{1}{m-n}}\, \|b - A^{(r)} \hat z^{(r)}\|_2 \cdot \sqrt{ \left[ \left( (A^{(r)})^{\top} A^{(r)} \right)^{-1} \right]_{ii} } }, \qquad \forall i \in \{1, \dots, p\}\colon \quad t_{\text{partial},i} \equiv t_i$$
An alternative hypothesis test—robust to both rank ( A ( r ) ) and the step of the CLSP algorithm, i.e., valid for the final solution z ^ * (in contrast to the classical ANOVA-based F-tests or coefficient-level t-tests)—can be a t -test for the mean of the NRMSE, comparing the observed NRMSE obs (as μ 0 ), both full and partial, to the mean of a simulated sample NRMSE sim ( τ ) , generated via Monte Carlo simulation, typically from a uniformly (a structureless flat baseline) or normally (a “canonical” choice) distributed random input b ( τ ) U ( 0 , I m ) or b ( τ ) N ( 0 , I m ) —both the distributions being standard (the choice of distributions is, therefore, analogous to employing standard weakly or non-informative priors, with the exception of Jeffrey’s prior, in Bayesian inference)—for τ = 1 , , T , where T is the sample size, yielding t t T 1 with H 0 : NRMSE obs = NRMSE ¯ sim and H 1 : NRMSE obs NRMSE ¯ sim for the two-sided test and H 1 : NRMSE obs NRMSE ¯ sim for the one-sided one:
$$t = \frac{ \overline{\mathrm{NRMSE}}_{\text{sim}} - \mathrm{NRMSE}_{\text{obs}} }{ \sqrt{ \frac{1}{T(T-1)} \sum_{\tau=1}^{T} \left( \mathrm{NRMSE}_{\text{sim}}^{(\tau)} - \overline{\mathrm{NRMSE}}_{\text{sim}} \right)^2 } }, \qquad \overline{\mathrm{NRMSE}}_{\text{sim}} = \frac{1}{T} \sum_{\tau=1}^{T} \mathrm{NRMSE}_{\text{sim}}^{(\tau)}$$
justified when b ( τ ) is normally distributed or approximately normally distributed based on the Lindeberg–Lévy Central Limit Theorem (CLT), i.e., when T > 30 (as a rule of thumb) for b ( τ ) U ( 0 , I m ) . Then, H 0 denotes a good fit for z ^ * in the sense that NRMSE obs does not significantly deviate from the simulated distribution, i.e., H 0 should not be rejected for the CLSP model to be considered statistically consistent. In practical implementations (such as Python 3, R 3, Stata 14, SAS 9.4, Julia 1.0, and MATLAB 2015a/Octave 4), T typically ranges from 50 to 50,000.
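A hedged sketch of the Monte Carlo t-test for the mean of the NRMSE follows, assuming scipy, a Step-1-only estimator, and a normally distributed simulated input; $T$ and the data are illustrative choices.

```python
import numpy as np
from scipy import stats

# Monte Carlo t-test (Equation (15)): compare the observed NRMSE against the
# mean NRMSE over T simulated inputs b ~ N(0, I_m).
def nrmse(b, A, z_hat):
    return np.linalg.norm(b - A @ z_hat) / np.sqrt(len(b)) / np.std(b)

rng = np.random.default_rng(6)
A = rng.standard_normal((8, 5))
b_obs = rng.standard_normal(8)
nrmse_obs = nrmse(b_obs, A, np.linalg.pinv(A) @ b_obs)

T = 500
sims = np.empty(T)
for tau in range(T):
    b_sim = rng.standard_normal(8)
    sims[tau] = nrmse(b_sim, A, np.linalg.pinv(A) @ b_sim)

# One-sample t-test of H0: NRMSE_obs equals the mean of the simulated NRMSEs.
t_stat, p_value = stats.ttest_1samp(sims, popmean=nrmse_obs)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")   # not rejecting H0 indicates a consistent fit
```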
Finally, for the confidence intervals, classical formulations for (a)  z i ( r ) , i { 1 , , n } , and (b)  x i ( r ) , i { 1 , , p } —based on Student’s t- and, asymptotically, the standard normal distribution, t t m n and z N ( 0 , 1 ) —are equivalently exclusively applicable to the least-squares-based (iterated if r > 1 ) first-step estimate z ^ ( r ) and to the final solution z ^ * = z ^ ( r ) under a strictly overdetermined A ( r ) z ( r ) = b , i.e., rank ( A ( r ) ) = n , m > n (an OLS case), and assuming homoscedastic and normally distributed residuals. Then, provided α ( 0 , 1 ) , such as α { 0.01 , 0.05 , 0.1 } , the confidence intervals for (a)  z i ( r ) and (b)  x i ( r ) are
$$\forall i \in \{1, \dots, n\}\colon \quad z_i^{(r)} \in \hat z_i^{(r)} \pm t_{m-n,\,1-\alpha/2} \cdot \sqrt{\tfrac{1}{m-n}}\, \|b - A^{(r)} \hat z^{(r)}\|_2 \cdot \sqrt{ \left[ \left( (A^{(r)})^{\top} A^{(r)} \right)^{-1} \right]_{ii} }, \qquad \forall i \in \{1, \dots, p\}\colon \quad x_i^{(r)} \equiv z_i^{(r)}$$
given that the partial result x ^ ( r ) or x ^ * = x ^ ( r ) is a subvector of length p of z ^ ( r ) or z ^ * = z ^ ( r ) . For n > 30 , the distribution of the test statistic, t m n , 1 α / 2 , approaches a standard normal one, Z 1 α / 2 , based on the CLT, yielding an alternative notation for the confidence intervals:
$$\forall i \in \{1, \dots, n\}\colon \quad z_i^{(r)} \in \hat z_i^{(r)} \pm Z_{1-\alpha/2} \cdot \sqrt{\tfrac{1}{m-n}}\, \|b - A^{(r)} \hat z^{(r)}\|_2 \cdot \sqrt{ \left[ \left( (A^{(r)})^{\top} A^{(r)} \right)^{-1} \right]_{ii} }, \qquad \forall i \in \{1, \dots, p\}\colon \quad x_i^{(r)} \equiv z_i^{(r)}$$
An alternative “confidence interval”—equivalently robust to both rank ( A ( r ) ) and the step of the CLSP algorithm, i.e., valid for the final solution z ^ * (in contrast to the classical Student’s t- and, asymptotically, the standard normal distribution-based formulations)—can be constructed deterministically via condition-weighted bands, relying on componentwise numerical sensitivity. Let the residual vector b A ( r ) z ^ * be a (backward) perturbation in b , i.e., Δ b = b A ( r ) z ^ * . Then, squaring both sides of the classical first-order inequality Δ z ^ * 2 z ^ * 2 κ ( A ( r ) ) · Δ b 2 b 2 yields i = 1 n ( Δ z ^ i * ) 2 i = 1 n ( z ^ i * ) 2 κ ( A ( r ) ) 2 · b A ( r ) z ^ * 2 2 b 2 2 , where κ ( A ( r ) ) = A ( r ) 2 ( A ( r ) ) 2 = σ max ( A ( r ) ) σ rank ( A ( r ) ) ( A ( r ) ) σ max and σ rank ( A ( r ) ) being the biggest and smallest singular values of A ( r ) —iff σ rank ( A ( r ) ) > 0 b 2 > 0 . Under a uniform relative squared perturbation (a heuristic allocation of error across components as a simplification, not an assumption) of z * , rearranging terms and taking square roots of both sides gives i { 1 , , n } | Δ z ^ i * | | z ^ i * | · κ ( A ( r ) ) · b A ( r ) z ^ * 2 b 2 and a condition-weighted “confidence” band:
$$\forall i \in \{1, \dots, n\}\colon \quad z_i^{*} \in \hat z_i^{*} \cdot \left( 1 \pm \kappa(A^{(r)}) \cdot \frac{ \|b - A^{(r)} \hat z^{*}\|_2 }{ \|b\|_2 } \right), \qquad \forall i \in \{1, \dots, p\}\colon \quad x_i^{*} \equiv z_i^{*}$$
which is a diagnostic interval based on the condition number for z * , consisting of (1) a canonical-form system conditioning, κ ( A ( r ) ) , (2) a normalized model misfit, b A ( r ) z ^ * 2 b 2 , and (3) the “scale” of the final solution, z ^ * , without probabilistic assumptions. A perturbation in one or more of these three components, e.g., caused by a change in RMSA resulting from dropping selected constraints (consult Section 4), will affect the limits of z * . Under a perfect fit, the interval collapses to z i * z ^ i * ± 0 and, for κ ( A ( r ) ) 1 , tends to be very wide. Overall, in practical implementations, where the squared perturbations may violate the uniformity simplification, an aggregated (e.g., average) width of the diagnostic interval for vectors z * and x * becomes more informative, as it represents an adjusted goodness-of-fit statistic—normalized error weighted by the condition number of the design matrix A ( r ) .
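A minimal numpy sketch of the condition-weighted band of Equation (18) and of its aggregated width is given below; the data are toy values and the band is purely diagnostic, with no probabilistic interpretation.

```python
import numpy as np

# Condition-weighted diagnostic band (Equation (18)): each component of the
# final solution receives a relative half-width equal to
# kappa(A) * ||b - A z_star||_2 / ||b||_2.
rng = np.random.default_rng(7)
A = rng.standard_normal((6, 4))
b = rng.standard_normal(6)
z_star = np.linalg.pinv(A) @ b

rel_width = np.linalg.cond(A) * np.linalg.norm(b - A @ z_star) / np.linalg.norm(b)
lower = z_star * (1 - rel_width)
upper = z_star * (1 + rel_width)
print("relative half-width:", round(float(rel_width), 4))
print("average band width :", round(float(np.mean(np.abs(upper - lower))), 4))
```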
To sum up, the CLSP estimator requires a goodness-of-fit framework that reflects both its algorithmic structure and numerical stability. Classical methods remain informative but are valid only under strict overdetermination ( m > n ), full-rank design matrix A ( r ) , and distributional assumptions (primarily, homoscedasticity and normality of residuals). In contrast, the proposed alternatives—(a) NRMSE and NRMSE partial , (b) t-tests for the mean of NRMSE with the help of Monte Carlo sampling, and (c) condition-weighted confidence bands—are robust to underdeterminedness and ill-posedness, making them preferable in practical implementations (e.g., in existing CLSP-based software for Python, R, and Stata).

6. Special Cases of CLSP Problems: APs, CMLS, and LPRLS/QPRLS

The structure of the design matrix  A ( r ) in the CLSP estimator, as defined in Equation (3) (consult Section 3), allows for accommodating a selection of special-case problems, out of which three cases are covered (each case will use modified notation for i, j, m, and p), but the list may not be exhaustive. Among others, the CLSP estimator is, given its ability to address ill-posed or underdetermined problems under linear(ized) constraints, efficient in addressing what can be referred to as allocation problems (APs) (or, for flow variables, tabular matrix problems, TMs)—i.e., in most cases, underdetermined problems involving matrices of dimensions m × p to be estimated, subject to known row and column sums, with the degrees of freedom (i.e., nullity) equal to n rank ( A ( r ) ) m p + s * ( m i + p j ) q , where 0 s * ( m i + p j ) + q is the number of active (non-zero) slack variables and 0 q rank ( M partial ) min ( k , m p ) is the number of known model (target) variables (e.g., a zero diagonal)—whose design matrix, A ( r ) R ( m i + p j + k ) × n , comprises a (by convention) row-sum-column-sum constraints matrix C = I m i 1 i 1 p | 1 m I p j 1 j R ( m i + p j ) × m p (where 1 = { 1 , , 1 } ), a model matrix M = Q M L M partial 0 Q M R R k × m p —in a trivial case, M partial I m p —a sign slack matrix S = Q S L S s σ 1 , , σ s { 1 , + 1 } diag σ 1 , , σ s 0 Q S R R ( m i + p j ) × ( n m p ) , and either a zero matrix 0 (if r = 1 ) or a (standard) reverse-sign slack matrix S residual R k × ( n m p ) (provided r > 1 ) (as given by Equation (19) extending Equation (3)):
$$A^{(r)} = \begin{bmatrix} \begin{bmatrix} I_{m_i} \otimes \mathbf{1}_i^{\top} \otimes \mathbf{1}_p^{\top} \\[2pt] \mathbf{1}_m^{\top} \otimes I_{p_j} \otimes \mathbf{1}_j^{\top} \end{bmatrix} & S \\[6pt] M & \begin{cases} \mathbf{0}, & \text{initial iteration, } r = 1 \\ S_{\text{residual}}, & \text{subsequent iterations, } r > 1 \end{cases} \end{bmatrix}$$
where m i and p j denote groupings of the m rows and p columns, respectively, into i and j homogeneous blocks (when no grouped row or column sums are available, i = j = 1 ); with real-world examples including: (a) input–output tables (national, inter-country, or satellite); (b) structural matrices (e.g., trade, country-product, investment, or migration); (c) financial clearing and budget-balancing; and (d) data interpolations (e.g., quarterly data from annual totals). Given the available literature in 2025, the first pseudoinverse-based method of estimating input–output tables was proposed in Pereira-López et al. [81] and the first APs (TMs)-based study was conducted in Bolotov [16], attempting to interpolate a world-level (in total, 232 countries and dependent territories) “bilateral flow” matrix of foreign direct investment (FDI) for the year 2013, based on known row and column totals from UNCTAD data (aggregate outward and inward FDI). The estimation employed a minimum-norm least squares solution under APs (TMs)-style equality constraints, based on the Moore–Penrose pseudoinverse of a “generalized Leontief structural matrix”, rendering it equivalent to the initial (non-corrected) solution in the CLSP framework, z ^ ( 1 ) = A Z ( 1 ) b . As a rule of thumb, AP (TM)-type problems are rank deficient for m > 2 p > 2 with the CLSP being a unique (if α ( 0 , 1 ] ) and an MNBLUE (if α = 1 ) estimator (consult Section 3).
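A small numpy sketch of an AP-type constraints matrix follows, for an ungrouped $3 \times 4$ allocation problem ($i = j = 1$) with row-major vectorization and invented totals, together with a Step-1-style minimum-norm fill; it illustrates the structure of Equation (19), not the paper's FDI application.

```python
import numpy as np

# Allocation-problem (AP) constraints: estimate an m x p matrix from its row
# and column sums. With row-major vectorization, row sums correspond to
# I_m (x) 1_p' and column sums to 1_m' (x) I_p.
m, p = 3, 4
row_block = np.kron(np.eye(m), np.ones((1, p)))   # m x mp
col_block = np.kron(np.ones((1, m)), np.eye(p))   # p x mp
C = np.vstack([row_block, col_block])             # (m + p) x mp, rank m + p - 1

row_sums = np.array([10., 20., 30.])
col_sums = np.array([12., 18., 14., 16.])         # totals agree (60), so the system is consistent
b_C = np.concatenate([row_sums, col_sums])

X_hat = (np.linalg.pinv(C) @ b_C).reshape(m, p)   # Step-1-style minimum-norm fill
print(np.round(X_hat, 2))
print(np.allclose(X_hat.sum(axis=1), row_sums), np.allclose(X_hat.sum(axis=0), col_sums))
```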
In addition to the APs (TMs), the CLSP estimator is also efficient in addressing what can be referred to as constrained-model least squares (CMLS) (or, more generally, regression problems, RPs)—i.e., (over)determined problems involving vectors of dimension p to be estimated with (standard) OLS degrees of freedom, subject to, among others, linear(ized) data matrix D R k × p -related (in)equality constraints, i { 1 , , t } C i T i D R m i × p , where T i R m i × k is a transformation matrix of D , such as the i-th difference or shift (lag/lead), and C i x γ i —whose design matrix, A ( r ) R ( i = 1 u m i + k ) × n , consists of a u-block constraints matrix C = C 1 C u R ( i = 1 u m i ) × p ( i { 1 , , u } C i R m i × p ), u t , a data matrix as the model matrix M D R k × p , a (standard) sign slack matrix S R ( i = 1 u m i ) × ( n p ) , and either a zero matrix 0 (if r = 1 ) or a (standard) reverse-sign slack matrix S residual R k × ( n p ) (when r > 1 ) (as given by Equation (20) extending Equation (3), with M substituted by data matrix D ):
$$
A^{(r)} =
\begin{bmatrix}
\begin{matrix} C_1 \\ \vdots \\ C_u \end{matrix} & S \\[4pt]
D & \begin{cases} \mathbf{0}, & \text{initial iteration}, \; r = 1 \\ S_{\mathrm{residual}}, & \text{subsequent iterations}, \; r > 1 \end{cases}
\end{bmatrix}
\tag{20}
$$
with real-world examples including (a) previously infeasible textbook-definition econometric models of economic (both micro and macro) variables (e.g., business-cycle models), and (b) additional constraints applied to classical econometric models (e.g., demand analysis). Given the available literature in 2025, the first studies structurally resembling CMLS (RPs) were conducted in Bolotov [15,82], focusing on the decomposition of the long U.S. GDP time series into trend ( y t ) and cyclical ( c t ) components—using exponential trend, moving average, Hodrick-Prescott filter, and Baxter-King filter—under constraints on its first difference, based on the U.S. National Bureau of Economic Research (NBER) delimitation of the U.S. business cycle. c t was then modeled with the help of linear regression (OLS), based on an n-th order polynomial with extreme values smoothed via a factor of 1 10 n i and simultaneously penalized via an n-th or ( n + 1 )-th root, c t = β 0 + β 1 i = 1 i = n ( t t i ) · 1 10 n i { n , n + 1 } d t + ε t , where β 0 , β 1 are the model parameters, 1 i n t i are the values of the (externally given) stationary points, and ε t is the error. The order of the polynomial and the ad hoc selection of smoothing and penalizing factors, however, render such a method inferior to the CLSP. Namely, the unique (if α ( 0 , 1 ] ) and an MNBLUE (if α = 1 ) CLSP estimator (consult Section 3) allows (a) the presence of inequality constraints and (b) their ill-posed formulations.
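As a minimal sketch of the CMLS (RP) construction under stated assumptions (hypothetical stationary points, dimensions, and noise level; inequality rows and the Step-2 correction omitted), the snippet below forms difference-operator transformation matrices T_i, evaluates the constraint blocks C_i = T_i D, and stacks them with the data block for a Step-1 pseudoinverse estimate; it is illustrative only and does not restate the pyclsp implementation.

import numpy as np

rng = np.random.default_rng(123456789)

k, p, c = 120, 4, 1.0
D = np.column_stack([np.ones(k), rng.standard_normal((k, p - 1))])   # data matrix with a constant
beta_true = rng.standard_normal(p)
y = D @ beta_true + 0.1 * rng.standard_normal(k)

# Transformation matrices: first- and second-difference operators for a length-k series
T1 = np.eye(k - 1, k, 1) - np.eye(k - 1, k)
T2 = (np.eye(k - 2, k - 1, 1) - np.eye(k - 2, k - 1)) @ T1

# Hypothetical stationary points (externally given in a real application)
P = np.array([29, 59, 89])

# Constraint blocks C_i = T_i D: Delta(D beta) = 0 at t in P, plus an external sum constraint
C1 = (T1 @ D)[P]
C2 = (T2 @ D)[P]          # inequality rows Delta^2(D beta) <= 0; would need slacks and Step 2
C_sum = np.ones((1, p))   # externally imposed sum(beta) = c

# Stacked equality part of the canonical system for Step 1 (inequalities omitted)
A = np.vstack([C_sum, C1, D])
b = np.concatenate([[c], np.zeros(len(P)), y])

# Step 1: minimum-norm least-squares estimate of beta
beta_hat = np.linalg.pinv(A) @ b
print("beta_hat:", np.round(beta_hat, 4))
print("constraint residuals:", np.round(A[:1 + len(P)] @ beta_hat - b[:1 + len(P)], 4))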
Finally, the CLSP estimator can be used to address (often unsolvable using classical solvers) underdetermined and/or ill-posed—caused by too few and/or mutually inconsistent or infeasible constraints as in the sense of Whiteside et al. [21] and Blair [22,23]—LP and QP problems, an approach hereafter referred to as linear/quadratic programming via regularized least squares (LPRLS/QPRLS): CLSP substitutes the original objective function of the LP/QP problem with the canonical form A ( r ) z ( r ) = b (i.e., focusing solely on the problem’s constraints, without distinguishing between the LP and QP cases) with x ^ M * = Proj x M ( r ) Q X x M ( r ) x L ( r ) R p : 1 i s g i ( x M ( r ) , x L ( r ) ) γ i , j h j ( x M ( r ) , x L ( r ) ) = η j z * being the solution of the original problem, where Q X is a permutation matrix and g i ( · ) γ i and h j ( · ) = η j represent the linear(ized) inequality and equality constraints, i , j γ i , η j b , and the degrees of freedom are equal, under the classical formalization of a primal LP/QP problem (consult Section 2), to n rank ( A ( r ) ) ( p + s ) ( m u b + m e q + m n n ) = p m e q , where p is the length of x M ( r ) , s = m u b + m n n is the number of introduced slack variables, while m u b 0 , m e q 0 , and m n n p denote the numbers of upper-bound, equality, and non-negativity constraints, respectively (under the standard assumption that all model variables are constrained to be non-negative). In the LPRLS/QPRLS, the design matrix, A ( r ) R ( m u b + m e q + m n n ) × n , by the definition of LP/QP, is r-invariant and consists of a block constraints matrix C = C u b C e q C n n R ( m u b + m e q + m n n ) × p , where C u b R m u b × p , C e q R m e q × p , and C n n I p R p × p , a (standard) sign slack matrix S R ( m u b + m e q + m n n ) × ( n p ) , and M canon = (as given by Equation (21) extending Equation (3), under the condition M = S residual = ):
$$
A^{(r)} = A^{(1)} =
\left[\begin{array}{c|c}
\begin{matrix} C_{\mathrm{ub}} \\ C_{\mathrm{eq}} \\ C_{\mathrm{nn}} \end{matrix} & S
\end{array}\right]
\tag{21}
$$
Given the available literature in 2025, the first documented attempts to incorporate LS into LP and QP included, among others, Dax’s LS-based steepest-descent algorithm [26] and the primal-dual NNLS-based algorithm (LSPD) [79], whose structural restrictions—in terms of well-posedness and admissible constraints—can be relaxed through LPRLS/QPRLS, still guaranteeing a unique (if α ( 0 , 1 ] ) and an MNBLUE (if α = 1 ) solution (consult Section 3).
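The following numpy sketch illustrates the LPRLS/QPRLS canonical form for a small, underdetermined constraint system: the LP/QP objective is dropped, the inequality rows receive slack columns, and Step 1 returns the minimum-norm least-squares solution of [C | S] z = b; the dimensions and the slack-sign convention are illustrative assumptions (non-negativity of the slacks would be enforced in Step 2), and the code is not the pylppinv implementation.

import numpy as np

rng = np.random.default_rng(123456789)

m_ub, m_eq, p = 8, 4, 30
A_ub = rng.standard_normal((m_ub, p))
A_eq = rng.standard_normal((m_eq, p))
b_ub = rng.standard_normal(m_ub)
b_eq = rng.standard_normal(m_eq)

# Canonical form: the LP/QP objective is dropped; inequality rows receive slack columns
C = np.vstack([A_ub, A_eq])
S = np.vstack([np.eye(m_ub), np.zeros((m_eq, m_ub))])
A = np.hstack([C, S])                                # A^(1) = [C | S]
b = np.concatenate([b_ub, b_eq])

# Step 1: minimum-norm least-squares solution of the constraint system
z_hat = np.linalg.pinv(A) @ b
x_hat, s_hat = z_hat[:p], z_hat[p:]

print("equality rows satisfied:    ", np.allclose(A_eq @ x_hat, b_eq))
print("A_ub x + s = b_ub satisfied:", np.allclose(A_ub @ x_hat + s_hat, b_ub))
print("min slack (>= 0 enforced only in Step 2):", round(float(s_hat.min()), 4))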
To sum up, the CLSP framework can be applied to three special classes of problems—allocation (APs (TMs)), constrained-model least squares (CMLS (RPs)), and regularized linear or quadratic programming (LPRLS/QPRLS)—differing only in the nature of their constraint blocks, the treatment of slack or residual components, and rank properties of their design matrix A ( r ) , while the estimation method and goodness-of-fit statistics remain identical. The three examined cases correspond to the overwhelming majority of potential real-world uses of the estimator, whereas custom (non-described) problem types are relatively scarce. The AP (TM) case is of particular methodological importance as a means of estimating, among others, input–output tables (without reliance on historical observations).

7. Monte Carlo Experiment and Numerical Examples

This section demonstrates the capability of the CLSP estimator to undergo large-scale Monte Carlo experiments across its special cases—namely, the AP (TM), CMLS (RP), and LPRLS/QPRLS problems (illustrated below on the example of the APs (TMs))—which can be replicated to assess its explanatory power and to calibrate its key parameters (r and  α ) for real-world applications in future research. However, the construction of a complete econometric framework—requiring not only the results of such a calibration but also the formal derivation of its maximum likelihood function (for model selection) and an extension of the Gauss–Markov theorem (to explicitly incorporate MNBLUE cases)—lies beyond the scope of this theoretical text, whose aim is to establish the linear-algebraic foundations of the estimator and to discuss its numerical stability and potential measures of its goodness of fit. The Monte Carlo experiment is therefore complemented by simulated numerical examples serving as a proof of the estimator's practical implementability and of the estimability of its diagnostic metrics (i.e., RMSA, NRMSE, t-tests, and diagnostic intervals).
Proceeding to the Monte Carlo experiment, the standardized form of C in the APs (TMs), i.e., C = f ( m , i , p , j ) , with given S and r, enables the large-scale simulation of the distribution of NRMSE—a robust goodness-of-fit metric for the CLSP—through repeated random trials under varying matrix dimensions m × p , row and column group sizes i and j, and composition of M , inferring asymptotic percentile means and standard deviations for practical implementations (e.g., the t-test for the mean). This text presents the results of a simulation study on the distribution of the NRMSE from the first-step estimate z ^ ( 1 ) (implementable in standard econometric software without CLSP modules) for i = 1 , j = 1 , S = , Z = I m + p + k ( A Z ( 1 ) ) = ( A ( 1 ) ) , and r = 1 , conducted in Stata/MP (set to version 14.0) with the help of Mata’s 128-bit floating-point cross-product-based quadcross() for greater precision and SVD-based svsolve() with a strict tolerance of c("epsdouble") for increased numerical stability, and a random-variate seed 123456789 (see Listing A1 containing a list of dependencies [83] and the Stata .do-file). In this 50,000-iterations Monte Carlo simulation study, spanning matrix dimensions from 1 × 2 and 2 × 1 up to 50 × 50, random normal input vectors b N ( 0 , I m + p + k ) were generated for each run, applied with and without zero diagonals (i.e., if, under l = 0 x L = , x M = x = z 1 : m p R m p is reshaped into a ( m × p )-matrix X M = X R m × p , then i { 1 , , min ( m , p ) } X i i = 0 ). Thus, a total of 249.9 million observations ( = ( 50 · 50 1 ) · 2 · 50 , 000 ) was obtained via the formula NRMSE = 1 m b A ( 1 ) ( A ( 1 ) ) b 2 / σ b , resulting in 4998 aggregated statistics of the asymptotic distribution, assuming convergence: mean, sd, skewness, kurtosis, min, max, as well as p1–p99 as presented in Figure 6 (depicting the intensity of its first two moments, mean and sd) and Table 2 (reporting a subset of all thirteen statistics for m , p mod 10 = 0 ).
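For readers without access to Stata, a reduced-scale sketch of the same experimental design can be run in numpy; the NRMSE normalization used below (residual RMSE divided by the sample standard deviation of b), the omission of the M block, and the much smaller number of draws are simplifying assumptions, so the output will not reproduce Figure 6 or Table 2 exactly.

import numpy as np

rng = np.random.default_rng(123456789)

def nrmse_step1(m, p, draws=2000):
    # AP (TM) constraint block for i = j = 1, no slack block, no model block M
    C = np.vstack([np.kron(np.eye(m), np.ones((1, p))),
                   np.kron(np.ones((1, m)), np.eye(p))])
    C_pinv = np.linalg.pinv(C)
    out = np.empty(draws)
    for t in range(draws):
        b = rng.standard_normal(m + p)               # b ~ N(0, I)
        resid = b - C @ (C_pinv @ b)                 # canonical residual of Step 1
        out[t] = np.linalg.norm(resid) / np.sqrt(b.size) / b.std(ddof=1)
    return out

for m, p in [(5, 5), (20, 20), (45, 45)]:
    sample = nrmse_step1(m, p)
    print(f"m = p = {m}: mean NRMSE = {sample.mean():.3f}, sd = {sample.std(ddof=1):.3f}")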
From the results of the Monte Carlo experiment, it is observable that (1) the NRMSE from the first-step estimate z ^ ( 1 ) and its distributional statistics exhibit increasing stability and boundedness as matrix dimensions m × p increase—specifically, for m , p 1 , both the mean and sd of NRMSE tend to converge, indicating improved estimator performance (e.g., under no diagonal restrictions, at m = p = 5 , mean = 1.84 and sd = 1.29 , while at m = p = 45 , mean = 0.21 and sd = 0.16 ); (2) the zero-diagonal constraints ( i { 1 , , min ( m , p ) } X i i = 0 ) reduce the degrees of freedom and lead to a much more uniform and dimensionally stable distribution of NRMSE across different matrix sizes (e.g., at m = p = 5 , mean drops to 1.17 and sd to 0.85 , while at m = p = 45 , mean rises slightly to 0.26 and sd to 0.19 ); and (3) the distribution of NRMSE exhibits mild right skew and leptokurtosis—less so for the zero-diagonal case: under no diagonal restriction, the average skewness is 0.97 with a coefficient of variation of 38.16 % (i.e., >0.00), and the average kurtosis is 4.15 with a variation of 304.61 % , whereas under the zero-diagonal constraint, skewness averages 0.95 with only 8.57 % variation, and kurtosis averages 3.76 with 25.94 % variation (i.e., >3.00). To sum up, the larger and less underdetermined (i.e., in an AP (TM) problem, the larger the M ( 1 ) block of A ( 1 ) ) the canonical system A ( 1 ) z ( 1 ) = b , the lower the estimation error (i.e., the mean NRMSE) and its variance, and, provided a stable limiting law for NRMSE exists, the skewness and kurtosis from the simulations drift toward 0 and 3, respectively, consistent with—though not proving—a convergence toward a standard-normal distribution. This leads to a straightforward conclusion that the CLSP estimator’s Step 1 performance improves monotonically with increasing problem size and rank, indicating that larger and better-conditioned systems yield more stable normalized errors, thereby allowing future research to concentrate on the calibration of the parameters r and, on the inclusion of Step 2, α .
For an illustration of the implementability of the estimator for the APs (TMs), CMLS (RPs), and LPRLS/QPRLS, the author’s cross-platform Python 3 module, pyclsp (version ≥ 1.6.0, available on PyPI) [84]—together with its interface co-modules for the APs (TMs), pytmpinv (version ≥ 1.2.0, available on PyPI) [85], and the LPRLS/QPRLS, pylppinv (version ≥ 1.3.0, available on PyPI) [86] (a co-module for the CMLS (RPs) is not provided, as the variability of i { 1 , , t } C i T i D within the C = C 1 C u , u t , block of A ( r ) prevents efficient generalization)—is employed for a sample (dataset) of i random elements X i (scalars, vectors, or matrices depending on context) and their transformations, drawn independently from the standard normal distribution, i.e., { X 1 , , X i } iid N ( 0 , 1 ) , under the mentioned random-variate seed 123456789 (configured for consistency with the above-described Monte Carlo experiment). Thus, using the cases of (a) a (non-negative symmetric) input–output table and (b) a (non-negative) zero-diagonal trade matrix as two AP (TM) numerical examples, this text simulates problems similar to the ones addressed in Pereira-López et al. [81] and Bolotov [16]: an underdetermined X R m × p is estimated—from row sums, column sums, and k known values of two randomly generated matrices (a)  X true = 0.5 · | X 1 | + | X 1 | and (b)  X true , i , j = | X 1 ) i , j 1 1 { i = j min ( m , p ) } , X 1 i , j iid N ( 0 , 1 ) , subject to (a)  X i , j 0 and (b)  X i , j 0 i { 1 , , min ( m , p ) } X i i = 0 —via CLSP assuming x = vec ( X ) z , M I m p I , b = b row sums b column sums b known values = X true 1 1 X true vec ( X true ) I , and I U 1 , , m p | I | = k . The Python 3 code, based on pytmpinv and two other modules, installable by executing pip install matplotlib numpy pytmpinv==1.2.0, for (a)  m = p = 20 and k = 0.1 · m p = 40 and (b)  m = p = 40 —estimated with the help of m subset × n subset -MNBLUE across m m subset · n n subset = 40 20 1 · 40 20 1 = 9 reduced models m reduced = p reduced 20 assuming an estimability constraint of m p 400 —and k = 0.2 · m p = 320 , with a non-iterated ( r = 1 ), unique ( α > 0 ), and MNBLUE ( α = 1 ) (two-step) CLSP solution, is implemented in (a) Listing 1 and (b) Listing 2 (e.g., to be executed in a Jupyter Notebook 6.5 or later).
Listing 1. Simulated numerical example for a symmetric input-output-table-based AP (TM) problem.
In case (a) (see Listing 1 for code), the number of model (target) variables # x ^ M * is m p = 400 with a nullity of dim ( ker ( A ( 1 ) ) ) = n rank ( A ( 1 ) ) = 321 , corresponding to 80.25% of the total unknowns. Given the simulation of X true R 20 × 20 , a matrix unknown in real-world applications—i.e., CLSP is used to estimate the elements of an existing matrix from its row sums, X true 1 R 20 , column sums, 1 X true R 20 , and a randomly selected 10% of its entries, vec ( X true ) I R 40 —the model’s goodness of fit can be measured by a user-defined R 2 max 0 , 1 vec ( X true ) x ^ M * 2 2 vec ( X true ) 1 1 400 i = 1 400 vec ( X true ) i 2 2 0.278256 (i.e., CLSP achieves an improvement of Δ R 2 = 0.178256 over a hypothetical naïve predictor reproducing the known 10% entries of X true , I and yielding a R 2 0.1 but still a modest value of R 2 , i.e., a relatively large error, NRMSE = 12.649111 ) with x ^ M * lying within wide condition-weighted diagnostic intervals x ^ M * min ( x ^ M , lower * ) , max ( x ^ M , upper * ) = 8.0847 × 10 15 , 8.0847 × 10 15 reflecting the ill-conditioning of the strongly rectangular A ( 1 ) R 80 × 400 , κ ( A ( 1 ) ) = 4.9968 × 10 14 , and only the left-sided Monte Carlo-based t-test for the mean of the NRMSE (on a sample of 30 NRMSE values obtained by substituting b with b ( τ ) i iid N ( 0 , 1 ) ) suggesting consistency with expectations (i.e., with the p - value 1 ). In terms of numerical stability, κ ( C canon ) = 4.8023 × 10 14 κ ( A ( 1 ) ) with a low RMSA i { 1 , , 40 } = RMSA = 0.0400 , as confirmed by the correlogram produced in matplotlib, indicating a well-chosen constraints matrix, even given the underdeterminedness of the model (which also prevents a better fit).
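The mechanics of the goodness-of-fit and diagnostic quantities quoted above can be illustrated with plain numpy and scipy; the NRMSE normalization, the use of scipy.stats.ttest_1samp for the Monte Carlo-based t-test, and the problem dimensions are assumptions of this sketch, which therefore does not reproduce the figures reported for case (a).

import numpy as np
from scipy import stats

rng = np.random.default_rng(123456789)

m, p, k = 10, 10, 10
X_true = np.abs(rng.standard_normal((m, p)))
idx = rng.choice(m * p, size=k, replace=False)
A = np.vstack([np.kron(np.eye(m), np.ones((1, p))),
               np.kron(np.ones((1, m)), np.eye(p)),
               np.eye(m * p)[idx]])
b = np.concatenate([X_true.sum(axis=1), X_true.sum(axis=0), X_true.ravel()[idx]])

A_pinv = np.linalg.pinv(A)
x_hat = A_pinv @ b

# User-defined R^2 against the (normally unknown) true matrix, as in the text
x_true = X_true.ravel()
r2 = max(0.0, 1.0 - np.sum((x_true - x_hat) ** 2) / np.sum((x_true - x_true.mean()) ** 2))

# NRMSE of the canonical residual and the condition number of the design matrix
nrmse = np.linalg.norm(b - A @ x_hat) / np.sqrt(b.size) / b.std(ddof=1)
kappa = np.linalg.cond(A)

# Monte Carlo t-test for the mean of NRMSE: 30 draws with b replaced by N(0, I) vectors
mc = []
for _ in range(30):
    b_tau = rng.standard_normal(b.size)
    r_tau = b_tau - A @ (A_pinv @ b_tau)
    mc.append(np.linalg.norm(r_tau) / np.sqrt(b_tau.size) / b_tau.std(ddof=1))
t_stat, p_left = stats.ttest_1samp(mc, popmean=nrmse, alternative="less")

print(f"R^2 = {r2:.4f}, NRMSE = {nrmse:.4e}, cond(A) = {kappa:.4e}, left-sided p = {p_left:.4f}")

Because b is consistent with X_true by construction in this sketch, the observed NRMSE is near zero and the left-sided p-value approaches one, mirroring the qualitative pattern reported above.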
Listing 2. Simulated numerical example for a (non-negative) trade-matrix-based AP (TM) problem.
In case (b) (see Listing 2 for code), the corresponding number of model (target) variables in each of the 40 19 · 40 19 = 9 reduced design submatrices A subset ( 1 ) A reduced ( 1 ) , # x ^ M , reduced * , is m subset p subset 361 —where m subset 20 1 = 19 and p subset 20 1 = 19 , one row and one column being reserved for j J a i , j ( 1 ) and i I a i , j ( 1 ) , which enter A reduced ( 1 ) as S = I m m subset 0 0 I p p subset and the vector of slack variables y ^ reduced * * = y ^ reduced * y ^ residual , reduced * z ^ reduced * , so that C reduced x reduced + S reduced y reduced * = b r . , 1 : k (i.e., the slack matrix compensates for the unaccounted row and column sums in the reduced models as opposed to the full one, such as case (a))—with a nullity dim ( ker ( A reduced ( 1 ) ) ) = n reduced rank ( A reduced ( 1 ) ) [ 2 , 290 ] (per reduced model), corresponding to 25.00–72.68% of the total unknowns (per reduced model)—to compare, a full model, under the same inputs and no computational constraints (i.e., m p m E p E ), would have a nullity dim ( ker ( A ( 1 ) ) ) = n rank ( A ( 1 ) ) = 1168 , corresponding to 73.00% of the total unknowns. In the examined example—based on X true 1 R 40 , 1 X true R 40 , and a randomly selected 20% of entries of the true matrix X true R 40 × 40 —the reduced-model block solution’s goodness of fit could not be efficiently measured by a user-defined R 2 max 0 , 1 vec ( X true ) vec x ^ M , reduced , 1 : m subset , 1 : n subset * , ( i E , j E ) 2 2 vec ( X true ) 1 1 1600 i = 1 1600 vec ( X true ) i 2 2 0 (i.e., the block matrix constructed from 40 19 · 40 19 = 9 reduced-model estimates led to R 2 max ( 0 , 5.435377 ) but to an error proportionate to the one in case (a), NRMSE reduced [ 3.464102 , 15.748016 ] (per reduced model))—in contrast, a full model would achieve R 2 max 0 , 1 vec ( X true ) x ^ M * 2 2 vec ( X true ) 1 1 1600 i = 1 1600 vec ( X true ) i 2 2 0.278427 but at a cost of a greater error, NRMSE = 29.427878 —with x ^ M , reduced * lying within strongly varying condition-weighted diagnostic intervals x ^ M , reduced * min ( x ^ M , lower , reduced * ) , max ( x ^ M , upper , reduced * ) , where min ( x ^ M , lower , reduced * ) [ 330.638753 , 2.5009 × 10 19 ] (per reduced model) and max ( x ^ M , upper , reduced * ) [ 2.5456 × 10 19 , 336.547225 ] (per reduced model), and varying results of Monte Carlo-based t-tests for the mean of the NRMSE reduced (on a sample of 30 NRMSE reduced values obtained by substituting b r . with b r . ( τ ) i iid N ( 0 , 1 ) ), where the p-values range is 4.8022 × 10 09 –1.000000 (left-sided), 0.000000–1.000000 (two-sided), and 3.7021 × 10 20 –1.000000 (right-sided) (per reduced model)—alternatively, a full model would lead to wider condition-weighted diagnostic intervals x ^ M * min ( x ^ M , lower * ) , max ( x ^ M , upper * ) = 1.5357 × 10 16 , 1.5357 × 10 16 (i.e., reflecting the ill-conditioning of the strongly rectangular A ( 1 ) R 400 × 1600 , κ ( A ( 1 ) ) = 2.2196 × 10 14 ) and only the left-sided Monte Carlo-based t-test for the mean of the NRMSE (on a sample of 30 NRMSE values obtained by substituting b with b ( τ ) i iid N ( 0 , 1 ) ) suggesting consistency with expectations (i.e., with the p - value 1 ). 
In terms of numerical stability, κ ( C canon , reduced ) ∈ [ 2.236068 , 6.244998 ] < κ ( A reduced ( 1 ) ) ∈ [ 4.509742 , 9.325735 ] (per reduced model), which indicates that all the reduced models are well-conditioned—conversely, in the full model, κ ( C canon ) = 1.0907 × 10^14 < κ ( A ( 1 ) ) . The full model therefore ensures a better overall fit but a lower numerical fit quality, i.e., a trade-off.
As the CMLS (RP) numerical example, this text addresses a problem similar to the one solved in Bolotov [15,82]: a coefficient vector β from a (time-series) linear regression model in its classical (statistical) notation y t = x t β + ϵ t , s . t . j = 1 i + 1 β j = c t P Δ y t = 0 t P Δ 2 y t 0 with Δ n y t denoting n-th order differences (i.e., the discrete analogue of d n y t d t n )—where y t is the dependent variable, x t = 1 , x t , 1 , , x t , i is the vector of regressors with a constant, ϵ t is the model’s error (residual), c is a constant, and P { t j : j = 1 , , k } is the set of stationary points—is estimated on a simulated sample (dataset) with the help of CLSP assuming x M β R p and D X = 1 , x 1 , , x i R k × p , p = i + 1 . The Python 3 code, based on numpy and pyclsp modules, installable by executing pip install numpy pyclsp==1.6.0, for k = 500 , p = 6 , c = 1 , y = D β true + ϵ , where j { 2 , , 6 } D · , j X j , X j i iid N ( 0 , 1 ) , X j R 500 , β true = X β 1 X β , X β i iid N ( 0 , 1 ) , X β R 6 , ϵ X ϵ , X ϵ i iid N ( 0 , 1 ) , X ϵ R 500 , and P = { t : Δ y = 0 } , with a non-iterated ( r = 1 ), unique ( α > 0 ), and MNBLUE ( α = 1 ) two-step CLSP solution (consistent with the ones from cases (a) and (b) for APs (TMs)), is implemented in Listing 3 (e.g., for a Jupyter Notebook).
Listing 3. Simulated numerical example for a (time-series) stationary-points-based CMLS (RP) problem.
Compared to the true values β true = [ 0.1687 , 0.3593 , 0.5969 , 0.0628 , 0.1742 , 0.8307 ] , the CMLS (RP) estimate is β ^ = x ^ M * = [ 0.2024 , 0.1121 , 0.2095 , 0.0396 , 0.0296 , 0.2611 ] with a modest R partial 2 = 0.293328 (i.e., a greater error, NRMSE partial = 0.840638 ), moderate condition-weighted diagnostic intervals x ^ M * [ 29.7780 , 30.3003 ] , and only the right-sided Monte Carlo-based t-test for the mean of the NRMSE partial (on a sample of 30 NRMSE partial values obtained by substituting b with b ( τ ) i iid N ( 0 , 1 ) )—in the example under scrutiny, NRMSE partial is preferable to NRMSE due to y ^ * x ^ * —suggesting consistency with expectations (i.e., with the p - value 1 ). Similarly, in terms of numerical stability, κ ( C canon ) = 138.6109 > κ ( A ( 1 ) ) = 127.8287 , indicating that the constraint block is ill-conditioned, most likely, because of the imposed data- (rather than theory-) based definition of stationary points, which rendered Step 2 infeasible ( x ^ M * = x ^ M ( 1 ) ) (limiting the fit quality).
Finally, in the case of a LPRLS/QPRLS numerical example, this text simulates potentially ill-posed LP (QP) problems, similar to the ones addressed in Whiteside et al. [21] and Blair [22,23]: a solution x ^ * 0 in its classical LP (QP) notation x * = arg   max x R p c x ( x * = arg   min x R p 1 2 x Q x + c x ), s . t . A x b A u b x b u b A e q x = b e q —where Q R p × p is a symmetric positive definite matrix, c R p , A = A u b A e q R ( m u b + m e q ) × p , and b = b u b b e q R m u b + m e q —is estimated from two randomly generated (coefficient) matrices A u b X 1 , X 1 i , j iid N ( 0 , 1 ) , and A e q X 2 , X 2 i , j iid N ( 0 , 1 ) , and two (right-hand-side) vectors b u b X 3 , X 3 i iid N ( 0 , 1 ) , and b e q X 4 , X 4 i iid N ( 0 , 1 ) , via CLSP assuming C = A u b A e q , S = Q S L S s I m u b 0 Q S R , and b = b u b b e q , where Q S L and Q S R are permutation matrices, while omitting Q and c . The Python 3 code, based on numpy and pylppinv modules, installable by executing pip install numpy pylppinv==1.3.0, for m u b = 50 , m e q = 25 , and p = 500 , with a non-iterated ( r = 1 ), unique ( α > 0 ), and MNBLUE ( α = 1 ) (two-step) CLSP solution (analogously consistent with the ones from cases (a) and (b) for APs (TMs) and the one from CMLS (RPs)), is implemented in Listing 4 (e.g., for a Jupyter Notebook).
Listing 4. Simulated numerical example for an underdetermined and potentially infeasible LP (QP) problem.
The nullity (i.e., underdeterminedness) of the CLSP design matrix, A ( 1 ) , is dim ( ker ( A ( 1 ) ) ) = n rank ( A ( 1 ) ) = 475 , corresponding to 86.36% of the total unknowns, accompanied by a greater (relatively high) error, NRMSE = 12.247449 , moderate condition-weighted diagnostic intervals x ^ M * [ 1.2919 , 1.3918 ] , and only the right-sided Monte Carlo-based t-test for the mean of the NRMSE (on a sample of 30 NRMSE values obtained by substituting b with b ( τ ) i iid N ( 0 , 1 ) ) suggesting consistency with expectations (i.e., with the p - value 1 ). Similarly, in terms of numerical stability, κ ( C canon ) = 2.2168 = κ ( A ( 1 ) ) = 2.2168 , indicating that the constraint block is well-conditioned despite a strongly rectangular A ( 1 ) R 75 × 550 . Hence, CLSP can be efficiently applied to AP (TM), CMLS (RP), and LPRLS/QPRLS special cases using Python 3, and the reader may wish to experiment with the code in Listings 1–4 by relaxing its uniqueness (setting α = 0 ) or the MNBLUE characteristic (setting α [ 0 , 1 ) ).
To sum up, while the simulated numerical examples for the APs (TMs), CMLS (RPs), and LPRLS/QPRLS problems successfully illustrate the theoretical assumptions and numerical behavior of the CLSP estimator (as described in Section 3, Section 4 and Section 5 of this work), they remain limited by simplification and a priori construction. First, each estimation was performed under a fixed r = 1 (i.e., without iterative refinement) and a unique MNBLUE configuration ( α = 1 ), thus omitting the calibration of these parameters. Second, in the AP (TM) case with 10% known values and a non-zero diagonal, similar to the CMLS (RP) one, the estimator achieved a modest R 2 ≈ 0.3 , consistent with the degree of underdeterminedness, while the reduced models (e.g., zero-diagonal, block-decomposed) performed substantially worse due to aggregation and block-wise estimation (backing the recommendation to prefer full models over reduced ones). Finally, the estimator, given its formalization, is computationally demanding: the SVD-based Step 1 has a complexity of O ( max ( m , n ) min ( m , n ) 2 ) for A ( r ) ∈ R m × n in each iteration (in CLSP-based software built on Python's SciPy, R, and Stata), and the convex-programming-based Step 2 has a complexity of O ( k n n z ( A ( r ) ) ) or O ( n 3 ) , where k is the number of solver iterations and n n z ( · ) is the number of non-zero elements in A ( r ) ∈ R m × n (in CLSP-based software built on Python's CVXPY and R's CVXR), not including possible repeated estimations for RMSA i (in which A ( r ) is reduced by one row) and for the Monte Carlo-based t-test—to exemplify, a 20 × 20 AP (TM) problem with 10% known values and a non-zero diagonal has an A ( r ) ∈ R 80 × 400 requiring r times 2.56 million floating-point operations in Step 1 and up to 320 million under the SCS solver in Step 2, each possibly repeated twenty times to calculate RMSA i and at least thirty times (per CLT) for the Monte Carlo-based t-test. Despite its complexity, the CLSP framework remains practically viable and is ready for parameter calibration with subsequent application to real-world problems.
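The operation counts quoted above can be verified with a short computation; the Step-2 iteration count below is a purely hypothetical placeholder, since the actual number of SCS iterations depends on the problem and the solver settings.

# Rough flop estimates for the 20 x 20 AP (TM) example with A^(r) in R^(80 x 400)
m, n = 80, 400
print("Step 1 (SVD):", max(m, n) * min(m, n) ** 2)    # 400 * 80^2 = 2,560,000 per iteration
k_iter, nnz = 10_000, m * n                           # hypothetical iteration count, dense A
print("Step 2 (O(k * nnz(A))):", k_iter * nnz)        # 10,000 * 32,000 = 320,000,000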

8. Discussion and Conclusions

The proposed modular two-step Convex Least Squares Programming (CLSP) estimator represents an attempt to unify historical efforts, developed mainly in the 1960s–1990s, to address ill-posed problems in both LS and LP (QP)—namely, the mentioned Wolfe [25], Stoer [75], Waterman [76], Dax [26], Osborne [27], George and Osborne [74], Barnes et al. [79], Donoho and Tanner [73], Grafarend and Awange [77], Bei et al. [80], and Qian et al. [78]. It does so by combining (1) estimation based on the Moore–Penrose pseudoinverse A † (which has been applied in an ad hoc manner in the field, particularly in econometrics in the 2010s; consult Pereira-López et al. [81] and Bolotov [16]) and the Bott–Duffin inverse A Z † (based on the literature review, this text may be the first to consider a constrained generalized inverse as a generalization of an A † -based estimator) with (2) a regularization-inspired correction [4] (pp. 477–479), which achieves greater fit quality by ensuring strict satisfaction of the problem constraints (previously infeasible under an A † -based solution), while yielding a (potentially) unique and MNBLUE solution (when α = 1 ). At the same time, CLSP is well grounded in the algorithmic logic of the works in question, such as Wolfe [25] and Dax [26], as further formalized by, e.g., Osborne [27]. However, the primary strength of the framework lies in its “generality”: in contrast to (individual) problem-specific algorithms, such as the ones developed by Übi [28,29,30,31,32]—routines combining (NN)LS, LP, and QP within a single framework to solve both well- and ill-posed convex optimization problems—CLSP provides a unified estimator with a proper methodology for numerical stability and goodness-of-fit assessment, centered around RMSA and NRMSE. A secondary strength is its practical “versatility”: through at least three classes of special cases—APs (TMs), CMLS (RPs), and LPRLS/QPRLS—it allows estimation of (rectangular) input–output tables (national, inter-country, or satellite), structural matrices (such as trade, country-product, investment, or migration data), and previously infeasible textbook-definition models of economic variables and/or LP (QP) problems. Furthermore, the inclusion of A Z † allows probabilistic and Bayesian applications, since its subspace restrictions can be interpreted in terms of random variables and prior distributions, as well as Monte Carlo-based inference. To conclude, CLSP has already been fully implemented in Python 3 (pyclsp, pytmpinv, and pylppinv) and is to be ported to R and Stata in the future, making it a readily available alternative to purely theoretical, not freely accessible, or proprietary algorithms and packages, and applicable in real-world settings (such as econometric research, replacing [15,16,81] with the two-step corrected estimation).

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The numerical examples used in the manuscript are based on simulated data. The Python 3.10 and Stata 14 code for replication purposes is provided in Listings 1–4 and A1.

Acknowledgments

All arguments, structure, and conclusions in this manuscript are the author’s own. During the preparation of the manuscript, the author used ChatGPT-5 (OpenAI) solely to improve clarity and language, including refinement of style, formal precision in proofs of theorems, and assistance with the interpretation of data and results. No content was adopted without the author’s review and editing, and the author takes full responsibility for the final content of the manuscript.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript (in order of their appearance in the sections):
CLSP	Convex Least Squares Programming
LS	Least Squares
OLS	Ordinary Least Squares
NNLS	Non-Negative Least Squares
SVD	Singular Value Decomposition
LP	Linear Programming
QP	Quadratic Programming
Lasso	Least Absolute Shrinkage and Selection Operator
Ridge	Ridge Regression (Tikhonov regularization)
MNBLUE	Minimum-Norm Best Linear Unbiased Estimator
BLUE	Best Linear Unbiased Estimator
RMSA	Root Mean Square Alignment
RMSE	Root Mean Square Error
NRMSE	Normalized Root Mean Square Error
ANOVA	Analysis of Variance
CLT	Lindeberg–Lévy Central Limit Theorem
APs	Allocation Problems
TMs	Tabular Matrix Problems
UNCTAD	UN Trade and Development
CMLS	Constrained-Model Least Squares
RPs	Regression Problems
NBER	U.S. National Bureau of Economic Research
LPRLS	Linear Programming via Regularized Least Squares
QPRLS	Quadratic Programming via Regularized Least Squares
iid	independent and identically distributed

Appendix A

This appendix presents (a) an expression for the Bott–Duffin inverse, (A_Z^(r))^†, based on the singular value decomposition (SVD), as well as (b) a partitioned decomposition of B^(r) (consult Equations (A1)–(A3) and, for the history of all three concepts, Section 2 and Section 4).

Appendix A.1

Let A^(r) ∈ R^(m×n) be a real matrix, and let Z ∈ R^(n×n) be a symmetric idempotent matrix (i.e., Z² = Z = Z^⊤) defining a subspace restriction. Then, the Bott–Duffin inverse (A_Z^(r))^† ∈ R^(n×m) is given by the formula (A_Z^(r))^† = (Z (A^(r))^⊤ A^(r) Z)^† Z (A^(r))^⊤, as defined in Equation (5), where the projected Gram matrix Z (A^(r))^⊤ A^(r) Z is computable via its SVD:
$$
Z (A^{(r)})^{\top} A^{(r)} Z = U \Sigma V^{\top} = U \Sigma U^{\top}
\tag{A1}
$$
where U, V ∈ R^(n×n) are orthogonal (since the projected Gram matrix is symmetric positive semidefinite, U = V), and Σ = diag(σ_1, …, σ_rank, 0, …, 0) ∈ R^(n×n) contains the singular values. Hence, for Σ^† = diag(σ_1^(−1), …, σ_rank^(−1), 0, …, 0), U Σ^† U^⊤ substitutes (Z (A^(r))^⊤ A^(r) Z)^†:
$$
(A_Z^{(r)})^{\dagger} = \bigl(Z (A^{(r)})^{\top} A^{(r)} Z\bigr)^{\dagger} Z (A^{(r)})^{\top} = U \Sigma^{\dagger} U^{\top} Z (A^{(r)})^{\top}
\tag{A2}
$$
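A small numpy sketch of Equations (A1) and (A2), assuming an arbitrary random matrix A and an orthogonal projector Z (the subspace dimension and the seed are illustrative): it forms the projected Gram matrix, inverts it through its SVD, and checks that the result stays in the restricted subspace and reduces to the Moore–Penrose pseudoinverse when Z = I.

import numpy as np

rng = np.random.default_rng(123456789)

m, n, d = 6, 10, 7
A = rng.standard_normal((m, n))

# A symmetric idempotent Z: orthogonal projector onto a d-dimensional subspace of R^n
Q, _ = np.linalg.qr(rng.standard_normal((n, d)))
Z = Q @ Q.T

# Projected Gram matrix and its SVD (symmetric PSD, so U = V on the non-null part)
G = Z @ A.T @ A @ Z
U, s, _ = np.linalg.svd(G)
tol = max(G.shape) * np.finfo(float).eps * s.max()
s_pinv = np.where(s > tol, 1.0, 0.0) / np.where(s > tol, s, 1.0)

# Bott-Duffin-type inverse: (A_Z)^+ = (Z A' A Z)^+ Z A' = U Sigma^+ U' Z A'
A_Z_pinv = U @ np.diag(s_pinv) @ U.T @ Z @ A.T

print("matches pinv(G) Z A':", np.allclose(A_Z_pinv, np.linalg.pinv(G) @ Z @ A.T))
print("stays in range(Z):   ", np.allclose(Z @ A_Z_pinv, A_Z_pinv))
print("reduces to pinv(A) when Z = I:",
      np.allclose(np.linalg.pinv(A.T @ A) @ A.T, np.linalg.pinv(A)))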

Appendix A.2

Let rank([C S]) ≤ rank(C) + rank(S) in the analysis of numerical stability of the first estimate ẑ^(r) as defined in Equation (2). Then a decomposition of B^(r) reveals how the interaction of the constraints C and the slack structure S forms the curvature of the solution space:
$$
B^{(r)} = \bigl(A^{(r)}\bigr)^{\top} A^{(r)}
= f\!\Bigl(\bigl[\,C \;\; S\,\bigr],\; g\bigl(M,\; b - A^{(r-1)} \hat{z}^{(r-1)}\bigr)\Bigr)
\cdot
\begin{bmatrix} C^{\top} C & C^{\top} S \\[2pt] S^{\top} C & S^{\top} S \end{bmatrix}
\tag{A3}
$$
While the block C^⊤C captures the contribution of the constraints and S^⊤S reflects the curvature introduced by the slack variables, the off-diagonal blocks C^⊤S and S^⊤C encode cross-interactions between constraints and slacks: if these off-diagonal blocks vanish, the effects are orthogonal and additive (in the column-space sense, since C^⊤S = 0 implies R(C) ⊥ R(S)); otherwise, C is orthogonalized with respect to R(S), which is affected by (I − SS^†)C. Hence, the presence of S in [C S] can improve or worsen the ill-posedness of the canonized problem: namely, C^⊤S ≠ 0 tends to worsen the condition number κ([C S]).
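A short numpy illustration of the partitioned Gram matrix in Equation (A3), assuming an AP (TM)-style constraint block and a simple slack block attached to the row-sum rows (both choices are illustrative): it extracts the four blocks and compares an effective condition number of C alone with that of [C S].

import numpy as np

rng = np.random.default_rng(123456789)

def eff_cond(X, rtol=1e-12):
    # condition number based on the smallest non-negligible singular value
    s = np.linalg.svd(X, compute_uv=False)
    s = s[s > rtol * s.max()]
    return s.max() / s.min()

m, p = 10, 8
C = np.vstack([np.kron(np.eye(m), np.ones((1, p))),     # row-sum rows
               np.kron(np.ones((1, m)), np.eye(p))])    # column-sum rows
S = np.vstack([np.eye(m), np.zeros((p, m))])            # slacks attached to the row-sum rows only

CS = np.hstack([C, S])
G = CS.T @ CS
mp = m * p

blocks = {"C'C": G[:mp, :mp], "C'S": G[:mp, mp:], "S'C": G[mp:, :mp], "S'S": G[mp:, mp:]}
print({name: round(float(np.linalg.norm(block)), 3) for name, block in blocks.items()})

# Non-zero off-diagonal blocks signal interaction between constraints and slacks
print("eff. cond(C)    =", round(float(eff_cond(C)), 3))
print("eff. cond([C S]) =", round(float(eff_cond(CS)), 3))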

Appendix B

This appendix presents Stata code, i.e., a .do-file, for the Monte Carlo experiment, compatible with version 14.0 and above (Stata/MP with at least four CPUs is recommended for replication—specifically, 50,000 iterations require substantial CPU time and memory).
Stata/MP 14.0 dependencies:
TITLE
’SUMMARIZEBY’: module to use statsby functionality with summarize    
DESCRIPTION/AUTHOR(S)
This routine combines the summarize command with statsby. It iteratively collects summarize results for each variable with statsby and saves the result to memory or to a Stata .dta file.
KW: data management
KW: summarize
KW: statsby
Requires: Stata version 8
Distribution–Date: 20221012
Author: Ilya Bolotov, Prague University of Economics and Business
Support: email ilya.bolotov@vse.cz
INSTALLATION FILES
summarizeby.ado
summarizeby.sthlp
  • (type ssc install summarizeby to install)
Listing A1. Stata code from the Monte Carlo experiment dedicated to mapping the distribution of NRMSE.

References

  1. Nocedal, J.; Wright, S.J. Numerical Optimization, 2nd ed.; Springer Series in Operations Research; Springer: New York, NY, USA, 2006; p. 664. [Google Scholar] [CrossRef]
  2. Boyd, S.; Vandenberghe, L. Convex Optimization, 1st ed.; Cambridge University Press: Cambridge, UK, 2004; p. 727. [Google Scholar] [CrossRef]
  3. Sydsæter, K.; Hammond, P.; Seierstad, A.; Strøm, A. Further Mathematics for Economic Analysis, 2nd ed.; FT Prentice Hall: Harlow, UK; München, Germany, 2011; p. 616. [Google Scholar]
  4. Gentle, J.E. Matrix Algebra: Theory, Computations and Applications in Statistics, 3rd ed.; Springer Texts in Statistics; Springer: Cham, Switzerland, 2024; p. 725. [Google Scholar] [CrossRef]
  5. Dantzig, G.B. Reminiscences about the Origins of Linear Programming. Mem. Am. Math. Soc. 1984, 48, 1–11. [Google Scholar] [CrossRef]
  6. Dantzig, G.B. Linear Programming and Extensions, 1st ed.; Reprint of 1963; Princeton Landmarks in Mathematics and Physics; Princeton University Press: Princeton, NJ, USA, 1998; p. 656. [Google Scholar]
  7. Koopmans, T.C. Activity Analysis of Production and Allocation: Proceedings of a Conference, 1st ed.; Wiley: New York, NY, USA, 1951; p. 404. [Google Scholar]
  8. Allen, R.G.D. Mathematical Economics, 2nd ed.; Reprint of 1959; Palgrave Macmillan: London, UK, 1976; p. 812. [Google Scholar] [CrossRef]
  9. Lancaster, K. Mathematical Economics, 1st ed.; Dover Publications: New York, NY, USA, 1987; p. 411. [Google Scholar]
  10. Intriligator, M.D. Mathematical Optimization and Economic Theory, 1st ed.; Reprint of 1971; Classics in Applied Mathematics; SIAM: Philadelphia, PA, USA, 2002; p. 508. [Google Scholar] [CrossRef]
  11. Intriligator, M.D.; Arrow, K.J. Handbook of Mathematical Economics, 1st ed.; Handbooks in Economics; North-Holland: Amsterdam, The Netherlands; New York, NY, USA, 1981; p. 378. [Google Scholar]
  12. Dorfman, R.; Samuelson, P.A.; Solow, R.M. Linear Programming and Economic Analysis, 1st ed.; Reprint of 1958; Dover Books on Advanced Mathematics; Dover Publications: New York, NY, USA, 1987; p. 525. [Google Scholar]
  13. Frühwirth, T.; Abdennadher, S. Essentials of Constraint Programming, 1st ed.; Cognitive Technologies; Springer: Berlin/Heidelberg, Germany, 2003; p. 144. [Google Scholar] [CrossRef]
  14. Rossi, F.; van Beek, P.; Walsh, T. (Eds.) Handbook of Constraint Programming, 1st ed.; Foundations of Artificial Intelligence; Elsevier: Amsterdam, The Netherlands; Boston, MA, USA, 2006; Volume 2, p. 955. [Google Scholar]
  15. Bolotov, I. Modeling of Time Series Cyclical Component on a Defined Set of Stationary Points and Its Application on the U.S. Business Cycle. In Proceedings of the the 8th International Days of Statistics and Economics. Melandrium, Prague, Czech Republic, 11–13 September 2014; pp. 151–160. [Google Scholar]
  16. Bolotov, I. Modeling Bilateral Flows in Economics by Means of Exact Mathematical Methods. In Proceedings of the 9th International Days of Statistics and Economics. Melandrium, Prague, Czech Republic, 10–12 September 2015; pp. 199–208. [Google Scholar]
  17. Rao, C.R.; Mitra, S.K. Generalized Inverse of Matrices and Its Applications, 1st ed.; Wiley Series in Probability and Mathematical Statistics; Wiley: New York, NY, USA, 1971; p. 240. [Google Scholar] [CrossRef]
  18. Ben-Israel, A.; Greville, T.N.E. Generalized Inverses: Theory and Applications, 2nd ed.; Reprint of 2003; CMS Books in Mathematics; Springer: New York, NY, USA, 2006; p. 420. [Google Scholar] [CrossRef]
  19. Lawson, C.L.; Hanson, R.J. Solving Least Squares Problems, 1st ed.; Reprint of 1974; Classics in Applied Mathematics; SIAM: Philadelphia, PA, USA, 1995; p. 337. [Google Scholar] [CrossRef]
  20. Wang, G.; Wei, Y.; Qiao, S. Generalized Inverses: Theory and Computations, 1st ed.; Developments in Mathematics; Springer: Singapore, 2018; Volume 53, p. 397. [Google Scholar] [CrossRef]
  21. Whiteside, M.M.; Choi, B.; Eakin, M.; Crockett, H. Stability of Linear Programming Solutions Using Regression Coefficients. J. Stat. Comput. Simul. 1994, 50, 131–146. [Google Scholar] [CrossRef]
  22. Blair, C. Random Linear Programs with Many Variables and Few Constraints. Math. Program. 1986, 34, 62–71. [Google Scholar] [CrossRef]
  23. Blair, C. Random Inequality Constraint Systems with Few Variables. Math. Program. 1986, 35, 135–139. [Google Scholar] [CrossRef]
  24. Tikhonov, A.N.; Goncharskiy, A.V.; Stepanov, V.V.; Yagola, A.G. Chislennyye Metody Resheniya Nekorrektnykh Zadach, 2nd ed.; Nauka: Moscow, Russia, 1990; p. 232. [Google Scholar]
  25. Wolfe, P. A Technique for Resolving Degeneracy in Linear Programming. J. Soc. Ind. Appl. Math. 1963, 11, 205–211. [Google Scholar] [CrossRef]
  26. Dax, A. Linear Programming via Least Squares. Linear Algebra Its Appl. 1988, 111, 313–324. [Google Scholar] [CrossRef]
  27. Osborne, M.R. Degeneracy: Resolve or Avoid? J. Oper. Res. Soc. 1992, 43, 829–835. [Google Scholar] [CrossRef]
  28. Übi, E. Exact and Stable Least Squares Solution to the Linear Programming Problem. Cent. Eur. J. Math. 2005, 3, 228–241. [Google Scholar] [CrossRef]
  29. Übi, E. On Stable Least Squares Solution to the System of Linear Inequalities. Open Math. 2007, 5, 373–385. [Google Scholar] [CrossRef]
  30. Übi, E. A Numerically Stable Least Squares Solution to the Quadratic Programming Problem. Open Math. 2008, 6, 171–178. [Google Scholar] [CrossRef]
  31. Übi, E. Mathematical Programming via the Least-Squares Method. Open Math. 2010, 8, 795–806. [Google Scholar] [CrossRef]
  32. Übi, E. Linear Inequalities via Least Squares. Proc. Est. Acad. Sci. 2013, 62, 238–248. [Google Scholar] [CrossRef]
  33. Dresden, A. The Fourteenth Western Meeting of the American Mathematical Society. Bull. Am. Math. Soc. 1920, 26, 385–397. [Google Scholar] [CrossRef]
  34. Bjerhammar, A. Rectangular Reciprocal Matrices, With Special Reference to Geodetic Calculations. Bull. Géodésique 1951, 20, 188–220. [Google Scholar] [CrossRef]
  35. Penrose, R. A Generalized Inverse for Matrices. Math. Proc. Camb. Philos. Soc. 1955, 51, 406–413. [Google Scholar] [CrossRef]
  36. Penrose, R. On Best Approximate Solutions of Linear Matrix Equations. Math. Proc. Camb. Philos. Soc. 1956, 52, 17–19. [Google Scholar] [CrossRef]
  37. Chipman, J.S. On Least Squares with Insufficient Observations. J. Am. Stat. Assoc. 1964, 59, 1078–1111. [Google Scholar] [CrossRef]
  38. Price, C.M. The Matrix Pseudoinverse and Minimal Variance Estimates. SIAM Rev. 1964, 6, 115–120. [Google Scholar] [CrossRef]
  39. Plackett, R.L. Some Theorems in Least Squares. Biometrika 1950, 37, 149–157. [Google Scholar] [CrossRef]
  40. Rao, C.R.; Mitra, S.K. Generalized Inverse of a Matrix and Its Applications. In Theory of Statistics: Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, Volume I, 1st ed.; Le Cam, L.M., Neyman, J., Scott, E.L., Eds.; University of California Press: Berkeley, CA, USA, 1972; pp. 601–620. [Google Scholar]
  41. Greville, T.N.E. The Pseudoinverse of a Rectangular Matrix and Its Statistical Applications. In Proceedings of the Annual Meeting of the American Statistical Association, Chicago, IL, USA, 27–30 December 1958; American Statistical Association: Alexandria, VA, USA, 1958; pp. 116–121. [Google Scholar]
  42. Greville, T.N.E. The Pseudoinverse of a Rectangular or Singular Matrix and Its Application to the Solution of Systems of Linear Equations. SIAM Rev. 1959, 1, 38–43. [Google Scholar] [CrossRef]
  43. Greville, T.N.E. Some Applications of the Pseudoinverse of a Matrix. SIAM Rev. 1960, 2, 15–22. [Google Scholar] [CrossRef]
  44. Greville, T.N.E. Note on Fitting of Functions of Several Independent Variables. J. Soc. Ind. Appl. Math. 1961, 9, 109–115. [Google Scholar] [CrossRef]
  45. Cline, R.E. Representations for the Generalized Inverse of a Partitioned Matrix. J. Soc. Ind. Appl. Math. 1964, 12, 588–600. [Google Scholar] [CrossRef]
  46. Cline, R.E.; Greville, T.N.E. An Extension of the Generalized Inverse of a Matrix. SIAM J. Appl. Math. 1970, 19, 682–688. [Google Scholar] [CrossRef]
  47. Golub, G.H.; Kahan, W. Calculating the Singular Values and Pseudo-Inverse of a Matrix. SIAM J. Numer. Anal. 1965, 2, 205–224. [Google Scholar] [CrossRef]
  48. Ben-Israel, A.; Cohen, D. On Iterative Computation of Generalized Inverses and Associated Projections. SIAM Rev. 1966, 8, 410–419. [Google Scholar] [CrossRef]
  49. Lewis, T.O.; Newman, T.G. Pseudoinverses of Positive Semidefinite Matrices. SIAM J. Appl. Math. 1968, 16, 701–703. [Google Scholar] [CrossRef]
  50. Bott, R.; Duffin, R.J. On the Algebra of Networks. Trans. Am. Math. Soc. 1953, 74, 99–109. [Google Scholar] [CrossRef]
  51. Campbell, S.L.; Meyer, C.D. Generalized Inverses of Linear Transformations, 1st ed.; Reprint of 1979; Classics in Applied Mathematics; SIAM: Philadelphia, PA, USA, 2009; p. 272. [Google Scholar] [CrossRef]
  52. Meyer, C.D., Jr. Generalized Inverses and Ranks of Block Matrices. SIAM J. Appl. Math. 1973, 25, 597–602. [Google Scholar] [CrossRef]
  53. Hartwig, R.E. Block Generalized Inverses. Arch. Ration. Mech. Anal. 1976, 61, 197–251. [Google Scholar] [CrossRef]
  54. Rao, C.R.; Yanai, H. Generalized Inverses of Partitioned Matrices Useful in Statistical Applications. Linear Algebra Its Appl. 1985, 70, 105–113. [Google Scholar] [CrossRef]
  55. Tian, Y. The Moore-Penrose Inverses of m x n Block Matrices and Their Applications. Linear Algebra Its Appl. 1998, 283, 35–60. [Google Scholar] [CrossRef]
  56. Rakha, M.A. On the Moore-Penrose Generalized Inverse Matrix. Appl. Math. Comput. 2004, 158, 185–200. [Google Scholar] [CrossRef]
  57. Baksalary, O.M.; Trenkler, G. On Formulae for the Moore-Penrose Inverse of a Columnwise Partitioned Matrix. Appl. Math. Comput. 2021, 403, 125913. [Google Scholar] [CrossRef]
  58. Albert, A.E. Regression and the Moore-Penrose Pseudoinverse, 1st ed.; Mathematics in Science and Engineering; Academic Press: New York, NY, USA, 1972; p. 180. [Google Scholar]
  59. Dokmanić, I.; Kolundžija, M.; Vetterli, M. Beyond Moore-Penrose: Sparse Pseudoinverse. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Vancouver, BC, Canada, 26–31 May 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 6526–6530. [Google Scholar] [CrossRef]
  60. Baksalary, O.M.; Trenkler, G. The Moore-Penrose Inverse: A Hundred Years on a Frontline of Physics Research. Eur. Phys. J. H 2021, 46, 9. [Google Scholar] [CrossRef]
  61. Mortari, D. Least-Squares Solution of Linear Differential Equations. Mathematics 2017, 5, 48. [Google Scholar] [CrossRef]
  62. Getson, A.J.; Hsuan, F.C. {2}-Inverses and Their Statistical Application, 1st ed.; Reprint of 1988; Lecture Notes in Statistics; Springer: New York, NY, USA, 2012; p. 110. [Google Scholar] [CrossRef]
  63. Björck, A. Numerical Methods for Least Squares Problems, 1st ed.; SIAM: Philadelphia, PA, USA, 1996; p. 408. [Google Scholar]
  64. Kantorovich, L.V. Matematicheskiye Metody Organizatsii i Planirovaniia Proizvodstva, reprint ed.; Izdatel’skiy dom S.-Peterb. gos. un-ta: Saint Petersburg, Russia, 2012; p. 96. [Google Scholar]
  65. Shamir, R. The Efficiency of the Simplex Method: A Survey. Manag. Sci. 1987, 33, 301–334. [Google Scholar] [CrossRef]
  66. Stone, R.E.; Tovey, C.A. The Simplex and Projective Scaling Algorithms as Iteratively Reweighted Least Squares Methods. SIAM Rev. 1991, 33, 220–237. [Google Scholar] [CrossRef]
  67. Wagner, H.M. Linear Programming Techniques for Regression Analysis. J. Am. Stat. Assoc. 1959, 54, 206–212. [Google Scholar] [CrossRef]
  68. Sielken, R.L.; Hartley, H.O. Two Linear Programming Algorithms for Unbiased Estimation of Linear Models. J. Am. Stat. Assoc. 1973, 68, 639–641. [Google Scholar] [CrossRef]
  69. Kiountouzis, E.A. Linear Programming Techniques in Regression Analysis. Appl. Stat. 1973, 22, 69. [Google Scholar] [CrossRef]
  70. Sposito, V.A. On Unbiased Lp Regression Estimators. J. Am. Stat. Assoc. 1982, 77, 652–653. [Google Scholar] [CrossRef]
  71. Judge, G.G.; Takayama, T. Inequality Restrictions in Regression Analysis. J. Am. Stat. Assoc. 1966, 61, 166–181. [Google Scholar] [CrossRef]
  72. Mantel, N. Restricted Least Squares Regression and Convex Quadratic Programming. Technometrics 1969, 11, 763–773. [Google Scholar] [CrossRef]
  73. Donoho, D.L.; Tanner, J. Sparse Nonnegative Solution of Underdetermined Linear Equations by Linear Programming. Proc. Natl. Acad. Sci. USA 2005, 102, 9446–9451. [Google Scholar] [CrossRef] [PubMed]
  74. George, K.; Osborne, M.R. On Degeneracy in Linear Programming and Related Problems. Ann. Oper. Res. 1993, 46–47, 343–359. [Google Scholar] [CrossRef]
  75. Stoer, J. On the Numerical Solution of Constrained Least-Squares Problems. SIAM J. Numer. Anal. 1971, 8, 382–411. [Google Scholar] [CrossRef]
  76. Waterman, M.S. A Restricted Least Squares Problem. Technometrics 1974, 16, 135–136. [Google Scholar] [CrossRef]
  77. Grafarend, E.W.; Awange, J.L. Algebraic Solutions of Systems of Equations. In Linear and Nonlinear Models, 1st ed.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 527–569. [Google Scholar] [CrossRef]
  78. Qian, J.; Andrew, A.L.; Chu, D.; Tan, R.C.E. Methods for Solving Underdetermined Systems. Numer. Linear Algebra Appl. 2018, 25, 17. [Google Scholar] [CrossRef]
  79. Barnes, E.; Chen, V.; Gopalakrishnan, B.; Johnson, E.L. A Least-Squares Primal-Dual Algorithm for Solving Linear Programming Problems. Oper. Res. Lett. 2002, 30, 289–294. [Google Scholar] [CrossRef]
  80. Bei, X.; Chen, N.; Zhang, S. Solving Linear Programming with Constraints Unknown. In Proceedings of the 42nd International Colloquium, ICALP 2015, Kyoto, Japan, 6–10 July 2015; Springer: Berlin/Heidelberg, Germany, 2015; Volume 9134, pp. 129–142. [Google Scholar] [CrossRef]
  81. Pereira-López, X.; Fernández-Fernández, M.; Carrascal-Incera, A. Rectangular Input-output Models by Moore-Penrose Inverse. Rev. Electrónica De Comun. Y Trab. De Asepuma 2014, 15, 13–24. [Google Scholar]
  82. Bolotov, I. The Problem of Relationships Between Conditional Statements and Arithmetic Functions. Mundus Symb. 2012, 20, 5–12. [Google Scholar]
  83. Bolotov, I. SUMMARIZEBY: Stata Module to Use Statsby Functionality with Summarize, version 1.1.2; Boston College Department of Economics: Boston, MA, USA, 2022. Available online: https://ideas.repec.org/c/boc/bocode/s458870.html (accessed on 26 October 2025).
  84. Bolotov, I. PYCLSP: Modular Two-Step Convex Optimization Estimator for Ill-Posed Problems, version 1.3.0. 2025. Available online: https://pypi.org/project/pyclsp/ (accessed on 26 October 2025).
  85. Bolotov, I. PYTMPINV: Tabular Matrix Problems via Pseudoinverse Estimation, version 1.2.0. 2025. Available online: https://pypi.org/project/pytmpinv/ (accessed on 26 October 2025).
  86. Bolotov, I. PYLPPINV: Linear Programming via Pseudoinverse Estimation, version 1.3.0. 2025. Available online: https://pypi.org/project/pylppinv/ (accessed on 26 October 2025).
Figure 1. LP and QP in programming. Adapted from Dantzig (1990, Figure 1.4.1, p. 8) [6] (pp. 1–11).
Figure 2. Applicability of selected types of convex optimization methods across problem classes.
Figure 3. Map of citation and reference counts for the above-cited sources, where hollow circles are suggested items. Monographs not indexed in the database are denoted as (?), with Lawson and Hanson being included in the bibliography as a 1995 reprint [19]. Adapted from Litmaps (Shared Citations and References), available at https://www.litmaps.com/ (accessed on 30 September 2025).
Figure 4. Algorithmic flow of the CLSP Estimator. Decision nodes (diamonds) indicate modularity.
Figure 5. Schematic correlogram of constraint rows i with high alignment ( RMSA i threshold ). Each row encodes directional alignment and symbolic marginal effects on model conditioning and fit.
Figure 6. Heatmaps (based on seaborn’s viridis colormap) of means, mean = 1 50 , 000 i = 1 50 , 000 NRMSE i , and standard deviations, sd = 1 50 , 000 1 i = 1 50 , 000 ( NRMSE i mean ) 2 , computed from the NRMSE of z ^ ( 1 ) = ( A ( 1 ) ) b , where x = vec ( X R m × p , 1 m , p 50 ) z , under two structural variants: with and without a zero diagonal ( i { 1 , , min ( m , p ) } X i i = 0 ). Color reflects the magnitude of the statistic, with lighter shades indicating higher values. The figure was prepared using matplotlib in Python 3.
Table 1. Selected types of generalized inverses G ∈ A{·}, where A{·} ⊆ R^(n×m), x ∈ R^n, and b ∈ R^m.

Type of Inverse	Terminology	Properties
{1}	Equation Solving Inverse	G ∈ A{1} iff AGA = A. Hence, Gb is a solution to Ax = b for all b ∈ R(A).
{1, 2}	Reflexive Inverse	G ∈ A{1, 2} iff AGA = A and GAG = G. Each {1, 2}-inverse defines a direct sum N(A) ⊕ R(G) = R^n, R(A) ⊕ N(G) = R^m. For complementary subspaces (N, R), G_{N,R} is unique, with R(G_{N,R}) = R and N(G_{N,R}) = N.
{1, 3}	Least Squares Inverse	G ∈ A{1, 3} iff AGA = A and (AG)^⊤ = AG. Hence, Gb is the least-squares solution minimizing the norm ‖Ax − b‖₂².
{1, 4}	Minimum Norm Inverse	G ∈ A{1, 4} iff AGA = A and (GA)^⊤ = GA. Hence, Gb is the minimum-norm solution of Ax = b for all b ∈ R(A).
{1, 2, 3, 4}	Moore–Penrose Inverse	The unique G = A^† ∈ A{1, 2, 3, 4} is the least-squares minimum-norm solution.
Source: Adapted from Campbell and Meyer (2009, Table 6.1, p. 96) [51] (pp. 91–119) as a summary of Rao and Mitra [17] (pp. 44–71), Rao and Mitra [40], Ben-Israel and Greville [18] (pp. 40–51), and Wang et al. [20] (pp. 10–19, 33–42).
Table 2. Subset (at m, p mod 10 = 0) of the summary statistics for NRMSE from the simulation study.

m	p	Mean	sd	Skewness	Kurtosis	Min	Max	p1	p5	p25	p50	p75	p95	p99
Without Zero Diagonal
10	10	0.81	0.59	0.82	3.24	0.00	3.52	0.01	0.06	0.33	0.70	1.18	1.94	2.47
10	20	1.53	1.14	0.92	3.60	0.00	8.10	0.02	0.12	0.62	1.31	2.21	3.70	4.80
10	30	5.30	3.96	0.95	3.69	0.00	27.25	0.09	0.41	2.13	4.51	7.65	12.95	16.90
10	40	7.56	5.67	0.97	3.77	0.00	46.23	0.12	0.61	3.02	6.45	10.90	18.59	24.18
10	50	25.08	18.95	0.99	3.84	0.00	127.63	0.39	1.98	9.99	21.10	36.31	61.64	80.75
20	10	0.57	0.43	0.93	3.60	0.00	2.73	0.01	0.04	0.23	0.49	0.83	1.39	1.79
20	20	7.90	5.85	0.89	3.48	0.00	40.71	0.13	0.64	3.18	6.77	11.48	19.17	24.57
20	30	4.10	3.03	0.92	3.61	0.00	21.59	0.06	0.33	1.68	3.50	5.92	9.92	12.87
20	40	5.85	4.37	0.95	3.71	0.00	31.73	0.09	0.47	2.36	5.00	8.44	14.27	18.65
20	50	8.52	6.36	0.94	3.63	0.00	43.37	0.14	0.68	3.44	7.22	12.36	20.73	26.90
30	10	2.89	2.16	0.97	3.78	0.00	15.48	0.05	0.23	1.16	2.47	4.17	7.05	9.19
30	20	2.85	2.11	0.92	3.58	0.00	14.28	0.05	0.23	1.16	2.43	4.12	6.88	8.93
30	30	4.44	3.32	0.94	3.66	0.00	22.66	0.07	0.35	1.78	3.76	6.43	10.85	14.08
30	40	8.73	6.50	0.95	3.69	0.00	48.55	0.14	0.69	3.56	7.47	12.61	21.13	27.85
30	50	5.61	4.22	0.96	3.69	0.00	31.96	0.09	0.44	2.25	4.75	8.10	13.70	17.94
40	10	0.55	0.41	0.98	3.80	0.00	3.10	0.01	0.04	0.22	0.47	0.79	1.35	1.76
40	20	4.67	3.48	0.93	3.66	0.00	25.54	0.07	0.35	1.87	3.97	6.74	11.34	14.83
40	30	1.39	1.04	0.95	3.69	0.00	6.70	0.02	0.11	0.56	1.18	2.01	3.40	4.39
40	40	0.27	0.20	0.96	3.71	0.00	1.48	0.00	0.02	0.11	0.23	0.39	0.67	0.87
40	50	0.29	0.21	0.95	3.69	0.00	1.58	0.00	0.02	0.11	0.24	0.41	0.70	0.91
50	10	3.74	2.82	0.99	3.84	0.00	23.34	0.06	0.29	1.50	3.16	5.37	9.20	12.13
50	20	1.65	1.24	0.95	3.68	0.00	8.17	0.03	0.13	0.67	1.40	2.39	4.05	5.26
50	30	5.24	3.90	0.94	3.69	0.00	28.36	0.08	0.42	2.12	4.47	7.56	12.74	16.54
50	40	0.47	0.35	0.97	3.73	0.00	2.40	0.01	0.04	0.19	0.40	0.67	1.15	1.49
50	50	0.23	0.17	0.94	3.64	0.00	1.21	0.00	0.02	0.09	0.20	0.33	0.56	0.73
With Zero Diagonal
10	10	0.71	0.52	0.85	3.37	0.00	3.20	0.01	0.06	0.29	0.62	1.04	1.71	2.20
10	20	0.69	0.51	0.92	3.60	0.00	3.54	0.01	0.05	0.28	0.59	1.00	1.67	2.16
10	30	1.93	1.44	0.95	3.74	0.00	10.17	0.03	0.15	0.78	1.64	2.77	4.69	6.13
10	40	4.65	3.50	0.97	3.76	0.00	23.68	0.07	0.37	1.86	3.92	6.72	11.43	14.89
10	50	1.94	1.47	0.98	3.84	0.00	12.42	0.03	0.15	0.77	1.64	2.81	4.75	6.25
20	10	0.70	0.52	0.92	3.59	0.00	3.70	0.01	0.06	0.28	0.60	1.02	1.70	2.20
20	20	0.70	0.52	0.93	3.57	0.00	3.68	0.01	0.05	0.28	0.60	1.01	1.71	2.21
20	30	0.42	0.31	0.94	3.68	0.00	2.24	0.01	0.03	0.17	0.36	0.61	1.02	1.34
20	40	0.27	0.20	0.94	3.64	0.00	1.27	0.00	0.02	0.11	0.23	0.39	0.65	0.85
20	50	2.06	1.55	0.97	3.81	0.00	11.72	0.03	0.16	0.83	1.75	2.98	5.04	6.64
30	10	0.45	0.34	0.95	3.67	0.00	2.40	0.01	0.04	0.18	0.38	0.65	1.10	1.43
30	20	0.42	0.32	0.95	3.71	0.00	2.16	0.01	0.03	0.17	0.36	0.61	1.03	1.34
30	30	0.45	0.34	0.96	3.76	0.00	2.53	0.01	0.03	0.18	0.38	0.65	1.10	1.43
30	40	0.34	0.26	0.97	3.73	0.00	1.88	0.01	0.03	0.14	0.29	0.50	0.85	1.10
30	50	0.57	0.43	0.96	3.75	0.00	3.21	0.01	0.05	0.23	0.48	0.82	1.39	1.81
40	10	0.34	0.26	0.96	3.76	0.00	1.83	0.01	0.03	0.14	0.29	0.49	0.84	1.08
40	20	0.33	0.25	0.98	3.77	0.00	1.68	0.01	0.03	0.13	0.28	0.47	0.80	1.05
40	30	0.63	0.47	0.97	3.80	0.00	3.36	0.01	0.05	0.25	0.53	0.90	1.52	1.99
40	40	0.38	0.28	0.98	3.82	0.00	2.23	0.01	0.03	0.15	0.32	0.54	0.92	1.20
40	50	0.30	0.23	0.97	3.78	0.00	1.76	0.00	0.02	0.12	0.26	0.44	0.75	0.98
50	10	0.61	0.46	0.97	3.74	0.00	3.16	0.01	0.05	0.24	0.52	0.88	1.48	1.95
50	20	0.17	0.13	0.97	3.76	0.00	0.87	0.00	0.01	0.07	0.14	0.24	0.41	0.54
50	30	0.32	0.24	0.97	3.82	0.00	2.10	0.00	0.03	0.13	0.28	0.47	0.80	1.03
50	40	0.61	0.46	0.95	3.67	0.00	3.04	0.01	0.05	0.24	0.52	0.89	1.49	1.95
50	50	0.68	0.51	0.96	3.70	0.00	3.76	0.01	0.05	0.27	0.58	0.98	1.66	2.16
Source: self-prepared.