# A Cubic Class of Iterative Procedures for Finding the Generalized Inverses


## Abstract


## 1. Introduction

**Definition 1.**

## 2. Iterative Scheme

**Lemma 1.**

**Proof.**

## 3. Convergence Behavior

**Theorem 1.**

**Proof.**

**Theorem 2.**

**Proof.**

## 4. Variants of the New Family (4)

## 5. Numerical Testing

All numerical computations were performed in `Mathematica` software [54], version 11. In the resulting tables, an expression of the form $a(\pm b)$ denotes $a\times {10}^{\pm b}$.
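For orientation, the baseline second-order Schulz iteration **SM** [30], against which the third- and fourth-order methods in the tables below are compared, can be sketched in a few lines. The paper's experiments use Mathematica; the following NumPy sketch (function name, tolerance, and iteration cap are ours) uses the standard starting guess $X_0 = M^{*}/(\|M\|_1\|M\|_\infty)$:

```python
import numpy as np

def schulz_inverse(M, tol=1e-12, max_iter=200):
    """Classical second-order Schulz iteration X <- X(2I - MX) [30].

    Converges to the Moore-Penrose inverse M^+ from the standard
    starting guess X0 = M*/(||M||_1 ||M||_inf)."""
    m = M.shape[0]
    X = M.conj().T / (np.linalg.norm(M, 1) * np.linalg.norm(M, np.inf))
    I = np.eye(m)
    for k in range(1, max_iter + 1):
        X_new = X @ (2 * I - M @ X)
        if np.linalg.norm(X_new - X, np.inf) < tol:
            return X_new, k
        X = X_new
    return X, max_iter
```

The same starting guess is reused by the higher-order schemes; only the polynomial applied per step changes.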

**Example 1.**

`SeedRandom[123];`

`M=RandomReal[{-2, 2}, {100, 101}];`

The numerical results show that the proposed methods **NM1** and **NM2** outperformed the existing third-order iterative approaches **CM**, **MP**, and **HM** in terms of providing highly accurate solutions. Furthermore, the new methods achieved the required level of accuracy in a comparatively shorter amount of time.
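For readers who want to reproduce the flavor of this experiment outside Mathematica, the following NumPy sketch mirrors the setup (the seed and random entries necessarily differ from `SeedRandom[123]`). It uses the classical cubic Chebyshev-type iteration as a stand-in, since the proposed schemes **NM1** (18) and **NM2** (19) are defined in the paper and not reproduced in this excerpt; we also assume the reported $e_1,\dots,e_4$ are residual norms of the four Penrose conditions:

```python
import numpy as np

# NumPy analogue of the Example 1 setup; the paper uses Mathematica's
# SeedRandom[123] and RandomReal, so the seed and entries here differ.
rng = np.random.default_rng(123)
M = rng.uniform(-2, 2, size=(100, 101))

# Cubic Chebyshev-type step X <- X(3I - MX(3I - MX)), i.e. method CM [51],
# used as a stand-in for the proposed NM1/NM2 schemes.
X = M.T / (np.linalg.norm(M, 1) * np.linalg.norm(M, np.inf))
I = np.eye(100)
for _ in range(40):
    MX = M @ X
    X = X @ (3 * I - MX @ (3 * I - MX))

# Residual norms of the four Penrose conditions (assumed to be the
# quantities behind the e1..e4 columns of the tables).
e1 = np.linalg.norm(M @ X @ M - M)
e2 = np.linalg.norm(X @ M @ X - X)
e3 = np.linalg.norm((M @ X).T - M @ X)
e4 = np.linalg.norm((X @ M).T - X @ M)
print(e1, e2, e3, e4)
```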

**Example 2.**

`SeedRandom[123];`

`M=RandomReal[{-2+I, 2+I}, {100, 101}];`

The **CM** method required more iterations than the other third-order methods to achieve the enforced exactness of the solution. On the other hand, the **NM1** method exhibited superior performance in terms of iterations, accuracy, and time compared to the other third-order techniques.
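For complex matrices such as the one above, the iterations carry over verbatim provided the conjugate transpose $M^{*}$ (not the plain transpose) is used in the starting guess and in the Penrose conditions. A small NumPy sketch with assumed test data (not the paper's Mathematica-generated matrix):

```python
import numpy as np

# Complex analogue of the Example 2 setup; test data is our assumption.
rng = np.random.default_rng(123)
M = rng.uniform(-2, 2, (30, 31)) + 1j * rng.uniform(-1, 1, (30, 31))

# Conjugate transpose in the starting guess is essential here.
X = M.conj().T / (np.linalg.norm(M, 1) * np.linalg.norm(M, np.inf))
I = np.eye(30)
for _ in range(40):
    MX = M @ X
    X = X @ (3 * I - MX @ (3 * I - MX))  # cubic Chebyshev step [51]

print(np.linalg.norm(M @ X @ M - M))
```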

**Example 3.**

To assess the applicability of the proposed methods (**NM1** and **NM2**) for solving PDE (21), we examined the resulting linear system (22). The numerical outcomes were calculated using the coefficient matrices with a tolerance of $\tau ={10}^{-50}$ and are displayed in Table 3. The experimental data revealed that the proposed scheme provided superior results in comparison to the existing methods of the same order for each considered parametric value. In addition, the final approximate values of U up to four decimal places obtained using the method **NM1** were equal to $[0.2802,0.5329,0.7335,0.8623,0.9067,0.8623,0.7335,0.5329,0.2802,0.2540,0.4832,0.6651,0.7818,0.8221,0.7818,0.6651,0.4832,0.2540,0.2303,0.4381,0.6030,0.7089,0.7453,0.7089,0.6030,0.4381,0.2303,0.2088,0.3972,0.5467,0.6427,0.6758,0.6427,0.5467,0.3972,0.2088,0.1893,0.3602,0.4957,0.5827,0.6127,0.5827,0.4957,0.3602,0.1893,0.1717,0.3265,0.4494,0.5284,0.5556,0.5284,0.4494,0.3265,0.1717,0.1557,0.2961,0.4075,0.4791,0.5037,0.4791,0.4075,0.2961,0.1557,0.1411,0.2684,0.3695,0.4344,0.4567,0.4345,0.3695,0.2684,0.1411,0.1280,0.2434,0.3350,0.3938,0.4141,0.3938,0.3350,0.2434,0.1280,0.1160,0.2207,0.3037,0.3571,0.3754,0.3571,0.3037,0.2207,0.1160]$. Overall, we can conclude that the developed scheme can be used as a better alternative to the existing cubic-convergent iterative methods.
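The coefficient matrix of system (22) is not reproduced in this excerpt, but the overall workflow (discretize the PDE, then apply the iterative inverse to the resulting system) can be sketched with a stand-in matrix. Here we assume a standard 5-point Laplacian on a $9\times 9$ interior grid purely for illustration:

```python
import numpy as np

# Stand-in coefficient matrix: standard 5-point Laplacian (an assumption;
# the actual matrix of system (22) is defined in the paper).
n = 9
T = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A = np.kron(np.eye(n), T) - np.kron(np.eye(n, k=1) + np.eye(n, k=-1), np.eye(n))
b = np.ones(n * n)

# Compute an approximate inverse iteratively, then solve A u = b.
X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
I = np.eye(n * n)
for _ in range(40):
    AX = A @ X
    X = X @ (3 * I - AX @ (3 * I - AX))  # cubic step, as in CM [51]

u = X @ b
print(np.linalg.norm(A @ u - b))
```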

The **CM**, **MP**, **HM**, **NM1**, and **NM2** approaches reached the convergence phase after 12, 11, 11, 10, and 10 iterations, respectively. Figure 1b,c further demonstrate that the developed iterative procedure achieved the theoretical convergence order relatively earlier than the others. Furthermore, as evidenced by the data presented in Table 1, Table 2 and Table 3, the performance of **NM1** and **NM2** was superior in terms of both convergence phase and solution accuracy compared to the **CM**, **MP**, and **HM** approaches in each of the considered examples.
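The computational convergence order $\rho$ reported in the tables can be estimated from three consecutive difference norms. The exact formula the authors use is not shown in this excerpt, so the standard three-term estimate below is an assumption on our part:

```python
import math

def coc(d):
    """Computational order of convergence from successive difference
    norms d[k] = ||X_{k+1} - X_k||, via the standard three-term estimate
    rho_k = ln(d[k+1]/d[k]) / ln(d[k]/d[k-1])."""
    return [math.log(d[k + 1] / d[k]) / math.log(d[k] / d[k - 1])
            for k in range(1, len(d) - 1)]

# Norms shrinking cubically (each roughly the cube of the previous)
# give estimates close to 3:
print(coc([1e-1, 1e-3, 1e-9, 1e-27]))
```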

**Example 4.**

**Example 5.**

In this comparison, **NM1** and **HP4** exhibited the best performances in each aspect. Undoubtedly, the **HP4** method demonstrated equivalent or, in some cases, superior results compared to **NM1** and **NM2**. However, in comparison with the other cubic-order convergence methods, the new methods demonstrated more favorable outcomes.
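**HP4** attains fourth order. The generic way to raise the order of Schulz-type schemes is the hyperpower family, sketched below in NumPy; whether **HP4** has exactly this form is an assumption on our part:

```python
import numpy as np

def hyperpower_step(M, X, p):
    """One step of the order-p hyperpower iteration
    X <- X (I + R + R^2 + ... + R^{p-1}),  R = I - M X.
    p = 2 is the Schulz method SM [30]; p = 3 covers the cubic class;
    p = 4 matches the order of HP4 [53] (form assumed, not quoted)."""
    m = M.shape[0]
    R = np.eye(m) - M @ X
    S = np.eye(m)
    P = np.eye(m)
    for _ in range(p - 1):
        P = P @ R   # accumulate R^i
        S = S + P
    return X @ S

# Fourth-order run on a small random rectangular matrix (test data ours):
M = np.random.default_rng(1).standard_normal((6, 8))
X = M.T / (np.linalg.norm(M, 1) * np.linalg.norm(M, np.inf))
for _ in range(15):
    X = hyperpower_step(M, X, 4)
print(np.linalg.norm(M @ X @ M - M))
```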

#### Study of Different Parametric Values

**Example 6.**

- The graph of the number of iterations shows that, as the value of $\beta $ increased, the number of iterations did not necessarily decrease. In fact, it can be observed that the presented scheme used fewer iterations for values of $\beta $ close to one compared to values of $\beta $ close to zero, indicating that the scheme converged faster for higher values of $\beta $.
- To achieve a more precise matrix inverse, the maximum error norm should be lower. However, it was observed that the scheme (4), which resulted in fewer iterations as depicted in Figure 4 and Figure 6, corresponded to a higher error norm. Nevertheless, when the accuracy of the solutions obtained from each iterative method was evaluated for a particular iteration, it was found that the scheme with $\beta $ that required fewer iterations yielded a comparatively more accurate and precise matrix inverse.
- On the other hand, the same trend did not necessarily hold for the computational time, due to fluctuations. For example, for matrix $M1$ of (25), the computation time with $\beta $ close to one was comparatively less than that with $\beta $ near zero. However, such behavior of $\beta $ was not observed for the matrix $M2$ in (26). Therefore, the computational time varied depending on the characteristics of the matrices used.

## 6. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References

1. Stanimirović, P.S.; Chountasis, S.; Pappas, D.; Stojanović, I. Removal of blur in images based on least squares solutions. Math. Methods Appl. Sci. **2013**, 36, 2280–2296.
2. Meister, S.; Stockburger, J.T.; Schmidt, R.; Ankerhold, J. Optimal control theory with arbitrary superpositions of waveforms. J. Phys. A Math. Theor. **2014**, 47, 495002.
3. Wang, J.Z.; Williamson, S.J.; Kaufman, L. Magnetic source imaging based on the minimum-norm least-squares inverse. Brain Topogr. **1993**, 5, 365–371.
4. Lu, S.; Wang, X.; Zhang, G.; Zhou, X. Effective algorithms of the Moore–Penrose inverse matrices for extreme learning machine. Intell. Data Anal. **2015**, 19, 743–760.
5. Pavlíková, S.; Ševčovič, D. On the Moore–Penrose pseudo-inversion of block symmetric matrices and its application in the graph theory. Linear Algebra Appl. **2023**, 673, 280–303.
6. Feliks, T.; Hunek, W.P.; Stanimirović, P.S. Application of generalized inverses in the minimum-energy perfect control theory. IEEE Trans. Syst. Man Cybern. Syst. **2023**, 53, 4560–4575.
7. Doty, K.L.; Melchiorri, C.; Bonivento, C. A theory of generalized inverses applied to robotics. Int. J. Robot. Res. **1993**, 12, 1–19.
8. Soleimani, F.; Stanimirović, P.S.; Soleymani, F. Some matrix iterations for computing generalized inverses and balancing chemical equations. Algorithms **2015**, 8, 982–998.
9. Moore, E.H. On the reciprocal of the general algebraic matrix. Bull. Am. Math. Soc. **1920**, 26, 394–395.
10. Penrose, R. A generalized inverse for matrices. Math. Proc. Camb. Philos. Soc. **1955**, 51, 406–413.
11. Rado, R. Note on generalized inverses of matrices. Math. Proc. Camb. Philos. Soc. **1956**, 52, 600–601.
12. Ben-Israel, A. Generalized inverses of matrices: A perspective of the work of Penrose. Math. Proc. Camb. Philos. Soc. **1986**, 100, 407–425.
13. Lee, M.; Kim, D. On the use of the Moore–Penrose generalized inverse in the portfolio optimization problem. Finance Res. Lett. **2017**, 22, 259–267.
14. Kučera, R.; Kozubek, T.; Markopoulos, A.; Machalová, J. On the Moore–Penrose inverse in solving saddle-point systems with singular diagonal blocks. Numer. Linear Algebra Appl. **2012**, 19, 677–699.
15. Kyrchei, I. Weighted singular value decomposition and determinantal representations of the quaternion weighted Moore–Penrose inverse. Appl. Math. Comput. **2017**, 309, 1–16.
16. Long, J.; Peng, Y.; Zhou, T.; Zhao, L.; Li, J. Fast and Stable Hyperspectral Multispectral Image Fusion Technique Using Moore–Penrose Inverse Solver. Appl. Sci. **2021**, 11, 7365.
17. Zhuang, G.; Xia, J.; Feng, J.E.; Wang, Y.; Chen, G. Dynamic compensator design and $H_\infty$ admissibilization for delayed singular jump systems via Moore–Penrose generalized inversion technique. Nonlinear Anal. Hybrid Syst. **2023**, 49, 101361.
18. Zhang, W.; Wu, Q.M.J.; Yang, Y.; Akilan, T. Multimodel Feature Reinforcement Framework Using Moore–Penrose Inverse for Big Data Analysis. IEEE Trans. Neural Netw. Learn. Syst. **2021**, 32, 5008–5021.
19. Castaño, A.; Fernández-Navarro, F.; Hervás-Martínez, C. PCA-ELM: A Robust and Pruned Extreme Learning Machine Approach Based on Principal Component Analysis. Neural Process. Lett. **2013**, 37, 377–392.
20. Lauren, P.; Qu, G.; Zhang, F.; Lendasse, A. Discriminant document embeddings with an extreme learning machine for classifying clinical narratives. Neurocomputing **2018**, 277, 129–138.
21. Koliha, J.; Djordjević, D.; Cvetković, D. Moore–Penrose inverse in rings with involution. Linear Algebra Appl. **2007**, 426, 371–381.
22. Jäntschi, L. The Eigenproblem Translated for Alignment of Molecules. Symmetry **2019**, 11, 1027.
23. Baksalary, O.M.; Trenkler, G. The Moore–Penrose inverse: A hundred years on a frontline of physics research. Eur. Phys. J. H **2021**, 46, 9.
24. Hornick, M.; Tamayo, P. Extending Recommender Systems for Disjoint User/Item Sets: The Conference Recommendation Problem. IEEE Trans. Knowl. Data Eng. **2012**, 24, 1478–1490.
25. Chatterjee, S.; Thakur, R.S.; Yadav, R.N.; Gupta, L. Sparsity-based modified wavelet de-noising autoencoder for ECG signals. Signal Process. **2022**, 198, 108605.
26. Shinozaki, N.; Sibuya, M.; Tanabe, K. Numerical algorithms for the Moore–Penrose inverse of a matrix: Direct methods. Ann. Inst. Stat. Math. **1972**, 24, 193–203.
27. Katsikis, V.N.; Pappas, D.; Petralias, A. An improved method for the computation of the Moore–Penrose inverse matrix. Appl. Math. Comput. **2011**, 217, 9828–9834.
28. Stanimirović, I.P.; Tasić, M.B. Computation of generalized inverses by using the LDL$^{*}$ decomposition. Appl. Math. Lett. **2012**, 25, 526–531.
29. Stanimirović, P.S.; Petković, M.D. Gauss–Jordan elimination method for computing outer inverses. Appl. Math. Comput. **2013**, 219, 4667–4679.
30. Schultz, G. Iterative Berechnung der reziproken Matrix. ZAMM Z. Angew. Math. Mech. **1933**, 13, 57–59.
31. Ben-Israel, A.; Greville, T.N. Generalized Inverses: Theory and Applications; Springer: New York, NY, USA, 2003.
32. Pan, V.; Schreiber, R. An improved Newton iteration for the generalized inverse of a matrix, with applications. SIAM J. Sci. Statist. Comput. **1991**, 12, 1109–1130.
33. Kaur, M.; Kansal, M.; Kumar, S. An Efficient Matrix Iterative Method for Computing Moore–Penrose Inverse. Mediterr. J. Math. **2021**, 18, 1–21.
34. Li, W.; Li, Z. A family of iterative methods for computing the approximate inverse of a square matrix and inner inverse of a non-square matrix. Appl. Math. Comput. **2010**, 215, 3433–3442.
35. Petković, M.D. Generalized Schultz iterative methods for the computation of outer inverses. Comput. Math. Appl. **2014**, 67, 1837–1847.
36. Stanimirović, P.S.; Soleymani, F. A class of numerical algorithms for computing outer inverses. J. Comput. Appl. Math. **2014**, 263, 236–245.
37. Liu, X.; Jin, H.; Yu, Y. Higher-order convergent iterative method for computing the generalized inverse and its application to Toeplitz matrices. Linear Algebra Appl. **2013**, 439, 1635–1650.
38. Soleymani, F.; Stanimirović, P.S. A higher order iterative method for computing the Drazin inverse. Sci. World J. **2013**, 2013, 708647.
39. Soleymani, F.; Stanimirović, P.S.; Ullah, M.Z. An accelerated iterative method for computing weighted Moore–Penrose inverse. Appl. Math. Comput. **2013**, 222, 365–371.
40. Sun, L.; Zheng, B.; Bu, C.; Wei, Y. Moore–Penrose inverse of tensors via Einstein product. Linear Multilinear Algebra **2016**, 64, 686–698.
41. Ma, H.; Li, N.; Stanimirović, P.S.; Katsikis, V.N. Perturbation theory for Moore–Penrose inverse of tensor via Einstein product. Comput. Appl. Math. **2019**, 38, 111.
42. Liang, M.; Zheng, B. Further results on Moore–Penrose inverses of tensors with application to tensor nearness problems. Comput. Math. Appl. **2019**, 77, 1282–1293.
43. Huang, B. Numerical study on Moore–Penrose inverse of tensors via Einstein product. Numer. Algorithms **2021**, 87, 1767–1797.
44. Zhang, Y.; Yang, Y.; Tan, N.; Cai, B. Zhang neural network solving for time-varying full-rank matrix Moore–Penrose inverse. Computing **2011**, 92, 97–121.
45. Wu, W.; Zheng, B. Improved recurrent neural networks for solving Moore–Penrose inverse of real-time full-rank matrix. Neurocomputing **2020**, 418, 221–231.
46. Miao, J.M. General expressions for the Moore–Penrose inverse of a 2 × 2 block matrix. Linear Algebra Appl. **1991**, 151, 1–15.
47. Kyrchei, I.I. Determinantal representations of the Moore–Penrose inverse over the quaternion skew field and corresponding Cramer's rules. Linear Multilinear Algebra **2011**, 59, 413–431.
48. Wojtyra, M.; Pekal, M.; Fraczek, J. Utilization of the Moore–Penrose inverse in the modeling of overconstrained mechanisms with frictionless and frictional joints. Mech. Mach. Theory **2020**, 153, 103999.
49. Zhuang, H.; Lin, Z.; Toh, K.A. Blockwise Recursive Moore–Penrose Inverse for Network Learning. IEEE Trans. Syst. Man Cybern. Syst. **2022**, 52, 3237–3250.
50. Sharifi, M.; Arab, M.; Haghani, F.K. Finding generalized inverses by a fast and efficient numerical method. J. Comput. Appl. Math. **2015**, 279, 187–191.
51. Li, H.B.; Huang, T.Z.; Zhang, Y.; Liu, X.P.; Gu, T.X. Chebyshev-type methods and preconditioning techniques. Appl. Math. Comput. **2011**, 218, 260–270.
52. Chun, C. A geometric construction of iterative functions of order three to solve nonlinear equations. Comput. Math. Appl. **2007**, 53, 972–976.
53. Altman, M. An optimum cubically convergent iterative method of inverting a linear bounded operator in Hilbert space. Pac. J. Math. **1960**, 10, 1107–1113.
54. Trott, M. The Mathematica Guidebook for Programming; Springer: New York, NY, USA, 2013.
55. Matrix Market. Available online: https://math.nist.gov/MatrixMarket (accessed on 24 April 2023).

**Figure 1.** Iterations (i) versus computational convergence order ($\rho $). (**a**) Example 1. (**b**) Example 2. (**c**) Example 3.

**Figure 4.** Number of iterations and corresponding maximum error norm ${e}_{max}$ attained by the proposed scheme for different parametric values $\beta $ for the matrix defined in (25).

**Figure 5.** Computational time used by the proposed scheme for different parametric values $\beta $ for the matrix defined in (25).

**Figure 6.** Number of iterations and corresponding maximum error norms ${e}_{max}$ attained by the proposed scheme for different parametric values $\beta $ for the matrix defined in (26).

**Figure 7.** Computational time used by the proposed scheme for different parametric values $\beta $ for the matrix defined in (26).

**Table 1.** Comparison of the iterative methods for Example 1.

Method | i | ${\mathit{e}}_{1}$ | ${\mathit{e}}_{2}$ | ${\mathit{e}}_{3}$ | ${\mathit{e}}_{4}$ | $\mathit{\rho}$ | Time |
---|---|---|---|---|---|---|---|
SM [30] | 21 | $1.6(-55)$ | $5.6(-54)$ | 0 | 0 | 2.0001 | 277.203 |
CM [51] | 14 | $1.1(-124)$ | $3.7(-123)$ | 0 | 0 | 3.0000 | 194.312 |
MP [51] | 13 | $3.4(-88)$ | $1.1(-86)$ | 0 | 0 | 3.0000 | 194.325 |
HM [51] | 12 | $8.7(-57)$ | $3.0(-55)$ | 0 | 0 | 3.0000 | 179.031 |
NM1 (18) | 12 | $1.4(-147)$ | $4.7(-146)$ | 0 | 0 | 3.0000 | 174.985 |
NM2 (19) | 12 | $1.5(-115)$ | $1.9(-113)$ | 0 | 0 | 3.0000 | 171.470 |
HP4 [53] | 11 | $1.6(-109)$ | $5.3(-108)$ | 0 | 0 | 4.0000 | 155.702 |

**Table 2.** Comparison of the iterative methods for Example 2.

Method | i | ${\mathit{e}}_{1}$ | ${\mathit{e}}_{2}$ | ${\mathit{e}}_{3}$ | ${\mathit{e}}_{4}$ | $\mathit{\rho}$ | Time |
---|---|---|---|---|---|---|---|
SM [30] | 25 | $7.4(-90)$ | $1.2(-88)$ | 0 | 0 | 2.0000 | 1644.86 |
CM [51] | 16 | $6.6(-115)$ | $1.1(-113)$ | 0 | 0 | 3.0000 | 1680.99 |
MP [51] | 15 | $1.2(-94)$ | $1.9(-93)$ | 0 | 0 | 3.0000 | 1153.19 |
HM [51] | 14 | $7.8(-69)$ | $1.2(-67)$ | 0 | 0 | 3.0000 | 941.094 |
NM1 (18) | 13 | $1.4(-70)$ | $4.7(-69)$ | 0 | 0 | 3.0000 | 793.515 |
NM2 (19) | 13 | $2.6(-53)$ | $4.2(-52)$ | 0 | 0 | 3.0000 | 817.546 |
HP4 [53] | 13 | $2.2(-178)$ | $3.4(-177)$ | 0 | 0 | 4.0000 | 831.734 |

**Table 3.** Comparison of the iterative methods for Example 3.

Method | i | ${\mathit{e}}_{1}$ | ${\mathit{e}}_{2}$ | ${\mathit{e}}_{3}$ | ${\mathit{e}}_{4}$ | $\mathit{\rho}$ | Time |
---|---|---|---|---|---|---|---|
SM [30] | 16 | $1.7(-90)$ | $9.2(-90)$ | 0 | 0 | 2.0000 | 128.719 |
CM [51] | 10 | $1.2(-81)$ | $6.5(-81)$ | 0 | 0 | 3.0068 | 63.071 |
MP [51] | 10 | $5.3(-131)$ | $2.8(-130)$ | 0 | 0 | 3.0015 | 66.297 |
HM [51] | 9 | $1.1(-67)$ | $6.1(-67)$ | 0 | 0 | 3.0671 | 109.813 |
NM1 (18) | 9 | $2.4(-134)$ | $1.3(-133)$ | 0 | 0 | 3.0589 | 54.781 |
NM2 (19) | 9 | $2.7(-111)$ | $1.5(-110)$ | 0 | 0 | 3.0493 | 61.610 |
HP4 [53] | 8 | $1.7(-90)$ | $9.2(-90)$ | 0 | 0 | 4.1557 | 54.875 |

**Table 4.** Information on the matrices considered from the Matrix Market Library [55].

${\mathit{M}}_{\#}$ | Name of Problem | Description |
---|---|---|
${M}_{1}$ | 1138 BUS | Order: $(1138,1138)$, rank = 1138, condition number (est.): $1(+2)$ |
${M}_{2}$ | YOUNG1C | Order: $(841,841)$, rank = 841, condition number (est.): $2.9(+2)$ |
${M}_{3}$ | BP_600 | Order: $(822,822)$, rank = 822, condition number (est.): $5.1(+6)$ |
${M}_{4}$ | ILLC1850 | Order: $(1850,712)$, rank = 712 |
${M}_{5}$ | WM3 | Order: $(207,260)$, rank = 207 |
${M}_{6}$ | BEAUSE | Order: $(497,507)$, rank = 459 |

**Table 5.** Performance of iterative methods for the different matrices defined in Table 4.

${\mathit{M}}_{\#}$ | Method | i | ${\mathit{e}}_{1}$ | ${\mathit{e}}_{2}$ | ${\mathit{e}}_{3}$ | ${\mathit{e}}_{4}$ | Time |
---|---|---|---|---|---|---|---|
${M}_{1}$ | SM [30] | 51 | $6.5(-7)$ | $4.1(-10)$ | $1.2(-10)$ | $1.3(-6)$ | 113.688 |
${M}_{1}$ | CM [51] | 32 | $6.0(-7)$ | $3.3(-9)$ | $1.3(-10)$ | $1.6(-6)$ | 71.687 |
${M}_{1}$ | MP [51] | 30 | $5.3(-7)$ | $9.7(-10)$ | $1.2(-10)$ | $1.3(-6)$ | 68.750 |
${M}_{1}$ | HM [51] | 28 | $7.8(-7)$ | $1.6(-6)$ | $1.6(-10)$ | $1.6(-6)$ | 63.297 |
${M}_{1}$ | NM1 (18) | 26 | $7.5(-7)$ | $2.4(-9)$ | $1.7(-10)$ | $1.3(-6)$ | 51.437 |
${M}_{1}$ | NM2 (19) | 27 | $5.2(-7)$ | $4.2(-10)$ | $1.2(-10)$ | $1.6(-6)$ | 53.750 |
${M}_{1}$ | HP4 [53] | 26 | $5.3(-7)$ | $4.3(-10)$ | $1.2(-10)$ | $1.4(-6)$ | 42.548 |
${M}_{2}$ | SM [30] | 21 | $5.8(-6)$ | $4.5(-6)$ | $3.7(-14)$ | $2.7(-13)$ | 47.203 |
${M}_{2}$ | CM [51] | 14 | $9.7(-12)$ | $7.7(-13)$ | $3.5(-14)$ | $2.9(-13)$ | 34.015 |
${M}_{2}$ | MP [51] | 13 | $1.5(-10)$ | $1.2(-10)$ | $3.7(-10)$ | $2.7(-13)$ | 31.344 |
${M}_{2}$ | HM [51] | 12 | $9.6(-8)$ | $7.4(-8)$ | $4.6(-14)$ | $2.5(-13)$ | 28.749 |
${M}_{2}$ | NM1 (18) | 11 | $1.2(-7)$ | $9.4(-8)$ | $3.6(-14)$ | $3.4(-13)$ | 21.016 |
${M}_{2}$ | NM2 (19) | 11 | $6.3(-6)$ | $4.9(-6)$ | $3.8(-14)$ | $2.9(-13)$ | 21.781 |
${M}_{2}$ | HP4 [53] | 11 | $3.0(-11)$ | $2.3(-11)$ | $3.8(-14)$ | $3.6(-13)$ | 20.843 |
${M}_{3}$ | SM [30] | 46 | $8.4(-12)$ | $8.8(-11)$ | $1.2(-12)$ | $3.3(-11)$ | 131.891 |
${M}_{3}$ | CM [51] | 29 | $1.3(-11)$ | $1.9(-10)$ | $1.1(-12)$ | $2.3(-11)$ | 57.125 |
${M}_{3}$ | MP [51] | 27 | $1.6(-11)$ | $3.3(-8)$ | $1.6(-12)$ | $2.0(-11)$ | 49.343 |
${M}_{3}$ | HM [51] | 26 | $1.8(-11)$ | $1.9(-12)$ | $1.3(-12)$ | $1.9(-10)$ | 48.515 |
${M}_{3}$ | NM1 (18) | 24 | $1.7(-11)$ | $4.1(-12)$ | $1.6(-12)$ | $4.2(-11)$ | 25.845 |
${M}_{3}$ | NM2 (19) | 24 | $1.9(-11)$ | $2.8(-9)$ | $1.5(-12)$ | $2.0(-10)$ | 25.844 |
${M}_{3}$ | HP4 [53] | 23 | $1.5(-11)$ | $9.0(-11)$ | $1.4(-12)$ | $4.3(-11)$ | 26.984 |
${M}_{4}$ | SM [30] | 26 | $3.0(-13)$ | $6.3(-12)$ | $8.7(-13)$ | $3.9(-12)$ | 117.156 |
${M}_{4}$ | CM [51] | 16 | $5.1(-13)$ | $2.2(-7)$ | $7.1(-13)$ | $2.7(-12)$ | 67.656 |
${M}_{4}$ | MP [51] | 15 | $1.1(-12)$ | $4.6(-7)$ | $7.1(-13)$ | $2.9(-12)$ | 66.499 |
${M}_{4}$ | HM [51] | 15 | $3.0(-13)$ | $1.3(-11)$ | $9.9(-13)$ | $2.7(-12)$ | 66.439 |
${M}_{4}$ | NM1 (18) | 13 | $2.0(-12)$ | $8.7(-7)$ | $7.4(-13)$ | $3.0(-12)$ | 54.313 |
${M}_{4}$ | NM2 (19) | 14 | $3.4(-13)$ | $4.3(-12)$ | $8.1(-13)$ | $4.2(-12)$ | 56.781 |
${M}_{4}$ | HP4 [53] | 13 | $6.1(-13)$ | $1.0(-11)$ | $9.3(-13)$ | $2.4(-12)$ | 55.281 |
${M}_{5}$ | SM [30] | 27 | $4.1(-14)$ | $3.3(-11)$ | $1.2(-13)$ | $7.6(-14)$ | 3.009 |
${M}_{5}$ | CM [51] | 17 | $4.5(-14)$ | $9.8(-11)$ | $4.7(-14)$ | $8.7(-14)$ | 2.094 |
${M}_{5}$ | MP [51] | 16 | $4.8(-14)$ | $4.8(-11)$ | $7.0(-14)$ | $6.8(-14)$ | 2.048 |
${M}_{5}$ | HM [51] | 15 | $1.3(-13)$ | $2.4(-9)$ | $6.0(-14)$ | $7.1(-14)$ | 1.922 |
${M}_{5}$ | NM1 (18) | 14 | $5.0(-14)$ | $2.5(-12)$ | $5.5(-14)$ | $9.4(-14)$ | 1.890 |
${M}_{5}$ | NM2 (19) | 14 | $1.3(-12)$ | $2.4(-8)$ | $8.1(-14)$ | $1.2(-13)$ | 1.891 |
${M}_{5}$ | HP4 [53] | 14 | $3.9(-14)$ | $1.8(-13)$ | $7.1(-14)$ | $8.5(-14)$ | 1.806 |
${M}_{6}$ | SM [30] | 36 | $2.1(-12)$ | $2.0(-9)$ | $1.7(-12)$ | $1.1(-12)$ | 18.656 |
${M}_{6}$ | CM [51] | 23 | $1.9(-12)$ | $2.0(-19)$ | $1.8(-12)$ | $1.0(-12)$ | 12.688 |
${M}_{6}$ | MP [51] | 21 | $2.6(-12)$ | $2.4(-7)$ | $1.8(-12)$ | $9.9(-13)$ | 12.094 |
${M}_{6}$ | HM [51] | 20 | $2.1(-12)$ | $2.0(-9)$ | $2.2(-12)$ | $1.1(-12)$ | 12.094 |
${M}_{6}$ | NM1 (18) | 19 | $3.2(-12)$ | $3.9(-9)$ | $1.7(-12)$ | $1.2(-12)$ | 11.652 |
${M}_{6}$ | NM2 (19) | 19 | $1.7(-12)$ | $1.8(-9)$ | $1.7(-12)$ | $1.7(-12)$ | 11.922 |
${M}_{6}$ | HP4 [53] | 18 | $1.7(-12)$ | $1.3(-9)$ | $1.2(-12)$ | $1.2(-12)$ | 10.982 |

**Table 6.** Comparison of the iterative methods.

Method | i | ${\mathit{e}}_{1}$ | ${\mathit{e}}_{2}$ | ${\mathit{e}}_{3}$ | ${\mathit{e}}_{4}$ | $\mathit{\rho}$ | Time |
---|---|---|---|---|---|---|---|
SM [30] | 12 | $2.1(-1497)$ | $1.1(-1498)$ | 0 | 0 | 2.0000 | 0.562 |
CM [51] | 8 | $1.7(-2398)$ | $8.9(-2400)$ | 0 | 0 | 3.0000 | 0.500 |
MP [51] | 8 | $1.2(-2673)$ | $6.5(-2675)$ | 0 | 0 | 3.0000 | 0.548 |
HM [51] | 7 | $3.0(-1009)$ | $1.6(-1010)$ | 0 | 0 | 3.0000 | 0.470 |
NM1 (18) | 7 | $5.3(-1359)$ | $2.8(-1360)$ | 0 | 0 | 3.0000 | 0.453 |
NM2 (19) | 7 | $2.6(-1229)$ | $1.4(-1230)$ | 0 | 0 | 3.0000 | 0.480 |
HP4 [53] | 6 | $2.1(-1497)$ | $1.1(-1498)$ | 0 | 0 | 4.0000 | 0.484 |

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Kansal, M.; Kaur, M.; Rani, L.; Jäntschi, L.
A Cubic Class of Iterative Procedures for Finding the Generalized Inverses. *Mathematics* **2023**, *11*, 3031.
https://doi.org/10.3390/math11133031
