Abstract
In this paper, we propose two scaled Dai–Yuan (DY) directions for solving constrained monotone nonlinear systems. The proposed directions satisfy the sufficient descent condition independent of the line search strategy. We also propose two different relations for computing the scaling parameter at every iteration: the first is obtained by forcing the direction to approach the quasi-Newton direction, and the second by taking advantage of the popular Barzilai–Borwein strategy. Moreover, we propose a robust projection-based algorithm for solving constrained monotone nonlinear equations with applications to signal restoration and the restoration of blurred images. The global convergence of this algorithm is established under some mild assumptions. Finally, a comprehensive numerical comparison with relevant algorithms shows that the proposed algorithm is efficient.
1. Introduction
Consider the constrained nonlinear system

$$F(x) = 0, \quad x \in \Omega, \qquad (1)$$

where $F : \mathbb{R}^n \to \mathbb{R}^n$ is a continuous mapping assumed to be monotone, i.e.,

$$\langle F(x) - F(y),\, x - y \rangle \geq 0 \quad \text{for all } x, y \in \mathbb{R}^n. \qquad (2)$$

In addition, the set $\Omega$ is a nonempty closed convex subset of $\mathbb{R}^n$. If $\Omega = \mathbb{R}^n$, problem (1) reduces to an unconstrained monotone nonlinear system. Both cases have been extensively studied by many researchers. The constrained system (1) appears in many physical and mathematical applications, such as financial forecasting problems [1], compressive sensing [2], Bregman distances [3], and monotone variational inequalities [4].
Moreover, the Newton method [5] and quasi-Newton methods [6,7] are well-known approaches for solving systems of nonlinear equations. However, these methods require computing and storing the Jacobian matrix, or an approximation of it, at every iteration. Due to these limitations, they are unsuitable for large-scale problems and for nonsmooth systems. The spectral gradient (SG) and conjugate gradient (CG) methods are another class of methods for solving systems of nonlinear equations. These methods are matrix-free and can successfully handle large-scale problems; see [8,9,10,11,12,13,14,15] and the references therein. Given an initial point $x_0$, the main iterative procedure of CG methods for systems of monotone equations is

$$x_{k+1} = x_k + \alpha_k d_k, \quad k = 0, 1, 2, \ldots,$$
where $\alpha_k > 0$ is the step size, which can be computed using a line search strategy, and $d_k$ is the search direction defined by

$$d_0 = -F(x_0), \qquad d_k = -F(x_k) + \beta_k d_{k-1}, \quad k \geq 1;$$
the scalar $\beta_k$ is the parameter that differentiates the CG methods. Among the most efficient CG parameters is the one proposed by Dai and Yuan [16], given by

$$\beta_k^{DY} = \frac{\|F(x_k)\|^2}{d_{k-1}^T y_{k-1}},$$

where $y_{k-1} = F(x_k) - F(x_{k-1})$. Recently, the DY CG parameter and its modifications have been used to solve monotone nonlinear systems; see, for example, the two descent DY algorithms for monotone equations [17], the modified DY method with the sufficient descent property [18], the efficient DY-type spectral CG method [19], and the descent three-term DY CG method with application [20]. Most of these methods are efficient and apply to real-life problems. Motivated by these DY approaches, we present a scaled Dai–Yuan projection-based conjugate gradient method for solving monotone equations with applications in signal and image recovery. In addition, we propose two different relations for computing the scaling parameter at every iteration, namely, by forcing the proposed direction to approach the quasi-Newton direction, and by taking advantage of the popular Barzilai–Borwein strategy.
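To fix ideas, the following MATLAB sketch assembles a DY-type search direction from the quantities just defined; the variable names, the interface, and the lack of a safeguard on the denominator $d_{k-1}^T y_{k-1}$ are illustrative assumptions, not the implementation used in the experiments below.

```matlab
% Minimal sketch: a CG direction with the Dai-Yuan parameter for F(x) = 0.
% Fk = F(x_k), Fkm1 = F(x_{k-1}), dkm1 = d_{k-1}; all names are illustrative.
function dk = dy_direction(Fk, Fkm1, dkm1, k)
    if k == 0
        dk = -Fk;                             % first direction: -F(x_0)
    else
        yk = Fk - Fkm1;                       % y_{k-1} = F(x_k) - F(x_{k-1})
        beta = norm(Fk)^2 / (dkm1' * yk);     % Dai-Yuan parameter (unsafeguarded)
        dk = -Fk + beta * dkm1;               % CG direction
    end
end
```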
2. Scaled Dai–Yuan CG Methods
This section presents the scaled DY CG methods for solving monotone equations with convex constraints. Among the well-known efficient CG parameters is the one proposed by Dai and Yuan [16]. Moreover, scaling has proved to be an efficient strategy for enhancing the performance of CG-based algorithms; see [21]. In this work, to enhance the performance of the DY CG parameter, we present the following scaled DY CG parameter

$$\beta_k^{SDY} = \theta_k\, \beta_k^{DY} = \theta_k \frac{\|F(x_k)\|^2}{d_{k-1}^T y_{k-1}},$$

such that $\theta_k > 0$. The main contribution of this work is to propose some reasonable ways of computing the scalar $\theta_k$ at every iteration.
2.1. Scaling Parameter Based on the Quasi-Newton Approach
Newton and quasi-Newton directions contain the full Jacobian information or an approximation of it. Recall that the quasi-Newton (secant) equation is defined as

$$B_k s_{k-1} = y_{k-1}, \quad s_{k-1} = x_k - x_{k-1}, \quad y_{k-1} = F(x_k) - F(x_{k-1}),$$

where $B_k$ is a positive definite and symmetric Jacobian estimate. Again, assuming that $H_k$ is the inverse of $B_k$, the quasi-Newton direction is as follows:

$$d_k = -H_k F(x_k).$$
Multiplying Equation (10) by $y_{k-1}^T$ and using the secant equation, we obtain
Solving for $\theta_k$ in Equation (12), we obtain
Furthermore, to keep $\theta_k$ nonnegative, in the spirit of the nonnegative restriction of the Polak–Ribière–Polyak parameter, we propose the following modified version of Equation (14):
Moreover, to ensure that the sufficient descent condition is satisfied, independent of the line search procedure, we propose the following scaled DY CG direction:
where
2.2. Scaling Parameter Based on the Barzilai–Borwein Approach
The most prevalent choices for the spectral scalars are those proposed by Barzilai and Borwein [22], given as follows:

$$\theta_k^{BB1} = \frac{s_{k-1}^T s_{k-1}}{s_{k-1}^T y_{k-1}}, \qquad \theta_k^{BB2} = \frac{s_{k-1}^T y_{k-1}}{y_{k-1}^T y_{k-1}}.$$
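As a quick illustration, both BB scalars are two-line computations; in the sketch below, the variable names xk, xkm1, Fk, and Fkm1 are assumptions standing for the current and previous iterates and residuals.

```matlab
% Sketch: the two Barzilai-Borwein spectral scalars.
s = xk - xkm1;                       % s_{k-1} = x_k - x_{k-1}
y = Fk - Fkm1;                       % y_{k-1} = F(x_k) - F(x_{k-1})
theta_bb1 = (s' * s) / (s' * y);     % first BB choice
theta_bb2 = (s' * y) / (y' * y);     % second BB choice
```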
We aim to take advantage of the Barzilai–Borwein [22] approach; hence, we put forward the parameter $\theta_k$ as the solution of the following minimization problem:
where $\|\cdot\|_F$ stands for the Frobenius matrix norm and
Using the relation above, we obtain the following solution of (20):
Furthermore, to keep $\theta_k$ nonnegative, again in the spirit of the nonnegative restriction of the Polak–Ribière–Polyak parameter, we propose the following modified version of (22):
Moreover, to ensure that the sufficient descent condition is satisfied, independent of the line search procedure, we propose the following scaled DY CG direction:
where
Next, we define the projection operator $P_\Omega : \mathbb{R}^n \to \Omega$, given by

$$P_\Omega(x) = \arg\min \{ \|x - y\| : y \in \Omega \}.$$

One of the appealing features of this operator is its non-expansive property, i.e.,

$$\|P_\Omega(x) - P_\Omega(y)\| \leq \|x - y\| \quad \text{for all } x, y \in \mathbb{R}^n.$$
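For simple feasible sets, the projection is available in closed form. The sketch below assumes a box constraint $l \leq x \leq u$, which includes the nonnegative orthant used in Section 4.2 as the special case $l = 0$, $u = +\infty$; it is not a routine for a general convex set $\Omega$.

```matlab
% Sketch: Euclidean projection onto the box {x : l <= x <= u}.
function px = project_box(x, l, u)
    px = min(max(x, l), u);   % componentwise clipping onto the box
end
```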
Now, we present the spectral DY CG projection-based algorithm (Algorithm 1) for solving convex constrained monotone nonlinear equations.
Algorithm 1: The spectral DY CG projection-based algorithm (SDYCG)
Step 0. Choose an initial point $x_0 \in \Omega$, a tolerance $\varepsilon > 0$, and the line search parameters. Set $k = 0$ and $d_0 = -F(x_0)$.
Step 1. If $\|F(x_k)\| \leq \varepsilon$, stop; otherwise, go to Step 2.
Step 2. Compute the search direction $d_k$ by (15) or (24).
Step 3. Set $z_k = x_k + \alpha_k d_k$, where the step size $\alpha_k$ is determined to satisfy the line search (28).
Step 4. If $z_k \in \Omega$ and $\|F(z_k)\| \leq \varepsilon$, stop; otherwise, compute
$$x_{k+1} = P_\Omega\big[x_k - \lambda_k F(z_k)\big], \quad \text{where } \lambda_k = \frac{\langle F(z_k),\, x_k - z_k \rangle}{\|F(z_k)\|^2}.$$
Step 5. Set $k = k + 1$ and then go to Step 1.
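Putting the pieces together, the following MATLAB sketch implements a generic projection-based CG solver of this type. It is a minimal sketch, assuming a backtracking line search of the form commonly used with such methods, illustrative constants sigma and rho, and user-supplied handles F (the mapping), proj (the projection onto Omega), and dirfun (the direction routine, e.g., the scaled DY direction); it is not a verbatim transcription of Algorithm 1's parameter settings.

```matlab
% Sketch: projection-based CG solver for F(x) = 0, x in Omega.
function xk = projection_cg(F, proj, dirfun, x0, tol, maxit)
    sigma = 1e-4; rho = 0.5;                 % assumed line search constants
    xk = proj(x0); Fk = F(xk); dk = -Fk;
    for k = 0:maxit
        if norm(Fk) <= tol, return; end
        alpha = 1;                           % backtracking on a (28)-type rule;
        while -F(xk + alpha*dk)' * dk < ...  % terminates for descent directions
              sigma * alpha * norm(F(xk + alpha*dk)) * norm(dk)^2
            alpha = rho * alpha;
        end
        zk = xk + alpha * dk; Fz = F(zk);
        if norm(Fz) <= tol, xk = zk; return; end
        lambda = (Fz' * (xk - zk)) / norm(Fz)^2;   % hyperplane projection step size
        xk = proj(xk - lambda * Fz);               % project back onto Omega
        Fkm1 = Fk; dkm1 = dk; Fk = F(xk);
        dk = dirfun(Fk, Fkm1, dkm1, k + 1);        % e.g., the scaled DY direction
    end
end
```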
3. Global Convergence
This section presents the SDYCG algorithm’s global convergence result using the following assumptions:
- A1 The solution set of problem (1) is non-empty.
- A2 The function F is Lipschitz continuous on $\mathbb{R}^n$, i.e., there exists a constant $L > 0$ such that, for all $x, y \in \mathbb{R}^n$,

$$\|F(x) - F(y)\| \leq L \|x - y\|.$$
- A3 The function F satisfies (2).
Remark:
- Additionally, from the monotonicity assumption on F, for any $\bar{x}$ in the solution set we have $\langle F(x),\, x - \bar{x} \rangle \geq 0$ for all $x \in \mathbb{R}^n$.
Lemma 1.
For all $k \geq 0$, the line search (28) is well-defined.
Proof.
Assume, by contradiction, that there exists an iteration index $k_0$ such that, for any non-negative integer i, the trial step at $x_{k_0}$ fails to satisfy (28), that is,
Letting $i \to \infty$ in (33) and using the continuity of F, we obtain a contradiction with the sufficient descent condition. Hence, the line search (28) is well-defined. □
Lemma 2.
If the sequences $\{x_k\}$ and $\{z_k\}$ are generated by the SDYCG algorithm, then
Proof.
From the line search (28), we have
Let $\bar{x}$ be a point such that $F(\bar{x}) = 0$; using the monotonicity of F, we have
Therefore, $\{\|x_k - \bar{x}\|\}$ is a decreasing sequence and convergent. Now, utilizing (41) and the Lipschitz continuity of F, we obtain
This implies
The proof is now complete. □
Lemma 3.
The directions $\{d_k\}$ generated by the SDYCG algorithm are bounded.
Proof.
Starting with (15), for the case $k = 0$ we have $d_0 = -F(x_0)$, and hence $\|d_0\| = \|F(x_0)\|$ is bounded.
Now, for $k \geq 1$, we obtain
Theorem 1.
If the sequence $\{x_k\}$ is generated by the SDYCG algorithm, then

$$\liminf_{k \to \infty} \|F(x_k)\| = 0. \qquad (49)$$
Proof.
Suppose that (49) is not true. Then, there exists a constant $r > 0$ such that $\|F(x_k)\| \geq r$ for all $k \geq 0$.
Using the sufficient descent condition and the Cauchy–Schwarz inequality, we have
Hence,
Using Lemma 2 and the inequality (52), we obtain
By the line search (28), the last rejected trial step does not satisfy (28), so that
From the fact that $\{x_k\}$ and $\{d_k\}$ are bounded, there is an accumulation point $\hat{x}$ such that $x_k \to \hat{x}$ for $k \in K_1$. Similarly, there is an accumulation point $\hat{d}$ such that $d_k \to \hat{d}$ for $k \in K_2$, where $K_1$ and $K_2$ are infinite index sets. Hence, taking the limit as $k \to \infty$ along these index sets in (54), we obtain
Similarly, taking the limit as $k \to \infty$ in the sufficient descent condition, we obtain a contradiction with the assumption that $\|F(x_k)\| \geq r$ for all k. Hence, (49) holds, and the proof is complete. □
4. Numerical Experiment and Applications
This section presents numerical experiments in which the proposed algorithm solves large-scale constrained monotone nonlinear systems, as well as image and signal restoration problems.
4.1. Application to Monotone Nonlinear Equations
The proposed algorithm, with its two directions, was compared to the modified spectral gradient projection (MSGP) algorithm [23] and the derivative-free spectral projection (DFSP) algorithm [24]. Furthermore, for the proposed algorithm, we set the starting parameters as follows: , , , , and . Except for the stopping conditions, all of the compared algorithms were implemented using their published parameter values. The following problems were tested:
Problem 1
([25]).
- , and
Problem 2
([25]).
- , ,
- and
Problem 3
([26]).
- for and
Problem 4
(This problem is from Reference).
- , and .
Problem 5
([21]).
- for ,
- , and .
Problem 6
([27]).
- , for , and .
In Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6, NORM denotes the norm of the function F at the stopping point. The following initial points are considered: , , , , , , and .
Table 1.
Numerical comparison of the SDYCG algorithm versus the DFSP [24] and MSGP [23] algorithms.
Table 2.
Numerical comparison of the SDYCG algorithm versus the DFSP [24] and MSGP [23] algorithms.
Table 3.
Numerical comparison of the SDYCG algorithm versus the DFSP [24] and MSGP [23] algorithms.
Table 4.
Numerical comparison of the SDYCG algorithm versus the DFSP [24] and MSGP [23] algorithms.
Table 5.
Numerical comparison of the SDYCG algorithm versus the DFSP [24] and MSGP [23] algorithms.
Table 6.
Numerical comparison of the SDYCG algorithm versus the DFSP [24] and MSGP [23] algorithms.
For the numerical comparison, we used three metrics, namely, the number of iterations (ITER), the number of function evaluations (FEV), and the computational (CPU) time (TIME). Starting with Problem 1, the proposed algorithm successfully solved the problem for every initial guess, with fewer ITER, FEV, and TIME. The DFSP algorithm also solved the problem for all initial guesses, but with relatively high ITER and FEV compared to the SDYCG algorithm. We also observed that the MSGP algorithm failed for three initial guesses at the smallest dimension, and for four initial guesses as the dimension increases, although it performed very well on Problem 2 under all three metrics.
Moreover, for Problem 3, the proposed algorithm won in terms of the minimum ITER, FEV, and CPU time. In addition, the DFSP algorithm showed a high level of efficiency under the three metrics, but MSGP uniformly failed for three initial guesses across all five dimensions. For Problem 4, all the algorithms successfully and efficiently solved the problem for all initial guesses; under the three metrics, the SDYCG algorithm was the winner, followed by the DFSP algorithm and, lastly, the MSGP algorithm. A similar observation can be made for Problems 5 and 6; it can also be seen that the MSGP algorithm is relatively dimension-sensitive, as the number of failed initial guesses increases with the dimension.
Furthermore, to ease the numerical comparisons in the above tables, we used the well-known performance profile technique of Dolan and Moré [28]. Three figures were plotted, for ITER, TIME, and FEV. Figure 1, Figure 2 and Figure 3 show that, on average, the SDYCG algorithm requires fewer iterations, less computing time, and fewer function evaluations than the DFSP and MSGP algorithms.
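For completeness, a Dolan and Moré performance profile can be produced with a short routine such as the sketch below; the cost matrix T (problems by solvers, e.g., ITER or TIME, with failures recorded as NaN) and the solver-name input are assumed interfaces, not the scripts used for the figures.

```matlab
% Sketch: Dolan-More performance profile from a cost matrix T.
% T(i, j): cost of solver j on problem i (e.g., ITER or TIME); NaN = failure.
function perf_profile(T, names)
    [np, ns] = size(T);
    T(isnan(T)) = Inf;                          % a failure never attains the best cost
    r = T ./ repmat(min(T, [], 2), 1, ns);      % performance ratios per problem
    tau = sort(unique(r(isfinite(r))));         % breakpoints of the profile
    for s = 1:ns
        rho = arrayfun(@(t) sum(r(:, s) <= t) / np, tau);
        semilogx(tau, rho); hold on;            % fraction solved within factor tau of best
    end
    xlabel('\tau'); ylabel('\rho_s(\tau)'); legend(names); hold off;
end
```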
Figure 1.
Performance profile of the SDYCG algorithm versus the MSGP [23] and DFSP [24] algorithms for the number of iterations.
Figure 2.
Performance profile of the SDYCG algorithm versus the MSGP [23] and DFSP [24] algorithms for the CPU time.
Figure 3.
Performance profile of the SDYCG algorithm versus the MSGP [23] and DFSP [24] algorithms for the number of function evaluations.
4.2. Application in Signal Recovery
This subsection considers the basis pursuit denoising problem. Let $\tilde{x} \in \mathbb{R}^n$ be the original sparse signal, and let $b \in \mathbb{R}^k$ be an observation satisfying

$$b = Y\tilde{x} + \omega,$$

where $\omega$ denotes the observation noise.
The standard sparse $\ell_1$-regularization problem is represented as

$$\min_{x \in \mathbb{R}^n} \frac{1}{2}\|Yx - b\|_2^2 + w\|x\|_1, \qquad (58)$$

This problem arises in compressive sensing, where w is a nonnegative regularization parameter, $b \in \mathbb{R}^k$ is an observation, $Y \in \mathbb{R}^{k \times n}$ ($k \ll n$) is a linear operator, and $\|x\|_1$ and $\|x\|_2$ represent the $\ell_1$ and $\ell_2$ norms of x, respectively; see [2,29,30,31]. The problem (58) can be transformed into the following monotone nonlinear system:

$$F(z) = Hz + c, \quad z \in \mathbb{R}^{2n}_{+}, \qquad (59)$$

where $z = (u; v)$, $H = \begin{pmatrix} Y^TY & -Y^TY \\ -Y^TY & Y^TY \end{pmatrix}$, and $c = w e_{2n} + (-Y^Tb;\ Y^Tb)$, for some $u \geq 0$ and $v \geq 0$ with $x = u - v$, where $e_{2n} = (1, \ldots, 1)^T \in \mathbb{R}^{2n}$. The function (59) is Lipschitz continuous and monotone; for more detailed information about the transformation of (58) into (59), see [32,33]. The details are omitted here to avoid repetition. The SDYCG algorithm will be used to address this problem. Similar methods have been employed to deal with this problem; see [34,35,36].
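A convenient way to evaluate (59) is in operator form, avoiding the explicit $2n \times 2n$ matrix H. The sketch below assumes the splitting $x = u - v$ of [32,33]; the function name F_l1 is illustrative.

```matlab
% Sketch: F(z) = Hz + c for the l1 reformulation, in matrix-free form.
% Y: k-by-n sensing matrix, b: observation, w: regularization parameter,
% z = [u; v] with u, v >= 0 and x = u - v.
function Fz = F_l1(z, Y, b, w)
    n = size(Y, 2);
    u = z(1:n); v = z(n+1:end);
    g = Y' * (Y * (u - v) - b);   % gradient of 0.5*||Yx - b||^2 at x = u - v
    Fz = [w + g; w - g];          % equals Hz + c without forming H explicitly
end
```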
All codes in this work were written in MATLAB R2014a and run on an HP Core i5 8th Gen personal computer. We carried out two restoration experiments: signal restoration and image restoration. Further, we employed the direction (15) for signal restoration and the direction (24) for image restoration. The mean squared error (MSE) between the original signal $\tilde{x}$ and a recovered signal $x^*$ is defined as

$$\mathrm{MSE} = \frac{1}{n}\|\tilde{x} - x^*\|^2,$$

and the signal-to-noise ratio (SNR) of the recovered signal as

$$\mathrm{SNR} = 20\log_{10}\left(\frac{\|\tilde{x}\|}{\|x^* - \tilde{x}\|}\right).$$
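Both metrics are one-liners in MATLAB; in the sketch below, xbar denotes the original signal and xs the recovered one (assumed variable names).

```matlab
% Sketch: restoration-quality metrics.
mse = norm(xbar - xs)^2 / numel(xbar);            % mean squared error
snr = 20 * log10(norm(xbar) / norm(xs - xbar));   % signal-to-noise ratio in dB
```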
The primary purpose of the signal restoration experiment is to reconstruct a sparse signal of length n from observations of length k. We initialized the following free variables: , , , and . We conducted the experiment on a signal of length from observations of length , with 128 randomly placed nonzero components in the original signal. Y is the Gaussian matrix produced by the MATLAB instruction . Furthermore, the measurement b is noisy, that is, $b = Y\tilde{x} + \omega$, where $\omega$ is Gaussian noise normally distributed with mean 0 and variance . We used the objective function in (58) as the merit function and stopped the iterations when the criterion (60) fell below the prescribed tolerance.
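A typical setup for this experiment looks as follows. Since the exact values of n, k, and the noise level are not reproduced here, those in the sketch (n = 4096, k = 1024, noise level 1e-3) are assumptions chosen to match common practice in this literature; only the 128 nonzero components come from the text.

```matlab
% Sketch: generating the sparse test signal and noisy Gaussian measurements.
n = 4096; k = 1024; nnzs = 128;          % assumed sizes; 128 nonzeros as in the text
xbar = zeros(n, 1);
idx = randperm(n);
xbar(idx(1:nnzs)) = randn(nnzs, 1);      % random support and amplitudes
Y = randn(k, n);                         % Gaussian measurement matrix
omega = 1e-3 * randn(k, 1);              % Gaussian noise, assumed level
b = Y * xbar + omega;                    % noisy observation
```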
Compared with the PCG [37] and MSP [38] algorithms, the SDYCG algorithm restored the signal to almost its original form with fewer iterations, a smaller MSE, and less CPU time. We ran each code from the same starting point and used the same continuation strategy on the parameter w. The convergence behavior of each algorithm was observed until an accurate solution was achieved. The results are shown in Figure 4 and Figure 5; the experiment was replicated more than a hundred times with remarkably similar results. The SDYCG algorithm recovers the sparse signal with minimal processing time and minimal MSE. Furthermore, in terms of both the objective function and the MSE, the SDYCG method decreases faster than the PCG and MSP algorithms.
Figure 4.
The original signal, the measurement, and the signals recovered by the SDYCG, PCG, and MSP algorithms, shown from top to bottom.
Figure 5.
Comparison of the SDYCG, PCG, and MSP algorithms. The x-axes represent the number of iterations and the CPU time in seconds, respectively. MSE and objective function values are represented on the y-axes.
In addition, for the image restoration experiment, we used Y as a partial DWT matrix, with s rows chosen at random from the full DWT matrix. This type of encoding matrix Y does not need explicit storage and allows for quick matrix-vector multiplications involving Y and its transpose. As a result, it may be tested on large images without the need to store any matrix. The initial parameter values are , , , and . In this experiment, we utilized the standard Lena colour image with a size of and the colour baby image with a size of . For the performance comparison, we considered the well-known PSGM [34], CGD [35], and TPRP [36] iterative algorithms. The iteration procedure of all algorithms began at and ended when (60) was less than . Figure 6 and Figure 7 show the original, blurred, and reconstructed images generated by each algorithm. As can be seen from the figures, all of the algorithms considered generated images of similar quality; however, SDYCG was quicker. As a result, we can infer that SDYCG is the winner.
Figure 6.
Original image, blurred image, restored image by SDYCG with ITER = 10, obj = 8.128, TIME = 3.27, MSE = 6.3421, SNR = 20.65, SSIM = 0.90, restored image by PSGM with ITER = 45, obj = 7.365, TIME = 14.00, MSE = 3.8961, SNR = 22.77, SSIM = 0.93, restored image by CGD with ITER = 713, obj = 7.268, TIME = 167.89, MSE = 4.4565, SNR = 22.19, SSIM = 0.93, and restored image by TPRP with ITER = 104, obj = 7.445, TIME = 3049.66, MSE = 3.8931, SNR = 22.77, SSIM = 0.92.
Figure 7.
Original image, blurred image, restored image by SDYCG with ITER = 25, obj = 1.251, TIME = 5.67, MSE = 6.7866, SNR = 22.55, SSIM = 0.85, restored image by PSGM with ITER = 29, obj = 1.222, TIME = 6.03, MSE = 5.8080, SNR = 23.23, SSIM = 0.87, restored image by CGD with ITER = 491, obj = 1.214, TIME = 106.47, MSE = 6.0494, SNR = 23.05, SSIM = 0.88, and restored image by TPRP with ITER = 41, obj = 1.256, TIME = 4069.64, MSE = 7.0079, SNR = 22.41, SSIM = 0.85.
5. Conclusions
We proposed a scaled DY projection-based CG algorithm for solving convex-constrained monotone nonlinear equations, with applications to signal and image restoration problems. Independent of the line search strategy employed, the proposed directions satisfy the sufficient descent condition. We also offered two alternative ways of determining the scaling parameter at each iteration, namely, by forcing the proposed direction to approach the quasi-Newton direction and by utilizing the popular Barzilai–Borwein [22] technique. The proposed algorithm's global convergence was established under some mild assumptions. The robustness of the proposed algorithm was demonstrated by its ability to solve large-scale monotone nonlinear systems with convex constraints. The two proposed directions may be employed in all fields where CG methods apply, such as unconstrained optimization problems, motion control of robotic manipulators, etc.
Author Contributions
Methodology, software, supervision, A.A.; validation, writing-original draft preparation, investigation, J.S.; visualization, project administration, formal analysis, H.E.; writing-review and editing, funding acquisition, resources, P.J.; writing-review and editing, data curation, S.K.S. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported by Taif University Researchers Supporting Project number (TURSP-2020/326), Taif University, Taif, Saudi Arabia.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Dai, Z.; Zhou, H.; Wen, F.; He, S. Efficient predictability of stock return volatility: The role of stock market implied volatility. N. Am. J. Econ. Finance 2020, 52, 101174. [Google Scholar] [CrossRef]
- Figueiredo, M.A.; Nowak, R.D.; Wright, S.J. Gradient projection for sparse reconstruction, application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 2007, 1, 586–597. [Google Scholar] [CrossRef]
- Iusem, A.N.; Solodov, M.V. Newton-type methods with generalized distances for constrained optimization. Optimization 1997, 41, 257–278. [Google Scholar] [CrossRef]
- Zhao, Y.B.; Li, D.H. Monotonicity of fixed point and normal mapping associated with variational inequality and its application. SIAM J. Optim. 2001, 4, 962–973. [Google Scholar] [CrossRef]
- Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: Cambridge, MA, USA, 1970. [Google Scholar]
- Zhou, G.; Toh, K.C. Superlinear convergence of a Newton-type algorithm for monotone equations. J. Optimiz. Theory App. 2005, 125, 205–221. [Google Scholar] [CrossRef]
- Zhou, W.J.; Li, D.H. A globally convergent BFGS method for nonlinear monotone equations without any merit functions. Math. Comput. 2008, 77, 2231–2240. [Google Scholar] [CrossRef]
- Sabi’u, J.; Shah, A.; Waziri, M.Y.; Ahmed, K. Modified Hager-Zhang conjugate gradient methods via singular value analysis for solving monotone nonlinear equations with convex constraint. Int. J. Comput. Methods 2020, 18, 2050043. [Google Scholar] [CrossRef]
- Sabi’u, J.; Shah, A.; Waziri, M.Y. A modified Hager-Zhang conjugate gradient method with optimal choices for solving monotone nonlinear equations. Int. J. Comput. Math. 2021, 99, 1–23. [Google Scholar] [CrossRef]
- Sabi’u, J.; Shah, A. An efficient three-term conjugate gradient-type algorithm for monotone nonlinear equations. RAIRO-Oper Res. 2021, 55, 1113. [Google Scholar] [CrossRef]
- Waziri, M.Y.; Ahmed, K.; Sabi’u, J.; Halilu, A.S. Enhanced Dai–Liao conjugate gradient methods for systems of monotone nonlinear equations. SeMA J. 2021, 78, 15–51. [Google Scholar] [CrossRef]
- Abubakar, A.B.; Sabi’u, J.; Kumam, P.; Shah, A. Solving nonlinear monotone operator equations via modified sr1 update. J. Appl. Math. Comput. 2021, 67, 1–31. [Google Scholar] [CrossRef]
- Waziri, M.Y.; Hungu, K.A.; Sabi’u, J. Descent Perry conjugate gradient methods for systems of monotone nonlinear equations. Numer. Algorithms 2020, 85, 763–785. [Google Scholar] [CrossRef]
- Waziri, M.Y.; Ahmed, K.; Sabi’u, J. A Dai–Liao conjugate gradient method via modified secant equation for system of nonlinear equations. Arab. J. Math. 2020, 9, 443–457. [Google Scholar] [CrossRef]
- Sabi’u, J.; Shah, A.; Waziri, M.Y. Two optimal Hager-Zhang conjugate gradient methods for solving monotone nonlinear equations. Appl. Numer. Math. 2020, 153, 217–233. [Google Scholar] [CrossRef]
- Dai, Y.H.; Yuan, Y. A nonlinear conjugate gradient method with a strong global convergence property. SIAM J. Optim. 1999, 10, 177–182. [Google Scholar] [CrossRef]
- Waziri, M.Y.; Ahmed, K. Two Descent Dai–Yuan Conjugate Gradient Methods for Systems of Monotone Nonlinear Equations. J. Sci. Comput. 2022, 90, 1–53. [Google Scholar] [CrossRef]
- Kambheera, A.; Ibrahim, A.H.; Muhammad, A.B.; Abubakar, A.B.; Hassan, B.A. Modified Dai–Yuan Conjugate Gradient Method with Sufficient Descent Property for Nonlinear Equations. Thai J. Math. 2022, 145–167. Available online: http://thaijmath.in.cmu.ac.th/index.php/thaijmath/article/viewFile/6026/354355047 (accessed on 6 June 2022).
- Aji, S.; Kumam, P.; Awwal, A.M.; Yahaya, M.M.; Sitthithakerngkiet, K. An efficient DY-type spectral conjugate gradient method for system of nonlinear monotone equations with application in signal recovery. AIMS Math. 2021, 6, 8078–8106. [Google Scholar] [CrossRef]
- Abdullahi, H.; Awasthi, A.K.; Waziri, M.Y.; Halilu, A.S. Descent three-term DY-type conjugate gradient methods for constrained monotone equations with application. Comput. Appl. Math. 2022, 41, 1–28. [Google Scholar] [CrossRef]
- Sabi’u, J.; Aremu, K.O.; Althobaiti, A.; Shah, A. Scaled three-term conjugate gradient methods for solving monotone equations with application. Symmetry 2022, 14, 936. [Google Scholar] [CrossRef]
- Barzilai, J.; Borwein, J.M. Two-point step size gradient methods. IMA J. Numer. Anal. 1988, 8, 141–148. [Google Scholar] [CrossRef]
- Zheng, L.; Yang, L.; Liang, Y. A modified spectral gradient projection method for solving non-linear monotone equations with convex constraints and its application. IEEE Access 2020, 8, 92677–92686. [Google Scholar] [CrossRef]
- Amini, K.; Faramarzi, P.; Bahrami, S. A spectral conjugate gradient projection algorithm to solve the large-scale system of monotone nonlinear equations with application to compressed sensing. Int. J. Comp. Math. 2022, 1–18. [Google Scholar] [CrossRef]
- La Cruz, W.; Martinez, J.; Raydan, M. Spectral residual method without gradient information for solving large-scale nonlinear systems of equations. Math. Comput. 2006, 75, 1429–1448. [Google Scholar] [CrossRef]
- Halilu, A.S.; Majumder, A.; Waziri, M.Y.; Ahmed, K. Signal recovery with convex constrained nonlinear monotone equations through conjugate gradient hybrid approach. Math Comput. Simul. 2021, 187, 520–539. [Google Scholar] [CrossRef]
- Liu, J.; Duan, Y. Two spectral gradient projection methods for constrained equations and their linear convergence rate. J. Inequal. Appl. 2015, 2015, 1–13. [Google Scholar] [CrossRef]
- Dolan, E.D.; Moré, J.J. Benchmarking optimization software with performance profiles. Math. Program. 2002, 91, 201–213. [Google Scholar] [CrossRef]
- Hale, E.T.; Yin, W.; Zhang, Y. A fixed-point continuation method for l1 regularized minimization with applications to compressed sensing. SIAM J. Optim. 2008, 19, 1107–1130. [Google Scholar] [CrossRef]
- Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imag. Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef]
- Van den Berg, E.; Friedlander, M.P. Probing the pareto frontier for basis pursuit solutions. SIAM J. Sci. Comput. 2008, 31, 890–912. [Google Scholar] [CrossRef]
- Xiao, Y.H.; Wang, Q.Y.; Hu, Q.J. Non-smooth equations based method for l1-norm problems with applications to compressed sensing. Nonlinear Anal. TMA 2011, 74, 3570–3577. [Google Scholar] [CrossRef]
- Pang, J.S. Inexact Newton methods for the nonlinear complementarity problem. Math. Program. 1986, 36, 54–71. [Google Scholar] [CrossRef]
- Awwal, A.M.; Kumam, P.; Mohammad, H.; Watthayu, W.; Abubakar, A.B. A Perry-type derivative-free algorithm for solving nonlinear system of equations and minimizing l1 regularized problem. Optimization 2021, 70, 1231–1259. [Google Scholar] [CrossRef]
- Xiao, Y.H.; Zhu, H. A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing. J. Math. Anal. Appl. 2013, 405, 310–319. [Google Scholar] [CrossRef]
- Ibrahim, A.H.; Deepho, J.; Abubakar, A.B.; Adamu, A. A three-term Polak-Ribière-Polyak derivative-free method and its application to image restoration. Sci. Afr. 2021, 13, e00880. [Google Scholar] [CrossRef]
- Liu, J.K.; Lia, S.J. A projection method for convex constrained monotone nonlinear equations with applications. Comput. Math. Appl. 2015, 70, 2442–2453. [Google Scholar] [CrossRef]
- Abubakar, A.B.; Kumam, P.; Mohammad, H.; Awwal, A.M. A Barzilai-Borwein gradient projection method for sparse signal and blurred image restoration. J. Franklin Inst. 2020, 357, 7266–7285. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).