Abstract
The resolvent is a fundamental concept in the study of various operator splitting algorithms. In this paper, we investigate the problem of computing the resolvent of compositions of operators with bounded linear operators. First, we derive several explicit formulas for this resolvent operator under additional constraints on the linear operator. Second, we propose a fixed point approach for computing this resolvent operator in the general case. Based on the Krasnoselskii–Mann algorithm for finding fixed points of non-expansive operators, we prove the strong convergence of the sequence generated by the proposed algorithm. As a consequence, we obtain an effective iterative algorithm for computing the scaled proximity operator of a convex function composed with a linear operator, which has wide applications in image restoration and image reconstruction problems. Furthermore, we propose and study iterative algorithms for computing the resolvent operator of a finite sum of maximally monotone operators as well as the proximal operator of a finite sum of proper, lower semi-continuous convex functions.
Keywords:
maximally monotone operators; Krasnoselskii–Mann algorithm; Yosida approximation; resolvent; Douglas–Rachford splitting algorithm
MSC:
49J53; 65K10; 47H05
1. Introduction
Let $H$ be a real Hilbert space whose inner product is denoted by $\langle \cdot, \cdot \rangle$ and whose norm is $\| \cdot \|$. Let $T: H \to 2^H$ be a maximally monotone operator with domain and range denoted by $\operatorname{dom} T$ and $\operatorname{ran} T$, respectively. Let $I$ be the identity operator. We consider the simplest monotone inclusion problem: find $x \in H$ such that
$$0 \in T(x). \qquad (1)$$
Many problems in variational inequalities, partial differential equations, and signal and image processing can be solved via the monotone inclusion problem (1); see, for example, [1,2,3,4]. It is well known that $x$ is a solution of (1) if and only if $x = J_{\lambda T}(x)$ for any $\lambda > 0$. Here and in what follows, the resolvent of $T$ with parameter $\lambda > 0$ is defined by $J_{\lambda T} = (I + \lambda T)^{-1}$, and the Yosida approximation of $T$ with index $\lambda$ is denoted by $T_\lambda = \frac{1}{\lambda}(I - J_{\lambda T})$. The proximal point algorithm (PPA) is the most popular method for solving (1).
Let $f: H \to (-\infty, +\infty]$ be a proper, lower semi-continuous convex function. By Fermat's lemma, the following convex minimization problem
$$\min_{x \in H} f(x) \qquad (2)$$
is equivalent to the monotone inclusion problem (1) with $T = \partial f$, where $\partial f$ is the classical subdifferential of $f$. For any $x_0 \in H$ and $\lambda > 0$, the PPA for solving the convex minimization problem (2) is defined as
$$x_{n+1} = J_{\lambda \partial f}(x_n). \qquad (3)$$
In fact, the resolvent of $\partial f$ coincides with the proximity operator $\operatorname{prox}_{\lambda f}$, which was first introduced by Moreau [5]. More precisely, we have
$$J_{\lambda \partial f}(x) = \operatorname{prox}_{\lambda f}(x) = \operatorname*{argmin}_{y \in H} \left\{ f(y) + \frac{1}{2\lambda} \|y - x\|^2 \right\}. \qquad (4)$$
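To fix ideas, here is a minimal numerical sketch of the PPA step (3), assuming $f = \|\cdot\|_1$, whose proximity operator is the classical soft-thresholding map (the function name and test data are our own illustrative choices):

```python
import numpy as np

def prox_l1(x, lam):
    # Soft-thresholding: the proximity operator of lam * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.array([3.0, -0.2, 1.5])
lam = 1.0
for _ in range(50):
    x = prox_l1(x, lam)  # PPA step: x_{n+1} = prox_{lam f}(x_n)
print(x)  # converges to the minimizer of ||x||_1, namely 0
```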
The proximity operator plays an important role in various operator splitting algorithms for solving convex optimization problems; see, for example, [6,7,8]. In particular, Combettes et al. [9] proposed a forward-backward splitting algorithm for solving the dual of the proximity problem for a sum of composed convex functions. Adly et al. [10] provided an explicit decomposition of the proximity operator of the sum of two closed convex functions. Since the resolvent operator is a natural generalization of the proximity operator, Bauschke and Combettes [11] extended the Dykstra algorithm [12], originally designed for computing the projection onto the intersection of two closed convex sets, to compute the resolvent of the sum of two maximally monotone operators. Combettes [13] generalized the Douglas–Rachford splitting and Dykstra-like algorithms to compute the resolvent of a sum of maximally monotone operators. Very recently, Aragón Artacho and Campoy [14] generalized the averaged alternating modified reflections algorithm [15] to compute the resolvent of the sum of two maximally monotone operators. On the other hand, consider the resolvent of the composed operator $A^*TA$, where $H$ and $G$ are two Hilbert spaces, $A: H \to G$ is a continuous linear operator with adjoint $A^*$, and $T: G \to 2^G$ is a maximally monotone operator. Fukushima [16] proved that, if $AA^*$ is invertible, then $A^*TA$ is maximally monotone. Moreover, Fukushima [16] showed that
and
where $z$ is the unique solution of an auxiliary inclusion problem. The difference between (5) and (6) is that one formula requires evaluating the inverse operator $T^{-1}$, while the other only involves $T$ itself. In Proposition 23.25 of [17], Bauschke and Combettes proved that, if $AA^* = \mu I$ for some $\mu > 0$, then
$$J_{A^*TA} = I - \mu^{-1} A^*(I - J_{\mu T})A. \qquad (7)$$
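Formula (7) is easy to check numerically when $T$ is taken to be a symmetric positive definite matrix (a single-valued maximally monotone operator), so that every resolvent reduces to a linear solve; the sizes and the factor $\mu$ below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
# A with A A^T = mu * I: take sqrt(mu) times a matrix with orthonormal rows.
mu = 2.0
Q, _ = np.linalg.qr(rng.standard_normal((5, 3)))   # Q: 5x3, Q^T Q = I_3
A = np.sqrt(mu) * Q.T                              # A: 3x5, A A^T = mu * I_3
M = rng.standard_normal((3, 3)); M = M @ M.T + np.eye(3)  # SPD, hence maximally monotone

x = rng.standard_normal(5)
# Direct resolvent: (I + A^T M A)^{-1} x.
p_direct = np.linalg.solve(np.eye(5) + A.T @ M @ A, x)
# Formula (7): x - mu^{-1} A^T (I - J_{mu M}) A x, with J_{mu M} = (I + mu M)^{-1}.
JmuM = np.linalg.inv(np.eye(3) + mu * M)
p_formula = x - A.T @ ((np.eye(3) - JmuM) @ (A @ x)) / mu
print(np.allclose(p_direct, p_formula))  # True
```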
Since the computation of the resolvent of composed operators in (5)–(7) is restricted by these conditions, Moudafi [18] developed a fixed point approach for computing the resolvent of the composed operator $A^*TA$ that requires neither the invertibility of $AA^*$ nor the condition $AA^* = \mu I$. The basic assumption is that the operator $A^*TA$ is maximally monotone. This is true, for instance, if $0 \in \operatorname{ri}(\operatorname{ran} A - \operatorname{dom} T)$ [19,20], where $\operatorname{ri}$ stands for the relative interior of a set; it also holds if $\operatorname{cone}(\operatorname{ran} A - \operatorname{dom} T) = \overline{\operatorname{span}}(\operatorname{ran} A - \operatorname{dom} T)$ [17], where $\operatorname{cone}$ denotes the conical hull of a set and $\overline{\operatorname{span}}$ stands for the closed span of a set. The most general condition for the maximal monotonicity of the composition can be found in [21].
Let $T = \partial f$, where $f$ is a proper, lower semi-continuous convex function on $G$; then, computing the resolvent of $A^*\partial f A$ is equivalent to evaluating the proximity operator $\operatorname{prox}_{f \circ A}$. More precisely, we have
$$J_{A^*\partial f A}(x) = \operatorname{prox}_{f \circ A}(x) = \operatorname*{argmin}_{y \in H} \left\{ f(Ay) + \frac{1}{2} \|y - x\|^2 \right\}. \qquad (8)$$
The convex optimization problem (8) is a generalization of the famous Rudin–Osher–Fatemi (ROF) image denoising model [22]. It is worth mentioning that Micchelli et al. [23] proposed a fixed point algorithm to solve the proximity problem (8). The work of Moudafi [18] is a generalization of that of Micchelli et al. [23].
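To illustrate the fixed point idea, consider (8) with $f = \|\cdot\|_1$ and $A$ a finite difference operator (a one-dimensional anisotropic analogue of the ROF model). The sketch below uses the standard dual characterization $\operatorname{prox}_{f \circ A}(x) = x - A^{\top}y^*$, where $y^*$ is a fixed point of $y \mapsto \operatorname{prox}_{\lambda f^*}((I - \lambda AA^{\top})y + \lambda Ax)$; this is one common realization of such schemes, not necessarily the exact formulation of [23]:

```python
import numpy as np

def prox_l1_composed(x, A, lam=None, iters=500):
    # Computes prox_{||.||_1 ∘ A}(x) = argmin_p ||A p||_1 + (1/2)||p - x||^2
    # via a fixed point iteration on the dual variable y.
    if lam is None:
        lam = 1.0 / np.linalg.norm(A, 2) ** 2   # makes the fixed point map averaged
    y = np.zeros(A.shape[0])
    for _ in range(iters):
        z = y + lam * (A @ (x - A.T @ y))
        # prox of lam * f* with f = ||.||_1 is the projection onto [-1, 1]^m.
        y = np.clip(z, -1.0, 1.0)
    return x - A.T @ y

# 1-D total-variation-like example: A is the forward difference operator.
n = 6
A = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
x = np.array([0.0, 0.1, 5.0, 5.2, 0.2, 0.0])
print(prox_l1_composed(x, A))  # a piecewise-flattened version of x
```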
In recent years, Newton-type methods have been combined with the forward-backward splitting (FBS) algorithm to accelerate the original FBS algorithm; see, for example, [24,25,26]. Argyriou et al. [27] considered the following convex optimization problem:
$$\min_{x \in H} \left\{ f(Ax) + \frac{1}{2} \|x - b\|_U^2 \right\}, \qquad (9)$$
where $b \in H$, $A$ is a linear transformation, $U$ is a symmetric positive definite matrix, and $f$ is a proper, lower semi-continuous convex function. They proved that the minimizer of (9) can be found via a fixed point equation. In particular, when $U = I$, the convex optimization problem (9) is equivalent to the problem of computing the proximity operator of $f \circ A$ in (8). In [28], Chen et al. proposed an accelerated primal-dual fixed point (PDFP2O) algorithm based on an adapted metric method for solving the convex optimization of the sum of two convex functions, one of which is differentiable and the other of which is composed with a linear transformation. This algorithm can be viewed as a combination of the original PDFP2O [29] with a quasi-Newton method. The convex optimization problem (9) can be viewed as the proximity operator of $f \circ A$ relative to the metric induced by $U$. Recall that the proximity operator of a proper, lower semi-continuous convex function $f$ from $H$ to $(-\infty, +\infty]$ relative to the metric induced by $U$ is defined by
$$\operatorname{prox}_f^U(x) = \operatorname*{argmin}_{y \in H} \left\{ f(y) + \frac{1}{2} \|y - x\|_U^2 \right\}, \qquad (10)$$
which was introduced in [30]; see also [31]. Thus, (9) is equivalent to evaluating
$$\operatorname{prox}_{f \circ A}^U(b) = \operatorname*{argmin}_{x \in H} \left\{ f(Ax) + \frac{1}{2} \|x - b\|_U^2 \right\}, \qquad (11)$$
where $b$, $A$, $U$ and $f$ are the same as in (9). Letting $A = I$ in (11), it becomes the scaled proximity operator (10). By the first-order optimality condition, the convex optimization problem (11) is equivalent to the following monotone inclusion problem:
$$0 \in U(x - b) + A^*\partial f(Ax), \qquad (12)$$
which means that $x = (I + U^{-1}A^*\partial f A)^{-1}(b) = J_{U^{-1}A^*\partial f A}(b)$. It is worth mentioning that the scaled proximity operator (10) was extensively used in [32,33] for deriving effective iterative algorithms to solve structured convex optimization problems. However, these works did not consider the general scaled proximity operator of $f \circ A$ or the related resolvent operator problem.
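When the metric is diagonal, $U = \operatorname{diag}(u_1, \dots, u_n)$, and $f = \|\cdot\|_1$, the scaled proximity operator (10) separates coordinate-wise into soft-thresholding with thresholds $1/u_i$; a minimal sketch:

```python
import numpy as np

def prox_l1_scaled(x, u):
    # prox_f^U(x) with f = ||.||_1 and U = diag(u):
    # argmin_y ||y||_1 + (1/2)||y - x||_U^2 separates per coordinate,
    # giving soft-thresholding with coordinate-wise threshold 1/u_i.
    return np.sign(x) * np.maximum(np.abs(x) - 1.0 / u, 0.0)

x = np.array([2.0, -0.5, 0.3])
u = np.array([1.0, 4.0, 10.0])
print(prox_l1_scaled(x, u))  # [1.0, -0.25, 0.2]
```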
For this purpose, in this paper, we investigate the solution of the resolvent of the composed operator $U^{-1}A^*TA$, where $A: H \to G$ is a continuous linear operator, $T: G \to 2^G$ is a maximally monotone operator, and $U$ is a self-adjoint strongly positive definite operator. In particular, when $T = \partial f$, the resolvent of the composed operator is equivalent to the proximity operator $\operatorname{prox}_{f \circ A}^U$ as follows:
$$J_{U^{-1}A^*\partial f A}(x) = \operatorname{prox}_{f \circ A}^U(x) = \operatorname*{argmin}_{y \in H} \left\{ f(Ay) + \frac{1}{2} \|y - x\|_U^2 \right\}. \qquad (13)$$
The convex minimization problem (13) is an extension of the convex minimization problem (8). In this paper, we always assume that $A^*TA$ is maximally monotone, which holds under suitable qualification conditions. According to Minty's theorem, if $U^{-1}A^*TA$ is maximally monotone, then $\operatorname{ran}(I + \lambda U^{-1}A^*TA) = H$ for any $\lambda > 0$. Thus, the resolvent $J_{\lambda U^{-1}A^*TA}$ is single-valued for any $\lambda > 0$. To study the solution of the resolvent of the composed operator $U^{-1}A^*TA$, we divide our work into two parts. First, we present an explicit solution of the resolvent of the composed operator under some conditions on $A$ and $U$. Second, we develop a fixed point algorithm to solve for the resolvent of the composed operator. As an application, we discuss the resolvent of a finite sum of maximally monotone operators. Furthermore, we employ the obtained results to solve the problem of computing the scaled proximity operator of a convex function composed with a linear operator and of a finite sum of proper, lower semi-continuous convex functions, respectively.
The rest of the paper is organized as follows. In Section 2, we review some background on monotone operator theory. In Section 3, we first investigate the explicit solution of the resolvent of the composed operator $U^{-1}A^*TA$; second, we propose a fixed point approach for computing this resolvent; finally, we employ the proposed fixed point algorithm to compute the resolvent of the sum of a finite number of maximally monotone operators relative to the metric induced by $U$. In Section 4, we apply the obtained results to the problem of computing the scaled proximity operator of a convex function composed with a linear operator and of a finite sum of proper, lower semi-continuous convex functions, respectively. We give some conclusions and future work in the last section.
2. Preliminaries
In this section, we review some definitions and lemmas in monotone operator theory and convex analysis, which are used throughout the paper. Most of them can be found in [17].
Let $H$ and $G$ be real Hilbert spaces with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\| \cdot \|$, respectively. $x_n \rightharpoonup x$ stands for the sequence $\{x_n\}$ converging weakly to $x$, and $x_n \to x$ stands for $\{x_n\}$ converging strongly to $x$. $I$ denotes the identity operator. Let $A: H \to G$ be a continuous linear operator with adjoint $A^*: G \to H$, so that $\langle Ax, y \rangle = \langle x, A^*y \rangle$ for any $x \in H$ and $y \in G$.
Let $T: H \to 2^H$ be a set-valued operator. We denote by $\operatorname{dom} T = \{x \in H : Tx \neq \emptyset\}$ its domain, by $\operatorname{ran} T = T(\operatorname{dom} T)$ its range, by $\operatorname{gra} T = \{(x, u) \in H \times H : u \in Tx\}$ its graph, and by $\operatorname{zer} T = \{x \in H : 0 \in Tx\}$ its set of zeros. We say that $T$ is monotone if $\langle x - y, u - v \rangle \geq 0$ for all $(x, u), (y, v) \in \operatorname{gra} T$. $T$ is said to be maximally monotone if its graph is not properly contained in the graph of any other monotone operator. Letting $\lambda > 0$, the resolvent of $\lambda T$ is defined by
$$J_{\lambda T} = (I + \lambda T)^{-1},$$
and the Yosida approximation of $T$ with index $\lambda$ is
$$T_\lambda = \frac{1}{\lambda}(I - J_{\lambda T}).$$
The resolvent and the Yosida approximation of $T$ enjoy the following relationship:
$$T_\lambda(x) \in T(J_{\lambda T}(x)), \quad \text{for all } x \in H.$$
We follow the notation of [31]. Let $\mathcal{B}(H, G)$ be the space of bounded linear operators from $H$ to $G$, and let $\mathcal{B}(H) = \mathcal{B}(H, H)$. We set $\mathcal{S}(H) = \{U \in \mathcal{B}(H) : U = U^*\}$, where $U^*$ denotes the adjoint of $U$. On $\mathcal{S}(H)$, the Loewner partial ordering is defined by
$$U \succeq V \iff \langle Ux, x \rangle \geq \langle Vx, x \rangle, \quad \text{for all } x \in H.$$
Let $\alpha > 0$. We set
$$\mathcal{P}_\alpha(H) = \{U \in \mathcal{S}(H) : U \succeq \alpha I\}.$$
Let $P$ be an orthogonal matrix with inverse $P^{-1} = P^T$, and let $U = P \Lambda P^T$, where $\Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$ and each $\lambda_i$ is an eigenvalue of $U$. In addition, let $\Lambda^{1/2} = \operatorname{diag}(\sqrt{\lambda_1}, \dots, \sqrt{\lambda_n})$. Then, $U = (P \Lambda^{1/2} P^T)(P \Lambda^{1/2} P^T)$, so $U^{1/2} = P \Lambda^{1/2} P^T$ is defined as the square root of $U$. For every $U \in \mathcal{P}_\alpha(H)$, we define a scalar product and a norm by
$$\langle x, y \rangle_U = \langle Ux, y \rangle, \qquad \|x\|_U = \sqrt{\langle Ux, x \rangle}.$$
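In finite dimensions, $U^{1/2}$ and the induced norm are easy to compute from this eigendecomposition; a small numerical sketch:

```python
import numpy as np

def sqrtm_spd(U):
    # Square root of a symmetric positive definite matrix via U = P diag(lam) P^T.
    lam, P = np.linalg.eigh(U)
    return P @ np.diag(np.sqrt(lam)) @ P.T

U = np.array([[2.0, 1.0], [1.0, 2.0]])
R = sqrtm_spd(U)
print(np.allclose(R @ R, U))  # True
# The induced norm satisfies ||x||_U = sqrt(<U x, x>) = ||U^{1/2} x||.
```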
Let $T: H \to H$ be a single-valued operator. We say that $T$ is non-expansive if $\|Tx - Ty\| \leq \|x - y\|$ for all $x, y \in H$. $T$ is firmly non-expansive if $\|Tx - Ty\|^2 \leq \langle Tx - Ty, x - y \rangle$ for all $x, y \in H$. $T$ is $\beta$-cocoercive for some $\beta > 0$ if $\langle Tx - Ty, x - y \rangle \geq \beta \|Tx - Ty\|^2$ for every $x, y \in H$. $T$ is $\alpha$-averaged for $\alpha \in (0, 1)$ if there exists a non-expansive operator $R$ such that $T = (1 - \alpha)I + \alpha R$, or, equivalently, if for every $x, y \in H$,
$$\|Tx - Ty\|^2 \leq \|x - y\|^2 - \frac{1 - \alpha}{\alpha} \|(I - T)x - (I - T)y\|^2.$$
We collect several useful lemmas.
Lemma 1.
([17]) Let $T: H \to 2^H$ be a maximally monotone operator and $\lambda > 0$. Then, the following hold:
- (i) $J_{\lambda T}$ and $I - J_{\lambda T}$ are firmly non-expansive and maximally monotone;
- (ii) the Yosida approximation $T_\lambda$ is λ-cocoercive and maximally monotone.
Lemma 2.
([17]) Let C be a nonempty subset of H and let $T: C \to H$. We have
- (i) T is non-expansive if and only if $I - T$ is $\frac{1}{2}$-cocoercive;
- (ii) T is firmly non-expansive if and only if $I - T$ is firmly non-expansive;
- (iii) T is $\beta$-cocoercive if and only if $I - \gamma T$ is $\frac{\gamma}{2\beta}$-averaged, where $\gamma \in (0, 2\beta)$.
Lemma 3.
([34,35]) Let S be a nonempty subset of H, let $T_1: S \to H$ be $\alpha_1$-averaged and let $T_2: H \to H$ be $\alpha_2$-averaged. Then, $T_2 \circ T_1$ is $\alpha$-averaged with $\alpha = \frac{\alpha_1 + \alpha_2 - 2\alpha_1\alpha_2}{1 - \alpha_1\alpha_2}$.
Lemma 4.
([31]) Let $T: H \to 2^H$ be a maximally monotone operator, let $\alpha > 0$ and let $U \in \mathcal{P}_\alpha(H)$. The scalar product of H is defined by $\langle x, y \rangle_U = \langle Ux, y \rangle$, for any $x, y \in H$. Then, the following hold:
- (i) $U^{-1}T$ is maximally monotone on $(H, \langle \cdot, \cdot \rangle_U)$;
- (ii) $J_{U^{-1}T}$ is firmly non-expansive on $(H, \langle \cdot, \cdot \rangle_U)$;
- (iii) $J_{U^{-1}T} = (U + T)^{-1} \circ U$.
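Identity (iii) can be checked numerically for a linear choice of $T$; a sketch with randomly generated symmetric positive definite matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
B = rng.standard_normal((n, n)); T = B @ B.T + np.eye(n)   # SPD, maximally monotone
C = rng.standard_normal((n, n)); U = C @ C.T + np.eye(n)   # self-adjoint, strongly positive

x = rng.standard_normal(n)
lhs = np.linalg.solve(np.eye(n) + np.linalg.inv(U) @ T, x)  # J_{U^{-1}T}(x)
rhs = np.linalg.solve(U + T, U @ x)                          # (U + T)^{-1} U x
print(np.allclose(lhs, rhs))  # True
```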
The Krasnoselskii–Mann algorithm is a popular iterative method for finding fixed points of non-expansive operators. Its convergence is summarized in the following theorem.
Theorem 1.
([17]) (Krasnoselskii–Mann algorithm) Let C be a nonempty closed convex subset of H, and let $T: C \to C$ be a non-expansive operator such that $\operatorname{Fix} T \neq \emptyset$, where $\operatorname{Fix} T$ denotes the fixed point set of T. Let $\{\alpha_n\} \subset [0, 1]$ be such that $\sum_{n=0}^{\infty} \alpha_n(1 - \alpha_n) = +\infty$. For any $x_0 \in C$, define
$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n T(x_n).$$
Then, the following hold:
- (i) $\{x_n\}$ is Fejér monotone with respect to $\operatorname{Fix} T$, i.e., $\|x_{n+1} - x\| \leq \|x_n - x\|$, for any $x \in \operatorname{Fix} T$;
- (ii) $\{T(x_n) - x_n\}$ converges strongly to 0;
- (iii) $\{x_n\}$ converges weakly to a fixed point in $\operatorname{Fix} T$.
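As an illustration of Theorem 1, the following sketch implements the generic Krasnoselskii–Mann iteration and applies it to a rotation by 90 degrees, a non-expansive operator whose unrelaxed Picard iteration merely cycles, while the relaxed iteration converges to the unique fixed point $0$ (the function and test data are our own):

```python
import numpy as np

def krasnoselskii_mann(T, x0, alphas, tol=1e-10):
    # x_{n+1} = (1 - a_n) x_n + a_n T(x_n) for a non-expansive T.
    x = x0
    for a in alphas:
        x_new = (1 - a) * x + a * T(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# T = rotation by 90 degrees: non-expansive, Fix T = {0}.
R = np.array([[0.0, -1.0], [1.0, 0.0]])
x = krasnoselskii_mann(lambda v: R @ v, np.array([1.0, 1.0]), [0.5] * 200)
print(x)  # close to [0, 0]
```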
3. Computing Method for the Resolvent of Composed Operators
In this section, we consider the problem of computing the resolvent of the composed operator in (13). First, we present explicit solutions; the obtained results extend and generalize the corresponding results of Fukushima [16] and of Bauschke and Combettes [17], respectively. Second, we develop a fixed point approach for computing the resolvent of the composed operator, together with a simple and efficient iterative algorithm to approximate the fixed point; the convergence of this algorithm is established in general Hilbert spaces. Finally, we apply the fixed point method to compute the resolvent of the sum of a finite family of maximally monotone operators.
3.1. Analytic Approach to the Resolvent Operator
The following proposition is a direct generalization of Proposition 23.25 of [17].
Proposition 1.
Let $\lambda > 0$ and $\alpha > 0$. Let $T: G \to 2^G$ be a maximally monotone operator, and let $A: H \to G$ be a continuous linear operator with adjoint $A^*$. Let $U \in \mathcal{P}_\alpha(H)$, and suppose that $AU^{-1}A^*$ is invertible. Then, the following hold:
- (i) We have
- (ii) Suppose that $AU^{-1}A^* = \mu I$, for some $\mu > 0$. Then,
$$J_{\lambda U^{-1}A^*TA} = I - \mu^{-1} U^{-1} A^*(I - J_{\lambda\mu T})A.$$
Proof.
By Lemma 4, we know that $U^{-1}A^*TA$ is maximally monotone whenever $A^*TA$ is maximally monotone. Thus, $J_{\lambda U^{-1}A^*TA}$ is single-valued, for any $\lambda > 0$.
(i) Let . Define . Notice that , then is well defined. Set . Then, and . This leads to . Therefore,
which means that
□
In Formula (21), the inverse $T^{-1}$ needs to be calculated. However, it is sometimes difficult to evaluate. Inspired by the method introduced by Fukushima [16], we provide an alternative way to compute the resolvent of the composed operator, which avoids computing the inverse of the operator T.
Proposition 2.
Let $\lambda > 0$ and $\alpha > 0$. Let $T: G \to 2^G$ be a maximally monotone operator, and let $A: H \to G$ be a continuous linear operator with adjoint $A^*$. Let $U \in \mathcal{P}_\alpha(H)$, and suppose that $AU^{-1}A^*$ is invertible. Then, the resolvent of the composed operator is
where z is the unique solution of an auxiliary inclusion problem.
3.2. Fixed-Point Approach to the Resolvent Operator
In Propositions 1 and 2, the resolvent of the composed operator is computed only under additional conditions on A and U. In practice, it may be difficult to evaluate without these conditions. To overcome this difficulty, in this subsection, we propose a fixed point algorithm to compute the resolvent of the composed operator. Our method discards these conditions on A and U.
Lemma 5.
Let $\lambda > 0$ and $\alpha > 0$. Let $T: G \to 2^G$ be a maximally monotone operator, and let $A: H \to G$ be a continuous linear operator. Let $U \in \mathcal{P}_\alpha(H)$. Then, the following hold:
and
Proof.
(1) Let ; then, we have
(2) Let , we have
□
In the next lemma, we provide a fixed point characterization of the resolvent of the composed operator. To achieve this goal, we define two operators. Let $\lambda > 0$ and $\alpha > 0$. Let $x \in H$, and define
and
Lemma 6.
Let $\lambda > 0$ and $\alpha > 0$. Let $T: G \to 2^G$ be a maximally monotone operator, and let $A: H \to G$ be a continuous linear operator with adjoint $A^*$. Let $U \in \mathcal{P}_\alpha(H)$. Then, we have
if and only if y is a fixed-point of .
Proof.
Let . By (27), we have . Then, .
Conversely, let y be a fixed point of . Then, we can obtain (32) by reversing the above argument. □
Lemma 7.
Let $\lambda > 0$ and $\alpha > 0$. Let $T: G \to 2^G$ be a maximally monotone operator, and let $A: H \to G$ be a continuous linear operator with adjoint $A^*$. Let $U \in \mathcal{P}_\alpha(H)$, and consider the operators defined in (27) and (28); then, the following hold:
- (i) The operator is -cocoercive;
- (ii) For any , is -averaged; furthermore, the operator is -averaged.
Proof.
(i) Let , we have
By virtue of and , we have
Because and for any , , we obtain
Thus, the operator is -cocoercive.
(ii) Because is -cocoercive, by Lemma 2 (iii), for any , we have that F is -averaged.
On the other hand, because the resolvent is firmly non-expansive, it is also $\frac{1}{2}$-averaged. Applying Lemma 3 to the composition, we find that it is averaged as well. We also have
Then, is -averaged. This completes the proof.
□
Lemma 6 tells us that the resolvent of the composed operator can be computed via a fixed point of the operator introduced above. Furthermore, Lemma 7 shows that this operator is averaged. Therefore, we can define an iterative algorithm to approximate the fixed point. For any , let the sequences and be defined by
where and .
Now, we are ready to prove the convergence of the iterative scheme (36).
Theorem 2.
Let $\lambda > 0$ and $\alpha > 0$. Let $T: G \to 2^G$ be a maximally monotone operator, and let $A: H \to G$ be a continuous linear operator with adjoint $A^*$. Let the sequences and be generated by (36). Assume that . Then, the following hold:
- (i) converges weakly to a fixed-point of ;
- (ii) Furthermore, if , then converges strongly to the resolvent .
Proof.
(i) By Lemma 7, is -averaged. Then, there exists a non-expansive operator C such that
where . Therefore, the iterative sequence in (36) can be rewritten as
The condition on implies that and . It follows from Lemma 6 that , and we observe that . Then, .
According to Theorem 1, we can conclude that (a) exists, for any ; (b) , and ; (c) converges weakly to a fixed point of C, which is also a fixed point of .
(ii) Let . Using the fact that F is averaged and that the other operator is non-expansive, we have
For and y, we have
Hence, we arrive at
We notice that exists, and . By letting , the right-hand side of inequality (41) is equal to zero. Together with the condition , and . Then, we obtain
By virtue of , . Then, we have
Taking into account the fact that exists and (42), we obtain from the above inequality that
Since the two norms $\|\cdot\|$ and $\|\cdot\|_U$ are equivalent, we have . Hence, converges strongly to the resolvent operator. This completes the proof. □
3.3. Resolvent of a Sum of m Maximally Monotone Operators with U
In this subsection, we apply the fixed point approach proposed in Section 3.2 to compute the resolvent of the sum of a finite number of maximally monotone operators.
Problem 1.
Let $\lambda > 0$ and $\alpha > 0$. Let $m$ be a positive integer. For any $i \in \{1, \dots, m\}$, let $T_i: H \to 2^H$ be a maximally monotone operator. Letting $U \in \mathcal{P}_\alpha(H)$, the problem is to compute the resolvent operator of the form,
To solve the resolvent operator (43), we formally reformulate it as a special case of the resolvent operator (13), which was studied in the previous section. More precisely, we obtain the following convergence theorem.
Theorem 3.
Let $\lambda > 0$ and $\alpha > 0$. Let $m$ be a positive integer. For any $i \in \{1, \dots, m\}$, let $T_i: H \to 2^H$ be a maximally monotone operator. Let and let , . Let the sequences and be generated by the following:
where and satisfy the following conditions:
- (a);
- (b).
Then, the sequence converges strongly to the resolvent operator (43).
Proof.
Let $\mathbf{H} = H^m$ be the product space. The inner product of $\mathbf{H}$ is defined by
The associated norm is
Let us introduce the operators:
and
Therefore, T is a maximally monotone operator, and A is a bounded linear operator with $\|A\|^2 = m$.
Let , , by the definition of A, we have
Hence, we have
In addition, letting , we have
Let . Then, the iterative scheme (44) can be rewritten as
According to Theorem 2 (ii), we can conclude that the sequence converges strongly to the resolvent operator (43).
□
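The reformulation in this proof is easy to verify in the linear case, where each $T_i$ is a symmetric positive definite matrix and the resolvent of the sum is a single linear solve (in general, this solve is replaced by the iterative scheme of Theorem 2); a sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 3
# Linear SPD operators T_i: each is maximally monotone, J_{T_i} is a linear solve.
Ts = [(lambda B: B @ B.T + np.eye(n))(rng.standard_normal((n, n))) for _ in range(m)]

# Product-space reformulation: A x = (x, ..., x), T(y_1,...,y_m) = (T_1 y_1, ..., T_m y_m).
# Then A*(y_1,...,y_m) = y_1 + ... + y_m, so A* T A = sum_i T_i and ||A||^2 = m.
S = sum(Ts)
x = rng.standard_normal(n)
p = np.linalg.solve(np.eye(n) + S, x)   # J_{A*TA}(x) = J_{sum T_i}(x)
# Sanity check of the resolvent inclusion x - p = sum_i T_i p:
print(np.allclose(x - p, sum(T @ p for T in Ts)))  # True
```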
4. Applications
In this section, we apply the results obtained in the previous section to solve the problem of computing proximity operators of convex functions.
The first problem is a generalization of the proximity operator of a convex function composed with a linear transformation. Before we state our main problems, let us introduce some notation. Let $f: H \to (-\infty, +\infty]$; $f$ is proper if $\operatorname{dom} f = \{x \in H : f(x) < +\infty\} \neq \emptyset$. We denote by $\Gamma_0(H)$ the class of proper, lower semi-continuous convex functions from H to $(-\infty, +\infty]$.
Problem 2.
Let $\lambda > 0$ and $\alpha > 0$. Let $f \in \Gamma_0(G)$. Let $A: H \to G$ be a continuous linear operator and let $U \in \mathcal{P}_\alpha(H)$. We consider the following scaled proximity operator:
Theorem 4.
For Problem 2, let , and set
where and such that and . Then, the sequence converges strongly to the proximity operator .
Proof.
The proximity operator $\operatorname{prox}_{f \circ A}^U$ coincides with the resolvent operator of the composed operator $U^{-1}A^*\partial f A$. Letting $T = \partial f$ in Theorem 2, we conclude that the sequence generated by (49) converges strongly to the proximity operator $\operatorname{prox}_{f \circ A}^U$. □
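A sketch of Theorem 4 in action for $f = \|\cdot\|_1$: by the same dual characterization used earlier, $\operatorname{prox}_{f \circ A}^U(x) = x - U^{-1}A^{\top}y^*$, where $y^*$ is a fixed point of $y \mapsto \operatorname{prox}_{\lambda f^*}(y + \lambda A(x - U^{-1}A^{\top}y))$. This is a plain Picard realization under the stated step size assumption, not necessarily the exact scheme (49):

```python
import numpy as np

def prox_scaled_l1_composed(x, A, Uinv, lam, iters=2000):
    # Sketch of prox_{f∘A}^U(x) for f = ||.||_1 via the dual fixed point
    # y <- prox_{lam f*}(y + lam * A (x - U^{-1} A^T y)); here prox_{lam f*}
    # is the projection onto the unit l_inf ball. Assumes lam * ||A U^{-1} A^T|| < 2.
    y = np.zeros(A.shape[0])
    for _ in range(iters):
        y = np.clip(y + lam * (A @ (x - Uinv @ (A.T @ y))), -1.0, 1.0)
    return x - Uinv @ (A.T @ y)

# Example with a diagonal metric:
A = np.array([[1.0, -1.0, 0.0], [0.0, 1.0, -1.0]])
Uinv = np.diag(1.0 / np.array([2.0, 2.0, 2.0]))
x = np.array([4.0, 0.0, -4.0])
lam = 0.5   # lam * ||A U^{-1} A^T|| < 2 here
print(prox_scaled_l1_composed(x, A, Uinv, lam))
```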
Next, we consider the problem of computing scaled proximity operators of a finite sum of convex functions.
Problem 3.
Let $\lambda > 0$ and $\alpha > 0$. Let $m$ be a positive integer. For any $i \in \{1, \dots, m\}$, let $f_i \in \Gamma_0(H)$ and let $U \in \mathcal{P}_\alpha(H)$. We consider the problem of computing the scaled proximity operator:
Theorem 5.
For Problem 3, let and set
where and such that and . Then, the sequence converges strongly to the proximity operator .
5. Conclusions
Inspired and motivated by the work of Moudafi [18], in this paper, we discussed the resolvent of the composed operator $U^{-1}A^*TA$. Under some additional conditions, we obtained explicit solutions of the resolvent of the composed operator. The obtained results generalize and extend the classical results of Fukushima [16] and of Bauschke and Combettes [17]. On the other hand, we presented a fixed point approach for computing the resolvent of the composed operator. By virtue of the Krasnoselskii–Mann algorithm for finding fixed points of non-expansive operators, we proved the strong convergence of the proposed fixed point iterative algorithm. As applications, we employed the proposed algorithm to solve the scaled proximity operator of a convex function composed with a linear operator (48), and the proximity operator of a finite sum of proper, lower semi-continuous convex functions (50).
We observed that the resolvent of the composed operator (13) considered here is closely related to applying Newton's method to non-smooth sparse optimization problems. However, how to choose the symmetric positive definite matrix U in finite dimensional spaces so as to implement the proposed algorithm more efficiently is not yet clear. We will discuss this in future work.
Author Contributions
Conceptualization, Y.Y. and Y.T.; writing–original draft preparation, Y.Y.; writing–review and editing, Y.T.; supervision, C.Z.
Funding
This research was funded by the National Natural Science Foundation of China (11661056, 11771198, 11771347, 91730306, 41390454, 11401293), the China Postdoctoral Science Foundation (2015M571989) and the Jiangxi Province Postdoctoral Science Foundation (2015KY51).
Acknowledgments
We would like to thank the associate editor and the two reviewers for their helpful comments to improve the paper.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14, 877–898. [Google Scholar] [CrossRef]
- Rockafellar, R.T. Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1976, 1, 97–116. [Google Scholar] [CrossRef]
- Tossings, P. The perturbed proximal point algorithm and some of its applications. Appl. Math. Optim. 1994, 29, 125–159. [Google Scholar] [CrossRef]
- Spingarn, J.E. Applications of the method of partial inverses to convex programming: Decomposition. Math. Program. 1985, 32, 199–223. [Google Scholar] [CrossRef]
- Moreau, J.J. Fonctions convexes duales et points proximaux dans un espace hilbertien. C. R. Acad. Sci. Paris Ser. A Math. 1962, 255, 2897–2899. [Google Scholar]
- Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200. [Google Scholar] [CrossRef]
- Combettes, P.L.; Pesquet, J.-C. A Douglas–Rachford splitting approach to nonsmooth convex variational signal recovery. IEEE J. Sel. Top. Signal Process. 2007, 1, 564–574. [Google Scholar] [CrossRef]
- Parikh, N.; Boyd, S. Proximal algorithms. Found. Trends Optim. 2014, 1, 123–231. [Google Scholar] [CrossRef]
- Combettes, P.L.; Dũng, D.; Vũ, B.C. Proximity for sums of composite functions. J. Math. Anal. Appl. 2011, 380, 680–688. [Google Scholar] [CrossRef]
- Adly, S.; Bourdin, L.; Caubet, F. On a decomposition formula for the proximal operator of the sum of two convex functions. arXiv, 2018; arXiv:1707.08509v2. [Google Scholar]
- Bauschke, H.H.; Combettes, P.L. A Dykstra-like algorithm for two monotone operators. Pac. J. Optim. 2008, 4, 383–391. [Google Scholar]
- Dykstra, R.L. An algorithm for restricted least squares regression. J. Am. Stat. Assoc. 1983, 78, 837–842. [Google Scholar] [CrossRef]
- Combettes, P.L. Iterative construction of the resolvent of a sum of maximal monotone operators. J. Convex Anal. 2009, 16, 727–748. [Google Scholar]
- Aragón Artacho, F.J.; Campoy, R. Computing the resolvent of the sum of maximally monotone operators with the averaged alternating modified reflections algorithm. arXiv, 2018; arXiv:1805.09720. [Google Scholar]
- Aragón Artacho, F.J.; Campoy, R. A new projection method for finding the closest point in the intersection of convex sets. Comput. Optim. Appl. 2018, 69, 99–132. [Google Scholar] [CrossRef]
- Fukushima, M. The primal Douglas–Rachford splitting algorithm for a class of monotone mappings with application to the traffic equilibrium problem. Math. Program. 1996, 72, 1–15. [Google Scholar] [CrossRef]
- Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; Springer: London, UK, 2017. [Google Scholar]
- Moudafi, A. Computing the resolvent of composite operators. Cubo 2014, 16, 87–96. [Google Scholar] [CrossRef]
- Robinson, S.M. Composition duality and maximal monotonicity. Math. Program. 1999, 85, 1–13. [Google Scholar] [CrossRef]
- Pennanen, T. Dualization of generalized equations of maximal monotone type. SIAM J. Optim. 2000, 10, 809–835. [Google Scholar] [CrossRef]
- Bot, R.I.; Grad, S.-M.; Wanka, G. Maximal monotonicity for the precomposition with a linear operator. SIAM J. Optim. 2007, 17, 1239–1252. [Google Scholar]
- Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Physica D 1992, 60, 259–268. [Google Scholar] [CrossRef]
- Micchelli, C.A.; Shen, L.; Xu, Y. Proximity algorithms for image models: Denoising. Inverse Probl. 2011, 27, 045009. [Google Scholar] [CrossRef]
- Lee, J.D.; Sun, Y.; Saunders, M.A. Proximal Newton-type methods for minimizing composite functions. SIAM J. Optim. 2014, 24, 1420–1443. [Google Scholar] [CrossRef]
- Hager, W.; Ngo, C.; Yashtini, M.; Zhao, H.C. An alternating direction approximate Newton algorithm for ill-conditioned inverse problems with application to parallel MRI. J. Oper. Res. Soc. China 2015, 3, 139–162. [Google Scholar] [CrossRef]
- Li, X.D.; Sun, D.F.; Toh, K.C. A highly efficient semismooth Newton augmented Lagrangian method for solving Lasso problems. SIAM J. Optim. 2018, 28, 433–458. [Google Scholar] [CrossRef]
- Argyriou, A.; Micchelli, C.A.; Pontil, M.; Shen, L.X.; Xu, Y.S. Efficient first order methods for linear composite regularizers. arXiv, 2011; arXiv:1104.1436. [Google Scholar]
- Chen, D.Q.; Zhou, Y.; Song, L.J. Fixed point algorithm based on adapted metric method for convex minimization problem with application to image deblurring. Adv. Comput. Math. 2016, 42, 1287–1310. [Google Scholar] [CrossRef]
- Chen, P.J.; Huang, J.G.; Zhang, X.Q. A primal-dual fixed point algorithm for convex separable minimization with applications to image restoration. Inverse Probl. 2013, 29, 025011. [Google Scholar] [CrossRef]
- Hiriart-Urruty, J.-B.; Lemaréchal, C. Convex Analysis and Minimization Algorithms; Springer: New York, NY, USA, 1993. [Google Scholar]
- Combettes, P.L.; Vũ, B.C. Variable metric forward-backward splitting with applications to monotone inclusions in duality. Optimization 2014, 63, 1289–1318. [Google Scholar] [CrossRef]
- Zhang, X.; Burger, M.; Osher, S. A unified primal-dual framework based on Bregman iteration. J. Sci. Comput. 2011, 46, 20–46. [Google Scholar] [CrossRef]
- Bitterlich, S.; Bot, R.I.; Csetnek, E.R.; Wanka, G. The proximal alternating minimization algorithm for two-block separable convex optimization problems with linear constraints. J. Optim. Theory Appl. 2018. [Google Scholar] [CrossRef]
- Ogura, N.; Yamada, I. Non-strictly convex minimization over the fixed point set of the asymptotically shrinking non-expansive mapping. Numer. Funct. Anal. Optim. 2002, 23, 113–137. [Google Scholar] [CrossRef]
- Combettes, P.L.; Yamada, I. Compositions and convex combinations of averaged non-expansive operators. J. Math. Anal. Appl. 2015, 425, 55–70. [Google Scholar] [CrossRef]
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).