Review

Generalized Averaged Gauss Quadrature Rules: A Survey

by Dušan L. Djukić 1, Rada M. Mutavdžić Djukić 1, Lothar Reichel 2 and Miodrag M. Spalević 1,*

1 Department of Mathematics, Faculty of Mechanical Engineering, University of Belgrade, Kraljice Marije 16, 11120 Belgrade 35, Serbia
2 Department of Mathematical Sciences, Kent State University, Kent, OH 44242, USA
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(19), 3145; https://doi.org/10.3390/math13193145
Submission received: 22 August 2025 / Revised: 25 September 2025 / Accepted: 28 September 2025 / Published: 1 October 2025

Abstract

Consider the problem of approximating an integral of a real-valued integrand on a real interval by a Gauss quadrature rule. The classical approach to estimate the quadrature error of a Gauss rule is to evaluate an associated Gauss–Kronrod rule and compute the difference between the value of the Gauss–Kronrod rule and that of the Gauss rule. However, for a variety of measures and a number of nodes of interest, Gauss–Kronrod rules do not have real nodes or positive weights. This makes these rules impossible to apply when the integrand is defined on a real interval only. This has spurred the development of several averaged Gauss quadrature rules for estimating the quadrature error of Gauss rules. A significant advantage of the averaged Gauss rules is that they have real nodes and positive weights also in situations when Gauss–Kronrod rules do not. The most popular averaged rules include Laurie’s averaged Gauss quadrature rules, optimal averaged Gauss quadrature rules, weighted averaged Gauss quadrature rules, and two-measure-based generalized Gauss quadrature rules. This paper reviews the accuracy, numerical construction, and internality of averaged Gauss rules.

1. Introduction

Let $d\sigma = w(x)\,dx$ be a positive measure on the interval $[a,b]$ such that all moments
$$\mu_i = \int_a^b x^i \, d\sigma(x), \qquad i = 0, 1, 2, \ldots, \tag{1}$$
exist and are bounded. We are interested in approximating integrals of the form
$$I(f) = \int_a^b f(x)\, d\sigma(x) \tag{2}$$
by an n-node quadrature rule
$$Q_n(f) = \sum_{k=1}^{n} w_k f(x_k) \tag{3}$$
with real nodes x k and positive weights w k that may depend on n. Gauss quadrature rules are particularly useful for this purpose. The nodes and weights of the n-node Gauss rule
$$G_n(f) = \sum_{k=1}^{n} w_k f(x_k) \tag{4}$$
associated with the measure $d\sigma$ are such that the rule is of (algebraic) degree of precision $2n-1$, i.e.,
$$G_n(f) = I(f) \qquad \forall f \in \mathbb{P}_{2n-1}, \tag{5}$$
where $\mathbb{P}_{2n-1}$ denotes the set of all polynomials of degree at most $2n-1$. The requirement (5) determines the nodes and weights uniquely. The nodes of the Gauss rule (4) are known to be distinct and to live in the interval $[a,b]$, and the weights $w_k$ are positive; see, e.g., Gautschi [1], Chihara [2], and Szegő [3] for properties of Gauss quadrature formulas.
The Gauss rule (4) can be associated with the symmetric tridiagonal matrix
$$J_n^G = \begin{pmatrix} \alpha_0 & \sqrt{\beta_1} & & & 0 \\ \sqrt{\beta_1} & \alpha_1 & \sqrt{\beta_2} & & \\ & \ddots & \ddots & \ddots & \\ & & \sqrt{\beta_{n-2}} & \alpha_{n-2} & \sqrt{\beta_{n-1}} \\ 0 & & & \sqrt{\beta_{n-1}} & \alpha_{n-1} \end{pmatrix} \in \mathbb{R}^{n \times n}, \tag{6}$$
where the $\alpha_k \in \mathbb{R}$ and $\beta_k > 0$ are recursion coefficients for the sequence of monic orthogonal polynomials $\{P_k\}_{k=0}^{\infty}$ (with $\deg(P_k) = k$) associated with the inner product
$$(g,h) = \int_a^b g(x)\, h(x)\, d\sigma(x)$$
determined by the measure d σ . Thus,
$$P_{k+1}(x) = (x - \alpha_k)\, P_k(x) - \beta_k P_{k-1}(x), \qquad k = 0, 1, \ldots,$$
where $P_{-1}(x) \equiv 0$, $P_0(x) \equiv 1$, and
$$\alpha_k = \frac{(x P_k, P_k)}{(P_k, P_k)}, \qquad \beta_k = \frac{(P_k, P_k)}{(P_{k-1}, P_{k-1})}.$$
The eigenvalues of $J_n^G$ are the nodes, and the squared first components of suitably normalized eigenvectors are the weights of $G_n$; see [1,4]. Fairly efficient numerical methods are available for computing the nodes and weights of $G_n$ from $J_n^G$ for quite general positive measures $d\sigma$; see [5,6].
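To make the eigenvalue characterization above concrete, the following minimal sketch computes the nodes and weights of $G_n$ from given recursion coefficients with a dense symmetric eigensolver. It is an illustration only, not the Golub–Welsch or divide-and-conquer algorithms of [5,6]; the Legendre data in the example ($\alpha_k = 0$, $\beta_k = k^2/(4k^2-1)$, $\mu_0 = 2$) are a standard test case chosen here for illustration.

```python
import numpy as np

def gauss_rule(alpha, beta, mu0):
    """Nodes and weights of the n-node Gauss rule G_n from the recursion
    coefficients alpha[0..n-1], beta[1..n-1] of the monic orthogonal
    polynomials and the zeroth moment mu0 of the measure d(sigma)."""
    J = np.diag(alpha) + np.diag(np.sqrt(beta), 1) + np.diag(np.sqrt(beta), -1)
    nodes, V = np.linalg.eigh(J)      # eigenvalues of J_n^G are the nodes
    weights = mu0 * V[0, :] ** 2      # squared first components of the eigenvectors
    return nodes, weights

# Legendre measure d(sigma) = dx on [-1, 1]: alpha_k = 0, beta_k = k^2/(4k^2 - 1).
n = 5
alpha = np.zeros(n)
beta = np.array([k**2 / (4.0 * k**2 - 1.0) for k in range(1, n)])
x, w = gauss_rule(alpha, beta, 2.0)
print(np.sum(w * np.exp(x)) - (np.e - 1.0 / np.e))   # quadrature error for f = exp
```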
It is important to be able to estimate the quadrature error
$$I(f) - G_n(f) \tag{9}$$
to assess whether the number of nodes, n, has been chosen large enough to achieve an approximation of the integral (2) of desired accuracy. Moreover, we would like not to choose n much larger than necessary for the Gauss rule G n to achieve the requested accuracy. Generally, the error (9) is estimated by replacing I ( f ) by a quadrature rule of higher degree of precision than G n , and several new quadrature rules have recently been developed for this purpose.
This paper reviews classical and new quadrature rules for estimating the error (9), discusses their properties and computational aspects, and illustrates their performance. Section 2 examines known results for the averaged Gauss rules by Laurie [7] and the optimal averaged rules proposed by Ehrich [8] and Spalević [9]. The internality of the quadrature rules is considered. A quadrature rule is said to be internal if all its nodes live in the convex hull of the support of the measure d σ . Gauss rules are known to be internal, but quadrature rules that are used to estimate the quadrature error of Gauss rules might not be. It is desirable that a quadrature rule be internal, because then the rule can be applied to approximate the integral (2) for any integrand that is defined on the convex hull of the support of the measure d σ .
Section 3 comments on some computational issues, Section 4 outlines some applications, and Section 5 displays the performance of some of the quadrature rules discussed and illustrates internality. Concluding remarks can be found in Section 6.

2. Some Methods for Estimating the Quadrature Error of Gauss Rules

A classical approach to estimate the quadrature error (9) is to replace the integral by the ( 2 n + 1 ) -node Gauss–Kronrod quadrature rule associated with the n-node Gauss rule (4). This Gauss–Kronrod rule is a quadrature formula of the form
$$K_{2n+1}(f) = \sum_{k=1}^{n} \hat w_k f(x_k) + \sum_{k=n+1}^{2n+1} \hat w_k f(\hat x_k), \tag{10}$$
such that the nodes x k , k = 1 , 2 , , n are the nodes of the Gauss rule (4), and the Kronrod nodes x ^ k , k = n + 1 , n + 2 , , 2 n + 1 and the weights w ^ k , k = 1 , 2 , , 2 n + 1 are determined so that
$$K_{2n+1}(f) = I(f) \qquad \forall f \in \mathbb{P}_{3n+1};$$
see Kronrod [10] and Gautschi [1,11]. Generally, the Kronrod nodes x ^ k , k = n + 1 , n + 2 , , 2 n + 1 are required to be real and to be interlaced by the Gauss nodes x k , k = 1 , 2 , , n . In addition, the Gauss–Kronrod weights w ^ k , k = 1 , 2 , , 2 n + 1 should be positive. Efficient numerical methods for computing the nodes and weights of the Gauss–Kronrod rule K 2 n + 1 , whose nodes x ^ k , k = n + 1 , n + 2 , , 2 n + 1 and weights w ^ k , k = 1 , 2 , , 2 n + 1 , satisfy these conditions are described in [12,13]. The quadrature error (9) then can be estimated by
$$K_{2n+1}(f) - G_n(f),$$
and the integral (2) can be approximated by K 2 n + 1 ( f ) .
However, for many measures d σ , including various measures d σ ( x ) = w s , t ( x ) d x determined by a Jacobi weight function
$$w_{s,t}(x) = (1-x)^s (1+x)^t, \qquad -1 < x < 1, \quad s > -1, \quad t > -1, \tag{11}$$
and for certain numbers of nodes, Gauss–Kronrod rules with real nodes and positive weights do not exist; see Notaris [14] for a detailed survey of Gauss–Kronrod rules and their properties, as well as [15,16,17]. This difficulty with Gauss–Kronrod rules has prompted the development of other quadrature rules for estimating the error in Gauss quadrature formulas.
For instance, one may construct an ( n + 1 ) -node quadrature rule H n + 1 ( θ ) for approximating the functional
$$I^{(\theta)}(f) = I(f) - \theta\, G_n(f) \tag{12}$$
and use a linear combination of θ G n ( f ) and H n + 1 ( θ ) for some real scalar θ ,
$$Q_{2n+1} = \theta\, G_n + H_{n+1}^{(\theta)}, \tag{13}$$
to estimate the error I ( f ) G n ( f ) ; see [18,19] for discussions of this approach. The rule (13) generally has 2 n + 1 distinct nodes and its evaluation then requires the calculation of the integrand f at n + 1 nodes, in addition to the n values of f needed to calculate G n ( f ) . Thus, the number of required evaluations of the integrand f generally is the same as for the ( 2 n + 1 ) -node Gauss–Kronrod rule (10).
A special case of the quadrature Formula (13) was proposed by Laurie [7], who introduced the so-called ( n + 1 ) -node anti-Gauss rule G ˘ n + 1 associated with the n-node Gauss rule G n . The rule G ˘ n + 1 is characterized by
$$(I - \breve G_{n+1})(f) = -(I - G_n)(f) \qquad \forall f \in \mathbb{P}_{2n+1}. \tag{14}$$
This rule corresponds to choosing $\theta = \frac{1}{2}$ in (12) and letting $H_{n+1}^{(\theta)} = \frac{1}{2}\breve G_{n+1}$ in (13). The quadrature Formula (13) then becomes
$$G_{2n+1}^{L} := Q_{2n+1} = \tfrac{1}{2}\left(G_n + \breve G_{n+1}\right). \tag{15}$$
We refer to this rule as Laurie’s averaged Gauss rule associated with G n . It has degree of precision at least 2 n + 1 .
The nodes of (15) are the nodes of G n as well as the n + 1 zeros of the polynomial
$$\breve P_{n+1}(x) \equiv P_{n+1}(x) - \lambda P_{n-1}(x) \qquad (\lambda = \lambda_n) \tag{16}$$
for λ = β n ; see Spalević [9] for a proof. The special cases when d σ is a Hermite or Laguerre measure are discussed by Ehrich [8], and when d σ is a Gegenbauer measure by Hascelik [20]. The attractions of Laurie’s averaged Gauss rule (15), when compared to the Gauss–Kronrod rule K 2 n + 1 , include that the former rule is guaranteed to have real nodes and positive weights and is easier to compute.
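As a simple illustration (a worked example added here, not taken from the cited references), consider the Legendre measure $d\sigma = dx$ on $[-1,1]$ and $n = 1$. Then $G_1(f) = 2 f(0)$ and $\beta_1 = \frac{1}{3}$, so the anti-Gauss nodes are the zeros of $P_2(x) - \beta_1 P_0(x) = x^2 - \frac{2}{3}$, and $\breve G_2(f) = f\bigl(-\sqrt{2/3}\bigr) + f\bigl(\sqrt{2/3}\bigr)$. Laurie’s averaged rule therefore is
$$G_3^L(f) = f(0) + \tfrac{1}{2} f\bigl(-\sqrt{2/3}\bigr) + \tfrac{1}{2} f\bigl(\sqrt{2/3}\bigr),$$
which integrates $1, x, x^2, x^3$ exactly, in agreement with the degree of precision $2n+1 = 3$.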
When Ehrich [8] considered Gauss–Hermite and Gauss–Laguerre quadrature rules, he varied θ in (13) or, equivalently, λ in (16), to increase the degree of precision of the rule (13). He referred to the quadrature rule (13) with θ chosen to give the highest degree of precision as the optimal averaged Gauss rule associated with G n .
Using results by Peherstorfer [21,22] on positive quadrature formulas, Spalević [9] derived a simple method for constructing the $(2n+1)$-node optimal averaged Gauss rule that is associated with the Gauss rule (4). Its $n+1$ nodes that are not nodes of $G_n$ are the zeros of the polynomial $\breve P_{n+1}$ in (16) with $\lambda = \beta_{n+1}$. We denote this quadrature rule by $G_{2n+1}^{S}$. It has degree of precision at least $2n+2$.
The difference
$$G_{2n+1}^{S}(f) - G_n(f) \tag{17}$$
furnishes an estimate of the quadrature error (9) and the integral (2) can be approximated by G 2 n + 1 S ( f ) . The optimal averaged quadrature formula G 2 n + 1 S shares the following advantages of Laurie’s averaged Gauss rule G 2 n + 1 L when compared to the Gauss–Kronrod rule K 2 n + 1 : The rule G 2 n + 1 S is guaranteed to have real nodes and positive weights, and it is simpler to compute than K 2 n + 1 .
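Continuing the small Legendre example from above ($n = 1$, again only for illustration), the choice $\lambda = \beta_2 = \frac{4}{15}$ places the new nodes at the zeros of $P_2(x) - \beta_2 P_0(x) = x^2 - \frac{3}{5}$, and the resulting optimal averaged rule is
$$G_3^S(f) = \tfrac{8}{9} f(0) + \tfrac{5}{9} f\bigl(-\sqrt{3/5}\bigr) + \tfrac{5}{9} f\bigl(\sqrt{3/5}\bigr),$$
which in this particular case coincides with the 3-node Gauss–Legendre rule; its degree of precision here is 5, exceeding the guaranteed $2n+2 = 4$.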
Neither Laurie’s averaged Gauss rules nor optimal averaged Gauss rules are guaranteed to be internal. A possible technique of constructing internal averaged Gauss quadrature rules is to use modified anti-Gauss rules
$$\breve G_{n+1}^{(\gamma)}(f) = \sum_{\mu=1}^{n+1} \eta_\mu^{(\gamma)} f\bigl(t_\mu^{(\gamma)}\bigr), \tag{18}$$
whose nodes and weights depend on a parameter γ . This is described in [23]. The quadrature rule (18) has n + 1 nodes and is determined by the requirement that
$$(I - \breve G_{n+1}^{(\gamma)})(f) = -(1+\gamma)(I - G_n)(f) \qquad \forall f \in \mathbb{P}_{2n+1}$$
for some scalar $\gamma > -1$, where we note that $\gamma$ may depend on $n$. Quadrature rules of this kind have been discussed in [8,20,24]. When $\gamma = 0$, the rule (18) simplifies to the anti-Gauss rule (14) due to Laurie [7].
Analogously to Laurie’s averaged Gauss rule (15), one can define the weighted averaged Gauss formula
$$\widetilde G_{2n+1}^{(\gamma)} = \frac{1}{2+\gamma}\left((1+\gamma)\, G_n + \breve G_{n+1}^{(\gamma)}\right). \tag{19}$$
This quadrature rule was first considered by Ehrich [8]. It is shown in [23] how the parameter γ = γ n can be chosen to make the rule (19) internal for certain measures d σ . Moreover, it is pointed out in [23] that for λ = λ n = ( 1 + γ n ) β n > 0 , the quadrature rule (19) can be represented by
$$\widetilde G_{2n+1}^{(\gamma)} = \frac{\lambda_n}{\beta_n + \lambda_n}\, G_n + \frac{\beta_n}{\beta_n + \lambda_n}\, \breve G_{n+1}^{(\gamma)}, \tag{20}$$
where the modified anti-Gauss quadrature rule G ˘ n + 1 ( γ ) is associated with the ( n + 1 ) × ( n + 1 ) symmetric tridiagonal matrix
$$\breve J_{n+1}^{(\gamma)} = \begin{pmatrix} \alpha_0 & \sqrt{\beta_1} & & & & 0 \\ \sqrt{\beta_1} & \alpha_1 & \sqrt{\beta_2} & & & \\ & \ddots & \ddots & \ddots & & \\ & & \sqrt{\beta_{n-2}} & \alpha_{n-2} & \sqrt{\beta_{n-1}} & \\ & & & \sqrt{\beta_{n-1}} & \alpha_{n-1} & \sqrt{\beta_n + \lambda_n} \\ 0 & & & & \sqrt{\beta_n + \lambda_n} & \alpha_n \end{pmatrix}. \tag{21}$$
The modified anti-Gauss quadrature rules G ˘ n + 1 ( γ ) (18) are not guaranteed to be internal. The same can be said for the weighted averaged Gauss quadrature rules G ˜ 2 n + 1 ( γ ) (19). However, when λ n satisfies certain conditions, these rules are internal for some measures; see [23,25].
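The representations (20) and (21) translate directly into a small numerical sketch, given the recursion coefficients of $d\sigma$. The code below uses a dense symmetric eigensolver rather than the specialized methods of Section 3, and the Legendre coefficients in the example are an assumption made only for illustration; $\lambda_n = \beta_n$ yields Laurie's rule $G_{2n+1}^{L}$ and $\lambda_n = \beta_{n+1}$ the optimal averaged rule $G_{2n+1}^{S}$.

```python
import numpy as np

def averaged_rule(alpha, beta, mu0, n, lam):
    """Weighted averaged rule (20): the Gauss rule G_n and the modified
    anti-Gauss rule of the matrix (21) are combined with the weights
    lam/(beta_n + lam) and beta_n/(beta_n + lam).
    Here alpha[k] = alpha_k for k = 0..n and beta[k] = beta_k for k = 1..n+1."""
    def eig_rule(a, b):
        J = np.diag(a) + np.diag(np.sqrt(b), 1) + np.diag(np.sqrt(b), -1)
        x, V = np.linalg.eigh(J)
        return x, mu0 * V[0, :] ** 2

    xg, wg = eig_rule(alpha[:n], beta[1:n])                 # Gauss rule G_n
    b_mod = np.concatenate([beta[1:n], [beta[n] + lam]])    # last entry beta_n + lambda_n
    xa, wa = eig_rule(alpha[:n + 1], b_mod)                 # modified anti-Gauss rule
    t = lam / (beta[n] + lam)
    return np.concatenate([xg, xa]), np.concatenate([t * wg, (1.0 - t) * wa])

# Example with the Legendre measure on [-1, 1] (mu0 = 2, beta_k = k^2/(4k^2 - 1)).
n = 5
alpha = np.zeros(n + 1)
beta = np.array([0.0] + [k**2 / (4.0 * k**2 - 1.0) for k in range(1, n + 2)])
xL, wL = averaged_rule(alpha, beta, 2.0, n, beta[n])        # Laurie's rule G^L
xS, wS = averaged_rule(alpha, beta, 2.0, n, beta[n + 1])    # optimal averaged rule G^S
```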
Thorough investigations about the internality of Laurie’s averaged rules and of optimal averaged rules, as well as of related quadrature rules, can be found in [7,9,23,26,27,28,29,30,31,32]. A few ways to construct internal rules, when the rules G 2 n + 1 L or G 2 n + 1 S are not internal, are described in [23,25,26].
We conclude this section by describing a new approach for estimating the quadrature error in $G_n$. This technique has recently been described and analyzed in [33,34,35]. It is based on constructing a $(2n+1)$-node quadrature rule by using the measure $d\sigma$ as well as an auxiliary measure $d\mu$, whose support lies in the convex hull of the support of the measure $d\sigma$. We refer to these quadrature rules as two-measure-based generalized Gauss quadrature rules. They were referred to as new averaged Gauss (NAG) quadrature formulas in [35]. We will use this acronym in the present paper as well.
Let
$$\hat Q_{2n+1}(f) = \sum_{j=1}^{2n+1} \hat\omega_j f(\hat x_j)$$
denote the ( 2 n + 1 ) -node NAG rule associated with the Gauss rule G n . The application of NAG quadrature formulas to the estimation of the error in G n is particularly attractive when the rule Q ^ 2 n + 1 is internal but the rules G 2 n + 1 L , G 2 n + 1 S , and K 2 n + 1 are not. The internality of NAG quadrature rules for certain weight functions is discussed in [33,34,35].
The NAG rule Q ^ 2 n + 1 is easy to compute; this is discussed in the next section. Its 2 n + 1 nodes are generally distinct from the nodes of G n . This is not a major concern when it is inexpensive to evaluate the integrand f at the quadrature nodes. The rule Q ^ 2 n + 1 has the same degree of precision as the optimal averaged Gauss quadrature rule G 2 n + 1 S , i.e., at least 2 n + 2 ; see [35].

3. Computation of Averaged Gauss Quadrature Rules

The evaluation of Laurie’s averaged rules $G_{2n+1}^{L}$ can be carried out by evaluating the Gauss rule $G_n$ and the associated anti-Gauss rule $\breve G_{n+1}$. The tridiagonal matrix $J_n^G$ associated with the Gauss rule is given by (6), and the tridiagonal matrix associated with the corresponding anti-Gauss rule is (21) with $\lambda_n = \beta_n$. The eigenvalues of $J_n^G$ are the nodes of the Gauss rule $G_n$, and the eigenvalues of the matrix (21) with $\lambda_n = \beta_n$ are the nodes of the anti-Gauss rule $\breve G_{n+1}$. The weights of the rule $G_{2n+1}^{L}$ are the squares of the first components of suitably scaled eigenvectors of these matrices. These nodes and weights can be computed quite efficiently in $O(n^2)$ arithmetic floating point operations by a divide-and-conquer method that exploits the fact that the matrix (6) is the $n \times n$ leading principal submatrix of the matrix (21). This is described by Alqahtani et al. [36]. Computed examples reported in [5] show this approach to be faster than application of the Golub–Welsch algorithm [6] and to yield nodes and weights with higher accuracy. Software is provided with [5].
We also are interested in calculating the nodes and weights of optimal averaged quadrature rules. The optimal averaged Gauss rule G 2 n + 1 S associated with the Gauss rule G n is given by (20) with λ n = β n + 1 . It follows that the nodes and weights of G 2 n + 1 S can be evaluated quite efficiently by a divide-and-conquer method analogously as the nodes and weights of Laurie’s averaged rule G 2 n + 1 L . This is discussed in [36]. Further computed examples as well as code can be found in [5].
NAG rules also can be evaluated quite efficiently by the divide-and-conquer approach described in [5,36]. The computations are analogous to those for the rules G 2 n + 1 S .
Concerning the stable computation of averaged Gauss quadrature rules, we also mention the methods proposed in [37,38].
We conclude this section by noting that algorithms for evaluating the nodes and weights of Gauss–Kronrod and Patterson-type rules (which have almost the same precision as Gauss–Kronrod; cf. [39]) are described in [12,13] and [40], respectively. The computation of these rules is more complicated than the computation of any of the averaged Gauss rules with the same number of nodes considered in this paper.

4. Some Extensions and Applications of Laurie’s Averaged and Optimal Averaged Gauss Quadrature Rules

We already have discussed the application of averaged Gauss rules to the estimation of the quadrature error (9). This section mentions a few additional applications and some extensions of the quadrature rules considered above.
  • Let $A \in \mathbb{R}^{\ell \times \ell}$ be a large symmetric matrix, let $v \in \mathbb{R}^{\ell} \setminus \{0\}$, and let the function $f$ be defined on the convex hull of the spectrum of $A$. The need to evaluate matrix functionals of the form
    $$v^T f(A)\, v, \tag{22}$$
    where the superscript $T$ denotes transposition, arises in a variety of applications including network analysis and the solution of linear discrete ill-posed problems; see, e.g., [41,42,43,44]. When the matrix $A$ is large, it may be prohibitively expensive, or impossible, to first compute the matrix $f(A)$ and then evaluate (22). The Lanczos method provides a less expensive approach to compute an approximation of (22). The application of $n$ steps of the Lanczos process to $A$ with initial vector $v$ yields, generically, a symmetric tridiagonal matrix (6) that determines an $n$-node Gauss quadrature rule that is associated with a nonnegative measure $d\sigma$. This measure is defined by the vector $v$ and the spectral factorization of $A$; see Golub and Meurant [44] for a thorough discussion. In particular, the Lanczos algorithm allows the computation of Gauss quadrature rules without explicit knowledge of the measure that defines these rules. The quadrature error of Gauss rules generated in this manner can be estimated with the aid of averaged Gauss rules; a small code sketch of this procedure is shown after this list.
  • The problem of evaluating expressions of the form
    $$V^T f(A)\, V, \tag{23}$$
    where $A$ and $f$ are as in (22) and $V \in \mathbb{R}^{\ell \times k}$ for some $1 < k \ll \ell$, also arises in various applications including color image restoration [45] and the determination of the optical absorption spectrum of a material [46]. An application of $m$ steps of the symmetric block Lanczos algorithm to $A$ with initial block $V$ gives a symmetric block tridiagonal matrix that can be associated with a block Gauss quadrature rule for the approximation of (23). Analogues of Laurie’s averaged rule and the optimal averaged rule for the estimation of the quadrature error in the block Gauss rule are described in [47], where applications to network analysis are discussed.
  • Let $A \in \mathbb{R}^{\ell \times \ell}$ be a large symmetric matrix, let $v \in \mathbb{R}^{\ell}$, and let $f$ be a function that is defined on the convex hull of the spectrum of $A$. The need to evaluate expressions of the form
    $$f(A)\, v \tag{24}$$
    arises, e.g., in network analysis, see [48,49], as well as when solving systems of ordinary differential equations; see [50,51]. The expression (24) can be approximated by carrying out a few steps of the symmetric Lanczos algorithm applied to A with initial vector v. Estimates of the error can be determined with the aid of Laurie’s averaged or optimal averaged Gauss quadrature rules; see [52].
  • Averaged quadrature rules associated with Gauss–Radau and Gauss–Lobatto quadrature formulas are described in [53]. They can be applied to estimate the quadrature error in Gauss–Radau and Gauss–Lobatto rules.
  • Analogues of Laurie’s averaged rules and of optimal averaged rules have been developed for the estimation of the quadrature error of Gauss–Szegő rules; see [54] and [55], respectively. Gauss–Szegő rules are Gauss-type quadrature rules for the approximation of integrals of a periodic function.
  • Padé-type approximants are rational functions that approximate a formal series of polynomials. Djukić et al. [56] describe the construction and performance of Padé-type approximants that correspond to optimal averaged Gauss quadrature rules.
  • The conjugate gradient method is the default iterative method for the solution of linear systems of equations with a large symmetric positive definite matrix. This method is closely related to the symmetric Lanczos method and, therefore, to orthogonal polynomials. Almutairi et al. [57] discuss how Gauss quadrature rules, Laurie’s averaged Gauss rules, and optimal averaged Gauss rules can be applied to estimate the norm of the error in approximate solutions computed by the conjugate gradient method.
  • The iterative solution of linear systems with a large symmetric, indefinite, nonsingular matrix by a Lanczos-type method is discussed by Alibrahim et al. [58], who describe how the norm of the error in computed approximate solutions can be estimated by Gauss rules, Laurie’s averaged Gauss rules, and optimal averaged Gauss rules.
  • Fredholm integral equations of the second kind that are defined on a finite or infinite interval arise in many applications. Djukić et al. [59], Díaz de Alba et al. [60], and Fermo et al. [61] discuss their numerical solution by Nyström methods that are based on Gauss quadrature rules. It is important to be able to estimate the error in the computed solution because this makes it possible to choose an appropriate number of nodes in the Gauss quadrature rule used. These papers explore the application of anti-Gauss, Laurie’s averaged Gauss, and weighted averaged Gauss quadrature rules for this purpose and analyze the numerical stability of these methods.
  • Cubature rules that generalize Laurie’s averaged rule are described in [59,62]. The development of averaged rules for problems in several space-dimensions is still an active area of research. A difficulty is that the domain of integration in higher dimensions may be of a variety of shapes, see, e.g., [63]. Another issue is that when the domain of integration is simple, say, the unit square in the first quadrant, and the integral is approximated by integrating one space-dimension at a time by Gauss quadrature, integration along the first dimension, say, the horizontal axis, yields approximations that are used when integrating along the vertical axis. It remains to be investigated how the errors in the approximations obtained when integrating in the horizontal direction affect the error estimates obtained when integrating in the vertical direction.
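The following sketch illustrates the first item of the list above: the Lanczos process yields a tridiagonal matrix of the form (6), the Gauss approximation of $v^T f(A) v$ is evaluated from its spectral factorization, and an anti-Gauss rule, obtained by multiplying the last off-diagonal entry of the next larger tridiagonal matrix by $\sqrt{2}$, furnishes an error estimate as in (15). The random test matrix, the choice $f = \exp$, and the omission of reorthogonalization are simplifying assumptions made only for this illustration.

```python
import numpy as np

def lanczos_tridiag(A, v, n):
    """n steps of the symmetric Lanczos process applied to A with initial
    vector v; returns the n x n tridiagonal matrix of the form (6)."""
    a, b = np.zeros(n), np.zeros(n - 1)
    q, q_old, boff = v / np.linalg.norm(v), np.zeros_like(v), 0.0
    for k in range(n):
        w = A @ q - boff * q_old
        a[k] = q @ w
        w = w - a[k] * q
        if k < n - 1:
            boff = np.linalg.norm(w)
            b[k] = boff
            q_old, q = q, w / boff
    return np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200)); A = (A + A.T) / 2.0    # symmetric test matrix
v = rng.standard_normal(200)
n = 8

T = lanczos_tridiag(A, v, n)                                # Gauss rule G_n
lam, U = np.linalg.eigh(T)
gauss = (v @ v) * np.sum(U[0, :] ** 2 * np.exp(lam))        # approximates v^T exp(A) v

T1 = lanczos_tridiag(A, v, n + 1)                           # anti-Gauss rule
T1[n, n - 1] *= np.sqrt(2.0); T1[n - 1, n] *= np.sqrt(2.0)
mu, W = np.linalg.eigh(T1)
anti = (v @ v) * np.sum(W[0, :] ** 2 * np.exp(mu))
print(0.5 * (anti - gauss))     # error estimate G^L_{2n+1}(f) - G_n(f), cf. (15) and (9)
```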

5. Internality of Averaged and Optimal Averaged Gauss Rules

Laurie’s averaged Gauss rule and the optimal averaged Gauss rule are not guaranteed to be internal; the smallest and largest nodes may be outside the convex hull of the support of the measure $d\sigma$. For notational simplicity, we will, in this section, assume that the convex hull of the support of $d\sigma$ is the interval $[-1,1]$. Nodes that are inside the interval $[-1,1]$ are said to be internal, while nodes that are outside this interval are said to be external. A quadrature rule with all nodes internal is said to be internal. The smallest and largest nodes are internal if and only if
$$\frac{P_{n+1}(1)}{P_{n-1}(1)} \ge \lambda_n \qquad \text{and} \qquad \frac{P_{n+1}(-1)}{P_{n-1}(-1)} \ge \lambda_n, \tag{25}$$
respectively, where (cf. (16)) $\lambda_n = \beta_n$ for Laurie’s averaged Gauss rule $G_{2n+1}^{L}$ and $\lambda_n = \beta_{n+1}$ for the optimal averaged Gauss rule $G_{2n+1}^{S}$.
When external nodes occur, one possibility is to remove up to the $n-1$ last rows and columns from the corresponding Jacobi matrix; see [26,64]. It follows from the interlacing property of the eigenvalues of Jacobi matrices that, in this way, the outermost nodes will get closer to the middle of the interval, while the degree of precision will not fall below $2n+2$. Hence, the highest chance of obtaining an internal quadrature rule is achieved by removing the $n-1$ last rows and columns. The resulting Jacobi matrix will be of order $n+2$, and the corresponding quadrature rule is denoted by $G_{n+2}^{t}$ and referred to as the truncated (averaged Gauss) quadrature rule. Its $n+2$ nodes are the zeros of the polynomial
$$P_{n+2}^{t}(x) = (x - \alpha_{n-1})\, P_{n+1}(x) - \beta_{n+1} P_n(x).$$
They are interlaced by the Gauss nodes. Therefore, only the two outermost nodes may be external. The rule $G_{n+2}^{t}$ is internal if and only if
$$x^{n+2}\, P_{n+2}^{t}(x) \ge 0 \qquad \text{for } x = \pm 1. \tag{26}$$
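Conditions (25) and (26) are easy to check numerically by evaluating the monic orthogonal polynomials at $\pm 1$ with the three-term recurrence; a minimal sketch for (25) follows, in which the Legendre coefficients of the example are an assumption used only for illustration.

```python
import numpy as np

def monic_value(x, alpha, beta, k):
    """P_k(x) for the monic orthogonal polynomials: P_{-1} = 0, P_0 = 1,
    P_{j+1}(x) = (x - alpha_j) P_j(x) - beta_j P_{j-1}(x)."""
    p_old, p = 0.0, 1.0
    for j in range(k):
        p_old, p = p, (x - alpha[j]) * p - beta[j] * p_old
    return p

def extreme_nodes_internal(alpha, beta, n, lam):
    """Condition (25) on [-1, 1]: lam = beta_n checks Laurie's rule G^L_{2n+1},
    lam = beta_{n+1} checks the optimal averaged rule G^S_{2n+1}."""
    return all(monic_value(x, alpha, beta, n + 1) / monic_value(x, alpha, beta, n - 1) >= lam
               for x in (1.0, -1.0))

# Legendre weight (Jacobi weight with a = b = 0): both averaged rules are internal.
n = 10
alpha = np.zeros(n + 2)
beta = np.array([0.0] + [k**2 / (4.0 * k**2 - 1.0) for k in range(1, n + 2)])
print(extreme_nodes_internal(alpha, beta, n, beta[n]),
      extreme_nodes_internal(alpha, beta, n, beta[n + 1]))   # True True
```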

5.1. Results for Classical Weight Functions

We provide a short overview of the internality of Laurie’s averaged Gauss rules and of optimal averaged Gauss rules for several classical weight functions. We first note that in the case of the Hermite and generalized Hermite weight functions $w(x) = |x|^a e^{-x^2}$ on $\mathbb{R}$, for $a > -1$, all the discussed formulas are trivially internal. For the generalized Laguerre weight function
$$w(x) = x^{\alpha} e^{-x}, \qquad \text{over } [0,\infty) \text{ with } \alpha > -1,$$
Laurie [7] shows that the anti-Gauss formula is internal for $\alpha > -1$. Ehrich [8] proves that the optimal averaged Gauss formula is internal for $\alpha \ge 1$ and that, for $-1 < \alpha < 1$, there is precisely one negative node.
In the case of the Jacobi weight function $w$ on the interval $[-1,1]$ for $a, b \in (-1, +\infty)$,
$$w(x) = w_{a,b}(x) = (1-x)^a (1+x)^b, \tag{27}$$
the anti-Gauss formula with $n \ge 1$ is internal if and only if (see [7])
$$\begin{aligned} (2a+1)\, n^2 + (2a+1)(a+b+1)\, n + \tfrac{1}{2}(a+1)(a+b)(a+b+1) &\ge 0, \\ (2b+1)\, n^2 + (2b+1)(a+b+1)\, n + \tfrac{1}{2}(b+1)(a+b)(a+b+1) &\ge 0, \end{aligned}$$
and the optimal averaged Gauss quadrature formula is internal if and only if (see [65])
$$\begin{aligned} (2a+1)\, n^2 + (2a+1)(a+b+1)\, n + \tfrac{1}{2}(a+b)\bigl[(a+1)(a+b+1) + 2(a-b)\bigr] &\ge 0, \\ (2b+1)\, n^2 + (2b+1)(a+b+1)\, n + \tfrac{1}{2}(a+b)\bigl[(b+1)(a+b+1) + 2(b-a)\bigr] &\ge 0. \end{aligned}$$
Hence, when n is large enough,
  • if $a, b > -\frac{1}{2}$ or $|a| = |b| = \frac{1}{2}$, then both averaged Gauss formulas are internal;
  • if $a < -\frac{1}{2}$ or $b < -\frac{1}{2}$, then both averaged Gauss formulas are external;
  • if $a = -\frac{1}{2}$, then for $b \in \left(-\frac{1}{2}, \frac{1}{2}\right)$, only the optimal averaged Gauss formula is internal, and for $b > \frac{1}{2}$, only Laurie’s averaged Gauss formula is internal.
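The two pairs of inequalities above are straightforward to evaluate for given $a$, $b$, and $n$. The following sketch does so and reproduces, for instance, the third bullet; the particular values of $a$, $b$, and $n$ in the example are assumptions chosen only for illustration.

```python
def jacobi_averaged_internal(a, b, n, optimal=False):
    """Evaluate the displayed internality conditions for the Jacobi weight (27):
    Laurie's averaged rule G^L_{2n+1} (optimal=False) or the optimal averaged
    rule G^S_{2n+1} (optimal=True)."""
    def half_condition(a, b):
        if optimal:
            c0 = 0.5 * (a + b) * ((a + 1.0) * (a + b + 1.0) + 2.0 * (a - b))
        else:
            c0 = 0.5 * (a + 1.0) * (a + b) * (a + b + 1.0)
        return (2.0 * a + 1.0) * n**2 + (2.0 * a + 1.0) * (a + b + 1.0) * n + c0 >= 0.0
    return half_condition(a, b) and half_condition(b, a)

# a = -1/2, b = 1/4: only the optimal averaged Gauss rule is internal.
print(jacobi_averaged_internal(-0.5, 0.25, 20),
      jacobi_averaged_internal(-0.5, 0.25, 20, optimal=True))   # False True
```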

Modifications by Linear Divisors and Factors

Given a weight function w ( x ) on the interval [ 1 , 1 ] , one often needs to work with its modifications by a linear divisor or factor,
$$\tilde w(x) = \frac{w(x)}{z - x} \qquad \text{and} \qquad \hat w(x) = (z - x)\, w(x), \tag{28}$$
where z is a given real constant with | z | > 1 . It follows from [1] (Theorem 2.52 (Uvarov)) that the monic orthogonal polynomials P ˜ n for the weight function w ˜ ( x ) given by (28) satisfy
$$\tilde P_n(x) = P_n(x) - r_{n-1} P_{n-1}(x), \qquad \text{where } \ r_n = \frac{\int_{-1}^{1} P_{n+1}(x)\, \tilde w(x)\, dx}{\int_{-1}^{1} P_n(x)\, \tilde w(x)\, dx}. \tag{29}$$
The sequence r n can be computed inductively by the relations
$$r_{-1} = \int_{-1}^{1} \frac{w(x)}{z - x}\, dx \qquad \text{and} \qquad r_n = z - \alpha_n - \frac{\beta_n}{r_{n-1}}, \quad n \ge 0. \tag{30}$$
The recurrence coefficients α ˜ n and β ˜ n for the polynomials P ˜ n can be computed in terms of the sequence r n by an algorithm described in [1] (Equations (2.4.26–27)):
$$\tilde\alpha_n = \alpha_n + r_n - r_{n-1} \qquad \text{and} \qquad \tilde\beta_n = \beta_{n-1}\, \frac{r_{n-1}}{r_{n-2}}. \tag{31}$$
Similarly, by [1] (Theorem 2.52), the monic orthogonal polynomials P ^ n with respect to the weight function w ^ ( x ) given by (28) satisfy
$$(x - z)\, \hat P_n(x) = P_{n+1}(x) - s_n P_n(x), \qquad \text{where } \ s_n = \frac{P_{n+1}(z)}{P_n(z)}. \tag{32}$$
The coefficients s n can be computed inductively by using the relations
$$s_0 = z - \alpha_0 \qquad \text{and} \qquad s_n = z - \alpha_n - \frac{\beta_n}{s_{n-1}}, \quad n \ge 1, \tag{33}$$
and the recurrence coefficients α ^ n and β ^ n for the polynomials P ^ n can be computed by the corresponding algorithm in [1] (Equations (2.4.12–13)):
$$\hat\alpha_n = \alpha_{n+1} + s_{n+1} - s_n \qquad \text{and} \qquad \hat\beta_n = \beta_n\, \frac{s_n}{s_{n-1}}. \tag{34}$$
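The recursions (30) and (31) translate directly into code; the implementation of (33) and (34) for the modification by a linear factor is entirely analogous. In the sketch below, the value r_init is the integral defining $r_{-1}$, and the conventions $\beta_0 = \mu_0$ in the first step of (30) and $\tilde\alpha_0 = \alpha_0 + r_0$ for the starting coefficient are assumptions of this sketch (they correspond to Gautschi's formulation of the algorithm [1]); the Chebyshev example at the end is chosen only for illustration.

```python
import numpy as np

def modified_by_divisor(alpha, beta, mu0, z, r_init, m):
    """Recursion coefficients for the measure w(x) dx / (z - x), cf. (29)-(31).
    r_init is r_{-1}; returns alpha-tilde[0..m-1] and beta-tilde[1..m-1]."""
    r = np.zeros(m + 1)                       # r[k] stores r_{k-1}
    r[0] = r_init
    for k in range(m):
        r[k + 1] = z - alpha[k] - (beta[k] if k > 0 else mu0) / r[k]
    at, bt = np.zeros(m), np.zeros(m - 1)
    at[0] = alpha[0] + r[1]                   # alpha-tilde_0 = alpha_0 + r_0
    for k in range(1, m):
        at[k] = alpha[k] + r[k + 1] - r[k]                            # (31)
        bt[k - 1] = (beta[k - 1] if k > 1 else mu0) * r[k] / r[k - 1]
    return at, bt

# Chebyshev weight of the first kind divided by (z - x), with z = 2.
mu0, z, m = np.pi, 2.0, 8
alpha = np.zeros(m)
beta = np.array([0.0, 0.5] + [0.25] * (m - 2))     # beta_1 = 1/2, beta_k = 1/4 otherwise
r_init = mu0 / np.sqrt(z**2 - 1.0)                 # integral of w_1(x)/(z - x) over [-1, 1]
at, bt = modified_by_divisor(alpha, beta, mu0, z, r_init, m)
```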

5.2. Chebyshev Weight Functions

Here, we will discuss the internality of averaged and optimal averaged quadrature rules when the weight function has one of the forms
$$\tilde w_i(x) = \frac{w_i(x)}{x - z}, \qquad \breve w_i(x) = \frac{x - z'}{x - z}\, w_i(x), \tag{35}$$
and $w_i(x)$ is one of the Chebyshev weight functions:
$$w_1(x) = \frac{1}{\sqrt{1 - x^2}}, \qquad w_2(x) = \sqrt{1 - x^2}, \qquad w_3(x) = \sqrt{\frac{1 + x}{1 - x}}. \tag{36}$$
Here, $z = -\frac{1}{2}\left(c + \frac{1}{c}\right)$ is an arbitrary real constant with $|z| > 1$ and
$$z' = -\left(\frac{c}{2} + \frac{1}{c}\right).$$
The relation between $z$ and $z'$ was first studied by Milovanović et al. [15] and was later extended to averaged Gauss and optimal averaged Gauss quadrature rules in [28,29,31].

5.2.1. Modifications by a Linear Divisor

Throughout this subsection, we will use the notation $\acute c = \min\{c, c^{-1}\}$. When the weight function is $\tilde w_i(x)$ for $i = 1, 2, 3$, given by (35) and (36), the recurrence relations (30) imply that $r_k = -\frac{\acute c}{2}$ for $k \ge 1$. Using the relations (31), the recurrence coefficients can be shown to satisfy
$$\begin{aligned} \tilde w_1: &\quad \tilde\alpha_0 = -\acute c, \quad \tilde\alpha_1 = \tfrac{1}{2}\acute c, \quad \tilde\beta_1 = \frac{1 - \acute c^2}{2}, \quad (\tilde\alpha_k, \tilde\beta_k) = \bigl(0, \tfrac{1}{4}\bigr) \ \text{for } k \ge 2, \\ \tilde w_2: &\quad \tilde\alpha_0 = -\tfrac{1}{2}\acute c, \quad (\tilde\alpha_k, \tilde\beta_k) = \bigl(0, \tfrac{1}{4}\bigr) \ \text{for } k \ge 1, \\ \tilde w_3: &\quad \tilde\alpha_0 = \frac{1 - \acute c}{2}, \quad \tilde\alpha_1 = 0, \quad \tilde\beta_1 = \frac{1 + \acute c}{4}, \quad (\tilde\alpha_k, \tilde\beta_k) = \bigl(0, \tfrac{1}{4}\bigr) \ \text{for } k \ge 2. \end{aligned}$$
By (29), the monic orthogonal polynomials are
$$\begin{aligned} \tilde w_1: &\quad \tilde P_k(x) = T_k(x) + \tfrac{1}{2}\acute c \cdot T_{k-1}(x) \ \text{for } k \ge 2, \\ \tilde w_2: &\quad \tilde P_k(x) = U_k(x) + \tfrac{1}{2}\acute c \cdot U_{k-1}(x) \ \text{for } k \ge 1, \\ \tilde w_3: &\quad \tilde P_k(x) = V_k(x) + \tfrac{1}{2}\acute c \cdot V_{k-1}(x) \ \text{for } k \ge 2, \end{aligned}$$
with $\tilde P_0(x) = 1$ and $\tilde P_1(x) = x + \acute c$, where the $T_k$, $U_k$, and $V_k$ are the monic Chebyshev orthogonal polynomials of the first, second, and third kind, respectively, defined by
$$T_n(\cos\theta) = \frac{1}{2^{n-1}} \cos n\theta, \qquad U_n(\cos\theta) = \frac{1}{2^{n}} \cdot \frac{\sin(n+1)\theta}{\sin\theta}, \qquad V_n(\cos\theta) = \frac{1}{2^{n}} \cdot \frac{\cos\bigl(n + \frac{1}{2}\bigr)\theta}{\cos\frac{1}{2}\theta}.$$
In all three cases, it is easily verified that the averaged, optimal averaged, and truncated quadrature rules are internal for $n \ge 2$.

5.2.2. Modifications by a Linear-Over-Linear Factor

The orthogonal polynomials P ˘ k with respect to any of the weight functions w ˘ i defined by (35) are given by Formula (32). In this case, the recurrence (33) simplifies to
$$s_k = z' - \frac{1}{4 s_{k-1}} \qquad (k \ge 2). \tag{37}$$
It is known ([28], Theorem 4) that every sequence $(s_k)_{k=1}^{\infty}$ with $s_1 \ne \frac{1}{2}\zeta^{-1}$ that satisfies (37) has the form
$$s_k = \frac{1}{2\zeta} \cdot \frac{\zeta^{2k-2} + A}{\zeta^{2k-4} + A} \quad (k \in \mathbb{N}), \qquad \text{where } \ \zeta = -\frac{c^2 + 2 + \sqrt{c^4 + 4}}{2c},$$
for some constant $A$.
for some constant A. The constant A is determined from the initial term s 1 , which is easy to evaluate for each of the three weight functions w ˘ i . The following values are obtained:
$$\breve w_1: \ A = \begin{cases} \zeta^{4}\left(\dfrac{c^2 + \sqrt{c^4 + 4}}{2}\right)^{2}, & |c| < 1, \\[6pt] \zeta^{2}\left(\dfrac{c^2 + \sqrt{c^4 + 4}}{2}\right)^{2}, & |c| \ge 1, \end{cases} \qquad \breve w_2: \ A = \begin{cases} \zeta^{6}\left(\dfrac{c^2 + \sqrt{c^4 + 4}}{2}\right)^{2}, & |c| < 1, \\[6pt] \zeta^{4}\left(\dfrac{c^2 + \sqrt{c^4 + 4}}{2}\right)^{2}, & |c| \ge 1, \end{cases} \qquad \breve w_3: \ A = \begin{cases} \zeta^{5}\left(\dfrac{c^2 + \sqrt{c^4 + 4}}{2}\right)^{2}, & |c| < 1, \\[6pt] \zeta^{3}\left(\dfrac{c^2 + \sqrt{c^4 + 4}}{2}\right)^{2}, & |c| \ge 1. \end{cases}$$
Knowing the terms s k and the coefficients α ˜ n and β ˜ n , and using the relations (34), we easily obtain formulas for the recurrence coefficients α ˘ n and β ˘ n for the weight functions w ˘ i ( x ) , i = 1 , 2 , 3 . Now, the internality of the rules G 2 n + 1 L , G 2 n + 1 S , and G n + 2 t can be verified by using (25) and (26). The following results are shown in [28,31], and [29], respectively.
Theorem 1. 
Let the weight function be $\breve w_1(x)$ and let $n \ge 2$. The averaged rule $G_{2n+1}^{L}$ and the optimal averaged rule $G_{2n+1}^{S}$ have one and two external nodes, respectively. However, the truncated rule $G_{n+2}^{t}$ is internal.
Theorem 2. 
Let the weight function be $\breve w_2(x)$ and let $n \ge 2$. Then, the optimal averaged rule $G_{2n+1}^{S}$ and the truncated rule $G_{n+2}^{t}$ are internal.
Theorem 3. 
Let the weight function be $\breve w_3(x)$ and let $n \ge 2$. The averaged rule $G_{2n+1}^{L}$ has an external node, while the optimal averaged rule $G_{2n+1}^{S}$ is internal if and only if $c > 0$. The truncated rule $G_{n+2}^{t}$ is internal.

5.3. Modifications of the Jacobi Weight Functions

We now consider modifications of the Jacobi weight functions (27) of the form
$$\tilde w(x) = \frac{(1-x)^a (1+x)^b}{z - x} \qquad \text{and} \qquad \hat w(x) = (z - x)\, (1-x)^a (1+x)^b \qquad \text{for } -1 < x < 1, \tag{38}$$
where z is a given real constant with | z | > 1 . The condition on z is conveniently secured by setting
$$z = \frac{1}{2}\left(c + \frac{1}{c}\right) \qquad \text{with } -1 < c < 1, \ c \ne 0.$$
In [32] asymptotic expansions for the recurrence coefficients α ˜ n , β ˜ n , and α ^ n , β ^ n are obtained, leading to the following results.
Theorem 4. 
Let the weight function be any of the two weight functions defined by (38): $\tilde w(x)$ or $\hat w(x)$. Then, for $n$ large enough, the averaged Gauss rule $G_{2n+1}^{L}$ has
  • the largest node internal if $a > -\frac{1}{2}$, or $a = -\frac{1}{2}$ and $|b| > \frac{1}{2}$;
  • the smallest node internal if $b > -\frac{1}{2}$, or $b = -\frac{1}{2}$ and $|a| > \frac{1}{2}$.
For $n$ large enough, the optimal averaged Gauss rule $G_{2n+1}^{S}$ has
  • the largest node internal if $a > -\frac{1}{2}$, or $a = -\frac{1}{2}$ and $|b| < \frac{1}{2}$;
  • the smallest node internal if $b > -\frac{1}{2}$, or $b = -\frac{1}{2}$ and $|a| < \frac{1}{2}$.
Thus, the averaged and optimal averaged Gauss quadrature rules are simultaneously internal or external, except when $\min\{a, b\} = -\frac{1}{2}$.
Theorem 5. 
The truncated rule G n + 2 t associated with either one of the weight functions given by (38) is internal when n is large enough.
Numerical experiments suggest that, generally, n does not have to be very large in order for the rule G n + 2 t to be internal; see Table 1 for an illustration.
Example 1.
Consider a weight function $\tilde w(x)$ given by (38) with the pole $z$ clear of the interval of integration: $a = 0.25$, $b = -0.75$, and $z = 2.5$. The conclusions of Theorems 4 and 5 hold already for fairly small values of $n$. Since $b < -\frac{1}{2}$, the two averaged rules are external on the left, and only the truncated rules are internal.
As mentioned above, it is desirable for a quadrature rule to be internal. Gauss rules are internal, but averaged rules are not, in some cases. Figure 1 and Figure 2 depict the distribution of the nodes of Gauss, optimal averaged Gauss, averaged Gauss, and NAG quadrature rules with respect to some Jacobi weight functions w s , t ( x ) in (11) and n or 2 n + 1 nodes for n = 10 . The auxiliary measure d μ used to define the NAG rules is the Chebyshev measure of the second kind. The internality of these averaged and optimal averaged Gauss quadrature rules is discussed in [7,9], and of NAG rules in [35]. Figure 1 displays the nodes of the quadrature rules when s = 1 and t = 0.5 . All quadrature rules can be seen to be internal.
Figure 2 shows the nodes of the quadrature rules when $s = t = -0.6$. The averaged and optimal averaged Gauss quadrature rules are not internal. However, the NAG rule is internal for this Jacobi weight function. This is a reason for our interest in NAG quadrature rules.
Table 2 summarizes some properties of the quadrature rules considered. None of the quadrature rules are guaranteed to be internal, but all nodes of the averaged and optimal averaged Gauss rules are real and all weights are positive. Moreover, modifications of averaged and optimal averaged Gauss rules have been proposed that can be internal when the “standard” averaged or optimal averaged rules are not. Gauss–Kronrod rules may have complex-valued nodes. We remark that the nodes and weights of the ( 2 n + 1 ) -node Gauss–Kronrod rule are more difficult to compute than those of the other quadrature rules considered.
Numerous computed results with NAG rules for Jacobi-type measures that use the Chebyshev measure of the second kind as auxiliary measure suggest the conjecture that the nodes x ^ 1 , x ^ 3 , , x ^ 2 n + 1 of the NAG quadrature rule Q ^ 2 n + 1 so obtained interlace (separate) the nodes of the corresponding Gauss quadrature rule G n ; see Figure 1 and Figure 2 for illustrations. It would be interesting to prove this conjecture.
We conclude by mentioning some very recent work on this theory. The paper [66] provides error bounds for averaged Gauss quadrature rules for functions analytic on an ellipse. There are measures for which the precision of averaged Gauss quadrature rules increases; see [67]. During our investigation of the averaged and optimal averaged Gauss rules, we observed that they typically are much more accurate than their degrees of precision suggest. The paper [68] discusses and illustrates this property. It is based on an analogous investigation of the accuracy of the Clenshaw–Curtis quadrature rules (see [69,70]) carried out by Trefethen (see [71,72,73]).

6. Conclusions

Gauss quadrature rules are frequently used to approximate integrals over a real interval. It is important to be able to estimate the quadrature error to choose a Gauss rule with an appropriate number of nodes. Laurie’s averaged Gauss rule and the optimal averaged Gauss rule are useful for this purpose. This paper reviews recent developments in the analysis of these averaged rules and discusses some applications. In particular, the location of the extreme nodes of the averaged rules is discussed and illustrated.

Author Contributions

Conceptualization, D.L.D., R.M.M.D., L.R. and M.M.S.; methodology, D.L.D., R.M.M.D., L.R. and M.M.S.; software, D.L.D., R.M.M.D., L.R. and M.M.S.; validation, D.L.D., R.M.M.D., L.R. and M.M.S.; formal analysis, D.L.D., R.M.M.D., L.R. and M.M.S.; investigation, D.L.D., R.M.M.D., L.R. and M.M.S.; resources, D.L.D., R.M.M.D., L.R. and M.M.S.; data curation, D.L.D., R.M.M.D., L.R. and M.M.S.; writing—original draft preparation, D.L.D., R.M.M.D., L.R. and M.M.S.; writing—review and editing, D.L.D., R.M.M.D., L.R. and M.M.S.; visualization, D.L.D., R.M.M.D., L.R. and M.M.S.; supervision, D.L.D., R.M.M.D., L.R. and M.M.S. The authors contributed equally. All authors have read and agreed to the published version of the manuscript.

Funding

The research by D.L.D., R.M.M.D. and M.M.S. was supported in part by the Serbian Ministry of Science, Technological Development, and Innovations, according to Contract 451-03-137/2025-03/200105 dated on 4 February 2025.

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Gautschi, W. Orthogonal Polynomials: Computation and Approximation; Oxford University Press: Oxford, UK, 2004. [Google Scholar]
  2. Chihara, T.S. An Introduction to Orthogonal Polynomials; Gordon and Breach, Science Publishers: New York, NY, USA; London, UK; Paris, France, 1978. [Google Scholar]
  3. Szegő, G. Orthogonal Polynomials, 4th ed.; American Mathematical Society: Providence, RI, USA, 1975. [Google Scholar]
  4. Wilf, H.S. Mathematics for the Physical Sciences; Wiley: New York, NY, USA, 1962. [Google Scholar]
  5. Borges, C.F.; Reichel, L. Computation of Gauss-type quadrature rules. Electron. Trans. Numer. Anal. 2024, 61, 121–136. [Google Scholar] [CrossRef]
  6. Golub, G.H.; Welsch, J.H. Calculation of Gauss quadrature rules. Math. Comp. 1969, 23, 221–230. [Google Scholar] [CrossRef]
  7. Laurie, D.P. Anti-Gaussian quadrature formulas. Math. Comp. 1996, 65, 739–747. [Google Scholar] [CrossRef]
  8. Ehrich, S. On stratified extensions of Gauss–Laguerre and Gauss–Hermite quadrature formulas. J. Comput. Appl. Math. 2002, 140, 291–299. [Google Scholar] [CrossRef]
  9. Spalević, M.M. On generalized averaged Gaussian formulas. Math. Comp. 2007, 76, 1483–1492. [Google Scholar] [CrossRef]
  10. Kronrod, A.S. Integration with control of accuracy. Soviet Phys. Dokl. 1964, 9, 17–19. [Google Scholar]
  11. Gautschi, W. A historical note on Gauss–Kronrod quadrature. Numer. Math. 2005, 100, 483–484. [Google Scholar] [CrossRef]
  12. Calvetti, D.; Golub, G.H.; Gragg, W.B.; Reichel, L. Computation of Gauss–Kronrod rules. Math. Comp. 2000, 69, 1035–1052. [Google Scholar] [CrossRef]
  13. Laurie, D.P. Calculation of Gauss–Kronrod quadrature rules. Math. Comp. 1997, 66, 1133–1145. [Google Scholar] [CrossRef]
  14. Notaris, S.E. Gauss–Kronrod quadrature formulae—A survey of fifty years of research. Electron. Trans. Numer. Anal. 2016, 45, 371–404. [Google Scholar]
  15. Kahaner, D.K.; Monegato, G. Nonexistence of extended Gauss–Laguerre and Gauss–Hermite quadrature rules with positive weights. Z. Angew. Math. Phys. 1978, 29, 983–986. [Google Scholar] [CrossRef]
  16. Peherstorfer, F.; Petras, K. Ultraspherical Gauss–Kronrod quadrature is not possible for λ > 3. SIAM J. Numer. Anal. 2000, 37, 927–948. [Google Scholar] [CrossRef]
  17. Peherstorfer, F.; Petras, K. Stieltjes polynomials and Gauss–Kronrod quadrature for Jacobi weight functions. Numer. Math. 2003, 95, 689–706. [Google Scholar] [CrossRef]
  18. Laurie, D.P. Stratified sequence of nested quadrature formulas. Quest. Math. 1992, 15, 365–384. [Google Scholar] [CrossRef]
  19. Patterson, T.N.L. Stratified nested and related quadrature rules. J. Comput. Appl. Math. 1999, 112, 243–251. [Google Scholar] [CrossRef]
  20. Hascelik, A.I. Modified anti-Gauss and degree optimal average formulas for Gegenbauer measure. Appl. Numer. Math. 2008, 58, 171–179. [Google Scholar] [CrossRef]
  21. Peherstorfer, F. On positive quadrature formulas. In ISNM International Series of Numerical Mathematics; Brass, H., Hämmerlin, G., Eds.; Numerical Integration IV; Birkhäuser: Basel, Switzerland, 1993; Volume 112, pp. 297–313. [Google Scholar]
  22. Peherstorfer, F. Positive quadrature formulas III: Asymptotics of weights. Math. Comp. 2008, 77, 2241–2259. [Google Scholar] [CrossRef]
  23. Reichel, L.; Spalević, M.M. Averaged Gauss quadrature formulas: Properties and applications. J. Comput. Appl. Math. 2022, 410, 114232. [Google Scholar] [CrossRef]
  24. Calvetti, D.; Reichel, L. Symmetric Gauss–Lobatto and modified anti-Gauss rules. BIT Numer. Math. 2003, 43, 541–554. [Google Scholar] [CrossRef]
  25. Djukić, D.L.; Mutavdžić Djukić, R.M.; Reichel, L.; Spalević, M.M. Weighted averaged Gaussian quadrature rules for modified Chebyshev measures. Appl. Numer. Math. 2024, 200, 195–208. [Google Scholar] [CrossRef]
  26. Djukić, D.L.; Reichel, L.; Spalević, M.M. Truncated generalized averaged Gauss quadrature rules. J. Comput. Appl. Math. 2016, 308, 408–418. [Google Scholar] [CrossRef]
  27. Djukić, D.L.; Reichel, L.; Spalević, M.M. Internality of generalized averaged Gaussian quadratures and their truncated variants for measures induced by Chebyshev polynomials. Appl. Numer. Math. 2019, 142, 190–205. [Google Scholar] [CrossRef]
  28. Djukić, D.L.; Mutavdžić Djukić, R.M.; Reichel, L.; Spalević, M.M. Internality of generalized averaged quadrature rules and truncated variants for modified Chebyshev measures of the first kind. J. Comput. Appl. Math. 2021, 398, 113696. [Google Scholar] [CrossRef]
  29. Djukić, D.L.; Mutavdžić Djukić, R.M.; Reichel, L.; Spalević, M.M. Internality of generalized averaged quadrature rules and truncated variants for modified Chebyshev measures of the third and fourth kind. Numer. Algorithms 2023, 92, 523–544. [Google Scholar] [CrossRef]
  30. Djukić, D.L.; Reichel, L.; Spalević, M.M.; Tomanović, J.D. Internality of the averaged Gaussian quadratures and their truncated variants with Bernstein-Szegő weight functions. Electron. Trans. Numer. Anal. 2016, 45, 405–419. [Google Scholar]
  31. Djukić, D.L.; Reichel, L.; Spalević, M.M.; Tomanović, J.D. Internality of generalized averaged Gaussian quadrature rules and their truncated variants for modified Chebyshev measures of the second kind. J. Comput. Appl. Math. 2019, 345, 70–85. [Google Scholar] [CrossRef]
  32. Djukić, D.L.; Mutavdžić Djukić, R.M.; Reichel, L.; Spalević, M.M. Internality of averaged Gauss quadrature rules for certain modification of Jacobi measures. Appl. Comput. Math. 2023, 22, 426–442. [Google Scholar] [CrossRef]
  33. Djukić, D.L.; Mutavdžić Djukić, R.M.; Pejčev, A.V.; Reichel, L.; Spalević, M.M.; Spalević, S.M. Internality of two-measure-based generalized Gauss quadrature rules for modified Chebyshev measures. Electron. Trans. Numer. Anal. 2024, 61, 157–172. [Google Scholar] [CrossRef]
  34. Djukić, D.L.; Mutavdžić Djukić, R.M.; Pejčev, A.V.; Reichel, L.; Spalević, M.M.; Spalević, S.M. Internality of two-measure-based generalized Gauss quadrature rules for modified Chebyshev measures II. Mathematics 2025, 13, 513. [Google Scholar] [CrossRef]
  35. Pejčev, A.V.; Reichel, L.; Spalević, M.M.; Spalević, S.M. A new class of quadrature rules for estimating the error in Gauss quadrature. Appl. Numer. Math. 2024, 204, 206–221. [Google Scholar] [CrossRef]
  36. Alqahtani, H.; Borges, C.; Djukić, D.L.; Mutavdžić Djukić, R.M.; Reichel, L.; Spalević, M.M. Computation of pairs of related Gauss quadrature rules. Appl. Numer. Math. 2025, 208, 32–42. [Google Scholar] [CrossRef]
  37. Reichel, L.; Spalević, M.M. A new representation of generalized averaged Gauss quadrature rules. Appl. Numer. Math. 2021, 165, 614–619. [Google Scholar] [CrossRef]
  38. Djukić, D.L.; Mutavdžić Djukić, R.M.; Reichel, L.; Spalević, M.M. Decompositions of optimal averaged Gauss quadrature rules. J. Comput. Appl. Math. 2024, 438, 115586. [Google Scholar] [CrossRef]
  39. de la Calle Ysern, B.; Spalević, M.M. Modified Stieltjes polynomials and Gauss–Kronrod quadrature rules. Numer. Math. 2018, 138, 1–35. [Google Scholar] [CrossRef]
  40. de la Calle Ysern, B.; Spalević, M.M. On the computation of Patterson-type quadrature rules. J. Comput. Appl. Math. 2022, 403C, 113850. [Google Scholar] [CrossRef]
  41. Baglama, J.; Fenu, C.; Reichel, L.; Rodriguez, G. Analysis of directed networks via partial singular value decomposition and Gauss quadrature. Linear Algebra Appl. 2014, 456, 93–121. [Google Scholar] [CrossRef]
  42. Calvetti, D.; Golub, G.H.; Reichel, L. Estimation of the L-curve via Lanczos bidiagonalization. BIT Numer. Math. 1999, 39, 603–619. [Google Scholar] [CrossRef]
  43. Fenu, C.; Martin, D.; Reichel, L.; Rodriguez, G. Network analysis via partial spectral factorization and Gauss quadrature. SIAM J. Sci. Comput. 2013, 35, A2046–A2068. [Google Scholar] [CrossRef]
  44. Golub, G.H.; Meurant, G. Matrices, Moments and Quadrature with Applications; Princeton University Press: Princeton, NJ, USA, 2010. [Google Scholar]
  45. Bentbib, A.; Ghomari, M.E.; Jbilou, K.; Reichel, L. The extended symmetric block Lanczos method for matrix-valued Gauss-type quadrature rules. J. Comput. Appl. Math. 2022, 407, 113965. [Google Scholar] [CrossRef]
  46. Shao, M.; da Jornada, F.H.; Lin, L.; Yang, C.; Deslippe, J.; Louie, S.G. A structure preserving Lanczos algorithm for computing the optical absorption spectrum. SIAM J. Matrix. Anal. Appl. 2018, 39, 683–711. [Google Scholar] [CrossRef]
  47. Reichel, L.; Rodriguez, G.; Tang, T. New block quadrature rules for the approximation of matrix functions. Linear Algebra Appl. 2016, 502, 299–326. [Google Scholar] [CrossRef]
  48. De la Cruz Cabrera, O.; Matar, M.; Reichel, L. Edge importance in a network via line graphs and the matrix exponential. Numer. Algorithms 2020, 83, 807–832. [Google Scholar] [CrossRef]
  49. Estrada, E.; Higham, D.J. Network properties revealed through matrix functions. SIAM Rev. 2010, 52, 696–714. [Google Scholar] [CrossRef]
  50. Beckermann, B.; Reichel, L. Error estimation and evaluation of matrix functions via the Faber transform. SIAM J. Numer. Anal. 2009, 47, 3849–3883. [Google Scholar] [CrossRef]
  51. Hochbruck, M.; Lubich, C. On Krylov subspace approximations to the matrix exponential operator. SIAM J. Numer. Anal. 1997, 34, 1911–1925. [Google Scholar] [CrossRef]
  52. Eshghi, N.; Reichel, L. Estimating the error in matrix function approximations. Adv. Comput. Math. 2021, 47, 57. [Google Scholar] [CrossRef]
  53. Reichel, L.; Spalević, M.M. Radau- and Lobatto-type averaged Gauss rules. J. Comput. Appl. Math. 2024, 437, 115477. [Google Scholar] [CrossRef]
  54. Kim, S.-M.; Reichel, L. Anti-Szegő quadrature rules. Math. Comp. 2007, 76, 795–810. [Google Scholar] [CrossRef]
  55. Jagels, C.; Reichel, L.; Tang, T. Generalized averaged Szegő quadrature rules. J. Comput. Appl. Math. 2017, 311, 645–654. [Google Scholar] [CrossRef]
  56. Djukić, D.L.; Mutavdžić Djukić, R.M.; Reichel, L.; Spalević, M.M. Optimal averaged Padé-type approximants. Electron. Trans. Numer. Anal. 2023, 59, 145–156. [Google Scholar] [CrossRef]
  57. Almutairi, H.; Meurant, G.; Reichel, L.; Spalević, M.M. New error estimates for the conjugate gradient method. J. Comput. Appl. Math. 2025, 459, 116357. [Google Scholar] [CrossRef]
  58. Alibrahim, M.; Darvishi, M.T.; Reichel, L.; Spalević, M.M. Error estimators for a Krylov subspace method for the solution of linear systems of equations with a symmetric indefinite matrix. Axioms 2025, 14, 179. [Google Scholar] [CrossRef]
  59. Djukić, D.L.; Fermo, L.; Mutavdžić Djukić, R.M. Averaged Nyström interpolants for bivariate Fredholm integral equations on the real positive semi-axis. Electron. Trans. Numer. Anal. 2024, 61, 51–65. [Google Scholar] [CrossRef]
  60. Díaz de Alba, P.; Fermo, L.; Rodriguez, G. Solution of second kind Fredholm integral equations by means of Gauss and anti-Gauss quadrature rules. Numer. Math. 2020, 146, 699–728. [Google Scholar] [CrossRef]
  61. Fermo, L.; Reichel, L.; Rodriguez, G.; Spalević, M.M. Averaged Nyström interpolants for the solution of Fredholm integral equations of the second kind. Appl. Math. Comput. 2024, 467, 128482. [Google Scholar] [CrossRef]
  62. Djukić, D.L.; Fermo, L.; Mutavdžić Djukić, R.M. Averaged cubature schemes on the real positive semiaxis. Numer. Algorithms 2023, 92, 545–569. [Google Scholar]
  63. Orive, R.; Santos-León, J.C.; Spalević, M.M. Cubature formulae for the Gaussian weight. Some old and new rules. Electron. Trans. Numer. Anal. 2020, 53, 426–438. [Google Scholar] [CrossRef]
  64. Reichel, L.; Spalević, M.M.; Tang, T. Generalized averaged Gaussian quadrature rules for the approximation of matrix functionals. BIT Numer. Math. 2016, 56, 1045–1067. [Google Scholar] [CrossRef]
  65. Spalević, M.M. A note on generalized averaged Gaussian formulas. Numer. Algorithms 2007, 76, 253–264. [Google Scholar] [CrossRef]
  66. Spalević, M.M. Error bounds of positive interpolatory quadrature rules for functions analytic on ellipses. TWMS J. Pure Appl. Math. 2025; in press. [Google Scholar]
  67. Spalević, M.M. On generalized averaged Gaussian formulas. II. Math. Comp. 2017, 86, 1877–1885. [Google Scholar] [CrossRef]
  68. Reichel, L.; Spalević, M.M. On the Accuracy of Averaged Gauss Quadrature Rules; In Preparation; Elsevier: Amsterdam, The Netherlands, 2025. [Google Scholar]
  69. Clenshaw, C.W.; Curtis, A.R. A method for numerical integration on an automatic computer. Numer. Math. 1960, 2, 197–205. [Google Scholar] [CrossRef]
  70. Davis, P.J.; Rabinowitz, P. Methods of Numerical Integration; Academic Press: Cambridge, MA, USA, 1975. [Google Scholar]
  71. Trefethen, L.N. Is Gauss quadrature better than Clenshaw-Curtis? SIAM Rev. 2008, 50, 67–87. [Google Scholar] [CrossRef]
  72. Trefethen, L.N. Approximation Theory and Approximation Practice; Extended Edition; SIAM: Philadelphia, PA, USA, 2019. [Google Scholar]
  73. Trefethen, L.N. Exactness of quadrature formulas. SIAM Rev. 2022, 64, 132–150. [Google Scholar] [CrossRef]
Figure 1. The distribution of the nodes x k of the Gauss rule G n (marked by black dots), of the optimal averaged Gauss rule G 2 n + 1 S (marked by red dots), of the averaged Gauss rule G 2 n + 1 L (marked by blue dots), and of the NAG quadrature rule Q ^ 2 n + 1 (marked by green dots) for n = 10 with respect to the Jacobi weight function w s , t ( x ) (11) with s = 1 , t = 0.5 .
Figure 2. The distribution of the nodes $x_k$ of the Gauss rule $G_n$ (marked by black dots), of the optimal generalized averaged Gauss rule $G_{2n+1}^{S}$ (marked by red dots), of the averaged Gauss rule $G_{2n+1}^{L}$ (marked by blue dots), and of the NAG quadrature rule $\hat Q_{2n+1}$ (marked by green dots) for $n = 10$ with respect to the Jacobi weight function $w_{s,t}(x)$ (11) with $s = t = -0.6$.
Table 1. Example 1. Extreme nodes of averaged, optimal averaged, and truncated averaged Gauss rules associated with the Gauss rule $G_n$.

 n     $x_1^L$          $x_{2n+1}^L$     $x_1^S$          $x_{2n+1}^S$
 5     -1.0025266868    0.9924256470     -1.0025396026    0.9924548807
 10    -1.0006599719    0.9981855958     -1.0006613415    0.9981886094
 25    -1.0001087020    0.9997163258     -1.0001087454    0.9997164192
 50    -1.0000274529    0.9999295941     -1.0000274558    0.9999296003

$a = 0.4$, $b = -0.6$, and $z = 2$. The outermost nodes of $G_{2n+1}^{L}$ and $G_{2n+1}^{S}$.

 n     $x_1^t$          $x_{n+2}^t$
 5     -0.9893754976    0.9571995230
 10    -0.9962769604    0.9856442099
 25    -0.9992464251    0.9972046478
 50    -0.9997949122    0.9992507975

$a = 0.25$, $b = -0.75$, and $z = 2.5$. The outermost nodes of $G_{n+2}^{t}$.
Table 2. Properties of the Gauss–Kronrod, optimal averaged Gauss, averaged Gauss, and NAG rules with $2n+1$ nodes. Positivity refers to the weights of the quadrature rules; interlacing refers to the nodes being interlaced by those of the $n$-node Gauss rules that they are associated with. Complexity refers to the difficulty of computing the quadrature rules.

 Property       Gauss–Kronrod   Optimal Averaged Gauss   Averaged Gauss   NAG
 existence      not always      yes                      yes              yes
 positivity     not always      yes                      yes              yes
 internality    not always      not always               not always       not always
 interlacing    yes             yes                      yes              no
 complexity     high            low                      low              low