
# Principle of Duality in Cubic Smoothing Spline

1 Graduate School of Social Sciences, Hiroshima University, 1-2-1 Kagamiyama, Higashi-Hiroshima 739-8525, Japan
2 School of Informatics and Data Science, Hiroshima University, 1-2-1 Kagamiyama, Higashi-Hiroshima 739-8525, Japan
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(10), 1839; https://doi.org/10.3390/math8101839
Received: 20 September 2020 / Revised: 12 October 2020 / Accepted: 15 October 2020 / Published: 19 October 2020

## Abstract

Fitting a cubic smoothing spline is a typical smoothing method. This paper reveals a principle of duality in the penalized least squares regressions relating to the method. We also provide a number of results derived from these regressions, some of which are illustrated by a real data example.
MSC:
62G05

## 1. Introduction

Fitting a cubic smoothing spline, which was developed by [1,2] and others, is a typical smoothing method. The cubic smoothing spline fitted to a scatter plot of ordered pairs $( x i , y i )$ for $i = 1 , … , n$ is a function such that
$$\hat{f}(x) = \operatorname*{arg\,min}_{f \in \mathcal{W}} \, \sum_{i=1}^{n} \{ y_i - f(x_i) \}^2 + \lambda \int_a^b \{ f''(x) \}^2 \, dx, \tag{1}$$
where $x 1 , … , x n$ are points satisfying $a < x 1 < ⋯ < x n < b$, $W$ denotes a function space that contains all functions of which the second derivative is square integrable over $[ a , b ]$, and $λ$ is a positive smoothing/tuning parameter, which controls the trade-off between goodness of fit and smoothness.
Let $f ^ = [ f ^ ( x 1 ) , … , f ^ ( x n ) ] ⊤$. Then, given $f ^ ( x )$ is a natural cubic spline of which the knots are $x 1 , … , x n$ (see, e.g., [3,4]), it follows that
$$\hat{f} = \operatorname*{arg\,min}_{f \in \mathbb{R}^n} \, \| y - f \|^2 + \lambda f^\top C^\top R^{-1} C f \tag{2}$$
$$= \left( I_n + \lambda C^\top R^{-1} C \right)^{-1} y, \tag{3}$$
where $y = [ y 1 , … , y n ] ⊤$, $I n$ denotes the $n × n$ identity matrix, and $C$ and $R$ are explicitly presented later. Then, as shown in [3], $f ^ ( x )$ in (1) is uniquely determined by $f ^ ∈ R n$ in (3). Thus, estimating $f ^ ( x )$ is equivalent to estimating $f ^$.
Let $Π = [ ι n , x ] ∈ R n × 2$, where $ι n = [ 1 , … , 1 ] ⊤ ∈ R n$ and $x = [ x 1 , … , x n ] ⊤$. Note that since $x 1 < ⋯ < x n$, $ι n$ and $x$ are linearly independent and thus $Π$ is of full column rank. Let
$$\hat{\tau} = \Pi (\Pi^\top \Pi)^{-1} \Pi^\top y. \tag{4}$$
Denote the difference between $f ^$ and $τ ^$ (resp. $y$ and $f ^$) by $c ^$ (resp. $u ^$):
$$\hat{c} = \hat{f} - \hat{\tau}, \qquad \hat{u} = y - \hat{f}. \tag{5}$$
Accordingly, we have
$$y = \hat{\tau} + \hat{c} + \hat{u}. \tag{6}$$
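As a small illustration, the trend component $\hat{\tau}$ in (4) is an ordinary least squares projection, so the decomposition of $y$ can be sketched in a few lines of NumPy (the data and variable names below are ours, purely for illustration, not from the paper):

```python
import numpy as np

# Illustrative data; x must be strictly increasing, as assumed in the paper.
x = np.array([0.0, 1.0, 2.5, 4.0, 5.5])
y = np.array([1.0, 2.2, 2.9, 4.1, 5.3])

# Pi = [iota_n, x]; tau_hat is the orthogonal projection of y onto S(Pi).
Pi = np.column_stack([np.ones_like(x), x])
tau_hat = Pi @ np.linalg.solve(Pi.T @ Pi, Pi.T @ y)

# What remains, c_hat + u_hat = y - tau_hat, is orthogonal to S(Pi).
resid = y - tau_hat
```

The residual $y - \hat{\tau}$ is orthogonal to both columns of $\Pi$, which foreshadows results such as $\iota_n^\top \hat{c} = 0$ derived later in the paper.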
In this paper, we present a comprehensive list of penalized least squares regressions relating to (6). One such example is the ridge regression [5] that leads to $c ^$. Then, we reveal a principle of duality in them. In addition, based on them, we provide a number of theoretical results, for example, $ι n ⊤ c ^ = 0$.
This paper is organized as follows. Section 2 fixes notation and gives key preliminary results used to derive the main results of the paper. Section 3 provides a comprehensive list of penalized least squares regressions relating to (6) and reveals a principle of duality in them. Section 4 shows some results that are obtainable from the regressions given in Section 3. Section 5 illustrates some of the results of Section 3 and Section 4 with a real data example. Section 6 deals with the case where other right-inverse matrices are used. Section 7 concludes the paper.

## 2. Preliminaries

In this section, we give key preliminary results used to derive the main results of this paper. Before stating them, we fix some notations.

#### 2.1. Notations

Let $\hat{f}_i$ (resp. $\hat{\tau}_i$) denote the $i$th entry of $\hat{f}$ (resp. $\hat{\tau}$) for $i = 1, \ldots, n$; let $\delta_i = x_{i+1} - x_i$, which is positive by definition, for $i = 1, \ldots, n-1$; and let $\Delta = \mathrm{diag}(\delta_1, \ldots, \delta_{n-1}) \in \mathbb{R}^{(n-1) \times (n-1)}$. For a full-row-rank matrix $M \in \mathbb{R}^{m \times n}$, let its right inverse $M^\top (M M^\top)^{-1} \in \mathbb{R}^{n \times m}$ be denoted by $M_r^{-1}$. For a full-column-rank matrix $W \in \mathbb{R}^{n \times p}$, let $S(W)$ (resp. $S^\perp(W)$) denote the column space of $W$ (resp. the orthogonal complement of $S(W)$), and let $P_W$ (resp. $Q_W$) denote the orthogonal projection matrix onto $S(W)$ (resp. $S^\perp(W)$); explicitly, $P_W = W (W^\top W)^{-1} W^\top$ and $Q_W = I_n - P_W$. $D^{(i)} \in \mathbb{R}^{(n-i) \times (n-i+1)}$ is the Toeplitz matrix whose first (resp. last) row is $[-1, 1, 0, \ldots, 0]$ (resp. $[0, \ldots, 0, -1, 1]$). Finally, we define the matrices $C \in \mathbb{R}^{(n-2) \times n}$ and $R \in \mathbb{R}^{(n-2) \times (n-2)}$ as follows:
$$C = \begin{bmatrix}
\delta_1^{-1} & -\delta_1^{-1} - \delta_2^{-1} & \delta_2^{-1} & 0 & \cdots & 0 \\
0 & \ddots & \ddots & \ddots & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & \ddots & 0 \\
0 & \cdots & 0 & \delta_{n-2}^{-1} & -\delta_{n-2}^{-1} - \delta_{n-1}^{-1} & \delta_{n-1}^{-1}
\end{bmatrix} \tag{7}$$
and
$$R = \begin{bmatrix}
\frac{1}{3}(\delta_1 + \delta_2) & \frac{1}{6}\delta_2 & 0 & \cdots & 0 \\
\frac{1}{6}\delta_2 & \frac{1}{3}(\delta_2 + \delta_3) & \ddots & \ddots & \vdots \\
0 & \ddots & \ddots & \ddots & 0 \\
\vdots & \ddots & \ddots & \ddots & \frac{1}{6}\delta_{n-2} \\
0 & \cdots & 0 & \frac{1}{6}\delta_{n-2} & \frac{1}{3}(\delta_{n-2} + \delta_{n-1})
\end{bmatrix}. \tag{8}$$
Finally, we denote the eigenvalues of $R$ by $ω 1 , … , ω n − 2$ in descending order.

#### 2.2. Key Preliminary Results

Lemma 1.
(i)
$C$ can be factorized as $C = D ( 2 ) Δ − 1 D ( 1 )$.
(ii)
We have the following inequalities:
$$\omega_{n-2} \ge \min\left\{ \tfrac{1}{3}\delta_1 + \tfrac{1}{6}\delta_2, \; \tfrac{1}{6}(\delta_2 + \delta_3), \; \ldots, \; \tfrac{1}{6}(\delta_{n-3} + \delta_{n-2}), \; \tfrac{1}{6}\delta_{n-2} + \tfrac{1}{3}\delta_{n-1} \right\} > 0.$$
Proof of Lemma 1.
(i)
Let $w = [ w 1 , … , w n ] ⊤$ be an n-dimensional column vector. Then, by definition of $C$, it follows that
$$C w = \begin{bmatrix} -\dfrac{w_2 - w_1}{\delta_1} + \dfrac{w_3 - w_2}{\delta_2} \\ \vdots \\ -\dfrac{w_{n-1} - w_{n-2}}{\delta_{n-2}} + \dfrac{w_n - w_{n-1}}{\delta_{n-1}} \end{bmatrix} = D^{(2)} \begin{bmatrix} \dfrac{w_2 - w_1}{\delta_1} \\ \vdots \\ \dfrac{w_n - w_{n-1}}{\delta_{n-1}} \end{bmatrix} = D^{(2)} \Delta^{-1} \begin{bmatrix} -w_1 + w_2 \\ \vdots \\ -w_{n-1} + w_n \end{bmatrix} = D^{(2)} \Delta^{-1} D^{(1)} w \in \mathbb{R}^{n-2},$$
which leads to $C = D ( 2 ) Δ − 1 D ( 1 )$.
(ii)
The first inequality follows by applying the Gershgorin circle theorem and the second inequality holds from $δ i > 0$ for $i = 1 , … , n − 1$.
□
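Lemma 1(i) is easy to check numerically. The following NumPy sketch (our code, with arbitrary illustrative knots) builds $C$ entrywise from its definition and compares it with the factorization $D^{(2)} \Delta^{-1} D^{(1)}$:

```python
import numpy as np

x = np.array([0.0, 0.7, 1.5, 3.0, 3.2, 4.6])  # illustrative knots, increasing
n, d = len(x), np.diff(x)                      # d[i-1] = delta_i = x_{i+1} - x_i

def first_diff(rows):
    """Toeplitz first-difference matrix with `rows` rows and rows + 1 columns."""
    return np.diff(np.eye(rows + 1), axis=0)

# C built entrywise from its definition in Section 2.1.
C = np.zeros((n - 2, n))
for i in range(n - 2):
    C[i, i]     = 1 / d[i]
    C[i, i + 1] = -1 / d[i] - 1 / d[i + 1]
    C[i, i + 2] = 1 / d[i + 1]

# Lemma 1(i): C = D(2) Delta^{-1} D(1).
D1, D2 = first_diff(n - 1), first_diff(n - 2)
factored = D2 @ np.diag(1 / d) @ D1
```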
Remark 1.
In Appendix A, we give some remarks on a special case such that $x = [ 1 , … , n ] ⊤$.
Lemma 2.
(i)
$S ( C ⊤ )$ equals $S ⊥ ( Π )$ and
(ii)
$S ( C r − 1 )$ equals $S ⊥ ( Π )$.
Proof of Lemma 2.
(i)
Given that $δ i > 0$ for $i = 1 , … , n − 1$, both $Π$ and $C ⊤$ are of full column rank. In addition, $[ Π , C ⊤ ]$ is a square matrix. Thus, if $( C ⊤ ) ⊤ Π = C Π = 0$, then it follows that $S ( C ⊤ ) = S ⊥ ( Π )$. From $D ( 1 ) ι n = 0$, we have $C ι n = D ( 2 ) Δ − 1 D ( 1 ) ι n = 0$. Likewise, from $Δ − 1 D ( 1 ) x = Δ − 1 Δ ι n − 1 = ι n − 1$ and $D ( 2 ) ι n − 1 = 0$, we obtain $C x = D ( 2 ) Δ − 1 D ( 1 ) x = 0$. Accordingly, we have $C Π = 0$, which completes the proof.
(ii)
Recall that $C r − 1 = C ⊤ ( C C ⊤ ) − 1$. It is clear that $C r − 1$ is a full-column-rank matrix such that $[ Π , C r − 1 ]$ is a square matrix. In addition, $( C r − 1 ) ⊤ Π = ( C C ⊤ ) − 1 C Π = 0$. Thus, it follows that $S ( C r − 1 ) = S ⊥ ( Π )$.
□
Denote the spectral decomposition of $R$ by $V \Omega V^\top$ and let $R^{-1/2} = V \Omega^{-1/2} V^\top$, where $\Omega^{-1/2} = \mathrm{diag}(1/\sqrt{\omega_1}, \ldots, 1/\sqrt{\omega_{n-2}})$. Then, $R^{-1/2}$ is a positive definite matrix such that $R^{-1/2} R^{-1/2} = R^{-1}$. Define
$$D = R^{-1/2} C. \tag{9}$$
Then, given that $C^\top$ is of full column rank and $R^{-1/2}$ is nonsingular, $D \in \mathbb{R}^{(n-2) \times n}$ is also of full row rank. In addition, we have
$$D^\top D = C^\top R^{-1} C. \tag{10}$$
(We provide Matlab/GNU Octave and R functions for calculating $C$, $R$, and $D$ in Appendix A).
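The functions referred to above are in Matlab/GNU Octave and R; a comparable Python/NumPy sketch (ours, not the authors' code) that builds $C$, $R$, and $D = R^{-1/2} C$ for arbitrary strictly increasing knots is:

```python
import numpy as np

def spline_matrices(x):
    """Return C, R, and D = R^{-1/2} C for strictly increasing knots x,
    following the definitions in Section 2.1."""
    x = np.asarray(x, dtype=float)
    n, d = len(x), np.diff(x)
    C = np.zeros((n - 2, n))
    for i in range(n - 2):
        C[i, i:i + 3] = [1 / d[i], -1 / d[i] - 1 / d[i + 1], 1 / d[i + 1]]
    R = np.zeros((n - 2, n - 2))
    for i in range(n - 2):
        R[i, i] = (d[i] + d[i + 1]) / 3
        if i + 1 < n - 2:
            R[i, i + 1] = R[i + 1, i] = d[i + 1] / 6
    # Symmetric square root: R^{-1/2} = V diag(1/sqrt(omega)) V^T.
    w, V = np.linalg.eigh(R)
    D = (V @ np.diag(w ** -0.5) @ V.T) @ C
    return C, R, D
```

A quick check confirms $D^\top D = C^\top R^{-1} C$ and $C \Pi = 0$, consistent with the identities above and Lemma 2.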
Lemma 3.
(i)
$S ( D ⊤ )$ equals $S ⊥ ( Π )$ and
(ii)
$S ( D r − 1 )$ equals $S ⊥ ( Π )$.
Proof of Lemma 3.
Both (i) and (ii) may be proved similarly to Lemma 2 (ii). For example, given $C Π = 0$, we have $( D ⊤ ) ⊤ Π = D Π = R − 1 / 2 C Π = 0$. □
Denote the eigenvalues of $C ⊤ R − 1 C$ by $g 1 , … , g n$ in ascending order and the spectral decomposition of $C ⊤ R − 1 C$ by $U G U ⊤$, where $U = [ u 1 , … , u n ]$ and $G = diag ( g 1 , … , g n )$. Let $T = [ u 1 , u 2 ] ∈ R n × 2$, $E ⊤ = [ u 3 , … , u n ] ∈ R n × ( n − 2 )$, and $S = diag ( g 3 , … , g n ) ∈ R ( n − 2 ) × ( n − 2 )$.
Lemma 4.
(i)
$S ( T )$ equals $S ( Π )$,
(ii)
$S ( E ⊤ )$ equals $S ⊥ ( Π )$, and
(iii)
$S ( E r − 1 )$ equals $S ⊥ ( Π )$.
Proof of Lemma 4.
(i) Since $C ⊤ R − 1 C ∈ R n × n$ is a nonnegative definite matrix of which the rank is $n − 2$, we have $0 = g 1 = g 2 < g 3 < ⋯ < g n$. In addition, given $C Π = 0$, it follows that $C ⊤ R − 1 C Π = 0 · Π$, which completes the proof. (ii) and (iii) may be proved similarly to Lemma 2 (ii). □
Given $g 1 = g 2 = 0$, we have
$$E^\top S E = C^\top R^{-1} C. \tag{11}$$
Define
$$F = S^{1/2} E, \tag{12}$$
where $S^{1/2} = \mathrm{diag}(\sqrt{g_3}, \ldots, \sqrt{g_n}) \in \mathbb{R}^{(n-2) \times (n-2)}$. Then, we have
$$F^\top F = C^\top R^{-1} C. \tag{13}$$
Lemma 5.
(i)
$S ( F ⊤ )$ equals $S ⊥ ( Π )$ and
(ii)
$S ( F r − 1 )$ equals $S ⊥ ( Π )$.
Proof of Lemma 5.
Both (i) and (ii) may be proved similarly to Lemma 2 (ii). For example, given $E Π = 0$, we have $( F ⊤ ) ⊤ Π = F Π = S 1 / 2 E Π = 0$. □
Lemma 6.
There exists an orthogonal matrix $Υ ∈ R ( n − 2 ) × ( n − 2 )$ such that $F ⊤ = D ⊤ Υ$.
Proof of Lemma 6.
Recall that both $D ⊤ ∈ R n × ( n − 2 )$ and $F ⊤ ∈ R n × ( n − 2 )$ are of full column rank and $S ( D ⊤ ) = S ( F ⊤ )$. Accordingly, there exists a nonsingular matrix $Υ ∈ R ( n − 2 ) × ( n − 2 )$ such that $F ⊤ = D ⊤ Υ$. Given that $D ⊤ D = F ⊤ F$, we have $D ⊤ ( I n − 2 − Υ Υ ⊤ ) D = 0$. Then, from $D r − 1 ⊤ D ⊤ ( I n − 2 − Υ Υ ⊤ ) D D r − 1 = I n − 2 − Υ Υ ⊤ = 0$, we have $Υ ⊤ = Υ − 1$. □
Let (i) $A = D , F$, (ii) $( B , Q ) = ( C , R ) , ( E , S − 1 )$, (iii) $D = C , D , E , F$, and (iv) $P = Π , T$. From the results above, we immediately obtain the following results:
Proposition 1.
(i)
$C ⊤ R − 1 C = A ⊤ A = B ⊤ Q − 1 B$,
(ii)
$D P = D r − 1 ⊤ P = 0$,
(iii)
both $[ P , D ⊤ ]$ and $[ P , D r − 1 ]$ are nonsingular, and
(iv)
$P D ⊤ = P D r − 1 = Q P$.

## 3. Several Regressions Relating to (6) and Principle of Duality in Them

In this section, we provide a comprehensive list of penalized least squares regressions relating to (6), and reveal a principle of duality in them. The penalized regressions are, more precisely, those to compute $c ^$, $u ^$, $τ ^$, $τ ^ + c ^$, $c ^ + u ^$, and $τ ^ + u ^$.

#### 3.1. Penalized Regressions to Compute $τ ^ + c ^$

Concerning $τ ^ + c ^$, we have the following results:
Lemma 7.
It follows that
$$\hat{\tau} + \hat{c} = \operatorname*{arg\,min}_{f \in \mathbb{R}^n} \|y - f\|^2 + \lambda \|A f\|^2 = \left( I_n + \lambda A^\top A \right)^{-1} y \tag{14}$$
$$= \operatorname*{arg\,min}_{f \in \mathbb{R}^n} \|y - f\|^2 + \lambda f^\top B^\top Q^{-1} B f = \left( I_n + \lambda B^\top Q^{-1} B \right)^{-1} y. \tag{15}$$
Proof of Lemma 7.
From Proposition 1, we have $C ⊤ R − 1 C = A ⊤ A = B ⊤ Q − 1 B$. Then, (2) and (3) can be represented as follows:
$$\begin{aligned}
\hat{f} &= \operatorname*{arg\,min}_{f \in \mathbb{R}^n} \|y - f\|^2 + \lambda \|A f\|^2 = \left( I_n + \lambda A^\top A \right)^{-1} y \\
&= \operatorname*{arg\,min}_{f \in \mathbb{R}^n} \|y - f\|^2 + \lambda f^\top B^\top Q^{-1} B f = \left( I_n + \lambda B^\top Q^{-1} B \right)^{-1} y.
\end{aligned}$$
In addition, by definition of $c ^$, we have $f ^ = τ ^ + c ^$. Hence, we obtain (14) and (15). □

#### 3.2. Penalized Regressions to Compute $c ^$

Concerning $c ^$, we have the following results:
Lemma 8.
Consider the following penalized regressions:
$$\hat{\gamma} = \operatorname*{arg\,min}_{\gamma \in \mathbb{R}^{n-2}} \|y - A_r^{-1} \gamma\|^2 + \lambda \|\gamma\|^2 = \left( (A_r^{-1})^\top A_r^{-1} + \lambda I_{n-2} \right)^{-1} (A_r^{-1})^\top y, \tag{16}$$
$$\hat{\kappa} = \operatorname*{arg\,min}_{\kappa \in \mathbb{R}^{n-2}} \|y - B_r^{-1} \kappa\|^2 + \lambda \kappa^\top Q^{-1} \kappa = \left( (B_r^{-1})^\top B_r^{-1} + \lambda Q^{-1} \right)^{-1} (B_r^{-1})^\top y. \tag{17}$$
Then, we have
$$\hat{c} = A_r^{-1} \hat{\gamma} = B_r^{-1} \hat{\kappa}. \tag{18}$$
Proof of Lemma 8.
Let $K = [ P , A r − 1 ]$. From Proposition 1, it follows that $A P = 0$, $A r − 1 ⊤ P = 0$, and $K$ is nonsingular. Accordingly, given that $K ⊤ K = diag ( P ⊤ P , A r − 1 ⊤ A r − 1 )$ and $A K = [ A P , A A r − 1 ] = [ 0 , I n − 2 ]$, it follows that
$$\begin{aligned}
\hat{f} &= K \left( K^\top K + \lambda K^\top A^\top A K \right)^{-1} K^\top y \\
&= [P, \, A_r^{-1}] \begin{bmatrix} (P^\top P)^{-1} & 0 \\ 0 & \left( (A_r^{-1})^\top A_r^{-1} + \lambda I_{n-2} \right)^{-1} \end{bmatrix} \begin{bmatrix} P^\top \\ (A_r^{-1})^\top \end{bmatrix} y \\
&= P (P^\top P)^{-1} P^\top y + A_r^{-1} \left( (A_r^{-1})^\top A_r^{-1} + \lambda I_{n-2} \right)^{-1} (A_r^{-1})^\top y = \hat{\tau} + A_r^{-1} \hat{\gamma},
\end{aligned}$$
from which we have $f ^ − τ ^ = A r − 1 γ ^$. Given $f ^ − τ ^ = c ^$, we thus obtain $c ^ = A r − 1 γ ^$. Similarly, we can obtain $c ^ = B r − 1 κ ^$. □
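A numerical sanity check of Lemma 8 with $A = D$ is straightforward; the sketch below uses the equally spaced case $x = [1, \ldots, n]^\top$ of Appendix A, with simulated data and our own variable names:

```python
import numpy as np

n, lam = 9, 2.5
rng = np.random.default_rng(0)
y = rng.normal(size=n)
x = np.arange(1.0, n + 1)
Pi = np.column_stack([np.ones(n), x])

# Equally spaced case: C = D(2)D(1) and R is tridiagonal Toeplitz (Appendix A).
C = np.diff(np.eye(n), n=2, axis=0)
R = (2/3) * np.eye(n - 2) + (1/6) * (np.eye(n - 2, k=1) + np.eye(n - 2, k=-1))
w, V = np.linalg.eigh(R)
A = (V @ np.diag(w ** -0.5) @ V.T) @ C              # A = D = R^{-1/2} C

tau = Pi @ np.linalg.solve(Pi.T @ Pi, Pi.T @ y)     # tau_hat, eq. (4)
f = np.linalg.solve(np.eye(n) + lam * A.T @ A, y)   # f_hat, eq. (14)
Ar = A.T @ np.linalg.inv(A @ A.T)                   # right inverse A_r^{-1}
gamma = np.linalg.solve(Ar.T @ Ar + lam * np.eye(n - 2), Ar.T @ y)  # ridge (16)
c = Ar @ gamma                                      # Lemma 8: c_hat = A_r^{-1} gamma_hat
```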
Lemma 9.
$c ^$ can be calculated by the following penalized regressions:
$$\hat{c} = \operatorname*{arg\,min}_{c \in \mathbb{R}^n} \|(y - \hat{\tau}) - c\|^2 + \lambda \|A c\|^2 = \left( I_n + \lambda A^\top A \right)^{-1} (y - \hat{\tau}) \tag{19}$$
$$= \operatorname*{arg\,min}_{c \in \mathbb{R}^n} \|(y - \hat{\tau}) - c\|^2 + \lambda c^\top B^\top Q^{-1} B c = \left( I_n + \lambda B^\top Q^{-1} B \right)^{-1} (y - \hat{\tau}). \tag{20}$$
Proof of Lemma 9.
Given (14), $f ^ = τ ^ + c ^$, and $A P = 0$, we have
$$y = \left( I_n + \lambda A^\top A \right) \hat{f} = \left( I_n + \lambda A^\top A \right) (\hat{\tau} + \hat{c}) = \hat{\tau} + \left( I_n + \lambda A^\top A \right) \hat{c},$$
which leads to (19). Similarly, we can obtain (20). □
Remark 2.
We add some more exposition about (16). Let $K = [ P , A r − 1 ]$ as before. In addition, let $θ = [ β ⊤ , γ ⊤ ] ⊤ ∈ R n$ be a vector such that $f = K θ = P β + A r − 1 γ$. Then, it follows that $A f = A ( P β + A r − 1 γ ) = γ$. Given that $f = P β + A r − 1 γ$ and $A f = γ$, the minimization problem in (14) can be represented as follows:
$$\min_{\beta \in \mathbb{R}^2, \, \gamma \in \mathbb{R}^{n-2}} \|y - P \beta - A_r^{-1} \gamma\|^2 + \lambda \|\gamma\|^2. \tag{21}$$
It is noteworthy that $β$ is not penalized in (21) and $( A r − 1 ) ⊤ P = 0$. Thus, the minimization problem (21) can be decomposed into (16) and (40). Moreover, (21) gives the best linear unbiased predictors of $β$ and $γ$ of the following linear mixed model:
$$y = P \beta + A_r^{-1} \gamma + u, \qquad [u^\top, \gamma^\top]^\top \sim N\!\left( 0, \, \mathrm{diag}(\sigma_u^2 I_n, \sigma_v^2 I_{n-2}) \right), \tag{22}$$
where $λ = σ u 2 / σ v 2$.
Remark 3.
By using $C r − 1$, ref. [6] derived the following expressions in our notations:
$$\hat{f} = \hat{\tau} + C_r^{-1} \hat{\kappa}, \qquad \hat{\kappa} = \left( (C_r^{-1})^\top C_r^{-1} + \lambda R^{-1} \right)^{-1} (C_r^{-1})^\top y. \tag{23}$$
Here, we make the following remarks on (23).
(i)
First, $κ ^$ is the solution of the following penalized regression:
$$\min_{\kappa \in \mathbb{R}^{n-2}} \|y - C_r^{-1} \kappa\|^2 + \lambda \kappa^\top R^{-1} \kappa. \tag{24}$$
(ii)
Moreover, (23) is a special case of $c ^ = B r − 1 κ ^$ in (18).

#### 3.3. Penalized Regressions to Compute $u ^$

Concerning $u ^$, we have the following results:
Lemma 10.
Consider the following penalized regressions:
$$\hat{\eta} = \operatorname*{arg\,min}_{\eta \in \mathbb{R}^{n-2}} \|y - A^\top \eta\|^2 + \lambda^{-1} \|\eta\|^2 = \left( A A^\top + \lambda^{-1} I_{n-2} \right)^{-1} A y, \tag{25}$$
$$\hat{\upsilon} = \operatorname*{arg\,min}_{\upsilon \in \mathbb{R}^{n-2}} \|y - B^\top \upsilon\|^2 + \lambda^{-1} \upsilon^\top Q \upsilon = \left( B B^\top + \lambda^{-1} Q \right)^{-1} B y. \tag{26}$$
Then, it follows that
$$\hat{u} = A^\top \hat{\eta} = B^\top \hat{\upsilon}. \tag{27}$$
Proof of Lemma 10.
Applying the matrix inversion lemma to $I n + λ A ⊤ A − 1$, we have
$$\left( I_n + \lambda A^\top A \right)^{-1} = I_n - A^\top \left( A A^\top + \lambda^{-1} I_{n-2} \right)^{-1} A. \tag{28}$$
Postmultiplying (28) by $y$ yields $y − f ^ = A ⊤ η ^$. Given $y − f ^ = u ^$, we thus have $u ^ = A ⊤ η ^$. Similarly, we can obtain $u ^ = B ⊤ υ ^$. □
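Lemma 10 and the matrix inversion identity (28) can likewise be verified numerically; the sketch below again uses the equally spaced case of Appendix A with illustrative data:

```python
import numpy as np

n, lam = 9, 2.5
rng = np.random.default_rng(1)
y = rng.normal(size=n)

# Equally spaced case: C = D(2)D(1), R tridiagonal Toeplitz, A = R^{-1/2} C.
C = np.diff(np.eye(n), n=2, axis=0)
R = (2/3) * np.eye(n - 2) + (1/6) * (np.eye(n - 2, k=1) + np.eye(n - 2, k=-1))
w, V = np.linalg.eigh(R)
A = (V @ np.diag(w ** -0.5) @ V.T) @ C

f = np.linalg.solve(np.eye(n) + lam * A.T @ A, y)           # f_hat
eta = np.linalg.solve(A @ A.T + np.eye(n - 2) / lam, A @ y)  # eq. (25)
u = A.T @ eta                                                # Lemma 10: u_hat
```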
Lemma 11.
$u ^$ can be calculated by the following penalized regressions:
$$\hat{u} = \operatorname*{arg\,min}_{u \in \mathbb{R}^n} \|(y - \hat{\tau}) - u\|^2 + \lambda^{-1} \left\| (A_r^{-1})^\top u \right\|^2 = \left( I_n + \lambda^{-1} A_r^{-1} (A_r^{-1})^\top \right)^{-1} (y - \hat{\tau}) \tag{29}$$
and
$$\hat{u} = \operatorname*{arg\,min}_{u \in \mathbb{R}^n} \|(y - \hat{\tau}) - u\|^2 + \lambda^{-1} u^\top B_r^{-1} Q (B_r^{-1})^\top u = \left( I_n + \lambda^{-1} B_r^{-1} Q (B_r^{-1})^\top \right)^{-1} (y - \hat{\tau}). \tag{30}$$
Proof of Lemma 11.
Given (34), $g ^ = τ ^ + u ^$, and $A r − 1 ⊤ P = 0$, we have
$$y = \left( I_n + \lambda^{-1} A_r^{-1} (A_r^{-1})^\top \right) \hat{g} = \left( I_n + \lambda^{-1} A_r^{-1} (A_r^{-1})^\top \right) (\hat{\tau} + \hat{u}) = \hat{\tau} + \left( I_n + \lambda^{-1} A_r^{-1} (A_r^{-1})^\top \right) \hat{u},$$
which leads to (29). Similarly, we can obtain (30). □
Remark 4.
In [2] and ([3], p. 20), there are equations expressed in our notation as follows:
$$(R + \lambda C C^\top) \phi = C y, \qquad \hat{f} = y - \lambda C^\top \phi. \tag{31}$$
Here, we make the following remarks on (31).
(i)
First, these lead to a penalized least squares problem. Given that $u ^ = y − f ^$, removing $ϕ$ from the above equations leads to
$$\hat{u} = y - \hat{f} = \lambda C^\top (R + \lambda C C^\top)^{-1} C y = C^\top (C C^\top + \lambda^{-1} R)^{-1} C y = C^\top \hat{\upsilon}, \tag{32}$$
where
$$\hat{\upsilon} = \operatorname*{arg\,min}_{\upsilon \in \mathbb{R}^{n-2}} \|y - C^\top \upsilon\|^2 + \lambda^{-1} \upsilon^\top R \upsilon. \tag{33}$$
(ii)
Moreover, (32) is a special case of $u ^ = B ⊤ υ ^$ in (27).

#### 3.4. Penalized Regression to Compute $τ ^ + u ^$

Concerning $τ ^ + u ^$, we have the following results:
Lemma 12.
Let $g ^ = τ ^ + u ^$. Then, it follows that
$$\hat{g} = \operatorname*{arg\,min}_{g \in \mathbb{R}^n} \|y - g\|^2 + \lambda^{-1} \left\| (A_r^{-1})^\top g \right\|^2 = \left( I_n + \lambda^{-1} A_r^{-1} (A_r^{-1})^\top \right)^{-1} y \tag{34}$$
$$= \operatorname*{arg\,min}_{g \in \mathbb{R}^n} \|y - g\|^2 + \lambda^{-1} g^\top B_r^{-1} Q (B_r^{-1})^\top g = \left( I_n + \lambda^{-1} B_r^{-1} Q (B_r^{-1})^\top \right)^{-1} y. \tag{35}$$
Proof of Lemma 12.
Let $J = [ P , A ⊤ ]$. From Proposition 1, it follows that $A P = 0$, $A r − 1 ⊤ P = 0$, and $J$ is nonsingular. Accordingly, given that $J ⊤ J = diag ( P ⊤ P , A A ⊤ )$ and $A r − 1 ⊤ J = [ A r − 1 ⊤ P , A r − 1 ⊤ A ⊤ ] = [ 0 , I n − 2 ]$, it follows that
$$\begin{aligned}
\left( I_n + \lambda^{-1} A_r^{-1} (A_r^{-1})^\top \right)^{-1} y &= J \left( J^\top J + \lambda^{-1} J^\top A_r^{-1} (A_r^{-1})^\top J \right)^{-1} J^\top y \\
&= [P, \, A^\top] \begin{bmatrix} (P^\top P)^{-1} & 0 \\ 0 & \left( A A^\top + \lambda^{-1} I_{n-2} \right)^{-1} \end{bmatrix} \begin{bmatrix} P^\top \\ A \end{bmatrix} y \\
&= P (P^\top P)^{-1} P^\top y + A^\top \left( A A^\top + \lambda^{-1} I_{n-2} \right)^{-1} A y = \hat{\tau} + A^\top \hat{\eta}.
\end{aligned}$$
Given $u ^ = A ⊤ η ^$, we obtain (34). Similarly, we can obtain (35). □
Remark 5.
Similarly to Remark 2, we add some more exposition about (25). Let $ξ = [ β ⊤ , η ⊤ ] ⊤ ∈ R n$ be such that $g = J ξ = P β + A ⊤ η$. As stated, $A r − 1 ⊤ J = [ 0 , I n − 2 ]$. Then, it follows that
$$(A_r^{-1})^\top g = (A_r^{-1})^\top J \xi = \eta. \tag{36}$$
Given $g = P β + A ⊤ η$ and $A r − 1 ⊤ g = η$, the minimization problem (34) can be represented as follows:
$$\min_{\beta \in \mathbb{R}^2, \, \eta \in \mathbb{R}^{n-2}} \|y - P \beta - A^\top \eta\|^2 + \lambda^{-1} \|\eta\|^2. \tag{37}$$
Again, it is noteworthy that $β$ is not penalized in (37). Moreover, it follows that $( A ⊤ ) ⊤ P = A P = 0$. Thus, the minimization problem (37) can be decomposed into (25) and (40).

#### 3.5. Ordinary Regressions to Compute $c ^ + u ^$ and $τ ^$

Concerning $c ^ + u ^$ and $τ ^$, we have the following results:
Lemma 13.
(i)
Let $h ^ = D ⊤ α ^$, where
$$\hat{\alpha} = \operatorname*{arg\,min}_{\alpha \in \mathbb{R}^{n-2}} \|y - D^\top \alpha\|^2 = (D D^\top)^{-1} D y. \tag{38}$$
Then, it follows that
$$\hat{c} + \hat{u} = \hat{h}. \tag{39}$$
(ii)
It follows that $τ ^ = P β ^$, where
$$\hat{\beta} = \operatorname*{arg\,min}_{\beta \in \mathbb{R}^2} \|y - P \beta\|^2 = (P^\top P)^{-1} P^\top y. \tag{40}$$
Proof of Lemma 13.
Given Proposition 1, both results are easily obtainable. For example, the former result can be proved as follows:
$$\hat{h} = D^\top \hat{\alpha} = P_{D^\top} \, y = Q_P \, y = y - \hat{\tau} = \hat{c} + \hat{u}.$$
□
Remark 6.
From Proposition 1, we also have $h ^ ( = c ^ + u ^ ) = D r − 1 ρ ^$, where
$$\hat{\rho} = \operatorname*{arg\,min}_{\rho \in \mathbb{R}^{n-2}} \|y - D_r^{-1} \rho\|^2 = \left( (D_r^{-1})^\top D_r^{-1} \right)^{-1} (D_r^{-1})^\top y. \tag{41}$$

#### 3.6. Principle of Duality in the Penalized Regressions

See the second columns of Table 1 and Table 2, in which the penalized regressions shown above are arranged in pairs that mirror one another. We now reveal a principle of duality in these penalized regressions. For example, (D1) is obtainable by replacing $A^\top, \lambda$ in (P1) with $A_r^{-1}, \lambda^{-1}$, respectively; likewise, (D6) in Table 2 is obtainable by replacing $B^\top, Q, \lambda^{-1}$ in (P6) with $B_r^{-1}, Q^{-1}, \lambda$, respectively. In Table 1 and Table 2, we may observe five other pairs of regressions that are duals of each other. From the seven dual pairs shown in Table 1 and Table 2, we observe that the following principle exists:
Proposition 2 (Principle of duality).
The regressions labeled with the letter D in Table 1 and Table 2, for example, (D1), are obtainable by replacing each occurrence of $A ⊤ , B ⊤ , D ⊤ , Q , Q − 1 , λ , λ − 1$ in the regressions labeled with the letter P, for example, (P1), by $A r − 1 , B r − 1 , D r − 1 , Q − 1 , Q , λ − 1 , λ$, respectively.

## 4. Results That Are Obtainable from the Regressions

In this section, we show how the regressions listed in the previous section are of use in obtaining a deeper understanding of fitting a cubic smoothing spline. Before proceeding, recall that $\hat{f} = \hat{\tau} + \hat{c}$ and so on.
First, given that (16) is a ridge regression, it immediately follows that $\lim_{\lambda \to \infty} \hat{\gamma} = 0$, which leads to $\lim_{\lambda \to \infty} \hat{c} = A_r^{-1} \lim_{\lambda \to \infty} \hat{\gamma} = 0$, and at the same time we have
$$\lim_{\lambda \to \infty} \hat{f} = \hat{\tau} + \lim_{\lambda \to \infty} \hat{c} = \hat{\tau}, \tag{42}$$
$$\lim_{\lambda \to \infty} \hat{u} = y - \hat{\tau} - \lim_{\lambda \to \infty} \hat{c} = y - \hat{\tau}, \tag{43}$$
$$\lim_{\lambda \to \infty} \hat{g} = \hat{\tau} + \lim_{\lambda \to \infty} \hat{u} = \hat{\tau} + (y - \hat{\tau}) = y. \tag{44}$$
Second, since (25) is again a ridge regression, we have $\lim_{\lambda \to 0} \hat{\eta} = 0$, which yields $\lim_{\lambda \to 0} \hat{u} = A^\top \lim_{\lambda \to 0} \hat{\eta} = 0$, and accordingly we obtain
$$\lim_{\lambda \to 0} \hat{f} = y - \lim_{\lambda \to 0} \hat{u} = y, \tag{45}$$
$$\lim_{\lambda \to 0} \hat{c} = y - \hat{\tau} - \lim_{\lambda \to 0} \hat{u} = y - \hat{\tau}, \tag{46}$$
$$\lim_{\lambda \to 0} \hat{g} = \hat{\tau} + \lim_{\lambda \to 0} \hat{u} = \hat{\tau}. \tag{47}$$
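These limits can be observed numerically by taking $\lambda$ very large or very small; the sketch below (equally spaced case of Appendix A, simulated data, our variable names) checks that $\hat{f}$ approaches $\hat{\tau}$ and $y$, respectively, up to solver tolerance:

```python
import numpy as np

n = 9
rng = np.random.default_rng(2)
y = rng.normal(size=n)
x = np.arange(1.0, n + 1)
Pi = np.column_stack([np.ones(n), x])
tau = Pi @ np.linalg.solve(Pi.T @ Pi, Pi.T @ y)   # tau_hat

# Equally spaced case: C = D(2)D(1), R tridiagonal Toeplitz, A = R^{-1/2} C.
C = np.diff(np.eye(n), n=2, axis=0)
R = (2/3) * np.eye(n - 2) + (1/6) * (np.eye(n - 2, k=1) + np.eye(n - 2, k=-1))
w, V = np.linalg.eigh(R)
A = (V @ np.diag(w ** -0.5) @ V.T) @ C

def f_hat(lam):
    """The fit tau_hat + c_hat at smoothing parameter lam."""
    return np.linalg.solve(np.eye(n) + lam * A.T @ A, y)
```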
Third, from (19) and $\hat{u} = y - \hat{\tau} - \hat{c}$, we have
$$\hat{c} = \left( I_n + \lambda A^\top A \right)^{-1} (y - \hat{\tau}), \tag{48}$$
$$\hat{u} = \left[ I_n - \left( I_n + \lambda A^\top A \right)^{-1} \right] (y - \hat{\tau}). \tag{49}$$
Thus, $\hat{f}$ can be represented as
$$\hat{f} = \hat{\tau} + \left( I_n + \lambda A^\top A \right)^{-1} (y - \hat{\tau}). \tag{50}$$
Here, we remark that, given that $I n + λ A ⊤ A − 1$ is a smoother matrix, the second term on the right-hand side of (50) represents a low-frequency part of $y − τ ^$. In addition, from (49), $u ^$ represents a high-frequency part of $y − τ ^$. Thus, $c ^$ is generally smoother than $u ^$.
Fourth, given $A P = 0$, $A r − 1 ⊤ P = 0$, $c ^ = A r − 1 γ ^$, and $u ^ = A ⊤ η ^$, we have
$$\hat{\zeta}^\top \hat{\tau} = 0, \qquad \hat{\zeta} = \hat{c}, \hat{u}, \hat{h}. \tag{51}$$
Fifth, given $A P = 0$, $A r − 1 ⊤ P = 0$, (28), and
$$\left( I_n + \lambda^{-1} A_r^{-1} (A_r^{-1})^\top \right)^{-1} = I_n - A_r^{-1} \left( (A_r^{-1})^\top A_r^{-1} + \lambda I_{n-2} \right)^{-1} (A_r^{-1})^\top, \tag{52}$$
if $y ∈ S ( P )$, or in other words, if $y = P ψ$, then we have
$$\hat{\tau} = y, \quad \hat{f} = y, \quad \hat{g} = y, \quad \hat{c} = 0, \quad \hat{u} = 0, \quad \hat{h} = 0. \tag{53}$$
Sixth, given $ι n ∈ S ( P )$, we have
$$P_P \, \iota_n = \iota_n, \tag{54}$$
$$\left( I_n + \lambda A^\top A \right)^{-1} \iota_n = \iota_n, \tag{55}$$
$$\left( I_n + \lambda^{-1} A_r^{-1} (A_r^{-1})^\top \right)^{-1} \iota_n = \iota_n, \tag{56}$$
$$A_r^{-1} \left( (A_r^{-1})^\top A_r^{-1} + \lambda I_{n-2} \right)^{-1} (A_r^{-1})^\top \iota_n = 0, \tag{57}$$
$$A^\top \left( A A^\top + \lambda^{-1} I_{n-2} \right)^{-1} A \, \iota_n = 0, \tag{58}$$
$$P_{D^\top} \, \iota_n = 0. \tag{59}$$
Note that $( I n + λ A ⊤ A ) − 1 ι n = ι n$, for example, indicates that the sum of the entries in each row of the hat matrix of $f ^$ equals unity.
Seventh, given (54)–(59), we have
$$\frac{1}{n} \iota_n^\top \hat{\zeta} = \bar{y}, \qquad \hat{\zeta} = \hat{\tau}, \hat{f}, \hat{g}, \tag{60}$$
$$\frac{1}{n} \iota_n^\top \hat{\zeta} = 0, \qquad \hat{\zeta} = \hat{c}, \hat{u}, \hat{h}, \tag{61}$$
where $\bar{y} = \frac{1}{n} \sum_{i=1}^n y_i$. For example, $\frac{1}{n} \iota_n^\top \hat{f} = \bar{y}$ shows that $\frac{1}{n} \sum_{i=1}^n \hat{f}_i = \bar{y}$.
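The zero-average and mean-preservation results above are also easy to confirm numerically (equally spaced case, simulated data, our variable names):

```python
import numpy as np

n, lam = 9, 3.0
rng = np.random.default_rng(3)
y = rng.normal(size=n)
x = np.arange(1.0, n + 1)
Pi = np.column_stack([np.ones(n), x])
tau = Pi @ np.linalg.solve(Pi.T @ Pi, Pi.T @ y)   # tau_hat

# Equally spaced case: C = D(2)D(1), R tridiagonal Toeplitz, A = R^{-1/2} C.
C = np.diff(np.eye(n), n=2, axis=0)
R = (2/3) * np.eye(n - 2) + (1/6) * (np.eye(n - 2, k=1) + np.eye(n - 2, k=-1))
w, V = np.linalg.eigh(R)
A = (V @ np.diag(w ** -0.5) @ V.T) @ C

f = np.linalg.solve(np.eye(n) + lam * A.T @ A, y)  # tau_hat + c_hat
c, u = f - tau, y - f                              # c_hat, u_hat
```

Here the means of $\hat{\tau}$ and $\hat{f}$ coincide with $\bar{y}$, while $\hat{c}$ and $\hat{u}$ average to zero.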

## 5. Illustrations of Some Results

In this section, we illustrate some of the results in the previous sections by a real data example.
Panel A of Figure 1 shows a scatter plot of North Pacific sea surface temperature (SST) anomalies (1891–2018). SST is an essential climate variable and has been used for the detection of climate change. See, for example, Høyer and Karagali [7] and the references therein. We obtained the data from the website of the Japan Meteorological Agency. The solid line in the panel plots $( x i , τ ^ i )$ for $i = 1 , … , n$, where $τ ^ = [ τ ^ 1 , … , τ ^ n ] ⊤$ in (4) and $n = 128$. Panel B of Figure 1 depicts a scatter plot of $( x i , y i − τ ^ i )$ for $i = 1 , … , n$. The solid line in the panel plots $( x i , c ^ i )$ for $i = 1 , … , n$, where $c ^ = [ c ^ 1 , … , c ^ n ] ⊤$ is calculated by (18) with $λ = 10 3$. The solid line in Panel C denotes $( x i , f ^ i )$, where $f ^ = [ f ^ 1 , … , f ^ n ] ⊤$ is calculated by (14) with $λ = 10 3$. Panel D illustrates a scatter plot of $( x i , y i − τ ^ i )$ for $i = 1 , … , n$. The solid line in the panel plots $( x i , u ^ i )$ for $i = 1 , … , n$, where $u ^ = [ u ^ 1 , … , u ^ n ] ⊤$ is calculated by (27) with $λ = 10 3$. Figure 2, Figure 3 and Figure 4 correspond to the cases such that $λ = 10 5 , 10 10 , 10 − 10$, respectively.
Recall that concerning $y$, $τ ^$, $c ^$, $f ^$, and $u ^$, the following equations hold:
$$\begin{gathered}
\hat{\tau} + \hat{c} = \hat{f}, \qquad \hat{c} + \hat{u} = y - \hat{\tau}, \\
\lim_{\lambda \to \infty} \hat{c} = 0, \qquad \lim_{\lambda \to \infty} \hat{f} = \hat{\tau}, \qquad \lim_{\lambda \to \infty} \hat{u} = y - \hat{\tau}, \\
\lim_{\lambda \to 0} \hat{c} = y - \hat{\tau}, \qquad \lim_{\lambda \to 0} \hat{f} = y, \qquad \lim_{\lambda \to 0} \hat{u} = 0.
\end{gathered}$$
Figure 1, Figure 2, Figure 3 and Figure 4 illustrate these theoretical results well. For example, from Panel D in Figure 4, we can observe that $\hat{u}$ almost equals $0$ when $\lambda = 10^{-10}$.

## 6. The Case Where Other Right-Inverse Matrices Are Used

In this section, we illustrate what happens if other right-inverse matrices are used.
Let $M \in \mathbb{R}^{m \times n}$ be of full row rank and recall that $M_r^{-1}$ denotes $M^\top (M M^\top)^{-1}$, which is a right-inverse matrix of $M$. Define the set of matrices
$$\Gamma_M = \{ \Xi \in \mathbb{R}^{n \times m} : M \Xi = I_m \}.$$
$\Gamma_M$ is the set of right-inverse matrices of $M$, and accordingly $M_r^{-1}$ belongs to $\Gamma_M$.
Lemma 14.
$N = M r − 1$ if and only if $N ∈ Γ M$ and $S ( N ) = S ( M ⊤ )$.
Proof of Lemma 14.
It is clear that if $N = M r − 1$, then $N ∈ Γ M$ and $S ( N ) = S ( M ⊤ )$. Conversely, suppose that $N ∈ Γ M$ and $S ( N ) = S ( M ⊤ )$. Then, $M N = I m$ and there exists a nonsingular matrix $Σ ∈ R m × m$ such that $N = M ⊤ Σ$. By removing $N$ from these equations, we have $Σ = ( M M ⊤ ) − 1$, which leads to $N = M ⊤ ( M M ⊤ ) − 1 = M r − 1$. □
From Lemma 14, if $N ≠ M r − 1$, then $N ∉ Γ M$ or $S ( N ) ≠ S ( M ⊤ )$. Accordingly, we have the following result:
Proposition 3.
If $N ∈ Γ M \ { M r − 1 }$, then $S ( N ) ≠ S ( M ⊤ )$.
Based on this result, we give an example of what happens when another right-inverse matrix is used. Let $Z \in \Gamma_D \setminus \{D_r^{-1}\}$. Then, from Proposition 3 and Lemma 3, it follows that $S(Z) \ne S(D_r^{-1}) = S^\perp(\Pi)$. Accordingly, letting $L = [\Pi, Z]$, it follows that $Z^\top \Pi \ne 0$ and $D L = [D \Pi, D Z] = [0, I_{n-2}]$. In addition, given that $D \Pi = 0$, $D Z = I_{n-2}$, and $\Pi$ is of full column rank, $L$ is nonsingular. Thus, from [8], for example, we have
$$\hat{f} = L \left( L^\top L + \lambda L^\top D^\top D L \right)^{-1} L^\top y = \Pi \hat{\pi} + Z \hat{\varepsilon},$$
where
$$\hat{\pi} = \operatorname*{arg\,min}_{\pi \in \mathbb{R}^2} \|(y - Z \hat{\varepsilon}) - \Pi \pi\|^2 = (\Pi^\top \Pi)^{-1} \Pi^\top (y - Z \hat{\varepsilon})$$
and
$$\hat{\varepsilon} = \operatorname*{arg\,min}_{\varepsilon \in \mathbb{R}^{n-2}} \|Q_\Pi y - Q_\Pi Z \varepsilon\|^2 + \lambda \|\varepsilon\|^2 = \left( Z^\top Q_\Pi Z + \lambda I_{n-2} \right)^{-1} Z^\top Q_\Pi y,$$
which shows that we may obtain (penalized) regressions relating to the cubic smoothing spline even if we use a right-inverse matrix of $D$ other than $D_r^{-1}$, i.e., $Z \in \Gamma_D \setminus \{D_r^{-1}\}$. Nevertheless, as illustrated here, such regressions are more complex than those shown in Table 1 and Table 2.
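This claim can be verified numerically: any right inverse of $D$ has the form $D_r^{-1} + \Pi W$, and partialling out $\Pi$ as above still recovers $\hat{f}$. The sketch below (equally spaced case of Appendix A, simulated data, our variable names) checks this:

```python
import numpy as np

n, lam = 9, 2.0
rng = np.random.default_rng(4)
y = rng.normal(size=n)
x = np.arange(1.0, n + 1)
Pi = np.column_stack([np.ones(n), x])

# Equally spaced case: C = D(2)D(1), Rm tridiagonal Toeplitz, D = Rm^{-1/2} C.
C = np.diff(np.eye(n), n=2, axis=0)
Rm = (2/3) * np.eye(n - 2) + (1/6) * (np.eye(n - 2, k=1) + np.eye(n - 2, k=-1))
w, V = np.linalg.eigh(Rm)
D = (V @ np.diag(w ** -0.5) @ V.T) @ C

Dr = D.T @ np.linalg.inv(D @ D.T)            # D_r^{-1}
Z = Dr + Pi @ rng.normal(size=(2, n - 2))    # another right inverse: D Z = I
Qpi = np.eye(n) - Pi @ np.linalg.solve(Pi.T @ Pi, Pi.T)
eps = np.linalg.solve(Z.T @ Qpi @ Z + lam * np.eye(n - 2), Z.T @ Qpi @ y)
pi_hat = np.linalg.solve(Pi.T @ Pi, Pi.T @ (y - Z @ eps))
f_alt = Pi @ pi_hat + Z @ eps                # Pi pi_hat + Z eps_hat
f = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
```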

## 7. Concluding Remarks

In this paper, we provided a comprehensive list of penalized least squares regressions relating to the cubic smoothing spline, and then revealed a principle of duality in them. This is the main contribution of this study. Such penalized regressions are tabulated in Table 1 and Table 2 and the principle of duality revealed is stated in Proposition 2. In addition, we also provided a number of results derived from them, most of which are also tabulated in Table 1 and Table 2 and some of which are illustrated in Figure 1, Figure 2, Figure 3 and Figure 4.

## Author Contributions

R.D. wrote an initial draft under the supervision of H.Y. and then H.Y. edited the manuscript. All authors have read and agreed to the published version of the manuscript.

## Funding

The Japan Society for the Promotion of Science supported this work through KAKENHI Grant Number 16H03606.

## Acknowledgments

We thank Kazuhiko Hayakawa and three anonymous referees for their valuable comments on an earlier version of this paper. The usual caveat applies.

## Conflicts of Interest

The authors declare no conflict of interest.

## Appendix A

#### Appendix A.1. Some Remarks on a Special Case Such That x = [1,…,n]⊤

(i)
If $x = [ 1 , … , n ] ⊤$, then $C = D ( 2 ) D ( 1 ) ∈ R ( n − 2 ) × n$, which is a Toeplitz matrix of which the first (resp. last) row is $[ 1 , − 2 , 1 , 0 , … , 0 ]$ (resp. $[ 0 , … , 0 , 1 , − 2 , 1 ]$).
(ii)
If $x = [1, \ldots, n]^\top$, then $(I_n + \lambda C^\top R^{-1} C)^{-1}$ is bisymmetric (i.e., symmetric centrosymmetric), which may be proved as in [20].
(iii)
If $x = [ 1 , … , n ] ⊤$, then $R$ in (8) is not only a symmetric tridiagonal matrix but also a Toeplitz matrix. In the case, we have
$$\omega_k = \frac{2}{3} + \frac{1}{3} \cos \frac{k \pi}{n - 1}, \qquad k = 1, \ldots, n - 2, \tag{A1}$$
and thus $ω n − 2$, which is the smallest eigenvalue of $R$, satisfies the following inequality (see, e.g., [9]):
$$\omega_{n-2} = \frac{2}{3} + \frac{1}{3} \cos \frac{(n - 2) \pi}{n - 1} > \frac{1}{3}. \tag{A2}$$
(iv)
If $x = [ 1 , … , n ] ⊤$ and $R = I n − 2$ in (2) and (3), then (2) and (3) reduce to
$$\hat{f} = \operatorname*{arg\,min}_{f \in \mathbb{R}^n} \|y - f\|^2 + \lambda \left\| D^{(2)} D^{(1)} f \right\|^2 = \left( I_n + \lambda \left( D^{(2)} D^{(1)} \right)^\top \left( D^{(2)} D^{(1)} \right) \right)^{-1} y. \tag{A3}$$
It is a type of the Whittaker–Henderson (WH) method of graduation, which was developed by Bohlmann [10], Whittaker [11] and others. See Weinert [12] for a historical review of the WH method of graduation. (A3) is also referred to as the Hodrick–Prescott (HP) [13] filtering in econometrics. For more details about the HP filtering, see, for example, Schlicht [14], Kim et al. [15], Paige and Trindade [16], and Yamada [17,18,19,20,21].
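In this equally spaced case the method is simple to implement; the sketch below applies (A3) with $\lambda = 1600$, a conventional choice for quarterly data in the HP-filtering literature (the series is simulated for illustration):

```python
import numpy as np

n, lam = 40, 1600.0
rng = np.random.default_rng(5)
# Simulated series: linear trend + cycle + noise (illustrative only).
y = 0.05 * np.arange(n) + np.sin(np.arange(n) / 4) + rng.normal(scale=0.1, size=n)

K = np.diff(np.eye(n), n=2, axis=0)                      # K = D(2) D(1)
trend = np.linalg.solve(np.eye(n) + lam * K.T @ K, y)    # eq. (A3)
```

Because $K \iota_n = 0$ and $K x = 0$, the filter reproduces any linear series exactly and preserves the sample mean.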

## References

1. Schoenberg, I.J. Spline functions and the problem of graduation. Proc. Natl. Acad. Sci. USA 1964, 52, 947–950. [Google Scholar] [CrossRef] [PubMed][Green Version]
2. Reinsch, C. Smoothing by spline functions. Numer. Math. 1967, 10, 177–183. [Google Scholar] [CrossRef]
3. Green, P.J.; Silverman, B.W. Nonparametric Regression and Generalized Linear Models: A roughness Penalty Approach; Chapman and Hall/CRC: Boca Raton, FL, USA, 1994. [Google Scholar]
4. Wood, S.N. Generalized Additive Models: An Introduction with R, 2nd ed.; Chapman & Hall/CRC: Boca Raton, FL, USA, 2017. [Google Scholar]
5. Hoerl, A.E.; Kennard, R.W. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics 1970, 12, 55–67. [Google Scholar] [CrossRef]
6. Verbyla, A.P.; Cullis, B.R.; Kenward, M.G.; Welham, S.J. The analysis of designed experiments and longitudinal data by using smoothing splines. J. R. Stat. Soc. 1999, 48, 269–311. [Google Scholar] [CrossRef]
7. Høyer, J.L.; Karagali, I. Sea surface temperature climate data record for the North Sea and Baltic Sea. J. Clim. 2016, 29, 2529–2541. [Google Scholar] [CrossRef]
8. Yamada, H. The Frisch–Waugh–Lovell theorem for the lasso and the ridge regression. Commun. Stat. Theory Methods 2017, 46, 10897–10902. [Google Scholar] [CrossRef]
9. Pesaran, M.H. Exact maximum likelihood estimation of a regression equation with a first-order moving-average error. Rev. Econ. Stud. 1973, 40, 529–535. [Google Scholar] [CrossRef]
10. Bohlmann, G. Ein Ausgleichungsproblem. Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse 1899, 1899, 260–271. [Google Scholar]
11. Whittaker, E.T. On a new method of graduation. Proc. Edinb. Math. Soc. 1923, 41, 63–75. [Google Scholar] [CrossRef][Green Version]
12. Weinert, H.L. Efficient computation for Whittaker–Henderson smoothing. Comput. Stat. Data Anal. 2007, 52, 959–974. [Google Scholar] [CrossRef]
13. Hodrick, R.J.; Prescott, E.C. Postwar U.S. business cycles: An empirical investigation. J. Money Credit. Bank. 1997, 29, 1–16. [Google Scholar] [CrossRef]
14. Schlicht, E. Estimating the smoothing parameter in the so-called Hodrick–Prescott filter. J. Jpn. Stat. Soc. 2005, 35, 99–119. [Google Scholar] [CrossRef][Green Version]
15. Kim, S.; Koh, K.; Boyd, S.; Gorinevsky, D. ℓ1 trend filtering. SIAM Rev. 2009, 51, 339–360. [Google Scholar] [CrossRef]
16. Paige, R.L.; Trindade, A.A. The Hodrick–Prescott filter: A special case of penalized spline smoothing. Electron. J. Stat. 2010, 4, 856–874. [Google Scholar] [CrossRef]
17. Yamada, H. Ridge regression representations of the generalized Hodrick–Prescott filter. J. Jpn. Stat. Soc. 2015, 45, 121–128. [Google Scholar] [CrossRef][Green Version]
18. Yamada, H. Why does the trend extracted by the Hodrick–Prescott filtering seem to be more plausible than the linear trend? Appl. Econ. Lett. 2018, 25, 102–105. [Google Scholar] [CrossRef]
19. Yamada, H. Several least squares problems related to the Hodrick–Prescott filtering. Commun. Stat. Theory Methods 2018, 47, 1022–1027. [Google Scholar] [CrossRef]
20. Yamada, H. A note on Whittaker–Henderson graduation: Bisymmetry of the smoother matrix. Commun. Stat. Theory Methods 2020, 49, 1629–1634. [Google Scholar] [CrossRef]
21. Yamada, H. A smoothing method that looks like the Hodrick–Prescott filter. Econom. Theory 2020. [Google Scholar] [CrossRef]
Figure 1. Panel A shows a scatter plot of North Pacific sea surface temperature anomalies (1891–2018). The solid line in the panel plots $( x i , τ ^ i )$ for $i = 1 , … , n$, where $τ ^ = [ τ ^ 1 , … , τ ^ n ] ⊤$ is defined in (4) and $n = 128$. Panel B depicts a scatter plot of $( x i , y i − τ ^ i )$ for $i = 1 , … , n$; the solid line plots $( x i , c ^ i )$ for $i = 1 , … , n$, where $c ^ = [ c ^ 1 , … , c ^ n ] ⊤$ is calculated by (18) with $λ = 10 3$. The solid line in Panel C plots $( x i , f ^ i )$, where $f ^ = [ f ^ 1 , … , f ^ n ] ⊤$ is calculated by (14) with $λ = 10 3$. Panel D again depicts a scatter plot of $( x i , y i − τ ^ i )$ for $i = 1 , … , n$; the solid line plots $( x i , u ^ i )$, where $u ^ = [ u ^ 1 , … , u ^ n ] ⊤$ is calculated by (27) with $λ = 10 3$.
Figure 2. This figure corresponds to the case where $λ = 10 5$. For other details, see the caption of Figure 1.
Figure 3. This figure corresponds to the case where $λ = 10 10$. For other details, see the caption of Figure 1.
Figure 4. This figure corresponds to the case where $λ = 10 − 10$. For other details, see the caption of Figure 1.
Table 1. Most of the main results (I).

| | Regressions Relating to the Cubic Smoothing Spline | Average | $λ → ∞$ | $λ → 0$ | $y ∈ S ( P )$ | Sum |
|---|---|---|---|---|---|---|
| (P1) | $f ^ ( = τ ^ + c ^ ) = arg min f ∈ R n ∥ y − f ∥ 2 + λ ∥ A f ∥ 2 = ( I n + λ A ⊤ A ) − 1 y$ | $y ¯$ | $τ ^$ | $y$ | $y$ | 1 |
| (D1) | $g ^ ( = τ ^ + u ^ ) = arg min g ∈ R n ∥ y − g ∥ 2 + λ − 1 ∥ A r − 1 ⊤ g ∥ 2 = ( I n + λ − 1 A r − 1 A r − 1 ⊤ ) − 1 y$ | $y ¯$ | $y$ | $τ ^$ | $y$ | 1 |
| (P2) | $u ^ = A ⊤ η ^$, where $η ^ = arg min η ∈ R n − 2 ∥ y − A ⊤ η ∥ 2 + λ − 1 ∥ η ∥ 2 = ( A A ⊤ + λ − 1 I n − 2 ) − 1 A y$ | 0 | $y − τ ^$ | $0$ | $0$ | 0 |
| (D2) | $c ^ = A r − 1 γ ^$, where $γ ^ = arg min γ ∈ R n − 2 ∥ y − A r − 1 γ ∥ 2 + λ ∥ γ ∥ 2 = ( A r − 1 ⊤ A r − 1 + λ I n − 2 ) − 1 A r − 1 ⊤ y$ | 0 | $0$ | $y − τ ^$ | $0$ | 0 |
| (P3) | $c ^ = arg min c ∈ R n ∥ ( y − τ ^ ) − c ∥ 2 + λ ∥ A c ∥ 2 = ( I n + λ A ⊤ A ) − 1 ( y − τ ^ )$ | 0 | $0$ | $y − τ ^$ | $0$ | 0 |
| (D3) | $u ^ = arg min u ∈ R n ∥ ( y − τ ^ ) − u ∥ 2 + λ − 1 ∥ A r − 1 ⊤ u ∥ 2 = ( I n + λ − 1 A r − 1 A r − 1 ⊤ ) − 1 ( y − τ ^ )$ | 0 | $y − τ ^$ | $0$ | $0$ | 0 |
| (P4) | $h ^ ( = c ^ + u ^ ) = D ⊤ α ^$, where $α ^ = arg min α ∈ R n − 2 ∥ y − D ⊤ α ∥ 2 = ( D D ⊤ ) − 1 D y$ | 0 | $y − τ ^$ | $y − τ ^$ | $0$ | 0 |
| (D4) | $h ^ ( = c ^ + u ^ ) = D r − 1 ρ ^$, where $ρ ^ = arg min ρ ∈ R n − 2 ∥ y − D r − 1 ρ ∥ 2 = ( D r − 1 ⊤ D r − 1 ) − 1 D r − 1 ⊤ y$ | 0 | $y − τ ^$ | $y − τ ^$ | $0$ | 0 |
| | $τ ^ = P β ^$, where $β ^ = arg min β ∈ R 2 ∥ y − P β ∥ 2 = ( P ⊤ P ) − 1 P ⊤ y$ | $y ¯$ | $τ ^$ | $τ ^$ | $y$ | 1 |

$y = τ ^ + c ^ + u ^$. $A = D , F$. $D = C , D , E , F$. $M r − 1 = M ⊤ ( M M ⊤ ) − 1$ for $M = A , D$. $P = Π , T$. $λ > 0$ is a smoothing/tuning parameter. $y ¯ = 1 n ∑ i = 1 n y i$. $S ( P )$ denotes the column space of $P$. ‘Sum’ denotes the sum of the entries in each row of the hat matrices. ∘ indicates that the corresponding component belongs to the orthogonal complement of $S ( P )$.
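The duality between (P1) and (D1) can be checked numerically. The sketch below is an illustration, not code from the paper: it takes $A = D$, a second-order difference matrix (one of the choices listed in the footnote), with equally spaced points so that $S ( P )$ is spanned by a constant and a linear trend; the synthetic data and $λ = 10$ are arbitrary choices. It verifies the decomposition $y = τ ^ + c ^ + u ^$ from the table, equivalently $f ^ + g ^ = τ ^ + y$.

```python
import numpy as np

# Illustrative check of (P1)/(D1) with A = D, the second-order
# difference matrix; synthetic data, arbitrary lambda.
n = 30
rng = np.random.default_rng(0)
x = np.arange(1.0, n + 1)                      # equally spaced points
y = 0.1 * x + np.sin(x / 3) + 0.3 * rng.standard_normal(n)

D = np.zeros((n - 2, n))                       # rows: (1, -2, 1)
for i in range(n - 2):
    D[i, i:i + 3] = [1.0, -2.0, 1.0]

lam = 10.0                                     # smoothing/tuning parameter
I = np.eye(n)

# (P1): fhat = (I_n + lam * D'D)^{-1} y
fhat = np.linalg.solve(I + lam * D.T @ D, y)

# (D1): ghat = (I_n + lam^{-1} * D_r^{-1} D_r^{-1}')^{-1} y,
# with the right inverse D_r^{-1} = D'(DD')^{-1}
Dr = D.T @ np.linalg.inv(D @ D.T)
ghat = np.linalg.solve(I + (1.0 / lam) * Dr @ Dr.T, y)

# tauhat: projection of y onto S(P), P = [1, x] (the null space of D)
P = np.column_stack([np.ones(n), x])
tau = P @ np.linalg.lstsq(P, y, rcond=None)[0]

# y = tauhat + chat + uhat, with chat = fhat - tau, uhat = ghat - tau
chat, uhat = fhat - tau, ghat - tau
assert np.allclose(y, tau + chat + uhat)
assert np.allclose(fhat + ghat, tau + y)       # equivalent duality identity
```

Consistent with the table's limit columns, letting $λ → ∞$ drives `fhat` toward `tau` and `ghat` toward `y`, and the roles swap as $λ → 0$.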
Table 2. Most of the main results (II).

| | Regressions Relating to the Cubic Smoothing Spline | Average | $λ → ∞$ | $λ → 0$ | $y ∈ S ( P )$ | Sum |
|---|---|---|---|---|---|---|
| (P5) | $f ^ ( = τ ^ + c ^ ) = arg min f ∈ R n ∥ y − f ∥ 2 + λ f ⊤ B ⊤ Q − 1 B f = ( I n + λ B ⊤ Q − 1 B ) − 1 y$ | $y ¯$ | $τ ^$ | $y$ | $y$ | 1 |
| (D5) | $g ^ ( = τ ^ + u ^ ) = arg min g ∈ R n ∥ y − g ∥ 2 + λ − 1 g ⊤ B r − 1 Q B r − 1 ⊤ g = ( I n + λ − 1 B r − 1 Q B r − 1 ⊤ ) − 1 y$ | $y ¯$ | $y$ | $τ ^$ | $y$ | 1 |
| (P6) | $u ^ = B ⊤ υ ^$, where $υ ^ = arg min υ ∈ R n − 2 ∥ y − B ⊤ υ ∥ 2 + λ − 1 υ ⊤ Q υ = ( B B ⊤ + λ − 1 Q ) − 1 B y$ | 0 | $y − τ ^$ | $0$ | $0$ | 0 |
| (D6) | $c ^ = B r − 1 κ ^$, where $κ ^ = arg min κ ∈ R n − 2 ∥ y − B r − 1 κ ∥ 2 + λ κ ⊤ Q − 1 κ = ( B r − 1 ⊤ B r − 1 + λ Q − 1 ) − 1 B r − 1 ⊤ y$ | 0 | $0$ | $y − τ ^$ | $0$ | 0 |
| (P7) | $c ^ = arg min c ∈ R n ∥ ( y − τ ^ ) − c ∥ 2 + λ c ⊤ B ⊤ Q − 1 B c = ( I n + λ B ⊤ Q − 1 B ) − 1 ( y − τ ^ )$ | 0 | $0$ | $y − τ ^$ | $0$ | 0 |
| (D7) | $u ^ = arg min u ∈ R n ∥ ( y − τ ^ ) − u ∥ 2 + λ − 1 u ⊤ B r − 1 Q B r − 1 ⊤ u = ( I n + λ − 1 B r − 1 Q B r − 1 ⊤ ) − 1 ( y − τ ^ )$ | 0 | $y − τ ^$ | $0$ | $0$ | 0 |

$y = τ ^ + c ^ + u ^$. $( B , Q ) = ( C , R ) , ( E , S − 1 )$. $B r − 1 = B ⊤ ( B B ⊤ ) − 1$. $λ > 0$ is a smoothing/tuning parameter. $y ¯ = 1 n ∑ i = 1 n y i$. $S ( P )$ denotes the column space of $P$, where $P = Π , T$. ‘Sum’ denotes the sum of the entries in each row of the hat matrices. ∘ indicates that the corresponding component belongs to the orthogonal complement of $S ( P )$.
 Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Share and Cite

MDPI and ACS Style

Du, R.; Yamada, H. Principle of Duality in Cubic Smoothing Spline. Mathematics 2020, 8, 1839. https://doi.org/10.3390/math8101839
