
# The VIT Transform Approach to Discrete-Time Signals and Linear Time-Varying Systems

by
Edward W. Kamen
School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
Eng 2021, 2(1), 99-125; https://doi.org/10.3390/eng2010008
Submission received: 24 January 2021 / Accepted: 16 February 2021 / Published: 10 March 2021

## Abstract

A transform approach based on a variable initial time (VIT) formulation is developed for discrete-time signals and linear time-varying discrete-time systems or digital filters. The VIT transform is a formal power series in $z^{-1}$, which converts functions given by linear time-varying difference equations into left polynomial fractions with variable coefficients, and with initial conditions incorporated into the framework. It is shown that the transform satisfies a number of properties that are analogous to those of the ordinary $z$-transform, and that it is possible to do scaling of $z^{-i}$ by time functions, which results in left-fraction forms for the transform of a large class of functions including sinusoids with general time-varying amplitudes and frequencies. Using the extended right Euclidean algorithm in a skew polynomial ring with time-varying coefficients, it is shown that a sum of left polynomial fractions can be written as a single fraction, which results in linear time-varying recursions for the inverse transform of the combined fraction. The extraction of a first-order term from a given polynomial fraction is carried out in terms of the evaluation of $z^i$ at time functions. In the application to linear time-varying systems, it is proved that the VIT transform of the system output is equal to the product of the VIT transform of the input and the VIT transform of the unit-pulse response function. For systems given by a time-varying moving average or an autoregressive model, the transform framework is used to determine the steady-state output response resulting from various signal inputs such as the step and cosine functions.

## 1. Introduction

The introduction of a time-varying $z$-transform for the study of linear time-varying discrete-time systems or digital filters goes back to the discrete-time counterpart of the Zadeh system function, which first appeared in [1]. In that work, linear time-varying systems/filters are studied in terms of the time-varying $z$-transform
$H_{Zadeh}(z,k) = \sum_{i=0}^{\infty} h(k, k-i)\, z^{-i} ,$
where $k$ is an integer-valued variable and $h(n,k)$ is the unit-pulse response function of the system. For papers that utilize this construction, see [2,3,4,5]. It is known that it is not possible to express the $z$-transform of the output response as the product of $H_{Zadeh}(z,k)$ with the $z$-transform of the input. Moreover, as discussed in [6], when the system or filter is finite-dimensional, this transform is seldom expressible as a polynomial fraction in $z$ with time-varying coefficients. These two limitations were circumvented in [7] by defining the transfer function to be the formal power series
$H(z,k) = \sum_{i=0}^{\infty} h(k+i, k)\, z^{-i} . \qquad (1)$
In addition, in [7] the generalized $z$-transform of a discrete-time signal $x(n)$ is defined to be
$\hat{x}(z,k) = \sum_{i=0}^{\infty} z^{-i} x(i)\, \delta(k) , \qquad (2)$
where $\delta(k)$ is the unit-pulse function ($\delta(0) = 1$ and $\delta(k) = 0$ for $k \neq 0$). Then, as shown in [7], $\hat{y}(z,k) = H(z,k)\, \hat{u}(z,k)$, where $\hat{u}(z,k)$ and $\hat{y}(z,k)$ are the generalized $z$-transforms of the input and output, respectively. It is also shown that if the system is given by a finite-dimensional state representation, the transfer function is a matrix polynomial fraction in $z$ with time-varying coefficients.
The generalized $z$-transform defined by Equation (2) is equal to the ordinary $z$-transform multiplied on the right by the unit pulse $\delta(k)$. A simple modification of Equation (2) results in a time-varying transform that satisfies a number of basic properties analogous to those of the ordinary $z$-transform. The modification is based on the observation that the generalized $z$-transform defined in Equation (2) can be expressed in the form
$\hat{x}(z,k) = \sum_{i=0}^{\infty} z^{-i} x(i+k)\, \delta(k) . \qquad (3)$
In Equation (3), $x(i+k)$ is the value of the signal $x(n)$ at the time point $n = i+k$, which is $i$ steps after the time point $k$, where $k$ is the initial time. The variable initial time (VIT) transform of the signal $x(n)$ is then defined to be the formal power series
$X(z,k) = \sum_{i=0}^{\infty} z^{-i} x(i+k) .$
Note that $\hat{x}(z,k) = X(z,k)\, \delta(k)$. The VIT transform can also be extended to any two-variable function $f(n,k)$ defined on $Z \times Z$, where $Z$ is the set of integers, and when this extension is applied to a unit-pulse response function $h(n,k)$, the result is the transfer function defined by Equation (1).
The formal definition of the VIT transform and some simple examples of the transform are given in Section 2. Various properties of the VIT transform are proved in Section 3, including the property that multiplication by a function $a(n)$ in the time domain is equivalent to multiplication by $a(k)$ on the left in the VIT transform domain. It is this property, along with the left-shift property, that converts signals or two-variable time functions given by linear time-varying difference equations into left polynomial fractions consisting of polynomials in $z$ with variable coefficients. It is also proved in Section 3 that the transform of a fundamental operation between two functions defined on $Z \times Z$ is equal to the product of the VIT transforms. It is this result that yields a transfer function framework for the study of linear time-varying discrete-time systems.
In Section 4, it is shown that the powers $z^{-i}$ of the symbol $z^{-1}$ can be scaled by a time function, which is given in terms of a semilinear transformation $S_a$ defined on the ring $A$ consisting of all functions from the integers $Z$ into the reals $R$. Given a VIT transform that is a polynomial fraction in $z^{-1}$, the scaling of $z^{-i}$ by a time function results in a large collection of new transforms which are polynomial fractions. This construct results in the generation of a class of signals that satisfy linear time-varying recursions. Examples are given in the case of the Gabor-Morlet wavelet [8] and sinusoids with general time-varying frequencies.
The addition and decomposition of VIT transforms are studied in Section 5. It is shown that the addition of two left polynomial fractions can be expressed in a single-fraction form by using the extended right Euclidean algorithm in a skew (noncommutative) polynomial ring with coefficients in the quotient field of the ring $A$ of time functions. This results in recursions over $A$ for the inverse transform of the sum of the fractions, although in general the recursions may have singularities. The decomposition of a polynomial fraction is carried out in Section 5 in terms of the evaluation of $z^i$ at time functions defined in terms of semilinear transformations. In Section 6, the VIT transform approach is applied to linear time-varying discrete-time systems or digital filters. It is shown that the VIT transform of the system output is equal to the product of the VIT transform of the input and the VIT transform of the unit-pulse response function. This result is used to derive an expression for the steady-state output response resulting from signal inputs having a first-order transform. The focus is on the case when the system is given by a time-varying moving average or autoregressive model. Section 7 contains some concluding comments.

## 2. The VIT Transform

With $Z$ equal to the set of integers and $R$ equal to the field of real numbers, let $A$ denote the set of all functions from $Z$ into $R$. Given $a, b \in A$, we define addition by $(a+b)(n) = a(n) + b(n)$ and multiplication by $(ab)(n) = a(n)\, b(n)$. With these two pointwise operations, $A$ is a commutative ring with multiplicative identity $1(n)$, where $1(n) = 1$ for all $n \in Z$. Let $\sigma$ denote the left shift operator on $A$ defined by $(\sigma a)(n) = a(n+1)$, $a \in A$. With the shift operator $\sigma$, the ring $A$ is called a difference ring.
With $z$ equal to a symbol or indeterminate, let $A((z^{-1}))$ denote the set of all formal Laurent series of the form
$\sum_{i=-N}^{\infty} z^{-i} a_i , \qquad (4)$
where $a_i \in A$. Note that the coefficients of the power series in (4) are written on the right of the $z^{-i}$. With the usual addition of Laurent series and with multiplication defined by
$\Big( \sum_{i} z^{-i} a_i \Big) \Big( \sum_{j} z^{-j} b_j \Big) = \sum_{i} \sum_{j} z^{-(i+j)}\, (\sigma^{j} a_i)\, b_j , \qquad (6)$
$A((z^{-1}))$ is a noncommutative ring with multiplicative identity $1(n)$. Let $A[z]$ denote the subring of $A((z^{-1}))$ consisting of all polynomials in $z$. That is, the elements of $A[z]$ are of the form
$\sum_{i=-N}^{0} z^{-i} a_i = \sum_{i=0}^{N} z^{i} a_{-i} .$
Finally, let $A[[z^{-1}]]$ denote the subring of $A((z^{-1}))$ consisting of all formal power series in $z^{-1}$, given by (4) with $N = 0$.
The rings $A[z]$, $A((z^{-1}))$, and $A[[z^{-1}]]$ are called skew rings due to the noncommutative multiplication defined in Equation (6). Skew polynomial rings were first introduced and studied by Oystein Ore in his 1933 paper [9]. These ring structures have appeared in past work [7,10,11] on the algebraic theory of linear time-varying discrete-time systems.
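As a concrete aside (not from the paper), the multiplication (6) on truncated elements of $A[[z^{-1}]]$ can be sketched in a few lines of Python, with each right coefficient of $z^{-i}$ stored as a callable of the initial time $k$; all names below are illustrative only:

```python
# Sketch of the skew multiplication (6) on truncated power series in z^-1.
# An element of A[[z^-1]] is a list [c0, c1, ...], where ci is the right
# coefficient of z^-i, stored as a function of the initial time k.
# The rule used is z^-i a z^-j b = z^-(i+j) (sigma^j a) b, where
# (sigma^j a)(k) = a(k + j).

def skew_mul(F, G):
    """Product of two truncated series over the difference ring A."""
    H = [lambda k: 0] * (len(F) + len(G) - 1)
    for i, a in enumerate(F):
        for j, b in enumerate(G):
            # accumulate (sigma^j a)(k) * b(k) into the z^-(i+j) coefficient
            H[i + j] = (lambda h, a, b, j: lambda k: h(k) + a(k + j) * b(k))(H[i + j], a, b, j)
    return H

# multiply the coefficient function a(k) = k by the element z^-1:
F = [lambda k: k]                  # the element with z^0 coefficient a(k) = k
G = [lambda k: 0, lambda k: 1]     # the element z^-1
H = skew_mul(F, G)
```

Evaluating `H[1]` shows that the product has $z^{-1}$ coefficient $k+1$, reproducing the commutation identity $k\, z^{-1} = z^{-1} (k+1)$ implied by $k\, z = z\, (k-1)$.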
Now, let $x(n)$ denote a real-valued discrete-time signal. For each fixed integer $i \geq 0$, let $x_i(k) = x(i+k)$, $k \in Z$. Then, $x_i(k)$ is equal to the value of the signal $x(n)$ at the time point $n = i+k$, which is located $i$ steps after the time point $k$, where $k$ is viewed as the initial time. The initial time $k$ is taken to be an integer variable ranging over $Z$. Then, for each fixed $i \geq 0$, $x_i(k)$ is a function from $Z$ into $R$, and thus $x_i(k)$ is an element of the difference ring $A$. If the given signal $x(n)$ is defined only for $n \geq k_0$ for some fixed $k_0 \in Z$, then the values of the $x_i(k)$ are known only for $k \geq k_0$. In this case, the pointwise operations of addition and multiplication can still be carried out on the $x_i(k)$, but the results will be known only for $k \geq k_0$. In addition, for any positive integer $q$, the $q$-step left shift operation can be performed on the $x_i(k)$, but the result $x_i(k+q)$ will be known only for $k+q \geq k_0$, or $k \geq k_0 - q$. Hence, the $x_i(k)$ can still be viewed as elements of the difference ring $A$. Then, we have the following concept.
Definition 1.
The variable initial time (VIT) transform $X(z,k)$ of a real-valued discrete-time signal $x(n)$ is the element of $A[[z^{-1}]]$ defined by
$X(z,k) = \sum_{i=0}^{\infty} z^{-i} x_i(k) = \sum_{i=0}^{\infty} z^{-i} x(i+k) . \qquad (7)$
Note that the coefficients of the power series in Equation (7) are written on the right. As shown below, this leads to left polynomial fractions for the transform in the case when $x(n)$ satisfies a linear time-varying difference equation. Moreover, note that for each fixed integer value of $k$, $X(z,k)$ is the one-sided formal $z$-transform of $x(i+k)$, where “formal” means that $z$ is viewed as a formal symbol, not a complex variable. In particular, $X(z,0)$ is the $z$-transform of $x(n)$, $n \geq 0$. Finally, if the given signal $x(n)$ is defined only for $n \geq k_0$, then the transform $X(z,k)$ is defined only for $k \geq k_0$.
The VIT transform can be extended to any real-valued two-variable function $f(n,k)$ defined on $Z \times Z$: Given $f(n,k)$, the VIT transform $F(z,k)$ of $f$ is defined to be the element of $A[[z^{-1}]]$ given by
$F(z,k) = \sum_{i=0}^{\infty} z^{-i} f(i+k, k) . \qquad (8)$
Given a discrete-time signal $x(n)$, let $f(n,k) = x(n)$. Then, from Equations (7) and (8), the VIT transform $F(z,k)$ of $f(n,k)$ is equal to the VIT transform $X(z,k)$ of $x(n)$. Hence, all of the results derived in this work on the VIT transform of a general two-variable function $f(n,k)$ can be directly applied to the VIT transform of a discrete-time signal $x(n)$. In addition, if we define $f(n,k) = h(n,k)$, where $h(n,k)$ is the unit-pulse response function of a linear time-varying discrete-time system, the VIT transform of $h(n,k)$ is the transfer function of the system as defined in [7]. Thus, results on the VIT transform of a two-variable function can also be directly applied to linear time-varying systems.
Given a VIT transform $F(z,k)$, the original time function $f(n,k)$ can be recovered from the transform by setting $f(i+k,k)$ equal to the right coefficient of $z^{-i}$ in the power series representation given in Equation (8). In the following development, we will use the notation
$f(n,k) \leftrightarrow F(z,k) \qquad (9)$
to denote a VIT transform pair. It should be noted that in operations involving the VIT transform $F(z,k)$, the values of the initial time $k$ can be restricted to a finite interval $k_0 \leq k \leq k_1$, where $k_1 > k_0$. This is illustrated in Section 6, in the application to computing the steady-state output responses to various inputs in a linear time-varying system.
We shall now give some simple examples of the VIT transform. Let the function $f(n,k)$ be the unit pulse $\delta(n-k)$ located at the initial time $k$. Then, $f(i+k,k) = \delta(i)$, and inserting this into Equation (8), we have that the VIT transform is equal to 1 for all $k$. Therefore, we have the transform pair
$\delta(n-k) \leftrightarrow 1 . \qquad (10)$
Now, suppose that $f(n,k) = a^{n-k} f(k)$, $n \geq k$, where $a$ is a nonzero real number and $f(k) = f(k,k)$ is the value of $f$ at the initial time $k$. Then, the VIT transform of $f$ is equal to
$\sum_{i=0}^{\infty} z^{-i} f(i+k, k) = \sum_{i=0}^{\infty} z^{-i} a^{i} f(k) = (z-a)^{-1} z\, f(k) .$
Thus, we have the transform pair
$a^{n-k} f(k) \leftrightarrow (z-a)^{-1} z\, f(k) . \qquad (11)$
Note that the VIT transform in (11) is a fraction.
Given $a \in A$, consider the function $f(n,k)$ defined by the first-order linear time-varying difference equation
$f(n+1, k) = a(n)\, f(n,k), \quad n \geq k , \qquad (12)$
with initial value $f(k,k) = f(k)$ at initial time $k$. The solution to Equation (12) can be written in the product form
$f(n,k) = a(n-1)\, a(n-2) \cdots a(k+1)\, a(k)\, f(k), \quad n \geq k . \qquad (13)$
Note that the variable $k$ in Equation (13) can be evaluated at any specific initial time $k_0$, which gives $f(n,k_0) = a(n-1) \cdots a(k_0)\, f(k_0)$, $n \geq k_0$. Inserting $f(i+k,k)$ into Equation (8), the VIT transform of $f$ is equal to
$\left[\, 1 + z^{-1} a(k) + z^{-2} a(k+1)\, a(k) + z^{-3} a(k+2)\, a(k+1)\, a(k) + \cdots \right] f(k) . \qquad (14)$
The power series in (14) can be written in the left fraction form $(z - a(k))^{-1} z\, f(k)$. To verify this, using the multiplication defined by Equation (6), multiply Equation (14) by $z - a(k)$ on the left. This results in $z\, f(k)$, which proves the validity of the fraction form. Therefore, we have the VIT transform pair
$a(n-1)\, a(n-2) \cdots a(k)\, f(k) \leftrightarrow (z - a(k))^{-1} z\, f(k) . \qquad (15)$
Note that the transform pair (11) follows directly from the transform pair (15) by setting $a(k) = a$ for all $k \in Z$. The left fraction form of the VIT transform given in (15) is a result of the function $f(n,k)$ satisfying the first-order linear time-varying recursion (12). As will be shown below, any $f(n,k)$ satisfying a linear time-varying recursion has a VIT transform which is a left polynomial fraction. This is the primary motivation for considering the VIT transform.
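As a quick numerical check (not part of the paper; the functions `a` and `f0` below are arbitrary choices), the right coefficients of the series (14) can be compared against the solution of the first-order recursion:

```python
# The coefficient of z^-i in (z - a(k))^-1 z f(k) is a(k+i-1)...a(k) f(k),
# which should equal f(i+k, k) obtained by iterating f(n+1,k) = a(n) f(n,k).

import math

a = lambda n: 1.0 + 0.5 * math.sin(n)   # arbitrary time-varying coefficient
f0 = lambda k: 2.0 + k                  # arbitrary initial values f(k)

def f(n, k):
    """Solve f(n+1,k) = a(n) f(n,k) forward from f(k,k) = f0(k)."""
    v = f0(k)
    for m in range(k, n):
        v = a(m) * v
    return v

def series_coeff(i, k):
    """Right coefficient of z^-i in the series (14): a(k+i-1)...a(k) f(k)."""
    p = f0(k)
    for j in range(i):
        p *= a(k + j)
    return p
```

The two computations agree for every initial time $k$ and every index $i$, which is exactly the content of the transform pair (15).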
To illustrate the application of the transform pair (15), consider the Gaussian function given by
$x(n) = \exp\left[ -c^2 (n-N)^2 \right], \quad n \in Z , \qquad (16)$
where $c$ and $N$ are real constants. Then,
$x(n+1) = \exp\left[ -c^2 (n-N+1)^2 \right] = \exp\left[ -c^2 \left( (n-N)^2 + 2(n-N) + 1 \right) \right] = \exp\left[ -c^2 \left( 2(n-N) + 1 \right) \right] \exp\left[ -c^2 (n-N)^2 \right] = \exp\left[ -c^2 \left( 2(n-N) + 1 \right) \right] x(n) . \qquad (17)$
The solution to Equation (17) is $x(n) = a(n-1) \cdots a(k)\, x(k)$, where $a(n) = \exp\left[ -c^2 (2(n-N)+1) \right]$ and $x(k)$ is the value of the Gaussian function at the initial time $k$. Using the transform pair (15) with $f(n,k) = x(n)$, we have that the VIT transform $X(z,k)$ of the Gaussian has the left fraction form
$X(z,k) = \left( z - e^{-c^2 (2(k-N)+1)} \right)^{-1} z\, x(k) . \qquad (18)$
In this work, we will focus on the case when the VIT transform of $f(n,k)$ can be written as a left polynomial fraction
$F(z,k) = \mu(z,k)^{-1}\, \nu(z,k) , \qquad (19)$
where $\mu(z,k) \in A[z]$ is a nonzero monic (leading coefficient equal to $1$) polynomial, and $\nu(z,k) \in A[z]$. The term $\mu(z,k)$ in the fraction is the denominator and $\nu(z,k)$ is the numerator. The order of the fraction $\mu(z,k)^{-1} \nu(z,k)$ is defined to be the degree of the denominator $\mu(z,k)$, assuming that $\mu(z,k)$ and $\nu(z,k)$ do not have any common left factors. In the left fraction form (19), the factor $\mu(z,k)^{-1}$ is the element $\gamma(z,k)$ of $A[[z^{-1}]]$ given by $\mu(z,k)\, \gamma(z,k) = 1$. In other words, $\gamma(z,k)$ is the right inverse of $\mu(z,k)$ in the ring $A[[z^{-1}]]$. Since $\mu(z,k)$ is monic, it has an inverse in $A[[z^{-1}]]$ which can be computed by dividing $\mu(z,k)$ into 1 using left long division. The product of $\mu(z,k)^{-1}$ and $\nu(z,k)$ in (19) is carried out using multiplication in the ring $A((z^{-1}))$. For example, in the case of the transform pair (15), using the multiplication given by Equation (6) and dividing $z - a(k)$ into 1 on the left gives
$(z - a(k))^{-1} = z^{-1} + z^{-2} a(k+1) + z^{-3} a(k+2)\, a(k+1) + \cdots .$
Then, multiplying the above quotient on the right by $z\, f(k)$, we obtain the power series for the VIT transform given by (14).

## 3. Properties of the VIT Transform

The VIT transform satisfies several properties that are analogous to the properties of the ordinary z-transform. It also satisfies a key property involving multiplication by an arbitrary time function which is not shared by the ordinary z-transform. We begin with linearity and then consider the VIT transform of left and right time shifts. In the last part of the section, we utilize the results to prove that functions satisfying a linear time-varying difference equation have transforms which are left polynomial fractions.
It is obvious from the definition given by Equation (8) that taking the VIT transform is an $R$-linear operation. That is, if $F(z,k)$ and $G(z,k)$ are the transforms of the functions $f(n,k)$ and $g(n,k)$, then for any real numbers $c_1, c_2$, the transform of $c_1 f(n,k) + c_2 g(n,k)$ is equal to $c_1 F(z,k) + c_2 G(z,k)$. Thus, we have the transform pair
$c_1 f(n,k) + c_2 g(n,k) \leftrightarrow c_1 F(z,k) + c_2 G(z,k) . \qquad (20)$
In addition to being $R$-linear, the VIT transform is also right $A$-linear. That is, given $a \in A$, we have the transform pair
$f(n,k)\, a(k) \leftrightarrow F(z,k)\, a(k) . \qquad (21)$
This follows directly from the definition of the VIT transform.
Given the function $f(n,k)$ with initial value $f(k,k) = f(k)$, consider the one-step left shift $f(n+1,k)$. The VIT transform of $f(n+1,k)$ is equal to $\sum_{i=0}^{\infty} z^{-i} f(i+k+1, k)$. Defining the change of index $\bar{i} = i+1$ gives
$\sum_{i=0}^{\infty} z^{-i} f(i+k+1, k) = z \sum_{\bar{i}=1}^{\infty} z^{-\bar{i}} f(\bar{i}+k, k) = z\, F(z,k) - z\, f(k) , \qquad (22)$
where $F(z,k)$ is the VIT transform of $f(n,k)$. Therefore, we have the following transform pair
$f(n+1, k) \leftrightarrow z\, F(z,k) - z\, f(k) . \qquad (23)$
This result is a direct analogue of the left-shift property of the ordinary z-transform.
Given a positive integer $q$, the VIT transform pairs for the $q$-step left shift $f(n+q,k)$ and $q$-step right shift $f(n-q,k)$ are
$f(n+q, k) \leftrightarrow z^{q} F(z,k) - \sum_{i=1}^{q} z^{i} f(q-i+k, k) , \qquad f(n-q, k) \leftrightarrow z^{-q} F(z,k) + \sum_{i=1}^{q} z^{i-q} f(k-i, k) , \qquad (24)$
where $F(z,k)$ is the transform of $f(n,k)$. The straightforward proof of these transform pairs is omitted.
For the ordinary z-transform, there are several properties arising from the multiplication by particular time functions. These all have analogues in the VIT transform domain. We begin by considering multiplication by $n$.
Given $f(n,k)$ with VIT transform $F(z,k)$ defined by Equation (8), for each fixed $k$, let $\frac{d}{dz} F(z,k)$ denote the derivative of $F(z,k)$ with respect to $z$. Then, the VIT transform pair for the function $n\, f(n,k)$ is
$n\, f(n,k) \leftrightarrow -z\, \frac{d}{dz} F(z,k) + F(z,k)\, k . \qquad (25)$
To prove the transform pair (25), take the derivative with respect to $z$ of both sides of Equation (8) for each fixed value of $k$. This gives
$\frac{d}{dz} F(z,k) = \sum_{i=0}^{\infty} (-i)\, z^{-i-1} f(i+k, k) = -z^{-1} \sum_{i=0}^{\infty} z^{-i}\, i\, f(i+k, k) .$
Hence,
$-z\, \frac{d}{dz} F(z,k) = \sum_{i=0}^{\infty} z^{-i}\, i\, f(i+k, k) . \qquad (26)$
Note that $i\, z^{-i} = z^{-i}\, i$, since the coefficient $i$ of $z^{-i}$ does not depend on the initial time $k$. Then, adding $F(z,k)\, k$ to both sides of Equation (26) results in
$-z\, \frac{d}{dz} F(z,k) + F(z,k)\, k = \sum_{i=0}^{\infty} z^{-i}\, (i+k)\, f(i+k, k) . \qquad (27)$
The right side of Equation (27) is equal to the VIT transform of $n\, f(n,k)$, and thus (25) is verified.
To illustrate the application of the transform pair (25), let $f(n,k) = 1$, $n \geq k$. Then, using the transform pair (11) with $a = 1$ and $f(k) = 1$, we have $F(z,k) = (z-1)^{-1} z$, and using the transform pair (25), we have that the VIT transform of the ramp function $n$, $n \geq k$, is given by
$-z\, \frac{d}{dz}\left[ (z-1)^{-1} z \right] + (z-1)^{-1} z\, k = (z-1)^{-2} z + (z-1)^{-1} z\, k .$
This results in the following transform pair
$n \leftrightarrow (z-1)^{-2} z + (z-1)^{-1} z\, k, \quad n \geq k . \qquad (28)$
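Since every series coefficient in the ramp pair is a constant except for the right factor $k$, the pair can be spot-checked with ordinary series arithmetic (a sketch, not from the paper):

```python
# Check of the ramp pair (28): the right coefficient of z^-i in
# (z-1)^{-2} z + (z-1)^{-1} z k should equal i + k, the value of the
# ramp n at n = i + k.

def conv(p, q):
    """Ordinary convolution; valid here since these coefficients are constants."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, x in enumerate(p):
        for j, y in enumerate(q):
            out[i + j] += x * y
    return out

m = 8
step = [1.0] * m                # (z-1)^{-1} z = 1 + z^-1 + z^-2 + ...
inv1 = [0.0] + [1.0] * (m - 1)  # (z-1)^{-1}   = z^-1 + z^-2 + ...
ramp_part = conv(inv1, step)    # (z-1)^{-2} z, with coefficient i at z^-i
```

For any fixed $k$, `ramp_part[i] + step[i] * k` evaluates to $i + k$, the inverse transform of (28).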
We shall now consider multiplication by $c^n$, where $c$ is a nonzero real or complex number. When $c$ is a complex number, we need to generalize the above ring framework to include coefficients which are functions from $Z$ into the field $C$ of complex numbers. In other words, the ring $A$ now consists of all functions from $Z$ into $C$.
Given a function $f(n,k)$ with VIT transform $F(z,k)$ defined by (8), and given a nonzero real or complex number $c$, we can scale $z$ in $F(z,k)$ by replacing $z$ by $z/c$. This results in
$F(z/c, k) = \sum_{i=0}^{\infty} (z/c)^{-i} f(i+k, k) = \sum_{i=0}^{\infty} z^{-i} c^{i} f(i+k, k) . \qquad (29)$
The right side of Equation (29) is equal to the VIT transform of $c^{n-k} f(n,k)$. Thus, we have the transform pair
$c^{n-k} f(n,k) \leftrightarrow F(z/c, k) . \qquad (30)$
Using the right $A$-linearity property, we can multiply both sides of the transform pair (30) on the right by $c^k$, which results in the transform pair
$c^{n} f(n,k) \leftrightarrow F(z/c, k)\, c^{k} . \qquad (31)$
If $F(z,k)$ is given in the left fraction form $F(z,k) = \mu(z,k)^{-1} \nu(z,k)$, where $\mu(z,k)$ and $\nu(z,k)$ are polynomials belonging to $A[z]$, then for any nonzero real or complex number $c$, we have
$F(z/c, k) = \mu(z/c, k)^{-1}\, \nu(z/c, k) . \qquad (32)$
In other words, the scaling of $z$ in $F(z,k)$ can be carried out in the numerator and denominator of the left fraction. This is the case since $c$ is a constant, and the noncommutativity of multiplication in the ring $A((z^{-1}))$ has no effect on constant functions. Hence, for example, from the transform pair (15) and using (30) with the scaling (32), we obtain the transform pair
$c^{n-k}\, a(n-1) \cdots a(k)\, f(k) \leftrightarrow \left( z - c\, a(k) \right)^{-1} z\, f(k) .$
We can use the transform pair (31) to compute the VIT transform of a function $f(n,k)$ multiplied by a sine or cosine: Let $\Omega$ be a positive real number and consider the complex exponentials $e^{j\Omega n}$ and $e^{-j\Omega n}$, where $j = \sqrt{-1}$. Then, given the function $f(n,k)$ with transform $F(z,k)$, using Euler’s formula and the transform pair (31), we have the transform pairs
$\cos(\Omega n)\, f(n,k) \leftrightarrow \frac{1}{2} \left[ F(e^{-j\Omega} z, k)\, e^{j\Omega k} + F(e^{j\Omega} z, k)\, e^{-j\Omega k} \right] , \qquad (33)$
$\sin(\Omega n)\, f(n,k) \leftrightarrow \frac{j}{2} \left[ F(e^{j\Omega} z, k)\, e^{-j\Omega k} - F(e^{-j\Omega} z, k)\, e^{j\Omega k} \right] . \qquad (34)$
From the transform pairs (33) and (34), we can determine the VIT transforms of the cosine and sine functions: Again taking $f(n,k) = 1$, $n \geq k$, so that $F(z,k) = (z-1)^{-1} z$, we have $F(e^{\pm j\Omega} z, k) = (z - e^{\mp j\Omega})^{-1} z$, and combining the two terms in (33) over the common denominator $z^2 - z\, 2\cos\Omega + 1$ gives the numerator $z^2 \cos(\Omega k) - z \cos(\Omega(k-1))$. This results in the transform pair
$\cos(\Omega n) \leftrightarrow \left( z^2 - z\, 2\cos\Omega + 1 \right)^{-1} \left( z^2 \cos(\Omega k) - z \cos(\Omega(k-1)) \right), \quad n \geq k . \qquad (35)$
A similar derivation gives the pair
$\sin(\Omega n) \leftrightarrow \left( z^2 - z\, 2\cos\Omega + 1 \right)^{-1} \left( z^2 \sin(\Omega k) - z \sin(\Omega(k-1)) \right), \quad n \geq k . \qquad (36)$
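The cosine pair can be checked by expanding the fraction as a power series in $z^{-1}$ via long division (a numerical sketch, not from the paper; the values of `Omega` and `k` are arbitrary choices):

```python
# Dividing the numerator z^2 cos(Omega k) - z cos(Omega (k-1)) by
# z^2 - 2 cos(Omega) z + 1 as a series in z^-1 should reproduce the
# right coefficients cos(Omega (i + k)) of the transform of cos(Omega n).

import math

Omega, k = 0.7, 3
num = [math.cos(Omega * k), -math.cos(Omega * (k - 1)), 0.0]  # z^2, z^1, z^0
den = [1.0, -2.0 * math.cos(Omega), 1.0]                      # z^2 - 2cos(Omega) z + 1

coeffs = []          # right coefficients of z^0, z^-1, z^-2, ...
r = num[:]
for _ in range(8):   # long division in descending powers of z
    c = r[0] / den[0]
    coeffs.append(c)
    r = [r[j] - c * den[j] for j in range(3)]
    r = r[1:] + [0.0]
```

The computed coefficients agree with $\cos(\Omega(i+k))$ term by term, so the inverse transform of the fraction is indeed $\cos(\Omega n)$ for $n \geq k$.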
Next, we consider the summation property: Given the function $f(n,k)$ with transform $F(z,k)$, let $s(n,k)$ denote the sum of $f(n,k)$ defined by $s(n,k) = \sum_{r=k}^{n} f(r,k)$, $n \geq k$. Then,
$s(n,k) = s(n-1, k) + f(n,k) , \qquad (37)$
and taking the VIT transform of both sides of Equation (37) and using the right-shift property given by the transform pair (24) results in $S(z,k) = z^{-1} S(z,k) + s(k-1, k) + F(z,k)$. Setting $s(k-1, k) = 0$ and solving for $S(z,k)$ gives $S(z,k) = (z-1)^{-1} z\, F(z,k)$. Thus, we have the transform pair
$\sum_{r=k}^{n} f(r,k) \leftrightarrow (z-1)^{-1} z\, F(z,k) . \qquad (38)$
Now, given functions $f(n,k)$, $g(n,k)$ with $f(n,k) = 0$ and $g(n,k) = 0$ for $n < k$, let $d(n,k)$ denote the function defined by
$d(n,k) = \sum_{r=k}^{n} f(n,r)\, g(r,k) . \qquad (39)$
The operation in Equation (39) arises in the study of linear time-varying systems, which are considered in Section 6. We have the following result on the VIT transform of $d n , k$.
Proposition 1.
With $d(n,k)$ defined by Equation (39), the VIT transform of $d(n,k)$ is given by
$D(z,k) = F(z,k)\, G(z,k) , \qquad (40)$
where $F(z,k)$ and $G(z,k)$ are the VIT transforms of $f(n,k)$ and $g(n,k)$.
Proof.
Since $f(n,r) = 0$ for $r > n$, the upper value of the summation in Equation (39) can be taken to be $\infty$. Then, with the change of index $\bar{r} = r - k$, Equation (39) becomes
$d(n,k) = \sum_{\bar{r}=0}^{\infty} f(n, \bar{r}+k)\, g(\bar{r}+k, k) . \qquad (41)$
Taking the VIT transform of both sides of Equation (41) gives
$D(z,k) = \sum_{i=0}^{\infty} \sum_{\bar{r}=0}^{\infty} z^{-i} f(i+k, \bar{r}+k)\, g(\bar{r}+k, k) . \qquad (42)$
Applying the index change $\bar{i} = i - \bar{r}$ in Equation (42) yields
$D(z,k) = \sum_{\bar{i}=-\bar{r}}^{\infty} \sum_{\bar{r}=0}^{\infty} z^{-\bar{i}} z^{-\bar{r}} f(\bar{i}+\bar{r}+k, \bar{r}+k)\, g(\bar{r}+k, k) . \qquad (43)$
By definition of multiplication in $A((z^{-1}))$, $z^{-\bar{r}} f(\bar{i}+\bar{r}+k, \bar{r}+k) = f(\bar{i}+k, k)\, z^{-\bar{r}}$, and since $f(\bar{i}+k, k) = 0$ for $\bar{i} < 0$, Equation (43) reduces to
$D(z,k) = \left[ \sum_{\bar{i}=0}^{\infty} z^{-\bar{i}} f(\bar{i}+k, k) \right] \left[ \sum_{\bar{r}=0}^{\infty} z^{-\bar{r}} g(\bar{r}+k, k) \right] . \qquad (44)$
The right side of Equation (44) is equal to $F(z,k)\, G(z,k)$, and thus, Equation (40) is verified. □
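Proposition 1 can also be spot-checked numerically: by the multiplication (6), the right coefficient of $z^{-i}$ in $F(z,k)\,G(z,k)$ works out to $\sum_{r=0}^{i} f(i+k, r+k)\, g(r+k, k)$, which should equal $d(i+k, k)$. A sketch with arbitrary choices of $f$ and $g$ vanishing for $n < k$:

```python
import math

def f(n, k):
    return 0.0 if n < k else math.sin(0.3 * n) + 0.1 * k

def g(n, k):
    return 0.0 if n < k else 1.0 / (1.0 + n - k) + 0.05 * n

def d(n, k):
    """d(n,k) = sum_{r=k}^{n} f(n,r) g(r,k), as in Equation (39)."""
    return sum(f(n, r) * g(r, k) for r in range(k, n + 1))

def product_coeff(i, k):
    """Right coefficient of z^-i in F(z,k) G(z,k) under the multiplication (6)."""
    return sum(f(i + k, r + k) * g(r + k, k) for r in range(i + 1))
```

The two quantities agree exactly for every $i \geq 0$ and every initial time $k$, which is the statement $D(z,k) = F(z,k)\, G(z,k)$ read off coefficient by coefficient.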
The final property we consider is multiplication by an arbitrary function: Given $f(n,k)$ and $a \in A$, the VIT transform of the product $a(n)\, f(n,k)$ is equal to
$\sum_{i=0}^{\infty} z^{-i} a(i+k)\, f(i+k, k) . \qquad (45)$
By definition of multiplication in $A((z^{-1}))$, Equation (45) can be written as $a(k)\, F(z,k)$, and thus, we have the transform pair
$a(n)\, f(n,k) \leftrightarrow a(k)\, F(z,k) . \qquad (46)$
Therefore, multiplication by a function of $n$ in the time domain is equivalent to multiplication by the function on the left in the transform domain with the time variable $n$ replaced by the initial time variable $k$.
For example, let $a(n) = n$ and $f(n,k) = 1$, $n \geq k$. Then, by (46), we have the transform pair
$n \leftrightarrow k\, (z-1)^{-1} z, \quad n \geq k . \qquad (47)$
The transform in (47) looks quite different from the result in (28), but the transforms must be equal. That is, we must have
$k\, (z-1)^{-1} z = (z-1)^{-2} z + (z-1)^{-1} z\, k . \qquad (48)$
To verify Equation (48), multiply both sides on the left by $(z-1)^2$ and on the right by $z^{-1}$. This gives
$(z-1)^2\, k\, (z-1)^{-1} = 1 + (z-1)\, z\, k\, z^{-1} . \qquad (49)$
By the definition of multiplication in $A((z^{-1}))$, $k\, z = z\, (k-1)$, and equivalently $k\, z^{-1} = z^{-1} (k+1)$; using this in the right side of Equation (49) gives
$1 + (z-1)(k+1) = z\, (k+1) - k = (k+2)\, z - k . \qquad (50)$
Finally, using $k\, z = z\, (k-1)$ (in the form $z\, k = (k+1)\, z$) to expand the left side of Equation (49) as $(z-1)^2\, k\, (z-1)^{-1} = \left[ (k+2) z^2 - 2(k+1) z + k \right] (z-1)^{-1} = (k+2)\, z - k$, and comparing the result with Equation (50), verifies Equation (48).
Using the transform pair (46) and the transform pair (23) for the left shift, we have the following result relating linear time-varying difference equations and left polynomial fractions in the ring $A((z^{-1}))$.
Theorem 1.
The VIT transform $F(z,k)$ of $f(n,k)$ has the left polynomial fraction form
$F(z,k) = \left[ z^N + \sum_{i=0}^{N-1} \mu_i(k)\, z^i \right]^{-1} \left[ \sum_{i=1}^{M} \nu_i(k)\, z^i \right], \quad M \leq N , \qquad (51)$
if and only if $f(n,k)$ satisfies the $N$th-order linear time-varying difference equation:
$f(n+N, k) + \sum_{i=0}^{N-1} \mu_i(n)\, f(n+i, k) = 0, \quad n \geq k . \qquad (52)$
Proof.
Note that in Equation (51), we are writing the coefficients of the $z^i$ on the left. Suppose $f(n,k)$ satisfies Equation (52). Then, taking the transform of Equation (52) and using the transform pair (46) and the left-shift property given by (23) results in
$\left[ z^N + \sum_{i=0}^{N-1} \mu_i(k)\, z^i \right] F(z,k) + \sum_{i=1}^{N} z^i\, q_i(k) = 0 , \qquad (53)$
where the $q_i(k)$ are combinations of the initial values of $f(n,k)$ at the initial times $n = k, k+1, \ldots, k+N-1$. Then, solving Equation (53) for $F(z,k)$ yields the left-fraction form
$F(z,k) = \left[ z^N + \sum_{i=0}^{N-1} \mu_i(k)\, z^i \right]^{-1} \left[ -\sum_{i=1}^{N} z^i\, q_i(k) \right] . \qquad (54)$
Conversely, suppose that the transform $F(z,k)$ of the function $f(n,k)$ is given by Equation (51). Multiplying both sides of Equation (51) on the left by $z^N + \sum_{i=0}^{N-1} \mu_i(k)\, z^i$ yields
$\left[ z^N + \sum_{i=0}^{N-1} \mu_i(k)\, z^i \right] F(z,k) = \sum_{i=1}^{M} \nu_i(k)\, z^i . \qquad (55)$
Since the right side of Equation (55) contains only positive powers of $z$, its inverse transform is equal to $0$ for $n \geq k$; and using the transform pairs (23) and (46), the inverse transform of the left side of Equation (55) is equal to $f(n+N, k) + \sum_{i=0}^{N-1} \mu_i(n)\, f(n+i, k)$, $n \geq k$. Thus, Equation (52) is verified. □
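The "only if" direction of Theorem 1 can be illustrated numerically for $N = 2$: if $f$ satisfies the recursion, then after moving each coefficient $\mu_i(k)$ rightward across $z^{-m}$ (so that it is evaluated at $k+m$), every $z^{-m}$ coefficient of $\mu(z,k)\, F(z,k)$ vanishes, leaving only the polynomial numerator. A sketch with arbitrary choices of $\mu_0$, $\mu_1$, and the initial values:

```python
import math

mu0 = lambda n: 0.5 + 0.1 * math.cos(n)
mu1 = lambda n: -1.0 + 0.2 * math.sin(n)

def f(n, k):
    """Iterate f(n+2,k) = -mu1(n) f(n+1,k) - mu0(n) f(n,k) from f(k,k)=1, f(k+1,k)=2."""
    vals = [1.0, 2.0]
    for m in range(k, n - 1):
        vals.append(-mu1(m) * vals[-1] - mu0(m) * vals[-2])
    return vals[n - k]

def muF_coeff(m, k):
    """Right coefficient of z^-m (m >= 0) in mu(z,k) F(z,k):
    the recursion evaluated at n = k + m, which should be zero."""
    return f(m + 2 + k, k) + mu1(k + m) * f(m + 1 + k, k) + mu0(k + m) * f(m + k, k)
```

Every such coefficient is zero (up to rounding), so $\mu(z,k)\, F(z,k)$ contains only positive powers of $z$, exactly as in Equation (53).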
The properties of the VIT transform which were derived in this section are given in Table 1, and Table 2 contains a list of basic transform pairs. Various additional transform pairs are computed in the next section by using scaling of $z^{-i}$ by time functions.

## 4. Scaling of $z^{-i}$ by Time Functions

In the VIT transform domain, it is possible to carry out scaling of $z^{-i}$ by time functions. This results in transform pairs for a large class of time functions including sinusoids with general time-varying amplitudes and frequencies. The development is given in terms of a semilinear transformation from $A$ into $A$, where as before, $A$ consists of all functions from $Z$ into $R$ or $C$.
Given a function $a \in A$, let $S_a$ denote the mapping from $A$ into $A$ defined by $S_a b = a\, \sigma b$, $b \in A$, where $\sigma$ is the left shift operator on $A$. Hence, $a\, \sigma b$ is the element of $A$ equal to $a(k)\, b(k+1)$, $k \in Z$. In the mathematics literature [12], $S_a$ is said to be a semilinear transformation with respect to $\sigma$. This type of operator was utilized in [10] in the state-space theory of linear time-varying discrete-time systems.
The $i$-fold composition of the operator $S_a$ is given by
$S_a^i\, b = a\, (\sigma a)\, (\sigma^2 a) \cdots (\sigma^{i-1} a)\, \sigma^i b, \quad i \geq 1 , \qquad (56)$
and when $i = 0$, $S_a^0\, b = b$. Evaluating Equation (56) at $k \in Z$ gives
$\left( S_a^i\, b \right)(k) = a(k)\, a(k+1) \cdots a(k+i-1)\, b(k+i) . \qquad (57)$
Note that when $b(k) = 1$ for all $k$ and $a$ is the constant function $a(k) = a$, then $S_a^i 1 = a^i$, and thus $S_a^i 1$ is a time-varying version of the power function. Then, we have the following result.
Proposition 2.
Suppose that the two-variable function $f(n,k)$ satisfies the first-order recursion
$f(n+1, k) = a(n)\, f(n,k), \quad n \geq k , \qquad (58)$
with initial value $f(k,k) = f(k)$ at initial time $k$. Then,
$f(n,k) = \left( S_a^{n-k}\, 1 \right)(k)\, f(k), \quad n \geq k . \qquad (59)$
Proof.
Setting $i = n-k$ and $b(k) = 1(k) = 1$ in Equation (57) yields
$\left( S_a^{n-k}\, 1 \right)(k) = a(k)\, a(k+1) \cdots a(n-1) . \qquad (60)$
Rearranging the factors in Equation (60) and comparing with the result given by Equation (13) verifies that $f(n,k)$ is given by Equation (59). □
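The operator $S_a$ and the evaluation (57) translate directly into code (a sketch, with arbitrary choices of the functions `a` and `b`):

```python
import math

def S(a, b):
    """One application of S_a: (S_a b)(k) = a(k) b(k+1), i.e. S_a b = a (sigma b)."""
    return lambda k: a(k) * b(k + 1)

def S_pow(a, i, b):
    """The i-fold composition S_a^i applied to b."""
    for _ in range(i):
        b = S(a, b)
    return b

a = lambda n: 2.0 + math.sin(n)
b = lambda n: 1.0 + 0.1 * n

def eval57(i, k):
    """Right side of (57): a(k) a(k+1) ... a(k+i-1) b(k+i)."""
    p = b(k + i)
    for j in range(i):
        p *= a(k + j)
    return p
```

Iterating `S` agrees with the closed-form evaluation (57) for every $i$ and $k$, which is the computation underlying Proposition 2.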
Using (15), we have the transform pair
$\left( S_a^{n-k}\, 1 \right)(k)\, f(k) \leftrightarrow \left( z - a(k) \right)^{-1} z\, f(k) . \qquad (61)$
This is the transform pair for the general form in the case of a first-order left polynomial fraction, with the time function $f(n,k)$ expressed in terms of the semilinear transformation $S_a$. We shall now define scaling in terms of $S_a$.
Given the time function $f(n,k)$ with VIT transform
$F(z,k) = \sum_{i=0}^{\infty} z^{-i} f(i+k, k) , \qquad (62)$
we can scale $z^{-i}$ in Equation (62) by replacing $z^{-i}$ with $z^{-i} S_a^i 1$, where $S_a^i 1$ is the time function defined by Equation (57) with $b(k) = 1(k)$. The resulting VIT transform is given by
$\sum_{i=0}^{\infty} z^{-i} \left( S_a^i\, 1 \right)(k)\, f(i+k, k) , \qquad (63)$
which will be denoted by $F(z^{-i} S_a^i 1, k)$. We formalize this construction as follows.
Definition 2.
Given $a \in A$ and the VIT transform $F(z,k)$ of $f(n,k)$, the time-function scaled transform $F(z^{-i} S_a^i 1, k)$ is the power series defined by
$F(z^{-i} S_a^i 1, k) = \sum_{i=0}^{\infty} z^{-i} \left( S_a^i\, 1 \right)(k)\, f(i+k, k) . \qquad (64)$
We have the following result on the inverse transform of the scaled transform given by (63).
Proposition 3.
Given $f(n,k)$ with transform $F(z,k)$, the inverse VIT transform of the scaled transform $F(z^{-i} S_a^i 1, k)$ is equal to $\left( S_a^{n-k}\, 1 \right)(k)\, f(n,k)$.
Proof.
The result follows directly from the definition of the VIT transform applied to the function $\left( S_a^{n-k}\, 1 \right)(k)\, f(n,k)$. □
By Proposition 3, scaling of $F(z,k)$ by replacing $z^{-i}$ with $z^{-i} S_a^i 1$ corresponds to the multiplication of $f(n,k)$ by $S_a^{n-k} 1$ in the time domain. This results in the transform pair
$\left( S_a^{n-k}\, 1 \right)(k)\, f(n,k) \leftrightarrow F(z^{-i} S_a^i 1, k) . \qquad (65)$
The transform pair (65) is the time-varying version of the transform pair (30). In fact, when $a(k) = c$ for all $k$, $S_a^{n-k} 1 = c^{n-k}$, and (65) reduces to (30).
Given $b \in A$, by right $A$-linearity of the VIT transform operation, we can multiply (65) on the right by $b(k)$, which results in the transform pair
$\left( S_a^{n-k}\, 1 \right)(k)\, b(k)\, f(n,k) \leftrightarrow F(z^{-i} S_a^i 1, k)\, b(k) . \qquad (66)$
Note that the time function $w(n,k) = \left( S_a^{n-k}\, 1 \right)(k)\, b(k)$ in (66) satisfies the difference equation $w(n+1, k) = a(n)\, w(n,k)$ with initial value $w(k,k) = b(k)$. Since $a$ and $b$ in (66) are arbitrary functions from $Z$ into $R$ or $C$, a large number of transform pairs can be generated from (66). As shown now, by taking $a$ to be a complex exponential function, this result can be used to determine the transform of functions multiplied by a sinusoid with arbitrary time-varying frequency $\Omega_n$.
Let $\gamma(n) = e^{j \Omega_n n}$, where again $j = \sqrt{-1}$. Then $\gamma(n+1) = e^{j \Omega_{n+1}(n+1)} = e^{j\left[ \Omega_{n+1}(n+1) - \Omega_n n \right]}\, e^{j \Omega_n n} = a(n)\, \gamma(n)$, where $a(n) = \exp\left( j\left[ \Omega_{n+1}(n+1) - \Omega_n n \right] \right)$. By Proposition 2, $\gamma(n) = \left( S_a^{n-k}\, 1 \right)(k)\, \gamma(k)$. Now, given $f(n,k)$, by Euler’s formula we have
$\cos(\Omega_n n)\, f(n,k) = \frac{1}{2} \left[ \gamma(n)\, f(n,k) + \bar{\gamma}(n)\, f(n,k) \right] , \qquad (67)$
where $\bar{\gamma}(n)$ is the complex conjugate of $\gamma(n)$. Then, taking the transform of the right side of Equation (67) and using (66), we have the transform pair
$\cos(\Omega_n n)\, f(n,k) \leftrightarrow \frac{1}{2} \left[ F(z^{-i} S_a^i 1, k)\, \gamma(k) + F(z^{-i} S_{\bar{a}}^i 1, k)\, \bar{\gamma}(k) \right] , \qquad (68)$
where $\bar{a}$ is the complex conjugate of $a$ and $F(z,k)$ is the transform of $f(n,k)$. Similarly, we have the following transform pair for multiplication by $\sin(\Omega_n n)$:
$\sin(\Omega_n n)\, f(n,k) \leftrightarrow \frac{1}{2j} \left[ F(z^{-i} S_a^i 1, k)\, \gamma(k) - F(z^{-i} S_{\bar{a}}^i 1, k)\, \bar{\gamma}(k) \right] . \qquad (69)$
The application of the transform pairs (65) and (68) is illustrated below in the case when $F(z,k)$ is a left polynomial fraction.
Suppose that $F(z,k) = \mu(z,k)^{-1} \nu(z,k)$, where $\mu(z,k) \neq 0$ and $\nu(z,k)$ are elements of the skew polynomial ring $A[z]$. With $N$ equal to the degree of $\mu(z,k)$, the degree of $\nu(z,k)$ must be less than or equal to $N$, since $F(z,k)$ is a power series in $z^{-1}$. Then,
$F(z,k) = \mu(z,k)^{-1}\, \nu(z,k) = \left[ z^{-N} \mu(z,k) \right]^{-1} \left[ z^{-N} \nu(z,k) \right] , \qquad (70)$
where the elements comprising the right side of Equation (70) are polynomials in $z^{-1}$. Hence, the transform $F(z,k)$ can be written as a left fraction consisting of polynomials in $z^{-1}$.
Theorem 2.
Suppose that $F(z,k) = \mu(z,k)^{-1} \nu(z,k)$, where $\mu(z,k) = z^{-N} + \sum_{i=0}^{N-1} z^{-i} \mu_i(k)$ and $\nu(z,k) = \sum_{i=0}^{M} z^{-i} \nu_i(k)$ are polynomials in $z^{-1}$. Given $a \in A$, let $\mu(z^{-i} S_a^i 1, k)$ and $\nu(z^{-i} S_a^i 1, k)$ denote the time-function scaled polynomials defined by $\mu(z^{-i} S_a^i 1, k) = z^{-N} \left( S_a^N 1 \right)(k) + \sum_{i=0}^{N-1} z^{-i} \left( S_a^i 1 \right)(k)\, \mu_i(k)$ and $\nu(z^{-i} S_a^i 1, k) = \sum_{i=0}^{M} z^{-i} \left( S_a^i 1 \right)(k)\, \nu_i(k)$. Then,
$F(z^{-i} S_a^i 1, k) = \mu(z^{-i} S_a^i 1, k)^{-1}\, \nu(z^{-i} S_a^i 1, k) . \qquad (71)$
Proof.
By definition of $F z , k$
$μ z , k F z , k = ν z , k ,$
where the multiplication $μ z , k F z , k$ is carried out in the ring $A z − 1$. Define the mapping
Then, the operation of scaling of $z − i$ by the time function $S a i 1$ is equivalent to applying the mapping $ρ a$. Applying $ρ a$ to both sides of Equation (72) gives $ρ a μ z , k F z , k =$ $ν z − i S a i 1 , k$. It will be shown that $ρ a$ is a multiplicative mapping, and thus $ρ a μ z , k F z , k = ρ a μ z , k ρ a F z , k$, which proves that $μ z − i S a i 1 , k F z − i S a i 1 , k = ν z − i S a i 1 , k$, and (71) is verified: For any integers $i , j ≥ 0$, $ρ a z − i z − j = ρ a z − i + j = z − i + j S a i + j 1$, and using Equation (57) yields $z − i + j S a i + j 1 = z − i S a i 1 z − j S a j 1$.
Hence, $ρ a z − i z − j = ρ a z − i ρ a z − j$. Finally, for any $e ∈ A$, $ρ a z − i e = z − i S a i 1 e = ρ a z − i ρ a e$, and thus $ρ a$ is multiplicative. □
Combining Proposition 3 and Theorem 2 yields the following result.
Theorem 3.
Suppose that $f n , k$ has VIT transform $F z , k = μ z , k − 1 ν z , k$, where $μ z , k$ and $ν z , k$ are as given in Theorem 2. Then, for any $a ∈ A$ with $a k ≠ 0$ for all $k$, the transform of $S a n − k 1 f n , k$ is given by $F z − i S a i 1 , k = μ z − i S a i 1 , k − 1 ν z − i S a i 1 , k$.
As illustrated now, Theorem 3 can be used to generate left polynomial fraction transforms from a given polynomial fraction such as the ones in Table 2: Let $f n , k = c o s Ω n$, where $Ω$ is a constant, and given $a , b ∈ A$ with $a k ≠ 0$ for all $k$, let $h n , k = w n , k f n , k$, where $w n , k = S a n − k 1 b k$. From (35), the transform $F z , k$ of $f n , k$ is equal to
$F z , k = z 2 − z 2 c o s Ω + 1 − 1 z 2 c o s Ω k − z c o s Ω k + 1 .$
Rewriting the right side of (74) as a polynomial in $z − 1$ gives
$F z , k = 1 − z − 1 2 c o s Ω + z − 2 − 1 c o s Ω k − z − 1 c o s Ω k + 1 .$
Then, scaling $z − i$ by $S a i 1$ in Equation (75) and using Theorem 3, we have
Hence, the transform of $h n , k$ is equal to
$1 − z − 1 a k 2 c o s Ω + z − 2 a k a k + 1 − 1 c o s Ω k − z − 1 a k c o s Ω k + 1 b k .$
Rewriting the transform (76) in terms of powers of $z$ with coefficients moved to the left of the $z i$, and applying Theorem 1, we have that $h n , k$ satisfies the second-order difference equation
Note that if $a n = c$ for all $n$, then $S a n − k 1 = c n − k , n ≥ k$, and Equation (77) reduces to the well-known recursion for the exponentially weighted cosine function.
The difference Equation (77) is the recursion for the cosine function $c o s Ω n$ with a general weighting function $w n , k$, where the only constraint on $w n , k$ is that it satisfies the first-order recursion $w n + 1 , k = a n w n , k$. As an application of this result, let the weighting $w n , k$ be equal to the Gaussian $x n$ defined by Equation (16). Then, $h n = x n c o s Ω n$ is the Gaussian-windowed cosine function, which is equal to the real part of the Gabor-Morlet wavelet [8]. By Equation (17), $x n + 1 = a n x n$ with $a n = e x p − c 2 2 ( n − N + 1 ) .$ Thus, inserting $a n$ into Equation (77), we have that the wavelet $h n , k = h n = x n c o s Ω n$ satisfies the resulting second-order recursion. This result can be derived in the time domain by attempting to express $h n + 2$ in terms of $h n + 1$ and $h n$, but as seen here, it is an immediate consequence of Theorems 1 and 3.
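This result can be checked numerically. In the sketch below, the recursion from Equation (77) is restated explicitly as $h n + 2 = 2 c o s Ω a n + 1 h n + 1 − a n a n + 1 h n$ (a reconstruction from the denominator of the transform (76), not copied verbatim from the paper), and the window constants are illustrative stand-ins for those in Equation (16):

```python
import math

# Illustrative parameters; the Gaussian window below stands in for x_n in
# Equation (16) (the exact constants there are not reproduced here).
c, N0, Omega = 0.1, 20, 0.7

def x(n):                        # Gaussian window
    return math.exp(-0.5 * (c * (n - N0 + 1)) ** 2)

def a(n):                        # x_{n+1} = a_n x_n, as in Equation (17)
    return x(n + 1) / x(n)

def h(n):                        # Gaussian-windowed cosine (real Gabor-Morlet)
    return x(n) * math.cos(Omega * n)

# Check the second-order recursion obtained from Equation (77):
#   h_{n+2} = 2 cos(Omega) a_{n+1} h_{n+1} - a_n a_{n+1} h_n
for n in range(30):
    rhs = 2 * math.cos(Omega) * a(n + 1) * h(n + 1) - a(n) * a(n + 1) * h(n)
    assert abs(h(n + 2) - rhs) < 1e-9
```

The same loop verifies any weighting $w n , k$ satisfying $w n + 1 , k = a n w n , k$, since only the ratio $x n + 1 / x n$ enters the coefficients.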

## 5. Combining and Decomposing Polynomial Fractions

In the first part of this section, it is shown that left polynomial fractions can be combined using the extended right Euclidean algorithm. The algorithm is carried out with the coefficients of the polynomials belonging to the quotient field $Q A$ of the ring $A$. We begin with the definition of $Q A$ and then give the extended right Euclidean algorithm for elements belonging to the skew polynomial ring $Q A z$.

#### 5.1. Extended Euclidean Algorithm

The quotient field $Q A$ of $A$ consists of all formal ratios $a / b$ of elements $a , b ∈ A$ with $b ≠ 0$. If $b k ≠ 0$ for all $k$, the ratio $a / b$ defines a function from $Z$ into $R$ or $C$, and thus it is an element of $A$. If $b k$ has zero values, then when $a / b$ is viewed as a function on $Z$, it will have singularities. That is, $a k / b k$ is not defined for any such values of $k$. With multiplication and addition defined by
$Q A$ is a field. The left shift operator $σ$ extended to $Q A$ is defined by $σ a / b = σ a / σ b$.
The skew polynomial ring $Q A z$ consists of all polynomials in $z$ with coefficients in $Q A$, and with the noncommutative multiplication $z a = σ a z$. Since $Q A$ is a field, it follows from the results in [9] that $Q A z$ is a right Euclidean ring, and since $σ$ is surjective, it is also a left Euclidean ring. As a result, the extended left and right Euclidean algorithms can be carried out in the ring $Q A z$. A description of the algorithms is given in [13] for a general skew polynomial ring (see also [14]). For completeness, the extended right Euclidean algorithm is given next.
Let $r 1 , r 2 ∈ Q A z$ with deg $r 2 ≤$ deg $r 1$, where “deg” denotes degree. Dividing $r 2$ into $r 1$ on the right in the ring $Q A z$ gives $r 1 = q 2 r 2 + r 3$, where the remainder $r 3$ is equal to zero or deg $r 3 <$ deg $r 2$. The division process is repeated by dividing $r 3$ into $r 2$, which gives remainder $r 4$ with $r 4 = 0$ or deg $r 4 <$ deg $r 3$. The process is continued by dividing $r 4$ into $r 3$, etc. until $r m$ is equal to zero for some integer $m$. It is important to note that even though $r 1 z$ and $r 2 z$ are polynomials in $z$ with coefficients belonging to $A$, in general the remainders are elements of $Q A z$.
Given the sequence of divisions
$r i − 2 = q i − 1 r i − 1 + r i , i = 3 , 4 , … , m ,$
we then have the following known result ([13,14]).
Proposition 4.
With $r i$ and $q i$ given by Equation (78), define $s i + 1 = s i − 1 − q i s i$ and $t i + 1 = t i − 1 − q i t i$ for $i ≥ 2$, where $s 1 = 1 , s 2 = 0 ,$ $t 1 = 0$, $t 2 = 1$. Then,
$r i = s i r 1 + t i r 2 , i = 3 , 4 , … , m .$
Proof.
When $i = 3$, $s 3 = 1$ and $t 3 = − q 2$, and Equation (79) becomes $r 3 = r 1 − q 2 r 2$, which is equivalent to Equation (78) when $i = 3$. When $i = 4$, $s 4 = − q 3$ and $t 4 = 1 + q 3 q 2$, and
$s 4 r 1 + t 4 r 2 = − q 3 r 1 + 1 + q 3 q 2 r 2 .$
Setting $i = 4$ in Equation (78) gives
$r 4 = r 2 − q 3 r 3 = r 2 − q 3 r 1 − q 2 r 2 = r 2 − q 3 r 1 + q 3 q 2 r 2 .$
The right sides of Equations (80) and (81) are equal, and thus Equation (79) is verified for $i = 4$. For any $i > 4$, $s i + 1 = s i − 1 − q i s i$, and $t i + 1 = t i − 1 − q i t i$. Hence,
$s i + 1 r 1 + t i + 1 r 2 = s i − 1 − q i s i r 1 + t i − 1 − q i t i r 2$
$s i + 1 r 1 + t i + 1 r 2 = s i − 1 r 1 + t i − 1 r 2 − q i s i r 1 + t i r 2$
Suppose Equation (79) holds for $i − 1$ and $i ,$ then the right side of Equation (82) is equal to $r i − 1 − q i r i$, which by Equation (78) is equal to $r i + 1$. Therefore, $r i + 1 =$ $s i + 1 r 1 + t i + 1 r 2 ,$ and by the second principle of mathematical induction, Equation (79) is true for all $i ≥ 3$. □
Since $r m = 0$, by Proposition 4,
$s m r 1 = − t m r 2 .$
By Equation (83), both $r 1$ and $r 2$ divide $s m r 1$ on the right, and thus the polynomial $s m r 1$ is a common right multiple of $r 1$ and $r 2$. As a consequence of the properties of the Euclidean algorithm, $s m r 1$ is the least common right multiple (lcrm) of $r 1$ and $r 2$. The lcrm is unique up to a multiplicative factor in $Q A$.
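A computational sketch of the algorithm is given below. The coefficient-function representation of skew polynomials, the sample-point zero test, and the tolerances are implementation choices made for illustration, not constructs from the paper; the test case is the pair $r 1 = z − a k$, $r 2 = z − 1$ used in Section 5.2:

```python
import math

# A skew polynomial over Q(A) is a list [c0, ..., cN] of coefficient functions
# k -> complex, representing sum_i c_i(k) z^i with the rule z a = sigma(a) z,
# so (c z^i)(d z^j) = c sigma^i(d) z^(i+j), sigma^i(d)(k) = d(k + i).
ZERO, ONE = (lambda k: 0j), (lambda k: 1 + 0j)

def pmul(p, q):
    out = [ZERO] * (len(p) + len(q) - 1)
    for i, ci in enumerate(p):
        for j, dj in enumerate(q):
            out[i + j] = (lambda k, f=out[i + j], ci=ci, dj=dj, s=i:
                          f(k) + ci(k) * dj(k + s))
    return out

def psub(p, q):
    n = max(len(p), len(q))
    p, q = p + [ZERO] * (n - len(p)), q + [ZERO] * (n - len(q))
    return [(lambda k, u=u, v=v: u(k) - v(k)) for u, v in zip(p, q)]

def is_zero(p, ks=range(5), tol=1e-9):
    # numerical zero test at sample points (an implementation convenience)
    return all(abs(c(k)) < tol for c in p for k in ks)

def rdivmod(p, d):
    # right division p = q*d + r with deg r < deg d
    q, r = [ZERO] * max(len(p) - len(d) + 1, 1), list(p)
    while len(r) >= len(d):
        if is_zero([r[-1]]):
            r = r[:-1]
            continue
        s = len(r) - len(d)
        t = (lambda k, c=r[-1], lead=d[-1], s=s: c(k) / lead(k + s))
        q[s] = (lambda k, old=q[s], t=t: old(k) + t(k))
        r = psub(r, pmul([ZERO] * s + [t], d))[:-1]  # top term cancels
    return q, r

def ext_right_euclid(r1, r2):
    # returns (s, t) with s r1 + t r2 = 0, so s r1 is the lcrm of r1 and r2
    s1, s2, t1, t2 = [ONE], [ZERO], [ZERO], [ONE]
    while not is_zero(r2):
        quo, rem = rdivmod(r1, r2)
        r1, r2 = r2, rem
        s1, s2 = s2, psub(s1, pmul(quo, s2))
        t1, t2 = t2, psub(t1, pmul(quo, t2))
    return s2, t2

a = lambda k: 2 + 0.5 * math.sin(0.3 * k)    # time-varying, a_k != 1
r1 = [(lambda k: -a(k)), ONE]                # z - a_k
r2 = [(lambda k: -1 + 0j), ONE]              # z - 1
s, t = ext_right_euclid(r1, r2)
m1, m2 = pmul(s, r1), pmul(t, r2)            # s r1 and t r2
for k in range(5):
    assert all(abs(c1(k) + c2(k)) < 1e-9 for c1, c2 in zip(m1, m2))
```

The final assertion checks $s m r 1 = − t m r 2$ coefficient-wise at sample times, i.e., that $s m r 1$ is a common right multiple of $r 1$ and $r 2$.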

#### 5.2. Sum of Two Polynomial Fractions

Suppose that the discrete-time functions $f 1 n , k$ and $f 2 n , k$ (which will be denoted by $f 1 n$ and $f 2 n$, respectively) satisfy the following linear time-varying difference equations
Let $f n$ denote the sum $f n = f 1 n + f 2 n$. It follows from the VIT transform approach that $f n$ also satisfies a recursion over $A$. To show this, let $F 1 z , k$ and $F 2 z , k$ denote the transforms of $f 1 n$ and $f 2 n$, respectively. Using Theorem 1, we have $F 1 z , k = μ z , k − 1 ν z , k , F 2 z , k = ξ z , k − 1 η z , k$, where $μ z , k = z N 1 + ∑ i = 0 N 1 − 1 μ i z i$, $ξ z , k = z N 2 + ∑ i = 0 N 2 − 1 ξ i z i$, and $ν , η$ are polynomials belonging to $A z$. Then, by linearity of the transform operation, the transform $F z , k$ of $f n$ is equal to
$F z , k = μ z , k − 1 ν z , k + ξ z , k − 1 η z , k .$
Applying the extended right Euclidean algorithm to $μ z , k$ and $ξ z , k$ results in the lcrm $s m z , k μ z , k = − t m z , k ξ z , k$, where in general $s m z , k$ and $t m z , k$ are polynomials with coefficients in $Q A$. Then, multiplying both sides of Equation (86) on the left by $s m z , k μ z , k$ gives $s m z , k μ z , k F z , k = s m z , k ν z , k − t m z , k η z , k$, and thus the left polynomial fraction form of $F z , k$ is
$F z , k = s m z , k μ z , k − 1 s m z , k ν z , k − t m z , k η z , k .$
Suppose that $s m z , k μ z , k = z N + ∑ i = 0 N − 1 e i k z i$. Then, by Theorem 1, the inverse transform $f n$ of $F z , k$ satisfies the $N t h$-order linear time-varying difference equation
Since $s m z , k$ and $t m z , k$ belong to $Q A z$, the coefficients $e i n$ in Equation (88) are elements of $Q A$ in general, and thus (88) is a linear recursion over $Q A$. We can rewrite Equation (88) as a recursion over $A$ as follows: Let $p n$ be a common denominator of the coefficients $e i n$, so that $p n e i n$ is an element of $A$ for all $i$. Then, multiplying both sides of (88) by $p n$ results in the following recursion over $A$
Note that if $p q = 0$ for some value $q$ of $n$, then Equation (88) is singular when $n = q$, and $f q + N$ cannot be determined from either Equation (88) or (89). When $p q = 0$, $f q + N$ can be computed using the relationship $f q + N = f 1 q + N + f 2 q + N$, where $f 1 n$ and $f 2 n$ are given by the recursions (84) and (85).
The possible zero values of $p n$ in the recursion Equation (89) are a result of common factors appearing in $μ z , k$ and $ξ z , k$ when $k$ is evaluated at particular integer values. To see an example of this, suppose that $f 1 n + 1 = a n f 1 n$ and $f 2 n + 1 = f 2 n$ for $n ≥ k ,$ with initial values $f 1 k = f 2 k = 1$, and where $a ∈ A$. Taking the transform using the transform pair (15) yields
$F z , k = z − a k − 1 z + z − 1 − 1 z .$
Applying the extended right Euclidean algorithm to $z − a k$ and $z − 1$ results in the lcrm
$z − a k 1 1 − a k z − 1 = z − 1 1 1 − a k z − a k .$
Multiplying both sides of Equation (90) on the left by the term in Equation (91), we have
$z − a k 1 1 − a k z − 1 F z , k = z − 1 1 1 − a k z + z − a k 1 1 − a k z .$
Therefore,
$F z , k = z − a k 1 1 − a k z − 1 − 1 z − 1 1 1 − a k z + z − a k 1 1 − a k z .$
This is the left polynomial fraction form of the VIT transform of $f n = f 1 n + f 2 n$. Using the definition of multiplication in $Q A z$ we obtain
$z − a k 1 1 − a k z − 1 = 1 1 − a k + 1 z − a k 1 − a k z − 1 = 1 1 − a k + 1 z 2 − a k 1 − a k + 1 1 − a k + 1 z + a k 1 − a k$
Hence, $f n = f 1 n + f 2 n$ satisfies the following recursion over $Q A$
In this example, $p n = 1 − a n 1 − a n + 1$. Then, multiplying Equation (92) by $p n$ results in the following recursion over $A$
Clearly, if $a q = 1$ for some integer $q$, $f q + 2$ cannot be computed from Equation (93). However, $f q + 2$ can be computed from $f q + 2 = f 1 q + 2 + f 2 q + 2 = a q + 1 f 1 q + 1 + f 2 q + 1 .$ Note that when $a q = 1$, the factors $z − a q$ and $z − 1$ are identical, so they have a common factor when viewed as polynomials in $R z .$
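This example can be checked numerically. In the sketch below, the explicit second-order coefficients are obtained by expanding the lcrm in Equation (91) (a reconstruction to be compared with Equations (92) and (93)), and the function $a n$ is an illustrative choice with $a n ≠ 1$:

```python
import math

a = lambda n: 2.0 + 0.5 * math.sin(0.3 * n)   # a_n != 1 for all n (illustrative)
k = 0
# f1 and f2 from the recursions (84)-(85): f1_{n+1} = a_n f1_n, f2_{n+1} = f2_n,
# with initial values f1_k = f2_k = 1.
f1, f2 = {k: 1.0}, {k: 1.0}
for n in range(k, k + 40):
    f1[n + 1] = a(n) * f1[n]
    f2[n + 1] = f2[n]
f = {n: f1[n] + f2[n] for n in f1}

# Second-order recursion for f obtained by expanding the lcrm in Equation (91):
#   f_{n+2} = (1 + g_n) f_{n+1} - g_n f_n,  g_n = a_n (1 - a_{n+1}) / (1 - a_n)
for n in range(k, k + 38):
    g = a(n) * (1 - a(n + 1)) / (1 - a(n))
    assert math.isclose(f[n + 2], (1 + g) * f[n + 1] - g * f[n], rel_tol=1e-9)
```

Note that the coefficient $g n$ blows up exactly when $a n = 1$, which is the singular case $p q = 0$ discussed above.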
In Section 6, it is shown that for a linear time-varying finite-dimensional system, the VIT transform of the unit-pulse response function is a left polynomial fraction (the transfer function). Hence, by the results given here, the transfer function of a parallel connection will in general consist of polynomials over $Q A$.
As an application of summing fractions, we shall determine the transform of $c o s Ω n n$ with arbitrary frequency function $Ω n$. Using the transform pair (68) with $f n , k = 1$, since $F z , k = 1 − z − 1 − 1$, we have the transform pair
where $a k = e x p j Ω k + 1 k + 1 − Ω k k$. Applying the extended right Euclidean algorithm to $z − a$ and $z − a ¯$ results in the lcrm
$z − a ¯ 1 a − a ¯ z − a = z − a 1 a − a ¯ z − a ¯ .$
Now, let $Ψ z , k$ denote the VIT transform of $c o s Ω n n$. Then, by the transform pair (94)
$Ψ z , k = z − a − 1 z e j Ω k k + z − a ¯ − 1 z e − j Ω k k .$
Multiplying Equation (96) on the left by $z − a ¯ 1 a − a ¯ z − a = z − a 1 a − a ¯ z − a ¯$ results in
Hence,
This is the left polynomial fraction form of the VIT transform of $c o s Ω n n$, where the frequency $Ω n$ is an arbitrary real-valued function of $n$.
It is possible to rewrite Equation (97) in terms of polynomials with real-valued coefficient functions: Beginning with the denominator, using the definition of multiplication in $A z$, we have
$z − a ¯ 1 a − a ¯ z − a = 1 σ a − σ a ¯ z 2 − σ a σ a − σ a ¯ + a ¯ a − a ¯ z + 1 a − a ¯ .$
Here, we are using the fact that $a ¯ a = 1$. By (95), we also have
$z − a ¯ 1 a − a ¯ z − a = 1 σ a − σ a ¯ z 2 − σ a ¯ σ a − σ a ¯ + a a − a ¯ z + 1 a − a ¯ .$
Adding both sides of Equations (98) and (99) gives
$z − a ¯ 1 a − a ¯ z − a = 1 2 1 σ a − σ a ¯ z 2 − σ a + σ a ¯ σ a − σ a ¯ + a ¯ + a a − a ¯ z + 1 a − a ¯ .$
Factoring out $1 σ a − σ a ¯$ in the right side of Equation (100) results in
$z − a ¯ 1 a − a ¯ z − a = 1 2 σ a − σ a ¯ z 2 − σ a + σ a ¯ + σ a − σ a ¯ a ¯ + a a − a ¯ z + σ a − σ a ¯ a − a ¯ ,$
and thus,
$z − a ¯ 1 a − a ¯ z − a − 1 = z 2 − σ a + σ a ¯ + σ a − σ a ¯ a ¯ + a a − a ¯ z + σ a − σ a ¯ a − a ¯ − 1 2 σ a − σ a ¯ .$
Since $a k = e x p j Ω k + 1 k + 1 − Ω k k$, it follows that the coefficients of $z$ in
$z 2 − 1 2 σ a + σ a ¯ + σ a − σ a ¯ a ¯ + a a − a ¯ z + σ a − σ a ¯ a − a ¯ ,$
are real-valued functions of $k$. Hence, (101) is the real form of the denominator polynomial of $Ψ z , k$. The derivation of the real form of the numerator is omitted.
Let $ζ n = c o s Ω n n$. Then, applying Theorem 1, we have that the cosine function $ζ n$ with time-varying frequency $Ω n$ satisfies the second-order recursion
$ζ n + 2 − 1 2 σ a + σ a ¯ + σ a − σ a ¯ a ¯ + a a − a ¯ ζ n + 1 + σ a − σ a ¯ a − a ¯ ζ n = 0 ,$
where $a n = e x p j Ω n + 1 n + 1 − Ω n n$. Note that Equation (102) is the recursion for $c o s Ω n n$ for any frequency function $Ω n$, including the linear frequency chirp $Ω n = Ω o + c n − k$ and the exponential chirp $Ω n = Ω o c n − k$, where $c$ is a positive real number. Moreover, note that when the frequency function $Ω n$ is equal to a constant $Ω$, $a + a ¯ = 2 c o s Ω$, and Equation (102) reduces to the recursion for the cosine function $c o s Ω n$.
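As a numerical check of Equation (102), the sketch below verifies the recursion for a linear frequency chirp. The factor $1 / 2$ on the middle coefficient is included so that the constant-frequency case reduces to the standard $2 c o s Ω$ recursion, consistent with the reduction noted above; the chirp parameters are illustrative:

```python
import cmath, math

Omega = lambda n: 0.2 + 0.01 * n                  # linear frequency chirp (illustrative)
theta = lambda n: Omega(n) * n
a = lambda n: cmath.exp(1j * (theta(n + 1) - theta(n)))
zeta = lambda n: math.cos(theta(n))

for n in range(30):
    sa, sab = a(n + 1), a(n + 1).conjugate()      # sigma(a), sigma(a-bar) at time n
    aa, ab = a(n), a(n).conjugate()
    # coefficients of Equation (102); the leading 1/2 is an assumption needed
    # so that constant Omega gives zeta_{n+2} - 2 cos(Omega) zeta_{n+1} + zeta_n = 0
    c1 = 0.5 * (sa + sab + (sa - sab) * (aa + ab) / (aa - ab))
    c0 = (sa - sab) / (aa - ab)
    assert abs(zeta(n + 2) - c1 * zeta(n + 1) + c0 * zeta(n)) < 1e-9
```

Replacing `Omega` with an exponential chirp $Ω n = Ω o c n$ leaves the loop unchanged, since only the increments $Ω n + 1 n + 1 − Ω n n$ enter the coefficients.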

#### 5.3. Fraction Decomposition

The decomposition of polynomial fractions with varying coefficients can be carried out in terms of an evaluation of polynomials with coefficients in $A$ or $Q A$, which is defined as follows. Given $a ∈ Q A$, let $S a$ denote the semilinear transformation from $Q A$ into $Q A$ defined by $S a v = a σ v$. This is the extension from $A$ to $Q A$ of the semilinear transformation defined in Section 4. Then, applying the notion of skew polynomial evaluation given in [15], we define the evaluation of the polynomial $γ z , k = z N + ∑ i = 0 N − 1 γ i z i$ at $z i = S a i 1$ to be the function given by
$γ S a i 1 , k = S a N 1 + ∑ i = 0 N − 1 γ i S a i 1 .$
Let
$γ ^ z , k = z N + ∑ i = 0 N − 1 σ − i γ i z i ,$
and let $T a$ denote the semilinear transformation on $Q A$ defined by . Then, the evaluation of $γ ^ z , k$ at $z i = T a i 1$ is given by
$γ ^ T a i 1 , k = T a N 1 + ∑ i = 0 N − 1 σ − i γ i T a i 1 .$
We then have the following known result.
Proposition 5.
Given $γ z , k ∈ Q A z$ and $a ∈ Q A$, the remainder after dividing $z − a$ into $γ z , k$ on the right is equal to $γ S a i 1 , k$, and the remainder after dividing $z − a$ into $γ z , k$ on the left is equal to $γ ^ T a i 1 , k$.
Proof.
The result on the remainder after division on the right follows from Lemma 2.4 in [15] by setting $N i a = S a i 1$. The second part of the proposition follows from Theorem 3.1 in [13] by setting $M i a =$ $T a i 1$. □
The concept of skew polynomial evaluation leads to the following decomposition result.
Theorem 4.
Given $ξ z , k = z N + ∑ i = 0 N − 1 ξ i z i ∈ Q A z$ and $a ∈ Q A$, suppose that
$z − a ξ z , k = φ z , k z − β ,$
where $φ z , k ∈ Q A z$, $β ∈ Q A$, and $z − β$ does not divide $ξ z , k$ on the right.
Then,
$z − a ξ z , k − 1 = ξ S β i 1 , k − 1 z − a − 1 + ψ z , k φ z , k − 1$
for some $ψ z , k ∈ Q A z$.
Proof.
Suppose that the hypothesis of the theorem is satisfied, so that (106) is true. Dividing $z − β$ into $ξ z , k$ on the right gives
$ξ z , k z − β − 1 = q z , k + r k z − β − 1 .$
By Proposition 5, the remainder $r k$ in Equation (108) is equal to the evaluation $r k = ξ S β i 1 , k$. Further, $r k ≠ 0$ since $z − β$ does not divide $ξ z , k$ on the right. Multiplying both sides of (108) on the right by $z − β$ and on the left by $ξ S β i 1 , k − 1$, we have $ξ S β i 1 , k − 1 ξ z , k − ξ S β i 1 , k − 1 q z , k z − β = 1$. Hence, $ξ z , k − 1 z − a − 1 = ξ S β i 1 , k − 1 ξ z , k − ξ S β i 1 , k − 1 q z , k z − β ξ z , k − 1 z − a − 1$.
It follows from Equation (106) that $ξ z , k − 1 z − a − 1 = z − β − 1 φ z , k − 1$, and thus,
$ξ z , k − 1 z − a − 1 = ξ S β i 1 , k − 1 z − a − 1 − ξ S β i 1 , k − 1 q z , k φ z , k − 1 .$
Also, $ξ z , k − 1 z − a − 1$ = $z − a ξ z , k − 1$, and therefore, Equation (107) is satisfied with $ψ z , k =$ $− ξ S β i 1 , k − 1 q z , k$. □
There is a second decomposition of $z − a ξ z , k − 1$ which is given next.
Corollary 1.
Suppose that the hypothesis of Theorem 4 is satisfied so that (106) is true with $φ z , k = z N + ∑ i = 0 N − 1 φ i z i$, and $z − a$ does not divide $φ z , k$ on the left. Let $φ ^ z , k = z N + ∑ i = 0 N − 1 σ − i φ i z i$. Then,
$z − a ξ z , k − 1 = z − β − 1 φ ^ T a i 1 , k − 1 + ξ z , k − 1 χ z , k ,$
where $χ z , k ∈ Q A z$ and $φ ^ T a i 1 , k = T a N 1 + ∑ i = 0 N − 1 φ i k − i T a i 1$.
Proof.
Dividing $z − a$ into $φ z , k$ on the left and carrying out steps similar to those in the proof of Theorem 4 yields the result. □
Note that the decomposition in Equation (109) is given in terms of left polynomial fractions, whereas (107) is in terms of right polynomial fractions. Moreover, note that the decompositions (107) and (109) are identical when the $ξ i$ and $a$ are constant functions, in which case $φ z , k = ξ z , k$ and $β = a$.
Corollary 2.
Suppose that Equation (109) is true. Let $w k = φ ^ T a i 1 , k$. Then, given $η z , k = ∑ i = 0 M η i z i$ with $M ≤ N ,$
$z − a ξ z , k − 1 η z , k = z − β − 1 ∑ i = 0 M σ − i η i w T β i 1 + ξ z , k − 1 λ z , k ,$
for some $λ z , k ∈ Q A z$.
Proof.
Multiplying both sides of Equation (109) on the right by $η z , k$ gives $z − a ξ z , k − 1 η z , k = z − β − 1 1 w k η z , k + ξ z , k − 1 χ z , k η z , k$. Dividing $z − β$ into $1 w k η z , k$ on the left, we have
$z − β − 1 1 w k η z , k = τ z , k + z − β − 1 v k ,$
where $τ z , k ∈ Q A z$ and $v k ∈ Q A$. By Proposition 5, the remainder in (111) is equal to the evaluation of the polynomial $∑ i = 0 M σ − i η i w z i$ at $z i = T β i 1$. Hence, $v k = ∑ i = 0 M σ − i η i w T β i 1$. Then,
$z − a ξ z , k − 1 η z , k = z − β − 1 ∑ i = 0 M σ − i η i w T β i 1 + τ z , k + ξ z , k − 1 χ z , k η z , k .$
Now, since deg $η ≤$ deg $ξ$, $ξ z , k − 1 z − a − 1 η z , k$ is a strictly proper polynomial fraction, and thus $τ z , k + ξ z , k − 1 χ z , k η z , k$ can be written in the form $ξ z , k − 1 λ z , k$ for some $λ z , k ∈ Q A z$ with deg $λ <$ deg $ξ$, which verifies Equation (110). □
Corollary 2 is a generalization of the first step of the partial fraction expansion for rational functions with real coefficients to left polynomial fractions with variable coefficients. The decomposition process can be continued if the polynomial $ξ z , k$ in Equation (110) has a first-order left factor and a factor ζ with deg $ζ = N − 2$. Note that if the $ξ i$ and $a$ in Theorem 4 are constant functions, then $z − a$ commutes with $ξ z , k$, and thus (106) is satisfied with $β = a$ and $φ z , k = ξ z , k$. In this case, $T a i 1 = a i$, and thus $φ ^ T a i 1 , k$ is equal to the polynomial $ξ z$ evaluated at $z = a$. If the $η i$ are also constant functions, the coefficient of $z − β − 1$ in Equation (110) is equal to the rational function $η z / ξ z$ evaluated at $z = a$.
In the case when the $ξ i$ and $a$ are nonconstant functions, the computation of $β$ and $φ z , k$ in Equation (106) is considered in the next section, where the decomposition is used to determine the steady-state output response of a linear time-varying system or digital filter.

## 6. The VIT Transfer Function Representation

Consider the causal linear time-varying discrete-time system or digital filter given by the input/output relationship
$y n = ∑ r = − ∞ n h n , r u r ,$
where $h n , r$ is the unit-pulse response function, $u n$ is the input, and $y n$ is the output response resulting from $u n$ with zero initial energy (zero initial conditions) prior to the application of the input. Recall that $h n , r$ is the output response at time $n$ resulting from the unit pulse $δ n − r$ applied at time $r$. Moreover, note that by causality, $h n , r = 0$ when $n < r$.
For each fixed integer $i ≥ 0$, let $h i k$ denote the element of the ring $A$ defined by
The function $h i k$ is equal to the value of the unit-pulse response function $h n , k$ at the time point $n = i + k$, which is located $i$ steps after the initial time $k$. As first defined in [7], the transfer function $H z , k$ of the system given by Equation (112) is the element of the power series ring $A z − 1$ defined by
$H z , k = ∑ i = 0 ∞ z − i h i k .$
From (113), we see that $H z , k$ is equal to the VIT transform of the unit-pulse response function $h n , k$.
The transfer function representation can be generated by taking the VIT transform of the input/output relationship in Equation (112) defined in terms of an arbitrary initial time $k$. To set this up, suppose that the input $u n$ is applied to the system at initial time $k$, so that $u n = 0$ for $n < k$. In general, $u n$ depends on the initial time $k$, so we shall write $u n = u n , k$. Then, the output response $y n , k$ resulting from $u n , k$ will also be a function of $n$ and $k$, and is given by
Taking the VIT transform of both sides of Equation (114) and using Proposition 1, we have the following result.
Proposition 6.
Let $Y z , k$ and $U z , k$ denote the VIT transforms of $y n , k$ and $u n , k$, respectively. Then,
$Y z , k = H z , k U z , k .$
The relationship in Equation (115) is the VIT transfer function representation of the given system. Using Theorem 1, we have the following result on systems defined by a linear time-varying difference equation.
Proposition 7.
The system transfer function $H z , k$ has the left polynomial fraction form $H z , k = ξ z , k − 1 η z , k$ if and only if the system input $u n , k$ and system output $y n , k$ satisfy the linear time-varying difference equation
By Proposition 7, a linear time-varying system is finite-dimensional if and only if its transfer function is a left polynomial fraction.
We shall apply the VIT transfer function framework to the problem of determining the steady-state response to the input $u n , k = S a n − k 1 b k$, where $a , b ∈ A$ with $a k ≠ 0$ for all $k$. Then, $b k$ is the initial value of $u n , k$, and by definition of $S a$,
It is assumed that $∏ r = k n − 1 a r$ is a bounded function of $n$ and does not converge to zero as $n → ∞$. Hence, $u n , k$ does not decay to zero as $n → ∞$. Two simple examples of signals satisfying these conditions are the unit-step function, obtained by taking $a k = b k = 1$ for all $k$, and the complex exponential $u n , k = e j Ω n − k$, obtained by taking $a k = e j Ω$ and $b k = 1$. By Equation (61), the VIT transform of the input $u n , k$ defined by (117) is $U z , k = z − a k − 1 z b k$.
Now suppose that the system or digital filter is a time-varying moving average given by the input/output relationship
Taking the VIT transform of both sides of Equation (118) yields $Y z , k = ∑ i = 0 M v i k z − i U z , k ,$ and thus the transfer function of the moving average filter is
$H z , k = ∑ i = 0 M v i k z − i .$
Then, when the input $u n , k$ is given by Equation (117), the transform of the resulting output is
$Y z , k = ∑ i = 0 M v i k z − i z − a k − 1 z b k .$
Now, $z − a k z i = z i z − a k − i$, and applying Theorem 4 with $β k = a k − i$, and $ξ z , k = φ z , k = z i$, we have
$z − i z − a k − 1 = ξ S β i 1 , k − 1 z − a k − 1 + ψ i z , k z − i ,$
for some $ψ i z , k$. Since $β k = a k − i$ and $ξ z , k = z i$,
$ξ S β i 1 , k = S β i 1 = a k − i a k − i + 1 ⋯ a k − 1 .$
Multiplying both sides of Equation (121) on the left by $v i k$ and on the right by $z b k$, and summing the results for $i = 0 , 1 , … , M$, we have that the transform of the output response is
Let
$Y s s z , k = ∑ i = 0 M v i k S β i 1 − 1 z − a k − 1 z b k$
$Y t r z , k = ∑ i = 0 M v i k ψ i z , k z − i z b k$
so that $Y z , k = Y s s z , k + Y t r z , k$, and $y n , k = y s s n , k + y t r n , k$, where $y s s n , k$ and $y t r n , k$ are the inverse VIT transforms of $Y s s z , k$ and $Y t r z , k$, respectively. Then, since the highest power of $z − 1$ in Equation (125) is equal to $M$, $y t r n , k = 0$ for $n > k + M$, and thus $y t r n , k$ is the transient part of the output response, and $y s s n , k$ is the steady-state part of the output response. Taking the inverse transform of $Y s s z , k$, we then have the following result.
Theorem 5.
The steady-state output response $y s s n , k$ of the time-varying moving average to the input $u n , k$ defined by Equation (117) is
$y s s n , k = ∑ i = 0 M v i n S β i 1 − 1 u n , k , n ≥ k ,$
where$S β i 1 = a n − i a n − i + 1 ⋯ a n − 1$.
Proof.
It follows directly from the transform pair property that the inverse transform of the right side of Equation (124) is equal to the right side of (126). □
A key point here is that the steady-state response $y s s n , k$ is equal to a scaling of the input by the time function $∑ i = 0 M v i n S β i 1 − 1$. As an illustration of this result, suppose that $a k = e j Ω$ and $b k = 1$ for all $k$. Then, $S a i 1 = e j Ω i$ and $u n , k = e j Ω n − k , n ≥ k$. In this case, $β = e j Ω$ and $S β i 1 = e j Ω i$. Hence, $∑ i = 0 M v i n S β i 1 − 1 = ∑ i = 0 M v i n e − j Ω i$.
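The steady-state scaling can be checked numerically. In the sketch below, the moving average is assumed to have the standard form $y n = ∑ i v i n u n − i$ (the explicit form of Equation (118) is not restated here), and the scaling factor uses the reciprocal of the product $a n − i ⋯ a n − 1$, consistent with the inverse appearing in Equation (121); all concrete functions are illustrative:

```python
import math

M = 2
v = lambda i, n: 1.0 + 0.1 * i * math.sin(0.2 * n)   # time-varying MA coefficients
a = lambda n: math.exp(0.1 * math.sin(0.5 * n))      # bounded, non-decaying a_n
k = 0
u = lambda n: math.prod(a(r) for r in range(k, n))   # input (117) with b_k = 1

# moving average assumed in the standard form y(n) = sum_i v_i(n) u(n - i)
y = lambda n: sum(v(i, n) * u(n - i) for i in range(M + 1))

# steady state: for n >= k + M the output equals the input scaled by
#   sum_i v_i(n) / (a_{n-i} ... a_{n-1})
for n in range(k + M, k + 20):
    scale = sum(v(i, n) / math.prod(a(n - j) for j in range(1, i + 1))
                for i in range(M + 1))
    assert math.isclose(y(n), scale * u(n), rel_tol=1e-9)
```

Since the moving average has finite memory $M$, the identity is exact for all $n ≥ k + M$, matching the transient cutoff $y t r n , k = 0$ for $n > k + M$ noted above.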
Now suppose that
Then, by Theorem 5 and Equation (126), the steady-state response to the cosine input (127) is
where $R e$ denotes the real part. Then, $y s s n , k = ∑ i = 0 M v i n c o s i Ω c o s n − k Ω + ∑ i = 0 M v i n s i n i Ω s i n n − k Ω$.
Defining $w 1 n , Ω = ∑ i = 0 M v i n c o s i Ω$ and $w 2 n , Ω = ∑ i = 0 M v i n s i n i Ω$, $y s s n , k$ can be written in the form
$y s s n , k = w 1 2 n , Ω + w 2 2 n , Ω c o s n − k Ω + t a n − 1 − w 2 n , Ω w 1 n , Ω , n ≥ k .$
Hence, the steady-state response of a time-varying moving average filter to the cosine input given by Equation (127) is scaled in magnitude by the time function $w 1 2 n , Ω + w 2 2 n , Ω$ and phase shifted by the time function $t a n − 1 − w 2 n , Ω w 1 n , Ω$. Based on this result, the time-varying frequency response function $H n , Ω$ of the moving average filter can be defined to be
$H n , Ω = w 1 2 n , Ω + w 2 2 n , Ω e x p j t a n − 1 − w 2 n , Ω w 1 n , Ω .$
We now consider linear time-varying systems given by an autoregressive model. First, we need to restrict attention to systems that are stable in the following sense.
Definition 3.
A linear time-varying system with transfer function $H z , k = ξ z , k − 1 η z , k ,$ where $ξ z , k = z N + ∑ i = 0 N − 1 ξ i z i$, is asymptotically stable if, for any initial conditions and any initial time $k ∈ Z$, the solution $y n , k$ to the homogeneous difference equation converges to zero as $n → ∞ .$
Now suppose that the system or digital filter is given by the following time-varying autoregressive model
In this case, the transfer function of the system is equal to $ξ z , k − 1$, where $ξ z , k = z N + ∑ i = 0 N − 1 ξ i k z i$, and when the input $u n , k$ is defined by Equation (117), the transform of the output response is
$Y z , k = ξ z , k − 1 z − a k − 1 z b k .$
The steady-state part of the output response can be determined by decomposing the right side of Equation (130) using the result in Corollary 1. This requires that $z − a ξ z , k$ be expressed in the form
$z − a ξ z , k = φ z , k z − β ,$
for some $φ z , k$ and $β$. If $z − a$ commutes with $ξ z , k$, (131) is satisfied with $β = a$. In the general case, the computation of $β$ can be carried out as follows.
Let $z − a ξ z , k = γ z , k = z N + 1 + ∑ i = 0 N γ i z i$. Suppose that $β$ satisfies Equation (131). Then, since $z − β$ is a right factor of $γ z , k$, by Proposition 5, the evaluation $γ S β i 1 , k$ is equal to zero. That is,
$γ S β i 1 , k = S β N + 1 1 + ∑ i = 0 N γ i k S β i 1 = 0 .$
By (56), $S β N + 1 1 = β k + N S β N 1$. Inserting this into Equation (132) gives
$β k + N S β N 1 + ∑ i = 0 N γ i S β i 1 = 0 .$
Solving Equation (133) for $β k + N$, we have
$β k + N = − ∑ i = 0 N γ i S β N 1 − 1 S β i 1 ,$
where $S β 0 1 = 1$, and when $i = 0$, $S β N 1 − 1 S β i 1 = S β N 1 − 1$. The function $β k + N$ can be computed for a finite range $k 0 ≤ k ≤ k 1$ by solving Equation (134) recursively for a given set of initial conditions $β k 0 , β k 0 + 1 , … , β k 0 + N − 1$. Since $β = a$ when $a$ and the coefficients $ξ i$ of $ξ z , k$ are constant functions, we shall take the initial conditions to be $β k 0 + i = a k 0 + i , i = 0 , 1 , … , N − 1$. Since $a k ≠ 0$ for all $k$, Equation (134) can be solved recursively with these initial conditions, although there is a possibility that the time variance can result in a zero value for $β k + N$ for some value of $k > k 0 + N$. Here, we assume that Equation (134) yields a solution with $β k + N ≠ 0$ for $k 0 ≤ k ≤ k 1 − N$.
Once $β k$ has been computed for $k 0 ≤ k ≤ k 1$, the coefficients of the polynomial $φ z , k$ can be computed from the relationship in Equation (131): Let $φ z , k = z N + ∑ i = 0 N − 1 φ i z i$. Then,
Equating the right side of Equation (135) to $γ z , k = z N + 1 + ∑ i = 0 N γ i z i$ gives
$φ i − 1 − φ i σ i β = γ i , i = 1 , 2 , … , N − 1$
From Equation (137), $φ N − 1 k = β k + N + γ N k$ and $φ 0 k = − γ 0 k / β k$, and from Equation (136), $φ i − 1 = γ i + φ i σ i β$ for $i = N − 1 , N − 2 , … , 1$. Then, inserting the values of $β k$ for $k = k 0 , k 0 + 1 , … , k 1$ yields the values of the $φ i k$ for $k = k 0 , k 0 + 1 , … , k 1 − N$. We then have the following result.
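The recursive solution of Equation (134) and the computation of $φ z , k$ can be sketched for the first-order case $N = 1$; the concrete coefficient functions below are illustrative assumptions, and the general-$N$ version follows the same recursive pattern:

```python
import math

# First-order illustration: xi(z,k) = z + xi0(k), so that
# gamma(z,k) = (z - a)(z + xi0) = z^2 + (xi0(k+1) - a(k)) z - a(k) xi0(k)
a = lambda k: 2.0 + 0.2 * math.sin(0.4 * k)      # illustrative a_k != 0
xi0 = lambda k: 0.5 + 0.1 * math.cos(0.3 * k)    # illustrative coefficient

gamma1 = lambda k: xi0(k + 1) - a(k)
gamma0 = lambda k: -a(k) * xi0(k)

k0, k1 = 0, 40
beta = {k0: a(k0)}                               # initial condition beta_{k0} = a_{k0}
for k in range(k0, k1):
    # Equation (134) with N = 1: beta_k beta_{k+1} + gamma1(k) beta_k + gamma0(k) = 0
    beta[k + 1] = -gamma1(k) - gamma0(k) / beta[k]

# phi(z,k) = z + phi0(k) with phi0 = gamma1 + beta_{k+1} (Equation (137) for N = 1)
phi0 = lambda k: gamma1(k) + beta[k + 1]

# verify the factorization phi (z - beta) = gamma coefficient-wise:
#   z coefficient:  phi0(k) - beta(k+1) = gamma1(k)
#   constant term: -phi0(k) beta(k)    = gamma0(k)
for k in range(k0, k1 - 1):
    assert math.isclose(phi0(k) - beta[k + 1], gamma1(k), rel_tol=1e-9)
    assert math.isclose(-phi0(k) * beta[k], gamma0(k), rel_tol=1e-9)
```

The check confirms that the factor $z − a$ has been “passed through” $ξ z , k$, producing a perturbed right factor $z − β$ with $β ≠ a$ in general.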
Theorem 6.
Suppose that the system given by the time-varying autoregressive model in Equation (129) is stable, $φ z , k$ and $β$ satisfy Equation (131), and the division of $φ z , k$ on the left by $z − a$ does not result in a remainder that is identically zero. Then, the steady-state response $y s s n , k$ to the input Equation (117) is given by
Proof.
By Corollary 1, the transform $Y z , k$ of the output response resulting from the input defined by Equation (117) has the decomposition
$Y z , k = z − β − 1 φ ^ T a i 1 , k − 1 b k + ξ z , k − 1 χ z , k ,$
for some $χ z , k ∈ Q A z$. Since the system is stable, the inverse transform of the term $ξ z , k − 1 χ z , k$ in (139) must converge to zero as $n → ∞$, and thus the transform $Y s s z , k$ of the steady-state part of the output response is
$Y s s z , k = z − β − 1 φ ^ T a i 1 , k − 1 b k .$
Taking the inverse transform of Equation (140) using the transform pair (15) yields the steady-state response given by Equation (138). □
In contrast to the moving average case, by Theorem 6 the steady-state response to the input defined by Equation (117) is not a scaled version of the input when the system is given by the autoregressive model in Equation (129). This is a consequence of the fact that $β = a$ does not satisfy the relationship in Equation (131) as a result of the time variance of the coefficients of $ξ z , k$. In the case when $a$ is the complex exponential $a = e j Ω$, where $Ω$ is a fixed frequency, the solution for $β$ given by Equation (134) can be expressed in the polar form $β k = m k e j θ k$ with $θ k ≠ Ω$ in general. Hence, the time variance will result in new frequencies appearing in the steady-state output response.
It is also interesting to note that if the decomposition in Theorem 4 is applied to $Y z , k$, we obtain the first-order term
$ξ S β i 1 , k − 1 z − a − 1 b k .$
The inverse transform of (141) is a scaled version of the input. However, in general it is not the steady-state response since $φ z , k$ in Equation (131) may not be stable (i.e., $φ z , k − 1$ may not be the transfer function of a stable system). If $φ z , k$ is stable, then the inverse transform of (141) can be defined to be the steady-state response and the scaling factor $ξ S β i 1 , k − 1$ defines a frequency response function for the time-varying autoregressive system model. The derivation of an expression for this frequency function is omitted.
In the general case when the system is given by the input/output relationship (116), the steady-state response to the input defined by Equation (117) can be computed by combining the above results for the moving average and autoregressive models. The details are omitted.

## 7. Conclusions

One of the key constructs in the paper is the scaling of $z − i$ by a time function defined in terms of the semilinear transformation $S a$. As illustrated in Section 5 and Section 6, this result can be used to generate linear time-varying recursions for a large class of discrete-time signals. Another key construct is the extraction of a first-order term from $F z , k = z − a ξ z , k − 1 η z , k$, where deg $η ≤$ deg $ξ$. It follows from the results in Section 4 and Section 5 that $F z , k$ cannot be decomposed into terms having denominators equal to $z − a$ and $ξ z , k$ unless $a$ and the coefficients of $ξ z , k$ are constant functions (the time-invariant case). In the time-varying case, to carry out a decomposition with one of the terms being a first-order polynomial fraction, it is necessary to write $z − a ξ z , k$ in the form $φ z , k z − β$ for some $φ z , k$ and $β$. An interesting characterization of this result is that the factor $z − a$ must be “passed through” the polynomial $ξ z , k$ to yield the factor $z − β$. Of course, this is always possible in the case when $a$ and the coefficients of $ξ z , k$ are constant functions, in which case $β = a$. In general, time variance “perturbs” $a$ when it is passed through $ξ z , k$, resulting in $β$ which differs from $a$. This raises the question as to whether or not there is a unique $β$ corresponding to $a$. In Section 6, $β$ is constructed by taking the initial values $β k 0 + i = a k 0 + i , i = 0 , 1 , … , N − 1$, where $k 0$ is the initial time and $N$ is the degree of $ξ z , k$. Then, solving (134) yields a unique $β$ for these initial values. Hence, the $β$ constructed here is the unique function for which $z − a ξ z , k = φ z , k z − β$, and which matches the values of $a k 0 + i$ for $i = 0 , 1 , … , N − 1$.
As discussed in Section 5, $[(z-a)\xi(z,k)]^{-1}\eta(z,k)$ has two decompositions, one with denominators equal to $z-a$ and $\varphi(z,k)$, and a second with denominators equal to $z-\beta$ and $\xi(z,k)$. Note that the denominators are equal to the left factors of $(z-a)\xi(z,k) = \varphi(z,k)(z-\beta)$ in the first decomposition, and to the right factors in the second. As noted in Section 5, when $a$ and the coefficients of $\xi(z,k)$ are constant functions, there is only one decomposition, since $\beta = a$ and $\varphi(z,k) = \xi(z,k)$. In the decomposition with denominator $\varphi(z,k)$, an interesting open problem is determining when $\varphi(z,k)^{-1}$ remains stable when $\xi(z,k)^{-1}$ is stable. This will most likely depend on the rate of change of $a$ and the coefficients of $\xi(z,k)$.
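The left-factor/right-factor distinction can be checked numerically. The sketch below (a degree-two case with illustrative functions, not taken from the paper) right-divides by a monic first-order factor in the skew ring with $z\,f(k) = f(k+1)\,z$: for $P = (z-a)(z-c)$ with time-varying $a$, the right factor $z-c$ divides with zero remainder, while the left factor $z-a$ generally does not.

```python
# Sketch (assumed setup): right division of a monic degree-2 skew
# polynomial z^2 + p1(k) z + p0(k) by (z - b), with z * f(k) = f(k+1) * z.

def right_divide_deg2(p1, p0, b, k):
    """Returns (q0, r) at time k: quotient z + q0(k), scalar remainder r(k)."""
    q0 = p1(k) + b(k + 1)   # match the z^1 coefficient of (z + q0)(z - b)
    r = p0(k) + q0 * b(k)   # leftover constant term
    return q0, r

a = lambda k: 2.0 + 0.1 * k   # example time-varying function
c = lambda k: 3.0             # example constant coefficient

# Coefficients of P = (z - a)(z - c) = z^2 + p1 z + p0:
p1 = lambda k: -(a(k) + c(k + 1))
p0 = lambda k: a(k) * c(k)

_, r_right = right_divide_deg2(p1, p0, c, 5)  # (z - c): the right factor
_, r_left = right_divide_deg2(p1, p0, a, 5)   # (z - a): only a left factor
print(r_right, r_left)  # prints 0.0 and 0.25: z - a is not a right divisor
```

In the time-invariant case both remainders vanish; the nonzero remainder for $z-a$ here is exactly why the denominators of a decomposition must be right factors, forcing the pass-through construction of $\beta$.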

## Funding

This research received no external funding.

## Institutional Review Board Statement

Not applicable.

## Informed Consent Statement

Not applicable.

## Data Availability Statement

Data sharing not applicable.

## Acknowledgments

The author thanks the U.S. National Science Foundation for its support of the author’s research on time-varying systems in past years.

## Conflicts of Interest

The author declares no conflict of interest.

Table 1. Properties of the variable initial time (VIT) transform: linearity, right $A$-linearity, left shift, right shift, multiplication by $n$, multiplication by $c^{n-k}$, multiplication by $\cos \Omega n$, multiplication by $\sin \Omega n$, summation, and multiplication by $a(n)$.
Table 2. Basic VIT transform pairs.
