1. Introduction
The introduction of a time-varying $z$-transform for the study of linear time-varying discrete-time systems or digital filters goes back to the discrete-time counterpart of the Zadeh system function, which first appeared in [1]. In that work, linear time-varying systems/filters are studied in terms of the time-varying $z$-transform
$$H(n,z) = \sum_{i=0}^{\infty} h(n, n-i)\, z^{-i},$$
where $n$ is an integer-valued variable and $h(n,m)$ is the unit-pulse response function of the system. For papers that utilize this construction, see [2,3,4,5]. It is known that it is not possible to express the $z$-transform of the output response as the product of $H(n,z)$ with the $z$-transform of the input. Moreover, as discussed in [6], when the system or filter is finite-dimensional, this transform is seldom expressible as a polynomial fraction in $z$ with time-varying coefficients. These two limitations were circumvented in [7] by defining the transfer function to be the formal power series
$$H(z) = \sum_{i=0}^{\infty} z^{-i} h_i, \quad h_i(k) = h(k+i, k). \quad (1)$$
In addition, in [7] the generalized $z$-transform of a discrete-time signal $x(n)$ is defined to be
$$\hat X(z) = \sum_{i=0}^{\infty} z^{-i} x(i)\,\delta, \quad (2)$$
where $\delta$ is the unit-pulse function ($\delta(0) = 1$ and $\delta(k) = 0$ for $k \neq 0$). Then, as shown in [7], $\hat Y(z) = H(z)\hat X(z)$, where $\hat X(z)$ and $\hat Y(z)$ are the generalized $z$-transforms of the input and output, respectively. It is also shown that if the system is given by a finite-dimensional state representation, the transfer function is a matrix polynomial fraction in $z$ with time-varying coefficients.
The generalized $z$-transform defined by Equation (2) is equal to the ordinary $z$-transform multiplied on the right by the unit pulse $\delta$. A simple modification of Equation (2) results in a time-varying transform that satisfies a number of basic properties analogous to the properties of the ordinary $z$-transform. The modification is based on the observation that the generalized $z$-transform defined in Equation (2) can be expressed in the form
$$\hat X(z) = \sum_{i=0}^{\infty} z^{-i} x_i\,\delta, \quad x_i(k) = x(k+i). \quad (3)$$
In Equation (3), $x_i(k)$ is the value of the signal $x$ at the time point $k+i$, which is $i$ steps after the time point $k$, where $k$ is the initial time. The variable initial time (VIT) transform of the signal $x$ is then defined to be the formal power series
$$X(z) = \sum_{i=0}^{\infty} z^{-i} x_i.$$
Note that $\hat X(z) = X(z)\delta$. The VIT transform can also be extended to any two-variable function defined on $\mathbb{Z} \times \mathbb{Z}$, where $\mathbb{Z}$ is the set of integers, and when this extension is applied to a unit-pulse response function $h(n,k)$, the result is the transfer function defined by Equation (1).
The formal definition of the VIT transform and some simple examples of the transform are given in Section 2. Various properties of the VIT transform are proved in Section 3, including the property that multiplication by a function $a(n)$ in the time domain is equivalent to multiplication by $a(k)$ on the left in the VIT transform domain. It is this property, along with the left-shift property, that converts signals or two-variable time functions given by linear time-varying difference equations into left polynomial fractions consisting of polynomials in $z$ with variable coefficients. It is also proved in Section 3 that the transform of a fundamental operation between two functions defined on $\mathbb{Z} \times \mathbb{Z}$ is equal to the product of the VIT transforms. It is this result that yields a transfer function framework for the study of linear time-varying discrete-time systems.
In Section 4, it is shown that the powers $z^{-i}$ of the symbol $z$ can be scaled by a time function, which is given in terms of a semilinear transformation $\psi_a$ defined on the ring $A$ consisting of all functions from the integers $\mathbb{Z}$ into the reals $\mathbb{R}$. Given a VIT transform that is a polynomial fraction in $z$, the scaling of $z^{-i}$ by a time function results in a large collection of new transforms which are polynomial fractions. This construct results in the generation of a class of signals that satisfy linear time-varying recursions. Examples are given in the case of the Gabor-Morlet wavelet [8] and sinusoids with general time-varying frequencies.
The addition and decomposition of VIT transforms is studied in Section 5. It is shown that the addition of two left polynomial fractions can be expressed in single-fraction form by using the extended right Euclidean algorithm in a skew (noncommutative) polynomial ring with coefficients in the quotient field $Q$ of the ring $A$ of time functions. This results in recursions over $Q$ for the inverse transform of the sum of the fractions, although in general the recursions may have singularities. The decomposition of a polynomial fraction is carried out in Section 5 in terms of the evaluation of polynomials at time functions defined in terms of semilinear transformations. In Section 6, the VIT transform approach is applied to linear time-varying discrete-time systems or digital filters. It is shown that the VIT transform of the system output is equal to the product of the VIT transform of the input and the VIT transform of the unit-pulse response function. This result is used to derive an expression for the steady-state output response resulting from signal inputs having a first-order transform. The focus is on the case when the system is given by a time-varying moving average or autoregressive model. Section 7 contains some concluding comments.
2. The VIT Transform
With $\mathbb{Z}$ equal to the set of integers and $\mathbb{R}$ equal to the field of real numbers, let $A$ denote the set of all functions from $\mathbb{Z}$ into $\mathbb{R}$. Given $a, b \in A$, we define addition by $(a+b)(k) = a(k) + b(k)$ and multiplication by $(ab)(k) = a(k)b(k)$. With these two pointwise operations, $A$ is a commutative ring with multiplicative identity $1$, where $1(k) = 1$ for all $k \in \mathbb{Z}$. Let $\sigma$ denote the left shift operator on $A$ defined by $(\sigma a)(k) = a(k+1)$. With the shift operator $\sigma$, the ring $A$ is called a difference ring.
With $z$ equal to a symbol or indeterminate, let $A((z^{-1}))$ denote the set of all formal Laurent series of the form
$$F(z) = \sum_{i=-N}^{\infty} z^{-i} a_i, \quad (4)$$
where $N \in \mathbb{Z}$ and the $a_i \in A$. Note that the coefficients of the power series in (4) are written on the right of the $z^{-i}$. With the usual addition of Laurent series and with multiplication defined by the rule
$$a\, z^{-j} = z^{-j}(\sigma^j a), \quad a \in A, \quad (5)$$
so that
$$(z^{-i} a)(z^{-j} b) = z^{-(i+j)} (\sigma^j a)\, b, \quad (6)$$
$A((z^{-1}))$ is a noncommutative ring with multiplicative identity $1$. Let $A[z]$ denote the subring of $A((z^{-1}))$ consisting of all polynomials in $z$. That is, the elements of $A[z]$ are of the form
$$a_N z^N + a_{N-1} z^{N-1} + \cdots + a_1 z + a_0, \quad a_i \in A.$$
Finally, let $A[[z^{-1}]]$ denote the subring of $A((z^{-1}))$ consisting of all formal power series in $z^{-1}$ given by (4) with $N = 0$.
The rings $A((z^{-1}))$, $A[z]$, and $A[[z^{-1}]]$ are called skew rings due to the noncommutative multiplication defined in Equation (6). Skew polynomial rings were first introduced and studied by Oystein Ore in his 1933 paper [9]. These ring structures have appeared in past work [7,10,11] on the algebraic theory of linear time-varying discrete-time systems.
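The following is a minimal numerical sketch (ours, not from the paper) of the multiplication rule in Equation (6), representing a polynomial $\sum_i z^{-i} a_i$ as a Python list of coefficient callables with the coefficients on the right; the helper names `shift` and `skew_mul` are our own.

```python
def shift(a, j):
    """(sigma^j a)(k) = a(k + j): the j-fold left shift of a coefficient function."""
    return lambda k: a(k + j)

def skew_mul(p, q):
    """Multiply p(z) = sum_i z^{-i} p[i] and q(z) = sum_j z^{-j} q[j],
    coefficients on the right, using (z^{-i} a)(z^{-j} b) = z^{-(i+j)} (sigma^j a) b."""
    out = [lambda k: 0.0 for _ in range(len(p) + len(q) - 1)]
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            prev = out[i + j]
            out[i + j] = (lambda prev, a, b, j: lambda k: prev(k) + a(k + j) * b(k))(prev, a, b, j)
    return out

# Example: (z^{-1} a)(z^{-1} b) = z^{-2} (sigma a) b
a = lambda k: float(k)           # a(k) = k
b = lambda k: 2.0                # constant 2
p = [lambda k: 0.0, a]           # z^{-1} a
q = [lambda k: 0.0, b]           # z^{-1} b
r = skew_mul(p, q)
print(r[2](5))                   # (sigma a)(5) * b(5) = a(6) * 2 = 12.0
```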
Now, let $x(n)$ denote a real-valued discrete-time signal. For each fixed integer $i \geq 0$, let $x_i(k) = x(k+i)$. Then, $x_i(k)$ is equal to the value of the signal at the time point $k+i$, which is located $i$ steps after the time point $k$, where $k$ is viewed as the initial time. The initial time is taken to be an integer variable ranging over $\mathbb{Z}$. Then, for each fixed $i$, $x_i$ is a function from $\mathbb{Z}$ into $\mathbb{R}$, and thus is an element of the difference ring $A$. If the given signal is defined only for $n \geq k_0$ for some fixed $k_0$, then the values of the $x_i$ are known only for $k \geq k_0$. In this case, the pointwise operations of addition and multiplication can still be carried out on the $x_i$, but the results will be known only for $k \geq k_0$. In addition, for any positive integer $q$, the $q$-step left shift operation can be performed on the $x_i$, but the result will be known only for $k \geq k_0 - q$. Hence, the $x_i$ can still be viewed as elements of the difference ring $A$. Then, we have the following concept.
Definition 1. The variable initial time (VIT) transform of a real-valued discrete-time signal $x(n)$ is the element $X(z)$ of $A[[z^{-1}]]$ defined by
$$X(z) = \sum_{i=0}^{\infty} z^{-i} x_i, \quad x_i(k) = x(k+i). \quad (7)$$
Note that the coefficients of the power series in Equation (7) are written on the right. As shown below, this leads to left polynomial fractions for the transform in the case when $x(n)$ satisfies a linear time-varying difference equation. Moreover, note that for each fixed integer value of $k$, $X(z)$ is the one-sided formal $z$-transform of the signal $x(k+i)$, $i \geq 0$, where "formal" means that $z$ is viewed as a formal symbol, not a complex variable. In particular, for $k = 0$, $X(z)$ is the $z$-transform of $x(n)$, $n \geq 0$. Finally, if the given signal is defined only for $n \geq k_0$, then the transform is defined only for $k \geq k_0$.
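A minimal sketch (ours, not from the paper) of Definition 1: the coefficient of $z^{-i}$ is the function $x_i(k) = x(k+i)$ of the initial time $k$, and fixing $k$ recovers the coefficient sequence of the ordinary one-sided $z$-transform of the signal restarted at time $k$. The function names below are ours.

```python
x = lambda n: 2.0 ** (-n)        # sample signal x(n)

def vit_coeff(i, k):
    """Right coefficient of z^{-i} in X(z), as a function of the initial time k."""
    return x(k + i)

# For each fixed k, the sequence vit_coeff(0,k), vit_coeff(1,k), ... is
# x(k), x(k+1), ..., i.e., the ordinary one-sided z-transform coefficients.
print([vit_coeff(i, 0) for i in range(4)])   # [1.0, 0.5, 0.25, 0.125]
print([vit_coeff(i, 2) for i in range(4)])   # [0.25, 0.125, 0.0625, 0.03125]
```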
The VIT transform can be extended to any real-valued two-variable function $f(n,k)$ defined on $\mathbb{Z} \times \mathbb{Z}$: Given $f(n,k)$, the VIT transform $F(z)$ of $f$ is defined to be the element of $A[[z^{-1}]]$ given by
$$F(z) = \sum_{i=0}^{\infty} z^{-i} f_i, \quad f_i(k) = f(k+i, k). \quad (8)$$
Given a discrete-time signal $x(n)$, let $f(n,k) = x(n)$ for all $k$. Then, from Equations (7) and (8), the VIT transform $F(z)$ of $f$ is equal to the VIT transform $X(z)$ of $x$. Hence, all of the results derived in this work on the VIT transform of a general two-variable function $f(n,k)$ can be directly applied to the VIT transform of a discrete-time signal $x(n)$. In addition, if we define $f(n,k) = h(n,k)$, where $h(n,k)$ is the unit-pulse response function of a linear time-varying discrete-time system, the VIT transform of $f$ is the transfer function of the system as defined in [7]. Thus, results on the VIT transform of a two-variable function can also be directly applied to linear time-varying systems.
Given a VIT transform $F(z)$, the original time function $f(n,k)$ can be recovered from the transform by setting $f(k+i,k)$ equal to the right coefficient $f_i(k)$ of $z^{-i}$ in the power series representation given in Equation (8). In the following development, we will use the notation
$$f(n,k) \longleftrightarrow F(z)$$
to denote a VIT transform pair. It should be noted that in operations involving the VIT transform $F(z)$, the values of the initial time $k$ can be restricted to a finite interval $k_0 \leq k \leq k_1$, where $k_0 < k_1$. This is illustrated in Section 6, in the application to computing the steady-state output responses to various inputs in a linear time-varying system.
We shall now give some simple examples of the VIT transform. Let the function $f(n,k)$ be the unit pulse $\delta(n-k)$ located at the initial time $k$. Then, $f_i(k) = \delta(i)$, and inserting this into Equation (8), we have that the VIT transform is equal to $1$ for all $k$. Therefore, we have the transform pair
$$\delta(n-k) \longleftrightarrow 1. \quad (9)$$
Now, suppose that $f(n,k) = f_0(k)$ for all $n \geq k$, where $f_0(k)$ is the value of $f$ at the initial time $k$. Then, the VIT transform of $f$ is equal to
$$F(z) = \sum_{i=0}^{\infty} z^{-i} f_0 = (1 - z^{-1})^{-1} f_0. \quad (10)$$
Thus, we have the transform pair
$$f_0(k),\ n \geq k \ \longleftrightarrow\ (1 - z^{-1})^{-1} f_0. \quad (11)$$
Note that the VIT transform in (11) is a fraction.
Given $a \in A$, consider the function $f(n,k)$ defined by the first-order linear time-varying difference equation
$$f(n+1, k) = a(n)\, f(n,k), \quad n \geq k, \quad (12)$$
with initial value $f(k,k) = f_0(k)$ at initial time $k$. The solution to Equation (12) is
$$f(n,k) = a(n-1)\, a(n-2) \cdots a(k)\, f_0(k), \quad n > k,$$
which can be written in the product form
$$f(n,k) = \left[\prod_{m=k}^{n-1} a(m)\right] f_0(k), \quad n \geq k, \quad (13)$$
where the empty product (the case $n = k$) is taken to be $1$. Note that the variable $k$ in Equation (13) can be evaluated at any specific initial time, and setting $n = k+i$ gives
$$f_i(k) = \left[\prod_{m=k}^{k+i-1} a(m)\right] f_0(k) = \left[(\sigma^{i-1} a)(\sigma^{i-2} a)\cdots(\sigma a)\, a\, f_0\right](k).$$
Inserting $f_i$ into Equation (8), the VIT transform of $f$ is equal to
$$F(z) = \sum_{i=0}^{\infty} z^{-i} (\sigma^{i-1} a)(\sigma^{i-2} a)\cdots(\sigma a)\, a\, f_0. \quad (14)$$
The power series in (14) can be written in the left fraction form $F(z) = (1 - z^{-1} a)^{-1} f_0$. To verify this, using the multiplication defined by Equation (6), multiply Equation (14) by $1 - z^{-1}a$ on the left. This results in $(1 - z^{-1}a)F(z) = f_0$, which proves the validity of the fraction form. Therefore, we have the VIT transform pair
$$\left[\prod_{m=k}^{n-1} a(m)\right] f_0(k) \ \longleftrightarrow\ (1 - z^{-1} a)^{-1} f_0. \quad (15)$$
Note that the transform pair (11) follows directly from the transform pair (15) by setting $a(n) = 1$ for all $n \in \mathbb{Z}$. The left fraction form of the VIT transform given in (15) is a result of the function $f$ satisfying the first-order linear time-varying recursion $f(n+1,k) = a(n)f(n,k)$. As will be shown below, any $f(n,k)$ satisfying a linear time-varying recursion has a VIT transform which is a left polynomial fraction. This is the primary motivation for considering the VIT transform.
To illustrate the application of the transform pair (15), consider the Gaussian function given by
$$g(n) = e^{-n^2/(2\lambda^2)}, \quad (16)$$
where $\lambda$ is a nonzero real constant. The Gaussian satisfies the first-order difference equation
$$g(n+1) = a(n)\, g(n), \quad a(n) = e^{-(2n+1)/(2\lambda^2)}. \quad (17)$$
The solution to Equation (17) is
$$g(n) = \left[\prod_{m=k}^{n-1} a(m)\right] g_0(k), \quad n \geq k,$$
where $g_0(k) = g(k)$ is the value of the Gaussian function at the initial time $k$. Using the transform pair (15) with $a(n) = e^{-(2n+1)/(2\lambda^2)}$, we have that the VIT transform $G(z)$ of the Gaussian has the left fraction form
$$G(z) = (1 - z^{-1} a)^{-1} g_0. \quad (18)$$
In this work, we will focus on the case when the VIT transform of $f(n,k)$ can be written as a left polynomial fraction
$$F(z) = d(z)^{-1}\, \nu(z), \quad (19)$$
where $d(z)$ is a nonzero monic (leading coefficient is equal to $1$) polynomial, and $\nu(z) \in A[z]$ with $\deg \nu(z) \leq \deg d(z)$. The term $d(z)$ in the fraction is the denominator and $\nu(z)$ is the numerator. The order of the fraction $d(z)^{-1}\nu(z)$ is defined to be the degree of the denominator $d(z)$, assuming that $d(z)$ and $\nu(z)$ do not have any common left factors. In the left fraction form (19), the factor $d(z)^{-1}$ is the element $E(z)$ of $A((z^{-1}))$ given by $d(z)E(z) = 1$. In other words, $d(z)^{-1}$ is the right inverse of $d(z)$ in the ring $A((z^{-1}))$. Since $d(z)$ is monic, it has an inverse in $A((z^{-1}))$ which can be computed by dividing $d(z)$ into 1 using left long division. The product of $d(z)^{-1}$ and $\nu(z)$ in (19) is carried out using multiplication in the ring $A((z^{-1}))$. For example, in the case of the transform pair (15), using the multiplication given by Equation (6) and dividing $1 - z^{-1}a$ into 1 on the left gives
$$(1 - z^{-1}a)^{-1} = 1 + z^{-1}a + z^{-2}(\sigma a)a + z^{-3}(\sigma^2 a)(\sigma a)a + \cdots.$$
Then, multiplying the above quotient on the right by $f_0$, we obtain the power series for the VIT transform given by (14).
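The left long division above can be checked numerically. The following sketch (ours, not the paper's) computes the power-series coefficients of $(1 - z^{-1}a)^{-1}$ from the recursion implied by the division and compares them with the product form in Equation (13); the sample coefficient function $a$ is an arbitrary choice.

```python
import math

def inverse_coeffs(a, num_terms):
    """Coefficients g_i of (1 - z^{-1} a)^{-1} = sum_i z^{-i} g_i,
    from left long division: g_0 = 1, g_{i+1}(k) = a(k+i) g_i(k)."""
    gs = [lambda k: 1.0]
    for i in range(num_terms - 1):
        prev = gs[-1]
        gs.append((lambda prev, i: lambda k: a(k + i) * prev(k))(prev, i))
    return gs

a = lambda k: math.exp(-0.1 * (2 * k + 1))   # sample time-varying coefficient

def product_form(i, k):
    """prod_{m=k}^{k+i-1} a(m), the product form from Equation (13)."""
    out = 1.0
    for m in range(k, k + i):
        out *= a(m)
    return out

gs, k = inverse_coeffs(a, 5), 3
print(all(abs(gs[i](k) - product_form(i, k)) < 1e-12 for i in range(5)))  # True
```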
3. Properties of the VIT Transform
The VIT transform satisfies several properties that are analogous to the properties of the ordinary z-transform. It also satisfies a key property involving multiplication by an arbitrary time function which is not shared by the ordinary z-transform. We begin with linearity and then consider the VIT transform of left and right time shifts. In the last part of the section, we utilize the results to prove that functions satisfying a linear time-varying difference equation have transforms which are left polynomial fractions.
It is obvious from the definition given by Equation (8) that taking the VIT transform is an $\mathbb{R}$-linear operation. That is, if $F_1(z)$ and $F_2(z)$ are the transforms of the functions $f_1(n,k)$ and $f_2(n,k)$, then for any real numbers $\alpha_1, \alpha_2$, the transform of $\alpha_1 f_1 + \alpha_2 f_2$ is equal to $\alpha_1 F_1(z) + \alpha_2 F_2(z)$. Thus, we have the transform pair
$$\alpha_1 f_1(n,k) + \alpha_2 f_2(n,k) \ \longleftrightarrow\ \alpha_1 F_1(z) + \alpha_2 F_2(z). \quad (20)$$
In addition to being $\mathbb{R}$-linear, the VIT transform is also right $A$-linear. That is, given $a, b \in A$, we have the transform pair
$$f_1(n,k)\,a(k) + f_2(n,k)\,b(k) \ \longleftrightarrow\ F_1(z)\,a + F_2(z)\,b. \quad (21)$$
This follows directly from the definition of the VIT transform.
Given the function $f(n,k)$ with initial value $f_0(k) = f(k,k)$, consider the one-step left shift $f(n+1,k)$. The VIT transform of $f(n+1,k)$ is equal to $\sum_{i=0}^{\infty} z^{-i} f_{i+1}$. Defining the change of index $j = i+1$ gives
$$\sum_{j=1}^{\infty} z^{-(j-1)} f_j = z\left[\sum_{j=0}^{\infty} z^{-j} f_j - f_0\right] = z\left[F(z) - f_0\right], \quad (22)$$
where $F(z)$ is the VIT transform of $f(n,k)$. Therefore, we have the following transform pair
$$f(n+1,k) \ \longleftrightarrow\ z\left[F(z) - f_0\right]. \quad (23)$$
This result is a direct analogue of the left-shift property of the ordinary $z$-transform.
Given a positive integer $q$, the VIT transform pairs for the $q$-step left shift $f(n+q,k)$ and the $q$-step right shift $f(n-q,k)$ are
$$f(n+q,k) \ \longleftrightarrow\ z^q\left[F(z) - \sum_{i=0}^{q-1} z^{-i} f_i\right], \qquad f(n-q,k) \ \longleftrightarrow\ z^{-q} F(z), \quad (24)$$
where $F(z)$ is the transform of $f(n,k)$ and, for the right shift, it is assumed that $f(n,k) = 0$ for $n < k$. The straightforward proof of these transform pairs is omitted.
For the ordinary z-transform, there are several properties arising from the multiplication by particular time functions. These all have analogues in the VIT transform domain. We begin by considering multiplication by $n - k$.
Given $f(n,k)$ with VIT transform $F(z)$ defined by Equation (8), for each fixed $k$, let $dF(z)/dz$ denote the derivative of $F(z)$ with respect to $z$. Then, the VIT transform pair for the function $(n-k)f(n,k)$ is
$$(n-k)\, f(n,k) \ \longleftrightarrow\ -z\, \frac{dF(z)}{dz}. \quad (25)$$
To prove the transform pair (25), take the derivative with respect to $z$ of both sides of Equation (8) for each fixed value of $k$. This gives
$$\frac{dF(z)}{dz} = \sum_{i=0}^{\infty} (-i)\, z^{-i-1} f_i. \quad (26)$$
Note that $df_i/dz = 0$, since the coefficient $f_i$ of $z^{-i}$ does not depend on $z$. Then, multiplying both sides of Equation (26) by $-z$ results in
$$-z\,\frac{dF(z)}{dz} = \sum_{i=0}^{\infty} z^{-i}\, i\, f_i. \quad (27)$$
The right side of Equation (27) is equal to the VIT transform of $(n-k)f(n,k)$, and thus (25) is verified.
To illustrate the application of the transform pair (25), let $f(n,k) = 1$ for all $n \geq k$. Then, using the transform pair (11) with $f_0 = 1$, we have $F(z) = (1 - z^{-1})^{-1}$, and using the transform pair (25), we have that the VIT transform of the ramp function $n - k$, $n \geq k$, is given by
$$-z\,\frac{d}{dz}(1 - z^{-1})^{-1} = (1 - z^{-1})^{-2}\, z^{-1}. \quad (28)$$
This results in the following transform pair
$$n - k,\ n \geq k \ \longleftrightarrow\ (1 - z^{-1})^{-2}\, z^{-1}.$$
We shall now consider multiplication by $c^{n-k}$, where $c$ is a nonzero real or complex number. When $c$ is a complex number, we need to generalize the above ring framework to include coefficients which are functions from $\mathbb{Z}$ into the field $\mathbb{C}$ of complex numbers. In other words, the ring $A$ now consists of all functions from $\mathbb{Z}$ into $\mathbb{C}$.
Given a function $f(n,k)$ with VIT transform $F(z)$ defined by (8), and given a nonzero real or complex number $c$, we can scale $z$ in $F(z)$ by replacing $z$ by $c^{-1}z$. This results in
$$F(c^{-1}z) = \sum_{i=0}^{\infty} (c^{-1}z)^{-i} f_i = \sum_{i=0}^{\infty} z^{-i}\, c^i f_i. \quad (29)$$
The right side of Equation (29) is equal to the VIT transform of $c^{n-k}f(n,k)$. Thus, we have the transform pair
$$c^{n-k}\, f(n,k) \ \longleftrightarrow\ F(c^{-1}z). \quad (30)$$
Using the right $A$-linearity property, we can multiply both sides of the transform pair (30) on the right by the function $c^k$, which results in the transform pair
$$c^{n}\, f(n,k) \ \longleftrightarrow\ F(c^{-1}z)\, c^k. \quad (31)$$
If $F(z)$ is given in the left fraction form $F(z) = d(z)^{-1}\nu(z)$, where $d(z)$ and $\nu(z)$ are polynomials belonging to $A[z]$, then for any real or complex number $c$, we have
$$F(c^{-1}z) = d(c^{-1}z)^{-1}\,\nu(c^{-1}z).$$
In other words, the scaling of $z$ in $F(z)$ can be carried out in the numerator and denominator of the left fraction. This is the case since $c$ is a constant and the noncommutativity of multiplication in the ring $A((z^{-1}))$ has no effect on constant functions. Hence, for example, from the transform pair (15) and using (30) with this scaling, we obtain the transform pair
$$c^{n-k}\left[\prod_{m=k}^{n-1} a(m)\right] f_0(k) \ \longleftrightarrow\ (1 - z^{-1}\, c\, a)^{-1} f_0. \quad (32)$$
We can use the transform pair (30) to compute the VIT transform of a function $f(n,k)$ multiplied by a sine or cosine: Let $\omega$ be a positive real number and consider the complex exponentials $e^{j\omega(n-k)}$ and $e^{-j\omega(n-k)}$, where $j = \sqrt{-1}$. Then, given the function $f(n,k)$ with transform $F(z)$, using Euler's formula and the transform pair (30), we have the transform pairs
$$\cos(\omega(n-k))\, f(n,k) \ \longleftrightarrow\ \tfrac{1}{2}\left[F(e^{-j\omega}z) + F(e^{j\omega}z)\right], \quad (33)$$
$$\sin(\omega(n-k))\, f(n,k) \ \longleftrightarrow\ \tfrac{1}{2j}\left[F(e^{-j\omega}z) - F(e^{j\omega}z)\right]. \quad (34)$$
From the transform pairs (33) and (34), we can determine the VIT transforms of the cosine and sine functions: Again taking $f(n,k) = 1$ for $n \geq k$, so that $F(z) = (1 - z^{-1})^{-1}$, we have
$$\tfrac{1}{2}\left[(1 - e^{j\omega}z^{-1})^{-1} + (1 - e^{-j\omega}z^{-1})^{-1}\right] = \left(1 - 2\cos\omega\, z^{-1} + z^{-2}\right)^{-1}\left(1 - \cos\omega\, z^{-1}\right).$$
This results in the transform pair
$$\cos(\omega(n-k)),\ n \geq k \ \longleftrightarrow\ \left(1 - 2\cos\omega\, z^{-1} + z^{-2}\right)^{-1}\left(1 - \cos\omega\, z^{-1}\right). \quad (35)$$
A similar derivation gives the pair
$$\sin(\omega(n-k)),\ n \geq k \ \longleftrightarrow\ \left(1 - 2\cos\omega\, z^{-1} + z^{-2}\right)^{-1}\sin\omega\, z^{-1}. \quad (36)$$
Next, we consider the summation property: Given the function $f(n,k)$ with transform $F(z)$, let $g(n,k)$ denote the sum of $f$ defined by $g(n,k) = \sum_{m=k}^{n} f(m,k)$, $n \geq k$. Then,
$$g(n,k) = g(n-1,k) + f(n,k), \quad g(k-1,k) = 0, \quad (37)$$
and taking the VIT transform of both sides of Equation (37) and using the right-shift property given by the transform pair (24) results in $G(z) = z^{-1}G(z) + F(z)$. Solving for $G(z)$ gives $G(z) = (1 - z^{-1})^{-1}F(z)$. Thus, we have the transform pair
$$\sum_{m=k}^{n} f(m,k) \ \longleftrightarrow\ (1 - z^{-1})^{-1} F(z). \quad (38)$$
Now, given functions $f(n,k)$ and $g(n,k)$ with $f(n,k) = 0$ and $g(n,k) = 0$ for $n < k$, let $y(n,k)$ denote the function defined by
$$y(n,k) = \sum_{m=k}^{\infty} f(n,m)\, g(m,k). \quad (39)$$
The operation in Equation (39) arises in the study of linear time-varying systems, which are considered in Section 6. We have the following result on the VIT transform of $y(n,k)$.
Proposition 1. With $y(n,k)$ defined by Equation (39), the VIT transform $Y(z)$ of $y(n,k)$ is given by
$$Y(z) = F(z)\, G(z), \quad (40)$$
where $F(z)$ and $G(z)$ are the VIT transforms of $f(n,k)$ and $g(n,k)$.
Proof. Since $f(n,m) = 0$ for $m > n$, the upper value of the summation in Equation (39) can be taken to be $n$. Then, with the change of index $m = k + l$, Equation (39) becomes
$$y(n,k) = \sum_{l=0}^{n-k} f(n, k+l)\, g(k+l, k). \quad (41)$$
Taking the VIT transform of both sides of Equation (41) gives
$$Y(z) = \sum_{i=0}^{\infty} z^{-i}\, y_i, \quad y_i(k) = \sum_{l=0}^{i} f(k+i, k+l)\, g(k+l, k). \quad (42)$$
Applying the index change $j = i - l$ in Equation (42) yields
$$Y(z) = \sum_{j=0}^{\infty}\sum_{l=0}^{\infty} z^{-(j+l)}\, (\sigma^l f_j)\, g_l, \quad (43)$$
where $(\sigma^l f_j)(k) = f_j(k+l) = f(k+j+l, k+l)$. By definition of multiplication in $A((z^{-1}))$, $(z^{-j}f_j)(z^{-l}g_l) = z^{-(j+l)}(\sigma^l f_j)g_l$, and since $f(n,k) = 0$ for $n < k$, Equation (43) reduces to
$$Y(z) = \left[\sum_{j=0}^{\infty} z^{-j} f_j\right]\left[\sum_{l=0}^{\infty} z^{-l} g_l\right]. \quad (44)$$
The right side of Equation (44) is equal to $F(z)G(z)$, and thus, Equation (40) is verified. □
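The following is a small numerical check (ours, not from the paper) of Proposition 1: the $i$-th right coefficient of the skew product $F(z)G(z)$, namely $\sum_{l=0}^{i}(\sigma^l f_{i-l})g_l$, is compared with $y(k+i,k)$ computed directly from Equation (39). The sample functions $f$ and $g$ are arbitrary choices.

```python
import math

f = lambda n, k: math.sin(0.3 * n) + 0.1 * k       # sample f(n,k)
g = lambda n, k: math.cos(0.2 * n) - 0.05 * k      # sample g(n,k)

def y(n, k):
    """y(n,k) = sum_{m=k}^{n} f(n,m) g(m,k), Equation (39) with upper limit n."""
    return sum(f(n, m) * g(m, k) for m in range(k, n + 1))

def prod_coeff(i, k):
    """i-th right coefficient of F(z)G(z) at initial time k:
    sum_{l=0}^{i} (sigma^l f_{i-l})(k) g_l(k) = sum_l f(k+i, k+l) g(k+l, k)."""
    return sum(f(k + i, k + l) * g(k + l, k) for l in range(i + 1))

ok = all(abs(y(k + i, k) - prod_coeff(i, k)) < 1e-12
         for k in range(-3, 4) for i in range(6))
print(ok)  # True
```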
The final property we consider is multiplication by an arbitrary function: Given $f(n,k)$ with transform $F(z)$, and $a \in A$, the VIT transform of the product $a(n)f(n,k)$ is equal to
$$\sum_{i=0}^{\infty} z^{-i}\, (\sigma^i a)\, f_i. \quad (45)$$
By definition of multiplication in $A((z^{-1}))$, Equation (45) can be written as $a\,F(z)$, and thus, we have the transform pair
$$a(n)\, f(n,k) \ \longleftrightarrow\ a\, F(z). \quad (46)$$
Therefore, multiplication by a function of $n$ in the time domain is equivalent to multiplication by the function on the left in the transform domain, with the time variable $n$ replaced by the initial time variable $k$.
For example, let $a(n) = n$ and $f(n,k) = 1$ for $n \geq k$. Then, by (46), we have the transform pair
$$n,\ n \geq k \ \longleftrightarrow\ a\,(1 - z^{-1})^{-1}, \quad a(k) = k. \quad (47)$$
The transform in (47) looks quite different from the result obtained by writing $n = (n-k) + k$ and using (28) and (11), but the transforms must be equal. That is, we must have
$$a\,(1 - z^{-1})^{-1} = (1 - z^{-1})^{-2} z^{-1} + (1 - z^{-1})^{-1} a. \quad (48)$$
To verify Equation (48), multiply both sides on the left by $1 - z^{-1}$ and on the right by $1 - z^{-1}$. This gives
$$(1 - z^{-1})\, a = z^{-1} + a\,(1 - z^{-1}). \quad (49)$$
By the definition of multiplication in $A((z^{-1}))$, $a z^{-1} = z^{-1}(\sigma a)$, and using this in the right side of Equation (49) gives
$$z^{-1} + a - z^{-1}(\sigma a). \quad (50)$$
Finally, using $\sigma a = a + 1$ (which holds since $a(k) = k$) in Equation (50) and comparing the result with the left side of Equation (49), which equals $a - z^{-1}a$, verifies Equation (48).
Using the transform pair (46) and the transform pair (23) for the left shift, we have the following result relating linear time-varying difference equations and left polynomial fractions in the ring $A((z^{-1}))$.
Theorem 1. The VIT transform of $f(n,k)$ has the left polynomial fraction form
$$F(z) = \left[z^N + a_{N-1}z^{N-1} + \cdots + a_1 z + a_0\right]^{-1}\left[\gamma_N z^N + \gamma_{N-1}z^{N-1} + \cdots + \gamma_1 z\right] \quad (51)$$
if and only if $f(n,k)$ satisfies the $N$th-order linear time-varying difference equation
$$f(n+N,k) + a_{N-1}(n)\, f(n+N-1,k) + \cdots + a_0(n)\, f(n,k) = 0, \quad n \geq k. \quad (52)$$
Proof. Note that in Equation (51), we are writing the coefficients of the $z^i$ on the left. Suppose $f(n,k)$ satisfies Equation (52). Then, taking the transform of Equation (52) and using the transform pair (46) and the left-shift property given by (23) results in
$$\left[z^N + a_{N-1}z^{N-1} + \cdots + a_0\right]F(z) = \gamma_N z^N + \gamma_{N-1}z^{N-1} + \cdots + \gamma_1 z, \quad (53)$$
where the $\gamma_i$ are combinations of the initial values $f_0, f_1, \ldots, f_{N-1}$. Then, solving Equation (53) for $F(z)$ yields the left-fraction form
$$F(z) = \left[z^N + a_{N-1}z^{N-1} + \cdots + a_0\right]^{-1}\left[\gamma_N z^N + \cdots + \gamma_1 z\right]. \quad (54)$$
Conversely, suppose that the transform $F(z)$ of the function $f(n,k)$ is given by Equation (51). Multiplying both sides of Equation (51) on the left by $z^N + a_{N-1}z^{N-1} + \cdots + a_0$ yields
$$\left[z^N + a_{N-1}z^{N-1} + \cdots + a_0\right]F(z) = \gamma_N z^N + \gamma_{N-1}z^{N-1} + \cdots + \gamma_1 z. \quad (55)$$
By the transform pairs (23) and (46), the left side of Equation (55) is the transform expression for the left side of Equation (52) together with polynomial terms in $z$ determined by the initial values. Since the right side of Equation (55) contains no negative powers of $z$, the coefficient of $z^{-i}$ must vanish for every $i \geq 0$, and thus Equation (52) is verified. □
The properties of the VIT transform which were derived in this section are given in Table 1, and Table 2 contains a list of basic transform pairs. Various additional transform pairs are computed in the next section by using scaling of $z^{-i}$ by time functions.
4. Scaling of $z^{-i}$ by Time Functions
In the VIT transform domain, it is possible to carry out scaling of $z^{-i}$ by time functions. This results in transform pairs for a large class of time functions, including sinusoids with general time-varying amplitudes and frequencies. The development is given in terms of a semilinear transformation from $A$ into $A$, where as before, $A$ consists of all functions from $\mathbb{Z}$ into $\mathbb{R}$ or $\mathbb{C}$.
Given a function $a \in A$, let $\psi_a$ denote the mapping from $A$ into $A$ defined by $\psi_a(x) = a(\sigma x)$, where $\sigma$ is the left shift operator on $A$. Hence, $\psi_a(x)$ is the element of $A$ equal to $a(k)x(k+1)$. In the mathematics literature [12], $\psi_a$ is said to be a semilinear transformation with respect to $\sigma$. This type of operator was utilized in [10] in the state-space theory of linear time-varying discrete-time systems.
The $i$-fold composition of the operator $\psi_a$ is given by
$$\psi_a^i(x) = a\,(\sigma a)(\sigma^2 a)\cdots(\sigma^{i-1}a)\,(\sigma^i x), \quad i \geq 1, \quad (56)$$
and when $i = 0$, $\psi_a^0(x) = x$. Evaluating Equation (56) at $x = 1$ gives
$$\psi_a^i(1)(k) = \prod_{m=k}^{k+i-1} a(m). \quad (57)$$
Note that when $a(k) = c$ for all $k$ and $1$ is the constant function $1(k) = 1$, then $\psi_a^i(1) = c^i$, and thus $\psi_a^i(1)$ is a time-varying version of the power function. Then, we have the following result.
Proposition 2. Suppose that the two-variable function $f(n,k)$ satisfies the first-order recursion
$$f(n+1,k) = a(n)\, f(n,k), \quad n \geq k, \quad (58)$$
with initial value $f(k,k) = f_0(k)$ at initial time $k$. Then,
$$f_i = \psi_a^i(1)\, f_0, \quad i \geq 0. \quad (59)$$
Proof. Multiplying Equation (57) by $f_0(k)$ yields
$$\left[\psi_a^i(1)\, f_0\right](k) = \left[\prod_{m=k}^{k+i-1} a(m)\right] f_0(k). \quad (60)$$
Rearranging the factors in Equation (60) and comparing with the solution given by Equation (13) evaluated at $n = k+i$ verifies that $f_i$ is given by Equation (59). □
Using (15), we have the transform pair
$$\psi_a^{n-k}(1)(k)\, f_0(k) \ \longleftrightarrow\ (1 - z^{-1}a)^{-1} f_0. \quad (61)$$
This is the transform pair for the general form of a time function whose transform is a first-order left polynomial fraction, with the time function expressed in terms of the semilinear transformation $\psi_a$. We shall now define scaling in terms of $\psi_a$.
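The following is a small sketch (ours, not the paper's) of the product form (57): composing $\psi_a(x)(k) = a(k)x(k+1)$ with itself $i$ times and evaluating at the constant function $1$ reproduces the product $a(k)\cdots a(k+i-1)$. The helper names are ours.

```python
def psi(a, x):
    """psi_a(x)(k) = a(k) x(k+1)."""
    return lambda k: a(k) * x(k + 1)

def psi_pow_one(a, i):
    """psi_a^i(1) computed by i-fold composition."""
    x = lambda k: 1.0
    for _ in range(i):
        x = psi(a, x)
    return x

a = lambda k: k + 0.5
i, k = 3, 2
lhs = psi_pow_one(a, i)(k)
rhs = a(2) * a(3) * a(4)        # prod_{m=k}^{k+i-1} a(m) from Equation (57)
print(abs(lhs - rhs) < 1e-12)   # True
```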
Given the time function $f(n,k)$ with VIT transform
$$F(z) = \sum_{i=0}^{\infty} z^{-i} f_i, \quad (62)$$
we can scale $z^{-i}$ in Equation (62) by replacing $z^{-i}$ with $z^{-i}\psi_a^i(1)$, where $\psi_a^i(1)$ is the time function defined by Equation (57) with $a \in A$. The resulting VIT transform is given by
$$\sum_{i=0}^{\infty} z^{-i}\, \psi_a^i(1)\, f_i, \quad (63)$$
which will be denoted by $F_a(z)$. We formalize this construction as follows.
Definition 2. Given $a \in A$ and the VIT transform $F(z) = \sum_{i=0}^{\infty} z^{-i} f_i$, the time-function scaled transform $F_a(z)$ is the power series defined by
$$F_a(z) = \sum_{i=0}^{\infty} z^{-i}\, \psi_a^i(1)\, f_i. \quad (64)$$
We have the following result on the inverse transform of the scaled transform given by (63).
Proposition 3. Given $f(n,k)$ with transform $F(z)$, the inverse VIT transform of the scaled transform $F_a(z)$ is equal to $\psi_a^{n-k}(1)(k)\, f(n,k)$.
Proof. The result follows directly from the definition of the VIT transform applied to the function $\psi_a^{n-k}(1)(k)\, f(n,k)$. □
By Proposition 3, scaling of $F(z)$ by replacing $z^{-i}$ with $z^{-i}\psi_a^i(1)$ corresponds to the multiplication of $f(n,k)$ by $\psi_a^{n-k}(1)(k)$ in the time domain. This results in the transform pair
$$\psi_a^{n-k}(1)(k)\, f(n,k) \ \longleftrightarrow\ F_a(z). \quad (65)$$
The transform pair (65) is the time-varying version of the transform pair (30). In fact, when $a(k) = c$ for all $k$, $\psi_a^i(1) = c^i$, and (65) reduces to (30).
Given $b \in A$, by right $A$-linearity of the VIT transform operation, we can multiply (65) on the right by $b$, which results in the transform pair
$$\psi_a^{n-k}(1)(k)\, f(n,k)\, b(k) \ \longleftrightarrow\ F_a(z)\, b. \quad (66)$$
Note that the time function $\psi_a^{n-k}(1)(k)$ in (66) satisfies the difference equation $w(n+1,k) = a(n)w(n,k)$ with initial value $w(k,k) = 1$. Since $a$ and $b$ in (66) are arbitrary functions from $\mathbb{Z}$ into $\mathbb{R}$ or $\mathbb{C}$, a large number of transform pairs can be generated from (66). As shown now, by taking $a$ to be a complex exponential function, this result can be used to determine the transform of functions multiplied by a sinusoid with arbitrary time-varying frequency $\omega(n)$.
Let $a(n) = e^{j\omega(n)}$, where again $j = \sqrt{-1}$ and $\omega(n)$ is an arbitrary real-valued frequency function. Then $\psi_a^{n-k}(1)(k) = e^{j\theta(n,k)}$, where $\theta(n,k) = \sum_{m=k}^{n-1}\omega(m)$. By Proposition 2, $e^{j\theta(n,k)}$ satisfies the first-order recursion $f(n+1,k) = e^{j\omega(n)}f(n,k)$ with $f(k,k) = 1$. Now, given $f(n,k)$, by Euler's formula we have
$$\cos(\theta(n,k))\, f(n,k) = \frac{1}{2}\left[\psi_a^{n-k}(1)(k) + \psi_{\bar a}^{n-k}(1)(k)\right] f(n,k), \quad (67)$$
where $\bar a$ is the complex conjugate of $a$. Then, taking the transform of the right side of Equation (67) and using (66), we have the transform pair
$$\cos(\theta(n,k))\, f(n,k) \ \longleftrightarrow\ \frac{1}{2}\left[F_a(z) + F_{\bar a}(z)\right], \quad (68)$$
where $F_{\bar a}(z)$ is the transform $F(z)$ scaled by the complex conjugate $\bar a$ of $a$. Similarly, we have the following transform pair for multiplication by $\sin(\theta(n,k))$:
$$\sin(\theta(n,k))\, f(n,k) \ \longleftrightarrow\ \frac{1}{2j}\left[F_a(z) - F_{\bar a}(z)\right]. \quad (69)$$
The application of the transform pairs (65) and (68) is illustrated below in the case when $F(z)$ is a left polynomial fraction.
Suppose that $F(z) = d(z)^{-1}\nu(z)$, where $d(z)$ and $\nu(z)$ are elements of the skew polynomial ring $A[z]$. With $N$ equal to the degree of $d(z)$, the degree of $\nu(z)$ must be less than or equal to $N$, since $F(z)$ is a power series in $z^{-1}$. Then,
$$F(z) = d(z)^{-1}\nu(z) = \left[z^{-N} d(z)\right]^{-1}\left[z^{-N}\nu(z)\right], \quad (70)$$
where the elements comprising the right side of Equation (70) are polynomials in $z^{-1}$. Hence, the transform $F(z)$ can be written as a left fraction consisting of polynomials in $z^{-1}$.
Theorem 2. Suppose that $F(z) = d(z)^{-1}\nu(z)$, where $d(z) = \sum_{i=0}^{N} z^{-i}d_i$ and $\nu(z) = \sum_{i=0}^{N} z^{-i}\nu_i$ are polynomials in $z^{-1}$ with right coefficients $d_i, \nu_i \in A$. Given $a \in A$, let $d_a(z)$ and $\nu_a(z)$ denote the time-function scaled polynomials defined by $d_a(z) = \sum_{i=0}^{N} z^{-i}\psi_a^i(1)d_i$, $\nu_a(z) = \sum_{i=0}^{N} z^{-i}\psi_a^i(1)\nu_i$. Then
$$F_a(z) = d_a(z)^{-1}\,\nu_a(z). \quad (71)$$
Proof. By definition of $F(z)$,
$$d(z)\,F(z) = \nu(z), \quad (72)$$
where the multiplication $d(z)F(z)$ is carried out in the ring $A((z^{-1}))$. Define the mapping $\Lambda_a$ on $A((z^{-1}))$ by
$$\Lambda_a\!\left(\sum_i z^{-i} c_i\right) = \sum_i z^{-i}\, \psi_a^i(1)\, c_i. \quad (73)$$
Then, the operation of scaling of $z^{-i}$ by the time function $\psi_a^i(1)$ is equivalent to applying the mapping $\Lambda_a$. Applying $\Lambda_a$ to both sides of Equation (72) gives $\Lambda_a(d(z)F(z)) = \nu_a(z)$. It will be shown that $\Lambda_a$ is a multiplicative mapping, and thus $\Lambda_a(d(z)F(z)) = d_a(z)F_a(z)$, which proves that $d_a(z)F_a(z) = \nu_a(z)$, and (71) is verified: For any integers $i, l \geq 0$, using Equation (57) yields
$$\psi_a^{i+l}(1) = \left[\sigma^l \psi_a^i(1)\right]\psi_a^l(1).$$
Hence, for any $b, c \in A$,
$$\Lambda_a\big((z^{-i}b)(z^{-l}c)\big) = z^{-(i+l)}\,\psi_a^{i+l}(1)\,(\sigma^l b)\,c = \big(z^{-i}\psi_a^i(1)b\big)\big(z^{-l}\psi_a^l(1)c\big) = \Lambda_a(z^{-i}b)\,\Lambda_a(z^{-l}c),$$
and thus $\Lambda_a$ is multiplicative. □
Combining Proposition 3 and Theorem 2 yields the following result.
Theorem 3. Suppose that $f(n,k)$ has VIT transform $F(z) = d(z)^{-1}\nu(z)$, where $d(z) = \sum_{i=0}^{N} z^{-i}d_i$ and $\nu(z) = \sum_{i=0}^{N} z^{-i}\nu_i$. Then, for any $a \in A$, the transform of $\psi_a^{n-k}(1)(k)\,f(n,k)$ is given by $F_a(z) = d_a(z)^{-1}\nu_a(z)$.
As illustrated now, Theorem 3 can be used to generate left polynomial fraction transforms from a given polynomial fraction such as the ones in Table 2: Let $f(n,k) = \cos(\omega(n-k))$, $n \geq k$, and given $a \in A$, let $w(n,k) = \psi_a^{n-k}(1)(k)\cos(\omega(n-k))$, where $\omega$ is a positive real number. From (35), the transform $F(z)$ of $\cos(\omega(n-k))$ is equal to
$$F(z) = \left(1 - 2\cos\omega\, z^{-1} + z^{-2}\right)^{-1}\left(1 - \cos\omega\, z^{-1}\right). \quad (74)$$
Rewriting the right side of (74) as polynomials in $z^{-1}$ with right coefficients gives
$$F(z) = \left[1 - z^{-1}(2\cos\omega) + z^{-2}\right]^{-1}\left[1 - z^{-1}\cos\omega\right]. \quad (75)$$
Then, scaling $z^{-i}$ by $\psi_a^i(1)$ in Equation (75) and using Theorem 3, and noting that $\psi_a(1) = a$ and $\psi_a^2(1) = a(\sigma a)$, we have that the transform of $w(n,k)$ is equal to
$$W(z) = \left[1 - z^{-1}\,a\,(2\cos\omega) + z^{-2}\,a(\sigma a)\right]^{-1}\left[1 - z^{-1}\,a\cos\omega\right]. \quad (76)$$
Rewriting the transform (76) in terms of powers of $z$ with coefficients moved to the left of the $z^i$, and applying Theorem 1, we have that $w(n,k)$ satisfies the second-order difference equation
$$w(n+2,k) - 2a(n+1)\cos\omega\; w(n+1,k) + a(n)\,a(n+1)\; w(n,k) = 0, \quad n \geq k. \quad (77)$$
Note that if $a(n) = c$ for all $n$, then $w(n,k) = c^{n-k}\cos(\omega(n-k))$, and Equation (77) reduces to the well-known recursion for the exponentially-weighted cosine function.
The difference Equation (77) is the recursion for the cosine function $w(n,k) = g(n)\cos(\omega(n-k))$ with a general weighting function $g$, where the only constraint on $g$ is that it satisfies the first-order recursion $g(n+1) = a(n)g(n)$. As an application of this result, let the weighting $g$ be equal to the Gaussian defined by Equation (16). Then, $w(n) = g(n)\cos(\omega n)$ is the Gaussian-windowed cosine function, which is equal to the real part of the Gabor-Morlet wavelet [8]. By Equation (17), $g(n+1) = a(n)g(n)$ with
$$a(n) = e^{-(2n+1)/(2\lambda^2)}.$$
Thus, inserting $a(n)$ into Equation (77), we have that the wavelet $w(n)$ satisfies the second-order recursion
$$w(n+2) = 2\,e^{-(2n+3)/(2\lambda^2)}\cos\omega\; w(n+1) - e^{-(4n+4)/(2\lambda^2)}\; w(n).$$
This result can be derived in the time domain by attempting to express $w(n+2)$ in terms of $w(n+1)$ and $w(n)$, but as seen here, it is an immediate consequence of Theorems 1 and 3.
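The recursion can be verified numerically. The following sketch (ours, not from the paper) checks the second-order recursion for the Gaussian-windowed cosine with sample values of $\lambda$ and $\omega$; the ratio $a(n) = g(n+1)/g(n)$ is used exactly as above.

```python
import math

lam, omega = 10.0, 0.3
g = lambda n: math.exp(-n * n / (2 * lam * lam))        # Gaussian window, Eq. (16)
a = lambda n: math.exp(-(2 * n + 1) / (2 * lam * lam))  # a(n) = g(n+1)/g(n), Eq. (17)
w = lambda n: g(n) * math.cos(omega * n)                # real part of the wavelet

# Check: w(n+2) = 2 a(n+1) cos(omega) w(n+1) - a(n) a(n+1) w(n)
ok = all(
    abs(w(n + 2) - (2 * a(n + 1) * math.cos(omega) * w(n + 1)
                    - a(n) * a(n + 1) * w(n))) < 1e-12
    for n in range(-20, 20)
)
print(ok)  # True
```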
5. Combining and Decomposing Polynomial Fractions
In the first part of this section, it is shown that left polynomial fractions can be combined using the extended right Euclidean algorithm. The algorithm is carried out with the coefficients of the polynomials belonging to the quotient field $Q$ of the ring $A$. We begin with the definition of $Q$ and then give the extended right Euclidean algorithm for elements belonging to the skew polynomial ring $Q[z]$.
5.1. Extended Euclidean Algorithm
The quotient field $Q$ of $A$ consists of all formal ratios $a/b$ of elements $a, b \in A$, $b \neq 0$. If $b(k) \neq 0$ for all $k \in \mathbb{Z}$, the ratio $a/b$ defines a function from $\mathbb{Z}$ into $\mathbb{R}$ or $\mathbb{C}$, and thus it is an element of $A$. If $b$ has zero values, then when $a/b$ is viewed as a function on $\mathbb{Z}$, it will have singularities. That is, $a(k)/b(k)$ is not defined for any values of $k$ for which $b(k) = 0$. With multiplication and addition defined by
$$\frac{a}{b}\cdot\frac{c}{d} = \frac{ac}{bd}, \qquad \frac{a}{b} + \frac{c}{d} = \frac{ad + cb}{bd},$$
$Q$ is a field. The left shift operator extended to $Q$ is defined by $\sigma(a/b) = (\sigma a)/(\sigma b)$.
The skew polynomial ring $Q[z]$ consists of all polynomials in $z$ with coefficients in $Q$, and with the noncommutative multiplication $za = (\sigma a)z$, $a \in Q$. Since $Q$ is a field, it follows from the results in [9] that $Q[z]$ is a right Euclidean ring, and since $\sigma$ is surjective, it is also a left Euclidean ring. As a result, the extended left and right Euclidean algorithms can be carried out in the ring $Q[z]$. A description of the algorithms is given in [13] for a general skew polynomial ring (see also [14]). For completeness, the extended right Euclidean algorithm is given next.
Let $r_0, r_1 \in Q[z]$ with $\deg r_1 \leq \deg r_0$, where "deg" denotes degree. Dividing $r_1$ into $r_0$ on the right in the ring $Q[z]$ gives $r_0 = q_1 r_1 + r_2$, where the remainder $r_2$ is equal to zero or $\deg r_2 < \deg r_1$. The division process is repeated by dividing $r_2$ into $r_1$, which gives $r_1 = q_2 r_2 + r_3$ with $r_3 = 0$ or $\deg r_3 < \deg r_2$. The process is continued by dividing $r_3$ into $r_2$, etc., until $r_{T+1}$ is equal to zero for some integer $T$. It is important to note that even though $r_0$ and $r_1$ may be polynomials in $z$ with coefficients belonging to $A$, in general the remainders $r_i$ are elements of $Q[z]$.
Given the sequence of divisions
$$r_{i-1} = q_i\, r_i + r_{i+1}, \quad i = 1, 2, \ldots, T, \qquad r_{T+1} = 0, \quad (78)$$
we then have the following known result ([13,14]).
Proposition 4. With the $r_i$ and $q_i$ given by Equation (78), define $s_{i+1} = s_{i-1} - q_i s_i$ and $t_{i+1} = t_{i-1} - q_i t_i$ for $i = 1, 2, \ldots, T$, where $s_0 = 1$, $s_1 = 0$, $t_0 = 0$, $t_1 = 1$. Then,
$$s_i\, r_0 + t_i\, r_1 = r_i, \quad 0 \leq i \leq T+1. \quad (79)$$
Proof. When $i = 0$, Equation (79) becomes $r_0 = r_0$, and when $i = 1$, it becomes $r_1 = r_1$, both of which hold. When $i = 2$, $s_2 = 1$ and $t_2 = -q_1$, and
$$s_2 r_0 + t_2 r_1 = r_0 - q_1 r_1. \quad (80)$$
Setting $i = 1$ in Equation (78) gives
$$r_2 = r_0 - q_1 r_1. \quad (81)$$
The right sides of Equations (80) and (81) are equal, and thus Equation (79) is verified for $i = 2$. For any $i \geq 1$, $s_{i+1} = s_{i-1} - q_i s_i$ and $t_{i+1} = t_{i-1} - q_i t_i$. Hence,
$$s_{i+1} r_0 + t_{i+1} r_1 = (s_{i-1} r_0 + t_{i-1} r_1) - q_i (s_i r_0 + t_i r_1). \quad (82)$$
Suppose Equation (79) holds for $i-1$ and $i$. Then the right side of Equation (82) is equal to $r_{i-1} - q_i r_i$, which by Equation (78) is equal to $r_{i+1}$. Therefore, $s_{i+1}r_0 + t_{i+1}r_1 = r_{i+1}$, and by the second principle of mathematical induction, Equation (79) is true for all $0 \leq i \leq T+1$. □
Since $r_{T+1} = 0$, by Proposition 4,
$$s_{T+1}\, r_0 = -t_{T+1}\, r_1. \quad (83)$$
By Equation (83), both $r_0$ and $r_1$ divide $s_{T+1}r_0$ on the right, and thus the polynomial $s_{T+1}r_0 = -t_{T+1}r_1$ is a common right multiple of $r_0$ and $r_1$. As a consequence of the properties of the Euclidean algorithm, it is the least common right multiple (lcrm) of $r_0$ and $r_1$. The lcrm is unique up to a multiplicative factor in $Q$.
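The division step underlying the algorithm can be sketched in code. The following is our illustration (not the paper's algorithm statement) of right division $f = qg + r$ in $Q[z]$ with the commutation rule $za = (\sigma a)z$; a polynomial is a list of coefficient callables, coefficients on the left, and pointwise quotients can hit zeros, which are exactly the singularities discussed in the text. All names are ours.

```python
ZERO = lambda k: 0.0

def mono_mul(c, s, g):
    """(c z^s) * g, using z^s g_j = (sigma^s g_j) z^s: coefficient at s+j is c(k) g_j(k+s)."""
    out = [ZERO] * (s + len(g))
    for j, b in enumerate(g):
        out[s + j] = (lambda c, b, s: lambda k: c(k) * b(k + s))(c, b, s)
    return out

def sub(p, q):
    n = max(len(p), len(q))
    p = list(p) + [ZERO] * (n - len(p))
    q = list(q) + [ZERO] * (n - len(q))
    return [(lambda a, b: lambda k: a(k) - b(k))(p[j], q[j]) for j in range(n)]

def right_divide(f, g):
    """Return (q, r) with f = q*g + r and deg r < deg g."""
    n = len(g) - 1
    q = [ZERO] * max(len(f) - n, 0)
    r = list(f)
    while len(r) - 1 >= n:
        m, s = len(r) - 1, len(r) - 1 - n
        c = (lambda fm, gn, s: lambda k: fm(k) / gn(k + s))(r[m], g[n], s)
        q[s] = c
        r = sub(r, mono_mul(c, s, g))[:m]   # leading coefficient cancels exactly
    return q, r

# Check: divide f = (z - b)(z - a) by g = z - a; quotient z - b, remainder 0.
a = lambda k: float(k)
b = lambda k: float(k * k)
g = [lambda k: -a(k), lambda k: 1.0]                        # z - a
f = sub(mono_mul(lambda k: 1.0, 1, g), mono_mul(b, 0, g))   # z*g - b*g = (z - b)(z - a)
q, r = right_divide(f, g)
print(q[0](5), q[1](5))   # -25.0 1.0   (q = z - b)
print(r[0](5))            # 0.0
```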
5.2. Sum of Two Polynomial Fractions
Suppose that the discrete-time functions $x_1(n)$ and $x_2(n)$ (which will be denoted by $x_1$ and $x_2$, respectively) satisfy the following linear time-varying difference equations
$$x_1(n+N_1) + \sum_{i=0}^{N_1 - 1} a_{1i}(n)\, x_1(n+i) = 0, \quad (84)$$
$$x_2(n+N_2) + \sum_{i=0}^{N_2 - 1} a_{2i}(n)\, x_2(n+i) = 0. \quad (85)$$
Let $x$ denote the sum $x = x_1 + x_2$. It follows from the VIT transform approach that $x$ also satisfies a recursion over $Q$. To show this, let $X_1(z)$ and $X_2(z)$ denote the transforms of $x_1$ and $x_2$, respectively. Using Theorem 1, we have $X_1(z) = d_1(z)^{-1}\nu_1(z)$ and $X_2(z) = d_2(z)^{-1}\nu_2(z)$, where $d_1(z), \nu_1(z), d_2(z)$, and $\nu_2(z)$ are polynomials belonging to $A[z]$. Then, by linearity of the transform operation, the transform $X(z)$ of $x$ is equal to
$$X(z) = d_1(z)^{-1}\nu_1(z) + d_2(z)^{-1}\nu_2(z). \quad (86)$$
Applying the extended right Euclidean algorithm to $d_1(z)$ and $d_2(z)$ results in the lcrm $m(z) = m_1(z)d_1(z) = m_2(z)d_2(z)$, where in general $m_1(z)$ and $m_2(z)$ are polynomials with coefficients in $Q$. Then, multiplying both sides of Equation (86) on the left by $m(z)$ gives
$$m(z)\,X(z) = m_1(z)\,\nu_1(z) + m_2(z)\,\nu_2(z) = \mu(z),$$
and thus, the left polynomial fraction form of $X(z)$ is
$$X(z) = m(z)^{-1}\,\mu(z). \quad (87)$$
Suppose that $\deg m(z) = N$. Then, by Theorem 1, the inverse transform $x$ of $X(z)$ satisfies the $N$th-order linear time-varying difference equation
$$x(n+N) + \sum_{i=0}^{N-1} \beta_i(n)\, x(n+i) = 0. \quad (88)$$
Since $m(z) \in Q[z]$, the coefficients $\beta_i$ in Equation (88) are elements of $Q$ in general, and thus (88) is a linear recursion over $Q$. We can rewrite Equation (88) as a recursion over $A$ as follows: Suppose that the coefficients are written over a common denominator, $\beta_i = b_i/b$, where $b, b_i \in A$. Then $b(n)\beta_i(n) = b_i(n)$ for all $n$ such that $b(n) \neq 0$, and multiplying both sides of (88) by $b(n)$ results in the following recursion over $A$:
$$b(n)\, x(n+N) + \sum_{i=0}^{N-1} b_i(n)\, x(n+i) = 0. \quad (89)$$
Note that if $b(n) = 0$ for some value of $n$, then Equation (88) is singular at that value, and $x(n+N)$ cannot be determined from either Equation (88) or (89). When $b(n) = 0$, $x(n+N)$ can be computed using the relationship $x = x_1 + x_2$, where $x_1$ and $x_2$ are given by the recursions (84) and (85).
The possible zero values of $b(n)$ in the recursion Equation (89) are a result of common factors appearing in $d_1(z)$ and $d_2(z)$ when the coefficients are evaluated at particular integer values. To see an example of this, suppose that $x_1(n+1) = a_1(n)x_1(n)$ and $x_2(n+1) = a_2(n)x_2(n)$ for $n \geq k$ with initial values $x_{1,0}(k) = x_1(k)$ and $x_{2,0}(k) = x_2(k)$, and where $a_1, a_2 \in A$. Taking the transform using the transform pair (15) yields
$$X(z) = (1 - z^{-1}a_1)^{-1}x_{1,0} + (1 - z^{-1}a_2)^{-1}x_{2,0} = (z - a_1)^{-1}z\,x_{1,0} + (z - a_2)^{-1}z\,x_{2,0}. \quad (90)$$
Applying the extended right Euclidean algorithm to $z - a_1$ and $z - a_2$ results in the lcrm
$$m(z) = \left(z - \frac{\sigma(a_1 - a_2)}{a_1 - a_2}\,a_2\right)(z - a_1) = \left(z - \frac{\sigma(a_1 - a_2)}{a_1 - a_2}\,a_1\right)(z - a_2). \quad (91)$$
Multiplying both sides of Equation (90) on the left by the lcrm in Equation (91), we have
$$m(z)\,X(z) = \left(z - \frac{\sigma(a_1-a_2)}{a_1-a_2}\,a_2\right)z\,x_{1,0} + \left(z - \frac{\sigma(a_1-a_2)}{a_1-a_2}\,a_1\right)z\,x_{2,0}.$$
This gives the left polynomial fraction form of the VIT transform of $x$. Using the definition of multiplication in $Q[z]$ to expand $m(z)$, we obtain
$$m(z) = z^2 - \left[\sigma a_1 + \frac{\sigma(a_1-a_2)}{a_1-a_2}\,a_2\right]z + \frac{\sigma(a_1-a_2)}{a_1-a_2}\,a_2\,a_1.$$
Hence, $x$ satisfies the following recursion over $Q$:
$$x(n+2) - \left[a_1(n+1) + \frac{a_1(n+1)-a_2(n+1)}{a_1(n)-a_2(n)}\,a_2(n)\right]x(n+1) + \frac{a_1(n+1)-a_2(n+1)}{a_1(n)-a_2(n)}\,a_1(n)\,a_2(n)\,x(n) = 0. \quad (92)$$
In this example, $b(n) = a_1(n) - a_2(n)$. Then, multiplying Equation (92) by $b(n)$ results in the following recursion over $A$:
$$[a_1(n)-a_2(n)]\,x(n+2) - \big[a_1(n+1)(a_1(n)-a_2(n)) + (a_1(n+1)-a_2(n+1))\,a_2(n)\big]\,x(n+1) + (a_1(n+1)-a_2(n+1))\,a_1(n)\,a_2(n)\,x(n) = 0. \quad (93)$$
Clearly, if $a_1(n) = a_2(n)$ for some integer $n$, $x(n+2)$ cannot be computed from Equation (93). However, $x(n+2)$ can be computed from
$$x(n+2) = x_1(n+2) + x_2(n+2) = a_1(n+1)\,x_1(n+1) + a_2(n+1)\,x_2(n+1).$$
Note that when $a_1(n) = a_2(n)$, the factors $z - a_1(n)$ and $z - a_2(n)$ are identical, so they have a common factor when viewed as polynomials in $z$ with the coefficients evaluated at the time $n$.
In Section 6, it is shown that for a linear time-varying finite-dimensional system, the VIT transform of the unit-pulse response function is a left polynomial fraction (the transfer function). Hence, by the results given here, the transfer function of a parallel connection will in general consist of polynomials over $Q$.
As an application of summing fractions, we shall determine the transform of $\cos(\theta(n,k))$ with arbitrary frequency function $\omega(n)$, where $\theta(n,k) = \sum_{m=k}^{n-1}\omega(m)$. Using the transform pair (68) with $f(n,k) = 1$ for $n \geq k$, since $F(z) = (1 - z^{-1})^{-1}$, we have the transform pair
$$\cos(\theta(n,k)) \ \longleftrightarrow\ \frac{1}{2}\left[(1 - z^{-1}a)^{-1} + (1 - z^{-1}\bar a)^{-1}\right], \quad (94)$$
where $a(n) = e^{j\omega(n)}$. Applying the extended right Euclidean algorithm to $z - a$ and $z - \bar a$ results in the lcrm
$$m(z) = \left(z - \frac{\sigma(a - \bar a)}{a - \bar a}\,\bar a\right)(z - a) = \left(z - \frac{\sigma(a - \bar a)}{a - \bar a}\,a\right)(z - \bar a). \quad (95)$$
Now, let $X(z)$ denote the VIT transform of $\cos(\theta(n,k))$. Then, by the transform pair (94),
$$X(z) = \frac{1}{2}\left[(z - a)^{-1} + (z - \bar a)^{-1}\right] z. \quad (96)$$
Multiplying Equation (96) on the left by $m(z)$ results in
$$m(z)\,X(z) = \frac{1}{2}\left[\left(z - \frac{\sigma(a-\bar a)}{a - \bar a}\,\bar a\right) + \left(z - \frac{\sigma(a-\bar a)}{a - \bar a}\,a\right)\right] z. \quad (97)$$
This yields the left polynomial fraction form of the VIT transform of $\cos(\theta(n,k))$, where the frequency $\omega(n)$ is an arbitrary real-valued function of $n$.
It is possible to rewrite Equation (97) in terms of polynomials with real-valued coefficient functions: Beginning with the denominator, using the definition of multiplication in $Q[z]$, we have
$$\left(z - \frac{\sigma(a-\bar a)}{a-\bar a}\,\bar a\right)(z - a) = z^2 - \left[\sigma a + \frac{\sigma(a-\bar a)}{a-\bar a}\,\bar a\right] z + \frac{\sigma(a-\bar a)}{a-\bar a}\,\bar a\, a. \quad (98)$$
Here, we are using the fact that $za = (\sigma a)z$. By (95), we also have
$$\left(z - \frac{\sigma(a-\bar a)}{a-\bar a}\,a\right)(z - \bar a) = z^2 - \left[\sigma\bar a + \frac{\sigma(a-\bar a)}{a-\bar a}\,a\right] z + \frac{\sigma(a-\bar a)}{a-\bar a}\,a\,\bar a. \quad (99)$$
Adding both sides of Equations (98) and (99) gives
$$2m(z) = 2z^2 - \left[\sigma(a + \bar a) + \frac{\sigma(a-\bar a)}{a-\bar a}\,(a + \bar a)\right] z + 2\,\frac{\sigma(a-\bar a)}{a-\bar a}\,a\bar a. \quad (100)$$
Using $a + \bar a = 2\cos\omega$, $a - \bar a = 2j\sin\omega$, and $a\bar a = 1$ in the right side of Equation (100), and combining terms, results in
$$m(z) = z^2 - \frac{\sin(\omega + \sigma\omega)}{\sin\omega}\, z + \frac{\sin(\sigma\omega)}{\sin\omega}. \quad (101)$$
Since $a = e^{j\omega}$ with $\omega$ real-valued, the coefficients of $m(z)$ in (101) are real-valued functions of $k$. Hence, (101) is the real form of the denominator polynomial of $X(z)$. The derivation of the real form of the numerator is omitted.
Let $\beta = \sin(\sigma\omega)/\sin\omega$. Then, applying Theorem 1, we have that the cosine function $x(n) = \cos(\theta(n,k))$ with time-varying frequency $\omega(n)$ satisfies the second-order recursion
$$x(n+2) - \frac{\sin(\omega(n) + \omega(n+1))}{\sin\omega(n)}\, x(n+1) + \beta(n)\, x(n) = 0, \quad (102)$$
where $\beta(n) = \sin\omega(n+1)/\sin\omega(n)$. Note that Equation (102) is the recursion for $\cos(\theta(n,k))$ for any frequency function $\omega(n)$, including the linear frequency chirp $\omega(n) = \omega_0 + \mu n$ and the exponential chirp $\omega(n) = \omega_0 c^n$, where $c$ is a positive real number. Moreover, note that when the frequency function $\omega(n)$ is equal to a constant $\omega$, $\beta(n) = 1$ and $\sin(2\omega)/\sin\omega = 2\cos\omega$, and Equation (102) reduces to the recursion for the cosine function $\cos(\omega(n-k))$.
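The recursion can be checked numerically. The following sketch (ours, written from the phase-increment form of the coefficients above) verifies that $x(n) = \cos\theta(n)$ satisfies the second-order time-varying recursion for a sample linear frequency chirp; the parameter values are arbitrary, and $\sin\omega(n) \neq 0$ is assumed over the test range (the singular case noted in the text).

```python
import math

w0, mu = 0.3, 0.01
theta = lambda n: w0 * n + mu * n * n     # phase of a linear chirp
x = lambda n: math.cos(theta(n))

def step(n):
    d1 = theta(n + 1) - theta(n)          # phase increment omega(n)
    d2 = theta(n + 2) - theta(n + 1)      # phase increment omega(n+1)
    p = math.sin(d1 + d2) / math.sin(d1)  # coefficient of x(n+1)
    q = -math.sin(d2) / math.sin(d1)      # coefficient of x(n)
    return p * x(n + 1) + q * x(n)

print(all(abs(x(n + 2) - step(n)) < 1e-10 for n in range(0, 40)))  # True
```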
5.3. Fraction Decomposition
The decomposition of polynomial fractions with varying coefficients can be carried out in terms of an evaluation of polynomials with coefficients in $A$ or $Q$, which is defined as follows. Given $g \in A$ or $Q$, let $\psi_g$ denote the semilinear transformation from $Q$ into $Q$ defined by $\psi_g(x) = g(\sigma x)$. This is the extension from $A$ to $Q$ of the semilinear transformation defined in Section 4. Then, applying the notion of skew polynomial evaluation given in [15], we define the evaluation of the polynomial $d(z) = \sum_{i=0}^{N} d_i z^i$ at $\psi_g$ to be the element of $Q$ given by
$$d[\psi_g] = \sum_{i=0}^{N} d_i\, \psi_g^i(1). \quad (104)$$
In addition, let $\hat\psi_g$ denote the semilinear transformation on $Q$ defined by $\hat\psi_g(x) = g(\sigma^{-1}x)$. Then, the evaluation of $d(z)$ at $\hat\psi_g$ is given by
$$d[\hat\psi_g] = \sum_{i=0}^{N} \hat\psi_g^i(1)\,(\sigma^{-i} d_i). \quad (105)$$
We then have the following known result.
Proposition 5. Given $d(z) \in Q[z]$ and $g \in Q$, the remainder after dividing $z - g$ into $d(z)$ on the right is equal to $d[\psi_g]$, and the remainder after dividing $z - g$ into $d(z)$ on the left is equal to $d[\hat\psi_g]$.
Proof. The result on the remainder after division on the right follows from Lemma 2.4 in [15] by setting the derivation equal to zero. The second part of the proposition follows from Theorem 3.1 in [13] by setting the automorphism equal to $\sigma$. □
The concept of skew polynomial evaluation leads to the following decomposition result.
Theorem 4. Given a monic polynomial $d(z) \in Q[z]$ of degree $N$ and $g \in Q$, where $z - g$ does not divide $d(z)$ on the right, suppose that
$$d(z)\,\gamma = q(z)(z - g) + 1 \quad (106)$$
for some $\gamma \in Q$ and $q(z) \in Q[z]$ with $\deg q(z) = N - 1$. Then
$$d(z)^{-1}(z - g)^{-1} = \gamma\,(z - g)^{-1} - d(z)^{-1}\,q(z). \quad (107)$$
Proof. Suppose that the hypothesis of the theorem is satisfied, so that (106) is true. Dividing $z - g$ into $d(z)\gamma$ on the right gives
$$d(z)\gamma = q(z)(z - g) + r. \quad (108)$$
By Proposition 5, the remainder $r$ in Equation (108) is equal to the evaluation $\sum_{i=0}^{N} d_i\,\psi_g^i(\gamma)$. Further, $r \neq 0$, since $z - g$ does not divide $d(z)$ on the right. Multiplying both sides of (108) on the right by $(z - g)^{-1}$ and on the left by $d(z)^{-1}$, we have
$$\gamma\,(z-g)^{-1} = d(z)^{-1}q(z) + d(z)^{-1}\,r\,(z-g)^{-1}.$$
It follows from Equation (106) that $r = 1$, and solving the above relation for $d(z)^{-1}(z-g)^{-1}$ shows that Equation (107) is satisfied. □
There is a second decomposition of $d(z)^{-1}(z-g)^{-1}$, which is given next.
Corollary 1. Suppose that the hypothesis of Theorem 4 is satisfied so that (106) is true with $\gamma \neq 0$. Let $\hat g = (\sigma\gamma)\,g\,\gamma^{-1}$. Then,
$$d(z)^{-1}(z - g)^{-1} = (z - \hat g)^{-1}(\sigma\gamma) - d(z)^{-1}\,q(z), \quad (109)$$
where $\gamma$ and $q(z)$ are as in Theorem 4. Proof. Writing $\gamma(z-g)^{-1} = \left[(z-g)\gamma^{-1}\right]^{-1} = (z - \hat g)^{-1}(\sigma\gamma)$ and carrying out steps similar to those in the proof of Theorem 4 yields the result. □
Note that the decomposition in Equation (109) expresses the first-order term as a left polynomial fraction, whereas in (107) the first-order term carries the coefficient $\gamma$ on the left. Moreover, note that the decompositions (107) and (109) are identical when $g$ and the coefficients of $d(z)$ are constant functions, in which case $\hat g = g$ and $\sigma\gamma = \gamma$.
Corollary 2. Suppose that Equation (109) is true. Then, given $\nu(z) \in Q[z]$ with $\deg\nu(z) \leq \deg d(z)$,
$$d(z)^{-1}(z - g)^{-1}\,\nu(z) = (z - \hat g)^{-1}\rho + d(z)^{-1}\eta(z) \quad (110)$$
for some $\eta(z) \in Q[z]$ with $\deg\eta(z) < \deg d(z)$, where $\rho \in Q$.
Proof. Multiplying both sides of Equation (109) on the right by $\nu(z)$ gives
$$d(z)^{-1}(z-g)^{-1}\nu(z) = (z - \hat g)^{-1}(\sigma\gamma)\nu(z) - d(z)^{-1}q(z)\nu(z).$$
Dividing $z - \hat g$ into $(\sigma\gamma)\nu(z)$ on the left, we have
$$(\sigma\gamma)\,\nu(z) = (z - \hat g)\,w(z) + \rho, \quad (111)$$
where $w(z) \in Q[z]$ and $\rho \in Q$. By Proposition 5, the remainder $\rho$ in (111) is equal to the evaluation of the polynomial $(\sigma\gamma)\nu(z)$ at $\hat\psi_{\hat g}$. Hence,
$$(z - \hat g)^{-1}(\sigma\gamma)\nu(z) = w(z) + (z - \hat g)^{-1}\rho.$$
Now, since the left side of (110) is a strictly proper polynomial fraction, the polynomial part $w(z) - d(z)^{-1}q(z)\nu(z)$ must reduce to a left fraction of the form $d(z)^{-1}\eta(z)$ for some $\eta(z)$ with $\deg\eta(z) < \deg d(z)$, which verifies Equation (110). □
Corollary 2 is a generalization of the first step of the partial fraction expansion for rational functions with real coefficients to left polynomial fractions with variable coefficients. The decomposition process can be continued if the polynomial $d(z)$ in Equation (110) has left factors $z - g_1$ and $\zeta(z)$ with $\deg\zeta(z) = \deg d(z) - 1$. Note that if $g$ and the coefficients of $d(z)$ in Theorem 4 are constant functions, then $\gamma$ commutes with $z$, and thus (106) is satisfied with $\gamma = [d(g)]^{-1}$ and $q(z)$ equal to the corresponding quotient. In this case, $\gamma$ is the rational function $1/d(z)$ evaluated at $z = g$. If the coefficients of $\nu(z)$ are also constant functions, the coefficient of $(z - \hat g)^{-1}$ in Equation (110) is equal to the rational function $\nu(z)/d(z)$ evaluated at $z = g$.
In the case when $g$ and the coefficients of $d(z)$ are nonconstant functions, the computation of $\gamma$ and $q(z)$ in Equation (106) is considered in the next section, where the decomposition is used to determine the steady-state output response of a linear time-varying system or digital filter.
6. The VIT Transfer Function Representation
Consider the causal linear time-varying discrete-time system or digital filter given by the input/output relationship
$$y(n) = \sum_{m=-\infty}^{n} h(n,m)\, x(m), \quad (112)$$
where $h(n,m)$ is the unit-pulse response function, $x(n)$ is the input, and $y(n)$ is the output response resulting from $x(n)$ with zero initial energy (zero initial conditions) prior to the application of the input. Recall that $h(n,m)$ is the output response at time $n$ resulting from the unit pulse $\delta(n-m)$ applied at time $m$. Moreover, note that by causality, $h(n,m) = 0$ when $n < m$.
For each fixed integer $i \geq 0$, let $h_i$ denote the element of the ring $A$ defined by
$$h_i(k) = h(k+i, k), \quad k \in \mathbb{Z}.$$
The function $h_i$ is equal to the value of the unit-pulse response function $h(n,k)$ at the time point $k+i$, which is located $i$ steps after the initial time $k$. As first defined in [7], the transfer function $H(z)$ of the system given by Equation (112) is the element of the power series ring $A[[z^{-1}]]$ defined by
$$H(z) = \sum_{i=0}^{\infty} z^{-i}\, h_i. \quad (113)$$
From (113), we see that $H(z)$ is equal to the VIT transform of the unit-pulse response function $h(n,k)$.
The transfer function representation can be generated by taking the VIT transform of the input/output relationship in Equation (112) defined in terms of an arbitrary initial time $k$. To set this up, suppose that the input $x(n)$ is applied to the system at initial time $k$, so that $x(n) = 0$ for $n < k$. In general, $x(n)$ depends on the initial time $k$, so we shall write $x(n,k)$. Then, the output response $y$ resulting from $x(n,k)$ will also be a function of $n$ and $k$, and is given by
$$y(n,k) = \sum_{m=k}^{n} h(n,m)\, x(m,k). \quad (114)$$
Taking the VIT transform of both sides of Equation (114) and using Proposition 1, we have the following result.
Proposition 6. Let $Y(z)$, $H(z)$, $X(z)$ denote the VIT transforms of $y(n,k)$, $h(n,k)$, $x(n,k)$, respectively. Then,
$$Y(z) = H(z)\, X(z). \quad (115)$$
The relationship in Equation (115) is the VIT transfer function representation of the given system. Using Theorem 1, we have the following result on systems defined by a linear time-varying difference equation.
Proposition 7. The system transfer function $H(z)$ has the left polynomial fraction form $H(z) = d(z)^{-1}\nu(z)$, where $d(z) = z^N + a_{N-1}z^{N-1} + \cdots + a_0$ and $\nu(z) = b_M z^M + \cdots + b_1 z + b_0$, if and only if the system input $x(n,k)$ and system output $y(n,k)$ satisfy the linear time-varying difference equation
$$y(n+N,k) + \sum_{i=0}^{N-1} a_i(n)\, y(n+i,k) = \sum_{i=0}^{M} b_i(n)\, x(n+i,k). \quad (116)$$
By Proposition 7, a linear time-varying system is finite-dimensional if and only if its transfer function is a left polynomial fraction.
We shall apply the VIT transfer function framework to the problem of determining the steady-state response to the input
$$x(n) = \left[\prod_{m=k_0}^{n-1} a(m)\right] c, \quad n \geq k_0, \quad (117)$$
where $a \in A$ with $a(n) \neq 0$ for all $n$. Then, $c = x(k_0)$ is the initial value of $x$, and by definition of $\psi_a$, $x(n) = \psi_a^{n-k_0}(1)(k_0)\,c$. It is assumed that $x(n)$ is a bounded function of $n$ and does not converge to zero as $n \to \infty$. Hence, $x(n)$ does not decay to zero as $n \to \infty$. Two simple examples of signals satisfying these conditions are the unit-step function $x(n) = 1$, $n \geq k_0$, and the complex exponential $x(n) = e^{j\omega_0 n}$, $n \geq k_0$. By Equation (61), the VIT transform of the input defined by (117) is $X(z) = (1 - z^{-1}a)^{-1}x_0$, where $x_0(k) = x(k)$.
Now suppose that the system or digital filter is a time-varying moving average given by the input/output relationship
$$y(n) = \sum_{i=0}^{M} b_i(n)\, x(n-i). \quad (118)$$
Taking the VIT transform of both sides of Equation (118) yields
$$Y(z) = \left[\sum_{i=0}^{M} b_i\, z^{-i}\right] X(z), \quad (119)$$
and thus the transfer function of the moving average filter is
$$H(z) = \sum_{i=0}^{M} b_i\, z^{-i} = \sum_{i=0}^{M} z^{-i}\, (\sigma^i b_i). \quad (120)$$
Now, $X(z) = (1 - z^{-1}a)^{-1}x_0 = (z - a)^{-1}z\,x_0$, and applying Theorem 4 with $d(z) = z^i$ and $g = a$, we have
$$z^{-i}(z - a)^{-1} = \gamma_i\,(z - a)^{-1} - z^{-i}\,q_i(z) \quad (121)$$
for some $q_i(z) \in Q[z]$ of degree $i - 1$. Since $d(z) = z^i$ and $a(n) \neq 0$ for all $n$, solving the evaluation condition $\psi_a^i(\gamma_i) = 1$ gives
$$\gamma_i(k) = \left[\prod_{m=k-i}^{k-1} a(m)\right]^{-1}. \quad (122)$$
Multiplying both sides of Equation (121) on the left by $b_i$ and on the right by $z\,x_0$, and summing the results for $i = 0, 1, \ldots, M$, we have that the transform of the output response is
$$Y(z) = G\,(1 - z^{-1}a)^{-1}x_0 - \sum_{i=0}^{M} b_i\, z^{-i}\, q_i(z)\, z\, x_0, \quad G = \sum_{i=0}^{M} b_i\,\gamma_i. \quad (123)$$
Let
$$Y_{ss}(z) = G\,(1 - z^{-1}a)^{-1}x_0 \quad (124)$$
and
$$Y_{tr}(z) = -\sum_{i=0}^{M} b_i\, z^{-i}\, q_i(z)\, z\, x_0, \quad (125)$$
so that $Y(z) = Y_{ss}(z) + Y_{tr}(z)$, and let $y_{ss}(n,k)$ and $y_{tr}(n,k)$ denote the inverse VIT transforms of $Y_{ss}(z)$ and $Y_{tr}(z)$, respectively. Then, since the highest power of $z^{-1}$ in Equation (125) is equal to $M-1$, $y_{tr}(n,k) = 0$ for $n \geq k + M$, and thus $y_{tr}$ is the transient part of the output response, and $y_{ss}$ is the steady-state part of the output response. Taking the inverse transform of $Y_{ss}(z)$, we then have the following result.
Theorem 5. The steady-state output response of the time-varying moving average (118) to the input defined by Equation (117) is
$$y_{ss}(n) = G(n)\, x(n), \quad\text{where}\quad G(n) = \sum_{i=0}^{M} b_i(n)\left[\prod_{m=n-i}^{n-1} a(m)\right]^{-1}. \quad (126)$$
Proof. It follows directly from the transform pair property (46) that the inverse transform of the right side of Equation (124) is equal to the right side of (126). □
A key point here is that the steady-state response is equal to a scaling of the input by the time function $G(n)$. As an illustration of this result, suppose that
$$x(n) = \cos(\omega_0 n) = \frac{1}{2}\left[e^{j\omega_0 n} + e^{-j\omega_0 n}\right], \quad n \geq k_0, \quad (127)$$
where $\omega_0$ is a fixed frequency. Each of the two complex exponentials is an input of the form (117) with $a(n) = e^{j\omega_0}$ and $a(n) = e^{-j\omega_0}$ for all $n$, respectively. In this case, $\prod_{m=n-i}^{n-1}a(m) = e^{\pm j\omega_0 i}$, and thus $G(n) = \sum_{i=0}^{M} b_i(n)e^{-j\omega_0 i}$ for the input $e^{j\omega_0 n}$, with the complex conjugate value for the input $e^{-j\omega_0 n}$.
Then, by Theorem 5 and Equation (126), the steady-state response to the cosine input (127) is
$$y_{ss}(n) = \mathrm{Re}\left[H(n,\omega_0)\, e^{j\omega_0 n}\right],$$
where $\mathrm{Re}$ denotes the real part and $H(n,\omega_0) = \sum_{i=0}^{M}b_i(n)e^{-j\omega_0 i}$. Then, writing $H(n,\omega_0)$ in the polar form $|H(n,\omega_0)|e^{j\angle H(n,\omega_0)}$, $y_{ss}(n)$ can be written in the form
$$y_{ss}(n) = |H(n,\omega_0)|\cos\!\left(\omega_0 n + \angle H(n,\omega_0)\right).$$
Hence, the steady-state response of a time-varying moving average filter to the cosine input given by Equation (127) is scaled in magnitude by the time function $|H(n,\omega_0)|$ and phase shifted by the time function $\angle H(n,\omega_0)$. Based on this result, the time-varying frequency response function $H(n,\omega)$ of the moving average filter can be defined to be
$$H(n,\omega) = \sum_{i=0}^{M} b_i(n)\, e^{-j\omega i}. \quad (128)$$
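The steady-state formula can be checked numerically. The following sketch (ours, not from the paper) compares the moving-average output for a cosine input against $|H(n,\omega_0)|\cos(\omega_0 n + \angle H(n,\omega_0))$; the coefficient functions $b_i(n)$ are arbitrary sample choices, and since the filter memory is $M$ steps, the identity is exact for all $n$ past the initial transient.

```python
import cmath, math

omega, M = 0.4, 2
b = [lambda n: 1.0,
     lambda n: 0.5 * math.sin(0.1 * n),
     lambda n: 0.25]                                  # sample b_i(n)

x = lambda n: math.cos(omega * n)
y = lambda n: sum(b[i](n) * x(n - i) for i in range(M + 1))   # Equation (118)

def H(n):
    """Time-varying frequency response H(n, omega) = sum_i b_i(n) e^{-j omega i}."""
    return sum(b[i](n) * cmath.exp(-1j * omega * i) for i in range(M + 1))

ok = all(
    abs(y(n) - abs(H(n)) * math.cos(omega * n + cmath.phase(H(n)))) < 1e-12
    for n in range(3, 50)
)
print(ok)  # True
```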
We now consider linear time-varying systems given by an autoregressive model. First, we need to restrict attention to systems that are stable in the following sense.
Definition 3. A linear time-varying system with transfer function $H(z) = d(z)^{-1}\nu(z)$, where $d(z) = z^N + a_{N-1}z^{N-1} + \cdots + a_0$, is asymptotically stable if for any initial conditions $y(k,k), y(k+1,k), \ldots, y(k+N-1,k)$ and any initial time $k \in \mathbb{Z}$, the solution to the difference equation $y(n+N,k) + a_{N-1}(n)y(n+N-1,k) + \cdots + a_0(n)y(n,k) = 0$, $n \geq k$, converges to zero as $n \to \infty$.
Now suppose that the system or digital filter is given by the following time-varying autoregressive model
$$y(n+N,k) + \sum_{i=0}^{N-1} a_i(n)\, y(n+i,k) = x(n,k). \quad (129)$$
In this case, the transfer function of the system is equal to $H(z) = d(z)^{-1}$, where $d(z) = z^N + a_{N-1}z^{N-1} + \cdots + a_0$, and when the input $x(n)$ is defined by Equation (117), the transform of the output response is
$$Y(z) = d(z)^{-1}(1 - z^{-1}a)^{-1}x_0 = d(z)^{-1}(z - a)^{-1}z\, x_0. \quad (130)$$
The steady-state part of the output response can be determined by decomposing the right side of Equation (130) using the result in Corollary 1. This requires that $d(z)\gamma$ be expressed in the form
$$d(z)\,\gamma = q(z)(z - a) + 1 \quad (131)$$
for some $q(z) \in Q[z]$ and $\gamma \in Q$. If $\gamma$ commutes with $z$, which is the case when $a$ and the coefficients of $d(z)$ are constant functions, (131) is satisfied with $\gamma = [d(a)]^{-1}$. In the general case, the computation of $\gamma$ can be carried out as follows.
Let $d(z) = z^N + \sum_{i=0}^{N-1} d_i z^i$, where $d_i = a_i$ and $d_N = 1$. Suppose that $\gamma$ satisfies Equation (131). Then, since $z - a$ is a right factor of $d(z)\gamma - 1$, by Proposition 5, the evaluation of $d(z)\gamma - 1$ at $\psi_a$ is equal to zero. That is,
$$\sum_{i=0}^{N} d_i\, \psi_a^i(\gamma) = 1. \quad (132)$$
By (56), $\psi_a^i(\gamma)(k) = a(k)a(k+1)\cdots a(k+i-1)\,\gamma(k+i)$, and inserting this into Equation (132) gives
$$\sum_{i=0}^{N} d_i(k)\left[\prod_{m=k}^{k+i-1} a(m)\right]\gamma(k+i) = 1. \quad (133)$$
Solving Equation (133) for $\gamma(k+N)$, we have
$$\gamma(k+N) = \left[\prod_{m=k}^{k+N-1} a(m)\right]^{-1}\left[1 - \sum_{i=0}^{N-1} d_i(k)\left(\prod_{m=k}^{k+i-1} a(m)\right)\gamma(k+i)\right], \quad (134)$$
where the product is taken to be $1$ when $i = 0$. The function $\gamma$ can be computed for a finite range $k_0 \leq k \leq k_1$ by solving Equation (134) recursively for a given set of initial conditions $\gamma(k_0), \gamma(k_0+1), \ldots, \gamma(k_0+N-1)$. Since $\gamma = [d(a)]^{-1}$ when $a$ and the coefficients $d_i$ of $d(z)$ are constant functions, we shall take the initial conditions to be $\gamma(k_0+i) = [d(a(k_0))]^{-1}$, $i = 0, 1, \ldots, N-1$. Since $a(n) \neq 0$ for all $n$, Equation (134) can be solved recursively with these initial conditions, although there is a possibility that the time variance can result in a zero value for $\gamma(k+N)$ for some value of $k$. Here, we assume that Equation (134) yields a solution with $\gamma(k) \neq 0$ for $k_0 \leq k \leq k_1$.
Once $\gamma$ has been computed for $k_0 \leq k \leq k_1$, the coefficients of the polynomial $q(z)$ can be computed from the relationship in Equation (131): Let $q(z) = q_{N-1}z^{N-1} + \cdots + q_1 z + q_0$. Then,
$$q(z)(z - a) = q_{N-1}z^N + \sum_{i=1}^{N-1}\left[q_{i-1} - q_i\,(\sigma^i a)\right]z^i - q_0\, a. \quad (135)$$
Equating the right side of Equation (135) plus $1$ to $d(z)\gamma = \sum_{i=0}^{N} d_i\,(\sigma^i\gamma)\,z^i$ gives
$$q_{N-1} = \sigma^N\gamma, \quad (136)$$
$$q_{i-1} = d_i\,(\sigma^i\gamma) + q_i\,(\sigma^i a), \quad i = N-1, N-2, \ldots, 1. \quad (137)$$
From Equation (136), $q_{N-1} = \sigma^N\gamma$, and from Equation (137), $q_{N-2} = d_{N-1}(\sigma^{N-1}\gamma) + q_{N-1}(\sigma^{N-1}a)$, and so on down to $q_0$. Then, inserting the values of $\gamma(k)$ for $k_0 \leq k \leq k_1$ yields the values of the $q_i(k)$ for $k_0 \leq k \leq k_1 - N$. We then have the following result.
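The recursion (134) is easy to step forward in code. The following is our sketch (function names and test data are ours) of the forward solution for $\gamma$; the constant-coefficient test uses the fact that $\gamma = 1/d(a)$ is a fixed point of the recursion in that case.

```python
def gamma_recursion(d, a, gamma_init, k0, num_steps):
    """Solve Equation (134) forward in k.
    d = [d_0, ..., d_{N-1}] coefficient callables (leading coefficient 1 implied);
    gamma_init = [gamma(k0), ..., gamma(k0+N-1)]."""
    N = len(d)
    g = {k0 + i: v for i, v in enumerate(gamma_init)}
    for k in range(k0, k0 + num_steps):
        prod, acc = 1.0, 0.0
        for i in range(N):                 # terms i = 0, ..., N-1 of Equation (133)
            acc += d[i](k) * prod * g[k + i]
            prod *= a(k + i)               # running product a(k)...a(k+i)
        g[k + N] = (1.0 - acc) / prod      # prod = 0 here would be a singular step
    return g

# Constant-coefficient check: d(z) = z^2 - 1.25 z + 0.375, a(n) = 0.25, so
# gamma should stay at 1/d(0.25) = 1/0.125 = 8.0 for all k.
d = [lambda k: 0.375, lambda k: -1.25]
a = lambda k: 0.25
g0 = 1.0 / (0.25**2 - 1.25 * 0.25 + 0.375)
g = gamma_recursion(d, a, [g0, g0], 0, 20)
print(all(abs(g[k] - g0) < 1e-9 for k in g))  # True
```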
Theorem 6. Suppose that the system given by the time-varying autoregressive model in Equation (129) is asymptotically stable, and that $\gamma$ and $q(z)$ satisfy Equation (131) with $\gamma(k) \neq 0$ for all $k$ in the interval of interest. Then, the steady-state response to the input defined by Equation (117) is given by
$$y_{ss}(n) = \left[\prod_{m=k}^{n-1}\hat a(m)\right]\gamma(k)\,x(k) = \gamma(n)\,x(n), \quad \hat a = (\sigma\gamma)\,a\,\gamma^{-1}. \quad (138)$$
Proof. By Corollary 1, the transform $Y(z)$ of the output response resulting from the input defined by Equation (117) has the decomposition
$$Y(z) = (z - \hat a)^{-1}(\sigma\gamma)\,z\,x_0 - d(z)^{-1}q(z)\,z\,x_0. \quad (139)$$
Since the system is stable, the inverse transform of the term $d(z)^{-1}q(z)zx_0$ in (139) must converge to zero as $n \to \infty$, and thus the transform $Y_{ss}(z)$ of the steady-state part of the output response is
$$Y_{ss}(z) = (z - \hat a)^{-1}(\sigma\gamma)\,z\,x_0 = (1 - z^{-1}\hat a)^{-1}\gamma\, x_0. \quad (140)$$
Taking the inverse transform of Equation (140) using the transform pair (15) yields the steady-state response given by Equation (138). □
In contrast to the moving average case, by Theorem 6 the steady-state response to the input defined by Equation (117) is not a constant scaling of the input when the system is given by the autoregressive model in Equation (129). This is a consequence of the fact that a constant $\gamma$ does not satisfy the relationship in Equation (131) as a result of the time variance of the coefficients of $d(z)$. In the case when $x(n)$ is the complex exponential $e^{j\omega_0 n}$, where $\omega_0$ is a fixed frequency, the solution for $\gamma$ given by Equation (134) can be expressed in the polar form $\gamma(n) = \rho(n)e^{j\phi(n)}$ with $\phi(n+1) - \phi(n) \neq 0$ in general. Hence, the time variance will result in new frequencies appearing in the steady-state output response.
It is also interesting to note that if the decomposition in Theorem 4 is applied to $Y(z)$, we obtain the first-order term
$$\gamma\,(z - a)^{-1}\,z\,x_0. \quad (141)$$
The inverse transform of (141) is a scaled version of the input, namely $\gamma(n)x(n)$. However, in general it is not the steady-state response unless the complementary term in the decomposition is the transform of a function that converges to zero (i.e., unless that term corresponds to a stable system). When it is, the inverse transform of (141) can be defined to be the steady-state response, and the scaling factor $\gamma(n)$ defines a frequency response function for the time-varying autoregressive system model. The derivation of an expression for this frequency response function is omitted.
In the general case when the system is given by the input/output relationship (116), the steady-state response to the input defined by Equation (117) can be computed by combining the above results for the moving average and autoregressive models. The details are omitted.