Abstract
When studying signal reconstruction, the frames are often selected in advance as encoding tools. However, in practical applications, this encoding frame may be subject to attacks by intermediaries and generate errors. To solve this problem, in this paper, the erasure recovery matrices for data erasures and rearrangements are analyzed. Unlike previous research, we first introduce a kind of frame and its erasure recovery matrix M such that $M_{I,\Lambda}=I_m$, where $I_m$ is a unit matrix. In this case, we do not need to invert the matrix of the frame operator or the erasure recovery matrix, and this greatly simplifies reconstruction problems and calculations. Then three different construction algorithms for the above erasure recovery matrix M and the frame are proposed, and each of them has advantages. Furthermore, some restrictions are imposed on M so that the constructed frame and erasure recovery matrix M can recover coefficients from rearrangements. We prove that in some cases, the above M and frame can recover coefficients stably from m rearrangements.
MSC:
46C07
1. Introduction
In order to deal with some problems concerning nonharmonic Fourier series, frames were first introduced by Duffin and Schaeffer in 1952 []. Specifically,
Definition 1
([]). A sequence $\{f_i\}$ of elements in a Hilbert space H is a frame for H if there exist constants $0<A\le B<\infty$ such that
$$A\|f\|^{2}\le \sum_{i}|\langle f, f_i\rangle|^{2}\le B\|f\|^{2}\qquad \text{for all } f\in H.$$
The numbers A and B are called frame bounds. In particular, if $A=B$, then the frame is called a tight frame for H. Especially, if $A=B=1$, then the tight frame is called a Parseval frame for H.
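For intuition, the following short numerical sketch (added here for illustration; the three-vector "Mercedes-Benz" frame in $\mathbb{R}^2$ is a standard toy example, not taken from this paper) checks the frame inequality and computes the optimal bounds A and B as the extreme eigenvalues of the frame operator.

```python
import numpy as np

# Toy example: the "Mercedes-Benz" frame of three unit vectors in R^2.
F = np.array([[0.0, 1.0],
              [-np.sqrt(3) / 2, -0.5],
              [np.sqrt(3) / 2, -0.5]])      # rows are the frame vectors f_i

S = F.T @ F                                  # frame operator S = sum_i f_i f_i^T
A, B = np.linalg.eigvalsh(S)[[0, -1]]        # optimal frame bounds = extreme eigenvalues of S
print(A, B)                                  # both equal 1.5, so this is a tight frame

# Check A*||f||^2 <= sum_i |<f, f_i>|^2 <= B*||f||^2 for a random f.
f = np.random.randn(2)
energy = np.sum(np.abs(F @ f) ** 2)          # sum of squared frame coefficients
assert A * (f @ f) - 1e-9 <= energy <= B * (f @ f) + 1e-9
```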
Due to the redundancy of frames [], they can be used in many fields as a generalization of bases in Hilbert spaces, such as signal and image processing [], quantization [], capacity of transmission channels [], coding theory [], and data transmission technology []. In particular, more and more scholars have begun to apply frames to signal erasures and reconstruction [].
More specifically, some scholars recover the lost data by inverting the frame operator of the sub-frame whose indices correspond to the non-erased frame coefficients []. However, this method is slow because it requires inverting a matrix. Therefore, it has been proposed to use the dual frame defined below to recover the lost data [].
Definition 2
([]). For a frame $\{f_i\}$ for H, if there is a sequence $\{g_i\}$ in H such that
$$f=\sum_{i}\langle f, f_i\rangle g_i$$
for any $f\in H$, then $\{g_i\}$ is called a dual of $\{f_i\}$.
Then any element f in a separable Hilbert space H can be recovered by a reconstruction formula involving a frame and a dual frame; i.e., there is a dual frame $\{g_i\}$ such that $f=\sum_{i}\langle f, f_i\rangle g_i$ []. When a part of the frame coefficients is erased, the remaining frame coefficients can be used for reconstruction. Then some optimal frames and duals that minimize the reconstruction error need to be discussed. For example, in [], the authors found the optimal dual frames that minimize the reconstruction error for 1-erasure and 2-erasures. Many other scholars have researched this issue in recent years; see, for example, [,,].
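As an illustration of this reconstruction formula, the following sketch (a hypothetical numerical example, not part of the original text) builds the canonical dual frame $g_i=S^{-1}f_i$ and verifies that it reproduces f from its frame coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 4, 7
F = rng.standard_normal((N, n))          # rows f_i: generically a frame for R^4
S = F.T @ F                              # frame operator
G = F @ np.linalg.inv(S)                 # rows g_i = S^{-1} f_i: the canonical dual frame

f = rng.standard_normal(n)
coeffs = F @ f                           # frame coefficients <f, f_i>
f_rec = G.T @ coeffs                     # f = sum_i <f, f_i> g_i
assert np.allclose(f, f_rec)
```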
However, in practical applications, the pre-selected encoding frame may be subject to attacks by intermediaries, which introduce errors, and it is then difficult to recover a large amount of lost data with the above methods. In [], the authors proposed using the erasure recovery matrix M to recover the erased data, which can handle a large number of data erasures and also protects the encoding frame. Moreover, two ways to recover the erased data with the erasure recovery matrix M were proposed. One is
$$c_{\Lambda}=-\left(M_{\Lambda}^{*} M_{\Lambda}\right)^{-1} M_{\Lambda}^{*}\, M_{\Lambda^{c}}\, c_{\Lambda^{c}},\qquad (1)$$
where $c=Tf$ is the vector of frame coefficients, $\Lambda$ is the index set of the erased coefficients, and $M_{\Lambda}$ denotes the minor of M formed by the columns indexed by $\Lambda$ (similarly for $M_{\Lambda^{c}}$). The other one is
$$c_{\Lambda}=-\left(M_{I,\Lambda}\right)^{-1} M_{I,\Lambda^{c}}\, c_{\Lambda^{c}},\qquad (2)$$
where $M_{I,\Lambda}$ denotes the minor of M with rows indexed by I and columns indexed by $\Lambda$, and I is a subset of the row indices of M with $|I|=|\Lambda|$.
Then the authors mainly discussed the method of (1) and did not discuss the method of (2), because it takes a certain amount of calculation to find a suitable I that makes $M_{I,\Lambda}$ invertible. Hence, in this paper, motivated by [], we solve the problem of finding such an I. First of all, a special frame and its erasure recovery matrix M such that $M_{I,\Lambda}=I_m$ are discussed, where $I_m$ is a unit matrix. In this case, we can easily recover the erased data. In fact, we just need
$$c_{\Lambda}=-M_{I,\Lambda^{c}}\, c_{\Lambda^{c}}.$$
Obviously, our method does not need to invert a matrix, which greatly simplifies reconstruction problems and calculations. Furthermore, the construction of the above frame and the erasure recovery matrix M is discussed. Three different construction algorithms are proposed, and each of them has its advantages. Next, we prove that the frame and erasure recovery matrix M that we construct can recover the data when data at known locations are erased. Furthermore, some restrictions are imposed on M so that the constructed frame and erasure recovery matrix M can recover coefficients from m rearrangements. Then we give a construction algorithm for an erasure recovery matrix and a frame for which the coefficients can be recovered from rearrangements. Finally, we prove that in some cases, the above M and frame can recover coefficients stably from m rearrangements.
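The next toy sketch (purely illustrative; the matrix M below is a made-up example rather than one of the constructions proposed later) shows why no matrix inversion is needed once the sub-block $M_{I,\Lambda}$ is an identity matrix.

```python
import numpy as np

# A hypothetical 2-erasure recovery matrix M whose first two columns form I_2;
# coefficient vectors have length N = 5 and the erased set is Lambda = {1, 2}.
M = np.array([[1.0, 0.0, -2.0, 1.0, 3.0],
              [0.0, 1.0, 4.0, -1.0, 2.0]])

# Any admissible coefficient sequence satisfies M c = 0, so the first two entries
# are determined by the surviving ones without inverting anything.
c_rest = np.array([0.5, -1.0, 2.0])          # the surviving coefficients c_{Lambda^c}
c_lost = -M[:, 2:] @ c_rest                  # c_Lambda = -M_{I,Lambda^c} c_{Lambda^c}
c = np.concatenate([c_lost, c_rest])
assert np.allclose(M @ c, 0)                 # the full sequence lies in ker(M)
```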
2. Notation, Terminology and Data Erasures
In this section, we recall some notation, terminology, definitions, and properties of frame theory that we use throughout the paper.
Firstly, we introduce the following operators, which often appear throughout this paper.
Definition 3
([]). Let $F=\{f_i\}_{i=1}^{N}$ be a Bessel sequence for H.
(I) The analysis operator of F is defined by
$$T\colon H\to \mathbb{C}^{N},\qquad Tf=\{\langle f, f_i\rangle\}_{i=1}^{N}.$$
(II) The synthesis operator of F is defined by
$$T^{*}\colon \mathbb{C}^{N}\to H,\qquad T^{*}\{c_i\}_{i=1}^{N}=\sum_{i=1}^{N} c_i f_i.$$
(III) The frame operator of F is defined by
$$S\colon H\to H,\qquad Sf=T^{*}Tf=\sum_{i=1}^{N}\langle f, f_i\rangle f_i.$$
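In the finite-dimensional setting used throughout this paper, these three operators are ordinary matrices. The following sketch (added for illustration, with a randomly chosen Bessel sequence) identifies T with the $N\times n$ matrix whose rows are the conjugated frame vectors and checks that $S=T^{*}T$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 3, 6
F = rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))   # rows f_i in C^3

def analysis(f):            # T f = (<f, f_i>)_i
    return F.conj() @ f

def synthesis(c):           # T* c = sum_i c_i f_i
    return F.T @ c

def frame_operator(f):      # S f = T* T f = sum_i <f, f_i> f_i
    return synthesis(analysis(f))

f = rng.standard_normal(n) + 1j * rng.standard_normal(n)
assert np.allclose(frame_operator(f), (F.T @ F.conj()) @ f)          # S = T*T as a matrix
```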
In applications, we often use a frame $\{f_i\}_{i=1}^{N}$ to encode data f and obtain the frame coefficients $\{\langle f, f_i\rangle\}_{i=1}^{N}$. Then we use a dual frame $\{g_i\}_{i=1}^{N}$, known as the decoding frame, to recover f, thus
$$f=\sum_{i=1}^{N}\langle f, f_i\rangle g_i.$$
However, in applications, some erasures and rearrangements will happen. Thus we may only get part of the frame coefficients, or a rearrangement of them. Hence, in this paper, we use erasure recovery matrices (introduced in []) to recover the data.
Then, we use Table 1 to introduce some notation and terminology that appear in the following text.
Table 1.
Table of notation and terminology used in this paper.
3. Recovery of Data from m-Erasures
In [], the authors proposed using the erasure recovery matrix M to recover the lost data, and they gave two recovery methods, namely (1) and (2). However, they pointed out that it is difficult to find a suitable set I that makes the matrix $M_{I,\Lambda}$ invertible, where $M_{I,\Lambda}$ denotes the minor of M with rows indexed by I and columns indexed by $\Lambda$, and I is a subset of the row indices of M. Hence, in this section, we construct a special frame and its erasure recovery matrix M such that $M_{I,\Lambda}=I_m$, where $I_m$ is a unit matrix. In this case, $M_{I,\Lambda}$ is trivially invertible, and we can easily recover the lost data.
3.1. Some Properties of the Erasure Recovery Matrix
Firstly, we provide the definition of the erasure recovery matrix.
Definition 4
([]). Let $\{f_i\}_{i=1}^{N}$ be a frame for an n-dimensional Hilbert space H and $m<N$. An m-erasure recovery matrix is a matrix M with $\operatorname{spark}(M)\ge m+1$ satisfying $M(Tf)=0$ for any vector $f\in H$, where T denotes the analysis operator for the frame F and the spark of a collection of vectors $\{u_i\}$ is the size of the smallest linearly dependent subset of $\{u_i\}$.
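Both requirements in Definition 4 can be checked by brute force for small examples; the frame and matrix below are hypothetical and chosen only for illustration.

```python
import numpy as np
from itertools import combinations

def spark(M, tol=1e-10):
    """Size of the smallest linearly dependent set of columns of M."""
    N = M.shape[1]
    for k in range(1, N + 1):
        for cols in combinations(range(N), k):
            if np.linalg.matrix_rank(M[:, list(cols)], tol=tol) < k:
                return k
    return N + 1            # all columns independent (possible only if M has >= N rows)

# Hypothetical example: frame {e1, e2, e1+e2} for R^2, analysis operator T_mat,
# and a candidate recovery matrix M annihilating every coefficient vector T f.
T_mat = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])
M = np.array([[1.0, 1.0, -1.0]])
assert np.allclose(M @ T_mat, 0)            # M(Tf) = 0 for all f
print(spark(M))                             # prints 2, so every single column is nonzero
                                            # and M can serve as a 1-erasure recovery matrix
```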
In the following, we discuss how to recover erased frame coefficients at known locations. We use $\Lambda$ to represent the index set of the erased coefficients, where $\Lambda=\{1,2,\ldots,m\}$. Then we introduce a special frame $\{f_i\}_{i=1}^{N}$: if for any $j\in\Lambda$, there is a sequence of complex numbers $\{c_{ji}\}_{i=m+1}^{N}$ such that
$$f_j=\sum_{i=m+1}^{N} c_{ji} f_i,$$
where none of the coefficients in the above equations are equal to zero. We also recall that the excess of a frame of H is the greatest integer m such that m elements can be removed from the frame and still have a frame for H [].
Next, we consider whether the above frame remains a frame whenever any m elements are removed.
Proposition 1.
If $\{f_i\}_{i=1}^{N}$ is a frame for H with excess m, then the above frame $\{f_i\}_{i=1}^{N}$ remains a frame whenever any m elements are removed.
Proof.
First of all, if the first m elements are removed, then the conclusion is obtained immediately. More precisely, in finite-dimensional Hilbert spaces, frames are exactly the spanning families of vectors in the space; since the first m elements of our frame lie in the span of the remaining elements by construction, the frame is robust under removing the first m elements.
Next, we consider the removal of an arbitrary set of m elements. By construction, each of the first m elements can be expressed linearly in terms of the remaining elements, with all coefficients nonzero. Hence, any m removed elements can be expressed linearly in terms of the remaining elements, and we obtain that $\{f_i\}_{i=1}^{N}$ is still a frame whenever any m elements are removed. □
Note that if the excess of $\{f_i\}_{i=1}^{N}$ is the greatest integer m, then for any $f\in H$,
$$\langle f, f_j\rangle=\sum_{i=m+1}^{N}\overline{c_{ji}}\,\langle f, f_i\rangle,\qquad j=1,\ldots,m;$$
we let
$$M=\left(\,I_m \;\middle|\; -\overline{C}\,\right),\qquad C=\left(c_{ji}\right)_{1\le j\le m,\; m+1\le i\le N}.\qquad (3)$$
If M is an m-erasure recovery matrix, then $M_{\Lambda}$ is invertible. Hence, we discuss whether M is an m-erasure recovery matrix. Before that, we introduce the following lemma and proposition.
Lemma 1
([]). Let $\{f_i\}_{i=1}^{N}$ be a frame for H. Suppose that $m<N$ is an integer and that M is a matrix such that $\ker M=T(H)$, where T is the analysis operator of $\{f_i\}_{i=1}^{N}$. Then the following assertions are equivalent.
(i). Every m columns of M are linearly independent.
(ii). $\{f_i\}_{i=1}^{N}$ remains a frame whenever any m elements are removed.
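For the same toy frame and matrix as in the previous sketch, the equivalence stated in Lemma 1 can be confirmed numerically (again, this is an illustrative check, not part of the original text).

```python
import numpy as np
from itertools import combinations

T_mat = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])              # rows are the frame vectors
M = np.array([[1.0, 1.0, -1.0]])            # satisfies M @ T_mat = 0
m = 1

# (i): every m columns of M are linearly independent.
every_m_cols_independent = all(
    np.linalg.matrix_rank(M[:, list(cols)]) == m
    for cols in combinations(range(M.shape[1]), m))

# (ii): the frame still spans the space after removing any m vectors.
still_a_frame_after_removal = all(
    np.linalg.matrix_rank(np.delete(T_mat, rows, axis=0)) == T_mat.shape[1]
    for rows in combinations(range(T_mat.shape[0]), m))

assert every_m_cols_independent == still_a_frame_after_removal   # both True here
```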
Proposition 2.
Let H be an n-dimensional Hilbert space, M be the matrix defined in (3) above, and T be the analysis operator of the frame $\{f_i\}_{i=1}^{N}$. If the excess of $\{f_i\}_{i=1}^{N}$ is m, then we have $\ker M=T(H)$ and M is an m-erasure recovery matrix.
Proof.
On the one hand, for any , there is a such that , for all .
Furthermore,
That is .
On the other hand, since , we obtain that .
So far, we find that $\ker M=T(H)$ and $\{f_i\}_{i=1}^{N}$ remains a frame whenever any m elements are removed. According to Lemma 1, it is clear that every m columns of M are linearly independent. Obviously, the first $m+1$ columns of M are linearly dependent. Hence $\operatorname{spark}(M)=m+1$, and the matrix M is an m-erasure recovery matrix. □
In this case, we let $I=\{1,2,\ldots,m\}$, so that $M_{I,\Lambda}=I_m$. Then the following Example 1 shows that we can use the m-erasure recovery matrix M to recover the erased data if $\Lambda=\{1,2,\ldots,m\}$.
Example 1.
Assume that the first m data are lost. Then we can construct a frame $\{f_i\}_{i=1}^{N}$ for H with excess m (the construction method for this kind of frame will be discussed later), and for any $j\in\{1,\ldots,m\}$, there is a sequence of complex numbers $\{c_{ji}\}_{i=m+1}^{N}$ such that
$$f_j=\sum_{i=m+1}^{N} c_{ji} f_i,$$
where none of the coefficients in the above equations is equal to zero.
Hence its erasure recovery matrix is
$$M=\left(\,I_m \;\middle|\; -\overline{C}\,\right),\qquad C=\left(c_{ji}\right)_{1\le j\le m,\; m+1\le i\le N}.$$
Then for any $f\in H$, we let $c=Tf$, where $c_i=\langle f, f_i\rangle$. And
$$Mc=M(Tf)=0.$$
Since the first m data are erased, we obtain that
$$M_{I,\Lambda}\,c_{\Lambda}=-M_{I,\Lambda^{c}}\,c_{\Lambda^{c}},$$
where $M_{I,\Lambda}$ is the matrix composed of the first m rows and the first m columns of the matrix M, $c_{\Lambda}=(c_1,\ldots,c_m)^{T}$, and $c_{\Lambda^{c}}=(c_{m+1},\ldots,c_N)^{T}$.
Obviously, we have $M_{I,\Lambda}=I_m$. Hence
$$c_{\Lambda}=-M_{I,\Lambda^{c}}\,c_{\Lambda^{c}},$$
and
$$c_j=\sum_{i=m+1}^{N}\overline{c_{ji}}\,c_i,\qquad j=1,\ldots,m.$$
That is to say we can use the remaining data to recover the erased data easily.
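A small numerical sketch of this example follows (added for illustration). It assumes the block form $M=(\,I_m\mid -\overline{C}\,)$ written above and uses a randomly generated real frame, so no conjugation is needed.

```python
import numpy as np

rng = np.random.default_rng(2)
n, N, m = 3, 5, 2                            # dim H = 3, frame of N = 5 vectors, excess m = 2

# Hypothetical frame: the last N - m rows span R^3, and the first m rows are combinations
# of them with all combination coefficients nonzero (the property required above).
F_rest = rng.standard_normal((N - m, n))     # f_{m+1}, ..., f_N
C = rng.uniform(0.5, 1.5, size=(m, N - m))   # nonzero coefficients c_{ji}
F = np.vstack([C @ F_rest, F_rest])          # f_j = sum_i c_{ji} f_i for j = 1, ..., m

M = np.hstack([np.eye(m), -C])               # M = (I_m | -C); real case, no conjugation
T = F                                        # analysis operator: (Tf)_i = <f, f_i> = F @ f
assert np.allclose(M @ T, 0)                 # M annihilates every coefficient vector

f = rng.standard_normal(n)
c = T @ f                                    # full coefficient vector
c_recovered = -M[:, m:] @ c[m:]              # recover the first m (erased) coefficients
assert np.allclose(c_recovered, c[:m])
```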
3.2. Algorithm Construction
In what follows, we discuss the construction of the above frame and the m-erasure recovery matrix M when data at a known location are erased. We assume that the first m data are erased, and the same goes for the erasure of any other m known locations (Algorithm 1).
| Algorithm 1 The construction of the m-erasure recovery matrix M when data at a known location are erased |
|
Proposition 3.
The above matrix M in Algorithm 1 is an m-erasure recovery matrix of if .
Proof.
First of all, we prove that is a frame for H. Since , for all . Then
where
And is a frame for , since is a standard orthonormal basis for . Thus
where B is the upper frame bound of .
And
where A is the lower frame bound of .
Thus it is a frame. Since T is surjective,  is a frame for H.
Next, we prove that M is an m-erasure recovery matrix of .
Since
Hence $\ker M=T(H)$. Moreover, according to the structure of the frame, we can obtain that it remains a frame whenever any m elements are removed.
By Lemma 1, we know every m column vectors of M are linearly independent. Hence M is an m-erasure recovery matrix of . □
As we know, when the frame used for coding is a Parseval frame, better results are often obtained. Therefore, we propose a construction method for a Parseval frame and its m-erasure recovery matrix (Algorithm 2).
| Algorithm 2 The construction of m-erasure recovery matrix for a Parseval frame |
|
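Independently of the specific steps of Algorithm 2, one standard way to obtain a Parseval frame from an arbitrary spanning set is to apply $S^{-1/2}$ to each vector; the following sketch (an illustration of this general fact, not of the authors' algorithm) demonstrates it.

```python
import numpy as np

rng = np.random.default_rng(3)
n, N = 3, 6
F = rng.standard_normal((N, n))                  # rows f_i: a generic frame for R^3

S = F.T @ F                                      # frame operator (symmetric positive definite)
w, V = np.linalg.eigh(S)
S_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T        # S^{-1/2}
F_parseval = F @ S_inv_sqrt                      # rows S^{-1/2} f_i form a Parseval frame

# For a Parseval frame, the frame operator is the identity.
assert np.allclose(F_parseval.T @ F_parseval, np.eye(n))
```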
Proposition 4.
The above matrix M in Algorithm 2 is an m-erasure recovery matrix of the Parseval frame  if  has full rank and .
Proof.
First of all, we prove that is a Parseval frame.
According to the structure of G, we have
Since  has full rank,  is a Parseval frame.
Since
for all , and is the matrix obtained by performing the Gram-Schmidt orthonormalization procedure on the columns of Q. We can get that
for all .
Hence
and .
According to Lemma 1, we know M is an m-erasure recovery matrix of . □
Similarly to Algorithm 2, we can obtain a construction method for an ordinary frame (instead of a Parseval frame) and its m-erasure recovery matrix (Algorithm 3).
| Algorithm 3 The construction of the m-erasure recovery matrix for an ordinary frame |
|
Proposition 5.
The above matrix M in Algorithm 3 is an m-erasure recovery matrix of the frame  if Q has full rank and .
Proof.
Since both F and K are Parseval frames and
for any , we can obtain that
thus is a frame.
According to Proposition 4, we know that M in Algorithm 3 is an m-erasure recovery matrix of the Parseval frames F and K. Thus
hence
And
hence M is an m-erasure recovery matrix of the frame . □
4. Recovery of Data from Rearrangements
In actual signal transmission, besides data erasure, a very common problem is data rearrangement. So in this section, we impose some restrictions on M so that the constructed frame and M can recover coefficients from rearrangements if .
Firstly, Lemma 2 introduces the conditions under which the frame can recover coefficients from rearrangements.
Lemma 2
([]). Suppose that $m<N$. Let $\{f_i\}_{i=1}^{N}$ be a frame for H and M be a matrix such that $\ker M=T(H)$. Then $\{f_i\}_{i=1}^{N}$ can recover the sequence of frame coefficients $\{\langle f, f_i\rangle\}_{i=1}^{N}$ from any of its rearrangements for any $f\in H\setminus E$ (where E is the union of finitely many proper subspaces of H and therefore is of measure zero) if and only if for any matrix $\widetilde{M}$ consisting of the same columns of M but in a different order, $\operatorname{rank}(M)<\operatorname{rank}\begin{pmatrix} M\\ \widetilde{M}\end{pmatrix}$.
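Assuming the rank test takes the stacked form written above (comparing $\operatorname{rank}(M)$ with the rank of M and the column-permuted $\widetilde{M}$ stacked together), it can be checked by enumerating column permutations; the matrix below is a hypothetical example.

```python
import numpy as np
from itertools import permutations

M = np.array([[1.0, 0.0, -2.0, 1.0],
              [0.0, 1.0, 4.0, -1.0]])            # hypothetical 2 x 4 recovery matrix

def detects_all_rearrangements(M):
    """True if every nontrivial column permutation M~ satisfies rank(M) < rank([M; M~])."""
    N = M.shape[1]
    base = np.linalg.matrix_rank(M)
    for perm in permutations(range(N)):
        if perm == tuple(range(N)):
            continue                             # skip the identity permutation
        M_tilde = M[:, list(perm)]
        if np.linalg.matrix_rank(np.vstack([M, M_tilde])) <= base:
            return False
    return True

print(detects_all_rearrangements(M))
```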
Then we construct a matrix and a frame and explore under what conditions this matrix is an erasure recovery matrix, the frame is a Parseval frame, and whether they can recover coefficients from rearrangements (Algorithm 4).
| Algorithm 4 The construction of erasure recovery matrix for rearrangements |
|
Proposition 6.
The above matrix M in Algorithm 4 is an m-erasure recovery matrix of the Parseval frame  if Q has full rank and . Moreover, M can recover coefficients from rearrangements if .
Proof.
First of all, similar to Proposition 5, we know that is a Parseval frame.
Since
for all .
Hence
and , where T is the analysis operator of
Furthermore, since remains a frame whenever any m elements are removed, we can obtain that M is an m-erasure recovery matrix of .
Next, we prove that M can recover coefficients from m rearrangements if . We discuss the following three situations:
Situation 1: The rearrangement occurs between the (m+1)th column and the Nth column. Since , it is easy to check that
Thus
where $\widetilde{M}$ is a matrix in which the columns are the same as those of M but in a different order. According to Lemma 2, we can use M and $\{f_i\}_{i=1}^{N}$ to recover coefficients from rearrangements.
Situation 2: The rearrangement occurs both between the (m+1)th column and the Nth column and among the first m columns, but there is no interchange between the first m columns and the remaining columns. Without loss of generality, we assume that the first column becomes the second column; similar results can be obtained in the other cases. Since the first row of the matrix is all positive and the rows are pairwise orthogonal, there is no constant c such that
hence
Thus
so we can use M and $\{f_i\}_{i=1}^{N}$ to recover coefficients from rearrangements.
Situation 3: One of the first m columns becomes one of the last $N-m$ columns. Without loss of generality, we assume that the first column becomes the $(m+1)$th column; similar results can be obtained in the other cases. Then
where a is a constant.
Then we can choose some c such that . Moreover,
Hence we can use M and $\{f_i\}_{i=1}^{N}$ to recover coefficients from rearrangements. □
Next, we discuss whether the above M and $\{f_i\}_{i=1}^{N}$ in Algorithm 4 can recover coefficients stably from m rearrangements. We obtain that if  and  are pairwise orthogonal, where , then we can recover coefficients stably from m rearrangements in Situation 1 and Situation 2. Before that, we give a necessary and sufficient condition for a frame to recover coefficients stably from m rearrangements.
Lemma 3
([]). We can recover coefficients stably from m rearrangements if and only if  is totally robust. That is,  is a frame and for any ,  and  satisfying , we have .
Hence, we just need to prove that $\{f_i\}_{i=1}^{N}$ in Algorithm 4 is totally robust.
Proposition 7.
$\{f_i\}_{i=1}^{N}$ in Algorithm 4 is totally robust in the following two situations, and it can recover coefficients stably from m rearrangements, if .
Proof.
Situation 1: The rearrangement occurs between the (m+1)th column and the Nth column. For any ,  and  satisfying , we consider
Hence
Since the rearrangement occurs between the (m+1)th column and the Nth column, we use  to represent the set of indices where rearrangements occur; hence
Thus
where .
By the same reason, we have
where .
If , then .
If , since and are pairwise orthogonal, where , we can get that
where c is a constant. Moreover,
where .
Since remains a frame whenever any m elements are removed, we get and
According to Lemma 3, is totally robust and can recover coefficients stably from m rearrangements.
Situation 2: The rearrangement occurs among the first m columns. Then we have
Hence
Thus
and
where .
If , then .
If , since there is a j such that and are linearly independent, we obtain that
where c is a constant. Moreover
Since remains a frame whenever any m elements are removed, we get and
Hence is totally robust and can recover coefficients stably from m rearrangements. □
Author Contributions
All authors contributed to the study conception and design. Conceptualization, M.H.; Formal analysis, M.H.; Funding acquisition, M.H.; Investigation, M.H. and C.W.; Methodology, M.H.; Supervision, J.L.; Writing—original draft, M.H.; Writing—review and editing, M.H. and C.W. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by Scientific Research Initiation Fund of Chengdu University of Technology (10912-KYQD2022-09459).
Data Availability Statement
Data sharing not applicable to this article, as no datasets were generated or analyzed during the current study.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Duffin, R.J.; Schaeffer, A.C. A class of nonharmonic fourier series. Trans. Am. Math. Soc. 1952, 72, 341–366. [Google Scholar] [CrossRef]
- Benac, M.J.; Massey, P.; Ruiz, M. Optimal frame designs for multitasking devices with weight restrictions. Adv. Comput. Math. 2020, 46, 22. [Google Scholar] [CrossRef]
- Candès, E.J.; Donoho, D.L. New tight frames of curvelets and optimal representations of objects with piecewise singularities. Commun. Pure Appl. Math. 2004, 57, 219–266. [Google Scholar] [CrossRef]
- Bodmann, B.G.; Paulsen, V.I. Frame paths and error bounds for sigma-delta quantization. Appl. Comput. Harmon. Anal. 2007, 22, 176–197. [Google Scholar] [CrossRef][Green Version]
- Dana, A.F.; Gowaikar, R.; Palanki, R.; Hassibi, B.; Effros, M. Capacity of wireless erasure networks. IEEE Trans. Inf. Theory 2006, 52, 789–804. [Google Scholar] [CrossRef]
- Leng, J.S.; Han, D.; Huang, T. Optimal dual frames for communication coding with probabilistic erasures. IEEE Trans. Signal. Process. 2011, 59, 5380–5389. [Google Scholar] [CrossRef]
- Albanese, A.; Blömer, J.; Edmonds, J.; Luby, M.; Sudan, M. Priority encoding transmission. IEEE Trans. Inf. Theory 1996, 42, 1737–1744. [Google Scholar] [CrossRef]
- Fickus, M.; Marks, J.D.; Poteet, M.J. A generalized Schur-Horn theorem and optimal frame completions. Appl. Comput. Harmon. Anal. 2016, 40, 505–528. [Google Scholar] [CrossRef]
- Arabyani-Neyshaburi, F.; Kamyabi-Gol, R.A.; Farshchian, R. Matrix Methods for perfect signal recovery underlying range space of operators. Math. Method Appl. Sci. 2023, 46, 12273–12290. [Google Scholar] [CrossRef]
- Han, D.; Hu, Q.F.; Liu, R. Quantum injectivity of multi-window Gabor frames in finite dimensions. Ann. Funct. Anal. 2022, 13, 59. [Google Scholar] [CrossRef]
- Han, D.; Kornelson, K.; Larson, D.; Weber, E. Frames for Undergraduates; American Mathematical Society: Providence, RI, USA, 2007; pp. 40–41. [Google Scholar]
- Alexeev, B.; Cahill, J.; Mixon, D. Full spark frames. J. Fourier Anal. Appl. 2012, 18, 1167–1194. [Google Scholar] [CrossRef]
- Leng, J.S.; Han, D.; Huang, T. Probability modelled optimal frames for erasures. Linear Algebra Its Appl. 2013, 438, 4222–4236. [Google Scholar] [CrossRef]
- Cheng, C.; Han, D. On Twisted Group Frames. Linear Algebra Its Appl. 2019, 569, 285–310. [Google Scholar] [CrossRef]
- He, M.; Leng, J.S.; Li, D. Operator representations of K-frames: Boundedness and stability. Oper. Matrices 2020, 14, 921–934. [Google Scholar] [CrossRef]
- Lv, F.; Sun, W. Construction of robust frames in erasure recovery. Linear Algebra Appl. 2015, 479, 155–170. [Google Scholar] [CrossRef]
- Han, D.; Larson, D.; Scholze, S.; Sun, W. Erasure recovery matrices for encoder protection. Appl. Comput. Harmon. Anal. 2020, 48, 766–786. [Google Scholar] [CrossRef]
- Casazza, P.; Kutyniok, G. Finite Frames: Theory and Applications; Springer: New York, NY, USA, 2013; pp. 154–196. [Google Scholar]
- Han, D.; Sun, W. Reconstruction of signals from frame coefficients with erasures at unknown locations. IEEE Trans. Inform. Theory 2014, 60, 4013–4025. [Google Scholar] [CrossRef]
- Balan, R.; Casazza, P.G.; Heil, C. Deficits and Excesses of Frames. Adv. Comput. Math. 2003, 18, 93–116. [Google Scholar] [CrossRef]
- Han, D.; Lv, F.; Sun, W. Recovery of signals from unordered partial frame coefficients. Appl. Comput. Harmon. Anal. 2016, 42, 38–58. [Google Scholar] [CrossRef]
- Han, D.; Lv, F.; Sun, W. Stable recovery of signals from frame coefficients with erasures at unknown locations. Sci. China Math. 2018, 61, 151–172. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).