*Electronics* **2019**, *8*(5), 550; https://doi.org/10.3390/electronics8050550

Article

A Robust Semi-Blind Receiver for Joint Symbol and Channel Parameter Estimation in Multiple-Antenna Systems

^{1} School of Information and Communication Engineering, Communication University of China, Beijing 100024, China

^{2} Power Dispatching Control Center, Guangxi Power Grid, Nanning 530023, China

^{*} Author to whom correspondence should be addressed.

Received: 9 April 2019 / Accepted: 8 May 2019 / Published: 16 May 2019

## Abstract

For multiple-antenna systems, techniques for joint symbol and channel parameter estimation have been developed in recent works. However, existing techniques suffer from several problems, such as performance degradation and the large cost of acquiring prior information. In this paper, a tensor space-time coding scheme for multiple-antenna systems is considered. This scheme allows spreading, multiplexing, and allocating the information symbols associated with multiple transmitted data streams. We show that the received signal forms a third-order tensor satisfying a Tucker-2 model, and we develop a robust semi-blind receiver based on an optimized Levenberg–Marquardt (LM) algorithm. Under the assumption that the instantaneous channel state information (CSI) is unknown at the receiving end, the proposed semi-blind receiver jointly and efficiently estimates the information symbols and channel parameters. The proposed receiver achieves better estimation performance than existing semi-blind receivers and still performs well when the channel becomes strongly correlated. Moreover, the proposed semi-blind receiver can be extended to multi-user massive multiple-input multiple-output (MIMO) systems for joint symbol and channel estimation. Computer simulation results demonstrate the effectiveness of the proposed receiver.

Keywords: multiple-antenna systems; third-order tensor; Tucker-2 model; semi-blind receiver; optimized LM algorithm

## 1. Introduction

Multiple-antenna techniques are well known to provide spatial diversity and multiplexing gains [1,2,3]. Over the last few decades, the benefits of multiple-antenna communications have been verified in both theory and practice. Meanwhile, tensor-based signaling approaches that exploit several signal dimensions, such as time, space, and code, are promising technologies for improving the information transmission rate and enhancing communication reliability [4,5,6]. Against this background, the problem of joint symbol and channel estimation has been addressed using tensor-based signaling approaches, and a number of semi-blind or blind receivers have been proposed for multiple-input multiple-output (MIMO) systems.

A parallel factor (PARAFAC) [7] based receiver is proposed in [8] using the Khatri–Rao space-time (KRST) coding scheme, which achieves a flexible tradeoff between error performance and transmission efficiency. In [9], the authors extend the KRST coding scheme with linear constellation precoding and develop several semi-blind receivers. These semi-blind receivers allow joint symbol and channel estimation without requiring pilot sequences for instantaneous channel state information (CSI) acquisition. In [10], the authors develop a new tensor-based receiver for channel estimation in MIMO relay systems using PARAFAC analysis. A low-complexity PARAFAC-based channel estimation scheme for non-regenerative MIMO relay systems is developed in [11]. In [12], a novel semi-blind receiver based on a multiple KRST coding scheme is derived for joint symbol and channel estimation. More recently, a nested PARAFAC-based receiver for cooperative MIMO communications is proposed in [13], where three-step and double two-step alternating least squares (ALS) algorithms fit the nested PARAFAC model to estimate the system parameters. For millimeter-wave (mmWave) massive MIMO systems, a PARAFAC decomposition-based algorithm is developed in [14] to jointly estimate the channel parameters of multiple users. In [15], the algorithm of [14] is extended to mmWave MIMO orthogonal frequency division multiplexing (MIMO-OFDM) systems for channel estimation, and Cramér–Rao bound (CRB) results for the channel parameters are also derived. To address channel estimation in the presence of pilot contamination in multi-cell massive MIMO systems, a new PARAFAC-based approach is proposed in [16] to jointly estimate directions of arrival, fading coefficients, and delays.
Although these works [8,9,10,11,12,13,14,15,16] consider different design approaches, they share a common feature: all use the PARAFAC model, which requires knowledge of the first column or row of one loading matrix to eliminate the scaling ambiguity. Furthermore, the ALS algorithm used in these receivers exhibits convergence problems when ill-conditioned factor matrices exist [17].

In contrast to the ALS algorithm, the Levenberg–Marquardt (LM) algorithm updates all the parameters to be estimated simultaneously. The LM algorithm has been successfully used to fit several tensor models; it adapts to collinearity problems and provides quadratic convergence [18,19,20]. An LM algorithm was first proposed for fitting the PARAFAC model in [18]. In [19], the authors apply an LM algorithm to the decomposition of the block component model (BCM) in the uplink of a wideband direct-sequence code-division multiple access (DS-CDMA) system. Recently, an LM algorithm was developed in [20] to jointly estimate the information symbol and channel matrices of a generalized PARATUCK2 model. As an iterative method, however, the LM algorithm is sensitive to initialization, so optimizing the initial value is important for improving its performance.

In [21], a tensor-based space-time coding scheme using the PARATUCK2 model is developed. For the PARATUCK2 model, the number of channel uses can differ from one transmitted data stream to another. In [22], a generalized PARATUCK2 model is proposed by exploiting a tensor space-time (TST) coding. Recently, a Kronecker product least squares (KPLS) receiver was proposed in [23] to estimate the symbol and channel matrices, and more recently it was shown in [24] that the KPLS receiver can be extended to all tensor-based systems. Although the KPLS receiver is a non-iterative, low-complexity solution, it requires the related core tensor unfolding to be right-invertible, which is a relatively harsh condition in signal design.

Inspired by [21,22], we consider a simple tensor space-time coding scheme for multiple-antenna systems, along with an efficient receiver. In the TST coding scheme of [22], the allocation factor and the space-time code factor are independent, whereas in our coding scheme the allocation factor is combined with a three-dimensional space-time code factor. Thanks to the special structure of the proposed coding scheme, the received signal can be constructed as a Tucker-2 model [25,26], which enjoys a uniqueness property under suitable conditions. A robust semi-blind receiver based on an optimized LM algorithm is then presented for joint channel and symbol estimation. Uniqueness and identifiability issues for the constructed Tucker-2 model are also discussed in this paper. Compared with existing receivers, the proposed receiver achieves better estimation performance. Moreover, the proposed semi-blind receiver can be extended to multi-user massive MIMO systems. For a low-rank channel, the proposed receiver still performs well for joint symbol and channel estimation, even with shorter code and symbol lengths and a larger number of data streams.

The organization of this paper is as follows. Section 2 presents a brief overview of the Tucker model. In Section 3, the system model is presented and the associated tensor signal model is formulated. Section 4 briefly reviews the receiver with the ALS algorithm and describes the proposed semi-blind receiver based on the optimized LM algorithm. Section 5 extends the proposed semi-blind receiver to multi-user massive MIMO systems for joint symbol and channel estimation. In Section 6, some simulation results are shown to demonstrate the performance of our semi-blind receiver. Conclusions are drawn in Section 7.

Notation: Scalars, vectors, matrices, and tensors are denoted by lower-case letters $\left(a,b,\cdots\right)$, boldface lower-case letters $\left(\mathbf{a},\mathbf{b},\cdots\right)$, boldface capitals $\left(\mathbf{A},\mathbf{B},\cdots\right)$, and underlined boldface capitals $\left(\underline{\mathbf{A}},\underline{\mathbf{B}},\cdots\right)$, respectively. ${\mathbf{A}}^{T}$, ${\mathbf{A}}^{H}$, ${\mathbf{A}}^{-1}$, and ${\mathbf{A}}^{\dagger}$ represent the transpose, conjugate transpose, inverse, and Moore–Penrose pseudo-inverse of the matrix $\mathbf{A}$, respectively. ${\|\mathbf{A}\|}_{F}$ denotes the Frobenius norm of $\mathbf{A}$. ${\mathbf{I}}_{M}$ denotes the $M\times M$ identity matrix. The operator $vec\left(\cdot\right)$ stacks the columns of its matrix argument into a vector, while $unvec\left(\cdot\right)$ represents the inverse vectorization operation. The Kronecker matrix product is denoted by $\otimes$. The term ${D}_{i}\left(\mathbf{A}\right)$ denotes the diagonal matrix formed from the i-th row of $\mathbf{A}$.
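As a concrete reference, the notation operators above can be sketched in a few lines of numpy. This is an illustrative sketch with assumed helper names (`vec`, `unvec`, `D_i` are not from the paper's code), including the standard identity $vec(\mathbf{B}\mathbf{X}\mathbf{A}^T)=(\mathbf{A}\otimes\mathbf{B})\,vec(\mathbf{X})$ used implicitly by the unfoldings later on:

```python
import numpy as np

# Illustrative definitions of the paper's operators (assumed helper names).
def vec(A):
    """Stack the columns of A into a single vector (column-major)."""
    return A.reshape(-1, order="F")

def unvec(x, shape):
    """Inverse of vec: rebuild a matrix of the given shape column-wise."""
    return x.reshape(shape, order="F")

def D_i(A, i):
    """Diagonal matrix built from the i-th row of A (1-indexed as in the text)."""
    return np.diag(A[i - 1, :])

A = np.array([[1, 2], [3, 4]])
v = vec(A)                                   # columns stacked: [1, 3, 2, 4]
assert np.array_equal(unvec(v, A.shape), A)
assert np.array_equal(D_i(A, 2), np.diag([3, 4]))

# Kronecker product is np.kron; vec(B X A^T) == (A kron B) vec(X)
B = np.array([[0, 1], [1, 0]])
X = np.array([[5, 6], [7, 8]])
assert np.allclose(vec(B @ X @ A.T), np.kron(A, B) @ vec(X))
```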

## 2. Tucker Model

This section first presents a brief overview of the Tucker model, and then focuses on the Tucker-2 model used in this work. For an Nth-order tensor $\underline{\mathbf{T}}\in {\mathbb{C}}^{{I}_{1}\times \cdots \times {I}_{N}}$, a Tucker-N model, or simply Tucker model, is defined in scalar form as [26]:

$$t_{i_1,\dots,i_N}=\sum_{r_1=1}^{R_1}\cdots\sum_{r_N=1}^{R_N} g_{r_1,\dots,r_N}\left(a_{i_1,r_1}^{(1)}\times\cdots\times a_{i_N,r_N}^{(N)}\right)=\sum_{r_1=1}^{R_1}\cdots\sum_{r_N=1}^{R_N} g_{r_1,\dots,r_N}\prod_{n=1}^{N} a_{i_n,r_n}^{(n)}$$

where $i_n=1,\dots,I_n$ for $n=1,\dots,N$, and $a_{i_n,r_n}^{(n)}$ and $g_{r_1,\dots,r_N}$ stand for typical elements of the matrix factor $\mathbf{A}^{(n)}\in\mathbb{C}^{I_n\times R_n}$ and the core tensor $\underline{\mathbf{G}}\in\mathbb{C}^{R_1\times\cdots\times R_N}$, respectively. Using the mode-n product representation, the model (1) can be written as:

$$\underline{\mathbf{T}}=\underline{\mathbf{G}}\times_1\mathbf{A}^{(1)}\times_2\mathbf{A}^{(2)}\times_3\cdots\times_N\mathbf{A}^{(N)}=\underline{\mathbf{G}}\times_{n=1}^{N}\mathbf{A}^{(n)}$$

where $\underline{\mathbf{G}}\times_n\mathbf{A}^{(n)}$ denotes the mode-n product of $\underline{\mathbf{G}}$ and $\mathbf{A}^{(n)}$ along the n-th mode, which gives a tensor $\underline{\mathbf{W}}$ of dimensions $R_1\times\cdots\times R_{n-1}\times I_n\times R_{n+1}\times\cdots\times R_N$ such that:

$$w_{r_1,\dots,r_{n-1},i_n,r_{n+1},\dots,r_N}=\sum_{r_n=1}^{R_n} a_{i_n,r_n}^{(n)}\, g_{r_1,\dots,r_{n-1},r_n,r_{n+1},\dots,r_N}$$

where $w_{r_1,\dots,r_{n-1},i_n,r_{n+1},\dots,r_N}$ is a typical element of the tensor $\underline{\mathbf{W}}$. It is known that the Tucker model is not essentially unique [26], which restricts its application: its matrix factors can only be determined up to nonsingular transformations characterized by nonsingular matrices. However, some low-order Tucker models with special structures are unique up to permutation and/or scaling ambiguities.

Assuming $N=3$ and $\mathbf{A}^{(3)}=\mathbf{I}_{I_3}$ for the third-order tensor $\underline{\mathbf{T}}\in\mathbb{C}^{I_1\times I_2\times I_3}$, we have:

$$t_{i_1,i_2,i_3}=\sum_{r_1=1}^{R_1}\sum_{r_2=1}^{R_2}\sum_{r_3=1}^{R_3} g_{r_1,r_2,r_3}\prod_{n=1}^{3} a_{i_n,r_n}^{(n)}=\sum_{r_1=1}^{R_1}\sum_{r_2=1}^{R_2} g_{r_1,r_2,i_3}\, a_{i_1,r_1}^{(1)} a_{i_2,r_2}^{(2)}.$$

This model is called the Tucker-2 model, or Tucker-(2, 3) model, and is widely applied in data analysis and parameter estimation [4]. $\mathbf{A}^{(1)}$ and $\mathbf{A}^{(2)}$ are the two loading matrices, and $\underline{\mathbf{G}}$ is the core tensor. In the same way, such a model can be written in terms of the mode-n product as:

$$\underline{\mathbf{T}}=\underline{\mathbf{G}}\times_1\mathbf{A}^{(1)}\times_2\mathbf{A}^{(2)}\times_3\mathbf{I}_{I_3}=\underline{\mathbf{G}}\times_{n=1}^{2}\mathbf{A}^{(n)}.$$
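The mode-n product and the resulting Tucker-2 model can be checked numerically. The sketch below (illustrative dimensions, assumed helper `mode_n_product`) builds $\underline{\mathbf{T}}=\underline{\mathbf{G}}\times_1\mathbf{A}^{(1)}\times_2\mathbf{A}^{(2)}$ and verifies one entry against the scalar definition:

```python
import numpy as np

def mode_n_product(G, A, n):
    """Multiply tensor G by matrix A along mode n (0-indexed here)."""
    # Contract A's columns with G's n-th mode, then move the new axis back to n.
    return np.moveaxis(np.tensordot(A, G, axes=(1, n)), 0, n)

# Illustrative sizes: (I1, I2, I3) tensor from an (R1, R2, I3) core.
I1, I2, R1, R2, I3 = 4, 5, 2, 3, 6
rng = np.random.default_rng(0)
G = rng.standard_normal((R1, R2, I3))
A1 = rng.standard_normal((I1, R1))
A2 = rng.standard_normal((I2, R2))

# Tucker-2: identity on the third mode, so only two mode-n products.
T = mode_n_product(mode_n_product(G, A1, 0), A2, 1)

# Check one entry against t_{i1,i2,i3} = sum_{r1,r2} g_{r1,r2,i3} a1 a2.
i1, i2, i3 = 1, 2, 3
scalar = sum(G[r1, r2, i3] * A1[i1, r1] * A2[i2, r2]
             for r1 in range(R1) for r2 in range(R2))
assert np.isclose(T[i1, i2, i3], scalar)
```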

## 3. System Model

Consider a multiple-antenna system with ${M}_{S}$ transmit antennas and ${M}_{D}$ receive antennas as shown in Figure 1. ${h}_{{m}_{D},{m}_{S}}$ represents the channel coefficient between the ${m}_{S}$-th transmit antenna and the ${m}_{D}$-th receive antenna (${m}_{S}=1,\dots ,{M}_{S}$, ${m}_{D}=1,\dots ,{M}_{D}$). ${s}_{n,r}$ represents the n-th symbol of the r-th data stream ($n=1,\dots ,N$, $r=1,\dots ,R$), with each data stream being formed of N information symbols. Each symbol ${s}_{n,r}$ is coded by a three-dimensional space-time code ${b}_{{m}_{S},r,p}$ ($p=1,\dots ,P$), whose dimensions are the numbers of transmit antennas, data streams, and chips, respectively. We then define the antenna-to-slot allocation factor ${q}_{p,{m}_{S}}$, which is 0 or 1. Both the transmitter and the receiver know these factors ${b}_{{m}_{S},r,p}$ and ${q}_{p,{m}_{S}}$.

The signal transmitted from the $m_S$-th transmit antenna, during the n-th symbol period of the p-th chip, is given by:

$$x_{m_S,n,p}=\sum_{r=1}^{R} q_{p,m_S}\, b_{m_S,r,p}\, s_{n,r}$$

where $s_{n,r}$ and $q_{p,m_S}$ are the $(n,r)$-th and $(p,m_S)$-th elements of the signal matrix $\mathbf{S}\in\mathbb{C}^{N\times R}$ and the antenna-to-slot allocation matrix $\mathbf{Q}\in\mathbb{C}^{P\times M_S}$, respectively, and $x_{m_S,n,p}$ and $b_{m_S,r,p}$ are typical elements of the transmitted signal tensor $\underline{\mathbf{X}}\in\mathbb{C}^{M_S\times N\times P}$ and the coding tensor $\underline{\mathbf{B}}\in\mathbb{C}^{M_S\times R\times P}$, respectively. The elements of $\underline{\mathbf{B}}$ are chosen as $e^{\sqrt{-1}\,\varsigma/2\pi}$, where $\varsigma$ is drawn from uniformly distributed pseudorandom numbers. In our tensor coding scheme, the number of transmitted data streams is not restricted to equal the number of transmit antennas, and the data streams can be allocated to an arbitrary set of transmit antennas. Without the stream-to-slot allocation, the coding scheme in [21] can be regarded as a special case of our tensor coding scheme with a fixed two-dimensional space-time code.

Assuming Rayleigh flat-fading channels, the discrete-time baseband signal at the $m_D$-th receive antenna can be written as:

$$y_{m_D,n,p}=\sum_{m_S=1}^{M_S} h_{m_D,m_S}\, x_{m_S,n,p}+v_{m_D,n,p}=\sum_{m_S=1}^{M_S}\sum_{r=1}^{R} h_{m_D,m_S}\, q_{p,m_S}\, b_{m_S,r,p}\, s_{n,r}+v_{m_D,n,p}$$

where $h_{m_D,m_S}$ is the $(m_D,m_S)$-th element of the channel matrix $\mathbf{H}\in\mathbb{C}^{M_D\times M_S}$, and $y_{m_D,n,p}$ and $v_{m_D,n,p}$ are typical elements of the received signal tensor $\underline{\mathbf{Y}}\in\mathbb{C}^{M_D\times N\times P}$ and the noise tensor $\underline{\mathbf{V}}\in\mathbb{C}^{M_D\times N\times P}$, respectively.
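The transmit and receive equations can be simulated directly from their scalar forms. The following noiseless sketch uses illustrative dimensions and an assumed random allocation matrix, and confirms that routing the signal through the compound tensor $\underline{\mathbf{C}}$ gives the same received tensor:

```python
import numpy as np

# Illustrative system sizes (not from the paper's simulations).
MD, MS, N, R, P = 4, 2, 8, 3, 5
rng = np.random.default_rng(1)

H = (rng.standard_normal((MD, MS)) + 1j * rng.standard_normal((MD, MS))) / np.sqrt(2)
S = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=(N, R))  # QPSK-like
B = np.exp(1j * 2 * np.pi * rng.random((MS, R, P)))   # unit-modulus code tensor
Q = rng.integers(0, 2, size=(P, MS))
Q[:, 0] = 1                                           # avoid an all-zero column

# Transmitted tensor x_{mS,n,p} = sum_r q_{p,mS} b_{mS,r,p} s_{n,r}
X = np.einsum('pm,mrp,nr->mnp', Q, B, S)
# Received tensor y_{mD,n,p} = sum_{mS} h_{mD,mS} x_{mS,n,p} (noiseless)
Y = np.einsum('dm,mnp->dnp', H, X)

# Same signal via the compound tensor c_{mS,r,p} = q_{p,mS} b_{mS,r,p}
C = np.einsum('pm,mrp->mrp', Q, B)
Y_tucker = np.einsum('dm,mrp,nr->dnp', H, C, S)
assert np.allclose(Y, Y_tucker)
```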

#### 3.1. Constructed Tucker-2 Model

Let us define $c_{m_S,r,p}=q_{p,m_S}\, b_{m_S,r,p}$, where $c_{m_S,r,p}$ is the typical element of the compound tensor $\underline{\mathbf{C}}\in\mathbb{C}^{M_S\times R\times P}$. Hence, Equation (7) can be written as:

$$\begin{array}{c}\hfill {y}_{{m}_{D},n,p}=\sum _{{m}_{S}=1}^{{M}_{S}}\sum _{r=1}^{R}{h}_{{m}_{D},{m}_{S}}{c}_{{m}_{S},r,p}{s}_{n,r}+{v}_{{m}_{D},n,p}.\end{array}$$

By comparing Equation (4) with Equation (8), the received signal tensor $\underline{\mathbf{Y}}\in {\mathbb{C}}^{{M}_{D}\times N\times P}$ of noiseless signals satisfies a Tucker-2 model, with the following correspondences:

$$\begin{array}{c}\hfill \left(\underline{\mathbf{G}},\phantom{\rule{0.277778em}{0ex}}{\mathbf{A}}^{\left(1\right)},\phantom{\rule{0.277778em}{0ex}}{\mathbf{A}}^{\left(2\right)}\right)\iff \left(\underline{\mathbf{C}},\phantom{\rule{0.277778em}{0ex}}\mathbf{H},\phantom{\rule{0.277778em}{0ex}}\mathbf{S}\right)\end{array}$$

$$\begin{array}{c}\hfill \left({I}_{1},{I}_{2},{R}_{1},{R}_{2},{I}_{3}\right)\iff \left({M}_{D},N,{M}_{S},R,P\right).\end{array}$$

Using the mode-n product representation, the model (8) can be written as:

$$\underline{\mathbf{Y}}=\underline{\mathbf{C}}\times_1\mathbf{H}\times_2\mathbf{S}+\underline{\mathbf{V}}$$

where $\mathbf{S}$ and $\mathbf{H}$ represent the two loading matrices, and $\underline{\mathbf{C}}$ is the core tensor.

Let us define $\mathbf{Y}_{\cdot\cdot p}\in\mathbb{C}^{M_D\times N}$, $\mathbf{B}_{\cdot\cdot p}\in\mathbb{C}^{M_S\times R}$, $\mathbf{C}_{\cdot\cdot p}\in\mathbb{C}^{M_S\times R}$, and $\mathbf{V}_{\cdot\cdot p}\in\mathbb{C}^{M_D\times N}$ as the p-th matrix slices of $\underline{\mathbf{Y}}\in\mathbb{C}^{M_D\times N\times P}$, $\underline{\mathbf{B}}\in\mathbb{C}^{M_S\times R\times P}$, $\underline{\mathbf{C}}\in\mathbb{C}^{M_S\times R\times P}$, and $\underline{\mathbf{V}}\in\mathbb{C}^{M_D\times N\times P}$, respectively. We have $\mathbf{C}_{\cdot\cdot p}=D_p(\mathbf{Q})\,\mathbf{B}_{\cdot\cdot p}$. By defining $\mathbf{Y}_1=[\mathbf{Y}_{\cdot\cdot 1}^T,\dots,\mathbf{Y}_{\cdot\cdot P}^T]^T\in\mathbb{C}^{PM_D\times N}$, $\mathbf{Y}_2=[\mathbf{Y}_{\cdot\cdot 1},\dots,\mathbf{Y}_{\cdot\cdot P}]^T\in\mathbb{C}^{PN\times M_D}$, $\mathbf{Y}_3=[vec(\mathbf{Y}_{\cdot\cdot 1}),\dots,vec(\mathbf{Y}_{\cdot\cdot P})]\in\mathbb{C}^{M_D N\times P}$, and $\mathbf{Y}_4=[vec(\mathbf{Y}_{\cdot\cdot 1}^T),\dots,vec(\mathbf{Y}_{\cdot\cdot P}^T)]\in\mathbb{C}^{N M_D\times P}$, we can obtain four compact forms of the Tucker-2 model (11):

$$\mathbf{Y}_1=\left(\mathbf{I}_P\otimes\mathbf{H}\right)\mathbf{F}_1\,\mathbf{S}^T+\mathbf{V}_1$$

$$\mathbf{Y}_2=\left(\mathbf{I}_P\otimes\mathbf{S}\right)\mathbf{F}_2\,\mathbf{H}^T+\mathbf{V}_2$$

$$\mathbf{Y}_3=\left(\mathbf{S}\otimes\mathbf{H}\right)\mathbf{F}_3+\mathbf{V}_3$$

$$\mathbf{Y}_4=\left(\mathbf{H}\otimes\mathbf{S}\right)\mathbf{F}_4+\mathbf{V}_4$$

with:

$$\begin{array}{l}\mathbf{F}_1=[\mathbf{B}_{\cdot\cdot 1}^T D_1(\mathbf{Q}),\dots,\mathbf{B}_{\cdot\cdot P}^T D_P(\mathbf{Q})]^T\\ \mathbf{F}_2=[D_1(\mathbf{Q})\,\mathbf{B}_{\cdot\cdot 1},\dots,D_P(\mathbf{Q})\,\mathbf{B}_{\cdot\cdot P}]^T\\ \mathbf{F}_3=[vec(D_1(\mathbf{Q})\,\mathbf{B}_{\cdot\cdot 1}),\dots,vec(D_P(\mathbf{Q})\,\mathbf{B}_{\cdot\cdot P})]\\ \mathbf{F}_4=[vec(\mathbf{B}_{\cdot\cdot 1}^T D_1(\mathbf{Q})),\dots,vec(\mathbf{B}_{\cdot\cdot P}^T D_P(\mathbf{Q}))]\end{array}$$

and:

$$\begin{array}{l}\mathbf{V}_1=[\mathbf{V}_{\cdot\cdot 1}^T,\dots,\mathbf{V}_{\cdot\cdot P}^T]^T\\ \mathbf{V}_2=[\mathbf{V}_{\cdot\cdot 1},\dots,\mathbf{V}_{\cdot\cdot P}]^T\\ \mathbf{V}_3=[vec(\mathbf{V}_{\cdot\cdot 1}),\dots,vec(\mathbf{V}_{\cdot\cdot P})]\\ \mathbf{V}_4=[vec(\mathbf{V}_{\cdot\cdot 1}^T),\dots,vec(\mathbf{V}_{\cdot\cdot P}^T)].\end{array}$$
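The unfoldings follow from $vec(\mathbf{A}\mathbf{X}\mathbf{B}^T)=(\mathbf{B}\otimes\mathbf{A})\,vec(\mathbf{X})$ applied slice by slice. The sketch below (noiseless, illustrative sizes) numerically verifies the third unfolding, $\mathbf{Y}_3=(\mathbf{S}\otimes\mathbf{H})\mathbf{F}_3$:

```python
import numpy as np

def vec(A):
    return A.reshape(-1, order="F")   # column-major stacking

# Illustrative sizes; Q and B are assumed random draws.
MD, MS, N, R, P = 4, 2, 6, 3, 5
rng = np.random.default_rng(2)
H = rng.standard_normal((MD, MS))
S = rng.standard_normal((N, R))
B = np.exp(1j * 2 * np.pi * rng.random((MS, R, P)))
Q = rng.integers(0, 2, size=(P, MS))
Q[:, 0] = 1

# p-th slices: C..p = D_p(Q) B..p and Y..p = H C..p S^T (noiseless)
C = [np.diag(Q[p]) @ B[:, :, p] for p in range(P)]
Y_slices = [H @ C[p] @ S.T for p in range(P)]

Y3 = np.column_stack([vec(Y_slices[p]) for p in range(P)])  # MD*N x P
F3 = np.column_stack([vec(C[p]) for p in range(P)])         # MS*R x P
assert np.allclose(Y3, np.kron(S, H) @ F3)
```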

In this paper, the following two assumptions are satisfied.

(a) The antenna-to-slot allocation matrix $\mathbf{Q}$ does not have an all-zero column. This means that at least one transmit antenna is used during each time slot;

(b) Both the transmitter and receiver know the allocation matrix $\mathbf{Q}$ and the coding tensor $\underline{\mathbf{B}}$.

#### 3.2. Uniqueness Issue

Since the loading matrices are only determined up to nonsingular transformations, the generalized Tucker-2 model is not essentially unique. This can be verified using the property of the mode-n product:

$$\underline{\mathbf{C}}\times_1\mathbf{H}\times_2\mathbf{S}=\underline{\mathbf{C}}\times_1\left(\mathbf{H}\Theta_{\mathbf{H}}\Theta_{\mathbf{H}}^{-1}\right)\times_2\left(\mathbf{S}\Theta_{\mathbf{S}}\Theta_{\mathbf{S}}^{-1}\right)=\underline{\mathbf{C}}\times_1\Theta_{\mathbf{H}}^{-1}\times_2\Theta_{\mathbf{S}}^{-1}\times_1\left(\mathbf{H}\Theta_{\mathbf{H}}\right)\times_2\left(\mathbf{S}\Theta_{\mathbf{S}}\right)$$

where the noise tensor $\underline{\mathbf{V}}$ has been omitted for notational convenience, and $\Theta_{\mathbf{S}}\in\mathbb{C}^{R\times R}$ and $\Theta_{\mathbf{H}}\in\mathbb{C}^{M_S\times M_S}$ are nonsingular matrices.

Applying the uniqueness theorem of the Tucker model in [25], if the core tensor $\underline{\mathbf{C}}$ is known, then $\mathbf{S}$ and $\mathbf{H}$ are unique up to a scaling ambiguity, i.e.,

$$\left(\mathbf{S},\,\mathbf{H}\right)=\left(\overline{\mathbf{S}}\,\Theta_{\mathbf{S}}^{-1},\,\overline{\mathbf{H}}\,\Theta_{\mathbf{H}}^{-1}\right)$$

where $\overline{\mathbf{S}}$ and $\overline{\mathbf{H}}$ are alternative solutions for $\mathbf{S}$ and $\mathbf{H}$, respectively, with $\Theta_{\mathbf{S}}=\beta\,\mathbf{I}_R$ and $\Theta_{\mathbf{H}}=\left(1/\beta\right)\mathbf{I}_{M_S}$. Consequently, a priori knowledge of only one symbol is enough to resolve the scaling ambiguity factor $\beta$. Compared with the PARAFAC model used in existing receivers, the constructed Tucker-2 model needs a priori knowledge of only one symbol to eliminate the scaling ambiguity; therefore, our scheme has higher spectral efficiency.
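The scaling ambiguity is easy to confirm numerically: scaling $\mathbf{S}$ by $\beta$ and $\mathbf{H}$ by $1/\beta$ leaves the noiseless received tensor unchanged, so one known symbol pins down $\beta$. A minimal sketch with illustrative sizes:

```python
import numpy as np

# Illustrative sizes and random model parameters (assumptions, not the paper's).
MD, MS, N, R, P = 4, 2, 6, 3, 5
rng = np.random.default_rng(3)
H = rng.standard_normal((MD, MS))
S = rng.standard_normal((N, R))
C = rng.standard_normal((MS, R, P))

# Y = C x1 H x2 S written elementwise via einsum (noiseless)
Y = np.einsum('dm,mrp,nr->dnp', H, C, S)

beta = 2.5
# (S, H) -> (beta*S, H/beta) yields the same received tensor
Y_scaled = np.einsum('dm,mrp,nr->dnp', H / beta, C, beta * S)
assert np.allclose(Y, Y_scaled)
# One known symbol, e.g. s_{1,1}, then recovers beta as s_hat_{1,1} / s_{1,1}.
```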

#### 3.3. Identifiability Conditions

Identifiability of the constructed Tucker-2 model concerns the ability to recover the parameters to be estimated. In this paper, it is directly linked to the estimation of the signal matrix $\mathbf{S}$ and the channel matrix $\mathbf{H}$ from the received signal tensor $\underline{\mathbf{Y}}$. Conditions for parameter identifiability are given in the following theorem.

**Theorem 1** (Sufficient Conditions)**.** Assume that $\mathbf{H}$ has independent and identically distributed (i.i.d.) entries and that $\mathbf{S}$ has full column rank, and let $P_1$ denote the number of nonzero elements in $\mathbf{Q}$. Then sufficient conditions for identifiability of the signal matrix $\mathbf{S}$ and the channel matrix $\mathbf{H}$ are:

$$P_1\geqslant R\quad and\quad \min\left(M_D,\,R\right)\geqslant M_S.$$

**Proof of Theorem 1.** From Equations (12) and (13), necessary and sufficient conditions for identifiability of $\mathbf{S}$ and $\mathbf{H}$ require that $\left(\mathbf{I}_P\otimes\mathbf{H}\right)\mathbf{F}_1$ and $\left(\mathbf{I}_P\otimes\mathbf{S}\right)\mathbf{F}_2$ have full column rank, i.e.,

$$Rank\left(\left({\mathbf{I}}_{P}\otimes \mathbf{H}\right){\mathbf{F}}_{1}\right)=R$$

$$Rank\left(\left({\mathbf{I}}_{P}\otimes \mathbf{S}\right){\mathbf{F}}_{2}\right)={M}_{S}.$$

Under the assumption in Theorem 1 that $\mathbf{H}$ has i.i.d. entries, $M_D\geqslant M_S$ ensures that $\mathbf{H}$ has full column rank. Since $\mathbf{I}_P$ and $\mathbf{H}$ have full column rank, $\mathbf{I}_P\otimes\mathbf{H}$ has full column rank, i.e., $Rank\left(\mathbf{I}_P\otimes\mathbf{H}\right)=PM_S$. Therefore, Equation (21) is satisfied if $\mathbf{F}_1$ has full column rank. We rewrite $\mathbf{F}_1$ from Equation (15) as:

$$\mathbf{F}_1=diag\left(vec\left(\mathbf{Q}^T\right)\right)\mathbf{B}_1$$

where $\mathbf{B}_1=[\mathbf{B}_{\cdot\cdot 1}^T,\dots,\mathbf{B}_{\cdot\cdot P}^T]^T$. Since $\mathbf{B}_1$ is a specially constructed matrix with different generators, any two rows (or columns) of $\mathbf{B}_1$ are linearly independent. If the number of nonzero rows of $\mathbf{F}_1$ is greater than or equal to its number of columns (i.e., $P_1\geqslant R$), then $\mathbf{F}_1$ has full column rank. Thus, $M_D\geqslant M_S$ and $P_1\geqslant R$ ensure that condition (21) is satisfied.

Since $\mathbf{S}$ has full column rank, $Rank\left(\mathbf{I}_P\otimes\mathbf{S}\right)=PR$. Thus, condition (22) is satisfied if $\mathbf{F}_2$ has full column rank. We rewrite $\mathbf{F}_2$ from Equation (16) as:

$$\mathbf{F}_2=\mathbf{B}_2^T\,\mathbf{Q}_P$$

where $\mathbf{B}_2=\mathrm{blk}[\mathbf{B}_{\cdot\cdot 1},\dots,\mathbf{B}_{\cdot\cdot P}]$ and $\mathbf{Q}_P=[D_1(\mathbf{Q}),\dots,D_P(\mathbf{Q})]^T$. Recall that $\mathbf{Q}$ does not have an all-zero column, which means that $\mathbf{Q}_P$ has full column rank. Hence $\mathbf{F}_2$ has full column rank if $\mathbf{B}_2$ has full row rank. Since $\mathbf{B}_2$ has a block-diagonal structure and each $\mathbf{B}_{\cdot\cdot p}$ has different generators, $R\geqslant M_S$ ensures that $\mathbf{B}_2$ has full row rank. Therefore, $R\geqslant M_S$ ensures that condition (22) is satisfied. This ends the proof. ☐
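The rank conditions (21) and (22) can be illustrated numerically. The sketch below draws random matrices of illustrative sizes satisfying the hypotheses of Theorem 1 ($P_1\geqslant R$, $\min(M_D,R)\geqslant M_S$) and checks that both compound matrices have full column rank; with random draws this holds generically, not as a proof:

```python
import numpy as np

# Illustrative sizes satisfying Theorem 1's hypotheses (assumptions).
MD, MS, N, R, P = 4, 2, 8, 3, 4
rng = np.random.default_rng(4)
H = rng.standard_normal((MD, MS))
S = rng.standard_normal((N, R))
B = np.exp(1j * 2 * np.pi * rng.random((MS, R, P)))
Q = rng.integers(0, 2, size=(P, MS))
Q[:, 0] = 1                                  # no all-zero column in Q
assert Q.sum() >= R and min(MD, R) >= MS     # P1 >= R and min(MD, R) >= MS

# F1 = [B..p^T D_p(Q)]^T stacked (P*MS x R); F2 = [D_p(Q) B..p]^T stacked (P*R x MS)
F1 = np.vstack([np.diag(Q[p]) @ B[:, :, p] for p in range(P)])
F2 = np.vstack([B[:, :, p].T @ np.diag(Q[p]) for p in range(P)])

M1 = np.kron(np.eye(P), H) @ F1              # should have rank R
M2 = np.kron(np.eye(P), S) @ F2              # should have rank MS
assert np.linalg.matrix_rank(M1) == R
assert np.linalg.matrix_rank(M2) == MS
```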

**Remark 1.** The conditions in Theorem 1 are sufficient but not necessary for parameter identifiability. Sufficient conditions (21) and (22) also apply to the ALS algorithm. In fact, our simulation results show that identifiability of the signal and channel parameters is possible even when $M_S>R$. Necessary conditions for parameter identifiability are based on the dimensions of $\left(\mathbf{I}_P\otimes\mathbf{H}\right)\mathbf{F}_1$ and $\left(\mathbf{I}_P\otimes\mathbf{S}\right)\mathbf{F}_2$. If the channel matrix $\mathbf{H}$ does not have full column or row rank, i.e., $L<\min\left(M_S,M_D\right)$, where L is the rank of $\mathbf{H}$, then the identifiability conditions of Theorem 1 are no longer applicable because of the low-rank property of $\mathbf{H}$. However, identifiability conditions can still be deduced from Equations (21) and (22): necessary and sufficient conditions for identifiability of $\mathbf{S}$ and $\mathbf{H}$ require that $\left(\mathbf{I}_P\otimes\mathbf{H}\right)\mathbf{F}_1$ and $\left(\mathbf{I}_P\otimes\mathbf{S}\right)\mathbf{F}_2$ have full column rank. This case is analyzed further in Section 5.

## 4. Semi-Blind Receiver

The ALS algorithm is a classical solution for fitting tensor models. However, it is well known that the ALS algorithm exhibits convergence problems when collinearity is present in one or more modes [27,28]. The LM algorithm has been successfully used to fit the PARAFAC and PARATUCK2 models; it copes with collinearity and provides quadratic convergence [19,20]. As an iterative algorithm, however, the LM algorithm is sensitive to initialization. Thus, optimizing the initial value is important for improving its performance.

In this section, a novel semi-blind receiver based on the optimized LM algorithm is developed for joint symbol and channel estimation. The basic principle of the optimized LM algorithm is to first solve a least squares Kronecker (LSK) approximation problem [29,30], based on the singular value decomposition (SVD) of a rank-one matrix, to initialize the symbol and channel matrices, and then to update these two matrices simultaneously in each iteration. Finally, a modified singular value projection (SVP)-based algorithm [31,32] is used to further improve the performance of channel estimation.

The proposed optimal initialization method is based on the Kronecker least squares algorithm, which exploits SVD-based rank-one approximations to obtain initial estimates of $\mathbf{S}$ and $\mathbf{H}$ from their Kronecker product.

By post-multiplying Equation (14) with ${\mathbf{F}}_{3}^{\dagger}$, we get $\mathbf{Z}=\left({\widehat{\mathbf{S}}}^{\left(0\right)}\otimes {\widehat{\mathbf{H}}}^{\left(0\right)}\right)={\mathbf{Y}}_{3}{\mathbf{F}}_{3}^{\dagger}\in {\mathbb{C}}^{N{M}_{D}\times R{M}_{S}}$, where ${\widehat{\mathbf{S}}}^{\left(0\right)}$ and ${\widehat{\mathbf{H}}}^{\left(0\right)}$ are initial estimates of $\mathbf{S}$ and $\mathbf{H}$. According to Theorem 2.1 in [29], we have:
where $\Xi =unvec(\Delta )\in {\mathbb{C}}^{{M}_{D}{M}_{S}\times NR}$ is a rank-one matrix, and $\Delta \in {\mathbb{C}}^{N{M}_{D}R{M}_{S}\times 1}$ is given by:

$${\Vert \mathbf{Z}-{\widehat{\mathbf{S}}}^{\left(0\right)}\otimes {\widehat{\mathbf{H}}}^{\left(0\right)}\Vert}_{F}^{2}={\Vert \Xi -vec\left({\widehat{\mathbf{H}}}^{\left(0\right)}\right){\left(vec\left({\widehat{\mathbf{S}}}^{\left(0\right)}\right)\right)}^{T}\Vert}_{F}^{2}$$

$$\Delta =\left[\begin{array}{c}vec\left(\mathbf{Z}(1:{M}_{D},\,1:{M}_{S})\right)\\ \vdots \\ vec\left(\mathbf{Z}((N-1){M}_{D}+1:N{M}_{D},\,1:{M}_{S})\right)\\ \vdots \\ vec\left(\mathbf{Z}(1:{M}_{D},\,(R-1){M}_{S}+1:R{M}_{S})\right)\\ \vdots \\ vec\left(\mathbf{Z}((N-1){M}_{D}+1:N{M}_{D},\,(R-1){M}_{S}+1:R{M}_{S})\right)\end{array}\right]$$

In this way, the Kronecker product matrix $\mathbf{Z}$ is rearranged into a rank-one matrix $\Xi $. By applying the SVD to $\Xi $, the vectors $vec\left({\widehat{\mathbf{S}}}^{\left(0\right)}\right)$ and $vec\left({\widehat{\mathbf{H}}}^{\left(0\right)}\right)$ can be estimated by a rank-one approximation, i.e., by computing the largest singular value of $\Xi $ and the corresponding left and right singular vectors. ${\widehat{\mathbf{S}}}^{\left(0\right)}$ and ${\widehat{\mathbf{H}}}^{\left(0\right)}$ are determined up to a scaling factor, which can be removed by setting ${s}_{1,1}=1$ as in [27,30]. The detailed process is shown below.

By applying SVD to the rank-one matrix $\Xi $, we have:
where $\Sigma \in {\mathbb{C}}^{{M}_{D}{M}_{S}\times NR}$ is a rectangular diagonal matrix containing the singular values of $\Xi $, and $\mathbf{U}\in {\mathbb{C}}^{{M}_{D}{M}_{S}\times {M}_{D}{M}_{S}}$ and $\mathbf{V}\in {\mathbb{C}}^{NR\times NR}$ are unitary matrices. Using the rank-one approximation of $\Xi $, we have:
where ${\sigma}_{1}$ is the largest singular value, and ${\mathbf{U}}_{\cdot 1}$ and ${\mathbf{V}}_{\cdot 1}$ are the corresponding left and right singular vectors. Thus, the vectors $vec\left({\widehat{\mathbf{S}}}^{\left(0\right)}\right)$ and $vec\left({\widehat{\mathbf{H}}}^{\left(0\right)}\right)$ can be estimated as:
where $\alpha $ is a scaling factor; $vec\left({\widehat{\mathbf{S}}}^{\left(0\right)}\right)$ and $vec\left({\widehat{\mathbf{H}}}^{\left(0\right)}\right)$ are determined up to this factor. In practical communication systems, the scaling factor $\alpha $ can be removed by setting ${s}_{1,1}=1$; in this paper, $\alpha $ then equals $\frac{1}{{v}_{1,1}^{*}}$. Note that we can also use Equation (14) to implement the above optimal initialization procedure.

$$\begin{array}{c}\hfill \Xi =\mathbf{U}\Sigma {\mathbf{V}}^{H}\end{array}$$

$$\Xi \approx {\sigma}_{1}{\mathbf{U}}_{\cdot 1}{\mathbf{V}}_{\cdot 1}^{H}$$

$$vec\left({\widehat{\mathbf{S}}}^{\left(0\right)}\right)=\alpha {\mathbf{V}}_{\cdot 1}^{*},\qquad vec\left({\widehat{\mathbf{H}}}^{\left(0\right)}\right)=\frac{{\sigma}_{1}}{\alpha}{\mathbf{U}}_{\cdot 1}$$
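To make the first stage concrete, the following numpy sketch (illustrative; function and variable names are our own, not from the paper) rearranges a noiseless $\mathbf{Z}=\mathbf{S}\otimes \mathbf{H}$ into the rank-one matrix $\Xi $ and recovers the initial estimates from its dominant singular triplet, removing the scaling ambiguity by fixing ${s}_{1,1}=1$:

```python
import numpy as np

def kron_rank1_init(Z, N, R, M_D, M_S):
    """Rearrange Z = S kron H into the rank-one matrix Xi = vec(H) vec(S)^T,
    then recover initial estimates of S and H from the dominant singular
    triplet of Xi (sketch of the first-stage initialization)."""
    Xi = np.empty((M_D * M_S, N * R), dtype=complex)
    for r in range(R):
        for n in range(N):
            # the (n, r) block of Z equals s_{n,r} * H; its column-major
            # vectorization becomes column r*N + n of Xi
            block = Z[n * M_D:(n + 1) * M_D, r * M_S:(r + 1) * M_S]
            Xi[:, r * N + n] = block.reshape(-1, order='F')
    U, sv, Vh = np.linalg.svd(Xi)
    vecS = Vh[0, :]             # proportional to vec(S) (column-major)
    vecH = sv[0] * U[:, 0]      # proportional to vec(H)
    S0 = vecS.reshape(N, R, order='F')
    H0 = vecH.reshape(M_D, M_S, order='F')
    alpha = S0[0, 0]            # scaling removed by fixing s_{1,1} = 1
    return S0 / alpha, H0 * alpha
```

In the noiseless case the product ${\widehat{\mathbf{S}}}^{\left(0\right)}\otimes {\widehat{\mathbf{H}}}^{\left(0\right)}$ reconstructs $\mathbf{Z}$ exactly; with noise, the dominant singular triplet gives the best rank-one fit in the least squares sense.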

Define a parameter vector stacking all the unknowns as:
where ${\mathbf{u}}_{\mathbf{S}}=vec\left({\mathbf{S}}^{T}\right)\in {\mathbb{C}}^{NR\times 1}$, ${\mathbf{u}}_{\mathbf{H}}=vec\left({\mathbf{H}}^{T}\right)\in {\mathbb{C}}^{{M}_{D}{M}_{S}\times 1}$, and $Q=NR+{M}_{D}{M}_{S}$. The cost function to be minimized is given by:
where ${\tilde{y}}_{{m}_{D},n,p}\left(\mathbf{u}\right)$ is the typical element of the tensor $\underline{\tilde{\mathbf{Y}}}\left(\mathbf{u}\right)\in {\mathbb{C}}^{{M}_{D}\times N\times P}$, which denotes the output tensor in the absence of noise; $\mathbf{z}\left(\mathbf{u}\right)=vec\left(\underline{\tilde{\mathbf{Y}}}\left(\mathbf{u}\right)\right)-vec\left(\underline{\mathbf{Y}}\right)=\tilde{\mathbf{y}}\left(\mathbf{u}\right)-\mathbf{y}\in {\mathbb{C}}^{NP{M}_{D}\times 1}$ denotes the vector of residuals, and $L=NP{M}_{D}$.

$$\begin{array}{c}\hfill \mathbf{u}={\left[{\mathbf{u}}_{\mathbf{S}}^{T},\phantom{\rule{0.277778em}{0ex}}{\mathbf{u}}_{\mathbf{H}}^{T}\right]}^{T}\in {\mathbb{C}}^{Q\times 1}\end{array}$$

$$\varphi \left(\mathbf{u}\right)=\frac{1}{2}\sum _{{m}_{D}=1}^{{M}_{D}}\sum _{n=1}^{N}\sum _{p=1}^{P}{\left|{\tilde{y}}_{{m}_{D},n,p}\left(\mathbf{u}\right)-{y}_{{m}_{D},n,p}\right|}^{2}=\frac{1}{2}\sum _{l=1}^{L}{\left|{z}_{l}\left(\mathbf{u}\right)\right|}^{2}=\frac{1}{2}{\mathbf{z}}^{H}\left(\mathbf{u}\right)\mathbf{z}\left(\mathbf{u}\right)$$

Let the $\mathbf{J}\in {\mathbb{C}}^{L\times Q}$ be the Jacobian matrix of $\mathbf{z}\left(\mathbf{u}\right)$ with respect to $\mathbf{u}$, and $\mathbf{g}$ be the gradient of $\varphi \left(\mathbf{u}\right)$ with respect to $\mathbf{u}$. $\mathbf{J}$ and $\mathbf{g}$ are respectively defined by:

$${J}_{l,q}=\partial {z}_{l}\left(\mathbf{u}\right)/\partial {u}_{q}=\partial {\tilde{y}}_{l}\left(\mathbf{u}\right)/\partial {u}_{q}$$

$$\mathbf{g}=\partial \varphi \left(\mathbf{u}\right)/\partial \mathbf{u}={\mathbf{J}}^{H}\left(\mathbf{u}\right)\mathbf{z}\left(\mathbf{u}\right)$$

The optimized LM algorithm consists of optimizing ${\mathbf{u}}^{\left(0\right)}$ and estimating ${\mathbf{u}}^{\left(i+1\right)}$ at the $\left(i+1\right)$-th iteration from ${\mathbf{u}}^{\left(i\right)}$ at the i-th iteration via ${\mathbf{u}}^{\left(i+1\right)}={\mathbf{u}}^{\left(i\right)}+\Delta {\mathbf{u}}^{\left(i\right)}$. The step $\Delta {\mathbf{u}}^{\left(i\right)}\in {\mathbb{C}}^{Q\times 1}$ is obtained by solving the following modified normal equations:
where ${\lambda}^{\left(i\right)}$ is the damping parameter to ensure that $\Delta {\mathbf{u}}^{\left(i\right)}$ is a descent direction. The whole procedure of the optimized LM algorithm used in our semi-blind receiver is listed in Algorithm 1.

$$\left({\mathbf{J}}^{\left(i\right)H}{\mathbf{J}}^{\left(i\right)}+{\lambda}^{\left(i\right)}{\mathbf{I}}_{Q}\right)\Delta {\mathbf{u}}^{\left(i\right)}=-{\mathbf{g}}^{\left(i\right)}$$

Due to the partitioned structure of $\mathbf{u}$, the Jacobian matrix $\mathbf{J}$ can be written as $\mathbf{J}=\left[{\mathbf{J}}_{{\mathbf{u}}_{\mathbf{S}}},\phantom{\rule{0.277778em}{0ex}}{\mathbf{J}}_{{\mathbf{u}}_{\mathbf{H}}}\right]$, where ${\mathbf{J}}_{{\mathbf{u}}_{\mathbf{S}}}\in {\mathbb{C}}^{NP{M}_{D}\times NR}$ and ${\mathbf{J}}_{{\mathbf{u}}_{\mathbf{H}}}\in {\mathbb{C}}^{NP{M}_{D}\times {M}_{D}{M}_{S}}$ are respectively given by:
The permutation matrix $\Pi \in {\mathbb{C}}^{NP{M}_{D}\times {M}_{D}PN}$ is given by:
where ${\mathbf{e}}_{n}^{\left(N\right)}$ and ${\mathbf{e}}_{{m}_{D}}^{\left({M}_{D}\right)}$ are the n-th and ${m}_{D}$-th column vectors of the identity matrices ${\mathbf{I}}_{N}$ and ${\mathbf{I}}_{{M}_{D}}$, respectively.

$$\begin{array}{c}\hfill {\mathbf{J}}_{{\mathbf{u}}_{\mathbf{S}}}={\mathbf{I}}_{N}\otimes \left(\left({\mathbf{I}}_{P}\otimes \mathbf{H}\right){\mathbf{F}}_{1}\right)\end{array}$$

$$\begin{array}{c}\hfill {\mathbf{J}}_{{\mathbf{u}}_{\mathbf{H}}}=\Pi \left({\mathbf{I}}_{{M}_{D}}\otimes \left(\left({\mathbf{I}}_{P}\otimes \mathbf{S}\right){\mathbf{F}}_{2}\right)\right).\end{array}$$

$$\begin{array}{c}\hfill \Pi =\sum _{{\mathrm{m}}_{D}=1}^{{M}_{D}}\sum _{n=1}^{N}{\mathbf{e}}_{n}^{\left(N\right)}{\left({\mathbf{e}}_{{m}_{D}}^{\left({M}_{D}\right)}\right)}^{T}\otimes {\mathbf{I}}_{P}\otimes {\mathbf{e}}_{{m}_{D}}^{\left({M}_{D}\right)}{\left({\mathbf{e}}_{n}^{\left(N\right)}\right)}^{T}\end{array}$$
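As a sanity check, the permutation matrix above can be built directly from its definition. The following numpy sketch (illustrative; the function name is our own) sums the Kronecker terms and can be used to verify that the result is indeed a permutation matrix:

```python
import numpy as np

def permutation_Pi(N, M_D, P):
    """Build Pi = sum_{m_D, n} e_n e_{m_D}^T kron I_P kron e_{m_D} e_n^T,
    of size (N*P*M_D) x (M_D*P*N), directly from its definition."""
    I_N, I_MD, I_P = np.eye(N), np.eye(M_D), np.eye(P)
    Pi = np.zeros((N * P * M_D, M_D * P * N))
    for m in range(M_D):
        for n in range(N):
            E1 = np.outer(I_N[:, n], I_MD[:, m])  # e_n^(N) (e_{m_D}^(M_D))^T
            E2 = np.outer(I_MD[:, m], I_N[:, n])  # e_{m_D}^(M_D) (e_n^(N))^T
            Pi += np.kron(E1, np.kron(I_P, E2))
    return Pi
```

Since each summand places a single block of ones at a distinct position, every row and column of the result contains exactly one nonzero entry.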

**Algorithm 1** The optimized LM algorithm

*First stage:*

- Compute the LS estimate of $\mathbf{Z}$: $\mathbf{Z}={\mathbf{Y}}_{3}{\left({\mathbf{F}}_{3}\right)}^{\dagger}$;
- Rearrange $\mathbf{Z}$ into a rank-one matrix $\Xi $;
- Apply the SVD to $\Xi $: $\Xi =\mathbf{U}\Sigma {\mathbf{V}}^{H}$;
- Calculate the initialization matrices ${\mathbf{S}}^{\left(0\right)}$ and ${\mathbf{H}}^{\left(0\right)}$: ${\mathbf{S}}^{\left(0\right)}=unvec\left({\mathbf{V}}_{\cdot 1}^{*}/{v}_{1,1}^{*}\right)$, ${\mathbf{H}}^{\left(0\right)}=unvec\left({\sigma}_{1}{\mathbf{U}}_{\cdot 1}{v}_{1,1}^{*}\right)$.

*Second stage:*

- Initialization: ${\mathbf{u}}^{\left(0\right)}={\left[{\mathbf{u}}_{{\widehat{\mathbf{S}}}^{\left(0\right)}}^{T},{\mathbf{u}}_{{\widehat{\mathbf{H}}}^{\left(0\right)}}^{T}\right]}^{T}$, ${\lambda}^{\left(0\right)}=\mathrm{max}\left(\mathrm{diag}\left({\mathbf{J}}^{\left(0\right)H}{\mathbf{J}}^{\left(0\right)}\right)\right)$, $\tau =2$; set $\epsilon ={10}^{-5}$ and $i=1$;
- **while** $\left|\varphi \left({\mathbf{u}}^{\left(i\right)}\right)-\varphi \left({\mathbf{u}}^{\left(i-1\right)}\right)\right|/\left|\varphi \left({\mathbf{u}}^{\left(i\right)}\right)\right|\geqslant \epsilon $ **do**
  - Step 1. Compute ${\mathbf{J}}^{\left(i\right)H}{\mathbf{J}}^{\left(i\right)}$ and ${\mathbf{g}}^{\left(i\right)}$;
  - Step 2. Compute $\Delta {\mathbf{u}}^{\left(i\right)}=-{\left({\mathbf{J}}^{\left(i\right)H}{\mathbf{J}}^{\left(i\right)}+{\lambda}^{\left(i\right)}{\mathbf{I}}_{Q}\right)}^{-1}{\mathbf{g}}^{\left(i\right)}$;
  - Step 3. Update ${\mathbf{u}}^{\left(i+1\right)}={\mathbf{u}}^{\left(i\right)}+\Delta {\mathbf{u}}^{\left(i\right)}$;
  - Step 4. Calculate the gain ratio $\alpha =\frac{\varphi \left({\mathbf{u}}^{\left(i+1\right)}\right)-\varphi \left({\mathbf{u}}^{\left(i\right)}\right)}{{\delta}^{\left(i\right)}}$, where ${\delta}^{\left(i\right)}={\left({\mathbf{J}}^{\left(i\right)}\Delta {\mathbf{u}}^{\left(i\right)}\right)}^{H}\mathbf{z}\left({\mathbf{u}}^{\left(i\right)}\right)+\frac{1}{2}{\Vert {\mathbf{J}}^{\left(i\right)}\Delta {\mathbf{u}}^{\left(i\right)}\Vert}_{F}^{2}$;
  - Step 5. Update $\lambda $: if $\alpha \geqslant 0$, ${\mathbf{u}}^{\left(i+1\right)}$ is accepted; set ${\lambda}^{\left(i+1\right)}={\lambda}^{\left(i\right)}\mathrm{max}\left(1-{\left(2\alpha -1\right)}^{3},\,1/3\right)$ and $\tau =2$. Otherwise, ${\mathbf{u}}^{\left(i+1\right)}$ is rejected; set ${\lambda}^{\left(i+1\right)}=\tau {\lambda}^{\left(i\right)}$ and $\tau \leftarrow 2\tau $;
  - Step 6. $i\leftarrow i+1$;
- **end**
- Acquire ${\mathbf{S}}^{\left(\infty \right)}={\left(unvec\left({\mathbf{u}}_{\mathbf{S}}^{\left(\infty \right)}\right)\right)}^{T}$ and ${\mathbf{H}}^{\left(\infty \right)}={\left(unvec\left({\mathbf{u}}_{\mathbf{H}}^{\left(\infty \right)}\right)\right)}^{T}$;
- Compute ${\mathbf{H}}_{new}^{\left(\infty \right)}$: if $L<min\left({M}_{S},{M}_{D}\right)$, ${\mathbf{H}}_{new}^{\left(\infty \right)}=SVP\left({\mathbf{H}}^{\left(\infty \right)}\right)$; otherwise, ${\mathbf{H}}_{new}^{\left(\infty \right)}={\mathbf{H}}^{\left(\infty \right)}$;
- Remove the scaling ambiguity: ${\widehat{\mathbf{S}}}^{\left(final\right)}={\widehat{\mathbf{S}}}^{\left(\infty \right)}/{\widehat{s}}_{1,1}^{\left(\infty \right)}$, ${\widehat{\mathbf{H}}}^{\left(final\right)}={\widehat{s}}_{1,1}^{\left(\infty \right)}{\mathbf{H}}_{new}^{\left(\infty \right)}$.
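The core of each iteration (Steps 1–3) is a damped normal-equation solve. A minimal numpy sketch with a generic Jacobian and residual vector (names illustrative; this is not the paper's full receiver, which rebuilds $\mathbf{J}$ and $\mathbf{z}$ from the current $\mathbf{S}$ and $\mathbf{H}$ at every iteration):

```python
import numpy as np

def lm_step(J, z, lam):
    """One damped (Levenberg-Marquardt) step for the cost 0.5*||z(u)||^2:
    solve (J^H J + lam*I) du = -J^H z and return du together with the
    quantity delta used in the gain-ratio test of Step 4."""
    Q = J.shape[1]
    g = J.conj().T @ z                      # gradient g = J^H z
    du = -np.linalg.solve(J.conj().T @ J + lam * np.eye(Q), g)
    Jdu = J @ du
    delta = (Jdu.conj().T @ z + 0.5 * np.linalg.norm(Jdu) ** 2).real
    return du, delta
```

For a linear residual $\mathbf{z}\left(\mathbf{u}\right)=\mathbf{A}\mathbf{u}-\mathbf{b}$ and a very small damping value, a single step lands on the least squares solution, which is a quick way to check the implementation.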

We can then build the blocks of ${\mathbf{J}}^{H}\mathbf{J}$ as follows:
The terms ${\mathbf{J}}_{{\mathbf{u}}_{\mathbf{S}}}^{H}{\mathbf{J}}_{{\mathbf{u}}_{\mathbf{S}}}$, ${\mathbf{J}}_{{\mathbf{u}}_{\mathbf{H}}}^{H}{\mathbf{J}}_{{\mathbf{u}}_{\mathbf{H}}}$ and ${\mathbf{J}}_{{\mathbf{u}}_{\mathbf{S}}}^{H}{\mathbf{J}}_{{\mathbf{u}}_{\mathbf{H}}}$ can be respectively written as:

$${\mathbf{J}}^{H}\mathbf{J}=\left[\begin{array}{cc}{\mathbf{J}}_{{\mathbf{u}}_{\mathbf{S}}}^{H}{\mathbf{J}}_{{\mathbf{u}}_{\mathbf{S}}} & {\mathbf{J}}_{{\mathbf{u}}_{\mathbf{S}}}^{H}{\mathbf{J}}_{{\mathbf{u}}_{\mathbf{H}}}\\ {\left({\mathbf{J}}_{{\mathbf{u}}_{\mathbf{S}}}^{H}{\mathbf{J}}_{{\mathbf{u}}_{\mathbf{H}}}\right)}^{H} & {\mathbf{J}}_{{\mathbf{u}}_{\mathbf{H}}}^{H}{\mathbf{J}}_{{\mathbf{u}}_{\mathbf{H}}}\end{array}\right]$$

$$\begin{array}{c}\hfill {\mathbf{J}}_{{\mathbf{u}}_{\mathbf{S}}}^{H}{\mathbf{J}}_{{\mathbf{u}}_{\mathbf{S}}}\phantom{\rule{4.pt}{0ex}}=\phantom{\rule{4.pt}{0ex}}{\mathbf{I}}_{N}\otimes \left({\mathbf{F}}_{1}^{H}\left({\mathbf{I}}_{P}\otimes {\mathbf{H}}^{H}\mathbf{H}\right){\mathbf{F}}_{1}\right)\end{array}$$

$$\begin{array}{c}\hfill {\mathbf{J}}_{{\mathbf{u}}_{\mathbf{H}}}^{H}{\mathbf{J}}_{{\mathbf{u}}_{\mathbf{H}}}={\mathbf{I}}_{{M}_{D}}\otimes \left({\mathbf{F}}_{2}^{H}\left({\mathbf{I}}_{P}\otimes {\mathbf{S}}^{H}\mathbf{S}\right){\mathbf{F}}_{2}\right)\end{array}$$

$$\begin{array}{c}\hfill {\mathbf{J}}_{{\mathbf{u}}_{\mathbf{S}}}^{H}{\mathbf{J}}_{{\mathbf{u}}_{\mathbf{H}}}={\left({\mathbf{I}}_{N}\otimes \left({\mathbf{I}}_{P}\otimes \mathbf{H}\right){\mathbf{F}}_{1}\right)}^{H}\Pi \left({\mathbf{I}}_{{M}_{D}}\otimes \left({\mathbf{I}}_{P}\otimes \mathbf{S}\right){\mathbf{F}}_{2}\right).\end{array}$$
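The block Gram expressions above follow from the Kronecker mixed-product rule and can be checked numerically. The sketch below uses random placeholders for $\mathbf{H}$ and ${\mathbf{F}}_{1}$ (illustrative only, since the actual factors come from the coding structure) to verify the first identity:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, M_D, M_S, R = 3, 2, 4, 3, 2
# random placeholders: H is M_D x M_S, F1 is (P*M_S) x R as in the text
H = rng.standard_normal((M_D, M_S)) + 1j * rng.standard_normal((M_D, M_S))
F1 = rng.standard_normal((P * M_S, R)) + 1j * rng.standard_normal((P * M_S, R))

# J_uS = I_N kron ((I_P kron H) F1)
J_uS = np.kron(np.eye(N), np.kron(np.eye(P), H) @ F1)
lhs = J_uS.conj().T @ J_uS
# identity: J_uS^H J_uS = I_N kron (F1^H (I_P kron H^H H) F1)
rhs = np.kron(np.eye(N), F1.conj().T @ np.kron(np.eye(P), H.conj().T @ H) @ F1)
assert np.allclose(lhs, rhs)
```

The same mixed-product manipulation yields the ${\mathbf{J}}_{{\mathbf{u}}_{\mathbf{H}}}^{H}{\mathbf{J}}_{{\mathbf{u}}_{\mathbf{H}}}$ block with $\mathbf{S}$ and ${\mathbf{F}}_{2}$ in place of $\mathbf{H}$ and ${\mathbf{F}}_{1}$.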

Similarly, the partitioned structure of $\mathbf{u}$ allows us to write $\mathbf{g}$ as the concatenation of the following two gradients:
where ${\mathbf{g}}_{{\mathbf{u}}_{\mathbf{S}}}\in {\mathbb{C}}^{NR\times 1}$ and ${\mathbf{g}}_{{\mathbf{u}}_{\mathbf{H}}}\in {\mathbb{C}}^{{M}_{D}{M}_{S}\times 1}$ are respectively given by:

$$\mathbf{g}=\left[\begin{array}{c}\partial \varphi \left(\mathbf{u}\right)/\partial {\mathbf{u}}_{\mathbf{S}}\\ \partial \varphi \left(\mathbf{u}\right)/\partial {\mathbf{u}}_{\mathbf{H}}\end{array}\right]=\left[\begin{array}{c}{\mathbf{g}}_{{\mathbf{u}}_{\mathbf{S}}}\\ {\mathbf{g}}_{{\mathbf{u}}_{\mathbf{H}}}\end{array}\right]$$

$$\begin{array}{c}\hfill {\mathbf{g}}_{{\mathbf{u}}_{\mathbf{S}}}\phantom{\rule{4.pt}{0ex}}=\phantom{\rule{4.pt}{0ex}}{\mathbf{J}}_{{\mathbf{u}}_{\mathbf{S}}}^{H}{\mathbf{J}}_{{\mathbf{u}}_{\mathbf{S}}}{\mathbf{u}}_{\mathbf{S}}-{\mathbf{J}}_{{\mathbf{u}}_{\mathbf{S}}}^{H}\mathbf{y}\end{array}$$

$$\begin{array}{c}\hfill {\mathbf{g}}_{{\mathbf{u}}_{\mathbf{H}}}\phantom{\rule{4.pt}{0ex}}=\phantom{\rule{4.pt}{0ex}}{\mathbf{J}}_{{\mathbf{u}}_{\mathbf{H}}}^{H}{\mathbf{J}}_{{\mathbf{u}}_{\mathbf{H}}}{\mathbf{u}}_{\mathbf{H}}-{\mathbf{J}}_{{\mathbf{u}}_{\mathbf{H}}}^{H}\mathbf{y}.\end{array}$$

In Algorithm 1, the estimated matrix ${\mathbf{H}}^{\left(\infty \right)}$ is projected onto a low-rank estimate ${\mathbf{H}}_{new}^{\left(\infty \right)}$ by the SVP-based algorithm when $L<min\left({M}_{S},{M}_{D}\right)$. Here, ${\mathbf{H}}_{new}^{\left(\infty \right)}$ is calculated as ${\mathbf{H}}_{new}^{\left(\infty \right)}=SVP\left({\mathbf{H}}^{\left(\infty \right)}\right)={\displaystyle \sum _{l=1}^{L}}{\beta}_{l}{\mathbf{U}}_{\cdot l}^{\left(C\right)}{\left({\mathbf{V}}_{\cdot l}^{\left(C\right)}\right)}^{H}$, where ${\beta}_{l}$ denotes the l-th largest singular value of ${\mathbf{H}}^{\left(\infty \right)}$, and ${\mathbf{U}}_{\cdot l}^{\left(C\right)}$ and ${\mathbf{V}}_{\cdot l}^{\left(C\right)}$ are the corresponding left and right singular vectors. The overall complexity of the optimized LM algorithm depends mainly on the per-iteration complexity and the number of iterations. The per-iteration complexity can be estimated as $\mathcal{O}\left({\left(NR+{M}_{D}{M}_{S}\right)}^{3}\right)$. Since the antenna-to-slot allocation matrix and the coding tensor are fixed and known at the receiver, the optimized LM algorithm usually converges in only a few iterations. The average number of iterations is further analyzed in Section 6.
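The SVP step is a truncated SVD, i.e., the best rank-$L$ approximation in the Frobenius norm (Eckart–Young). A minimal numpy sketch (function name illustrative):

```python
import numpy as np

def svp(H_est, L):
    """Project H_est onto the set of rank-L matrices by keeping its L
    largest singular values and the corresponding singular vectors."""
    U, s, Vh = np.linalg.svd(H_est, full_matrices=False)
    return (U[:, :L] * s[:L]) @ Vh[:L, :]
```

Because the true channel has rank $L$, discarding the trailing singular values of the noisy estimate cannot move it further from the set of feasible channels, which is the intuition behind the accuracy gain.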

## 5. Extension to Multi-User Massive MIMO Systems

In this section, we show that the developed algorithm can be applied to multi-user massive MIMO systems with a hybrid precoding architecture for joint symbol and channel estimation. We consider a fully-connected hybrid precoding architecture, which is the typical model of massive MIMO systems. The base station communicates with M users simultaneously, and each mobile station is equipped with ${M}_{D}$ antennas. The base station is equipped with ${M}_{S}$ antennas and ${M}_{RF}$ independent radio frequency chains to transmit R streams to the ${M}_{D}$ receive antennas of each mobile station. In the considered downlink system, each symbol ${s}_{n,r}$ is coded by a three-dimensional baseband code ${b}_{{m}_{RF},r,p}$ followed by a radio frequency code ${e}_{{m}_{S},{m}_{RF}}$ at the base station. At the m-th ($m=1,\dots ,M$) mobile station, the discrete-time baseband signal at the ${m}_{D}$-th receive antenna is written as:
where ${e}_{{m}_{S},{m}_{RF}}$ and ${h}_{{m}_{D},{m}_{S}}^{\left(m\right)}$ are $\left({m}_{S},{m}_{RF}\right)$-th and $\left({m}_{D},{m}_{S}\right)$-th elements of the radio frequency precoder matrix $\mathbf{E}\in {C}^{{M}_{S}\times {M}_{RF}}$ and the massive MIMO channel matrix ${\mathbf{H}}^{\left(m\right)}\in {C}^{{M}_{D}\times {M}_{S}}$, respectively. ${y}_{{m}_{D},n,p}^{\left(m\right)}$ is the typical element of the received signal tensor ${\underline{\mathbf{Y}}}^{\left(m\right)}\in {C}^{{M}_{D}\times N\times P}$. Then Equation (45) can be rewritten as:
where:

$$\begin{array}{c}\hfill {y}_{{m}_{D},n,p}^{\left(m\right)}=\sum _{{m}_{S}=1}^{{M}_{S}}\sum _{{m}_{RF}=1}^{{M}_{RF}}\sum _{r=1}^{R}{h}_{{m}_{D},{m}_{S}}^{\left(m\right)}{q}_{p,{m}_{S}}{e}_{{m}_{S},{m}_{RF}}{b}_{{m}_{RF},r,p}{s}_{n,r}+{v}_{{m}_{D},n,p}^{\left(m\right)}\end{array}$$

$$\begin{array}{c}\hfill {y}_{{m}_{D},n,p}^{\left(m\right)}=\sum _{{m}_{S}=1}^{{M}_{S}}\sum _{r=1}^{R}{h}_{{m}_{D},{m}_{S}}^{\left(m\right)}{c}_{{m}_{S},r,p}{s}_{n,r}+{v}_{{m}_{D},n,p}^{\left(m\right)}\end{array}$$

$$\begin{array}{c}\hfill {c}_{{m}_{S},r,p}=\sum _{{m}_{RF}=1}^{{M}_{RF}}{q}_{p,{m}_{S}}{e}_{{m}_{S},{m}_{RF}}{b}_{{m}_{RF},r,p}\end{array}$$

Following [33,34], we adopt a geometric channel model with ${L}_{m}$ scatterers between the base station and the m-th mobile station, $m=1,\dots ,M$. Under this model, the channel matrix ${\mathbf{H}}^{\left(m\right)}$ is expressed as:
where ${\alpha}_{l}^{\left(m\right)}$ denotes the complex gain of the l-th path, and ${\theta}_{l}^{\left(m\right)}$ and ${\varphi}_{l}^{\left(m\right)}$ are the azimuth angles of arrival and departure (AoA/AoD) of the l-th path at the mobile station and base station, respectively. ${\Lambda}_{MS}\left({\theta}_{l}^{\left(m\right)}\right)$ and ${\Lambda}_{BS}\left({\varphi}_{l}^{\left(m\right)}\right)$ are the receive and transmit antenna array gains at the corresponding AoA and AoD, respectively. Finally, ${\mathbf{a}}_{BS}\left({\varphi}_{l}^{\left(m\right)}\right)$ and ${\mathbf{a}}_{MS}\left({\theta}_{l}^{\left(m\right)}\right)$ are the steering vectors at the base station and mobile station, respectively. If uniform linear arrays are considered, these steering vectors are respectively given by:
where $\lambda $ denotes the signal wavelength, and d is the distance between two neighboring antenna elements.

$$\begin{array}{c}{\mathbf{H}}^{\left(m\right)}=\sum _{l=1}^{{L}_{m}}{\alpha}_{l}^{\left(m\right)}{\Lambda}_{MS}\left({\theta}_{l}^{\left(m\right)}\right){\Lambda}_{BS}\left({\varphi}_{l}^{\left(m\right)}\right){\mathbf{a}}_{MS}\left({\theta}_{l}^{\left(m\right)}\right){\mathbf{a}}_{BS}^{H}\left({\varphi}_{l}^{\left(m\right)}\right)\hfill \end{array}$$

$$\begin{array}{c}\hfill \begin{array}{c}{\mathbf{a}}_{BS}\left({\varphi}_{l}^{\left(m\right)}\right)=\frac{1}{{M}_{S}}{[1,\phantom{\rule{0.277778em}{0ex}}{e}^{j\frac{2\pi}{\lambda}dsin{\varphi}_{l}^{\left(m\right)}},\cdots ,\phantom{\rule{0.277778em}{0ex}}{e}^{j\frac{2\pi}{\lambda}d({M}_{S}-1)sin{\varphi}_{l}^{\left(m\right)}}]}^{T}\hfill \end{array}\phantom{\rule{4pt}{0ex}}\end{array}$$

$$\begin{array}{c}\hfill \begin{array}{c}{\mathbf{a}}_{MS}\left({\theta}_{l}^{\left(m\right)}\right)=\frac{1}{{M}_{D}}{[1,\phantom{\rule{0.277778em}{0ex}}{e}^{j\frac{2\pi}{\lambda}dsin{\theta}_{l}^{\left(m\right)}},\cdots ,\phantom{\rule{0.277778em}{0ex}}{e}^{j\frac{2\pi}{\lambda}d({M}_{D}-1)sin{\theta}_{l}^{\left(m\right)}}]}^{T}\hfill \end{array}\phantom{\rule{4pt}{0ex}}\end{array}$$
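Under the model above, the ULA steering vectors and the low-rank geometric channel can be generated as follows (a sketch using the text's $1/M$ normalization; function names are our own, and the array gain factors are assumed folded into the path gains):

```python
import numpy as np

def ula_steering(M, angle, d_over_lambda=0.5):
    """ULA steering vector [1, e^{j 2*pi*(d/lambda)*sin(angle)}, ...] / M."""
    k = np.arange(M)
    return np.exp(1j * 2 * np.pi * d_over_lambda * k * np.sin(angle)) / M

def geometric_channel(M_D, M_S, gains, aoas, aods):
    """Sum of L_m rank-one paths a_MS(theta) a_BS(phi)^H; the array gains
    Lambda_MS, Lambda_BS are assumed absorbed into `gains`."""
    H = np.zeros((M_D, M_S), dtype=complex)
    for g, theta, phi in zip(gains, aoas, aods):
        H += g * np.outer(ula_steering(M_D, theta),
                          ula_steering(M_S, phi).conj())
    return H
```

With ${L}_{m}$ paths at distinct angles, the resulting ${\mathbf{H}}^{\left(m\right)}$ has rank ${L}_{m}$, which is exactly the low-rank structure exploited by the SVP step of Algorithm 1.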

Similar to the analysis of Section 3.1, the noiseless received signal tensor ${\underline{\mathbf{Y}}}^{\left(m\right)}$ also satisfies the Tucker-2 model, and the algorithm proposed in Section 4 remains suitable for joint symbol and channel estimation at each mobile station. However, two points are important to note here. First, the identifiability conditions of Theorem 1 are no longer applicable because of the low-rank property of ${\mathbf{H}}^{\left(m\right)}$. However, we can deduce new identifiability conditions based on Equations (21) and (22), i.e., necessary and sufficient conditions for the identifiability of $\mathbf{S}$ and ${\mathbf{H}}^{\left(m\right)}$ require that $\left({\mathbf{I}}_{P}\otimes {\mathbf{H}}^{\left(m\right)}\right){\mathbf{F}}_{1}$ and $\left({\mathbf{I}}_{P}\otimes \mathbf{S}\right){\mathbf{F}}_{2}$ have full column rank. For convenience of analysis, we assume that the antenna-to-slot allocation matrix is an all-ones matrix. Then, we have the following theorem.

**Theorem 2.**

Assume that the path gains of the low-rank channel ${\mathbf{H}}^{\left(m\right)}$ are Rayleigh distributed, and that N and R are large enough. Then sufficient conditions for the identifiability of ${\mathbf{H}}^{\left(m\right)}$ and $\mathbf{S}$ are:

$$P\geqslant max\left(\frac{R}{{L}_{m}},\,\frac{{M}_{S}}{N},\,\frac{{M}_{S}}{R}\right)$$

**Proof of Theorem 2.**

The channel ${\mathbf{H}}^{\left(m\right)}$ is expressed as in Equation (48). The rank of ${\mathbf{H}}^{\left(m\right)}$ is ${L}_{m}$, and its path gains are Rayleigh distributed. ${\mathbf{F}}_{1}$ is a full-rank matrix, which contains different generators. Consequently, $min\left(P{M}_{D},\,P{L}_{m},\,P{M}_{S}\right)\geqslant R$ ensures that $\left({\mathbf{I}}_{P}\otimes {\mathbf{H}}^{\left(m\right)}\right){\mathbf{F}}_{1}$ has full column rank. Since ${\mathbf{H}}^{\left(m\right)}$ is low-rank, i.e., ${L}_{m}<min\left({M}_{S},\,{M}_{D}\right)$, $P\geqslant \frac{R}{{L}_{m}}$ ensures that $\left({\mathbf{I}}_{P}\otimes {\mathbf{H}}^{\left(m\right)}\right){\mathbf{F}}_{1}$ has full column rank. Since N and R are large enough and $\mathbf{S}$ is random, the rank of $\mathbf{S}$ is equal to $min\left(N,R\right)$. Moreover, ${\mathbf{F}}_{2}$ is also a full-rank matrix because of its special structure. We deduce that $\left({\mathbf{I}}_{P}\otimes \mathbf{S}\right){\mathbf{F}}_{2}$ has full column rank if $min\left(PN,\,PR\right)\geqslant {M}_{S}$, i.e., $P\geqslant max\left(\frac{{M}_{S}}{N},\,\frac{{M}_{S}}{R}\right)$. Therefore, condition (51) ensures the identifiability of ${\mathbf{H}}^{\left(m\right)}$ and $\mathbf{S}$. This ends the proof of Theorem 2. ☐
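In practice, the condition of Theorem 2 translates into a minimum number of time slots P. A small helper (hypothetical name, for illustration) computes it:

```python
import math

def min_time_slots(R, L_m, M_S, N):
    """Smallest integer P with P >= max(R/L_m, M_S/N, M_S/R),
    the sufficient identifiability condition of Theorem 2."""
    return math.ceil(max(R / L_m, M_S / N, M_S / R))
```

For example, with $R=4$ streams, ${L}_{m}=2$ paths, ${M}_{S}=64$ transmit antennas, and $N=100$ symbols, the binding term is ${M}_{S}/R$, giving $P=16$.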

Second, the low-rank property of the mmWave massive MIMO channel should be exploited. Due to the very limited scattering of the mmWave channel and the large numbers of transmitting and receiving antennas, ${L}_{m}$ is usually less than both ${M}_{S}$ and ${M}_{D}$. Different from the conventional MIMO channel matrix, which usually has full column or row rank, the rank of the mmWave massive MIMO channel matrix is much smaller than its dimensions. This is called the 'low-rank property' of the mmWave massive MIMO channel matrix. Therefore, the final part of the proposed **Algorithm 1** takes advantage of this low-rank constraint $rank\left({\mathbf{H}}^{\left(m\right)}\right) \leqslant L_m$ to further improve the estimation accuracy of the channel.

## 6. Simulation Results and Discussion

We studied the performance of the proposed semi-blind receiver through numerical simulations. The channel matrix $\mathbf{H}$ has independent and identically distributed (i.i.d.) complex Gaussian entries with zero mean and unit variance. The default values of the system parameters are set to ${M}_{S}={M}_{D}=4$, and the antenna-to-slot allocation matrix is an all-ones matrix. Throughout the simulations, the coding tensor $\underline{\mathbf{C}}$ is known at the receiver. Quadrature phase-shift keying (QPSK) constellations are used to modulate the transmitted symbols. All results are averaged over 10,000 independent Monte Carlo simulations. As in [8,9], the signal-to-noise ratio (SNR) at the receiver is defined as:

$$SNR = 10\log_{10}\left(\|\underline{\tilde{\mathbf{Y}}}\|_{F}^{2} / \|\underline{\mathbf{V}}\|_{F}^{2}\right)\;\;dB$$

where $\underline{\tilde{\mathbf{Y}}}$ denotes the noise-free signal tensor (the tensor-of-interest) containing both symbol and channel parameters, and $\underline{\mathbf{V}}$ denotes the additive noise tensor. For each channel realization, the normalized mean square error (NMSE) for the different receivers is computed as $\|\mathbf{H}-\widehat{\mathbf{H}}\|_{F}^{2} / \|\mathbf{H}\|_{F}^{2}$, where $\widehat{\mathbf{H}}$ is the estimate of $\mathbf{H}$ at convergence.
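The SNR and NMSE metrics defined above can be computed directly from Frobenius norms; the sketch below uses hypothetical tensor sizes for illustration:

```python
import numpy as np

def snr_db(y_clean, noise):
    """SNR (dB): noise-free signal power over noise power, Frobenius norms."""
    return 10 * np.log10(np.linalg.norm(y_clean) ** 2 / np.linalg.norm(noise) ** 2)

def nmse(H, H_hat):
    """Normalized mean square error of a channel estimate."""
    return np.linalg.norm(H - H_hat) ** 2 / np.linalg.norm(H) ** 2

# Toy check with hypothetical sizes
rng = np.random.default_rng(1)
Y = rng.normal(size=(4, 5, 6))          # noise-free tensor
V = 0.1 * rng.normal(size=(4, 5, 6))    # noise tensor, 10x smaller amplitude
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

print(snr_db(Y, V))   # roughly 20 dB for this amplitude ratio
print(nmse(H, H))     # 0.0 for a perfect estimate
```

Note that `np.linalg.norm` with default arguments returns the 2-norm of the flattened array, which coincides with the Frobenius norm for matrices and tensors.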

In the first example, we evaluate the convergence performance of the optimized LM algorithm, which is used in our semi-blind receiver. We assume the system design parameters $N=P=5$ and $R=3$. In Figure 2, the average value of the cost function is plotted versus the number of iterations for three SNR values. We observe from Figure 2 that, for each SNR value, the cost function decreases as the number of iterations increases until the algorithm converges. We can also see that, for the same number of iterations, the cost function decreases as the SNR increases. The proposed algorithm needs only a few iterations to converge. For instance, the optimized LM algorithm achieves convergence in about 10 iterations at an SNR of 20 dB.
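The damping behaviour that makes LM iterations robust can be illustrated on a generic nonlinear least-squares problem. The sketch below is a textbook LM loop, not the paper's optimized variant (whose specific update rules are given in Section 4); the toy exponential-fit problem and function names are illustrative assumptions:

```python
import numpy as np

def lm_fit(residual, jacobian, x0, iters=50, lam0=1e-2):
    """Generic Levenberg-Marquardt loop (illustration only; the paper's
    optimized LM receiver uses its own damping/update rules)."""
    x, lam = np.asarray(x0, dtype=float), lam0
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        # Damped normal equations: (J^T J + lam*I) step = -J^T r
        step = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        if np.sum(residual(x + step) ** 2) < np.sum(r ** 2):
            x, lam = x + step, lam / 10   # accept: reduce damping (-> Gauss-Newton)
        else:
            lam *= 10                     # reject: increase damping (-> gradient step)
    return x

# Toy problem: fit y = a * exp(b * t) from noiseless data
t = np.linspace(0, 1, 20)
y = 2.0 * np.exp(-1.5 * t)
res = lambda x: x[0] * np.exp(x[1] * t) - y
jac = lambda x: np.stack([np.exp(x[1] * t), x[0] * t * np.exp(x[1] * t)], axis=1)

a, b = lm_fit(res, jac, np.array([1.0, 0.0]))
print(a, b)   # converges close to a = 2.0, b = -1.5
```

The accept/reject damping adjustment is what allows LM-type fitters to take large Gauss-Newton steps near the solution while staying stable far from it, consistent with the fast convergence seen in Figure 2.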

In the second example, we evaluated the estimation performance of the proposed semi-blind receiver in terms of the bit error rate (BER) and the NMSE of channel estimation. In particular, we compared it with the PARAFAC-based receiver using the KRST coding scheme (P-KRST) in [8] and with the training-based receiver using the space-time (TB-ST) coding scheme. For the TB-ST scheme, the symbol matrix is composed of two parts as in [9], i.e., a training symbol matrix and an unknown data symbol matrix. ${N}_{tr}$ denotes the length of the channel training sequence in the TB-ST receiver.

The transmission rates for the proposed coding scheme and the KRST coding scheme are $\frac{RN}{PN}=\frac{R}{P}$ and $\frac{{M}_{S}N}{PN}=\frac{{M}_{S}}{P}$ (data symbols per symbol period), respectively. However, the KRST coding scheme needs to know the first column of the signal matrix $\mathbf{S}$ to eliminate the scaling ambiguity, while the proposed coding scheme only needs to know ${s}_{1,1}$. Thus, the efficient transmission rates for the proposed coding scheme and the KRST coding scheme are $\frac{RN-1}{PN}$ and $\frac{{M}_{S}\left(N-1\right)}{PN}$, respectively. To ensure a fair comparison, the proposed coding scheme and the KRST coding scheme should keep the same efficient transmission rate, i.e., $N=\frac{{M}_{S}-1}{{M}_{S}-R}$. Thus, the system design parameters in this example are set to ${M}_{S}=4$, $P=7$, and $R=N=3$. For the TB-ST coding scheme, we divide $P={P}_{tr}+{P}_{d}$, where ${P}_{tr}=2$ blocks are used for channel training and ${P}_{d}=5$ blocks for data transmission. Therefore, the length of the channel training sequence in the TB-ST receiver is ${N}_{tr}={P}_{tr}N=6$.
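The fair-comparison bookkeeping above is easy to verify with exact rational arithmetic; the function names below are illustrative, and the rate formulas are taken directly from the text:

```python
from fractions import Fraction

def rate_proposed(R, N, P):
    # Proposed scheme: R*N symbols per frame, minus the one known symbol s_{1,1}
    return Fraction(R * N - 1, P * N)

def rate_krst(M_S, N, P):
    # KRST scheme: M_S*N symbols per frame, minus the known first column of S
    return Fraction(M_S * (N - 1), P * N)

M_S, R, P = 4, 3, 7
N = Fraction(M_S - 1, M_S - R)       # fair-comparison condition N = (M_S-1)/(M_S-R)
print(N)                             # 3, matching the parameters in the text

print(rate_proposed(R, 3, P))        # 8/21
print(rate_krst(M_S, 3, P))          # 8/21: same efficient rate, as required
```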

The BER performance of the different receivers versus SNR is shown in Figure 3. It can be seen that the proposed semi-blind receiver outperforms the P-KRST and TB-ST receivers. The NMSE performance of the different receivers is demonstrated in Figure 4. It can be seen from Figure 4 that the P-KRST receiver has the best channel estimation performance, and the proposed semi-blind receiver yields a smaller NMSE than the TB-ST receiver. From [8], the per-iteration complexity of the PARAFAC-based receiver is $\mathcal{O}\left({M}_{S}{M}_{D}PN\right)$. The complexity of the TB-ST scheme can be estimated as $\mathcal{O}\left({N}_{tr}{M}_{S}\left({M}_{D}+{N}_{tr}\right)+RP{M}_{D}\left(N+R\right)\right)$. The per-iteration complexity of the proposed O-LM algorithm is given at the end of Section 4. The TB-ST scheme has the lowest computational complexity due to the use of the channel training sequence. Due to the adoption of the simple KRST coding scheme, the PARAFAC-based receiver has lower complexity than the proposed receiver. However, the TB-ST receiver requires a long channel training sequence, and the PARAFAC-based receiver needs to know the first column or row of the signal matrix to eliminate the scaling ambiguity, whereas the proposed receiver only needs to know one symbol of the signal matrix.

In the third example, we evaluated and compared the performance of the traditional ALS (T-ALS) and optimized LM (O-LM) algorithms. We assume the system design parameters $N=P=L$ and $R=5$. A correlated MIMO channel is considered in this example, and the channel matrix $\mathbf{H}$ is modeled as in [35], where $\rho$ denotes the normalized correlation coefficient with magnitude $\left|\rho \right|\le 1$. We consider $\rho =0$ (no correlation) and $\rho =0.8$ (strong correlation), respectively. For each Monte Carlo run, the T-ALS algorithm is initialized with ten different random matrices as in [20,36]. The estimation performance is evaluated after selecting the best initialization, i.e., the one that results in the minimum value of ${\delta}^{\left(j\right)}$.
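A correlated channel realization of this kind can be sketched with a Kronecker-type construction using exponential correlation $\rho^{|i-j|}$ at both ends. This is a common stand-in for correlated-fading models; whether it matches the exact model of [35] is an assumption, and the function names are illustrative:

```python
import numpy as np

def correlated_channel(M_D, M_S, rho, rng):
    """Draw H = R_r^{1/2} G R_t^{1/2} with exponential correlation rho^|i-j|.

    Illustrative Kronecker-model sketch; the exact correlated-channel
    model of [35] may differ in its construction.
    """
    def exp_corr(M):
        idx = np.arange(M)
        return rho ** np.abs(idx[:, None] - idx[None, :])

    def sqrtm_psd(R):
        # Symmetric square root via eigendecomposition (R is PSD)
        w, V = np.linalg.eigh(R)
        return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

    # i.i.d. unit-variance complex Gaussian core
    G = (rng.normal(size=(M_D, M_S)) + 1j * rng.normal(size=(M_D, M_S))) / np.sqrt(2)
    return sqrtm_psd(exp_corr(M_D)) @ G @ sqrtm_psd(exp_corr(M_S))

rng = np.random.default_rng(2)
H0 = correlated_channel(4, 4, 0.0, rng)   # rho = 0: reduces to i.i.d. entries
H8 = correlated_channel(4, 4, 0.8, rng)   # rho = 0.8: strongly correlated
```

With $\rho = 0$ both correlation matrices collapse to identities, so the draw reduces to the i.i.d. case used as the non-correlated baseline.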

We observe from Figure 5 that the T-ALS and O-LM algorithms give similar BER and NMSE performance, which means that the two algorithms converge to the same point. For the right subfigure of Figure 5, the NMSE of the T-ALS and O-LM algorithms is also listed in Table 1 for ease of comparison. We can also observe from Figure 5 that, for both algorithms, the BER and NMSE performance degrades when the channel becomes strongly correlated. The overall complexities of the O-LM and T-ALS algorithms depend on the per-iteration complexity and the number of iterations. The per-iteration complexity of the O-LM algorithm is higher than that of the T-ALS algorithm; however, because of its robustness, the O-LM algorithm needs fewer iterations than the T-ALS algorithm. Therefore, the proposed algorithm has lower overall complexity than the existing T-ALS algorithm. The mean processing times required by the T-ALS and O-LM algorithms are shown in Figure 6. We observe that the mean processing time of the O-LM algorithm is shorter than that of the T-ALS algorithm, especially when the channel becomes strongly correlated. From Figure 6, we can also observe that the advantage of the O-LM algorithm over the T-ALS algorithm becomes more obvious as L decreases from 8 to 7.

In the fourth example, the influence of the design parameters $\left(P,\;R\right)$ on the proposed receiver is studied. In the left subfigure of Figure 7, it can be seen that the BER decreases as P increases, which reflects the performance gain brought by the time diversity. It can also be seen from this subfigure that the BER increases as the number of data streams R increases. The impact of the design parameters $\left(P,\;R\right)$ on the NMSE performance is shown in the right subfigure of Figure 7. As expected, we observe that the NMSE decreases linearly as a function of P, and increases as R increases. Hence, appropriate values of the design parameters P and R can be selected according to the requirements on system performance and transmission rate.

In the fifth example, we assume ${M}_{S}=3$, $R=4$, and $N=P=8$ for our semi-blind receiver, and analyze the influence of the number of receive antennas. We also compare the performance of our chosen coding tensor (OCCT) $\underline{\mathbf{B}}$ with a random coding tensor (RCT) whose entries are circularly-symmetric Gaussian random variables. In Figure 8, it can be seen that both the BER and NMSE decrease as ${M}_{D}$ increases, which reflects the performance gain brought by the receive diversity. We also observe from Figure 8 that the OCCT performs better than the RCT. Although the OCCT is suboptimal, this choice has good symbol and channel identifiability properties, which is advantageous from a receiver design viewpoint.

In the sixth example, we studied the estimation performance of two different transmission schemes for our semi-blind receiver. The default values of the system parameters are set to ${M}_{D}=5$ and $N=6$. In scheme 1, we assume ${M}_{S}=2$, $R=5$, and $P=6$. Three different antenna-to-slot allocation matrices are given as follows:

$${\mathbf{Q}}_{1}=\left[\begin{array}{cc}1&0\\0&1\\1&0\\1&0\\0&1\\1&0\end{array}\right],\;\;{\mathbf{Q}}_{2}=\left[\begin{array}{cc}0&1\\1&1\\1&0\\0&1\\1&1\\1&0\end{array}\right],\;\;{\mathbf{Q}}_{3}=\left[\begin{array}{cc}1&1\\0&1\\1&1\\1&0\\1&1\\1&1\end{array}\right]$$

In scheme 2, we assume ${M}_{S}=3$, $R=3$, and $P=7$. Two different antenna-to-slot allocation matrices are given as follows:

$${\mathbf{Q}}_{4}=\left[\begin{array}{ccc}1&0&0\\1&0&0\\1&0&1\\1&0&0\\0&1&0\\0&1&0\\1&0&1\end{array}\right],\;\;{\mathbf{Q}}_{5}=\left[\begin{array}{ccc}1&0&1\\1&1&0\\1&1&1\\1&1&0\\1&1&1\\0&1&0\\1&1&1\end{array}\right]$$

The BER and NMSE performance of the proposed receiver for different schemes is shown in Figure 9. For scheme 1, the proposed receiver with ${\mathbf{Q}}_{2}$ has a better BER and NMSE performance than that of the proposed receiver with ${\mathbf{Q}}_{1}$. The reason is that the allocation matrix ${\mathbf{Q}}_{2}$ provides a higher transmit spatial diversity gain than the allocation matrix ${\mathbf{Q}}_{1}$. For the same reason, the allocation matrix ${\mathbf{Q}}_{3}$ outperforms ${\mathbf{Q}}_{2}$, and the allocation matrix ${\mathbf{Q}}_{5}$ outperforms ${\mathbf{Q}}_{4}$. We also observe in Figure 9 that scheme 2 has a better BER and NMSE performance than scheme 1. The reason is that scheme 2 can provide a higher coding diversity than scheme 1. It is worth noting that scheme 1 has higher spectral efficiency compared with scheme 2. The transmission rates for scheme 1 and scheme 2 are about 5/6 and 3/7 (data symbols per symbol period), respectively. In summary, a desired tradeoff between estimation performance and transmission rate can be obtained by designing a suitable scheme.
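The diversity ordering observed for scheme 1 can be made concrete by counting the active antenna-slot pairs in each allocation matrix. The total number of ones is used here as a simple proxy for transmit spatial diversity; this metric is an illustrative assumption, not a formula from the paper:

```python
import numpy as np

# Antenna-to-slot allocation matrices of scheme 1
# (rows: time blocks, columns: transmit antennas), transcribed from the text
Q1 = np.array([[1, 0], [0, 1], [1, 0], [1, 0], [0, 1], [1, 0]])
Q2 = np.array([[0, 1], [1, 1], [1, 0], [0, 1], [1, 1], [1, 0]])
Q3 = np.array([[1, 1], [0, 1], [1, 1], [1, 0], [1, 1], [1, 1]])

# Total active antenna-slot pairs per frame (more = more transmit diversity)
for name, Q in [("Q1", Q1), ("Q2", Q2), ("Q3", Q3)]:
    print(name, int(Q.sum()))   # Q1: 6, Q2: 8, Q3: 10
```

The ordering Q3 > Q2 > Q1 in active entries matches the BER/NMSE ordering reported in Figure 9.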

In the final example, a multi-user massive MIMO system with a fully-connected hybrid precoding architecture was considered, where ${M}_{S}=48$, $M\times {M}_{D}=6\times 6$, and ${L}_{m}=2$ for all $m=1,\dots ,M$. The carrier frequency of this system is set to 28 GHz [37], and $d=\lambda/2$. We assume that the AoAs/AoDs are uniformly distributed in $\left[0,\;2\pi \right]$. For the considered multi-user massive MIMO system, we also evaluate the estimation performance of the proposed receiver in terms of the BER and the NMSE of channel estimation. It can be seen from Figure 10 and Figure 11 that the BER and NMSE of the proposed semi-blind receiver decrease as P and N increase, and increase as R increases. Increasing P reduces the transmission rate, whereas increasing or decreasing N has no effect on the transmission rate. This means that we can improve the estimation performance of the proposed semi-blind receiver by increasing N if the channel is constant over a long time interval before changing to another realization. We also observe from Figure 10 and Figure 11 that the proposed semi-blind receiver still performs well for joint symbol and channel estimation even with a shorter code length, fewer information symbols, and a larger number of data streams, i.e., $P=24$, $N=6$, and $R=12$.
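A low-rank mmWave channel realization of this kind can be sketched with the standard narrowband geometric model: a sum of $L_m$ rank-one terms built from uniform-linear-array (ULA) steering vectors with $d=\lambda/2$ spacing and complex path gains. The normalization below is a common convention and an assumption; the paper's Equation (48) may differ in its scaling:

```python
import numpy as np

def ula_steering(M, angle, d_over_lambda=0.5):
    """ULA steering vector with element spacing d = lambda/2."""
    m = np.arange(M)
    return np.exp(1j * 2 * np.pi * d_over_lambda * m * np.sin(angle)) / np.sqrt(M)

def mmwave_channel(M_D, M_S, L, rng):
    """Geometric low-rank mmWave channel with L paths (illustrative sketch)."""
    H = np.zeros((M_D, M_S), dtype=complex)
    for _ in range(L):
        aoa, aod = rng.uniform(0, 2 * np.pi, size=2)   # AoA/AoD uniform in [0, 2pi]
        gain = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
        H += gain * np.outer(ula_steering(M_D, aoa), ula_steering(M_S, aod).conj())
    return np.sqrt(M_D * M_S / L) * H

rng = np.random.default_rng(3)
H = mmwave_channel(6, 48, 2, rng)                 # M_D = 6, M_S = 48, L_m = 2
print(np.linalg.matrix_rank(H))                   # 2: rank bounded by path count
```

The sum of $L_m = 2$ rank-one outer products makes the rank deficiency explicit: a $6 \times 48$ matrix with rank 2, which is the low-rank structure the final step of Algorithm 1 exploits.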

## 7. Conclusions

We have developed a robust semi-blind receiver based on the Tucker-2 model for multiple-antenna systems. The proposed receiver jointly estimates the information symbols and channel parameters. Compared with existing semi-blind receivers, the proposed one gives better estimation performance and higher spectral efficiency. Moreover, the proposed semi-blind receiver is also applicable to multi-user massive MIMO systems. Perspectives of this work include an extension to relay-assisted massive MIMO systems by applying the antenna allocation matrix at the relays. Since both the source-relay and relay-destination channel matrices have the low-rank property, new identifiability conditions and efficient fitting algorithms will be deduced and developed, respectively. Another perspective is extending the proposed robust semi-blind receiver to mmWave MIMO systems for joint channel parameter estimation, including AoAs, fading coefficients, and time delays [38,39].

## Author Contributions

Conceptualization J.D.; methodology, J.D.; software, M.H. and H.L.; validation, M.H. and Y.H.; writing–original draft preparation, J.D. and M.H.; writing–review and editing, Y.H., Y.C. and H.L.; supervision, J.D. and Y.C.

## Funding

This research was supported by grants from the National Natural Science Foundation of China (Nos. 61601414, 61701448, 61702466), the National Key Research and Development Program of China (No. 2016YFB0502001), and the Fundamental Research Funds for the Central Universities (Nos. CUC18A007, 2018CUCTJ082, 3132018XNG1808).

## Acknowledgments

The authors would like to thank the anonymous reviewers and the editor for their careful reviews and constructive suggestions to help us improve the quality of this paper.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

- Jin, S.; McKay, M.R.; Wong, K.K.; Li, X. Low-SNR capacity of multiple-antenna systems with statistical channel-state information. IEEE Trans. Veh. Technol. **2010**, 59, 2874–2884.
- Shafi, M.; Molisch, A.F.; Smith, P.J.; Haustein, T.; Zhu, P.; De Silva, P.; Tufvesson, F.; Benjebbour, A.; Wunder, G. 5G: A tutorial overview of standards, trials, challenges, deployment, and practice. IEEE J. Sel. Areas Commun. **2017**, 35, 1201–1221.
- Collins, A.; Polyanskiy, Y. Coherent multiple-antenna block-fading channels at finite blocklength. IEEE Trans. Inf. Theory **2019**, 65, 380–405.
- Lahat, D.; Adali, T.; Jutten, C. Multimodal data fusion: An overview of methods, challenges, and prospects. Proc. IEEE **2015**, 103, 1449–1477.
- Sidiropoulos, N.D.; De Lathauwer, L.; Fu, X.; Huang, K.; Papalexakis, E.E.; Faloutsos, C. Tensor decomposition for signal processing and machine learning. IEEE Trans. Signal Process. **2017**, 65, 3551–3582.
- Freitas, W.d.C.; Favier, G.; de Almeida, A.L. Tensor-Based Joint Channel and Symbol Estimation for Two-Way MIMO Relaying Systems. IEEE Signal Process. Lett. **2019**, 26, 227–231.
- Harshman, R.A. Foundations of the PARAFAC Procedure: Models and Conditions for an "Explanatory" Multimodal Factor Analysis. UCLA Working Pap. Phon. **1970**, 16, 1–84.
- Sidiropoulos, N.D.; Budampati, R.S. Khatri–Rao space-time codes. IEEE Trans. Signal Process. **2002**, 50, 2396–2407.
- Liu, K.; Da Costa, J.P.C.; So, H.C.; De Almeida, A.L. Semi-blind receivers for joint symbol and channel estimation in space-time-frequency MIMO-OFDM systems. IEEE Trans. Signal Process. **2013**, 61, 5444–5457.
- Rong, Y.; Khandaker, M.R.; Xiang, Y. Channel estimation of dual-hop MIMO relay system via parallel factor analysis. IEEE Trans. Wirel. Commun. **2012**, 11, 2224–2233.
- Du, J.; Yuan, C.; Zhang, J. Low complexity PARAFAC-based channel estimation for non-regenerative MIMO relay systems. IET Commun. **2014**, 8, 2193–2199.
- Du, J.; Yuan, C.; Zhang, J. Semi-blind parallel factor based receiver for joint symbol and channel estimation in amplify-and-forward multiple-input multiple-output relay systems. IET Commun. **2015**, 9, 737–744.
- Ximenes, L.R.; Favier, G.; de Almeida, A.L. Semi-blind receivers for non-regenerative cooperative MIMO communications based on nested PARAFAC modeling. IEEE Trans. Signal Process. **2015**, 63, 4985–4998.
- Zhou, Z.; Fang, J.; Yang, L.; Li, H.; Chen, Z.; Li, S. Channel estimation for millimeter-wave multiuser MIMO systems via PARAFAC decomposition. IEEE Trans. Wirel. Commun. **2016**, 15, 7501–7516.
- Zhou, Z.; Fang, J.; Yang, L.; Li, H.; Chen, Z.; Blum, R.S. Low-rank tensor decomposition-aided channel estimation for millimeter wave MIMO-OFDM systems. IEEE J. Sel. Areas Commun. **2017**, 35, 1524–1538.
- Wei, X.; Peng, W.; Chen, D.; Ng, D.W.K.; Jiang, T. Joint Channel Parameter Estimation in Multi-cell Massive MIMO System. IEEE Trans. Commun. **2019**.
- Comon, P.; Luciani, X.; De Almeida, A.L. Tensor decompositions, alternating least squares and other tales. J. Chemom. **2009**, 23, 393–405.
- Tomasi, G.; Bro, R. A comparison of algorithms for fitting the PARAFAC model. Comput. Stat. Data Anal. **2006**, 50, 1700–1734.
- Nion, D.; De Lathauwer, L. A block component model-based blind DS-CDMA receiver. IEEE Trans. Signal Process. **2008**, 56, 5567–5579.
- De Almeida, A.L.; Favier, G.; Ximenes, L.R. Space-time-frequency (STF) MIMO communication systems with blind receiver based on a generalized PARATUCK2 model. IEEE Trans. Signal Process. **2013**, 61, 1895–1909.
- De Almeida, A.L.; Favier, G.; Mota, J.C. Space–time spreading–multiplexing for MIMO wireless communication systems using the PARATUCK-2 tensor model. Signal Process. **2009**, 89, 2103–2116.
- Favier, G.; Da Costa, M.N.; De Almeida, A.L.; Romano, J.M.T. Tensor space–time (TST) coding for MIMO wireless communication systems. Signal Process. **2012**, 92, 1079–1092.
- Favier, G.; de Almeida, A.L. Tensor space-time-frequency coding with semi-blind receivers for MIMO wireless communication systems. IEEE Trans. Signal Process. **2014**, 62, 5987–6002.
- Da Costa, M.N.; Favier, G.; Romano, J.M.T. Tensor modelling of MIMO communication systems with performance analysis and Kronecker receivers. Signal Process. **2018**, 145, 304–316.
- Tucker, L.R. Some mathematical notes on three-mode factor analysis. Psychometrika **1966**, 31, 279–311.
- Favier, G.; de Almeida, A.L. Overview of constrained PARAFAC models. EURASIP J. Adv. Signal Process. **2014**, 2014, 142.
- Du, J.; Yuan, C.; Hu, Z.; Lin, H. A novel tensor-based receiver for joint symbol and channel estimation in two-hop cooperative MIMO relay systems. IEEE Commun. Lett. **2015**, 19, 1961–1964.
- Chen, Y.; Han, D.; Qi, L. New ALS methods with extrapolating search directions and optimal step size for complex-valued tensor decompositions. IEEE Trans. Signal Process. **2011**, 59, 5888–5898.
- Van Loan, C.F.; Pitsianis, N. Approximation with Kronecker products. In Linear Algebra for Large Scale and Real-Time Applications; Springer: Dordrecht, The Netherlands, 1993; pp. 293–314.
- Du, J.; Tian, P.; Lin, H. Tucker-2 Model Based Scheme for Joint Signal Detection and Channel Estimation in MIMO Systems. J. Beijing Univ. Posts Telecommun. **2016**, 39, 6–10.
- Jain, P.; Meka, R.; Dhillon, I.S. Guaranteed rank minimization via singular value projection. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 6–9 December 2010; pp. 937–945.
- Shen, W.; Dai, L.; Shim, B.; Mumtaz, S.; Wang, Z. Joint CSIT acquisition based on low-rank matrix completion for FDD massive MIMO systems. IEEE Commun. Lett. **2015**, 19, 2178–2181.
- Alkhateeb, A.; El Ayach, O.; Leus, G.; Heath, R.W. Hybrid precoding for millimeter wave cellular systems with partial channel knowledge. In Proceedings of the 2013 Information Theory and Applications Workshop (ITA), San Diego, CA, USA, 10–15 February 2013; pp. 1–5.
- Dai, L.; Gao, X.; Quan, J.; Han, S.; Chih-Lin, I. Near-optimal hybrid analog and digital precoding for downlink mmWave massive MIMO systems. In Proceedings of the 2015 IEEE International Conference on Communications (ICC), London, UK, 8–12 June 2015; pp. 1334–1339.
- Shiu, D.S.; Foschini, G.J.; Gans, M.J.; Kahn, J.M. Fading correlation and its effect on the capacity of multielement antenna systems. IEEE Trans. Commun. **2000**, 48, 502–513.
- Nion, D.; De Lathauwer, L. An enhanced line search scheme for complex-valued tensor decompositions. Application in DS-CDMA. Signal Process. **2008**, 88, 749–755.
- Alkhateeb, A.; El Ayach, O.; Leus, G.; Heath, R.W. Channel estimation and hybrid precoding for millimeter wave cellular systems. IEEE J. Sel. Top. Signal Process. **2014**, 8, 831–846.
- Hu, C.; Dai, L.; Mir, T.; Gao, Z.; Fang, J. Super-resolution channel estimation for mmWave massive MIMO with hybrid precoding. IEEE Trans. Veh. Technol. **2018**, 67, 8954–8958.
- Srivastava, S.; Mishra, A.; Rajoriya, A.; Jagannatham, A.K.; Ascheid, G. Quasi-Static and Time-Selective Channel Estimation for Block-Sparse Millimeter Wave Hybrid MIMO Systems: Sparse Bayesian Learning (SBL) Based Approaches. IEEE Trans. Signal Process. **2019**, 67, 1251–1266.

**Figure 3.** Bit error rate (BER) performance of different receivers versus signal-to-noise ratio (SNR).

**Figure 5.** BER and NMSE performance of the traditional alternating least squares (T-ALS) and O-LM algorithms for different L and $\rho$.

**Figure 10.** BER performance of the proposed receiver for the multi-user massive multiple-input multiple-output (MIMO) system.

**Table 1.** NMSE of the T-ALS and O-LM algorithms versus SNR for different L and $\rho$.

| SNR (dB) | 0 | 2 | 4 | 6 | 8 | 10 | 12 | 14 | 16 | 18 | 20 | 22 | 24 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| T-ALS, $L=8$, $\rho=0$ | 0.3917 | 0.2265 | 0.1348 | 0.0822 | 0.0514 | 0.0318 | 0.0199 | 0.0124 | 0.0077 | 0.0049 | 0.0030 | 0.0019 | 0.0012 |
| O-LM, $L=8$, $\rho=0$ | 0.3900 | 0.2253 | 0.1355 | 0.0814 | 0.0503 | 0.0317 | 0.0198 | 0.0124 | 0.0077 | 0.0049 | 0.0030 | 0.0019 | 0.0012 |
| T-ALS, $L=8$, $\rho=0.8$ | 0.5920 | 0.3113 | 0.1803 | 0.1079 | 0.0666 | 0.0407 | 0.0251 | 0.0156 | 0.0100 | 0.0061 | 0.0039 | 0.0024 | 0.0015 |
| O-LM, $L=8$, $\rho=0.8$ | 0.5777 | 0.3090 | 0.1814 | 0.1078 | 0.0664 | 0.0404 | 0.0251 | 0.0161 | 0.0098 | 0.0062 | 0.0039 | 0.0024 | 0.0015 |
| T-ALS, $L=7$, $\rho=0.8$ | 1.5305 | 0.6519 | 0.3055 | 0.1623 | 0.1004 | 0.0570 | 0.0360 | 0.0221 | 0.0139 | 0.0091 | 0.0051 | 0.0034 | 0.0021 |
| O-LM, $L=7$, $\rho=0.8$ | 1.4047 | 0.6228 | 0.2849 | 0.1770 | 0.0952 | 0.0580 | 0.0354 | 0.0221 | 0.0136 | 0.0087 | 0.0055 | 0.0034 | 0.0021 |

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).