Accurate Channel Estimation and Adaptive Underwater Acoustic Communications Based on Gaussian Likelihood and Constellation Aggregation

Achieving accurate channel estimation and adaptive communications with moving transceivers is challenging due to rapid changes in underwater acoustic channels. We achieve accurate estimation of fast time-varying underwater acoustic channels by using the superimposed training scheme, in which the training sequence and the symbol sequence are linearly superimposed, together with a powerful channel estimation algorithm and turbo equalization. To realize this, we develop a 'global' channel estimation algorithm based on Gaussian likelihood, where the channel correlation among the segments is fully exploited by using the product of the Gaussian probability-density functions of the segments, thereby realizing an ideal channel estimation of each segment. Moreover, the Gaussian-likelihood-based channel estimation is embedded in turbo equalization, where the information exchange between the equalizer and the decoder is carried out in an iterative manner to achieve an accurate channel estimation of each segment. In addition, an adaptive communication algorithm based on constellation aggregation is proposed to resist severe fast time-varying multipath interference and environmental noise, where the encoding rate is automatically determined for reliable underwater acoustic communications according to the constellation aggregation degree of the equalization results. Field experiments with moving transceivers (communication distance of approximately 5.5 km) were carried out in the Yellow Sea in 2021, and the experimental results verify the effectiveness of the two proposed algorithms.


Introduction
Underwater acoustic communication technology can be widely applied in many fields, such as marine pollution monitoring, underwater rescue, and autonomous underwater vehicle (AUV) positioning and navigation. However, underwater acoustic channels are characterized by time-varying multipath propagation. In particular, when there is relative motion between the transceivers, the channel changes rapidly, resulting in fast time-varying multipath interference, which distorts the received signal waveform and leads to degradation or even failure of the decoding performance of the underwater acoustic communication system [1][2][3].
To address time-varying underwater acoustic channels and environmental noise, adaptive communication schemes have been proposed, where the transmitter automatically selects an appropriate modulation according to the instantaneous channel state information (CSI) and signal-to-noise ratio (SNR). Adaptive communication schemes can be divided into two main categories, feedback adaptive communications and direct adaptive communications, as shown in Figure 1a,b, respectively. For feedback adaptive communications, as shown in Figure 1a, User A sends a test signal to User B. User B estimates the CSI and SNR based on the test signal, and then feeds them back to User A. User A selects a modulation according to the feedback CSI and SNR, and then transmits the data information to User B by using the selected modulation [4,5]. For direct adaptive communications, in Figure 1b, User A initially selects a modulation, such as direct sequence spread spectrum (DSSS), and transmits the data information to User B by using DSSS. User B identifies the modulation (i.e., identifies DSSS), demodulates and decodes, and estimates the CSI and SNR, such as a simple channel and SNR = 20 dB. According to the estimated CSI and SNR, User B selects a new modulation, such as orthogonal frequency division multiplexing (OFDM), and then feeds data information back to User A by using OFDM. Similarly, User A identifies, demodulates and decodes, and estimates the CSI and SNR. Then, according to the estimated CSI and SNR, User A selects the original or a new modulation, and transmits the data information to User B by using the selected modulation [6,7]. The biggest difference between the two adaptive communication schemes is that the second scheme does not need to send a test signal. Therefore, for a specified amount of data information, the second scheme saves communication time, thereby reducing or even avoiding time variation of the channel during communications.
Therefore, the second scheme is more suitable than the first for the fast time-varying channels incurred by underwater acoustic communications with moving transceivers. For adaptive communications, there are mainly four candidate modulation and demodulation schemes: multiple frequency shift keying (MFSK), spread spectrum, orthogonal frequency division multiplexing (OFDM) and single carrier. The transmission rate of MFSK is low; spread spectrum technology usually uses high-order spreading codes, which have a low communication efficiency [8]; and OFDM has poor anti-frequency-offset performance [9][10][11][12][13][14][15][16]. Therefore, with its high transmission rate and good anti-frequency-offset characteristics [17][18][19], single-carrier technology is adopted in this paper. It can be used with a variety of encoding rates to realize adaptive underwater acoustic communications with moving transceivers.
Channel estimation is one of the key factors in realizing reliable adaptive communications. At present, there are mainly three kinds of underwater acoustic channel estimation algorithms: reference-signal-based estimation algorithms, blind estimation algorithms and semi-blind estimation algorithms [20][21][22][23][24][25][26][27][28]. Among the three, reference-signal-based algorithms offer the strongest channel estimation and tracking capability. Much research on them has been conducted by several teams, such as those of the University of Connecticut, the Massachusetts Institute of Technology, the Institute of Acoustics of the Chinese Academy of Sciences, and Harbin Engineering University. So far, all of the above channel estimation algorithms based on reference signals have adopted the traditional time-multiplexed training sequence scheme. In order to further improve the tracking capability for time-varying channels, the joint team of Qingdao University of Technology, the University of Wollongong and the University of Western Australia [29] proposed a superimposed training scheme for underwater acoustic communications, where the training sequence and the symbol sequence are linearly superimposed so that the channel information of the training sequence and the symbol sequence is completely consistent.
As in [29,30], the superimposed training (ST) scheme and the segment strategy are used in this paper to enhance the estimation and tracking capability of fast time-varying channels. To realize the full potential of the ST scheme and the segment strategy, a channel estimation algorithm based on Gaussian likelihood (GL) is proposed. The product of the Gaussian probability-density functions of the segments is still a Gaussian probability-density function, which can be parameterized by its mean and variance, where the mean is the channel estimate and the variance is the deviation of the channel estimate. The variance of the Gaussian probability-density function after the product is less than the variance of the Gaussian probability-density function of each segment, which means that the channel estimate of a segment after the product is more accurate than the estimate from the segment itself. This is equivalent to estimating the channel information of the segment by using the 'whole' data block [29,30], thereby leading to an ideal channel estimation of the segment.
It is important to note that the proposed GL algorithm can achieve the same channel estimation and tracking performance as [29,30] in a 'novel' Gaussian product way, because it can be seen as a message-passing method in the Gaussian scenario [29,30]. The message-passing idea was first proposed in [30] to improve the channel estimation capability; it was then applied in underwater acoustic communications with a communication distance of approximately 1 km [29]. Differently from [29,30], in this paper, the same idea as message passing [29,30] is realized in a 'novel' Gaussian product way; in particular, the proposed algorithm is applied in actual underwater acoustic communication machines, and the effective communication distance is extended from 1 km to 5.5 km.
In addition, an adaptive communication algorithm based on constellation aggregation (CA) is proposed. The encoding rate (such as rate-1/2, rate-1/4, rate-1/8 or rate-1/16) is automatically selected based on the aggregation degree of the constellation points after linear minimum mean square error (LMMSE) equalization. The working principles of the proposed and the traditional direct adaptive communications differ: the traditional scheme selects the modulation based on CSI and SNR, whereas the proposed scheme selects the encoding rate based on the constellation aggregation, which makes the selection more accurate than the traditional CSI- and SNR-based approach. In order to fully realize the potential of the GL algorithm and the CA algorithm, the single-carrier communication system and turbo equalization are adopted. The channel estimator (GL), the constellation aggregation decision maker (CA), the equalizer and the decoder are combined and performed jointly in an iterative manner (turbo equalization), realizing an accurate estimation of fast time-varying channels and reliable communications through the information exchange between the equalizer and the decoder. Field experiments with moving transceivers (communication distance of approximately 5.5 km) were carried out in the Yellow Sea in 2021 to verify the effectiveness of the proposed algorithms.
The major contributions of this paper are summarized as follows: (1) A channel estimation algorithm, named GL, is proposed, realizing the same performance as the bidirectional channel estimation algorithm [29,30] in a novel product way of probability-density functions; (2) An adaptive communication algorithm based on constellation aggregation is proposed to improve the applicability of the system for different environments; (3) GL-based channel estimation, LMMSE equalization and decoding are iteratively performed (turbo equalization), leading to a significant performance improvement of the whole system; (4) The proposed algorithms are applied in actual underwater acoustic communication machines to verify their effectiveness.
The remainder of the paper is organized as follows. The system structure is described in Section 2. The channel estimation algorithm based on Gaussian likelihood and the adaptive communication algorithm based on constellation aggregation are presented in Section 3. Simulations, experiments and the conclusion are presented in Sections 4-6, respectively. Throughout the paper, superscripts (·)^{Tr} and (·)^H represent transpose and conjugate transpose, respectively.

System Structure
The system structure of underwater acoustic communications is shown in Figure 2.
At the transmitter, the information bit sequence is encoded, interleaved and mapped to symbols by using quadrature phase shift keying (QPSK). The training sequence and the symbol sequence are linearly superimposed, the resultant sequence is partitioned into multiple segments and a cyclic prefix (CP) is appended to each segment to avoid inter-segment interference and to facilitate low-complexity equalization. In-phase/quadrature (IQ) modulation is applied to each CP-plus-segment. Hyperbolic frequency modulation (HFM) signals with negative and positive modulation rates are used as the head and the tail of the signal frame, respectively. Then, the resultant signals are transmitted by a transducer. At the receiver, the HFM signals are used to estimate and eliminate the average frequency offset and to synchronize the received signals [31]. Then, the transmitted signals are extracted, band-pass filtering and IQ demodulation are carried out and CPs are removed. With the resultant signals, we estimate the initial channels ĥ_{nF} of all segments and the noise powers p̂_n based on the GL algorithm, and obtain a 'clean' signal z_n after training elimination for data equalization. Then, LMMSE equalization, CA decision and decoding are carried out based on ĥ_{nF}, p̂_n and z_n, as shown in Figure 3. The quantities ĥ_{nF}, p̂_n and z_n appear on both sides of the equalizer in Figure 3 and represent the same things; on the right side they denote the initial values, and on the left side the iteratively updated values. The iterative process proceeds until a pre-set number of iterations is reached, and hard decisions are made on each information bit in the last iteration. An illustration of the CA decision is shown in Figure 4. We determine the returned encoding rate by comparing the aggregation degree with the pre-set values ξ_in and ξ_ex. When the constellation aggregation degree ξ_m < ξ_in, the returned encoding rate is increased. When the constellation aggregation degree ξ_m > ξ_ex, the returned encoding rate is reduced.
When the constellation aggregation degree satisfies ξ_in ≤ ξ_m ≤ ξ_ex, the returned encoding rate is kept the same. The turbo equalization is shown in Figure 3. Based on ĥ_{nF}, p̂_n and z_n, LMMSE equalization and decoding are carried out for each segment, where the LMMSE equalization can be efficiently implemented with the fast Fourier transform (FFT) and the initial a priori logarithm likelihood ratios (LLRs) of the interleaved encoded bits are set to zero, i.e., L_a = 0. The soft detection outputs of the segments are collected to form the extrinsic LLRs L_e, and then deinterleaving and decoding are carried out. The output of the decoder is used by both the equalizer and the channel estimator, so there are two branches from the decoder. Both branches use the latest decoding results, i.e., the LLRs of the encoded bits from the decoder, and they are updated in each iteration. In the first branch, the LLRs of the encoded bits are interleaved and input to the equalizer. In the second branch, hard decisions on the encoded bits are made, followed by interleaving and QPSK mapping to obtain the (estimated) symbol sequence, which, together with the training sequence, is used for accurate channel (re)estimation. After that, based on L_a from the first branch and ĥ_{nF}, p̂_n and z_n from the second branch, LMMSE equalization is performed to obtain L_e, which is input into the decoder for the next round of iteration (turbo equalization).

Accurate Channel Estimation and Adaptive Communications
A block of an information bit sequence denoted by b = [b_1, ···, b_{L_b}]^{Tr} is encoded and interleaved, yielding an interleaved coded bit sequence denoted by c = [c_1, ···, c_{L_i}]^{Tr}. Denote the periodic training sequence, with period T, as t. The training sequence and the symbol sequence are linearly superimposed with a power ratio r, yielding the transmitted signal s with length L_f, where L_f is an integer multiple of T.
Divide s into N_y segments, i.e., s = [s_1^{Tr}, ···, s_{N_y}^{Tr}]^{Tr}, where the length of each segment is L_s and L_f = N_y × L_s. Taking s_n as an example, the corresponding symbol sequence is f_n and the corresponding training sequence is t_{L_s}. A CP is added to each segment so that the channel can be represented by the circulant matrix H_n. Denote the white Gaussian noise as w.
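As an illustration of the transmit-side construction described above, the following sketch superimposes a unit-power QPSK symbol block with a periodic training sequence, segments it and prepends CPs. The scaling s = √r · t + f is an assumption for a power ratio of r:1 (the paper's exact normalization may differ), and all lengths are example values.

```python
import numpy as np

rng = np.random.default_rng(0)

r = 0.25       # training-to-symbol power ratio (0.25:1, as used in the paper)
T = 64         # training period (example value)
L_f = 1024     # block length, an integer multiple of T
L_s = 256      # segment length (the S256 case)
L_cp = 128     # cyclic prefix length

# Unit-power periodic training sequence and unit-power QPSK symbols.
t_period = rng.choice([-1.0, 1.0], T)
t_full = np.tile(t_period, L_f // T)
f = (rng.choice([-1.0, 1.0], L_f) + 1j * rng.choice([-1.0, 1.0], L_f)) / np.sqrt(2)

# Linear superposition with power ratio r:1 (assumed scaling).
s = np.sqrt(r) * t_full + f

# Partition into N_y = L_f / L_s segments and prepend a CP to each.
segments = s.reshape(-1, L_s)
tx = [np.concatenate([seg[-L_cp:], seg]) for seg in segments]
```

The CP is simply the last L_cp samples of each segment copied to its front, which is what turns linear convolution with the channel into circular convolution per segment.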
Denote a segment of the received signal after CP removal as y_n; its length is an integer multiple of T, i.e., L_s = pT. Then, we can represent y_n as y_n = [y_{1T}^{Tr}, ···, y_{pT}^{Tr}]^{Tr}.
The received signal y_n can be written as

y_n = H_n s_n + w.

Define L_c as the channel order, where T ≥ L_c; then, the Toeplitz matrix T_t formed by the periodic training sequence can be represented element-wise as [T_t]_{i,k} = t_{(i−k) mod T}, i = 1, ···, T, k = 1, ···, L_c. From Appendix A, based on the least squares (LS) algorithm, the channel estimate of a segment can be computed as

ĥ_n = (T_t^H T_t)^{−1} T_t^H ȳ_n,

where ȳ_n = (1/p) Σ_{q=1}^{p} y_{qT} is the average of the p length-T sub-blocks of y_n.
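A minimal numerical sketch of the per-segment LS estimate, under the assumption that the Toeplitz matrix is built from cyclic shifts of one training period (the names A, h_true and the sizes are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

T_p = 16   # training period (T >= L_c)
L_c = 4    # channel order
t = rng.choice([-1.0, 1.0], T_p)   # one period of the training sequence

# Toeplitz matrix of the periodic training: A[i, k] = t[(i - k) mod T_p],
# so that (for the training component) y = A @ h.
A = np.array([[t[(i - k) % T_p] for k in range(L_c)] for i in range(T_p)])

h_true = np.array([1.0, 0.5, -0.3, 0.1])          # example channel
y = A @ h_true + 0.01 * rng.standard_normal(T_p)  # noisy received training part

# Least-squares channel estimate: h_hat = (A^H A)^{-1} A^H y
h_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```

With low noise, h_hat recovers h_true closely; in the superimposed-training setting, the data symbols act as additional noise that the averaging over sub-blocks helps suppress.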

Accurate Channel Estimation Based on Gaussian Likelihood
Channel estimates of two consecutive segments can be expressed as two independent and identically distributed probability-density functions in the Gaussian scenario, denoted p_{n−1}(x) and p_n(x). Denote µ_{ĥ_{n−1}} and σ²_{ĥ_{n−1}} as the mean and variance of the channel estimate ĥ_{n−1} of the (n−1)-th segment, and µ_{ĥ_n} and σ²_{ĥ_n} as the mean and variance of the channel estimate ĥ_n of the n-th segment. Then,

p_{n−1}(x) = N(x; µ_{ĥ_{n−1}}, σ²_{ĥ_{n−1}}), p_n(x) = N(x; µ_{ĥ_n}, σ²_{ĥ_n}).

Denote ĥ_{nF} as the channel estimate after information fusion of ĥ_{n−1} and ĥ_n, with mean µ_{ĥ_{nF}} and variance σ²_{ĥ_{nF}}. The product of the two probability-density functions can be expressed as

p_{n−1}(x) p_n(x) = C_A N(x; µ_{ĥ_{nF}}, σ²_{ĥ_{nF}}),

where

1/σ²_{ĥ_{nF}} = 1/σ²_{ĥ_{n−1}} + 1/σ²_{ĥ_n}

and

µ_{ĥ_{nF}} = σ²_{ĥ_{nF}} (µ_{ĥ_{n−1}}/σ²_{ĥ_{n−1}} + µ_{ĥ_n}/σ²_{ĥ_n}).

It is important to note that

σ²_{ĥ_{nF}} < min(σ²_{ĥ_{n−1}}, σ²_{ĥ_n}),

i.e., the variance after the product is smaller than the variance of either segment alone, so the fused channel estimate ĥ_{nF} is more accurate and closer to the real channel than ĥ_{n−1} or ĥ_n. C_A is the scale factor of the Gaussian product; it is not a function of x and can be normalized away. Therefore, after normalization, the density after the product is again Gaussian, i.e.,

p_{nF}(x) = N(x; µ_{ĥ_{nF}}, σ²_{ĥ_{nF}}),

where N(·) represents the Gaussian distribution. The message fusion formulas for the variance and the mean, (10) and (12), are equivalent to the message fusion Formulas (18) and (19) of [29]; that is, the proposed GL algorithm, using a 'novel' Gaussian product, achieves the same performance as the bidirectional channel estimation algorithm in [29], with the same computational complexity.
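The fusion step is just the product-of-Gaussians rule above; a minimal sketch with illustrative numbers:

```python
import numpy as np

def fuse(mu1, var1, mu2, var2):
    """Product of N(mu1, var1) and N(mu2, var2): up to a scale factor the
    result is Gaussian with 1/var_F = 1/var1 + 1/var2 and
    mu_F = var_F * (mu1/var1 + mu2/var2)."""
    var_f = 1.0 / (1.0 / var1 + 1.0 / var2)
    mu_f = var_f * (mu1 / var1 + mu2 / var2)
    return mu_f, var_f

# Two equally reliable per-tap estimates from consecutive segments:
mu_f, var_f = fuse(0.9, 0.04, 1.1, 0.04)   # mean 1.0, variance 0.02
```

For equal variances the fused mean is the average of the two estimates and the fused variance is halved, which is exactly the accuracy gain the text describes.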
The formulas of the forward passing and the backward passing are based on the first-order channel evolution model [33]

h_n = α_p h_{n−1} + n_p (forward)

and

h_n = α_p h_{n+1} + n_p (backward),

where α_p is the channel correlation coefficient of the consecutive segments, n_p is zero-mean Gaussian white noise and β is its power. Take the n-th segment as an example to show the flow of the 'global' channel estimation; the flow diagram is shown in Figure 5. For forward message passing, the local channel estimate ĥ_1 of the first segment is fused with the local channel estimate ĥ_2 of the second segment to obtain a fused channel estimate ĥ_{2f} by using (10) and (12). Then, the message update ĥ_{2a} can be obtained by using (13), and so on, until the fused channel estimate ĥ_{nf} is acquired. For backward message passing, the local channel estimate ĥ_{N_y} of the last segment is fused with the local channel estimate ĥ_{N_y−1} of the (N_y−1)-th segment to obtain a fused channel estimate ĥ_{(N_y−1)f} by using (10) and (12). Then, the message update ĥ_{(N_y−1)b} can be obtained by using (14), and so on, until the fused channel estimate ĥ_{(n+1)b} is acquired. Finally, ĥ_{nf} and ĥ_{(n+1)b} are fused to obtain the 'global' channel estimate ĥ_{nF} of the n-th segment. A proper number of zeros is appended to ĥ_{nF} to form a length-L_s vector, i.e., ĥ_{nF} = [ĥ_{nF}^{Tr}, 0, ···, 0]^{Tr}.
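A runnable sketch of the bidirectional fusion over segment-wise estimates, assuming the first-order model h_n = α_p h_{n−1} + n_p; the message schedule follows Figure 5, but the exact update equations (13) and (14) of the paper are not reproduced, so this is an illustration rather than the paper's implementation:

```python
import numpy as np

def fuse(m1, v1, m2, v2):
    # Product-of-Gaussians fusion (mean, variance).
    v = 1.0 / (1.0 / v1 + 1.0 / v2)
    return v * (m1 / v1 + m2 / v2), v

def predict(m, v, alpha, beta):
    # Pass a message across one segment boundary under the assumed
    # AR(1) model: the mean is scaled, the variance broadened by beta.
    return alpha * m, alpha ** 2 * v + beta

def global_estimates(local_mu, local_var, alpha, beta):
    N = len(local_mu)
    # Forward sweep: fuse each local estimate with the predicted message.
    fwd = [(local_mu[0], local_var[0])]
    for n in range(1, N):
        m, v = predict(*fwd[-1], alpha, beta)
        fwd.append(fuse(m, v, local_mu[n], local_var[n]))
    # Backward sweep, symmetrically.
    bwd = [None] * N
    bwd[N - 1] = (local_mu[N - 1], local_var[N - 1])
    for n in range(N - 2, -1, -1):
        m, v = predict(*bwd[n + 1], alpha, beta)
        bwd[n] = fuse(m, v, local_mu[n], local_var[n])
    # 'Global' estimate: fuse the forward message at n with the predicted
    # backward message from segment n+1 (no local estimate counted twice).
    out = []
    for n in range(N):
        if n == N - 1:
            out.append(fwd[n])
        else:
            m, v = predict(*bwd[n + 1], alpha, beta)
            out.append(fuse(*fwd[n], m, v))
    return out

# Four noisy per-tap estimates of a (nearly) static channel:
est = global_estimates([0.9, 1.1, 1.0, 1.05], [0.04] * 4, alpha=1.0, beta=1e-9)
```

With α_p = 1 and negligible β, every segment's global estimate fuses all four local estimates, so its variance drops to about a quarter of the local variance.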

Figure 5. Accurate channel estimation of the n-th segment (message fusion with forward and backward passing).

Training Interference Elimination, Estimation of Noise Power and Turbo Equalization
We use F to denote the normalized discrete Fourier transform (DFT) matrix, i.e., its (m, n)-th element is L_s^{−1/2} e^{−j2πmn/L_s} with j = √−1. Take the n-th segment as an example. The circulant matrix H_n can be diagonalized by the DFT matrix, i.e., H_n = F^H D_n F, where D_n is a diagonal matrix. After the training interference elimination, the frequency-domain received signal can be written as

z_n = F y_n − D̂_n F t_{L_s}.

Based on the estimated channel, the diagonal elements of the diagonal matrix D̂_n can be acquired as follows:

diag(D̂_n) = √L_s F ĥ_{nF}.

As the power of the transmitted symbol sequence is set to 1, the noise power σ²_n (i.e., p̂_n) for the n-th segment is the difference between the power P_{y_n} of the received segment and the corresponding channel energy E_{ĥ_{nF}}, i.e.,

p̂_n = P_{y_n} − E_{ĥ_{nF}}.

Take the n-th segment as an example of LMMSE equalization. Following [30,33,34], the a priori mean and variance of the symbol f_i (of the symbol sequence f_n) are as follows:

m_i^a = (1/√2) [tanh(L_a^n(c¹_i)/2) + j tanh(L_a^n(c²_i)/2)], v_i^a = 1 − |m_i^a|²,

where both the initial values of L_a^n(c¹_i) and L_a^n(c²_i) (the initial a priori LLRs of the interleaved encoded bits) are set to 0. The estimated interleaved bit sequence is converted to the symbol sequence m^a = [m_1^a, ···, m_{L_s}^a] by using (18). The a posteriori mean and variance of the symbol f_i are then computed by the LMMSE estimator (19), in which all matrices involved are diagonalized by the DFT; the a posteriori mean sequence m^p is the estimated symbol sequence after LMMSE equalization. It is noted that the computational complexity of the LMMSE equalizer is dominated by (19), and it is only on the order of log(L_s) per symbol thanks to the FFT. In addition, the extrinsic mean m_i^e and variance v_i^e of the symbol f_i are obtained from the a posteriori statistics by removing the a priori contribution, as in (20). As QPSK mapping is used, the extrinsic LLRs of the interleaved encoded bits c¹_i and c²_i can be expressed as

L_e^n(c¹_i) = 2√2 Re(m_i^e)/v_i^e, L_e^n(c²_i) = 2√2 Im(m_i^e)/v_i^e.

The estimated symbol sequence is converted to the extrinsic LLRs (i.e., the estimated interleaved bit sequence L_e^n) by using (19)-(21).
The extrinsic LLRs of the segments are denoted collectively as L e and then input into the decoder for the next round of iteration (turbo equalization).
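The a priori symbol statistics of (18) and the noise-power estimate can be sketched as follows; the Gray-mapping sign convention (each rail at ±1/√2) is an assumption, as is the exact normalization:

```python
import numpy as np

def qpsk_soft_stats(La_c1, La_c2):
    """A priori mean and variance of a QPSK symbol from the LLRs of its two
    interleaved coded bits (assumed Gray mapping with rails at +/- 1/sqrt(2))."""
    m = (np.tanh(La_c1 / 2.0) + 1j * np.tanh(La_c2 / 2.0)) / np.sqrt(2)
    v = 1.0 - np.abs(m) ** 2
    return m, v

def noise_power(y_n, h_hat):
    """Noise power of a segment: received power minus estimated channel
    energy (transmitted symbol power normalized to 1)."""
    return np.mean(np.abs(y_n) ** 2) - np.sum(np.abs(h_hat) ** 2)

m0, v0 = qpsk_soft_stats(0.0, 0.0)     # zero LLRs: mean 0, variance 1
m1, v1 = qpsk_soft_stats(20.0, 20.0)   # confident bits: mean near a corner
```

At the first iteration the LLRs are zero, so the equalizer sees zero-mean, unit-variance symbols; as the decoder's LLRs grow, the soft symbols converge to constellation points and the residual variance shrinks.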

Adaptive Underwater Acoustic Communications Based on Constellation Aggregation
We set a certain iteration number, such as one iteration, to obtain the aggregation degree after LMMSE equalization, i.e., the per-symbol aggregation degree ξ_i given by (22), where f̂_i is the a posteriori mean of the symbol f_i, and its real part and imaginary part are denoted by f̂_i^{Re} and f̂_i^{Im}, respectively.
We compute the mean of all ξ_i over a frame of information bits, i.e., ξ_m = (1/N_f) Σ_i ξ_i, where N_f is the number of symbols in the frame. Denote ξ_in and ξ_ex as the inner boundary and the outer boundary, respectively. As shown in Figure 4, when ξ_m < ξ_in, the encoding rate is increased automatically; when ξ_m > ξ_ex, it is reduced automatically; when ξ_in ≤ ξ_m ≤ ξ_ex, it is kept the same.
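The decision logic can be sketched as follows. Since (22) is not reproduced in the text, the per-symbol metric below is a stand-in (squared distance of each equalized symbol to the nearest nominal QPSK point); only the threshold logic with the boundaries ξ_in and ξ_ex follows the paper:

```python
import numpy as np

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def aggregation_degree(symbols):
    """Stand-in aggregation metric: squared distance of each equalized
    symbol to its nearest nominal QPSK point (smaller = tighter cluster)."""
    d = np.abs(symbols[:, None] - QPSK[None, :]) ** 2
    return d.min(axis=1)

def rate_decision(xi_m, xi_in=0.03, xi_ex=0.2):
    """Threshold logic of Figure 4 (boundaries from Table 2)."""
    if xi_m < xi_in:
        return "improve"
    if xi_m > xi_ex:
        return "reduce"
    return "keep"
```

Tightly clustered equalizer outputs give a small ξ_m and trigger a higher (less protected) encoding rate; scattered outputs trigger a lower rate.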

Simulation Results
The simulation parameters are shown in Table 1. Rate-1/2, rate-1/4, rate-1/8 and rate-1/16 convolutional codes and QPSK mapping are used. A variety of power ratios of the training sequence and the symbol sequence, such as 0.15:1, 0.2:1, 0.25:1 and 0.3:1, are used. The standard block with 1024 symbols is divided into a number of segments with a variety of lengths, including 128 symbols, 256 symbols, 512 symbols and 1024 symbols. The corresponding cases are denoted by S128, S256, S512 and W1024, respectively, where the prefix W means that the standard block is treated as a segment. W1024 is used as the benchmark turbo system. S128, S256 and S512 are used in the proposed GL turbo system. The CP is set to 128 symbols. One frame includes 100 blocks, and one block includes 1024 information bits. Assume that a 4 kHz bandwidth is provided. For S256, with rate-1/2, rate-1/4, rate-1/8 and rate-1/16 convolutional codes, the transmission rates are 2667 bits/s, 1333 bits/s, 667 bits/s and 333 bits/s, respectively, and the corresponding bandwidth efficiencies are 0.67 bps/Hz, 0.33 bps/Hz, 0.17 bps/Hz and 0.08 bps/Hz, respectively. The SNR is from −4 dB to 13 dB. A static channel, as shown in Figure 6, and the white Gaussian noise are used.
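The stated S256 rates can be reproduced with simple arithmetic, assuming 4000 symbols/s in the 4 kHz band (1 symbol/Hz), 2 bits/symbol for QPSK, and counting only the 128-symbol CP per 256-symbol segment as overhead (HFM head/tail and the superimposed training are not counted):

```python
symbol_rate = 4000                  # symbols/s in a 4 kHz band (assumed)
data_fraction = 256 / (256 + 128)   # share of each S256 segment carrying data

# Transmission rate in bits/s for each convolutional code rate.
rates = {r: round(symbol_rate * 2 * r * data_fraction)
         for r in (1/2, 1/4, 1/8, 1/16)}

# Bandwidth efficiency in bps/Hz over the 4 kHz band.
effs = {r: round(b / 4000, 2) for r, b in rates.items()}
```

This reproduces the 2667/1333/667/333 bits/s and 0.67/0.33/0.17/0.08 bps/Hz figures quoted in the text, supporting the CP-overhead reading of the rate calculation.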
The BER performance for S256 is shown in Figure 7. It can be seen from the results that both the ST scheme and the channel estimate fusion based on Gaussian likelihood are effective. The lower the encoding rate, the better the BER performance the system can achieve. Taking Figure 7b as an example, with SNR = 7 dB and the rate-1/4 convolutional code, after three iterations, 100 blocks of information bits are correctly decoded. From Figure 7a, after two iterations, all information bits with the rate-1/2 convolutional code and SNR = 13 dB are correctly decoded.

Taking a block of 1024 information bits with the rate-1/4 convolutional code as an example, where no noise is added and the channel in Figure 6 is used, the channel estimation and equalization results are shown in Figure 8. S256 and a static channel are used; therefore, the channels of the four consecutive segments are the same and perfectly correlated, i.e., α_p = 1. When turbo equalization is not used, i.e., with 0 iterations, the corresponding channel estimate and equalization results are shown in Figure 8(a1),(b1). The estimated channel in Figure 8(a1) is obviously different from the real channel in Figure 6, and the constellation points after LMMSE equalization in Figure 8(b1) are significantly scattered. When turbo equalization is performed once, i.e., after one iteration, the corresponding equalization results are shown in Figure 8(b2), where it is noted that the estimated channels have been updated before turbo equalization, and the aggregation degree of the constellation points after LMMSE equalization becomes significantly better.
When turbo equalization is performed two times, i.e., after two iterations, the corresponding equalization results are shown in Figure 8(b3), where the constellation points after LMMSE equalization are ideally condensed together, and the corresponding estimated channel in Figure 8(a2) is exactly the same as the real channel in Figure 6, demonstrating the effectiveness of the ST scheme used for enhancing the channel estimation and tracking capability and of the GL algorithm used for the channel information fusion of the segments. From Figure 8b, it is clear that we can carry out adaptive communications according to the pre-set constellation aggregation degree threshold. It is important to note that we do not show the adaptive communication performance separately, as the results can be seen clearly from Figure 7.
Next, we test the BER performance with a variety of power ratios of the training sequence and the symbol sequence. Taking S128 with the rate-1/2 convolutional code as an example, the BER performance of the system is shown in Figure 9a. The green triangle line represents the BER performance with a power ratio of 0.2:1 and SNR = 13 dB; after three iterations, 100 blocks of information bits are correctly decoded. Considering the complexity and variability of underwater acoustic channels incurred by the moving transceivers, the power ratio 0.25:1 is used in the follow-up simulations and experiments.

Assuming that SNR = 13 dB and the power ratio is 0.25:1, the BER performance of the system is shown in Figure 9b. The blue star line represents the BER performance of the system with the training interference elimination; after two iterations, 100 blocks of information bits are correctly decoded, demonstrating the effectiveness of the training interference elimination. The pink square line represents the BER performance of the system without the training interference elimination, and it can be seen that, without the training interference elimination, the system simply does not work.

Then, we test the BER performance of the system by using the ST scheme and the GL algorithm, where W1024 is used as the benchmark turbo system. The BER performance comparison is shown in Figure 10. From Figure 10a, if the GL algorithm is not used, the system does not work for any of the segment lengths. From Figure 10b, if the GL algorithm is used to fuse the local channel estimates into global channel estimates, it can be seen that, no matter how long the segment is, the BER performances for S128, S256, S512 and W1024 are similar. This is because, regardless of the segment length, the 'whole' standard symbol block is used to acquire the global channel estimate for each segment.
This demonstrates that the proposed GL turbo system (S128, S256 and S512) can achieve a similar performance as the benchmark turbo system (W1024).

Experimental Results
Two separate underwater acoustic communication experiments with moving transceivers were carried out in the Yellow Sea in 2021, named Yellow Sea 1 and Yellow Sea 2; their deployments are shown in Figure 11a,b, respectively. We did not use a vertical array in the experiments: the two receiving hydrophones were completely independent, so multiple receiver channels were not exploited in the system. The height of the sea waves was 0.5-1 m; the sea temperature was 5.6 °C; the south wind was force 3 to 4; and the ship with the transducer drifted away from the ship with the hydrophone at a speed of approximately 0.5 m/s. The detailed experimental parameters are shown in Table 1. For the two experiments, QPSK mapping and a training-to-symbol power ratio of 0.25:1 were used; both communication distances were approximately 5.5 km; one frame included 16 blocks, and one block included 1024 information bits; the single-carrier communication system was used; the center frequency was 12 kHz with a bandwidth of 4 kHz; and the sampling frequency was 96 kHz. The signal structure for the field experiments is shown in Figure 12.

Adaptive Underwater Acoustic Communications with SNR = 9 dB
The experimental deployment and instruments for Yellow Sea 1 are shown in Figures 11a and 13, respectively. Rate-1/4, rate-1/8 and rate-1/16 convolutional codes were adopted. S256, S512 and W1024 were used, and the CP was set to 128 symbols. Taking S256 as an example, for the rate-1/4, rate-1/8 and rate-1/16 convolutional codes, the transmission rates were 1333 bits/s, 667 bits/s and 333 bits/s, respectively, and the corresponding bandwidth efficiencies were 0.33 bps/Hz, 0.17 bps/Hz and 0.08 bps/Hz, respectively. Both transceivers were deployed at a depth of 4 m. We first used the rate-1/16 convolutional code, and the BER performance of 16 data blocks based on the GL algorithm is shown in Figure 14. By comparing the results of S256, S512 and the benchmark turbo system (W1024), it can be seen that S256 was much more effective than S512 and the benchmark turbo system for underwater acoustic communications with moving transceivers. After only one iteration, all information bits with S256 were correctly decoded. However, both S512 (pink square curve) and the benchmark turbo system (blue dotted curve) were completely invalid. This is because moving communications incur time-varying channels: the average channel estimate does not effectively represent the channel information of the 512-symbol and 1024-symbol blocks. Taking the first block for S256 in Figure 14 as an example, since S256 was used, there were four consecutive segments in the first data block, and their channels were different due to the floating transceivers, i.e., α_p ≠ 1. It is important to note that α_p is obtained automatically: it is calculated as the correlation coefficient of the estimated channels of the four segments, which was also used in the initial channel estimation.
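The automatic computation of α_p described above can be sketched as a normalized correlation between channel estimates of consecutive segments (shown here for a single pair of vectors; the paper averages over the four segments of a block):

```python
import numpy as np

def corr_coeff(h1, h2):
    """Normalized correlation magnitude between two channel estimates,
    used as the segment-to-segment coefficient alpha_p."""
    return np.abs(np.vdot(h1, h2)) / (np.linalg.norm(h1) * np.linalg.norm(h2))

h_a = np.array([1.0, 0.5, 0.2, 0.0])
alpha_static = corr_coeff(h_a, h_a)                               # identical channels
alpha_changed = corr_coeff(h_a, np.array([0.0, 0.1, 0.9, 0.4]))   # changed channel
```

For a static channel the coefficient is 1; the more the channel changes between segments, the closer it falls toward 0, matching the α_p = 0.07 observed after one iteration in the field data.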
When turbo equalization was not used, i.e., with 0 iterations, the corresponding equalization results are shown in Figure 15(b1), and the constellation points after LMMSE equalization were very scattered. Then, the automatically determined α_p was recalculated by using the updated channel estimates of the four segments. When turbo equalization was performed once, i.e., after one iteration, the corresponding equalization results are shown in Figure 15(b2), where the constellation points after LMMSE equalization were ideally condensed together. The corresponding estimated channels of the four segments in Figure 15a were significantly different, with α_p = 0.07 after one iteration, demonstrating the time variation of the channel and the effectiveness of the ST scheme and the GL algorithm. Then, we carried out field experiments with a variety of convolutional codes to test the effectiveness of direct adaptive communications. The adaptive threshold setting is shown in Table 2. We used the mean aggregation degree after one iteration for comparison with the thresholds, where the inner boundary was set to ξ_in = 0.03 and the outer boundary to ξ_ex = 0.2. When the mean aggregation degree is ξ_m < 0.03, the encoding (transmission) rate will be improved automatically; when ξ_m > 0.2, it will be reduced automatically; when 0.03 ≤ ξ_m ≤ 0.2, it will be kept the same. Table 2. Threshold setting of the mean aggregation degree ξ_m after one iteration for Yellow Sea 1 and Yellow Sea 2.
ξ_m < 0.03	Improve the encoding rate
ξ_m > 0.2	Reduce the encoding rate
0.03 ≤ ξ_m ≤ 0.2	Keep the encoding rate

The calculation of the mean aggregation degree ξ_m is shown in Table 3. For Yellow Sea 1, assuming that the rate-1/16 convolutional code was used first, the mean aggregation degree after one iteration was ξ_m = 0.002, which was less than 0.03. Therefore, the encoding rate was improved automatically, i.e., adjusted from the rate-1/16 convolutional code to the rate-1/8 convolutional code. After one iteration with the rate-1/8 convolutional code, the mean aggregation degree was ξ_m = 0.0591, which belonged to [0.03, 0.2]; therefore, the encoding rate was kept the same. Assuming instead that the rate-1/4 convolutional code was used first, the mean aggregation degree after one iteration was ξ_m = 0.4442, which was more than 0.2. Therefore, the encoding rate was reduced automatically, i.e., adjusted from the rate-1/4 convolutional code to the rate-1/8 convolutional code. As the mean aggregation degree after one iteration with the rate-1/8 convolutional code was ξ_m = 0.0591, which belonged to [0.03, 0.2], the encoding rate was kept.

The aggregation performance of the 16 blocks of information bits with S256 is shown in Figure 16. From Figure 16b, after one iteration with the rate-1/8 convolutional code, the constellation points of the 16 blocks of information bits were obviously clustered. The BER performance based on the ST scheme and the GL algorithm with S256 for Yellow Sea 1 is shown in Figure 17. We can see that, after one iteration, decoding with the rate-1/4 convolutional code was invalid, while decoding with the rate-1/8 and rate-1/16 convolutional codes was valid. With the rate-1/8 convolutional code, all information bits were correctly decoded after one iteration, and the BER performance was sufficient to meet the needs of underwater acoustic communications.
Therefore, the rate-1/8 convolutional code was kept, consistent with the decision from the mean aggregation degree, demonstrating the effectiveness of the GL algorithm and the CA algorithm.
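The threshold decision described above and in Table 2 can be sketched as follows; the rate ladder and function name are illustrative, but the thresholds and the Yellow Sea 1 walkthrough values come directly from the text:

```python
RATES = ["1/2", "1/4", "1/8", "1/16"]  # encoding rates, highest to lowest

def adapt_rate(current, xi_m, xi_in=0.03, xi_ex=0.2):
    """Adjust the convolutional-code rate from the mean aggregation
    degree xi_m observed after one turbo iteration (Table 2 rule)."""
    i = RATES.index(current)
    if xi_m < xi_in and i > 0:               # well clustered: improve rate
        return RATES[i - 1]
    if xi_m > xi_ex and i < len(RATES) - 1:  # scattered: reduce rate
        return RATES[i + 1]
    return current                            # in [xi_in, xi_ex]: keep rate

# Yellow Sea 1 walkthrough from the text:
assert adapt_rate("1/16", 0.002) == "1/8"   # xi_m < 0.03: improve
assert adapt_rate("1/8", 0.0591) == "1/8"   # in [0.03, 0.2]: keep
assert adapt_rate("1/4", 0.4442) == "1/8"   # xi_m > 0.2: reduce
```

The rule is hysteresis-free: one threshold comparison per received block, which keeps the adaptation overhead negligible at the receiver.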

Adaptive Underwater Acoustic Communications with SNR = 13 dB
The experimental deployment in the Yellow Sea is shown in Figure 11b. An underwater acoustic communication machine (Seatrix Modem) was used, whose illustration and dimensions are shown in Figures 18 and 19, respectively. The specifications of the machine are listed in Table 4. An SD card was plugged into the Seatrix Modem to collect data at the receiver, and the collected data were analyzed using a computer. The transmitting ship floated away from the receiving ship at a speed of approximately 0.5 m/s.

For Yellow Sea 2, rate-1/2 and rate-1/4 convolutional codes were used. S256 was used, and the CP was set to 16 symbols. The transmission rates were 3765 bits/s and 1882 bits/s, respectively, and the corresponding bandwidth efficiencies were 0.94 bps/Hz (rate-1/2) and 0.47 bps/Hz (rate-1/4). The deployment depths of the transducer and the hydrophone were 4 m and 5 m, respectively. The main goal of the experiment was to demonstrate a successful implementation of the proposed algorithms on modem hardware. As the receiver in Section 5.2 had a higher SNR than the receiver in Section 5.1, higher code rates could be used in Section 5.2.

We used rate-1/2 and rate-1/4 convolutional codes and S256. The BER performance based on the GL algorithm is shown in Table 5. It can be seen that S256 was very effective for underwater acoustic communications with moving transceivers. After only one iteration, all information bits were correctly decoded. Taking the fourth block with the rate-1/2 convolutional code in Table 5 as an example, there were four consecutive segments for the fourth data block, and their channels were different due to the floating transceivers; α_p was initially set to 1. When turbo equalization was not used, i.e., with 0 iterations, the corresponding channel equalization results are shown in Figure 20(b1), and the constellation points after LMMSE equalization were very scattered. Then, the automatically determined α_p was updated.
When turbo equalization was performed once, i.e., after one iteration, as shown in Figure 20(b2), the constellation points after LMMSE equalization were still scattered. After three iterations, as shown in Figure 20(b4), the constellation points after LMMSE equalization were ideally condensed together. The corresponding estimated channels of the four segments in Figure 20a were significantly different, with α_p = 0.09 after three iterations, demonstrating the time variation of the channel and the effectiveness of the ST scheme and the GL algorithm. Comparing Figure 15a for Yellow Sea 1 and Figure 20a for Yellow Sea 2, it can be seen that their channel lengths are almost the same, because the communication environments are basically the same. We do not show BERs for S512 and W1024, as they were essentially ineffective.
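The GL algorithm fuses the per-segment channel estimates through the product of their Gaussian probability density functions. The sketch below shows only that core identity for a single channel tap; it omits the inter-segment correlation weighting by α_p that the full algorithm applies, and the function name and numbers are illustrative:

```python
def fuse_gaussians(estimates):
    """Fuse independent Gaussian estimates (mean, variance) of one
    channel tap by multiplying their PDFs: the product is again
    Gaussian, with summed precision and precision-weighted mean."""
    precision = sum(1.0 / var for _, var in estimates)
    mean = sum(mu / var for mu, var in estimates) / precision
    return mean, 1.0 / precision

# Two hypothetical estimates of the same tap from adjacent segments:
mu, var = fuse_gaussians([(0.8, 0.04), (1.0, 0.04)])
# Fused variance is smaller than either input, i.e., the fusion
# sharpens the estimate whenever the segments carry shared information.
```

When segment channels are strongly correlated this fusion is exact; when α_p is small, the full algorithm discounts the contribution of the other segments accordingly.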
Then, we tested the effectiveness of direct adaptive communications with real underwater acoustic communication machines. The adaptive threshold setting is shown in Table 2. We again used the mean aggregation degree after one iteration for comparison against the thresholds, where the inner boundary is set to ξ_in = 0.03 and the outer boundary is set to ξ_ex = 0.2. When the mean aggregation degree ξ_m < 0.03, the encoding (transmission) rate is improved automatically; when ξ_m > 0.2, the encoding (transmission) rate is reduced automatically; when 0.03 ≤ ξ_m ≤ 0.2, the encoding (transmission) rate is kept the same. The calculation of the mean aggregation degree ξ_m is shown in Table 3.

For Yellow Sea 2, assuming that the rate-1/4 convolutional code was used first, the mean aggregation degree after one iteration was ξ_m = 0.007, which was less than 0.03. Therefore, the encoding rate was improved automatically, i.e., the channel code was adjusted from the rate-1/4 convolutional code to the rate-1/2 convolutional code. After one iteration with the rate-1/2 convolutional code, the mean aggregation degree was ξ_m = 0.041, which belonged to [0.03, 0.2]; therefore, the encoding rate was kept the same. Assuming instead that the rate-1/2 convolutional code was used first, the mean aggregation degree after one iteration was ξ_m = 0.041, which belonged to [0.03, 0.2]; therefore, the encoding rate was kept. The aggregation performance of the 16 blocks of information bits with S256 for Yellow Sea 2 is shown in Figure 21. From Figure 21a, after one iteration with the rate-1/2 convolutional code, the constellation points of the 16 blocks of information bits became obviously clustered.

Figure 21. (a1) Rate-1/2, 0 iterations; (a2) rate-1/2, 1 iteration; (a3) rate-1/2, 2 iterations; (b1) rate-1/4, 0 iterations; (b2) rate-1/4, 1 iteration; (b3) rate-1/4, 2 iterations.
The BER performance based on the GL algorithm with S256 is shown in Table 5. After one iteration, the decoding performance with the rate-1/2 convolutional code was sufficient to meet the needs of underwater acoustic communications. Therefore, the rate-1/2 convolutional code was kept, consistent with the decision from the mean aggregation degree. The experiment demonstrated the effectiveness and practicability of the proposed algorithms on real underwater acoustic communication machines.

Table 5. BER performance of the GL turbo system with S256 at SNR = 13 dB for Yellow Sea 2.

From the above simulations and experimental results, we can conclude that the best segment length is a power of two (2^n, where n is an integer), close to but longer than the channel length, and shorter than one period of the training sequence. Considering the transmission rate and the time variation of the channel, S256 is better than S128, S512 and W1024. The two separate experiments show that, at SNR = 9 dB, the 1/8 code rate is effective, and at SNR = 13 dB, the 1/2 code rate is effective. Even with α_p = 0.07 and α_p = 0.09 (α_p is obtained by calculating the correlation coefficient of the estimated channels of the consecutive segments), i.e., when the channels of the four segments are only weakly correlated, the proposed system is still effective.
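The exact formula for the mean aggregation degree is given in Table 3, which is not reproduced in this section. A plausible stand-in, assuming ξ_m measures how tightly the equalized symbols cluster around their nearest constellation points (the function name, the QPSK constellation and the sample values are illustrative only):

```python
def aggregation_degree(equalized, constellation):
    """Mean squared distance from each equalized symbol to its nearest
    constellation point -- one plausible proxy for the aggregation
    degree xi_m; the paper's exact definition is in its Table 3."""
    total = 0.0
    for z in equalized:
        total += min(abs(z - c) ** 2 for c in constellation)
    return total / len(equalized)

QPSK = [complex(i, q) for i in (-1, 1) for q in (-1, 1)]
tight = [0.95 + 1.02j, -1.01 - 0.98j]   # well-condensed equalizer output
loose = [0.2 + 0.3j, -0.4 - 0.1j]       # scattered equalizer output
assert aggregation_degree(tight, QPSK) < aggregation_degree(loose, QPSK)
```

Under this reading, a small ξ_m corresponds to the condensed constellations of Figures 15(b2) and 20(b4), and a large ξ_m to the scattered zero-iteration plots.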

Conclusions
The GL algorithm and the CA algorithm have been proposed to achieve a globally accurate channel estimation of each segment and automatic encoding-rate adjustment. To improve the estimation and tracking capability for time-varying channels, the ST scheme has been used. For the channel-estimate fusion of the segments, S256 is the best choice for practical moving underwater acoustic communications. Even when the channel correlation coefficients of the segments are as low as 0.07 and 0.09, the proposed GL turbo system is still effective. The experimental results demonstrate that a 1/8 code rate is effective at SNR = 9 dB, and a 1/2 code rate is effective at SNR = 13 dB. In the process of iteration, direct adaptive communications based on constellation aggregation have been realized. The experimental results illustrate that the encoding rate can be adjusted automatically among the 1/2, 1/4, 1/8 and 1/16 code rates by using the mean-aggregation-degree decision. Simulations and experimental results have verified the effectiveness of the proposed system.

Appendix
An element of the channel impulse response is denoted by $h_n$; an element of the symbol sequence is denoted by $f_n$; an element of the periodic training sequence, with period $T$, is denoted by $t_n$; and an element of the superimposed symbol sequence is denoted by $s_n = f_n + r \times t_n$. As $r$ is a constant, the superimposed symbol sequence can be simplified as $s_n = f_n + t_n$. In vector form, the received block can be written as

$$\mathbf{y}_n = \mathbf{H}_n \mathbf{s}_n + \mathbf{w} = \mathbf{H}_n \left( r\,\mathbf{t}_{L_s} + \mathbf{f}_n \right) + \mathbf{w},$$

where $\mathbf{H}_n$ is the lower-triangular Toeplitz convolution matrix

$$\mathbf{H}_n = \begin{bmatrix} h_1 & 0 & \cdots & 0 \\ h_2 & h_1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ \cdots & \cdots & h_2 & h_1 \end{bmatrix}.$$

Elements of $h_n$, $s_n$, $\mathbf{t}_{L_s}$, $\mathbf{f}_n$ and $\mathbf{w}$ are taken to build the channel estimator based on the LS algorithm. An element of the received signal sequence can be expressed as

$$y_n = h_n * s_n + w_n = \sum_{l=1}^{L_c} h_l s_{n-l+1} + w_n,$$

where $w_n$ is the white Gaussian noise and $L_c$ is the channel length. Assuming that the mean value of the symbol sequence is 0 and $T \geq L_c$, we can obtain

$$\mathrm{E}[y_n] = \sum_{l=1}^{L_c} h_l t_{n-l+1}.$$

Let $L_s = pT$. We divide $\mathrm{E}[y_n]$ into $p$ subsegments with a length of $T$.
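The periodic-averaging step behind this derivation can be sketched numerically: averaging the $p$ subsegments of length $T$ cancels the zero-mean data and leaves the training term. For simplicity the sketch uses an impulse train as the periodic training, so the averaged segment directly exposes the channel taps; a real ST system would use an m-sequence followed by an LS deconvolution, and all numeric values here are illustrative:

```python
import random

random.seed(0)
T, p, Lc = 8, 1024, 3        # training period, number of periods, channel length
h = [1.0, 0.5, 0.25]         # hypothetical channel impulse response

# Periodic training with period T: an impulse train for this sketch.
t = ([1.0] + [0.0] * (T - 1)) * p
# Zero-mean data symbols linearly superimposed on the training (r = 1).
f = [random.choice((-1.0, 1.0)) for _ in range(T * p)]
s = [fi + ti for fi, ti in zip(f, t)]

# Received signal: y_n = sum_l h_l * s_{n-l} (additive noise omitted).
y = [sum(h[l] * s[n - l] for l in range(Lc) if n - l >= 0)
     for n in range(T * p)]

# Average the p subsegments of length T: estimates E[y_n] over one
# period, where the data contribution averages toward zero.
avg = [sum(y[k * T + n] for k in range(p)) / p for n in range(T)]
h_est = avg[:Lc]             # with an impulse train this approximates h
```

The residual error of `h_est` shrinks like $1/\sqrt{p}$, which is why the derivation requires $L_s = pT$ with $T \geq L_c$: longer superimposed blocks suppress the data interference more strongly without sacrificing payload symbols.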