# Sea Clutter Reduction and Target Enhancement by Neural Networks in a Marine Radar System


## Abstract


## 1. Introduction

## 2. Measuring and Monitoring Marine System

The sampling period (T_{s}) of this temporal sequence of radar images corresponds to the antenna rotation period. The spatial resolutions (R_{x} and R_{y}) of each image depend on the azimuthal and range resolutions of the radar system.

## 3. NN-based Clutter Reduction System

#### 3.1. System Processing: Output Radar Image Achievement

Figure 3 shows how the MLP output y^{(o)} is obtained for a cell under test (CUT). Moreover, it also shows how this output is assigned to the corresponding cell of the output radar image (o_{r,c}), which is established by the r-th row and c-th column of the CUT in the input radar image. Note that only one MLP output is selected because the objective is to process each valid cell of the input radar image and obtain an output radar image of the same size as the input one. In our case of study, a total of 5 radar return measurements for different positions (range and azimuth) are considered, selected from horizontal radar image raw data. Note that, due to the circular symmetry of the coverage and the freedom of wave movement and ship navigation (they can move in any direction), a vertical orientation could be selected, obtaining similar system performances. Another choice would be to use a different shape for selecting the data surrounding the CUT, which could be studied in the future. Moreover, note that among these 5 selected measurements of **I** for each CUT, the central element (i_{r,c}) corresponds to the CUT and the others correspond to surrounding measurements, with a symmetric distribution of the selected data. This symmetric selection is done in order to have the same quantity of information from both sides. This number of measurements (empirically obtained) is selected as a trade-off between the system performance, the MLP computational complexity and the size of the targets (ships) under study. Note that 5 cells with a range resolution of 7.5 m (see table 1) span a distance of 37.5 m, which is the minimum range necessary to cover the beam of all the ships under study (see table 2 of subsection 4.1.) in case they are placed perpendicularly to the orientation of the input data selection. As can be observed, the MLP architecture depends on the characteristics of the target to be enhanced. On the other hand, following the nomenclature established in Figure 3 and taking into account the dynamic range of the input data (8 bits: dynamic range of [0 − 255]), the processed output o_{r,c} = round(y^{(o)} · 255) for a given CUT and its surrounding cells, which are contained in the input vector **x** = [x_{1}, x_{2}, x_{3}, x_{4}, x_{5}] = [i_{r,c−2}, i_{r,c−1}, i_{r,c}, i_{r,c+1}, i_{r,c+2}]/255, is given by:

where f_{NN}(·) denotes the transfer function implemented by the NN. Note that the NN inputs (**x**) are normalized to the range [0 − 1]. Moreover, due to the activation function selected for the NN output neuron, its output (y^{(o)}) is limited to the range [0 − 1], as shown below in the expanded expression of the transfer function (f_{NN}(·)). For the hidden neurons, a hyperbolic tangent activation function (ψ_{tan}(·)) [16] is used.

Here, ${w}_{j,i}^{(\mathrm{h})}$ denotes the synaptic weight that connects the i-th MLP input (x_{i}) with the j-th hidden neuron (see Figure 3). So, **W**^{(h)} is the matrix that contains the synaptic weights that connect the MLP inputs with the MLP hidden neurons. Moreover, ${b}_{j}^{(\mathrm{h})}$ denotes the bias of the j-th hidden neuron, where the row vector **b**^{(h)} contains all the hidden neuron biases. For our case of study, **W**^{(h)} contains a total of [5×H] synaptic weights and **b**^{(h)} contains a total of [1×H] bias weights.

Here, v^{(o)} is the overall weighted input of the output neuron, which is obtained by eq. (4), and the neuron output (y^{(o)}) can be computed by eq. (5). In this case, this neuron uses a (nonlinear) logistic activation function (ψ_{log}(·)) [16] because the MLP output needs to be limited between 0 and 1. Moreover, ${w}_{j}^{(\mathrm{o})}$ denotes the synaptic weight that connects the j-th hidden neuron with the output neuron, where the column vector **w**^{(o)} contains all these weights, and b^{(o)} denotes the bias of the output neuron. For our case of study, **w**^{(o)} contains a total of [H×1] synaptic weights and b^{(o)} contains a total of [1×1] bias weight.
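As a concrete illustration, the 5/H/1 forward pass described above (hyperbolic tangent hidden neurons, logistic output neuron, inputs and output normalized to [0 − 1]) can be sketched in NumPy. The weight values below are random placeholders, not trained parameters, and the function names are illustrative:

```python
import numpy as np

# Placeholder weights for the 5/H/1 MLP (H = 6, the size selected later).
rng = np.random.default_rng(0)
H = 6
W_h = rng.normal(size=(5, H))   # input-to-hidden synaptic weights, [5 x H]
b_h = rng.normal(size=(1, H))   # hidden-neuron biases, [1 x H]
w_o = rng.normal(size=(H, 1))   # hidden-to-output synaptic weights, [H x 1]
b_o = rng.normal(size=(1, 1))   # output-neuron bias, [1 x 1]

def logistic(v):
    # psi_log: maps any real value into (0, 1)
    return 1.0 / (1.0 + np.exp(-v))

def mlp_output(x):
    """Forward pass: tanh hidden layer, logistic output limited to (0, 1)."""
    v_h = x @ W_h + b_h          # overall weighted inputs of hidden neurons
    y_h = np.tanh(v_h)           # psi_tan activation
    v_o = y_h @ w_o + b_o        # overall weighted input of the output neuron
    return logistic(v_o)         # psi_log activation -> y^(o)

# A CUT and its 4 range neighbours, 8-bit values normalized to [0, 1]:
x = np.array([[120, 130, 255, 140, 125]]) / 255.0
y = mlp_output(x)
o_rc = int(round(float(y[0, 0]) * 255))  # de-normalized output pixel
```

The de-normalization in the last line mirrors o_{r,c} = round(y^{(o)} · 255) from the text.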

#### 3.2. Output Error Estimation

The error between the desired (d^{(o)}) and achieved (y^{(o)}) MLP outputs is computed by eq. (8).

Here, d_{r,c,k} and o_{r,c,k} denote the elements placed at the r-th row and c-th column of the k-th desired (D) and obtained (O) MLP output images of the set, respectively. Note that both elements are normalized by a constant of $\frac{1}{255}$ (the inverse of the dynamic range of the radar image cells/pixels) because the range of the MLP output varies from 0 to 1. Moreover, the index m depends on the indexes r, c and k, and is calculated as m = r + (c − 1)N + (k − 1)N^{2}. For our case of study, the indexes r and c vary from 1 to N, which denotes that the radar images of the set are square and of the same size (N×N pixels), and the index k varies from 1 to M, where M is the number of radar images of the set. Finally, the index m varies from 1 to P, where P depends on the values of N and M as P = N^{2}M.

#### 3.3. Training Algorithm

The objective of the training is to minimize the MS error at each algorithm iteration (e_{MS}[n + 1]). Several training algorithms could be used to minimize this error. In this way, second-order optimization techniques could be used, such as the ones based on Newton, Quasi-Newton and Levenberg-Marquardt methods [16]. The Newton method estimates the Hessian matrix by approximating second-order derivatives, whereas the Quasi-Newton method approximates the Hessian matrix by the Jacobian matrix, for which first-order derivatives must be approximated. Finally, the Levenberg-Marquardt method also needs to estimate the Jacobian matrix, where first-order derivatives must be approximated. All these methods present two main advantages: great speed of convergence and a high success rate in finding the global minimum. Nevertheless, they need a huge number of training observation vectors (data) to estimate the Hessian or Jacobian matrices with a minimum accuracy. Moreover, this training is only efficient in terms of computational cost for low-size MLPs because, considering that the MLP is composed of F weights, both estimated matrices are of size F×F; so, if F increases linearly, the computational cost needed in each algorithm iteration increases quadratically. Therefore, as we do not know a priori the optimum MLP size for our case of study, the learning algorithm used during the MLP design is the error back-propagation algorithm with variable learning rate (α) and momentum (μ) [17]. This training algorithm needs much less computational cost than the previously exposed approaches, especially for big MLP sizes. It updates the MLP synaptic weights that connect the inputs with the hidden neurons (**W**^{(h)}) and those that connect the hidden neurons with the output neuron (**w**^{(o)}) according to eq. (13) and (14), respectively.

The biases of the hidden (**b**^{(h)}) and output (b^{(o)}) neurons are updated by eq. (15) and (16), respectively. Note that these expressions are very similar to the synaptic weight updates, but considering that the virtual input of the neuron connected by the bias is unity, i.e., x_{m,0}[n] = 1 and ${y}_{m,0}^{(\mathrm{o})}[n]=1$.

In these expressions, each training observation corresponds to an input vector (**x**_{m}), where m indicates the index of the CUT, as exposed previously. Moreover, it is important to clarify that the matrix δ^{(h)}[n] and the vector δ^{(o)}[n] contain the local derivatives of the MS error function at the n-th iteration of the algorithm with respect to the corresponding neuron outputs, i.e., these partial derivatives are a way to estimate the sensitivity of the error with respect to the neuron weights. Both matrix and vector can be obtained by eq. (17) and (18), respectively [17]. Note that the MLP is trained in batch mode, i.e., the weights and biases are not updated until the error for all the P cells of all the radar images of a given set has been obtained.

Here, ψ′_{tan}(·) and ψ′_{log}(·) denote the derivatives of the activation functions ψ_{tan}(·) and ψ_{log}(·) with respect to the overall weighted inputs ${v}_{m,j}^{(\mathrm{h})}[n]$ and ${v}_{m}^{(\mathrm{o})}[n]$, respectively. After applying these derivatives to eq. (3) and (5), eq. (19) and (20) are obtained, respectively, where it is important to remember that ${y}_{m}^{(\mathrm{o})}[n]={o}_{r,c,k}[n]/255$.

In eq. (21), α_{inc} and α_{dec} are the increasing and decreasing rates of the learning rate (α). Moreover, in order to guarantee the stability of the learning algorithm, a learning rate constraint is set. This constraint limits the error increase from one iteration to the next so that it does not surpass a certain limit, and it is controlled by the parameter p_{max}.
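A minimal sketch of this adaptive behavior is given below, assuming multiplicative increase/decrease factors built from α_{inc} and α_{dec} and the p_{max} limit on the relative error growth; the exact update form of eq. (21) is not reproduced, so this is only an approximation of the described rule:

```python
# Parameter values taken from the design stage (subsection 3.4.).
ALPHA_INC = 0.05   # rate applied when the error decreases
ALPHA_DEC = 0.25   # rate applied when the error grows beyond the limit
P_MAX = 0.04       # maximum tolerated relative error increase

def update_learning_rate(alpha, err_prev, err_new):
    """Return (new_alpha, accept_step) after one training iteration."""
    if err_new > err_prev * (1.0 + P_MAX):
        # Error grew beyond the allowed limit: shrink alpha, reject the step.
        return alpha * (1.0 - ALPHA_DEC), False
    if err_new < err_prev:
        # Error decreased: grow alpha to speed up convergence.
        return alpha * (1.0 + ALPHA_INC), True
    # Error unchanged or within the tolerated increase: keep alpha.
    return alpha, True

alpha = 0.05  # alpha[0], the initial value used in the design stage
alpha, ok = update_learning_rate(alpha, err_prev=1.0, err_new=0.9)
```

This kind of rule speeds up training on smooth error descents while backing off when an update overshoots.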

#### 3.4. Design and Test Stages

- First: We take the 32 radar images of a sequence that contains a target (a ship). For this sequence, a statistical study of its length and beam is done, considering for this study the zones of the image where the target is located.
- Second: A model of the target is done according to the mean values of length and beam, rounding the edges of this model in order to approximate to the real shape of a ship.
- Third: The ship model obtained for the sequence under study is manually superimposed on each radar image until the model is correctly placed over the ship in the radar image. It is important to note that the ship is continuously changing its relative position to the radar emplacement in both angle and range, which is also considered in this procedure. In this step, it is necessary to be careful with the electromagnetic shadows produced by some ships of huge volume.

- First: The NN structure is created, in a general case, with 5 inputs, H hidden neurons and 1 output neuron (structure 5/H/1).
- Second: Once the NN is created, it is initialized using the Nguyen-Widrow algorithm [18]. This initialization algorithm helps the training algorithm to increase its speed of convergence and to find a minimum of the error surface at the end of the training with a high success rate. However, it does not guarantee that the achieved minimum is always the lowest one (the global minimum). This high success rate of finding a local or global minimum at the end of the training comes from the fact that the NN weights are initialized considering aspects of the training data (ship and sea clutter) such as the mean, maximum and minimum values, which allows the training to start with some knowledge of the data.
- Third: The initial value of the learning rate in the first algorithm iteration (n = 0) is set to α[0] = 0.05, which evolves during the training algorithm progress by eq. (21). Moreover, the incremental and decremental rates of the learning rate are set to α_{inc} = 0.05 and α_{dec} = 0.25, respectively, and the maximum error increase from one iteration to the next one is set to p_{max} = 0.04.
- Fourth: The momentum constant is set to μ = 0.9, which guarantees a certain stability of the training algorithm.
- Fifth: The maximum number of algorithm iterations during the NN design (training with external validation) is set to 200. Nevertheless, the NN training usually stops earlier due to the loss of generalization, which is estimated by the MS error in the validation set. The training algorithm stops when the MS error calculated for the validation set increases during the following algorithm iterations. This MS error increase indicates that the NN is specializing in (memorizing) the radar images of the training set and losing the generalization capability to extrapolate the acquired knowledge to other radar images, such as those of the validation set.
- Finally, note that the NN training described in the previous steps is repeated ten times, because reaching the lowest minimum error is not guaranteed in a single execution of the training algorithm. After that, the best trained NN is selected in terms of the maximum average SCR improvement achieved in the validation set (external validation). The SCR improvement is estimated as the difference between the SCR at the output of the proposed system and the SCR at its input. Note that the SCR is calculated as the decimal logarithmic relationship between the powers of the signal where the target is present and where it is absent (only clutter).
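The stopping criterion of the fifth step can be sketched as follows, where `train_epoch` and `validation_error` are hypothetical stand-ins for one back-propagation pass over the training set and the validation MS error, respectively:

```python
def design_nn(train_epoch, validation_error, max_iters=200):
    """Train until the validation MS error starts increasing (loss of
    generalization) or the iteration cap is reached; return the best point."""
    best_err = float("inf")
    best_iter = 0
    for n in range(max_iters):
        train_epoch()
        err = validation_error()
        if err < best_err:
            best_err, best_iter = err, n
        elif err > best_err:
            break  # validation error increasing: stop, keep best weights
    return best_iter, best_err

# Toy validation-error curve that decreases, then increases:
errs = iter([0.5, 0.3, 0.2, 0.25, 0.4])
it, err = design_nn(lambda: None, lambda: next(errs))
```

In practice one would also restore the weights saved at the best iteration; that bookkeeping is omitted here for brevity.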

#### 3.5. Performance Evaluation

- For a given radar image:
- – The MS error computed by eq. (11) when M = 1, where the obtained/processed radar image and the desired radar image are considered.
- – The clutter power (P_{c}) improvement (a reduction for a negative value) is obtained as the difference between the clutter powers (in dBm) at the output and input of the system. The estimations of these powers are obtained from the zones of the radar image where the target (ship) is absent.
- – The target power (P_{t}) improvement (an enhancement for a positive value) is obtained as the difference between the target powers (in dBm) at the output and input of the system. The estimations of these powers are obtained from the zones of the radar image where the target (ship) is present.
- – The signal-to-clutter ratio (SCR) improvement is obtained as:

$${\text{SCR}}^{\text{imp}}(\text{dB})={\text{SCR}}^{\text{out}}(\text{dB})-{\text{SCR}}^{\text{in}}(\text{dB})$$

$${\text{SCR}}^{\text{in}}(\text{dB})={P}_{\mathrm{t}}^{\text{in}}(\text{dBm})-{P}_{\mathrm{c}}^{\text{in}}(\text{dBm})$$

$${\text{SCR}}^{\text{out}}(\text{dB})={P}_{\mathrm{t}}^{\text{out}}(\text{dBm})-{P}_{\mathrm{c}}^{\text{out}}(\text{dBm})$$

- For a given set of radar images:
- – The average P_{c} improvement (a reduction for a negative value) is obtained as the difference between the clutter powers (in dBm) at the output and input of the system for all the cells of the radar images of the set where the target (ship) is absent. Note that this estimation can be different from the mean value of the P_{c} improvement achieved for each radar image, because the number of cells where the target is absent can differ from one image to another. Take as an example the case where the target is not completely inside the radar coverage in one image and in the next radar image of the sequence it is completely inside.
- – The average P_{t} improvement (an enhancement for a positive value) is obtained as the difference between the target powers (in dBm) at the output and input of the system for all the cells of the radar images of the set where the target (ship) is present. Note that this estimation can be different from the mean value of the P_{t} improvement achieved for each radar image, because the number of cells where the target is present can differ from one image to another, as in the previous example.
- – The average SCR improvement is obtained as the mean value of the SCR improvement achieved for each of the M radar images of the set, i.e.:

$${\text{SCR}}_{\text{av}}^{\text{imp}}(\text{dB})=\frac{1}{M}\sum _{i=1}^{M}\left[{\text{SCR}}_{i}^{\text{out}}(\text{dB})-{\text{SCR}}_{i}^{\text{in}}(\text{dB})\right]$$
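These figures of merit can be sketched as follows; estimating power as the mean squared pixel amplitude and passing the target zone as a boolean mask are simplifying assumptions of this sketch:

```python
import numpy as np

def db(p):
    # Power ratio expressed in decibels.
    return 10.0 * np.log10(p)

def scr_improvement(img_in, img_out, target_mask):
    """SCR improvement (dB) between the system input and output images.

    target_mask is True where the target (ship) is present; clutter power
    is estimated from the remaining cells.
    """
    pt_in = np.mean(img_in[target_mask] ** 2)    # target power, input
    pc_in = np.mean(img_in[~target_mask] ** 2)   # clutter power, input
    pt_out = np.mean(img_out[target_mask] ** 2)  # target power, output
    pc_out = np.mean(img_out[~target_mask] ** 2) # clutter power, output
    scr_in = db(pt_in) - db(pc_in)
    scr_out = db(pt_out) - db(pc_out)
    return scr_out - scr_in

# Toy 2x2 image: one target cell, clutter amplitude reduced 10x at the output.
img_in = np.array([[10.0, 1.0], [1.0, 1.0]])
img_out = np.array([[10.0, 0.1], [0.1, 0.1]])
mask = np.array([[True, False], [False, False]])
imp = scr_improvement(img_in, img_out, mask)
```

A 10× reduction in clutter amplitude with an unchanged target corresponds to a 20 dB SCR improvement in this toy case.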

#### 3.6. Dimensionality

- The training, validation and test sets are always the same for all the experiments done for different NN sizes.
- Ten different NNs are initialized for each size (5/H/1) considered in our studies using the Nguyen-Widrow algorithm [18]. As mentioned in the design stage description (see subsection 3.4.), this initialization algorithm helps the NN training algorithm to start from a point in the error surface that leads it to a local or global minimum of the error surface at the end of the training.
- Each NN is trained by the error back-propagation algorithm with variable learning rate and momentum, where an external validation of the training progress is done. This validation tries to stop the training before the NN specializes in (memorizes) the training set and, in consequence, loses generalization capabilities. The same algorithm parameters as in the design stage (see subsection 3.4.) are used.
- Once the ten NNs are created and trained for a given size (5/H/1), the best NN of them is selected. The selection is done according to the maximum average SCR improvement considering the radar images of the validation set.
- Finally, this procedure is repeated for each NN size (5/H/1) we want to study, where the best NN size is selected in terms of the maximum average SCR improvement achieved considering the radar images of the validation set.
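The selection procedure above can be sketched as follows, with `train_and_score` a hypothetical stand-in for initialization, training and validation-set SCR scoring of one NN:

```python
def select_nn_size(candidate_sizes, train_and_score, n_inits=10):
    """For each hidden-layer size H, train n_inits independently initialized
    NNs and keep the overall best by average validation SCR improvement.

    Returns (best_score, best_H, best_init_index)."""
    best = None
    for H in candidate_sizes:
        for seed in range(n_inits):
            score = train_and_score(H, seed)
            if best is None or score > best[0]:
                best = (score, H, seed)
    return best

# Toy score surface peaking at H = 6, mimicking the trend of the results:
toy_score = lambda H, seed: 12.8 - 0.05 * abs(H - 6) - 0.001 * seed
best = select_nn_size([4, 6, 8, 10], toy_score)
```

The paper keeps the best of the ten runs per size and then the best size; collapsing both selections into one maximum, as here, yields the same final NN.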

## 4. Results

#### 4.1. Radar Image Database: Training, Validation and Test Sets Compositions

#### 4.2. NN-based Clutter Reduction System: Dimensionality

Table 3 shows the average P_{c}, P_{t} and SCR improvements achieved for the training, validation and test sets. Analyzing these results, several aspects can be remarked:

**First:** The achieved average SCR improvements, clutter reductions (negative improvements of P_{c}) and target enhancements (positive improvements of P_{t}) are always better under design conditions (training and validation sets) than under test conditions (test set). This is because the NN learns the statistical conditions of the radar environments presented in the training and validation sets. Note, however, that the achieved improvements are similar under design and test conditions, especially for NNs of low size. This is because, on one hand, the NN maintains its generalization capabilities at low sizes but not at high sizes; on the other hand, due to the correct selection of the radar images that compose the training and validation sets, the NN generalizes better to radar images never seen before, such as the ones that compose the test set.

**Second:** The average SCR improvement is not the same as the difference between the average P_{t} and P_{c} improvements. This happens because the average P_{c} improvement is obtained as the difference between the clutter powers (in dBm) at the output and input of the system for all the cells of the radar images of the set where the target (ship) is absent, and this estimation can differ from the mean value of the P_{c} improvement achieved for each radar image, since the number of cells where the target is absent can differ from one image to another. The same occurs with the average P_{t} improvement, but for the cells where the target is present. To appreciate it, consider the case where the target is not completely inside the radar coverage in one image and in the next one it is already completely inside.

**Third:** As can be observed in the achieved average P_{c} improvements (clutter reduction), the optimum NN size includes 6 hidden neurons in its hidden layer (5/6/1), because greater or lower NN sizes achieve lower performances. This is because the NN only needs a small number ((5 · 6 + 1 · 6) + (6 · 1 + 1 · 1) = 43) of adaptive parameters (synaptic weights (**W**^{(h)} and **w**^{(o)}) and biases (**b**^{(h)} and b^{(o)})) to find a good solution. Moreover, having more than 43 adaptive parameters (weights and biases) in the NN-based clutter reduction system is not efficient because both the computational cost and the memory requirements to store the parameters increase and, most importantly, the performance decreases. This performance decrease with increasing NN size is caused by the specialization the NN acquires after training and the corresponding loss of generalization [22, 23] when finding a good solution for other radar images never seen before.

**Fourth:** The average P_{t} improvement (target enhancement) is very similar in the training and validation sets for all the NN sizes under study, and slightly different in the test set when the NN size increases (loss of generalization).

**Fifth:** As a general conclusion related to the previous effects, the average SCR improvement is maximum for a size of 6 hidden neurons in the hidden layer (structure 5/6/1). Fewer or more hidden neurons cause a decrease in performance, and the higher the number of hidden neurons, the greater the performance decrease. This performance decrease with increasing NN size is due to the specialization the NN acquires, i.e., the loss of generalization capabilities.

#### 4.3. NN-based Clutter Reduction System: Subjective Analysis

#### 4.4. NN-based Clutter Reduction System: Objective Analysis

Comparing the desired (**D**) and obtained (**O**) system outputs for the cases presented in Figures 5 and 6, MS errors of 8.3 · 10^{−3} and 8.7 · 10^{−3} are achieved, respectively. It is important to note that the radar images **D** and **O** are normalized by a factor of $\frac{1}{255}$ (the inverse of the image pixel dynamic range) because the output range of the NN varies from 0 to 1. Moreover, note that this error measurement is a mean value and, as can be observed in both figures, the clutter reduction obtained near the radar site is lower than far away.

For the case of Figure 5, an SCR improvement (difference between the P_{t} and P_{c} improvements when the target is present) of approximately 10 dB is achieved. Note that this SCR improvement is a minimum value, and better results are obtained for other images, because both selected radar images are the worst cases of the test set under study with target present or absent. In this case, the worst cases refer to hard sea state conditions and strong radar measurements of the ship, where the electromagnetic shadow of the ship is of high intensity (see Figure 5).

These results are completed with the average P_{c} and P_{t} improvements and the corresponding average SCR improvements for each set. These average measurements show that the P_{c} improvement is approximately −11.5 dB (power reduction) for each set. Note that this average measurement is greater in magnitude than the P_{c} improvement achieved for the cases of Figures 5 and 6 (approximately −8.7 dB), which means that better results than the ones presented in these figures can be found in the test set, as previously mentioned. Moreover, as can be observed, the achieved average P_{t} improvement is near 1.3 dB, which is similar to the one obtained for the case under study of Figure 5. Finally, the achieved average SCR improvement is approximately 12.5 dB, which is greater than the one obtained for the case under study of Figure 5 (approximately 10 dB) for the same reasons previously exposed in the analysis of the average P_{c} improvement. On the other hand, it is important to note that the average SCR improvement is not always the same as the difference between the average P_{t} and P_{c} improvements. This is because the average P_{t} and SCR improvements are only calculated for the radar images of each set that contain target information, whereas the average P_{c} improvement is calculated for all the radar images of each set, where a target can be present or not.

## 5. Conclusions and Future Research Lines

## Acknowledgments

## References and Notes

- Harati-Mokhtari, A.; Wall, A.; Brooks, P.; Wang, J. Automatic identification system (AIS): Data reliability and human error implications. J. Navig. **2007**, 60, 373–389.
- Ellis, G.; Dix, A. A Taxonomy of Clutter Reduction for Information Visualisation. IEEE T. Vis. Comput. Gr. **2007**, 13, 1216–1223.
- Ellis, G.; Dix, A. Enabling Automatic Clutter Reduction in Parallel Coordinate Plots. IEEE T. Vis. Comput. Gr. **2006**, 12, 717–724.
- Merwe, A.; Gupta, I. A Novel Signal Processing Technique for Clutter Reduction in GPR Measurements of Small, Shallow Land Mines. IEEE T. Geosci. Remote **2000**, 38, 2627–2637.
- Karlsen, B.; Larsen, J.; Sorensen, H.; Jakobsen, K. Comparison of PCA and ICA based clutter reduction in GPR systems for anti-personal landmine detection. Proc. 11th IEEE Signal Processing Workshop on Statistical Signal Processing, 2001; pp. 146–149.
- Karlsen, B.; Sorensen, H.; Larsen, J.; Jakobsen, K. Independent component analysis for clutter reduction in ground penetrating radar data. Proc. SPIE **2002**, 4742, 378–389.
- Brunzell, H. Clutter reduction and object detection in surface penetrating radar. Radar **1999**, 97, 688–691.
- Carevic, D. Clutter reduction and target detection in ground-penetrating radar data using wavelets. Proc. SPIE **1999**, 3710, 973–984.
- Carevic, D. Clutter Reduction and Detection of Minelike Objects in Ground Penetrating Radar Data Using Wavelets. Subsurface Sensing Technologies and Applications **2000**, 1, 101–118.
- Nieto-Borge, J.; Hessner, K.; Jarabo-Amores, P.; Mata-Moya, D. Signal-to-noise ratio analysis to estimate ocean wave heights from X-band marine radar image time series. IET Radar Son. Nav. **2008**, 2, 35–41.
- Hennessey, G.; Leung, H.; Drosopoulos, A.; Yip, P. Sea-clutter modeling using a radial-basis-function neural network. IEEE J. Oceanic Eng. **2001**, 26, 358–372.
- Leung, H.; Hennessey, G.; Drosopoulos, A. Signal detection using the radial basis function coupled map lattice. IEEE T. Neural Networ. **2000**, 11, 1133–1151.
- Rico-Ramirez, M.; Cluckie, I. Classification of Ground Clutter and Anomalous Propagation Using Dual-Polarization Weather Radar. IEEE T. Geosci. Remote **2008**, 46, 1892–1904.
- Ziemer, F.; Brockmann, C.; Vaughan, R.; Seemann, J.; Senet, C. Radar survey of near shore bathymetry within the OROMA project. EARSeL eProceedings **2004**, 3, 282–288.
- Reichert, K.; Hessner, K.; Dannenberg, J.; Trankmann, I.; Lund, B. X-band radar as a tool to determine spectral and single wave properties. 8th Int. Workshop on Wave Hindcasting and Forecasting, **2005**.
- Bishop, C. Neural Networks for Pattern Recognition; Oxford University Press: New York, 1995.
- Haykin, S. Neural Networks: A Comprehensive Foundation, 2nd ed.; Prentice-Hall: London, 1999.
- Nguyen, D.; Widrow, B. Improving the Learning Speed of 2-Layer Neural Networks by Choosing Initial Values of the Adaptive Weights. Proc. 1990 IJCNN Int. Joint Conf. on Neural Networks, 1990; pp. 21–26.
- Bowditch, N. H.O. Pub. No. 9: American Practical Navigator, Revised Edition; United States Hydrographic Office, 1938.
- Faltinsen, O. Sea Loads on Ships and Offshore Structures; Cambridge University Press: Cambridge, 1999.
- World Meteorological Organization official Web site: http://www.wmo.int/pages/index_en.html.
- Holland, J.; Holyoak, K.; Nisbett, R.; Thagard, P. Induction: Processes of Inference, Learning, and Discovery; The MIT Press: Cambridge, 1986.
- Baum, E.; Haussler, D. What Size Net Gives Valid Generalization? Neural Comput. **1989**, 1, 151–160.

**Figure 5.**Example of Scan Maps containing Target and Clutter at the Input and Output of the NN-based Clutter Reduction System

**Figure 6.**Example of Scan Maps containing only Clutter at the Input and Output of the NN-based Clutter Reduction System

**Table 1.** Radar System Parameters.

Parameter | Value |
---|---|
Radar System Frequency (X-band) | 10.0 GHz |
Antenna Rotation Speed | 50 rpm |
Antenna Polarization | H and H |
Pulse Repetition Frequency (PRF) | 1000 Hz |
Radar Pulse Width | 80 ns |
Distance Range (Coverage) | 200 – 2150 m |
Range Resolution | 7.5 m |
Azimuthal Range (Coverage) | 0 – 360° |
Azimuthal Resolution | 0.28° |

**Table 2.** Kinds of Ships under Study and Their Dimensions.

Kind of Ship | Length | Beam |
---|---|---|
Cruise Ships | 350 m | 35 m |
Ferries | 200 m | 28 m |
Container Ships | 200 m | 32 m |
General Cargo Ships | 120 m | 18 m |

**Table 3.** Average Improvements of the Clutter Power (P_{c}), Target Power (P_{t}) and SCR during the Design and Test Stages in the Training/Validation/Test Sets for Different NN-based Clutter Reduction System (MLP) Sizes (5/H/1).

MLP size (5/H/1) | Average P_{c} Improvement (dB) | Average P_{t} Improvement (dB) | Average SCR Improvement (dB) |
---|---|---|---|
5/4/1 | −11.4 / −11.3 / −11.1 | +1.2 / +1.2 / +0.9 | +12.6 / +12.6 / +12.1 |
5/6/1 | −11.7 / −11.5 / −11.3 | +1.3 / +1.3 / +1.1 | +12.9 / +12.8 / +12.5 |
5/8/1 | −11.3 / −11.1 / −10.9 | +1.3 / +1.3 / +1.2 | +12.6 / +12.5 / +12.3 |
5/10/1 | −11.2 / −11.1 / −10.9 | +1.2 / +1.3 / +1.1 | +12.4 / +12.3 / +12.0 |
5/12/1 | −11.0 / −10.9 / −10.8 | +1.3 / +1.3 / +1.1 | +12.4 / +12.3 / +12.0 |
5/14/1 | −11.0 / −10.9 / −10.8 | +1.2 / +1.2 / +1.0 | +12.4 / +12.3 / +12.0 |
5/16/1 | −11.0 / −10.9 / −10.7 | +1.2 / +1.2 / +1.0 | +12.4 / +12.3 / +11.9 |
5/18/1 | −10.9 / −10.8 / −10.7 | +1.2 / +1.2 / +1.0 | +12.2 / +12.1 / +11.9 |
5/20/1 | −10.6 / −10.5 / −10.3 | +1.2 / +1.2 / +0.9 | +12.0 / +11.9 / +11.6 |
5/25/1 | −10.5 / −10.4 / −10.2 | +1.2 / +1.2 / +0.8 | +12.0 / +11.9 / +11.4 |
5/30/1 | −10.5 / −10.4 / −10.0 | +1.2 / +1.2 / +0.7 | +11.8 / +11.7 / +11.0 |

**Table 4.** Clutter Power (P_{c}), Target Power (P_{t}) and Signal-to-Clutter Ratio (SCR) Improvements for a NN-based Clutter Reduction System with a NN Structure of 5/6/1.

Measurement | Radar Image | Input (dBm) | Output (dBm) | Improvement (dB) |
---|---|---|---|---|
Clutter Power (P_{c}) | (Fig. 5) | +6.1 | −2.7 | −8.8 |
Target Power (P_{t}) | (Fig. 5) | +15.5 | +16.8 | +1.3 |
Signal-to-Clutter Ratio (SCR) | (Fig. 5) | +9.4 | +19.5 | +10.1 |
Clutter Power (P_{c}) | (Fig. 6) | +6.2 | −2.4 | −8.6 |
Target Power (P_{t}) | (Fig. 6) | – | – | – |
Signal-to-Clutter Ratio (SCR) | (Fig. 6) | – | – | – |

© 2009 by the authors; licensee MDPI, Basel, Switzerland. This article is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

## Share and Cite

**MDPI and ACS Style**

Vicen-Bueno, R.; Carrasco-Álvarez, R.; Rosa-Zurera, M.; Nieto-Borge, J.C. Sea Clutter Reduction and Target Enhancement by Neural Networks in a Marine Radar System. *Sensors* **2009**, *9*, 1913-1936.
https://doi.org/10.3390/s90301913
