# Attention Graph Convolution Network for Image Segmentation in Big SAR Imagery Data


## Abstract


## 1. Introduction

## 2. Related Work

## 3. Methodology

#### 3.1. Graph Construction

#### 3.2. Superpixel-Based Voting

The label of the $m^{th}$ pixel is denoted ${l}_{m},\ m=1,\dots ,M$. The number of pixels in ${\widehat{r}}_{j}$ belonging to each category is then counted, and the most frequent class is selected as the category of ${r}_{j}$:
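The voting rule above can be sketched as follows. This is an illustrative NumPy implementation, not the authors' code; the function and array names are ours, and it assumes a per-pixel label map plus a per-pixel superpixel index map:

```python
import numpy as np

def vote_superpixel_labels(pixel_labels, superpixels):
    """Assign each superpixel the most frequent pixel label it contains.

    pixel_labels: (H, W) integer class map, one label l_m per pixel
    superpixels:  (H, W) integer superpixel index per pixel
    Returns a dict mapping superpixel index -> majority class.
    """
    votes = {}
    for j in np.unique(superpixels):
        classes, counts = np.unique(pixel_labels[superpixels == j],
                                    return_counts=True)
        votes[j] = int(classes[np.argmax(counts)])  # most frequent class wins
    return votes
```

Ties are broken here by the smallest class index, a detail the paper does not specify.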

#### 3.3. Attention Mechanism Layer

${\alpha}_{ij}$. Note that the GAT model requires a learnable parameter matrix $W$ to linearly transform the node features into higher-level features before the attention coefficients in Equation (8) are calculated. This linear transformation matrix $W$ is also the convolution kernel in Equation (9). However, the input features extracted by the CNN already have sufficient expressive power, so there is no need to transform them again. Furthermore, the subsequent convolution operations are themselves linear transformations of the node features, which makes the transformation here redundant. It is also difficult to ensure the accuracy of both the convolution operation and the linear transformation simultaneously during training.
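The modification described above — attending directly over the CNN features instead of first applying a learnable transform $W$ — can be sketched in NumPy. The names are illustrative, and the LeakyReLU slope of 0.2 follows the original GAT paper rather than anything stated here:

```python
import numpy as np

def attention_coefficients(h, adj, a, slope=0.2):
    """GAT-style attention over CNN node features, without the W transform.

    h:   (N, F) node features taken directly from the CNN
    adj: (N, N) adjacency with self-loops (1 = edge, 0 = no edge)
    a:   (2F,) attention vector
    Returns alpha: (N, N); each row is a softmax over node i's neighbourhood.
    """
    N = h.shape[0]
    e = np.full((N, N), -np.inf)                 # mask non-neighbour pairs
    for i in range(N):
        for j in range(N):
            if adj[i, j]:
                z = a @ np.concatenate([h[i], h[j]])
                e[i, j] = z if z > 0 else slope * z   # LeakyReLU
    e -= e.max(axis=1, keepdims=True)            # stabilise the softmax
    exp = np.exp(e)                              # exp(-inf) -> 0 keeps the mask
    return exp / exp.sum(axis=1, keepdims=True)
```

In the full GAT formulation, `h[i]` and `h[j]` would be replaced by `W @ h[i]` and `W @ h[j]`; dropping `W` is exactly the simplification argued for in the text.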

#### 3.4. Attention Graph Convolutional Network

**Algorithm 1:** Training AGCN for Image Segmentation.

## 4. Results

#### 4.1. Data Description

#### 4.2. Evaluation Metrics
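The text of this section is missing from this copy, but the result tables below report the overall precision (OP), overall accuracy (OA), the F1-score, and Cohen's κ. A minimal sketch of the latter two from a confusion matrix (function and variable names are ours, using the standard macro-averaged F1 and the standard κ definition):

```python
import numpy as np

def f1_and_kappa(confusion):
    """Macro F1-score and Cohen's kappa from a (C, C) confusion matrix
    (rows = true class, columns = predicted class)."""
    confusion = np.asarray(confusion, dtype=float)
    tp = np.diag(confusion)
    precision = tp / confusion.sum(axis=0)       # per-class precision
    recall = tp / confusion.sum(axis=1)          # per-class recall
    f1 = np.mean(2 * precision * recall / (precision + recall))
    n = confusion.sum()
    po = tp.sum() / n                            # observed agreement
    pe = (confusion.sum(axis=0) @ confusion.sum(axis=1)) / n**2  # chance agreement
    kappa = (po - pe) / (1 - pe)
    return f1, kappa
```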

#### 4.3. Implementation Details

#### 4.4. Experiments on the Fangchenggang Dataset

#### 4.4.1. Experiments Using the Complete Training Set

#### 4.4.2. Experiments on Part of the Training Set

#### 4.5. Experiments on the Pucheng Dataset

## 5. Discussion

#### 5.1. Selection of Superpixel Size

#### 5.2. Performance of AGCN for Different SNR Values

#### 5.3. Computational Complexity

## 6. Conclusions

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## References


**Figure 1.** Architecture of our SAR image segmentation approach based on the Attention Graph Convolution Network (AGCN).

**Figure 4.** Segmentation results of six models on the Fangchenggang dataset. (**a**) Input image; (**b**) ground truth; (**c**) AGCN; (**d**) GAT; (**e**) GCN; (**f**) AlexNet; (**g**) DCAE; (**h**) SegNet.

**Figure 5.** Segmentation results of four models on the Fangchenggang dataset. (**a**) Input image; (**b**) ground truth; (**c**) AGCN; (**d**) GAT; (**e**) GCN; (**f**) SegNet.

**Figure 6.** Segmentation results of four models on the Pucheng dataset. (**a**) Input image; (**b**) ground truth; (**c**) AGCN; (**d**) GAT; (**e**) GCN; (**f**) SegNet.

**Figure 7.** Effect of superpixel size on the segmentation results of AGCN on the Fangchenggang dataset.

**Figure 9.** Segmentation results of three models for a speckled image. (**a**) Speckled image (SNR = 3); (**b**) AGCN; (**c**) GAT; (**d**) GCN.

| | Farmland | River | Urban | Background | Non-Image |
|---|---|---|---|---|---|
| Train | 1835 | 3573 | 1654 | 6899 | 1825 |
| Test | 2624 | 18,042 | 3278 | 33,994 | 10,906 |

| Layer Configuration |
|---|
| Input: 32 × 32 superpixel |
| 3 × 3 Conv. 20, stride 1, LeakyReLU |
| 3 × 3 Conv. 40, stride 2, LeakyReLU |
| 3 × 3 Conv. 80, stride 2, LeakyReLU |
| 2 × 2 Max-pool |
| 100 fc, LeakyReLU |
| Output: 1 × 100 feature vector |
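Assuming "same" padding for the 3 × 3 convolutions and a stride-2 max-pool (neither is stated in the table), the spatial sizes of the feature extractor can be checked with a short sketch:

```python
def conv_out(size, kernel=3, stride=1, pad=1):
    """Output spatial size of a convolution ('same'-style padding assumed)."""
    return (size + 2 * pad - kernel) // stride + 1

size, channels = 32, 1
for stride, ch in [(1, 20), (2, 40), (2, 80)]:  # the three conv layers
    size, channels = conv_out(size, stride=stride), ch
size //= 2                     # 2 x 2 max-pool with stride 2
flat = size * size * channels  # flattened input to the fc layer
```

Under these assumptions the feature maps shrink 32 → 32 → 16 → 8 → 4, so the fully connected layer maps a 4 × 4 × 80 = 1280-dimensional vector to the 1 × 100 output feature.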

**Table 3.** Comparison of the overall segmentation performance with six state-of-the-art methods. GAT, Graph Attention Network.

| | AGCN | GAT | GCN | CNN | AlexNet | DCAE | SegNet |
|---|---|---|---|---|---|---|---|
| OP (%) | 90.55 | 89.47 | 89.53 | 81.62 | 77.29 | 78.78 | 85.85 |
| OA (%) | 90.74 | 89.79 | 89.43 | 81.10 | 76.15 | 79.75 | 84.90 |
| F1-score (%) | 90.44 | 89.43 | 89.37 | 81.03 | 76.51 | 79.08 | 85.16 |
| κ | 0.8444 | 0.8284 | 0.8266 | 0.6974 | 0.6303 | 0.6725 | 0.7579 |

**Table 4.** Comparison of the pixel-wise F1-scores for individual classes with six state-of-the-art methods.

| Class | AGCN | GAT | GCN | CNN | AlexNet | DCAE | SegNet |
|---|---|---|---|---|---|---|---|
| Urban | 68.13 | 69.13 | 70.08 | 58.46 | 45.79 | 54.86 | 61.46 |
| Farmland | 70.96 | 68.18 | 71.80 | 56.64 | 25.12 | 27.89 | 52.56 |
| River | 85.68 | 82.12 | 81.94 | 58.65 | 70.15 | 71.01 | 75.00 |
| Background | 92.59 | 92.22 | 91.24 | 87.40 | 80.69 | 83.79 | 90.43 |
| Non-image | 99.11 | 97.59 | 99.27 | 89.76 | 98.92 | 98.33 | 99.04 |

| | AGCN | GAT | GCN | CNN | SegNet |
|---|---|---|---|---|---|
| OP (%) | 86.60 | 59.87 | 85.91 | 81.64 | 84.59 |
| OA (%) | 87.07 | 63.90 | 85.97 | 80.32 | 84.35 |
| F1-score (%) | 86.55 | 57.99 | 85.97 | 80.59 | 84.31 |
| κ | 0.7862 | 0.3756 | 0.7759 | 0.6940 | 0.7508 |

**Table 6.** Comparison of the pixel-wise F1-scores for individual classes with four state-of-the-art methods.

| Class | AGCN | GAT | GCN | CNN | SegNet |
|---|---|---|---|---|---|
| Urban | 67.74 | 3.85 | 68.70 | 55.40 | 59.91 |
| Farmland | 56.96 | 0.01 | 62.67 | 50.13 | 67.05 |
| River | 78.54 | 74.50 | 75.67 | 69.05 | 73.37 |
| Background | 90.57 | 85.50 | 89.17 | 83.97 | 87.14 |
| Non-image | 96.99 | 0.00 | 97.22 | 98.42 | 98.58 |

| Methods | OP (%) | OA (%) | F1-score (%) | κ | F1-score, Urban (%) | F1-score, Farmland (%) |
|---|---|---|---|---|---|---|
| AGCN | 95.57 | 94.61 | 94.90 | 0.7604 | 79.08 | 96.90 |
| GAT | 95.16 | 93.43 | 93.93 | 0.7238 | 76.04 | 96.19 |
| GCN | 95.40 | 94.38 | 94.70 | 0.7512 | 78.28 | 96.77 |
| CNN | 92.86 | 91.60 | 92.07 | 0.6272 | 67.44 | 95.18 |
| SegNet | 94.39 | 94.07 | 94.20 | 0.7162 | 74.97 | 96.63 |

| Method | F1-score (%), SNR = 5 | κ, SNR = 5 | F1-score (%), SNR = 3 | κ, SNR = 3 |
|---|---|---|---|---|
| AGCN | 87.46 | 0.8010 | 87.27 | 0.7984 |
| GAT | 87.23 | 0.7828 | 85.34 | 0.7670 |
| GCN | 86.34 | 0.7794 | 85.13 | 0.7601 |
| CNN | 58.50 | 0.4165 | 50.31 | 0.3522 |

**Table 9.** Comparison of the run time (seconds) with five state-of-the-art algorithms on the Fangchenggang dataset.

| Method | AGCN | GAT | GCN | AlexNet | DCAE | SegNet |
|---|---|---|---|---|---|---|
| Time (s) | 168.3 | 169.8 | 167.2 | 480.0 | 4219.0 | 3380.0 |

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Ma, F.; Gao, F.; Sun, J.; Zhou, H.; Hussain, A. Attention Graph Convolution Network for Image Segmentation in Big SAR Imagery Data. *Remote Sens.* **2019**, *11*, 2586.
https://doi.org/10.3390/rs11212586
