Article

Automatic Extraction of Marine Aquaculture Zones from Optical Satellite Images by R3Det with Piecewise Linear Stretching

1 Marine Science and Technology College, Zhejiang Ocean University, Zhoushan 316000, China
2 School of Fishery, Zhejiang Ocean University, Zhoushan 316000, China
3 Department of Marine Resources and Energy, Tokyo University of Marine Science and Technology, Tokyo 108-8477, Japan
4 National Engineering Research Center for Marine Aquaculture, Zhejiang Ocean University, Zhoushan 316000, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Remote Sens. 2022, 14(18), 4430; https://doi.org/10.3390/rs14184430
Submission received: 5 August 2022 / Revised: 31 August 2022 / Accepted: 31 August 2022 / Published: 6 September 2022

Abstract: In recent years, the development of China's marine aquaculture has posed serious challenges to the marine ecological environment. It is therefore important to classify and extract aquaculture zones and map their spatial distribution to provide a reference for aquaculture management. However, given the complexity of the marine aquaculture environment, traditional remote sensing techniques and deep learning have so far struggled to achieve a breakthrough in the extraction of large-scale aquaculture zones. This study proposes a method combining piecewise linear stretching with R3Det to classify and extract raft aquaculture and cage aquaculture zones. Piecewise linear stretching adjusts the grayscale values to reduce the influence of the complex aquaculture background on extraction accuracy, effectively highlight the appearance characteristics of the aquaculture zones, and improve image contrast. On this basis, the aquaculture zones are classified and extracted by R3Det. Taking the aquaculture zones of Sansha Bay as the research object, the experimental results showed that the accuracy of R3Det in extracting the number of raft aquaculture and cage aquaculture zones was 98.91% and 97.21%, respectively, and the extraction precision of the aquaculture area reached 92.08%. The proposed method classifies and extracts large-scale marine aquaculture zones more simply and efficiently than common remote sensing techniques.

Graphical Abstract

1. Introduction

In recent years, with the continuous increase in fishing intensity, marine aquatic resources have gradually declined. To meet the demand for aquatic products, the marine aquaculture industry has been extensively developed in many countries [1,2]. According to global aquaculture production statistics for 2022, Asia accounts for 91.61% of global aquaculture output, and China accounts for 62.77% of Asia's total, giving the aquaculture industry an important position in China [3]. As an important part of this industry, marine aquaculture has expanded rapidly and brought great challenges to the marine ecological environment [4]. The geographical and environmental conditions of marine aquaculture zones weaken the exchange between internal and external water bodies. Meanwhile, pollutants such as fish excrement, residual bait, and antibiotics exceed the environmental carrying capacity, causing serious pollution of marine waters [5,6,7]. Moreover, harsh natural conditions such as typhoons constantly threaten the development of marine aquaculture: the huge waves they generate can deal devastating blows to aquaculture zones and cause incalculable economic losses [8,9]. Reasonable planning of aquaculture zones, control of aquaculture scale, and reduction in aquaculture density can reduce aquaculture risks and improve economic efficiency. Therefore, it is important to accurately obtain the spatial distribution, number, and area of marine aquaculture zones [10].
Marine aquaculture zones are widely distributed, numerous, and set in complex environments, which makes accurate information difficult to obtain. The traditional method of manually determining the number and area of aquaculture zones is time-consuming and labor-intensive. High-resolution remote sensing satellite images feature a wide imaging range and high imaging accuracy; thus, they have obvious advantages for the extraction of large-scale and small target objects and have been widely applied to the extraction of marine aquaculture zones [11,12,13]. Jayanthi adopted visual interpretation to extract aquaculture zones along the southeastern coast of India and statistically analyzed their changes [14]. Seto and Fragkias adopted visual interpretation to extract information on aquaculture zones from QuickBird remote sensing images to investigate the impact of the Ramsar Convention on Wetlands on the aquaculture industry; they found that its implementation did not slow the development of aquaculture in Ramsar wetlands [15]. Although visual interpretation achieves high extraction accuracy, it is extremely time-consuming, and its accuracy depends on the experience of the interpreter. The method is strongly subjective and is not suitable for the extraction and quantitative analysis of large-scale aquaculture zones. To further improve extraction accuracy and efficiency, experts and scholars have proposed methods such as information extraction based on spatial structure [16,17], ratio index analysis [18], correspondence analysis [19], and object-oriented information extraction [20,21,22]. These methods effectively extract the aquaculture zone by classifying the spatial, spectral, texture, and shape features of objects. Although traditional remote sensing technology can achieve good results in the extraction of a single aquaculture type over a small range, the aquaculture environment becomes increasingly complex as the aquaculture range expands. Meanwhile, traditional extraction methods are affected by factors such as salt-and-pepper noise, "same object, different spectra", and "same spectrum, different objects", which reduce the extraction accuracy.
In recent years, deep learning has achieved great success in the field of computer vision. Because of its generalization and robustness, it has gradually been applied to aquaculture extraction [23,24]. Faced with high-density, large-scale marine aquaculture zones and aquaculture sea conditions with complex spectral information, deep learning offers better feature analysis capabilities and can achieve higher extraction accuracy. For example, Cui et al. improved the U-Net network structure by adding a pyramid up-sampling module and a squeeze-excitation module (PSE), which solved the problem of fuzzy boundaries; the network was applied to extract the raft aquaculture zone east of Lianyungang, China [25]. Liu et al. proposed a multisource feature fusion target extraction method based on DeepLabv3, which could effectively extract marine aquaculture zones with weak signals [26]. Fu et al. proposed a hierarchical cascade convolutional neural network (HCNet), which could effectively extract multiscale information from images and map marine aquaculture zones more finely [27]. On the basis of Sentinel-2 MultiSpectral Instrument (MSI) image data, an improved U-Net model reduced the edge-sticking phenomenon and improved the extraction accuracy of the aquaculture zone [28]. However, to improve extraction accuracy, most existing research has taken raft aquaculture in a specific zone as the research object and realized extraction by improving the network structure. This approach places high professional demands on researchers, and its extraction range is limited, making it difficult to apply to statistics and monitoring of large-scale aquaculture zones. In addition, marine aquaculture management departments usually need to statistically analyze and monitor several aquaculture types (mainly rafts and cages), so the extraction of a single aquaculture type provides little substantive help for the statistical management of aquaculture zones.
As an improved single-stage detector, R3Det offers higher extraction speed and accuracy, and its rotated bounding box fits the extraction target more closely [29]. In a previous study, Ma et al. applied it to the extraction of cage aquaculture zones in Fujian Province and achieved good results for large-scale, single-type aquaculture zones [30]. However, the extraction of cage aquaculture zones is still affected by similar features in the complex marine environment. For example, some raft aquaculture and cage aquaculture zones have similar characteristics, which reduces the extraction accuracy of the model. Meanwhile, statistics on a single type of cage aquaculture have little practical effect on the management of marine aquaculture. It is therefore crucial to reduce the influence of the aquaculture background and realize the classification and extraction of different aquaculture types, and how to further improve the extraction accuracy in a simple way remains to be investigated.
As a high-efficiency and low-cost image processing technology, image enhancement can highlight important details according to qualitative criteria and thereby improve target extraction [31,32,33,34]. In particular, methods that enhance image contrast using the image histogram have gradually gained acceptance because they process images more adaptively [35,36]. Therefore, this study proposes combining histogram-based piecewise linear stretching with R3Det to extract marine aquaculture zones. The image of the offshore aquaculture zone is enhanced by piecewise linear stretching, and the aquaculture zones are then classified and extracted by R3Det. It was found that piecewise linear stretching can effectively compress the grayscale range of raft aquaculture and cage aquaculture zones and reduce the color differences within raft aquaculture zones caused by different aquaculture periods. Meanwhile, it can improve the contrast of the image and reduce the influence of the aquaculture background on extraction accuracy. The experimental results indicate that the proposed method is a simple and efficient way to improve the classification and extraction accuracy of offshore aquaculture zones.

2. Study Area and Data

2.1. Study Area

Sansha Bay is located in the northeastern part of Fujian Province (26°30′–26°58′N, 119°26′–120°10′E) (Figure 1). It is a semi-enclosed, world-class natural deep-water port bounded by the Dongchong and Jianjiang Peninsulas, with a water area of 714 square kilometers [37,38]. Moreover, Sansha Bay is an important aquaculture zone in China, where cage aquaculture and raft aquaculture are the two main aquaculture types (Figure 2). Traditional aquaculture cages are mainly used for fish farming and are composed of rigid frames (wood or steel), flexible nets, floats (EPS floating balls), and anchors. The cages float on the water surface and appear gray-white in remote sensing images (Figure 2c) owing to the cage frames and floats. Raft aquaculture mainly uses floats and ropes to form floating rafts, which are fixed to the seabed with cables so that the seedlings of seaweed and sessile animals (such as mussels) are attached to slings suspended from the floating raft. The raft aquaculture zone appears as dark-gray bands in the image (Figure 2d), and the tone within a single aquaculture zone is uniform, although the depth of tone varies between raft aquaculture zones at different aquaculture stages. In addition, there are many estuaries and islands in Sansha Bay, and sediment accumulates in the near-coastal zone, resulting in a complex sea environment.

2.2. Data and Preprocessing

Optical remote sensing images provide spectral information and capture ground objects with different spectral characteristics. Gaofen-6 (GF-6) is a low-orbit optical remote sensing satellite equipped with a high-resolution multispectral sensor, PMS (panchromatic band spatial resolution of 2 m, multispectral band spatial resolution of 8 m), and a medium-resolution wide-swath multispectral sensor, WFV (multispectral band spatial resolution of 16 m). It enables global observation, and its image data effectively cover the coastal areas of China; the specific parameters of the two sensors are shown in Table 1. Compared with Gaofen-1, Ziyuan-3, and other Gaofen-series satellites, GF-6 PMS has an observation swath of 95 km. This large coverage helps avoid differences in ground object reflectivity caused by different image acquisition times. In addition, GF-6 and GF-1 carry high-resolution multispectral sensors with the same spatial resolution, enabling the 2 m/8 m sensors to achieve a 1 day revisit period worldwide.
The raft aquaculture zone has different characteristics at different aquaculture stages: zones with a longer aquaculture period are more distinct from the seawater background, while the characteristics of the cage aquaculture zone do not change with the aquaculture stage. To ensure that the raft aquaculture zone had obvious characteristics in the remote sensing images and to avoid cloud coverage, this study selected the panchromatic and multispectral PMS images of 17 April 2020. All images were preprocessed with ENVI 5.3 software: the panchromatic images underwent radiometric calibration and orthorectification, and the multispectral images underwent radiometric calibration, atmospheric correction, and orthorectification. In this way, the effects of unfavorable factors such as the sensor and the atmosphere could be eliminated [39,40]. Meanwhile, to ensure a good visual effect during data processing, the multispectral images were fused with the panchromatic band to a resolution of 2 m in ENVI 5.3.

3. Research Methods

3.1. Extraction Process from Satellite Images

The operation process of extracting aquaculture zones in this study consisted of three stages: image processing, model training, and result analysis (Figure 3). In the first stage, the effects of factors such as the sensor and the atmosphere were eliminated by image preprocessing, and the NDWI (normalized difference water index, which can efficiently separate water and land over a large sea area) was constructed to separate water and land in the fused images, thus eliminating the influence of inland features on the extraction accuracy of aquaculture zones. However, it was found in the experiment that some cages were rejected as non-water bodies because the aquaculture cages, made of materials such as wooden boards or steel, float on the sea surface and their spectral reflectance differs from that of water. It was therefore necessary to repair this part of the image to ensure the integrity of the information to be identified; the image was then stretched by piecewise linear stretching. In addition, in the subsequent experiments, only true-color images composed of the red, green, and blue bands were used as the data source. In the second stage, to ensure the credibility of the experiment, the research area was divided into a training set and a test set, and the training samples were expanded for model training. In the third stage, the test set was input into the trained model, the resulting images were obtained, and the accuracy evaluation and comparative analysis of the resulting images under different conditions were performed.
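To make the water-land separation step concrete, the following minimal sketch computes the NDWI from the green and near-infrared bands and masks out land pixels. It is an illustration only: the band ordering, the small stabilizing constant, and the threshold of 0 are assumptions rather than values reported in this study, and the threshold would be tuned per scene in practice.

```python
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    """Compute the NDWI = (Green - NIR) / (Green + NIR) and return a boolean
    water mask; pixels above the threshold are treated as water."""
    green = green.astype(np.float32)
    nir = nir.astype(np.float32)
    ndwi = (green - nir) / (green + nir + 1e-6)  # small constant avoids division by zero
    return ndwi > threshold

# Hypothetical usage with a fused 4-band GF-6 array ordered B, G, R, NIR:
# water = ndwi_water_mask(image[1], image[3])
# image[:, ~water] = 0  # suppress inland features before stretching
```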

3.2. Piecewise Linear Stretching Based on Histogram

An image histogram reflects the grayscale value distribution that represents the occurrence frequency of each grayscale value [41]. The piecewise linear stretching based on a histogram can highlight the region of interest by changing the grayscale value of the image pixel, enhancing the image contrast, and improving the image quality [42,43]. Sansha Bay aquaculture zones were mainly divided into three types: cage aquaculture zone, raft aquaculture zone, and non-aquaculture zone. Among them, the raft aquaculture zone had weak texture characteristics, and there were differences in aquaculture periods and zones. Although cage aquaculture had strong texture characteristics, the differences between some cages and raft aquaculture zones were small. To intuitively reflect the grayscale characteristics of three different aquaculture zones, we statistically evaluated the grayscale values of the different types of aquaculture zones in Figure 4, and the results are shown in Table 2. In the green band and blue band, the cage aquaculture zone had the largest average grayscale value, followed by the non-aquaculture zone, while the raft aquaculture zone had the smallest average grayscale value due to the influence of underwater aquaculture species; in the red band, the average grayscale value of the raft aquaculture zone was higher than that of the non-aquaculture zone. Therefore, to better reduce the complex appearance characteristics of the raft aquaculture zone and the influence of the aquaculture background, the effect of image recognition was improved by enhancing the contrast of the image. In this study, when adjusting the piecewise transformation points of piecewise linear stretching, the transformation points were set between the average grayscale value of the raft aquaculture zone and the non-aquaculture zone, as well as between that of the non-aquaculture zone and the cage aquaculture zone. The schematic of piecewise linear stretching is shown in Figure 5, and the corresponding calculation is shown in Equation (1).
$$
g(y) =
\begin{cases}
\dfrac{c}{a} \, f(x), & 0 \le f(x) < a \\[4pt]
\dfrac{d - c}{b - a} \left[ f(x) - a \right] + c, & a \le f(x) \le b \\[4pt]
\dfrac{255 - d}{255 - b} \left[ f(x) - b \right] + d, & b < f(x) \le 255
\end{cases}
\tag{1}
$$
To compress the grayscale ranges of raft aquaculture and cage aquaculture, according to the actual stretching effect, the value of c should lie within [0, a), and the value of d should lie within (b, 255]. The images before and after piecewise linear stretching are shown in Figure 6. From the comparison of the enlarged areas in the lower right corners of the two panels, it can be seen that, after piecewise linear stretching, the grayscale interval of the raft aquaculture zone was clearly compressed and the complex appearance characteristics caused by the aquaculture cycle were reduced, while the grayscale of the aquaculture cages was significantly increased.
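The following sketch implements Equation (1) with NumPy, together with one possible way of placing the transformation points from the zone statistics of Table 2. The midpoint rule and the numeric values in the usage comment are assumptions for illustration; the study states only that the points were set between the average grayscale values of the raft, non-aquaculture, and cage zones.

```python
import numpy as np

def choose_transform_points(band, raft_mask, nonaqua_mask, cage_mask):
    """Place segment points a and b between the mean grayscale values of the
    raft, non-aquaculture, and cage zones (cf. Table 2). The masks are boolean
    arrays marking sample pixels of each zone type; the midpoint rule is an
    illustrative assumption."""
    raft_mean = band[raft_mask].mean()
    nonaqua_mean = band[nonaqua_mask].mean()
    cage_mean = band[cage_mask].mean()
    a = (raft_mean + nonaqua_mean) / 2.0  # between raft and non-aquaculture
    b = (nonaqua_mean + cage_mean) / 2.0  # between non-aquaculture and cage
    return a, b

def piecewise_linear_stretch(band, a, b, c, d):
    """Apply the three-segment stretch of Equation (1) to an 8-bit band.
    Requires 0 < a < b < 255; choosing c in [0, a) and d in (b, 255]
    compresses the raft and cage grayscale ranges, as described in the text."""
    f = band.astype(np.float32)
    g = np.empty_like(f)
    low, mid, high = f < a, (f >= a) & (f <= b), f > b
    g[low] = (c / a) * f[low]
    g[mid] = ((d - c) / (b - a)) * (f[mid] - a) + c
    g[high] = ((255 - d) / (255 - b)) * (f[high] - b) + d
    return np.clip(g, 0, 255).astype(np.uint8)

# Hypothetical example with c < a and d > b:
# stretched = piecewise_linear_stretch(band, a=60, b=160, c=30, d=220)
```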

3.3. Dataset

In this study, the aquaculture zone of Sansha Bay was used as the dataset and divided into a training set and a test set. The training set was used to train the model parameters, and the test set was used to evaluate the generalization ability of the model. Traditional dataset division mainly favors the training set to ensure the training effect of the model. However, aquaculture zones differ in characteristics such as color and texture, so to ensure the credibility of the model evaluation, the test samples should contain as many characteristics of the identification target as possible. Therefore, taking the aquaculture zones of Sansha Bay as the unit, the ratio of the training set to the test set was set to 0.4:0.6 in this study [44,45,46]; the division results are shown in Figure 7a. To ensure the quality of the training samples and alleviate the impact of their shortage on model training, this study used Python to divide the training image into 135 tiles of 800 × 800 pixels as the data source (Figure 7b), and the labelme software was used to annotate the training set (Figure 7c). The training set was then augmented by image translation, image flipping (horizontal, vertical, diagonal), and image brightness adjustment (see the sketch below). Through data augmentation, the original 135 images were expanded tenfold, yielding 1350 training samples.
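As a rough illustration of this augmentation step, the sketch below produces flipped, translated, and brightness-adjusted copies of a training tile with OpenCV and NumPy. The shift and brightness offsets are assumptions chosen for illustration; in the actual workflow, the rotated-box annotations would also have to be transformed alongside each image.

```python
import cv2
import numpy as np

def adjust_brightness(tile, offset):
    """Shift brightness by `offset` gray levels, saturating at [0, 255]."""
    return np.clip(tile.astype(np.int16) + offset, 0, 255).astype(np.uint8)

def augment(tile):
    """Return augmented copies of one 800 x 800 training tile: three flips,
    a translation, and two brightness variants (offsets are illustrative)."""
    h, w = tile.shape[:2]
    shift = np.float32([[1, 0, 40], [0, 1, 40]])  # 40 px right and down
    return [
        cv2.flip(tile, 1),    # horizontal flip
        cv2.flip(tile, 0),    # vertical flip
        cv2.flip(tile, -1),   # diagonal flip (both axes)
        cv2.warpAffine(tile, shift, (w, h),
                       borderMode=cv2.BORDER_REPLICATE),  # translation
        adjust_brightness(tile, 30),   # brighter
        adjust_brightness(tile, -30),  # darker
    ]
```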
According to the research conditions, the model training environment was as follows: Ubuntu 16.04 + Intel® Core™ i9-10900X CPU (Intel, Santa Clara, CA, USA) + RTX 2060 Super + Python 3.5 + CUDA 10.0 + opencv-python 4.1.1.26 + tensorflow-plot 0.2.0 + tensorflow-gpu 1.13 + tqdm 4.54.0 + shapely 1.7.1 + Cython 0.29.23, and the model training parameters are shown in Table 3.

3.4. R3Det

The R3Det detector [29] is a single-stage detector proposed by Yang et al., created by adding a feature refinement module (FRM) to RetinaNet [47]. R3Det is mainly composed of two parts: a backbone network and classification-and-regression subnetworks (classification and bounding box), where the backbone builds a feature pyramid network (FPN) [48] on ResNet [49] through top-down paths and lateral connections. In this way, a rich multiscale feature pyramid is constructed from a single-resolution input image to detect objects at different scales, thereby efficiently extracting features from images. Each layer of the backbone is connected to a classification and regression subnetwork for object classification and location prediction. Horizontal anchors achieve a higher recall rate, while rotated anchors give more accurate detection in dense scenes. Thus, R3Det uses horizontal anchors in the first stage to obtain faster speed and a higher recall rate, and it uses refined rotated anchors in the refinement stage to detect objects in dense scenes. Meanwhile, to avoid the feature misalignment caused by changes in bounding-box position, the FRM re-encodes the position information of the refined bounding boxes to the corresponding feature points and reconstructs the feature map, realizing accurate detection of the target. The model structure is shown in Figure 8.

3.5. Confusion Matrix

To verify the accuracy of the extraction results in the Sansha Bay aquaculture zone, this study adopted the confusion matrix (Table 4) for evaluation. To ensure the generalization and authenticity of the test results, the remote sensing images in the test set were used as the data source to avoid the influence of the training set on the extraction accuracy of the model. On the basis of the extracted result images, this study used higher-resolution Google satellite images to visually interpret the extraction targets and obtain the relevant aquaculture information, ensuring the accuracy and reliability of the evaluation data [50]. Three commonly used evaluation indicators, namely precision, recall, and F-measure, were used to evaluate the extraction precision of the model. Precision and recall characterize a single class, while F-measure combines precision and recall and can be used for the overall evaluation of model accuracy: the higher the F-measure, the more effective the classification model. Therefore, F-measure was used as the indicator to evaluate the accuracy of the model in extracting aquaculture zones. The evaluation indicators are calculated as follows:
$$\text{Precision} = \frac{TP}{TP + FP} \tag{2}$$
$$\text{Recall} = \frac{TP}{TP + FN} \tag{3}$$
$$\text{F-measure} = \frac{2 \times \text{Recall} \times \text{Precision}}{\text{Recall} + \text{Precision}} \tag{4}$$
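For reference, Equations (2)-(4) reduce to a few lines of Python when computed from confusion-matrix counts; the counts in the usage comment are hypothetical.

```python
def evaluation_metrics(tp, fp, fn):
    """Precision, recall, and F-measure (Equations (2)-(4)) from
    true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * recall * precision / (recall + precision)
    return precision, recall, f_measure

# Hypothetical counts for illustration:
# p, r, f = evaluation_metrics(tp=980, fp=12, fn=20)
```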

4. Experimental Results and Analysis

4.1. Extraction Results

The extraction results for the marine aquaculture zones in the test set, based on piecewise linear stretching and R3Det, are shown in Figure 9. Raft and cage aquaculture zones are marked with bright-yellow and dark-yellow bounding boxes, respectively. In addition, "SC" denotes the predicted score of cage aquaculture, "RF" denotes the predicted score of raft aquaculture, and scores range from 0 to 1, with larger scores indicating a stronger correspondence between the bounding box and the real ground object. The "angle" denotes the rotation angle of the bounding box relative to the horizontal.
To intuitively explain the factors influencing the extraction accuracy, this study examined the missed and misidentified objects in the test images. From the number and distribution of missed and misidentified targets in Figure 9, the aquaculture zones with poor extraction results were mainly distributed in coastal zones, especially at the intersections with rivers, where there were many omissions and misidentifications. For the raft aquaculture zone, the number of missed zones was lower than the number of misidentified zones. Field investigations showed that the raft aquaculture zones that were not extracted were mainly small zones with inconspicuous appearance characteristics, while the zones mistakenly identified as raft aquaculture were mainly aquaculture cages with low grayscale values: farmers usually cover the tops of the cages with a layer of black bird netting to prevent birds from catching fish, which lowers the grayscale value of the cage aquaculture zone and affects the accuracy of raft aquaculture extraction. For the extraction of aquaculture cages, the number of missed cages was much larger than the number of misidentified cages. Misidentified aquaculture cages were mainly caused by ships at sea and were mainly distributed in coastal zones. The missed cages mainly comprised cages with low grayscale values and cages in small aquaculture zones with oversaturated brightness. The low grayscale values were again due to the black bird netting over the cages, which made the zones resemble raft aquaculture. Like raft aquaculture, cage aquaculture showed many omissions in smaller zones, mainly because the limited image resolution caused the loss of target texture features. Moreover, owing to the influence of waves, the brightness around some cages was oversaturated, which increased the number of cages missed during extraction.

4.2. Comparisons of Accuracy of Different Stretching Conditions

To verify that piecewise linear stretching could effectively highlight the cage and raft aquaculture zones and improve image contrast, this study compared piecewise linear stretching with several commonly used image stretching methods: square root stretching, equalization stretching, Gaussian stretching, and logarithmic stretching. R3Det was used to classify and extract the aquaculture zones from the images stretched by the different methods and from the original image; the comparison results are shown in Table 5. For all stretched images, the F-measures of the cage and raft aquaculture zones extracted by R3Det were higher than 90%, and the F-measure of the raft aquaculture zone was higher than that of the cage aquaculture zone. In the cage extraction results, the F-measure after logarithmic stretching was lower than the unstretched result, and the F-measures after square root stretching and Gaussian stretching were higher than the unstretched result, but the overall improvement was not obvious. The F-measure after equalization stretching was higher than the unstretched result, and the F-measure after piecewise linear stretching was the largest, giving the best cage extraction. Additionally, the recall was smaller than the F-measure, showing that the main factor affecting the accuracy was the missed extraction of cages. In the raft extraction results, the F-measures of square root stretching, logarithmic stretching, and Gaussian stretching were all lower than the unstretched result, with square root stretching and logarithmic stretching showing lower recall. Equalization stretching and piecewise linear stretching both extracted the cultured rafts well, with piecewise linear stretching performing best.
Figure 10 shows the results of R3Det extracting aquaculture zones under different stretching conditions. To visually show the different extraction effects, annotations were added to the figure; the changes before and after annotation are shown in Figure 10g,h. The red rectangles in Figure 10 mark wrongly extracted aquaculture zones, and the green rectangles mark aquaculture zones not completely enclosed by the bounding box. Compared with the unstretched image (Figure 10f), the overall brightness of the square root stretching (Figure 10a) and logarithmic stretching (Figure 10b) results was significantly improved, but the contrast between the aquaculture zone and the aquaculture background was not. Meanwhile, Gaussian stretching (Figure 10c) did not significantly improve the overall brightness, but it compressed the grayscale range of the aquaculture zone to a certain extent, reducing its complex characteristics. Although equalization stretching (Figure 10d) enhanced the image brightness and the contrast between the aquaculture zone and the background, the noise contrast in the background also increased. By contrast, piecewise linear stretching not only improved the brightness and contrast of the image but also reduced the complex features caused by different aquaculture periods. According to the number of annotations in Figure 10, R3Det extracted the aquaculture zone better under piecewise linear stretching than under the other stretching methods.

4.3. Comparisons of Different Models

Previous research confirmed the high accuracy and efficiency of R3Det for extracting aquaculture cages [30]. However, its applicability to extracting raft aquaculture and cage aquaculture simultaneously still requires verification. R2CNN [51] is a two-stage detector based on Faster R-CNN [52] for detecting text in natural scenes in any direction; its high accuracy and the close fit between its inclined bounding box and the target give R2CNN an advantage in scene text extraction. RetinaNet is a single-stage detector that takes ResNet-101-FPN [48] as the backbone and solves the class-imbalance problem with a "focal loss" function. Meanwhile, adding a rotated rectangular box to RetinaNet [29] improves the fit between the box and the extraction target, further improving the accuracy of the single-stage detector in extracting objects from remote sensing images. In this study, R2CNN, RetinaNet, and R3Det were used to simultaneously extract the raft and cage aquaculture zones for comparative analysis. The classification and extraction accuracy of the different models is shown in Table 6. Although the F-measures of all three models for extracting cage and raft aquaculture zones exceeded 95%, R3Det achieved a better extraction effect than R2CNN and RetinaNet. Figure 11 shows partial extraction results of the three models under piecewise linear stretching: 3, 11, and 23 bounding boxes failed to fit the aquaculture zone for R3Det, R2CNN, and RetinaNet, respectively. The bounding boxes extracted by R3Det thus fit the aquaculture zones better, making R3Det more reliable for the statistical analysis of aquaculture zones.

5. Discussion

5.1. Importance of Piecewise Linear Stretching for Extraction of Aquaculture Zones

The piecewise linear stretching based on the image histogram strongly promotes the extraction of coastal aquaculture. Influenced by factors such as aquaculture type, aquaculture cycle, human intervention, and coastal and estuarine sediments, it is otherwise difficult to further improve the extraction accuracy of marine aquaculture zones. Piecewise linear stretching exploits the different grayscale values of the aquaculture zones and the aquaculture background, setting separate thresholds between the raft aquaculture, non-aquaculture, and cage aquaculture zones, which reduces the number of gray levels in the raft aquaculture zone. In this way, the grayscale values of the cage aquaculture and nearshore sediment deposition zones are increased, while the grayscale ranges of the raft aquaculture, cage aquaculture, and nearshore sediment deposition zones are compressed, as shown in Figure 6. Accordingly, the features of raft and cage aquaculture zones are highlighted, the contrast of the images is improved, and the influence of nearshore and estuarine zones on the classification and extraction of marine aquaculture zones is reduced. Compared with methods that improve extraction accuracy through remote sensing techniques or network modifications, this approach is simpler and more feasible, covers a wider extraction range, and has higher application value. Traditional remote sensing methods such as object-oriented extraction and threshold segmentation suffer interference from factors such as sediment, suspended sediment concentration, and chlorophyll concentration in the aquaculture background, making it difficult to effectively distinguish the aquaculture zone from the background and leading to a high false-positive rate. As the area increases, the phenomenon of "same spectrum, different objects" gradually increases, making it even more difficult for traditional methods to extract the target. Improving the extraction accuracy by modifying the deep learning network structure not only places high technical demands on researchers but is also difficult to implement, and at this stage it has not significantly improved the extraction of marine aquaculture zones. According to the extraction results for raft and cage aquaculture in Sansha Bay, the precision, recall, and F-measure of R3Det were all greater than 95% under piecewise linear stretching. As a natural aquaculture port, Sansha Bay covers 714 square kilometers, with many estuaries and larger waves where it connects to the open sea. The successful application of the proposed method in this complex environment shows that the method is not restricted by area or by a specific aquaculture environment, and it can provide guidance for marine management departments.

5.2. Importance of R3Det for Extraction of Aquaculture Zones

Under piecewise linear stretching, R3Det extracts raft and cage aquaculture zones simultaneously with high accuracy, and it is more accurate and faster than advanced single-stage and two-stage detectors in the field of computer vision. As an advanced two-stage detector, R2CNN improves on Faster R-CNN, maintaining the extraction accuracy of the two-stage detector while supporting inclined bounding boxes. However, marine aquaculture zones have large aspect ratios and dense arrangements, and the results identified by R2CNN exhibit many "non-fitting" cases, leaving more aquaculture background inside the bounding box. This not only increases the impact of background information on classification accuracy but also reduces the precision of the area extraction. Although RetinaNet introduces a "focal loss" function to solve the class-imbalance problem, the extraction results (Table 6, Figure 11) show that, in the classification and extraction of aquaculture zones, RetinaNet has lower extraction accuracy and more "non-fitting" cases, making its results less reliable for statistical analysis of aquaculture zones. By combining horizontal and rotated boxes, R3Det improves both detection speed and accuracy. Furthermore, the feature map can be reconstructed by the added FRM to achieve feature alignment, ensuring that the bounding box fits the border of the aquaculture zone and reducing the influence of non-aquaculture zones.

5.3. Influence of the Bounding Box on the Aquaculture Zone

This study verified that piecewise linear stretching and R3Det perform well in classifying and extracting aquaculture zones. However, the bounding box in the extraction results of R3Det cannot completely fit the actual aquaculture boundary, so when the area of the aquaculture zone is computed from the coordinate information of the bounding box, the result differs from the actual aquaculture area. To evaluate the gap between the area obtained by the proposed method and the actual area, eight zones of 4 km2 each were randomly selected in the test result image for comparative analysis; the selected zones are marked in Figure 9. In the extraction results, each bounding box had four corresponding coordinate points, from which the area represented by the bounding box could be computed. Meanwhile, the actual area was obtained from the vectorized data of the different aquaculture types in the eight regions.
Figure 12 shows the extraction and vectorization results for the selected eight regions. The extraction results of the proposed method effectively avoided the "adhesion" phenomenon in the aquaculture zone, and the bounding boxes covered the aquaculture zones well. The detailed results are presented in Table 7. Except for the raft aquaculture zone in region A, the areas of the cage and raft aquaculture zones extracted by R3Det were larger than the vectorized results, mainly because the bounding box contained some non-aquaculture area where it did not fit the actual aquaculture zone completely. In addition, the area extraction precision for cages and rafts differed markedly among the eight regions, as the non-aquaculture area inside the bounding box is more consequential for smaller aquaculture zones. The precision in region G was −189.67% because the cages in this region had low grayscale values and were mistakenly identified as raft aquaculture zones. To ensure the validity of the precision estimate, all cage and raft areas in the eight regions were pooled as the data source. The area extraction precision of cage aquaculture and raft aquaculture was 92.48% and 91.88%, respectively, and the extraction precision of the total aquaculture area was 92.08%. Therefore, although the bounding boxes of R3Det could not completely fit the actual aquaculture boundaries, using the area represented by the bounding boxes for statistical analysis of the aquaculture area achieved an accuracy exceeding 90%, indicating high reliability.
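Although the exact area computation used in the study is not spelled out, one straightforward way to turn the four corner coordinates reported for each rotated bounding box into an area is the shoelace formula, as sketched below; the corner values in the example are hypothetical.

```python
def rotated_box_area(corners):
    """Area of a rotated bounding box from its four corner coordinates
    (in map units such as meters), via the shoelace formula. Corners must
    be ordered along the polygon outline (clockwise or counterclockwise)."""
    area = 0.0
    n = len(corners)
    for i in range(n):
        x1, y1 = corners[i]
        x2, y2 = corners[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Hypothetical example: a 100 m x 50 m box yields 5000 m^2 at any rotation.
# rotated_box_area([(0, 0), (100, 0), (100, 50), (0, 50)])  # -> 5000.0
```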

5.4. Problems and Prospects

The method proposed in this study still has some shortcomings. When separating land and water to reduce the impact of complex inland features on aquaculture zones, inland water bodies affect the results, and some aquaculture cages are eliminated as non-water bodies; these patches must be repaired, which inevitably increases the workload. Meanwhile, when performing the linear stretching of the image, the thresholds are set manually according to the histogram of pixel counts at different gray levels, and they may differ slightly from the optimal settings, which affects the stretching effect and reduces the extraction accuracy. Furthermore, a rectangular bounding box was used to represent the aquaculture zone in the results extracted by R3Det. Although the fit between the R3Det bounding box and the object was the best among rotating object detection models, the result was still affected by non-aquaculture zones.
In future work, we will try to extract the boundary of the aquaculture zone according to the grayscale difference of the object within the bounding box, establish the region of interest of the aquaculture zone, and further improve the extraction precision of the aquaculture area. With the improvement of multispectral satellite resolution, the aquaculture industry can develop more effectively by establishing more reasonable aquaculture plans based on aquaculture information extracted from remote sensing images. In addition, marine litter is an important factor affecting the marine environment, and its efficient removal is of great significance to the protection of marine ecology [53,54]. In the future, we will investigate the performance of the proposed method in identifying marine litter.

6. Conclusions

This study took the aquaculture zone of Sansha Bay in 2020 as the research area and proposed a new method for marine aquaculture zone extraction. This method highlighted the features of the aquaculture zone through piecewise linear stretching, which further improved the classification and extraction accuracy of R3Det for marine aquaculture zones. The conclusions are as follows:
  • Compared with images stretched by square root stretching, equalization stretching, Gaussian stretching, and logarithmic stretching, as well as unstretched images, piecewise linear stretching more effectively highlighted the appearance characteristics of the raft and cage aquaculture zones and improved image contrast, achieving the highest accuracy for both raft and cage extraction.
  • Compared with R2CNN and RetinaNet, R3Det showed higher extraction accuracy for marine aquaculture zones under piecewise linear stretching. The overall extraction accuracy of R3Det for Sansha Bay raft aquaculture and cage aquaculture was 98.91% and 97.21%, respectively, and the extraction precision of the total aquaculture area was 92.08%.
  • The method proposed in this study is not limited by factors such as specific aquaculture zones and model structure and can classify and extract marine aquaculture zones under large-scale and complex aquaculture backgrounds. The study results can provide effective assistance for relevant marine aquaculture management departments to conduct large-scale aquaculture monitoring and scientific sea use, thus achieving sustainable development of the marine aquaculture industry.

Author Contributions

Conceptualization, Y.M. and X.Q.; methodology, Y.M.; software, X.Q.; validation, P.Z. and H.H.; investigation, F.G.; resources, D.F.; data curation, D.F.; writing—original draft preparation, Y.M. and X.Q.; writing—review and editing, C.Y., L.W., F.G. and D.F.; visualization, Y.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Projects (2020YFE0200100) and the National Natural Science Foundation of China (42076213).

Acknowledgments

The authors would like to thank all reviewers and editors for their comments on this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xia, Z.; Guo, X.; Chen, R. Automatic extraction of aquaculture ponds based on Google Earth Engine. Ocean Coast. Manag. 2020, 198, 105348. [Google Scholar] [CrossRef]
  2. Akber, M.A.; Aziz, A.A.; Lovelock, C. Major drivers of coastal aquaculture expansion in Southeast Asia. Ocean Coast. Manag. 2020, 198, 105364. [Google Scholar] [CrossRef]
  3. FAO. The State of World Fisheries and Aquaculture; FAO: Rome, Italy, 2022; Volume 4, pp. 40–41. [Google Scholar]
  4. Clavelle, T.; Lester, S.E.; Gentry, R.; Froehlich, H.E. Interactions and management for the future of marine aquaculture and capture fisheries. Fish Fish. 2019, 20, 368–388. [Google Scholar] [CrossRef]
  5. Marquez, M.J.; Roncales, C.J.; Tigcal, R.A.; Quinto, E.; Orbecido, A.; Bungay, V.; Beltran, A.; Aviso, K. Development of optical detection for antibiotic residues: Oxytetracycline in freshwater aquaculture. MATEC Web Conf. 2019, 268, 06013. [Google Scholar] [CrossRef]
  6. Bing, W.; Ling, C.; Fiorenza, M.; Naylor, R.L.; Fringer, O.B. The effects of intensive aquaculture on nutrient residence time and transport in a coastal embayment. Environ. Fluid Mech. 2018, 18, 1321–1349. [Google Scholar]
  7. Neofitou, N.; Papadimitriou, K.; Domenikiotis, C.; Tziantziou, L.; Panagiotaki, P. GIS in environmental monitoring and assessment of fish farming impacts on nutrients of Pagasitikos Gulf, Eastern Mediterranean. Aquaculture 2019, 501, 62–75. [Google Scholar] [CrossRef]
  8. Zhang, C.; Yin, K.; Shi, X.; Yan, X. Risk assessment for typhoon storm surges using geospatial techniques for the coastal areas of Guangdong, China. Ocean Coast. Manag. 2021, 213, 105880. [Google Scholar] [CrossRef]
  9. Mmia, B.; Ab, B.; Gkk, C.; Mak, B.; Bp, B. Vulnerability of inland and coastal aquaculture to climate change: Evidence from a developing country. Aquac. Fish. 2019, 4, 183–189. [Google Scholar]
  10. Kang, J.; Sui, L.; Yang, X.; Liu, Y.; Wang, Z.; Wang, J.; Yang, F.; Liu, B.; Ma, Y. Sea Surface-Visible Aquaculture Spatial-Temporal Distribution Remote Sensing: A Case Study in Liaoning Province, China from 2000 to 2018. Sustainability 2019, 11, 7186. [Google Scholar] [CrossRef]
  11. Nurdin, S.; Mustapha, M.A.; Lihan, T.; Abd Ghaffar, M. Determination of potential fishing grounds of Rastrelliger kanagurta using satellite remote sensing and GIS technique. Sains Malays. 2015, 44, 225–232. [Google Scholar]
  12. Wang, Z.; Lu, C.; Yang, X. Exponentially sampling scale parameters for the efficient segmentation of remote-sensing images. Int. J. Remote Sens. 2018, 39, 1628–1654. [Google Scholar] [CrossRef]
  13. McCarthy, M.J.; Colna, K.E.; El-Mezayen, M.M.; Laureano-Rosario, A.E.; Méndez-Lázaro, P.; Otis, D.B.; Toro-Farmer, G.; Vega-Rodriguez, M.; Muller-Karger, F.E. Satellite remote sensing for coastal management: A review of successful applications. Environ. Manag. 2017, 60, 323–339. [Google Scholar] [CrossRef] [PubMed]
  14. Jayanthi, M. Monitoring brackishwater aquaculture development using multi-spectral satellite data and GIS- a case study near Pichavaram mangroves south-east coast of India. Indian J. Fish. 2011, 58, 85–90. [Google Scholar]
  15. Seto, K.C.; Fragkias, M. Mangrove conversion and aquaculture development in Vietnam: A remote sensing-based approach for evaluating the Ramsar Convention on Wetlands. Glob. Environ. Change 2007, 17, 486–500. [Google Scholar] [CrossRef]
  16. Chu, J.; Zhao, D.Z.; Zhang, F.S.; Wei, B.Q.; Li, C.M.; Suo, A.N. Monitor method of rafts cultivation by remote sense—A case of Changhai. Mar. Environ. Sci. 2008, 27, 6. [Google Scholar]
  17. Wang, M.; Cui, Q.; Wang, J.; Ming, D.; Lv, G. Raft cultivation area extraction from high resolution remote sensing imagery by fusing multi-scale region-line primitive association features. ISPRS J. Photogramm. Remote Sens. 2017, 123, 104–113. [Google Scholar]
  18. Lu, Y.; Li, Q.; Du, X.; Wang, H.; Liu, J. A Method of Coastal Aquaculture Area Automatic Extraction with High Spatial Resolution Images. Remote Sens. Technol. Appl. 2015, 30, 9. [Google Scholar]
  19. Wang, J.; Gao, J. Extraction of Enclosure Culture in Gehu Lake Based on Correspondence Analysis. J. Remote Sens. 2008, 12, 8. [Google Scholar]
  20. Sun, X.; Su, F.; Zhou, C.; Xue, Z. Analyses on Spatial-Temporal Changes in Aquaculture Land in Coastal Areas of the Pearl River Estuary. Resour. Sci. 2010, 32, 7. [Google Scholar]
  21. Xie, Y.; Wang, M.; Zhang, X. An Object-oriented Approach for Extracting Farm Waters within Coastal Belts. Remote Sens. Technol. Appl. 2009, 24, 68–72. [Google Scholar]
  22. Guan, X.; Zhang, C.; Jiang, J.; Cao, J. Remote Sensing monitoring of aquaculture and automatic information extraction. Remote Sens. Land Resour. 2009, 21, 41–44. [Google Scholar]
  23. Shen, Y.; Zhu, S.; Chen, C.; Du, Q.; Xiao, L.; Chen, J.; Pan, D. Efficient Deep Learning of Non-local Features for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 59, 6029–6043. [Google Scholar] [CrossRef]
  24. Cheng, B.; Liang, C.; Liu, X.; Liu, Y.; Wang, G. Research on a novel extraction method using Deep Learning based on GF-2 images for aquaculture areas. Int. J. Remote Sens. 2020, 41, 3575–3591. [Google Scholar] [CrossRef]
  25. Cui, B.; Fei, D.; Shao, G.; Lu, Y.; Chu, J. Extracting Raft Aquaculture Areas from Remote Sensing Images via an Improved U-Net with a PSE Structure. Remote Sens. 2019, 11, 2053. [Google Scholar] [CrossRef]
  26. Liu, C.; Jiang, T.; Zhang, Z.; Sui, B.; Pan, X.; Zhang, L.; Zhang, J. Extraction method of offshore mariculture area under weak signal based on multisource feature fusion. J. Mar. Sci. Eng. 2020, 8, 99. [Google Scholar] [CrossRef]
  27. Fu, Y.; Ye, Z.; Deng, J.; Zheng, X.; Huang, Y.; Yang, W.; Wang, Y.; Wang, K. Finer resolution mapping of marine aquaculture areas using worldView-2 imagery and a hierarchical cascade convolutional neural network. Remote Sens. 2019, 11, 1678. [Google Scholar] [CrossRef]
  28. Lu, Y.; Shao, W.; Sun, J. Extraction of Offshore Aquaculture Areas from Medium-Resolution Remote Sensing Images Based on Deep Learning. Remote Sens. 2021, 13, 3854. [Google Scholar] [CrossRef]
  29. Yang, X.; Liu, Q.; Yan, J.; Li, A.; Zhang, Z.; Yu, G. R3Det: Refined Single-Stage Detector with Feature Refinement for Rotating Object. Proc. AAAI Conf. Artif. Intell. 2021, 35, 3163–3171. [Google Scholar]
  30. Ma, Y.; Qu, X.; Feng, D.; Zhang, P.; Huang, H.; Zhang, Z.; Gui, F. Recognition and statistical analysis of coastal marine aquacultural cages based on R3Det single-stage detector: A case study of Fujian Province, China. Ocean Coast. Manag. 2022, 225, 106244. [Google Scholar]
  31. Raju, G.; Nair, M.S. A fast and efficient color image enhancement method based on fuzzy-logic and histogram. AEU-Int. J. Electron. Commun. 2014, 68, 237–243. [Google Scholar] [CrossRef]
  32. Paul, A.; Bhattacharya, P.; Maity, S.P. Histogram modification in adaptive bi-histogram equalization for contrast enhancement on digital images. Optik 2022, 259, 168899. [Google Scholar] [CrossRef]
  33. Kumar, R.; Bhandari, A.K. Luminosity and contrast enhancement of retinal vessel images using weighted average histogram. Biomed. Signal Process. Control 2022, 71, 103089. [Google Scholar] [CrossRef]
  34. Luo, W.; Duan, S.; Zheng, J. Underwater image restoration and enhancement based on a fusion algorithm with color balance, contrast optimization, and histogram stretching. IEEE Access 2021, 9, 31792–31804. [Google Scholar] [CrossRef]
  35. Mayathevar, K.; Veluchamy, M.; Subramani, B. Fuzzy color histogram equalization with weighted distribution for image enhancement. Optik 2020, 216, 164927. [Google Scholar] [CrossRef]
  36. Singh, H.; Kumar, A.; Balyan, L.; Singh, G.K. A novel optimally weighted framework of piecewise gamma corrected fractional order masking for satellite image enhancement. Comput. Electr. Eng. 2019, 75, 245–261. [Google Scholar] [CrossRef]
  37. Zhou, S.; Kang, R.; Ji, C.; Kaufmann, H. Heavy metal distribution, contamination and analysis of sources—Intertidal zones of Sandu Bay, Ningde, China. Mar. Pollut. Bull. 2018, 135, 1138–1144. [Google Scholar] [CrossRef]
  38. Zhang, J.; Xing, X.; Qi, S.; Tan, L.; Yang, D.; Chen, W.; Yang, J.; Xu, M. Organochlorine pesticides (OCPs) in soils of the coastal areas along Sanduao Bay and Xinghua Bay, southeast China. J. Geochem. Explor. 2013, 125, 153–158. [Google Scholar] [CrossRef]
  39. Chen, M.; Ke, Y.; Bai, J.; Li, P.; Lyu, M.; Gong, Z.; Zhou, D. Monitoring early stage invasion of exotic Spartina alterniflora using deep-learning super-resolution techniques based on multisource high-resolution satellite imagery: A case study in the Yellow River Delta, China. Int. J. Appl. Earth Obs. Geoinf. 2020, 92, 102180. [Google Scholar] [CrossRef]
  40. Zhang, M.; Lin, H. Wetland classification using parcel-level ensemble algorithm based on Gaofen-6 multispectral imagery and Sentinel-1 dataset. J. Hydrol. 2022, 606, 127462. [Google Scholar] [CrossRef]
  41. Zeng, M.; Li, Y.; Meng, Q.; Yang, T.; Liu, J. Improving histogram-based image contrast enhancement using gray-level information histogram with application to X-ray images. Optik 2012, 123, 511–520. [Google Scholar] [CrossRef]
  42. Gibson, D.; Gaydecki, P.A. The application of local grey level histograms to organelle classification in histological images. Comput. Biol. Med. 1996, 26, 329. [Google Scholar] [CrossRef]
  43. Li, L.; Ran, G.; Chen, W. Gray level image thresholding based on fisher linear projection of two-dimensional histogram. Pattern Recognit. 1997, 30, 743–749. [Google Scholar]
  44. Cheng, M.; Yuan, H.; Wang, Q.; Cai, Z.; Liu, Y.; Zhang, Y. Application of deep learning in sheep behaviors recognition and influence analysis of training data characteristics on the recognition effect. Comput. Electron. Agric. 2022, 198, 107010. [Google Scholar] [CrossRef]
  45. Zhang, Y.; Wang, J.; Yu, Z.; Zhao, S.; Bei, G. Research on Intelligent Detection of Coal Gangue Based on Deep Learning. Measurement 2022, 198, 111415. [Google Scholar] [CrossRef]
  46. Scardino, G.; Scicchitano, G.; Chirivì, M.; Costa, P.J.; Luparelli, A.; Mastronuzzi, G. Convolutional Neural Network and Optical Flow for the Assessment of Wave and Tide Parameters from Video Analysis (LEUCOTEA): An Innovative Tool for Coastal Monitoring. Remote Sens. 2022, 14, 2994. [Google Scholar] [CrossRef]
  47. Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
  48. Lin, T.Y.; Dollar, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature Pyramid Networks for Object Detection. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  49. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  50. Liu, Y.; Yang, X.; Wang, Z.; Lu, C.; Li, Z.; Yang, F. Aquaculture area extraction and vulnerability assessment in Sanduao based on richer convolutional features network model. J. Oceanol. Limnol. 2019, 37, 1941–1954. [Google Scholar] [CrossRef]
  51. Jiang, Y.; Zhu, X.; Wang, X.; Yang, S.; Li, W.; Wang, H.; Fu, P.; Luo, Z. R2CNN: Rotational Region CNN for Orientation Robust Scene Text Detection. arXiv 2017, arXiv:1706.09579. [Google Scholar] [CrossRef]
  52. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef]
  53. Garcia-Garin, O.; Monleon-Getino, A.; Brosa, P.L.; Borrell, A.; Vighi, M. Automatic detection and quantification of floating marine macro-litter in aerial images: Introducing a novel deep learning approach connected to a web application in R. Environ. Pollut. 2021, 273, 116490. [Google Scholar] [CrossRef]
  54. Wolf, M.; Berg, K.; Garaba, S.P.; Gnann, N.; Zielinski, O. Machine learning for aquatic plastic litter detection, classification and quantification (APLASTIC-Q). Environ. Res. Lett. 2020, 15, 114042. [Google Scholar] [CrossRef]
Figure 1. Location of the study area of Sansha Bay, Fujian Province.
Figure 2. Photographs of a cage (a) and a raft (b); Level 19 remote sensing images of a cage (c) and a raft (d) in Google Earth.
Figure 3. Flowchart of aquaculture zone extraction in this study.
Figure 4. Thematic map showing the marine aquaculture zones in Sansha Bay.
Figure 5. Schematic diagram of piecewise linear stretching. Here, f(x) represents the grayscale value of the original image; a and b represent the segment transformation points in the original image; g(y) represents the grayscale value after image enhancement; c and d represent the corresponding segment points after enhancement.
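The transformation in Figure 5 maps three gray-level intervals with separate linear gains, compressing the dark background and expanding the mid-tones where aquaculture targets lie. The following is a minimal NumPy sketch of such a three-segment stretch; the function name is ours, the breakpoints a, b, c, and d follow the caption's notation, and the study's actual breakpoint values are not restated here.

```python
import numpy as np

def piecewise_linear_stretch(img, a, b, c, d, g_max=255.0):
    """Three-segment linear stretch following Figure 5's notation:
    [0, a) -> [0, c), [a, b) -> [c, d), [b, g_max] -> [d, g_max].
    Assumes 8-bit input with 0 < a < b < g_max and 0 < c < d < g_max."""
    x = img.astype(np.float64)
    y = np.empty_like(x)
    low, mid, high = x < a, (x >= a) & (x < b), x >= b
    y[low] = x[low] * (c / a)
    y[mid] = (x[mid] - a) * ((d - c) / (b - a)) + c
    y[high] = (x[high] - b) * ((g_max - d) / (g_max - b)) + d
    return np.clip(y, 0.0, g_max).astype(np.uint8)

# Illustrative call; these breakpoint values are examples, not the paper's:
# stretched = piecewise_linear_stretch(band, a=40, b=180, c=20, d=230)
```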
Figure 6. Image before (left) and after (right) piecewise linear stretching.
Figure 7. Dataset division of Sansha Bay (a), where the blue patch represents the range of the training set and the red patch represents the range of the test set; an example image from the training dataset with a resolution of 800 × 800 pixels (b); building of the training set with labelme (c), where the red borders represent cages and the green borders represent rafts.
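As a point of reference for Figure 7b, regular 800 × 800 pixel training tiles can be cut from the stretched scene with a simple cropping loop. A minimal sketch, assuming the scene is held as a NumPy array and tiled on a non-overlapping grid (the paper does not state whether adjacent tiles overlap):

```python
import numpy as np

def tile_image(scene, tile=800):
    """Yield (row, col) offsets and non-overlapping tile x tile patches
    from an H x W (x C) scene array; edge remainders are dropped."""
    h, w = scene.shape[:2]
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            yield (y, x), scene[y:y + tile, x:x + tile]
```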
Figure 8. R3Det network architecture.
Figure 9. Extraction results of aquaculture zones in the test set (left) and the enlarged panel showing the extraction details (right). Green dots represent missed cages; purple dots represent misidentified cages; red dots represent missed rafts; blue dots represent misidentified rafts.
Figure 10. Resulting images of aquaculture zone extraction by R3Det under different stretching conditions. The red rectangles represent misidentified aquaculture zones, and the green rectangles represent aquaculture zones not fully included by the bounding box. (a) Square root stretching: 13 green rectangles. (b) Logarithmic stretching: 14 green rectangles. (c) Gaussian stretching: 10 green rectangles and one red rectangle. (d) Equalization stretching: seven green rectangles and one red rectangle. (e) Piecewise linear stretching (ours): six green rectangles. (f) Unstretched: 11 green rectangles and one red rectangle. (g) A misidentified result (left) and the manual annotation (right). (h) Aquaculture zones not fully included by the bounding box (left) and the manual annotation (right).
Figure 11. Comparisons of the extraction results of R3Det (left), R2CNN (middle), and RetinaNet (right). The pink rectangles represent bounding boxes that did not fit the aquaculture zones well.
Figure 12. Comparisons of extraction precision of the area of the randomly selected aquaculture zones (A–H) in Figure 9. The blue patches represent cage aquaculture zones, and the red patches represent raft aquaculture zones.
Table 1. Indicators of GF-6 satellite payload and performance.

| Sensor | Type | Spectral Range (nm) | Spatial Resolution (m) | Swath Width (km) | Revisit Period (Day) | Coverage Period (Day) |
|---|---|---|---|---|---|---|
| PMS sensor | Panchromatic | Panchromatic: 450–900 | 2 | 95 | 4 | 41 |
| PMS sensor | Multispectral | Blue: 450–520; Green: 520–590; Red: 630–690; NIR: 770–890 | 8 | 95 | 4 | 41 |
| WFV sensor | Multispectral | Blue: 450–520; Green: 520–590; Red: 630–690 | 16 | 860 | 4 | 41 |
Table 2. Average grayscale values of different types of aquaculture zones in the region shown in Figure 4 under different wavelength bands.

| Type | Value (Red) | Value (Green) | Value (Blue) |
|---|---|---|---|
| Cage | 195.76 | 157.62 | 150.18 |
| Raft | 32.40 | 22.45 | 27.39 |
| Non-aquaculture | 12 | 72 | 70 |
Table 3. Model training parameters.

| Parameter | Value |
|---|---|
| Max epoch | 10 |
| Iteration epoch | 27,000 |
| Max iteration | 270,000 |
| Batch size | 1 |
| Epsilon | 0.00005 |
| Momentum | 0.9 |
| Learning rate | 0.0005 |
| Decay weight | 0.0001 |
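For orientation, the settings in Table 3 correspond to a standard SGD-with-momentum schedule (10 epochs of 27,000 iterations at batch size 1). The dictionary below simply mirrors the table; the key names are illustrative and do not come from any particular R3Det implementation.

```python
# Hypothetical configuration mirroring Table 3; key names are ours.
TRAIN_CONFIG = {
    "max_epoch": 10,
    "iterations_per_epoch": 27_000,
    "max_iterations": 270_000,  # 10 epochs x 27,000 iterations
    "batch_size": 1,
    "epsilon": 5e-5,
    "momentum": 0.9,
    "learning_rate": 5e-4,
    "weight_decay": 1e-4,
}
```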
Table 4. Confusion matrix.

| | Actual Positive | Actual Negative |
|---|---|---|
| Predicted Positive | True positive (TP) | False positive (FP) |
| Predicted Negative | False negative (FN) | True negative (TN) |
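The metrics reported in Tables 5 and 6 follow the standard definitions over this confusion matrix: precision = TP/(TP + FP), recall = TP/(TP + FN), and the F-measure is their harmonic mean. A minimal sketch:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F-measure from confusion-matrix counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if (precision + recall) else 0.0)
    return precision, recall, f_measure
```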
Table 5. Comparisons of the extraction of cage and raft aquaculture zones by R3Det under different image stretching conditions.

| Type | Stretching Method | Precision (%) | Recall (%) | F-Measure (%) |
|---|---|---|---|---|
| Cage | Square root stretching | 97.88 | 89.52 | 93.51 |
| Cage | Logarithmic stretching | 98.57 | 85.09 | 91.33 |
| Cage | Gaussian stretching | 97.58 | 89.35 | 93.28 |
| Cage | Equalization stretching | 96.27 | 94.41 | 95.33 |
| Cage | Piecewise linear stretching | 98.79 | 95.67 | 97.21 |
| Cage | Unstretched | 98.28 | 88.16 | 92.91 |
| Raft | Square root stretching | 97.17 | 94.31 | 95.72 |
| Raft | Logarithmic stretching | 97.30 | 90.26 | 93.65 |
| Raft | Gaussian stretching | 96.66 | 96.41 | 96.53 |
| Raft | Equalization stretching | 97.13 | 98.61 | 97.86 |
| Raft | Piecewise linear stretching | 98.66 | 99.16 | 98.91 |
| Raft | Unstretched | 96.67 | 96.73 | 96.70 |
Table 6. Comparisons of extraction accuracy of aquaculture zones using different models.

| Type | Model | Precision (%) | Recall (%) | F-Measure (%) |
|---|---|---|---|---|
| Cage | R2CNN | 97.59 | 95.94 | 96.76 |
| Cage | RetinaNet | 96.97 | 95.87 | 96.42 |
| Cage | R3Det | 98.79 | 95.67 | 97.21 |
| Raft | R2CNN | 97.82 | 99.10 | 98.45 |
| Raft | RetinaNet | 96.66 | 98.84 | 97.74 |
| Raft | R3Det | 98.66 | 99.16 | 98.91 |
Table 7. Extraction precision of the R3Det model for the area of the aquaculture zone.

| ID | Type | Vectorized (Hectare) | R3Det (Hectare) | Precision (%) | Type | Vectorized (Hectare) | R3Det (Hectare) | Precision (%) |
|---|---|---|---|---|---|---|---|---|
| A | Cage | 7.04 | 7.83 | 88.75 | Raft | 74.24 | 66.58 | 89.68 |
| B | Cage | 0.00 | 0.00 | - | Raft | 168.34 | 188.41 | 88.08 |
| C | Cage | 8.98 | 11.53 | 71.60 | Raft | 173.51 | 188.76 | 91.21 |
| D | Cage | 53.30 | 62.38 | 82.97 | Raft | 110.86 | 120.95 | 90.90 |
| E | Cage | 28.80 | 33.83 | 82.53 | Raft | 150.40 | 166.21 | 89.49 |
| F | Cage | 71.31 | 72.06 | 98.95 | Raft | 8.13 | 8.28 | 98.12 |
| G | Cage | 47.97 | 50.59 | 94.53 | Raft | 0.62 | 2.41 | −189.67 |
| H | Cage | 125.49 | 130.45 | 96.05 | Raft | 1.76 | 2.10 | 80.68 |
| A–H | Cage | 342.89 | 368.68 | 92.48 | Raft | 687.85 | 743.70 | 91.88 |
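The per-zone precision values in Table 7 are consistent, up to rounding, with defining area precision as one minus the relative error against the vectorized reference; under that reading, the negative value for zone G's rafts reflects an extracted area nearly four times the reference. A sketch under this assumption (ours, not an equation stated in the table):

```python
def area_precision(vectorized_ha, extracted_ha):
    """Assumed definition: 100 * (1 - |extracted - vectorized| / vectorized).
    Matches Table 7 to rounding; negative when relative error exceeds 100%."""
    return 100.0 * (1.0 - abs(extracted_ha - vectorized_ha) / vectorized_ha)

# Example (zone C, cage): area_precision(8.98, 11.53) -> 71.60
```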
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
