Article

A Study on the Extraction of Satellite Image Information for Two Types of Coastal Fishery Facility Fish Cages and Rafts Influenced by Clouds and Vessels

by
Ao Chen
1,
Jialu Yu
2,
Junbo Zhang
1,3,*,
Gangyi Yu
1 and
Rong Wan
1,3,*
1
College of Marine Living Resource Sciences and Management, Shanghai Ocean University, Shanghai 201306, China
2
Shanghai Fisheries Research Institute, Shanghai Fisheries Technical Extension Station, Shanghai 200433, China
3
National Engineering Research Center for Oceanic Fisheries, College of Marine Sciences, Shanghai Ocean University, Shanghai 201306, China
*
Authors to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2024, 12(12), 2280; https://doi.org/10.3390/jmse12122280
Submission received: 15 October 2024 / Revised: 1 December 2024 / Accepted: 5 December 2024 / Published: 11 December 2024
(This article belongs to the Section Marine Aquaculture)

Abstract

Research on the extraction of satellite information for the areas of coastal fish cages and rafts is important to quickly grasp the pattern and structure of the coastal fishery aquaculture industry. This study proposes a multi-feature and rule-based object-oriented image classification (MROIC) model, integrating spatial–spectral enhancement techniques with object-based image analysis classification methods. The MROIC model enhances spectral information by constructing ratio bands alongside principal component analysis, subsequently employing rule sets, edge detection algorithms, and comprehensive algorithmic merging techniques. It is applicable to satellite image classification tasks in complex environments, including those influenced by clouds and vessels. As a case study, information on fish cage and raft facilities was extracted via the MROIC model along the southwest coast of Xiapu County, Fujian Province. The results showed that the MROIC model attained an average total classification accuracy of 90.43% and a Kappa coefficient of 0.80. Extracting the area of fishery facilities under the influence of clouds and vessels achieved better extraction accuracy and lower omission error. The MROIC model proposed in this study demonstrates high extraction accuracy and strong applicability, offering technical support for government planning of fishery facility areas and aiding the risk assessment and management efficiency of fishery facility insurance.

1. Introduction

The rapid development of the marine aquaculture sector has produced substantial economic advantages [1], and many fishery facilities are extensively located in coastal seas. Nevertheless, effectively surveying and monitoring the vast area of fishery facilities presents significant challenges [2]. Advances in satellite technology have provided imagery with high spatial resolution, extensive coverage, and brief revisit intervals, which has increasingly become a vital data source for acquiring information on fishery facilities, alongside the ongoing development of related extraction methodologies [3]. Object-based image analysis (OBIA) is frequently employed to extract the area of fishery facilities [4], emphasizing the structure, texture, and correlation between neighboring elements within a category. This approach mitigates the issue of “noise” by shifting the analytical focus from individual pixels to objects, incorporating their spatial and semantic information [5]. However, OBIA is susceptible to “sticking” when extracting feature connections, resulting in significantly reduced extraction accuracy in regions heavily influenced by environmental factors [6]. Xu et al. [7] pioneered improvements to the OBIA approach by using case-based reasoning (CBR) to categorize multi-scale data, effectively delineating areas of coastal pond facilities. Zheng et al. [8] used the construction of ratio bands to facilitate the automated extraction of vast raft facility areas. Zhong et al. [9] further utilized the spectral, textural, and geometric features of the raft facility area, expanded the feature information between extracted targets by introducing a multi-feature technique, and constructed the multi-feature based image classification (MBIC) model, which alleviated the phenomena of “spectral heterogeneity” and “sticking”. Han et al. [10] incorporated the recursive feature elimination approach into OBIA and developed a rule-based object-oriented image classification (ROIC) model, enhancing the area extraction efficacy of raft facilities. However, the interference of cloud cover and vessels has not been considered in existing models [9,10], resulting in limited accuracy of area extraction for raft and fish cage facilities under such complex environmental conditions. It is therefore essential to investigate widely applicable models with higher accuracy for extracting the area of fishery facilities.
A multi-feature and rule-based object-oriented image classification (MROIC) model is developed based on the study of Zhong et al. [9] and Cheng et al. [11] by integrating multi-source feature techniques with rule-based object-oriented classification methods, focusing on the area of raft and fish cage facilities. To reduce the impact of clouds and vessels, our model introduces spatial–spectral enhancement (SSE) and incorporates rectangularity into the OBIA rules. The accuracy of the model in extracting remote sensing data under the influence of complex environments such as clouds and vessels was thoroughly analyzed in order to assist local governments in swiftly understanding the allocation of fishery facility zones for the rational planning and management of coastal fishery infrastructures, while simultaneously addressing the challenges of acquiring precise data for the fishery underwriting and claims system and enhancing the efficiency of risk assessment and management in fishery insurance.

2. Materials and Methods

2.1. Study Area

Sansha Bay is located in the southeastern part of Fujian Province, China. The main fishery facilities in the bay are fish cages and rafts, which are widely distributed and numerous, especially in the estuary region [12], where the extraction of area information is difficult. Therefore, an area close to the estuary region of Sansha Bay was selected as the study area (Figure 1). Coastal fish cages in Sansha Bay typically consist of multiple 3 × 3 m squares arranged in a rectangular configuration, while raft facilities often appear as elongated rectangles of varying dimensions [13].

2.2. Data Sources

The satellite data are from GaoFen-2 (GF-2), which has a spatial resolution of 3.2 m. The latitude–longitude at the center of the image was 119°52′17.04″ E, 26°43′50.88″ N, and the image was acquired on 30 March 2019 at 3:15:34 p.m. Three regions (affected by clouds, affected by vessels, and free of cover) were selected for extracting the areas of raft and fish cage facilities (Figure 2). Regions A, B, and C each measure 500 × 500 pixels. The image preprocessing steps involved radiometric calibration followed by atmospheric correction, using The Environment for Visualizing Images (ENVI) 6.0 with a custom gain and offset algorithm for radiometric calibration and the line-by-line (LBL) algorithm for atmospheric correction [14].
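As a minimal sketch of the radiometric calibration step described above, raw digital numbers (DN) can be converted to at-sensor radiance with a linear gain and offset. The coefficients below are illustrative placeholders, not the actual GF-2 calibration values, which come from the sensor metadata:

```python
import numpy as np

def radiometric_calibration(dn, gain, offset):
    # Linear calibration: radiance = gain * DN + offset, applied per band
    return gain * np.asarray(dn, dtype=float) + offset

# Hypothetical gain/offset for illustration only
radiance = radiometric_calibration([120, 200], gain=0.063, offset=0.0)
```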

2.3. The MROIC Model

In this study, the MROIC model was proposed to extract information about the fish cage and raft facility areas in satellite images. The model was composed of SSE and OBIA classification techniques [9].

2.3.1. SSE Technique

The SSE technique minimizes data redundancy among bands by improving spectral information related to fishery facilities, thereby enhancing the sharpness, contrast, and visibility of both the facilities and the adjacent seawater. SSE can significantly improve the distinction in surface reflectance between fishery facilities and other features in satellite images [15]. In the MROIC model, the SSE technique mainly includes ratio index (RI) construction, Bhattacharyya distance (BD) screening, and principal component analysis (PCA) downscaling [16,17,18].
RI performs band ratios on the multispectral data: it first identifies the strongest and weakest reflection bands in the target area and then conducts a ratio operation to widen the gap between them, diminishing the luminance of the background seawater [19]. Six bands (R1–R6) were constructed in this study to improve the extraction accuracy of fish cages and rafts. The green band is the most effective for distinguishing the area of facilities, followed by the red band, while the near-infrared (NIR) band is utilized to highlight water and facilities [20]. R1, R2, and R5 were constructed to enhance facility and water differentiation. R3 represents the Normalized Difference Aquaculture Index (NDAI) [20], R4 denotes the Normalized Difference Water Index (NDWI) [21], and R6 corresponds to the Suspended Sediment Differential Index (SSDI) [22]. The equations for these indices are as follows:
R_1 = \frac{G}{NIR}, \quad (1)

R_2 = \frac{R}{NIR}, \quad (2)

R_3 = \frac{NIR - R}{NIR + R}, \quad (3)

R_4 = \frac{NIR - G}{NIR + G}, \quad (4)

R_5 = \frac{R - G}{R + G}, \quad (5)

R_6 = 0.028 \times G + 0.019 \times R - 5.13 \times \frac{R}{G} + 0.537. \quad (6)
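A minimal sketch of the ratio-index construction, assuming the band arrays hold surface reflectance; the small epsilon guard against division by zero is our addition, and the R6 coefficients follow the SSDI definition as given in the text:

```python
import numpy as np

def ratio_indexes(G, R, NIR, eps=1e-10):
    """Compute the six ratio bands R1-R6 from the green, red, and NIR bands."""
    R1 = G / (NIR + eps)
    R2 = R / (NIR + eps)
    R3 = (NIR - R) / (NIR + R + eps)   # NDAI
    R4 = (NIR - G) / (NIR + G + eps)   # NDWI
    R5 = (R - G) / (R + G + eps)
    R6 = 0.028 * G + 0.019 * R - 5.13 * (R / (G + eps)) + 0.537  # SSDI
    return R1, R2, R3, R4, R5, R6
```

Each function argument is a NumPy array (or scalar) of the corresponding band, so the indices are computed per pixel.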
The classification performance of the constructed bands was evaluated through BD (Equation (7)). A higher BD value indicates a greater capacity of the fishery facility areas to be differentiated from seawater.
BD = \frac{1}{4} \times \frac{(\mu_1 - \mu_2)^2}{\sigma_1^2 + \sigma_2^2} + \frac{1}{2} \log \left( \frac{\sigma_1^2 + \sigma_2^2}{2 \sigma_1 \sigma_2} \right), \quad (7)
where μ is the mean of the gray scale value (GSV) of two neighboring features, and σ represents the standard deviation within each feature. In the context of a fish cage facility, μ1 and μ2 denote the mean GSVs of the fish cage area and the surrounding seawater, respectively, and σ1 and σ2 denote the corresponding standard deviations [17].
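As a sketch, the Bhattacharyya distance of Equation (7) can be computed directly from the class means and standard deviations (the natural logarithm is assumed here):

```python
import numpy as np

def bhattacharyya_distance(mu1, sigma1, mu2, sigma2):
    # First term measures separation of the means, second term divergence of the spreads
    term1 = 0.25 * (mu1 - mu2) ** 2 / (sigma1 ** 2 + sigma2 ** 2)
    term2 = 0.5 * np.log((sigma1 ** 2 + sigma2 ** 2) / (2 * sigma1 * sigma2))
    return term1 + term2
```

Two identical distributions give BD = 0; larger values mean the facility area and seawater samples are easier to separate.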
To enhance classification, the three bands exhibiting the highest BD values were integrated into a band set, which underwent dimensionality reduction and de-noising via the covariance matrix [23]. The first principal component (PC1) band retained over 80% of the original band information following dimensionality reduction [18] and served as the input data for the OBIA classification technique.
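The PCA step can be sketched with NumPy alone: stack the three selected bands as columns, diagonalize the covariance matrix, and keep the component with the largest eigenvalue. The function name and structure here are our own illustration:

```python
import numpy as np

def first_principal_component(band_stack):
    """Project a (pixels, bands) matrix onto its first principal component (PC1).
    Returns the PC1 scores and the fraction of variance PC1 explains."""
    X = band_stack - band_stack.mean(axis=0)        # center each band
    cov = np.cov(X, rowvar=False)                   # bands-by-bands covariance
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    pc1_axis = eigvecs[:, -1]                       # eigenvector of largest eigenvalue
    explained = eigvals[-1] / eigvals.sum()
    return X @ pc1_axis, explained
```

In the model, `explained` would be checked against the 80% information-retention criterion before PC1 is passed to the OBIA stage.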

2.3.2. OBIA Classification Technique

The OBIA classification technique in the MROIC model consisted of three main components: rule sets, image segmentation, and image composition [24].
The rule set was built mainly on spectral type, choosing the spectral mean, area, elongation, and rectangular fit. Image segmentation involves partitioning an image into multiple non-overlapping regions. This study employed the segmentation based on edge detection (SBED) algorithm [25] to segment images of fishery facilities characterized by distinct outlines. Segmentation was achieved by identifying points of typical discontinuities at the edges of fish cage and raft facilities using the enhanced Canny operator [26]. The full lambda-schedule (FLS) algorithm was used to combine nearby small patches. Suppose two neighboring regions i and j are merged into a new region k, and t_ij is the merge value; when t_ij is greater than the defined threshold λ_stop, the two regions stop merging, as shown in Equations (8)–(11) [24].
t_{ij} = \frac{1}{L} \times \frac{|O_i| \times |O_j|}{|O_i| + |O_j|} \times \| U_i - U_j \|^2, \quad (8)

L = \mathrm{length}\left( \partial(O_i, O_j) \right), \quad (9)

U_i = \frac{1}{N_i} \sum_{k=1}^{N_i} P_{ik}, \quad (10)

U_j = \frac{1}{N_j} \sum_{k=1}^{N_j} P_{jk}, \quad (11)
where O_i is image region i, |O_i| is the area of the region, U_i is its mean spectral value, ‖U_i − U_j‖² is the Euclidean distance between the region spectra, L is the length of the shared boundary between O_i and O_j, N_i and N_j are the numbers of pixels in regions i and j, and P_ik and P_jk are the values of each pixel within regions i and j.
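A sketch of the merge criterion in Equation (8), with the region areas, mean spectral vectors, and shared-boundary length supplied by the segmentation step (the function name is ours):

```python
import numpy as np

def fls_merge_value(area_i, area_j, mean_i, mean_j, shared_boundary_len):
    """Merge cost t_ij of the full lambda-schedule criterion; two regions keep
    merging while t_ij stays below the stopping threshold lambda_stop."""
    weight = (area_i * area_j) / (area_i + area_j)                  # area weighting
    diff = np.asarray(mean_i, float) - np.asarray(mean_j, float)
    spectral_dist = np.sum(diff ** 2)                               # squared Euclidean distance
    return weight * spectral_dist / shared_boundary_len
```

Large, spectrally dissimilar regions with a short shared boundary yield a high t_ij and therefore stop merging early.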

2.4. Control Models and Parameter Settings

The control models were the MBIC model and the ROIC model [27]. MBIC employs a threshold segmentation algorithm based on SSE to classify images, whereas ROIC directly applies OBIA to classify the preprocessed images [13]. The ROIC model used parameter settings identical to those of the MROIC model for the segmentation and merger thresholds and the rule sets. The parameters for Edge and FLS were 70 and 15, respectively. The elongation value ranged from 0.5 to 1. All values presented are unitless proportional values. Table 1 presents the spectral parameters.

2.5. Extraction Process and Accuracy Assessment

2.5.1. Extraction Process

Radiometric calibration and atmospheric correction are necessary following the acquisition of satellite images to mitigate errors induced by atmospheric conditions, Earth’s rotation, and other influencing factors [28]. The final output generated was surface reflectance. The preprocessed satellite images were imported into the MROIC model for extraction (Figure 3). During image classification, some small spurious patches inevitably occurred. The similarity between patches was calculated using the K-Means algorithm, which clustered patches with high similarity and removed the smaller patches [28]. RI, BD, PCA, SBED, and FLS were applied sequentially, each output serving as the input for the next step. When land–sea separation is required, masks are used to remove land areas.
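The small-patch cleanup can be sketched with a minimal K-Means over per-patch feature vectors (e.g., mean reflectance and area); this is an illustrative NumPy implementation, not the exact clustering used in the study:

```python
import numpy as np

def cluster_patches(features, k=2, iters=10, seed=0):
    """Minimal K-Means over per-patch feature vectors. Patches sharing a cluster
    are treated as similar, so undersized clusters can be discarded afterwards."""
    rng = np.random.default_rng(seed)
    feats = np.asarray(features, float)
    centers = feats[rng.choice(len(feats), k, replace=False)]   # random initial centers
    for _ in range(iters):
        # Assign each patch to its nearest center, then recompute the centers
        dists = ((feats[:, None] - centers[None]) ** 2).sum(-1)
        labels = np.argmin(dists, axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = feats[labels == c].mean(axis=0)
    return labels
```

For well-separated patch groups, any initialization converges to the same partition, which is all the cleanup step needs.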

2.5.2. Accuracy Assessment

The confusion matrix (CM) method was used to compare the number of the areas of real-life facility and pixels. The assessment indexes of accuracy assessment mainly included overall accuracy (OA), Kappa coefficient, producer’s accuracy (PA), user’s accuracy (UA), misclassification error (ME), and omission error (OE) [29], with the corresponding equations presented below.
OA = \frac{1}{N} \sum_{i=1}^{m} P_{ii}, \quad (12)

Kappa = \frac{N \sum_{i=1}^{m} P_{ii} - \sum_{i=1}^{m} (P_{i+} P_{+i})}{N^2 - \sum_{i=1}^{m} (P_{i+} P_{+i})}, \quad (13)

PA = \frac{P_{ii}}{P_{+i}}, \quad (14)

UA = \frac{P_{ii}}{P_{i+}}, \quad (15)

ME = \frac{P_{i+} - P_{ii}}{P_{i+}} = 100\% - UA, \quad (16)

OE = \frac{P_{+i} - P_{ii}}{P_{+i}} = 100\% - PA, \quad (17)
where N is the total number of samples, m is the number of feature types, P i j is the total number of pixels in row i and column j of the CM (the CM involves only P i i on the diagonal), and P i + and P + i are the total number of pixels in row i and column i of the CM, respectively.
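The accuracy indexes can be computed from the confusion matrix in a few lines; rows are taken here as classified classes and columns as reference classes, and ME and OE follow directly as 1 − UA and 1 − PA:

```python
import numpy as np

def accuracy_metrics(cm):
    """Overall accuracy, Kappa coefficient, producer's and user's accuracy
    from a square confusion matrix."""
    cm = np.asarray(cm, float)
    N = cm.sum()                      # total number of samples
    diag = np.diag(cm)                # correctly classified pixels P_ii
    row = cm.sum(axis=1)              # P_i+ (classified totals)
    col = cm.sum(axis=0)              # P_+i (reference totals)
    oa = diag.sum() / N
    chance = (row * col).sum()
    kappa = (N * diag.sum() - chance) / (N ** 2 - chance)
    pa = diag / col                   # producer's accuracy per class
    ua = diag / row                   # user's accuracy per class
    return oa, kappa, pa, ua
```

For a toy two-class matrix [[40, 10], [10, 40]] this yields OA = 0.8 and Kappa = 0.6.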

3. Results

3.1. Preprocessing Results

Figure 4 illustrates the image after preprocessing and 2% linear stretching. After pre-processing, the remote sensing image exhibits enhanced saturation and clarity, with finer textures and sharper edges in the area of the fisheries facility.

3.2. SSE Results

As shown in Figure 5, the three areas in the red and NIR bands have high color highlights in the area of the fish cage facility, with an overall uneven tone, and the area of raft facility is similar in color to the water column, with weaker discrimination. The color gradients of the distinct features in the green and blue bands are more pronounced, showing greater feature differentiation in region A, with moving ships and waves clearly identified in region B, and the least difference seen in region C. The color of the water body is more luminous in the R1 and R2 band images, with the boundary of mud and sand clearly visible. The color of the area of raft facility in region A closely resembles that of the water, while the moving boat in region B exhibits a color more akin to that of the raft facility. Additionally, the clouds in region C influence the pixel points surrounding the raft facility, resulting in a color similarity between the facility and the water body. The R4, R5, and R6 band images exhibit an increased separation between the water body and the adjacent fishery facility. Region A has the most distinct layers between the fish cage facility and the water body, region B displays a similar color of the moving vessel to the fish cage facility and can be better distinguished from the surrounding water, and region C has a “sticking” of the area of raft facility at the boundaries due to the influence of clouds in the R4 and R5 bands.
A total of 100 random pixel samples from the raft facility, fish cage, and seawater areas in each image band were utilized to calculate the mean, standard deviation, and BD values, which are presented in Table 2. The bands for the raft facility area, listed in descending order of BD value, are BandR1, Bandred, BandR5, BandR4, Bandgreen, BandR6, BandR2, BandR3, BandNIR, and Bandblue. For the fish cage facility area, the order is BandR1, BandR5, BandR4, BandR2, BandNIR, Bandblue, BandR3, BandR6, Bandred, and Bandgreen. BandR1, BandR4, and BandR5 exhibit the highest distinguishing capability overall, thereby constituting a new band set.
The image of the area of fishery facility obtained after downscaling the new band set using PCA (Figure 6) reveals that the downscaled image exhibits more vivid colors, enhancing the distinction between the areas of fish cage and raft facilities and the water body.

3.3. Classification Results

Figure 7 presents the classification extraction results of the three models across the three regions. All three models produced a relatively clear and complete raft frame for the raft facility area. In region A, the raft facility area extracted by the MROIC model has a clear outline with minimal “adhesion” to the seawater and fish cages, whereas the MBIC model incorrectly extracted some of the water within the fish cage facility area as raft facility area, and the raft facility area correctly extracted by the ROIC model is slightly smaller than that of the MROIC model. In the vessel-affected region B, the MROIC model extracts a more comprehensive raft facility area with reduced overall “sticking”. The ROIC model misclassified vessels in the cage facility area, and the MROIC and MBIC models misclassified vessels in the raft facility area, with the MROIC model misclassifying the smallest area of the three. In the cloud-affected region C, the MROIC model identified the raft facility areas with integrity comparable to the ROIC model, exhibiting less “sticking” than the MBIC model. Additionally, water bodies were occasionally classified incorrectly as raft facility area in regions with dense arrangements of fishery facilities.
Both the MROIC and MBIC models extracted a more complete and well-defined area of fish cage facility. In regions A and B, the MROIC model extracted the most intact areas of the fish cage facility, while the MBIC model revealed a minor degree of fragmentation within these areas. Conversely, the ROIC model indicated a significant extent of missed extraction. In region C, the MBIC model identified overlapping areas of the fish cage facility and the raft facility, leading to confusion between the two, and the ROIC model exhibited significant areas of missed extractions and confusion with the water body.

3.4. Accuracy Assessment Results

The extraction accuracy, area extracted, and number of extractions for the areas of fish cage and raft facilities by different models are shown in Table 3, with completeness greater than 80% counting as one area of fish cage or raft facility. For the fish cage facility, the MROIC model achieved an average extraction accuracy of 92.2% and an average number of extractions of 51 in the three regions, surpassing the performance of alternative methods. For the raft facility, the MROIC model achieved an average extraction accuracy of 89.7% and an average number of extractions of 93 for the three areas, achieving better extraction results.
The misclassification and omission errors for the cage and raft areas are shown in Table 4, where the average misclassification errors for the MROIC model were 1.10% and 0.81%, and the average omission errors were 7.82% and 10.27%, respectively. The MBIC model had a mean misclassification error of 0.50% in the fish cage facility area and of 4.15% in the raft facility area. The ROIC model exhibits greater average misclassification and omission errors than the other two models. Overall, the MROIC model produced fewer misclassifications and omissions.
The classification accuracies and Kappa coefficients for the three methods across the three regions are detailed in Table 5. The MROIC model achieves an average overall classification accuracy of 90.43%, surpassing the accuracy of both the MBIC model and the ROIC model. The average Kappa coefficient for the ROIC model is 0.57, while the MROIC model exhibits a higher value of 0.80. The MROIC model demonstrates superior average overall classification accuracy and Kappa coefficient.

4. Discussion

The results from RI indicate that the R1–R6 bands improved color saturation, target divergence, and image layering. In region A, the green band demonstrated superior performance in differentiating fishery facility areas [19], which is consistent with the study of Ma et al. [20]. The R4 band, derived from the green and NIR bands (the latter being almost fully absorbed at the water surface), was most effective in differentiating fishery facility areas from other features (Figure 5b,h). The R5 band ratioed the green and red bands, the green band being the strongest for distinguishing fishery facility areas and the red band the weakest. The R6 band weakened the effect of suspended sediment on the results and improved the classification (Figure 5i,j), which aligns with the results of Wu et al. [22]. The materials used in the construction of the boats included steel, iron, and other substances comparable to those found in the fish cage facilities of region B [3,30]. Consequently, the spectral data of the boats in the RI results exhibited similarities to those of the fish cage facility. Clouds exhibit elevated luminance values in the image, resulting in a bright white appearance that influences the texture of adjacent objects. The original bands (R, G, B, and NIR) in region C were the most significantly affected by cloud cover. RI can leverage the reflectivity variations across different bands to mitigate the impacts of atmospheric scattering and cloud cover [19].
The comparison of the RI and BD results indicated that all data in the table closely corresponded to the resulting images. In the statistics concerning the raft facility areas, the BD value of Bandred was notably high, ranking just below BandR1. However, the BD ranking shows that BandR1, BandR5, and BandR4 all appear among the top-ranked bands for both facility types; Bandred was therefore not considered in the construction of the band set. The combined use of RI and BD to achieve better results is an innovation of our study. Both our study and the research by Jiang et al. [31] have demonstrated that RI and BD can enhance spectral features and make the fish cages and rafts in the image more distinguishable for classification. In our study, RI improved the overall classification accuracy by approximately 6%, BD by about 3%, and PCA by around 4%. The area and number of samples will be expanded in future studies to further validate the effectiveness of Bandred in identifying fishery facility areas.
In the classification of raft facility areas, the ROIC and MBIC models extracted regions B and C, in which seawater pixels are prone to misclassification between adjacent facilities, causing raft facility areas to stick to each other. This phenomenon can be attributed to the mixing of seawater pixels, whereby individual pixels contain information from both the water body and the raft facilities [27], which reduces classification accuracy. The MROIC model facilitates improved differentiation between target objects and the seawater background by employing image enhancement techniques and integrating feature information, such as the geometric characteristics of raft facilities. The classification of fish cage facility areas using the MROIC model produced results that closely resembled the real images. The delineation of the fish cage facility areas was clear, whereas the results of the other two models appeared disjointed. A possible explanation is the predominance of wooden fish cages in coastal waters, where prolonged immersion can lead to homogenization, causing the surface reflectance of wood to approximate that of the surrounding water and thereby affecting the continuity of the extraction. The results also illustrate the importance of spectral and geometric features to the final model. Spectral bands help to identify the features of different facilities, while geometric information improves the accuracy of distinguishing these facilities in complex environments.
In complex environments, clouds can significantly increase the prevalence of mixed pixels in fishery facility areas due to the absence of distinct physical boundaries [32,33]. The MROIC model improves the identification of fishery facility areas and mitigates cloud interference by implementing a set of rules for geometric features, which, like texture features, are frequently utilized for recognition. Nonetheless, the stochastic characteristics of cloud textures constrain the efficacy of texture-based approaches, such as the gray level co-occurrence matrix (GLCM), in feature extraction from cloud-affected regions [34]. The spectral information of vessels exhibited significant differences compared with the areas of fish cage and raft facilities. The MROIC model delineates the distinctions between boats and facilities by improving spectral information, thereby reducing the misclassification of vessels as fishery facilities. Although the MBIC model was more accurate than the MROIC model in extracting raft facility areas, it also had the highest commission error among the three models and lower accuracy than the MROIC model in extracting fish cage facility areas. This may be because the MBIC model relies only on spectral information for classification, while the phenomenon of “different spectral characteristics for the same object” is evident in the fish cage areas, leading to a poor result. The MROIC model integrates spectral and geometric features [35,36], enhancing its capability to handle complex environments and improving extraction in marine regions affected by clouds and vessels.

5. Conclusions

This study presents an improved MROIC model utilizing the SSE technique and the OBIA classification method. We utilized OBIA instead of conventional pixel-based classification methods. In contrast to pixel-based methods, OBIA combines spectral and spatial features, including shape, texture, and context, to delineate objects more accurately based on their geometric properties and spatial relationships. This approach is especially beneficial in complex environments, such as aquaculture areas, where shape plays a crucial role and pixel-based methods may encounter difficulties. In addition to using spectral indices (R1–R6) and statistical methods (BD and PCA) for input features, we also incorporated shape characteristics. Spectral indices help differentiate various fishery facilities, while statistical methods like BD and PCA reduce dimensionality and highlight key features, improving segmentation and classification accuracy. Our approach improves classification robustness in complex environments by integrating spectral, statistical, and OBIA methods, addressing the limitations of spectral information alone. The MROIC model utilized GF-2 satellite images as a data source to extract information on the fish cage and raft facility areas in the estuary region of Sansha Bay, Fujian Province, China. The results show that the enhanced model maintains high extraction accuracy in complex environments, achieving an average overall classification accuracy of 90.43% and a Kappa coefficient of 0.80. The errors of commission and omission were generally lower than those of the other models. The MROIC model is object-based, leveraging the integration of spectral, shape, and spatial information to overcome the limitations of pixel-based methods. This study effectively extracts and monitors coastal fishery facility areas, offering management departments a tool for information extraction in the management of fishery facilities that is adaptable to complex environments.
Future research will focus on substituting expert evaluation in the construction of the rule set with association rule learning from data mining. This approach aims to minimize the non-objectivity resulting from manual intervention and enhance the algorithm’s automation. Furthermore, the integration of texture feature techniques with the MROIC model may be essential to improve the accuracy of fish cage and raft extraction in complex environments.

Author Contributions

Conceptualization and supervision, R.W. and J.Z.; methodology, data curation, formal analysis, and visualization, A.C., G.Y. and J.Y.; writing—original draft preparation, A.C. and J.Y.; writing—review and editing, R.W. and J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by National Key Research and Development Program of China (Grant No. 2019YFC0312104).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

We would like to thank Ping Qing and Hongtao Fu from the East China Sea Branch, Ministry of Natural Resources, for their help in data collection.

Conflicts of Interest

The authors declare no conflicts of interest.

References

Figure 1. Study area.
Figure 2. Study area: (A) unaffected; (B) affected by vessels; (C) affected by clouds.
Figure 3. The flowchart of the MROIC model.
Figure 4. Comparison of images before (a) and after (b) preprocessing.
Figure 5. RI images: (a) red band; (b) green band; (c) blue band; (d) NIR band; (e–j) R1–R6 bands, respectively.
Figure 6. Downscaled image of the fishery facilities.
Figure 7. Extraction results.
Table 1. The spectral parameters of fishery facilities (MinMaxScaler range [−1, 1]).

| Facility  | Region A | Region B | Region C  |
|-----------|----------|----------|-----------|
| Raft      | 0~0.8    | −0.8~0   | −0.35~0   |
| Fish cage | <0       | >0       | >0        |
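Table 1's header notes that the spectral parameters were min–max scaled to the range [−1, 1]. A minimal sketch of that normalization (the function name and inputs are illustrative, not taken from the paper):

```python
import numpy as np

def minmax_scale(band, lo=-1.0, hi=1.0):
    """Rescale a raster band linearly to the [lo, hi] range."""
    b = np.asarray(band, dtype=float)
    return lo + (b - b.min()) * (hi - lo) / (b.max() - b.min())
```

Applied per band, this maps the minimum pixel value to −1 and the maximum to 1, which makes the per-region thresholds in Table 1 comparable across bands.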
Table 2. The BD value statistics.

| Facility  | Band_blue | Band_green | Band_red | Band_NIR | Band_R1 | Band_R2 | Band_R3 | Band_R4 | Band_R5 | Band_R6 |
|-----------|-----------|------------|----------|----------|---------|---------|---------|---------|---------|---------|
| Raft      | 0.06      | 0.81       | 1.24     | 0.13     | 1.90    | 0.45    | 0.32    | 0.98    | 1.12    | 0.79    |
| Fish cage | 1.82      | 0.06       | 0.40     | 2.44     | 10.92   | 3.24    | 1.50    | 3.64    | 6.95    | 0.99    |
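If the BD in Table 2 denotes a Bhattacharyya-type separability distance, as is common when ranking bands by class separability, it can be sketched for two samples of band values under a univariate Gaussian assumption (the paper's exact formulation is not given in this excerpt):

```python
import numpy as np

def bhattacharyya_distance(x, y):
    """Bhattacharyya distance between two samples, assuming each
    class's band values are approximately Gaussian-distributed."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    m1, m2 = x.mean(), y.mean()
    v1, v2 = x.var(), y.var()
    # Mean-separation term plus a variance-mismatch term.
    return (0.25 * (m1 - m2) ** 2 / (v1 + v2)
            + 0.5 * np.log((v1 + v2) / (2.0 * np.sqrt(v1 * v2))))
```

Larger values indicate better class separability in that band, consistent with Band_R1's large fish-cage value in Table 2 marking it as highly discriminative.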
Table 3. The extraction accuracy, area extracted, and number of extractions for the areas of fish cage and raft facilities by different models.

| Region   | Method | Producer Accuracy, Fish Cage (%) | Producer Accuracy, Raft (%) | Area, Fish Cage (m²) | Area, Raft (m²) | Number, Fish Cage | Number, Raft |
|----------|--------|----------------------------------|-----------------------------|----------------------|-----------------|-------------------|--------------|
| Region A | MROIC  | 95.2 | 90.7 | 42,483 | 87,999  | 65 | 83  |
| Region A | MBIC   | 84.7 | 92.9 | 37,827 | 90,132  | 58 | 86  |
| Region A | ROIC   | 62.8 | 89.4 | 28,053 | 86,745  | 44 | 82  |
| Region B | MROIC  | 89.6 | 89.0 | 40,049 | 77,901  | 44 | 92  |
| Region B | MBIC   | 58.1 | 97.8 | 32,096 | 83,970  | 30 | 100 |
| Region B | ROIC   | 57.0 | 83.0 | 31,739 | 77,880  | 29 | 86  |
| Region C | MROIC  | 91.8 | 89.5 | 34,930 | 100,719 | 46 | 106 |
| Region C | MBIC   | 63.5 | 96.9 | 22,738 | 103,353 | 31 | 110 |
| Region C | ROIC   | 51.3 | 86.9 | 19,602 | 97,837  | 26 | 103 |
Table 4. The commission and omission errors for the area of raft and fish cage facilities.

| Region   | Method | Commission Error, Fish Cage (%) | Commission Error, Raft (%) | Omission Error, Fish Cage (%) | Omission Error, Raft (%) |
|----------|--------|---------------------------------|----------------------------|-------------------------------|--------------------------|
| Region A | MROIC  | 1.95 | 0.74  | 4.85  | 9.28  |
| Region A | MBIC   | 0.37 | 4.88  | 15.33 | 7.13  |
| Region A | ROIC   | 0.16 | 0.24  | 37.21 | 10.59 |
| Region B | MROIC  | 0.58 | 0.97  | 10.38 | 11.02 |
| Region B | MBIC   | 0.72 | 20.43 | 41.92 | 2.21  |
| Region B | ROIC   | 2.52 | 2.49  | 42.98 | 16.98 |
| Region C | MROIC  | 0.78 | 0.71  | 8.24  | 10.50 |
| Region C | MBIC   | 0.42 | 12.69 | 36.48 | 3.11  |
| Region C | ROIC   | 7.08 | 0.85  | 48.70 | 13.13 |
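The commission and omission errors in Table 4 are the standard accuracy-assessment ratios. A minimal sketch under the usual definitions (the counts passed in are hypothetical, not the paper's data):

```python
def extraction_errors(tp, fp, fn):
    """Per-class error rates (in %) from extraction counts.

    tp: correctly extracted units; fp: falsely extracted units;
    fn: reference units that were missed. All inputs are
    illustrative placeholders.
    """
    commission = 100.0 * fp / (tp + fp)   # wrongly included in the class map
    omission = 100.0 * fn / (tp + fn)     # missed from the reference class
    producer_accuracy = 100.0 - omission  # the producer accuracy of Table 3
    return commission, omission, producer_accuracy
```

Under these definitions, producer accuracy and omission error are complementary, which is why the MROIC rows with the highest producer accuracy in Table 3 also show the lowest omission errors in Table 4.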
Table 5. The overall accuracy and Kappa coefficient for the area of raft and fish cage facilities.

| Region   | Method | Overall Accuracy | Kappa Coefficient |
|----------|--------|------------------|-------------------|
| Region A | MROIC  | 92.0% | 0.83 |
| Region A | MBIC   | 90.3% | 0.79 |
| Region A | ROIC   | 81.1% | 0.63 |
| Region B | MROIC  | 89.2% | 0.79 |
| Region B | MBIC   | 82.7% | 0.61 |
| Region B | ROIC   | 73.3% | 0.54 |
| Region C | MROIC  | 90.1% | 0.77 |
| Region C | MBIC   | 89.5% | 0.71 |
| Region C | ROIC   | 77.9% | 0.53 |
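The overall accuracy and Kappa coefficient reported in Table 5 follow from a class confusion matrix. A sketch using the standard definitions (the example matrix is illustrative only):

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's Kappa from a confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                 # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
    return po, (po - pe) / (1.0 - pe)
```

Kappa discounts the agreement expected by chance, so it is always at or below the overall accuracy, matching the pattern across all nine rows of Table 5.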