Article

Mapping for Larimichthys crocea Aquaculture Information with Multi-Source Remote Sensing Data Based on Segment Anything Model

1 East China Sea Fisheries Research Institute, Chinese Academy of Fishery Sciences, Shanghai 200090, China
2 College of Science, University of Shanghai for Science and Technology, Shanghai 200093, China
* Author to whom correspondence should be addressed.
Fishes 2025, 10(10), 477; https://doi.org/10.3390/fishes10100477
Submission received: 19 August 2025 / Revised: 14 September 2025 / Accepted: 22 September 2025 / Published: 24 September 2025
(This article belongs to the Section Fishery Facilities, Equipment, and Information Technology)

Abstract

Monitoring Larimichthys crocea aquaculture in a low-cost, efficient, and flexible manner with remote sensing data is crucial for the optimal management and sustainable development of the aquaculture industry and intelligent fisheries. An innovative automated framework, based on the Segment Anything Model (SAM) and multi-source high-resolution remote sensing imagery, is proposed for high-precision aquaculture facility extraction, overcoming the low efficiency and limited accuracy of traditional manual inspection methods. The research method includes systematic optimization of SAM segmentation parameters for different data sources and rigorous evaluation of model performance at multiple spatial resolutions. Additionally, the impact of different spectral band combinations on segmentation quality is systematically analyzed. Experimental results demonstrate a significant correlation between resolution and accuracy, with UAV-derived imagery achieving exceptional segmentation accuracy (97.71%), followed by Jilin-1 (91.64%) and Sentinel-2 (72.93%) data. Notably, the NIR-Blue-Green band combination exhibited superior performance in delineating aquaculture infrastructure, suggesting its optimal utility for such applications. A robust and scalable solution for automatically extracting facilities is established, which offers significant insights for extending SAM's capabilities to broader remote sensing applications within marine resource assessment.
Key Contribution: In this study, a scalable, automated workflow was proposed for remote sensing information extraction, which outperforms traditional manual or rule-driven methods in both segmentation accuracy and processing efficiency. This method provides an important reference for extending SAM's potential in remote sensing applications such as marine resource assessment.

Graphical Abstract

1. Introduction

Larimichthys crocea (large yellow croaker) is a demersal species endemic to the northwestern Pacific Ocean, known for its high nutritional value, rapid growth rate, and sensitivity to environmental fluctuations, which make it an ideal indicator species for coastal ecosystem health. As one of the most important marine economic fish species in China, it plays a crucial role in fishery production and the supply of superior aquatic products [1,2,3]. With the rapid development of the global aquaculture industry, the Larimichthys crocea industry in China has continuously expanded in scale and intensity, becoming a core component of the national fishery system [4,5,6]. Among the major production areas, Sandu Bay in Ningde, Fujian Province, stands out as the largest cage aquaculture base in the country, accounting for over 40% of the national total and housing more than 50,000 cages across 2000 hectares [7].
However, the marine ecosystem has faced increasingly severe pressure due to the expansion of aquaculture. Problems such as nutrient pollution, benthic degradation, and increased risk of disease outbreaks have arisen from high-density net-cage and raft farming [8,9]. These problems threaten both environmental sustainability and aquaculture productivity [10,11]. Efficient and accurate monitoring tools are therefore urgently needed to manage the spatial distribution and density of aquaculture facilities, ensure the sustainable development of aquaculture, and maintain the balance of marine ecosystems [12,13]. To meet this need, more efficient and intelligent technological tools are required to enhance the automation and accuracy of aquaculture facility monitoring, and high-resolution image segmentation methods based on deep learning have been proposed as a promising solution to this challenge.
Currently, aquaculture monitoring still relies mainly on manual inspections and statistical reports, which are inefficient and poorly suited to large-scale, high-frequency application [14,15,16]. Although recent studies have attempted to use comparatively low-resolution remote sensing data for semi-automated monitoring, object-level extraction tasks still face significant challenges due to limitations in spatial resolution and algorithm accuracy [16]. The advent of high-resolution remote sensing imagery has opened new avenues for precision aquaculture monitoring [14]. High spatial resolution and rich spectral information from satellite platforms, such as Jilin-1 and UAV-based multispectral systems, significantly improve the accuracy of aquaculture facility identification [17,18]. Nevertheless, the complexity and volume of high-resolution data present challenges for traditional processing methods, which often fail to capture subtle features or manage noise effectively [19]. The rise of deep learning has sparked a revolutionary breakthrough in the field of image segmentation. Models such as U-Net [20] and DeepLab [21,22] have achieved notable success in fields such as medical imaging, land cover classification, and marine monitoring [23]. Recently, Meta AI introduced the Segment Anything Model (SAM) [24], a zero-shot segmentation framework trained on over 1.1 billion masks and 11 million images, demonstrating exceptional versatility and generalization capability [25]. Built on prompt-based segmentation, SAM has demonstrated high potential in a variety of segmentation tasks without the need for task-specific retraining [26]. In recent studies, SAM has been increasingly applied to remote sensing and environmental monitoring. For instance, researchers have explored its use in land cover mapping [27], coastline delineation [28], and vegetation change detection [29], taking advantage of its ability to generalize across diverse image domains without retraining. Other works have applied SAM to high-resolution UAV imagery for tasks such as wetland monitoring [30] and aquaculture facility identification [31], demonstrating its potential to substantially reduce the time and effort required for manual annotation. These applications highlight SAM's growing role in environmental and resource management, while also revealing challenges related to parameter optimization and computational efficiency that this study seeks to address.
In aquatic environments where traditional monitoring methods struggle to capture dynamic ecological features, such as fish cage locations, algal blooms, or benthic structure degradation, the application of SAM provides a new avenue for scalable and efficient spatial analysis. Specifically, in the context of aquaculture monitoring, the application of SAM offers several advantages, which enhances segmentation accuracy, reduces the reliance on manual annotation, and performs robustly in complex marine environments where traditional methods often struggle [32]. Preliminary studies have shown that SAM can accurately identify aquaculture cages and floating raft facilities [33], especially when applied to high-resolution satellite and drone imagery. Nevertheless, despite growing interest in the use of deep learning for aquaculture monitoring, the systematic evaluation of SAM’s performance across different types of remote sensing imagery (from low resolution to ultra-high resolution) remains insufficient [34]. Existing studies mostly focus on single data types or specific aquaculture environments, resulting in a lack of comprehensive understanding of segmentation performance under different spatial resolutions and scene complexities [35,36]. Such capabilities are crucial for assessing the spatial extent and configuration of intensive aquaculture systems, which are closely linked to habitat fragmentation, nutrient dispersion, and fish health management.
The adaptability of SAM to Sentinel-2, Jilin-1, and UAV multispectral remote sensing images is systematically evaluated, focusing on the monitoring of Larimichthys crocea mariculture, which is of significant economic and ecological importance in coastal fisheries. The aim of the study is to analyze the segmentation and classification accuracy of extracting Larimichthys crocea aquaculture information under different parameters and from different data sources, thereby verifying the applicability and advantages of SAM for this task. Unlike previous studies that primarily focused on general coastal mapping or terrestrial applications, our approach specifically addresses the challenge of accurately identifying and classifying aquaculture facilities associated with Larimichthys crocea farming, providing a scalable and automated monitoring solution for coastal resource management and fisheries sustainability.

2. Study Area and Data

2.1. Study Area

Sandu Bay, situated in southeastern Ningde City, Fujian Province, lies along the East China Sea and serves as a strategic maritime location connecting China's southern and northern coastal routes; it is a typical aquatic ecosystem dominated by marine aquaculture activities. It experiences a subtropical maritime monsoon climate and is situated between 119°30′ and 120°09′ E and 26°39′ and 27°06′ N, as shown in Figure 1. Surrounded by the Dongchong and Jianjiang Peninsulas, the area forms a nearly enclosed natural bay [25,37] whose waters are clearly distinguished from the surrounding sea, enabling precise delineation of the aquaculture zones. Owing to its narrow entrance, Sandu Bay exhibits strong seawater exchange capacity, and its water depth and temperature are suitable for the reproduction and spawning of Larimichthys crocea, providing a stable environment for aquaculture and making it an ideal region for the development of mariculture. The uneven coloring in the right panel of Figure 1 is intentionally applied to highlight the water area enclosed by the red boundary, which represents the designated study area; the sample points used for field data collection are marked with yellow dots.
Owing to its unique geographical advantages, Sandu Bay is the largest net-pen aquaculture base for large yellow croaker (Larimichthys crocea) in the country [38,39], with a long history of Larimichthys crocea aquaculture. Its rich ecological resources and superior natural geographical conditions provide a sound ecological basis for the culture of this species.
Moreover, its extensive aquaculture area and mature aquaculture model have led to Sandu Bay being listed as a key conservation zone for Larimichthys crocea germplasm resources [40]. Therefore, selecting Sandu Bay as the study area for research on information extraction techniques in Larimichthys crocea aquaculture holds significant practical importance.

2.2. Data

Because frequent cloud cover and rainfall over the sea are unfavorable for satellite observations, data from a multi-platform remote sensing observing system were used in this study, and a systematic preprocessing workflow was implemented for each data source to ensure the consistency and comparability of results across data sources in the subsequent analyses. These images are primarily Sentinel-2 L2A products (Airbus Defence and Space, Toulouse, France), Jilin-1 03D series satellite images (Chang Guang Satellite Technology Co., Ltd., Changchun, China), and UAV multispectral aerial images, with specific parameters shown in Table 1. Although these datasets were collected on different dates, the time intervals are relatively short in the context of aquaculture monitoring. During these periods, aquaculture facilities and their operational status typically remain stable, with no substantial structural or environmental changes expected. Therefore, the datasets are considered consistent and comparable for integrated analysis.
A systematic preprocessing workflow was applied to each dataset, including radiometric calibration, geometric correction, atmospheric correction, and spatial alignment, to ensure compatibility and reliability for subsequent analyses. This multi-source data integration, supported by rigorous preprocessing, enables a comprehensive and accurate assessment of aquaculture facilities while mitigating the limitations associated with single-source satellite observations.
The Sentinel-2 L2A data [41], sourced from the European Space Agency’s Copernicus Program, has undergone radiometric calibration and atmospheric correction using the Sen2Cor processor (version 2.11), providing surface reflectance products. To effectively mitigate atmospheric interference, the Scene Classification Layer (SCL) band generated by Sen2Cor was used to mask cloud and cloud-shadow regions, ensuring higher accuracy in subsequent analyses. Sentinel-2 imagery contains bands at three spatial resolutions (10 m, 20 m, and 60 m). To eliminate inconsistencies caused by these differing resolutions, all bands were resampled to a uniform 10 m resolution using the nearest-neighbor method. This approach was selected over alternatives such as bilinear or cubic interpolation because it preserves the original spectral reflectance values without introducing artifacts, which is particularly important for quantitative analyses such as spectral index calculations. To enhance data compatibility across different remote sensing platforms, the original SAFE format of the Sentinel-2 L2A data was converted to the ENVI standard format (ENVI version 5.6). The ENVI format is widely supported, offering efficient handling and storage of multispectral data, and facilitates subsequent processing, classification, and visualization. Finally, individual single-band files were merged through band stacking to generate multispectral composite images. This step integrates the spectral information from multiple bands into a single dataset, enabling comprehensive analyses such as feature extraction, land cover classification, and change detection. These preprocessing steps collectively improved the geometric consistency, processing efficiency, and interoperability of the data, providing a reliable foundation for further analysis.
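To illustrate the resampling step, the following is a minimal sketch of nearest-neighbor resampling with the rasterio library; the input file name and scale factor are hypothetical (the study performed this step within an ENVI-based workflow), but the technique is the same: pixels are duplicated rather than interpolated, preserving the original reflectance values.

```python
import rasterio
from rasterio.enums import Resampling

# Resample a 20 m Sentinel-2 band to 10 m with nearest-neighbor, which
# duplicates pixels instead of interpolating and therefore preserves the
# original surface reflectance values.
with rasterio.open("T50RNN_B11_20m.jp2") as src:  # hypothetical band file
    scale = 2  # 20 m -> 10 m
    data = src.read(
        out_shape=(src.count, src.height * scale, src.width * scale),
        resampling=Resampling.nearest,
    )
    # Rescale the affine transform so the output stays georeferenced.
    transform = src.transform * src.transform.scale(
        src.width / data.shape[-1], src.height / data.shape[-2]
    )
```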
The Jilin-1 03D satellite [42], equipped with a PMS sensor, captures 0.75 m resolution panchromatic bands and 3 m resolution multispectral bands (blue, green, red, and near-infrared). The preprocessing involved four key steps. First, the original digital numbers (DN) were converted to Top-of-Atmosphere (TOA) reflectance using calibration coefficients provided in the satellite metadata, following the official sensor documentation. This step was performed in ENVI software (version 5.6) using the Radiometric Calibration tool, ensuring that data from different acquisition times and sensors are physically comparable. Second, atmospheric effects were removed using the FLAASH (Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes) model implemented in ENVI. The atmospheric correction was configured with a mid-latitude summer atmospheric model and a rural aerosol model, with visibility set to 40 km, as recommended for coastal and nearshore environments; this corrects for scattering and absorption effects and yields accurate surface reflectance products. Third, Gram-Schmidt pan-sharpening was employed to integrate the high spatial detail of the 0.75 m panchromatic image with the spectral richness of the 3 m multispectral data [43,44,45]. This method was chosen over alternatives such as Principal Component Analysis (PCA) and the Brovey transform because it preserves spectral fidelity while effectively enhancing spatial resolution, making it suitable for applications that require precise spectral information, such as classification and change detection. Finally, orthorectification was performed by introducing ground control points (GCPs) collected from high-precision reference imagery, combined with a 30 m resolution Digital Elevation Model (DEM). This process corrected terrain-induced distortions and sensor-related geometric errors, achieving sub-pixel spatial positioning accuracy and ensuring consistency with standard geographic coordinate systems.
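As a concrete illustration of the first step, the sketch below applies the standard two-stage DN-to-TOA-reflectance conversion. The coefficient values are placeholders only: the real gains, offsets, solar irradiance, and geometry come from the Jilin-1 scene metadata, and the study performed this conversion with ENVI's Radiometric Calibration tool rather than custom code.

```python
import numpy as np

def dn_to_toa_reflectance(dn, gain, offset, esun, d_au, sza_deg):
    """Standard conversion: DN -> at-sensor radiance -> TOA reflectance.

    gain, offset : absolute calibration coefficients from scene metadata
    esun         : mean solar exoatmospheric irradiance for the band (W m^-2 um^-1)
    d_au         : Earth-Sun distance in astronomical units
    sza_deg      : solar zenith angle in degrees
    """
    radiance = gain * dn.astype(np.float64) + offset  # at-sensor radiance
    return np.pi * radiance * d_au**2 / (esun * np.cos(np.deg2rad(sza_deg)))

# Placeholder values for illustration only; use the metadata of the scene.
toa = dn_to_toa_reflectance(np.array([[512, 640]]), gain=0.05, offset=0.0,
                            esun=1536.0, d_au=1.0, sza_deg=35.0)
```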
With the rapid advancement of unmanned aerial vehicle (UAV) technology, UAV multispectral aerial images have become an indispensable source for acquiring high-resolution surface data. In this study, a DJI M300 Pro (DJI, Shenzhen, China) multirotor UAV platform equipped with a DJI Micasense Altum-PT (MicaSense, Wichita, KS, USA) multispectral imaging system, which captures six spectral bands (blue, green, red, red edge, near-infrared, and thermal infrared) with a ground sampling distance of 5 cm at 200 m altitude, was employed to conduct a systematic aerial survey over the Ningde study area during the period from 16 to 19 July 2024. The platform was configured to maintain a constant operational altitude of 200 m, with forward and side overlaps of 70% and 60%, which were meticulously designed to ensure comprehensive spatial coverage and high geometric fidelity of the acquired imagery. The complete image acquisition process is shown in Figure 2.
Multispectral image acquisition was conducted at predefined time intervals, synchronized with concurrent GPS data logging to ensure precise georeferencing. Radiometric calibration was performed using standard reflectance panels to mitigate the effects of illumination variability and enhance data comparability and consistency. The subsequent post-processing workflow involved image mosaicking, digital surface model (DSM) generation, and orthorectified image production using DJI Terra (version 4.0). In addition, rigorous geometric correction was applied based on distributed ground control points (GCPs). This comprehensive data processing pipeline yielded high-precision orthomosaics with optimal spatial and spectral characteristics for subsequent image segmentation and classification analyses. These preprocessing measures ensured radiometric and geometric consistency across all datasets, thereby ensuring the reliability and scientific validity of cross-platform image analysis and classification results in the subsequent stages of the study.

3. Method

3.1. Segment Anything Model

The SAM, proposed by Meta AI (New York, NY, USA), is a deep learning-based zero-shot image segmentation framework [46]. Unlike traditional segmentation approaches that require task-specific training, SAM is designed to perform segmentation directly on unseen data. Its main innovation lies in the construction of a large-scale segmentation dataset containing more than 1.1 billion masks generated from 11 million licensed and privacy-compliant images. This extensive dataset enables the model to achieve strong generalization capability across diverse segmentation tasks without additional fine-tuning [33].
As illustrated in Figure 3, SAM is composed of three core components: an image encoder, a prompt encoder, and a mask decoder [47]. The image encoder, based on a Vision Transformer (ViT), extracts high-dimensional visual features from the input image. The prompt encoder supports multimodal inputs (points, boxes, and masks) to enable interactive human guidance. The mask decoder fuses the image features with the encoded prompt information to generate the final segmentation masks, extracting the target regions from the high-dimensional feature space and producing outputs with clear boundaries and strong spatial consistency. By leveraging these three interconnected components, SAM can rapidly and accurately generate segmentation masks under various input conditions, which has led to its broad application across different image segmentation tasks.
The overall workflow for extracting Larimichthys crocea mariculture areas is illustrated in Figure 4. The process consists of three main stages: data preparation, model experimentation, and result analysis. In the data preparation stage, multi-source data are collected, including Jilin-1 satellite imagery, Sentinel-2 imagery, UAV images, and ground-based sample data. These datasets undergo a series of preprocessing steps to enhance image quality and consistency, followed by region of interest (ROI) extraction, which isolates relevant aquaculture areas for subsequent analysis.
The model experimentation stage focuses on optimizing and evaluating the segmentation performance of the SAM. First, the preprocessed images are input into the SAM framework, where parameter sensitivity analysis is conducted using a controlled variable method to assess the impact of different parameter settings. Simultaneously, manual mask annotations are generated from the ground truth data to serve as a benchmark ($B_g$). By comparing the SAM-generated masks ($B_p$) with the ground truth, the Intersection over Union (IoU) metric is computed to quantify segmentation accuracy. Based on this evaluation, the optimal segmentation parameters are selected to enhance precision and stability.
In the result analysis stage, segmentation performance is evaluated across different datasets to ensure generalizability, followed by optimal parameter analysis and an advantage analysis of data segmentation. Similarly, the classification performance is assessed through cross-dataset evaluations to determine the robustness of the classification framework. Finally, insights from both segmentation and classification analyses are integrated to support the conclusions of the study. Thus, the structured workflow enables a systematic approach to optimizing segmentation and classification processes, ensuring accurate and efficient identification of Larimichthys crocea mariculture areas across multiple data sources.
In remote sensing applications, SAM demonstrates excellent adaptability to multi-resolution images (ranging from Sentinel-2's 10 m to UAV centimeter-level imagery) through configurable parameters (e.g., IoU thresholds, region constraints). The interactive segmentation capability of SAM significantly reduces the reliance on large-scale labeled samples compared to traditional methods. This study presents a systematic evaluation of SAM for the extraction of marine aquaculture facilities from multiple remote sensing sources, providing an innovative technological path for constructing an intelligent coastal monitoring system.

3.2. Random Forest Classification Methodology

The RF (Random Forest) algorithm was employed in this study to classify aquaculture facilities based on the segmentation outputs generated by the SAM. Instead of providing a theoretical description of the algorithm, we focus on its practical implementation in our workflow. The segmented aquaculture facility masks produced by SAM were used to define regions of interest, and for each region, spectral information from Sentinel-2, Jilin-1, and UAV imagery was extracted as input features. The feature set included visible bands (blue, green, red), near-infrared, and derived spectral indices such as NDVI (Normalized Difference Vegetation Index) and NDWI, along with texture measures calculated from the high-resolution UAV imagery.
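For reference, the two spectral indices can be computed directly from the band reflectances. A minimal sketch follows; note that the text does not specify which NDWI variant was used, so the McFeeters green/NIR form is assumed here.

```python
import numpy as np

def ndvi(nir, red):
    # Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)
    return (nir - red) / (nir + red + 1e-10)

def ndwi(green, nir):
    # Normalized Difference Water Index (McFeeters form, assumed here):
    # (Green - NIR) / (Green + NIR); positive values indicate open water.
    return (green - nir) / (green + nir + 1e-10)
```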
The Random Forest classifier was implemented using the Scikit-learn library (version 1.4.0). A total of 60 decision trees were used, and the maximum number of features considered at each split was set to the square root of the total number of input features, following standard practice for remote sensing classification. OOB (Out-of-bag) error estimates were enabled to evaluate model performance during training. These parameter choices were determined based on preliminary experiments to balance accuracy and computational efficiency.
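A minimal scikit-learn configuration matching the reported settings (60 trees, square-root feature sampling at each split, out-of-bag scoring) is sketched below. The training arrays are synthetic placeholders standing in for the per-region feature vectors, and the random seed is an assumption, as it is not reported in the text.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 12))     # placeholder per-region feature vectors
y_train = rng.integers(0, 2, size=200)   # placeholder labels (cage vs. floating raft)

rf = RandomForestClassifier(
    n_estimators=60,      # 60 decision trees, as reported
    max_features="sqrt",  # sqrt of the feature count considered at each split
    oob_score=True,       # out-of-bag error estimation during training
    random_state=42,      # assumption: seed not reported
)
rf.fit(X_train, y_train)
print(f"OOB accuracy: {rf.oob_score_:.3f}")
```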
By integrating the SAM segmentation outputs with the RF classification step, the workflow ensured that classification focused only on candidate aquaculture areas identified by SAM, reducing noise from irrelevant land or water regions. This two-stage process improved both precision and computational efficiency. The overall workflow is illustrated in Figure 5, with detailed parameter settings reported in the text above.

3.3. Optimal Parameter Analysis

The Intersection over Union (IoU) is used as the primary metric to evaluate segmentation accuracy. IoU is defined as the ratio of the area of overlap between the predicted segmentation mask and the ground truth mask to the area of their union. Mathematically, it can be expressed as:

$$\mathrm{IoU} = \frac{|B_p \cap B_g|}{|B_p \cup B_g|}$$

where $B_p$ denotes the set of pixels in the predicted segmentation and $B_g$ the set of pixels in the ground truth.
The IoU metric was chosen because it is widely used in image segmentation tasks, particularly in evaluating object localization and boundary accuracy. In the context of aquaculture monitoring, precise segmentation of fish or other target organisms is critical for tasks such as biomass estimation, behavior analysis, and health assessment. IoU provides a robust and interpretable measure of spatial agreement between predictions and reference annotations, making it suitable for assessing the reliability of segmentation models in this domain.
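For binary masks stored as arrays, the metric reduces to a few array operations; a minimal sketch:

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU between two binary masks: |Bp ∩ Bg| / |Bp ∪ Bg|."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, gt).sum()) / float(union)
```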

4. Results

4.1. Analysis of Optimal Segmentation Parameters for Different Data Sources

When adjusting the parameters of the SAM (Segment Anything Model), it is essential to consider both the size of the segmented imagery and the scale variability of the target objects to achieve optimal classification accuracy. The SAM allows tuning of several parameters, including the number of sampling points, number of point clouds, predicted IoU threshold, intersection-over-union (IoU) threshold, stability threshold, number of cropping layers, point reduction factor per layer, and minimum mask area. Among these, the stability threshold, IoU threshold, and minimum mask area play a particularly critical role in the training and application phases of the model.
The diversity in spatial resolution and structural detail across different remote sensing platforms necessitates tailored segmentation strategies. To facilitate reproducibility and parameter transferability, the finalized configurations across all data sources are systematically compiled. A consolidated overview of the optimal parameter values is provided in Table 2, which offers a reference framework for adapting segmentation pipelines to multi-source imagery under varying operational contexts.
Specifically, the sampling number defines the sampling resolution along each axis, with the total number of sampled points being the square of this value. The point cloud number specifies how many points are processed simultaneously during segmentation. Within a reasonable range, increasing either the sampling point count or the point cloud number tends to improve segmentation accuracy, but also increases computational complexity and GPU memory usage. The predicted IoU threshold filters out low-quality segmentation masks, and the IoU threshold further removes masks with excessive overlap. The stability threshold filters masks based on their internal consistency. The number of cropping layers determines whether multi-scale cropping is applied to enhance mask precision.
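For readers using Meta AI's reference implementation, these parameters correspond, by our reading rather than by any mapping stated in the paper, to the arguments of SamAutomaticMaskGenerator. The sketch below shows a configuration in the spirit of the Jilin-1 settings discussed in this section; note that crop_n_points_downscale_factor is an integer in the reference API, so the 1.25 reduction factor reported later suggests a modified pipeline, and the value 2 here is purely illustrative.

```python
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Load the released ViT-H checkpoint (file name as distributed by Meta AI).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")

mask_generator = SamAutomaticMaskGenerator(
    model=sam,
    points_per_side=64,                # "sampling number" (points per axis)
    points_per_batch=32,               # "point cloud number" (points per batch)
    pred_iou_thresh=0.85,              # predicted IoU threshold (Jilin-1 optimum)
    stability_score_thresh=0.90,       # stability threshold
    crop_n_layers=1,                   # number of cropping layers
    crop_n_points_downscale_factor=2,  # per-layer point reduction (int in this API)
    min_mask_region_area=100,          # minimum mask area in pixels
)

image = cv2.cvtColor(cv2.imread("jilin1_tile.png"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # list of dicts: 'segmentation', 'predicted_iou', ...
```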
The segmentation performance of the model was systematically investigated using multi-source remote sensing data (Sentinel-2, Jilin-1, and UAV imagery) through controlled variable experiments. By isolating and adjusting individual parameters while maintaining others constant, we conducted comprehensive parameter optimization and quantitatively analyzed how key parameters influence segmentation accuracy. The controlled experiments revealed that spatial resolution, scene complexity, and interference factors significantly affect parameter sensitivity, demonstrating the necessity for tailored optimization strategies across different data types. This methodological approach enabled precise identification of optimal parameter configurations for each dataset while minimizing confounding effects from other variables. IoU performance across different hyperparameter settings for three types of remote sensing imagery is shown in Figure 6.
For Sentinel-2 data, the model exhibits heightened sensitivity to the sampling number parameter. A setting of 64 sampling points optimally captures fine features in medium-to-low resolution imagery while maintaining computational efficiency, achieving an IoU of 72.93%, outperforming alternative configurations (e.g., 32 or 128 points, which reduce IoU by 2–3%). The point cloud number parameter shows minimal influence on accuracy, yet a batch size of 32 strikes the best balance between precision and resource consumption. Optimization experiments for predicted IoU threshold and stability parameter threshold reveal that overly stringent thresholds lead to valid region loss, whereas excessively lenient thresholds introduce noise. The optimal values are determined as 0.8 and 0.9, respectively, ensuring segmentation consistency without over-cropping. Spatial cropping analysis demonstrates that a number of layers of cutting of 1 combined with a reduction factor for trimming number of 1.25 effectively suppresses background noise, elevating final IoU to 72.59%. Additionally, to mitigate false detections from small objects like ships, a minimum division area of 100 significantly enhances segmentation reliability.
Jilin-1 data, with its intermediate resolution between Sentinel-2 and UAV imagery, exhibits transitional parameter characteristics. The sampling number and point cloud number have limited impact, but the optimal predicted IoU threshold increases to 0.85, while the stability parameter threshold remains at 0.9. This indicates that medium-resolution data requires moderately stricter constraints to filter low-confidence predictions, though excessive thresholds (e.g., 0.9) still degrade IoU by ~1.5%. Spatial cropping parameters align with Sentinel-2 (1 layer, downscale factor 1.25), reflecting a preference for moderate downsampling to balance noise removal and detail preservation.
For high-resolution UAV data (centimeter-scale), the model displays greater sensitivity to parameter constraints. The predicted IoU threshold of 0.825 yields peak performance (final IoU = 97.53%), whereas stricter thresholds (e.g., 0.875) cause valid region omission, reducing IoU by ~3%. The stability parameter threshold must be raised to 0.925 to address complex local variations in high-resolution scenes. Spatial cropping benefits from a smaller reduction factor for trimming number (0.8) to retain fine structural details, while a larger minimum division area (500) effectively excludes small floating debris near aquaculture facilities. These findings underscore the need for meticulous parameter tuning in high-resolution data processing to harmonize detail retention and noise suppression.
To further evaluate the robustness and consistency of the parameter configuration in heterogeneous data sources, we calculated the standard deviation of each segmentation parameter, as shown in Table 3, which revealed the relative sensitivity of individual parameters when applied to datasets with different spatial resolutions and features.
The relationship between spatial resolution and segmentation performance exhibits a non-monotonic trend across data sources. As shown in Figure 7, moderate resolution levels generally achieve a better balance between stability and final IoU. For instance, in Jilin-1 data, segmentation performance peaks at around 5 m resolution, beyond which coarser resolution (e.g., 10 m) leads to noticeable performance degradation. This suggests that overly coarse granularity may obscure fine object boundaries, thus affecting IoU. In contrast, UAV data demonstrates higher sensitivity to resolution: fine-grained resolutions (<0.4 m) yield optimal performance, while beyond 0.8 m, IoU rapidly declines despite high stability scores. These results indicate that resolution tuning must consider both data characteristics and task-specific structure preservation, as finer resolution does not always guarantee improved accuracy if it leads to over-segmentation or instability.

4.2. Segmentation Accuracy Analysis of Different Data Sources

In this research, the random forest classification algorithm was used to classify the sample areas from the Sentinel-2, Jilin-1, and UAV data; the segmentation accuracy results are shown in Table 4. Because the various data differ in segmentation accuracy, resolution, and characteristics, their performance in the classification task varies. The following analysis examines these three data sources and explores their application to the classification of aquaculture facilities.
In this study, the segmentation specifically targets Larimichthys crocea aquaculture cages and floating rafts, which are key structures for mariculture production in Sandu Bay. The manual annotations shown in Figure 8 represent field-validated ground truth boundaries of these aquaculture facilities, providing a reference for evaluating the accuracy of SAM-based segmentation. This comparison is biologically and management-relevant, as accurate delineation of cages is essential for biomass estimation, facility expansion monitoring, and assessing potential environmental impacts.
Sentinel-2 imagery, with its broad coverage, is well-suited for large-scale remote sensing monitoring. Despite its relatively low resolution, it effectively captures the spatial distribution of aquaculture facilities. As shown in Figure 8, in large-area segmentation, the low resolution occasionally leads to the merging of adjacent objects into a single detection. Additionally, in densely packed cage areas (Figure 8c), many cages remain undetected, likely due to their close proximity altering the surrounding water reflectance and hindering the model’s ability to segment them individually. The segmentation accuracy of Sentinel-2 data was moderate, which reflects the influence of its coarse spatial resolution on detection performance.
In contrast, Jilin-1 data, with its higher resolution and superior image quality, achieves significantly improved segmentation accuracy (91.64%). As illustrated in Figure 8, the model successfully segments the majority of aquaculture facilities, with predicted boundaries closely aligned to ground truth. Compared to Sentinel-2, Jilin-1 demonstrated higher segmentation precision and clearer boundary delineation, particularly in areas with dense or complex aquaculture structures.
UAV data, with the highest spatial resolution (centimeter-level), achieved the highest segmentation accuracy of 97.71%. It was also effective in distinguishing aquaculture facilities from surrounding features such as buildings and boats, as illustrated in Figure 8c. However, processing UAV imagery required greater computational resources and longer processing times due to the large data volume.
The strengths of these datasets are highly complementary. Sentinel-2 serves as a cost-effective solution for macro-scale, low-precision surveillance. Jilin-1 strikes an optimal balance for medium-scale, accuracy-sensitive applications. UAV data, while unmatched in precision, is constrained by computational costs and thus limited to small-area, high-detail tasks. The choice of data source should align with specific monitoring objectives, weighing trade-offs between coverage, resolution, and resource availability. This tiered approach enables flexible and scalable aquaculture monitoring across diverse operational requirements.

4.3. Segmentation Accuracy Analysis of Band Combinations

Band combination is a technique that enhances the features of specific ground objects by selectively recombining multiple spectral bands from remote sensing imagery. Since different materials exhibit distinct electromagnetic reflectance and absorption characteristics, appropriate band combinations can effectively differentiate and identify various land cover types. In this study, the band combination comparisons shown in Figure 9 are based on the Jilin-1 dataset. The Jilin-1 imagery provides sufficient spatial and spectral resolution to clearly demonstrate the advantages of different band combinations, making it suitable for illustrating their effectiveness in distinguishing aquaculture facilities and other land cover types.
Specifically, Figure 9 illustrates the segmentation results derived from the Jilin-1 dataset, which was selected as a representative data source due to its high spatial resolution, making it more suitable for detailed analysis compared to Sentinel-2 or UAV imagery. The exact wavelength ranges (in nanometers) of the Jilin-1 spectral bands are: Blue (450–520 nm), Green (520–590 nm), Red (630–690 nm), and Near-infrared (760–900 nm). Four false-color composites were generated using different band combinations: Near-infrared, Blue, Green; Blue, Green, Red; Green, Red, Near-infrared; and Blue, Red, Near-infrared. The colored outlines overlaid on the image correspond to the segmentation results obtained from each respective band combination, providing a visual comparison of their performance in identifying aquaculture facilities and surrounding land cover types.
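In implementation terms, a band combination is simply a reordering and stacking of spectral channels before display or segmentation. A minimal sketch follows; the band names and the percentile contrast stretch are illustrative, not settings reported by the study.

```python
import numpy as np

def false_color(bands, order=("nir", "blue", "green")):
    """Stack three named 2-D band arrays into a composite (channels last)."""
    comp = np.dstack([bands[b].astype(np.float32) for b in order])
    lo, hi = np.percentile(comp, (2, 98))  # simple 2-98% contrast stretch
    return np.clip((comp - lo) / (hi - lo + 1e-10), 0.0, 1.0)

# Example: NIR-Blue-Green composite from a dict of 2-D reflectance arrays.
# composite = false_color({"nir": nir, "blue": blue, "green": green})
```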
A comparison of segmentation accuracy across four different band combinations is presented in Figure 10: BGR (Blue, Green, Red), BR-NIR (Blue, Red, Near-infrared), GR-NIR (Green, Red, Near-infrared), and NIR-BG (Near-infrared, Blue, Green), which were selected based on both common practices in remote sensing and preliminary empirical tests. Specifically, BGR represents the standard true-color composite, while the other combinations incorporate NIR information to enhance water and vegetation discrimination, which is crucial for identifying aquaculture facilities. Although the differences in final IoU values appear small (approximately 1%), statistical significance was assessed using one-way ANOVA followed by Tukey's HSD post hoc test, confirming that the differences among band combinations are statistically significant at the 0.05 level. This demonstrates that incorporating specific near-infrared and visible band combinations can meaningfully improve segmentation performance for aquaculture facility detection, rather than the observed differences arising from random variation.
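The significance test can be reproduced with SciPy and statsmodels; the sketch below uses illustrative per-image IoU values, not the study's actual scores.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative per-image IoU scores grouped by band combination.
iou_by_combo = {
    "BGR":    [0.916, 0.914, 0.917, 0.915],
    "BR-NIR": [0.909, 0.908, 0.910, 0.909],
    "GR-NIR": [0.902, 0.903, 0.901, 0.903],
    "NIR-BG": [0.915, 0.916, 0.915, 0.916],
}

f_stat, p_val = f_oneway(*iou_by_combo.values())  # one-way ANOVA
scores = np.concatenate([np.asarray(v) for v in iou_by_combo.values()])
groups = np.repeat(list(iou_by_combo), [len(v) for v in iou_by_combo.values()])
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")
print(pairwise_tukeyhsd(scores, groups, alpha=0.05).summary())  # Tukey HSD
```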
Among all tested band combinations (Figure 10), the Blue-Green-Red (BGR) and Near-Infrared-Blue-Green (NIR-BG) combinations achieved the highest IoU value of 91.55%, demonstrating superior performance in accurately identifying target regions. These two combinations yielded nearly identical segmentation results, indicating their high consistency and precision in delineating Larimichthys crocea aquaculture facilities. The comparable performance also suggests that the Red and Near-Infrared (NIR) bands have minimal influence on facility extraction in this context.
The Blue-Red-NIR (BR-NIR) combination produced a slightly lower IoU of 90.89%, likely due to the absence of the Green band, which may weaken the model’s ability to discriminate certain features, particularly aquaculture structures.
The Green-Red-NIR (GR-NIR) combination exhibited the lowest IoU (90.22%) among all tested configurations. The exclusion of the Blue band reduced its effectiveness in distinguishing aquaculture facilities from water bodies, highlighting that the Blue band provides more discriminative information than the Green band for this specific segmentation task.
Optimal band selection: for Larimichthys crocea aquaculture facility segmentation, the NIR-Blue-Green combination is the most effective choice. This combination optimally captures vegetation- and water-related spectral features while maintaining high precision in target identification.

4.4. Classification Accuracy Analysis of Different Data Sources

To support classification using the random forest algorithm, a comprehensive set of features was extracted from both the ground sample polygons and the full study area. Specifically, spectral features included the mean and standard deviation of each multispectral band within the sample mask, capturing the reflectance characteristics of land cover types. Texture features were derived from the gray-level co-occurrence matrix (GLCM) of the first principal component, including contrast, dissimilarity, homogeneity, energy, and correlation. Geometric features, such as area and perimeter, were computed from the vector geometry to reflect shape and spatial complexity. The same feature set was applied to the segmented regions across the study area for classification and prediction. The results of classification accuracy rates for different data sources are shown in Table 5.
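The texture component of this feature set can be computed with scikit-image. A minimal sketch follows; the input would be the first principal component rescaled to grayscale, and the quantization level (32 gray levels) is chosen here for illustration, since the study does not report it.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_patch, levels=32):
    """Contrast, dissimilarity, homogeneity, energy, and correlation from the
    GLCM of a grayscale patch, averaged over four directions."""
    edges = np.linspace(gray_patch.min(), gray_patch.max(), levels)
    q = (np.digitize(gray_patch, edges) - 1).clip(0, levels - 1).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    props = ("contrast", "dissimilarity", "homogeneity", "energy", "correlation")
    return {p: float(graycoprops(glcm, p).mean()) for p in props}
```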
The Sentinel-2 satellite imagery, Jilin-1 satellite data, and UAV imagery were classified using a random forest algorithm based on spectral, color, texture, and geometric features. Due to variations in segmentation accuracy, resolution, and inherent characteristics among these datasets, their performance in classification tasks differed significantly. The Sentinel-2 data, with its relatively low spatial resolution, struggles to capture fine details and textural differences among various aquaculture structures (e.g., cages and floating rafts), limiting its ability to distinguish similar facilities. However, it still provides a foundational reference for broad classification, achieving an accuracy of 0.71 in differentiating cages and floating rafts. This result highlights its constraints in fine-grained classification tasks.
The Jilin-1 satellite data, with its high spatial resolution, provides clear visualization of ground details and demonstrates advantages in distinguishing aquaculture cages from floating rafts. In the study area, the primary cultured species is Larimichthys crocea, while other cage facilities include abalone and sea cucumber farming structures, along with a small number of solar photovoltaic panels.
During the classification process, density variations were observed between Larimichthys crocea cages and abalone cages. Specifically, Larimichthys crocea cages in the study area feature grid densities of 4 × 4 m and 10 × 10 m, whereas abalone cages predominantly maintain a 4 × 4 m grid pattern with additional horizontal walkways constructed at the center of each grid to facilitate feeding. In the Jilin-1 imagery, solar panels showed limited differentiation from abalone and sea cucumber cages, all exhibiting similarly dense patterns. Consequently, the high-resolution data was classified into three categories: Larimichthys crocea cages, other aquaculture facilities (e.g., abalone cages), and floating rafts.
The classification accuracy of Jilin-1 data reached 0.918, demonstrating excellent performance in detail representation. However, classification precision was somewhat compromised by interfering features such as nearby buildings and vessels that overlapped with aquaculture structures. Nevertheless, the high-resolution data effectively distinguished between Larimichthys crocea and abalone cages, while achieving higher accuracy in differentiating floating rafts from cage structures. As shown in Figure 11a,b, the data successfully separated floating rafts and Larimichthys crocea cages with minimal misclassification. In contrast, Figure 11c reveals more frequent misclassification between other cage types and Larimichthys crocea cages due to their similar geometric shapes and spectral characteristics.
The classification accuracy was further improved by incorporating field-collected ground truth data and supplementary background information. In total, 80 ground truth samples were collected to ensure comprehensive coverage of aquaculture facilities across multiple locations, including both net cages and floating rafts. Among these, 10 samples were obtained through direct field observations during on-site surveys, providing high-accuracy reference data. The remaining 70 samples were generated through manual visual interpretation of high-resolution imagery, which allowed for the identification and verification of aquaculture structures in areas where field access was limited. This combined approach ensured both the reliability and spatial representativeness of the ground truth dataset.
Despite interference from non-aquaculture features like buildings and boats, the Jilin-1 satellite data maintained reliable performance in discriminating between different types of aquaculture facilities, proving its effectiveness for detailed aquaculture monitoring applications.
In addition to reporting classification accuracies, we also evaluated the computational efficiency of processing the three datasets. Specifically, we measured the average processing time and peak memory usage for each dataset when applying the proposed method. These results, presented in Table 6, complement the classification accuracy results in Table 5 and provide practical insights into the computational demands associated with different data sources.
With the highest spatial resolution among all data sources, UAV-derived imagery provides near-ground-level detail and achieves exceptional classification accuracy (0.94). Its superior capability lies in precisely discriminating fine-scale aquaculture features, including fishing nets, abalone cages, and photovoltaic panels. Despite requiring computationally intensive tiling processing due to massive data volumes, UAV data enables clear delineation of individual cage boundaries while maintaining high classification performance. This advantage proves particularly significant in complex marine environments where structural heterogeneity exists. Although demanding greater processing effort, UAV imagery’s centimeter-level precision establishes it as the optimal remote sensing data source for fine-grained classification tasks, outperforming satellite alternatives in scenarios requiring microscopic feature differentiation. The empirical results confirm its unparalleled effectiveness when ultimate classification accuracy is prioritized over processing efficiency.

4.5. Accuracy Assessment of Different Aquaculture Facilities

Using Jilin-1 data as a representative high-resolution dataset, we further evaluated the segmentation performance for different types of aquaculture facilities. As shown in Table 7, the IoU between the SAM-generated masks and manual annotations for net cages reached 92.68%. The fine spatial resolution of Jilin-1 imagery enabled the model to capture detailed structural features of the net cages, such as the frame geometry and mesh distribution, thereby achieving highly precise segmentation. This provides a reliable foundation for downstream tasks such as cage enumeration and spatial distribution analysis. For floating rafts, the IoU between SAM results and manual annotations was 91.34%. Although slightly lower than that of net cages, the segmentation accuracy remained high, indicating that SAM performs robustly in delineating floating raft structures as well.
Net cages typically exhibit regular geometric shapes and well-defined boundary features, whereas floating rafts tend to have irregular forms and edge regions that are more susceptible to external interference, leading to comparatively larger segmentation errors. In Jilin-1 imagery, the structural features of net cages—such as frames and mesh patterns—are clearly visible, providing the model with abundant cues for accurate segmentation. In contrast, floating rafts present more homogeneous visual characteristics and are more prone to confusion with surrounding water surfaces or other objects, thereby increasing the complexity of segmentation.
Overall, the SAM demonstrates excellent segmentation performance on Jilin-1 data, effectively supporting high-precision monitoring of aquaculture facilities. The segmentation results allow for the extraction of critical spatial information, including the distribution patterns, density, and spatial extent of net cages and floating rafts. This information provides valuable insights into aquaculture production intensity, facility utilization efficiency, and potential environmental impacts. By integrating these data into aquaculture management practices, stakeholders can optimize resource allocation, monitor facility expansion, and detect abnormal changes over time, ultimately contributing to sustainable and data-driven aquaculture planning. The high-accuracy segmentation results lay a solid foundation for downstream applications such as spatial distribution analysis and change detection, facilitating refined management and data-driven decision-making in aquaculture regions.

5. Discussion

5.1. Impact of SAM Parameter Optimization on Aquaculture Facility Extraction Accuracy

A comparative analysis of the IoU results for segmentation parameters across multi-resolution datasets (Figure 12) reveals that four key parameters—predicted IoU threshold, IoU threshold, stability score threshold, and crop layer downscaling factor—exhibit varying degrees of correlation. As shown in Figure 12a–c, the predicted IoU threshold consistently demonstrates a strong negative correlation with IoU across Sentinel-2, Jilin-1, and UAV data, indicating its critical role in performance degradation across different resolutions. In contrast, the IoU threshold and stability score threshold show more dataset-specific influence (Figure 12b,c), while the crop layer downscaling factor displays a high negative correlation only in certain cases (e.g., UAV data in Figure 12c). These findings suggest that while some parameters—such as the predicted IoU threshold—require resolution-specific tuning, others may retain robustness across datasets. Compared to previous work that often assumes uniform parameter transferability, our analysis highlights intrinsic robustness and flexibility among certain parameters, underlining the need for selective adaptation in multi-resolution segmentation tasks.
As illustrated in Figure 13a, the predicted IoU threshold plays a key role in filtering out low-quality segmentation masks. For 10 m-resolution Sentinel-2 data, stringent thresholds discard many valid aquaculture facility masks due to inherently coarse mask quality, leading to under-segmentation. In contrast, moderately elevated thresholds (e.g., 0.85) improve precision for 0.75 m-resolution Jilin-1 data by suppressing sparse background noise. However, in ultra-high-resolution 0.06 m UAV imagery, further increases in threshold induce false positives by amplifying sensitivity to complex background textures such as water surfaces. This results in a nonlinear trend: the optimal threshold increases from 10 m to 0.75 m resolution, then decreases for UAV data to balance fine structural detail and background interference.
Figure 13b demonstrates that the impact of the IoU threshold also varies across resolutions. Lower-resolution data (Sentinel-2) is highly sensitive to changes in this threshold, while higher-resolution data (Jilin-1 and UAV) exhibit relative stability under variation. Figure 13c shows that the stability score threshold must be adapted to resolution: relaxed thresholds (e.g., 0.8) are required for Sentinel-2 to retain marginal features, whereas stricter values (0.9–0.925) enhance robustness in higher-resolution datasets. As seen in Figure 13d, the crop layer downscaling factor affects resolution-specific segmentation differently. A factor of 0.8 proves optimal for both Sentinel-2 and UAV data by balancing detail retention and noise suppression. For Jilin-1, its influence is minimal due to clear contrast between facility and background. These findings emphasize that some parameters, such as the predicted IoU threshold, require resolution-specific tuning, while others (e.g., the crop layer downscaling factor) demonstrate greater robustness. Compared with previous works that often assume parameter settings can be directly transferred across resolutions, our results underscore the necessity of selective parameter adaptation in multi-resolution segmentation tasks.
In conclusion, this study reveals the differential behavior of segmentation parameters under multi-resolution conditions. Unlike prior work that either ignores cross-resolution variability or performs brute-force re-optimization per dataset, our results identify a subset of parameters that can be fixed globally owing to their stability, while highlighting others that require targeted adaptation. The predicted IoU threshold shows a significant nonlinear trend across resolutions, indicating that this parameter must be tuned specifically to the resolution to balance mask integrity and noise suppression. In contrast, the IoU threshold, stability score threshold, and crop layer downscaling factor exhibit stronger cross-resolution robustness and can be held fixed over a wider range of applications. Unlike most studies that investigate the influence of external parameters on SAM segmentation [48,49], this work reveals robustness differences in SAM's intrinsic parameters, which relates directly to the cross-resolution tuning strategies emphasized in current research [50]. The results are consistent with the parameter design experience reported for SAMNet++ [51] and emphasize the importance of the predicted IoU threshold, stability score threshold, crop layers, and crop layer downscaling factor for segmentation quality, while also compensating for the insensitivity of Mask IoU [52] to boundary errors on large targets.

5.2. Analysis of Data Source Selection for Different Task Requirements

Sentinel-2, Jilin-1, and UAV datasets each exhibit distinct advantages and limitations in remote sensing segmentation and classification tasks for aquaculture facility monitoring, as shown in Table 8. A comparative evaluation of their performance provides critical insights for optimal data source selection based on specific application needs.
The Sentinel-2 satellite offers significant advantages for large-scale and long-term aquaculture monitoring [53,54] due to its extensive coverage and short revisit cycle. Its multi-temporal imagery provides reliable data support for tracking dynamic changes in aquaculture facilities, making it particularly suitable for persistent monitoring of large water bodies and coastal regions. As an open-access dataset, Sentinel-2 features low acquisition costs, mature processing workflows, and high computational efficiency, making it an ideal choice for large-area, multi-regional surveillance. However, owing to its limited spatial resolution, Sentinel-2 performed unsatisfactorily in detail recognition and fine classification tasks. In distinguishing different types of aquaculture facilities, its identification capability was significantly lower than that of the Jilin-1 satellite and UAV data, resulting in many misclassifications. Thus, this dataset is more appropriate for low-precision, macro-scale monitoring [55], exhibiting clear limitations in fine-grained aquaculture facility identification.
The Jilin-1 03D satellite provides substantially higher spatial resolution [56,57] but smaller single-scene coverage compared to Sentinel-2. While its satellite constellation enables daily revisits for specific regions, data acquisition incurs moderate commercial costs that scale with coverage area. The relatively high spatial resolution of Jilin-1 data makes it possible to effectively distinguish aquaculture facilities from the surrounding environment and to present ground details more accurately, positioning it as optimal for medium-scale, precision-demanding monitoring. However, like other optical satellites, Jilin-1 imagery is affected by atmospheric conditions such as cloud cover and haze, which can obscure surface features and reduce image quality. Moreover, despite its finer spatial resolution, Jilin-1 does not reach the ultra-high level achievable by UAV imagery, which may limit its ability to capture very small structures or subtle differences among facility types. Additionally, the commercial nature of Jilin-1 data means that acquisition and processing costs increase with monitoring scale, potentially constraining its use for large-area, long-term studies.
UAV-derived imagery, with its ultra-high resolution, excels in fine classification tasks, particularly in small-scale, high-accuracy applications, successfully differentiating aquaculture structures from adjacent features [58,59]. It can effectively distinguish aquaculture facilities (such as fishing nets and abalone cages) from surrounding objects (such as photovoltaic panels and houses). UAV platforms also offer flexible data collection, as flight parameters can be adjusted to research needs. However, UAV data acquisition and processing entail notable limitations. First, the collection and processing workflow is relatively complex: aerial surveys are highly weather-dependent, cover limited areas per mission, and require substantial labor. Second, the large data volume demands significant computational resources, resulting in comparatively lower processing efficiency. Consequently, UAV data are better suited to small-scale, high-precision monitoring; for large-area or long-term time-series tasks, they require disproportionately greater investments in manpower, time, and computing infrastructure.
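As an illustration of the chunked processing that large UAV mosaics typically require, the following sketch reads a scene in overlapping windows with rasterio and segments each tile independently. The tile size, overlap, band order, and file name are assumptions for illustration, not the settings used in this study.

```python
import numpy as np
import rasterio
from rasterio.windows import Window

TILE = 1024     # tile edge in pixels (assumed)
OVERLAP = 128   # seam overlap so facilities cut by a tile edge reappear whole (assumed)

def segment_tiles(path, mask_generator):
    """Segment a large mosaic tile by tile; returns a list of (window, masks) pairs."""
    results = []
    with rasterio.open(path) as src:
        step = TILE - OVERLAP
        for row in range(0, src.height, step):
            for col in range(0, src.width, step):
                win = Window(col, row,
                             min(TILE, src.width - col),
                             min(TILE, src.height - row))
                tile = src.read([1, 2, 3], window=win)   # first three bands, assumed 8-bit
                tile = np.moveaxis(tile, 0, -1)          # (bands, H, W) -> (H, W, bands) for SAM
                results.append((win, mask_generator.generate(tile)))
    return results

# e.g. tiles = segment_tiles("uav_mosaic.tif", mask_generator)
# Tile masks can then be shifted back to scene coordinates via each window's offsets.
```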
Sentinel-2 satellite imagery offers significant advantages in large-scale monitoring and long-term time-series analysis; however, its limited spatial resolution constrains its ability to support fine-grained classification tasks. High-resolution satellite imagery performs well in medium- to small-scale, high-precision monitoring, effectively identifying complex aquaculture structures and offering a good balance in segmentation tasks, with relatively streamlined processing. However, its higher acquisition cost poses a constraint. UAV data excels in detailed classification of localized areas but involves a more complex acquisition process and higher data processing demands. Therefore, selecting appropriate data sources should be based on a comprehensive assessment of monitoring scope, accuracy requirements, budget constraints, and computational resources to achieve optimal monitoring outcomes.
In this context, our study proposes an innovative and adjustable methodology that differs from prior research by integrating multi-source remote sensing data (Sentinel-2, high-resolution satellite imagery, and UAV data) within a unified processing framework. Unlike previous studies that typically focus on a single data source or lack flexibility in application, our approach introduces a scalable workflow capable of accommodating different spatial resolutions and monitoring requirements. This enables both fine-scale, high-precision mapping and broad-scale, long-term monitoring, providing a comprehensive solution for offshore aquaculture facility management [18,28].
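A minimal sketch of the per-source dispatch implied by this workflow is shown below: the resolution-robust parameters stay fixed, while the resolution-sensitive ones are swapped per data source, with values taken from Table 2 (the dictionary and function names are illustrative).

```python
from segment_anything import SamAutomaticMaskGenerator

# Per-source presets from Table 2; keys and structure are illustrative.
PRESETS = {
    "sentinel2": {"pred_iou_thresh": 0.800, "stability_score_thresh": 0.900, "min_mask_region_area": 100},
    "jilin1":    {"pred_iou_thresh": 0.850, "stability_score_thresh": 0.900, "min_mask_region_area": 100},
    "uav":       {"pred_iou_thresh": 0.825, "stability_score_thresh": 0.925, "min_mask_region_area": 500},
}

def make_generator(sam, source):
    """Build a mask generator: robust parameters fixed, sensitive ones chosen per source."""
    return SamAutomaticMaskGenerator(
        model=sam,
        points_per_side=64,   # stable across all three data sources
        crop_n_layers=1,      # stable across all three data sources
        **PRESETS[source],
    )

# e.g. masks = make_generator(sam, "jilin1").generate(image)
```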
From a practical application perspective, this methodology offers a more efficient and expandable approach to current monitoring challenges, bridging the gap between detailed local surveys and regional-scale surveillance. It demonstrates how high-resolution remote sensing imagery can be systematically leveraged to improve the accuracy, adaptability, and operational feasibility of offshore aquaculture facility monitoring.

6. Conclusions

This study proposes a novel and scalable framework for extracting Larimichthys crocea aquaculture information by integrating the SAM with multi-source high-resolution remote sensing imagery. Compared with traditional manual delineation and rule-based approaches, SAM achieves both higher segmentation accuracy and a greater degree of automation, substantially reducing labor costs and subjective bias. For instance, UAV imagery processed with SAM achieved a markedly higher IoU and finer facility-type discrimination than conventional classification workflows, underscoring its potential for precise and efficient aquaculture mapping.
Beyond validating SAM’s robustness across Sentinel-2, Jilin-1 03D, and UAV multispectral data, this study highlights its adaptability through parameter optimization and band selection strategies. These capabilities allow SAM to generalize effectively to diverse sensors and environmental conditions, laying the groundwork for intelligent coastal aquaculture monitoring systems.
Looking forward, future enhancements could be realized by incorporating Synthetic Aperture Radar (SAR) and LiDAR data to address critical challenges in aquaculture monitoring. SAR’s all-weather imaging capabilities would enable reliable detection under cloud cover, heavy rainfall, or during night operations, while LiDAR’s three-dimensional structural information could help distinguish vertically complex aquaculture facilities, such as multi-layer cage systems. Furthermore, integrating time-series remote sensing data would allow dynamic tracking of aquaculture facility evolution, supporting early detection of structural damage, illegal expansion, or ecological risks. Such advancements would significantly expand the framework’s applicability, providing a powerful tool for sustainable fisheries management, regulatory oversight, and environmental impact assessment.

Author Contributions

Conceptualization, methodology, and formal analysis by K.N.; writing—review and editing and visualization by X.X. and F.W.; project administration and funding acquisition by W.F., S.Y., Y.L. and F.W. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the Central Public-Interest Scientific Institution Basal Research Fund, CAFS (Nos. 2022ZD0401 and 2024TD04).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Dataset available on request from the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SAM: Segment Anything Model
UAV: Unmanned Aerial Vehicle
ROI: Region of Interest
NIR: Near-Infrared
SCL: Scene Classification Layer
TOA: Top-of-Atmosphere
PCA: Principal Component Analysis
SAFE: Standard Archive Format for Europe
ENVI: Environment for Visualizing Images
PMS: Panchromatic and Multispectral Sensor
FLAASH: Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes
DN: Digital Number
GCP: Ground Control Point
DEM: Digital Elevation Model
CNN: Convolutional Neural Network
IoU: Intersection over Union
SAR: Synthetic Aperture Radar
NDVI: Normalized Difference Vegetation Index
RF: Random Forest

References

1. Xu, S.; Ge, M.; Feng, J.; Wei, X.; Tan, H.; Liang, Z.; Tong, G. Epidemiological Investigation on Diseases of Larimichthys crocea in Ningbo Culture Area. Front. Cell. Infect. Microbiol. 2024, 14, 1420995.
2. Jia, S.; Wang, X.; Lu, M.; Lan, T.; Gao, Z.; Ding, J.; Meng, R.; Wang, X.; Wu, X.; Huang, L.; et al. Flavor Profiling and Optimization of Large Yellow Croaker (Larimichthys crocea) in Recirculating Aquaculture Systems: Effects of Exercise Intensity Assessed by Advanced Analytical Techniques. Food Chem. X 2025, 29, 102767.
3. Zhang, H.Y.; Wang, J.C.; Jing, Y. Larimichthys crocea (Large Yellow Croaker): A Bibliometric Study. Heliyon 2024, 10, e37393.
4. Zhang, X.; Tang, Y.; Hu, X.; Liu, C.; Liu, Y.; Zhuang, X.; Xu, G.; Liu, J. The Effects of Water Flow on the Swimming Behavior of the Large Yellow Croaker (Larimichthys crocea) in a Large Sea Cage. Fishes 2025, 10, 250.
5. Deng, H.; Guo, Q.; Wei, B.; Zhong, J.; Zheng, M.; Zheng, Y.; Lin, N.; Zheng, S. Comparative Analysis of Different Body Composition, Mucus Biochemical Indices, and Body Color in Five Strains of Larimichthys crocea. Fishes 2025, 10, 305.
6. Wang, Z.; Jin, R.; Xu, P.; Wang, B.; Gul, S.; Shang, Y.; Hu, M.; Yang, Q.; Huang, W.; Wang, Y. Physiological and Behavioral Effects of Underwater Noise on the Large Yellow Croaker (Larimichthys crocea) and the Blackhead Seabream (Acanthopagrus schlegelii). Aquaculture 2025, 602, 742318.
7. Wu, L.; Wu, L.; Lin, H.; Liu, M.; Ding, S. Continuous Genetic Assessment of the Impact of Hatchery Releases on Larimichthys crocea Stocks in China. Glob. Ecol. Conserv. 2025, 58, e03466.
8. Cao, L.; Wang, W.M.; Yang, Y.; Yang, C.T.; Yuan, Z.H.; Xiong, S.B.; Diana, J. Environmental Impact of Aquaculture and Countermeasures to Aquaculture Pollution in China. Environ. Sci. Pollut. Res. Int. 2007, 14, 452–462.
9. Price, A.R.G. The Marine Food Chain in Relation to Biodiversity. Sci. World J. 2001, 1, 579–587.
10. Penry-Williams, I.L.; Kalantzi, I.; Tzempelikou, E.; Tsapakis, M. Intensive Marine Finfish Aquaculture Impacts Community Structure and Metal Bioaccumulation in Meso-Zooplankton. Mar. Pollut. Bull. 2022, 182, 114015.
11. Wang, W.; Mao, W.; Zhu, J.; Wu, R.; Yang, Z. Research on Efficiency of Marine Green Aquaculture in China: Regional Disparity, Driving Factors, and Dynamic Evolution. Fishes 2023, 9, 11.
12. Ji, J.; Zhao, N.; Zhou, J.; Wang, C.; Zhang, X. Spatiotemporal Variations and Convergence Characteristics of Green Technological Progress in China’s Mariculture. Fishes 2023, 8, 338.
13. Gentry, R.R.; Froehlich, H.E.; Grimm, D.; Kareiva, P.; Parke, M.; Rust, M.; Gaines, S.R.; Halpern, B.S. Mapping the Global Potential for Marine Aquaculture. Nat. Ecol. Evol. 2017, 1, 1317–1324.
14. Chen, X.; Li, D.; Mo, D.; Cui, Z.; Li, X.; Lian, H.; Gong, M. Three-Dimensional Printed Biomimetic Robotic Fish for Dynamic Monitoring of Water Quality in Aquaculture. Micromachines 2023, 14, 1578.
15. Hu, Z.; Li, R.; Xia, X.; Yu, C.; Fan, X.; Zhao, Y. A Method Overview in Smart Aquaculture. Environ. Monit. Assess. 2020, 192, 493.
16. Callac, N.; Giraud, C.; Boulo, V.; Wabete, N.; Pham, D. Microbial Biomarker Detection in Shrimp Larvae Rearing Water as Putative Bio-Surveillance Proxies in Shrimp Aquaculture. PeerJ 2023, 11, e15201.
17. Wang, Z.; Liu, K. Dynamic Evolution of Aquaculture along the Bohai Sea Coastline and Implications for Eco-Coastal Vegetation Restoration Based on Remote Sensing. Plants 2024, 13, 160.
18. Chen, A.; Lv, Z.H.; Zhang, J.B.; Yu, G.Y.; Wan, R. Review of the Accuracy of Satellite Remote Sensing Techniques in Identifying Coastal Aquaculture Facilities. Fishes 2024, 9, 52.
19. Magdin, M.; Balogh, Z. Comparison Classification Algorithms and the YOLO Method for Video Analysis and Object Detection. Sci. Rep. 2025, 15, 25432.
20. Zunair, H.; Ben Hamza, A. Sharp U-Net: Depthwise Convolutional Network for Biomedical Image Segmentation. Comput. Biol. Med. 2021, 136, 104699.
21. Sorek-Hamer, M.; Von Pohle, M.; Sahasrabhojanee, A.; Akbari Asanjan, A.; Deardorff, E.; Suel, E.; Lingenfelter, V.; Das, K.; Oza, N.C.; Ezzati, M.; et al. A Deep Learning Approach for Meter-Scale Air Quality Estimation in Urban Environments Using Very High-Spatial-Resolution Satellite Imagery. Atmosphere 2022, 13, 696.
22. Derry, A.; Krzywinski, M.; Altman, N. Convolutional Neural Networks. Nat. Methods 2023, 20, 1269–1270.
23. Kriegeskorte, N.; Golan, T. Neural Network Models and Deep Learning. Curr. Biol. 2019, 29, R231–R236.
24. Ma, J.; He, Y.; Li, F.; Han, L.; You, C.; Wang, B. Segment Anything in Medical Images. Nat. Commun. 2024, 15, 654.
25. Zhang, Y.; Shen, Z.; Jiao, R. Segment Anything Model for Medical Image Segmentation: Current Applications and Future Directions. Comput. Biol. Med. 2024, 171, 108238.
26. Chen, X.; Liu, M.; Wang, R.; Hu, R.; Liu, D.; Li, G.; Wang, Y.; Zhang, H. Spatially Covariant Image Registration with Text Prompts. IEEE Trans. Neural Netw. Learn. Syst. 2025, 36, 12925–12936.
27. Osco, L.P.; Wu, Q.; De Lemos, E.L.; Gonçalves, W.N.; Ramos, A.P.M.; Li, J.; Marcato, J. The Segment Anything Model (SAM) for Remote Sensing Applications: From Zero to One Shot. Int. J. Appl. Earth Obs. Geoinf. 2023, 124, 103540.
28. Ren, Y.; Yang, X.; Wang, Z.; Yu, G.; Liu, Y.; Liu, X.; Meng, D.; Zhang, Q.; Yu, G. Segment Anything Model (SAM) Assisted Remote Sensing Supervision for Mariculture—Using Liaoning Province, China as an Example. Remote Sens. 2023, 15, 5781.
29. Huang, Z.; Jing, H.; Liu, Y.; Yang, X.; Wang, Z.; Liu, X.; Gao, K.; Luo, H. Segment Anything Model Combined with Multi-Scale Segmentation for Extracting Complex Cultivated Land Parcels in High-Resolution Remote Sensing Images. Remote Sens. 2024, 16, 3489.
30. Gui, B.; Bhardwaj, A.; Sam, L. Evaluating the Efficacy of Segment Anything Model for Delineating Agriculture and Urban Green Spaces in Multiresolution Aerial and Spaceborne Remote Sensing Images. Remote Sens. 2024, 16, 414.
31. Yang, H.; Jiang, Z.; Zhang, Y.; Wu, Y.; Luo, H.; Zhang, P.; Wang, B. A High-Resolution Remote Sensing Land Use/Land Cover Classification Method Based on Multi-Level Features Adaptation of Segment Anything Model. Int. J. Appl. Earth Obs. Geoinf. 2025, 141, 104659.
32. Wang, C.; Wang, L.; Li, N. Wave-Net: A Marine Raft Aquaculture Area Extraction Framework Based on Feature Aggregation and Feature Dispersion for Synthetic Aperture Radar Images. Sensors 2025, 25, 2207.
33. Du, S.Q.; Huang, H.S.; He, F.; Luo, H.; Yin, Y.M.; Li, X.M.; Xie, L.F.; Guo, R.Z.; Tang, S.J. Unsupervised Stepwise Extraction of Offshore Aquaculture Ponds Using Super-Resolution Hyperspectral Images. Int. J. Appl. Earth Obs. Geoinf. 2023, 119, 103326.
34. Gladju, J.; Kamalam, B.S.; Kanagaraj, A. Applications of Data Mining and Machine Learning Framework in Aquaculture and Fisheries: A Review. Smart Agric. Technol. 2022, 2, 100061.
35. Zhai, X.; Wei, H.; Wu, H.; Zhao, Q.; Huang, M. Multi-Target Tracking Algorithm in Aquaculture Monitoring Based on Deep Learning. Ocean Eng. 2023, 289, 116005.
36. Hu, X.; Chen, C.; Yang, Z.; Liu, Z. Reliable, Large-Scale, and Automated Remote Sensing Mapping of Coastal Aquaculture Ponds Based on Sentinel-1/2 and Ensemble Learning Algorithms. Expert Syst. Appl. 2025, 293, 128740.
37. Zhang, J.; Xing, X.; Qi, S.; Tan, L.; Yang, D.; Chen, W.; Yang, J.; Xu, M. Organochlorine Pesticides (OCPs) in Soils of the Coastal Areas along Sanduao Bay and Xinghua Bay, Southeast China. J. Geochem. Explor. 2013, 125, 153–158.
38. Zhou, S.; Kang, R.; Ji, C.; Kaufmann, H. Heavy Metal Distribution, Contamination and Analysis of Sources—Intertidal Zones of Sandu Bay, Ningde, China. Mar. Pollut. Bull. 2018, 135, 1138–1144.
39. Onsanit, S.; Ke, C.; Wang, X.; Wang, K.-J.; Wang, W.-X. Trace Elements in Two Marine Fish Cultured in Fish Cages in Fujian Province, China. Environ. Pollut. 2010, 158, 1334–1342.
40. Liu, H.; Fan, S.; Xu, Q.; Wang, X.; Zhang, Y.; Chen, W.; Hu, Y.; Deng, X.; Liu, H.; Yang, C.; et al. Germplasm Innovation of Large Yellow Croaker and Its Research Progress. Reprod. Breed. 2025, 5, 44–53.
41. Watanabe, F.; Alcântara, E.; Rodrigues, T.; Rotta, L.; Bernardo, N.; Imai, N. Remote Sensing of the Chlorophyll-a Based on OLI/Landsat-8 and MSI/Sentinel-2A (Barra Bonita Reservoir, Brazil). An. Acad. Brasil. Ciênc. 2018, 90, 1987–2000.
42. Xiao, A.; Wang, Z.; Wang, L.; Ren, Y. Super-Resolution for “Jilin-1” Satellite Video Imagery via a Convolutional Network. Sensors 2018, 18, 1194.
43. Ye, F.; Zhou, Z.; Wu, Y.; Enkhtur, B. Application of Convolutional Neural Network in Fusion and Classification of Multi-Source Remote Sensing Data. Front. Neurorob. 2022, 16, 1095717.
44. Huang, Q.; Jiang, F.; Huang, C. Remote Sensing Scene Classification with Relation-Aware Dynamic Graph Neural Networks. Eng. Appl. Artif. Intell. 2025, 150, 110513.
45. Wu, S.; Nie, K.; Lu, X.; Fan, W.; Zhang, S.; Wang, F. A Solar Trajectory Model for Multi-Spectral Image Correction of DOM from Long-Endurance UAV in Clear Sky. Drones 2025, 9, 196.
46. Sun, H.; Wei, Z.; Yu, W.; Yang, G.; She, J.; Zheng, H.; Jiang, C.; Yao, X.; Zhu, Y.; Cao, W.; et al. SIDEST: A Sample-Free Framework for Crop Field Boundary Delineation by Integrating Super-Resolution Image Reconstruction and Dual Edge-Corrected Segment Anything Model. Comput. Electron. Agric. 2025, 230, 109897.
47. Abdullah, M.T.; Rahman, S.; Rahman, S.; Islam, M.F. VAE-GAN3D: Leveraging Image-Based Semantics for 3D Zero-Shot Recognition. Image Vis. Comput. 2024, 147, 105049.
48. Forman, K.; Vara, E.; García, C.; Ariznavarreta, C.; Escames, G.; Tresguerres, J.A.F. Cardiological Aging in SAM Model: Effect of Chronic Treatment with Growth Hormone. Biogerontology 2010, 11, 275–286.
49. Benčević, M.; Qiu, Y.; Galić, I.; Pižurica, A. Segment-Then-Segment: Context-Preserving Crop-Based Segmentation for Large Biomedical Images. Sensors 2023, 23, 633.
50. Shahraki, M.; Elamin, A.; El-Rabbany, A. SAMNet++: A Segment Anything Model for Supervised 3D Point Cloud Semantic Segmentation. Remote Sens. 2025, 17, 1256.
51. Tetteh, G.O.; Schwieder, M.; Erasmi, S.; Conrad, C.; Gocht, A. Comparison of an Optimised Multiresolution Segmentation Approach with Deep Neural Networks for Delineating Agricultural Fields from Sentinel-2 Images. PFG–J. Photogramm. Remote Sens. Geoinf. Sci. 2023, 91, 295–312.
52. Cheng, B.; Girshick, R.; Dollar, P.; Berg, A.C.; Kirillov, A. Boundary IoU: Improving Object-Centric Image Segmentation Evaluation. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 15329–15337.
53. Drusch, M.; Del Bello, U.; Carlier, S.; Colin, O.; Fernandez, V.; Gascon, F.; Hoersch, B.; Isola, C.; Laberinti, P.; Martimort, P.; et al. Sentinel-2: ESA’s Optical High-Resolution Mission for GMES Operational Services. Remote Sens. Environ. 2012, 120, 25–36.
54. Peng, Y.; Sengupta, D.; Duan, Y.; Chen, C.; Tian, B. Accurate Mapping of Chinese Coastal Aquaculture Ponds Using Biophysical Parameters Based on Sentinel-2 Time Series Images. Mar. Pollut. Bull. 2022, 181, 113901.
55. Ottinger, M.; Bachofer, F.; Huth, J.; Kuenzer, C. Mapping Aquaculture Ponds for the Coastal Zone of Asia with Sentinel-1 and Sentinel-2 Time Series. Remote Sens. 2021, 14, 153.
56. Guan, Z.; Zhang, G.; Jiang, Y.; Shen, X. Low-Frequency Attitude Error Compensation for the Jilin-1 Satellite Based on Star Observation. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1000617.
57. Zhang, Z.; Guo, B.; Nan, Y.; Wu, X.; Du, H.; Li, F.; Zhai, J. Preliminary Sea Ice Detection Results from GNSS-R Payload on Board Chinese Jilin-1 Wideband-01B (J1-01B) Satellite. IEEE Geosci. Remote Sens. Lett. 2024, 21, 4005405.
58. Torresan, C.; Berton, A.; Carotenuto, F.; Di Gennaro, S.F.; Gioli, B.; Matese, A.; Miglietta, F.; Vagnoli, C.; Zaldei, A.; Wallace, L. Forestry Applications of UAVs in Europe: A Review. Int. J. Remote Sens. 2017, 38, 2427–2447.
59. Ubina, N.A.; Cheng, S.-C. A Review of Unmanned System Technologies with Its Application to Aquaculture Farm Monitoring and Management. Drones 2022, 6, 12.
Figure 1. Map of the study water area in Sandu Bay, Fujian Province.
Figure 2. UAV Image Acquisition Process.
Figure 3. Working Principle of the SAM.
Figure 4. Extraction workflow diagram for Larimichthys crocea mariculture in Sandu Bay.
Figure 5. Flowchart of the Random Forest Algorithm.
Figure 6. IoU performance across different hyperparameter settings for three types of remote sensing imagery: Jilin-1, Drone, and Sentinel-2. (a) Effect of sampling number on segmentation performance; (b) Effect of point cloud number; (c) Effect of predicted IoU threshold; (d) Effect of IoU threshold; (e) Effect of stability parameter threshold; (f) Effect of reduction factor for trimming number.
Figure 7. Effect of resolution on stability and final IoU in the Jilin-1 and UAV data. (a) Jilin-1 Data, (b) UAV Data.
Figure 8. Comparison of SAM Segmentation and Manual Annotation. (a) Sentinel-2 Data, (b) Jilin-1 Data, (c) UAV Data.
Figure 9. Comparison of Segmentation Results for Different Band Combinations.
Figure 10. Line graph of segmentation accuracy results for different band combinations.
Figure 11. Classification Results of Different Data. (a) Examples of a1–a3 with Sentinel-2 Data, (b) Examples of b1–b3 with Jilin-1 Data, (c) UAV Data.
Figure 12. The correlation coefficient between parameters and IoU of different data. (a) Sentinel-2 Data, (b) Jilin-1 Data, (c) UAV Data.
Figure 13. Impact of important parameters on IoU. (a) Predicted IoU threshold, (b) IoU threshold, (c) Stability score threshold, (d) Crop layer downscaling factor.
Table 1. Metadata for Sentinel-2 and Jilin-1 Satellite Imagery.

| Satellite Name | Imaging Sensor | Acquisition Date | Spatial Resolution (m) | Cloud Cover (%) |
|---|---|---|---|---|
| Sentinel-2B | MSI | 19 November 2023 | 10 | 0.2883 |
| JL1GF03D05 | PMS | 14 April 2023 | 0.75 | 1 |
| JL1GF03D12 | PMS | 16 April 2023 | 0.75 | 1 |
| JL1GF03D14 | PMS | 23 June 2022 | 0.75 | 3 |
| JL1GF03D27 | PMS | 9 April 2023 | 0.75 | 0 |
| JL1GF03D29 | PMS | 4 August 2022 | 0.75 | 18 |
Table 2. Optimal Segmentation Parameters for Different Data Sources.

| Parameter | Sentinel-2 Data | Jilin-1 Data | UAV Data |
|---|---|---|---|
| sampling number | 64 | 64 | 64 |
| point cloud number | 32 | 32 | 32 |
| predicted IoU threshold | 0.8 | 0.85 | 0.825 |
| IoU threshold | 0.85 | 0.85 | 0.85 |
| stability parameter threshold | 0.9 | 0.9 | 0.925 |
| number of cropping layers | 1 | 1 | 1 |
| reduction factor for trimming number | 1.25 | 1.25 | 0.8 |
| minimum division area | 100 | 100 | 500 |
Table 3. Standard Deviation of Parameter Sensitivity Across Different Data Sources.

| Parameter | Sentinel-2 Data | Jilin-1 Data | UAV Data |
|---|---|---|---|
| predicted IoU threshold | 0.074 | 0.041 | 0.071 |
| IoU threshold | 0.072 | 0.043 | 0.071 |
| stability parameter threshold | 0.098 | 0.027 | 0.048 |
| reduction factor for trimming number | 0.297 | 0.232 | 0.217 |
Table 4. Comparison of Segmentation Accuracy for Different Types of Data Sources.

| Data Source | Segmentation Accuracy (%) |
|---|---|
| Sentinel-2 | 79.93 |
| Jilin-1 | 91.64 |
| UAV | 97.71 |
Table 5. Comparison of Classification Accuracy for Different Data Sources.

| Data Source | Classification Accuracy |
|---|---|
| Sentinel-2 | 0.71 |
| Jilin-1 | 0.918 |
| UAV | 0.94 |
Table 6. Comparison of Average Processing Time and Peak Memory Usage for Different Data Sources.

| Data Source | Area (ha) | Average Processing Time (s) | Peak Memory Usage (MB) |
|---|---|---|---|
| Sentinel-2 | 1 | 2.8 | 520 |
| Sentinel-2 | 100 | 270 | 620 |
| Jilin-1 | 1 | 7.6 | 850 |
| Jilin-1 | 100 | 760 | 1050 |
| UAV | 1 | 18.3 | 1600 |
| UAV | 100 | 1860 | 2300 |
Table 7. Comparison of Segmentation Accuracy for Different Types of Aquaculture Facilities.

| Type | Final IoU (%) |
|---|---|
| Net cages | 92.68 |
| Floating rafts | 91.34 |
| Overall | 91.73 |
Table 8. Comparative Table of Advantages for Each Data Source.

| Data Source | Advantages | Disadvantages | Applicable Fields |
|---|---|---|---|
| Sentinel-2 | Low cost, long duration | Low precision | Macroscopic, large-scale tasks |
| Jilin-1 | High resolution, no need for chunked processing | Affected by complex environments | Small- to medium-scale tasks |
| UAV | Ultra-high resolution | Large data volume, high computing consumption, slow processing speed | Small-scale, high-precision fine classification tasks |