A Method of Simplified Synthetic Objects Creation for Detection of Underwater Objects from Remote Sensing Data Using YOLO Networks
Abstract
1. Introduction
- Development of simplified synthetic objects;
- Assessing the impact of type, quantity, and augmentation of SSOs on detection accuracy;
- Testing and validating the neural network models using real data to ensure their efficiency;
- Analyzing the effectiveness of the CNN model in locating ground control points in the orthoimage;
- Determining the neural network creation process to prepare the model that performs best in detecting the real underwater objects in the shallow water area.
2. Materials and Methods
2.1. Research Methodology
2.2. Input Data
- Orthophotos from the resources of the Polish geoportal [56]: two images showing the same part of Dąbie Lake located in northwestern Poland. The images had a ground resolution of 5 cm and were taken in 2021 and 2022. They show the area of a shallow-water bay in natural colors. These data were the basis for fusion with simplified synthetic objects.
- Set of 129 images from the UAV photogrammetric mission: photos taken in October 2022. These photos show the previously mentioned area of Dąbie Lake in natural colors. There are 152 real GCPs visible in the entire set of photos. These data served as a test set to check the effectiveness of the detection models.
- Orthophoto from a UAV photogrammetric mission: an orthoimage showing the same bay on the lake with a visible arrangement of 5 ground control points (GCPs). This image, taken in October 2022, had a ground resolution of 4.5 cm and a natural-color composition. The CNN-detected GCPs in this orthoimage were used to evaluate positional accuracy by calculating the distances between the actual GCP coordinates and the bounding boxes generated by the neural network, providing a precise measure of how well the CNN models located ground control points in the orthoimage (see the sketch after this list).
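The positional accuracy check mentioned above reduces to measuring the distance between each surveyed GCP position and the center of its detected bounding box. The minimal Python sketch below illustrates the idea; the coordinates, function names, and the use of the bounding-box center as the detected position are illustrative assumptions, not the authors' implementation.

```python
import math

def bbox_center(xmin, ymin, xmax, ymax):
    """Center of a detection bounding box in map units."""
    return ((xmin + xmax) / 2.0, (ymin + ymax) / 2.0)

def gcp_deviation_cm(gcp_xy, bbox):
    """Distance between a surveyed GCP and the bounding-box center, in centimetres.

    gcp_xy : (x, y) surveyed GCP coordinates in the orthoimage CRS [m]
    bbox   : (xmin, ymin, xmax, ymax) detection in the same CRS [m]
    """
    cx, cy = bbox_center(*bbox)
    return math.hypot(gcp_xy[0] - cx, gcp_xy[1] - cy) * 100.0  # metres -> centimetres

# Hypothetical example: a detection whose center lies roughly one pixel (~4.5 cm) from the GCP
print(round(gcp_deviation_cm((425000.000, 590000.000),
                             (424999.987, 589999.987, 425000.077, 590000.077)), 2))
```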
2.3. Simplified Synthetic Object Generation
- GCP Type A: described with the abbreviation Type A or just the letter “A”, has partial transparency;
- GCP Type B: described with the abbreviation Type B or just the letter “B”, has a gradient that reduces transparency and gives a sandy color;
- GCP Type C: described with the abbreviation Type C or just the letter “C”, the object with the highest transparency, especially increased in the black area, and blurred edges. A rendering sketch of these three object types is given after this list.
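As an illustration of how such objects can be produced, the Python sketch below renders a quartered black-and-white marker as an RGBA tile with adjustable transparency, an optional sandy gradient, and optional edge blur. The marker pattern, color values, and parameter settings are assumptions chosen for illustration; they are not the exact objects generated in this study.

```python
from PIL import Image, ImageDraw, ImageFilter

def make_sso(size=64, base_alpha=160, sandy_gradient=False, blur_radius=0.0):
    """Render a simplified synthetic GCP as an RGBA tile.

    size           : tile edge length in pixels
    base_alpha     : global transparency of the marker (0-255)
    sandy_gradient : if True, blend towards a sandy colour with rising opacity (Type-B-like)
    blur_radius    : Gaussian blur of the edges (Type-C-like)
    """
    tile = Image.new("RGBA", (size, size), (0, 0, 0, 0))
    draw = ImageDraw.Draw(tile)
    half = size // 2
    # Assumed quartered GCP pattern: two white and two (more transparent) black quadrants.
    draw.rectangle([0, 0, half, half], fill=(255, 255, 255, base_alpha))
    draw.rectangle([half, 0, size, half], fill=(0, 0, 0, base_alpha // 2))
    draw.rectangle([0, half, half, size], fill=(0, 0, 0, base_alpha // 2))
    draw.rectangle([half, half, size, size], fill=(255, 255, 255, base_alpha))

    if sandy_gradient:
        # Blend row by row towards a sandy tone, raising opacity with depth.
        px = tile.load()
        for y in range(size):
            t = y / (size - 1)
            for x in range(size):
                r, g, b, a = px[x, y]
                if a == 0:
                    continue
                px[x, y] = (int(r * (1 - t) + 194 * t),
                            int(g * (1 - t) + 178 * t),
                            int(b * (1 - t) + 128 * t),
                            min(255, int(a * (1 + t))))

    if blur_radius > 0:
        tile = tile.filter(ImageFilter.GaussianBlur(blur_radius))
    return tile

# Hypothetical parameterisations loosely mirroring Types A, B, and C
type_a = make_sso(base_alpha=160)
type_b = make_sso(base_alpha=160, sandy_gradient=True)
type_c = make_sso(base_alpha=110, blur_radius=1.5)
```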
2.4. Creating Image Data with SSOs
Composite Image Generation Principles
- Non-overlapping objects: ground control points must not overlap each other. This rule ensures that the simplified synthetic GCPs mimic the arrangement observed in the UAV imagery, where each GCP is distinctly positioned without overlap.
- Random rotation of objects: each GCP was randomly rotated to simulate real-world variability. This rotation adds diversity to the simplified synthetic GCPs, reflecting the natural variability in the orientation of objects in the environment.
- Objects distributed in water areas: the simplified synthetic ground control points were placed specifically in the water areas of the lake. This condition replicates the detection of underwater objects, ensuring that the synthetic images accurately represent the environment where the GCPs would be detected.
- Inclusion of transparency in synthetic ground control points: simplified synthetic objects were created with transparency, represented by an alpha channel. This transparency allows the underlying orthoimage to influence the appearance of the simplified synthetic GCP, resulting in variations in coloration depending on the underlying surface. These four placement rules are illustrated by the compositing sketch after this list.
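A minimal compositing sketch following the four rules is given below. It assumes a precomputed binary water mask and a single SSO tile; the function name, file handling, and YOLO-format label writing are illustrative, not the authors' pipeline.

```python
import random
from PIL import Image

def composite_ssos(ortho_path, sso_tile, water_mask, n_objects, out_image, out_labels):
    """Paste randomly rotated, non-overlapping SSO tiles into water areas of an
    orthophoto and write YOLO-format labels (class x_center y_center w h, normalised).

    water_mask is assumed to be a binary PIL image ('L') of the same size,
    where 255 marks water pixels.
    """
    ortho = Image.open(ortho_path).convert("RGBA")
    W, H = ortho.size
    placed, labels = [], []          # placed boxes as (x, y, w, h) in pixels

    def overlaps(box):
        x, y, w, h = box
        return any(x < px + pw and px < x + w and y < py + ph and py < y + h
                   for px, py, pw, ph in placed)

    attempts = 0
    while len(placed) < n_objects and attempts < n_objects * 100:
        attempts += 1
        tile = sso_tile.rotate(random.uniform(0, 360), expand=True)   # random rotation
        w, h = tile.size
        x, y = random.randint(0, W - w), random.randint(0, H - h)
        # Keep the object inside the water area and away from previously placed SSOs.
        if water_mask.getpixel((x + w // 2, y + h // 2)) == 0 or overlaps((x, y, w, h)):
            continue
        ortho.alpha_composite(tile, dest=(x, y))                      # alpha-aware paste
        placed.append((x, y, w, h))
        labels.append(f"0 {(x + w / 2) / W:.6f} {(y + h / 2) / H:.6f} {w / W:.6f} {h / H:.6f}")

    ortho.convert("RGB").save(out_image)
    with open(out_labels, "w") as f:
        f.write("\n".join(labels))
```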
2.5. Preparation of Training Sequences
2.6. Network Training
2.6.1. Description of Network Hyperparameters
2.6.2. Research Series Based on Data Augmentation
- Series 1: a set of 12 models for which no additional augmentation parameters were set. This series is marked with the letter “N”.
- Series 2: a set of 12 models in which some of the available augmentation parameters were used: brightness, contrast, and zoom, all with default software settings. This series is marked with the letter “P”.
- Series 3: a set of 12 models in which all available augmentation parameters were used: brightness, contrast, zoom, rotate, and crop, all with default software settings. This series is marked with the letter “F”. An illustrative expression of the three series as augmentation pipelines is given after this list.
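For readers who want to reproduce the three series outside the original training software, the sketch below expresses them as augmentation pipelines using the albumentations library. This is an assumption made for illustration: the study relied on the training software's built-in augmentation, and the probabilities, limits, and crop size shown here are placeholder values, not the settings used in the experiments.

```python
import albumentations as A

# Series "N": no additional augmentation
series_n = A.Compose([])

# Series "P": brightness, contrast, and zoom only
series_p = A.Compose([
    A.RandomBrightnessContrast(p=0.5),
    A.RandomScale(scale_limit=0.1, p=0.5),
])

# Series "F": brightness, contrast, zoom, rotate, and crop
series_f = A.Compose([
    A.RandomBrightnessContrast(p=0.5),
    A.RandomScale(scale_limit=0.1, p=0.5),
    A.Rotate(limit=45, p=0.5),
    A.RandomCrop(height=384, width=384, p=0.5),
])
# For detection training, each Compose would additionally receive
# bbox_params=A.BboxParams(format="yolo") so that the GCP labels follow the images.
```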
2.7. Object Detection
3. Results
3.1. Validation and Testing
- Training loss: the average entropy loss computed across the training dataset;
- Validation loss: the average entropy loss calculated on the validation dataset using the model trained at each epoch;
- Average precision: the proportion of points in the validation data that were correctly classified by the model trained in a given epoch (true positives) relative to all points in the validation data.
3.2. Verification on Real Data
3.2.1. Performance Indicators of the Developed Models
1. Intersection over union (IOU)—a fundamental metric in object detection, quantifying the overlap between the predicted and ground-truth bounding boxes [57];
2. True positive (TP)—a correct detection of a ground-truth bounding box [57];
3. False positive (FP)—an incorrect detection of a nonexistent object or a misplaced detection of an existing object [57];
4. False negative (FN)—an undetected ground-truth bounding box [57];
5. Precision—the percentage of correct positive predictions among all detections [57];
6. Recall—the percentage of correct positive predictions among all given ground truths [57];
7. F1-score—the harmonic mean of precision and recall [38];
8. AP—the average of the precision over all recall values between 0 and 1; the parameter is interpreted as the area under the precision–recall curve. A minimal computation sketch of these metrics follows this list.
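For reference, the sketch below implements these formulas in Python: IoU from corner coordinates, and precision, recall, and F1-score from the TP/FP/FN counts. The matching of detections to ground truths (the IoU threshold) is omitted; the example counts correspond to the YOLOv5 1000C row in Appendix A.

```python
def iou(box_a, box_b):
    """Intersection over union of two (xmin, ymin, xmax, ymax) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def prf1(tp, fp, fn):
    """Precision, recall, and F1-score from detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example using the YOLOv5 1000C counts: TP=144, FP=4, FN=8
print(prf1(144, 4, 8))   # -> approximately (0.97, 0.95, 0.96)
```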
3.2.2. Evaluation of Model Effectiveness
3.3. Analysis of the Accuracy of Determining the Location of Detected Objects
3.3.1. Analysis of GCP Detection on Orthoimage
3.3.2. Analysis of Deviations from Detected Objects
3.4. Influence of the Size of the Training Sequence
3.5. Influence of the Augmentation Method
- Augmentation of the training data positively affects the effectiveness of the obtained detection models.
- Partial augmentation gave better results in this study.
3.6. Influence of the Type of Simplified Synthetic Object
- Type A: F1-score equal to 0.76;
- Type B: F1-score equal to 0.50;
- Type C: F1-score equal to 0.82.
- Type A: 1270;
- Type B: 759;
- Type C: 1478.
3.7. Test YOLO Models Outside the ArcGIS Environment
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Appendix A
Short Name of the Model | TP | FP | FN | Precision | Recall | F1-Score |
---|---|---|---|---|---|---|
YOLOv5 500A | 33 | 3 | 119 | 0.92 | 0.22 | 0.35 |
YOLOv5 1000A | 20 | 0 | 132 | 1.00 | 0.13 | 0.23 |
YOLOv5 2000A | 24 | 3 | 128 | 0.89 | 0.16 | 0.27 |
YOLOv5 4000A | 8 | 0 | 144 | 1.00 | 0.05 | 0.10 |
YOLOv5 500B | 8 | 1 | 144 | 0.89 | 0.05 | 0.10 |
YOLOv5 1000B | 17 | 2 | 135 | 0.89 | 0.11 | 0.20 |
YOLOv5 2000B | 30 | 0 | 122 | 1.00 | 0.20 | 0.33 |
YOLOv5 4000B | 47 | 3 | 105 | 0.94 | 0.31 | 0.47 |
YOLOv5 500C | 142 | 2 | 10 | 0.99 | 0.93 | 0.96 |
YOLOv5 1000C | 144 | 4 | 8 | 0.97 | 0.95 | 0.96 |
YOLOv5 2000C | 138 | 2 | 14 | 0.99 | 0.91 | 0.95 |
YOLOv5 4000C | 142 | 6 | 10 | 0.96 | 0.93 | 0.95 |
YOLOv6 500A | 148 | 20 | 4 | 0.88 | 0.97 | 0.93 |
YOLOv6 1000A | 148 | 48 | 4 | 0.76 | 0.97 | 0.85 |
YOLOv6 2000A | 148 | 25 | 4 | 0.86 | 0.97 | 0.91 |
YOLOv6 4000A | 149 | 26 | 3 | 0.85 | 0.98 | 0.91 |
YOLOv6 500B | 151 | 423 | 1 | 0.26 | 0.99 | 0.42 |
YOLOv6 1000B | 151 | 3456 | 1 | 0.04 | 0.99 | 0.08 |
YOLOv6 2000B | 150 | 1562 | 2 | 0.09 | 0.99 | 0.16 |
YOLOv6 4000B | 151 | 1620 | 1 | 0.09 | 0.99 | 0.16 |
YOLOv6 500C | 151 | 781 | 1 | 0.16 | 0.99 | 0.28 |
YOLOv6 1000C | 149 | 2168 | 3 | 0.06 | 0.98 | 0.12 |
YOLOv6 2000C | 149 | 323 | 3 | 0.32 | 0.98 | 0.48 |
YOLOv6 4000C | 149 | 705 | 3 | 0.17 | 0.98 | 0.30 |
YOLOv8 500A | 7 | 0 | 145 | 1.00 | 0.05 | 0.09 |
YOLOv8 1000A | 19 | 0 | 133 | 1.00 | 0.13 | 0.22 |
YOLOv8 2000A | 8 | 1 | 144 | 0.89 | 0.05 | 0.10 |
YOLOv8 4000A | 4 | 1 | 148 | 0.80 | 0.03 | 0.05 |
YOLOv8 500B | 8 | 0 | 144 | 1.00 | 0.05 | 0.10 |
YOLOv8 1000B | 33 | 2 | 119 | 0.94 | 0.22 | 0.35 |
YOLOv8 2000B | 16 | 3 | 136 | 0.84 | 0.11 | 0.19 |
YOLOv8 4000B | 20 | 5 | 132 | 0.80 | 0.13 | 0.23 |
YOLOv8 500C | 110 | 6 | 42 | 0.95 | 0.72 | 0.82 |
YOLOv8 1000C | 124 | 3 | 28 | 0.98 | 0.82 | 0.89 |
YOLOv8 2000C | 146 | 5 | 6 | 0.97 | 0.96 | 0.96 |
YOLOv8 4000C | 147 | 6 | 5 | 0.96 | 0.97 | 0.96 |
YOLOv9 500A | 14 | 1 | 138 | 0.93 | 0.09 | 0.17 |
YOLOv9 1000A | 2 | 0 | 150 | 1.00 | 0.01 | 0.03 |
YOLOv9 2000A | 21 | 1 | 131 | 0.95 | 0.14 | 0.24 |
YOLOv9 4000A | 10 | 1 | 142 | 0.91 | 0.07 | 0.12 |
YOLOv9 500B | 30 | 2 | 122 | 0.94 | 0.20 | 0.33 |
YOLOv9 1000B | 48 | 2 | 104 | 0.96 | 0.32 | 0.48 |
YOLOv9 2000B | 18 | 8 | 134 | 0.69 | 0.12 | 0.20 |
YOLOv9 4000B | 30 | 4 | 122 | 0.88 | 0.20 | 0.32 |
YOLOv9 500C | 144 | 10 | 8 | 0.94 | 0.95 | 0.94 |
YOLOv9 1000C | 141 | 11 | 11 | 0.93 | 0.93 | 0.93 |
YOLOv9 2000C | 140 | 3 | 12 | 0.98 | 0.92 | 0.95 |
YOLOv9 4000C | 144 | 7 | 8 | 0.95 | 0.95 | 0.95 |
YOLOv10 500A | 34 | 4 | 118 | 0.89 | 0.22 | 0.36 |
YOLOv10 1000A | 4 | 1 | 148 | 0.80 | 0.03 | 0.05 |
YOLOv10 2000A | 3 | 2 | 149 | 0.60 | 0.02 | 0.04 |
YOLOv10 4000A | 1 | 0 | 151 | 1.00 | 0.01 | 0.01 |
YOLOv10 500B | 14 | 1 | 138 | 0.93 | 0.09 | 0.17 |
YOLOv10 1000B | 11 | 1 | 141 | 0.92 | 0.07 | 0.13 |
YOLOv10 2000B | 15 | 1 | 137 | 0.94 | 0.10 | 0.18 |
YOLOv10 4000B | 42 | 1 | 110 | 0.98 | 0.28 | 0.43 |
YOLOv10 500C | 118 | 8 | 34 | 0.94 | 0.78 | 0.85 |
YOLOv10 1000C | 123 | 3 | 29 | 0.98 | 0.81 | 0.88 |
YOLOv10 2000C | 144 | 5 | 8 | 0.97 | 0.95 | 0.96 |
YOLOv10 4000C | 139 | 9 | 13 | 0.94 | 0.91 | 0.93 |
YOLO11 500A | 2 | 1 | 150 | 0.67 | 0.01 | 0.03 |
YOLO11 1000A | 2 | 0 | 150 | 1.00 | 0.01 | 0.03 |
YOLO11 2000A | 3 | 0 | 149 | 1.00 | 0.02 | 0.04 |
YOLO11 4000A | 6 | 1 | 146 | 0.86 | 0.04 | 0.08 |
YOLO11 500B | 6 | 1 | 146 | 0.86 | 0.04 | 0.08 |
YOLO11 1000B | 6 | 1 | 146 | 0.86 | 0.04 | 0.08 |
YOLO11 2000B | 13 | 5 | 139 | 0.72 | 0.09 | 0.15 |
YOLO11 4000B | 10 | 7 | 142 | 0.59 | 0.07 | 0.12 |
YOLO11 500C | 118 | 2 | 34 | 0.98 | 0.78 | 0.87 |
YOLO11 1000C | 138 | 62 | 14 | 0.69 | 0.91 | 0.78 |
YOLO11 2000C | 119 | 4 | 33 | 0.97 | 0.78 | 0.87 |
YOLO11 4000C | 137 | 9 | 15 | 0.94 | 0.90 | 0.92 |
YOLO12 500A | 19 | 2 | 133 | 0.90 | 0.13 | 0.22 |
YOLO12 1000A | 10 | 2 | 142 | 0.83 | 0.07 | 0.12 |
YOLO12 2000A | 6 | 1 | 146 | 0.86 | 0.04 | 0.08 |
YOLO12 4000A | 8 | 0 | 144 | 1.00 | 0.05 | 0.10 |
YOLO12 500B | 4 | 0 | 148 | 1.00 | 0.03 | 0.05 |
YOLO12 1000B | 0 | 0 | 152 | 0.00 | 0.00 | 0.00 |
YOLO12 2000B | 19 | 1 | 133 | 0.95 | 0.13 | 0.22 |
YOLO12 4000B | 13 | 6 | 139 | 0.68 | 0.09 | 0.15 |
YOLO12 500C | 128 | 3 | 24 | 0.98 | 0.84 | 0.90 |
YOLO12 1000C | 121 | 5 | 31 | 0.96 | 0.80 | 0.87 |
YOLO12 2000C | 124 | 9 | 28 | 0.93 | 0.82 | 0.87 |
YOLO12 4000C | 147 | 4 | 5 | 0.97 | 0.97 | 0.97 |
Short Name of the Model | TP | GCP#1 dmean [cm] | GCP#2 dmean [cm] | GCP#3 dmean [cm] | GCP#4 dmean [cm] | GCP#5 dmean [cm] | Δdmean [cm] |
---|---|---|---|---|---|---|---|
YOLOv5 500A | 0 | – | – | – | – | – | – |
YOLOv5 1000A | 1 | – | 30.27 | – | – | – | 30.27 |
YOLOv5 2000A | 0 | – | – | – | – | – | – |
YOLOv5 4000A | 0 | – | – | – | – | – | – |
YOLOv5 500B | 0 | – | – | – | – | – | – |
YOLOv5 1000B | 0 | – | – | – | – | – | – |
YOLOv5 2000B | 0 | – | – | – | – | – | – |
YOLOv5 4000B | 3 | – | 2.25 | 5.03 | – | – | 3.64 |
YOLOv5 500C | 5 | 3.18 | 2.25 | 5.03 | 8.11 | 3.18 | 4.35 |
YOLOv5 1000C | 5 | 2.25 | 2.25 | 3.18 | 8.11 | 7.12 | 4.58 |
YOLOv5 2000C | 5 | 3.18 | 2.25 | 3.18 | 7.12 | 2.25 | 3.60 |
YOLOv5 4000C | 5 | 3.18 | 2.25 | 3.18 | 8.11 | 2.25 | 3.80 |
YOLOv6 500A | 5 | 0.00 | 0.00 | 4.50 | 2.25 | 3.18 | 1.99 |
YOLOv6 1000A | 5 | 3.18 | 2.25 | 7.12 | 5.03 | 2.25 | 3.97 |
YOLOv6 2000A | 5 | 3.18 | 2.25 | 7.12 | 5.03 | 3.18 | 4.15 |
YOLOv6 4000A | 5 | 0.00 | 2.25 | 6.36 | 6.75 | 6.75 | 4.42 |
YOLOv6 500B | 5 | 2.25 | 2.25 | 2.25 | 2.25 | 2.25 | 2.25 |
YOLOv6 1000B | 5 | 3.18 | 4.50 | 3.18 | 3.18 | 2.25 | 3.26 |
YOLOv6 2000B | 5 | 3.18 | 2.25 | 5.03 | 3.18 | 2.25 | 3.18 |
YOLOv6 4000B | 5 | 3.18 | 5.03 | 4.50 | 2.25 | 3.18 | 3.63 |
YOLOv6 500C | 5 | 3.18 | 6.36 | 2.25 | 3.18 | 3.18 | 3.63 |
YOLOv6 1000C | 5 | 5.03 | 3.18 | 3.18 | 3.18 | 2.25 | 3.37 |
YOLOv6 2000C | 5 | 6.36 | 0.00 | 5.03 | 5.03 | 4.50 | 4.19 |
YOLOv6 4000C | 5 | 4.50 | 2.25 | 3.18 | 5.03 | 4.50 | 3.89 |
YOLOv8 500A | 0 | – | – | – | – | – | – |
YOLOv8 1000A | 0 | – | – | – | – | – | – |
YOLOv8 2000A | 0 | – | – | – | – | – | – |
YOLOv8 4000A | 0 | – | – | – | – | – | – |
YOLOv8 500B | 0 | – | – | – | – | – | – |
YOLOv8 1000B | 0 | – | – | – | – | – | – |
YOLOv8 2000B | 0 | – | – | – | – | – | – |
YOLOv8 4000B | 0 | – | – | – | – | – | – |
YOLOv8 500C | 3 | 3.18 | 2.25 | 4.50 | – | – | 3.31 |
YOLOv8 1000C | 4 | 3.18 | 2.25 | 2.25 | 8.11 | – | 3.95 |
YOLOv8 2000C | 5 | 3.18 | 2.25 | 3.18 | 7.12 | 5.03 | 4.15 |
YOLOv8 4000C | 5 | 2.25 | 2.25 | 2.25 | 5.03 | 3.18 | 2.99 |
YOLOv9 500A | 0 | – | – | – | – | – | – |
YOLOv9 1000A | 0 | – | – | – | – | – | – |
YOLOv9 2000A | 0 | – | – | – | – | – | – |
YOLOv9 4000A | 0 | – | – | – | – | – | – |
YOLOv9 500B | 1 | – | 3.18 | – | – | – | 3.18 |
YOLOv9 1000B | 2 | – | 0.00 | 4.50 | – | – | 2.25 |
YOLOv9 2000B | 0 | – | – | – | – | – | – |
YOLOv9 4000B | 1 | – | 2.25 | – | – | – | 2.25 |
YOLOv9 500C | 4 | 2.25 | 2.25 | 3.18 | 8.11 | – | 3.95 |
YOLOv9 1000C | 5 | 3.18 | 2.25 | 3.18 | 8.11 | 2.25 | 3.80 |
YOLOv9 2000C | 5 | 2.25 | 2.25 | 2.25 | 7.12 | 3.18 | 3.41 |
YOLOv9 4000C | 5 | 2.25 | 2.25 | 3.18 | 7.12 | 2.25 | 3.41 |
YOLOv10 500A | 0 | – | – | – | – | – | – |
YOLOv10 1000A | 0 | – | – | – | – | – | – |
YOLOv10 2000A | 0 | – | – | – | – | – | – |
YOLOv10 4000A | 0 | – | – | – | – | – | – |
YOLOv10 500B | 0 | – | – | – | – | – | – |
YOLOv10 1000B | 0 | – | – | – | – | – | – |
YOLOv10 2000B | 0 | – | – | – | – | – | – |
YOLOv10 4000B | 0 | – | – | – | – | – | – |
YOLOv10 500C | 4 | 2.25 | 2.25 | 3.18 | 7.12 | – | 3.70 |
YOLOv10 1000C | 3 | 2.25 | – | 2.25 | 9.55 | – | 3.51 |
YOLOv10 2000C | 5 | 2.25 | 2.25 | 2.25 | 5.03 | 3.18 | 2.99 |
YOLOv10 4000C | 5 | 3.18 | 2.25 | 2.25 | 8.11 | 3.18 | 3.80 |
YOLO11 500A | 0 | – | – | – | – | – | – |
YOLO11 1000A | 0 | – | – | – | – | – | – |
YOLO11 2000A | 0 | – | – | – | – | – | – |
YOLO11 4000A | 0 | – | – | – | – | – | – |
YOLO11 500B | 0 | – | – | – | – | – | – |
YOLO11 1000B | 0 | – | – | – | – | – | – |
YOLO11 2000B | 0 | – | – | – | – | – | – |
YOLO11 4000B | 0 | – | – | – | – | – | – |
YOLO11 500C | 2 | – | 4.50 | 4.50 | – | – | 4.50 |
YOLO11 1000C | 5 | 2.25 | 2.25 | 5.03 | 6.36 | 3.18 | 3.82 |
YOLO11 2000C | 5 | 3.18 | 2.25 | 3.18 | 5.03 | 2.25 | 3.18 |
YOLO11 4000C | 5 | 3.18 | 2.25 | 3.18 | 7.12 | 3.18 | 3.78 |
YOLO12 500A | 0 | – | – | – | – | – | – |
YOLO12 1000A | 0 | – | – | – | – | – | – |
YOLO12 2000A | 0 | – | – | – | – | – | – |
YOLO12 4000A | 0 | – | – | – | – | – | – |
YOLO12 500B | 0 | – | – | – | – | – | – |
YOLO12 1000B | 0 | – | – | – | – | – | – |
YOLO12 2000B | 0 | – | – | – | – | – | – |
YOLO12 4000B | 0 | – | – | – | – | – | – |
YOLO12 500C | 5 | 3.18 | 4.50 | 3.18 | 7.12 | 9.28 | 5.45 |
YOLO12 1000C | 4 | 2.25 | 2.25 | 3.18 | 6.36 | – | 3.51 |
YOLO12 2000C | 5 | 3.18 | 2.25 | 3.18 | 7.12 | 5.03 | 4.15 |
YOLO12 4000C | 5 | 3.18 | 2.25 | 3.18 | 5.03 | 3.18 | 3.37 |
Appendix B
References
- Li, Z.; Wang, Y.; Zhang, N.; Zhang, Y.; Zhao, Z.; Xu, D.; Ben, G.; Gao, Y. Deep Learning-Based Object Detection Techniques for Remote Sensing Images: A Survey. Remote Sens. 2022, 14, 2385. [Google Scholar] [CrossRef]
- Dunstan, A.; Robertson, K.; Fitzpatrick, R.; Pickford, J.; Meager, J. Use of Unmanned Aerial Vehicles (UAVs) for Mark-Resight Nesting Population Estimation of Adult Female Green Sea Turtles at Raine Island. PLoS ONE 2020, 15, e0228524. [Google Scholar] [CrossRef]
- Feng, J.; Jin, T. CEH-YOLO: A Composite Enhanced YOLO-Based Model for Underwater Object Detection. Ecol. Inform. 2024, 82, 102758. [Google Scholar] [CrossRef]
- Sineglazov, V.; Savchenko, M. Comprehensive Framework for Underwater Object Detection Based on Improved YOLOv8. Electron. Control Syst. 2024, 1, 9–15. [Google Scholar] [CrossRef]
- Martin, J.; Eugenio, F.; Marcello, J.; Medina, A. Automatic Sun Glint Removal of Multispectral High-Resolution Worldview-2 Imagery for Retrieving Coastal Shallow Water Parameters. Remote Sens. 2016, 8, 37. [Google Scholar] [CrossRef]
- Cao, N. Small Object Detection Algorithm for Underwater Organisms Based on Improved Transformer. J. Phys. Conf. Ser. 2023, 2637, 012056. [Google Scholar] [CrossRef]
- Mathias, A.; Dhanalakshmi, S.; Kumar, R.; Narayanamoorthi, R. Deep Neural Network Driven Automated Underwater Object Detection. Comput. Mater. Contin. 2022, 70, 5251–5267. [Google Scholar] [CrossRef]
- Dakhil, R.A.; Khayeat, A.R.H. Review on Deep Learning Techniques for Underwater Object Detection. In Proceedings of the Data Science and Machine Learning, Academy and Industry Research Collaboration Center (AIRCC), Copenhagen, Denmark, 17–18 September 2022; pp. 49–63. [Google Scholar]
- Zheng, M.; Luo, W. Underwater Image Enhancement Using Improved CNN Based Defogging. Electronics 2022, 11, 150. [Google Scholar] [CrossRef]
- Hu, K.; Weng, C.; Zhang, Y.; Jin, J.; Xia, Q. An Overview of Underwater Vision Enhancement: From Traditional Methods to Recent Deep Learning. J. Mar. Sci. Eng. 2022, 10, 241. [Google Scholar] [CrossRef]
- Mogstad, A.A.; Johnsen, G.; Ludvigsen, M. Shallow-Water Habitat Mapping Using Underwater Hyperspectral Imaging from an Unmanned Surface Vehicle: A Pilot Study. Remote Sens. 2019, 11, 685. [Google Scholar] [CrossRef]
- Grządziel, A. Application of Remote Sensing Techniques to Identification of Underwater Airplane Wreck in Shallow Water Environment: Case Study of the Baltic Sea, Poland. Remote Sens. 2022, 14, 5195. [Google Scholar] [CrossRef]
- Goodwin, M.; Halvorsen, K.T.; Jiao, L.; Knausgård, K.M.; Martin, A.H.; Moyano, M.; Oomen, R.A.; Rasmussen, J.H.; Sørdalen, T.K.; Thorbjørnsen, S.H. Unlocking the Potential of Deep Learning for Marine Ecology: Overview, Applications, and Outlook. ICES J. Mar. Sci. 2022, 79, 319–336. [Google Scholar] [CrossRef]
- Liu, Y.; An, B.; Chen, S.; Zhao, D. Multi-target Detection and Tracking of Shallow Marine Organisms Based on Improved YOLO v5 and DeepSORT. IET Image Process. 2024, 18, 2273–2290. [Google Scholar] [CrossRef]
- Flynn, K.; Chapra, S. Remote Sensing of Submerged Aquatic Vegetation in a Shallow Non-Turbid River Using an Unmanned Aerial Vehicle. Remote Sens. 2014, 6, 12815–12836. [Google Scholar] [CrossRef]
- Taddia, Y.; Russo, P.; Lovo, S.; Pellegrinelli, A. Multispectral UAV Monitoring of Submerged Seaweed in Shallow Water. Appl. Geomat. 2020, 12, 19–34. [Google Scholar] [CrossRef]
- Chabot, D.; Dillon, C.; Shemrock, A.; Weissflog, N.; Sager, E.P.S. An Object-Based Image Analysis Workflow for Monitoring Shallow-Water Aquatic Vegetation in Multispectral Drone Imagery. ISPRS Int. J. Geo-Inf. 2018, 7, 294. [Google Scholar] [CrossRef]
- Feldens, P. Super Resolution by Deep Learning Improves Boulder Detection in Side Scan Sonar Backscatter Mosaics. Remote Sens. 2020, 12, 2284. [Google Scholar] [CrossRef]
- Von Rönn, G.; Schwarzer, K.; Reimers, H.-C.; Winter, C. Limitations of Boulder Detection in Shallow Water Habitats Using High-Resolution Sidescan Sonar Images. Geosciences 2019, 9, 390. [Google Scholar] [CrossRef]
- Román, A.; Tovar-Sánchez, A.; Gauci, A.; Deidun, A.; Caballero, I.; Colica, E.; D’Amico, S.; Navarro, G. Water-Quality Monitoring with a UAV-Mounted Multispectral Camera in Coastal Waters. Remote Sens. 2022, 15, 237. [Google Scholar] [CrossRef]
- Matsui, K.; Shirai, H.; Kageyama, Y.; Yokoyama, H. Improving the Resolution of UAV-Based Remote Sensing Data of Water Quality of Lake Hachiroko, Japan by Neural Networks. Ecol. Inform. 2021, 62, 101276. [Google Scholar] [CrossRef]
- Yan, Y.; Wang, Y.; Yu, C.; Zhang, Z. Multispectral Remote Sensing for Estimating Water Quality Parameters: A Comparative Study of Inversion Methods Using Unmanned Aerial Vehicles (UAVs). Sustainability 2023, 15, 10298. [Google Scholar] [CrossRef]
- Rajpura, P.S.; Bojinov, H.; Hegde, R.S. Object Detection Using Deep CNNs Trained on Synthetic Images. arXiv 2017, arXiv:1706.06782v2. [Google Scholar]
- Ayachi, R.; Afif, M.; Said, Y.; Atri, M. Traffic Signs Detection for Real-World Application of an Advanced Driving Assisting System Using Deep Learning. Neural Process Lett. 2020, 51, 837–851. [Google Scholar] [CrossRef]
- Singh, K.; Navaratnam, T.; Holmer, J.; Schaub-Meyer, S.; Roth, S. Is Synthetic Data All We Need? Benchmarking the Robustness of Models Trained with Synthetic Images. arXiv 2024, arXiv:2405.20469v2. [Google Scholar]
- Greff, K.; Belletti, F.; Beyer, L.; Doersch, C.; Du, Y.; Duckworth, D.; Fleet, D.J.; Gnanapragasam, D.; Golemo, F.; Herrmann, C.; et al. Kubric: A Scalable Dataset Generator. arXiv 2022, arXiv:2203.03570. [Google Scholar]
- Guerneve, T.; Mignotte, P. Expect the unexpected: A man-made object detection algorithm for underwater operations in unknown environments. In Proceedings of the International Conference on Underwater Acoustics 2024, Institute of Acoustics, Bath, UK, 31 May 2024. [Google Scholar]
- Hinterstoisser, S.; Pauly, O.; Heibel, H.; Marek, M.; Bokeloh, M. An Annotation Saved Is an Annotation Earned: Using Fully Synthetic Training for Object Instance Detection. arXiv 2019, arXiv:1902.09967. [Google Scholar]
- Josifovski, J.; Kerzel, M.; Pregizer, C.; Posniak, L.; Wermter, S. Object Detection and Pose Estimation Based on Convolutional Neural Networks Trained with Synthetic Data. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; IEEE: New York, NY, USA, 2018; pp. 6269–6276. [Google Scholar]
- Fabbri, M.; Braso, G.; Maugeri, G.; Cetintas, O.; Gasparini, R.; Osep, A.; Calderara, S.; Leal-Taixe, L.; Cucchiara, R. MOTSynth: How Can Synthetic Data Help Pedestrian Detection and Tracking? In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; IEEE: New York, NY, USA, 2021; pp. 10829–10839. [Google Scholar]
- Lin, S.; Wang, K.; Zeng, X.; Zhao, R. Explore the Power of Synthetic Data on Few-Shot Object Detection. arXiv 2023, arXiv:2303.13221. [Google Scholar]
- Huh, J.; Lee, K.; Lee, I.; Lee, S. A Simple Method on Generating Synthetic Data for Training Real-Time Object Detection Networks. In Proceedings of the 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Honolulu, HI, USA, 12–15 November 2018; IEEE: New York, NY, USA, 2018; pp. 1518–1522. [Google Scholar]
- Seiler, F.; Eichinger, V.; Effenberger, I. Synthetic Data Generation for AI-Based Machine Vision Applications. Electron. Imaging 2024, 36, IRIACV-276. [Google Scholar] [CrossRef]
- Andulkar, M.; Hodapp, J.; Reichling, T.; Reichenbach, M.; Berger, U. Training CNNs from Synthetic Data for Part Handling in Industrial Environments. In Proceedings of the 2018 IEEE 14th International Conference on Automation Science and Engineering (CASE), Munich, Germany, 20–24 August 2018; IEEE: New York, NY, USA, 2018; pp. 624–629. [Google Scholar]
- Wang, R.; Hoppe, S.; Monari, E.; Huber, M.F. Defect Transfer GAN: Diverse Defect Synthesis for Data Augmentation. arXiv 2023, arXiv:2302.08366. [Google Scholar]
- Ma, Q.; Jin, S.; Bian, G.; Cui, Y. Multi-Scale Marine Object Detection in Side-Scan Sonar Images Based on BES-YOLO. Sensors 2024, 24, 4428. [Google Scholar] [CrossRef]
- Zhang, X.; Yang, P.; Cao, D. Synthetic aperture image enhancement with near-coinciding Nonuniform sampling case. Comput. Electr. Eng. 2024, 120, 109818. [Google Scholar] [CrossRef]
- Mohamed, H.; Nadaoka, K.; Nakamura, T. Automatic Semantic Segmentation of Benthic Habitats Using Images from Towed Underwater Camera in a Complex Shallow Water Environment. Remote Sens. 2022, 14, 1818. [Google Scholar] [CrossRef]
- Mohamed, H.; Nadaoka, K.; Nakamura, T. Semiautomated Mapping of Benthic Habitats and Seagrass Species Using a Convolutional Neural Network Framework in Shallow Water Environments. Remote Sens. 2020, 12, 4002. [Google Scholar] [CrossRef]
- Villon, S.; Mouillot, D.; Chaumont, M.; Darling, E.S.; Subsol, G.; Claverie, T.; Villéger, S. A Deep Learning Method for Accurate and Fast Identification of Coral Reef Fishes in Underwater Images. Ecol. Inform. 2018, 48, 238–244. [Google Scholar] [CrossRef]
- Han, F.; Yao, J.; Zhu, H.; Wang, C. Underwater Image Processing and Object Detection Based on Deep CNN Method. J. Sens. 2020, 2020, 6707328. [Google Scholar] [CrossRef]
- Jin, L.; Liang, H. Deep Learning for Underwater Image Recognition in Small Sample Size Situations. In Proceedings of the OCEANS 2017—Aberdeen, Aberdeen, UK, 19–22 June 2017; IEEE: New York, NY, USA, 2017; pp. 1–4. [Google Scholar]
- Jain, A.; Mahajan, M.; Saraf, R. Standardization of the Shape of Ground Control Point (GCP) and the Methodology for Its Detection in Images for UAV-Based Mapping Applications. In Advances in Computer Vision; Arai, K., Kapoor, S., Eds.; Advances in Intelligent Systems and Computing; Springer International Publishing: Cham, Switzerland, 2020; Volume 943, pp. 459–476. ISBN 978-3-030-17794-2. [Google Scholar]
- Chuanxiang, C.; Jia, Y.; Chao, W.; Zhi, Z.; Xiaopeng, L.; Di, D.; Mengxia, C.; Zhiheng, Z. Automatic Detection of Aerial Survey Ground Control Points Based on Yolov5-OBB. arXiv 2023, arXiv:2303.03041. [Google Scholar]
- Muradás Odriozola, G.; Pauly, K.; Oswald, S.; Raymaekers, D. Automating Ground Control Point Detection in Drone Imagery: From Computer Vision to Deep Learning. Remote Sens. 2024, 16, 794. [Google Scholar] [CrossRef]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; IEEE: New York, NY, USA, 2016; pp. 779–788. [Google Scholar]
- Wang, C.; Wang, Q.; Wu, H.; Zhao, C.; Teng, G.; Li, J. Low-Altitude Remote Sensing Opium Poppy Image Detection Based on Modified YOLOv3. Remote Sens. 2021, 13, 2130. [Google Scholar] [CrossRef]
- Wang, Q.; Shen, F.; Cheng, L.; Jiang, J.; He, G.; Sheng, W.; Jing, N.; Mao, Z. Ship Detection Based on Fused Features and Rebuilt YOLOv3 Networks in Optical Remote-Sensing Images. Int. J. Remote Sens. 2021, 42, 520–536. [Google Scholar] [CrossRef]
- Hong, Z.; Yang, T.; Tong, X.; Zhang, Y.; Jiang, S.; Zhou, R.; Han, Y.; Wang, J.; Yang, S.; Liu, S. Multi-Scale Ship Detection From SAR and Optical Imagery Via A More Accurate YOLOv3. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 6083–6101. [Google Scholar] [CrossRef]
- Chen, L.; Shi, W.; Deng, D. Improved YOLOv3 Based on Attention Mechanism for Fast and Accurate Ship Detection in Optical Remote Sensing Images. Remote Sens. 2021, 13, 660. [Google Scholar] [CrossRef]
- Wang, C.-Y.; Yeh, I.-H.; Liao, H.-Y.M. YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information. arXiv 2024, arXiv:2402.13616. [Google Scholar]
- Hussain, M. YOLO-v1 to YOLO-v8, the Rise of YOLO and Its Complementary Nature toward Digital Manufacturing and Industrial Defect Detection. Machines 2023, 11, 677. [Google Scholar] [CrossRef]
- Yeerjiang, A.; Wang, Z.; Huang, X.; Zhang, J.; Chen, Q.; Qin, Y.; He, J. YOLOv1 to YOLOv10: A Comprehensive Review of YOLO Variants and Their Application in Medical Image Detection. J. Artif. Intell. Pract. 2024, 7, 112–122. [Google Scholar] [CrossRef]
- Tian, Y.; Ye, Q.; Doermann, D. YOLOv12: Attention-Centric Real-Time Object Detectors. arXiv 2025, arXiv:2502.12524. [Google Scholar]
- Sapkota, R.; Meng, Z.; Karkee, M. Synthetic Meets Authentic: Leveraging LLM Generated Datasets for YOLO11 and YOLOv10-Based Apple Detection through Machine Vision Sensors. Smart Agric. Technol. 2024, 9, 100614. [Google Scholar] [CrossRef]
- Polish National Geoportal. Available online: https://mapy.geoportal.gov.pl/imap/Imgp_2.html (accessed on 30 July 2025).
- Padilla, R.; Netto, S.L.; Da Silva, E.A.B. A Survey on Performance Metrics for Object-Detection Algorithms. In Proceedings of the 2020 International Conference on Systems, Signals and Image Processing (IWSSIP), Niterói, Brazil, 1–3 July 2020; IEEE: New York, NY, USA, 2020; pp. 237–242. [Google Scholar]
- McCoy, R.M. Field Methods in Remote Sensing; Guilford Press: Guilford, UK, 2004; pp. 101–110. [Google Scholar]
- Karwowska, K.; Wierzbicki, D. Improving Spatial Resolution of Satellite Imagery Using Generative Adversarial Networks and Window Functions. Remote Sens. 2022, 14, 6285. [Google Scholar] [CrossRef]
- Karwowska, K.; Wierzbicki, D. Using Super-Resolution Algorithms for Small Satellite Imagery: A Systematic Review. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 3292–3312. [Google Scholar] [CrossRef]
- Lu, T.; Wang, J.; Zhang, Y.; Wang, Z.; Jiang, J. Satellite Image Super-Resolution via Multi-Scale Residual Deep Neural Network. Remote Sens. 2019, 11, 1588. [Google Scholar] [CrossRef]
- Elhanashi, A.; Dini, P.; Saponara, S.; Zheng, Q. Integration of Deep Learning into the IoT: A Survey of Techniques and Challenges for Real-World Applications. Electronics 2023, 12, 4925. [Google Scholar] [CrossRef]
GCP Type | Number of SSOs in the ortho2021 Image | Number of SSOs in the ortho2022 Image | Total Number of Simplified Synthetic GCPs in Training Data |
---|---|---|---|
Type A | 250 | 250 | 500 |
Type A | 500 | 500 | 1000 |
Type A | 1000 | 1000 | 2000 |
Type A | 2000 | 2000 | 4000 |
Type B | 250 | 250 | 500 |
Type B | 500 | 500 | 1000 |
Type B | 1000 | 1000 | 2000 |
Type B | 2000 | 2000 | 4000 |
Type C | 250 | 250 | 500 |
Type C | 500 | 500 | 1000 |
Type C | 1000 | 1000 | 2000 |
Type C | 2000 | 2000 | 4000 |
GCP Type | Total Number of Training SSOs | Detection Model Used | Series | Short Name of the Trained Model |
---|---|---|---|---|
Type A | 500 | YOLOv3 | 1 | YA500N |
Type A | 1000 | YOLOv3 | 1 | YA1000N |
Type A | 2000 | YOLOv3 | 1 | YA2000N |
Type A | 4000 | YOLOv3 | 1 | YA4000N |
Type B | 500 | YOLOv3 | 1 | YB500N |
Type B | 1000 | YOLOv3 | 1 | YB1000N |
Type B | 2000 | YOLOv3 | 1 | YB2000N |
Type B | 4000 | YOLOv3 | 1 | YB4000N |
Type C | 500 | YOLOv3 | 1 | YC500N |
Type C | 1000 | YOLOv3 | 1 | YC1000N |
Type C | 2000 | YOLOv3 | 1 | YC2000N |
Type C | 4000 | YOLOv3 | 1 | YC4000N |
Type A | 500 | YOLOv3 | 2 | YA500P |
Type A | 1000 | YOLOv3 | 2 | YA1000P |
Type A | 2000 | YOLOv3 | 2 | YA2000P |
Type A | 4000 | YOLOv3 | 2 | YA4000P |
Type B | 500 | YOLOv3 | 2 | YB500P |
Type B | 1000 | YOLOv3 | 2 | YB1000P |
Type B | 2000 | YOLOv3 | 2 | YB2000P |
Type B | 4000 | YOLOv3 | 2 | YB4000P |
Type C | 500 | YOLOv3 | 2 | YC500P |
Type C | 1000 | YOLOv3 | 2 | YC1000P |
Type C | 2000 | YOLOv3 | 2 | YC2000P |
Type C | 4000 | YOLOv3 | 2 | YC4000P |
Type A | 500 | YOLOv3 | 3 | YA500F |
Type A | 1000 | YOLOv3 | 3 | YA1000F |
Type A | 2000 | YOLOv3 | 3 | YA2000F |
Type A | 4000 | YOLOv3 | 3 | YA4000F |
Type B | 500 | YOLOv3 | 3 | YB500F |
Type B | 1000 | YOLOv3 | 3 | YB1000F |
Type B | 2000 | YOLOv3 | 3 | YB2000F |
Type B | 4000 | YOLOv3 | 3 | YB4000F |
Type C | 500 | YOLOv3 | 3 | YC500F |
Type C | 1000 | YOLOv3 | 3 | YC1000F |
Type C | 2000 | YOLOv3 | 3 | YC2000F |
Type C | 4000 | YOLOv3 | 3 | YC4000F |
Short Name of the Model | TP | FP | FN | Precision | Recall | F1-Score | AP |
---|---|---|---|---|---|---|---|
YA500N | 65 | 0 | 87 | 1.00 | 0.43 | 0.60 | 0.43 |
YA1000N | 111 | 3 | 41 | 0.97 | 0.73 | 0.83 | 0.73 |
YA2000N | 117 | 8 | 35 | 0.94 | 0.77 | 0.84 | 0.77 |
YA4000N | 127 | 28 | 25 | 0.82 | 0.84 | 0.83 | 0.83 |
YB500N | 20 | 1 | 132 | 0.95 | 0.13 | 0.23 | 0.13 |
YB1000N | 8 | 1 | 144 | 0.89 | 0.05 | 0.10 | 0.05 |
YB2000N | 123 | 42 | 29 | 0.75 | 0.81 | 0.78 | 0.79 |
YB4000N | 35 | 34 | 117 | 0.51 | 0.23 | 0.32 | 0.16 |
YC500N | 117 | 1 | 35 | 0.99 | 0.77 | 0.87 | 0.77 |
YC1000N | 84 | 0 | 68 | 1.00 | 0.55 | 0.71 | 0.56 |
YC2000N | 108 | 4 | 44 | 0.96 | 0.71 | 0.82 | 0.72 |
YC4000N | 55 | 1 | 97 | 0.98 | 0.36 | 0.53 | 0.37 |
YA500P | 26 | 0 | 126 | 1.00 | 0.17 | 0.29 | 0.17 |
YA1000P | 143 | 7 | 9 | 0.95 | 0.94 | 0.95 | 0.93 |
YA2000P | 127 | 2 | 25 | 0.98 | 0.84 | 0.90 | 0.84 |
YA4000P | 79 | 1 | 73 | 0.99 | 0.52 | 0.68 | 0.52 |
YB500P | 11 | 2 | 141 | 0.85 | 0.07 | 0.13 | 0.06 |
YB1000P | 88 | 1 | 64 | 0.99 | 0.58 | 0.73 | 0.58 |
YB2000P | 38 | 0 | 114 | 1.00 | 0.25 | 0.40 | 0.26 |
YB4000P | 125 | 6 | 27 | 0.95 | 0.82 | 0.88 | 0.82 |
YC500P | 112 | 4 | 40 | 0.97 | 0.74 | 0.84 | 0.74 |
YC1000P | 137 | 0 | 15 | 1.00 | 0.90 | 0.95 | 0.91 |
YC2000P | 151 | 13 | 1 | 0.92 | 0.99 | 0.96 | 0.99 |
YC4000P | 144 | 17 | 8 | 0.89 | 0.95 | 0.92 | 0.94 |
YA500F | 125 | 30 | 27 | 0.81 | 0.82 | 0.81 | 0.79 |
YA1000F | 144 | 35 | 8 | 0.80 | 0.95 | 0.87 | 0.91 |
YA2000F | 88 | 45 | 64 | 0.66 | 0.58 | 0.62 | 0.42 |
YA4000F | 118 | 3 | 34 | 0.98 | 0.78 | 0.86 | 0.77 |
YB500F | 23 | 1 | 129 | 0.96 | 0.15 | 0.26 | 0.15 |
YB1000F | 132 | 6 | 20 | 0.96 | 0.87 | 0.91 | 0.86 |
YB2000F | 110 | 5 | 42 | 0.96 | 0.72 | 0.82 | 0.72 |
YB4000F | 46 | 15 | 106 | 0.75 | 0.30 | 0.43 | 0.23 |
YC500F | 130 | 5 | 22 | 0.96 | 0.86 | 0.91 | 0.85 |
YC1000F | 145 | 63 | 7 | 0.70 | 0.95 | 0.81 | 0.94 |
YC2000F | 152 | 158 | 0 | 0.49 | 1.00 | 0.66 | 0.98 |
YC4000F | 143 | 17 | 9 | 0.89 | 0.94 | 0.92 | 0.90 |
Short Name of the Model | TP | GCP#1 dmean [cm] | GCP#2 dmean [cm] | GCP#3 dmean [cm] | GCP#4 dmean [cm] | GCP#5 dmean [cm] | Δdmean [cm] |
---|---|---|---|---|---|---|---|
YA500N | 0 | – | – | – | – | – | – |
YA1000N | 2 | 10.10 | – | – | 11.85 | – | 10.98 |
YA2000N | 2 | – | – | 21.49 | 13.65 | – | 17.57 |
YA4000N | 1 | – | – | 18.40 | – | – | 18.40 |
YB500N | 1 | 6.80 | – | – | – | – | 6.80 |
YB1000N | 2 | 7.16 | – | – | 7.45 | – | 7.30 |
YB2000N | 3 | 6.40 | – | 19.10 | 9.72 | – | 11.74 |
YB4000N | 2 | 9.32 | – | – | 10.71 | – | 10.02 |
YC500N | 4 | 6.40 | 24.76 | – | 12.13 | 4.79 | 12.02 |
YC1000N | 4 | 6.40 | – | 23.20 | 10.29 | 7.05 | 11.73 |
YC2000N | 5 | 6.40 | 22.39 | 20.62 | 12.33 | 8.71 | 14.09 |
YC4000N | 4 | 5.05 | – | 24.73 | 10.72 | 13.69 | 13.55 |
YA500P | 1 | 6.80 | – | – | – | – | 6.80 |
YA1000P | 5 | 7.10 | 10.62 | 4.56 | 11.74 | 5.81 | 7.97 |
YA2000P | 4 | 7.16 | 13.49 | 19.23 | 13.09 | – | 13.24 |
YA4000P | 2 | 13.55 | – | – | 13.09 | – | 13.32 |
YB500P | 3 | 9.05 | – | – | 13.66 | 4.72 | 9.14 |
YB1000P | 5 | 7.16 | 11.31 | 1.03 | 10.16 | 7.95 | 7.52 |
YB2000P | 4 | 10.10 | 14.31 | 1.97 | 17.03 | – | 10.85 |
YB4000P | 5 | 5.07 | 2.31 | 2.13 | 7.31 | 7.05 | 4.77 |
YC500P | 5 | 5.07 | 16.32 | 5.45 | 15.73 | 8.84 | 10.28 |
YC1000P | 5 | 5.07 | 10.49 | 16.07 | 12.33 | 7.28 | 10.25 |
YC2000P | 5 | 9.58 | 17.92 | 1.86 | 10.71 | 10.60 | 10.14 |
YC4000P | 5 | 5.05 | 12.06 | 16.68 | 7.28 | 11.87 | 10.59 |
YA500F | 3 | 7.79 | 25.00 | – | 17.34 | – | 16.71 |
YA1000F | 5 | 3.21 | 8.20 | 2.38 | 7.31 | 19.32 | 8.08 |
YA2000F | 0 | – | – | – | – | – | – |
YA4000F | 2 | 4.65 | 27.96 | – | – | – | 16.30 |
YB500F | 0 | – | – | – | – | – | – |
YB1000F | 2 | 6.87 | – | – | 11.83 | – | 9.35 |
YB2000F | 1 | 3.53 | – | – | – | – | 3.53 |
YB4000F | 2 | – | – | 14.56 | 43.26 | – | 28.91 |
YC500F | 5 | 20.48 | 29.55 | 21.49 | 14.56 | 11.28 | 19.47 |
YC1000F | 5 | 14.27 | 17.98 | 18.12 | 18.08 | 29.56 | 19.60 |
YC2000F | 5 | 14.43 | 33.38 | 14.68 | 20.99 | 2.95 | 17.28 |
YC4000F | 5 | 14.96 | 15.63 | 16.93 | 13.23 | 3.26 | 12.80 |
Short Name of the Model | TP | FP | FN | Precision | Recall | F1-Score | AP |
---|---|---|---|---|---|---|---|
YC2000P | 151 | 13 | 1 | 0.92 | 0.99 | 0.96 | 0.99 |
YC1000P | 137 | 0 | 15 | 1.00 | 0.90 | 0.95 | 0.91 |
YA1000P | 143 | 7 | 9 | 0.95 | 0.94 | 0.95 | 0.93 |
YC4000P | 144 | 17 | 8 | 0.89 | 0.95 | 0.92 | 0.94 |
YC4000F | 143 | 17 | 9 | 0.89 | 0.94 | 0.92 | 0.90 |
YB1000F | 132 | 6 | 20 | 0.96 | 0.87 | 0.91 | 0.86 |
YC500F | 130 | 5 | 22 | 0.96 | 0.86 | 0.91 | 0.85 |
Parameter | Value |
---|---|
imgsz | 416 |
workers | 8 |
batch | 16 |
epochs | 300 |
patience | 50 |
hsv_h | 0.015 |
hsv_s | 0.7 |
hsv_v | 0.4 |
degrees | 0.0 |
translate | 0.1 |
scale | 0.5 |
shear | 0.0 |
perspective | 0.0 |
flipud | 0.0 |
fliplr | 0.5 |
bgr | 0.0 |
mosaic | 1.0 |
mixup | 0.0 |
copy_paste | 0.5 |
copy_paste_mode | flip |
auto_augment | randaugment |
erasing | 0.4 |
crop_fraction | 1.0 |
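The parameters listed above correspond to Ultralytics YOLO training arguments. A minimal sketch of how such a run might be launched is shown below; the dataset YAML name and the pretrained weights file are placeholders, not the authors' files.

```python
from ultralytics import YOLO

# Placeholder weights and dataset config; substitute the actual files.
model = YOLO("yolo11n.pt")
model.train(
    data="ssos_gcp.yaml",      # dataset description (train/val paths, class names)
    imgsz=416, workers=8, batch=16, epochs=300, patience=50,
    hsv_h=0.015, hsv_s=0.7, hsv_v=0.4,
    degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0,
    flipud=0.0, fliplr=0.5, bgr=0.0,
    mosaic=1.0, mixup=0.0,
    copy_paste=0.5, copy_paste_mode="flip",
    auto_augment="randaugment", erasing=0.4, crop_fraction=1.0,
)
```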
Short Name of the Model | TP | GCP#1 dmean [cm] | GCP#2 dmean [cm] | GCP#3 dmean [cm] | GCP#4 dmean [cm] | GCP#5 dmean [cm] | Δdmean [cm] |
---|---|---|---|---|---|---|---|
YOLOv5 1000C | 5 | 2.25 | 2.25 | 3.18 | 8.11 | 7.12 | 4.58 |
YOLOv8 2000C | 5 | 3.18 | 2.25 | 3.18 | 7.12 | 5.03 | 4.15 |
YOLOv9 2000C | 5 | 2.25 | 2.25 | 2.25 | 7.12 | 3.18 | 3.41 |
YOLOv10 2000C | 5 | 2.25 | 2.25 | 2.25 | 5.03 | 3.18 | 2.99 |
YOLO12 4000C | 5 | 3.18 | 2.25 | 3.18 | 5.03 | 3.18 | 3.37 |