Review

Up-to-Date Scoping Review of Object Detection Methods for Macro Marine Debris

by Zoe Moorton *, Kamlesh Mistry, Rebecca Strachan and Shanfeng Hu
Computer Information Sciences, Engineering and Environment, Northumbria University, Ellison Pl, Newcastle upon Tyne NE1 8ST, UK
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2025, 13(8), 1590; https://doi.org/10.3390/jmse13081590
Submission received: 28 June 2025 / Revised: 11 August 2025 / Accepted: 14 August 2025 / Published: 20 August 2025
(This article belongs to the Special Issue Underwater Observation Technology in Marine Environment)

Abstract

Being able to accurately identify litter in a marine environment is crucial to cleaning up our seas and oceans. Research into object detection techniques to support this identification has been underway for over two decades. However, there have been substantial advancements in the past five years due to the implementation of deep learning techniques. Following the PRISMA-ScR guidelines, we provide an in-depth summary and analysis of recent and significant research contributions to the object detection of macro marine debris. From cross-referencing the results of the literature review, we deduce that there is currently no benchmarked framework for evaluating and comparing computer vision techniques for marine environments. Subsequently, we use the results from our analysis to provide a suggested checklist for future researchers in this field. Furthermore, many of the respected researchers in this field have advocated for a comprehensive database of underwater debris to support research developments in intelligent object detection and identification.

1. Introduction

Debris in marine environments has exponentially increased in recent years, and this continues to have destructive effects on marine life, the environment, and even human health. Over the last century, we have become increasingly aware of the scope of the repercussions plastic debris is causing, yet we have still not successfully targeted or solved this dilemma [1]. There is an overwhelming plethora of literature that documents the direct impact of synthetic debris on the marine ecosystem [2,3,4,5,6,7,8,9]. Inhabitants of marine settings are frequently found entangled within nets and other materials [10] or mistakenly identify debris as prey or objects to play with, consequently ingesting the toxic materials, which can lead to reproductive issues, blocked digestive tracts, and sometimes death [11,12,13,14,15,16]. In fact, a quantitative overview of marine fauna ingesting waste has found that over 700 species are confirmed to have eaten plastic [17]. Unfortunately, because trapped animals either rapidly sink or are consumed by predators, it is difficult to detect or accurately estimate the number of marine fatalities caused by entanglement [18,19,20,21].
When marine debris is not consumed by animals, it often ends up sinking to the seafloor, where it may stay for hundreds of years [22]. This impacts organisms underneath and around it, for example, suffocating and blocking the light to coral and soft sediment, which is vital to the ocean ecosystem [23]. Plastic debris has properties that often prevent it from sinking, leading it to continuously degrade and break down into microplastics and nanoplastics [24]. These tiny fragments have been found throughout our water systems [11,25] and in fish and other seafood [1,26,27,28,29,30,31]. Recently, scientists even discovered nanoplastics in our blood [32]. There is still not enough research to conclude what health implications this could have for humans and animals [33]. However, given the documented health effects of plastics on marine life [34,35,36], adverse repercussions are likely.
Manual efforts have been made to clean up coastlines; scuba divers, such as those who are part of the Professional Association of Diving Instructors (PADI) Dive Against Debris program, have reportedly collected 2.6 million pieces of debris from dives since the program started in 2011 [37]. Unfortunately, the NOAA [38] has estimated that approximately 666,667 metric tons of plastic alone enter the oceans every month, equating to approximately 8 million metric tons a year. We not only need to reduce the amount of debris entering our marine ecosystems but also remove or make safe the debris that is already there. A crucial step in this process will be to quickly and effectively distinguish it from other objects in that environment.
It should be noted that this review specifically focuses on marine water bodies, which often have more complex and variable backgrounds for object detection and identification compared to land environments [39], and it only covers debris of a macro size or larger. We have omitted studies on debris that have a diameter of less than 5 mm (Micro) and nano plastics (<1 μm in diameter), which would necessitate a different strategy and are beyond the scope of this paper.

1.1. Research Gap

Underwater marine debris detection is an incredibly complex process with multiple variables. Within this Scoping Review, we aim to explore the recent literature on methodologies that have been used to detect objects within this scope. Integral to object detection performance is the data a model is trained on; we therefore also explore how the datasets within this scope have been constructed. From this, we should be able to identify and evaluate the key research gaps and what might be done to address them. The combination of the unprecedented growth of marine debris and the rapid progression of artificial intelligence (AI) within the last few years has led to the need for an updated Scoping Review of underwater marine debris object detection.

1.2. Research Questions

To undertake this Scoping Review, we have devised three research questions:
Research Question 1 (RQ1): What data collection methods are used in publicly available datasets that support marine debris object detection?
Research Question 2 (RQ2): Which object detection architectures are currently performing the best in underwater marine environments for macro debris?
Research Question 3 (RQ3): What factors are affecting object detection performance within the scope of marine debris?

2. Methods and Materials

This review was conducted by following the PRISMA Scoping Review extension (PRISMA-ScR) [40,41] to analyze the literature on underwater macro marine debris object detection methods. It intends to help future researchers and scientists within this field and build on previous research in a methodical approach. To organize the relevant literature, following the PRISMA-ScR method, we performed the two following tasks:

2.1. Identification

Google Scholar was originally searched for “deep learning marine debris” OR “deep learning marine macro debris”. In IEEE we searched for “deep learning marine debris”. When we discovered there were too many deep learning techniques, we narrowed the scope specifically to ‘object detection’. “Marine debris yolov8”, “marine debris yolov9”, and “marine debris yolov10” were used later in the study to find updated studies relevant to less common methodologies and to ensure these had not been missed. These searches were most recently re-run in July 2025. Finally, to ensure a more relevant scope of up-to-date publications, we searched Google Scholar for “marine debris object detection” and filtered the results to 2024 and then to 2025.

2.2. Screening and Eligibility

We have only used peer-reviewed publications dated no earlier than 2019, with more of a focus on materials from the last two years. Computer Science publications generally avoid citations older than five years, and Artificial Intelligence moves quickly; therefore, relevant publications are usually within the last two years to remain current. We kept the sources from 2019 to show the comparison between before and after the Large Language Model ‘boom’ in 2022. When searching for baseline databases with underwater footage, we were able to cross-analyze the benchmark sets from our Supplementary Table S1. Our search criteria included English-language, peer-reviewed journals, symposiums, and conferences that met quartile 1 and 2 standards. We have only reviewed works in English, as it is the language of communication at the authors’ research institution, and only those that follow stringent peer-reviewed academic regulations. To be included, papers had to be primarily object detection studies AND they had to be primarily underwater AND include marine debris. The marine debris had to be of a macro size.
In this Scoping Review, to ensure that the papers included are of high quality, are still relevant, and are at a wide enough scope, we excluded academic publications that were not written in English, had been produced before 2019, were Q3, Q4, or unranked, or had not been peer reviewed. We excluded papers that did not include object detection; some papers were excluded as they were based on sea ice. Any papers that were chiefly remote sensing, LIDAR, or using satellite data were excluded; we also excluded papers that were tracking debris as opposed to detecting it. Additionally, we excluded micro- and nano-debris studies.
Though it is difficult to measure, it is estimated that floating debris only makes up 15% of the predicted amount of debris in the oceans [42]. Hence, our focus priority is on underwater studies.
Based on the selection process, a total of 25 papers met the inclusion requirements. Though of a modest size, it is a niche scope with strict parameters. Consequently, our Scoping Review consists of these studies.

2.3. Data Charting Process

To combine all the metadata of the papers reviewed in this paper, Supplementary Table S1 (Metadata of Related Papers) can be used to cross-reference the model performance, amount of data, classification groups, and even the hardware to determine more accurately which results have produced the most reliable outcomes. The data is presented in ascending chronological order for ease of use. The data columns appear in the following order: Citation; Scientific Journal; Title; Year; Methodology; Architecture; Debris Depth; Dataset Used; Class Count; Class Info; Data Count; Results; Hardware. We have outlined each data column below with a short explanation of why we chose to present these areas.
Citation: for ease of reference and visibility, we have cited the authors in the Harvard Reference list format within the Supplementary Data.
Scientific Journal: to show the quality and reliability of the research reviewed. Also useful for publication scope exposure.
Title: the title of the paper for visibility, memorability, and context.
Year: includes the year of the publication, for context and relevance.
Methodology: this column recorded the type of computer vision used.
Architecture: the models and how they were adapted.
Debris Depth: whether the objects were underwater, at surface level, or on the coast.
Dataset Used: refers to whether a benchmark set was used or if it was created.
Class Count: ‘Classification Count’, the number of classifications used.
Class Info: ‘Classification Information’, the classification names.
Data Count: includes the amount and format of data.
Results: recorded evaluation metrics of the models tested.
Hardware: any hardware recorded (data collection or processing power).
Naturally, not all this information was provided; therefore, if a cell is empty, we have been unable to locate the information during the study.
Some terms have been abbreviated: accuracy (acc), validation (val), and average precision/mean average precision (AP/mAP). Multiple tests run within the same study are separated with a semicolon and treated respectively. For example: “Model01; Model02” represents two separate tests within the study, and the results would be in a format of “xx%; xx%”, respectively.

3. Database Acquisition for Macro Marine Debris

To address RQ1, we have used this section to examine the complexity of underwater marine debris acquisition and explore the publicly available datasets that are available on an open-access license.

3.1. Complexity

Deep learning on marine debris datasets is a complex task for three major but non-exhaustive reasons:
  • Collection: To collect a comprehensive underwater footage set, scuba divers or underwater rovers are often used, both of which are expensive and cannot comprehensively cover the vast size of the ocean.
  • Underwater: The underwater world (particularly marine) comes with a plethora of issues, including a huge array of biodiversity and challenging conditions (depth, light, visibility)—with depth, the conditions rapidly change too. Underwater imagery is frequently characterized by low contrast, haze, poor lighting, and color distortion; these can collectively impair object detection performance and may lead to domain shifts when models are applied to different deployment environments.
  • Debris: Trash can signify any human-made object; not only are the shapes and sizes infinite, but properties degrade and alter after time in the salt water.
Though humankind may struggle to meet the demands of a dataset that is so varied, ongoing researchers remain determined.
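The low contrast, haze, and color distortion noted above can sometimes be partially mitigated with classical preprocessing before detection. As a minimal, hypothetical sketch (not drawn from any of the reviewed studies), the following applies contrast-limited adaptive histogram equalization (CLAHE) to the luminance channel of an underwater frame using OpenCV; the file names are placeholders.

```python
import cv2

def enhance_underwater_frame(path_in: str, path_out: str) -> None:
    """Apply CLAHE to the L channel of an underwater image (illustrative only)."""
    bgr = cv2.imread(path_in)                      # load the image in BGR order
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)     # separate luminance from color
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))        # equalize contrast locally
    cv2.imwrite(path_out, cv2.cvtColor(lab, cv2.COLOR_LAB2BGR))

enhance_underwater_frame("frame_raw.jpg", "frame_enhanced.jpg")  # hypothetical files
```

Such enhancement is only a partial remedy; as later sections show, dataset diversity and consistent evaluation remain the larger obstacles.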

3.2. Benchmark Databases

This section briefly outlines some of the well-known, open-access, benchmark databases of underwater debris imagery that researchers have collected from.
The J-EDI (JAMSTEC E-Library of Deep Sea Images) dataset will be mentioned a few times within this section. It is part of the JAMSTEC (Japan Agency for Marine-Earth Science and Technology, Yokosuka, Japan) data collection and is popular in marine footage-based studies, as it has so far been the most comprehensive open-access collection of underwater sea debris and biodiversity footage [43]. These deep-sea photos and videos are taken from a submarine off the Japanese coast under varying conditions. However, the downside to this database is that it is intended for multiple uses rather than specifically for machine learning. Therefore, a lot of data cleaning is required: some images or footage contain parts of the ROV system, the quality is not always up to computer vision standards, and the data needs to be labeled appropriately. Due to the depth, the images are low in color and have a beam of light from the ROV; additionally, any biodiversity is benthic. Regardless, JAMSTEC is a popular choice for obtaining underwater footage. It is also worth mentioning that there is another database called JeDI, “The Jellyfish Database Initiative”, which could be easily confused with J-EDI, “JAMSTEC E-library of Deep-Sea Images”; JeDI is used for tracking jellyfish populations [44].
The dataset Trash-ICRA19 (later expanded to TrashCan 1.0) [45,46] was pulled from JAMSTEC videos, and the authors collected 7,212 underwater RGB stills. The dataset is separated into two parts: Instance and TrashCan-Material [47]. The Instance set was used in the study [48] to improve their model of Mask R-CNN in object detection and instance segmentation. They also applied data enhancement, such as image rotation and cropping, to ensure effective feature extraction.
The authors of this research used a train/validation ratio set of 6065:1147, respectively, on the Instance category and 6008:1204 for the Material category. The authors believe this provided a comprehensive representation of marine debris and other objects present. They noted that many of the marine debris categories had similarities in the object properties but did not specify further.
TACO (Trash Annotations in Context for Litter Detection) is an open-source database [49,50] that collects annotated images of urban trash, mostly on land. Some of this database has been adapted for marine debris detection by producing the small dataset ‘AquaTrash’ [51], which consists of 369 annotated images from TACO. The authors of AquaTrash categorized the images into four groups: glass; metal; paper; plastic. Though the sample size is small, the authors still obtained a mean Average Precision of 81% even when tested on more random images; they claim the reduced scale provides “results in a more efficient manner”; however, it would be valuable to see examples of how the dataset performs on complex tasks.
Other open-access datasets include the SeaClear Marine Debris Dataset [52] and MARIDA [53]. SeaClear Marine Debris Dataset is the newest open-access dataset and shows promise. It contains 8610 images with 40 classes under 3 ‘super categories’ of biodiversity, debris and robot parts. The authors collected the data with different cameras, within different locations in Croatia and France. However, it is exclusively for shallow water marine debris and, at the time of writing, has only been tested by the authors on Faster R-CNN and YOLOv6, although its performance was promising with a mAP50 of 61.7% and mAP50-95 of 68.3%. MARIDA, or Marine Debris Archive, is a large database collected from satellites; therefore, its primary use is for tracking debris movement or clusters, rather than detecting individual underwater objects. UNO [54], PlastOPol [55] and DeepPlastic [56] are some examples of other marine debris datasets that can be used for object detection; however, none of the studies in this Scoping Review used them.

4. Advanced Techniques in Macro-Debris Identification

To address the escalating issue of macro-debris within marine environments, researchers are applying advanced object recognition techniques, from enhancing existing models to developing novel hybrid approaches. We explore RQ1 and RQ2 by comprehensively analyzing the varied methodologies researchers have been taking to approach their own data collection and architecture choices.

4.1. VGG-16 for Debris Classification

By applying a Bottleneck (BM) technique to VGG-16, one study [57] was able to increase its validation accuracy for classifying floating debris on the Cypriot coastline to 90% as the authors amplified their dataset and number of classes. They found that, even from low-resolution images, VGG-16 was able to accurately identify marine debris. The authors particularly note the importance of data augmentation to expand the dataset. They continued their work [58] by running two models in parallel: they collected shore samples along six different Cypriot beaches and used YOLOv5 and YOLACT++ to classify and localize the debris in the images, discovering that the majority of debris sizes ranged from 10 cm to 30 cm; this potentially offers a plausible representation of parameters needed for future researchers.
Other authors compared the VGG-16 algorithm with a custom convolutional model on a novel underwater image dataset [59], which was collected from open-source platforms such as JAMSTEC and a network of marine organizations. The binary classification determined whether the object was ‘animal’ or ‘litter’, with the aim of safely differentiating between the two. Their results showed that Jellyfish and Plastic Bags were often misidentified as one another. The customized CNN consisted of three convolutional layers with 32 nodes and no dense layer, whereas the VGG-16 model used ImageNet [60] transfer learning weights. Consequently, VGG-16 scored 95% accuracy, and the CNN model scored 89%. The authors conclude their research by conveying the need for a larger database with more diversity.
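To illustrate the transfer-learning setup described above, the sketch below loads ImageNet-pretrained VGG-16 weights in Keras and attaches a binary ‘animal vs. litter’ head. It is an assumed reconstruction for illustration only, not the authors’ published code; the dataset path, image size, and training settings are placeholders.

```python
import tensorflow as tf

# Frozen VGG-16 backbone with ImageNet weights, a common transfer-learning setup.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # keep the pretrained convolutional features fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary: animal vs. litter
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical directory of labeled images, resized to the VGG-16 input size.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=32, label_mode="binary")
model.fit(train_ds, epochs=10)
```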

4.2. Mask R-CNN for Instance Segmentation

Focused on the task of marine debris detection and instance segmentation [48], researchers have produced a fortified variation of Mask R-CNN by incorporating dilated convolution in the Feature Pyramid Network, a spatial-channel attention mechanism, and a re-scoring branch. They were able to upgrade the feature extraction and enhance the accuracy of the instance segmentation to a mAP50 of 59.2% (a 2.5% increase over the standard Mask R-CNN trained on the TrashCan dataset). Within object detection, their mAP50 reached 65.2% with an improved lateral connection (a 9.5% increase over the original structure), outperforming competing models such as Faster R-CNN (55.4%), RetinaNet (57.3%) and FCOS (60.4%). Their research particularly emphasized the importance of reinforcing available models, especially within a specialized, vast field such as marine debris.
With a focus on exploring under the surface debris, footage suitable for training machine learning models on underwater data was produced [61]. The CleanSea dataset performance was tested with Mask R-CNN due to its promising ability to detect and segment objects within an image. The authors voiced concerns that the environmental conditions of underwater data drastically change and that it is a challenge to represent the progressive degradation found in underwater data. They tested their model on two underwater videos: one within a controlled fishbowl environment and one within a real-world seabed. They discovered the complexity of the latter negatively impacted their results and have consequently recommended a greater variety than within their collection.
Furthermore, the researchers discovered confusion between categories such as “square can”, “basket” and “metal debris”; these classifications were easily misinterpreted by the framework. Additionally, they found that some categories, such as “shoe”, showed large confusion rates due to being underrepresented, highlighting the importance of a balanced dataset. The research also concluded that there were False Positive errors within images that contained more than one object to detect and that the algorithm favored the easiest option. They believe this is due to mislabeled objects and to cases where the full object is not visible (partially offscreen or overlapped). Overall, their model performed fairly well, with a mAP of 60%, but the authors acknowledge that the CleanSea dataset is limited in size, does not contain enough variation in debris shapes and colors, and has only been accumulated in a controlled environment.
In another study using Mask R-CNN [62], the TrashCan 1.0 dataset was applied with slightly altered classes to improve recognition performance. The study aimed to explore enhanced techniques that can be used for application on automated vehicles; to achieve this, the authors fortified their dataset with augmentation, achieving a mAP of 63.5% (Instance) and 65.2% (Material). The research paper does, however, conclude that synthetically produced data should be used to boost the overall performance of object detection.
Interestingly [63], researchers also trained the standard YOLOv8 on the TrashCan set to compare its results with Mask R-CNN (in addition to YOLACT and EfficientDet-D0), producing improved results with mAP 71.4%.
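For readers unfamiliar with how a standard Mask R-CNN is adapted to a new class set such as TrashCan’s, a minimal torchvision sketch is shown below. It swaps the box and mask heads for a custom number of classes; it is an assumption-based illustration following the standard torchvision fine-tuning pattern, not the modified architectures proposed in [48,62], and the class count is hypothetical.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_maskrcnn(num_classes: int):
    """COCO-pretrained Mask R-CNN with heads resized for a debris class set."""
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

    # Replace the box classification head with one sized for our classes.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # Replace the mask prediction head in the same way.
    in_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_mask, 256, num_classes)
    return model

# Example: 22 debris/biodiversity classes plus background (hypothetical count).
model = build_maskrcnn(num_classes=23)
```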

4.3. YOLO-Based Models

To address the challenges of small scale and occlusion of marine debris, a study aimed to develop an object detection network called YOLOTrashCan [64]. YOLOTrashCan consists of two main components: a backbone network, which enhances feature extraction, and a feature fusion network, which combines features from different scales to elevate detection accuracy. The authors found that their model produced a detection accuracy of 65.01% on the TrashCan-Instance dataset (their Material dataset produced slightly lower results of 55.66%). They were also able to reduce the network size by 30 MB, which makes it a more efficient model for practical applications (such as cleaning marine debris).
In the same year, using the same dataset [65], scientists published their findings when they proposed a modification of YOLOv5s for marine debris detection by replacing the backbone with MobileNet and introducing an attention mechanism for filtering key features. Their results showed a 4.5% mAP increase over the original YOLOv5 model, achieving a mAP of 67%, whilst simultaneously meeting the requirements of real-time detection. Interestingly, this model outperformed YOLOTrashCan [64], which could be attributed to the model type but also to the increase in classifications that the authors applied in this study.
YOLO (You Only Look Once) has consistently performed well on marine debris object detection and by 2025, had become an impressive tool for marine debris detection with the development of its later models (YOLOv8 and v12).
However, an older version, YOLOv3, was used in two studies, where both papers reported a balance of accuracy with low latency, making YOLOv3 a promising, lightweight option for automation applications. Both studies targeted underwater biodiversity and debris; the first achieved a mAP of 69.6% for sea life and 77.2% for debris [39]. Unfortunately, that dataset is hugely unbalanced, with a respective ratio of 8036:189. Meanwhile, the second study used a sample size of 300 ‘Non-Bio’ images from the Dataset of Underwater Trash, which usually holds 8580 images of ‘Non-Bio’ and ‘Bio’. They achieved a mAP of 98.15% [66] on their small sample size. Though both studies on YOLOv3 suffer from the lack of a large, diverse dataset, these examples show that YOLOv3 could be a good contender for lightweight use.
The authors of [67] modified the YOLOv8 model for improved underwater recognition and classification. Using the TrashCan-Material and Instance datasets, they achieved a mAP of 72% and 66.7%, respectively. Except for three classes (Trash, Fabric, and Plastic), their modified model consistently outperformed the original YOLOv8m model across the remaining 13 categories. The modified model successfully recognized small and overlapping objects, a complex problem for underwater biodiversity, particularly benthic creatures. Additionally, the model was able to detect small instances of starfish that were not labeled in the ground truth, presenting strong potential for feature learning. A comparative analysis between their modified version of YOLOv8 and the improved YOLOv5s model by the authors of [65] on the same dataset shows a 5% increase in the mean Average Precision. In a direct comparison on TrashCan 1.0, YOLOv8 outperformed other popular models, achieving a mAP of 71% [63]. Assuming the mAP values are directly comparable, the modified version of YOLOv8 only improved performance by a 1% margin. YOLOv8 was once again the leading model in a comparative study across Faster R-CNN, Mask R-CNN, YOLOv5s, v6s and v7 on a novel dataset [68]. The authors collected an underwater dataset of 10,000 images from open-source footage. While YOLOv8, 7 and 5 achieved an identical mAP of 96%, YOLOv8n outperformed the others due to its balance of high mean Average Precision with low latency, making it an ideal solution for real-time object detection.
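Since several of the studies above train standard or modified Ultralytics YOLO models, a minimal training and evaluation call is sketched below using the off-the-shelf ultralytics package; the dataset YAML and hyperparameters are placeholders and do not reproduce any specific study’s configuration.

```python
from ultralytics import YOLO

# Start from pretrained YOLOv8 nano weights and fine-tune on a debris dataset.
model = YOLO("yolov8n.pt")
model.train(data="trashcan.yaml", epochs=100, imgsz=640, batch=16)  # hypothetical config

# Validation reports mAP@50 and mAP@50-95, the metrics compared throughout this review.
metrics = model.val()
print(metrics.box.map50, metrics.box.map)  # map = mAP averaged over IoU 0.50-0.95
```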
In 2025, some researchers began to significantly enhance the variations of YOLO. Using the JAMSTEC database as a dataset, a lightweight model, YOLO-MES [69], was developed by replacing the backbone with MobileNetv3 and applying a bottleneck to form the ‘MECA-neck’ module, which improved the adaptive feature extraction and target recognition capabilities; the authors were able to reduce the model size by 64% whilst detecting 5 classes at an accuracy of 95.8%.
By feeding super-resolution reconstructed images into YOLOv8, which improved visibility, the Sea Floor Debris YOLO (SFD-YOLO) model achieved a mAP of 91.2% on a novel dataset that the authors collected in Thailand. Professional scuba divers collected videos with 4K GoPro cameras, and high-quality images were extracted from the frames. As the resolution was high, they were able to crop the frames into smaller images to increase the dataset quantity. Keeping only the images that included marine debris, they obtained a dataset of 512 × 512 pixel ‘patches’, which included 11 classifications. However, even with data augmentation, their collection size was still only 294 images.
The latest Ultralytics YOLO, version 12, was used on the Garbage in the Ocean [70] dataset, which includes 15 submerged and floating marine debris classes. The authors remark that the intersection of the predicted bounding boxes with the ground truth performed well, demonstrated by a mAP50-95 of 70%. The authors found robust performance on some of the complex challenges of marine debris detection, such as occlusion and small objects; however, the dataset conditions are too clear and bright to represent varying depths, and the model therefore struggled in low-lighting scenarios. The authors conclude with a suggestion for a more diverse and robust dataset with less favorable conditions.

4.4. Innovations in Macro-Debris Detection

Two studies were published in the same year [71,72], which developed enhanced classification networks to identify and categorize deep-sea debris. In one study [71], the authors proposed a novel hybrid model called Shuffle-Xception, which they compared with ResNetv2-34, ResNetV2-152, MobileNet, LeNet, and Xception. Though Shuffle-Xception performed generally better than the other models, with an approximate average F1 score of 0.95, it did struggle to differentiate plastic, metal, and natural debris. This is potentially due to the changes in state from natural degradation. Interestingly, the authors noted that the model scores higher on Recall than Precision, indicating that it prioritizes avoiding missed detections over avoiding false alarms; the authors comment that this aligns with their aims for collecting marine debris, but it could be argued that this is not in alignment with marine biodiversity conservation efforts.
In another publication [72], the authors explored different classification networks and ultimately proposed a one-stage network called ResNet50-YOLOv3, where YOLOv3 (multiscale detector) is built into ResNet50 (backbone). ResNet50 was selected to enhance the performance, and YOLOv3 was chosen for its balance of speed and accuracy. When compared to other detection networks, the ResNet50-YOLOv3 results outperformed them in both accuracy and speed with an 83% mAP0.5, which suggests a good understanding of boundary coordinates. They acknowledged that they chose to sacrifice accuracy in favor of speed with the ResBlocks; therefore, presenting an opportunity for future researchers to further explore this study with more processing power.
By fortifying YOLOv7 with an attention backbone on satellite surveillance data, the authors (who named their algorithm CBAM) were able to successfully detect marine debris objects in shallow waters [73]. Three hundred and twenty-one images were split into a binary classification of ‘with debris’ or ‘without debris’, and the model was able to identify debris at a mAP0.5 of 76%. Finally, a combined method of tracking, detecting, and counting debris was proposed [74] as a more efficient and cost-effective alternative to manual surveys when estimating the quantity of debris. The authors were able to estimate the abundance of debris and produced a 72% mAP with the YOLOv5 model (pre-trained with COCO [75]), achieving an accuracy of 89%. They believe their findings could be integrated into other digital applications or other remote methods of surveying, which may enhance the performance.

5. Findings

The aim of this paper is to review the current applications of underwater object detection techniques for macro marine debris. An initial observation of the metadata recorded in Table 1 shows key concerns that have significantly impacted the progression in this challenging field.
  • Problem 1: The datasets currently available are limited in diversity and fail to represent the complexity of the marine environment.
  • Problem 2: There is a lack of consistency within data reporting; therefore, assessing and evaluating the results is subjective and ultimately inconclusive.
Within this section, and by utilizing Supplementary Table S1, we analyze these gaps.

5.1. Cross Analysis of Metadata

Below we present Table 1, which includes a basic summary of Supplementary Table S1. In the final column, we have included the Processing Specifications where possible, including the Central Processing Unit (CPU) power and the Graphics Processing Unit (GPU). If they are not included, then the information was not provided by the authors. We chose the relevant fields to highlight the main influencing factors behind the results, which would be the type of model, data format, performance and processing power. The data format column records the number of images.
Table 1. A comparison of recent underwater object detection model studies.

| Year | Ref. | Model | Data Format | Performance | Processing Specs |
| | [76] | VGG-16 | 12,000 | val acc 86% | Intel Xeon (2.40 GHz), NVIDIA Quadro K4200 |
| 2019 | [46] | TinyYOLO; YOLOv2; SSD; Faster RCNN | 5720 | Faster RCNN mAP: 81% | NVIDIA GTX 1080; Embedded GPU (NVIDIA Jetson TX2); CPU (Intel i3-6100U) |
| | [39] | YOLOv3 | 189 (debris); 8036 (bio) | mAP 77.2%; 69.6% | Intel Core i7-7800X 3.50 GHz, 40 GB RAM, NVIDIA GTX 1080 |
| 2020 | [57] | VGG16 | 32,000 | 90% val acc | Intel Xeon E5-2630 v3 (2.40 GHz), 48 GB RAM, NVIDIA Quadro K4200 28.6 GB |
| | [51] | RetinaNet (ResNet50 backbone and FPN) | 369 | mAP 81% | - |
| | [58] | YOLOv5; YOLACT++ | 1650 | AP 92.4%; 69.6% | NVIDIA Tesla K80 |
| | [72] | ResNet50-YOLOv3 | 10,000 | mAP 83.4% | NVIDIA GeForce GTX 1080Ti 11 GB |
| 2021 | [48] | Mask RCNN | 7212 | mAP 59.2%; 65.2% | Intel Xeon Silver 4110 @2.10 GHz, GeForce RTX 2080Ti |
| | [71] | Shuffle-Xception | 13,914 | 0.95 F1 average | Intel Xeon W-2133 3.60 GHz, 31.7 GB RAM, NVIDIA GeForce GTX 1080Ti |
| | [66] | YOLOv3 | 300 | mAP 98.15% | - |
| | [74] | YOLOv5 | 2050 | mAP 89.4% | NVIDIA Tesla K4 (Google Colab) |
| 2022 | [59] | VGG-16; Custom CNN | 1744 | 95%; 89% acc | Dell Inspiron i7-7700HQ CPU 2.8 GHz, 16 GB RAM, NVIDIA GeForce GTX 1050ti |
| | [61] | Mask R-CNN | 1223 | mAP 60% | Intel Core i7-8700 CPU @3.20 GHz, 16 GB RAM, NVIDIA GeForce RTX 2070 6 GB |
| | [62] | Mask R-CNN | 1223 | mAP Instance 63.5%; Material 65.2% | - |
| 2023 | [65] | YOLOv5 (MobileNetv3 backbone) | 7212 | mAP 67% | Intel Xeon Silver 4210R CPU @2.20 GHz, NVIDIA GeForce RTX 2090Ti * GPU |
| | [64] | YOLOTrashCan | 7212 | mAP 58.66%; 65.01% | AMD Ryzen 7 3700X, NVIDIA TITAN RTX 24 GB, 48 GB RAM |
| | [63] | YOLOv8 | 7212 | mAP 71% | Tesla P100 GPU |
| 2024 | [67] | YOLOv8 modified | 7212 | mAP 72% | NVIDIA GeForce RTX 3090 24 GB |
| | [73] | CBAM (enhanced YOLOv7 with attention backbone) | 321 | mAP@50: 76%; 72% | Google Colab |
| | [69] | YOLO-MES | 6283 | 95.8% acc | Intel Core i9-11900 CPU @2.50 GHz, 64 GB RAM, NVIDIA GeForce RTX A4000 |
| 2025 | [77] | SFD-YOLO (enhanced YOLOv8) | 294 | mAP 91.2% | NVIDIA GeForce RTX 4090, 64 GB RAM |
| | [78] | YOLOv12 | 5130 | mAP@50: 84%; mAP@50-95: 70% | - |
| | [68] | YOLOv5, YOLOv7, YOLOv8 | 10,000 | mAP 96% | Tesla T4 GPU |

* The RTX 2090Ti does not exist; please see Section 5.5 for further information.

5.2. Benchmark Dataset Integration

From our review, we have collated the benchmark datasets along with the number of studies that used them, as shown in Table 2. Image examples for these datasets can be found in Figure 1.
TrashCan 1.0 is the most popular dataset within our study for object recognition training. This could perhaps be due to its acceptable size and its diverse 22 categories. One study (see Table 3) was able to use TrashCan 1.0 to pretrain their model. Though the JAMSTEC database is very large, it is not labeled and has consequently dropped in popularity in the last few years. Furthermore, as previously mentioned, TrashCan 1.0 (and Trash-ICRA19) were built from JAMSTEC footage and have proven to be the favored datasets. Combined, this means that 12 studies were derived from JAMSTEC data.
Table 2. Number of studies within this review using benchmark datasets since 2019.

| Dataset | Studies | Classes | Images |
| TrashCan 1.0 | 6 | 22 | 7212 |
| Trash-ICRA19 | 1 | 3 | 5720 |
| JAMSTEC | 5 | - | - |
| CleanSea Set | 2 | 19 | 1223 |
Table 3. Comparative table of metadata within studies that used TrashCan 1.0 either as a full dataset (D) or pretrained (P).

| Citation | Usage | Architecture | GPU | CPU | Outcome (mAP) |
| [48] | D | Mask RCNN | NVIDIA GeForce RTX 2080Ti | Intel(R) Xeon(R) Silver 4110 @2.10 GHz | 59.2%; 65.2% |
| [65] | D | YOLOv5s (MobileNetv3 backbone) | NVIDIA GeForce RTX 2090Ti * | Intel(R) Xeon(R) Silver 4210R @2.20 GHz | 67% |
| [64] | D | YOLOTrashCan | NVIDIA TITAN RTX | AMD Ryzen 7 3700X | 58.66%; 65.01% |
| [63] | D | YOLOv8 | NVIDIA Tesla P100 | - | 71% |
| [67] | D | YOLOv8 modified | NVIDIA GeForce RTX 3090 | - | 66.7%; 72% |
| [77] | P | SFD-YOLO (enhanced YOLOv8) | NVIDIA GeForce RTX 4090 | - | 91.2% |

* See Section 5.5; this model does not exist.
Images remain at the forefront of object detection databases, with video footage yet to be fully explored; this could be due to the largely time-consuming manual effort of labeling and cleaning the data, as videos consist of 24–25 frames per second.
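Part of this labeling burden comes from the raw frame rate; one common mitigation, sketched here under assumed parameters and file paths, is to sample only every Nth frame with OpenCV before annotation.

```python
import cv2

def sample_frames(video_path: str, out_prefix: str, every_n: int = 25) -> int:
    """Save roughly one frame per second from ~25 fps footage (illustrative only)."""
    cap = cv2.VideoCapture(video_path)
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n == 0:               # keep only every Nth frame
            cv2.imwrite(f"{out_prefix}_{saved:05d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

sample_frames("dive_footage.mp4", "frames/debris", every_n=25)  # hypothetical paths
```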

5.3. TrashCan 1.0 Comparative Results

Due to its popularity, we have presented the use cases of TrashCan 1.0 in Table 3.
For a fair comparison, we felt that it was important to show the GPU and CPU processing power and the model architecture to provide more insight into the performance.
Across the data we have analyzed, we found a common occurrence that authors reported their outcomes with varying evaluation metrics. Conveniently, all the researchers using TrashCan 1.0 reported their outcomes in mean Average Precision, which offers a much stronger comparison of performance. However, mAP may be reported at a single Intersection over Union (IoU) threshold (for example, 0.5 or 0.7) or averaged across thresholds from 0.5 to 0.95, and this was not clarified in multiple studies.
The use of mAP allows us to more accurately comment on the performance of the later models of the YOLO family–namely version 8, surpassing other architectures in its performance.
Within Table 3, there is also a recurring theme of NVIDIA GPU usage, which we explore further in Section 5.5.

5.4. Highest Mean Average Precision Scores

In Table 4, we analyze the highest mean Average Precision scores. As a robust metric, we have chosen the mean Average Precision score, which summarizes the precision and the recall across the model’s different threshold levels and offers a more comprehensive figure. We have selected studies that performed higher than 80%, as they have low rates of false negatives and false positives, which means the model is performing well overall. Though for complex underwater datasets, a more lenient baseline of around 60% would still be an acceptable region.
If there are two scores, the first is a mAP@50, the second is mAP@50-95.
Studies performed within the last two years have all used the YOLO models and a high number of images within a dataset. The popularity of Ultralytics YOLO [79] models has grown exponentially as they have begun to lead computer vision.
The most recent research contains only three classes across 10,000 images, and all models achieved the same mean Average Precision. As this dataset was collected by the authors and tested only once, it would be appropriate for a second study to re-run the experiments and see whether the outcomes remain identical.
It was curious that the number of instances was not communicated thoroughly across many of the studies; though it is useful to know the quantity of images for data storage purposes, we could argue that the reporting of instances is valuable too. For example, one image could potentially contain 30 instances, but 100 images may contain none. Reporting instances, therefore, is an informative and descriptive variable that should be encouraged for other researchers to share in future work.
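As an illustration of how instance-level reporting could be produced from a COCO-style annotation file (the format used by TrashCan and similar datasets), a small counting sketch is given below; the file name is a placeholder, and this is not taken from any reviewed study.

```python
import json
from collections import Counter

def instance_summary(annotation_file: str) -> None:
    """Report per-class instance counts and instances per image from a COCO JSON file."""
    with open(annotation_file) as f:
        coco = json.load(f)

    names = {c["id"]: c["name"] for c in coco["categories"]}
    per_class = Counter(names[a["category_id"]] for a in coco["annotations"])
    per_image = Counter(a["image_id"] for a in coco["annotations"])

    print("images:", len(coco["images"]), "instances:", len(coco["annotations"]))
    print("mean instances per annotated image:",
          round(sum(per_image.values()) / max(len(per_image), 1), 2))
    for name, count in per_class.most_common():
        print(f"{name}: {count}")

instance_summary("trashcan_instance_train.json")  # hypothetical path
```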

5.5. Hardware Specifications for Underwater Object Detection

In Table 5, we provide a list of the GPUs used across the studies in the review. If the GPU has not been reported, we have excluded the study from this table.
The most immediate observation is clear: NVIDIA [80] has a monopoly on computer vision-ready GPUs. They are the exclusive graphics card of choice amongst the researchers in this review. Their popularity could be due to multiple factors; their leading technological developments and the integration of features for empowering AI within the last few years could be one of them. Additionally, NVIDIA graphics processing units often have a large amount of RAM and a high number of cores, and they are easily integrated into Python environments with NVIDIA’s CUDA package, which allows the architecture to run seamlessly on the specific GPU the user chooses rather than a default selection of the first GPU (or sometimes even the CPU).
Whilst we were running a comparative analysis of the NVIDIA GPUs, we became aware that one of the studies recorded the use of a GPU that does not exist [81]: the GeForce RTX 2090Ti. This is probably due to a typo, but as we cannot assume the intended model, we had to omit this record of hardware from Table 5.
With CUDA Core integration, the performance results are generally much better due to parallel processing capabilities. In addition to this, integration of a larger memory capacity means that intense datasets can be handled more efficiently. Consequently, with their generous graphics memory and high Core Count, the latest NVIDIA GeForce models perform much better than the older ones [80].
Recent RTX models of NVIDIA (2080, 3090, TITAN, 4090) also incorporate Tensor Cores [82], which are specifically optimized for enhancing speed and performance in deep learning operations. The TITAN RTX is even specifically designed for machine learning [83]. Additional features of the GPUs from the RTX 20 series onwards include Deep Learning Super Sampling (DLSS) [84], which enhances frame rates and resolution. The cloud models, such as the Tesla T4 and the P100 (“The World’s First AI Supercomputing Data Center GPU”), generally perform very well as they are specifically optimized for inference tasks and offer mixed-precision capability, a feature that allows them to run models faster without sacrificing accuracy.
If a CPU or the RAM was recorded, we included that information under the GPU in brackets for the reader’s reference.
Our ranking shows that the highest number of cores, large memory size, and additional features like Tensor Cores and DLSS have placed NVIDIA’s RTX 4090 and 3090 as the top two performing GPUs in this review. It is important to consider, then, that model performance should naturally be higher on these cards; this has not necessarily been reflected in the Supplementary Data. Therefore, we have provided an additional comparative table of these GPUs with their respective results below in Table 6, so that the reader can assess model performance for themselves.
Consequently, in theory, the lowest-performing processors should be the two GPUs with the fewest CUDA cores and the smallest graphics memory; though they offer varied capabilities, they are generally not considered powerful enough for demanding applications such as object detection on large footage datasets. Yet we also included these in Table 6, and (though not measured by mAP) the corresponding studies still seemingly performed well.
Ultimately, across the metadata, we cannot draw any firm conclusions, as consistency in reporting data is lacking, though the evidence here is overwhelming that NVIDIA graphics cards are the industry favorite for underwater computer vision tasks.

6. Discussion and Outcomes

We have analyzed other authors’ recommendations within this review and expressed our own ideas accumulated from this research; we briefly touch on technical constraints and challenges within this field and conclude with three key areas for improvement, which we feel address our original research questions.

6.1. RQ1: A Suitable Underwater Dataset

Although there have been attempts to curate and use image databases covering either floating or underwater marine debris scenarios, the publicly available repositories are still too small and insufficiently diverse to produce the results that researchers are looking for. Hence, curating and releasing a benchmark dataset that is open to all researchers would address one of the most common obstacles for deep learning in this field.
As we have shown in Table 2 and Table 3, the most popular benchmark datasets are all based on JAMSTEC data, which is respectable but only covers deep-sea benthic footage in Japan. Therefore, the data does not include debris that has yet to degrade; the vibrant colors, reflections, and occlusion challenges of shallow water; geographical diversity; or the huge array of biodiversity.
The collection of data within the marine debris topic has proven to considerably affect the results that researchers are claiming. Many authors state that database collections are simply not large enough [64,85,86,87], particularly of underwater debris. In addition to an increase in image availability, researchers [64,66,86,87] have pointed out that the variation of images is also not up to a high enough standard. Their suggestions include diversifying weather conditions, locations, and visibility within the water. To provide an AI framework that works in multiple conditions, perhaps the databases should embody more challenging pieces of data to enrich the robustness of the model. For example, the condition of debris, particularly plastic or glass, should be considered. If brighter and more rigid pieces of plastic perform well, solving the degradation of color and texture within debris underwater is a challenge that needs addressing [88].
Based on the findings, it would be valuable to explore research that can detect more than one object within a frame or handle partially hidden objects, whether under sediment, covered by wildlife, or clustered in groups [61].
Public datasets remain limited in diversity and size, with imbalances across debris types, environments, and visibility conditions. Moreover, cues can be ambiguous, limiting claims of broad applicability across habitats or times; the lack of standardized frameworks for image resolution, preprocessing, annotation workflows or labeling schemes, limits fair cross-study comparisons. Benchmarking is further constrained by the scarcity of standardized reference datasets, making objective evaluation and reproducibility challenging. Together, these data-centric constraints mean that reported gains from model advances should be interpreted in the context of dataset limitations and benchmarking gaps, rather than as universal improvements in detection capability.

6.2. RQ2: Robust Model Architecture

The next steps in this research could be reviewing and comparing the strongest algorithms currently available; versions of YOLO (particularly YOLOv5) and Faster R-CNN have previously been favored in this field for their robust object detection qualities. However, following the rapid emergence of Generative AI (GAI) since 2022 [89], with models such as ChatGPT [90] and DeepSeek [91] speeding up research, technology has developed at an unprecedented rate, and the latest versions of YOLO (particularly version 8) have surpassed the older models with much faster and seemingly stronger results. With powerful computing resources and a large database, transformers could be explored as a hybrid method; this could improve performance by avoiding the data and algorithmic bias that deep learning may produce [92,93]. There is positive research to suggest strong results with this combination [94,95,96,97]. It has been further suggested to introduce additional attention mechanisms into the backbone, as well as incorporating other techniques to boost model performance to better suit the challenge of macro marine debris detection [64,65,72]. Such challenges could include dense clusters (e.g., a shoal of fish) or small object detection. Detectors often lose fine details as objects shrink through convolutional layers, reducing accuracy for small and sparsely distributed debris; underwater conditions such as occlusion, low contrast, haze, and uneven lighting intensify this further. In addition to attention mechanisms, other technical approaches, such as causal inference or counterfactual learning, may be an appropriate avenue to address the root problems of small or partially embedded objects. Temporal and spatial variability in sparsely distributed debris, driven by currents, may reduce detection performance outside accumulation zones; video footage could be incorporated into training and validation datasets to enhance coverage of dispersed debris.
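To make the idea of adding attention to a backbone concrete, the sketch below shows one simple channel-attention variant (squeeze-and-excitation style) in PyTorch. It is an illustrative assumption only; it is not the CBAM module of [73] nor the specific attention mechanisms proposed in [64,65,72], and the tensor shapes are arbitrary.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (one simple variant)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # global spatial context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                              # per-channel gate in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                             # re-weight feature channels

# Example: gate a 256-channel backbone feature map.
features = torch.randn(2, 256, 40, 40)
print(ChannelAttention(256)(features).shape)  # torch.Size([2, 256, 40, 40])
```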
Studies performed on data collected by satellite could suffer from limited spatial resolution, irregular coverage (low signal), turbidity and additional lighting conditions, which would constrain generalizability and yield substantial uncertainty; ensemble approaches help to address these problems [73,78,88,98,99], and future work could survey such methods in further detail.

6.3. RQ3: Proposed Framework

We propose a framework for underwater object detection standardization. The lack of consistency across studies makes the results difficult to accurately and fairly compare. When detailing methods, future researchers should report model complexity, inference time, hardware requirements, preprocessing steps, data provenance, and annotation schemes to improve reproducibility. Therefore, based on the findings within this Scoping Review, we suggest a framework for future authors to use, enabling researchers to make fair comparisons and build on previous research. This would be a valuable contribution to the field, helping to propel underwater marine debris automation forward.
Additionally, we propose standardizing the evaluation metric used for object recognition to mAP. Depending on the dataset used, this metric can be reported at different Intersection over Union (IoU) thresholds. At mAP@50, a detection counts as correct if its bounding box overlaps the ground truth box with an IoU of at least 50%; mAP@95 is therefore very strict and is an important metric for identifying marine-specific challenges such as overlapping objects or large groups. Again, as mentioned above, marine data is complex, and we must take into consideration that the labeling process can be subjective; therefore, mAP@50 could still be an appropriate metric on complex object recognition tasks.
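To make the IoU thresholds behind mAP@50 and mAP@50-95 concrete, a short sketch of the Intersection over Union computation for axis-aligned boxes is given below; the example boxes are arbitrary values chosen purely for illustration.

```python
def iou(box_a, box_b) -> float:
    """IoU of two boxes given as (x1, y1, x2, y2) corner coordinates."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection counts as correct at mAP@50 only if IoU >= 0.5 with the ground truth box.
print(round(iou((10, 10, 110, 110), (50, 50, 150, 150)), 3))  # ~0.22, below the 0.5 threshold
```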

7. Conclusions

Based on the literature review and the results, object detection for addressing marine debris has made great strides in the last five years. However, to make significant progress, there is a substantial journey ahead for researchers in this field, with an abundance of challenges to address.
RQ1: Our key insight was the need for a varied, robust underwater dataset collection and a standardized framework for comparing model performance.
Future work should prioritize data-centric infrastructure to enable fairer and more generalizable evaluation of underwater debris detection methods. Specifically, we recommend:
  • Assembling and publishing a large, diverse, open benchmark dataset that spans a range of depths, lighting conditions, turbidity, geographic regions, and debris types.
  • Establishing standardized data collection and annotation protocols to reduce inter-study variability.
  • Systematically reporting data-centric metrics, including dataset size, class distribution, instance counts, density metrics and available calibration or uncertainty measures, to facilitate meaningful comparisons.
  • Developing domain-adaptation and transfer-learning approaches to reduce performance gaps when models are applied to different datasets and environments.
We found that multiple authors reported inconclusive results due to limited diversity and depth within their datasets; to address this, we propose designing a diversity index for datasets to facilitate consistent benchmarking.
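One possible form such a diversity index could take, sketched purely as an assumption for illustration, is a normalized Shannon entropy over the dataset’s class distribution (higher values indicate a more balanced class mix); a practical index would also need to cover depth, lighting, turbidity, and location.

```python
import math
from collections import Counter

def class_balance_index(labels) -> float:
    """Normalized Shannon entropy of class labels: 0 = single class, 1 = perfectly balanced."""
    counts = Counter(labels)
    if len(counts) < 2:
        return 0.0
    total = sum(counts.values())
    entropy = -sum((n / total) * math.log(n / total) for n in counts.values())
    return entropy / math.log(len(counts))

# Hypothetical, heavily imbalanced annotation labels.
labels = ["plastic"] * 800 + ["metal"] * 150 + ["glass"] * 50
print(round(class_balance_index(labels), 2))  # ~0.56, well below 1.0, flagging the imbalance
```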
RQ2: YOLO continuously outperforms other object detection methods, and in particular, the latest versions seem to be producing outstanding results due to their high accuracy and inference speed. Using Table 3 to compare the results on TrashCan 1.0 with mAP scores, we concluded that this small sample was the closest comparison that could begin to show any correlation between model performances. Addressing RQ1 and RQ3’s outcomes would improve RQ2’s conclusion.
RQ3: In addition to the proposed framework of a standardized checklist when completing research in this field, we suggest that if authors choose to publish one evaluation metric, it should be consistent and represent the diverse nature of underwater environments. Based on our findings, a mean Average Precision score, preferably mAP@50-95, would be best suited given its broader coverage.
Whilst we have tried to provide a fair comparison, our Scoping Review excluded many academic sources that could provide further insights. To build on this review and develop a more in-depth study, future work could focus on literature and field case studies, as well as consider research in different languages. Continued effort and further critical analysis of the metadata can, and should, be explored for deeper insights.
In future work, we hope to see researchers build on previous methodologies that have performed well, to finalize the use of object detection on macro marine debris, and hopefully propel this field into tangible marine debris management.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/jmse13081590/s1, Table S1 Metadata of Related Papers.

Author Contributions

Z.M. conducted research and wrote the majority of this paper. Her supervision team consists of K.M., S.H. and R.S. Conceptualization, methodology, formal analysis, writing—original draft preparation Z.M. and R.S.; writing—major review and editing, K.M. and R.S.; supervision, K.M., S.H. and R.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No custom data was used in this survey.

Acknowledgments

Thank you very much to Zeyneb Kurt and Wai Lok Woo for their supervision and contributions toward the original draft of this paper in 2023. It has since been considerably developed and updated, but their help was much appreciated.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Moore, C.J.; Moore, S.L.; Leecaster, M.K.; Weisberg, S.B. A comparison of plastic and plankton in the North Pacific Central Gyre. Mar. Pollut. Bull. 2001, 42, 1297–1300. [Google Scholar] [CrossRef]
  2. Mato, Y.; Isobe, T.; Takada, H.; Kanehiro, H.; Ohtake, C.; Kaminuma, T. Plastic resin pellets as a transport medium for toxic chemicals in the marine environment. Environ. Sci. Technol. 2001, 35, 318–324. [Google Scholar] [CrossRef]
  3. Talsness, C.E.; Andrade, A.J.M.; Kuriyama, S.N.; Taylor, J.A.; vom Saal, F.S. Components of plastic: Experimental studies in animals and relevance for human health. Philos. Trans. R. Soc. B Biol. Sci. 2009, 364, 2079–2096. [Google Scholar] [CrossRef]
  4. Ryan, P.G.; Connell, A.D.; Gardner, B.D. Plastic ingestion and PCBs in seabirds: Is there a relationship? Mar. Pollut. Bull. 1988, 19, 174–176. [Google Scholar] [CrossRef]
  5. Lee, K.T.; Tanabe, S.; Koh, C.H. Contamination of Polychlorinated Biphenyls (PCBs) in Sediments from Kyeonggi Bay and Nearby Areas, Korea. Mar. Pollut. Bull. 2001, 42, 273–279. [Google Scholar] [CrossRef]
  6. Oberdörster, E.; Cheek, A.O. Gender benders at the beach: Endocrine disruption in marine and estuarine organisms. Environ. Toxicol. Chem. 2001, 20, 23–36. [Google Scholar] [CrossRef]
  7. Derraik, J.G.B. The pollution of the marine environment by plastic debris: A review. Mar. Pollut. Bull. 2002, 44, 842–852. [Google Scholar] [CrossRef]
  8. Rahman, M.; Brazel, C. The plasticizer market: An assessment of traditional plasticizers and research trends to meet new challenges. Prog. Polym. Sci. 2004, 29, 1223–1248. [Google Scholar] [CrossRef]
  9. Plot, V.; Georges, J.-Y. Plastic Debris in a Nesting Leatherback Turtle in French Guiana. Chelonian Conserv. Biol. 2010, 9, 267–270. [Google Scholar] [CrossRef]
  10. Stelfox, M.; Hudgins, J.; Sweet, M. A review of ghost gear entanglement amongst marine mammals, reptiles and elasmobranchs. Mar. Pollut. Bull. 2016, 111, 6–17. [Google Scholar] [CrossRef]
  11. McAdam, R. Plastic in the ocean: How much is out there? Significance 2017, 14, 24–27. [Google Scholar] [CrossRef]
  12. Boerger, C.M.; Lattin, G.L.; Moore, S.L.; Moore, C.J. Plastic ingestion by planktivorous fishes in the North Pacific Central Gyre. Mar. Pollut. Bull. 2010, 60, 2275–2278. [Google Scholar] [CrossRef]
  13. Bugoni, L.; Krause, L.; Petry, M.V. Marine debris and human impacts on sea turtles in southern Brazil. Mar. Pollut. Bull. 2001, 42, 1330–1334. [Google Scholar] [CrossRef]
  14. Tomás, J.; Guitart, R.; Mateo, R.; Raga, J.A. Marine debris ingestion in loggerhead sea turtles, Caretta caretta, from the Western Mediterranean. Mar. Pollut. Bull. 2002, 44, 211–216. [Google Scholar] [CrossRef]
  15. Wright, S.L.; Thompson, R.C.; Galloway, T.S. The physical impacts of microplastics on marine organisms: A review. Environ. Pollut. 2013, 178, 483–492. [Google Scholar] [CrossRef]
  16. Pawar, P.; Shirgaonkar, S.; Patil, R.B. Plastic marine debris: Sources, distribution and impacts on coastal and ocean biodiversity. PENCIL Publ. Biol. Sci. (Oceanogr.) 2016, 3, 40–54. [Google Scholar]
  17. Kühn, S.; van Franeker, J.A. Quantitative overview of marine debris ingested by marine megafauna. Mar. Pollut. Bull. 2020, 151, 110858. [Google Scholar] [CrossRef]
  18. Allen, R.; Jarvis, D.; Sayer, S.; Mills, C. Entanglement of grey seals Halichoerus grypus at a haul out site in Cornwall, UK. Mar. Pollut. Bull. 2012, 64, 2815–2819. [Google Scholar] [CrossRef]
  19. Sharma, S.; Chatterjee, S. Microplastic pollution, a threat to marine ecosystem and human health: A short review. Environ. Sci. Pollut. Res. 2017, 24, 21530–21547. [Google Scholar] [CrossRef]
  20. Quayle, D.V. Plastics in the Marine Environment: Problems and Solutions. Chem. Ecol. 1992, 6, 69–78. [Google Scholar] [CrossRef]
  21. Laist, D.W. Impacts of Marine Debris: Entanglement of Marine Life in Marine Debris Including a Comprehensive List of Species with Entanglement and Ingestion Records. In Marine Debris; Springer: New York, NY, USA, 1997. [Google Scholar] [CrossRef]
  22. Goldberg, E.D. Plasticizing the seafloor: An overview. Environ. Technol. 1997, 18, 195–201. [Google Scholar] [CrossRef]
  23. Chiappone, M.; Dienes, H.; Swanson, D.W.; Miller, S.L. Impacts of lost fishing gear on coral reef sessile invertebrates in the Florida Keys National Marine Sanctuary. Biol. Conserv. 2005, 121, 221–230. [Google Scholar] [CrossRef]
  24. Alimi, O.S.; Farner Budarz, J.; Hernandez, L.M.; Tufenkji, N. Microplastics and Nanoplastics in Aquatic Environments: Aggregation, Deposition, and Enhanced Contaminant Transport. Environ. Sci. Technol. 2018, 52, 1704–1724. [Google Scholar] [CrossRef]
  25. Viehman, S.; vander Pluym, J.L.; Schellinger, J. Characterization of marine debris in North Carolina salt marshes. Mar. Pollut. Bull. 2011, 62, 2771–2779. [Google Scholar] [CrossRef]
  26. Eriksson, C.; Burton, H.; Fitch, S.; Schulz, M.; van den Hoff, J. Daily accumulation rates of marine debris on sub-Antarctic island beaches. Mar. Pollut. Bull. 2013, 66, 199–208. [Google Scholar] [CrossRef]
  27. Daniel, D.B.; Ashraf, P.M.; Thomas, S.N. Microplastics in the edible and inedible tissues of pelagic fishes sold for human consumption in Kerala, India. Environ. Pollut. 2020, 266, 115365. [Google Scholar] [CrossRef]
  28. Daniel, D.B.; Ashraf, P.M.; Thomas, S.N.; Thomson, K.T. Microplastics in the edible tissues of shellfishes sold for human consumption. Chemosphere 2021, 264, 128554. [Google Scholar] [CrossRef]
  29. Danopoulos, E.; Jenner, L.C.; Twiddy, M.; Rotchell, J.M. Microplastic contamination of seafood intended for human consumption: A systematic review and meta-analysis. Environ. Health Perspect. 2020, 128, 126002. [Google Scholar] [CrossRef]
  30. Dong, X.; Liu, X.; Hou, Q.; Wang, Z. From natural environment to animal tissues: A review of microplastics (nanoplastics) translocation and hazards studies. Sci. Total Environ. 2023, 855, 158686. [Google Scholar] [CrossRef]
  31. Lai, H.; Liu, X.; Qu, M. Nanoplastics and Human Health: Hazard Identification and Biointerface. Nanomaterials 2022, 12, 1298. [Google Scholar] [CrossRef]
  32. Leslie, H.A.; van Velzen, M.J.M.; Brandsma, S.H.; Vethaak, A.D.; Garcia-Vallejo, J.J.; Lamoree, M.H. Discovery and quantification of plastic particle pollution in human blood. Environ. Int. 2022, 163, 107199. [Google Scholar] [CrossRef] [PubMed]
  33. Smith, M.; Love, D.C.; Rochman, C.M.; Neff, R.A. Microplastics in Seafood and the Implications for Human Health. Curr. Environ. Health Rep. 2018, 5, 375–386. [Google Scholar] [CrossRef] [PubMed]
  34. Fowler, C.W. Marine debris and northern fur seals: A case study. Mar. Pollut. Bull. 1987, 18, 326–335. [Google Scholar] [CrossRef]
  35. Coleman, F.; Wehle, D. Plastic Pollution: A worldwide oceanic problem. Parks 1984, 9, 9–12. [Google Scholar]
  36. Day, R.H. The Occurrence and Characteristics of Plastic Pollution in Alaska’s Marine Birds. Master’s Thesis, University of Alaska Fairbanks, Fairbanks, AK, USA, 1980. [Google Scholar]
  37. PADI. AWARE: Marine Debris Program. Available online: https://www.padi.com/aware/marine-debris (accessed on 6 August 2025).
  38. NOAA. A Guide to Plastic in the Ocean. Available online: https://oceanservice.noaa.gov/hazards/marinedebris/plastics-in-the-ocean.html (accessed on 6 August 2025).
  39. Watanabe, J.-I.; Shao, Y.; Miura, N. Underwater and airborne monitoring of marine ecosystems and debris. J. Appl. Remote Sens. 2019, 13, 044509. [Google Scholar] [CrossRef]
  40. Tricco, A.C.; Lillie, E.; Zarin, W.; O’Brien, K.K.; Colquhoun, H.; Levac, D.; Moher, D.; Peters, M.D.J.; Horsley, T.; Weeks, L.; et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Ann. Intern. Med. 2018, 169, 467–473. [Google Scholar] [CrossRef]
  41. Arksey, H.; O’Malley, L. Scoping studies: Towards a methodological framework. Int. J. Soc. Res. Methodol. 2005, 8, 19–32. [Google Scholar] [CrossRef]
  42. Condor Ferries: Marine & Ocean Pollution Statistics & Facts 2023. Available online: https://www.condorferries.co.uk/Marine-Ocean-Pollution-Statistics-Facts (accessed on 6 August 2025).
  43. JAMSTEC. JAMSTEC OFES (Ocean General Circulation Model for the Earth Simulator) Dataset; JAMSTEC: Kochi, Japan, 2009. [Google Scholar]
  44. Condon, R.; Lucas, C. JeDI: The Jellyfish Database Initiative; University of Southampton Institutional: Hampshire, UK, 2015. [Google Scholar]
  45. Fulton, M.; Hong, J.; Sattar, J. Trash-ICRA19: A Bounding Box Labeled Dataset of Underwater Trash; University Digital Conservancy; University of Minnesota: Minneapolis, MN, USA, 2020. [Google Scholar]
  46. Fulton, M.; Hong, J.; Islam, M.J.; Sattar, J. Robotic Detection of Marine Litter Using Deep Visual Detection Models. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 5752–5758. [Google Scholar]
  47. Hong, J.; Fulton, M.S.; Sattar, J. TrashCan 1.0 An Instance-Segmentation Labeled Dataset of Trash Observations; University of Minnesota Press: Minneapolis, MN, USA, 2020. [Google Scholar]
  48. Deng, H.; Ergu, D.; Liu, F.; Ma, B.; Cai, Y. An embeddable algorithm for automatic garbage detection based on complex marine environment. Sensors 2021, 21, 6391. [Google Scholar] [CrossRef]
  49. Proença, P.F.; Simões, P. TACO: Trash Annotations in Context for Litter Detection. arXiv 2020, arXiv:2003.06975. [Google Scholar]
  50. Proença, P.F. TACO Dataset. 2025. Available online: http://www.tacodataset.org/ (accessed on 18 August 2025).
  51. Panwar, H.; Gupta, P.K.; Siddiqui, M.K.; Morales-Menendez, R.; Bhardwaj, P.; Sharma, S.; Sarker, I.H. AquaVision: Automating the detection of waste in water bodies using deep transfer learning. Case Stud. Chem. Environ. Eng. 2020, 2, 100026. [Google Scholar] [CrossRef]
  52. Đuraš, A.; Wolf, B.J.; Ilioudi, A.; Palunko, I.; De Schutter, B. A Dataset for Detection and Segmentation of Underwater Marine Debris in Shallow Waters. Sci. Data 2024, 11, 921. [Google Scholar] [CrossRef]
  53. Kikaki, K.; Kakogeorgiou, I.; Mikeli, P.; Raitsos, D.E.; Karantzalos, K. MARIDA: A benchmark for Marine Debris detection from Sentinel-2 remote sensing data. PLoS ONE 2022, 17, e0262247. [Google Scholar] [CrossRef]
  54. Barrelet, C.; Chaumont, M.; Subsol, G.; Creuze, V.; Gouttefarde, M. From TrashCan to UNO: Deriving an Underwater Image Dataset to Get a More Consistent and Balanced Version. Pattern Recognition, Computer Vision, and Image Processing. ICPR 2022 International Workshops and Challenges, Montreal, QC, Canada, 21–25 August 2022. [Google Scholar]
  55. Liu, J.; Wu, D.; Hellevik, C.C.; Wang, H. PlastOPol: A Collaborative Data-driven Solution for Marine Litter Detection and Monitoring. In Proceedings of the 2023 IEEE International Conference on Industrial Technology (ICIT), Orlando, FL, USA, 4–6 April 2023; pp. 1–6. [Google Scholar]
  56. Tata, G.; Royer, S.J.; Poirion, O.; Lowe, J. DeepPlastic: A Novel Approach to Detecting Epipelagic Bound Plastic Using Deep Visual Models. arXiv 2021, arXiv:2105.01882. [Google Scholar]
  57. Kylili, K.; Hadjistassou, C.; Artusi, A. An intelligent way for discerning plastics at the shorelines and the seas. Environ. Sci. Pollut. Res. 2020, 27, 42631–42643. [Google Scholar] [CrossRef] [PubMed]
  58. Kylili, K.; Artusi, A.; Hadjistassou, C. A new paradigm for estimating the prevalence of plastic litter in the marine environment. Mar. Pollut. Bull. 2021, 173, 113127. [Google Scholar] [CrossRef] [PubMed]
  59. Moorton, Z.; Kurt, Z.; Woo, W.L. Is the use of deep learning an appropriate means to locate debris in the ocean without harming aquatic wildlife? Mar. Pollut. Bull. 2022, 181, 113853. [Google Scholar] [CrossRef] [PubMed]
  60. ImageNet. Available online: https://image-net.org/ (accessed on 1 August 2025).
  61. Sánchez-Ferrer, A.; Gallego, A.J.; Valero-Mas, J.J.; Calvo-Zaragoza, J. The CleanSea Set: A Benchmark Corpus for Underwater Debris Detection and Recognition. In Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2022; Volume 13256. [Google Scholar] [CrossRef]
  62. Sánchez-Ferrer, A.; Valero-Mas, J.J.; Gallego, A.J.; Calvo-Zaragoza, J. An experimental study on marine debris location and recognition using object detection. Pattern Recognit. Lett. 2023, 168, 154–161. [Google Scholar] [CrossRef]
  63. Jain, R.; Zaware, S.; Kacholia, N.; Bhalala, H.; Jagtap, O. Advancing Underwater Trash Detection: Harnessing Mask R-CNN, YOLOv8, EfficientDet-D0 and YOLACT. In Proceedings of the 2nd International Conference on Sustainable Computing and Smart Systems (ICSCSS 2024), Coimbatore, India, 10–12 July 2024. [Google Scholar]
  64. Zhou, W.; Zheng, F.; Yin, G.; Pang, Y.; Yi, J. YOLOTrashCan: A deep learning marine debris detection network. IEEE Trans. Instrum. Meas. 2023, 72, 1–12. [Google Scholar] [CrossRef]
  65. Liu, J.; Zhou, Y. Marine debris detection model based on the improved YOLOv5. In Proceedings of the 2023 3rd International Conference on Neural Networks, Information and Communication Engineering, NNICE, Guangzhou, China, 24–26 February 2023; pp. 725–728. [Google Scholar]
  66. Hipolito, J.C.; Sarraga Alon, A.; Amorado, R.V.; Fernando, M.G.Z.; de Chavez, P.I.C. Detection of Underwater Marine Plastic Debris Using an Augmented Low Sample Size Dataset for Machine Vision System: A Deep Transfer Learning Approach. In Proceedings of the 19th IEEE Student Conference on Research and Development: Sustainable Engineering and Technology towards Industry Revolution, SCOReD, Kota Kinabalu, Malaysia, 23–25 November 2021; pp. 82–86. [Google Scholar]
  67. Jiang, W.; Yang, L.; Bu, Y. Research on the Identification and Classification of Marine Debris Based on Improved YOLOv8. J. Mar. Sci. Eng. 2024, 12, 1748. [Google Scholar] [CrossRef]
  68. Walia, J.S.; Haridass, K.; Pavithra, L.K. Deep Learning Innovations for Underwater Waste Detection: An In-Depth Analysis. IEEE Access 2025, 13, 88917–88929. [Google Scholar] [CrossRef]
  69. Huang, C.; Zhang, W.; Zheng, B.; Li, J.; Xie, B.; Nan, R.; Tan, Z.; Tan, B.; Xiong, N.N. YOLO-MES: An Effective Lightweight Underwater Garbage Detection Scheme for Marine Ecosystems. IEEE Access 2025, 13, 60440–60454. [Google Scholar] [CrossRef]
  70. Faisal, M.; Chaudhury, S.; Sankaran, K.S.; Raghavendra, S.; Chitra, R.J.; Eswaran, M.; Boddu, R.; Mahalle, P.N. Faster R-CNN Algorithm for Detection of Plastic Garbage in the Ocean: A Case for Turtle Preservation. Math. Probl. Eng. 2022, 2022, 3639222. [Google Scholar] [CrossRef]
  71. Xue, B.; Huang, B.; Wei, W.; Chen, G.; Li, H.; Zhao, N.; Zhang, H. An Efficient Deep-Sea Debris Detection Method Using Deep Neural Networks. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 12348–12360. [Google Scholar] [CrossRef]
  72. Xue, B.; Huang, B.; Chen, G.; Li, H.; Wei, W. Deep-sea debris identification using deep convolutional neural networks. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 8909–8921. [Google Scholar] [CrossRef]
  73. Shen, A.; Zhu, Y.; Angelov, P.; Jiang, R. Marine debris detection in satellite surveillance using attention mechanisms. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 4320–4330. [Google Scholar] [CrossRef]
  74. Teng, C.; Kylili, K.; Hadjistassou, C. Deploying deep learning to estimate the abundance of marine debris from video footage. Mar. Pollut. Bull. 2022, 183, 114049. [Google Scholar] [CrossRef]
  75. Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common Objects in Context. In Computer Vision—ECCV 2014; Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T., Eds.; Springer: Cham, Switzerland, 2014; pp. 740–755. [Google Scholar]
  76. Kylili, K.; Kyriakides, I.; Artusi, A.; Hadjistassou, C. Identifying floating plastic marine debris using a deep learning approach. Environ. Sci. Pollut. Res. 2019, 26, 17091–17099. [Google Scholar] [CrossRef] [PubMed]
  77. Zhao, F.; Huang, B.; Wang, J.; Shao, X.; Wu, Q.; Xi, D.; Liu, Y.; Chen, Y.; Zhang, G.; Ren, Z.; et al. Seafloor debris detection using underwater images and deep learning-driven image restoration: A case study from Koh Tao, Thailand. Mar. Pollut. Bull. 2025, 214, 117710. [Google Scholar] [CrossRef] [PubMed]
  78. Ma, J.; Zhou, Y.; Zhou, Z.; Zhang, Y.; He, L. Toward smart ocean monitoring: Real-time detection of marine litter using YOLOv12 in support of pollution mitigation. Mar. Pollut. Bull. 2025, 217, 118136. [Google Scholar] [CrossRef]
  79. Ultralytics. Available online: https://www.ultralytics.com/ (accessed on 1 August 2025).
  80. NVIDIA. NVIDIA. Available online: https://www.nvidia.com/en-gb/ (accessed on 31 July 2025).
  81. NVIDIA. GeForce 20 Series. Available online: https://www.nvidia.com/en-gb/geforce/graphics-cards/compare/?section=compare-20 (accessed on 31 July 2025).
  82. NVIDIA. Tensor Cores. Available online: https://www.nvidia.com/en-us/data-center/tensor-cores/?ncid=no-ncid (accessed on 31 July 2025).
  83. NVIDIA. TITAN RTX. Available online: https://www.nvidia.com/en-eu/deep-learning-ai/products/titan-rtx/ (accessed on 31 July 2025).
  84. NVIDIA. DLSS. Available online: https://www.nvidia.com/en-us/geforce/technologies/dlss/ (accessed on 31 July 2025).
  85. de Vries, R.; Egger, M.; Mani, T.; Lebreton, L. Quantifying floating plastic debris at sea using vessel-based optical data and artificial intelligence. Remote Sens. 2021, 13, 3401. [Google Scholar] [CrossRef]
  86. Maharjan, N.; Miyazaki, H.; Pati, B.M.; Dailey, M.N.; Shrestha, S.; Nakamura, T. Detection of River Plastic Using UAV Sensor Data and Deep Learning. Remote Sens. 2022, 14, 3049. [Google Scholar] [CrossRef]
  87. van Lieshout, C.; van Oeveren, K.; van Emmerik, T.; Postma, E. Automated River Plastic Monitoring Using Deep Learning and Cameras. Earth Space Sci. 2020, 7, e2019EA000960. [Google Scholar] [CrossRef]
  88. Sannigrahi, S.; Basu, B.; Basu, A.S.; Pilla, F. Development of automated marine floating plastic detection system using Sentinel-2 imagery and machine learning models. Mar. Pollut. Bull. 2022, 178, 113527. [Google Scholar] [CrossRef]
  89. García-Peñalvo, F.; Vázquez-Ingelmo, A. What do we mean by GenAI? A systematic mapping of the evolution, trends, and techniques involved in generative AI. Int. J. Interact. Multimed. Artif. Intell. 2023, 8, 7–16. [Google Scholar] [CrossRef]
  90. OpenAI ChatGPT. Available online: https://openai.com/chatgpt/overview/ (accessed on 1 August 2025).
  91. DeepSeek AI. Available online: https://www.deepseek.com/en (accessed on 1 August 2025).
  92. Shah, M.; Sureja, N. A Comprehensive Review of Bias in Deep Learning Models: Methods, Impacts, and Future Directions. Arch. Comput. Methods Eng. 2024, 32, 255–267. [Google Scholar] [CrossRef]
  93. Vardi, G. On the Implicit Bias in Deep-Learning Algorithms. Commun. ACM 2023, 66, 86–93. [Google Scholar] [CrossRef]
  94. Bazi, Y.; Bashmal, L.; Rahhal, M.M.A.; Dayil, R.A.; Ajlan, N.A. Vision transformers for remote sensing image classification. Remote Sens. 2021, 13, 516. [Google Scholar] [CrossRef]
  95. Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-End Object Detection with Transformers. In Proceedings of the Computer Vision—ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020. [Google Scholar]
  96. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021. [Google Scholar]
  97. Touvron, H.; Cord, M.; Douze, M.; Massa, F.; Sablayrolles, A.; Jégou, H. Training data-efficient image transformers & distillation through attention. arXiv 2021, arXiv:2012.12877. [Google Scholar]
  98. Topouzelis, K.; Papakonstantinou, A.; Garaba, S.P. Detection of floating plastics from satellite and unmanned aerial systems (Plastic Litter Project 2018). Int. J. Appl. Earth Obs. Geoinf. 2019, 79, 175–183. [Google Scholar] [CrossRef]
  99. Topouzelis, K.; Papageorgiou, D.; Suaria, G.; Aliani, S. Floating marine litter detection algorithms and techniques using optical remote sensing data: A review. Mar. Pollut. Bull. 2021, 170, 112675. [Google Scholar] [CrossRef]
Figure 1. Popular open-access datasets available for object recognition in marine environments. (a) J-EDI (JAMSTEC). (b) TRASH-ICRA19. (c) TACO. (d) AquaTrash. (e) SeaClear Marine Debris Dataset. Images were cropped to fit the table.
Table 4. Studies with the highest mAP scores in chronological order.

| Citation | Architecture | No. of Classes | Size (Images) | Outcome (mAP) |
| --- | --- | --- | --- | --- |
| [46] | Faster R-CNN | 3 | 5,720 | 81% |
| [51] | RetinaNet | 4 | 369 | 81% |
| [71] | ResNet50-YOLOv3 | 7 | 10,000 | 83.4% |
| [66] | YOLOv3 | 1 | 300 | 98.15% |
| [74] | YOLOv5 | 9 | 2,050 | 89.4% |
| [77] | SFD-YOLO (enhanced YOLOv8) | 1 | 1,294 | 91.2% |
| [78] | YOLOv12 | 1 | 55,130 | 84%; 70% |
| [68] | YOLOv5 | 3 | 10,000 | 96% |
| [68] | YOLOv7 | 3 | 10,000 | 96% |
| [68] | YOLOv8 | 3 | 10,000 | 96% |
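As a brief reminder when interpreting Table 4, the cited studies do not necessarily compute mAP under identical conventions (e.g., mAP@50 versus COCO-style mAP@50:95, and their class sets differ), so the percentages are indicative rather than directly comparable. The commonly used definition, at a fixed IoU threshold such as 0.5, is

\[ AP_c = \int_0^1 p_c(r)\,\mathrm{d}r, \qquad mAP = \frac{1}{C} \sum_{c=1}^{C} AP_c, \]

where \(p_c(r)\) is the precision as a function of recall for class \(c\) and \(C\) is the number of classes; COCO-style mAP additionally averages this quantity over IoU thresholds from 0.50 to 0.95.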
Table 5. GPU Comparison across this Scoping Review.

| GPU (/CPU) | Release Year | CUDA Cores | Memory (/RAM) |
| --- | --- | --- | --- |
| NVIDIA GeForce RTX 4090 | 2022 | 16,384 | 24 GB GDDR6X |
| NVIDIA GeForce RTX 3090 | 2020 | 10,496 | 24 GB GDDR6X |
| NVIDIA RTX A4000 (Intel Core i9-11900 @ 2.50 GHz) | 2021 | 6,144 | 16 GB GDDR6 (64 GB RAM) |
| NVIDIA Tesla K80 | 2014 | 4,992 | 24 GB GDDR5 |
| NVIDIA TITAN RTX (AMD Ryzen 7 3700X) | 2018 | 4,608 | 24 GB GDDR6 (48 GB RAM) |
| NVIDIA GeForce RTX 2080 Ti (Intel Xeon Silver 4110) | 2018 | 4,352 | 11 GB GDDR6 (32 GB RAM) |
| NVIDIA Tesla P100 | 2016 | 3,584 | 16 GB HBM2 |
| NVIDIA GTX 1080 Ti | 2017 | 3,584 | 11 GB GDDR5X |
| NVIDIA Tesla T4 (15 GB) | 2018 | 2,560 | 15 GB GDDR6 |
| NVIDIA GTX 1080 (Intel Core i7-7800X @ 3.50 GHz) | 2016 | 2,560 | 8 GB GDDR5X (40 GB RAM) |
| NVIDIA GTX 1080 (Intel Core i3-6100U) | 2016 | 2,560 | 8 GB GDDR5X (4 GB RAM) |
| NVIDIA GeForce RTX 2070 (Intel Core i7-8700) | 2018 | 2,304 | 8 GB GDDR6 (16 GB RAM) |
| NVIDIA GeForce RTX 2060 (Intel Core i7-10750H) | 2019 | 1,920 | 6 GB GDDR6 (16 GB RAM) |
| NVIDIA Quadro K4200 (Intel Xeon E5-2630 v3 @ 2.40 GHz) | 2014 | 1,344 | 4 GB GDDR5 (48 GB RAM) |
| NVIDIA GeForce GTX 1050 Ti (Dell Inspiron, Intel Core i7-7700HQ @ 2.80 GHz) | 2016 | 768 | 4 GB GDDR5 (16 GB RAM) |
Table 6. Comparison of highest and lowest hardware specifications and their performance.

| Citation | Architecture | Dataset | Result | GPU | Cores |
| --- | --- | --- | --- | --- | --- |
| [77] | SFD-YOLO (enhanced YOLOv8) | Collected; pretrained on TrashCan 1.0 | 91.2% (mAP) | RTX 4090 (24 GB) | 16,384 |
| [67] | YOLOv8 (modified) | TrashCan 1.0 | 66.7%; 72% (mAP) | RTX 3090 (24 GB) | 10,496 |
| [76] | VGG-16 | Collected; pretrained on ImageNet | 86% (val. Acc) | Quadro K4200 (4 GB) | 1,344 |
| [57] | VGG-16 | Collected; pretrained on ImageNet | 90% (val. Acc) | Quadro K4200 (4 GB) | 1,344 |
| [59] | VGG-16 | Collected (incl. J-EDI) | 95% (Acc) | GTX 1050 Ti (4 GB) | 768 |
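Hardware details such as those compared in Tables 5 and 6 are frequently omitted from publications, which complicates attempts to weigh reported accuracy against computational cost. As an illustrative sketch only (assuming PyTorch is installed; the function name log_gpu_details is our own), the device name, memory, streaming-multiprocessor count, and compute capability can be logged programmatically at training time. CUDA core counts are not exposed by the runtime and would still need to be taken from vendor specifications, as was done for Table 5.

```python
import torch


def log_gpu_details() -> None:
    """Print the GPU properties that Tables 5 and 6 compare (name, memory, SM count).

    Note: the CUDA runtime does not report CUDA core counts directly; those
    figures (e.g., 16,384 for the RTX 4090) come from vendor specifications.
    """
    if not torch.cuda.is_available():
        print("No CUDA-capable GPU detected; running on CPU.")
        return

    for idx in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(idx)
        print(f"GPU {idx}: {props.name}")
        print(f"  Total memory: {props.total_memory / 1024**3:.1f} GB")
        print(f"  Streaming multiprocessors: {props.multi_processor_count}")
        print(f"  Compute capability: {props.major}.{props.minor}")


if __name__ == "__main__":
    log_gpu_details()
```

Reporting this information alongside dataset size and class count would make future comparisons of the kind attempted in Table 6 considerably more reliable.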
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
