Article

Automatic Detection of Disaster-Causing Organisms near the Waters of Nuclear Power Plant Based on LiveScope Scanning Sonar Images

1 College of Marine Living Resource Sciences and Management, Shanghai Ocean University, Shanghai 201306, China
2 Ningde Marine Center, Ministry of Natural Resources, Ningde 352000, China
3 Zhoushan Branch of National Engineering Research Center for Oceanic Fisheries, Zhoushan 316014, China
4 National Offshore Fisheries Engineering Technology Research Center, Shanghai 201306, China
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
J. Mar. Sci. Eng. 2026, 14(4), 347; https://doi.org/10.3390/jmse14040347
Submission received: 2 December 2025 / Revised: 28 January 2026 / Accepted: 5 February 2026 / Published: 11 February 2026
(This article belongs to the Section Ocean Engineering)

Abstract

Nuclear power serves as an efficient, clean, and low-carbon energy source that constitutes a significant component of the energy portfolio in numerous countries. Nuclear power plants are predominantly situated in coastal regions, utilizing seawater as the ultimate heat sink for their cooling systems. Real-time monitoring of marine organism dynamics near water intakes is essential to mitigate the risk of unit shutdowns triggered by outbreaks of disaster-causing organisms (DCOs). This study employed LiveScope scanning sonar videos captured near the Ningde Nuclear Power Plant to develop a dataset for detecting the light spot areas of DCOs. We propose a directionally optimized model, Bio-YOLO v7, which significantly enhances the detection of small targets in sonar images. The Bio-YOLO v7 model achieved precision, recall, and average precision of 85.29%, 83.28%, and 81.49%, respectively, demonstrating superior performance in identifying DCOs near nuclear power plant intakes. The light spot size of the DCOs exhibited significant periodic variations, serving as a crucial indicator for forecasting outbreak events.

1. Introduction

Despite ongoing debates related to accident risks and long-term radioactive waste management, nuclear energy is widely used as an efficient low-carbon power source and contributes significantly to the energy security of many nations [1]. For instance, it represents over 65% of the power generation in France [2,3] and plays a vital role in reducing carbon dioxide emissions and facilitating the transition toward carbon neutrality [4]. As of December 2024, mainland China had 57 nuclear power units in operation and 28 units under construction [5]. Many nuclear power plants under construction and in operation are situated in coastal regions, utilizing seawater as the primary heat sink for their cooling systems. However, outbreaks of disaster-causing organisms (DCOs), including algae, jellyfish, and shrimp, can obstruct water intakes, resulting in reduced power output or unit shutdowns and posing a significant risk to the operational safety of these facilities [6]. Therefore, it is essential to monitor the dynamics of DCOs near the intake in real time.
Conventional monitoring based on underwater HD cameras is strongly affected by illumination and turbidity, making it challenging to apply in natural environments with low light and poor visibility [7]. In contrast, sonar equipment can monitor marine organisms in complex aquatic environments, and the recorded sonar imagery can be automatically analyzed by image recognition techniques, reducing the human and material resources required for manual observation [8,9]. Existing multibeam echo processing tools, such as Echoview and DIDSON, partially automate fish detection and counting but are typically effective only under constrained conditions, such as large fish (>60 cm), unidirectional movement, and low fish densities [10,11]. In practical applications, sonar images often suffer from high noise levels and blurred edges, which substantially challenge traditional target detection methods and make accuracy, stability, and generalization critical requirements for reliable sonar image analysis.
In this context, deep learning-based approaches have shown great promise in addressing complex detection tasks in diverse environments [12,13], providing robust solutions to persistent challenges in underwater object detection, where clutter and noise interference are severe [14]. For example, Wang et al. [15] enhanced YOLOv5 with MobileNet v3 and ShuffleNet v2 for detecting fishing nets, cloths, and plastics in forward-looking sonar imagery, achieving an accuracy of 92.60%. Connolly et al. [16] employed Faster R-CNN with a ResNet50 backbone on three sonar representations (direct acoustic, acoustic shadows, and their combination) and reported F1 scores up to 0.91. Shen et al. [17] combined YOLOv5 with DeepSort to track and count fish targets in reservoir sonar images, attaining an accuracy of 83.56%. Huang et al. [18] developed a YOLOv8-based deep learning framework for real-time, automated detection of migrating eels and other fish species from sonar images, achieving an F1 score and mAP@0.50 above 0.80. Furthermore, Wang et al. [19] developed YOLO-SONAR, a model for detecting marine objects in forward-looking sonar images, achieving an mAP of 81.96% on the MDFLS dataset and 82.30% on the WHFLS dataset.
Nevertheless, existing studies still predominantly focus on fish or general marine objects, and directional optimization for small and weak biological targets remains limited, especially for intake-related risk monitoring. Consequently, deep-learning-based detection of DCOs in sonar images collected near nuclear power plant intakes remains relatively underexplored. To address these gaps, this study proposes a dedicated sonar-based detection and monitoring approach for disaster-causing organisms (DCOs) near nuclear power plant intakes. The main contributions are as follows. First, we develop Bio-YOLO v7 by integrating the Convolutional Block Attention Module (CBAM) into YOLOv7 and optimizing anchor box sizes, specifically targeting small and weak sonar light spots under noisy and blurred imaging conditions. Second, we evaluate the proposed model on field-collected LiveScope scanning sonar imagery near intake waters, reporting precision, recall, average precision (AP), and inference speed (FPS) to quantify the accuracy–efficiency trade-off for real-time deployment. Third, we demonstrate an application-oriented early-warning analysis by characterizing the temporal dynamics of DCO light spot area from continuous sonar videos, supporting automated monitoring and risk-informed decision making near nuclear power intakes.

2. Materials and Methods

2.1. Dataset

Image data were acquired using the LiveScope real-time scanning sonar system at two locations: the Dayushan observation site and the water intake observation site of the Ningde Nuclear Power Plant in Xiapu County, Fujian Province, China (Figure 1). A total of 22,017 sonar videos were collected, comprising 12,821 from Dayushan waters and 9196 from the water intake site. Each video had a duration of approximately 5 min. The data collection period spanned June 2022 to May 2023 for Dayushan and September to October 2023 for the water intake site. From these videos, 4883 images were manually annotated to construct the image dataset, which was split into training (3907 images), validation (488 images), and test (488 images) sets in an 8:1:1 ratio. To ensure coverage of both biological outbreak and non-outbreak conditions, the sonar videos were first stratified according to historical outbreak records, after which frames were randomly sampled and assigned to the three subsets. For efficiency, approximately one of every three frames was sampled; the sampling was not strictly periodic and should be regarded as quasi-uniform subsampling with an average interval of about three frames. Empty frames containing no relevant biological targets were excluded. All frames were labeled using LabelImg (v1.8.6) and exported in YOLO format. To standardize labeling criteria across varying target densities, representative low-density and high-density frames were selected as annotation exemplars and consistently referenced throughout the labeling process. Two annotators participated in labeling, with each annotated sample cross-checked by the other; disagreements were resolved through joint discussion with reference to the exemplar set.
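For illustration, the stratify-then-sample-then-split procedure can be sketched as follows. This is a minimal sketch, not the authors' code; `frames_by_video` and `outbreak_label` are hypothetical inputs mapping each video to its extracted frames and its historical outbreak status, and the modulo-3 step only approximates the quasi-uniform subsampling described above.

```python
import random

def build_splits(frames_by_video, outbreak_label, ratios=(0.8, 0.1, 0.1), seed=42):
    """Stratify videos by outbreak status, keep ~1 of every 3 frames,
    then split the pooled frames 8:1:1 into train/val/test."""
    rng = random.Random(seed)
    pooled = []
    for video, frames in frames_by_video.items():
        stratum = outbreak_label[video]        # "outbreak" or "non-outbreak"
        kept = frames[::3]                      # approximate 1-in-3 subsampling
        pooled.extend((stratum, frame) for frame in kept)
    rng.shuffle(pooled)
    n_train = int(ratios[0] * len(pooled))
    n_val = int(ratios[1] * len(pooled))
    return (pooled[:n_train],                   # training set
            pooled[n_train:n_train + n_val],    # validation set
            pooled[n_train + n_val:])           # test set
```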
In addition to the annotated frames used for model development and evaluation, the original videos were retained intact for application-oriented analyses such as biomass variation assessment. Specifically, the videos used for application analysis included 4642 and 8179 videos from Dayushan in June 2022 and May 2023, respectively, and 6820 and 2376 videos from the water intake site in September and October 2023, respectively.

2.2. Detection and Identification Model Based on YOLOv7

Although various computer-vision techniques have been applied to detection tasks [16,20,21], one-stage detectors represented by YOLO are widely adopted in practice due to their favorable balance between detection accuracy and real-time efficiency [22,23,24,25]. In particular, YOLO predicts object locations and categories in a single forward pass, avoiding the region-proposal stage used in two-stage detectors and thereby enabling higher inference efficiency for time-sensitive monitoring scenarios.
YOLOv7 follows a backbone–neck–head architecture for feature extraction, multi-scale feature aggregation, and final bounding-box prediction [26]. Its neck integrates pyramid-style aggregation (SPPCSPC) and bidirectional feature fusion (PAN), enabling richer contextual representation and multi-scale feature reuse. The backbone extracts hierarchical features, and the detection head outputs bounding-box regression and classification predictions at multiple feature-map resolutions.
Bio-YOLO v7 is developed on top of YOLOv7 by incorporating the Convolutional Block Attention Module (CBAM) and refining anchor box sizes to better detect small and weak sonar light spot targets under noisy and blurred imaging conditions. As summarized in Figure 2, the overall pipeline starts with sonar image preprocessing and data augmentation to improve robustness to background clutter and appearance variation. The detector is then optimized by enhancing feature emphasis on informative regions via CBAM and by adapting anchor priors to the dataset-specific target-scale distribution using K-means clustering. Finally, the model is trained and evaluated on the annotated dataset using standard detection metrics, and the detection outputs are further used to derive a light spot area index for video-based early-warning analysis. The technical details of each stage are provided in the following subsections.

2.2.1. Preprocessing Stage

A sufficient amount of labeled data is typically required to train a robust detector, whereas acquiring high-quality sonar images and performing manual annotation are time-consuming and labor-intensive [27,28]. In this study, a total of 4883 annotated sonar images were collected. To increase input diversity and improve robustness to appearance variation and background noise, we applied data augmentation with both photometric and geometric transformations. Specifically, photometric distortion was performed by randomly adjusting brightness, contrast, hue, and saturation and by injecting noise, while geometric augmentation included random scaling, cropping, horizontal flipping, and rotation within −10° to 10°.
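As a rough illustration, this augmentation policy can be expressed with torchvision transforms. The parameter values below are assumptions rather than the authors' settings, and in an actual detection pipeline the geometric transforms would also need to update the bounding-box coordinates.

```python
import torch
import torchvision.transforms as T

# Illustrative photometric + geometric augmentation; values are assumed,
# and box coordinates must be transformed alongside the image in a real
# detection setup.
augment = T.Compose([
    T.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3, hue=0.05),  # photometric distortion
    T.RandomHorizontalFlip(p=0.5),                      # horizontal flip
    T.RandomRotation(degrees=10),                       # rotation within -10° to 10°
    T.RandomResizedCrop(size=640, scale=(0.8, 1.0)),    # random scaling and cropping
    T.ToTensor(),
    T.Lambda(lambda x: (x + 0.01 * torch.randn_like(x)).clamp(0.0, 1.0)),  # noise injection
])
```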
Before training, each image was processed to a fixed input size of 640 × 640 pixels using cropping and padding. This operation preserves the geometric characteristics of sonar light spots and avoids the aspect-ratio distortion that may be introduced by direct resizing.
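One way to realize this crop-and-pad preprocessing is sketched below; center cropping and zero padding are assumptions, since the paper does not specify the exact placement.

```python
import numpy as np

def crop_or_pad(img, size=640, pad_value=0):
    """Fix the image to size x size by center-cropping oversized dimensions
    and padding undersized ones, without rescaling, so that light spot
    geometry is preserved."""
    h, w = img.shape[:2]
    # Center crop any dimension larger than the target size
    top = max(0, (h - size) // 2)
    left = max(0, (w - size) // 2)
    img = img[top:top + min(h, size), left:left + min(w, size)]
    # Pad any dimension smaller than the target size
    pad_h = size - img.shape[0]
    pad_w = size - img.shape[1]
    pads = ((pad_h // 2, pad_h - pad_h // 2),
            (pad_w // 2, pad_w - pad_w // 2)) + ((0, 0),) * (img.ndim - 2)
    return np.pad(img, pads, mode="constant", constant_values=pad_value)
```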

2.2.2. Model Optimization

(1)
Attention Module
CBAM (Convolutional Block Attention Module) is a lightweight attention mechanism designed to improve the feature representation of convolutional neural networks (CNNs) [29]. CBAM applies two complementary attention modules that adaptively reweight the feature maps of a convolutional layer, enhancing target recognition and effectively improving model performance. The channel attention module evaluates the correlation between channels and influences the final feature map by assigning a weight to each channel, while the spatial attention module evaluates the correlation between local spatial regions and assigns a weight to each pixel. Illustrative sketches of this module and of the anchor clustering described below are given at the end of this subsection.
(2)
Anchor Box Optimization
Conventional YOLO networks utilize fixed-size anchor boxes, which can lead to increased detection errors and missed detections when processing complex sonar images containing multi-scale targets. The anchor box dimensions are critical for matching priors to the dataset-specific box distribution, thereby affecting localization quality and overall detection performance.
To enhance the performance of the model across varying target scales, the K-means clustering algorithm was used to optimize the size of the anchor box in this study. K-means was selected because it provides a simple, data-driven way to summarize the width–height distribution of ground-truth bounding boxes into a small set of representative anchor priors, which better match dataset-specific target scales and aspect ratios without adding inference-time complexity. Specifically, the width and height of the ground-truth boxes in the dataset were collected and grouped into k clusters, and the cluster centroids were used as the optimized anchor box sizes. This method is more effectively suited to the variety of detected targets in sonar images and helps improve anchor coverage for small targets and irregular light spot shapes.
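A minimal PyTorch sketch of the standard CBAM block [29] is given below: channel attention followed by spatial attention, each producing a sigmoid weight map that rescales the input. The module structure follows the original paper; the reduction ratio and kernel size are common defaults, not values reported by the authors.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Shared MLP applied to global average- and max-pooled descriptors
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return torch.sigmoid(avg + mx)          # per-channel weights

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))  # per-pixel weights

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x):
        x = x * self.ca(x)     # reweight channels
        return x * self.sa(x)  # reweight spatial locations
```

Likewise, the anchor optimization reduces to plain k-means over ground-truth (width, height) pairs. In the sketch below, k = 9 is assumed to match YOLOv7's nine default anchors; the paper does not state the value of k.

```python
import numpy as np

def kmeans_anchors(wh, k=9, iters=100, seed=0):
    """Cluster ground-truth box (width, height) pairs with plain k-means
    and return the centroids as anchor sizes, sorted by area."""
    rng = np.random.default_rng(seed)
    centroids = wh[rng.choice(len(wh), size=k, replace=False)]
    for _ in range(iters):
        # Assign each box to the nearest centroid (Euclidean distance)
        dists = np.linalg.norm(wh[:, None, :] - centroids[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        new_centroids = np.array([
            wh[assign == j].mean(axis=0) if np.any(assign == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids[np.argsort(centroids.prod(axis=1))]
```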

2.2.3. Light Spot Area Statistics

DCOs frequently manifest as dense clusters of light spots in sonar imagery, with overlapping patches often resulting in the detection of multiple biological targets as a single entity. In this study, the ratio of the total detected light spot area to the image area is employed as an analytical index, overcoming the limitation of traditional counting-based methods by using areal coverage, as expressed in the following equation:

$$R = \frac{\sum_{i=1}^{N} A_i}{A_{image}}$$

where $R$ is the ratio of the total target light spot area to the image area (reported as a percentage), $A_i$ is the area of the $i$-th detected target, $N$ is the total number of detected targets, and $A_{image}$ is the area of the input image.
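Given per-frame detections, Equation (1) reduces to a few lines of code. The sketch below assumes axis-aligned boxes in pixel coordinates and, like the equation itself, sums raw box areas, so heavily overlapping detections are counted more than once:

```python
def light_spot_ratio(boxes, img_w, img_h):
    """R in Equation (1): total detected light spot area over the image
    area, returned as a percentage. `boxes` holds (x1, y1, x2, y2) tuples."""
    total = sum(max(0.0, x2 - x1) * max(0.0, y2 - y1) for x1, y1, x2, y2 in boxes)
    return 100.0 * total / (img_w * img_h)
```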

2.2.4. Model Training

Model training was implemented on the PyTorch (v2.0.0) platform using a workstation running Windows 10 with 32 GB RAM and two NVIDIA GeForce RTX 3080 Ti GPUs (10 GB each). Both YOLO v7 and Bio-YOLO v7 were initialized via transfer learning. Specifically, the networks were pretrained on the VOC 2007 dataset and then fine-tuned on the annotated sonar images described in Section 2.1.
To quantify the discrepancy between predictions and annotations during optimization, the training objective was defined as the sum of three loss terms, including a CIoU-based localization loss, a binary cross-entropy loss, and a mean squared error loss [30,31]. The corresponding formulations and notation are summarized in Appendix A for completeness.
Model training was conducted for a maximum of 400 epochs or until convergence. Each epoch was evaluated on the validation set, and the corresponding checkpoints and loss curves were saved. The final model weights were selected based on validation performance and were subsequently evaluated on the held-out test set to report detection metrics.

2.2.5. Performance Metrics

The performance of Bio-YOLO v7 was evaluated in terms of detection accuracy and computational efficiency using five metrics, namely precision, recall, average precision (AP), F1 score, and frames per second (FPS) [32]. Precision measures the reliability of positive detections by quantifying the proportion of correct predictions among all predicted targets, whereas recall reflects detection completeness by quantifying the proportion of annotated targets that are successfully detected. AP summarizes overall detection quality based on the precision–recall relationship across confidence thresholds. The F1 score is reported as a balanced indicator that jointly considers precision and recall. FPS characterizes inference efficiency and indicates whether the model meets real-time monitoring requirements under the specified hardware setting. The explicit mathematical definitions of these metrics are provided in Appendix A.

2.3. Classification of Underwater Environments in Multiple Scenarios

This study considers four typical sonar imaging scenarios, namely blank, sparse, dense, and outbreak, to characterize the variability of underwater environments and support qualitative interpretation of model behavior under different target density conditions. In the blank scenario, there are minimal identifiable biological targets, with only a limited number of organisms appearing transiently. In sparse scenarios, the organism count is limited and irregularly distributed, resulting in no obvious aggregation patterns. Dense scenarios exhibit pronounced aggregations, leading to high-density regions that appear as prominent patches in the sonar images. Outbreak scenarios correspond to extensive migratory aggregations, where bright biological echoes occupy most of the field of view, and severe overlap makes individual separation difficult.
Notably, the labeled dataset was constructed by stratifying videos into outbreak and non-outbreak periods based on historical records. Frames sampled from non-outbreak periods naturally include blank, sparse, and dense conditions, whereas outbreak frames primarily originate from outbreak periods. The annotated images were labeled as a single class and were not explicitly assigned scenario tags. Therefore, the four scenario categories are used only as qualitative descriptors to facilitate discussion of model behavior and engineering applicability under different target-density conditions, rather than as explicit labels for quantitative metric reporting.

3. Results

3.1. Model Detection Performance

Figure 3 presents the detection performance of the Bio-YOLO v7 model, including precision, recall, and AP, applied to sonar image data of DCOs near the nuclear power intake. The reported metrics were computed on the annotated test set sampled from both outbreak and non-outbreak periods. The x-axis represents the confidence threshold, i.e., the minimum confidence required for a detection to be considered valid; increasing this threshold generally improves precision but reduces recall. Scenario descriptions (blank, sparse, dense, and outbreak) are used for qualitative illustration rather than for per-scenario metric reporting.
The model achieved a precision of 85.29%, indicating that it identified the DCOs' sonar light spots with a relatively low rate of false positives and was able to filter out much of the noise, reducing incorrect detections. A recall of 83.28% demonstrates that the model successfully identified the majority of light spot patches, with few missed detections (false negatives). The F1 score of 0.84 reflects a balanced trade-off between precision and recall. The model achieved an AP of 81.49% on the test set, indicating strong overall detection performance that integrates the precision–recall trade-off under the chosen IoU criterion.
As summarized in Table 1, Bio-YOLO v7 increases recall (61.00% to 83.28%), AP (57.37% to 81.49%), and F1 score (0.73 to 0.84) on the test set, while precision decreases from 91.03% to 85.29%. These results indicate higher target coverage (recall) and improved integrated precision–recall performance (AP) on the test set, along with a higher F1 score reflecting a more favorable precision–recall balance, while the lower precision indicates a higher proportion of false positives among reported detections. In terms of efficiency, the inference speed drops slightly from 70.42 FPS to 63.82 FPS. An additional ablation study shows that anchor optimization contributes minimally to processing speed, recovering marginally from 63.76 FPS (CBAM-only) to 63.82 FPS (final). This indicates that the observed FPS reduction is primarily due to the computational load of the CBAM attention mechanism, while anchor box optimization improves target localization without a substantial impact on speed. Importantly, Bio-YOLO v7 still meets the real-time detection speed requirement (>30 FPS), demonstrating a practical trade-off between accuracy improvement and computational cost.

3.2. Multi-Scenario Classification and Detection in Underwater Environments

Figure 4 shows the detection capability of the Bio-YOLO v7 model across four scenarios: blank, sparse, dense, and outbreak. The model demonstrates effective performance in blank scenes, exhibiting a low incidence of false target detection, which suggests a strong capability to differentiate between noise and real targets. The model effectively detects light spot patches in sparse scenes, although it may miss light spot patches that are relatively ambiguous. In the dense scenario, the model successfully detected the majority of light spot patch targets. In the outbreak scenario, the extremely high density of light spot patches and elevated noise levels in the sonar images resulted in considerable missed detections.
Analysis revealed that 1.6% of the analyzed videos exhibited a detected spot area fixed at a single value (Figure 5a,b), likely attributable to video data corruption or persistent false detections. To illustrate typical false-positive patterns, Figure 5 presents two representative cases: Figure 5a shows a false positive triggered by water-surface clutter or surface reverberation, whereas Figure 5b shows a false positive triggered by bottom reverberation and seabed clutter.
Analysis of various scenarios indicates that Bio-YOLO v7 shows effective performance in detecting DCOs across different environmental conditions. In non-outbreak scenarios, the model is able to identify individual biological targets and aggregated light spots with strong resilience to interference. However, in the outbreak scenario, the extremely high density of light spot patches and elevated noise levels in the sonar images result in some missed detections. Despite these challenges, Bio-YOLO v7 is still capable of detecting smaller targets in complex biological outbreak conditions.

3.3. Dynamic Changes in Underwater Disaster-Causing Organism Biological Abundance

As shown in Figure 6, the light spot area ratio R (%), defined in Equation (1), was used to construct the time series. The intake-site monitoring data were recorded as consecutive LiveScope sonar video clips sampled at approximately 5 min intervals from 1 September 2023 to 10 October 2023. Bio-YOLO v7 was applied frame-wise to each clip, R was computed for each frame according to Equation (1), and the clip-level value was obtained by averaging R over all frames within the clip. The time series was constructed directly from the recorded clips without additional interpolation. For consistency, R (%) is also the quantity reported in Figure 7 and Figure 8, where the same index is presented in different visualization forms.
Figure 6 shows that R (%) exhibits pronounced periodic variability in the intake observations during September and October, with a noticeable increase after 8 September. To characterize this periodicity, a discrete Fourier transform was applied to the October intake data, which indicated a dominant period of 12.31 h.
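The dominant-period estimate can be reproduced with a plain discrete Fourier transform. The sketch below assumes a regularly sampled R (%) series (e.g., one clip-averaged value per roughly 5 min) and returns the period of the strongest nonzero-frequency component:

```python
import numpy as np

def dominant_period_hours(r_series, dt_hours=5.0 / 60.0):
    """Return the dominant period (in hours) of a regularly sampled
    R(%) series via the discrete Fourier transform."""
    x = np.asarray(r_series, dtype=float)
    x = x - x.mean()                      # remove the zero-frequency (DC) component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=dt_hours)
    peak = spectrum[1:].argmax() + 1      # skip the residual DC bin
    return 1.0 / freqs[peak]
```

Applied to a predominantly semi-diurnal signal, the returned period should fall near 12.4 h, consistent with the 12.31 h value reported above.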
In Dayushan waters, periodic changes in the light spot area were also observed, as shown in Figure 7 and Figure 8. The correlation with tides was more pronounced in May 2023 (Figure 7) than in June 2022 (Figure 8), with coefficients of determination (R²) ranging from 0.48 to 0.94. These values indicate a variable strength of tidal influence on biological abundance over time. The data suggest that tidal forces may play a significant role in plankton transport, which in turn affects species aggregation patterns. The periodic fluctuations in the light spot area further support the notion that tidal cycles influence biological abundance in the region, particularly in the Dayushan waters during high tide periods.

4. Discussion

4.1. Discussion on Model Performance

In the context of risk monitoring for DCOs near nuclear power plant intakes, missed detections (false negatives) are significantly more critical than false positives. Recall is typically more important than precision in such applications, as failure to detect DCOs could result in missed opportunities for timely intervention, potentially leading to significant economic losses or even safety accidents [33].
In comparison to the original YOLO v7 model (Table 1), Bio-YOLO v7 achieves a recall gain of 22.28 percentage points and an AP gain of 24.12 percentage points, with the F1 score increasing by 0.11, while precision decreases by 5.74 percentage points. Taken together, these changes indicate improved detection completeness and stronger integrated precision–recall performance on the test set, while the lower precision suggests a higher proportion of false positives among the reported detections. This is particularly relevant for DCO monitoring, where recall is prioritized to reduce missed detections.
While recall increased, the reduction in precision suggests that the model is more responsive to background noise and non-target regions. This results in an increase in false positives, which is a known consequence of attention mechanisms like CBAM that focus more on key features within the sonar image, enhancing sensitivity to both target regions and background noise [29]. Despite this, the increased recall makes Bio-YOLO v7 particularly effective for monitoring DCOs, where ensuring completeness of detection outweighs the occasional false detection.
The trade-off between detection accuracy and computational cost is a well-known consideration in deep learning-based detection tasks. Increasing accuracy typically increases computational burden, as employing more proposals or more complex feature extractors leads to longer inference time [34]. In Bio-YOLO v7, the FPS decreases slightly after integrating CBAM due to the additional computation, which is consistent with prior YOLO-family studies reporting that attention or extra modules can improve accuracy but may reduce inference speed; nevertheless, Bio-YOLO v7 still satisfies real-time requirements (>30 FPS) [35]. By optimizing the anchor box sizes, Bio-YOLO v7 reduces unnecessary computation during prediction [36], marginally recovering inference speed to 63.82 FPS and ensuring practical applicability for real-time monitoring.

4.2. Discussion on Scenario-Dependent Behavior and Model Generalization

The multi-scenario results in Section 3.2 indicate that the main performance bottlenecks of Bio-YOLO v7 emerge when biological echoes become highly crowded and background interference intensifies. Across the dataset, raw sonar frames often contain notable background interference and limited frame clarity, which increases the probability of both missed and incorrect detections [37,38,39]. In dense scenes, the close spacing of organisms produces overlapping light spot patches, which reduces target separability and can lower detection completeness. This limitation is typical of high-density aggregations, where occlusion and patch merging suppress true positive identification [40,41].
In outbreak conditions, this challenge is further amplified by the simultaneous presence of extreme echo crowding and elevated noise, which makes individual biological echoes harder to disentangle and increases the risk of missed detections. Nevertheless, Bio-YOLO v7 still retains sensitivity to small targets in these challenging frames. This behavior is consistent with attention-driven feature weighting, which enhances salient cues for small objects in cluttered environments, while also increasing responsiveness to complex background patterns [42]. Consequently, a recall-oriented operating point may come with a moderate rise in false positives, but this trade-off is acceptable for DCO monitoring, where detection completeness is often prioritized over minimizing false alarms.
It is also important to address the potential impact of frame-level data splitting on model generalization. Since the images were extracted from sonar videos, a frame-level split may introduce residual temporal dependence, and thus a mild optimistic bias in performance cannot be fully ruled out. Meanwhile, coastal acoustic propagation is typically multipath-rich and rapidly time-varying [43]. Reported Doppler spreads of several to tens of hertz together with millisecond-scale multipath delay spreads in similar environments [44] are consistent with short decorrelation time scales, which may reduce, but not eliminate, the chance that the model benefits from memorizing near-duplicate background patterns. To directly assess the potential optimism caused by frame-level splitting, we additionally performed a stricter time-ordered split with outbreak-period samples reserved for testing, and the relative trends remain consistent, with Bio-YOLO v7 showing higher recall and AP at a modest FPS reduction.

4.3. Discussion on Periodicity and Early Warning Implications

Complementary seven-day boxplots reveal both distributional shifts and operational risk. Weekly medians span 3.28 to 4.60 percent, with the maximum during 15 to 21 September, aligning with the post-8 September rise in DCO abundance and indicating an upward shift in central tendency. Dispersion and tail risk peak in the subsequent window of 22 to 28 September, where the interquartile range attains 2.35, the non-outlier upper whisker is near 9.0 percent, and 65 outliers occur with extremes approaching 17.8 percent. Variability then contracts in early October, with the interquartile range reduced to 1.21 during 6 to 12 October, signaling restabilization toward a lower variance regime. Over the full record, 149 outliers are identified and cluster in late September, which demarcates the principal risk interval. From a monitoring standpoint, larger interquartile ranges and denser outliers quantify heavier upper tail exposure and a higher probability of short-lived surges that can challenge intake operation. In the detection and warning module, the system will issue a watch when the seven-day interquartile range exceeds 2.0 or increases by more than one-half week-over-week, will issue an advisory when the weekly median exceeds 4.5 percent, and will issue an alert when observations surpass the contemporaneous weekly upper whisker near 9.0 percent in high-risk windows. These thresholds are proposed as preliminary heuristic criteria. The exact values will be further calibrated after deployment of the monitoring system based on additional data analysis and operational validation. Taken together, the sequence of rising median after early September, a late September crest characterized by simultaneous elevation in level and spread, and an early October reversion indicates a transient yet pronounced intensification of outbreaks of DCOs followed by a return toward a more stable state.
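The three-tier watch/advisory/alert logic described above maps directly to a simple rule set. The sketch below hard-codes the provisional thresholds quoted in the text and is intended as a heuristic starting point, not a validated alarm policy:

```python
def warning_level(weekly_median, weekly_iqr, prev_week_iqr,
                  latest_obs, weekly_upper_whisker):
    """Preliminary three-tier warning heuristic for the R(%) index.
    Thresholds are the provisional values proposed in the text."""
    if latest_obs > weekly_upper_whisker:    # e.g., ~9.0% in high-risk windows
        return "alert"
    if weekly_median > 4.5:                  # weekly median exceeds 4.5%
        return "advisory"
    if weekly_iqr > 2.0 or (prev_week_iqr > 0 and
                            weekly_iqr > 1.5 * prev_week_iqr):  # IQR > 2.0 or +50% week-over-week
        return "watch"
    return "normal"
```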
Figure 7 and Figure 8 present the daily detections of light spot area in Dayushan waters for June 2022 and May 2023, alongside the tidal variations observed on those dates. In May, the total area of the target spots in the Dayushan Sea exhibited a distinct daily cyclic variation, similar to the intake waters. Tidal variations may influence this, as indicated by coefficients of determination (r²) ranging from 0.48 to 0.94 between the Fourier-fitted light spot series and the tidal data. This is consistent with the findings of Xia et al. [45] that tides affect fish behavior, while diurnal variation affects feeding and predator–prey dynamics, which subsequently drive changes in biological abundance. Sampling results across various tidal stages and times of day indicated alterations in species composition and abundance, particularly during tidal transitions. Certain zooplanktivorous species emerged as the most abundant indicator species during high tide, exhibiting increased biological abundance during these periods. This aggregation aligns with internal-wave-driven plankton transport mechanisms observed in tidally forced ecosystems [46]. High tides transport significant quantities of zooplankton to the breaking wave region, enhancing the feeding efficiency of zooplanktivorous species [47]. By contrast, in June 2022 the periodic signal is visually less prominent; the Fourier fits still capture a semi-diurnal component with moderate skill, but the tide–fit association is weaker and case dependent, exemplified by r² = 0.761 on 16 June, suggesting partial tidal control superimposed on other drivers. Relative to May, the wider confidence bands and larger intra-day variance in June indicate noisier forcing and possible aliasing between biological and physical processes, so the diurnal organization of abundance persists but is more weakly expressed. The statistical analysis of changes in the total area of light spots can provide scientific references for the automatic monitoring of DCOs in the waters surrounding the nuclear power plant and for early warning systems.

5. Conclusions and Outlook

Real-time monitoring of DCOs in the marine environment adjacent to nuclear power plant intakes is crucial for ensuring operational safety. This study developed an enhanced detection model, Bio-YOLO v7, which incorporates an attention mechanism module and optimizes the anchor box dimensions. The model was used to detect sonar images of DCOs in the waters surrounding a nuclear power plant, employing an NVIDIA RTX 3080 Ti (10 GB) for training and testing. It achieved a detection speed exceeding 30 FPS, meeting the requirements for real-time monitoring applications. Compared with the baseline YOLO v7, Bio-YOLO v7 shows a moderate decrease in precision and inference speed (FPS), reflecting the typical accuracy–efficiency trade-off introduced by additional modules, while still satisfying real-time deployment requirements. Despite the inherent challenges of sonar image quality, such as noise and low clarity, the model demonstrated robust performance in average precision, detection precision, and detection speed. Furthermore, analysis of the total spot area indicated a clear daily cyclic variation in DCO abundance, consistent with the tidal cycle. This finding offers significant insights for predicting biological outbreaks and adjusting detection thresholds.
Future work will focus on several key areas: (1) constructing large-scale, high-quality underwater sonar biological abundance datasets; (2) increasing the dataset size and adopting video-level data splitting to avoid potential data leakage, thereby further improving the model’s generalization across different environments and sonar systems; (3) developing lightweight algorithms for deployment on embedded systems to facilitate real-time processing on mobile platforms; (4) modeling nontidal drivers of DCO abundance and integrating them as covariates in the monitoring and early warning module; and (5) improving precision and inference efficiency (FPS) through module-level optimization and deployment-oriented acceleration, so as to further reduce false alarms while maintaining real-time performance. Additionally, investigating the movement patterns of sonar light spots for DCOs with limited swim ability or wave-following behaviors could lead to more precise outbreak prediction models. These efforts will ultimately enhance the accuracy, efficiency, and practicality of automated early warning systems for nuclear power plant safety.

Author Contributions

Conceptualization, R.W.; methodology, J.Z.; validation, G.Y., S.W., A.C. and Z.L.; investigation, W.L., Y.X. and X.C.; resources, Y.G., X.C. and X.W.; data curation, W.L., Y.X., Y.G. and X.W.; writing—original draft preparation, G.Y. and S.W.; visualization, A.C. and Z.L.; writing—review and editing, C.L. and J.Z.; supervision, R.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.

Acknowledgments

We gratefully acknowledge the staff of the Ningde Marine Center, Ministry of Natural Resources, for their invaluable assistance and support during the field experiment with sonar.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A. Loss Functions and Evaluation Metrics

Appendix A.1. Loss Functions Used in Training

This appendix summarizes the loss terms and notation used in model optimization. Let $B_p$ and $B_{gt}$ denote the predicted and ground truth bounding boxes, respectively.

Appendix A.1.1. Intersection over Union (IoU)

$$\mathrm{IoU}(B_p, B_{gt}) = \frac{\mathrm{area}(B_p \cap B_{gt})}{\mathrm{area}(B_p \cup B_{gt})}$$

Appendix A.1.2. Complete IoU (CIoU) and CIoU Loss

CIoU is used to improve bounding box localization by considering not only overlap but also center distance and aspect-ratio consistency. The CIoU metric is defined as

$$\mathrm{CIoU} = \mathrm{IoU} - \frac{\rho^2(b_p, b_{gt})}{c^2} - \alpha v$$

and the corresponding loss is

$$L_{CIoU} = 1 - \mathrm{CIoU}$$

Here, $b_p$ and $b_{gt}$ are the center points of $B_p$ and $B_{gt}$, $\rho(\cdot)$ denotes the Euclidean distance, and $c$ is the diagonal length of the smallest enclosing box covering both $B_p$ and $B_{gt}$. The aspect-ratio term $v$ and the trade-off coefficient $\alpha$ are defined as

$$v = \frac{4}{\pi^2}\left(\arctan\frac{w_{gt}}{h_{gt}} - \arctan\frac{w_p}{h_p}\right)^2$$

$$\alpha = \frac{v}{(1 - \mathrm{IoU}) + v}$$

where $w_p$, $h_p$ are the width and height of the predicted box, and $w_{gt}$, $h_{gt}$ are those of the ground truth box.
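For reference, the formulas above can be implemented directly in PyTorch. The sketch below is a generic CIoU loss for axis-aligned (x1, y1, x2, y2) boxes, not the authors' training code:

```python
import math
import torch

def ciou_loss(pred, target, eps=1e-7):
    """CIoU loss (L_CIoU = 1 - CIoU) for (x1, y1, x2, y2) boxes of shape (N, 4)."""
    # Intersection and union
    ix1 = torch.max(pred[:, 0], target[:, 0])
    iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2])
    iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)
    # Squared center distance over squared diagonal of the enclosing box
    cxp = (pred[:, 0] + pred[:, 2]) / 2
    cyp = (pred[:, 1] + pred[:, 3]) / 2
    cxt = (target[:, 0] + target[:, 2]) / 2
    cyt = (target[:, 1] + target[:, 3]) / 2
    ex1 = torch.min(pred[:, 0], target[:, 0])
    ey1 = torch.min(pred[:, 1], target[:, 1])
    ex2 = torch.max(pred[:, 2], target[:, 2])
    ey2 = torch.max(pred[:, 3], target[:, 3])
    rho2 = (cxp - cxt) ** 2 + (cyp - cyt) ** 2
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + eps
    # Aspect-ratio consistency term v and trade-off coefficient alpha
    wp = (pred[:, 2] - pred[:, 0]).clamp(min=eps)
    hp = (pred[:, 3] - pred[:, 1]).clamp(min=eps)
    wt = (target[:, 2] - target[:, 0]).clamp(min=eps)
    ht = (target[:, 3] - target[:, 1]).clamp(min=eps)
    v = (4 / math.pi ** 2) * (torch.atan(wt / ht) - torch.atan(wp / hp)) ** 2
    alpha = v / (1 - iou + v + eps)
    return 1 - (iou - rho2 / c2 - alpha * v)
```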

Appendix A.1.3. Binary Cross-Entropy Loss (BCE)

BCELoss quantifies the discrepancy between predicted probabilities and binary labels through a cross-entropy formulation.
$$L_{BCE} = -\frac{1}{N}\sum_{i=1}^{N}\left[y_i \log p_i + (1 - y_i)\log(1 - p_i)\right]$$

where $y_i$ is the true label of the $i$-th sample (0 or 1), $p_i$ is the predicted probability of the positive class (class 1) for the $i$-th sample, and $N$ is the total number of samples.

Appendix A.1.4. Mean Squared Error Loss (MSE)

MSELoss computes the mean squared difference between predicted values and target values, assigning larger penalties to larger deviations.
$$L_{MSE} = \frac{1}{N}\sum_{i=1}^{N}(\hat{y}_i - y_i)^2$$

where $N$ is the number of samples and $\hat{y}_i$ and $y_i$ are the predicted and target values, respectively.

Appendix A.1.5. Overall Training Objective

The total loss used for optimization is expressed as
$$L_{total} = L_{CIoU} + L_{BCE} + L_{MSE}$$

Appendix A.2. Evaluation Metrics

Appendix A.2.1. Precision and Recall

Precision and recall are standard metrics for evaluating detection correctness and completeness. They are defined as
$$\mathrm{Precision} = \frac{TP}{TP + FP}$$

$$\mathrm{Recall} = \frac{TP}{TP + FN}$$

where $TP$, $FP$, and $FN$ denote true positives, false positives, and false negatives, respectively.

Appendix A.2.2. F1 Score

The F1 score summarizes the trade-off between precision and recall using their harmonic mean:
$$F1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$

Appendix A.2.3. Average Precision (AP)

Average precision (AP) is defined as the area under the precision–recall curve obtained by sweeping the confidence threshold:
$$AP = \int_{0}^{1} P(r)\,dr$$

where $P(r)$ denotes precision as a function of recall $r$.
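In practice, AP is computed by step-integrating an interpolated precision–recall curve rather than evaluating the integral analytically. A minimal NumPy sketch using VOC-style all-point interpolation (an assumption, since the paper does not state the interpolation scheme) is:

```python
import numpy as np

def average_precision(recall, precision):
    """Area under the precision-recall curve via step integration,
    with precision made monotonically non-increasing first."""
    mrec = np.concatenate(([0.0], np.asarray(recall), [1.0]))
    mpre = np.concatenate(([0.0], np.asarray(precision), [0.0]))
    for i in range(mpre.size - 1, 0, -1):      # enforce monotone precision
        mpre[i - 1] = max(mpre[i - 1], mpre[i])
    idx = np.where(mrec[1:] != mrec[:-1])[0]   # points where recall changes
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))
```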

Appendix A.2.4. Frames per Second (FPS)

$$\mathrm{FPS} = \frac{1}{T}$$

where $T$ is the average inference time per frame in seconds.

References

  1. Dassonville, C.; Siemen, T. Nuclear Energy: The Arguments for the Debate; Competence Centre Just Climate: Brussels, Belgium, 2022. [Google Scholar]
  2. Plackett, B. Why France’s Nuclear Industry Faces Uncertainty. Nature 2022, 22, 2817. [Google Scholar] [CrossRef] [PubMed]
  3. International Energy Agency (IEA). World Energy Outlook 2024; IEA: Paris, France, 2024; Available online: https://iea.blob.core.windows.net/assets/140a0470-5b90-4922-a0e9-838b3ac6918c/WorldEnergyOutlook2024.pdf (accessed on 10 November 2025).
  4. Tauseef Hassan, S.; Danish; Awais Baloch, M.; Bui, Q.; Hashim Khan, N. The heterogeneous impact of geopolitical risk and environment-related innovations on greenhouse gas emissions: The role of nuclear and renewable energy in the circular economy. Gondwana Res. 2024, 127, 144–155. [Google Scholar] [CrossRef]
  5. China Nuclear Energy Association (CNEA); China Institute of Strategic Studies (CISS); China Institute of Science and Technology Evaluation (CISTE). China Nuclear Energy Development Report (2025); Social Science Literature Publishing House: Beijing, China, 2025. [Google Scholar]
  6. Huo, J.; Li, C.; Liu, S.; Sun, L.; Yang, L.; Song, Y.; Li, J. Biomass prediction method of nuclear power cold source disaster based on deep learning. Front. Mar. Sci. 2023, 10, 1100396. [Google Scholar] [CrossRef]
  7. Han, F.; Yao, J.; Zhu, H.; Wang, C. Marine organism detection and classification from underwater vision based on the deep CNN method. Math. Probl. Eng. 2020, 2020, 3937580. [Google Scholar] [CrossRef]
  8. Li, D.; Du, Z.; Wang, Q.; Wang, J.; Du, L. Recent advances in acoustic technology for aquaculture: A review. Rev. Aquac. 2024, 16, 357–381. [Google Scholar] [CrossRef]
  9. Chai, Y.; Yu, H.; Xu, L.; Li, D.; Chen, Y. Deep learning algorithms for sonar imagery analysis and its application in aquaculture: A review. IEEE Sens. J. 2023, 23, 28549–28563. [Google Scholar] [CrossRef]
  10. Shahrestani, S.; Bi, H.; Lyubchich, V.; Boswell, K.M. Detecting a nearshore fish parade using the adaptive resolution imaging sonar (ARIS): An automated procedure for data analysis. Fish. Res. 2017, 191, 190–199. [Google Scholar] [CrossRef]
  11. Eggleston, M.R.; Milne, S.W.; Ramsay, M.; Kowalski, K.P. Improved fish counting method accurately quantifies high-density fish movement in dual-frequency identification sonar data files from a coastal wetland environment. N. Am. J. Fish. Manag. 2020, 40, 883–892. [Google Scholar] [CrossRef]
  12. Li, Q.; Wang, Z.; Li, G.; Zhou, C.; Chen, P.; Yang, C. An accurate and adaptable deep learning-based solution to floating litter cleaning up and its effectiveness on environmental recovery. J. Clean. Prod. 2023, 388, 135816. [Google Scholar] [CrossRef]
  13. Liu, L.; Wu, M.; Zhao, J.; Bing, L.; Zheng, L.; Luan, S.; Mao, Y.; Xue, M.; Liu, J.; Liu, B. Deep learning-based monitoring of offshore wind turbines in Shandong Sea of China and their location analysis. J. Clean. Prod. 2024, 434, 140415. [Google Scholar] [CrossRef]
  14. Li, J.; Xu, W.; Deng, L.; Xiao, Y.; Han, Z.; Zheng, H. Deep learning for visual recognition and detection of aquatic animals: A review. Rev. Aquac. 2023, 15, 409–433. [Google Scholar] [CrossRef]
  15. Wang, J.; Feng, C.; Wang, L.; Li, G.; He, B. Detection of weak and small targets in forward-looking sonar image using multi-branch shuttle neural network. IEEE Sens. J. 2022, 22, 6772–6783. [Google Scholar] [CrossRef]
  16. Connolly, R.M.; Jinks, K.I.; Shand, A.; Taylor, M.D.; Gaston, T.F.; Becker, A.; Jinks, E.L. Out of the shadows: Automatic fish detection from acoustic cameras. Aquat. Ecol. 2023, 57, 833–844. [Google Scholar] [CrossRef]
  17. Shen, W.; Liu, M.; Lu, Q.; Yin, Z.; Zhang, J. A fish target identification and counting method based on DIDSON sonar and YOLOv5 model. Fishes 2024, 9, 346. [Google Scholar] [CrossRef]
  18. Huang, T.; Zang, X.; Kondyukov, G.; Hou, Z.; Peng, G.; Pander, J.; Knott, J.; Geist, J.; Melesse, M.B.; Jacobson, P.; et al. Towards Automated and Real-Time Multi-Object Detection of Anguilliform Fishes from Sonar Data Using YOLOv8 Deep Learning Algorithm. Ecol. Inform. 2025, 91, 103381. [Google Scholar] [CrossRef]
  19. Wang, Z.; Guo, J.; Zhang, S.; Xu, N. Marine Object Detection in Forward-Looking Sonar Images via Semantic-Spatial Feature Enhancement. Front. Mar. Sci. 2025, 12, 1539210. [Google Scholar] [CrossRef]
  20. Xu, Z.; Cheng, X.E. Zebrafish tracking using convolutional neural networks. Sci. Rep. 2017, 7, 42815. [Google Scholar] [CrossRef]
  21. Wei, Y.; Ji, L.; An, D. Review on quantitative methods of fish school behaviors. Rev. Aquac. 2025, 17, e70023. [Google Scholar] [CrossRef]
  22. Mao, W.-L.; Chen, W.-C.; Fathurrahman, H.I.K.; Lin, Y.-H. Deep learning networks for real-time regional domestic waste detection. J. Clean. Prod. 2022, 344, 131096. [Google Scholar] [CrossRef]
  23. Chen, Y.; Luo, A.; Cheng, M.; Wu, Y.; Zhu, J.; Meng, Y.; Tan, W. Classification and recycling of recyclable garbage based on deep learning. J. Clean. Prod. 2023, 414, 137558. [Google Scholar] [CrossRef]
  24. Vijayalakshmi, M.; Sasithradevi, A. AquaYOLO: Advanced YOLO-based fish detection for optimized aquaculture pond monitoring. Sci. Rep. 2025, 15, 6151. [Google Scholar] [CrossRef] [PubMed]
  25. Zhou, C.; Wang, C.; Sun, D.; Hu, J.; Ye, H. An automated lightweight approach for detecting dead fish in a recirculating aquaculture system. Aquaculture 2025, 594, 741433. [Google Scholar] [CrossRef]
  26. Wang, C.-Y.; Bochkovskiy, A.; Liao, H.-Y.M. YOLOv7: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors. arXiv 2022, arXiv:2207.02696. [Google Scholar]
  27. Liu, F.; Xu, X.; Qing, C.; Jin, J. Probability Matrix SVM+ Learning for Complex Action Recognition. In Proceedings of the International Conference on Internet Multimedia Computing and Service (IMCS), Nanjing, China, 17 August 2018; Springer: Singapore, 2018; pp. 403–410. [Google Scholar] [CrossRef]
  28. Meng, L.; Hirayama, T.; Oyanagi, S. Underwater-drone with panoramic camera for automatic fish recognition based on deep learning. IEEE Access 2018, 6, 17880–17886. [Google Scholar] [CrossRef]
  29. Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Computer Vision—ECCV 2018; Springer: Cham, Switzerland, 2018; pp. 3–19. [Google Scholar] [CrossRef]
  30. Hu, J.; Zhao, D.; Zhang, Y.; Zhou, C.; Chen, W. Real-time nondestructive fish behavior detecting in mixed polyculture system using deep-learning and low-cost devices. Expert Syst. Appl. 2021, 178, 115051. [Google Scholar] [CrossRef]
  31. Xiao, Y.; Yang, H.; Dai, D.; Wang, H.; Shan, Z.; Wu, H. CKAN-YOLOv8: A lightweight multi-task network for underwater target detection and segmentation in side-scan sonar. J. Mar. Sci. Eng. 2025, 13, 936. [Google Scholar] [CrossRef]
  32. Huang, Z.; Sui, B.; Wen, J.; Jiang, G. An intelligent ship image/video detection and classification method with improved regressive deep convolutional neural network. Complexity 2020, 2020, 1520872. [Google Scholar] [CrossRef]
  33. Xu, W.; Yang, R.; Karthikeyan, R.; Shi, Y.; Su, Q. GBiDC-PEST: A novel lightweight model for real-time multiclass tiny pest detection and mobile platform deployment. J. Integr. Agric. 2024, 24, 2749–2769. [Google Scholar] [CrossRef]
  34. Huang, J.; Rathod, V.; Sun, C.; Zhu, M.; Korattikara, A.; Fathi, A.; Fischer, I.; Wojna, Z.; Song, Y.; Guadarrama, S.; et al. Speed/Accuracy Trade-Offs for Modern Convolutional Object Detectors. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; IEEE: Honolulu, HI, USA, 2017; pp. 3296–3297. [Google Scholar]
  35. Liu, Z.; Han, W.; Xu, H.; Gong, K.; Zeng, Q.; Zhao, X. Research on Vehicle Detection Based on Improved YOLOX_S. Sci. Rep. 2023, 13, 23081. [Google Scholar] [CrossRef]
  36. Zi, N.; Li, X.-M.; Gade, M.; Fu, H.; Min, S. Ocean eddy detection based on YOLO deep learning algorithm by synthetic aperture radar data. Remote Sens. Environ. 2024, 307, 114139. [Google Scholar] [CrossRef]
  37. Shen, W.; Peng, Z.; Zhang, J. Identification and counting of fish targets using adaptive resolution imaging sonar. J. Fish Biol. 2024; early view. [Google Scholar] [CrossRef] [PubMed]
  38. Pala, A.; Oleynik, A.; Malde, K.; Handegard, N.O. Self-supervised feature learning for acoustic data analysis. Ecol. Inform. 2024, 84, 102878. [Google Scholar] [CrossRef]
  39. Baletaud, F.; Villon, S.; Gilbert, A.; Come, J.-M.; Fiat, S.; Iovan, C.; Vigliola, L. Automatic detection, identification and counting of deep-water snappers on underwater baited video using deep learning. Front. Mar. Sci. 2025, 12, 1476616. [Google Scholar] [CrossRef]
  40. Zhang, Z.; Han, Q.; Liu, W.; Zhao, Y. A lightweight network based on SCMYOLO for accurate and efficient underwater fish detection. ICES J. Mar. Sci. 2025, 82, fsaf038. [Google Scholar] [CrossRef]
  41. Córdova, M.; Sokolova, M.; van Helmond, A.; Mencarelli, A.; Kootstra, G. Multi-stage image-based approach for fish detection and weight estimation. Biosyst. Eng. 2025, 257, 104239. [Google Scholar] [CrossRef]
  42. Duan, R.; Wang, Y.; Chen, X.; Li, S. An enhanced algorithm for cell-level anomaly segmentation in photovoltaic solar panels using electroluminescence imaging. Energy 2025, 331, 136711. [Google Scholar] [CrossRef]
  43. Yoo, K.B.; Edelmann, G.F. Low Complexity Multipath and Doppler Compensation for Direct-Sequence Spread Spectrum Signals in Underwater Acoustic Communication. Appl. Acoust. 2021, 180, 108094. [Google Scholar] [CrossRef]
  44. Guo, H.; Abdi, A.; Song, A.; Badiey, M. Delay and Doppler Spreads in Underwater Acoustic Particle Velocity Channels. J. Acoust. Soc. Am. 2011, 129, 2015–2025. [Google Scholar] [CrossRef]
  45. Xia, W.; Miao, Z.; Wang, S.; Chen, K.; Liu, Y.; Xie, S. Influence of tidal and diurnal rhythms on fish assemblages in the surf zone of sandy beaches. Fish. Oceanogr. 2023, 32, 448–460. [Google Scholar] [CrossRef]
  46. Robinson, E.; Hosegood, P.; Bolton, A. Dynamical oceanographic processes impact on reef manta ray behaviour: Extreme Indian Ocean Dipole influence on local internal wave dynamics at a remote tropical atoll. Prog. Oceanogr. 2023, 218, 103129. [Google Scholar] [CrossRef]
  47. Krumme, U.; Liang, T.-H. Tidal-induced changes in a copepod-dominated zooplankton community in a macrotidal mangrove channel in Northern Brazil. Zool. Stud. 2004, 43, 404–414. [Google Scholar]
Figure 1. Study area.
Figure 2. Schematic diagram of image acquisition system, image preprocessing stage, model improvement stage, model training stage, and underwater disaster-causing organism detection.
Figure 3. Network Model Results. (a) Precision; (b) Recall; (c) AP; (d) F1 score.
Figure 4. Detection results of light spot area under different scenarios. (a) Blank scenario; (b) sparse scenario; (c) dense scenario; (d) outbreak scenario.
Figure 5. Representative false-positive cases in the video-based detection results: (a) false positive triggered by water-surface clutter or surface reverberation; (b) false positive triggered by bottom reverberation and seabed clutter.
Figure 6. Periodic variation in light spot area at the intake from September to October.
Figure 7. Daily periodic variation in the light spot area at the Dayushan station in May.
Figure 8. Daily periodic variation in the light spot area at the Dayushan station in June.
Table 1. Comparison between Bio-YOLO v7 Model and Original Model.
Metric       YOLO v7    Bio-YOLO v7
Precision    91.03%     85.29%
Recall       61.00%     83.28%
AP           57.37%     81.49%
F1 score     0.73       0.84
FPS          70.42      63.82