Search Results (29)

Search Parameters:
Keywords = underwater fishing net

21 pages, 1681 KiB  
Article
Cross-Modal Complementarity Learning for Fish Feeding Intensity Recognition via Audio–Visual Fusion
by Jian Li, Yanan Wei, Wenkai Ma and Tan Wang
Animals 2025, 15(15), 2245; https://doi.org/10.3390/ani15152245 - 31 Jul 2025
Viewed by 300
Abstract
Accurate evaluation of fish feeding intensity is crucial for optimizing aquaculture efficiency and the healthy growth of fish. Previous methods rely mainly on a single modality (e.g., audio or visual). However, the complex underwater environment poses significant challenges for single-modal monitoring: visual systems are severely affected by water turbidity, lighting conditions, and fish occlusion, while acoustic systems suffer from background noise. Although existing studies have attempted to combine acoustic and visual information, most adopt simple feature-level fusion strategies, which fail to fully exploit the complementary advantages of the two modalities under different environmental conditions and lack dynamic mechanisms for evaluating modal reliability. To address these problems, we propose the Adaptive Cross-modal Attention Fusion Network (ACAF-Net), a cross-modal complementarity learning framework with a two-stage attention fusion mechanism: (1) a cross-modal enhancement stage that enriches individual representations through Low-rank Bilinear Pooling and learnable fusion weights; and (2) an adaptive attention fusion stage that dynamically weights acoustic and visual features based on complementarity and environmental reliability. Our framework incorporates dimension-alignment strategies and attention mechanisms to capture the temporal–spatial complementarity between acoustic feeding signals and visual behavioral patterns. Extensive experiments demonstrate superior performance compared to single-modal and conventional fusion approaches, with a 6.4% accuracy improvement. The results validate the effectiveness of exploiting cross-modal complementarity for underwater behavioral analysis and establish a foundation for intelligent aquaculture monitoring systems.
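The core of the adaptive fusion stage, reliability-based weighting of the two modal feature vectors, can be sketched in a few lines. The scoring inputs and fusion rule below are illustrative assumptions, not ACAF-Net's actual layers:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of reliability scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def adaptive_fuse(audio_feat, visual_feat, audio_score, visual_score):
    """Fuse two aligned feature vectors, weighting each modality by its
    estimated reliability (e.g., low weight for a turbid visual frame)."""
    w_audio, w_visual = softmax([audio_score, visual_score])
    return [w_audio * a + w_visual * v
            for a, v in zip(audio_feat, visual_feat)]
```

With equal reliability scores this reduces to simple averaging; as one modality's score grows, the fused vector leans toward that modality.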

16 pages, 3335 KiB  
Article
An Improved DeepSORT-Based Model for Multi-Target Tracking of Underwater Fish
by Shengnan Liu, Jiapeng Zhang, Haojun Zheng, Cheng Qian and Shijing Liu
J. Mar. Sci. Eng. 2025, 13(7), 1256; https://doi.org/10.3390/jmse13071256 - 28 Jun 2025
Viewed by 537
Abstract
Precise identification and quantification of fish movement states are of significant importance for conducting fish behavior research and guiding aquaculture production, with object tracking serving as a key technical approach for achieving behavioral quantification. The traditional DeepSORT algorithm has been widely applied to object tracking tasks; however, in practical aquaculture environments, high-density cultured fish exhibit visual characteristics such as similar textural features and frequent occlusions, leading to high misidentification rates and frequent ID switching during tracking. This study proposes an underwater fish object tracking method based on an improved DeepSORT algorithm, utilizing ResNet as the backbone network, embedding Deformable Convolutional Networks v2 to enhance adaptive receptive field capabilities, introducing a Triplet Loss function to improve discrimination among similar fish, and integrating a Convolutional Block Attention Module to enhance key feature learning. Finally, by combining the aforementioned improvement modules, the ReID feature extraction network was redesigned and optimized. Experimental results demonstrate that the improved algorithm significantly enhances tracking performance under frequent occlusion, with the MOTA metric improving from 64.26% to 66.93% and the IDF1 metric improving from 53.73% to 63.70% compared to the baseline algorithm, providing more reliable technical support for underwater fish behavior analysis.
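The Triplet Loss used to separate similar-looking fish in the ReID embedding space has a compact definition. This standalone sketch uses Euclidean distance and a generic margin, independent of the paper's network:

```python
def euclidean(u, v):
    """Euclidean distance between two embedding vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Penalize an anchor embedding that sits closer to a different fish
    (negative) than to another view of the same fish (positive)."""
    return max(euclidean(anchor, positive)
               - euclidean(anchor, negative) + margin, 0.0)
```

The loss is zero once the positive is closer than the negative by at least the margin, so training effort concentrates on confusable pairs.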

27 pages, 6917 KiB  
Article
LatentResNet: An Optimized Underwater Fish Classification Model with a Low Computational Cost
by Muhab Hariri, Ercan Avsar and Ahmet Aydın
J. Mar. Sci. Eng. 2025, 13(6), 1019; https://doi.org/10.3390/jmse13061019 - 23 May 2025
Viewed by 527
Abstract
Efficient deep learning models are crucial in resource-constrained environments, especially for marine image classification in underwater monitoring and biodiversity assessment. This paper presents LatentResNet, a computationally lightweight deep learning model involving two key innovations: (i) using the encoder from the proposed LiteAE, a lightweight autoencoder for image reconstruction, as input to the model to reduce the spatial dimension of the data, and (ii) integrating a DeepResNet architecture with lightweight feature extraction components to refine encoder-extracted features. LiteAE demonstrated high-quality image reconstruction within a single training epoch. LatentResNet variants (large, medium, and small) are evaluated on ImageNet-1K to assess their efficiency against state-of-the-art models and on Fish4Knowledge for domain-specific performance. On ImageNet-1K, the large variant achieves 66.3% top-1 accuracy (1.7 M parameters, 0.2 GFLOPs); the medium and small variants reach 60.8% (1 M, 0.1 GFLOPs) and 54.8% (0.7 M, 0.06 GFLOPs), respectively. After fine-tuning on Fish4Knowledge, the large, medium, and small variants achieve 99.7%, 99.8%, and 99.7% accuracy, respectively, outperforming benchmark models trained on the same dataset, with up to 97.4% and 92.8% reductions in parameters and FLOPs, respectively. The results demonstrate LatentResNet’s effectiveness as a lightweight solution for real-world marine applications, offering accurate underwater vision at low computational cost.
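The dimensionality effect of feeding an encoder's output to the classifier, a smaller spatial grid and hence cheaper downstream computation, can be illustrated with a toy pooling step. The real LiteAE encoder is learned; this only mimics the spatial reduction:

```python
def avg_pool2d(image, k=2):
    """Toy spatial downsampling of a 2-D intensity grid by k x k averaging,
    standing in for the dimension reduction an encoder provides
    before classification."""
    h, w = len(image), len(image[0])
    return [[sum(image[r + i][c + j] for i in range(k) for j in range(k)) / (k * k)
             for c in range(0, w - w % k, k)]
            for r in range(0, h - h % k, k)]
```

Halving each spatial dimension quarters the number of positions the rest of the network must process, which is where most of the FLOP savings come from.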
(This article belongs to the Section Ocean Engineering)

15 pages, 5141 KiB  
Article
Speed and Energy Efficiency of a Fish Robot Featuring Exponential Patterns of Control
by Ivan Tanev
Actuators 2025, 14(3), 119; https://doi.org/10.3390/act14030119 - 28 Feb 2025
Cited by 1 | Viewed by 666
Abstract
Fish in nature have evolved swimming that is more efficient than that of propeller-driven autonomous underwater vehicles. Motivated by this, we discuss a bionic (bio-mimetic) autonomous underwater vehicle—a fish robot—that mimics the swimming of the rainbow trout (Oncorhynchus mykiss). The robot consists of three segments (anterior, posterior, and tail) connected via two actuated hinge joints (anterior and posterior). We divided the half-period of undulation of the robot into two phases—thrusting and braking. In addition, we hypothesized that an asymmetric duration—a short period of thrusting and a long period of braking—implemented as an exponential (rather than “canonical”, sinusoidal) control would favorably affect the net propulsion of these two phases. The experimental results verified that, compared to sinusoidal undulation, the proposed exponential control increases the speed of the robot by a factor of 1.1 to 4 across undulation frequencies between 0.4 Hz and 2 Hz, and improves energy efficiency by a factor of 1.1 to 3.6 over the same range.
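The contrast between a canonical sinusoidal command and an asymmetric exponential one can be sketched as joint-angle waveforms. The waveform shape, amplitude, and steepness constant k below are illustrative assumptions, not the paper's actual control law:

```python
import math

def sinusoidal_angle(t, freq=1.0, amp=30.0):
    """Canonical sinusoidal joint-angle command (degrees)."""
    return amp * math.sin(2.0 * math.pi * freq * t)

def exponential_angle(t, freq=1.0, amp=30.0, k=5.0):
    """Asymmetric exponential command: a steep early swing (short thrust)
    followed by saturation (long brake) within each half-period."""
    period = 1.0 / freq
    phase = (t % period) / period            # position within one full period
    half = (phase * 2.0) % 1.0               # position within the half-period
    sign = 1.0 if phase < 0.5 else -1.0      # alternate stroke direction
    rise = (1.0 - math.exp(-k * half)) / (1.0 - math.exp(-k))
    return sign * amp * (2.0 * rise - 1.0)   # sweep from -amp toward +amp
```

Most of the stroke happens early in each half-period (the thrust), after which the angle saturates (the brake), which is the asymmetry the abstract hypothesizes to be favorable.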
(This article belongs to the Section Actuators for Robotics)

22 pages, 1174 KiB  
Article
Dual Stream Encoder–Decoder Architecture with Feature Fusion Model for Underwater Object Detection
by Mehvish Nissar, Amit Kumar Mishra and Badri Narayan Subudhi
Mathematics 2024, 12(20), 3227; https://doi.org/10.3390/math12203227 - 15 Oct 2024
Cited by 2 | Viewed by 1618
Abstract
Underwater surveillance is an emerging and fascinating exploratory domain, particularly in monitoring aquatic ecosystems. This field offers valuable insights into underwater behavior and activities, which have broad applications across various domains. Specifically, underwater surveillance involves detecting and tracking moving objects within aquatic environments. However, the complex properties of water make object detection a challenging task. Background subtraction is a commonly employed technique for detecting local changes in video scenes by segmenting images into background and foreground to isolate the object of interest. Within this context, we propose an innovative dual-stream encoder–decoder framework based on the VGG-16 and ResNet-50 models for detecting moving objects in underwater frames. The network includes a feature fusion module that effectively extracts multiple-level features. Using a limited set of images and performing training in an end-to-end manner, the proposed framework yields accurate results without post-processing. The efficacy of the proposed technique is confirmed through visual and quantitative comparisons with eight cutting-edge methods on two standard databases. The first, the Underwater Change Detection Dataset, includes five challenges, each comprising approximately 1000 frames recorded under various underwater conditions. The second, the Fish4Knowledge dataset, from which we also considered five challenges, contains a varying number of frames per category, typically exceeding 1000, recorded in different aquatic settings. Our proposed method surpasses all methods used for comparison, attaining an average F-measure of 0.98 on the Underwater Change Detection Dataset and 0.89 on the Fish4Knowledge dataset.
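Background subtraction, the classical baseline this dual-stream network builds on, reduces to maintaining a background model and thresholding per-pixel differences. The running-average model below is a generic sketch of that baseline, not the proposed network:

```python
def update_background(background, frame, alpha=0.05):
    """Exponential running average: slowly absorb scene changes into the model."""
    return [[(1.0 - alpha) * b + alpha * f for b, f in zip(b_row, f_row)]
            for b_row, f_row in zip(background, frame)]

def foreground_mask(background, frame, thresh=20):
    """Mark pixels whose intensity departs from the background model."""
    return [[1 if abs(f - b) > thresh else 0 for b, f in zip(b_row, f_row)]
            for b_row, f_row in zip(background, frame)]
```

The learning rate alpha trades responsiveness to scene changes against sensitivity to slow-moving objects; water caustics and drifting particles are exactly what make a fixed threshold fragile underwater.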
(This article belongs to the Section E1: Mathematics and Computer Science)

19 pages, 5685 KiB  
Article
HRA-YOLO: An Effective Detection Model for Underwater Fish
by Hongru Wang, Jingtao Zhang and Hu Cheng
Electronics 2024, 13(17), 3547; https://doi.org/10.3390/electronics13173547 - 6 Sep 2024
Cited by 2 | Viewed by 2061
Abstract
In intelligent fisheries, accurate fish detection is essential to monitor underwater ecosystems. By utilizing underwater cameras and computer vision technologies to detect fish distribution, timely feedback can be provided to staff, enabling effective fishery management. This paper proposes a lightweight underwater fish detection algorithm based on YOLOv8s, named HRA-YOLO, to meet the demand for a high-precision and lightweight object detection algorithm. First, the lightweight network High-Performance GPU Net (HGNetV2) is used to replace the backbone network of the YOLOv8s model, lowering the computational cost and reducing the size of the model. Second, to enhance the extraction of fish feature information and reduce missed detections, we design a residual attention (RA) module, formulated by embedding the efficient multiscale attention (EMA) mechanism at the end of the Dilation-Wise Residual (DWR) module, and adopt it to replace the bottleneck of the YOLOv8s model to increase detection precision. For generality, we establish an underwater fish dataset for our experiments by collecting data in various waters. Comprehensive experiments on this self-constructed dataset demonstrate that the precision of the HRA-YOLO model improved to 93.1%, surpassing the original YOLOv8s model, while the computational complexity was reduced by 19% (5.4 GFLOPs) and the model size was decreased by 25.3% (5.7 MB). Compared to other state-of-the-art detection models, our model also shows superior overall performance. Experiments on the Fish Market dataset further indicate that our model outperforms the original model overall and generalizes well.

17 pages, 5276 KiB  
Article
SQnet: An Enhanced Multi-Objective Detection Algorithm in Subaquatic Environments
by Yutao Zhu, Bochen Shan, Yinglong Wang and Hua Yin
Electronics 2024, 13(15), 3053; https://doi.org/10.3390/electronics13153053 - 1 Aug 2024
Viewed by 1418
Abstract
With the development of smart aquaculture, the demand for accurate underwater target detection has increased. However, traditional target detection methods have proven inefficient and imprecise due to the complexity of underwater environments and the obfuscation of biological features against the underwater background. To address these issues, we propose a novel algorithm for underwater multi-target detection based on the YOLOv8 architecture, named SQnet. A Dynamic Snake Convolution Network (DSConvNet) module was introduced to tackle the overlap between target organisms and the underwater background. To reduce computational complexity and parameter overhead while maintaining precision, we employed a lightweight context-guided semantic segmentation network (CGNet) model. Furthermore, the information loss and degradation issues arising from indirect interactions between non-adjacent layers were handled by integrating an Asymptotic Feature Pyramid Network (AFPN) model. Experimental results demonstrate that SQnet achieves an mAP@0.5 of 83.3% and 98.9% on the public datasets URPC2020, Aquarium, and the self-compiled dataset ZytLn, respectively. Additionally, its mAP@0.5–0.95 reaches 49.1%, 85.4%, and 84.6%, respectively, surpassing classical algorithms such as YOLOv7-tiny, YOLOv5s, and YOLOv3-tiny. Compared to the original YOLOv8 model, SQnet has only 2.25 M parameters while keeping GFLOPs unchanged at 6.4 G. This article presents a novel approach for the real-time monitoring of fish using mobile devices, paving the way for the further development of intelligent aquaculture in the fisheries domain.

18 pages, 4448 KiB  
Article
Light-YOLO: A Study of a Lightweight YOLOv8n-Based Method for Underwater Fishing Net Detection
by Nuo Chen, Jin Zhu and Linhan Zheng
Appl. Sci. 2024, 14(15), 6461; https://doi.org/10.3390/app14156461 - 24 Jul 2024
Cited by 4 | Viewed by 1555
Abstract
Detecting small dark targets underwater, such as fishing nets, is critical to the operation of underwater robots. Existing techniques often demand substantial computational resources and must operate under harsh underwater imaging conditions when handling such tasks. This study aims to develop a model with low computational resource consumption and high efficiency to improve the detection accuracy of fishing nets for safe and efficient underwater operations. The Light-YOLO model proposed in this paper introduces an attention mechanism based on sparse connectivity and deformable convolution optimized for complex underwater lighting and visual conditions. This novel attention mechanism enhances detection performance by focusing on the key visual features of fishing nets, while the introduced CoTAttention and SEAM modules further improve the model’s recognition accuracy through deeper feature interactions. The results demonstrate that the proposed Light-YOLO model achieves a precision of 89.3%, a recall of 80.7%, and an mAP@0.5 of 86.7%. Compared to other models, ours attains the highest precision for its computational size and is the lightest among models of similar accuracy, providing an effective solution for fishing net detection and identification.
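The precision, recall, and mAP figures quoted in abstracts like this one all rest on matching predicted boxes to ground truth by Intersection over Union. A self-contained IoU for axis-aligned (x1, y1, x2, y2) boxes:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap extents clamp to zero when the boxes are disjoint.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0
```

A prediction typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold (0.5 for mAP@0.5); precision and recall then follow from the true-positive, false-positive, and false-negative counts.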
(This article belongs to the Section Computing and Artificial Intelligence)

26 pages, 33281 KiB  
Article
Underwater Fish Object Detection with Degraded Prior Knowledge
by Shijian Zheng, Rujing Wang and Liusan Wang
Electronics 2024, 13(12), 2346; https://doi.org/10.3390/electronics13122346 - 15 Jun 2024
Viewed by 1441
Abstract
Understanding fish distribution, behavior, and abundance is crucial for marine ecological research, fishery management, and environmental monitoring. However, the distinctive features of the underwater environment, including low visibility, light attenuation, water turbidity, and strong currents, significantly impact the quality of data gathered by underwater imaging systems, posing considerable challenges in accurately detecting fish objects. To address this challenge, our study proposes an innovative fish detection network based on prior knowledge of image degradation. We first delved into the intrinsic relationship between visual image quality restoration and detection outcomes, elucidating the obstacles the underwater environment poses to object detection. Subsequently, we constructed a dataset optimized for object detection using image quality evaluation metrics. Building upon this foundation, we designed a fish object detection network that integrates a prompt-based degradation feature learning module and a two-stage training scheme, effectively incorporating prior knowledge of image degradation. To validate the efficacy of our approach, we developed a multi-scene Underwater Fish image Dataset (UFD2022). The experimental results demonstrate significant improvements of 2.4% and 2.5% in mAP over the baseline methods ResNet50 and ResNeXt101, respectively. This outcome robustly confirms the effectiveness and superiority of our approach in addressing the challenge of fish object detection in underwater environments.

19 pages, 5251 KiB  
Article
DyFish-DETR: Underwater Fish Image Recognition Based on Detection Transformer
by Zhuowei Wang, Zhukang Ruan and Chong Chen
J. Mar. Sci. Eng. 2024, 12(6), 864; https://doi.org/10.3390/jmse12060864 - 22 May 2024
Cited by 10 | Viewed by 2166
Abstract
Due to the complexity of underwater environments and the lack of training samples, the application of target detection algorithms to the underwater environment has yet to provide satisfactory results. It is crucial to design specialized underwater target recognition algorithms for different underwater tasks. To achieve this goal, we created a dataset of freshwater fish captured from multiple angles and under varied lighting conditions, aiming to improve underwater target detection of freshwater fish in natural environments. We propose a method suitable for underwater target detection, called DyFish-DETR (Dynamic Fish Detection with Transformers). In DyFish-DETR, we propose DyFishNet (Dynamic Fish Net) to better extract fish body texture features, and design a Slim Hybrid Encoder to fuse fish body feature information. Ablation experiments show that DyFishNet effectively improves the mean Average Precision (mAP) of model detection, that the Slim Hybrid Encoder effectively improves Frames Per Second (FPS), and that both reduce model parameters and Floating-Point Operations (FLOPs). On our proposed freshwater fish dataset, DyFish-DETR achieved an mAP of 96.6%. Benchmarking results show that the Average Precision (AP) and Average Recall (AR) of DyFish-DETR are higher than those of several state-of-the-art methods. Additionally, DyFish-DETR achieved 99%, 98.8%, and 83.2% mAP on three other underwater datasets, respectively.
(This article belongs to the Section Physical Oceanography)

13 pages, 945 KiB  
Article
Camera-Based Net Avoidance Controls of Underwater Robots
by Jonghoek Kim
Sensors 2024, 24(2), 674; https://doi.org/10.3390/s24020674 - 21 Jan 2024
Cited by 3 | Viewed by 1650
Abstract
Fishing nets are dangerous obstacles for an underwater robot whose aim is to reach a goal in unknown underwater environments. This paper proposes a method for making the robot reach its goal while avoiding fishing nets detected using the robot’s camera sensors. Deep neural networks can be used to detect underwater nets from the robot’s camera measurements. Passive camera sensors do not provide the distance between the robot and a net; they provide only the bearing angle of a net with respect to the robot’s camera pose. There may be trailing wires that extend from a net, and these wires can entangle the robot before the robot detects the net. Moreover, light, viewpoint, and sea-floor conditions can decrease the net detection probability in practice. Therefore, whenever a net is detected by the robot’s camera, the robot avoids it by moving away abruptly, using the bounding box of the detected net in the camera image. After the robot moves backward for a certain distance, it makes a large circular turn to approach the goal while avoiding the net; a large circular turn is used since moving close to a net is too dangerous for the robot. As far as we know, our paper is unique in addressing reactive control laws for approaching the goal while avoiding fishing nets detected using camera sensors. The effectiveness of the proposed net avoidance controls is verified using simulations.
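The reactive policy described above, cruising toward the goal, backing away abruptly on detection, then taking a wide circular turn, maps naturally onto a small state machine. The state names, commands, and completion flags here are illustrative placeholders, not the paper's control laws:

```python
def avoidance_step(state, net_detected, retreat_done, turn_done):
    """One step of a reactive cruise/retreat/circle controller.
    Returns the next state and the motion command to issue."""
    if state == "cruise":
        # Head for the goal until a net appears in the camera image.
        return ("retreat", "reverse") if net_detected else ("cruise", "forward")
    if state == "retreat":
        # Back away from the net before committing to the wide turn.
        return ("circle", "wide_turn") if retreat_done else ("retreat", "reverse")
    # state == "circle": finish the large circular turn, then resume cruising.
    return ("cruise", "forward") if turn_done else ("circle", "wide_turn")
```

Because the camera gives only a bearing (no range), the controller never tries to skirt the net closely; the wide turn keeps a conservative clearance, matching the abstract's rationale.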
(This article belongs to the Section Sensors and Robotics)

25 pages, 8583 KiB  
Article
Inspection Operations and Hole Detection in Fish Net Cages through a Hybrid Underwater Intervention System Using Deep Learning Techniques
by Salvador López-Barajas, Pedro J. Sanz, Raúl Marín-Prades, Alfonso Gómez-Espinosa, Josué González-García and Juan Echagüe
J. Mar. Sci. Eng. 2024, 12(1), 80; https://doi.org/10.3390/jmse12010080 - 29 Dec 2023
Cited by 9 | Viewed by 3999
Abstract
Net inspection in fish-farm cages is a daily task for divers. It represents a high cost for fish farms and is a high-risk activity for human operators. The total inspection surface can exceed 1500 m², which makes the activity time-consuming. Taking into account the severe restrictions on human operators in such hostile underwater conditions, this activity represents a significant area for improvement. A platform for net inspection is proposed in this work. It includes a surface vehicle, a ground control station, and an underwater vehicle (BlueROV2 heavy) that incorporates artificial intelligence, trajectory control procedures, and the necessary communications. Computer vision was integrated into this platform, involving a convolutional neural network trained to predict the distance between the net and the robot. Additionally, an object detection algorithm was developed to recognize holes in the net. Furthermore, a simulation environment was established to evaluate the inspection trajectory algorithms, and tests were conducted to evaluate how underwater wireless communications perform in this scenario. Experimental results on hole detection, net distance estimation, and the inspection trajectories demonstrated the robustness, usability, and viability of the proposed methodology. The experimental validation took place in the CIRTESU tank, which has dimensions of 12 × 8 × 5 m, at Universitat Jaume I.
(This article belongs to the Special Issue Advances in Underwater Robots for Intervention)

16 pages, 1352 KiB  
Article
Fish Recognition in the Underwater Environment Using an Improved ArcFace Loss for Precision Aquaculture
by Liang Liu, Junfeng Wu, Tao Zheng, Haiyan Zhao, Han Kong, Boyu Qu and Hong Yu
Fishes 2023, 8(12), 591; https://doi.org/10.3390/fishes8120591 - 30 Nov 2023
Cited by 4 | Viewed by 2497
Abstract
Accurate fish individual recognition is one of the critical technologies for large-scale fishery farming when trying to achieve accurate, green farming and sustainable development. It is an essential link for aquaculture to move toward automation and intelligence. However, existing fish individual data collection methods cannot cope with the interference of light, blur, and pose in the natural underwater environment, which makes the captured fish individual images of poor quality. These low-quality images can significantly interfere with the training of recognition networks. To solve these problems, this paper proposes an underwater fish individual recognition method (FishFace) that combines data quality assessment and loss weighting. First, we introduce a GeM pooling and quality evaluation module based on EfficientNet; this improved fish recognition network can evaluate the quality of fish images well and needs no additional labels. Second, we propose a new loss function, FishFace Loss, which weights the loss according to image quality so that the model focuses more on recognizable fish images and less on images that are difficult to recognize. Finally, we collect a dataset for fish individual recognition (WideFish), which contains 5000 annotated images of 300 fish. The experimental results show that, compared with state-of-the-art individual recognition methods, Rank-1 accuracy is improved by 2.60% and 3.12% on the public dataset DlouFish and the proposed WideFish dataset, respectively.
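The loss-weighting idea, trusting high-quality images more during training, can be sketched at batch level. The weighting rule below is a simplified assumption, not the exact FishFace Loss:

```python
def quality_weighted_loss(sample_losses, quality_scores):
    """Combine per-sample losses, scaled by image-quality scores in (0, 1],
    so blurred or poorly lit samples contribute less to the gradient."""
    total_quality = sum(quality_scores)
    return sum(l * q for l, q in zip(sample_losses, quality_scores)) / total_quality
```

With uniform quality scores this is the ordinary mean loss; down-weighting a high-loss, low-quality sample keeps unrecognizable images from dominating training.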

18 pages, 2564 KiB  
Article
Keep It Simple: Improving the Ex Situ Culture of Cystoseira s.l. to Restore Macroalgal Forests
by Ana Lokovšek, Valentina Pitacco, Domen Trkov, Leon Lojze Zamuda, Annalisa Falace and Martina Orlando-Bonaca
Plants 2023, 12(14), 2615; https://doi.org/10.3390/plants12142615 - 11 Jul 2023
Cited by 10 | Viewed by 3063
Abstract
Brown algae from the genus Cystoseira s.l. form dense underwater forests that represent the most productive areas in the Mediterranean Sea. Due to the combined effects of global and local stressors such as climate change, urbanization, and herbivore outbreaks, there has been a severe decline in brown algal forests in the Mediterranean Sea. Natural recovery of depleted sites is unlikely due to the low dispersal capacity of these species, and efficient techniques to restore such habitats are needed. In this context, the aims of our study were (1) to improve and simplify the current ex situ laboratory protocol for the cultivation of Gongolaria barbata by testing the feasibility of cost-effective and time-efficient techniques on material from two donor sites of G. barbata and (2) to evaluate the survival and growth of young thalli during the laboratory phase and during the most critical five months after out-planting. Specifically, the following ex situ cultivation methods were tested: (A) cultivation on clay tiles in mesocosms with culture water prepared by three different procedures, (a) seawater filtered through a 0.22 μm membrane, (b) seawater filtered through a 0.7 μm (GF) membrane, and (c) UV-sterilized water; and (B) cultivation on clay tiles in open laboratory systems. After two weeks, all thalli were fixed to plastic lantern net baskets suspended at a depth of 2 m in the coastal sea (hybrid method), and algal success was monitored in relation to the donor site and cultivation protocol. The satisfactory results of this study indicate that UV-sterilized water is suitable for the cultivation of G. barbata in mesocosms, which significantly reduces the cost of the laboratory phase. This opens the possibility of numerous and frequent algal cultures during the reproductive period of the species. Additionally, if the young thalli remain in the lantern net baskets for an extended period of several months, they can grow significantly in the marine environment without being exposed to pressure from herbivorous fish.
(This article belongs to the Special Issue Seagrass Genomics, Proteomics and Metabolomics)

16 pages, 8434 KiB  
Article
A Real-Time Fish Target Detection Algorithm Based on Improved YOLOv5
by Wanghua Li, Zhenkai Zhang, Biao Jin and Wangyang Yu
J. Mar. Sci. Eng. 2023, 11(3), 572; https://doi.org/10.3390/jmse11030572 - 7 Mar 2023
Cited by 12 | Viewed by 3338
Abstract
Marine fish target detection technology is of great significance for underwater vehicles to realize automatic fish recognition. However, the complex underwater environment and lighting conditions lead to complex image backgrounds and more irrelevant interference, which make fish target detection more difficult. To detect fish targets accurately and quickly, a real-time fish target detection network based on improved YOLOv5s is proposed. First, a Gamma transform is introduced in the preprocessing stage to improve the gray level and contrast of marine fish images, which facilitates model detection. Second, the ShuffleNetv2 lightweight network, with an SE channel attention mechanism introduced, is used to replace the original YOLOv5 backbone CSPDarkNet53, reducing the model size and the amount of computation and speeding up detection. Finally, the improved BiFPN-Short network replaces the PANet network for feature fusion, enhancing information propagation between different levels and improving the accuracy of the detection algorithm. Experimental results show that the volume of the improved model is reduced by 76.64%, the number of parameters is reduced by 81.60%, the floating-point operations (FLOPs) are decreased by 81.22%, and the mean average precision (mAP) is increased to 98.10%. A balance between lightweight design and detection accuracy is achieved, and this paper also provides a reference for the development of underwater target detection equipment.
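The Gamma transform used in the preprocessing step is a standard pointwise intensity mapping. A minimal 8-bit version (the gamma value 0.5 here is illustrative, not the paper's chosen constant):

```python
def gamma_transform(pixels, gamma=0.5):
    """Gamma correction on 8-bit intensities: out = 255 * (in / 255) ** gamma.
    gamma < 1 brightens dark regions, lifting low-contrast underwater frames."""
    return [round(255 * (p / 255) ** gamma) for p in pixels]
```

Black and white are fixed points of the mapping; mid-tones are lifted, which is why the transform improves the gray level and contrast of dim underwater imagery before detection.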
(This article belongs to the Section Physical Oceanography)
