Marine Environmental Perception and Underwater Detection

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Environmental Sensing".

Deadline for manuscript submissions: 25 August 2024 | Viewed by 19727

Special Issue Editors

Institute of Logistics Science and Engineering, Shanghai Maritime University, Shanghai 201306, China
Interests: optical fiber sensors; marine environmental detection

Guest Editor
College of Engineering Science and Technology, Shanghai Ocean University, Shanghai 201306, China
Interests: underwater target identification; high-performance computing

Special Issue Information

Dear Colleagues,

Intelligent sensors and high-definition cameras have made marine science and its related disciplines far easier to study. Such perception devices (sensors and cameras) and the analytical techniques built on them (e.g., underwater computer vision, deep learning-based marine environmental assessment, multi-sensor information fusion, marine optical detection, and acousto-optic cooperative detection) provide evidence of marine environmental change, fishery resource distribution, and underwater topography. They also support many other important applications in both emerging and established fields, such as autonomous underwater (and surface) vehicle navigation, maritime search and rescue, aquatic ecological environment assessment, and offshore inspection.

In this Special Issue, we welcome both methodological and empirical contributions that address multidisciplinary questions (e.g., environmental, hydroacoustic, surveying, mapping, and recognition problems) using intelligent sensors and cameras in marine engineering, as well as technological advances in marine environmental perception and underwater detection.

Dr. Yunhe Zhao
Dr. Bowen Xing
Prof. Dr. Jingxiang Xu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • underwater computer vision
  • marine optics
  • marine environmental assessment
  • multi-sensor information fusion
  • cooperative detection

Published Papers (9 papers)


Research


20 pages, 24650 KiB  
Article
Fine-Grained 3D Modeling and Semantic Mapping of Coral Reefs Using Photogrammetric Computer Vision and Machine Learning
by Jiageng Zhong, Ming Li, Hanqi Zhang and Jiangying Qin
Sensors 2023, 23(15), 6753; https://doi.org/10.3390/s23156753 - 28 Jul 2023
Cited by 1 | Viewed by 1652
Abstract
Corals play a crucial role as the primary habitat-building organisms within reef ecosystems, forming expansive structures that extend over vast distances, akin to the way tall buildings define a city’s skyline. However, coral reefs are vulnerable to damage and destruction due to their inherent fragility and exposure to various threats, including the impacts of climate change. Similar to successful city management, the utilization of advanced underwater videography, photogrammetric computer vision, and machine learning can facilitate precise 3D modeling and the semantic mapping of coral reefs, aiding in their careful management and conservation to ensure their survival. This study focuses on generating detailed 3D mesh models, digital surface models, and orthomosaics of coral habitats by utilizing underwater coral images and control points. Furthermore, an innovative multi-modal deep neural network is designed to perform the pixel-wise semantic segmentation of orthomosaics, enabling the projection of resulting semantic maps onto a 3D space. Notably, this study achieves a significant milestone by accomplishing semantic fine-grained 3D modeling and rugosity evaluation of coral reefs with millimeter-level accuracy, providing a potent means to understand coral reef variations under climate change with high spatial and temporal resolution. Full article
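The rugosity metric mentioned in the abstract is commonly taken as the ratio of 3D surface area to planar area over a gridded digital surface model (DSM). The function below is a minimal NumPy illustration of that idea, not the authors' code; the 1 mm grid spacing merely echoes the paper's millimetre-level accuracy claim.

```python
import numpy as np

def rugosity(dsm: np.ndarray, cell: float = 0.001) -> float:
    """Rugosity: ratio of 3D surface area to planar area of a DSM patch.

    dsm  -- gridded digital surface model heights in metres
    cell -- grid spacing in metres (1 mm here, purely illustrative)
    """
    dz_dy, dz_dx = np.gradient(dsm, cell)
    # Area of each surface element relative to its flat footprint.
    return float(np.sqrt(1.0 + dz_dx**2 + dz_dy**2).mean())
```

A perfectly flat DSM yields a rugosity of exactly 1; any relief pushes the value above 1, which is why the metric is used as a proxy for coral structural complexity.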
(This article belongs to the Special Issue Marine Environmental Perception and Underwater Detection)

16 pages, 5098 KiB  
Article
A LiDAR SLAM-Assisted Fusion Positioning Method for USVs
by Wei Shen, Zhisong Yang, Chaoyu Yang and Xin Li
Sensors 2023, 23(3), 1558; https://doi.org/10.3390/s23031558 - 1 Feb 2023
Cited by 3 | Viewed by 1994
Abstract
Confronted with unmanned surface vessel (USV) operations where GNSS signals are unavailable due to obscuration and other factors, a LiDAR SLAM-assisted fusion positioning method for USVs is proposed to combine GNSS/INS positioning with LiDAR-SLAM. When the USV operates in wide-open water, a carrier-phase differential GNSS/INS loosely coupled integration strategy fuses and calibrates the positioning data, and the USV's position is obtained through coordinate conversion. When GNSS signals are unavailable, the system uses a dynamic switching strategy to change to LiDAR-SLAM positioning, compensating the LiDAR data with precise angle information to ensure accurate and stable positioning. The experiments show that, compared with the traditional Kalman filter and adaptive Kalman filter fusion algorithms, the positioning error is reduced by 55.4% and 43.5%, respectively, and the velocity error by 78.2% and 57.9%. The standard deviation and root mean square error remain within 0.1 m, indicating better data stability, while the probability of positioning anomalies is effectively controlled. Full article
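The dynamic switching strategy described in the abstract can be sketched as follows. This is a toy stand-in, not the authors' implementation: `gnss_ok` replaces the paper's signal-availability check, and the blending weight `alpha` is an assumed value, not one from the paper.

```python
import numpy as np

def fused_position(gnss_pos, slam_pos, gnss_ok: bool, alpha: float = 0.8):
    """Return a position estimate, switching sources like the paper's
    dynamic strategy. gnss_ok stands in for a real signal-quality check."""
    slam_pos = np.asarray(slam_pos, dtype=float)
    if not gnss_ok:                 # obscured water: LiDAR-SLAM only
        return slam_pos
    gnss_pos = np.asarray(gnss_pos, dtype=float)
    # Open water: favour carrier-phase differential GNSS/INS, lightly
    # blended with SLAM for stability.
    return alpha * gnss_pos + (1.0 - alpha) * slam_pos
```

The design point is that the switch is made at the source-selection level, so downstream navigation code always receives one consistent position stream.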

17 pages, 20377 KiB  
Article
Altering Fish Behavior by Sensing Swarm Patterns of Fish in an Artificial Aquatic Environment Using an Interactive Robotic Fish
by Udaka A. Manawadu, Malsha De Zoysa, J. D. H. S. Perera, I. U. Hettiarachchi, Stephen G. Lambacher, Chinthaka Premachandra and P. Ravindra S. De Silva
Sensors 2023, 23(3), 1550; https://doi.org/10.3390/s23031550 - 31 Jan 2023
Viewed by 1911
Abstract
Numerous studies have demonstrated the calming and stress-reducing effects on humans of visiting aquatic environments, and many institutions have accordingly used fish to provide entertainment and treat patients. The most common issue in this approach is controlling the movement of fish to facilitate human interaction. This study proposed an interactive robot, a robotic fish, that alters fish swarm behaviors by performing an unobtrusive but well-defined set of actions to enhance human interaction. The approach incorporated a minimalistic, futuristic physical design of the robotic fish with cameras and infrared (IR) sensors, together with a fish-detecting and swarm pattern-recognizing algorithm. The fish-detecting algorithm was implemented using background subtraction and moving average algorithms with an accuracy of 78%, while the swarm pattern detection, implemented with a convolutional neural network (CNN), reached a 77.32% accuracy rate. We evaluated, through repeated trials, how effectively the smooth movements of the robotic fish controlled the behavior and swimming patterns of the fish. Feedback from a randomly selected unbiased group of subjects revealed that the robotic fish improved human interaction with fish by using the proposed set of maneuvers and behavior. Full article
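A background-subtraction detector with a running-average (moving average) background model, as named in the abstract, can be sketched roughly as below. The learning rate and threshold are illustrative assumptions, not the authors' settings.

```python
import numpy as np

class FishDetector:
    """Background subtraction with a running-average background model,
    in the spirit of the paper's fish-detecting algorithm."""

    def __init__(self, first_frame: np.ndarray, lr=0.05, thresh=30.0):
        self.bg = first_frame.astype(float)  # initial background estimate
        self.lr = lr                         # moving-average learning rate
        self.thresh = thresh                 # foreground intensity threshold

    def update(self, frame: np.ndarray) -> np.ndarray:
        frame = frame.astype(float)
        mask = np.abs(frame - self.bg) > self.thresh  # moving (fish) pixels
        # Fold the new frame into the running-average background.
        self.bg = (1.0 - self.lr) * self.bg + self.lr * frame
        return mask
```

A slow learning rate keeps the tank background stable while still absorbing gradual lighting changes, so only genuinely moving objects survive the threshold.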

11 pages, 3611 KiB  
Article
Light Penetrating the Seawater Column as the Indicator of Oil Suspension—Monte Carlo Modelling for the Case of the Southern Baltic Sea
by Barbara Lednicka, Zbigniew Otremba and Jacek Piskozub
Sensors 2023, 23(3), 1175; https://doi.org/10.3390/s23031175 - 19 Jan 2023
Cited by 2 | Viewed by 1210
Abstract
The strong need to control investments related to oil extraction and the growing demand for offshore deep-water exploration are the reasons for looking for tools to make up a global underwater monitoring system. Therefore, the current study analyses the possibility of revealing the existence of oil-in-water emulsions in the water column, based on the modelling of the downwelling radiance detected by a virtual underwater sensor. Based on the Monte Carlo simulation for the large numbers of solar photons in the water, the analyses were carried out for eight wavelengths ranging from 412 to 676 nm using dispersed oil with a concentration of 10 ppm. The optical properties of the seawater were defined as typical for the southern Baltic Sea, while the oil emulsion model was based on the optical properties of crude oil extracted in this area. Based on the above-mentioned assumptions and modelling, a spectral index was obtained, with the most favourable combination of 555/412 nm, whose value is indicative of the presence of an oil emulsion in the water. Full article
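The 555/412 nm spectral index described above reduces to a simple ratio of modelled downwelling radiances. A minimal sketch follows, with a hypothetical decision threshold: the paper identifies the band pair as most favourable, but the threshold value here is a placeholder, not a result from the study.

```python
def oil_index(radiance: dict) -> float:
    """Spectral index L(555)/L(412) from downwelling radiance values,
    the band pair the study found most favourable for flagging a
    10 ppm oil-in-water emulsion."""
    return radiance[555] / radiance[412]

def oil_suspected(radiance: dict, threshold: float = 1.5) -> bool:
    # Hypothetical threshold: dispersed crude absorbs strongly at blue
    # wavelengths, so oil tends to raise the green/blue radiance ratio
    # relative to clean seawater.
    return oil_index(radiance) > threshold
```

In practice the threshold would be calibrated against Monte Carlo runs for clean versus oil-laden water at the sensor's depth.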

16 pages, 7399 KiB  
Article
Ship Fire Detection Based on an Improved YOLO Algorithm with a Lightweight Convolutional Neural Network Model
by Huafeng Wu, Yanglin Hu, Weijun Wang, Xiaojun Mei and Jiangfeng Xian
Sensors 2022, 22(19), 7420; https://doi.org/10.3390/s22197420 - 29 Sep 2022
Cited by 22 | Viewed by 4206
Abstract
Ship fire is one of the greatest dangers to ship navigation safety. Nevertheless, typical detection methods have limited detection effectiveness and accuracy due to distance restrictions and ship motion. Although the issue can be addressed by image recognition algorithms based on deep learning, the computational complexity and efficiency for ship detection are tough. This paper proposes a lightweight target identification technique based on the modified YOLOv4-tiny algorithm for the precise and efficient detection of ship fires, taking into account the distinctive characteristics of ship fires and the marine environment. Initially, a multi-scale detection technique is applied to broaden the detection range and integrate deep semantic information, thereby enhancing the feature information of small targets and obscured objects and improving the detection precision. Then, the proposed algorithm employs the SE attention mechanism for inter-channel feature fusion to improve the capability of feature extraction and the precision of ship fire detection. Last but not least, picture transformation and migration learning are added to the small ship fire dataset to accelerate the convergence pace, improve the convergence effect, and reduce dataset dependence. The simulation experiments reveal that the proposed I-YOLOv4-tiny + SE model outperforms the benchmark algorithm in terms of ship fire detection accuracy and detection efficiency and that it satisfies the real-time ship fire warning criteria in demanding maritime environments. Full article
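The SE attention mechanism the abstract refers to is the standard squeeze-and-excitation channel-reweighting block. Below is a NumPy sketch with externally supplied, untrained weights; in the paper the module is learned end-to-end inside the modified YOLOv4-tiny network.

```python
import numpy as np

def se_block(feat: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Squeeze-and-excitation channel attention (NumPy sketch).

    feat -- feature map, shape (C, H, W)
    w1   -- reduction FC weights, shape (C // r, C)
    w2   -- expansion FC weights, shape (C, C // r)
    """
    z = feat.mean(axis=(1, 2))              # squeeze: global average pool
    h = np.maximum(w1 @ z, 0.0)             # reduction FC + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))     # expansion FC + sigmoid gate
    return feat * s[:, None, None]          # reweight each channel
```

Because the gate is a sigmoid, each channel is scaled by a factor in (0, 1), letting the network emphasize fire-like channels and suppress background ones.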

21 pages, 5260 KiB  
Article
Underwater Holothurian Target-Detection Algorithm Based on Improved CenterNet and Scene Feature Fusion
by Yanling Han, Liang Chen, Yu Luo, Hong Ai, Zhonghua Hong, Zhenling Ma, Jing Wang, Ruyan Zhou and Yun Zhang
Sensors 2022, 22(19), 7204; https://doi.org/10.3390/s22197204 - 22 Sep 2022
Cited by 6 | Viewed by 1646
Abstract
Aiming at the common problems, such as noise pollution, low contrast, and color distortion in underwater images, and the characteristics of holothurian recognition, such as morphological ambiguity, high similarity with the background, and coexistence of special ecological scenes, this paper proposes an underwater holothurian target-detection algorithm (FA-CenterNet), based on improved CenterNet and scene feature fusion. First, to reduce the model’s occupancy of embedded device resources, we use EfficientNet-B3 as the backbone network to reduce the model’s Params and FLOPs. At the same time, EfficientNet-B3 increases the depth and width of the model, which improves the accuracy of the model. Then, we design an effective FPT (feature pyramid transformer) combination module to fully focus and mine the information on holothurian ecological scenarios of different scales and spaces (e.g., holothurian spines, reefs, and waterweeds are often present in the same scenario as holothurians). The co-existing scene information can be used as auxiliary features to detect holothurians, which can improve the detection ability of fuzzy and small-sized holothurians. Finally, we add the AFF module to realize the deep fusion of the shallow-detail and high-level semantic features of holothurians. The results show that the method presented in this paper yields better results on the 2020 CURPC underwater target-detection image dataset with an AP50 of 83.43%, Params of 15.90 M, and FLOPs of 25.12 G compared to other methods. In the underwater holothurian-detection task, this method improves the accuracy of detecting holothurians with fuzzy features, a small size, and dense scene. It also achieves a good balance between detection accuracy, Params, and FLOPs, and is suitable for underwater holothurian detection in most situations. Full article

11 pages, 2971 KiB  
Article
Sea Cucumber Detection Algorithm Based on Deep Learning
by Lan Zhang, Bowen Xing, Wugui Wang and Jingxiang Xu
Sensors 2022, 22(15), 5717; https://doi.org/10.3390/s22155717 - 30 Jul 2022
Cited by 4 | Viewed by 1733
Abstract
The traditional single-shot MultiBox detector (SSD) used to recognize sea cucumbers suffers from an insufficient expression of features, heavy computation, and difficulty in deployment on embedded platforms. To solve these problems, we propose an improved sea cucumber detection algorithm based on the traditional SSD algorithm. MobileNetv1 is selected as the backbone of the SSD algorithm. We enlarge the feature receptive field with a receptive field block (RFB) to capture more feature detail and location information for small targets. Combined with an attention mechanism, features at different depths are strengthened and irrelevant features are suppressed. The experimental results show that the improved algorithm performs better than the traditional SSD algorithm, increasing average precision by 5.1%, and is also more robust. Compared with YOLOv4 and the Faster R-CNN algorithm, this algorithm's P-R curve is better, indicating better overall performance. Thus, the improved algorithm can stably detect sea cucumbers in real time and provide reliable feedback information. Full article

20 pages, 2706 KiB  
Article
Application of PSO-BPNN-PID Controller in Nutrient Solution EC Precise Control System: Applied Research
by Yongtao Wang, Jian Liu, Rong Li, Xinyu Suo and Enhui Lu
Sensors 2022, 22(15), 5515; https://doi.org/10.3390/s22155515 - 24 Jul 2022
Cited by 4 | Viewed by 1759
Abstract
In this paper, we present a nutrient solution control system, designing a nutrient solution electrical conductivity (EC) sensing system composed of multiple long-range radio (LoRa) slave nodes, narrow-band Internet of Things (NB-IoT) master nodes, and a host computer, building a nutrient solution EC control model and using the particle swarm optimization (PSO) algorithm to optimize the initial weights of a back-propagation neural network (BPNN). In addition, the optimized best weights are put into the BPNN to adjust the proportional–integral–derivative (PID) control parameters Kp, Ki, and Kd so that the system performance index can be optimized. Under the same initial conditions, we input EC = 2 mS/cm and use the particle swarm optimization BP neural network PID (PSO-BPNN-PID) to control the EC target value of the nutrient solution. The optimized scale factors were Kp = 81, Ki = 0.095, and Kd = 0.044; the steady state time was about 43 s, the overshoot was about 0.14%, and the EC value was stable at 1.9997 mS/cm–2.0027 mS/cm. Compared with the BP neural network PID (BPNN-PID) and the traditional PID control approach, the results show that PSO-BPNN-PID had a faster response speed and higher accuracy. Furthermore, we input 1 mS/cm, 1.5 mS/cm, 2 mS/cm, and 2.5 mS/cm, respectively, and simulated and verified the PSO-BPNN-PID system model. The results showed that the fluctuation range of EC was 0.003 mS/cm~0.119 mS/cm, the steady-state time was 40 s~60 s, and the overshoot was 0.3%~0.14%, which can meet the requirements of the rapid and accurate integration of water and fertilizer in agricultural production. Full article
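The control law being tuned here is a textbook discrete PID; the sketch below simply wires in the optimized gains the paper reports (Kp = 81, Ki = 0.095, Kd = 0.044). The controller class is illustrative only; the paper's EC plant model and the PSO-BPNN tuner itself are not reproduced.

```python
class PID:
    """Discrete positional PID controller with the PSO-BPNN-tuned gains
    reported in the paper. The plant it would drive (the nutrient
    solution EC loop) is not modelled here."""

    def __init__(self, kp=81.0, ki=0.095, kd=0.044, dt=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0   # accumulated error for the I term
        self.prev_err = 0.0   # previous error for the D term

    def step(self, setpoint: float, measured: float) -> float:
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

In the paper, the BPNN adjusts Kp, Ki, and Kd online rather than keeping them fixed; the PSO stage only supplies good initial network weights.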

Other


13 pages, 8977 KiB  
Technical Note
Research on Bathymetric Inversion Capability of Different Multispectral Remote Sensing Images in Seaports
by Wei Shen, Jiaqi Wang, Muyin Chen, Lihua Hao and Zhongqiang Wu
Sensors 2023, 23(3), 1178; https://doi.org/10.3390/s23031178 - 19 Jan 2023
Cited by 2 | Viewed by 1603
Abstract
In recent years, remote sensing has become an indispensable supplementary method for determining water depth in seaports. Many scholars use multispectral satellite data to invert seaport water depth, but how to select appropriate satellite data for seaport areas is worth exploring. In this article, the differences in retrieval ability between domestic (Chinese) and foreign multispectral images are compared by building a random forest model and a band ratio model, which use different multispectral images to retrieve water depth in Nanshan Port in conjunction with the WBMS multibeam sounding system. The band ratio and random forest models are applied to GF-6, GF-2, Sentinel-2B, and Landsat 8 OLI data, all of which are popular and easily accessible. After repeated adjustment of the model parameters, the experiments show that the domestic GF-6 imagery performed best: the Root Mean Square Error (RMSE) and Mean Relative Error (MRE) of the random forest model are only 1.202 and 0.187, respectively. It was also found that the 'Red Edge' band of GF-6 is very helpful in improving the accuracy of water depth inversion, which has rarely been mentioned in previous studies. These results demonstrate that water depth can be investigated using common multispectral remote sensing images and that, for some bathymetry inversion models and some waters, domestic images can outperform foreign multispectral images in bathymetric inversion ability. Full article
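The band ratio model named above is typically the Stumpf log-ratio model, in which depth is a linear function of ln(n·R_blue)/ln(n·R_green). A minimal sketch follows; the regression coefficients are placeholders, since in the paper they would be fitted against the WBMS multibeam soundings.

```python
import numpy as np

def band_ratio_depth(blue: np.ndarray, green: np.ndarray,
                     m1: float = 10.0, m0: float = 2.0,
                     n: float = 1000.0) -> np.ndarray:
    """Stumpf-style log-ratio bathymetry.

    blue, green -- surface reflectance in the two bands
    m1, m0      -- tunable coefficients, regressed against in situ
                   soundings (placeholder defaults here)
    n           -- fixed constant keeping both logs positive
    """
    ratio = np.log(n * blue) / np.log(n * green)
    return m1 * ratio - m0
```

The ratio form is attractive for port waters because changes in bottom albedo affect both bands similarly, so the ratio responds chiefly to depth.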
