Article

Deep Learning-Based Semantic Segmentation for Automatic Shoreline Extraction in Coastal Video Monitoring Systems

1 Centro de Estudos do Ambiente e do Mar (CESAM), Departamento de Geociências, Universidade de Aveiro, Campus Universitário de Santiago, 3810-193 Aveiro, Portugal
2 Instituto de Telecomunicações (IT) / Departamento de Eletrónica, Telecomunicações e Informática, Universidade de Aveiro, Campus Universitário de Santiago, 3810-193 Aveiro, Portugal
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(23), 3865; https://doi.org/10.3390/rs17233865
Submission received: 29 September 2025 / Revised: 27 October 2025 / Accepted: 21 November 2025 / Published: 28 November 2025

Abstract

Dynamic and vulnerable, coastal zones face multiple hazards such as storms, flooding, and erosion, posing serious risks to populations and ecosystems. Continuous observation of coastal processes, particularly shoreline evolution, is therefore essential. Over the past three decades, coastal video-monitoring systems have proven valuable and cost-effective for studying coastal dynamics. Several approaches have been proposed to determine shoreline position, but each has limitations, with performance often depending on local conditions or illumination. This study proposes a method based on semantic segmentation using deep neural networks, specifically the U-Net and DeepLabv3+ architectures. Both models were trained using time-exposure images from a coastal video-monitoring system, with DeepLabv3+ further evaluated using four convolutional neural network (CNN) backbones (ResNet-18, ResNet-50, MobileNetV2, and Xception). Unlike previous satellite- or UAV-based studies, this work applies deep learning to fixed coastal video systems, enabling continuous and high-frequency shoreline monitoring. Both architectures achieved high performance, with a Global Accuracy of 0.98, Mean IoU between 0.95 and 0.97, and Mean Boundary F1 Score up to 0.99. These findings highlight the effectiveness and flexibility of the proposed approach, which provides a robust, transferable, and easily deployable solution for diverse coastal settings.
Keywords: coastal video-monitoring; shoreline extraction; semantic segmentation; U-Net; DeepLabv3+
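As an illustration of the segmentation metrics reported in the abstract, the sketch below computes Global Accuracy and Mean IoU from a confusion matrix over two integer label maps. It is a generic NumPy formulation under assumed two-class labels (e.g., 0 = water, 1 = land); it is not the authors' evaluation code, and the toy arrays are purely hypothetical.

import numpy as np

def segmentation_metrics(y_true, y_pred, num_classes):
    # Generic illustration of the reported metrics (not the authors' code):
    # Global Accuracy and Mean IoU derived from a confusion matrix.
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true.ravel(), y_pred.ravel()):
        cm[t, p] += 1                       # rows: ground truth, cols: prediction
    global_accuracy = np.trace(cm) / cm.sum()
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    iou = tp / np.maximum(tp + fp + fn, 1)  # per-class IoU = TP / (TP + FP + FN)
    return global_accuracy, iou.mean()

# Hypothetical 2 x 4 label maps with two classes (0 = water, 1 = land).
gt   = np.array([[0, 0, 1, 1], [0, 1, 1, 1]])
pred = np.array([[0, 0, 0, 1], [0, 1, 1, 1]])
print(segmentation_metrics(gt, pred, num_classes=2))

The Boundary F1 Score mentioned in the abstract additionally compares the extracted class contours within a pixel-distance tolerance; it is omitted here for brevity.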

Share and Cite

MDPI and ACS Style

Santos, F.; Cunha, T.R.; Baptista, P. Deep Learning-Based Semantic Segmentation for Automatic Shoreline Extraction in Coastal Video Monitoring Systems. Remote Sens. 2025, 17, 3865. https://doi.org/10.3390/rs17233865
