Systematic Review

Machine and Deep Learning for Wetland Mapping and Bird-Habitat Monitoring: A Systematic Review of Remote-Sensing Applications (2015–April 2025)

1 Department of Cartography and Photogrammetry, School of Geomatics and Surveying Engineering, Agriculture and Veterinary Medicine Institute Hassan II, Rabat 10000, Morocco
2 Avian Pathology Unit, Department of Veterinary Pathology and Public Health, Agronomy and Veterinary Institute Hassan II, Rabat B.P. 6202, Morocco
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(21), 3605; https://doi.org/10.3390/rs17213605
Submission received: 11 July 2025 / Revised: 31 August 2025 / Accepted: 9 September 2025 / Published: 31 October 2025

Highlights

What are the main findings?
  • Across 121 studies (2015–April 2025), machine learning—especially Random Forest—is most used, while deep learning provides higher accuracy for complex wetlands, particularly when fusing Sentinel-1 radar with Sentinel-2 optical imagery.
  • Coverage is uneven: China and coastal wetlands dominate, bird-habitat studies are few, and validation still leans on overall accuracy with limited class-level reporting.
What is the implication of the main finding?
  • Prioritize SAR–optical fusion and fit-for-purpose deep learning models for heterogeneous wetlands; report class-level metrics and use external validation to improve comparability and transfer.
  • Address geographic and thematic gaps and link mapping outputs to bird-habitat variables; use UAV imagery for micro-habitats while minimizing disturbance.

Abstract

Wetlands, among the most productive ecosystems on Earth, shelter a diversity of species and help maintain ecological balance. However, they face growing anthropogenic and climatic threats, which underscores the need for regular, long-term monitoring. This study presents a systematic review of 121 peer-reviewed articles published between January 2015 and 30 April 2025 that applied machine learning (ML) and deep learning (DL) for wetland mapping and bird-habitat monitoring. Despite rising interest, applications remain fragmented, especially for avian habitats; only 39 studies considered birds, and fewer explicitly framed wetlands as bird habitats. Following PRISMA 2020 and the SPIDER framework, we compare data sources, classification methods, validation practices, geographic focus, and wetland types. ML is predominant overall, with random forest the most common baseline, while DL (e.g., U-Net and Transformer variants) is underused relative to its broader land cover adoption. Where reported, DL shows a modest but consistent accuracy advantage over ML for complex wetland mapping, and this advantage increases when synthetic aperture radar (SAR) and optical data are fused. Validation still relies mainly on overall accuracy (OA) and the Kappa coefficient (κ), with limited class-wise metrics. Salt marshes and mangroves dominate thematically, and China geographically, whereas peatlands, urban marshes, tundra, and many regions (e.g., Africa and South America) remain underrepresented. Multi-source fusion is beneficial yet not routine; the combination of unmanned aerial vehicles (UAVs) and DL is promising for fine-scale avian micro-habitats but constrained by disturbance and labeling costs. We conclude with actionable recommendations to enable more robust and scalable monitoring. This review can be considered the first comparative synthesis of ML/DL methods applied to wetland mapping and bird-habitat monitoring, and it highlights the need for more diverse, transferable, and ecologically and socially integrated AI applications in this field.

1. Introduction

During the last decades, the world has undergone profound transformations that have significantly impacted different aspects, including the natural functioning of ecosystems and biodiversity dynamics [1]. These changes include a variety of factors such as climate change [2], land use and land cover shifts [3], pollution, and invasive species introductions, among others. Different studies highlight the relationship between these global changes and their effects on ecosystem function and structure, emphasizing the impact of human activities and their contribution to increasing rates of species invasions and species extinctions [4,5], which lead to altering ecosystem services and biodiversity loss.
Wetlands stand out as the most vulnerable ecosystems and are highly affected by these global shifts. Wetlands provide critical ecosystem services, including water purification, carbon sequestration, and flood regulation, while also serving as essential habitats for numerous species. They emerge as vital ecosystems that support a wide range of bird species, providing breeding, feeding, and stopover sites during migration. They play an essential role in maintaining environmental stability and ecological balance. However, half of the wetland areas worldwide have been damaged due to agriculture, urbanization, and climate change [6]. The loss of these habitats can significantly impact wild bird populations, as it reduces their access to resources and impacts their migration patterns.
Interestingly, birds contribute to all four categories of ecosystem services [7], including supporting, provisioning, regulating, and cultural services. In light of this, wild birds are studied because they offer a window into avian life in natural environments [8]. Indeed, wild birds serve as important indicators of ecosystem health and biodiversity, as they are sensitive to anthropogenic changes and act as a key player in the interplay between people and nature [9]. Moreover, they quickly respond to changes in resource availability due to their high mobility and adaptive behaviors, along with their sensitivity to changes at lower trophic levels, since they are at or near the top of the food chain [10]. This ability to exploit resources effectively makes birds competent ecosystem engineers. They also play a key role in linking spatial and temporal ecosystem processes and fluxes across great distances and time scales [7]. For example, migratory birds can transport nutrients and energy across continents, influencing productivity and biodiversity in both breeding and wintering areas [11].
The global shifts that our planet is encountering have drastic consequences on migratory networks and the viability of migration flyways [12], which can alter species composition and survival rates among bird populations [13]. Thus, understanding the complexity of interactions between birds, ecosystems, and human activities, as well as enhancing wild birds’ monitoring methods, is essential to developing effective conservation strategies and rendering the decision-making process more efficient and targeted. In other words, monitoring wild birds has proven to be very important in understanding their behavior, habitat preferences, and population dynamics [14]. It provides valuable insights into their ecological requirements and helps in making informed decisions regarding their conservation and management.
Traditionally, monitoring wild birds and migratory birds has relied on manual observation by birdwatchers and volunteers and the involvement of citizen science programs [15,16]. These methods involve systematic bird surveys, bird banding, and the collection of observational data on bird behaviors, abundance, and distribution across various habitats and seasons [17]. Despite their decent accuracy, these traditional methods for collecting data are very time-consuming, work-intensive, and costly [18,19]. In other cases, the ornithologist may face access challenges; some areas are considered dangerous, and others are restricted [20]. Another drawback is the limited number of samples produced along with the poor geographic coverage of field surveys [21,22].
Given these challenges, researchers have been developing new methods to address these issues, focusing on the integration of automated monitoring systems, such as acoustic sensors, GNSS tracking tags, camera traps, and meteorological stations [23,24]; as well as remote-sensing technologies, including satellite imagery and unmanned aerial vehicles (UAVs). These technologies offer more opportunities for data collection and play a vital role in monitoring bird habitats, assessing landscape changes, and identifying the different variables that influence bird populations [25,26]. Satellite data can provide information on vegetation cover, land cover changes, and habitat fragmentation [27]. UAVs, equipped with high-resolution sensors and cameras, help gather detailed spatial data on nesting sites and migratory stopover locations [28,29].
The continuing evolution of these technologies has reached another frontier with the integration of artificial intelligence (AI) into bird monitoring and conservation planning [30,31]. AI-based image recognition algorithms can process vast amounts of camera trap or UAV imagery to identify and classify bird species, count individuals, and monitor their movements in real time [32]. AI models are also capable of analyzing acoustic data from recording devices to identify bird vocalizations [33]. In addition, AI-powered predictive modeling techniques can forecast species distribution, potential threats, and habitat suitability based on several variables and historical data [34].
The critical need to understand migration patterns and how wetland shifts affect wild bird populations, along with the evolving landscape of ML applications, highlights the importance of a comprehensive review of existing research. Despite advancements in remote sensing and ML models in different domains, there are still gaps when it comes to wetland monitoring and the assessment of wild birds’ habitats. Our survey aims to explore the remote-sensing technologies used in this domain and to understand the efficacy of ML models for more accurate monitoring. We not only aim to provide a state-of-the-art analysis of remote-sensing techniques coupled with ML but also to highlight the contributions of previous studies in advancing this field. To our knowledge, our work is one of the few comprehensive reviews, if not the first, that aims to bridge the gap between remote-sensing applications and ML-driven analysis specifically for wetland ecosystems. By synthesizing existing methodologies, their strengths, and their limitations, as well as the challenges that hinder each of them, we seek to identify the most effective approaches for wetland monitoring, the most suitable indicators to explore, and the type of input data required to ensure successful monitoring. This review encompasses both satellite remote sensing and unmanned aerial vehicles (UAVs), with a stronger focus on the former. By positioning our review at the intersection of remote sensing, ML, and wild bird and wetland conservation, we intend to offer valuable insights to researchers, conservationists, and policymakers.
In this article, we aim to explore the field of wetland mapping and habitat monitoring using remote sensing, particularly satellite imagery, along with ML and DL models. Therefore, our paper is organized into distinct sections, beginning with an introduction to contextualize the environmental and ecological importance of wetlands. The following section describes our systematic review methodology, including the development of focused research questions, the use of the SPIDER framework, and clearly defined inclusion and exclusion criteria. The core analysis of the paper is divided into thematic categories, including the characterization of wetland types, the remote-sensing technologies utilized (with an emphasis on satellite imagery), the ML/DL models adopted, and the geographic and ecological distribution of the studies reviewed. We examine and compare the performance of various models, identify dominant validation metrics, and synthesize key patterns, gaps, and methodological innovations across studies.
This is followed by a comprehensive discussion section that addresses the research questions, highlights underexplored wetland types and regions, and examines challenges such as data scarcity, model transferability, and limited multi-source fusion. The manuscript concludes by summarizing our findings, emphasizing the need for more diverse, transparent, and ecologically integrated approaches, and proposing strategic directions for future research in AI-enabled wetland monitoring.

2. Materials and Methods

2.1. Scope and Time Window

This review covers the literature published between January 2015 and April 2025. We selected 2015 as the baseline because two inflection points reshaped wetland monitoring. First, a sensor/data inflection: the launch of Sentinel-2A (2015) and the maturation of the open Copernicus program (together with Sentinel-1 C-band SAR, operational from late 2014) delivered freely available, global 10 m optical and all-weather SAR time series at high revisit, enabling routine multi-temporal mapping and accelerating adoption of ML/DL workflows on cloud platforms [35]. Second, a policy inflection: the 2030 Agenda for Sustainable Development (2015) and the Ramsar Strategic Plan (2016–2024) increased institutional demand for standardized wetland inventories and monitoring. Concluding on 30 April 2025 provides a complete, policy-relevant decade that captures the period of fastest growth in open satellite data, cloud computation, and AI-enabled mapping [36]. Moreover, before 2015, wetland mapping relied mainly on Landsat/ASTER/SPOT optical data and early spaceborne SAR (ERS-1/2, JERS-1, RADARSAT). Workflows were dominated by unsupervised/supervised classifiers and maximum-likelihood; object-based image analysis (OBIA) and early ML (SVMs, ANNs) were emerging but not yet mainstream. Standard tools included spectral indices such as NDWI for open-water detection [37,38] and early optical–SAR fusion noted in synthesis papers [39]. DL was essentially absent, revisit/resolution limited fine-scale habitat work, and desktop processing constrained large time-series analyses—conditions that changed markedly after 2015.
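For reference, the NDWI used in this earlier literature is a simple band ratio; a minimal sketch is given below, assuming co-registered green and near-infrared reflectance arrays in NumPy (the threshold shown is illustrative and scene-dependent).

```python
# Minimal sketch: McFeeters (1996) NDWI for open-water detection,
# assuming two co-registered reflectance arrays (green and NIR bands).
import numpy as np

def ndwi(green: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """NDWI = (green - NIR) / (green + NIR); values > 0 typically flag open water."""
    green = green.astype("float32")
    nir = nir.astype("float32")
    return (green - nir) / (green + nir + 1e-10)  # epsilon avoids division by zero

# Example: a simple water mask from a threshold (threshold is scene-dependent).
water_mask = ndwi(np.random.rand(100, 100), np.random.rand(100, 100)) > 0.0
```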

2.2. Research Protocol

In our research on wetland monitoring, we adopt a systematic review, designed according to the PRISMA 2020 statement [40]. Although the review protocol was finalized before data collection, it was not prospectively registered.
This approach guarantees a rigorous, transparent, and comprehensive literature synthesis. It follows a well-defined methodology that minimizes bias, enhances reproducibility, and enables an in-depth review of all previous studies that align with our research questions. This method encompasses the analysis of the remote-sensing technologies used, the processing and classification techniques applied, the types of wetlands studied, the various ML/DL models employed, and the challenges encountered in the field. A systematic approach is known for its ability to provide comprehensive coverage of relevant literature by employing structured search strategies across multiple scientific databases, thereby capturing a holistic understanding of how remote-sensing technologies, especially satellite imagery, and image processing based on emerging ML/DL techniques have been applied in wetland monitoring and wild bird-habitat assessment. Moreover, the use of well-chosen keywords and Boolean operators helps to identify relevant studies systematically, reducing the risk of overlooking critical studies. In light of this, we used the SPIDER framework (Sample, Phenomenon of Interest, Design, Evaluation, and Research Type) presented by [41]. We chose this framework for its effectiveness in addressing qualitative and mixed-method research questions, which suits our study. Unlike traditional frameworks such as PICO (Population, Intervention, Comparison, Outcome) [42], which focus primarily on clinical and intervention-based studies, SPIDER allows for a broader exploration of research trends, methodologies, and challenges related to our topic. The systematic review consists of several well-structured phases. It began by defining the research objective and using the SPIDER framework to establish the scope of our review. We then developed the search strategy and identified keywords to conduct our searches. We established detailed inclusion and exclusion criteria to screen the results. Once eligible studies were identified, we collected key information from each, evaluated them, and synthesized the findings to identify trends, challenges, and gaps in the existing literature. The findings serve as a valuable resource for researchers, guide future studies, and inform conservation strategies (Figure 1).
Our systematic review follows a well-structured approach to ensure a rigorous and comprehensive analysis of existing literature on the use of remote sensing for wetland monitoring. The review follows several key steps, as explained below, to guarantee an in-depth analysis.

2.2.1. Definition of the Research Objective

The first step in the systematic review process is to establish a clear research objective, using the phenomenon of interest (P) component of the SPIDER framework. This phenomenon of interest explores how remote-sensing technologies, especially satellite imagery, contribute to wetland ecosystem monitoring and wild bird-habitat assessment using ML/DL algorithms. Additionally, our review aims to understand how these algorithms enhance the processing and interpretation of remotely sensed data to improve wetland classification and monitoring accuracy. To delineate our research objective, we formulated the following questions:
  • Which remote-sensing technologies are used for wetland and bird-habitat monitoring?
  • Which ML/DL models are most frequently used?
  • Which wetland types and regions are over-/underrepresented?

2.2.2. Search Strategy Development

To identify relevant literature, we used multiple academic databases, including Scopus, ScienceDirect, MDPI, SpringerLink, Web of Science, and IEEE. All searches were performed in April 2025 and the results were limited to publications from 1 January 2015 to 30 April 2025. We then defined keywords and Boolean operators (AND, OR, NOT) to refine the search queries. We based the selection of our search terms on the sample (S) and research type (R) components of the SPIDER framework, focusing on studies that examined the application of remote sensing in wetland and bird-habitat monitoring as well as the role of ML and DL techniques in improving result accuracy.
To ensure a comprehensive search, we used a combination of specific keywords, including “wild birds”, “wetland”, “monitoring”, “remote sensing”, “UAVs”, “satellite imagery”, “artificial intelligence”, “machine learning”, “migration flyways”, “assessment”, and “habitat”. These keywords were combined in various ways using Boolean operators (e.g., AND, OR) to expand or refine search results. Searches were repeated with different keyword combinations to maximize the coverage of relevant studies.
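For illustration, a composite query of this kind might take the following form; this is a hypothetical example assembled from the keywords above, not the exact string submitted to each database.

```
("wetland" OR "marsh" OR "mangrove") AND
("remote sensing" OR "satellite imagery" OR "UAVs") AND
("machine learning" OR "deep learning" OR "artificial intelligence") AND
("wild birds" OR "habitat" OR "migration flyways" OR "monitoring" OR "assessment")
```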

2.2.3. Definition of Inclusion and Exclusion Criteria

Establishing inclusion and exclusion criteria is an important aspect of systematic reviews to ensure high-quality research protocols [43,44]. Inclusion criteria identify the essential characteristics that studies must meet to be included in our review. When dealing with a rapidly evolving field like remote sensing, well-defined inclusion and exclusion criteria help minimize bias and improve reproducibility in systematic reviews [43]. Ref. [44] suggests an approach that balances inclusiveness with quality control, outlining a seven-step process in which inclusion/exclusion criteria are applied progressively and irrelevant studies are eliminated in phases.
Table 1 below summarizes the inclusion and exclusion criteria.
Studies relying exclusively on MODIS were excluded from this review because the 250–500 m spatial resolution is generally insufficient for class-resolved wetland delineation and micro-habitat analysis. While MODIS provides valuable long-term environmental indicators (e.g., vegetation dynamics, flooding frequency, or LST) [45,46], such products are more effectively used as ancillary inputs in combination with higher-resolution sensors (e.g., Sentinel-2 or Landsat) rather than as standalone datasets for wetland classification. This exclusion ensures that only studies with adequate spatial detail for habitat-level assessment were considered.

2.2.4. Risk of Bias Assessment

To ensure the selection of high-quality studies, a structured assessment process was implemented. We used the QUADAS-2 tool for diagnostic-accuracy studies to judge methodological rigor and risk of bias [47]. QUADAS-2 evaluates four domains:
Study relevance/data selection:
  • The study focuses on wetland monitoring using remote sensing.
  • The study involves ML algorithms.
  • The study answers the questions defined above in the research objective section.
Dataset quality/reference standard:
  • The study provides details on the remote-sensing datasets that are used.
Methodological rigor/index test:
  • The study provides a detailed and clear methodology description.
  • The study uses appropriate remote sensing and ML techniques for classification.
Performance evaluation/flow and timing:
  • The study reports performance metrics.
  • The study includes a comparison with benchmark models or traditional methods.
To systematically rank the quality of the studies, we implemented a (0–5) scoring system:
0: Low-quality study -> excluded from review;
1–3: Moderate-quality study -> included if the methodology is well-detailed;
4–5: High-quality study -> included in results.
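The scoring can be illustrated as a simple tally over the domain checks listed above; the sketch below uses our own illustrative helper names and decision logic and is not part of the QUADAS-2 tool itself.

```python
# Illustrative sketch: mapping five quality checks inspired by the domains above
# to the 0-5 score and an inclusion decision (helper names are ours).
def quality_score(relevant: bool, dataset_described: bool, methods_clear: bool,
                  methods_appropriate: bool, metrics_reported: bool) -> int:
    """Count how many quality checks a study satisfies (0-5)."""
    return sum([relevant, dataset_described, methods_clear,
                methods_appropriate, metrics_reported])

def decision(score: int, methodology_detailed: bool) -> str:
    if score == 0:
        return "exclude"                       # low-quality study
    if score <= 3:                             # moderate quality
        return "include" if methodology_detailed else "exclude"
    return "include"                           # high quality (4-5)

# Example: a study passing four of five checks with a well-detailed methodology.
print(decision(quality_score(True, True, True, False, True), methodology_detailed=True))
```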

2.2.5. Data Extraction

We first used the Scopus database for the systematic retrieval of relevant studies, applying our predefined keywords and selected timeline. We also enriched our findings with studies from other academic databases, as mentioned above. All articles that met these initial filtering criteria were exported in RIS format for further processing. We imported our files into Zotero, a reference management tool, where the studies were organized and structured. This step enabled an efficient preliminary filtering. We then exported our dataset in CSV format for further analysis. In Excel, we developed a structured spreadsheet with columns that align with our research questions, including study metadata (authors, year, title, study area, methodology, results, performance metrics, etc.). We screened the titles and abstracts in our sheet and removed all duplicates. This framework made our systematic extraction process easier and ensured that all the selected studies correspond to our research inquiries.
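A minimal sketch of the de-duplication and date-filtering step is shown below, assuming a pandas workflow over the CSV exported from Zotero; the file name and column names are illustrative, not our exact spreadsheet.

```python
# Illustrative sketch: load the Zotero CSV export, drop near-identical titles,
# and keep studies within the review window (file/column names are assumptions).
import pandas as pd

records = pd.read_csv("zotero_export.csv")                     # hypothetical export
records["title_key"] = records["Title"].str.lower().str.strip()

deduplicated = records.drop_duplicates(subset=["title_key"])   # remove duplicates

years = pd.to_numeric(deduplicated["Publication Year"], errors="coerce")
in_window = deduplicated[(years >= 2015) & (years <= 2025)]    # review time window

in_window.to_csv("screening_sheet.csv", index=False)
print(len(records), "records ->", len(in_window), "after de-duplication and date filter")
```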

3. Results

This section synthesizes the major findings of our systematic review according to our predefined research questions and in line with the established inclusion and exclusion criteria.

3.1. Statistical Overview and Study Characteristics

A total of 572 records were retrieved from the combined database search. After automated and manual de-duplication (122 duplicates), 511 unique articles remained. Title-and-abstract screening then removed 176 papers that were review articles, benchmark studies, inventories, or other secondary literature, as well as 22 non-English papers, leaving 313 studies. Full-text assessment of these 313 articles against the predefined inclusion criteria yielded a final corpus of 121 primary research articles that addressed at least two of our research questions and were therefore retained for quantitative and qualitative synthesis (Figure 2).
The selected studies were then analyzed according to wetland types, data sources, applied methodologies, model performance, and the main challenges identified in the field.

3.2. Temporal Distribution

The analysis of the temporal distribution of the selected studies reveals a progressive increase in scientific attention toward wetland monitoring, particularly from 2020 onwards (Figure 3).
Figure 3 shows a clear shift in publication activity in the review window. Output is sparse before 2018 (fewer than two articles per year), increases in 2018–2020 (6 each), and then accelerates sharply: 12 (2021), 15 (2022), 29 (2023), and 34 (2024). Approximately half of all included studies cluster in 2023–2024, and almost 90% were published from 2020 onward, indicating that the growth of the field is recent and rapid. The 2025 count (n = 11) covers January–April only; extrapolated over a full year, it would likely match 2023–2024 levels, so we treat 2025 as incomplete in trend comparisons. This surge aligns with easier access to free, high-cadence satellite archives (e.g., Sentinel-2), the maturation of cloud platforms (e.g., Google Earth Engine) [48,49,50] that lowered the barrier to processing time-series imagery at scale, and the fast uptake of ML/DL methods for pixel-wise classification and change detection [51,52].

3.3. Study Geographic Distribution

Geographically, the research focus remains heavily weighted towards China [52,53,54]. Figure 4 summarizes the geographic distribution of the 121 included studies, highlighting a striking spatial imbalance. China accounts for the largest share (34 studies), followed by Canada (18) and the United States (17). European countries collectively contribute 20 studies, while other Asian nations account for 8, South America for 9, Africa for 7, and Australia for 4. A small proportion of studies (8) adopt a global or multi-regional scope. Overall, research efforts are heavily concentrated in China and North America, which together represent 63% of the total. In contrast, substantial geographic gaps persist, particularly in the Amazon Basin, large parts of Africa, and across the Asian continent beyond China.
Specifically, extensive investigations have been carried out on pivotal wetland complexes such as Dongting Lake [55], Shengjin Lake [56], the Yellow River Delta [57], and the Liaohe Estuary [58,59]. Research targeting coastal mangrove ecosystems in Southeast China was equally substantial, often integrating high-resolution UAV imagery alongside satellite datasets to capture the dynamics of these sensitive habitats [60,61,62]. Outside China, noteworthy contributions came from studies focusing on some wetlands of Canada [63,64] and the Great Lakes Basin in North America [65], demonstrating the global relevance of wetland conservation.

3.4. Wetland and Bird-Habitat Classes

The 121 included studies cluster into five recurring themes. The corpus is dominated by coastal and estuarine systems (including mangroves, tidal flats, and salt marshes), which account for 32.8% of the studies [66,67,68]; within this group, salt marshes [69,70] and tidal flats [71,72] are frequent foci. A second cluster covers inland wetlands, including seasonal floodplain lakes [73,74], marshes [75], and open waters [76]. A smaller share targets peatlands and cold-region systems [77,78], which are methodologically demanding. Studies of artificial/managed and urban wetlands (rice paddies, reservoirs/ponds, canals, aquaculture, city lakes/rivers) are growing but still very limited (only four studies found) [79,80,81,82]. Moreover, several studies pursued thematic objectives beyond mapping physical extents, investigating soil salinity gradients [83], vegetative succession and habitat dynamics [58,84], and species-specific habitat models such as those for Kandelia mangroves [85] and migratory waterbirds [64]. This diversity in ecological focus highlights an evolution in research priorities from pure land cover mapping towards a more integrated understanding of wetland ecosystem processes and services [86,87]. Among the 121 studies reviewed, 39 (32%) addressed birds in some capacity, ranging from species distribution modeling [88] to biodiversity assessments [89,90]. However, only 27 (22%) explicitly framed wetlands as bird habitats, directly linking wetland condition, extent, or vegetation composition to avian ecology [91,92]. The remaining 12 studies involved bird monitoring in broader landscape contexts [93,94,95] (e.g., mixed ecosystems [89], agricultural mosaics [96], or urban areas [88]) where wetlands were only a minor component of the analysis. Wetland-framed bird-habitat studies were dominated by research in coastal/estuarine systems [97,98] and large inland lakes/floodplains [99], with smaller representation from peatlands, depressional wetlands, and riverine floodplains [74]. Collectively, bird-habitat studies show that seasonal hydroperiod and vegetation structure are the primary proximal drivers of habitat suitability: optical indices (NDVI/NDBI/NDWI) and SAR backscatter/coherence capture vegetation greenness, canopy moisture, and inundation persistence, variables repeatedly associated with shorebird and waterfowl presence [100].

3.5. Data Sources and Acquisition Techniques

Through our systematic review, we noted that Sentinel-1 SAR data, with its all-weather imaging capability, is commonly used for mapping wetland hydrological dynamics [52,101,102]. Concerning Sentinel-2, optical imagery provided critical spectral detail for vegetation characterization and water body delineation [48,103]. The utility of long-term datasets was equally emphasized, with Landsat archives supporting analyses of historical wetland changes spanning multiple decades [58,66,104]. Enhancing these primary datasets, many studies incorporated auxiliary data layers such as digital elevation models to account for topographic influences on wetland hydrology [64,103], climatic variables to model environmental drivers [84], and soil data for applications like salinity mapping [57,83]. Species occurrence data, particularly for avian studies, were also utilized to model habitat suitability and distribution [86]. The integration of UAV-based imagery represented a significant advancement for high-resolution wetland mapping, especially in studies requiring fine-scale monitoring of vegetation structures and habitat conditions [75,105,106]. Notably, UAV-derived data were successfully employed in Southeast China for mangrove monitoring, where object-based classification approaches leveraged the spatial granularity of drone imagery to enhance mapping resolution [104,107].
As shown in Figure 5, sensor families break down as follows: medium-resolution optical dominates (43%; led by Sentinel-2 and Landsat), radar/microwave is second (23%; chiefly Sentinel-1), followed by very-high-resolution optical imagery. UAV/aerial imagery contributes 9% combined, elevation/structure auxiliaries 10%, while atmospheric products are negligible.
Across the literature, the use of cloud-computing platforms, like Google Earth Engine GEE (5%) [59,108], was transformative, enabling efficient processing of large, multi-temporal datasets. Multi-source data fusion emerged as a recurring theme (20%), with the combined use of SAR and optical imagery consistently delivering improved classification accuracy and ecological interpretability [52,109,110]. These integrations underline a collective shift towards more holistic data frameworks capable of capturing the multifaceted nature of wetland environments.
Across the 39 bird-habitat studies, medium-resolution optical satellites dominate the sensing stack: of 43 total sensor mentions, Sentinel-2 MSI accounts for 17 (39.5%) and the Landsat series for 10 (23.3%), together supplying 62.8% of all mentions. These are primarily used for multi-class wetland/bird-habitat mapping [111,112], seasonal compositing [91], and vegetation/water indices [74]. MODIS appears in 4/43 (9.3%) for hydroperiod and thermal context [74] and habitat suitability maps [95]. SAR totals 4/43 (9.3%), comprising Sentinel-1 (1/43) plus ALOS-2 and GF-3 (1/43 each), used to capture inundation and under-canopy water and to stabilize time series under cloud [113,114]. High-resolution optical assets (RapidEye, GF-6 WFV, ASTER) contribute 3/43 (7%) for small features/coastlines [88,115], UAVs contribute 4/43 (9.3%) for fine-scale micro-habitat and nesting detection [113,116,117], while airborne LiDAR appears in 3/43 (7%) for elevation/canopy structure, and VIIRS night-lights and field spectrometers are each used once as anthropogenic/vegetation stress proxies [118] (Figure 6).

3.6. Applied Methodologies and Classification Approaches

3.6.1. Methods Usage by Wetland Type

  • ML usage by wetland type
Random forest (RF) is the dominant method across nearly all classes, with the largest counts in marsh (41), coastal–generic (23), swamp (19), bog (13), and fen (12). SVM is the second workhorse, especially in marsh (18), coastal–generic (12), and mangrove (6), while XGBoost/GBM appears much less frequently (e.g., marsh 9, swamp 5, bog/fen 3 each). MLP/ANN is rare. Overall, ML activity concentrates in marsh, coastal, and swamp classes, consistent with workflows that rely on index-rich optical features, simple SAR statistics, and object/proximity features (Figure 7).
  • DL usage by wetland type
DL usage is highest in peatland/forested–herbaceous systems, with CNN/ResNet-style encoders peaking in bog (16), fen (14), and swamp (13) and still substantial in marsh (20); Transformers follow the same pattern (bog 7, fen 6, swamp 7, marsh 8). U-Net occurs across several types but at lower counts (bog/fen 4–5; marsh/swamp 4 each). Temporal DL appears where phenology/hydroperiod matters (marsh 4; swamp/bog/fen 4 each). Object detectors and GANs are niche (fewer than 4 per class), used mainly for augmentation or specific detection tasks. In contrast, coastal classes are comparatively DL-light (coastal–generic: CNN/ResNet 6, Transformer 1; salt marsh: CNN/ResNet 3, Transformer 0), reflecting continued reliance on ML baselines in these environments (Figure 8).
Based upon the above results, RF/SVM dominate marsh, salt marsh, coastal, and floodplain classes, whereas DL families (CNN/ResNet encoders, Transformers, U-Net variants, and temporal DL) are most prevalent in bog, fen, and swamp. This pattern aligns with scene physics and label ecology: coastal and marsh environments often separate well with index-rich optical features, simple SAR statistics, and context layers—favoring RF/SVM—while peatland/forested systems exhibit fine-scale heterogeneity and weak single-date spectral contrast, where multi-scale, multi-sensor DL brings larger gains (Table 2). Temporal dependence further tilts bog/fen/swamp toward DL (TempCNN/LSTM), especially when hydroperiod and moisture dynamics are explicit. For coastal classes, tide-aware compositing is the key determinant; without it, robust ML baselines frequently outperform heavier DL.

3.6.2. Machine Learning Applied to Wetland Mapping and Bird-Habitat Monitoring

Across the 121 selected papers, classical ML remains the dominant choice, while DL is used more selectively. In wetland mapping (N = 82), ML appears in 56 studies (68.3%) [119] and DL in 30 (36.6%) [120]. In the bird-habitat subset (N = 39), ML appears in 30 (76.9%) [112], whereas DL is rare (1/39) [111]. This confirms that DL is more visible in general wetland mapping than in habitat-focused work.
In wetland mapping, ML usage is led by random forest (RF: 46/82; 56.1%) [121], followed by SVM (18/82; 22.0%) [122], shallow ANN/MLP (10/82; 12.2%) [123], decision tree/CART/C5.0 (8/82; 9.8%) [71] and XGBoost (6/82; 7.3%) [70], with smaller showings of kNN (4/82; 4.9%) [124] and occasional maximum-likelihood and rule-based baselines (each 2/82; 2.4%) [125]. Single appearances include MaxEnt and naïve Bayes (each 1/82; 1.2%) [126,127] (Figure 9).
Habitat-focused papers rely chiefly on interpretable, data-efficient ML pipelines. RF is most frequent (20/39; 51.3%), followed by SVM (11/39; 28.2%) [118], shallow ANN/MLP (6/39; 15.4%) [91], decision-tree/CART/C5.0 (4/39; 10.3%) [96], XGBoost (4/39; 10.3%) [128], GLM (4/39; 10.3%) [93] and MaxEnt (4/39; 10.3%) [95]. Less frequent are GBM and GAM (each 2/39) [89,94], with single instances of rule-based, maximum-likelihood, kNN, and naïve Bayes (each 1/39) [129] (Figure 10).
In [130], RF was integrated within a two-phase feature selection framework combining LDJ ranking and recursive feature elimination with cross-validation, enabling the reduction of a large multi-source feature space (SAR and Sentinel-2) from 270 to 38 features. This not only improved model interpretability but also enhanced classification accuracy, achieving an OA of 94% for wetland mapping in the Yellow River Delta while reducing training time by over 40%. Similarly, ref. [52] implemented RF in a two-stage classification approach using OBIA. An initial OBIA-RF model classified six broad wetland types across 11 ecoregions in East Asia, using 86 spatiotemporal features derived from Sentinel-1/2 time series, while a secondary hierarchical decision tree refined the waterbody subclassification. The resulting wetland map achieved an OA of 88.74%. In a different context, ref. [119] applied RF for potential wetland distribution modeling using over 30 environmental variables (e.g., climate, topography, vegetation), demonstrating RF’s capability in ecological niche modeling. The model reached an OA of 79.3%, with variable importance analysis identifying slope and precipitation as the dominant predictors. Beyond classification, RF was also used in regression settings. Ref [77] employed RF regression to model floristic gradients from satellite spectral indices in restored peatlands, with R2 values reaching up to 0.62 for open mires, confirming RF’s potential for continuous vegetation modeling. Finally, ref. [60] used RF to develop an interpretable mangrove classification model by extracting decision rules directly from trained RF trees. The derived five-rule interpretable mangrove mapping algorithm (IMMA) maintained high accuracy (82.3% in China, 78.8% in Florida), demonstrating RF’s utility in generating transferable and explainable classification logic. Collectively, these studies underscore RF’s versatility across classification and regression tasks, its scalability across spatial extents, and its capacity for integration with object-based, hierarchical, and interpretable modeling frameworks in wetland remote sensing.
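For illustration, the recursive feature elimination with cross-validation step described in [130] can be sketched in scikit-learn as follows; the data are placeholders, and the authors' full pipeline also includes an initial ranking stage that is omitted here.

```python
# Minimal sketch: RF wrapped in recursive feature elimination with
# cross-validation (RFECV), shrinking a large multi-source feature space
# as in [130]; X (n_samples x n_features) and y are placeholder arrays.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV

rng = np.random.default_rng(0)
X = rng.random((500, 270))            # e.g., stacked SAR + Sentinel-2 features
y = rng.integers(0, 6, size=500)      # e.g., six wetland classes

selector = RFECV(
    estimator=RandomForestClassifier(n_estimators=200, random_state=0),
    step=10,                          # drop 10 features per elimination round
    cv=5,
    scoring="accuracy",
    n_jobs=-1,
)
selector.fit(X, y)
print("selected features:", selector.n_features_)    # retained subset size
X_reduced = X[:, selector.support_]                   # reduced feature matrix
```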
In addition, gradient boosting methods such as XGBoost and LightGBM are increasingly adopted, particularly in mangrove-focused and coastal wetland research, where their advanced boosting capabilities allow high classification accuracies [57,62]. These models were used in data-rich environments via UAV imagery and LiDAR data, where they were able to outperform other models due to their ability to handle heterogeneous features and imbalanced classes [131]. Moreover, ref. [132] combines Sentinel-2 with UAV and SAR data to identify mangrove species under high spectral similarity conditions, using XGBoost, which reached an OA of 94%. Additionally, traditional classifiers were also employed. Support Vector Machine (SVM) was applied in deltaic zones using Landsat imagery, achieving high accuracies and proving its relevance in complex and heterogeneous areas [66]. C5.0 decision trees were used in a time-series NDVI-based classification of salt marsh vegetation [106], showing competitive results compared to other algorithms.
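A minimal sketch of a gradient-boosting classifier trained on a fused optical, SAR, and UAV feature table, in the spirit of [132], is shown below; the feature stack, class labels, and hyperparameters are synthetic placeholders, not the study's data or tuning.

```python
# Illustrative sketch: XGBoost on a stacked Sentinel-2 + SAR + UAV feature table
# for multi-class wetland/mangrove classification (data are synthetic).
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = np.hstack([rng.random((800, 10)),    # Sentinel-2 band/index features
               rng.random((800, 4)),     # SAR backscatter/texture features
               rng.random((800, 6))])    # UAV-derived structural features
y = rng.integers(0, 4, size=800)         # e.g., four vegetation/species classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = xgb.XGBClassifier(n_estimators=300, max_depth=6,
                          learning_rate=0.1, subsample=0.8)
model.fit(X_tr, y_tr)
print("hold-out OA:", accuracy_score(y_te, model.predict(X_te)))
```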
Probabilistic and predictive models like MaxEnt and CA-Markov were utilized in habitat suitability and wetland change forecasting [84,102], particularly when the goal is long-term spatial analysis. In [102], MaxEnt was mainly chosen for its robustness with presence-only data and its ability to handle complex nonlinear relationships between species occurrence and environmental predictors. In [84], an RF model was used to map the historical distribution of wetlands using Landsat imagery. A Markov chain was used to simulate future land cover changes.
Overall, the analysis reveals a clear trend toward using random forest models for their generalizability and performance across wetland types and sensor configurations. However, the application of non-tree-based models such as SVM remains limited to specific use cases (UAV data or comparison studies). There is also a significant lack of studies evaluating model transferability across regions or integrating ecological variables beyond remote sensing inputs, which suggests a critical research gap that should be addressed in the future.

3.6.3. Deep Learning Models Applied to Wetland Mapping and Bird-Habitat Monitoring

The growing progression of methodological approaches was also evident in the adoption of DL frameworks, where architectures such as U-Net demonstrated strong capabilities in segmenting complex wetland landscapes with high spatial resolution [133,134]. DenseNet and VGG-16 architectures were similarly explored for their strength in extracting deep hierarchical features, enhancing mapping accuracy in heterogeneous wetland environments [63,109]. In wetland mapping, DL methods concentrate in vision backbones and segmenters: generic CNNs (12/82; 14.6%) [135], the U-Net family (10/82; 12.2%) [136] and Transformers (ViT/Swin; 6/82; 7.3%) [135], with smaller uses of ResNet (5/82; 6.1%) [137] and GAN-based augmentation (3/82; 3.7%) [138]. VGG, FCN, DeepLab, DenseNet, and temporal DL (LSTM/TempCNN) each appear once (1/82; 1.2%) [100,139]. In this set, DL is typically deployed for pixel-wise delineation at 10–30 m, or to encode broader spatial context (Figure 11).
DL is rare in bird-habitat monitoring: only one study uses a U-Net-family model (1/39; 2.6%), and no DL-only designs are observed [111] (Figure 10).
Among the dominant architectures, convolutional neural networks (CNNs) remain the cornerstone of DL in wetland research. Multiple studies successfully used CNN frameworks to exploit the spatial patterns in satellite imagery, improving the delineation of wetland boundaries and reducing classification confusion with adjacent land cover types. In particular, CNN-based models proved highly effective when applied to multispectral data from Sentinel-2 [140], offering precise mapping of vegetation zones within wetland complexes by capturing fine-grained spatial features present in high-resolution optical data [141]. Similarly, CNN models combined with Sentinel-1 SAR imagery demonstrated strong performance in detecting flooded vegetation areas, thanks to the sensitivity of radar backscatter to moisture content [142]. A significant evolution of the CNN architecture is represented by U-Net models [143], including their enhanced variants such as 3D U-Net and Residual U-Net [109,144], which were prominently featured in several studies. These models, originally designed for biomedical image segmentation [144], have been effectively adapted for wetland classification tasks, particularly in scenarios requiring pixel-level delineation of complex landscape features [49]. Applied predominantly to Sentinel-2 and UAV-derived datasets, U-Net models excelled in segmenting salt marshes and mangrove zones, delivering high accuracy while maintaining computational efficiency [107,145]. Furthermore, the integration of vegetation indices into the U-Net framework augmented the spectral discrimination of wetland vegetation, thereby avoiding issues of class confusion frequently encountered in heterogeneous environments [109].
A notable advancement in DL for wetland monitoring is the adoption of Transformer-based architectures, particularly the Swin Transformer [135] and the cross-attention Vision Transformer network (CVTNet) [146]. These models, leveraging self-attention mechanisms, captured long-range dependencies within high-dimensional remote-sensing data, outperforming conventional CNNs in several complex mapping scenarios. For instance, the Swin Transformer, when applied to multi-source data from Sentinel-1 and Sentinel-2, provided higher classification accuracy and better generalization across different wetland types [109,135,147]. Similarly, CVTNet effectively combined the strengths of CNNs and Vision Transformers, demonstrating exceptional performance in delineating subtle transitions between wetland vegetation types [146]. Another innovative contribution is the development of WetNet, a spatial-temporal ensemble DL model specifically designed for wetland classification [129]. WetNet integrates CNNs for spatial feature extraction with recurrent neural networks (RNNs) to capture temporal dynamics, enabling robust classification even under conditions of seasonal variability and cloud cover.
In parallel, some studies explored the potential of generative adversarial networks (GANs), particularly for data augmentation and enhancing the robustness of classification models under limited training data scenarios. GANs were employed to expand training datasets synthetically, thereby improving model performance and avoiding overfitting risks in data-scarce wetland mapping applications [138]. Moreover, ref. [109] proposed a 3D-GAN integrated with a Vision Transformer (ViT) to perform complex wetland classification using Sentinel-1 and Sentinel-2 data. The GAN component was tasked with synthesizing additional training samples for underrepresented wetland classes, leveraging a conditional map unit to generate class-specific data patches. The generator and discriminator were structured in a competitive learning setting, where the generator sought to mimic the statistical distribution of real Sentinel image patches, while the discriminator aimed to distinguish between real and synthetic inputs. Once the GAN reached satisfactory classification accuracy, the combined real and synthetic samples were used to train the Vision Transformer for final classification. The method was applied across three Canadian sites (Saint John, Sussex, Fredericton), and achieved promising classification metrics: an OA of 75.61%, an average accuracy of 73.4%, and a Kappa index of 71.87%, demonstrating the feasibility of synthetic data augmentation for large-scale wetland mapping.
Further advancements included ensemble learning strategies, where multiple algorithms were combined to capitalize on their respective strengths, leading to notable improvements in classification outcomes [63,148]. To enhance model transparency and interpretability, several studies implemented SHAP (Shapley additive explanations), which provided insights into feature contributions and supported the refinement of input variable selection [86]. Approaches to training data generation were also diversified; sample-migration techniques were applied to bridge temporal gaps and increase the robustness of models across different monitoring periods [59], while automatic sample generation facilitated the scaling of models to larger geographic extents [69]. Moreover, the integration of DL with historical datasets was suggested in a study exploring historical topographic maps combined with DL architectures to reconstruct wetland distribution changes over time. This approach highlighted the adaptability of DL frameworks to non-traditional data sources, broadening the applicability of AI-driven mapping techniques beyond contemporary satellite imagery [107]. To summarize architectures, Table 2 consolidates the main DL families used in wetland mapping, their typical inputs, and the wetland types most often targeted.
Table 2. Synthesis overview of deep learning architectures used in wetland mapping.
Deep Learning Architecture | Main Input | Wetland Type(s) | References
U-Net/Residual U-Net | Sentinel-2 MSI; UAV RGB/HSI; S1 + S2 fusion; DEM/LiDAR auxiliaries | Mangroves; salt marsh; tidal flats; freshwater marshes; peatlands | [49,133,142,144,145]
CNN (encoder/backbone) | Sentinel-2; Landsat; S1 + S2 multi-sensor stacks | Salt marsh; mangroves; boreal/estuarine mixes | [63,67,135]
Transformer (ViT/Swin/SegFormer) | S1 + S2 multi-temporal stacks; multi-source fusion | Boreal/estuarine; freshwater marsh | [108,135,146]
3D CNN | S1 + S2 cubes or UAV HSI cubes (space–time/spectral) | Floodplain; marsh mosaics | [140]
ConvLSTM/TempCNN | Sentinel-2 time series; S1 coherence/backscatter series | Marsh/swamp phenology; hydroperiod mapping | [75,135]
R-CNN | UAV RGB/multispectral; very-high-resolution optical | Nest/colony detection; micro-habitats in coastal/inland wetlands | [116,117]
GAN (augmentation/synthesis) | S1 + S2 patches; class-balanced synthetic samples | Historical/boreal/estuarine wetlands; class-imbalance scenarios | [63,140]
Ensembles (CNN ensembles/WetNet) | S1 + S2; ancillary DEM/shoreline masks | Salt marsh; boreal/estuarine; general wetland classes | [63,143]
DNN/Deep MLP | High-dimensional feature stacks (indices, texture, topography) | Mangroves; mixed inland wetlands | [141]
Hybrid CNN–ViT (attention fusion) | S1 + S2; optionally UAV/LiDAR-derived structure | Coastal wetlands and shoreline–wetland transitions | [146]
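To make the U-Net family in Table 2 concrete, the sketch below shows a compact encoder-decoder with skip connections for pixel-wise wetland classification. It is a minimal PyTorch illustration assuming a 10-band input patch and six classes; it does not reproduce any specific architecture cited in Table 2.

```python
# Minimal sketch of a U-Net-style encoder-decoder for pixel-wise wetland
# classification (assumptions: PyTorch, a 10-band input stack, six classes).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with batch norm and ReLU, as in the original U-Net."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    def __init__(self, in_channels=10, n_classes=6, base=32):
        super().__init__()
        self.enc1 = conv_block(in_channels, base)
        self.enc2 = conv_block(base, base * 2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, n_classes, 1)   # per-pixel class logits

    def forward(self, x):
        e1 = self.enc1(x)                        # full resolution
        e2 = self.enc2(self.pool(e1))            # 1/2 resolution
        b = self.bottleneck(self.pool(e2))       # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))    # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))   # skip connection
        return self.head(d1)

# Example forward pass on a 256x256 patch with 10 spectral bands -> (1, 6, 256, 256).
logits = MiniUNet()(torch.randn(1, 10, 256, 256))
```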

3.7. Model Training, Validation, and Performance

Based on the reviewed studies, there was a variability regarding model training strategies, validation schemes, and performance reporting. The diversity reflects both the heterogeneity of wetland environments and the evolving methodologies in ML and DL applications in remote sensing.

3.7.1. Training Data and Labeling Strategies

Across both corpora, manual labeling dominates. In the bird-habitat studies, 64.1% of the selected papers report manual labels from expert visual interpretation [112], high-resolution imagery and/or field points [118]; 10.3% use semi-automated/stratified sampling [81], and 2.6% note GEE-assisted labeling [95]. In the wetland mapping set, 54.9% (45/82) rely on manual labels [149,150]; 3.7% use rule-based labeling [151]; 3.7% mention GEE-assisted sampling [152]; 1.2% report semi-automated/stratified sampling [72]; and 1.2% use pseudo/weak labels (e.g., teacher/pseudo-label or sample-migration strategies). Overall, semi-automated or weak-label approaches remain uncommon in both subsets, which likely contributes to the preference for data-efficient ML over DL in habitat-focused papers.

3.7.2. Validation Protocols and Performance

In bird-habitat related studies, external tests (cross-site/independent site) are reported by 87.2% [118,128]; 46.2% use random hold-outs [115]; 17.9% report cross-year (temporal) transfer [91,117]; and 15.4% use k-fold cross-validation [111]. In wetland mapping, external tests appear in 26.8% [153]; random hold-outs in 9.8%; cross-year in 4.9% [59]; and k-fold CV in 3.7% [154]. Thus, bird-habitat papers more often claim external testing, while wetland mapping papers more often stay within internal splits [155].
Validation most commonly relied on confusion matrices and OA (reported by 63.4% of wetland mapping studies and 51.3% of bird-habitat studies) [52,69,84,140], followed by the Kappa statistic [52,71] and by F1 score and AUC [54,85,156] (Figure 12a,b). We report accuracy metrics separately for bird-habitat monitoring (N = 39) and wetland mapping (N = 82). Values in Figure 12 are per-paper occurrences; since many studies report multiple metrics and categories overlap, the percentages within each panel can exceed 100%.
Across methodologies, the reported accuracies were generally high. RF models typically achieved accuracy between 85% and 90% [48,55,59], XGBoost models exceeded 92% in several applications [105,148], and DL models, such as U-Net and DenseNet, exceeded 95% in complex classification tasks [86,102].
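For reference, the metrics reported above (OA, Cohen's Kappa, and class-wise F1) can all be derived from the same set of reference and predicted labels; a minimal sketch with placeholder labels is shown below.

```python
# Minimal sketch: computing the metrics most often reported in the corpus
# (OA, Cohen's kappa, per-class F1) from reference vs. predicted labels.
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             confusion_matrix, f1_score)

y_true = [0, 0, 1, 1, 2, 2, 2, 1, 0, 2]   # reference wetland classes (placeholder)
y_pred = [0, 1, 1, 1, 2, 2, 0, 1, 0, 2]   # classifier output (placeholder)

print("confusion matrix:\n", confusion_matrix(y_true, y_pred))
print("overall accuracy:", accuracy_score(y_true, y_pred))
print("kappa:", cohen_kappa_score(y_true, y_pred))
print("per-class F1:", f1_score(y_true, y_pred, average=None))   # class-wise reporting
print("macro F1:", f1_score(y_true, y_pred, average="macro"))
```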

3.7.3. Model Selection and Tuning

For ML approaches, random forest (RF) and XGBoost were the most widely used, due to their robustness and ability to handle high-dimensional feature spaces [105,157,158]. These models were often tuned using grid search or trial-and-error optimization. In DL-based studies, U-Net, ResNet, and Swin Transformer architectures were prominent, especially for pixel-wise classification tasks. However, hyperparameter tuning in DL studies was often underreported or only briefly mentioned, pointing to a lack of transparency and reproducibility in some cases [77,132].
The results reveal that general wetland mapping has matured around multi-sensor fusion (e.g., S1 + S2, HS + SAR + DEM) and robust classical ML. In contrast, bird-focused studies more often rely on species distribution models (SDMs) or acoustic methods, with limited DL uptake and sparse class-wise reporting. This imbalance partly explains why DL’s potential for fine-scale habitat monitoring remains underexplored despite clear sensor synergies.

4. Discussion

Based on the results of our systematic review, we note the rapid evolution and diversification in the field of wetland mapping, driven by advances in satellite-based Earth observation and ML/DL models. The integration of optical and radar remote sensing with these architectures has markedly improved classification performance and spatiotemporal resolution in recent years. However, the review also exposes persistent thematic, geographic, and methodological gaps that limit the approaches’ generalizability and operational scalability.
  • Sensor Usage and Data Availability
Sentinel-2 emerged as the most frequently used satellite platform, cited in over 40% of studies, followed by Landsat and Sentinel-1. This dominance reflects the high spatial, spectral, and temporal resolution of Sentinel-2, as well as its free availability. However, the relatively low use of radar data (Sentinel-1) and the even lower uptake of multi-sensor fusion techniques suggest that many studies still rely on single-sensor imagery, underutilizing the synergistic potential of SAR–optical combinations. UAV data, despite clear advantages for fine-scale habitat structure, remain sporadic in both subsets. Beyond operational, cost, and processing hurdles, disturbance and permitting constraints (species-specific responses at low altitudes/approach angles and agency-mandated buffers/permits) [159] further restrict routine deployment, even though nesting sites and micro-habitats frequently occur below the 10–30 m pixel scale.
  • Wetland Types and Regional Biases
The review revealed a marked geographic concentration, with China accounting for over 27% of the reviewed studies (Figure 4). This concentration aligns with China’s enabling policy and data environment: a dedicated Wetlands Protection Law (effective 1 June 2022) institutionalizes wetland inventories and management [55], the CHEOS/Gaofen Earth-observation program has expanded national high-resolution imaging capacity [160]; and widely used domestic cloud platforms such as PIE-Engine lower barriers to large-area ML/DL workflows. Together with China’s globally significant wetland estate and Ramsar network, these factors plausibly drive higher research output relative to many regions [161,162]. In contrast, entire regions—particularly Africa, the Middle East, and South America—are critically underrepresented. These include ecologically fragile systems where data is sparse or uneven—e.g., Maghreb/North-African sebkhas and chotts, Sahelian seasonal pans and floodplains, Horn of Africa and Red Sea fringe wetlands, Arabian sabkhas and oases, Orinoco Llanos floodplains, and parts of the Congo Basin peatlands—all of which are sensitive to hydrologic and climatic variability and often lack consistent reference data. Thematically, mangroves, coastal wetlands, and floodplains dominate the literature, while peatlands, tundra wetlands, and urban wetlands are rarely addressed.
  • Dominant Approaches and Model Robustness
Across the corpus, ML remains predominant while DL is unevenly adopted: in wetland mapping (n = 82), ML appears in 68.3% and DL in 36.6% of studies (counts by presence); design-wise, 52.4% are ML-only, 20.7% DL-only, and 15.9% are a combination of both. In the bird-habitat subset (n = 39), ML dominates (76.9%) but DL is rare (2.6%), with 76.9% ML-only and 0% DL-only. These distributions confirm that DL is materially present for generic mapping but remains underused when wetlands are treated explicitly as bird habitats.
Where sub-meter data are available, DL paired with UAV imagery is particularly effective for micro-habitat tasks. Fine-scale avian micro-habitats (nests, roosts, colony edges) are best captured by UAV imagery combined with DL instance/semantic segmenters (e.g., R-CNN/U-Net variants), which outperform ML baselines for small objects in cluttered backgrounds. However, label scarcity and disturbance constraints limit routine deployment. Label-efficiency strategies (weak/pseudo-labels, semi-supervised learning) and UAV→satellite knowledge distillation are promising to scale DL while reducing field disturbance.
To compare studies fairly, we used a simple, consistent scheme: we recorded whether each paper used machine learning or deep learning, which data stack it relied on (e.g., Sentinel-2, SAR, or SAR+optical), and whether performance was tested on independent sites/years or only with internal splits. If a paper evaluated several models, we kept one representative per family to avoid double-counting, and we used F1 where available (otherwise converted from mIoU or, if needed, OA).
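For per-class scores, F1 (Dice) and IoU are related one-to-one, so a simple conversion can harmonize metrics across papers; applying the same formula directly to a reported mIoU is only an approximation, because the mean of converted per-class values generally differs from converting the mean. The helper below is an illustrative sketch of that harmonization, not code from any reviewed study.

```python
# F1 (Dice) and IoU are monotonically related per class: F1 = 2*IoU / (1 + IoU).
# Converting a reported mIoU this way is an approximation (mean of a nonlinear
# transform != transform of the mean), so treat such values as indicative.
def iou_to_f1(iou: float) -> float:
    if not 0.0 <= iou <= 1.0:
        raise ValueError("IoU must lie in [0, 1]")
    return 2.0 * iou / (1.0 + iou)

# Example: a paper reporting a class-wise IoU of 0.80 corresponds to F1 = 0.889.
print(round(iou_to_f1(0.80), 3))  # 0.889
```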
In wetland mapping, using OA as the common metric, DL outperformed ML across the largest strata: External/Transfer—DL 92.7%; Internal—DL 94.2% vs. ML 92.1%; and, in the most populated sensor stratum (S1 + S2, internal), DL 94.9% vs. ML 90.5%. For bird-habitat studies, DL samples are scarce; in the only matched stratum (External/Transfer, S2-only), DL reached 95.5% vs. ML 89.9%. We treat this bird-habitat DL advantage as indicative rather than conclusive, since a single study is insufficient.
  • Validation and Transferability Challenges
Although many studies reported high internal accuracy (>85%), only a few tested model performance across different temporal periods or geographic regions. Cross-regional or cross-temporal transfer learning—essential for building generalizable and operational models—was rarely attempted. Similarly, few studies conducted detailed post-classification error analysis to explain systematic misclassifications, such as confusion between flooded vegetation and dry marsh or spectral overlap between seasonal wetland classes.

5. Limitations

5.1. Geographic Skew

The corpus is unevenly distributed: 27.2% (33/121) of studies are China-based, while Africa, the Middle East, and South America are sparsely represented. This raises the risk of ecological/taxonomic overfitting to East-Asian deltas and temperate coasts and limits external validity in arid, tropical, and high-elevation systems.

5.2. Wetland Type Imbalance

Our corpus is dominated by marsh (58 studies) and coastal–generic (36), while several inland/peat classes are sparse (e.g., bog 24, fen 22, wet meadow 8, peatland–generic 8, aquatic bed 5). This uneven coverage creates small-N categories in which percentages are unstable, confidence is low, and some statistical tests (e.g., chi-square) are strained. Consequently, type-resolved results are most reliable for high-N classes (marsh/coastal) and only indicative for low-N classes; we therefore report counts alongside percentages and caution against generalizing to undersampled types.

5.3. Label Scarcity

Many studies rely on Google Earth visual interpretation or legacy wetland inventories for training and reference data. However, tide phase, seasonal windows, and water-level variability often differ between the imagery used for labeling and the analysis mosaics, and inventories may have coarse or inconsistent boundaries. These factors introduce boundary and class-assignment noise (e.g., fringe marsh vs. flooded meadow, water under canopy), which then propagates into both training and validation, inflating within-area accuracy and masking systematic errors at ecotones. Minimal mitigations include time-stamped labeling (tide/phenology), spatially independent QA points, and consensus/soft labels or uncertainty-aware training to reflect ambiguous edges.
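As a hedged illustration of the consensus/soft-label mitigation mentioned above (not taken from any reviewed study), the sketch below derives per-pixel soft labels and an agreement weight from several independent annotations; the multi-annotator setup and array names are assumptions.

```python
# Consensus (soft) labels from multiple annotators for ambiguous wetland edges.
# Assumption: `annotations` has shape (n_annotators, n_pixels) with integer
# class codes; the names and annotator count are illustrative.
import numpy as np

def soft_labels(annotations: np.ndarray, n_classes: int):
    n_annotators, n_pixels = annotations.shape
    # Per-pixel class frequencies become a soft label distribution.
    soft = np.zeros((n_pixels, n_classes))
    for c in range(n_classes):
        soft[:, c] = (annotations == c).sum(axis=0) / n_annotators
    hard = soft.argmax(axis=1)      # majority-vote label
    agreement = soft.max(axis=1)    # 1.0 = unanimous, lower = ambiguous ecotone
    return soft, hard, agreement

# Possible use: down-weight ambiguous pixels during training,
# e.g., sample_weight = agreement, or train on the soft distribution directly.
```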

5.4. UAV Availability

Although UAV imagery is uniquely valuable for nesting- and micro-habitat mapping (sub-meter structure, DSM/CHM), its use remains sporadic. Wildlife-protection protocols and park rules often restrict low-altitude flights because flush and disturbance rates rise at low altitudes and steep approach angles, especially during breeding, and many protected areas are no-fly zones without special authorization [159,163]. These constraints, together with airspace compliance, pilot/insurance costs, short battery endurance/weather windows, and the processing burden (large orthomosaics, radiometric normalization, annotation of dense instances), raise the effective cost of routine campaigns. The result is limited and uneven UAV coverage (precisely where DL would benefit most from fine, instance-level labels) and a tendency to favor ML pipelines trained on coarser satellite labels. Practical mitigations (e.g., higher-altitude fixed-wing passes, strict buffer policies, off-season surveys, and label-efficiency strategies such as weak/pseudo-labels or UAV-to-satellite distillation) can reduce disturbance and cost but are not yet standard practice.

5.5. Analytical and Reporting Limitations

Out-of-area validation remains limited: only a minority of studies evaluate models on independent regions or years, so reported accuracies likely overestimate true transferability. Reporting is also heterogeneous: many papers provide only OA and Kappa coefficients, class-wise metrics (user’s/producer’s accuracy, F1) and segmentation metrics (intersection over union, IoU/mIoU) are included inconsistently, and uncertainty maps are rarely reported. This heterogeneity prevents a rigorous meta-analysis of performance differences across algorithms and wetland types. In addition, taxonomic schemes vary (e.g., CWCS/ANAE versus bespoke coastal classes), which hinders cross-study comparison and model portability across jurisdictions. Finally, results are sensitive to sensor and preprocessing choices: cloud contamination drives temporal mosaicking; SAR stacks require careful speckle denoising and co-registration; and fusion/resampling steps (e.g., pansharpening) can introduce artifacts if not documented.

6. Recommendations for Practice and Future Research

Drawing on the synthesis of 121 primary studies (2015–April 2025), we offer the following recommendations to strengthen methodological rigor, ecological relevance, and practical utility in wetland and bird-habitat monitoring with remote sensing.
(1)
Standardize performance reporting and validation.
Studies should report, at minimum, OA, κ, class-wise precision/recall/F1, and/or IoU/mIoU. External validation on independent sites and/or years should be routine, with explicit disclosure of train/validation/test partitions. Per-class confusion matrices and, for imbalanced problems, precision–recall curves should be provided. Uncertainty maps are strongly encouraged to guide risk-aware interpretation (a minimal reporting sketch follows this list).
(2)
Adopt SAR–optical fusion as a default data input.
As a general practice, Sentinel-1 (C-band SAR) and Sentinel-2 imagery should be fused, with tide- and phenology-aware compositing. Ancillary data should be incorporated more systematically to improve class separability (a fusion sketch follows this list).
(3)
Match model families to wetland classes.
Random forest provides robust baselines for modest label volumes and index-rich inputs; deep learning is preferable for heterogeneous peatland/forested systems, intricate class boundaries, or high-resolution inputs (UAV/VHR). The fusion sketch after this list includes such a random-forest baseline.
(4)
Integrate avian ecology explicitly.
Wetland maps intended for bird-habitat assessment should quantify habitat-relevant predictors (e.g., hydroperiod metrics, vegetation structure/height proxies, edge and connectivity metrics) and couple them with species distribution, occupancy, or density models where observations exist. Thresholds that translate map outputs into management triggers (e.g., minimum inundation duration for shorebird staging) should also be reported (a hydroperiod sketch follows this list).
(5)
Leverage UAV × DL for micro-habitats.
When sub-meter data are available, pairing UAV imagery with instance/semantic segmentation is recommended for nests, roosts, and colony edges (an instance-segmentation sketch follows this list). To minimize disturbance, protocols should prioritize higher-altitude passes, buffer zones, and surveys outside the breeding season.
(6)
Test transferability and manage domain shift.
Future studies should include cross-region and cross-year evaluations and, where applicable, domain-adaptation and sensor-transfer tests (e.g., Landsat↔Sentinel); a spatial cross-validation sketch follows this list.
(7)
Address geographic and typological gaps.
Priority should be given to underrepresented regions (Africa, the Middle East, South America) and wetland types (peatlands, tundra, urban/constructed wetlands), as well as to the wetland–bird-habitat linkage, to enable more advanced and integrative monitoring.
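To make Recommendation (1) concrete, the following sketch (illustrative only; array and function names are assumptions) derives OA, κ, class-wise precision/recall/F1, per-class IoU, and the confusion matrix from reference and predicted labels using scikit-learn.

```python
# Minimal reporting sketch for Recommendation (1): class-wise metrics alongside
# OA and kappa. `y_true`/`y_pred` are 1-D arrays of class labels from an
# independent test partition; the names are illustrative assumptions.
import numpy as np
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             classification_report, confusion_matrix,
                             jaccard_score)

def report(y_true, y_pred, class_names):
    print("OA:", round(accuracy_score(y_true, y_pred), 3))
    print("Kappa:", round(cohen_kappa_score(y_true, y_pred), 3))
    # Precision corresponds to user's accuracy, recall to producer's accuracy.
    print(classification_report(y_true, y_pred, target_names=class_names))
    print("Per-class IoU:", np.round(jaccard_score(y_true, y_pred, average=None), 3))
    print("Confusion matrix:\n", confusion_matrix(y_true, y_pred))
```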
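For Recommendation (2), and the random-forest baseline of Recommendation (3), the sketch below stacks co-registered Sentinel-1 VV/VH backscatter with four Sentinel-2 bands and two spectral indices (NDVI, NDWI) into a per-pixel feature matrix and fits a random forest; all array names and the common-grid assumption are illustrative, not a prescribed pipeline.

```python
# SAR-optical fusion sketch (Recommendation 2) with an RF baseline
# (Recommendation 3). Assumes co-registered 2-D arrays on a common grid;
# names are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def build_feature_stack(s1_vv, s1_vh, s2_red, s2_green, s2_nir, s2_swir):
    ndvi = (s2_nir - s2_red) / (s2_nir + s2_red + 1e-6)
    ndwi = (s2_green - s2_nir) / (s2_green + s2_nir + 1e-6)
    stack = np.stack([s1_vv, s1_vh, s2_red, s2_green, s2_nir, s2_swir,
                      ndvi, ndwi], axis=-1)        # (H, W, n_features)
    return stack.reshape(-1, stack.shape[-1])      # (H*W, n_features)

# Training on labeled pixels (assumed convention: label 0 = unlabeled).
# X = build_feature_stack(...); y = labels.ravel()
# rf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
# rf.fit(X[y > 0], y[y > 0])
```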
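For Recommendation (4), the sketch below turns a time series of per-date water masks into two simple hydroperiod predictors (inundation frequency and longest continuous inundation, in observations) plus an example threshold flag; the arrays and the 60% threshold are assumptions for illustration only.

```python
# Hydroperiod predictors from a per-date water-mask time series
# (Recommendation 4). `water_masks` has shape (n_dates, H, W) with 1 = water;
# names and the example threshold are illustrative assumptions.
import numpy as np

def hydroperiod_metrics(water_masks: np.ndarray):
    inundation_freq = water_masks.mean(axis=0)       # fraction of dates wet
    # Longest run of consecutive wet observations per pixel.
    longest = np.zeros(water_masks.shape[1:], dtype=int)
    current = np.zeros_like(longest)
    for t in range(water_masks.shape[0]):
        current = np.where(water_masks[t] == 1, current + 1, 0)
        longest = np.maximum(longest, current)
    return inundation_freq, longest

# Example management trigger (purely illustrative): flag pixels inundated in
# at least 60% of observations as candidate shorebird staging habitat.
# freq, longest = hydroperiod_metrics(water_masks)
# staging_candidate = freq >= 0.60
```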
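For Recommendation (5), one possible starting point is to fine-tune an off-the-shelf instance-segmentation model on annotated UAV tiles; the sketch below follows the standard torchvision fine-tuning recipe for Mask R-CNN, with the two-class setup (background + nest) as an assumed example rather than a prescription.

```python
# Instance-segmentation starting point for UAV micro-habitat mapping
# (Recommendation 5): standard torchvision Mask R-CNN fine-tuning recipe.
# The two-class setup (background + nest) is an illustrative assumption.
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_nest_segmenter(num_classes: int = 2):
    model = maskrcnn_resnet50_fpn(weights="DEFAULT")   # COCO-pretrained backbone
    # Replace the box head for the new number of classes.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    # Replace the mask head accordingly.
    in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)
    return model

# Training then follows the usual torchvision detection loop on UAV tiles with
# per-instance masks (e.g., nests, roost patches), typically with augmentation
# to compensate for scarce labels.
```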
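For Recommendation (6), a minimal way to probe geographic transferability is grouped cross-validation, holding out whole regions (or years) rather than random pixels; the sketch below uses scikit-learn's GroupKFold, with the group array and estimator as illustrative assumptions.

```python
# Cross-region evaluation sketch (Recommendation 6): leave whole regions out
# instead of using random pixel splits. `X`, `y`, and `region_ids` are
# illustrative; `region_ids` assigns each sample to its source region or year.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import GroupKFold

def cross_region_f1(X, y, region_ids, n_splits=5):
    scores = []
    splitter = GroupKFold(n_splits=n_splits)
    for train_idx, test_idx in splitter.split(X, y, groups=region_ids):
        model = RandomForestClassifier(n_estimators=300, random_state=0)
        model.fit(X[train_idx], y[train_idx])
        scores.append(f1_score(y[test_idx], model.predict(X[test_idx]),
                               average="macro"))
    return np.mean(scores), np.std(scores)   # macro-F1 across held-out regions
```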

7. Conclusions

This systematic review maps a decade of machine learning and deep learning for wetland mapping and bird-habitat monitoring (n = 121; January 2015–April 2025). We find a strong thematic emphasis on coastal systems—especially salt marshes and mangroves—and a geographic concentration in China, Europe, the USA, and Canada, alongside clear gaps across Africa, the Middle East, and South America, and in several wetland types (e.g., peatlands and urban/constructed wetlands). Methodologically, classical ML remains dominant, yet DL—particularly U-Net and Transformer variants—delivers consistent gains when paired with multi-source inputs (Sentinel-1/2) and high-resolution imagery (UAV/VHR). Reporting and validation, however, are heterogeneous, with limited external tests and sparse class-wise metrics.
For bird monitoring specifically, evidence is comparatively scarce: most studies treat wetlands indirectly rather than as explicit avian habitats. Where sub-meter data are available, UAV × DL shows clear advantages for micro-habitat delineation (nests, roosts, colony edges), but deployment is constrained by labeling cost and disturbance/permits. These issues, along with uneven sensing stacks and reference data, currently limit transferability.
Ultimately, this review not only offers a detailed inventory of existing approaches but also underscores the critical need for more inclusive, ecologically and socially grounded, and transferable monitoring frameworks. By identifying current limitations and pointing toward emergent solutions—such as GAN-based data augmentation, hybrid CNN–Transformer models, and explainable AI—the study provides a strategic foundation for future research and practical applications in conservation planning, habitat monitoring, and wetland management.

Author Contributions

Conceptualization, M.Z., K.A.E.K., I.S. and S.F.; methodology, M.Z., K.A.E.K. and I.S.; software, M.Z. and K.A.E.K.; validation, M.Z., K.A.E.K. and I.S.; formal analysis, M.Z., K.A.E.K. and I.S.; investigation, M.Z., K.A.E.K. and I.S.; resources, M.Z.; writing—original draft preparation, M.Z., K.A.E.K. and I.S.; writing—review and editing, M.Z., K.A.E.K. and I.S.; visualization, M.Z., K.A.E.K. and I.S.; supervision, K.A.E.K. and I.S.; project administration, K.A.E.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
κ: Kappa coefficient
3D GAN: 3D Generative Adversarial Network
AI: Artificial Intelligence
AUC: Area under the curve
C5.0 Decision Tree: C5.0 Classification Algorithm
CA-Markov: Cellular Automata-Markov Chain Model
CNN: Convolutional Neural Network
CNN ensemble: Ensemble of Convolutional Neural Networks
CNN-Vision Transformer fusion: Combined CNN and Vision Transformer Architecture
Coordinate Attention: Attention Mechanism Using Spatial Coordinates
DL: Deep Learning
DTW-Kmeans++: Dynamic Time Warping with K-means++
Deep CNN: Deep Convolutional Neural Network
DenseNet: Densely Connected Convolutional Network
Fractional-order derivatives: Mathematical Feature Transformation Method
GAN: Generative Adversarial Network
GBM: Gradient Boosting Machine
GEE: Google Earth Engine
IEEE: Institute of Electrical and Electronics Engineers
IoU: Intersection Over Union
InSAR: Interferometric Synthetic Aperture Radar
K-means: K-means Clustering
KNN: K-Nearest Neighbors
LiDAR: Light Detection and Ranging
LightGBM: Light Gradient Boosting Machine
mIoU: Mean Intersection Over Union
ML: Machine Learning
MaxEnt: Maximum Entropy Model
MDPI: Multidisciplinary Digital Publishing Institute
NDVI: Normalized Difference Vegetation Index
NDWI: Normalized Difference Water Index
OA: Overall Accuracy
OBIA: Object-Based Image Analysis
PICO: Population, Intervention, Comparison, Outcome
RF: Random Forest
RFE: Recursive Feature Elimination
Residual Attention U-Net: U-Net with Residual and Attention Mechanisms
Residual U-Net: U-Net with Residual Connections
SAR: Synthetic Aperture Radar
SHAP-DNN: SHapley Additive Explanations for Deep Neural Networks
SPIDER: Sample-Phenomenon of Interest-Design-Evaluation-Research Type
SVM: Support Vector Machine
UAV-LiDAR: Unmanned Aerial Vehicle-Mounted Light Detection and Ranging
VHR: Very-High-Resolution
ViT: Vision Transformer
XGBoost: Extreme Gradient Boosting

References

  1. Dornelas, M.; Gotelli, N.J.; McGill, B.; Shimadzu, H.; Moyes, F.; Sievers, C.; Magurran, A.E. Assemblage time series reveal biodiversity change but not systematic loss. Science 2014, 344, 296–299. [Google Scholar] [CrossRef] [PubMed]
  2. Pimm, S.L.; Jenkins, C.N.; Abell, R.; Brooks, T.M.; Gittleman, J.L.; Joppa, L.N.; Raven, P.H.; Roberts, C.M.; Sexton, J.O. The biodiversity of species and their rates of extinction, distribution, and protection. Science 2014, 344, 1246752. [Google Scholar] [CrossRef]
  3. Newbold, T. Future effects of climate and land-use change on terrestrial vertebrate community diversity under different scenarios. Proc. R. Soc. B Biol. Sci. 2018, 285, 20180792. [Google Scholar] [CrossRef] [PubMed]
  4. Cepic, M.; Bechtold, U.; Wilfing, H. Modelling human influences on biodiversity at a global scale–A human ecology perspective. Ecol. Model. 2022, 465, 109854. [Google Scholar] [CrossRef]
  5. Huang, J.; Wang, R.Y.; Chen, L. Analysis of Bird Habitat Suitability in Chongming Island Based on GIS and Fragstats. Int. J. For. Anim. Fish. Res. 2023, 7, 1–23. [Google Scholar] [CrossRef]
  6. Demarquet, Q.; Rapinel, S.; Dufour, S.; Hubert-Moy, L. Long-Term Wetland Monitoring Using the Landsat Archive: A Review. Remote Sens. 2023, 15, 820. [Google Scholar] [CrossRef]
  7. Mahendiran, M.; Azeez, P.A. Ecosystem Services of Birds: A Review of Market and Non-market Values. Entomol. Ornithol. Herpetol. Curr. Res. 2018, 7, 1000209. [Google Scholar] [CrossRef]
  8. Fair, J.M.; Paul, E.; Jones, J. Guidelines to the Use of Wild Birds in Research; Ornithological Council: Washington, DC, USA, 2010; pp. 1–215. [Google Scholar]
  9. Gregory, R.D.; Strien, A.V. Wild bird indicators: Using composite population trends of birds as measures of environmental health. Ornithol. Sci. 2010, 9, 3–22. [Google Scholar] [CrossRef]
  10. Fraixedas, S.; Lindén, A.; Piha, M.; Cabeza, M.; Gregory, R.; Lehikoinen, A. A state-of-the-art review on birds as indicators of biodiversity: Advances, challenges, and future directions. Ecol. Indic. 2020, 118, 106728. [Google Scholar] [CrossRef]
  11. Hunter, D.; Marra, P.P.; Perrault, A.M. Migratory Connectivity and the Conservation of Migratory Animals. Environ. Law 2011, 41, 317–354. [Google Scholar]
  12. Xu, Y.; Si, Y.; Takekawa, J.; Liu, Q.; Prins, H.H.; Yin, S.; Prosser, D.J.; Gong, P.; de Boer, W.F. A network approach to prioritize conservation efforts for migratory birds. Conserv. Biol. 2020, 34, 416–426. [Google Scholar] [CrossRef]
  13. Runge, C.A.; Watson, J.E.; Butchart, S.H.; Hanson, J.O.; Possingham, H.P.; Fuller, R.A. Protected areas and global conservation of migratory birds. Science 2015, 350, 1255–1258. [Google Scholar] [CrossRef] [PubMed]
  14. IJsseldijk, L.L.; ten Doeschate, M.T.; Brownlow, A.; Davison, N.J.; Deaville, R.; Galatius, A.; Gilles, A.; Haelters, J.; Jepson, P.D.; Keijl, G.O.; et al. Spatiotemporal mortality and demographic trends in a small cetacean: Strandings to inform conservation management. Biol. Conserv. 2020, 249, 108733. [Google Scholar] [CrossRef]
  15. Sauer, J.R.; Pardieck, K.L.; Ziolkowski, D.J.; Smith, A.C.; Hudson, M.A.R.; Rodriguez, V.; Berlanga, H.; Niven, D.K.; Link, W.A. The first 50 years of the North American Breeding Bird Survey. Condor 2017, 119, 576–593. [Google Scholar] [CrossRef]
  16. Dickinson, J.L.; Shirk, J.; Bonter, D.; Bonney, R.; Crain, R.L.; Martin, J.; Phillips, T.; Purcell, K. The current state of citizen science as a tool for ecological research and public engagement. Front. Ecol. Environ. 2012, 10, 291–297. [Google Scholar] [CrossRef]
  17. Ralph, C.J.; Geupel, G.R.; Pyle, P.; Martin, T.E.; DeSante, D.F. Handbook of Field Methods for Monitoring Landbirds. Director 1993, 144, 1–41. [Google Scholar]
  18. Lieury, N.; Devillard, S.; Besnard, A.; Gimenez, O.; Hameau, O.; Ponchon, C.; Millon, A. Designing cost-effective capture-recapture surveys for improving the monitoring of survival in bird populations. Biol. Conserv. 2017, 214, 233–241. [Google Scholar] [CrossRef]
  19. Markova-Nenova, N.; Engler, J.O.; Cord, A.F.; Wätzold, F. A Cost Comparison Analysis of Bird-Monitoring Techniques for Result-Based Payments in Agriculture. 2023, Volume 32. Available online: https://mpra.ub.uni-muenchen.de/id/eprint/116311 (accessed on 8 September 2025).
  20. Witmer, G.W. Wildlife population monitoring: Some practical considerations. Wildl. Res. 2005, 32, 259–263. [Google Scholar] [CrossRef]
  21. Wang, D.; Shao, Q.; Yue, H. Surveying wild animals from satellites, manned aircraft and unmanned aerial systems (UASs): A review. Remote Sens. 2019, 11, 1308. [Google Scholar] [CrossRef]
  22. Sauer, J.R.; Link, W.A.; Fallon, J.E.; Pardieck, K.L.; Ziolkowski, D.J. The North American Breeding Bird Survey 1966–2011: Summary Analysis and Species Accounts. N. Am. Fauna 2013, 79, 1–32. [Google Scholar] [CrossRef]
  23. Tibbetts, J.H. Remote sensors bring wildlife tracking to new level. BioScience 2017, 67, 411–417. [Google Scholar] [CrossRef]
  24. Stephenson, P.J. Technological advances in biodiversity monitoring: Applicability, opportunities and challenges. Curr. Opin. Environ. Sustain. 2020, 45, 36–41. [Google Scholar] [CrossRef]
  25. Stephenson, P.J. Integrating Remote Sensing into Wildlife Monitoring for Conservation. Environ. Conserv. 2019, 46, 181–183. [Google Scholar] [CrossRef]
  26. Kuiper, S.D.; Coops, N.C.; Hinch, S.G.; White, J.C. Advances in remote sensing of freshwater fish habitat: A systematic review to identify current approaches, strengths and challenges. Fish Fish. 2023, 24, 829–847. [Google Scholar] [CrossRef]
  27. Pettorelli, N.; Laurance, W.F.; O’Brien, T.G.; Wegmann, M.; Nagendra, H.; Turner, W. Satellite remote sensing for applied ecologists: Opportunities and challenges. J. Appl. Ecol. 2014, 51, 839–848. [Google Scholar] [CrossRef]
  28. Anderson, K.; Gaston, K.J. Lightweight unmanned aerial vehicles will revolutionize spatial ecology. Front. Ecol. Environ. 2013, 11, 138–146. [Google Scholar] [CrossRef]
  29. Hodgson, J.C.; Mott, R.; Baylis, S.M.; Pham, T.T.; Wotherspoon, S.; Kilpatrick, A.D.; Raja Segaran, R.; Reid, I.; Terauds, A.; Koh, L.P. Drones count wildlife more accurately and precisely than humans. Methods Ecol. Evol. 2018, 9, 1160–1167. [Google Scholar] [CrossRef]
  30. Lahoz-Monfort, J.J.; Magrath, M.J. A Comprehensive Overview of Technologies for Species and Habitat Monitoring and Conservation. BioScience 2021, 71, 1038–1062. [Google Scholar] [CrossRef]
  31. Drakshayini; Mohan, S.T.; Swathi, M.; Kadali, N. Leveraging Machine Learning and Remote Sensing for Wildlife Conservation: A Comprehensive Review. Int. J. Adv. Res. 2023, 11, 636–647. [Google Scholar] [CrossRef] [PubMed]
  32. Gonzalez, L.F.; Montes, G.A.; Puig, E.; Johnson, S.; Mengersen, K.; Gaston, K.J. Unmanned aerial vehicles (UAVs) and artificial intelligence revolutionizing wildlife monitoring and conservation. Sensors 2016, 16, 97. [Google Scholar] [CrossRef] [PubMed]
  33. Xie, J.; Hu, K.; Zhu, M.; Yu, J.; Zhu, Q. Investigation of Different CNN-Based Models for Improved Bird Sound Classification. IEEE Access 2019, 7, 175353–175361. [Google Scholar] [CrossRef]
  34. Pinto-Ledezma, J.N.; Cavender-Bares, J. Predicting species distributions and community composition using satellite remote sensing predictors. Sci. Rep. 2021, 11, 16448. [Google Scholar] [CrossRef]
  35. Yuan, S.; Liang, X.; Lin, T.; Chen, S.; Liu, R.; Wang, J.; Zhang, H.; Gong, P. A comprehensive review of remote sensing in wetland classification and mapping. arXiv 2025, arXiv:2504.10842. [Google Scholar] [CrossRef]
  36. Liu, Y.; Zhang, H.; Cui, Z.; Zuo, Y.; Lei, K.; Zhang, J.; Yang, T.; Ji, P. Precise Wetland Mapping in Southeast Asia for the Ramsar Strategic Plan 2016–24. Remote Sens. 2022, 14, 5730. [Google Scholar] [CrossRef]
  37. McFeeters, S.K. The use of the Normalized Difference Water Index (NDWI) in the delineation of open water features. Int. J. Remote Sens. 1996, 17, 1425–1432. [Google Scholar] [CrossRef]
  38. Besnard, A.G.; Davranche, A.; Maugenest, S.; Bouzillé, J.B.; Vian, A.; Secondi, J. Vegetation maps based on remote sensing are informative predictors of habitat selection of grassland birds across a wetness gradient. Ecol. Indic. 2015, 58, 47–54. [Google Scholar] [CrossRef]
  39. Ozesmi, S.L.; Bauer, M.E. Satellite remote sensing of wetlands. Wetl. Ecol. Manag. 2002, 10, 381–402. [Google Scholar] [CrossRef]
  40. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef]
  41. Cooke, A.; Smith, D.; Booth, A. Beyond PICO: The SPIDER tool for qualitative evidence synthesis. Qual. Health Res. 2012, 22, 1435–1443. [Google Scholar] [CrossRef]
  42. Frandsen, T.F.; Bruun Nielsen, M.F.; Lindhardt, C.L.; Eriksen, M.B. Using the full PICO model as a search tool for systematic reviews resulted in lower recall for some PICO elements. J. Clin. Epidemiol. 2020, 127, 69–75. [Google Scholar] [CrossRef]
  43. Rezac, S.J.; Salkind, N.J.; McTavish, D.; Loether, H. Inclusion and exclusion criteria in research studies: Definitions and why they matter. Teach. Sociol. 2018, 29, 257. [Google Scholar] [CrossRef]
  44. Meline, T. Selecting Studies for Systemic Review: Inclusion and Exclusion Criteria. Contemp. Issues Commun. Sci. Disord. 2006, 33, 21–27. [Google Scholar] [CrossRef]
  45. Wang, L.; Qu, J.J. NMDI: A normalized multi-band drought index for monitoring soil and vegetation moisture with satellite remote sensing. Geophys. Res. Lett. 2007, 34, 1–5. [Google Scholar] [CrossRef]
  46. Di Vittorio, C.A.; Georgakakos, A.P. Land cover classification and wetland inundation mapping using MODIS. Remote Sens. Environ. 2018, 204, 1–17. [Google Scholar] [CrossRef]
  47. NICE. The Guidelines Manual: Appendices B-I Audit and Service Improvement. 2012. Available online: https://www.nice.org.uk/process/pmg6/resources/the-guidelines-manual-appendices-bi-pdf-3304416006853 (accessed on 8 September 2025).
  48. Wang, D.; Mao, D.; Wang, M.; Xiao, X.; Choi, C.Y.; Huang, C.; Wang, Z. Identify and map coastal aquaculture ponds and their drainage and impoundment dynamics. Int. J. Appl. Earth Obs. Geoinf. 2024, 134, 104246. [Google Scholar] [CrossRef]
  49. Jia, M.; Wang, Z.; Mao, D.; Ren, C.; Song, K.; Zhao, C.; Wang, C.; Xiao, X.; Wang, Y. Mapping global distribution of mangrove forests at 10-m resolution. Sci. Bull. 2023, 68, 1306–1316. [Google Scholar] [CrossRef]
  50. Wang, M.; Mao, D.; Wang, Y.; Li, H.; Zhen, J.; Xiang, H.; Ren, Y.; Jia, M.; Song, K.; Wang, Z. Interannual changes of urban wetlands in China’s major cities from 1985 to 2022. ISPRS J. Photogramm. Remote Sens. 2024, 209, 383–397. [Google Scholar] [CrossRef]
  51. Zhang, Q.; Liu, M.; Zhang, Y.; Mao, D.; Li, F.; Wu, F.; Song, J.; Li, X.; Kou, C.; Li, C.; et al. Comparison of Machine Learning Methods for Predicting Soil Total Nitrogen Content Using Landsat-8, Sentinel-1, and Sentinel-2 Images. Remote Sens. 2023, 15, 2907. [Google Scholar] [CrossRef]
  52. Wang, M.; Mao, D.; Wang, Y.; Xiao, X.; Xiang, H.; Feng, K.; Luo, L.; Jia, M.; Song, K.; Wang, Z. Wetland mapping in East Asia by two-stage object-based Random Forest and hierarchical decision tree algorithms on Sentinel-1/2 images. Remote Sens. Environ. 2023, 297, 113793. [Google Scholar] [CrossRef]
  53. Zhao, C.; Jia, M.; Zhang, R.; Wang, Z.; Ren, C.; Mao, D.; Wang, Y. Mangrove species mapping in coastal China using synthesized Sentinel-2 high-separability images. Remote Sens. Environ. 2024, 307, 114151. [Google Scholar] [CrossRef]
  54. Chen, C.; Zhu, G.; Chen, X. Wetland Scene Segmentation of Remote Sensing Images Based on Lie Group Feature and Graph Cut Model. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 18, 1345–1361. [Google Scholar] [CrossRef]
  55. Deng, Y.C.; Jiang, X. Wetland Protection Law of the People’s Republic of China: New Efforts in Wetland Conservation. Int. J. Mar. Coast. Law 2023, 38, 141–160. [Google Scholar] [CrossRef]
  56. Wu, P.; Zhan, W.; Cheng, N.; Yang, H.; Wu, Y. A Framework to Calculate Annual Landscape Ecological Risk Index Based on Land Use/Land Cover Changes: A Case Study on Shengjin Lake Wetland. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 11926–11935. [Google Scholar] [CrossRef]
  57. Zhang, Z.; Fan, Y.; Zhang, A.; Jiao, Z. Baseline-Based Soil Salinity Index (BSSI): A Novel Remote Sensing Monitoring Method of Soil Salinization. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 202–214. [Google Scholar] [CrossRef]
  58. Mao, D.; Wang, Z.; Wang, Y.; Choi, C.Y.; Jia, M.; Jackson, M.V.; Fuller, R.A. Remote Observations in China’s Ramsar Sites: Wetland Dynamics, Anthropogenic Threats, and Implications for Sustainable Development Goals. J. Remote Sens. 2021, 2021, 9849343. [Google Scholar] [CrossRef]
  59. Ke, L.; Tan, Q.; Lu, Y.; Wang, Q.; Zhang, G.; Zhao, Y.; Wang, L. Classification and spatio-temporal evolution analysis of coastal wetlands in the Liaohe Estuary from 1985 to 2023: Based on feature selection and sample migration methods. Front. For. Glob. Change 2024, 7, 1406473. [Google Scholar] [CrossRef]
  60. Zhao, C.; Jia, M.; Wang, Z.; Mao, D.; Wang, Y. Identifying mangroves through knowledge extracted from trained random forest models: An interpretable mangrove mapping approach (IMMA). ISPRS J. Photogramm. Remote Sens. 2023, 201, 209–225. [Google Scholar] [CrossRef]
  61. Cao, J.; Liu, K.; Zhuo, L.; Liu, L.; Zhu, Y.; Peng, L. Combining UAV-based hyperspectral and LiDAR data for mangrove species classification using the rotation forest algorithm. Int. J. Appl. Earth Obs. Geoinf. 2021, 102, 102414. [Google Scholar] [CrossRef]
  62. Jia, M.; Zhang, R.; Zhao, C.; Zhou, Y.; Ren, C.; Mao, D.; Li, H.; Sun, G.; Zhang, H.; Yu, W.; et al. Synergistic estimation of mangrove canopy height across coastal China: Integrating SDGSAT-1 multispectral data with Sentinel-1/2 time-series imagery. Remote Sens. Environ. 2025, 323, 114719. [Google Scholar] [CrossRef]
  63. Jamali, A.; Mahdianpari, M.; Brisco, B.; Granger, J.; Mohammadimanesh, F.; Salehi, B. Comparing solo versus ensemble convolutional neural networks for wetland classification using multi-spectral satellite imagery. Remote Sens. 2021, 13, 2046. [Google Scholar] [CrossRef]
  64. Banks, S.; White, L.; Behnamian, A.; Chen, Z.; Montpetit, B.; Brisco, B.; Pasher, J.; Duffe, J. Wetland Classification with Multi-Angle/Temporal SAR Using Random Forests. Remote Sens. 2019, 11, 670. [Google Scholar] [CrossRef]
  65. Valenti, V.L.; Carcelen, E.C.; Lange, K.; Russo, N.J.; Chapman, B. Leveraging Google Earth Engine User Interface for Semiautomated Wetland Classification in the Great Lakes Basin at 10 m with Optical and Radar Geospatial Datasets. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 6008–6018. [Google Scholar] [CrossRef]
  66. Lemenkova, P. Support Vector Machine Algorithm for Mapping Land Cover Dynamics in Senegal, West Africa, Using Earth Observation Data. Earth 2024, 5, 420–462. [Google Scholar] [CrossRef]
  67. Guo, M.; Yu, Z.; Xu, Y.; Huang, Y.; Li, C. Me-net: A deep convolutional neural network for extracting mangrove using sentinel-2A data. Remote Sens. 2021, 13, 1292. [Google Scholar] [CrossRef]
  68. Ghorbanian, A.; Ahmadi, S.A.; Amani, M.; Mohammadzadeh, A.; Jamali, S. Application of Artificial Neural Networks for Mangrove Mapping Using Multi-Temporal and Multi-Source Remote Sensing Imagery. Water 2022, 14, 244. [Google Scholar] [CrossRef]
  69. Jie, L.; Wang, J. Research on the extraction method of coastal wetlands based on sentinel-2 data. Mar. Environ. Res. 2024, 198, 106429. [Google Scholar] [CrossRef]
  70. Campbell, A.D. Mapping aboveground biomass and carbon in salt marshes across the contiguous United States. J. Appl. Remote Sens. 2024, 18, 032404. [Google Scholar] [CrossRef]
  71. Aslam, R.W.; Naz, I.; Shu, H.; Yan, J.; Quddoos, A.; Tariq, A.; Davis, J.B.; Al-Saif, A.M.; Soufan, W. Multi-temporal image analysis of wetland dynamics using machine learning algorithms. J. Environ. Manag. 2024, 371, 123123. [Google Scholar] [CrossRef]
  72. Data, U.S.; Random, Z.B. Fine-Resolution Wetland Mapping in the Yellow River Basin Using Sentinel-1/2 Data via Zoning-Based Random Forest with Remote-Sensing Feature Preferences. Water 2024, 16, 2415. [Google Scholar]
  73. Hardy, A.; Ettritch, G.; Cross, D.E.; Bunting, P.; Liywalii, F.; Sakala, J.; Silumesii, A.; Singini, D.; Smith, M.; Willis, T.; et al. Automatic Detection of Open and Vegetated Water Bodies Using Sentinel 1 to Map African Malaria Vector Mosquito Breeding Habitats. Remote Sens. 2019, 11, 593. [Google Scholar] [CrossRef]
  74. Nath, R.; Ramachandran, A.; Tripathi, V.; Badola, R. Ecological Informatics Spatio-temporal habitat assessment of the Gangetic floodplain in the Hastinapur wildlife sanctuary. Ecol. Inform. 2022, 72, 101851. [Google Scholar] [CrossRef]
  75. Sun, C.; Li, J.; Liu, Y.; Liu, Y.; Liu, R. Plant species classification in salt marshes using phenological parameters derived from Sentinel-2 pixel-differential time-series. Remote Sens. Environ. 2021, 256, 112320. [Google Scholar] [CrossRef]
  76. Avci, C.; Budak, M.; Yagmur, N.; Balcik, F.B. Comparison between random forest and support vector machine algorithms for LULC classification. Int. J. Eng. Geosci. 2023, 8, 1–10. [Google Scholar] [CrossRef]
  77. Isoaho, A.; Elo, M.; Marttila, H.; Rana, P.; Lensu, A.; Räsänen, A. Monitoring changes in boreal peatland vegetation after restoration with optical satellite imagery. Sci. Total Environ. 2024, 957, 177697. [Google Scholar] [CrossRef]
  78. DeLancey, E.R.; Simms, J.F.; Mahdianpari, M.; Brisco, B.; Mahoney, C.; Kariyeva, J. Comparing deep learning and shallow learning for large-scalewetland classification in Alberta, Canada. Remote Sens. 2020, 12, 2. [Google Scholar] [CrossRef]
  79. Dragoş, M.; Petrescu, A.; Merciu, G.L. Analysis of vegetation from satelite images correlated to the bird species presence and the state of health of the ecosystems of Bucharest during the period from 1991 to 2006. Geogr. Pannonica 2017, 21, 9–25. [Google Scholar] [CrossRef]
  80. Munizaga, J.; Garc, M.; Ureta, F.; Novoa, V.; Rojas, O.; Rojas, C. Mapping Coastal Wetlands Using Satellite Imagery and Machine Learning in a Highly Urbanized Landscape. Sustainability 2022, 14, 5700. [Google Scholar] [CrossRef]
  81. Zhipeng, G.; Jiang, W.; Peng, K.; Deng, Y.; Wang, X. Wetland Mapping and Landscape Analysis for Supporting International Wetland Cities: Case Studies in Nanchang City and Wuhan City. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 8858–8870. [Google Scholar] [CrossRef]
  82. Lou, A.; He, Z.; Zhou, C.; Lai, G. International Journal of Applied Earth Observation and Geoinformation Long-term series wetland classification of Guangdong-Hong Kong-Macao Greater Bay Area based on APSMnet. Int. J. Appl. Earth Obs. Geoinf. 2024, 128, 103765. [Google Scholar] [CrossRef]
  83. Lao, C.; Yu, X.; Zhan, L.; Xin, P. Monitoring soil salinity in coastal wetlands with Sentinel-2 MSI data: Combining fractional-order derivatives and stacked machine learning models. Agric. Water Manag. 2024, 306, 109147. [Google Scholar] [CrossRef]
  84. Ji, P.; Su, R.; Wu, G.; Xue, L.; Zhang, Z.; Fang, H.; Gao, R.; Zhang, W.; Zhang, D. Projecting Future Wetland Dynamics Under Climate Change and Land Use Pressure: A Machine Learning Approach Using Remote Sensing and Markov Chain Modeling. Remote Sens. 2025, 17, 1089. [Google Scholar] [CrossRef]
  85. Zhao, C.; Jia, M.; Zhang, R.; Wang, Z.; Mao, D.; Zhong, C.; Guo, X. Distribution of Mangrove Species Kandelia obovata in China Using Time-series Sentinel-2 Imagery for Sustainable Mangrove Management. J. Remote Sens. 2024, 4, 0143. [Google Scholar] [CrossRef]
  86. Li, S.; Meng, W.; Liu, D.; Yang, Q.; Chen, L.; Dai, Q.; Ma, T.; Gao, R.; Ru, W.; Li, Y.; et al. Migratory Whooper Swans Cygnus cygnus Transmit H5N1 Virus between China and Mongolia: Combination Evidence from Satellite Tracking and Phylogenetics Analysis. Sci. Rep. 2018, 8, 7049. [Google Scholar] [CrossRef] [PubMed]
  87. Zhang, C.; Xiao, X.; Wang, X.; Qin, Y.; Doughty, R.; Yang, X.; Meng, C.; Yao, Y.; Dong, J. Mapping wetlands in Northeast China by using knowledge-based algorithms and microwave (PALSAR-2, Sentinel-1), optical (Sentinel-2, Landsat), and thermal (MODIS) images. J. Environ. Manag. 2024, 349, 119618. [Google Scholar] [CrossRef] [PubMed]
  88. Wellmann, T.; Lausch, A.; Scheuer, S.; Haase, D. Earth observation based indication for avian species distribution models using the spectral trait concept and machine learning in an urban setting. Ecol. Indic. 2020, 111, 106029. [Google Scholar] [CrossRef]
  89. Regos, A.; Tapia, L.; Gil-Carrera, A.; Domínguez, J. Monitoring protected areas from space: A multi-temporal assessment using raptors as biodiversity surrogates. PLoS ONE 2017, 12, e0181769. [Google Scholar] [CrossRef] [PubMed]
  90. Gaspar, L.P.; Scarpelli, M.D.A.; Oliveira, E.G.; Alves, R.S.c.; Gomes, A.M.; Wolf, R.; Ferneda, R.V.; Kamazuka, S.H.; Gussoni, C.O.A.; Ribeiro, M.C. Predicting bird diversity through acoustic indices within the Atlantic Forest biodiversity hotspot. Front. Remote Sens. 2023, 4, 1283719. [Google Scholar] [CrossRef]
  91. Bay, P.P. Artificial Intelligence for Computational Remote Sensing: Quantifying Patterns of Land Cover Types around Cheetham. J. Mar. Sci. Eng. 2024, 12, 1279. [Google Scholar] [CrossRef]
  92. Radović, A.; Kapelj, S.; Taylor, L.T. Utilizing Remote Sensing Data for Species Distribution Modeling of Birds in Croatia. Diversity 2025, 17, 399. [Google Scholar] [CrossRef]
  93. Awoyemi, A.G.; Alabi, T.R.; Ibáñez-Álamo, J.D. Remotely sensed spectral indicators of bird taxonomic, functional and phylogenetic diversity across Afrotropical urban and non-urban habitats. Ecol. Indic. 2025, 170, 112966. [Google Scholar] [CrossRef]
  94. Roilo, S.; Spake, R.; Bullock, J.M.; Cord, A.F. A cross-regional analysis of red-backed shrike responses to agri-environmental schemes in Europe. Environ. Res. Lett. 2024, 19, 034004. [Google Scholar] [CrossRef]
  95. Alírio, J.; Sillero, N.P.; Garcia, N.; Campos, J.J.; Arenas-Castro, S.; Pôças, I.; Duarte, L.; Teodoro, A.C.M. Montrends: A Google Earth Engine application for analysing species habitat suitability over time. Ecol. Inform. 2024, 13197, 51. [Google Scholar] [CrossRef]
  96. Bekkema, M.E.; Eleveld, M. Mapping grassland management intensity using sentinel-2 satellite data. GI_Forum 2018, 6, 194–213. [Google Scholar] [CrossRef]
  97. Ivanova, I.; Stankova, N.; Borisova, D.; Spasova, T.; Dancheva, A. Dynamics and development of Alepu marsh for the period 2013-2020 based on satellite data. In Proceedings of the Earth Resources and Environmental Remote Sensing/GIS Applications XII, Online, 13–17 September 2021; Volume 11863, pp. 1–9. [Google Scholar] [CrossRef]
  98. Abbott, B.N.; Wallace, J.; Nicholas, D.M.; Karim, F.; Waltham, N.J. Bund Removal to Re-Establish Tidal Flow, Remove Aquatic Weeds and Restore Coastal Wetland Services-North Queensland, Australia. PLoS ONE 2020, 15, e0217531. [Google Scholar] [CrossRef]
  99. Hayri Kesikoglu, M.; Haluk Atasever, U.; Dadaser-Celik, F.; Ozkan, C. Performance of ANN, SVM and MLH techniques for land use/cover change detection at Sultan Marshes wetland, Turkey. Water Sci. Technol. 2019, 80, 466–477. [Google Scholar] [CrossRef] [PubMed]
  100. Yang, Z.; Na, X. Mapping herbaceous wetlands using combined phenological and hydrological features from time-series Sentinel-1/2 imagery. Int. J. Digit. Earth 2025, 18, 2498600. [Google Scholar] [CrossRef]
  101. Minotti, P.G.; Rajngewerc, M.; Alí Santoro, V.; Grimson, R. Evaluation of SAR C-band interferometric coherence time-series for coastal wetland hydropattern mapping. J. South Am. Earth Sci. 2021, 106, 102976. [Google Scholar] [CrossRef]
  102. Wang, Q.; Cui, G.; Liu, H.; Huang, X.; Xiao, X.; Wang, M.; Jia, M.; Mao, D.; Li, X.; Xiao, Y.; et al. Spatiotemporal Dynamics and Potential Distribution Prediction of Spartina alterniflora Invasion in Bohai Bay Based on Sentinel Time-Series Data and MaxEnt Modeling. Remote Sens. 2025, 17, 975. [Google Scholar] [CrossRef]
  103. Cao, J.; Liu, Q.; Yu, C.; Chen, Z.; Dong, X.; Xu, M.; Zhao, Y. Extracting waterline and inverting tidal flats topography based on Sentinel-2 remote sensing images: A case study of the northern part of the North Jiangsu radial sand ridges. Geomorphology 2024, 461, 109323. [Google Scholar] [CrossRef]
  104. Cao, H.; Han, L.; Liu, Z.; Li, L. Monitoring and driving force analysis of spatial and temporal change of water area of Hongjiannao Lake from 1973 to 2019. Ecol. Inform. 2021, 61, 101230. [Google Scholar] [CrossRef]
  105. Villoslada, M.; Berner, L.T.; Juutinen, S.; Ylänne, H.; Kumpula, T. Upscaling vascular aboveground biomass and topsoil moisture of subarctic fens from Unoccupied Aerial Vehicles (UAVs) to satellite level. Sci. Total Environ. 2024, 933, 173049. [Google Scholar] [CrossRef] [PubMed]
  106. Sun, C.; Fagherazzi, S.; Liu, Y. Classification mapping of salt marsh vegetation by flexible monthly NDVI time-series using Landsat imagery. Estuarine Coast. Shelf Sci. 2018, 213, 61–80. [Google Scholar] [CrossRef]
  107. Zheng, J.Y.; Hao, Y.Y.; Wang, Y.C.; Zhou, S.Q.; Wu, W.B.; Yuan, Q.; Gao, Y.; Guo, H.Q.; Cai, X.X.; Zhao, B. Coastal Wetland Vegetation Classification Using Pixel-Based, Object-Based and Deep Learning Methods Based on RGB-UAV. Land 2022, 11, 2039. [Google Scholar] [CrossRef]
  108. Wang, Y.; Jin, S.; Dardanelli, G. Vegetation Classification and Evaluation of Yancheng Coastal Wetlands Based on Random Forest Algorithm from Sentinel-2 Images. Remote Sens. 2024, 16, 1124. [Google Scholar] [CrossRef]
  109. Jamali, A.; Mahdianpari, M.; Brisco, B.; Mao, D.; Salehi, B.; Mohammadimanesh, F. 3DUNetGSFormer: A deep learning pipeline for complex wetland mapping using generative adversarial networks and Swin transformer. Ecol. Inform. 2022, 72, 101904. [Google Scholar] [CrossRef]
  110. Heath, J.T.; Grimmett, L.; Gopalakrishnan, T.; Thomas, R.F.; Lenehan, J. Integrating Sentinel 2 Imagery with High-Resolution Elevation Data for Automated Inundation Monitoring in Vegetated Floodplain Wetlands. Remote Sens. 2024, 16, 2434. [Google Scholar] [CrossRef]
  111. Gannod, M.; Masto, N.; Owusu, C.; Highway, C.; Brown, K.E.; Blake-bradshaw, A.; Feddersen, J.C.; Hagy, H.M.; Talbert, D.A.; Cohen, B. Semantic Segmentation with Multispectral Satellite Images of Waterfowl Habitat. In Proceedings of the International FLAIRS Conference Proceedings, Miami, FL, USA, 16–19 May 2021. [Google Scholar]
  112. Bu, F.; Dai, Z.; Mei, X.; Chu, A.; Cheng, J.; Lan, L. Machine learning-based mapping wetland dynamics of the largest freshwater lake in China. Glob. Ecol. Conserv. 2025, 59, e03585. [Google Scholar] [CrossRef]
  113. Yao, H.; Fu, B.; Zhang, Y.; Li, S.; Xie, S.; Qin, J.; Fan, D.; Gao, E. Combination of Hyperspectral and Quad-Polarization SAR Images to Classify Marsh Vegetation Using Stacking Ensemble Learning Algorithm. Remote Sens. 2022, 14, 5478. [Google Scholar] [CrossRef]
  114. Approach, M.s.; Hemati, M.; Mahdianpari, M.; Shiri, H. Integrating SAR and optical data for aboveground biomass estimation of coastal wetlands using machine learning: Multi-scale approach. Remote Sens. 2024, 16, 831. [Google Scholar]
  115. Jiang, W.; Zhang, M.; Long, J.; Pan, Y.; Ma, Y. HLEL: A wetland classification algorithm with self-learning capability, taking the Sanjiang Nature Reserve I as an example. J. Hydrol. 2023, 627, 130446. [Google Scholar] [CrossRef]
  116. Niedzielko, J.; Kope, D.; Jaroci, A. Testing Textural Information Base on LiDAR and Hyperspectral Data for Mapping Wetland Vegetation: A Case Study of Warta River Mouth National Park ( Poland). Remote Sens. 2023, 15, 3055. [Google Scholar]
  117. Prentice, R.M.; Villoslada, M.; Ward, R.D.; Bergamo, T.F.; Joyce, C.B. Synergistic use of Sentinel-2 and UAV-derived data for plant fractional cover distribution mapping of coastal meadows with digital elevation models. Biogeosciences 2024, 21, 1411–1431. [Google Scholar] [CrossRef]
  118. Gasela, M.; Kganyago, M.; Jager, G.D. Using resampled nSight-2 hyperspectral data and various machine learning classifiers for discriminating wetland plant species in a Ramsar Wetland site, South Africa. Appl. Geomat. 2024, 16, 429–440. [Google Scholar] [CrossRef]
  119. Xiang, H.; Xi, Y.; Mao, D.; Mahdianpari, M.; Zhang, J.; Wang, M.; Jia, M.; Yu, F.; Wang, Z. Mapping potential wetlands by a new framework method using random forest algorithm and big earth data: A case study in China’s Yangtze River Basin. Glob. Ecol. Conserv. 2023, 42, e02397. [Google Scholar] [CrossRef]
  120. Jafarzadeh, H.; Member, S.; Mahdianpari, M.; Member, S.; Gill, E.W.; Member, S. Wet-GC: A Novel Multimodel Graph Convolutional Approach for Wetland Classification Using Sentinel-1 and 2 Imagery with Limited Training Samples. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 5303–5316. [Google Scholar] [CrossRef]
  121. Turnbull, A.; Soto, M.; Michael, B. Delineation and Classification of Wetlands in the Northern Jarrah Forest, Western Australia Using Remote Sensing and Machine Learning. Wetlands 2024, 44, 52. [Google Scholar] [CrossRef]
  122. Tu, C.; Li, P.; Li, Z.; Wang, H.; Yin, S.; Li, D.; Zhu, Q.; Chang, M.; Liu, J.; Wang, G. Synergetic Classification of Coastal Wetlands over the Yellow River Delta with GF-3 Full-Polarization SAR and Zhuhai-1 OHS Hyperspectral Remote Sensing. Remote Sens. 2021, 13, 4444. [Google Scholar] [CrossRef]
  123. Mallick, J.; Talukdar, S.; Pal, S.; Rahman, A. Ecological Informatics A novel classifier for improving wetland mapping by integrating image fusion techniques and ensemble machine learning classifiers. Ecol. Inform. 2021, 65, 101426. [Google Scholar] [CrossRef]
  124. Musungu, K.; Dube, T.; Smit, J.; Shoko, M. Using UAV multispectral photography to discriminate plant species in a seep wetland of the Fynbos Biome. Wetl. Ecol. Manag. 2024, 32, 207–227. [Google Scholar] [CrossRef]
  125. Zheng, G.; Wang, Y.; Zhao, C.; Dai, W.; Kattel, G.R. Quantitative Analysis of Tidal Creek Evolution and Vegetation Variation in Silting Muddy Flats on the Yellow Sea. Remote Sens. 2023, 15, 5107. [Google Scholar] [CrossRef]
  126. Gxokwe, S.; Dube, T.; Mazvimavi, D. Leveraging Google Earth Engine platform to characterize and map small seasonal wetlands in the semi-arid environments of South Africa. Sci. Total Environ. 2022, 803, 150139. [Google Scholar] [CrossRef] [PubMed]
  127. Chignell, S.M.; Evangelista, P.H.; Luizza, M.W.; Skach, S.; Young, N.E. An integrative modeling approach to mapping wetlands and riparian areas in a heterogeneous Rocky Mountain watershed. Remote Sens. Ecol. Conserv. 2018, 4, 150–165. [Google Scholar] [CrossRef]
  128. Bartold, M.; Kluczek, M. Ecological Informatics Estimating of chlorophyll fluorescence parameter Fv/Fm for plant stress detection at peatlands under Ramsar Convention with Sentinel-2 satellite imagery. Ecol. Inform. 2024, 81, 102603. [Google Scholar] [CrossRef]
  129. Wen, L. Coastal Wetland Mapping Using Ensemble Learning Algorithms: A Comparative Study of Bagging, Boosting and Stacking Techniques Li. Remote Sens. 2020, 12, 1683. [Google Scholar] [CrossRef]
  130. Zhao, J.Q.; Wang, Z.; Zhang, Q.; Niu, Y.; Lu, Z.; Zhao, Z. A novel feature selection criterion for wetland mapping using GF-3 and Sentinel-2 Data. Ecol. Indic. 2025, 171, 113146. [Google Scholar] [CrossRef]
  131. Yang, Y.; Meng, Z.; Zu, J.; Cai, W.; Wang, J.; Su, H.; Yang, J. Fine-Scale Mangrove Species Classification Based on UAV Multispectral and Hyperspectral Remote Sensing Using Machine Learning. Remote Sens. 2024, 16, 3093. [Google Scholar] [CrossRef]
  132. Zhen, J.; Mao, D.; Shen, Z.; Zhao, D.; Xu, Y.; Wang, J.; Jia, M.; Wang, Z.; Ren, C. Performance of XGBoost Ensemble Learning Algorithm for Mangrove Species Classification with Multisource Spaceborne Remote Sensing Data. J. Remote Sens. 2024, 4, 0146. [Google Scholar] [CrossRef]
  133. Li, H.; Cui, G.; Liu, H.; Wang, Q.; Zhao, S.; Huang, X.; Zhang, R.; Jia, M.; Mao, D.; Yu, H.; et al. Dynamic Analysis of Spartina alterniflora in Yellow River Delta Based on U-Net Model and Zhuhai-1 Satellite. Remote Sens. 2025, 17, 226. [Google Scholar] [CrossRef]
  134. Chen, B.; Zhang, Q.; Yang, N.; Wang, X.; Zhang, X.; Chen, Y.; Wang, S. Deep Learning Extraction of Tidal Creeks in the Yellow River Delta Using GF-2 Imagery. Remote Sens. 2025, 17, 676. [Google Scholar] [CrossRef]
  135. Jamali, A.; Mahdianpari, M. Swin Transformer for Complex Coastal Wetland Classification Using the Integration of Sentinel-1 and Sentinel-2 Imagery. Water 2022, 14, 178. [Google Scholar] [CrossRef]
  136. Li, H.; Wang, C.; Cui, Y.; Hodgson, M.; Carolina, S. ISPRS Journal of Photogrammetry and Remote Sensing Mapping salt marsh along coastal South Carolina using U-Net. ISPRS J. Photogramm. Remote Sens. 2021, 179, 121–132. [Google Scholar] [CrossRef]
  137. Pouliot, D.; Latifovic, R.; Pasher, J.; Duffe, J. Assessment of Convolution Neural Networks for Wetland Mapping with Landsat in the Central Canadian Boreal Forest Region. Remote Sens. 2019, 11, 772. [Google Scholar] [CrossRef]
  138. Jamali, A.; Mahdianpari, M.; Mohammadimanesh, F.; Brisco, B. A Synergic Use of Sentinel-1 and Sentinel-2 Imagery for Complex Wetland Classification Using Generative Adversarial Network (GAN) Scheme. Water 2021, 13, 3601. [Google Scholar] [CrossRef]
  139. Marjani, M.; Mohammadimanesh, F.; Mahdianpari, M.; Gill, E.W. A novel spatio-temporal vision transformer model for improving wetland mapping using multi-seasonal sentinel data. Remote Sens. Appl. Soc. Environ. 2025, 37, 101401. [Google Scholar] [CrossRef]
  140. Jamali, A.; Mahdianpari, M.; Mohammadimanesh, F.; Homayouni, S. A deep learning framework based on generative adversarial networks and vision transformer for complex wetland classification using limited training samples. Int. J. Appl. Earth Obs. Geoinf. 2022, 115, 103095. [Google Scholar] [CrossRef]
  141. Mainali, K.; Evans, M.; Saavedra, D.; Mills, E.; Madsen, B.; Minnemeyer, S. Convolutional neural network for high-resolution wetland mapping with open data: Variable selection and the challenges of a generalizable model. Sci. Total Environ. 2023, 861, 160622. [Google Scholar] [CrossRef]
  142. Hosseiny, B.; Mahdianpari, M.; Brisco, B.; Mohammadimanesh, F.; Salehi, B. WetNet: A Spatialoral Ensemble Deep Learning Model for Wetland Classification Using Sentinel-1 and Sentinel-2. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14. [Google Scholar] [CrossRef]
  143. Bakkestuen, V.; Venter, Z.; Ganerød, A.J.; Framstad, E. Delineation of Wetland Areas in South Norway from Sentinel-2 Imagery and LiDAR Using TensorFlow, U-Net, and Google Earth Engine. Remote Sens. 2023, 15, 1203. [Google Scholar] [CrossRef]
  144. Ghaznavi, A.; Saberioon, M.; Brom, J.; Itzerott, S. Comparative performance analysis of simple U-Net, residual attention U-Net, and VGG16-U-Net for inventory inland water bodies. Appl. Comput. Geosci. 2024, 21, 100150. [Google Scholar] [CrossRef]
  145. Cui, B.; Wu, J.; Li, X.; Ren, G.; Lu, Y. Combination of deep learning and vegetation index for coastal wetland mapping using GF-2 remote sensing images. Natl. Remote Sens. Bull. 2023, 27, 6–16. [Google Scholar] [CrossRef]
  146. Marjani, M.; Mahdianpari, M.; Mohammadimanesh, F.; Gill, E.W. CVTNet: A Fusion of Convolutional Neural Networks and Vision Transformer for Wetland Mapping Using Sentinel-1 and Sentinel-2 Satellite Data. Remote Sens. 2024, 16, 2427. [Google Scholar] [CrossRef]
  147. Ibtehaz, N.; Rahman, M.S. MultiResUNet: Rethinking the U-Net architecture for multimodal biomedical image segmentation. Neural Netw. 2020, 121, 74–87. [Google Scholar] [CrossRef]
  148. Jia, M.; Mao, D.; Wang, Z.; Ren, C.; Zhu, Q.; Li, X.; Zhang, Y. Tracking long-term floodplain wetland changes: A case study in the China side of the Amur River Basin. Int. J. Appl. Earth Obs. Geoinf. 2020, 92, 102185. [Google Scholar] [CrossRef]
  149. Dutt, R.; Ortals, C.; He, W.; Curran, Z.C.; Angelini, C.; Canestrelli, A.; Jiang, Z. A Deep Learning Approach to Segment Coastal Marsh Tidal Creek Networks from High-Resolution Aerial Imagery. Remote Sens. 2024, 16, 2659. [Google Scholar] [CrossRef]
  150. Vincent, W.F.; Pina, P.; Freitas, P.; Vieira, G.; Mora, C. A trained Mask R-CNN model over PlanetScope imagery for very-high resolution surface water mapping in boreal forest-tundra. Remote Sens. Environ. 2024, 304, 114047. [Google Scholar] [CrossRef]
  151. Cai, F.; Tang, B.h.; Member, S.; Sima, O.; Chen, G.; Zhang, Z. Fine Extraction of Plateau Wetlands Based on a Combination of Object-Oriented Machine Learning and Ecological Rules: A Case Study of Dianchi Basin. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 5364–5377. [Google Scholar] [CrossRef]
  152. Mohseni, F.; Amani, M.; Mohammadpour, P.; Kakooei, M.; Jin, S.; Moghimi, A. Wetland Mapping in Great Lakes Using Sentinel-1/2 Time-Series Imagery and DEM Data in Google Earth Engine. Remote Sens. 2023, 15, 3495. [Google Scholar] [CrossRef]
  153. Sun, Z.; Jiang, W.; Ling, Z.; Zhong, S.; Zhang, Z.; Song, J.; Xiao, Z. Using Multisource High-Resolution Remote Sensing Data (2 m) with a Habitat–Tide–Semantic Segmentation Approach for Mangrove Mapping. Remote Sens. 2023, 15, 5271. [Google Scholar] [CrossRef]
  154. Hu, X.; Zhang, P.; Zhang, Q.; Wang, J. Improving wetland cover classification using artificial neural networks with ensemble techniques. GISci. Remote Sens. 2021, 58, 603–623. [Google Scholar] [CrossRef]
  155. Habib, W.; Mcguinness, K.; Connolly, J. Mapping artificial drains in peatlands—A national-scale assessment of Irish raised bogs using sub-meter aerial imagery and deep learning methods. Remote Sens. Ecol. Conserv. 2024, 10, 551–562. [Google Scholar] [CrossRef]
  156. Zhuo, W.; Wu, N.; Shi, R.; Liu, P.; Zhang, C.; Fu, X.; Cui, Y. Aboveground biomass retrieval of wetland vegetation at the species level using UAV hyperspectral imagery and machine learning. Ecol. Indic. 2024, 166, 112365. [Google Scholar] [CrossRef]
  157. Masood, M.; He, C.; Shah, S.A.; Rehman, S.A.U. Land Use Change Impacts over the Indus Delta: A Case Study of Sindh Province, Pakistan. Land 2024, 13, 1080. [Google Scholar] [CrossRef]
  158. Fu, C.; Song, X.; Xie, Y.; Wang, C.; Luo, J.; Fang, Y.; Cao, B.; Qiu, Z. Research on the Spatiotemporal Evolution of Mangrove Forests in the Hainan Island from 1991 to 2021 Based on SVM and Res-UNet Algorithms. Remote Sens. 2022, 14, 5554. [Google Scholar] [CrossRef]
  159. Vas, E.; Lescroël, A.; Duriez, O.; Boguszewski, G.; Grémillet, D. Approaching birds with drones: First experiments and ethical guidelines. Biol. Lett. 2015, 11, 20140754. [Google Scholar] [CrossRef]
  160. Chen, L.; Letu, H.; Fan, M.; Shang, H.; Tao, J.; Wu, L.; Zhang, Y.; Yu, C.; Gu, J.; Zhang, N.; et al. An Introduction to the Chinese High-Resolution Earth Observation System: Gaofen-1∼7 Civilian Satellites. J. Remote Sens. 2022, 2022, 9769536. [Google Scholar] [CrossRef]
  161. Lichun, M.; Ram, P. Wetland conservation legislations: Global processes and China’s practices. J. Plant Ecol. 2024, 17, rtae018. [Google Scholar] [CrossRef]
  162. Ye, S.; Pei, L.; He, L.; Xie, L.; Zhao, G.; Yuan, H.; Ding, X.; Pei, S.; Yang, S.; Li, X.; et al. Wetlands in China: Evolution, Carbon Sequestrations and Services, Threats, and Preservation/Restoration. Water 2022, 14, 1152. [Google Scholar] [CrossRef]
  163. McEvoy, J.F.; Hall, G.P.; McDonald, P.G. Evaluation of unmanned aerial vehicle shape, flight path and camera type for waterfowl surveys: Disturbance effects and species recognition. PeerJ 2016, 2016, e1831. [Google Scholar] [CrossRef]
Figure 1. The methodological steps to a systematic review.
Figure 2. PRISMA-2020 flow diagram illustrating the identification, screening, eligibility, and inclusion of studies (n = 121) in this systematic review.
Figure 3. Number of included studies per year (January 2015–April 2025); 2025 is partial year.
Figure 4. Geographic distribution of included studies (2015–April 2025).
Figure 5. Distribution of sensor families used in wetland monitoring (n = 159).
Figure 6. Distribution of sensor families used in bird-habitat monitoring (n = 43).
Figure 7. Most-used machine learning architectures by wetland type.
Figure 8. Deep Learning usage per wetland type.
Figure 9. Distribution of machine learning architectures in wetland mapping.
Figure 10. Distribution of ML/DL architectures in bird-habitat monitoring.
Figure 11. Distribution of deep learning architectures in wetland mapping.
Figure 12. Reported performance metrics across the corpus: (a) bird-habitat studies and (b) wetland mapping studies. Percentages are computed per metric; because many papers report multiple metrics, categories are not mutually exclusive, and totals exceed 100%.
Table 1. Inclusion and exclusion criteria.
Inclusion Criteria | Exclusion Criteria
The study must focus on wetland monitoring and/or bird-habitat assessment. | Studies focusing exclusively on terrestrial ecosystems or general land cover mapping without relation to wetlands or bird-habitats.
The study must apply ML/DL algorithms for wetland and/or bird-habitat monitoring. | Studies that rely solely on traditional field-based methods without remote sensing or ML/DL integration.
The study must explore remote sensing technologies. | Studies using only conventional ground surveys, expert-based manual mapping, or MODIS-only papers.
The study was published from 2015 onward. | The study was published before 2015.
The study must be written in English for better accessibility. | Studies written in languages other than English.