Mapping of Coral Reefs with Multispectral Satellites: A Review of Recent Papers

Coral reefs are an essential source of marine biodiversity, but they are declining at an alarming rate under the combined effects of global change and human pressure. Precise mapping of coral reef habitat at high spatial and temporal resolution has become a necessary step for monitoring their health and evolution. This mapping can be achieved remotely thanks to satellite imagery coupled with machine-learning algorithms. In this paper, we review the different satellites used in recent literature, as well as the most common and efficient machine-learning methods. To account for the recent explosion of published research on coral reef mapping, we especially focus on the papers published between 2018 and 2020. Our review indicates that object-based methods provide more accurate results than pixel-based ones, and that the most accurate methods are Support Vector Machine and Random Forest. We emphasize that the satellites with the highest spatial resolution provide the best images for benthic habitat mapping. We also highlight that preprocessing steps (water column correction, sunglint removal, etc.) and additional inputs (bathymetry data, aerial photographs, etc.) can significantly improve mapping accuracy.


Introduction
Coral reefs are complex ecosystems, home to many interdependent species [1] whose roles and interactions in the reef functioning are still not fully understood [2]. By the end of the 20th century, reefs were estimated to cover a global area of 255,000 km² [3], which is roughly the size of the United Kingdom. Although this number represents less than 0.01% of the total surface of the oceans [4], reefs were estimated to be home to 5% of the global biota at the end of the 1990s [5] and to 25% of all marine species [6]. Furthermore, each year reefs provide services and products worth the equivalent of 172 billion US$ per km² [7], thus "producing" a total equivalent of 2000 times the United States GDP.
Despite their importance, coral populations are collapsing due to several factors mainly driven by global climate change and human activity. One of the main threats is the rise of global temperatures [8]. Increasing sea surface temperature is strongly correlated with coral bleaching [9], which tends to be enhanced by the intensity and the frequency of thermal-stress anomalies [10]. Bleaching does not mean that the corals are dead, but it leads to a series of adverse consequences: atrophy, necrosis, increase of the death rate [11], less efficient recovery from disease [12] and loss of architectural complexity [13]. Repeated bleaching events are even more damaging, impairing the coral colony recovery [14] and making them less resistant and more vulnerable to ocean warming [15]. These cumulative impacts are particularly alarming when considering the increasing frequency of severe bleaching events in the last 40 years [16] and predictions that by 2100, more than 95% of reefs will experience severe bleaching events at least twice a decade [17].
The worldwide decline of coral reefs has prompted an unprecedented research effort, reflected by the exponential growth of scientific articles dedicated to coral reefs (see Figure 1). A key challenge for the scientific community is the development of open and robust monitoring tools to survey reef distribution on a global scale over the coming decades. Mapping benthic reef habitats is crucially important for tracking their evolution in time and space, with direct outcomes for reef geometry and health surveys [18], for developing numerical models of circulation and wave agitation in reef-lagoon systems [19,20] and for socio-economic and environmental management policies [21]. A powerful tool to survey coral reefs is coral mapping or coral classification. This involves using raw input data of a coral site, such as videos or images, extracting the characteristics of the ground and classifying the elements as coral, sand, seagrass, etc. This mapping can be performed in two ways: manually extracting the characteristics, which is highly accurate but tedious and time consuming, or training machine-learning algorithms to do it quickly but with a higher chance of misclassification. In this article, the terms "coral mapping" and "coral classification" will both refer to automatic machine-learning mapping unless otherwise stated.
Coral mapping can be accurately achieved from underwater images, as done in most papers published in 2020 [22][23][24][25][26][27][28][29][30][31][32]. However, a major drawback of underwater images is that they are difficult to acquire at a satisfying time resolution for most remote places, thus making it unfeasible to have a worldwide global map with this kind of data. One solution is to use data from satellite imagery.
Aiming to help the ongoing and future efforts for coral mapping at the planetary scale, this paper will mainly focus on multispectral satellite images for coral classification and will mostly omit other sources of data. The main goal of this paper is to highlight the currently most efficient methods and satellites to map coral reefs. As depicted in Figure 1, twice as many papers were published in the past two years as ten years ago. Furthermore, as described later, the resolution of satellites is quickly improving, and with it the accuracy of coral maps. The same is true for machine-learning methods and image processing. Finally, substantive reviews of work related to coral mapping only cover the literature up to 2017 [33,34]. For these reasons, we decided to narrow our analysis to papers published since 2018. Between 2018 and 2020, 446 documents tagged with "coral mapping" or "coral remote sensing" were published (Figure 1). However, most of these papers do not fit within the scope of our study: they deal, for instance, with tidal flats, biodiversity problems, the chemical composition of the water, bathymetry retrieval, and so on. Thus, out of these 446, only 75 deal with coral classification or coral mapping problems. The data sources used in these papers are summarized in Figure 2. Within these 75 studies, a subset of 37 papers that deal with satellite data (25 with satellite data only) will be specifically included in the present study. Used in almost 50% of the papers, satellite imagery is recommended by the Coral Reef Expert Group for habitat mapping and change detection on a broad scale [35]. It allows benthic habitat to be mapped more precisely than via local environmental knowledge [36] on a global scale, at frequent intervals and at an affordable price.
This review is divided into four parts. First, the different multispectral satellites are presented, and their performance compared. Following this is a review of the preprocessing steps that are often needed for analysis. The third part provides an overview of the most common automatic methods for mapping and classification based on satellite data. Finally, the paper will introduce some other technologies improving coral mapping.

Spatial and Spectral Resolutions
When trying to classify benthic habitat, two conflicting parameters are generally balanced when choosing the satellite image source: the spatial resolution (the ground area represented by a pixel) and the spectral resolution, which generally refers to the number of available spectral bands, i.e., the precision of the wavelength detection by the sensor. The effect of spatial resolution is straightforward: a higher spatial resolution will allow a finer habitat mapping but will require a higher computational effort. The primary effect of the spectral resolution is that model accuracy generally increases with the number of visible bands [37-39] and the inclusion of infrared bands [40]. Although no clear definition exists, a distinction is generally made in terms of spectral resolution between multispectral and hyperspectral satellites. The former sensors produce images on a small number of bands, typically fewer than 20 or 30 channels, while hyperspectral sensors provide imagery data on a much larger number of narrow bands, up to several hundred; for instance, NASA's Hyperion imager has 220 channels. Most of the time, multispectral and hyperspectral sensors have an additional panchromatic band (capturing the wavelengths visible to the human eye) with a slightly higher spatial resolution than the other bands.
A major drawback of hyperspectral satellites is that the best achievable resolution is generally several tens of meters and can be up to 1 km for some of them [41], while most multispectral sensors have a resolution better than 4 m. A high spectral resolution coupled with a low spatial resolution results in mixed pixels, a problem addressed by "spectral unmixing", the process of decomposing a given mixed pixel into its component elements and their respective proportions. Some existing algorithms can tackle this issue with a high level of accuracy [42][43][44]. When unmixing pixels, algorithms may face errors due to the heterogeneity of seabed reflectance, which disturbs the radiance with light scattered from neighboring elements [45]. This process, called the adjacency effect, has negative effects on the accuracy of remote sensing [46] and can modify the radiance by up to 26% depending on turbidity and water depth [47].
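To make the unmixing idea concrete, the simplest linear variant can be sketched as a nonnegative least-squares problem: each mixed pixel is modeled as a weighted sum of known "endmember" spectra. The spectra and class names below are illustrative assumptions, not measured values from any cited study.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember spectra (reflectance in 4 bands) for three classes.
endmembers = np.array([
    [0.10, 0.15, 0.20, 0.05],  # coral (made-up values)
    [0.40, 0.45, 0.50, 0.35],  # sand
    [0.05, 0.20, 0.10, 0.03],  # seagrass
]).T  # shape (bands, endmembers)

def unmix(pixel):
    """Estimate endmember fractions of a mixed pixel by nonnegative least squares."""
    fractions, _ = nnls(endmembers, pixel)
    total = fractions.sum()
    return fractions / total if total > 0 else fractions

# A pixel that mixes 30% coral, 50% sand and 20% seagrass:
mixed = endmembers @ np.array([0.3, 0.5, 0.2])
print(unmix(mixed))
```

On this noise-free example the fractions are recovered exactly; on real imagery, sensor noise, the adjacency effect and endmember variability degrade the estimate.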
In this review, we purposefully omitted hyperspectral sensors to focus on multispectral satellite sensors, since only the latter have a spatial resolution fine enough to map coral colonies. Moreover, for our purpose of creating high-resolution maps of coral presence, multispectral satellites are more efficient, i.e., they provide more accurate results [35]. In the following, unless otherwise stated, "resolution" will refer to spatial resolution.

Satellite Data
We found 14 different satellites appearing in benthic habitat mapping studies, and gathered in Table 1 their main characteristics, in particular their spectral bands, spatial resolution, revisit time and pricing. The Landsat satellites prior to Landsat 6 do not appear in the table because they are almost never used in recent studies, Landsat 5 having been deactivated in 2013.
Sentinel-2 [62], a European Space Agency satellite, can be compared to the Landsat satellites in terms of spatial and spectral resolution. Sentinel-2 was initially designed for land monitoring [63] but has been used for monitoring oceans (and more specifically coral reef bleaching) and mapping benthic habitat [64][65][66][67][68][69][70]. Specific spectral bands of Sentinel-2, such as the SWIR-cirrus and water vapor bands, are especially useful for cloud detection and removal algorithms [71][72][73][74][75]. One major advantage of the Landsat and Sentinel-2 satellites is that their data are open access. However, these satellites are defined as "low-resolution", with a resolution of tens of meters, which may be a significant weakness when trying to map and classify the fine and complex distribution of coral reef colonies.
With a typical spatial resolution of several meters, medium-resolution satellites are more accurate than the aforementioned ones. Well-known medium-resolution satellites are SPOT-6 [76,77] and RapidEye [78][79][80], with respectively 4 and 5 bands. A major strength of RapidEye is that the image data are produced by a constellation of five identical satellites, thus providing images at a high frequency (global revisit time of one day). Note however that, up to now, RapidEye has not appeared in recent literature for coral mapping. The principle of using multiple similar satellites is also found with the PlanetScope constellation, composed of 130 Planet Dove satellites. Their total revisit time is less than one day, and they can be found in several recent coral mapping studies [81][82][83][84][85]. Finally, high-resolution sensors are defined as those with a resolution of a few meters, such as 3 m or less. IKONOS-2 belongs to this category and can be found in several studies of benthic habitat mapping [86][87][88][89], but mostly before 2015, the year it ceased operating. The GaoFen-2 satellite has the same spatial and spectral resolution as IKONOS-2 but is not as widely used [90], perhaps because of its age: it was launched in 2014, when some sensors already had a better resolution. The GaoFen program comprises several satellites (from GaoFen-1 to GaoFen-14) with the same or a lower resolution than GaoFen-2.
With a similar sensor and a slightly better resolution than IKONOS-2, the Quickbird-2 satellite provided images for several studies of reef mapping [58,[91][92][93][94][95][96]. Please note that the Quickbird-2 program was stopped in 2015. Similar features are offered by the Pleiades-1 satellites, from the Optical and Radar Federated Earth Observation program, also present in the literature [97,98]. An even higher accuracy can be achieved with the GeoEye-1 satellite, which provides images at a resolution of less than 1 m, making it particularly useful to study coral reefs [99].
The most common and most precise satellite images come from the WorldView satellites. For instance, WorldView-2 (WV-2), launched in 2009, has been widely used for benthic habitat mapping and coastline extraction [40,76,90,100-107]. Despite the high-resolution images provided by WV-2, the highest quality images available at the current time come from WorldView-3 (WV-3), launched in 2014 [39,108-110]. WV-3 has a total of 16 spectral bands and is thus able to compete with hyperspectral sensors with more than a hundred bands (such as Hyperion). Moreover, its spatial resolution is the highest available among current satellites, and is even similar to local measurement techniques such as Unmanned Airborne Vehicles (UAV) [111]. Among all the spectral bands offered by the WV-3 sensors, the coastal blue band (400-450 nm) is especially useful for bathymetry, as this wavelength penetrates water more easily and may help to discriminate seagrass patterns [112]. Although the raw SWIR resolution is lower than the one achieved in the visible and near-infrared bands, it can be further processed to generate high-resolution SWIR images [113]. In addition, the WV-3 panchromatic resolution is 0.3 m, which approaches the typical size of coral reef elements (0.25 m), thus making it also useful for reef monitoring [114].
To further evaluate the importance of each satellite in the global literature (not only in coral studies) and to detect trends in their use, we searched Scopus and analyzed the number of articles in which they appear between 2010 and 2020. Several trends can be seen. First, among low-resolution satellites, while the usage of Landsat remains stable over the years, the usage of Sentinel has exploded (by a multiplication factor of 20 between the periods 2010-2014 and 2018-2020). Regarding high-resolution satellites, in the period 2010-2014, the Quickbird and IKONOS satellites were predominant, but their usage decreased by more than 85% during the years 2018-2020. On the other hand, the number of papers using WorldView and PlanetScope has been increasing: respectively from 108 and 0 in 2010-2014, to 271 and 164 in 2018-2020. The complete numbers for each satellite can be found in Figure A1. Figure 3 depicts which satellites were employed in the 37 studies using satellites ("satellite only" and "satellite + other" in Figure 2). Please note that some studies use data from more than one satellite. From this analysis, the WorldView satellites appear to be the most commonly used for coral mapping, confirming that high-resolution multispectral satellites are more suitable than low-resolution ones for coral mapping.

Image Correction and Preprocessing
Even though satellite imagery is a unique tool for benthic habitat mapping, providing remote images at a relatively low cost over large time and space scales, it suffers from a variety of limitations. Some of these are not exclusively related to satellites but are shared with other remote sensing methods such as UAV. Most of the time, existing image correction methods can overcome these problems. In the same way, preprocessing methods often result in improved classification accuracy. However, these algorithms are still not perfect and can sometimes induce noise when creating coral reef maps. This part describes the most common processing steps that can be performed, as well as their limitations.

Clouds and Cloud Shadows
One major problem of remote sensing with satellite imagery is missing data, mainly caused by the presence of clouds and cloud shadows, and their effect on the atmosphere radiance measured on the pixels near clouds (adjacency effect) [115]. For instance, Landsat-7 images have on average a cloud coverage of 35% [116]. This problem is globally present, not only for the ocean-linked subjects but for every study using satellite images, such as land monitoring [117,118] and forest monitoring [119,120]. Thus, several algorithms have been developed in the literature to face this issue [121][122][123][124][125][126][127][128]. One widely used algorithm for cloud and cloud shadow detection is Function of mask, known as Fmask, for images from Landsat and Sentinel-2 satellites [129][130][131]. Given a multiband satellite image, this algorithm provides a mask giving a probability for each pixel to be cloud, and performs a segmentation of the image to segregate cloud and cloud shadow from other elements. However, the cloudy parts are just masked, but not replaced.
A common approach to remove clouds and cloud shadows is to create a composite image from multi-temporal images. This involves taking several images at different times but close enough together to assume that no change has occurred in between, for instance over a few weeks [132]. These images are then combined, taking the best cloud-free parts of each image to form one final composite image without clouds or cloud shadows. This process is widely used [133][134][135][136] when a sufficient number of images is available.
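The compositing step above can be sketched as a per-pixel reduction over the dates: cloudy pixels (e.g., flagged by a mask such as the one Fmask produces) are ignored, and the median of the remaining observations is kept. The tiny images and the convention that 99 marks a cloudy pixel are illustrative assumptions.

```python
import numpy as np

def cloud_free_composite(images, cloud_masks):
    """Combine multi-temporal images into one composite by taking, per pixel,
    the median of the cloud-free observations across dates.

    images: array (dates, rows, cols); cloud_masks: boolean array, True = cloudy.
    """
    stack = np.where(cloud_masks, np.nan, images.astype(float))
    return np.nanmedian(stack, axis=0)

# Three hypothetical 2x2 single-band acquisitions with different cloudy pixels.
imgs = np.array([[[10, 99], [10, 10]],
                 [[12, 12], [99, 12]],
                 [[11, 11], [11, 99]]])
clouds = imgs == 99  # pretend 99 marks pixels flagged as clouds
composite = cloud_free_composite(imgs, clouds)
print(composite)
```

The median (rather than the mean) makes the composite robust to residual undetected clouds or shadows.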

Water Penetration and Benthic Heterogeneity
The issue of light penetration in water occurs not only with satellite imagery, but with all kinds of remote sensing imagery, including those provided by UAV or boats. Sunlight penetration is strongly limited by light attenuation in water due to absorption, scattering and conversion to other forms of energy. Most sunlight is therefore unable to penetrate below the 20 m surface layer. Hence, the accuracy of benthic mapping decreases as the water depth increases [137]. Light attenuation is wavelength dependent, with stronger attenuation at short (ultraviolet) and long (infrared) wavelengths, while weaker attenuation in the blue-green band allows deeper penetration. Specific spectral bands such as the green one may be viable for mapping benthic habitat and coral changes, such as bleaching [67]. As the penetration through water depends on the wavelength, image preprocessing may be needed to correct this effect. Water column correction methods enable retrieval of the real bottom reflectance from the reflectance captured by the sensor, using either band combination or algebraic computing depending on the method. Using a water column correction method can improve the mapping accuracy by more than 20% [138,139]. Several models of water column correction exist, each with different performance [140], the best known being Lyzenga's [141]. The best model strongly depends on the input data and the desired result; see Zoffoli et al. 2014 [140] for a detailed overview of water column correction methods.
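The core of Lyzenga's approach is a depth-invariant index built from two log-transformed bands, with the ratio of attenuation coefficients estimated over a homogeneous bottom seen at varying depth. The sketch below uses synthetic data with made-up attenuation coefficients and deep-water values; the function names are our own, not from any cited implementation.

```python
import numpy as np

def attenuation_ratio(xi, xj):
    """Estimate the ratio of attenuation coefficients k_i/k_j from
    log-transformed radiances over a homogeneous bottom at varying depth."""
    a = (np.var(xi) - np.var(xj)) / (2 * np.cov(xi, xj, ddof=0)[0, 1])
    return a + np.sqrt(a**2 + 1)

def depth_invariant_index(bi, bj, deep_i, deep_j, ratio):
    """Depth-invariant bottom index from two bands, after subtracting the
    deep-water radiance (the water-column contribution)."""
    return np.log(bi - deep_i) - ratio * np.log(bj - deep_j)

# Synthetic sand pixels following L = L_deep + c * exp(-k * z) at depths z:
z = np.linspace(1, 10, 50)
blue = 0.05 + 0.4 * np.exp(-0.1 * z)   # assumed k_blue = 0.1
green = 0.04 + 0.3 * np.exp(-0.2 * z)  # assumed k_green = 0.2

r = attenuation_ratio(np.log(blue - 0.05), np.log(green - 0.04))
index = depth_invariant_index(blue, green, 0.05, 0.04, r)
print(index.std())  # (nearly) constant with depth for a uniform bottom
</n```

Because the index no longer varies with depth, pixels of the same bottom type at different depths become separable by a classifier.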
When it is known that the water depth of the study field is homogeneous, it is possible to classify the benthic habitat without applying any correction [142]. However, even in a shallow environment that would be weakly impacted by the light penetration issue (i.e., typically less than 2 m deep), a phenomenon called spectral confusion can occur if the depth is not homogeneous [143]. At different depths, the response of two different-color elements can be similar on a wide part of the light spectrum. Hence, with an unknown depth variation, the spectral responses of elements such as dead corals, seagrasses, bleached corals and live corals can be mixed up and their separability significantly affected, making it harder to map correctly [144]. Nevertheless, this depth heterogeneity problem can be overcome: when mixing satellite images with in situ measurements (such as single-beam echo sounder), it is possible to have an accurate benthic mapping of reefs with complex structures in shallow waters [108]. However, the advantage of not needing ground-truth data (information collected on the ground) when working with satellite imagery is lost with this solution.

Light Scattering
When remotely observing a surface such as water, especially with satellite imagery, its reflectance may be influenced by the atmosphere. Two phenomena modify the reflectance measured by the sensor. First, Rayleigh scattering causes shorter wavelengths (e.g., blue, 400 nm) to be scattered more than longer ones (e.g., near-infrared, 800 nm). Secondly, small particles present in the air cause so-called aerosol scattering, also altering the radiance perceived by the satellites [145,146]. Hence, the reflectance perceived by the satellite's sensors is composed of the true reflectance plus both Rayleigh- and aerosol-related scattered components [147,148].
It is possible to apply algorithms to correct the effects due to Earth's atmosphere [149][150][151], making some assumptions such as the horizontal homogeneity of the atmosphere, or the flatness of the ocean. However, these atmospheric corrections do not always result in a significant increase in the classification accuracy when using multispectral images [152], and they are not as frequent as water column corrections, which is why we consider them as optional.

Masking
Masking consists of removing geographic areas that are not useful or usable: clouds, cloud shadows, land, boats, wave breaks, and so on. Masking can improve the performance of some algorithms such as crop classification [153] or Sea Surface Temperature (SST) retrievals [154].
Even though highly accurate algorithms exist to detect most clouds, as discussed previously, some papers employ manual masking for higher accuracy [155]. It is also possible to mask deep water, in which coral reef mapping is difficult to achieve. Deep water to be masked can be defined by a criterion such as a reflectance threshold over the blue band (450-510 nm) [156].
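A reflectance-threshold deep-water mask of the kind just described can be sketched in a few lines; the threshold value and the tiny image are purely illustrative, and in practice the threshold must be tuned per scene.

```python
import numpy as np

def deep_water_mask(blue_band, threshold):
    """Flag pixels as deep water when blue-band reflectance falls below a
    threshold (deep water absorbs most light, so it appears dark)."""
    return blue_band < threshold

# Hypothetical blue-band (450-510 nm) reflectance image.
blue = np.array([[0.02, 0.15],
                 [0.01, 0.20]])
mask = deep_water_mask(blue, threshold=0.05)
print(mask)  # True marks deep-water pixels to exclude from classification
```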

Sunglint Removal
When working with water surfaces, such as an ocean or lagoon, sunglint poses a high risk of altering the quality of the image, not only for satellite imagery but for every remote sensing system. Sunglint occurs when sunlight is reflected off the water surface at an angle close to the viewing angle of the sensor, often because of waves. Thus, higher solar angles induce more sunglint; on the other hand, they are also correlated with a better quality for bathymetry mapping based on physical analysis methods [155]. Although this reflectance can be easily avoided when taking field images from an airborne vehicle (by controlling the time of day and the direction), it is harder to avoid with satellite imagery. It must therefore be removed from the image for better accuracy of benthic habitat mapping. This can be achieved, for instance, by a simple linear regression [157]. Some other models can also efficiently tackle this issue [158][159][160][161]. According to Muslim et al. 2019 [162], the most efficient sunglint removal procedure when mapping coral reef from UAV is the one described in Lyzenga et al. 2006 [163]. As the procedures compared in that paper rely on multispectral UAV data, it is plausible that the result holds for satellite data as well.
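The regression-based deglinting idea can be sketched as follows: over deep water, any visible-band signal correlated with the near-infrared band is attributed to glint, and the fitted fraction of the NIR signal is subtracted everywhere. The synthetic scene and the choice of "first 30 pixels are deep water" are illustrative assumptions, not a prescription from the cited studies.

```python
import numpy as np

def deglint(band, nir, deep_idx):
    """Regress a visible band on the NIR band over glint-affected deep-water
    pixels, then subtract the estimated glint contribution everywhere."""
    slope = np.polyfit(nir[deep_idx], band[deep_idx], 1)[0]
    return band - slope * (nir - nir[deep_idx].min())

rng = np.random.default_rng(0)
glint = rng.uniform(0.0, 0.1, 100)   # per-pixel glint intensity
nir = 0.01 + glint                    # NIR over water is dominated by glint
green = 0.08 + 0.9 * glint            # glint also contaminates the green band
deep = np.arange(30)                  # assume the first 30 pixels are deep water
corrected = deglint(green, nir, deep)
print(corrected.std(), green.std())   # the corrected band varies far less
```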

Geometric Correction
Geometric correction consists of georeferencing the satellite image by matching it to the coordinates of the elements on the ground. It allows, for instance, removal of spatial distortion from an image or drawing a parallel between two different sources of data, such as several satellite images, satellite images with other images (e.g., aerial), or images mixed with bathymetry inputs (sonar, LiDAR). This step is especially important in the case of satellite imagery, which is subject to a large number of variations such as angle, radiometry, resolution or acquisition mode [164].
Geometric corrections are needed to be able to use ground-truth control points. These data can take several forms, for instance divers' underwater videos or acoustic measurements from a boat. Even though control points are not used in every study, they are frequent because they enable a high-quality error assessment and/or a more accurate training set. However, control points are not easy to acquire because they require a field survey, which is not always possible and may be expensive for some remote sites, so they are not always used.

Radiometric Correction
When working with multi-temporal images of the same place, the series of images is likely to be heterogeneous because of noise induced, for instance, by sensors, illumination, solar angle or atmospheric effects. Radiometric correction enables normalization of several images to make them consistent and comparable. Radiometric corrections significantly improve accuracy in change detection and classification algorithms [165][166][167][168]. Like geometric corrections, which are only required when working with ground-truth control points, radiometric corrections are only needed when working with several images of the same place. Radiometric corrections are also useful to provide factors needed in the equations of some atmospheric correction algorithms [169].
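One simple form of relative radiometric normalization fits a linear gain/offset on pixels assumed unchanged between dates (pseudo-invariant features) and applies it to the whole image. The synthetic images, gain, offset and the choice of invariant pixels below are illustrative assumptions.

```python
import numpy as np

def normalize_to_reference(subject, reference, invariant_idx):
    """Relative radiometric normalization: fit a linear mapping (gain, offset)
    on pseudo-invariant pixels so the subject image matches the reference."""
    gain, offset = np.polyfit(subject[invariant_idx], reference[invariant_idx], 1)
    return gain * subject + offset

# Two hypothetical acquisitions of the same scene; the second suffers a
# different illumination (gain 1.2) and a sensor offset (+0.03).
rng = np.random.default_rng(1)
reference = rng.uniform(0.05, 0.5, 200)
subject = 1.2 * reference + 0.03
invariant = np.arange(50)  # assume these pixels are unchanged between dates
normalized = normalize_to_reference(subject, reference, invariant)
print(np.abs(normalized - reference).max())
```

After normalization, differences between the two dates can be attributed to real change rather than acquisition conditions.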

Contextual Editing
Contextual editing is a postprocessing step, applied after classification, that takes into account the surrounding pattern of an element [170,171]. Indeed, some classes cannot plausibly be surrounded by a given other class, and if this is found to be the case then the classifier has probably made a mistake. For instance, an element classified as "land" that is surrounded by water elements is more likely to belong to a class such as "algae".
The use of contextual editing can greatly enhance the performance of a classifier, be it for land areas [172] or for coral reefs [138,173]. However, surprisingly, this method has not been widely employed in the published literature, especially for benthic-habitat-related topics. To the best of our knowledge, even though some papers use contextual editing for bathymetry studies, it has not been applied to coral reef mapping in the past 10 years.
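The "land surrounded by water" example above can be turned into a minimal rule-based edit; the class codes and the single relabelling rule are illustrative, and a real implementation would encode many such rules.

```python
import numpy as np

WATER, LAND, ALGAE = 0, 1, 2  # class codes of a toy classified map

def contextual_edit(classmap):
    """Contextual-editing rule: an isolated 'land' pixel entirely surrounded
    by water is implausible, so it is relabelled as 'algae'."""
    edited = classmap.copy()
    rows, cols = classmap.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            window = classmap[r - 1:r + 2, c - 1:c + 2]
            if classmap[r, c] == LAND and (window == WATER).sum() == 8:
                edited[r, c] = ALGAE
    return edited

cmap = np.array([[0, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0]])
print(contextual_edit(cmap))  # the isolated 'land' pixel becomes 'algae'
```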

From Images to Coral Maps
Satellite imagery represents a powerful tool to produce coral maps, provided the problems that come with it can be tackled. Manual mapping of coral reefs from a given image is long and arduous work, and systematic expert mapping over large spatial areas and/or long time periods is definitely out of reach, especially when the area to be mapped covers several km². Coral habitats are at the moment unequally studied, with some sites almost not analyzed at all: for instance, studies on cold-water corals mostly focus on the North-East Atlantic [174]. The development of automated processing algorithms is a necessary step towards worldwide, long-term monitoring of corals from satellite images. The mapping of coral reefs from remote sensing usually follows the flow chart given in Andréfouët 2008 [175], consisting of several steps of image correction, as seen previously, followed by image classification. For instance, with one exception, all the studies published since 2018 that map coral reefs from satellite images perform at least three of the four preprocessing steps given in [175]. The following subsections compare the accuracies obtained with different statistical and machine-learning methods.

Pixel-Based and Object-Based
Before comparing the machine-learning methods, a distinction must be drawn between two main ways to classify a map: pixel-based and object-based. The first consists of taking each pixel separately and assigning it a class (e.g., coral, sand, seagrass, etc.) without taking neighboring pixels into account. The second consists of taking an object (i.e., a whole group of pixels) and giving it a class depending on the interaction of the elements inside it.
The object-based image analysis method performs well for high-resolution images, because the high heterogeneity of pixels in such images is poorly suited to pixel-based approaches [176]. This implies that object-based methods should be preferred when studying reef changes with high-resolution multispectral satellite images rather than low-resolution hyperspectral satellite images. Indeed, the object-based method yields an accuracy 15% to 20% higher than the pixel-based one for reef change detection [156,177,178] and benthic habitat mapping [77,179].
The relative superiority of the object-based approach has also been shown when applied to land classification [180,181], such as bamboo mapping [182] or tree classification [183,184]. Nonetheless, even if object-based methods are generally more accurate, they remain harder to set up because they require a segmentation step (to create the objects) before the classification.
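A minimal way to see the object-based idea, once a segmentation is available, is to replace each segment's pixel labels by the segment's majority class, which smooths out isolated per-pixel errors. The segment map and labels below are toy values, and real pipelines derive the segments from the image itself.

```python
import numpy as np

def object_based_labels(pixel_labels, segments):
    """Object-based step: assign each object (segment) the majority class of
    its pixels, smoothing out isolated pixel-level misclassifications."""
    out = np.empty_like(pixel_labels)
    for seg_id in np.unique(segments):
        members = segments == seg_id
        values, counts = np.unique(pixel_labels[members], return_counts=True)
        out[members] = values[np.argmax(counts)]
    return out

# A toy 2-segment map: left object is mostly coral (1), right mostly sand (2),
# each with one misclassified pixel.
segments = np.array([[0, 0, 1, 1],
                     [0, 0, 1, 1]])
pixels = np.array([[1, 2, 2, 2],
                   [1, 1, 2, 1]])
print(object_based_labels(pixels, segments))
```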

Maximum Likelihood
Maximum likelihood (MLH) classifiers are particularly efficient when the seabed architecture is not too complex [185]. With good image conditions, i.e., clear shallow water (<7 m) and almost no cloud cover, a MLH classifier can discriminate Acropora spp. corals from other classes (sand, seagrass, mixed coral species) with an accuracy of 90% [186]. Moreover, a MLH classifier works well under two conditions: when the spectral responses of the habitats are different enough to be discriminated, and when the area analyzed is in shallow waters (<5 m) [87]. It is however very likely that these conditions apply to other machine-learning methods as well.
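A Gaussian MLH classifier fits one mean vector and covariance matrix per class and assigns each pixel to the class under which its spectrum is most likely. The two-band synthetic spectra below are made-up values chosen only to make the classes separable.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_mlh(X, y):
    """Fit a Gaussian maximum-likelihood classifier: one mean vector and
    covariance matrix per class, estimated from training pixels."""
    return {c: multivariate_normal(X[y == c].mean(axis=0),
                                   np.cov(X[y == c], rowvar=False))
            for c in np.unique(y)}

def predict_mlh(models, X):
    """Assign each pixel to the class with the highest likelihood."""
    classes = sorted(models)
    ll = np.column_stack([models[c].logpdf(X) for c in classes])
    return np.array(classes)[np.argmax(ll, axis=1)]

# Synthetic two-band reflectances for 'sand' (0) and 'coral' (1) pixels.
rng = np.random.default_rng(2)
sand = rng.normal([0.40, 0.45], 0.02, (100, 2))
coral = rng.normal([0.10, 0.15], 0.02, (100, 2))
X = np.vstack([sand, coral])
y = np.array([0] * 100 + [1] * 100)
models = fit_mlh(X, y)
print(predict_mlh(models, np.array([[0.41, 0.44], [0.11, 0.16]])))
```

When the class-wise spectral responses overlap (e.g., bleached coral vs. sand at unknown depth), the likelihoods become similar and MLH accuracy drops, consistent with the conditions stated above.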
Nevertheless, when compared to other classification methods such as Support Vector Machine (SVM) or Neural Networks (NN), MLH classifiers appear to be less efficient, be it for land classification [152,[187][188][189][190] or for coastline extraction [103,191]. A comparison of some algorithms applied to crop classification also confirms that SVM and NN perform better, with an accuracy of more than 92% [192].

Support Vector Machine
Across several studies, SVM appears to be the method with the best accuracy [187,188,193], especially for edge pixels, i.e., pixels bordering two different classes [191]. In the studies published between 2018 and 2020, SVM classifiers had on average an accuracy of 70% for coral mapping, but achieved up to 93% classification accuracy among 9 different classes of benthic habitat when coupling high-resolution WV-3 satellite images with drone images [194].
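A minimal SVM classification sketch on synthetic multispectral pixels is shown below; the 4-band spectra are invented for illustration, and accuracies on real imagery depend heavily on preprocessing and class overlap.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic 4-band reflectance spectra for three benthic classes
# (illustrative values, not measured spectra).
rng = np.random.default_rng(3)
means = {"coral": [0.10, 0.15, 0.20, 0.05],
         "sand": [0.40, 0.45, 0.50, 0.35],
         "seagrass": [0.05, 0.20, 0.10, 0.03]}
X = np.vstack([rng.normal(m, 0.03, (150, 4)) for m in means.values()])
y = np.repeat(list(means), 150)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
clf = SVC(kernel="rbf", C=10).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))  # held-out overall accuracy
```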

Random Forest
Random Forest (RF) methods are also very efficient in remote sensing classification problems [195]. They perform well in classifying and mapping seagrass meadows [196] or land use [197], most often forests [198][199][200], although performance for land ecosystems may not be directly compared to that obtained for marine habitat mapping. In shallow water, an RF classifier can still outperform an SVM classifier in benthic habitat mapping [185], or at least achieve identical overall accuracy with a better spatial distribution. Globally, RF classifiers can map benthic habitat with an overall accuracy ranging from 60% to 85% [85,[201][202][203][204][205], depending on the study site, the satellite imagery involved, and the preprocessing steps applied to the images.
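An RF sketch on synthetic data is given below; an extra "depth" feature is added to mimic combining imagery with bathymetry inputs, a pairing several of the cited studies use. All values are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic 3-band spectra plus a depth feature (made-up values).
rng = np.random.default_rng(4)
coral = np.column_stack([rng.normal([0.10, 0.15, 0.20], 0.03, (200, 3)),
                         rng.uniform(1, 5, 200)])
sand = np.column_stack([rng.normal([0.40, 0.45, 0.50], 0.03, (200, 3)),
                        rng.uniform(1, 15, 200)])
X = np.vstack([coral, sand])
y = np.array(["coral"] * 200 + ["sand"] * 200)

rf = RandomForestClassifier(n_estimators=100, oob_score=True,
                            random_state=0).fit(X, y)
print(rf.oob_score_)            # out-of-bag accuracy estimate
print(rf.feature_importances_)  # how much each band / depth contributes
```

The out-of-bag score gives a built-in accuracy estimate without a separate validation set, and the feature importances indicate which inputs (spectral bands or depth) drive the classification.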

Neural Networks
NN are commonly used to classify coral species from underwater images, but to date have rarely been used to map coral reefs from satellite images alone [90,206,207]. However, NN often appear in papers where satellite images are combined with other sources such as aerial photographs, bathymetry data, or underwater images [208,209]. NN can also be used to perform a segmentation that extracts features before classification with a more common machine-learning method such as SVM [90] or K-nearest neighbors, reaching more than 80% mapping accuracy [207].
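The two-stage idea (a neural network learns a feature representation, then a simpler classifier operates on those features) can be sketched as follows. This is only an illustration of the pipeline on simulated pixels and does not reproduce the architectures of [90] or [207].

```python
# Sketch: train a small neural network, then run K-nearest neighbors
# on its learned hidden-layer features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
means = [[0.05, 0.08, 0.06, 0.02],   # hypothetical coral spectrum
         [0.25, 0.30, 0.32, 0.10],   # hypothetical sand spectrum
         [0.04, 0.11, 0.05, 0.03]]   # hypothetical seagrass spectrum
X = np.vstack([rng.normal(m, 0.03, size=(300, 4)) for m in means])
y = np.repeat([0, 1, 2], 300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X_tr, y_tr)

def hidden_features(model, data):
    """ReLU activations of the single hidden layer (MLP default activation)."""
    return np.maximum(0, data @ model.coefs_[0] + model.intercepts_[0])

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(hidden_features(mlp, X_tr), y_tr)
knn_acc = knn.score(hidden_features(mlp, X_te), y_te)
print(f"KNN-on-NN-features accuracy: {knn_acc:.2f}")
```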

Unsupervised Methods
Unsupervised machine-learning methods are less frequent but still appear in some studies. The most common are based on K-means and ISODATA [65,68,77], the latter being an improvement of the former in which the user does not have to specify the number of clusters as an input. These methods first cluster the data, then assign each cluster a class. Their accuracy ranges from 50% to 80%, reaching 92% when discriminating between 3 benthic classes [81].
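The cluster-then-label workflow can be sketched as below: K-means clusters all pixels, then each cluster receives the majority class of the few ground-truth pixels falling inside it. The spectra, class names, and 5% ground-truth fraction are illustrative assumptions.

```python
# Sketch: unsupervised K-means clustering followed by majority-vote
# labelling of clusters from sparse ground truth.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
means = {"coral":    [0.05, 0.08, 0.06],
         "sand":     [0.25, 0.30, 0.32],
         "seagrass": [0.04, 0.11, 0.05]}
X = np.vstack([rng.normal(m, 0.02, size=(300, 3)) for m in means.values()])
y = np.repeat(list(means), 300)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Pretend only 5% of pixels have ground truth; label each cluster by
# the majority class among its ground-truth members.
gt = rng.choice(len(X), size=len(X) // 20, replace=False)
cluster_label = {}
for c in range(km.n_clusters):
    members = gt[km.labels_[gt] == c]
    if len(members):
        vals, counts = np.unique(y[members], return_counts=True)
        cluster_label[c] = vals[np.argmax(counts)]

pred = np.array([cluster_label.get(c, "unknown") for c in km.labels_])
km_acc = (pred == y).mean()
print(f"cluster-then-label accuracy: {km_acc:.2f}")
```

ISODATA follows the same clustering logic but additionally splits and merges clusters, so the number of clusters need not be fixed in advance.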

Synthesis
Given that the conclusions on which classifier is best can vary from one paper to another, we gathered in Figure 4 the coral mapping studies published since 2018 that use satellite imagery only. Please note that we excluded the methods appearing in fewer than 3 papers, leading to an analysis of a subset of 20 studies out of the 25 papers depicted in Figure 2. We regrouped K-means and ISODATA under the same label "K-means +" because these two methods are based on the same clustering process. Despite our comprehensive search of the literature, we acknowledge that some studies may have been overlooked. All the papers used here are listed in Table A1. Each point represents one study; its X-axis value corresponds to the method used and its color to the satellite used. A paper can produce several points if it used different methods or different satellites. The red line is the mean of each method. "RF" is Random Forest, "SVM" is Support Vector Machine, "MLH" is Maximum Likelihood, and "DT" is Decision Tree. The number of studies using each method appears in parentheses.
From the previous section and Figure 4, we conclude that the most accurate methods are RF and SVM. However, this conclusion must be evaluated carefully, because the studies compared in this paper rely on different methodologies (how the performance of the model is evaluated and which preprocessing steps are applied to the images) and data sets (location of the study site and satellite images used), which may strongly influence the reported results.

Improving Accuracy of Coral Maps
Although we have focused so far on maps derived from satellite images, there are many other ways to locate coral reefs without directly mapping them. This section describes how reefs can be studied without necessarily mapping them, and the technologies that improve the precision of reef mapping.

Indirect Sensing
It is possible to acquire information on reefs and their location without directly mapping them. Indirect sensing refers to these methods, which study reefs by analyzing their surrounding environmental factors.
For instance, measuring Sea Surface Temperature (SST) has helped to show that corals have already started adapting to the rise of ocean temperature [17]. Similarly, as anomalies in SST are an important factor in coral outplant survival [210], an algorithm forecasting SST can predict when heat stress may cause a coral bleaching event [211]. Furthermore, deep neural networks can predict SST even more accurately [212]. However, even though the measured SST and the actual temperature experienced by reefs can be similar [213], this is not always the case, depending on the sensors used and on other factors such as wind, waves, and seasons [214]. To overcome this issue and obtain finer predictions of the severity of bleaching events, water temperature can be combined with other factors such as light stress [215], a known cause of bleaching [216].
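One widely used way of turning an SST series into a heat-stress indicator is a Degree Heating Weeks (DHW)-style accumulation, as popularized by NOAA Coral Reef Watch. The simplified sketch below assumes a weekly SST series and an invented maximum monthly mean (MMM) climatology value; the operational product uses satellite SST and a 12-week accumulation window.

```python
# Illustrative DHW-style heat-stress index from weekly SST readings.
def degree_heating_weeks(sst_weekly, mmm, window=12):
    """Accumulate HotSpots (SST exceedances of at least 1 degC above the
    MMM climatology) over the trailing `window` weeks; returns one
    degC-weeks value per week."""
    dhw = []
    for i in range(len(sst_weekly)):
        recent = sst_weekly[max(0, i - window + 1): i + 1]
        hotspots = [t - mmm for t in recent if t - mmm >= 1.0]
        dhw.append(sum(hotspots))
    return dhw

# Hypothetical reef: MMM of 29.0 degC, with a summer warm spell.
sst = [28.5, 28.8, 29.2, 29.6, 30.2, 30.5, 30.4, 30.1,
       29.7, 29.3, 29.0, 28.7]
series = degree_heating_weeks(sst, mmm=29.0)
print(f"peak heat stress: {max(series):.1f} degC-weeks")
```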
Backscatter and absorption measurements, as well as chlorophyll-a levels, can also be analyzed to detect reef changes [217,218]. Chlorophyll-a levels and total suspended matter can be retrieved with high accuracy using algorithms based on satellite images [219][220][221][222]. Similarly, computations on bottom reflectance can detect coral bleaching [223].
We could imagine that these indirect measurements, performed with satellite imagery and providing useful data about coral health, could be incorporated as an additional input to some classifiers to improve their accuracy. This is something we have not been able to find in current literature and that we suggest trying.

Additional Inputs to Coral Mapping
First, to enhance classification accuracy, it appears evident that a higher satellite image resolution implies a higher accuracy for the same algorithm [224]. Notwithstanding this, we describe here the different means of enhancing the mapping at a given satellite resolution.
To effectively detect environmental changes, several factors are important [225], among which the quality of the satellite images [226] and the quantity of data over time [227]. Indeed, a temporal resolution of a few days or less is essential in order to select the best images, free of cloud cover and sunglint [228]. A solution can thus be to couple images from a high-resolution satellite with those from a high-frequency satellite, for instance WV-3 and RapidEye [194].
Satellite imagery alone may not be enough to discriminate coral reefs with a particular topography. Adding bathymetry data, for instance acquired with LiDAR, can improve the accuracy of the results [88,156,[229][230][231]. It is possible to estimate bathymetry and water depth with one of the numerous methods that currently exist [232][233][234][235], and to include this estimate as an additional input to a coral reef mapping algorithm [236]. This approach is found in Collin et al. 2021 [39], where it improves the accuracy by up to 3%, allowing more than 98% overall accuracy with high-resolution WV-3 images.
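The benefit of a depth band is easy to demonstrate on simulated data: in the sketch below, two hypothetical habitats share nearly identical spectra but occupy different depth ranges, so only the classifier that receives the depth feature can separate them. All values are invented and do not reproduce the setup of [39] or [236].

```python
# Sketch: comparing a classifier with and without a bathymetry feature.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)
n = 400
# Two habitats with the same mean reflectance but different depths (m).
spectra_a = rng.normal([0.06, 0.09, 0.07], 0.03, size=(n, 3))  # shallow
spectra_b = rng.normal([0.06, 0.09, 0.07], 0.03, size=(n, 3))  # deeper
depth_a = rng.normal(2.0, 0.5, size=(n, 1))
depth_b = rng.normal(6.0, 0.5, size=(n, 1))

X_spec = np.vstack([spectra_a, spectra_b])                    # spectra only
X_full = np.vstack([np.hstack([spectra_a, depth_a]),          # spectra + depth
                    np.hstack([spectra_b, depth_b])])
y = np.repeat([0, 1], n)

rf = RandomForestClassifier(n_estimators=100, random_state=0)
acc_spec = cross_val_score(rf, X_spec, y, cv=5).mean()
acc_full = cross_val_score(rf, X_full, y, cv=5).mean()
print(f"spectra only: {acc_spec:.2f}  with depth: {acc_full:.2f}")
```

This is an extreme case by construction; with real imagery the gain is smaller (up to 3% in [39]), but the mechanism is the same.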
Underwater images can also be used jointly with satellite images. They can be obtained from underwater photos taken by divers [94,237,238], as well as underwater videos taken from a boat [239].

Citizen Science
Crowdsourcing can help classify images or provide large data sets [244][245][246][247][248][249][250], in remote sensing of coral reefs as well as in other fields. However, citizen scientists can be wrong or provide inconsistent classifications [251][252][253], so corrections are often needed to learn from their responses [254,255]. The Neural Multimodal Observation and Training Network (NeMO-Net), a NASA project, is a good example of how citizen science can be used to generate highly accurate 3D maps and provide a global reef assessment, based on an interactive classification game [207,256,257]. This type of data can be especially helpful for training a neural network, since ground-truth knowledge and expert classifications are hard to acquire.

Conclusions and Recommendations
Across the papers mapping coral reefs from satellite imagery between 2018 and 2020, the best results are obtained with RF and SVM methods, even though the achieved overall accuracy almost never reaches 90% and is often below 80%. The arrival of very high-resolution satellites dramatically increases this figure to more than 98% [39]. To map coral reefs with a higher accuracy, we recommend complementing satellite images with additional inputs whenever possible.
When performing coral mapping from satellite images, it is very common to apply a wide range of preprocessing steps. Out of the four preprocessing methods proposed in Andréfouët 2008 [175], we suggest applying a water column correction (see [140] for the best method) and a sunglint correction (we recommend [163]). Geometric correction is only needed when working with ground-truth points, and radiometric correction when working with multi-temporal images. Interestingly, some postprocessing methods such as contextual editing appear to be underused and could further improve accuracy [138,173].
Presently, several projects study and map coral reefs at a worldwide scale using an array of resources, from satellite imagery to bathymetry data or underwater photographs: the Millennium Coral Reef Mapping Project [258], the Allen Coral Atlas [241], and the Khaled bin Sultan Living Ocean Foundation [259]. These maps have proven useful to the scientific community for coral reef and biodiversity monitoring and modeling, as well as for inventories and socio-economic studies [260].
However, when examining the maps created by all these projects, we can see that many sites are yet to be studied. Furthermore, some reef systems have been mapped at a given time but would need to be analyzed more frequently, in order to detect changes and obtain a better understanding of the current situation. Hence, even if the work achieved to date by the scientific community is considerable, much remains to be done. Great promise lies in upcoming very high-resolution satellites coupled with cutting-edge machine-learning algorithms.

Conflicts of Interest:
The authors declare no conflict of interest.