Article

Multi-Temporal Remote Sensing Satellite Data Analysis for the 2023 Devastating Flood in Derna, Northern Libya

1. Interdisciplinary Research Center for Aviation and Space Exploration, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia
2. Department of Aerospace Engineering, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia
3. Department of Physics, College of Engineering and Physics, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia
4. Centre of Research Excellence in Renewable Energy, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(4), 616; https://doi.org/10.3390/rs17040616
Submission received: 15 December 2024 / Revised: 18 January 2025 / Accepted: 30 January 2025 / Published: 11 February 2025
(This article belongs to the Special Issue Image Processing from Aerial and Satellite Imagery)

Abstract
Floods are considered to be among the most dangerous and destructive geohazards, leading to human victims and severe economic consequences. Every year, many regions around the world suffer from devastating floods. The estimation of flood aftermath is one of the high priorities for the global community. One such flood took place in northern Libya in September 2023. The presented study is aimed at evaluating the flood aftermath for the city of Derna, Libya, using high-resolution GEOEYE-1 and Sentinel-2 satellite imagery in the Google Earth Engine environment. The primary task is obtaining and analyzing data that provide high accuracy and detail for the study region. The main objective of the study is to explore the capabilities of different algorithms and remote sensing datasets for quantitative change estimation after the flood. Different supervised classification methods were examined, including random forest, support vector machine, naïve Bayes, and classification and regression tree (CART), and various sets of classification hyperparameters were considered. The high-resolution GEOEYE-1 images were used for precise change detection using image differencing (pixel-to-pixel comparison), whereas Sentinel-2 data were employed for classification and subsequent change detection based on the classified images. In addition, geographic object-based image analysis (GEOBIA) was performed on the very high-resolution GEOEYE-1 images to extract building footprints and quantify the buildings that collapsed due to the flood. The first stage of the study was the development of a workflow for data analysis. This workflow includes three parallel processes of data analysis. High-resolution GEOEYE-1 images of Derna city were investigated using change detection algorithms.
In addition, different indices (normalized difference vegetation index (NDVI), soil-adjusted vegetation index (SAVI), transformed NDVI (TNDVI), and normalized difference moisture index (NDMI)) were calculated to facilitate the recognition of damaged regions. In the final stage, the analysis results were fused to obtain the damage estimation for the studied region. As the main output, the area changes for the primary classes and the maps that portray these changes were obtained. Recommendations for data usage and further processing in Google Earth Engine were also developed.

1. Introduction

Global geohazard monitoring is a complex and multifaceted challenge that requires advanced tools and techniques to address its diverse and dynamic nature. Natural hazards, including hurricanes, earthquakes, floods, and other extreme events, pose significant threats to both human populations and the environment. These disasters often lead to widespread destruction, disrupting communities, damaging infrastructure, and resulting in substantial human casualties [1,2]. Leading worldwide scientific and economic organizations such as the National Oceanic and Atmospheric Administration (NOAA), the North Atlantic Treaty Organization (NATO), and the World Bank Group [3,4] have focused on understanding and estimating the aftermath of natural hazards, including infrastructure damage [5]. Over the years, an enormous amount of research has been carried out in an attempt to manage natural hazards and assess and reduce their risks [6,7,8]. Recent achievements in computer science, such as artificial intelligence, big data, and machine learning algorithms, have significantly improved disaster management [9,10,11,12]. However, leveraging these methods is only possible given reliable and precise initial data for further simulation. Therefore, data collection, processing, and analysis are indispensable steps of successful disaster management. Researchers have become increasingly interested in applying remote sensing data and technologies for disaster monitoring in recent decades. Numerous scholars have stressed the importance of remote sensing data for monitoring drought expansion [13], wildfires [14], tsunamis [15], earthquake aftermaths [16], etc. Among the mentioned disasters, flood monitoring is of particular interest [17,18]. Floods have a major impact on highly populated regions and large cities [19]. Flood studies using remote sensing data have therefore received considerable research attention.
Among the available remote sensing data, the highest priority is given to data obtained by space- or aircraft-based sensors, including unmanned aerial vehicles (UAVs) [20,21]. As floods are a global phenomenon, priority should be given to data providing the maximum area coverage at the highest possible geometric and radiometric resolution. This is why several recent studies have looked into the capability of satellite images in optical bands [17,22,23], multispectral satellite images [24,25,26], SAR images [25,27], aerial images [28], and UAV-acquired images [29,30] for flood monitoring. Previous studies have comprehensively reviewed the application of remote sensing technology to flood monitoring [31,32,33,34]. Satellite and UAV data provide the necessary information for subsequent analysis. This analysis can be carried out using various approaches, e.g., physical, statistical, geospatial, etc., or their combinations. Geographical Information Systems (GIS) are a leading platform for flood monitoring and management [35,36,37,38]. Therefore, we have used a fusion of different remote sensing data, GIS technologies, and mathematical algorithms that, in combination, allow for finding a solution to flood monitoring and management.
Machine learning methods are the primary mathematical tools for processing massive datasets. These methods draw on contemporary achievements in classical machine learning [26], neural networks, and deep learning [39,40]. The main burden that machine learning carries in remote sensing data handling is image processing: the sizes of current datasets make manual image processing impossible. Evidently, flood monitoring and management require the solution of image change detection problems [41,42,43,44]. Change detection methods differ in calculation approaches and comparison strategies [14,45,46,47]. However, they all consider changes in imagery radiance [46,48,49]. Many studies have shown that radiance depends on various factors [50], among them sensor calibration, solar angle, atmospheric conditions, season, etc. The reduction of these factors is the greatest challenge for change detection strategies. To overcome this issue, various radiometric correction methods are applied. A researcher must keep in mind that images of different observation epochs should be radiometrically uniform; as a prerequisite, atmospheric and solar angle corrections and precise geometric co-registration of the images must be accounted for. Change detection algorithms work well when the changes on the ground are significant and the differences in radiance are considerably larger than the impacts of the other aforementioned factors. Thus, there is no universal solution to the change detection problem. From a computational point of view, change detection methods can be divided into four groups: image differencing, post-classification comparison, image transformation, and combined methods [51]. Image differencing methods include simple image differencing and image rationing. Post-classification comparison contains different methods that are based on preliminary image classification.
Image transformation methods involve some preliminary data transformation or the calculation of additional parameters from pixel reflectance (linear transformation, change vector analysis, image regression, multitemporal spectral mixture analysis, etc.). All methods have pros and cons [44,52,53,54,55]. Apart from image differencing methods, the efficiency of the other methods depends on the machine learning algorithms used for data processing. In general, supervised and unsupervised learning are used. Recent studies have shown the high efficiency of the following supervised learning approaches: convolutional neural networks [32,56,57,58], fully convolutional networks [57], deep learning [59], Siamese convolutional neural networks [60,61], deep Siamese semantic networks [62], conditional adversarial networks [63,64], generative discriminatory classified networks [65], dual-dense convolution networks [66], multilayer Markov random field models [67], deep capsule networks [68], hierarchical difference representation learning by neural networks [65], deep belief networks [69], transfer learning approaches [70], etc. This list is by no means exhaustive, but it demonstrates the role of different supervised learning techniques in change detection tasks. Examples of unsupervised learning for change detection can be found in [62,71,72,73]. A considerable body of research indicates that the change detection problem is still far from a reliable solution. In our opinion, the image classification approach has significant capability, especially in conjunction with machine learning methods, e.g., convolutional neural networks [39], deep learning [33], and statistical approaches [74,75], or their combinations.
The primary goal of this study is to assess the capabilities of various remote sensing indices, algorithms, and datasets for detecting and quantifying the qualitative and quantitative changes caused by the devastating flood in Derna, Libya, in September 2023. Specifically, the research focuses on determining the most effective indices and algorithms for post-flood change detection and exploring the contributions of different remote sensing datasets to the accuracy and reliability of flood impact assessments. By employing image classification-based change detection techniques, the study aims to identify specific land cover and hydrological changes while evaluating the strengths and limitations of these methods in comparison to other approaches. This integrated approach seeks to bridge methodological exploration with practical outcomes, offering valuable insights into the impact of the flood and the effectiveness of remote sensing tools in post-disaster analysis. The aim of this research is threefold. The first objective is to explore different image classification algorithms for change detection, including random forest [76], classification and regression tree (CART) [77], naïve Bayes [78], and support vector machine (SVM) [79] algorithms. The second is to estimate change detection using high-resolution remote sensing data and the image differencing method. The third is to develop an integrated measure for change estimation that accounts for the outputs of the different change detection methods. In addition, different indices (normalized difference vegetation index (NDVI) [80], soil-adjusted vegetation index (SAVI) [81], transformed NDVI (TNDVI), and normalized difference moisture index (NDMI) [82]) were calculated using multispectral Sentinel-2 images to facilitate the recognition of damaged regions through the changes in vegetation washed away during the flood. The last objective of the study was to extract buildings using object-based image analysis (OBIA) and very high-resolution GEOEYE-1 images to quantify the buildings that collapsed due to the flood.
The remainder of this paper is structured as follows. Section 2 provides an overview of the study area and the key features of the flood, its causes, and its aftermath. Section 3 describes the remote sensing data used for the analysis. Section 4 presents a theoretical framework for the change detection methods; a brief description of image differencing, spectral indices, post-classification comparison, and geographic object-based image analysis is provided. Section 5 details the findings of the change detection research. This section outlines the results of image differencing change detection using high-resolution remote sensing data and then presents the results of post-classification change detection using different classification methods. The integrated measure for change estimation across the different layers is considered, and the analysis concludes with its calculation and comparison, furnishing the final output of the analysis of the detected changes. Section 6 is dedicated to the conclusions.

2. Study Area

Libya, situated in northern Africa, covers a vast expanse of 1,759,540 square kilometers and possesses a highly distinctive climate (Figure 1). The country has an estimated population of around 7 million, according to the United Nations in 2021 [83]. Libya is characterized mainly by a desert climate, marked by summer temperatures often exceeding 40 degrees Celsius and relatively mild winters, with sporadic rainfall primarily concentrated in the winter months [83]. Over 95% of the country’s land area lies below the 100 mm isohyet, classifying it as a hyper-arid zone, if not outright desert [84]. However, the coastal regions in the west enjoy a Mediterranean climate, with precipitation ranging between 300 and 500 mm per year. Occasionally, Mediterranean storms can take advantage of favorable meteorological conditions (such as elevated sea surface temperatures) to cause flooding in Europe and along the African coast. This pluviometric situation poses significant challenges for water resource management and calls for adaptive solutions to ensure sustainable development in the country.
In September 2023, Storm Daniel formed above the Ionian Sea between Italy, Greece, and the Balkan Peninsula in the Mediterranean basin. Elevated temperatures during this season contributed to the necessary moisture production. The sea surface temperature of the Mediterranean was unusually high, surpassing typical values by two to three degrees and reaching a record of 28.71 °C in July. On 4 September, the storm moved inland, impacting the Balkans and bringing heavy rains to the region. The Greek National Meteorological Service officially named it, following the classification of European meteorological services. Between 4 September 2023 and 7 September 2023, torrential precipitation led to severe flooding in Turkey, Greece, and Bulgaria, resulting in the loss of human lives, with at least 26 reported deaths and 2 individuals still missing.
In the following days, the system moved southeastward, reaching its peak as a subtropical storm with recorded winds of 83 km/h, according to Meteorological Operational satellite (MetOp) instruments. On 10 September, Storm Daniel continued its trajectory eastward, penetrating further inland and eventually reaching northeastern Libya, where it unleashed 414 mm of rain in a single day. The consequences were catastrophic: floods and mudslides, triggered by the collapse of the Derna dams, led to over 11,470 fatalities and left at least 10,000 individuals missing. Subsequently, the storm weakened and reached northern Egypt on 11 September, causing moderate precipitation. Ultimately, it diminished significantly in intensity, transitioning into a residual depression due to the interaction of dry air and friction before dissipating completely on 11 September 2023.
The worst impacts of Storm Daniel’s flooding were concentrated in Libya, specifically in the port city of Derna, which has a population of approximately 90,000. The torrential rainfall of 414 mm led to the breach of the adjacent Derna and Abu Mansur dams, both around fifty years old. This allowed a flow more than 100 m wide to rush in, inundating the city’s heart along the Wadi Derna riverbed, which is typically dry for most of the year. Dwellings collapsed, resulting in both human and material losses. The breach of the second dam, located just one kilometer inland from Derna, then released floodwaters measuring 3 to 7 m in height, submerging the city. The sudden floods destroyed roads and swept entire neighborhoods towards the sea (Figure 2).

3. Data

The study encompasses Derna and the surrounding regions located along Libya’s Mediterranean coast. Detailed information about the data used for the study is provided in Table 1. In addition to the high-resolution data, Sentinel-2 data were used for change detection via image classification and indices. It is well known that the more spectral bands are available, the better the classification results. Because GEOEYE-1 has only three spectral bands, this number is critical. Therefore, we decided to split the data into two sets depending on the task. The high-resolution GEOEYE-1 images were used for precise change detection using image differencing (pixel-to-pixel comparison) and geographic object-based image analysis (GEOBIA), whereas Sentinel-2 data were employed for classification and subsequent change detection based on the classified images, as well as for the calculation of various indices.
Different software packages were used for the different computational procedures. Image differencing (pixel-based) change detection was accomplished in QGIS. For building footprint extraction, we used object-based image analysis in eCognition Developer. Post-classification change detection and the calculation of the different indices were performed on Sentinel-2 imagery in the Google Earth Engine environment.

4. Methodology

4.1. Image Differencing

The methods based on image differencing and rationing provide only quantitative information about probable image changes; we may therefore determine changes without referencing them to specific objects or map layers. The image differencing method is based on pixel radiance comparison. Its equation is simple:
$$\Delta x_i = x_i^{b,t_2} - x_i^{b,t_1},$$
where $x_i^{b,t_1}$ and $x_i^{b,t_2}$ are the $i$-th pixel radiances in band $b$ for the two observation epochs $t_1$ and $t_2$.
The value $\Delta x_i$ can be positive, negative, or zero (zero indicating no change). The method needs careful preprocessing to eliminate any spurious radiance changes. Because differences are calculated, the theory of errors implies that the final result has greater uncertainty than each contributor separately. If we designate $m_{x_{t_i}}$ as the standard deviation of the pixel radiance at time $t_i$, then the standard deviation of the difference is:
$$m_{\Delta x_{ij}} = \sqrt{m_{x_{t_i}}^2 + m_{x_{t_j}}^2},$$
or, for approximately equal errors ($m_{x_{t_i}} = m_{x_{t_j}}$):
$$m_{\Delta x_{ij}} = m_{x_{t_i}}\sqrt{2}.$$
The threshold value for changes can be assigned based on the standard deviation. However, due to the vast number of factors that affect the standard deviation, we suggest assigning the threshold value by default; in our case, 0.2. Choosing a threshold value of 0.2 for change detection in flood images involves balancing sensitivity and robustness to achieve meaningful and reliable results. This value reflects an empirical decision based on the nature of the radiance data and the expected noise levels after preprocessing. A threshold of 0.2 represents a practical compromise: it is high enough to minimize false positives caused by insignificant radiance fluctuations but low enough to ensure the detection of genuine flood-induced changes.
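As an illustration of the procedure above, the sketch below computes the per-pixel difference and applies the 0.2 threshold. The radiance grids are hypothetical stand-ins, not the actual GEOEYE-1 data:

```python
import numpy as np

# Hypothetical 4x4 radiance grids for two epochs (t1 = pre-flood, t2 = post-flood),
# normalized to [0, 1]; real inputs would be co-registered GEOEYE-1 bands.
t1 = np.array([[0.30, 0.32, 0.31, 0.30],
               [0.45, 0.44, 0.46, 0.45],
               [0.60, 0.61, 0.10, 0.12],
               [0.58, 0.59, 0.11, 0.10]])
t2 = np.array([[0.31, 0.30, 0.32, 0.31],
               [0.44, 0.46, 0.45, 0.44],
               [0.15, 0.14, 0.55, 0.58],
               [0.16, 0.15, 0.57, 0.56]])

diff = t2 - t1                    # Delta x_i = x_i(t2) - x_i(t1)

THRESHOLD = 0.2                   # |diff| <= 0.2 is treated as "no change"
changed = np.abs(diff) > THRESHOLD

change_fraction = changed.mean()  # share of pixels flagged as changed
```

On real imagery, the same thresholding would be applied band-wise after radiometric normalization and co-registration.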

4.2. Change Detection by Spectral Indices

Unlike simple pixel radiance comparison, the various indices can indicate specific changes related to, e.g., vegetation, moisture, etc. Multispectral Sentinel-2 images were used to calculate the different indices. Remote sensing indices are key in flood studies for mapping inundation and analyzing impacts. The Normalized Difference Vegetation Index (NDVI) and its derivatives, such as the Soil-Adjusted Vegetation Index (SAVI) and Transformed NDVI (TNDVI), monitor vegetation health and flood-induced changes, with SAVI reducing soil interference. The Normalized Difference Moisture Index (NDMI) detects moisture variations in vegetation, aiding flood impact assessment. For water detection, the Modified Normalized Difference Water Index (MNDWI) enhances sensitivity to water, while the Land Surface Water Index (LSWI) focuses on soil moisture and surface water. The Enhanced Vegetation Index (EVI), Delta Vegetation Index (DVEL), and Soil-Adjusted Total Vegetation Index (SATVI) provide refined tools for analyzing vegetation and inundation areas, while the Atmospherically Resistant Vegetation Index (ARVI) ensures accuracy under atmospheric interference. These indices collectively offer robust capabilities for flood mapping and monitoring. Among this variety of indices, the following were chosen and calculated [85,86]:
The normalized difference vegetation index (NDVI) was designed to quantify vegetation and to measure and monitor plant growth, vegetation cover, and biomass production using the NIR and red spectral bands [80]. The NDVI equation and its change are as follows:
$$\mathrm{NDVI}_{t_i} = \frac{NIR - RED}{NIR + RED}, \qquad \Delta\mathrm{NDVI} = \mathrm{NDVI}_{t_2} - \mathrm{NDVI}_{t_1}$$
The soil-adjusted vegetation index (SAVI) is used to minimize soil brightness influences in spectral vegetation indices involving the red and near-infrared (NIR) wavelengths [81]. The equation for SAVI and its change are as follows:
$$\mathrm{SAVI}_{t_i} = (1 + L)\,\frac{NIR - RED}{NIR + RED + L}, \quad L = 0.5, \qquad \Delta\mathrm{SAVI} = \mathrm{SAVI}_{t_2} - \mathrm{SAVI}_{t_1}$$
Transformed NDVI and its change:
$$\mathrm{TNDVI}_{t_i} = \sqrt{\mathrm{NDVI} + 0.5}, \qquad \Delta\mathrm{TNDVI} = \mathrm{TNDVI}_{t_2} - \mathrm{TNDVI}_{t_1}$$
Although different vegetation indices have been used in remote sensing to detect changes in vegetation, the normalized difference moisture index (NDMI) is more sensitive to vegetation disturbances and more resistant to data noise than any other tested index [82]. The equation for NDMI and its change are as follows:
$$\mathrm{NDMI}_{t_i} = \frac{SWIR - NIR}{SWIR + NIR}, \qquad \Delta\mathrm{NDMI} = \mathrm{NDMI}_{t_2} - \mathrm{NDMI}_{t_1}$$
where NIR, SWIR, and RED are the near-infrared, short-wave infrared, and red spectral bands, respectively.
The calculated indices were compared, and their differences were converted into areas. The obtained changes provide a rough estimate due to the relatively low resolution of the multispectral Sentinel-2 images. A more precise but less informative change assessment was accomplished using the image differencing method on the high-resolution images.
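For reference, Equations (4)–(7) can be sketched directly from the band reflectances. The reflectance values below are illustrative only, and the NDMI sign convention follows the formula as printed above:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index (Eq. 4)."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil-adjusted vegetation index (Eq. 5)."""
    return (1 + L) * (nir - red) / (nir + red + L)

def tndvi(nir, red):
    """Transformed NDVI (Eq. 6)."""
    return np.sqrt(ndvi(nir, red) + 0.5)

def ndmi(nir, swir):
    """Normalized difference moisture index (Eq. 7), sign as printed in the text."""
    return (swir - nir) / (swir + nir)

# Illustrative reflectances for one pixel before (t1) and after (t2) the flood.
nir_t1, red_t1 = 0.60, 0.20
nir_t2, red_t2 = 0.30, 0.25

# Between-epoch change: loss of vegetation drives Delta NDVI negative.
delta_ndvi = ndvi(nir_t2, red_t2) - ndvi(nir_t1, red_t1)
```

On Sentinel-2 Level-2A data, NIR, red, and SWIR would correspond to bands B8, B4, and B11, respectively.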

4.3. Classification Comparison

The idea behind this method is similar to that of image differencing; the only difference is that a preliminary image classification is performed and the classified pixels, rather than pixel radiances, are compared. Thanks to this approach, we can identify the changes specific to different object layers (water, vegetation, urban, etc.). The quality of this method depends on the classification accuracy, which in turn depends on the classification strategy (unsupervised or supervised), the classification algorithm (random forest, support vector machine, etc.), and the pixel radiance. If we classify images using a supervised classification strategy, then the impact of differing radiance for the same pixels will be reduced. The algorithms tested for classification—random forest, classification and regression tree (CART), naïve Bayes, and support vector machine (SVM)—were chosen due to their diverse approaches to modeling. Random forest, an ensemble method, is well suited to handling high-dimensional data and addressing overfitting, making it a robust choice for complex datasets. CART, a decision-tree-based algorithm, is intuitive and effective for tasks with clear decision boundaries but may struggle with overfitting on noisy data. Naïve Bayes, a probabilistic classifier based on Bayes’ theorem, is computationally efficient and effective for text or categorical data but assumes feature independence, which may limit its performance in visually complex tasks like flood image classification. SVM is highly effective for linearly separable data and can handle non-linearity with kernel tricks, but it is computationally intensive for large datasets.
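A minimal sketch of how the four classifiers can be compared, here using scikit-learn on synthetic pixel spectra rather than the actual Sentinel-2 training samples (the class means and band values are invented for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Synthetic "pixel spectra": 3 classes (water, vegetation, urban), 4 bands each.
means = np.array([[0.05, 0.08, 0.04, 0.02],   # water: very low NIR
                  [0.05, 0.10, 0.45, 0.25],   # vegetation: high NIR
                  [0.25, 0.25, 0.30, 0.35]])  # urban: flat spectrum
X = np.vstack([rng.normal(m, 0.03, size=(200, 4)) for m in means])
y = np.repeat([0, 1, 2], 200)

classifiers = {
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "cart": DecisionTreeClassifier(random_state=0),
    "naive_bayes": GaussianNB(),
    "svm": SVC(kernel="rbf"),
}

accuracy = {}
for name, clf in classifiers.items():
    clf.fit(X[::2], y[::2])                       # even rows: training samples
    accuracy[name] = clf.score(X[1::2], y[1::2])  # odd rows: held-out test
```

In the study itself the classification was run in Google Earth Engine (e.g., its Smile Random Forest implementation); the sketch only mirrors the algorithmic comparison.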
In the comparative analysis, random forest outperformed the others due to its ensemble nature, capturing diverse patterns in the data, while SVM showed strong performance on well-defined feature spaces. CART’s performance was moderate, impacted by its sensitivity to noise. However, since we again deal with differences, the final accuracy will be lower than the accuracy of each classified image. We propose to estimate the post-classification change detection accuracy using the following expression:
$$m_{C_{ij}} = \sqrt{m_{C_{t_i}}^2 + m_{C_{t_j}}^2},$$
where $m_{C_{t_i}}$ is the classification accuracy of the image for observation epoch $t_i$.
The classification quality can be estimated in different ways. Popular measures of classification quality are the overall accuracy (OA), errors of omission, errors of commission, producer’s accuracy (PA), and consumer’s (user’s) accuracy (UA). The overall accuracy represents the proportion of the reference data of all classes that was classified correctly and is calculated as the total number of correctly identified pixels of all classes divided by the total number of pixels of all classes in the reference sample. The producer’s accuracy represents the percentage of each class that was correctly classified and is calculated as the number of correctly identified pixels of a given class divided by the total number of pixels actually in that class. The consumer’s accuracy, also known as user’s accuracy, represents reliability, accounting for pixels of other classes wrongly included in a given class (commission); it is calculated as the number of correctly identified pixels of a given class divided by the total number of pixels claimed to be in that class [87,88]. The confusion matrix is the primary source for these measures. Section 5 gives a detailed description of these measures.
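These accuracy measures, together with the error propagation expression above, can be computed directly from a confusion matrix. The matrix and per-epoch uncertainties below are hypothetical, not the values in Table 3:

```python
import numpy as np

# Hypothetical 3-class confusion matrix: rows = reference, columns = predicted.
cm = np.array([[50,  3,  2],
               [ 4, 45,  1],
               [ 1,  2, 42]])

overall_accuracy   = np.trace(cm) / cm.sum()        # correctly classified share
producers_accuracy = np.diag(cm) / cm.sum(axis=1)   # per reference class (omission)
users_accuracy     = np.diag(cm) / cm.sum(axis=0)   # per mapped class (commission)

# Propagated uncertainty of post-classification change detection:
# m_C = sqrt(m_t1^2 + m_t2^2), with hypothetical per-epoch values.
m_t1, m_t2 = 0.08, 0.10
m_change = np.hypot(m_t1, m_t2)
```

The same three quantities are reported in Tables 3 and 5 of the study for the before- and after-flood classifications.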

4.4. Geographic Object-Based Image Analysis (GEOBIA)

For very high-resolution satellite images, the use of traditional statistical analysis of single pixels is not appropriate as the pixel under consideration and its neighboring pixels may differ spectrally but belong to the same land cover class [89]. The high spectral variability within the same land cover class in high-resolution satellite images creates a “salt-and-pepper” effect during classification. In contrast to pixel-based approaches for the classification of high-resolution satellite images, object-based image analysis is very effective for the classification of objects at different scales [90].
The first step in object-based image analysis is image segmentation for the creation of objects. Image segmentation is the process by which homogeneous image objects are created by aggregating groups of pixels regarding spectral and spatial characteristics. The term ‘homogeneous’ implies that within-object variance is low compared to that between objects, and those identified objects also contain additional information about geometry (size and shape), contextual, textural, and spectral information [91]. These homogeneous objects reflect real-world objects of interest [89].
In object-based image analysis, besides the spectral information of objects, geometry (shape and size), spatial, and contextual information are very helpful for characterizing land use classes [92]. This is similar to how humans usually recognize landscape patterns by their spatial relationships to neighboring objects. Spatial relationships between adjacent pixels in the form of texture provide important information for the identification of individual objects, which are the building blocks of the original features of interest [93]. In this way, homogeneous objects based on spatially connected groups of pixels with similar spectral characteristics can be identified.
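The segmentation itself was performed in eCognition Developer; as a toy stand-in for the idea of grouping spatially connected, spectrally similar pixels into objects, a threshold-plus-connected-components sketch (on an invented single-band image) looks like this:

```python
import numpy as np
from scipy import ndimage

# Toy single-band image: two bright "buildings" on a darker background.
img = np.zeros((10, 10))
img[1:4, 1:4] = 0.9      # building 1 (3x3 pixels)
img[6:9, 5:9] = 0.8      # building 2 (3x4 pixels)

# Crude segmentation: threshold into a homogeneous foreground, then group
# spatially connected pixels into objects (connected-component labeling).
mask = img > 0.5
labels, n_objects = ndimage.label(mask)

# Objects carry attributes beyond spectra, e.g., size (pixel count).
sizes = ndimage.sum(mask, labels, index=range(1, n_objects + 1))
```

Real GEOBIA segmentation (e.g., multiresolution segmentation) additionally balances spectral homogeneity against shape and scale parameters, which this sketch does not attempt.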

4.5. Change Detection Analysis Scheme

Therefore, we have selected four change detection approaches. The different change detection methods can be fused into a general flowchart (Figure 3).
In the given flowchart, B and A denote images before and after the flood, respectively. The results of the different change detection approaches were analyzed according to Figure 3. To improve the training/test output for the Sentinel-2 images, we used GEOEYE-1 images as a reference. This approach helped us determine and identify the necessary classes in the Sentinel-2 images.

5. Results and Discussion

5.1. Image Differencing Change Detection

The image differencing method was applied to the GEOEYE-1 images. The post-flood GEOEYE-1 images were georeferenced to the pre-flood images in QGIS to minimize the offset between the two acquisition times. Before differencing, every image was corrected for abnormal radiance deviations and normalized using standard image processing procedures in QGIS. Figure 4a,b shows the results of the image differencing procedure for the Derna testing region. The threshold values were set at intervals of 0.4 over the range from −1 to 1. The sea surface was excluded before further analysis. Regions with values from −0.20 to 0.20 are accepted as having no changes. In the earlier explanation, a threshold value of 0.2 was proposed as a practical cut-off to distinguish between significant changes and noise; the range from −0.20 to 0.20 effectively incorporates that threshold by treating radiance differences within it as “no change”. This aligns with the idea that radiance changes below a magnitude of 0.2 (whether positive or negative) are likely due to noise, minor environmental fluctuations, or preprocessing artifacts rather than actual changes in the scene.
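The binning of the difference image into 0.4-wide classes, with the central interval treated as "no change", can be sketched as follows (the difference values are invented for illustration):

```python
import numpy as np

# Hypothetical difference-image values (t2 - t1), in [-1, 1].
diff = np.array([-0.85, -0.45, -0.10, 0.05, 0.18, 0.35, 0.75])

# Class edges every 0.4 from -1 to 1, as in the Derna analysis;
# the central bin (-0.2, 0.2) is treated as "no change".
edges = np.arange(-1.0, 1.01, 0.4)     # [-1.0, -0.6, -0.2, 0.2, 0.6, 1.0]
bins = np.digitize(diff, edges[1:-1])  # bin index 0..4, bin 2 = no change

no_change_share = np.mean(bins == 2)
```

The per-bin pixel counts, converted to areas, are what the histograms in Figure 5 summarize.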
Figure 5a–c presents the area change histograms. The mean area of change exceeds 30%.
The image differencing approach gives a first glance at the degree of changes and allows us to estimate the approximate level of destruction.

5.2. Spectral Indices-Based Change Detection

Spectral indices are among the most popular techniques developed for image analysis. The first index used for the analysis is the traditional NDVI (4). Different values of NDVI correspond to water bodies (−1 to 0), barren rock and sand (−0.1 to 0.1), sparse vegetation (shrubs, grasslands, or senescing crops; 0.2 to 0.5), and dense vegetation (0.6 to 1.0). The second index used for the analysis is SAVI (5), and the third is TNDVI (6); the latter shows the same sensitivity as SAVI to the optical properties of the bare soil underlying the vegetation cover. Since we are dealing with flooded areas, the NDMI (7) is also of interest; as its name suggests, this index can be used for monitoring vegetation, water, and drought. The values of NDMI are distributed in the following way: barren soil (−1), water stress (−0.2 to 0.4), and high canopy without water stress (0.4 to 1). For our case, the primary interest is not the indices’ values themselves but rather their changes between observation epochs. These changes allow us to identify the ground transformations invoked by the flood, e.g., vegetated areas turning into barren or sandy land. To this end, the primary emphasis has been placed on the index differences converted into areas. A visual presentation of the differences calculated from the Sentinel-2 images is given in Figure 6, where the left images present the spectral indices before the flood (Figure 6a,c,e,g) and the right images correspond to the differences of the spectral indices before and after the flood (Figure 6b,d,f,h).
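One simple, non-overlapping reading of the NDVI ranges quoted above can be coded as follows (an interpretation, since the quoted intervals leave small gaps; the NDVI values are invented):

```python
import numpy as np

ndvi = np.array([-0.4, 0.05, 0.35, 0.7])  # illustrative per-pixel NDVI values

# Non-overlapping reading of the quoted ranges:
# < 0 water, [0, 0.2) barren rock/sand, [0.2, 0.6) sparse, >= 0.6 dense.
classes = np.array(["water", "barren", "sparse_vegetation", "dense_vegetation"])
labels = classes[np.digitize(ndvi, [0.0, 0.2, 0.6])]
```

In the study, however, the classes themselves come from the supervised classification of Section 5.3; the index thresholds serve only for interpretation.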
Based on the obtained differences, the changes in square kilometers were calculated (Table 2).
Generally, the index differences describe the nature of the changes well. The remaining issue is determining the types of objects that underwent the changes. To overcome this, the images must first be classified, and the changes must then be determined from class differences.

5.3. Classification Change Detection

Google Earth Engine was used as the primary tool for classification and further layer differencing using Sentinel-2. The classification algorithms considered were outlined in Section 4.3. The first tested algorithm was Smile Random Forest. The basic hyperparameter of the Smile Random Forest algorithm is the “number of trees”. The study explored three different values of “number of trees” (10, 100, and 300). The classification results before and after the flood are presented in Figure 7.
Seven land cover classes were chosen for the classification: vegetation, urban territories, barren lands, sand, roads, shallow water, and water bodies. Forty points and twenty polygons of different sizes were selected per class for supervised learning. Additionally, twenty points per class were collected for accuracy assessment. These measurements allowed us to generate confusion matrices for the pre- and post-flood classifications. The classification accuracy for the different classes was calculated from the confusion matrix figures (Table 3).
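The accuracy figures reported in the tables are derived from the confusion matrix in the usual way. A small sketch with a hypothetical three-class matrix (rows = reference, columns = predicted; twenty check points per class, as in the text):

```python
def accuracies(cm):
    """Overall, producer's, and consumer's (user's) accuracy from a confusion matrix."""
    k = len(cm)
    n = sum(sum(row) for row in cm)
    overall = sum(cm[i][i] for i in range(k)) / n
    # producer's accuracy: correctly classified / reference total (row sum)
    producers = [cm[i][i] / sum(cm[i]) for i in range(k)]
    # consumer's accuracy: correctly classified / predicted total (column sum)
    cols = [sum(cm[i][j] for i in range(k)) for j in range(k)]
    consumers = [cm[j][j] / cols[j] for j in range(k)]
    return overall, producers, consumers

cm = [[18, 1, 1],
      [2, 17, 1],
      [0, 2, 18]]  # hypothetical counts for three of the seven classes
overall, prod, cons = accuracies(cm)
```

The per-class producer's accuracies computed this way are the weights later used in the integrated estimation of Section 5.4.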
Regardless of the accuracy obtained, the differences between respective classes were calculated. These differences were obtained in pixels and converted into square kilometers. Areas in square kilometers are presented in Table 4.
Figure 8a,b present the classification results for the CART algorithm. The hyperparameters for this algorithm are the maximum number of leaf nodes in each tree, which is by default assigned no limit, and the minimum leaf population, which is by default equal to 1.
Classification accuracy derived from the confusion matrix is outlined in Table 5.
The differences between the respective classes are shown in Table 6.
The third classification was accomplished using the Naïve Bayes algorithm, which has only one hyperparameter: the smoothing coefficient lambda, set to 0.000001. The classification results are presented in Figure 9. Naïve Bayes performed poorly, highlighting its limitations in handling correlated and visually complex features: its conditional-independence assumption ignores the spatial dependencies and complex patterns that are crucial in flood image classification. Understanding these nuances is critical for selecting the most suitable model for a specific classification task.
Classification accuracy is outlined in Table 7.
The differences between respective layers are shown in Table 8.
Unlike the others, the Support Vector Machine (SVM) algorithm has multiple hyperparameters. For convenience, we summarized the possible SVM types, kernels, and hyperparameters in the diagram in Figure 10. Different sets of hyperparameters apply depending on the SVM and kernel type; a detailed description is available in the Google Earth Engine documentation. Only specific SVM types fit classification tasks well, so the focus was placed on the C-SVC type. Four kernels, namely LINEAR, POLY, RBF, and SIGMOID, were tested for this SVM type, and degrees of two, three, and four were considered for the polynomial kernel.
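The four kernels named above take the standard LIBSVM forms, each using a different subset of the hyperparameters (gamma, coef0, degree). A sketch of the kernel functions on plain feature vectors; the parameter values below are illustrative only:

```python
import math

def linear(u, v):
    # k(u, v) = u . v  (no hyperparameters)
    return sum(a * b for a, b in zip(u, v))

def poly(u, v, gamma=1.0, coef0=0.0, degree=3):
    # k(u, v) = (gamma * u.v + coef0) ** degree
    return (gamma * linear(u, v) + coef0) ** degree

def rbf(u, v, gamma=0.5):
    # k(u, v) = exp(-gamma * ||u - v||^2)
    sq = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-gamma * sq)

def sigmoid(u, v, gamma=0.5, coef0=0.0):
    # k(u, v) = tanh(gamma * u.v + coef0)
    return math.tanh(gamma * linear(u, v) + coef0)

u, v = [0.2, 0.4], [0.1, 0.3]
```

Raising the polynomial degree (two, three, four, as tested in the study) increases the flexibility of the decision boundary at the cost of a larger hyperparameter search.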
Below, we provide the results of two classifications. The first corresponds to the linear kernel with the other parameters left at their defaults (Figure 11a,b). The second is the classification that ensured the highest accuracy, accomplished with a polynomial kernel of degree three (Figure 12a,b).
Table 9 outlines classification accuracy derived from SVM classification, while Table 10 shows classification accuracy for SVM with a polynomial kernel.
The differences between the respective layers are given in Table 11 and Table 12.
The obtained results presented in Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11 and Table 12 require generalization to estimate area changes. Section 5.4 presents such an estimation.

5.4. Integrated Estimation of Changes

Insofar as the different classification algorithms yield different values, it is necessary to derive an estimate of the area change that accounts for the various accuracies of the algorithms. The integrated measure over the different area-estimation algorithms is calculated by the following expressions.
Mean value calculation for each layer over the different algorithms:

$$\mathrm{mean}_A = \frac{\sum_i A_i}{n} \tag{15}$$

Deviation calculation:

$$\Delta A_i = A_i - \mathrm{mean}_A \tag{16}$$

Control and elimination of the area estimations that exceed the threshold:

$$\text{if } \lvert \Delta A_i \rvert > t \cdot \mathrm{std}_A, \text{ then } A_i \text{ is an outlier} \tag{17}$$

Weighted calculation of the area changes before and after the flood:

$$\bar{A}_{\mathrm{before}} = \frac{\sum_i A_i\, p_i}{\sum_i p_i} \tag{18}$$

$$\bar{A}_{\mathrm{after}} = \frac{\sum_i A_i\, p_i}{\sum_i p_i} \tag{19}$$

where $p_i$ is the producer's accuracy of the $i$-th classification, and the sums in (18) and (19) run over the pre- and post-flood estimates, respectively.
First, we calculate the mean values (15) over each region for the same classes. The deviations are calculated from the mean values (16). If a particular deviation exceeds t·std_A, the area is considered an outlier and is excluded from further analysis (17). The area changes for each layer are then calculated using the producer's accuracies as the weights p_i (18), (19). The results of this algorithm are given in Table 13.
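Expressions (15)–(19) can be sketched directly. This is a minimal illustration: the area values, accuracies, and the outlier multiplier t below are hypothetical:

```python
import math

def integrated_area(areas, weights, t=2.0):
    """Weighted area estimate over algorithms, with outlier screening.

    areas   -- area estimates A_i from the different classifiers (sq. km)
    weights -- p_i, producer's accuracy of each classifier
    t       -- outlier threshold multiplier on the standard deviation
    """
    n = len(areas)
    mean_a = sum(areas) / n                                   # (15)
    dev = [a - mean_a for a in areas]                         # (16)
    std_a = math.sqrt(sum(d * d for d in dev) / n)
    kept = [(a, p) for a, d, p in zip(areas, dev, weights)
            if abs(d) <= t * std_a]                           # (17) drop outliers
    return sum(a * p for a, p in kept) / sum(p for _, p in kept)  # (18)/(19)

# hypothetical estimates of one class area from four classifiers,
# weighted by their (hypothetical) producer's accuracies
areas = [10.3, 9.8, 15.0, 10.1]
weights = [0.94, 0.89, 0.70, 0.94]
result = integrated_area(areas, weights, t=1.5)
```

With t = 1.5 the third (low-accuracy, outlying) estimate is screened out, and the weighted mean of the remaining three is returned; relaxing t keeps it and pulls the estimate upward.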
The integrated estimation of changes shows that the changes are mainly related to the transformation of urban territories to barren lands.

5.5. Geographic Object-Based Image Analysis

For this study, buildings were extracted from very high-resolution GEOEYE-1 data using Geographic Object-Based Image Analysis (GEOBIA). As only three-band RGB images and no elevation information were available, the classification relies on three bands at a spatial resolution of 0.5 m, which is sufficient for detecting rooftop shapes. First, image objects were created using the multiresolution segmentation algorithm, a bottom-up segmentation based on a pairwise region-merging technique. Multiresolution segmentation is an optimization procedure that, for a given number of image objects, minimizes their average heterogeneity and maximizes their homogeneity. The optimal scale parameter was selected according to the shape and size of the buildings using a trial-and-error approach. After object creation, the RGB bands were used to calculate different spectral features, such as brightness and the blue-to-green and blue-to-red ratios, along with the geometric features rectangular fit and compactness. As the post-flood image contains shadows, the blue band was used to separate shadows from other objects. A rule-set approach incorporating spectral, geometric, and contextual features was used for building classification, and the relative-border-to-shadow feature was applied to further refine the building footprints. Building footprints from the post-flood image were compared with pre-flood footprints to estimate the number of buildings totally destroyed by the flash flood along the banks of the Derna River (Figure 13). Comparing the GEOBIA-extracted buildings against the high-resolution imagery showed good agreement with the building boundaries, with an overall accuracy of 88%.
Figure 13 shows significant damage to critical infrastructure, including bridges, roads, and electricity grids, especially since most of the buildings were totally destroyed along the banks of the river. By comparing the buildings extracted using a post-flood image with pre-flood buildings, it was found that more than 600 buildings totally collapsed along the banks of the Derna River, in addition to major damage to buildings away from the banks of the Derna River.
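Counting totally destroyed buildings then reduces to comparing the two footprint sets. A toy sketch that matches pre- and post-flood footprints by centroid distance; the matching tolerance and footprint coordinates are hypothetical, and the actual study compared GEOBIA-extracted polygons:

```python
def centroid(poly):
    # mean of the vertex coordinates of a footprint polygon
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    return sum(xs) / len(xs), sum(ys) / len(ys)

def destroyed(pre, post, tol=5.0):
    """Count pre-flood footprints with no post-flood centroid within tol metres."""
    post_c = [centroid(p) for p in post]
    lost = 0
    for b in pre:
        cx, cy = centroid(b)
        if not any((cx - x) ** 2 + (cy - y) ** 2 <= tol ** 2 for x, y in post_c):
            lost += 1
    return lost

# two pre-flood buildings; only the first survives in the post-flood image
pre  = [[(0, 0), (10, 0), (10, 10), (0, 10)],
        [(100, 0), (110, 0), (110, 10), (100, 10)]]
post = [[(1, 0), (11, 0), (11, 10), (1, 10)]]
```

In practice, polygon overlap (e.g., intersection-over-union) is more robust than centroid distance, but the counting logic is the same.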

5.6. Discussion

As indicated in the Introduction, the initial objective of the study was to explore the capabilities of different algorithms and remote sensing data processing strategies for estimating the flood aftermath in northern Libya (Derna). The changes can be assessed in two different ways: by variations in spectral characteristics or by object detection algorithms. A correct estimation can be obtained only by combining the two, because the study region contains both urban and rural areas, and object detection algorithms will not provide adequate results for rural areas, which lack man-made objects. We therefore suggested a strategy in which the change identification process becomes progressively more complex: we commence with simple image differencing, proceed with spectral indices and post-classification changes, and finish with object-based image analysis.
The first examined approach was image differencing. The differences were calculated for the three spectral bands of the GEOEYE-1 images, and all bands showed similar differences. The normalized differences ranged from −1 to 1; regardless of sign, all values with magnitude greater than 0.2 were treated as significant, while smaller values were considered measurement noise. Under this premise, the mean changes reached 30%. It was unsurprising that the image differencing approach has low accuracy, for the reasons mentioned in Section 4. Another drawback is the impossibility of identifying the types of objects that changed. Thus, we can recommend this approach only for very tentative estimation.
Image differencing proved the existence of changes. To understand the types of objects that underwent them, we accompanied the simple image differences with spectral index differences, selecting four indices that might help identify the object types. This study found that NDVI changes corresponded to 4.6% of the study area. This finding was not unexpected, since NDVI mainly describes vegetation changes and the study region has very sparse vegetation; the detected changes may correspond to the transformation of some grasslands into barren land covered by flood debris. The best estimation was obtained using the SAVI index: insofar as SAVI describes barren lands, its change corresponds to debris spread throughout the study area. The total change was determined to be around 15%, against 30% for image differencing. These conflicting results can be attributed to the nature of the data. Image differencing is based on high-resolution images, but the differences were determined from only three spectral bands, which we consider unreliable. The spatial resolution of the Sentinel-2 images is much lower, but their spectral resolution is much better; thus, the changes found by spectral indices are more reliable, since more spectral bands were included in the calculation. The overlay of the image differencing and SAVI changes showed an approximate coincidence of the changes detected by the two methods.
To dive deeper into object identification and change typing, we applied post-classification change detection. The first explored algorithm was Smile Random Forest, with a classification accuracy of 0.94 before the flood and 0.89 after. The algorithm identified a significant change in the land class of around 10.3 sq. km, equivalent to 11.6%. The second algorithm, CART, ensured a lower accuracy: 0.89 before and 0.84 after. Such an accuracy level is critical, as most researchers recommend values around 0.9–0.95. The changes in class areas are not distinct and vary within +/−4–6 sq. km. The Naïve Bayes algorithm has even worse accuracy; moreover, it is the only algorithm that leaves areas unclassified. The best accuracy was achieved by the support vector machine (SVM). SVM has many hyperparameters; by tuning them, the optimal combination was chosen and the highest accuracy ensured. In our case, the pre-event classification accuracy was 0.94 and the post-event accuracy was 0.93. Contrary to expectations, the SVM did not find significant changes in the land class (3.5 sq. km), and, surprisingly, the changes in the other classes were negligible. These differing classification results pushed us to develop the integrated estimation of changes, which uses classification accuracies as weights. With this integrated estimation, we obtained the maximum changes for the land class (+5.4%), road class (−3.5%), urban class (−2.6%), and vegetation class (+2.4%). The accumulation of flood debris explains the reduction of the road network and urban areas, while the growth of vegetation is related to the irrigation of the flooded areas. The total absolute change is 17.5%, which corresponds well to the roughly 15% determined by the spectral indices; however, we now also know the types of objects involved, with appropriate, reliable accuracy.
At the final step of our strategy, we applied geographic object-based image analysis to explore the changes in urban areas. This algorithm works well for artificial objects with distinct structure and geometry; GEOBIA can also be applied to natural objects, but the results may be contradictory. Once we had determined the overall level of changes and the object classes with the highest variability, we used GEOBIA to count the objects changed after the flood. As anticipated, the main changes happened in the central part of the city, where many buildings were washed away after the dam collapsed. GEOBIA allowed us to determine the number of totally destroyed buildings, which is more than 600. Therefore, we obtained a quantitative estimation of the destruction level in addition to a qualitative assessment.
Returning to the question posed at the beginning of this paper, the suggested strategy can now be successfully applied to assess the aftermath of natural disasters using multi-temporal and multi-source data. As mentioned in the Introduction, flood studies using remote sensing data have become a common analysis tool, and our strategy demonstrates higher flexibility than others. Earlier studies, e.g., [23], focused particularly on water detection, which limits the study to one detectable class and makes aftermath estimation problematic; moreover, such studies are based solely on optical bands, which places additional conditions on object classification. A comparison of our findings with other studies [31,32] confirms that existing works mostly stressed pre-flood analysis using remote sensing and GIS tools [34,35,36,37], whereas our research deals with post-flood analysis. Generally speaking, post-flood analysis belongs to the broader task known as change detection, and in this respect the obtained result has not been described previously. The works [42,44,50] consider solely object-based change detection, while [52,54,55] consider only one type of remote sensing data (visible or multispectral). In light of this discussion, our approach is different: we fused different data (visible and multispectral bands) and various processing strategies (image differencing, object-based image analysis, image classification, and spectral indices) into one workflow, which made our results more reliable. Image differencing can be used for coarse estimation, as its results can differ from the actual destruction level by a factor of two. Spectral indices provide a more reliable analysis; however, choosing the correct index is not a trivial task.
Post-classification change detection is recommended when we want to know which object classes underwent the changes. Finally, if the exact number of changed objects is required, geographic object-based image analysis is recommended. Depending on the requirements for the final accuracy, one may use simple image differencing, spectral indices, or the more sophisticated image processing algorithms.

6. Conclusions

The study presents the results of exploring the Libyan flood and its environmental impact. The suggested change detection strategy allows determining the distribution of changes after the flood and estimating them quantitatively. The largest change estimate was obtained by image differencing (nearly 30%); however, this value can be affected by pixel brightness variations, so its reliability is low. A more detailed assessment was obtained from the spectral index comparisons; the calculations of the indices, their differences, and the corresponding areas were accomplished in Google Earth Engine. Since the main aftermath was the silting of urban areas, the best evaluation was given by the soil-adjusted vegetation index (SAVI), which shows the regions where changes in vegetation or urban areas led to the growth of barren land; using this index, we determined significant changes (15%). At the next stage, classification with subsequent comparison was accomplished in GEE. Thanks to the detailed classification control provided by the confusion matrix, it was possible to evaluate classification accuracy and rule out unacceptable solutions. Moreover, we developed an integrated measure that uses classification accuracy to combine appropriate solutions from the different classification algorithms, and with it we calculated the area changes for each classification layer. We detected the changes for the Derna region, where urban and vegetated lands were replaced by barren lands (the refined estimate is 17.5%). Change detection using the GEOBIA algorithm was applied to estimate the building damage. The primary destruction took place along the Derna River, where more than 600 buildings collapsed. Thus, this paper has developed and tested a workflow for estimating the aftermath of the devastating flood, and the obtained results prove the reliability and efficiency of the analysis.

Author Contributions

Conceptualization, R.S. and A.F.; methodology, R.S. and A.F.; software, R.S. and M.U.; validation, R.S., M.U. and M.M.R.; formal analysis, A.F., M.U. and M.M.R.; investigation, R.S., M.U. and A.F.; resources, R.S.; data curation, M.U. and M.M.R.; writing—original draft, R.S., M.U. and M.M.R.; writing—review and editing, A.F., M.U. and M.M.R.; visualization, R.S. and M.U.; supervision, R.S. and A.F.; project administration, R.S.; funding acquisition, M.U. All authors have read and agreed to the published version of the manuscript.

Funding

The authors would like to acknowledge the support provided by the Research Center for Aviation and Space Exploration through the Deanship of Research (DR) at King Fahd University of Petroleum and Minerals (KFUPM), which funded this work through the project “Remote Sensing and UAV Data Fusion for Territorial Seawaters Monitoring”, No. INAE2302.

Data Availability Statement

The data are available through the links to the Google Earth Engine codes.

Acknowledgments

Author A.F. thanks Abdulhaleem Labban, Department of Meteorology, Faculty of Meteorology, Environment and Arid Land Agriculture, King Abdulaziz University, Jeddah, for their utmost support. The authors would like to thank the anonymous reviewers for their comments/suggestions that have helped us improve the manuscript’s current version.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. de Ruiter, M.C.; Ward, P.J.; Daniell, J.E.; Aerts, J.C.J.H. A Comparison of Flood and Earthquake Vulnerability Assessment Indicators. Nat. Hazards Earth Syst. Sci. 2017, 17, 1231–1251. [Google Scholar] [CrossRef]
  2. Papathoma-Köhle, M.; Gems, B.; Sturm, M.; Fuchs, S. Matrices, Curves and Indicators: A Review of Approaches to Assess Physical Vulnerability to Debris Flows. Earth Sci. Rev. 2017, 171, 272–288. [Google Scholar] [CrossRef]
  3. Altay, G.; Ersoy, O.; Wahab, M.A.; El Afandi, G.; Shokr, M.; El Ghazawi, T.; Mohamed, M.A.; Eleithy, B.; El-Magd, I.A.; Biehl, L.; et al. Deployment of Real-Time Satellite Remote Sensing Infrastructure to Support Disaster Mitigation: A NATO Science for Peace Collaboration Project with Research Universities in Turkey, Egypt and the USA. In Proceedings of the 2009 4th International Conference on Recent Advances in Space Technologies, Istanbul, Turkey, 10–12 June 2009; pp. 18–22. [Google Scholar]
  4. Kussul, N.; Shelestov, A.; Skakun, S. Flood Monitoring from SAR Data. In Use of Satellite and In-Situ Data to Improve Sustainability; Kogan, F., Powell, A., Fedorov, O., Eds.; Springer: Dordrecht, The Netherlands, 2011; pp. 19–29. [Google Scholar]
  5. Sedona, R.; Cavallaro, G.; Jitsev, J.; Strube, A.; Riedel, M.; Benediktsson, J.A. Remote Sensing Big Data Classification with High Performance Distributed Deep Learning. Remote Sens. 2019, 11, 3056. [Google Scholar] [CrossRef]
  6. Büchele, B.; Kreibich, H.; Kron, A.; Thieken, A.; Ihringer, J.; Oberle, P.; Merz, B.; Nestmann, F. Flood-Risk Mapping: Contributions towards an Enhanced Assessment of Extreme Events and Associated Risks. Nat. Hazards Earth Syst. Sci. 2006, 6, 485–503. [Google Scholar] [CrossRef]
  7. Kappes, M.S.; Papathoma-Köhle, M.; Keiler, M. Assessing Physical Vulnerability for Multi-Hazards Using an Indicator-Based Methodology. Appl. Geogr. 2012, 32, 577–590. [Google Scholar] [CrossRef]
  8. Gurung, D.R.; Shrestha, M.; Shrestha, N.; Debnath, B.; Jishi, G.; Bajracharya, R.; Dhonju, H.K.; Pradhan, S. Multi Scale Disaster Risk Reduction Systems Space and Community Based Experiences over HKH Region. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 40, 1301–1307. [Google Scholar] [CrossRef]
  9. Der Sarkissian, R.; Zaninetti, J.-M.; Abdallah, C. The Use of Geospatial Information as Support for Disaster Risk Reduction; Contextualization to Baalbek-Hermel Governorate/Lebanon. Appl. Geogr. 2019, 111, 102075. [Google Scholar] [CrossRef]
  10. Sun, W.; Bocchini, P.; Davison, B.D. Applications of Artificial Intelligence for Disaster Management. Nat. Hazards 2020, 103, 2631–2689. [Google Scholar] [CrossRef]
  11. Jena, R.; Pradhan, B.; Beydoun, G.; Alamri, A.M.; Ardiansyah; Nizamuddin; Sofyan, H. Earthquake Hazard and Risk Assessment Using Machine Learning Approaches at Palu, Indonesia. Sci. Total Environ. 2020, 749, 141582. [Google Scholar] [CrossRef]
  12. Abid, S.K.; Sulaiman, N.; Chan, S.W.; Nazir, U.; Abid, M.; Han, H.; Ariza-Montes, A.; Vega-Muñoz, A. Toward an Integrated Disaster Management Approach: How Artificial Intelligence Can Boost Disaster Management. Sustainability 2021, 13, 12560. [Google Scholar] [CrossRef]
  13. Melati, M.D.; Fleischmann, A.S.; Fan, F.M.; Paiva, R.C.D.; Athayde, G.B. Estimates of Groundwater Depletion under Extreme Drought in the Brazilian Semi-Arid Region Using GRACE Satellite Data: Application for a Small-Scale Aquifer. Hydrogeol. J. 2019, 27, 2789–2802. [Google Scholar] [CrossRef]
  14. Wang, Y.; Tian, F.; Huang, Y.; Wang, J.; Wei, C. Monitoring Coal Fires in Datong Coalfield Using Multi-Source Remote Sensing Data. Trans. Nonferrous Met. Soc. China 2015, 25, 3421–3428. [Google Scholar] [CrossRef]
  15. Eckert, S.; Jelinek, R.; Zeug, G.; Krausmann, E. Remote Sensing-Based Assessment of Tsunami Vulnerability and Risk in Alexandria, Egypt. Appl. Geogr. 2012, 32, 714–723. [Google Scholar] [CrossRef]
  16. Syifa, M.; Kadavi, P.R.; Lee, C.W. An Artificial Intelligence Application for Post-Earthquake Damage Mapping in Palu, Central Sulawesi, Indonesia. Sensors 2019, 19, 542. [Google Scholar] [CrossRef]
  17. Moawad, M.B.; Abdel Aziz, A.O.; Mamtimin, B. Flash Floods in the Sahara: A Case Study for the 28 January 2013 Flood in Qena, Egypt. Geomat. Nat. Hazards Risk 2014, 7, 215–236. [Google Scholar] [CrossRef]
  18. Glas, H.; Rocabado, I.; Huysentruyt, S.; Maroy, E.; Salazar Cortez, D.; Coorevits, K.; De Maeyer, P.; Deruyter, G. Flood Risk Mapping Worldwide: A Flexible Methodology and Toolbox. Water 2019, 11, 2371. [Google Scholar] [CrossRef]
  19. Stephenson, V.; D’Ayala, D. A New Approach to Flood Vulnerability Assessment for Historic Buildings in England. Nat. Hazards Earth Syst. Sci. 2014, 14, 1035–1048. [Google Scholar] [CrossRef]
  20. Erdelj, M.; Natalizio, E.; Chowdhury, K.R.; Akyildiz, I.F. Help from the Sky: Leveraging UAVs for Disaster Management. IEEE Pervasive Comput. 2017, 16, 24–32. [Google Scholar] [CrossRef]
  21. Whitehurst, D.; Friedman, B.; Kochersberger, K.; Sridhar, V.; Weeks, J. Drone-Based Community Assessment, Planning, and Disaster Risk Management for Sustainable Development. Remote Sens. 2021, 13, 1739. [Google Scholar] [CrossRef]
  22. Tsanis, I.K.; Seiradakis, K.D.; Daliakopoulos, I.N.; Grillakis, M.G.; Koutroulis, A.G. Assessment of GeoEye-1 Stereo-Pair-Generated DEM in Flood Mapping of an Ungauged Basin. J. Hydroinform. 2014, 16, 1–18. [Google Scholar] [CrossRef]
  23. Huang, C.; Chen, Y.; Zhang, S.; Wu, J. Detecting, Extracting, and Monitoring Surface Water from Space Using Optical Sensors: A Review. Rev. Geophys. 2018, 56, 333–360. Available online: https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2018RG000598 (accessed on 27 April 2024).
  24. Khan, S.I.; Hong, Y.; Wang, J.; Yilmaz, K.K.; Gourley, J.J.; Adler, R.F.; Brakenridge, G.R.; Policelli, F.; Habib, S.; Irwin, D. Satellite Remote Sensing and Hydrologic Modeling for Flood Inundation Mapping in Lake Victoria Basin: Implications for Hydrologic Prediction in Ungauged Basins. IEEE Trans. Geosci. Remote Sens. 2011, 49, 85–95. [Google Scholar] [CrossRef]
  25. Hussein, S.; Abdelkareem, M.; Hussein, R.; Askalany, M. Using Remote Sensing Data for Predicting Potential Areas to Flash Flood Hazards and Water Resources. Remote Sens. Appl. Soc. Environ. 2019, 16, 100254. [Google Scholar] [CrossRef]
  26. Puttinaovarat, S.; Horkaew, P. Internetworking flood disaster mitigation system based on remote sensing and mobile GIS. Geomat. Nat. Hazards Risk 2020, 11, 1886–1911. [Google Scholar] [CrossRef]
  27. Mashaly, J.; Ghoneim, E. Flash Flood Hazard Using Optical, Radar, and Stereo-Pair Derived DEM: Eastern Desert, Egypt. Remote Sens. 2018, 10, 1204. [Google Scholar] [CrossRef]
  28. Manfredonia, I.; Stallo, C.; Ruggieri, M.; Massari, G.; Barbante, S. An early-warning aerospace system for relevant water bodies monitoring. In Proceedings of the 2015 IEEE Metrology for Aerospace (MetroAeroSpace), Benevento, Italy, 4–5 June 2015; pp. 536–540. [Google Scholar] [CrossRef]
  29. Sharma, S.; Muley, A.; Singh, R.; Nbsp, A.G. UAV for Surveillance and Environmental Monitoring. Indian J. Sci. Technol. 2016, 9, 1–4. [Google Scholar] [CrossRef]
  30. AL-Dosari, K.; Hunaiti, Z.; Balachandran, W. Systematic Review on Civilian Drones in Safety and Security Applications. Drones 2023, 7, 210. [Google Scholar] [CrossRef]
  31. Klemas, V. Remote Sensing of Floods and Flood-Prone Areas: An Overview. J. Coast. Res. 2015, 31, 1005–1013. [Google Scholar] [CrossRef]
  32. Wang, X.; Xie, H. A Review on Applications of Remote Sensing and Geographic Information Systems (GIS) in Water Resources and Flood Risk Management. Water 2018, 10, 608. [Google Scholar] [CrossRef]
  33. Nex, F.; Duarte, D.; Steenbeek, A.; Kerle, N. Towards Real-Time Building Damage Mapping with Low-Cost UAV Solutions. Remote Sens. 2019, 11, 287. [Google Scholar] [CrossRef]
  34. Shah, A.; Kantamaneni, K.; Ravan, S.; Campos, L.C. A Systematic Review Investigating the Use of Earth Observation for the Assistance of Water, Sanitation and Hygiene in Disaster Response and Recovery. Sustainability 2023, 15, 3290. [Google Scholar] [CrossRef]
  35. Curebal, I.; Efe, R.; Ozdemir, H.; Soykan, A.; Sönmez, S. GIS-Based Approach for Flood Analysis: Case Study of Keçidere Flash Flood Event (Turkey). Geocarto Int. 2016, 31, 355–366. [Google Scholar] [CrossRef]
  36. Saidi, S.; Ghattassi, A.; Anselme, B.; Bouri, S. GIS Based Multi-Criteria Analysis for Flood Risk Assessment: Case of Manouba Essijoumi Basin, NE Tunisia. In Advances in Remote Sensing and Geo Informatics Applications; El-Askary, H.M., Lee, S., Heggy, E., Pradhan, B., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 273–279. [Google Scholar]
  37. Mohamed, S.A. Application of Satellite Image Processing and GIS-Spatial Modeling for Mapping Urban Areas Prone to Flash Floods in Qena Governorate, Egypt. J. Afr. Earth Sci. 2019, 158, 103507. [Google Scholar] [CrossRef]
  38. Sharma, S.K.; Misra, S.K.; Singh, J.B. The role of GIS-enabled mobile applications in disaster management: A case analysis of cyclone Gaja in India. Int. J. Inf. Manag. 2020, 51, 102030. [Google Scholar] [CrossRef]
  39. Yang, L.; Cervone, G. Analysis of Remote Sensing Imagery for Disaster Assessment Using Deep Learning: A Case Study of Flooding Event. Soft Comput. 2019, 23, 13393–13408. [Google Scholar] [CrossRef]
  40. Irwansyah, E.; Gunawan, A.A.S. Deep Learning in Damage Assessment with Remote Sensing Data: A Review. In Data Science and Algorithms in Systems; Silhavy, R., Silhavy, P., Prokopova, Z., Eds.; Springer International Publishing: Cham, Switzerland, 2023; pp. 728–739. [Google Scholar]
  41. Lu, D.; Mausel, P.; Brondízio, E.; Moran, E. Change detection techniques. Int. J. Remote Sens. 2004, 25, 2365–2401. [Google Scholar] [CrossRef]
  42. Conchedda, G.; Durieux, L.; Mayaux, P. An Object-Based Method for Mapping and Change Analysis in Mangrove Ecosystems. ISPRS J. Photogramm. Remote Sens. 2008, 63, 578–589. [Google Scholar] [CrossRef]
  43. Fraser, R.H.; Olthof, I.; Pouliot, D. Monitoring Land Cover Change and Ecological Integrity in Canada’s National Parks. Remote Sens. Environ. 2009, 113, 1397–1409. [Google Scholar] [CrossRef]
  44. Wang, Z.; Wei, C.; Liu, X.; Zhu, L.; Yang, Q.; Wang, Q.; Meng, Y. Object-based change detection for vegetation disturbance and recovery using Landsat time series. GISci. Remote Sens. 2022, 59, 1706–1721. [Google Scholar] [CrossRef]
  45. Adams, J.B.; Smith, M.O.; Gillespie, A.R. Simple Models for Complex Natural Surfaces: A Strategy for the Hyperspectral Era of Remote Sensing. Available online: https://ieeexplore.ieee.org/document/567138 (accessed on 27 April 2024).
  46. Mas, J.-F. Monitoring Land-Cover Changes: A Comparison of Change Detection Techniques. Int. J. Remote Sens. 1999, 20, 139–152. [Google Scholar] [CrossRef]
  47. Coppin, P.; Jonckheere, I.; Nackaerts, K.; Muys, B.; Lambin, E. Digital Change Detection Methods in Ecosystem Monitoring: A Review. Int. J. Remote Sens. 2004, 25, 1565–1596. [Google Scholar] [CrossRef]
  48. Ellis, E.C.; Wang, H.; Xiao, H.S.; Peng, K.; Liu, X.P.; Li, S.C.; Ouyang, H.; Cheng, X.; Yang, L.Z. Measuring Long-Term Ecological Changes in Densely Populated Landscapes Using Current and Historical High Resolution Imagery. Remote Sens. Environ. 2006, 100, 457–473. [Google Scholar] [CrossRef]
  49. Kennedy, R.E.; Townsend, P.A.; Gross, J.E.; Cohen, W.B.; Bolstad, P.; Wang, Y.Q.; Adams, P. Remote Sensing Change Detection Tools for Natural Resource Managers: Understanding Concepts and Tradeoffs in the Design of Landscape Monitoring Projects. Remote Sens. Environ. 2009, 113, 1382–1396. [Google Scholar] [CrossRef]
  50. Chen, G.; Zhao, K.; Powers, R. Assessment of the Image Misregistration Effects on Object-Based Change Detection. ISPRS J. Photogramm. Remote Sens. 2014, 87, 19–27. [Google Scholar] [CrossRef]
  51. Ye, S.; Chen, D.; Yu, J. A Targeted Change-Detection Procedure by Combining Change Vector Analysis and Post-Classification Approach. ISPRS J. Photogramm. Remote Sens. 2016, 114, 115–124. [Google Scholar] [CrossRef]
  52. Jin, S.; Yang, L.; Danielson, P.; Homer, C.; Fry, J.; Xian, G. A Comprehensive Change Detection Method for Updating the National Land Cover Database to circa 2011. Remote Sens. Environ. 2013, 132, 159–175. [Google Scholar] [CrossRef]
  53. Hussain, M.; Chen, D.; Cheng, A.; Wei, H.; Stanley, D. Change Detection from Remotely Sensed Images: From Pixel-Based to Object-Based Approaches. ISPRS J. Photogramm. Remote Sens. 2013, 80, 91–106. [Google Scholar] [CrossRef]
  54. Tewkesbury, A.P.; Comber, A.J.; Tate, N.J.; Lamb, A.; Fisher, P.F. A critical synthesis of remotely sensed optical image change detection techniques. Remote Sens. Environ. 2015, 160, 1–14. [Google Scholar] [CrossRef]
  55. Xiao, P.; Zhang, X.; Wang, D.; Yuan, M.; Feng, X.; Kelly, M. Change Detection of Built-up Land: A Framework of Combining Pixel-Based Detection and Object-Based Recognition. ISPRS J. Photogramm. Remote Sens. 2016, 119, 402–414. [Google Scholar] [CrossRef]
  56. Sakurada, K.; Okatani, T. Change Detection from a Street Image Pair Using CNN Features and Superpixel Segmentation. Proc. Brit. Mach. Vis. Conf. 2015, 61.1–61.12. [Google Scholar]
  57. Caye Daudt, R.; Le Saux, B.; Boulch, A. Fully Convolutional Siamese Networks for Change Detection. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 4063–4067. [Google Scholar]
  58. Doshi, J.; Basu, S.; Pang, G. From Satellite Imagery to Disaster Insights. arXiv 2018, arXiv:1812.07033. [Google Scholar]
  59. Peng, D.; Zhang, Y.; Guan, H. End-to-End Change Detection for High Resolution Satellite Images Using Improved UNet++. Remote Sens. 2019, 11, 1382. [Google Scholar] [CrossRef]
  60. Arabi, M.E.A.; Karoui, M.S.; Djerriri, K. Optical Remote Sensing Change Detection Through Deep Siamese Network. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 5041–5044. [Google Scholar]
  61. Zhang, W.; Lu, X. The Spectral-Spatial Joint Learning for Change Detection in Multispectral Imagery. Remote Sens. 2019, 11, 240. [Google Scholar] [CrossRef]
  62. Zhan, T.; Gong, M.; Liu, J.; Zhang, P. Iterative Feature Mapping Network for Detecting Multiple Changes in Multi-Source Remote Sensing Images. ISPRS J. Photogramm. Remote Sens. 2018, 146, 38–51. [Google Scholar] [CrossRef]
  63. Lebedev, M.A.; Vizilter, Y.V.; Vygolov, O.V.; Knyaz, V.A.; Rubis, A.Y. Change Detection in Remote Sensing Images Using Conditional Adversarial Networks. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 42, 565–571. [Google Scholar] [CrossRef]
  64. Niu, X.; Gong, M.; Zhan, T.; Yang, Y. A Conditional Adversarial Network for Change Detection in Heterogeneous Images. IEEE Geosci. Remote Sens. Lett. 2019, 16, 45–49. [Google Scholar] [CrossRef]
  65. Gong, M.; Zhan, T.; Zhang, P.; Miao, Q. Superpixel-Based Difference Representation Learning for Change Detection in Multispectral Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2017, 55, 2658–2673. [Google Scholar] [CrossRef]
  66. Wiratama, W.; Lee, J.; Park, S.-E.; Sim, D. Dual-Dense Convolution Network for Change Detection of High-Resolution Panchromatic Imagery. Appl. Sci. 2018, 8, 1785. [Google Scholar] [CrossRef]
  67. Benedek, C.; Shadaydeh, M.; Kato, Z.; Szirányi, T.; Zerubia, J. Multilayer Markov Random Field Models for Change Detection in Optical Remote Sensing Images. ISPRS J. Photogramm. Remote Sens. 2015, 107, 22–37. [Google Scholar] [CrossRef]
  68. Ma, W.; Xiong, Y.; Wu, Y.; Yang, H.; Zhang, X.; Jiao, L. Change Detection in Remote Sensing Images Based on Image Mapping and a Deep Capsule Network. Remote Sens. 2019, 11, 626. [Google Scholar] [CrossRef]
  69. Zhang, L.; Zhang, L.; Du, B. Deep Learning for Remote Sensing Data: A Technical Tutorial on the State of the Art. IEEE Geosci. Remote Sens. Mag. 2016, 4, 22–40. [Google Scholar] [CrossRef]
  70. Demir, B.; Bovolo, F.; Bruzzone, L. Updating Land-Cover Maps by Classification of Image Time Series: A Novel Change-Detection-Driven Transfer Learning Approach. IEEE Trans. Geosci. Remote Sens. 2012, 51, 300–312. [Google Scholar] [CrossRef]
  71. Bruzzone, L.; Prieto, D.F. Automatic Analysis of the Difference Image for Unsupervised Change Detection. IEEE Trans. Geosci. Remote Sens. 2000, 38, 1171–1182. [Google Scholar] [CrossRef]
  72. Usman, M.; Gul, A.A.; Abbas, S.; Rabbani, U.; Irteza, S.M. Identifying morphological hotspots in large rivers by optimizing image enhancement. Remote Sens. Lett. 2023, 14, 1173–1185. [Google Scholar] [CrossRef]
  73. Celik, T. Unsupervised Change Detection in Satellite Images Using Principal Component Analysis and K-Means Clustering. IEEE Geosci. Remote Sens. Lett. 2009, 6, 772–776. [Google Scholar] [CrossRef]
  74. Chen, X.; Liu, M.; Ma, J.; Liu, X.; Liu, D.; Chen, Y.; Li, Y.; Qadeer, A. Health Risk Assessment of Soil Heavy Metals in Housing Units Built on Brownfields in a City in China. J. Soils Sediments 2017, 17, 1741–1750. [Google Scholar] [CrossRef]
  75. Wu, C.; Du, B.; Cui, X.; Zhang, L. A Post-Classification Change Detection Method Based on Iterative Slow Feature Analysis and Bayesian Soft Fusion. Remote Sens. Environ. 2017, 199, 241–255. [Google Scholar] [CrossRef]
  76. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  77. Band, S.S.; Janizadeh, S.; Chandra Pal, S.; Saha, A.; Chakrabortty, R.; Melesse, A.M.; Mosavi, A. Flash Flood Susceptibility Modeling Using New Approaches of Hybrid and Ensemble Tree-Based Machine Learning Algorithms. Remote Sens. 2020, 12, 3568. [Google Scholar] [CrossRef]
  78. Padao, F.R.F.; Maravillas, E.A. Using Naïve Bayesian Method for Plant Leaf Classification Based on Shape and Texture Features. In Proceedings of the 2015 International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment and Management (HNICEM), Cebu, Philippines, 9–12 December 2015. [Google Scholar]
  79. Vapnik, V. The Nature of Statistical Learning Theory; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1999. [Google Scholar]
  80. Rouse, J.W.; Haas, R.H.; Scheel, J.A.; Deering, D.W. Monitoring Vegetation Systems in the Great Plains with ERTS. In Proceedings of the 3rd Earth Resource Technology Satellite (ERTS) Symposium, Washington, DC, USA, 25–27 August 1975; Volume 19741, pp. 48–62. Available online: https://ntrs.nasa.gov/citations/19740022614 (accessed on 1 December 2024).
  81. Huete, A.R. A soil-adjusted vegetation index (SAVI). Remote Sens. Environ. 1988, 25, 295–309. [Google Scholar] [CrossRef]
  82. Ochtyra, A.; Marcinkowska-Ochtyra, A.; Raczko, E. Threshold- and trend-based vegetation change monitoring algorithm based on the inter-annual multi-temporal normalized difference moisture index series: A case study of the Tatra Mountains. Remote Sens. Environ. 2020, 249, 112026. [Google Scholar] [CrossRef]
  83. UNDP Annual Report 2021. Available online: https://www.undp.org/publications/undp-annual-report-2021 (accessed on 28 April 2024).
  84. WMO E-Library. Available online: https://library.wmo.int (accessed on 27 April 2024).
  85. Weng, Q. Remote Sensing and GIS Integration: Theories, Methods, and Applications; McGraw Hill: New York City, NY, USA, 2009; ISBN 978-0-07-160653-0. [Google Scholar]
  86. The Role of Spatial and Spectral Resolution on the Effectiveness of Satellite-Based Vegetation Indices. Available online: https://www.spiedigitallibrary.org/conference-proceedings-of-spie/9998/1/The-role-of-spatial-and-spectral-resolution-on-the-effectiveness/10.1117/12.2241316.short (accessed on 27 April 2024).
  87. Nicolau, A.P.; Dyson, K.; Saah, D.; Clinton, N. Accuracy Assessment: Quantifying Classification Quality. In Cloud-Based Remote Sensing with Google Earth Engine: Fundamentals and Applications; Springer International Publishing: Cham, Switzerland, 2024; pp. 135–145. Available online: https://link.springer.com/chapter/10.1007/978-3-031-26588-4_7 (accessed on 30 November 2024).
  88. Usman, M.; Ejaz, M.; Nichol, J.E.; Farid, M.S.; Abbas, S.; Khan, M.H. A Comparison of Machine Learning Models for Mapping Tree Species Using WorldView-2 Imagery in the Agroforestry Landscape of West Africa. ISPRS Int. J. Geo. Inf. 2023, 12, 142. [Google Scholar] [CrossRef]
  89. Blaschke, T.; Strobl, J. What’s wrong with pixels? Some recent developments interfacing remote sensing and GIS. GIS Z. Für Geoinformationssyst. 2001, 14, 12–17. [Google Scholar]
  90. De Roeck, E.; Van Coillie, F.; De Wulf, R.; Soenen, K.; Charlier, J.; Vercruysse, J.; Hantson, W.; Ducheyne, E.; Hendrickx, G. Fine-scale mapping of vector habitats using very high resolution satellite imagery: A liver fluke case-study. Geospat. Health 2014, 8, S671–S683. [Google Scholar] [CrossRef] [PubMed]
  91. Laliberte, A.S.; Rango, A.; Havstad, K.M.; Paris, J.F.; Beck, R.F.; McNeely, R.; Gonzalez, A.L. Object-oriented image analysis for mapping shrub encroachment from 1937 to 2003 in southern New Mexico. Remote Sens. Environ. 2004, 93, 198–210. [Google Scholar] [CrossRef]
  92. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16. [Google Scholar] [CrossRef]
  93. Thomas, N.; Hendrix, C.; Congalton, R.G. A Comparison of Urban Mapping Methods Using High-Resolution Digital Imagery. Photogramm. Eng. Remote Sens. 2003, 69, 963–972. [Google Scholar] [CrossRef]
Figure 1. Map of Libya with the location of Derna (photo credit: BBC News, after the flooding).
Figure 2. Pre-flood (1 July 2023) and post-flood (13 September 2023) images showing the damage caused by the collapse of the Derna Dam.
Figure 3. Change detection analysis flowchart.
Figure 4. Image differencing for Derna region using (a) band 1, (b) band 2, and (c) band 3.
Figure 5. Area changes: (a) band 1, (b) band 2, and (c) band 3.
Figure 6. Spectral indices and their changes for the Derna area using NDVI (a,b), SAVI (c,d), TNDVI (e,f), and NDMI (g,h).
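The four indices mapped in Figure 6 follow standard formulas. A sketch, assuming Sentinel-2 surface-reflectance bands B4 (red), B8 (NIR), and B11 (SWIR), and the common soil-adjustment factor L = 0.5 for SAVI; the input values below are synthetic:

```python
import numpy as np

def indices(nir, red, swir, L=0.5):
    """NDVI, SAVI, TNDVI, and NDMI from reflectance bands."""
    nir, red, swir = (np.asarray(b, float) for b in (nir, red, swir))
    ndvi = (nir - red) / (nir + red)
    savi = (nir - red) / (nir + red + L) * (1 + L)  # soil-adjusted (Huete, 1988)
    tndvi = np.sqrt(ndvi + 0.5)                     # transformed NDVI
    ndmi = (nir - swir) / (nir + swir)              # moisture index
    return ndvi, savi, tndvi, ndmi

# Synthetic vegetated pixel: high NIR, low red.
ndvi, savi, tndvi, ndmi = indices(nir=[0.6], red=[0.2], swir=[0.3])
print(float(ndvi[0]))  # 0.5
```

The change maps in Figure 6 (panels b, d, f, h) are then differences of these index rasters between the pre- and post-flood dates.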
Figure 7. Derna Region Random Forest Classification results for images dated (a) 18 August 2023 and (b) 22 September 2023.
Figure 8. Derna Region CART Classification results for images dated (a) 18 August 2023 and (b) 22 September 2023.
Figure 9. Derna Region Naïve Bayes Classification for 18 August 2023 (a) and 22 September 2023 (b).
Figure 10. SVM hyperparameters and classification approaches.
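The paper runs SVM classification in Google Earth Engine (`ee.Classifier.libsvm`), whose kernel and cost hyperparameters mirror the LIBSVM library. The kernel comparison of Figures 11 and 12 can be sketched with scikit-learn, which wraps the same LIBSVM implementation; the band values, class labels, and `C = 10` below are synthetic assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two well-separated toy "classes" in 4-band feature space
# (e.g., water vs. urban reflectance samples).
X = np.vstack([rng.normal(0.2, 0.05, (50, 4)),
               rng.normal(0.6, 0.05, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

# Linear and polynomial kernels, as compared in Figures 11 and 12.
linear = SVC(kernel="linear", C=10).fit(X, y)
poly = SVC(kernel="poly", degree=3, gamma="scale", C=10).fit(X, y)
print(linear.score(X, y), poly.score(X, y))
```

On real imagery, each pixel's band vector takes the place of a row of `X`, and the fitted model labels every pixel of the scene.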
Figure 11. Derna Region SVM Classification for 18 August 2023 (a) and 22 September 2023 (b).
Figure 12. Derna Region SVM Classification, the polynomial kernel for 18 August 2023 (a) and 22 September 2023 (b).
Figure 13. Building footprints extracted using GEOBIA and buildings damaged by the flash flood.
Table 1. Specification of datasets.

| Region | Satellite, Resolution (m) | Date Pre | Date Post |
|---|---|---|---|
| Derna | GEOEYE-1, 0.30 | 01/07/2023 | 13/09/2023 |
| Derna | Sentinel-2, 20 | 18/08/2023 | 22/09/2023 |
Table 2. Area changes for the different spectral indices.

| Index | Change, sq. km (%) |
|---|---|
| NDVI | 4.1 (4.6) |
| SAVI | 11.5 (12.9) |
| TNDVI | 1.0 (1.1) |
| NDMI | 2.2 (2.5) |
| Total area | 88.9 |
Table 3. Classification accuracy for Smile Random Forest.

| Class | UA (Before) | PA (Before) | UA (After) | PA (After) |
|---|---|---|---|---|
| Vegetation | 1 | 1 | 1 | 1 |
| Water | 1 | 1 | 1 | 1 |
| Urban | 0.7143 | 1 | 0.9167 | 0.5500 |
| Land | 0.9524 | 1 | 0.6786 | 0.9500 |
| Sand | 1 | 0.9000 | 0.9500 | 0.9500 |
| Roads | 1 | 0.6500 | – | – |
| Shallow | 1 | 1 | – | – |
| Accuracy | 0.9379 | | 0.8900 | |
| Kappa | 0.9275 | | 0.8625 | |
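The user's accuracy (UA), producer's accuracy (PA), overall accuracy, and kappa values reported in Tables 3–11 all derive from a confusion matrix of validation samples. A minimal sketch with a synthetic two-class matrix (the actual assessment used the validation samples described in the paper):

```python
import numpy as np

def accuracy_metrics(cm):
    """UA, PA, overall accuracy, and Cohen's kappa from a confusion
    matrix (rows = reference classes, columns = predicted classes)."""
    cm = np.asarray(cm, float)
    n = cm.sum()
    ua = np.diag(cm) / cm.sum(axis=0)   # user's accuracy, per predicted class
    pa = np.diag(cm) / cm.sum(axis=1)   # producer's accuracy, per reference class
    oa = np.diag(cm).sum() / n          # overall accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return ua, pa, oa, kappa

# Toy 2-class example: 18/20 water and 17/20 urban samples correct.
cm = [[18, 2],
      [3, 17]]
ua, pa, oa, kappa = accuracy_metrics(cm)
print(round(oa, 4), round(kappa, 4))  # 0.875 0.75
```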
Table 4. Area change estimation by layers for Smile Random Forest.

| Class | Area Before, km² | Area After, km² | Change, km² |
|---|---|---|---|
| Vegetation | 3.6 | 5.1 | 1.5 |
| Water | 32.2 | 32.6 | 0.4 |
| Urban | 14.2 | 5.4 | −8.8 |
| Land | 33.2 | 43.5 | 10.3 |
| Sand | 2.8 | 2.3 | −0.5 |
| Roads | 2.4 | – | – |
| Shallow | 0.6 | – | – |
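The per-class area figures in Tables 4, 6, 8, 10, and 12 amount to counting pixels per class in the two classified rasters and converting to km². A sketch assuming equal-area pixels (in GEE this would instead use `ee.Image.pixelArea()`); the maps, class codes, and pixel size below are illustrative:

```python
import numpy as np

def area_change(before, after, pixel_area_m2, classes):
    """Per-class area (km²) before and after, plus the change, from two
    classified rasters, assuming a constant pixel area in m²
    (e.g., 400 for 20 m Sentinel-2 pixels)."""
    out = {}
    for code, name in classes.items():
        a0 = (before == code).sum() * pixel_area_m2 / 1e6
        a1 = (after == code).sum() * pixel_area_m2 / 1e6
        out[name] = (a0, a1, a1 - a0)
    return out

before = np.array([[1, 1], [2, 2]])  # toy 2x2 maps: 1 = urban, 2 = land
after = np.array([[2, 1], [2, 2]])   # one urban pixel became land
change = area_change(before, after, 400, {1: "urban", 2: "land"})
print(change["urban"])
```

With real 20 m Sentinel-2 classifications, the same loop over class codes reproduces the "Before", "After", and "Change" columns of the area tables.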
Table 5. Classification accuracy for CART.

| Class | UA (Before) | PA (Before) | UA (After) | PA (After) |
|---|---|---|---|---|
| Vegetation | 0.9524 | 1 | 1 | 0.9000 |
| Water | 1 | 1 | 0.9524 | 1 |
| Urban | 0.6538 | 0.8500 | 0.9231 | 0.6000 |
| Land | 0.9048 | 0.9500 | 0.6667 | 0.9000 |
| Sand | 1 | 0.9500 | 0.9048 | 0.9500 |
| Roads | 0.8462 | 0.5500 | – | – |
| Shallow | 1 | 1 | – | – |
| Accuracy | 0.9034 | | 0.8700 | |
| Kappa | 0.8872 | | 0.8375 | |
Table 6. Area change estimation by layers for CART.

| Class | Area Before, km² | Area After, km² | Change, km² |
|---|---|---|---|
| Vegetation | 2.9 | 6.7 | 3.8 |
| Water | 32.1 | 31.4 | −0.7 |
| Urban | 11.8 | 8.0 | −3.8 |
| Land | 32.3 | 39.0 | 6.7 |
| Sand | 1.6 | 3.8 | 2.2 |
| Roads | 2.4 | – | – |
| Shallow | 0.8 | – | – |
Table 7. Classification accuracy for Naïve Bayes.

| Class | UA (Before) | PA (Before) | UA (After) | PA (After) |
|---|---|---|---|---|
| Unclassified | 0 | 0 | 0 | 0 |
| Vegetation | 1 | 0.9500 | 1 | 0.9500 |
| Water | 0.8000 | 1 | 1 | 1 |
| Urban | 0.5357 | 0.7500 | 0.6923 | 0.4500 |
| Land | 0.9474 | 0.9000 | 0.8000 | 1 |
| Sand | 0.8667 | 0.6500 | 0.6956 | 0.8000 |
| Roads | 0.6667 | 0.6000 | – | – |
| Shallow | 1 | 0.8000 | – | – |
| Accuracy | 0.8069 | | 0.8400 | |
| Kappa | 0.7750 | | 0.8000 | |
Table 8. Area change estimation by layers for Naïve Bayes.

| Class | Area Before, km² | Area After, km² | Change, km² |
|---|---|---|---|
| Unclassified | 0.2 | 0.2 | 0 |
| Vegetation | 0.9 | 3.4 | 2.3 |
| Water | 32.2 | 32.6 | 0.4 |
| Urban | 8.8 | 5.2 | −3.6 |
| Land | 33.2 | 35.8 | 2.6 |
| Sand | 10.4 | 11.6 | 1.2 |
| Roads | 2.7 | – | – |
| Shallow | 0.5 | – | – |
Table 9. Classification accuracy for SVM, linear kernel.

| Class | UA (Before) | PA (Before) | UA (After) | PA (After) |
|---|---|---|---|---|
| Vegetation | 1 | 1 | 1 | 1 |
| Water | 1 | 1 | 1 | 1 |
| Urban | 0.7308 | 0.9500 | 0.9333 | 0.7000 |
| Land | 1 | 1 | 0.7600 | 0.9500 |
| Sand | 1 | 0.8500 | 0.9500 | 0.9500 |
| Roads | 0.9412 | 0.8000 | – | – |
| Shallow | 1 | 1 | – | – |
| Accuracy | 0.9448 | | 0.9200 | |
| Kappa | 0.9356 | | 0.9000 | |
Table 10. Area change estimation by layers, SVM algorithm with linear kernel.

| Class | Area Before, km² | Area After, km² | Change, km² |
|---|---|---|---|
| Vegetation | 4.3 | 5.8 | 1.5 |
| Water | 32.1 | 32.6 | 0.5 |
| Urban | 8.4 | 6.6 | −1.8 |
| Land | 37.8 | 41.3 | 3.5 |
| Sand | 1.4 | 2.5 | 1.1 |
| Roads | 4.4 | – | – |
| Shallow | 0.6 | – | – |
Table 11. Classification accuracy for SVM, polynomial kernel.

| Class | UA (Before) | PA (Before) | UA (After) | PA (After) |
|---|---|---|---|---|
| Vegetation | 1 | 1 | 1 | 0.9500 |
| Water | 1 | 1 | 1 | 1 |
| Urban | 0.7200 | 0.9000 | 0.9412 | 0.8000 |
| Land | 1 | 1 | 0.7917 | 0.9500 |
| Sand | 1 | 0.8500 | 0.9500 | 0.9500 |
| Roads | 0.8889 | 0.8000 | – | – |
| Shallow | 1 | 1 | – | – |
| Accuracy | 0.9448 | | 0.9300 | |
| Kappa | 0.9275 | | 0.9125 | |
Table 12. Area change estimation by layers, SVM algorithm with polynomial kernel.

| Class | Area Before, km² | Area After, km² | Change, km² |
|---|---|---|---|
| Vegetation | 4.2 | 5.1 | 1.5 |
| Water | 32.1 | 32.6 | 0.5 |
| Urban | 8.9 | 7.1 | −1.8 |
| Land | 37.7 | 41.4 | 3.5 |
| Sand | 1.4 | 2.6 | 1.1 |
| Roads | 4.0 | – | – |
| Shallow | 0.6 | – | – |
Table 13. Integrated values of changes.

| Layer | Change, sq. km (%) |
|---|---|
| Vegetation | 2.1 (2.4) |
| Water | 1.9 (2.1) |
| Urban | −2.3 (−2.6) |
| Land | 4.8 (5.4) |
| Sand | 0.8 (0.9) |
| Roads | −3.1 (−3.5) |
| Shallow water | −0.6 (−0.7) |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

MDPI and ACS Style

Shults, R.; Farahat, A.; Usman, M.; Rahman, M.M. Multi-Temporal Remote Sensing Satellite Data Analysis for the 2023 Devastating Flood in Derna, Northern Libya. Remote Sens. 2025, 17, 616. https://doi.org/10.3390/rs17040616


