An Experimental Research on the Use of Recurrent Neural Networks in Landslide Susceptibility Mapping

Abstract: Natural hazards have a great number of influencing factors. Machine-learning approaches have been employed to understand the individual and joint relations of these factors. However, it is a challenging process for a machine learning algorithm to learn the relations of a large parameter space. In this circumstance, the success of the model is highly dependent on the applied parameter reduction procedure. As a state-of-the-art neural approach, representation learning assumes full responsibility for learning, from feature extraction to prediction. In this study, a representation learning technique, the recurrent neural network (RNN), was applied to a natural hazard problem. To that end, the study aimed to assess the landslide problem with two objectives: landslide susceptibility and inventory. Regarding the first objective, an empirical study was performed to explore the most convenient parameter set. In the landslide inventory studies, the capability of the implemented RNN to predict subsequent landslides based on the events before a certain time was investigated using the parameter set resulting from the first objective. To evaluate the behavior of the implemented neural models, receiver operating characteristic analysis was performed. Precision, recall, f-measure, and accuracy values were additionally measured by changing the classification threshold. Here, it was proposed that the recall metric be utilized for the evaluation of landslide mapping. Results showed that the implemented RNN achieves a high estimation capability for landslide susceptibility. By increasing the network complexity, the model started to predict the exact label of the corresponding landslide initiation point instead of estimating the susceptibility level, and the predictive capacity of the resultant maps significantly increased in the test evaluations of the model.


Introduction
Natural hazards such as earthquakes, landslides, tsunamis, and volcanic activities that all have serious effects on human beings have a great number of influencing factors. The individual and joint effects of these factors are not always fully understood since each factor introduces a potentially large degree of uncertainty into any quantitative analysis [1]. Additionally, the data acquired from observations are usually sparse and lack accuracy and completeness, which is another kind of uncertainty [1]. To reduce these uncertainties, machine learning (ML) algorithms have been implemented, particularly for the last decade (Table 1).
Recently, ML techniques have become popular in spatial prediction of natural hazards studies such as wildfire [2], sinkhole [3], groundwater and flood [4][5][6], drought [7], gully erosion [8,9], earthquake [10], land/ground subsidence [11], and landslide studies [12][13][14][15][16][17][18][19][20]. ML is a subdivision of artificial intelligence (AI) that uses computer techniques to analyze and forecast information by learning from training data. ML algorithms that have been used for landslide prediction include support vector machine [21,22], artificial neural network [23,24], decision trees [24], etc. Ensemble models have been used in landslide susceptibility mapping due to their novelty and their ability to comprehensively assess landslide-related parameters for discrete classes of independent factors [25,26]. Additionally, different performance metrics were used to evaluate the prediction capacities of ML models, and depending on the natural hazard problem as well as the algorithm, different results were acquired (as shown in Table 1).
The classification performance of an ML algorithm is affected by the complexity of the corresponding problem. Complex problems have a high dimensional parameter space where ML algorithms start to suffer from a serious problem called the curse of dimensionality [27,28]. In these circumstances: (i) Possible patterns in data increase; (ii) it becomes difficult to identify the relation between model parameters and the output during training iterations; and (iii) the cost of the training process increases. Additionally, this problem causes over-fitting, which misleads the prediction performance during model evaluation. The effects and limitations of the curse of dimensionality problem on conventional ML algorithms seem unavoidable in such high dimensional parameter spaces [28]. Therefore, these algorithms require a successful feature extractor to reduce the dimensionality and increase the quality of the parameter space. The accuracy of classification depends on the success of the feature extractor used as well as on the accuracy of the ML algorithm in the background. At this point, deep learning becomes a powerful alternative, since it learns the important features during training and still has a high approximation capability against complex problems, particularly those involving image or text data.
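As a minimal, purely illustrative sketch of this growth (the equal-bin grid discretization and the bin count are assumptions for demonstration, not part of the study), the number of regions a learner must characterize explodes with the number of features:

```python
def grid_cells(n_features, bins_per_feature=10):
    """Number of cells in a feature space discretized into equal bins.

    The count grows exponentially with the number of features, which is
    the essence of the curse of dimensionality.
    """
    return bins_per_feature ** n_features

# With 10 bins per feature, moving from the six topographic parameters
# to the full 21-variable space multiplies the cell count by 10**15.
cells_6 = grid_cells(6)
cells_21 = grid_cells(21)
```

Even with generous training data, a conventional learner cannot populate more than a vanishing fraction of the larger space, which is why a feature extractor (or a model that learns its own features) becomes necessary.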
As mentioned before, natural hazard assessment has been investigated by conventional ML approaches and promising results have been achieved [29]. Yet the success of these models has not attained enough maturity. Some of the reasons for this issue are: The model cannot handle a large parameter space [18]; the landmarks in the terrain affect each other as a nature of hazard assessment; and the model may not consider this effect properly [30,31]. In this study, one of the natural hazards, the landslide problem, was evaluated with a deep learning approach (recurrent neural networks (RNNs)) in an experimental manner, using a large parameter space containing several topographic, hydro-topographic, hydrologic, anthropogenic, vegetation, and lithology factors.
Utilizing aerial or satellite images, deep learning has entered the field of geoscience with regard to classification [32][33][34], analysis [35,36], and damage prediction [37][38][39]. Comprehensive deep learning solutions also exist for natural hazards [37][38][39]. This study differs from existing studies in the literature in two aspects: (i) previous studies based on deep learning rely on images, and (ii) they aim to predict disaster areas after the occurrence of natural hazards. Here, the utilized data contain several characteristic features of landslide initiation points obtained by field measurements, and the goal is to detect the hazardous areas from these features without assuming that the majority of the terrain is hazardous (or unstable), identifying only the truly susceptible areas as precisely as possible. Following this approach, the possibility of using deep learning on a data type with a complex parameter space (other than images) was investigated in this study, and a methodological deep learning solution was proposed for problems with complex parameter sets in the field of geoscience.
In this study, the landmarks of the terrain were determined by sequences. By this approach, the information of previously processed landslide initiation points were utilized to predict the probability of landslide in the subsequent initiations. In these circumstances, recurrent neural network (RNN) was applied, since this kind of deep neural network architecture is the most proper solution for sequence modeling [40,41].
The landslide problem was taken into account in two aspects. First, it was treated as a landslide susceptibility mapping problem. Here, an empirical approach was applied to investigate the parameter sensitivity. This investigation aims to explore the effect of individual and joint parameters on the model accuracy and observe the behavior of a deep learning approach against the parameter set becoming more complex. The second aspect of the landslide problem was handled as landslide inventory mapping. In the literature, this task covers documenting the distribution of landslides, and investigating the types and recurrence of slope failures to determine landslide susceptibility, hazard, and risk [42].

General Characteristics of the Study Area
The Buyukkoy catchment area, with an area of 87.6 km² in the Cayeli district of Rize, located in the Eastern Black Sea Region of Turkey, in which shallow landslides frequently occur in residual soils, was selected as the experimental test site of this study (Figure 1). The Eastern Black Sea Region is the rainiest region in Turkey; the annual mean precipitation over the period from 1971 to 2000 was about 2189 mm (DMI, 2008). Because of the extreme climate and the geological and geomorphological properties, landslides and flood events repeatedly occur in the region, which experiences frequent fatal landslides: a total of 252 deaths and 2585 structural demolitions have occurred there since 1970 [59].
In the catchment area, different lithological units from the Cretaceous to the Quaternary crop out [60,61]. The landslides typically take place in the residual soils of the Upper Cretaceous and Lower-Middle Eocene-aged volcanic rocks and the Palaeocene-aged granite intrusions. The experimental site lies in a mountainous region; the topographic elevations vary between 15 and 1470 m, and the mean slope gradient is 0.50 rad (±0.18 rad) (28°).

Data
The source data of this experimental work were published by Nefeslioglu et al. [62]. There are 251 shallow landslide initiation points whose characteristics were reported for the study area (Figure 2) [62]. The mean volume of the displaced material of these failures was determined to be below 2000 m³. Additionally, depending on the magnitude of the events, runout distances varied in the range from 5 to 500 m throughout the catchment, with a mean value of 77 m. The instabilities first start with a circular failure, not deeper than ~5 m, and then continue as a flow at the toe of the slides. Therefore, the dominant failure mode can be defined as shallow landsliding. As mentioned, these rapid to very rapid shallow failures occurred in residual soils decomposed from the magmatic rocks that crop out in the catchment. Since the characteristics of soil formed from different lithologies differ, the lithology map is able to represent the spatial change of soil properties.
In this study, six topographic parameters, three hydro-topographic parameters, two hydrologic parameters, three anthropogenic parameters, vegetation cover, and six lithology variables considered to control the occurrence of shallow landslides were evaluated (Tables 2 and 3), because these parameters were determined as conditioning parameters by Nefeslioglu et al. in [62]. The spatial resolution of the grid data implemented in this study is 25 × 25 m². The dataset contains 875,816 data lines, including the 21 independent variables described here and shallow landslide initiation information expressed as the dependent variable.

Table 3. Shallow landslide distribution for the discrete parameters [62].

Discrete Parameters | # of Grid Cells | # of Grid Cells with Shallow Landslides

Methodological Background
A fundamental assumption of conventional neural networks is that inputs are independent. While conventional neural networks have achieved relatively successful predictions by adhering to this assumption and updating the weights of independently handled inputs, this approach may not be sufficient for many problems in nature. In fact, previously learned information is crucial for producing a successful prediction model. This is where recurrent neural networks (RNNs) are needed.
RNNs are neural sequence models that achieve remarkable prediction performance on challenging tasks based on sequential data, i.e., data in which each line belongs to a sequence of ordered events. The theoretical background of the method can be found in Goodfellow's comprehensive book [63]. Common uses of RNNs include language modeling, document classification, machine translation, speech recognition, image captioning, and time series analysis [41]. Owing to their sequential characteristics as well as their complex feature spaces, the data used in these problems are highly compatible with RNNs. However, RNN applications should not be limited to these data types, because there are other problems in nature that can also be evaluated as sets of sequences.
The landslide, one such problem, is simply defined as the downslope movement of rock, soil, or debris material under the influence of gravity [64]. This natural phenomenon, which develops on natural slopes, is a complex problem. Landslides, which occur as products of local geological, hydrological, and topographic conditions affected by vegetation, land use, and human activities, are controlled by the frequency of precipitation and seismic events [65]. To reduce the damage caused by this natural phenomenon, it is necessary to map the existing landslides and identify the landslide areas likely to occur in the future [66]. The modeling of shallow landslides rapidly developing in residual soils, where the area related to the displaced material is narrow and the volume of the wasting material is low, is a very challenging problem, because in such complex geological environments, ground material properties and groundwater conditions contain high uncertainties. This situation makes it particularly difficult to predict the failures with conventional techniques. Additionally, these shallow landslides that occur within the residual layers are quickly erased from the terrain. Hence, it is not always possible to completely prepare the event inventories of shallow landslides that occur after a rainy period. For this reason, it is important to evaluate high-capacity prediction algorithms in the assessment of this type of natural hazard problem. In this study, the landslide data were evaluated as a set of sequences, each row of which corresponds to a grid cell defining a shallow landslide initiation on the terrain. Here, the fundamental question in handling the data as a set of sequences is: "Can previously processed information from former landslide initiations be used to correctly predict the likelihood of occurrence of a landslide at the current location?" RNN is a suitable method to address this question.
Here, what is expected from an RNN is utilizing the previous predictions of former locations as an additional property to the features of the current landslide initiation to precisely classify it.
During the implementation detailed in Section 5, the RNN architecture contains one input layer, one hidden layer with a changing number of RNN cells, and one output cell used to finalize class label identification based on probabilities. Each cell in the RNN is a Long Short-Term Memory (LSTM) cell based on [41]. Note that LSTM is a special kind of RNN capable of avoiding the long-term dependency problem, introduced in [67]. In an LSTM cell, the model decides what to remember and what to forget by means of additional neural networks cooperating with each other.
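To make the gating mechanism concrete, the following is a minimal scalar sketch of one LSTM forward step; the weight values and input sequence are illustrative assumptions for demonstration, not the parameters of the implemented network:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell_step(x, h_prev, c_prev, w):
    """One forward step of a scalar LSTM cell.

    w maps each gate name to an (input-weight, recurrent-weight, bias)
    triple: 'f' forget gate, 'i' input gate, 'g' candidate, 'o' output gate.
    """
    gate = lambda k, act: act(w[k][0] * x + w[k][1] * h_prev + w[k][2])
    f = gate('f', sigmoid)      # what to forget from the old cell state
    i = gate('i', sigmoid)      # how much new information to admit
    g = gate('g', math.tanh)    # candidate new information
    o = gate('o', sigmoid)      # how much of the cell state to expose
    c = f * c_prev + i * g      # updated cell state (the "memory")
    h = o * math.tanh(c)        # hidden state passed to the next step
    return h, c

# Process a short sequence of grid-cell feature values one step at a
# time, carrying the hidden and cell states forward (toy weights).
w = {k: (0.5, 0.5, 0.0) for k in 'figo'}
h, c = 0.0, 0.0
for x in [0.2, 0.7, 0.1]:
    h, c = lstm_cell_step(x, h, c, w)
```

The carried states `h` and `c` are what allow information from previously processed landslide initiations to influence the prediction at the current location.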

Landslide Mapping
In this study, the natural hazard problem was handled by two approaches: (i) landslide susceptibility mapping, (ii) landslide inventory mapping. The term 'landslide inventory mapping' is used to define the spatial locations of the recent shallow landslides. Briefly, in landslide susceptibility mapping, parameters' sensitivity and the effect on susceptibility maps were investigated to explore the most influential parameter set for this problem. Considering the selected parameter set, the landslide inventory mapping problem was then investigated by feeding the deep learning algorithm with the limited information of previous shallow landslide initiations.

Objective
(i) To observe the effect of changing parameter complexity against the model accuracy and stability.
(ii) To explore the most convenient parameter set, which maximizes the model performance.
Note that determining this parameter set is valuable not only for the corresponding deep learning method, but also for the accuracy of any ML algorithm that strongly depends on a feature extractor.

Stratified Sampling
Regarding the points with no landslide record, the entire terrain was divided into 600 strata, and the parameter values of one randomly chosen point per stratum were inserted into the training set. Two hundred randomly chosen points with a landslide record were inserted into the set as well. Therefore, the training set contains 200 positive (points with a landslide record) and 600 negative (points without a landslide record) cases. For testing, on the other hand, the entire terrain with 875,816 points (with 251 reported landslide initiations) was considered in order to obtain the susceptibility maps.
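The sampling scheme above can be sketched as follows. This is a minimal illustration on synthetic data; the `build_training_set` helper and the contiguous-stratum layout are assumptions for demonstration, not the study's actual spatial stratification:

```python
import random

def build_training_set(grid, n_strata=600, n_positive=200, seed=42):
    """grid: list of (features, label) rows covering the terrain;
    label is 1 for a landslide initiation point, 0 otherwise."""
    rng = random.Random(seed)
    negatives = [row for row in grid if row[1] == 0]
    positives = [row for row in grid if row[1] == 1]
    # One randomly chosen no-landslide point from each of 600 strata
    # (here the strata are simply contiguous slices of the negatives).
    stratum = max(1, len(negatives) // n_strata)
    sampled_neg = [rng.choice(negatives[i:i + stratum])
                   for i in range(0, stratum * n_strata, stratum)]
    # 200 randomly chosen landslide points.
    sampled_pos = rng.sample(positives, min(n_positive, len(positives)))
    return sampled_neg + sampled_pos

# Tiny synthetic terrain: 6000 stable cells and 251 initiation points.
grid = [((i,), 0) for i in range(6000)] + [((i,), 1) for i in range(251)]
train = build_training_set(grid, n_positive=200)
```

The resulting set has the 600 + 200 = 800 lines described above, with the class imbalance of the terrain deliberately reduced for training.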

Implementation
To observe the parameter effect on susceptibility, an incremental approach was employed by considering the six topographic parameters first and then adding the other parameters individually. Table 4 is a representation of this incremental approach, where the nth parameter set is abbreviated as Sn for simplicity. Here, Sn+1 is the concatenation of Sn and the corresponding additional parameter. The basic parameters of the applied RNN are listed below.

Besides the utilized deep learning model, an artificial neural network (ANN) with a gradient descent backpropagation algorithm was also employed for comparison. The ANN was a shallow multilayer perceptron (MLP) containing three layers with m, 2m + 1, and 1 neurons, where m is the number of parameters in the corresponding parameter set. The learning rate was 0.001 and the maximum number of iterations was 1 million. The previously mentioned properties of the RNN and ANN were adopted similarly in the inventory mapping detailed in Section 5.2.

Receiver operating characteristic (ROC) analysis was employed to assess the model's success in the susceptibility analysis. The area under the curve (AUC) values are presented in Table 5 for the training and in Table 6 for the testing of the model. The left part of these tables contains the RNN results, while the last column gives a brief understanding of the ANN behavior. In Table 6, the maximum value on the horizontal axis is shown in bold italic text for each model as the number of neurons changes, and the maximum value on the vertical axis is underlined. Here, the test set contains all of the points in the study area in order to obtain maps of the entire terrain. The experiments were repeated by changing the number of neurons in the inner layer of the RNN and increasing the number of iterations during training. From Tables 5 and 6, it can be emphasized that the increase in the number of iterations has a serious effect on the AUC values for both training and testing.
However, no significant improvement appears when the RNN contains more neurons in the inner layer. As a consequence, considering the testing performance in Table 6, the 256-neuron RNN with 1 million training iterations appears to be the best model. When Table 6 is examined along the horizontal axis, it can be seen that the best parameter set, which maximizes all of the AUC values, is S10 (containing the topographic, hydro-topographic, hydrologic, and anthropogenic parameters and vegetation, and excluding the lithology variables).
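The incremental construction of the parameter sets Sn can be sketched as follows. The placeholder parameter names are hypothetical, and grouping the six lithology variables as a single final addition is an assumption; only the counts and the order of the thematic blocks follow the study:

```python
def build_parameter_sets(base, extras):
    """S1 is the base (topographic) block; S_{n+1} concatenates S_n
    with one additional parameter, as in the incremental approach."""
    sets = [list(base)]
    for p in extras:
        sets.append(sets[-1] + [p])
    return sets

# Hypothetical names: six topographic parameters, then the additions
# leading to S10 (vegetation included, lithology still excluded).
base = [f"topo_{i}" for i in range(1, 7)]
extras = ["hydro_topo_1", "hydro_topo_2", "hydro_topo_3",
          "hydrologic_1", "hydrologic_2",
          "anthropogenic_1", "anthropogenic_2", "anthropogenic_3",
          "vegetation", "lithology"]
S = build_parameter_sets(base, extras)  # S[0] is S1, ..., S[9] is S10
```

Under these assumptions, S[9] (i.e., S10) contains the vegetation parameter but not lithology, matching the set that maximized the AUC values.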
The main reason for this peculiarity can be explained as follows: the parameters defined as lithology variables in the test field are magmatic rock types, except for alluvium. However, the shallow landslides observed in the study area develop in the residual layers decomposed from these magmatic rocks. In other words, shallow landslide occurrence in the region is controlled by the thickness of the residual soils developed on these units, rather than by the magmatic rock types defined under the lithology variables [68]. Therefore, because of the lack of knowledge of residual soil thickness, the lithology variables remain redundant parameters for estimating shallow landslide occurrence in the model.
As a second approach to the landslide susceptibility problem, the selected parameter set (S10) was examined further by keeping the number of neurons constant at 256 and increasing the number of iterations in model training. Figure 3 shows the training and testing results based on ROC-AUC values. After 5 million training iterations, the model reaches saturation and classifies all of the training data accurately (AUC = 1). This does not mean that the system starts to have an over-fitting problem because, regarding the testing performance on the points of the whole terrain, the AUC values keep rising (reaching around 0.93). In other words, the model generalizes very well from the training data to the unseen data points of the terrain.

The exclusive use of ROC-AUC based model evaluation may be debatable, and even inefficient, for the landslide susceptibility problem, as for any classification task. The textural properties of the predicted classes on the maps should be evaluated as well. In such an assessment, artificial zones on the map texture, or patterns that do not correspond to any natural process or structure, are not desired. It is expected that the landslides present in the area will fall within the limits of the high and very high susceptibility classes. Additionally, the spatial distributions of the high and very high landslide susceptibility classes should be minimal [69]. To observe these points, some of the contributing maps were produced and are presented in Figure 4 to support the performance evaluation measures. In Figure 4, the resultant maps of six scenarios are presented, together with the implementation details of each map. The first three sub-figures show the effect of the number of neurons in the inner layer of the RNN for landslide susceptibility mapping; here, the number of iterations is constant at 1 million.
Subsequently, the model was forced to increase the number of training iterations to decrease the cross-entropy loss value as much as possible. When all six of the produced maps are evaluated from a textural point of view, it can be seen that when the number of iterations is kept constant at 1 million, the spatial distributions of the low and very low susceptibility classes are unchanged as the number of neurons increases (Figure 4). However, in these circumstances, the moderate susceptibility class shows a transition to the high and very high susceptibility classes. As a consequence, the predictive capacities of the resultant maps are significantly increased in the test evaluations of the model. The textural expression of this peculiarity can be observed in Figure 4, while its quantification can be seen in Table 6.

On the other hand, when the number of neurons is kept constant at 256 and the number of iterations is increased, the resultant maps are divided into two classes: very low and very high susceptibility. When this result is evaluated together with the number of iterations and the ROC-AUC values given in Figure 3, it is understood that the results up to 1 million iterations can be expressed as landslide susceptibility, whereas after 5 million iterations, the results obtained directly correspond to the shallow landslide rupture zones. In this case, another research question arises: "Can the high-capacity deep learning algorithm (the number of neurons ≥ 256 and the number of iterations > 5 million) be used for landslide inventory mapping?"

Objectives
(i) To observe the capability of the deep learning approach in estimating subsequent landslides based on events that occurred before a certain year.
(ii) To discuss the ability of deep learning in landslide inventory mapping; to find the exact areas with a high possibility of landslide, and to avoid assigning a large portion of the terrain as unstable.

Time-Based Sampling
Besides the main dataset containing the independent variables, the utilized data repository also includes landslide information representing 251 landslide initiation points. Each of these points corresponds to the rupture zone of a shallow landslide (c_i) that occurred between 1955 and 2007. Similar to the sampling strategy of the landslide susceptibility problem (detailed in Section 5.1.2), 600 negative cases (points without a landslide record) were selected by stratified sampling. However, the 200 positive cases (points with a landslide record) were randomly selected from the set of landslides that occurred before 2005. Therefore, the training dataset contains 800 lines of data. For testing, on the other hand, the entire terrain with 875,816 points (with 251 reported landslides) was considered in order to obtain the susceptibility maps, and the 19 landslides that occurred in 2005 and after (year_i ≥ 2005) were also evaluated to observe how capable the resulting model is of predicting the exact landslide points.
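The time-based split of the positive cases can be sketched as follows. This is a minimal illustration on a synthetic inventory; the `time_based_split` helper and the year distribution are assumptions for demonstration, with only the counts (251 points, 19 in 2005 or later, 200 training positives) taken from the source data:

```python
import random

def time_based_split(landslides, cutoff_year=2005, n_train_pos=200, seed=7):
    """landslides: list of (features, year) initiation points.

    Training positives are drawn only from events before the cutoff
    year; events from the cutoff year onward are held out for testing.
    """
    rng = random.Random(seed)
    before = [p for p in landslides if p[1] < cutoff_year]
    after = [p for p in landslides if p[1] >= cutoff_year]
    train_pos = rng.sample(before, min(n_train_pos, len(before)))
    return train_pos, after

# Synthetic inventory: 251 initiation points between 1955 and 2007,
# 19 of which occurred in 2005 or later (as in the source data).
pts = [((i,), 1955 + i % 50) for i in range(232)] + \
      [((i,), 2005 + i % 3) for i in range(19)]
train_pos, test_events = time_based_split(pts)
```

Holding out the post-cutoff events ensures that the model is evaluated on landslides it could not have seen during training.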

Implementation
Experiments were repeated by changing the number of neurons in the inner layer of the RNN and the number of its training iterations. The RNN parameters were the same as in the landslide susceptibility mapping part of this study (Section 5.1). Here, the parameter set (topographic, hydro-topographic, hydrologic, and anthropogenic parameters and vegetation), empirically selected in the landslide susceptibility mapping, was kept constant.
Running on a device with Intel(R) Core(TM) i7-8700 CPU, 32.0 GB RAM and NVIDIA GeForce RTX 2070, training of RNN with 256 neurons required around 1386 s for 5 million iterations.

Results
The model evaluation of the landslide inventory mapping problem was performed on the basis of the objectives defined in Section 5.2.1.
Regarding the first objective, the main expectation from the proposed RNN is a high estimation capability for the landslides that occurred in 2005 and after. As already mentioned, there are 19 landslides to be estimated, and the testing dataset contains only these lines of data. Since the dataset contains only positive instances, the model evaluation was performed with the recall metric, which is the ratio of the number of true positives (TPs) to the number of all positive instances (including TPs and false negatives (FNs)), as seen in Equation (1), and which equals the accuracy metric for this dataset.
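The metrics referenced as Equations (1)-(4) follow the standard definitions. A minimal sketch that computes them at a given classification threshold is shown below; the toy labels and probabilities are illustrative, not results from the study:

```python
def confusion_counts(y_true, y_prob, threshold=0.5):
    """Threshold probabilistic outputs and count TP, FP, TN, FN."""
    tp = fp = tn = fn = 0
    for y, p in zip(y_true, y_prob):
        pred = 1 if p >= threshold else 0
        if pred == 1 and y == 1:
            tp += 1
        elif pred == 1 and y == 0:
            fp += 1
        elif pred == 0 and y == 0:
            tn += 1
        else:
            fn += 1
    return tp, fp, tn, fn

def metrics(y_true, y_prob, threshold=0.5):
    tp, fp, tn, fn = confusion_counts(y_true, y_prob, threshold)
    recall = tp / (tp + fn) if tp + fn else 0.0        # Equation (1)
    precision = tp / (tp + fp) if tp + fp else 0.0     # Equation (2)
    accuracy = (tp + tn) / (tp + fp + tn + fn)         # Equation (3)
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)       # Equation (4)
    return recall, precision, accuracy, f_measure

# Toy example: vary the threshold to see how the metrics respond.
y_true = [1, 1, 0, 0, 1, 0]
y_prob = [0.9, 0.4, 0.2, 0.8, 0.7, 0.1]
r, p, a, f = metrics(y_true, y_prob, threshold=0.5)
```

Sweeping `threshold`, as done for the bar charts in the results, simply re-applies these definitions to the same probabilistic outputs.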
The results obtained for the first objective are presented in Figure 5. Each differently colored bar represents a different classification threshold used to calculate the recall value. Here, there are two points to consider. First, the estimation capability of the conventional neural network model is very low compared to the corresponding RNNs. Its approximation capacity is not sufficient for a distinct classification, since its outputs are around the value 0.5, which is the most ambiguous output for a classification task. In contrast, the RNNs can distinguish the cases in the testing dataset with much less ambiguity. The second inference from Figure 5 concerns the effect of changing the modeling parameters of the RNNs. In the early stages of training, the model tends to learn the positive instances; it then starts to concentrate on finding a balance that can handle all of the positive and negative samples as much as possible. This situation causes oscillation as the number of iterations increases. To confirm this hypothesis, it is crucial to observe the behavior of these RNNs against the entire terrain. At this point, landslide inventory mapping based on time-based sampling was investigated with regard to the second objective defined in Section 5.2.1.
The main expectation from the implemented deep learning model is to find the exact areas of landslides; the number of true positives (TPs) should be high. Therefore, the recall values and the number of TPs deserve particular attention during the performance evaluation of the inventory mapping problem. Yet there is another significant point in model evaluation: the model should avoid concluding that a large portion of the terrain includes landslides. In other words, the model should not incorrectly classify points without a landslide record as 'unstable', i.e., the number of false positives (FPs) should be low. Therefore, the precision metric (Equation (2)) was also utilized as minor support to recall, and, as a combination of the two, the f-measure (the harmonic mean of precision and recall in Equation (4)) was investigated as well.
The obtained results show that the precision values are very low for each model, which intrinsically pulls down the f-measure. This situation arises from the nature of the utilized imbalanced data, which contain only the origin points of landslides (251 unstable points with landslide occurrence among the 875,816 points of the entire terrain). In these circumstances, the model predicts a landslide area beside the origin of the corresponding landslide. This causes a great increase in the number of FPs, and the resulting precision values become very low. A similar outcome occurs for the accuracy metric: in this problem, the model is highly accurate even if it labels all of the points in the area as stable. Therefore, the precision, f-measure, and accuracy metrics may trigger misunderstanding in the performance evaluation, and the main attention was paid to the recall values.
Regarding the ROC-AUC based model evaluation, Table 7 presents the results on the testing dataset, which contains the points of the entire terrain. Here, it seems that the conventional neural network model performs better than the RNN in some cases. To examine this conflict more deeply, the models were also assessed using the confusion matrix. Table 7 presents the numbers of true-positive (TP), true-negative (TN), false-positive (FP), and false-negative (FN) instances. Recall (Equation (1)), precision (Equation (2)), f-measure (Equation (4)), and accuracy (Equation (3)) values were also calculated. As in the aforementioned model evaluation approach, these values were reconsidered by changing the classification threshold. Here, the conventional neural network tends to estimate the instances as negative (stable points with no landslide occurrence). Since the majority of the instances in the testing dataset do not have landslide occurrence, the accuracy and ROC-AUC values of this model are high. However, the number of correctly predicted landslide points (TPs) is very low, e.g., TP = 18 for the case in which the classification threshold is 0.7, the AUC is 0.83, and the accuracy is 0.99. Because of this contradiction and the possible incorrect assessment of the artificial models, attention should still be paid to the recall values.
To represent the values in Table 7 more clearly, the recall, precision, and ROC-AUC values are restructured as bar charts in Figure 6. Here, it can be observed that an increase in model complexity (which corresponds to the number of neurons in the hidden layer in this case) has a great effect on both precision and recall, while there is no remarkable difference in the ROC-AUC values. The number of training iterations improves the model estimation capability with regard to precision, recall, and ROC-AUC. However, according to the results given in Figure 6, particularly considering the recall values, it is revealed that the conventional artificial neural network (i.e., the ANN with 2m + 1 neurons in the inner layer, where m represents the number of inputs) has no capability to be used for the landslide inventory problem.

The main reason for this peculiarity can be explained as follows: whether for landslide susceptibility mapping models or landslide inventory mapping models, the accurate prediction of instances shows the actual performance. Since the probable instances are searched for within the existing negatives (the stable points) in landslide susceptibility mapping models, using the estimation performance of negatives as a criterion of model success in the landslide problem is open to debate. In landslide inventory mapping, it can be expected that the current negatives are predicted to be negative. However, if the inventory is incomplete and the model catches the landslides that have been missed, the limitation expressed here for the negatives occurs again. In other words, there is no obligation for the existing negatives to always be negative in landslide modeling, but it is expected that the existing positives (unstable points, or points with landslides) always be estimated as positive. Accordingly, the implemented RNN model has a strong prediction power for inventory mapping.

Discussions and Conclusions
In this study, landslide mapping was studied with a deep learning approach (the recurrent neural network), and the behavior of this model in landslide susceptibility and landslide inventory mapping was investigated. In this sense, one contribution of this study to the investigation of landslide susceptibility mapping is that it enables the elimination of the feature selection stage, because of the power of deep learning to extract the salient features automatically. More importantly, by using a deep learning algorithm, landslide susceptibility models with high estimation capacity can be produced. Additionally, a comprehensive literature review was presented to compare the performance of other state-of-the-art machine learning implementations. As one of the most commonly used approaches, the shallow artificial neural network was revisited in the experiments, and its lack of prediction capability was confirmed: this shallow implementation suffered in predicting the points with landslides, whose number of occurrences is very small relative to the entire terrain. On the other hand, the implemented deep learning approach was able to distinguish the landslide areas more precisely, such that, with a proper network structure and enough training iterations, the model tended to detect the exact points of landslides with a very small portion of false-positive predictions. At this point, it was observed that the resulting model evolved from susceptibility mapping to landslide inventory mapping. In other words, the high capacity of the deep learning algorithm made it possible to define a transition zone in terms of the capacity and performance of the landslide susceptibility model and the landslide inventory mapping model. In this study, the results of the implemented deep learning model with 256 neurons up to 1 million iterations were interpreted as landslide susceptibility, while the results obtained after 5 million iterations were evaluated as the landslide inventory. The zone between 1 million and 5 million iterations is considered a transition zone.
It was concluded in the empirical observations of this study that high-capacity deep learning algorithms allow landslide inventory maps to be generated semi-automatically. Such an acquisition will enable faster and more accurate production of event inventories for landslides occurring during a rainy season or after an earthquake. In this sense, there are still issues to be investigated. It is necessary to compare the event inventories to be produced by the deep learning algorithms with the event inventory maps generated by the object-based classification algorithms frequently used in this area. As a highly required further investigation, the transition zone defined in this study needs to be deeply analyzed considering other kinds of deep learning models as well.