Drawback in the Change Detection Approach: False Detection during the 2018 Western Japan Floods

Abstract: Synthetic aperture radar (SAR) images have been used to map flooded areas with great success. Flooded areas are often identified by detecting changes between a pair of images recorded before and after a given flood. During the 2018 Western Japan Floods, the change detection method generated significant misclassifications for agricultural targets. To evaluate whether such a situation could be repeated in future events, this paper examines and identifies the causes of the misclassifications. We concluded that the errors occurred because of (i) the use of only a single pair of SAR images from before and after the floods, (ii) unawareness of the dynamics of the backscattering intensity through time in agricultural areas, and (iii) the effect of the wavelength on agricultural targets. Furthermore, it is highly probable that such conditions will occur in future events. Our conclusions are supported by a field survey of 35 paddy fields located within the misclassified area and by the analysis of Sentinel-1 time series data. In addition, we propose a new parameter, which we named "conditional coherence", that can help to overcome the referred issue. The new parameter is based on the physical mechanism of backscattering from flooded and non-flooded agricultural targets. The performance of the conditional coherence as an input of discriminant functions to identify flooded and non-flooded agricultural targets is reported as well.


Introduction
Floods are natural phenomena that can perturb ecosystems and societies [1,2]. Microwave remote sensing is extremely useful for post-disaster analysis of floods for two reasons. First, microwaves can penetrate clouds, which are almost certainly present during heavy rainfall events. Second, there is a clear physical mechanism for backscattering in waterbodies, that is, specular reflection. There are several methods to trace the flooded area from microwave remote sensing. Schumann and Moller [3] provided a comprehensive review of the role of microwave remote sensing in flood inundation. Their study focuses on different potential conditions, such as mapping inundation at floodplains, coastal shorelines, wetlands, forests, and urban areas. For each case, the limitations and advantages of microwave images are drawn. Nakmuenwai et al. [4] proposed a framework to identify flood-based water areas along the Chao Phraya River basin of central Thailand, an area where floods occur almost every year. Their study established an inventory of permanent waterbodies within the Chao Phraya River basin. Then, for an arbitrary SAR image, local thresholds are computed using the inventory of waterbodies as references. The proposed framework is independent of the satellite acquisition conditions, and normalization was not necessary. The application of change detection to map the effect of a flood on agricultural targets is more complicated than its application in urban areas. Previous studies have stated that their results might contain changes produced by both the flood and agricultural activities. Liu and Yamazaki [5] reported that the change of waterbodies included large paddy fields flooded for irrigation purposes, which could not be removed. Boni et al.
[6] pointed out that it is necessary to include a pre-event image recorded a few days before the flood to successfully remove all the permanent waters, including those from agricultural activities. However, such an image is usually not available. To the best of our knowledge, there is no report on the severity of this issue, namely, the inability to discriminate changes of waterbodies caused by the flood from those caused by agricultural activities.
Thus, the aim of this study is to report on the above issue, encountered during our analysis of the 2018 heavy rainfall that occurred in western Japan, and to provide a potential solution. The next section describes the region of interest, the dataset available during the early disaster response, and the data collected on subsequent days, such as additional microwave imagery, governmental reports, and the field survey. Section 3 reports the current approaches to map flooded areas. Their limitations in discriminating flooded and non-flooded agricultural targets are highlighted. Moreover, a new parameter is proposed to improve the identification of flooded and non-flooded agricultural targets. The results of the proposed metric and the performance of machine learning classification algorithms calibrated with the new parameter are reported in Section 4. Additional comments are provided in Section 5. Finally, our conclusions are reported in Section 6.

The 2018 Western Japan Floods
Beginning on 5 July 2018, heavy rainfall occurred in western Japan. A detailed report can be found in [22]; a summary follows. The cause of the heavy rainfall was attributed to the convergence of a weather front and Typhoon No. 7. The total rainfall between 28 June and 8 July was 2–4 times the average monthly value for July. As of 2 August 2018, 220 casualties, nine missing persons, and 381 injuries had been reported due to the floods, mainly in Hiroshima and Okayama Prefectures. Furthermore, 9663 houses had partially or completely collapsed, 2579 were partially damaged, 13,983 houses were flooded to the first floor, and 20,849 houses were flooded below the first floor. Figure 1 shows the geographical location of the region of interest (ROI) of this study, which is located in Okayama Prefecture. It includes the cities of Kurashiki, Okayama, and Soja. Within Kurashiki city is located the town of Mabi, the most affected area in Okayama Prefecture during the heavy rainfall. Figure 1. Geographical location of the study area. The polygons at Mabi town denote the estimated flooded area provided by the Geospatial Information Authority of Japan (GSI). The polygons labeled as surveyed area denote the paddy fields that were inspected.

The Advance Land Observing Satellite-2 (ALOS-2)
In the aftermath of the 2018 heavy rainfall event, ALOS-2 performed observations of the affected areas for early disaster response purposes. The ALOS-2 system was developed by the Japan Aerospace Exploration Agency (JAXA). ALOS-2 hosts an L-band SAR system, named PALSAR-2, that is capable of right- and left-looking imaging with an incident angle range of 8°–70°. The revisit time of ALOS-2 is 14 days. PALSAR-2 is able to operate in three different modes: (i) spotlight mode, with azimuth and range resolutions of 1 m and 3 m, respectively; (ii) stripmap mode, which has the ultrafine, high-sensitivity, and fine submodes; and (iii) ScanSAR mode, with low resolution for large swaths. Figure 2a shows an RGB color composite of a pair of images acquired on 14 April 2018 (R) and 7 July 2018 (G and B) over the ROI. The images are in stripmap-ultrafine mode, which has a ground resolution of 3 m, and both are in HH polarization. The gray tones are areas where the backscattering intensity remained unchanged, whereas the red tones indicate areas where the backscattering intensity decreased.
The two referred images were provided at the early stage of the disaster. Later on, another pre-event image with the same acquisition conditions was requested for this study. The third image was recorded on 3 March 2017. Note that all the images were radiometrically calibrated, speckle filtered, and terrain corrected. Figure 2. (a) Color composite of pre- and post-event ALOS-2 SAR images. The inset denotes the location of the study area in western Japan. (b) Thresholding-based waterbody identification. The black rectangles denote confirmed flooded and non-flooded agricultural areas.

The Sentinel-1 Satellite
To reveal the impact of irrigation and seasonal changes, annual variations of backscattering at the agricultural targets were investigated using Sentinel-1 data. The Sentinel-1 constellation, operated within the European Commission's Copernicus program, can provide dual-polarized C-band SAR images every six days [23]. Annual changes in backscattering from paddy fields were analyzed using ground range detected (GRD) images taken from 1 January 2017 to 31 December 2018. Note that only the images taken from the descending path under VV polarization were extracted to reduce the effects of the acquisition conditions. As a result, a total of 58 SAR images were selected for this study. The preprocessing of these images consists of three steps: orbit correction, calibration, and geometric terrain correction. First, orbit state vectors were modified using supplementary orbit information provided by the Copernicus Precise Orbit Determination Service. This modification strongly influences the quality of several subsequent preprocessing steps. Second, brightness was converted into the backscattering coefficient, which represents the radar intensity per unit area in the ground range. Third, geometric distortions derived from the varying terrain were corrected using digital elevation data provided by the Shuttle Radar Topography Mission.

Truth Data
Currently, the Geospatial Information Authority of Japan (GSI) provides data related to the effects of the floods [24]. A series of derived products are available in the referred database, such as orthophotos taken before and after the disaster, maps of collapsed areas, the inundation range, estimations of inundation depth, and a digital elevation map. Figure 1 illustrates the extent of the flooded area in Mabi town. It is worth pointing out that the delineation provided by GSI was performed using photos and videos only; thus, its accuracy was not confirmed.
As seen in Figure 2a, the flooded area in Mabi town exhibits a reduction of the backscattering intensity, which is expressed as red tones. However, many other areas exhibit the same pattern as well. A closer look revealed that most of those areas are used for agricultural activities. Therefore, to complement the data provided by GSI, a field survey was performed on 25 July, 18 days after the recording of the post-event image. The location of the surveyed paddy fields is shown in Figure 1, and a closer look is shown in Figure 3. A total of 35 agricultural fields were inspected (Figure 3a). There was no evidence of flooding, and direct communication with the inhabitants confirmed that no flooding occurred in the surveyed area. Furthermore, it was found that the plants were in their early stage of growth, and a layer of water was present in most of the fields (Figure 3b,c).

Intensity Thresholding
This section begins by reproducing one of the simplest approaches to estimate flooded areas in the ROI, that is, identifying areas with low backscattering values in the ALOS-2 SAR imagery. To define a proper threshold, permanent waterbodies from the Asahi and Oda rivers, shown as blue polygons in Figure 2, are employed as references. As pointed out in [4], by using waterbody references from the same images, the acquisition conditions of the images (i.e., incident angle, satellite path, etc.) have no effect on the results. A threshold of −16 dB for the sigma naught backscattering was set, and the waterbodies were identified in both the pre- and post-event images with the shortest time baseline, that is, the images recorded in April and July 2018. The permanent waterbodies in the post-event image were removed using the waterbodies in the pre-event image. Then, erosion and dilation operators, with a window size of 3 × 3, were applied in order to remove small objects. Figure 2b depicts the estimated flooded areas based on changes in waterbodies. As stated previously, there are several systematic procedures to set a threshold value. It was found, at least in this event, that slight modifications to the threshold value (−16 dB) did not change the overall flood map significantly. A similar flood map of Okayama Prefecture for the same event is reported in [25]. In Figure 2b, two areas are highlighted by black rectangles. The rectangle on the left side is located in Mabi town, hereafter referred to as the F-area. It contains the flooded area mapped by GSI, which was confirmed as flooded by the media immediately after the disaster occurred. The second rectangle, hereafter referred to as the NF-area, corresponds to an area mainly devoted to agricultural activities. Note that the surveyed agricultural fields are located in the NF-area. According to the flood map (Figure 2b), the NF-area was severely affected.
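As a minimal sketch, the thresholding and morphological cleanup described above can be reproduced with NumPy/SciPy. The function name is ours, and the inputs are assumed to be calibrated sigma naught images in dB:

```python
import numpy as np
from scipy import ndimage

def flood_change_map(pre_db, post_db, threshold_db=-16.0, win=3):
    """Candidate flooded pixels: water (low backscatter) in the post-event
    image that was not water in the pre-event image. Erosion followed by
    dilation (morphological opening) removes small spurious objects."""
    pre_water = pre_db < threshold_db
    post_water = post_db < threshold_db
    change = post_water & ~pre_water          # drop permanent waterbodies
    structure = np.ones((win, win), dtype=bool)
    opened = ndimage.binary_erosion(change, structure)
    return ndimage.binary_dilation(opened, structure)
```

In the study, the corresponding mask was derived from the April and July 2018 ALOS-2 images with T = −16 dB and a 3 × 3 window.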
However, it came to our attention that there was no report of damage in this area, neither from the government nor from the media. Recall that, during the field survey, the inhabitants also confirmed the NF-area was not flooded. Furthermore, a closer look at the SAR images showed dark rectangular areas distributed in a rather uniform way (Figure 3a). Therefore, it is concluded that these areas contained waterbodies used for agricultural activity and were not caused by flooding. For the sake of clarity, fields that contain waterbodies due to agricultural activities are referred to as "non-flooded agricultural targets". Likewise, agricultural fields covered with waterbodies produced by the flood are referred to as "flooded agricultural targets". Specular reflection was indeed the main backscattering mechanism in the NF-area. Double bouncing did not occur for two reasons: the very thin stalks of the plants and the rather large L-band wavelength (15–30 cm). The backscattering sigma naught (σ⁰) intensities of the flooded area in Mabi town and the surveyed paddy fields were measured and are shown in Figure 4. The range of the backscatter intensity in the flooded agricultural fields was almost the same as that of the non-flooded agricultural fields. Therefore, it is expected that a pixel-based classification approach will produce incorrect results. Some of the agricultural fields were inactive (F07, F10, F29, F30, F34, and F35); that is, we observed only bare soil at the time of the survey, and the associated backscattering intensities were larger than those of the remaining fields. The large intensity shown for field F19 was attributed to double-bounce artifacts from a nearby building.

Coherence Approach
The use of complex coherence is another approach that has been frequently used in recent years to map damaged areas [5,18]. Coherence is computed from a pair of complex SAR images using the following expression:

\gamma = \frac{\left| \sum_{i,j} I^{\mathrm{pre}}_{i,j} \left( I^{\mathrm{pos}}_{i,j} \right)^{*} \right|}{\sqrt{\sum_{i,j} \left| I^{\mathrm{pre}}_{i,j} \right|^{2} \sum_{i,j} \left| I^{\mathrm{pos}}_{i,j} \right|^{2}}} \quad (1)

where I^{pre}_{i,j} denotes the complex backscattering of the pre-event SAR image, I^{pos}_{i,j} is the complex backscattering of the post-event SAR image, and * denotes the complex conjugate. Coherence tends to be large in urban areas, unless significant changes have occurred, such as the condition immediately after a large-scale disaster. In urban areas surrounded by dense vegetation, a single coherence computed from pre- and post-event SAR imagery might not be sufficient, because vegetation also exhibits low coherence. This pitfall can be overcome with a land cover map or an additional coherence image computed from two pre-event images. This approach was evaluated in the present study as well. The pre-event coherence was computed with the two pre-event SAR images (i.e., the images recorded in March 2017 and April 2018). The co-event coherence was computed with the post-event and pre-event SAR images that have the shortest temporal baseline (i.e., the images recorded in April and July 2018). Figure 5 depicts the RGB color composite constructed from both coherence images (R: co-event coherence; G and B: pre-event coherence). The cyan tones are areas where the pre-event coherence dominates, which is mostly observed in urban areas affected by the flood. See the red polygons in Figure 5b, for instance. The white tones are areas where both the pre-event and co-event coherences have high values. This pattern is observed in non-flooded urban areas. It is promising to observe that the NF-area exhibits a clearly different tone from that shown in the F-area. The non-flooded agricultural targets have low values in both the pre-event and co-event coherence, which is reflected by dark tones. However, a closer look revealed that flooded agricultural targets exhibit dark tones as well.
For instance, see the green polygons in Figure 5b. Therefore, using coherence-based change detection, floods in agricultural targets cannot be identified. To confirm this conclusion, Figure 6 depicts scatter plots of flooded agricultural areas, flooded urban areas, and non-flooded agricultural areas. Each scatter plot was constructed from 10,000 samples randomly selected from the polygons shown in Figure 3a,b. It is clear that there is a significant overlap between samples from flooded (Figure 6a) and non-flooded (Figure 6c) agricultural targets.
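The sample coherence described above is typically estimated over a moving window. A minimal NumPy/SciPy sketch (helper names are ours; real and imaginary parts are filtered separately because `uniform_filter` expects real input):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def _smooth(arr, win):
    """Moving-window mean; complex arrays are filtered part by part."""
    if np.iscomplexobj(arr):
        return uniform_filter(arr.real, win) + 1j * uniform_filter(arr.imag, win)
    return uniform_filter(arr, win)

def coherence(pre, post, win=5):
    """Per-pixel sample coherence:
    |<pre * conj(post)>| / sqrt(<|pre|^2> <|post|^2>)."""
    num = np.abs(_smooth(pre * np.conj(post), win))
    den = np.sqrt(_smooth(np.abs(pre) ** 2, win) * _smooth(np.abs(post) ** 2, win))
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)
```

Identical images yield a coherence of one everywhere, while independent speckle yields low values that shrink as the window grows.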

Backscattering Dynamics of Agriculture Targets
The evidence from the field survey demonstrated that the paddy fields were filled with water by the time the post-event image was recorded. Furthermore, Figure 2b suggests that such waterbodies were not present at the time the pre-event image was recorded. However, this evidence is not conclusive. If the pre-event image was taken before irrigation, the change detection approach would mix changes due to irrigation with flood damage. Before the start of irrigation and transplanting, the backscattering response from paddy fields is generally dominated by surface scattering from bare soil.
Fortunately, agricultural activities are associated with the seasons. Thus, the variation of the backscattering can be modeled as a periodic signal with a period of one year. Figure 7 illustrates the variation of the averaged backscattering intensity through the years 2017 (red line) and 2018 (blue line), corresponding to the surveyed paddy field F26. The shadowed area denotes the limits of the standard deviation. Note that the rest of the surveyed paddy fields that contained waterbodies show the same pattern as in Figure 7. The time series data revealed a sudden decrease in the intensity to approximately −17 dB between June and July in both 2017 and 2018. This downward trend can be regarded as the start of rice planting, when farmers irrigate the paddy fields. After irrigation, the backscattering exhibited a gradually increasing trend. Note that the Sentinel-1 (3.75–7.5 cm wavelength) and ALOS-2 (15–30 cm wavelength) satellites use different wavelength bands; thus, the period of lowest backscattering intensity should be longer for the ALOS-2 SAR images. Recall that the pre-event ALOS-2 SAR image closest to the onset of the heavy rainfall was recorded on 14 April, before the irrigation period; the floods occurred between 5 and 8 July, soon after the beginning of the irrigation period. The post-event ALOS-2 SAR image was recorded on 7 July. Therefore, we can conclude that the onset of the irrigation period hampered the traditional change detection approach for the identification of flooded areas.
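As a toy illustration of how the irrigation onset could be flagged from such a time series, the following sketch marks the first acquisition whose mean backscatter drops below a threshold. The −17 dB level mirrors the drop reported above, but the series and function name are illustrative, not the actual Sentinel-1 values:

```python
import numpy as np

def irrigation_onset(dates, mean_sigma0_db, drop_db=-17.0):
    """Date of the first acquisition whose mean backscatter falls below
    drop_db, used here as a proxy for the start of irrigation."""
    below = np.nonzero(np.asarray(mean_sigma0_db) < drop_db)[0]
    return dates[below[0]] if below.size else None
```

Run over a per-acquisition mean series, this would flag the June/July drop observed in both 2017 and 2018.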

A New Metric: Conditional Coherence
At this stage, it is clear that traditional methods may cause false detection of flooded areas for agricultural targets. It is imperative to track the periodic activities of agricultural targets and, consequently, be aware of the dynamics of the backscattering sigma naught. Additional information might be useful to judge the reliability of the results. Therefore, for future reference, it is strongly recommended to verify whether a flood event occurred during the onset of the irrigation period; if that is not the case, then the previous approaches should be adequate. In case a similar situation occurs, new solutions are required. It is our belief that new solutions should consider the spatial arrangement of the fields and how the waterbodies are present in flooded and non-flooded agricultural targets. A first important fact is that agricultural fields have a defined shape, which is often rectangular, with edges bounded by streets, roads, alleys, or buildings. Another key feature is that the waterbodies in non-flooded fields are stored in a controlled manner; namely, they have the same shape as the agricultural fields (Figure 3). On the other hand, waterbodies in flooded areas do not have a clearly defined shape.
The automatic detection of regular shapes, such as rectangles, might be a potential solution to discriminate non-flooded agricultural areas from those that are flooded. There is extensive literature on this subject [26–29]. Although object detection is a mature subject, it is worth mentioning that the edges of agricultural fields are not clearly defined in microwave images because of speckle noise; thus, it might not be a straightforward procedure. Current technologies, such as supervised machine/deep learning methods, can overcome this issue if proper and extensive training data are at hand and extensive computational resources are available. Instead, we propose a novel heuristic metric, which is based on the fact that the waterbodies in non-flooded paddy fields are stored within the agricultural fields. Thus, the areas between consecutive paddy fields, commonly used for transportation, and all nearby infrastructure remain dry, with a steadier backscattering mechanism over time. We then propose to recompute coherence under the following modifications.
Given the X and Y domains, defined as L_x = {0, 1, ..., N_x} and L_y = {0, 1, ..., N_y}, respectively, a complex SAR image can be defined as a function that assigns a complex number to each pixel in L_x × L_y; that is, I : L_x × L_y → C. Let A_T be the subset of pixel coordinates in L_x × L_y whose pixel values in the post-event image I^{pos} meet the following condition:

A_T = \left\{ (i,j) \in L_x \times L_y : 10 \log_{10} \left| I^{\mathrm{pos}}_{i,j} \right|^{2} > T \right\} \quad (2)

where T is a threshold to filter out pixels with low intensity (in units of dB). The coherence between I^{pos} and a pre-event image I^{pre} computed over pixels whose coordinates belong to A_T is referred to in this paper as the "conditional coherence" (γ_T):

\gamma_T = \frac{\left| \sum_{(i,j) \in A_T} I^{\mathrm{pre}}_{i,j} \left( I^{\mathrm{pos}}_{i,j} \right)^{*} \right|}{\sqrt{\sum_{(i,j) \in A_T} \left| I^{\mathrm{pre}}_{i,j} \right|^{2} \sum_{(i,j) \in A_T} \left| I^{\mathrm{pos}}_{i,j} \right|^{2}}} \quad (3)

The concept behind Equation (3) is to recompute the coherence after filtering out low-intensity pixels in the post-event SAR image using a user-defined threshold, T. A suitable option for T is the same threshold used to create the waterbody map in Figure 2b. The low-intensity pixels are filtered out because they are the information the flooded and non-flooded agricultural targets have in common. On the other hand, medium/large-intensity pixels do have different patterns. For the purpose of identifying flooded areas in agricultural targets, the conditional coherence must be computed over a moving window whose size (i.e., N_x and N_y) should be large enough to include paddy field edges and surrounding structures.
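A moving-window implementation of the conditional coherence can be sketched as follows. This is our illustrative NumPy/SciPy version (the function name and the small 1e-12 guard inside the logarithm are our choices); excluded pixels are zeroed so they contribute nothing to the windowed sums:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def conditional_coherence(pre, post, threshold_db=-16.0, win=101):
    """Coherence of pre/post complex images computed only over pixels
    whose post-event intensity (dB) exceeds threshold_db, estimated
    within a (win x win) moving window."""
    intensity_db = 10.0 * np.log10(np.abs(post) ** 2 + 1e-12)
    keep = intensity_db > threshold_db
    # Zero out excluded pixels so windowed sums ignore them.
    cross = np.where(keep, pre * np.conj(post), 0.0)
    p_pre = np.where(keep, np.abs(pre) ** 2, 0.0)
    p_pos = np.where(keep, np.abs(post) ** 2, 0.0)
    box = lambda a: uniform_filter(a, win, mode="constant")
    num = np.abs(box(cross.real) + 1j * box(cross.imag))
    den = np.sqrt(box(p_pre) * box(p_pos))
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)
```

With threshold_db = −16 this corresponds to γ_{−16}; a 101 × 101 window (roughly 300 × 300 m² at 3 m resolution) matches the setting reported later in the paper.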

Machine Learning Classification
The performance of the conditional coherence as input to machine learning classifiers to characterize flooded and non-flooded agricultural fields is evaluated here. Two different machine learning classifiers, which can be used in different circumstances, are evaluated. In the first situation, it is assumed that there is no information other than the remote sensing data; that is, there is no available truth data. Under this context, an unsupervised machine learning classifier is a suitable option. In the second circumstance, areas confirmed as flooded at a very early stage of the disaster are used as input to a supervised machine learning classifier.

Unsupervised Classification
The expectation-maximization (EM) algorithm [30,31], an unsupervised classifier, is used here to discriminate the flooded and non-flooded agricultural targets. The EM algorithm assumes that the distribution of the conditional coherence is composed of a mixture of distributions:

p(\gamma_T) = \sum_{i=1}^{k} p(\gamma_T | C_i) P(C_i) \quad (4)

where k is the number of classes defined beforehand, C_i denotes a class, and P(C_i) are the mixture proportions. The distributions are defined through an iterative optimization. For instance, assuming Gaussian distributions, the calibration of their parameters is as follows:

m_i = \frac{\sum_n P(C_i | \gamma_T^{(n)}) \, \gamma_T^{(n)}}{\sum_n P(C_i | \gamma_T^{(n)})}, \qquad S_i = \frac{\sum_n P(C_i | \gamma_T^{(n)}) \left( \gamma_T^{(n)} - m_i \right)^2}{\sum_n P(C_i | \gamma_T^{(n)})} \quad (5)

where m_i and S_i denote the mean and variance, respectively. In this study, Gaussian distributions are used. After calibration of the distributions through the iterative process, each pixel sample is associated with the class C_i that gives the maximum p(γ_T | C_i).
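A self-contained 1-D EM implementation of this mixture model might look like the following sketch; the quantile-based initialization and the 1e-9 variance floor are our choices, not the authors':

```python
import numpy as np

def em_gmm_1d(x, k=3, iters=200):
    """EM for a 1-D Gaussian mixture; returns means, variances, and
    mixture proportions P(C_i)."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))   # spread initial means
    var = np.full(k, x.var() / k)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities P(C_i | x_n)
        pdf = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = pi * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update means, variances, and mixture proportions
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
        pi = nk / len(x)
    return mu, var, pi

def classify(x, mu, var, pi):
    """Assign each sample to the class maximizing P(C_i) p(x | C_i)."""
    pdf = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return np.argmax(pi * pdf, axis=1)
```

Following the paper, one would fit with k = 3 on the γ_{−16} samples and flag the component with the lowest mean as the flooded class.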

Supervised Thresholding
Areas recognized as flooded immediately after the floods occurred are used here to define a range of γ_{−16} and to perform predictions. Namely, new samples that are located inside the range will be classified as flooded; otherwise, they will be classified as non-flooded. The decision of whether a sample belongs to the flooded class is based on the sign of the following expression:

f(\gamma_T) = w \, \gamma_T - \rho \quad (6)

where the parameters w and ρ are calculated from the following optimization problem:

\min_{w, \rho, \xi} \; \frac{1}{2} w^{2} + \frac{1}{\nu N} \sum_{i=1}^{N} \xi_i - \rho \quad \text{subject to} \quad w \, \gamma_T^{(i)} \ge \rho - \xi_i, \; \xi_i \ge 0 \quad (7)

where γ_T^{(i)} is the conditional coherence of sample i ∈ {1, ..., N}, with N being the number of samples; ξ_i is a slack variable; and ν ∈ (0, 1] denotes the upper bound on the fraction of outlier samples. Equations (6) and (7) denote a particular case of the method proposed by Schölkopf et al. [32], which also handles multidimensional features. Figure 8a,b shows subimages of 101 × 101 pixels of two non-flooded agricultural targets located within the NF-area. Both the pre-event and post-event SAR images are depicted. After removing pixels whose intensity in the post-event image was lower than −16 dB (i.e., pixels whose coordinates do not belong to A_{−16}), the remaining pixels consist of those located in the paths between paddy fields and/or the nearby infrastructure. The conditional coherence computed for both pairs of images is approximately 0.30. Likewise, Figure 8c,d shows subimages of flooded agricultural targets located within the F-area. Note that, unlike the non-flooded agricultural areas, the streets/roads between consecutive fields are not visible in the post-event subimages. In Figure 8c, the area with high backscattering intensity at the bottom right consists of a flooded urban area. The conditional coherence computed in this subimage is γ_{−16} = 0.08. The high-value pixels at the top left of Figure 8d represent the backscattering intensity from the Prefectural School of Okayama Kurashikimakibishien, which was inundated as well. The conditional coherence in this area is γ_{−16} = 0.12.
Note that, although in the four examples the conditional coherence is rather low, the values computed in flooded areas are lower than those computed in non-flooded areas. Within flooded areas, large-intensity pixels are mainly due to the joint effect of the specular-reflection and double-bounce backscattering mechanisms, whereas in non-flooded areas, the medium/large-intensity pixels denote the backscattering from the streets/roads and nearby buildings. Figure 9 shows the conditional coherence computed at the pixels depicted in Figure 2b. A window size of 101 × 101 pixels (approximately 300 × 300 m²) was used. Recall that Figure 2b depicts the areas with the lowest backscattering intensity, and thus the conditional coherence at each pixel in Figure 9 contains information from its surroundings rather than from its own location. It is observed that, in both the flooded and non-flooded areas, the conditional coherence is mostly lower than 0.5. However, it is clearly observed that the conditional coherence in Mabi town is much lower than that computed in the surveyed paddy fields. Figure 10a shows the distribution of γ_{−16}. The distribution is slightly skewed to the right, which is the effect of the low conditional coherence in the flooded agricultural areas. It does not exhibit a bimodal distribution, with a distinctive peak representing the flooded agricultural targets, because the population of non-flooded areas is much larger than that of the flooded areas. Furthermore, although difficult to observe, there are several thousand samples with γ_{−16} greater than 0.5.

Classification of Flooded and Non-Flooded Agriculture Targets
Considering the shape of the distribution of γ_{−16} (Figure 10a), the EM algorithm was computed for three classes (k = 3). Then, the class that contains the lowest γ_{−16} is considered the flooded area, and the other two classes are merged to represent the non-flooded areas. Figure 10b shows p(C_i | γ_{−16}) for the three classes, and Figure 11a shows the resulting classification map. The red pixels denote the flooded agricultural areas, whereas the blue ones denote the non-flooded agricultural areas. Figure 11b shows a closer look at the flooded paddy fields in the F-area. It is observed that the resulting classification using the conditional coherence is consistent with the area delineated by GSI. Figure 11c shows a closer look at the NF-area. Compared with Figure 2b, it is observed that most of the false detections have been rectified. However, at the bottom right of Figure 11c, an area classified as flooded paddy fields is observed. Figure 9 shows that this area indeed exhibits low conditional coherence. A closer look revealed there was a significant amount of dry vegetation at a higher elevation than the paddy fields, which certainly exhibits medium backscattering intensity but low coherence and conditional coherence. Regarding the classification using the one-class SVM method, 30,000 samples of flooded agricultural targets were randomly extracted from the green polygons denoted in Figure 5b. Note that the referred area was promptly confirmed as flooded, and it is located within the F-area. The parameter ν of Equation (7) was chosen to be 0.95; that is, the calibrated function can accept a maximum of 5% of outliers from the input samples. Figure 12a shows the distribution of the input samples with respect to γ_{−16}. Figure 12b shows the calibrated f(γ_{−16}). As mentioned previously, the value at which f(γ_{−16}) changes sign is the boundary between the two classes. This value is slightly lower than 0.2.
Therefore, a lower value is classified as flooded agricultural area; otherwise, it is classified as non-flooded agricultural area. Note that the boundary between the two classes is practically the same as the one defined with the EM method, and therefore the map of flooded and non-flooded agricultural targets computed from the one-class SVM method (Figure 13) is very similar to that computed from the EM algorithm (Figure 11). To evaluate the performance of the classifiers calibrated using the EM and the one-class SVM methods, pixels within the area delineated by GSI at Mabi town and pixels within the NF-area are used as reference pixels of flooded and non-flooded agricultural areas. A proper evaluation requires the same number of samples for each class. Thus, 10,000 pixels were extracted randomly from each of the referred areas. Three scores are used for the evaluation: producer accuracy (PA), user accuracy (UA), and F1. The PA represents the percentage of samples extracted from the GSI-area (NF-area) that were classified as flooded (non-flooded). Similarly, the UA is the percentage of samples classified as flooded (non-flooded) that were extracted from the GSI-area (NF-area). The F1 score is the harmonic mean of UA and PA, that is, F1 = 2(UA⁻¹ + PA⁻¹)⁻¹. To consider the effect of the random sampling, the evaluation was performed one hundred times. Figure 14 reports the resulting scores. It is observed that all the scores have values of approximately 81%, which indicates high accuracy. Table 1 reports the predictions over all the pixels in the GSI-area, the surveyed paddy fields, and the NF-area.
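The three scores can be computed from boolean prediction and reference masks as follows (a straightforward sketch; the variable names are ours):

```python
import numpy as np

def evaluation_scores(pred_flooded, ref_flooded):
    """Producer accuracy (PA), user accuracy (UA), and F1 for the
    flooded class, given boolean prediction/reference arrays."""
    tp = np.sum(pred_flooded & ref_flooded)    # correctly detected flooded
    fp = np.sum(pred_flooded & ~ref_flooded)   # false alarms
    fn = np.sum(~pred_flooded & ref_flooded)   # missed flooded pixels
    pa = tp / (tp + fn)   # fraction of reference flooded pixels detected
    ua = tp / (tp + fp)   # fraction of flooded predictions that are correct
    f1 = 2.0 / (1.0 / ua + 1.0 / pa)   # harmonic mean of UA and PA
    return pa, ua, f1
```

Swapping the roles of the two masks yields the corresponding scores for the non-flooded class.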

Discussion
Additional comments regarding the relevance of this study are necessary. In the aftermath of the 2018 Western Japan Floods, preliminary estimations of the affected areas were published. As this study proved, all flood maps estimated from the L-band SAR images overestimated the floods in Okayama Prefecture. Such significant overestimation can compromise an efficient transfer of human and material resources. It was, therefore, of great interest to identify the factors that induced such misclassifications. This study showed that the main factors were the use of only two images, the L-band wavelength, and the onset of agricultural activities. Another factor that may have influenced the large misclassifications is the use of automatic procedures without human intervention. We believe that such an important task should be supervised by an expert to make a proper interpretation of the microwave imagery and the resulting flood map. The main role of an automatic procedure is to assist the experts in obtaining a fast estimation of the affected area.
Note that the definition of flooded and non-flooded vegetation in [9] differs from ours. In [9], flooded vegetation is vegetation that contains water beneath it, whereas non-flooded vegetation does not contain water at all. In our study, both flooded and non-flooded agricultural targets contain waterbodies. The water in flooded areas was produced by the flood, whereas non-flooded areas contain water from ordinary irrigation activities. The study of Pierdicca et al. [8] on the potential of CSK to map flooded agricultural fields (see Section 1) did not consider the situation where agricultural targets flooded by the natural phenomenon and those artificially flooded for irrigation purposes occur simultaneously.
Regarding the conditional coherence, note that its value contains local information from the field edges and nearby structures; thus, the window size needs to be large enough to include them. As a consequence, the resolution is decreased. Furthermore, note that the window size needs to be defined by the user in advance, which requires a rough estimation of the dimensions of a standard agricultural field. The term "conditional" in the conditional coherence is used because a condition was imposed on the pixels used in the computation of the coherence. In this study, the condition was associated with the intensity of the backscattering; namely, the intensity must be larger than a certain threshold. However, other types of conditions can be used to fulfill specific needs.

Conclusions
We have reported a case where the change detection approach was not suitable for identifying flooded areas in agricultural targets. It was the co-occurrence of several factors, namely, the use of only two images, the L-band wavelength, and the onset of agricultural activities, that produced such misclassifications. Using the backscattering intensity, significant overestimation was produced, and no changes were observed in the coherence images. Furthermore, solid evidence from the field survey and Sentinel-1 SAR time series data demonstrated that this flaw might occur in future events. We strongly recommend awareness of the dynamics of backscattering in agricultural fields to avoid false alarms. Furthermore, we proposed a new metric, termed here "conditional coherence", to infer whether a detected change is associated with non-flooded or flooded agricultural areas. The conditional coherence is a simple modification of the well-known coherence. Thus, its computation is simple and computationally efficient, and it does not require additional ancillary data (such as optical images and/or a land use map). The conditional coherence was computed over the waterbodies observed in flooded and non-flooded agricultural targets and then used as an input feature to perform unsupervised and supervised classifications. Both methods, applied independently, showed the same results, which confirms that the conditional coherence contains information useful for discriminating flooded agricultural targets. The resulting classification removed approximately 81% of the false detections of flooded areas.