Change Detection Based on the Coefficient of Variation in SAR Time-Series of Urban Areas

This paper discusses change detection in SAR time-series. First, several statistical properties of the coefficient of variation highlight its pertinence for change detection. Subsequently, several criteria are proposed. The coefficient of variation is suggested to detect any kind of change. Furthermore, several criteria based on ratios of coefficients of variation are proposed to detect long events, such as construction sites, or point events, such as vehicles. These detection methods are first evaluated on theoretical statistical simulations to determine the scenarios where they deliver the best results. The simulations demonstrate the greater sensitivity of the coefficient of variation to speckle mixtures, as in the case of agricultural plots. Conversely, they also demonstrate the greater specificity of the other criteria for the cases addressed: very short events or longer-term changes. Subsequently, detection performance is assessed on real data for different types of scenes and sensors (Sentinel-1, UAVSAR). In particular, a quantitative evaluation compares our solutions with baseline methods. The proposed criteria achieve the best performance, with reduced computational complexity. On Sentinel-1 images containing mainly construction sites, our best criterion reaches a probability of change detection of 90% for a false alarm rate of 5%. On UAVSAR images containing boats, the criteria proposed for short events detect 90% of all pixels belonging to the boats, for a false alarm rate of 2%.


Introduction
Since the launch of the Copernicus Program, data have become available on a full, open, and free-of-charge basis. Thus, many new services can be developed, whether for environmental, civil, industrial, defense, or surveillance applications, using large time-series [1,2]. Urban areas are particularly dynamic: they are being developed at an ever-increasing rate [3]. Therefore, this article focuses on the use of SAR image time series to map the rate of change in urban areas during a given observation period.
Change detection involves measuring how the attributes of a particular area have changed between two dates [4]. Change detection makes it possible to monitor natural events, such as flooding, earthquakes, and landslides, and to observe urban sprawl. The notion of change detection implicitly assumes that we are processing two images: a reference image, considered as the initial case, and a new image, which is related to the change to be detected.
Several studies address this problem with existing algorithms, working from two optical images or two radar images [5]. Radar images are valued in this context because they can record data regardless of the weather. Moreover, unlike optical data, SAR images are almost insensitive to cloud cover and independent of illumination conditions, and their change detection performance is known to be particularly interesting [6].

Several categories of temporal events can be distinguished:
• Durable event: a long process, such as building construction. In this case, the temporal signal corresponds to a rising or falling edge.
• Occasional event: this can correspond, for example, to the presence of vehicles on a road or a parking lot, or barges on a river. These events are distributed only sporadically.
• Periodic or quasi-periodic event: this can be the case in particular for vegetation, which undergoes seasonal variation.
• Chaotic evolution: this corresponds, for example, to crops whose backscattering depends both on meteorological conditions and crop stage.
Although the use of time series is particularly interesting in this context, comparatively little work can be found in the literature on the detection of changes in time series of N dates, because access to enough data for a time series is relatively recent, especially for SAR images.
Methods of change detection in multivariate time series can be separated into two classes: sequential or online methods, and retrospective or offline analysis. The first ones are dedicated to the processing of data as they are acquired [22], in order to detect a change point as soon as possible after it occurs. The second ones consider the complete data set at once and look back in time to recognize where a change occurred. In this paper, we are interested in the offline case: we want to detect in a SAR time series all changes that occurred in a given area and a given period.
A first strategy considers bi-temporal change detection tests between all possible pairs of dates exhaustively. The resulting tests are represented by a change detection matrix (CDM) [23] containing all the information on changed and unchanged pixels. This matrix is constructed for each spatial position over the time series by implementing similarity cross-tests based on the coefficient of variation. The NORmalized Cut on chAnge criterion MAtrix (NORCAMA) method [24] is a spectral clustering method that also relies on the change detection matrix. After a speckle filtering step, change criteria between two images are based on a likelihood ratio test using both the original noisy images and the denoised images. Finally, the authors apply a change classification by a normalized-cut-based clustering-and-recognizing method on the change criterion matrix (CCM). However, constructing the change detection matrix is highly combinatorial, since it requires computing an N × N matrix at each pixel for a temporal stack of N images.
An alternative is therefore to detect the different ruptures. The omnibus method proposes testing the hypothesis that all polarimetric signals belong to a single statistical population [25]. This global test can be factored into a set of binary tests, which iteratively check whether element t follows the same polarimetric matrix distribution as the first element of the series. As soon as one test fails, a rupture is declared. It is then possible to consider the following sub-series to search for another possible rupture. In practice, this means that a subset of binary tests is conducted among all possible pairs, corresponding to the first line of the change detection matrix. This approach has been applied to data from different sensors by using a statistical test on the polarimetric Wishart distribution in [21,26,27].
Finally, the last category of approaches focuses on how to define a criterion to judge the homogeneity of a whole statistical population. If the hypothesis is rejected, then it leads to the detection of a potential statistical rupture without trying to date it precisely. Among the criteria for detecting activity in time-series, we find, in particular:

• The MIMOSA method (Method for generalized Means Ordered Series Analysis), detailed in [28], is based on the statistical study of the theoretical joint probability density function between the quadratic and geometric means. Automatic thresholding according to a given false alarm rate makes it possible to decide whether there is a change or not.

• The statistical test involved in the Omnibus method has to decide, for a given time profile, whether the variance corresponds to that of a homogeneous statistical population or not. A likelihood ratio test for the homogeneity of several complex polarimetric covariance matrices leads to the ratio of the geometric mean and the arithmetic mean in the single polarimetric case.

• The temporal coefficient of variation (the ratio of the standard deviation to the mean) is another potentially advantageous candidate to assess temporal variability, because of its simplicity and its remarkable statistical properties for detecting a change. It has been used in [23] and in the visualization method called REACTIV (Rapid and EAsy Change detection on Time-series using the coefficient of Variation). To our knowledge, this is the only visualization method in the literature that is dedicated to change. In this approach, the critical parameter is the temporal coefficient of variation, which is used to encode the color saturation. Indeed, this parameter has been proven to be particularly sensitive to changes.
In this paper, we therefore propose building different criteria that are based on the coefficient of variation for change detection on the entire time-series. The statistical properties of this criterion are at the heart of the strategies for setting the detection functions. Therefore, they are studied and presented in Section 2.
In Section 3, based on these properties, we propose other criteria that are more specific to distinctive categories of changes, including profiles exhibiting a target appearing only on a single date, such as a vehicle, and profiles with high returns of longer duration, typically corresponding to the activity of a building site. Section 4 evaluates the three previous detection scenarios using different simulations. We present the evaluation protocol and give the performance of the methods in Section 5, before concluding in Section 6.

Generalities
The coefficient of variation (CV), also known as the relative standard deviation, is mathematically defined in probability theory and statistics as the ratio of the standard deviation of the signal to its mean value. Therefore, it is considered to be a normalized measurement of the dispersion of a probability distribution. In the specific area of SAR images, the coefficient of variation has already been considered for DInSAR (Differential Interferometry SAR) applications. This technique uses the information from selected points with high backscattering that are stable through time, called Permanent Scatterers [29]. In this framework, the coefficient of variation is one candidate for the detection of these particularly stable scatterers.
In [30,31], the first theoretical studies have shown that the coefficient of variation is also interesting to detect changes in speckle areas. For this reason, the coefficient of variation is central to the visualization algorithm REACTIV that highlights, in bright colors, all changes that occur in a time-series. Therefore, it can be considered that the coefficient of variation has different statistical properties for at least three categories of temporal profiles: that of a permanent scatterer; that of a natural area of stable speckle, not necessarily correlated in time, but which is stationary; and finally, that of a non-stationary area that we generally interpret as a change. These three general cases are represented in Figure 1. The first two of these categories concern cases without any changes. In both of these cases, it is possible to derive useful mathematical properties about the coefficient of variation. Additionally, we discuss these properties in the following paragraphs.

The Permanent Scatterer
At least two strategies can be chosen in order to identify stationary targets. The first one uses the interferometric coherence level; its main difficulty lies in the spatial estimation process, which implies a loss of spatial resolution. The second one relies on the analysis of the amplitude values Ak along the time axis through the so-called dispersion index, which is precisely the coefficient of variation.
The time series of the amplitude values of each pixel containing a permanent scatterer follows the Rice distribution [29,32]: it corresponds to the superposition of the backscattering of a deterministic target with constant phase and amplitude, and an additive fully developed speckle. If µc is the amplitude of the deterministic signal and µ is the parameter of the Rayleigh distribution associated with the speckle noise, the probability density associated with a given value x can be written (with µ and µc ∈ R+):

p(x) = (2x/µ²) exp(−(x² + µc²)/µ²) I0(2xµc/µ²),

where I0 is the modified Bessel function of the first kind of order 0. Thus, the shape of the Rice distribution depends on the ratio between the amplitude of the deterministic component and the amplitude of the speckle component.
The coefficient of variation γ is defined as the ratio of the standard deviation to the average. It can be expressed in terms of the first two statistical moments m1 and m2 of any distribution:

γ = √(m2 − m1²)/m1 = √(m2/m1² − 1).

It is possible to express m1 and m2 in terms of the law parameters. One of the most elegant ways is to go through the second-order characteristic function expressed through Mellin transforms. The analytic expression of the first moments of the law can be obtained by involving the confluent hypergeometric function, also called the Kummer function, denoted 1F1. All of the mathematical demonstrations are described in [33]. The expressions of the moments thus obtained are not trivial. However, with λ = µc/µ, they reduce to m1 = (µ√π/2) 1F1(−1/2; 1; −λ²) and m2 = µ²(1 + λ²), so that an analytical expression of the coefficient of variation can be obtained either in terms of the Kummer function 1F1, or in terms of the modified Bessel functions (or hyperbolic Bessel functions) of the first kind I0 and I1. These expressions show that the coefficient of variation only depends on the ratio λ = µc/µ, that is, on how much the deterministic target emerges from the local noise.
Moreover, it is possible to find approximations for the asymptotic cases:
• When µc tends towards infinity (a target whose value is infinitely greater than that of the speckle), the distribution tends to a normal law N(µc, µ/√2), and the coefficient of variation tends towards 0, with the asymptotic approximation γ = 1/(√2 λ).
• When µc (or λ) tends towards 0, we recover the Rayleigh law followed by fully developed speckle. The coefficient of variation then corresponds to the fully developed speckle case, γ = 0.52272. For small λ, the asymptotic relation γ = 0.52 − 0.15 λ⁴ holds, and it is satisfactory for λ < 0.15.
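These expressions can be checked numerically. The sketch below is ours, not the paper's code; it assumes the normalization E[X²] = µ²(1 + λ²) with µ = 1, and uses the I0/I1 closed form of the Rice mean:

```python
import numpy as np
from scipy.special import ive  # exponentially scaled modified Bessel I_n

def rice_cv(lam):
    """Coefficient of variation of a Rice amplitude versus lambda = mu_c/mu.
    Uses the I0/I1 closed form of the Rice mean; mu is normalized to 1."""
    x = lam**2 / 2.0
    # First moment: m1 = (sqrt(pi)/2) * exp(-x) * [(1+2x) I0(x) + 2x I1(x)]
    m1 = (np.sqrt(np.pi) / 2.0) * ((1 + 2 * x) * ive(0, x) + 2 * x * ive(1, x))
    m2 = 1.0 + lam**2                      # second moment with mu = 1
    return np.sqrt(m2 / m1**2 - 1.0)

print(rice_cv(0.0))     # tends to the Rayleigh value ~0.5227
print(rice_cv(20.0))    # tends to 1/(sqrt(2)*lambda) for strong targets
```

The exponentially scaled Bessel functions (`ive`) avoid overflow for large contrasts, where the unscaled I0 and I1 grow exponentially.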
In order to know how the estimation of the coefficient of variation behaves, we are also interested in the variance of this estimator. Kendall and Stuart [34] propose computing the variance of a function g(m1, m2) by performing a first-order limited expansion of the function around the values m0,1 and m0,2. This method, applied to the coefficient of variation expressed in terms of m1 and m2 in Equation (2), leads to the formula proposed in [35], where N is the number of images in the sequence and mn denotes the n-th order moment. It is of no use to expand this variance for the coefficient of variation of Rice's law: with the even moments involving exponential functions and the odd moments involving modified Bessel functions of the first kind, no simplification can be expected. However, it has been demonstrated that the moment of order n is proportional to µⁿ, and we can deduce from this relation that the variance of the estimator of the coefficient of variation only depends on the parameter λ and on N, so that it can be written var(γ) = h(λ)/N for some function h. In all cases, the coefficient of variation associated with a Rice law is statistically lower than that of the corresponding Rayleigh law, all the more so as the ratio µc/µ is large.

Coefficient of Variation for a Speckle Area without Change over Time
A speckle area is typically a forest or an area of bare soil. In SAR images, it is commonly accepted that the amplitude of a texture-free speckle distribution follows a Rayleigh-Nakagami law [36]:

p(x) = 2 L^L x^(2L−1) / (Γ(L) µ^(2L)) exp(−L x²/µ²),

where µ is the scale parameter, L the Equivalent Number of Looks (ENL), and Γ the gamma function. The case of fully developed speckle corresponds to L = 1 and Rayleigh's law. The Rayleigh-Nakagami law generalizes the framework of the study to multilooked speckle, as in the case of Sentinel-1 GRD data. Rice's law only addresses fully developed speckle, as observed in SLC data. The case of a deterministic target in multilooked data has not been studied. Indeed, its formalism would be even more complicated: it would be necessary to study the spatial averaging of an area containing a deterministic component in one pixel and pure speckle around it. This study is out of the scope of this paper. However, it is useful to show, for speckle alone, how the L parameter influences the behavior of the coefficient of variation.
Generally, the parameters of this law are spatially estimated from a homogeneous area. In our approach, we are interested in temporal statistics. We then assume that, for a pixel belonging to a Rayleigh-Nakagami law speckle area, and having not been perturbed by a change, the various realizations of amplitudes over time of this pixel also follow a Rayleigh-Nakagami law. In a way, we consider that the hypothesis of spatial ergodicity can be transformed into a hypothesis of temporal ergodicity. In this case, we will speak in the following of stable speckle. This stability consideration concerns the amplitudes, not the phases.
From the Rayleigh-Nakagami law, it is possible to derive the expression of the empirical moments. We have in particular [35]:

m1 = µ Γ(L + 1/2)/(√L Γ(L)),  m2 = µ²,

which leads to the following expression of the coefficient of variation, defined as the ratio of the standard deviation to the average:

γ = √(L (Γ(L)/Γ(L + 1/2))² − 1).

This expression shows a first interesting property: the coefficient of variation has the same value for all stable speckle zones, whatever the average amplitude of this speckle. In the particular case L = 1, we find γ = 0.522723.
Knowing the moments of order 1 to 4 of the Nakagami law, it is possible to write the variance of this estimator as a function of L. In the particular case L = 1, we have var(γ) = 0.137881/N and σ(γ) = 0.3713/√N, respectively, for the variance and the standard deviation.
The application of a multilook to a SAR image has the effect of modifying the L parameter. The Sentinel-1 GRD data are delivered with a displayed equivalent number of looks of L = 4.9, calculated for the theoretical value at mid-swath and mid-orbit [37]. For a speckle area, this value of L gives γ = 0.2286, var(γ) = 0.0216/N, and σ(γ) = 0.1616/√N. This formula shows a second attractive property of this parameter: the standard deviation of the coefficient of variation decreases with the number of images N as 1/√N. We can thus expect that, on temporally stable speckle areas, the coefficient of variation will be constant, with a variance decreasing with the number of images in the sequence.
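These numerical values are easy to verify. The sketch below assumes the closed form γ(L) = √(L (Γ(L)/Γ(L + 1/2))² − 1) for the CV of L-look Nakagami speckle, evaluated through log-gamma for numerical stability:

```python
import numpy as np
from scipy.special import gammaln  # log-gamma, numerically stable

def nakagami_cv(L):
    """Temporal CV of stable L-look speckle (Rayleigh-Nakagami amplitudes).
    Independent of the scale parameter mu."""
    log_ratio = gammaln(L) - gammaln(L + 0.5)   # log(Gamma(L)/Gamma(L+1/2))
    return np.sqrt(L * np.exp(2 * log_ratio) - 1.0)

print(nakagami_cv(1.0))   # fully developed speckle: ~0.5227
print(nakagami_cv(4.9))   # Sentinel-1 GRD displayed ENL: ~0.2286
```

The CV decreases monotonically with L: the more looks, the smoother the speckle, and the lower the no-change baseline of the criterion.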
A last significant result is that the coefficient of variation, calculated on amplitudes following a Rayleigh-Nakagami law, seems to follow a Rayleigh-Nakagami law again. This property remains a postulate, verified on many simulations and statistical analyses.

Synthesis on the Behavior of the Coefficient of Variation
We have theoretically analyzed the coefficient of variation (CV) for the temporal amplitude profile of a pixel, modeled as the superposition of a fully developed speckle and a deterministic component.
The following properties have been demonstrated:
• The higher the deterministic component, the lower the coefficient of variation. It tends to 0 as the relative power between the deterministic component and the speckle increases, and it only depends on this ratio.
• When the deterministic component tends towards 0, the CV tends towards 0.52. We are then in the presence of pure decorrelated speckle. In this case, one empirically finds that the distribution of the CV corresponds to a Rayleigh law.
• In all cases, the variance of the estimator is proportional to 1/N, where N is the number of images in the time-series.

Change Detection Criteria
Thanks to the properties highlighted previously, we propose several decision criteria for change detection, empirically based on the coefficient of variation.

Problem Formulation
Let S be a sequence of N co-registered SAR images acquired at times {t1, t2, ..., tN}. In this paper, a purely temporal approach is proposed. That means that we develop criteria defined for each pixel independently, by only considering its temporal profile x = {x(t1), ..., x(tN)}, where x(t) is the amplitude of the pixel in the image acquired at time t.
From this profile, a first intention is to determine whether there is a change, of any nature whatsoever, with at least one break. A second intention is to detect a specific kind of change. However, we do not determine the different break dates, unlike what is intended in [21]. Therefore, we propose the coefficient of variation as a generic change detector, and other criteria for specific change detection.
Change detection can be formulated as a classical detection problem using hypothesis testing, with two hypotheses, H0 = "no change" against H1 = "a change". The mathematical expression of the test then simply corresponds to the thresholding of the criterion map: decide H1 if f(x) > λ and H0 otherwise, with f the criterion function and λ the threshold value. We describe the different functions used for generic and specific changes in the next subsections.

Generic Change Detection
In Section 2, we have shown that the more stable the signal, the lower its coefficient of variation. Our first generic change detector is simply this criterion. In this case, the function f is defined by f1(x) = γ(x), the temporal coefficient of variation of the profile, with γ(x) the ratio of the empirical standard deviation to the empirical mean of x.
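As a minimal sketch (the array layout, threshold value, and toy scene below are illustrative assumptions, not the paper's implementation), the generic detector amounts to:

```python
import numpy as np

def generic_change_map(stack, threshold):
    """f1: temporal coefficient of variation per pixel, then thresholding.
    stack: co-registered SAR amplitudes, shape (N, H, W)."""
    cv = stack.std(axis=0) / stack.mean(axis=0)
    return cv, cv > threshold            # criterion map, binary change map

# Toy example: stable Rayleigh speckle except one pixel with a transient target
rng = np.random.default_rng(1)
stack = rng.rayleigh(scale=1.0, size=(64, 32, 32))
stack[30:34, 10, 10] += 8.0              # bright target on 4 of 64 dates
cv, changes = generic_change_map(stack, threshold=0.7)
print(changes[10, 10], changes[0, 0])
```

On stable speckle the temporal CV stays near 0.52 with a standard deviation of about 0.37/√64, so a threshold of 0.7 leaves unchanged pixels untouched while the perturbed pixel stands out.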

Point-Event Change Detection
We call a point-event, or one-time event, a time profile for which a high-backscattering deterministic target is seen at a single date of the profile. In practice, this will correspond to the detection of a vehicle. For example, the probability of different boats appearing in the same place at different moments is small, even in a long sequence. The same is true for vehicles in the desert.
The following empirical criterion is proposed. It considers the ratio of two coefficients of variation, the first one computed on the profile without its minimal amplitude value, and the other one computed on the profile without its maximal value:

f2(x) = γ(x \ {x(tmin)}) / γ(x \ {x(tmax)}),

where tmin and tmax are the indices at which the signal is minimal and maximal, respectively, and the symbol \ denotes the set difference. Note that a sequential version of this criterion is directly obtained as a special case: it consists of considering the ratio of the coefficients of variation for the profiles without the first and without the last date, respectively. This makes it possible to obtain a detector of an event arriving at the last acquisition date. In this case, the threshold is applied to γ(x \ {x(t1)}) / γ(x \ {x(tN)}). We also propose comparing the f2 criterion with its counterpart f3, computed in the same way on the amplitude means m instead of the coefficients of variation.
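These point-event criteria can be sketched as follows; the exact trimmed-profile forms of f2 and f3 below are our reading of the description above, not code from the paper:

```python
import numpy as np

def cv(x):
    """Empirical coefficient of variation of a 1-D profile."""
    return x.std() / x.mean()

def point_event_criteria(x):
    """f2: ratio of CVs of the profile without its min and without its max;
    f3: the same ratio computed on the means instead of the CVs."""
    without_min = np.delete(x, np.argmin(x))
    without_max = np.delete(x, np.argmax(x))
    f2 = cv(without_min) / cv(without_max)
    f3 = without_min.mean() / without_max.mean()
    return f2, f3

rng = np.random.default_rng(2)
flat = rng.rayleigh(1.0, size=50)
spiked = flat.copy()
spiked[20] += 10.0                       # one-date deterministic target
print(point_event_criteria(flat))
print(point_event_criteria(spiked))      # f2 increases sharply for the spike
```

Removing the maximum deletes the event from the profile, so the denominator collapses back to the pure-speckle CV while the numerator stays inflated, which is what makes f2 specific to single-date events.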

Step Change Detection
Among the changes that do not belong to the previous category, we find other types, such as chaotic changes in radiometry or the presence of deterministic targets over more extended periods.
The first ones are observed in practice in all agricultural areas. The coefficient of variation is sensitive to these variations, which are related to seasonal changes, moisture changes, or plant growth. These variations can be considered to be randomly spread over time. Because changes related to agricultural surfaces are not of interest in some applications, we are more interested in the second category of changes: changes related to objects, such as infrastructure construction or destruction. The typical profiles generally include steps, that is to say, several contiguous dates with a strong deterministic signal.
We then propose empirical criteria that are dedicated to this case. The idea is to split the sequence into two parts, to compute the ratio between the coefficient of variation on both sides. Subsequently, we take advantage of the temporal redundancy by scanning the different possible cut-off points and by averaging the results thus obtained.
We then expect two typical behaviors:
• If the temporal evolution is simply a radiometric hazard, such as those encountered with agricultural surfaces, then the coefficients of variation of the different sub-parts of the profile remain statistically equivalent. The ratio is then expected to be close to 1.
• On the other hand, in the presence of a change such as an infrastructure construction or a particular object, several cuts of the profile will lead to significantly different coefficients of variation. The ratios should then deviate from 1.
Let p be the variable corresponding to the break date of the sequence. We obtain two sub-parts of the temporal profile, noted x(t), t ∈ {1...p}, and x(t), t ∈ {p+1...N}. The coefficients of variation computed from these profiles are noted γ(x(t), t ∈ {1...p}) and γ(x(t), t ∈ {p+1...N}), respectively. For the ratio measure to be statistically meaningful, the calculation of the coefficient of variation requires a sufficient number of images. If M denotes this minimum number of images, then p must vary between M and N − M.
Subsequently, the function f for the hypothesis test is defined by:

f4(x) = 1 − (1/(N − 2M + 1)) Σp min(γ(x(t), t ∈ {1...p}) / γ(x(t), t ∈ {p+1...N}), γ(x(t), t ∈ {p+1...N}) / γ(x(t), t ∈ {1...p})),

where p runs from M to N − M. In this expression, the min operator ensures that each term lies between 0 and 1. Moreover, we take one minus the averaged value in order to obtain values close to 0 in the case of no change and close to 1 in the case of change.
Once again, we propose considering the same idea based on the mean amplitudes m1 instead of the coefficient of variation, which defines the criterion f5. To summarize, we have proposed five detection criteria. The first one, the coefficient of variation itself, is dedicated to any type of deviation from stationary speckle; f2 and f3 are dedicated to "short" events; the last two, f4 and f5, are dedicated to longer step events. In the following, we investigate their performance in simulations by varying the parameters of these canonical profiles, and then on real data, as summarized in Figure 2.
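The step criteria can be sketched as follows. The averaged min-ratio form is our reading of the description above (with M the minimum sub-profile length), not the paper's code:

```python
import numpy as np

def step_criterion(x, M=10, use_cv=True):
    """f4 (use_cv=True) / f5 (use_cv=False): scan every cut point p in
    [M, N-M], take the min of the two sub-profile ratios so each term lies
    in (0, 1], average over p, and return one minus that average."""
    stat = (lambda s: s.std() / s.mean()) if use_cv else np.mean
    N = len(x)
    terms = []
    for p in range(M, N - M + 1):
        a, b = stat(x[:p]), stat(x[p:])
        terms.append(min(a / b, b / a))
    return 1.0 - float(np.mean(terms))

rng = np.random.default_rng(3)
flat = rng.rayleigh(1.0, size=100)
step = flat.copy()
step[50:] += 5.0                   # durable bright event on half the dates
print(step_criterion(flat), step_criterion(step))
```

For the flat profile, every cut yields two statistically equivalent sub-profiles and the criterion stays near 0; for the step profile, cuts near the break produce strongly unbalanced CVs and the criterion approaches 1.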

Analysis of Performance Thanks to Statistical Simulations
We rely on a large number of simulations to evaluate the detection performance of the previous criteria. These simulations consider different kinds of change; we therefore analyze different kinds of profiles.

Simulation on One-Point Events
Here, we consider profiles containing, for the most part, uncorrelated speckle, to which the contribution of a deterministic target with amplitude µc is added at a single date. In this context, we make both N and µc/µ1 vary, where N is the number of images of the temporal stack, and µc/µ1 is the ratio between the amplitude of the target µc and the mean amplitude of the speckle µ1. Note that µ1 is not exactly equal to the µ parameter, but is deduced from it by Equation (8). Given the particular dynamics of radar images, we vary this contrast linearly in decibels. Two examples of such profiles are given at the top of Figure 3.
We simulate a large number of profiles in order to estimate the performance of a given criterion. For each category of changes, we simulate K = 10^6 reference profiles without any change and K profiles with a change. The investigated criterion is calculated for these two populations, which yields K estimates of the criterion with and without change. From these K realizations, the empirical distributions of the criterion with and without change are estimated. It is then straightforward to compute a probability of detection/false alarm curve for this criterion and this scenario by varying the threshold.
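This protocol can be sketched for the one-point-event scenario with the coefficient of variation as criterion (a simplified illustration; the contrast value and a reduced K are arbitrary choices here):

```python
import numpy as np

rng = np.random.default_rng(4)

def pd_at_pfa(crit_no_change, crit_change, pfa=0.001):
    """Empirical probability of detection at a fixed false-alarm rate:
    the threshold is the (1 - pfa) quantile of the no-change population."""
    thr = np.quantile(crit_no_change, 1.0 - pfa)
    return float((crit_change > thr).mean())

# K profiles without change (pure speckle) and K with a one-point event
K, N, contrast = 100_000, 50, 6.0
speckle = rng.rayleigh(1.0, size=(K, N))
changed = speckle.copy()
changed[np.arange(K), rng.integers(0, N, size=K)] += contrast

f1 = lambda x: x.std(axis=1) / x.mean(axis=1)    # generic CV criterion
print(pd_at_pfa(f1(speckle), f1(changed), pfa=0.001))
```

Sweeping the threshold (or equivalently the PFA) over the two empirical distributions traces the full detection/false-alarm curve for the scenario.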
In the following, we use colored representations to compare the detection performance of three different criteria obtained by simulation. The three probabilities of detection are coded as a Red, Green, and Blue (RGB) triplet of floats in the range 0-1. A zero probability of detection for all three criteria gives the darkest color, black, and a maximum probability of detection PD = 1 for all criteria gives white. When one criterion has the strongest probability of detection, the color is a hue near the corresponding primary color (red-ish, green-ish, or blue-ish); when two criteria have the same strongest intensity, the color is a hue of the corresponding secondary color (a shade of cyan, magenta, or yellow).
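This color coding can be reproduced in a few lines (a sketch; the assignment of one criterion per channel is whatever convention the figure at hand uses):

```python
import numpy as np

def pd_maps_to_rgb(pd_r, pd_g, pd_b):
    """Fuse three probability-of-detection maps (values in [0, 1]) into an
    RGB image: zero PD everywhere gives black, PD = 1 everywhere gives
    white, and the dominant criterion pulls the hue toward its primary."""
    rgb = np.stack([pd_r, pd_g, pd_b], axis=-1)
    return np.clip(rgb, 0.0, 1.0)

black = pd_maps_to_rgb(np.zeros((4, 4)), np.zeros((4, 4)), np.zeros((4, 4)))
white = pd_maps_to_rgb(np.ones((4, 4)), np.ones((4, 4)), np.ones((4, 4)))
print(black[0, 0], white[0, 0])
```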
For the one-point event, we have compared the three criteria f1 (coefficient of variation), f2 (ratio of coefficients of variation, without the min and max values), and f3 (ratio of amplitude means), fixing the PFA at 0.1%. In the colored representation of Figure 3, the Red channel corresponds to the probability of detection for the f1 criterion, the Green channel to f3, and the Blue channel to f2. The main findings that we can draw from Figure 3 are as follows:
• For N < 20, the coefficient of variation f1 and the intensity mean ratio f3 should be preferred.
• For N > 20, a target can be detected as soon as its contrast exceeds 8 dB.
• For N > 20, the ratio criteria f2 and f3 perform better than f1 for contrasts between 8 and 13 dB; beyond 13 dB, all criteria perform equally.

Mixture of Two Speckles
Here, we consider profiles mixing two different populations of speckle: a speckle of a given level in proportion P, and a speckle of higher level for the rest of the profile (proportion 1 − P). For proportions close to one half, this type of profile can correspond to agricultural plots, for which different backscattering levels may appear. Note that when P tends to 0 or 1, we approach the profiles corresponding to one-off events.
In this case, we have again set the number of images to N = 100. We then varied P, the percentage of dates at which speckle 1 is present relative to speckle 2, and the ratio between the two average speckle amplitudes.
The results are represented in Figure 4. The coefficient of variation (f1) remains very sensitive to such a mixture: for natural areas with radiometric variations, it will lead to change detection. The f3 criterion remains a little less sensitive than f1, but will still detect such events as a change in the case of mixing over short durations.

Simulation on P-Point Events
We now consider profiles containing speckle, on which a deterministic target is superimposed for a proportion of dates equal to P. In this case, we have set the number of images to N = 100. We make both P and µc/µ1 vary, where µc/µ1 is the ratio between the amplitude of the target and the amplitude of the speckle. This type of profile typically corresponds to a site where a deterministic signal is present on several dates, as is typical of building construction. Figure 5 illustrates two examples of a profile with a deterministic component over 25% of the observation duration, for two contrast values.
In a first simulation, illustrated in Figure 5, the probability of detection is represented for the three criteria f1, f2, and f3 for varying proportions P and varying contrasts. These criteria are independent of the order of the values of the profile. The results are very similar to the ones presented in Figure 4:
• The criterion f2 is the only one that is specific to a very time-limited change.
• The criterion f3 can detect steps with a duration up to 20% of the total duration.
• The general criterion f1 detects only steps with a duration of less than half the observation period. For longer events, it fails to detect a change, probably because the deterministic component of the step becomes predominant in the profile, which makes the coefficient of variation decrease again.
In another simulation, illustrated in Figure 6, we have fixed different values of P, the proportion of points that constitute a step in the profile. Then, both the starting position of this step and the contrast between the step and the clutter are varied. The probability of detection is represented for the three criteria f1, f4, and f5.
These simulations lead us to the following observations:
• Sensitivity to the relative duration of the step (P):
- the f1 criterion only manages to detect steps with durations lower than half of the observation duration;
- the f4 criterion can only detect steps with durations greater than half the observation duration; beyond a certain duration (P > 80%), it is the only criterion that manages to detect the step; and,
- the f5 criterion is the most versatile with respect to the duration of the step.
• Sensitivity to the date of the beginning of the step by construction, only the criterion f 1 does not depend on the position of the step; -the criteria f 4 and f 5 depend on this position. When the step is centered, the detection is more difficult, particularly for brief events (P < 25%) or long events (P > 75%); and, -for high contrast, more than 10 dB, there is no more impact on performances.
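The sensitivity of the temporal coefficient of variation to such step profiles can be reproduced with a short numerical sketch (our own minimal speckle model and parameter names, not the exact simulation set-up behind Figures 5 and 6):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_temporal_cv(n_dates=100, p=0.0, contrast_db=10.0, n_trials=2000):
    """Average temporal coefficient of variation (CV) of amplitude profiles in
    which a deterministic component is present over a proportion p of the
    dates. Fully developed speckle is modeled as the modulus of a circular
    complex Gaussian; the deterministic amplitude is added coherently
    (illustrative model only)."""
    n_event = int(round(p * n_dates))
    z = (rng.standard_normal((n_trials, n_dates)) +
         1j * rng.standard_normal((n_trials, n_dates))) / np.sqrt(2)
    z[:, :n_event] += 10 ** (contrast_db / 20)   # target amplitude from dB contrast
    amp = np.abs(z)
    return float((amp.std(axis=1) / amp.mean(axis=1)).mean())

cv_no_change = mean_temporal_cv(p=0.0)                 # pure speckle profile
cv_change = mean_temporal_cv(p=0.25, contrast_db=10)   # step over 25% of the dates
```

For pure Rayleigh speckle, the CV stays close to its theoretical value sqrt(4/pi - 1) ≈ 0.52 regardless of the speckle scale, and the superimposed step raises it markedly, which is the behavior exploited by the f1 criterion.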

Performance Evaluation on Real SAR Data
Now that our criteria have been evaluated through statistical simulations, we consider validation cases on real data.

Choice of Test Sites and Ground Truth
The first site of interest is the Saclay area (near Paris, France), observed between June 2015 and June 2017. Indeed, this site includes many construction sites related to the development of the University Paris-Saclay. Construction dates are available thanks to the local authorities.
The considered area is approximately 15 km × 12 km. A precise Ground Truth database has been established by using two geospatial vector databases: a topographic database produced by IGN (the French National Geographic Agency) and the OSM database, corresponding, respectively, to dates before and after the observed time range. We kept several standard semantic classes to help interpret the results: water surfaces, crops, forests, roads, and buildings. Both vector files have been converted into raster images using the GDAL rasterize utility, on a 10 m × 10 m common georeferencing grid, and have been subtracted from one another. Subsequently, all of the changes found were manually validated or invalidated using past versions of high-resolution optical images on the Google Earth timeline. The resulting ground truth in Figure 7 shows different classes. Among them, parking areas, building construction or destruction, and ground changes are assigned to a generic change class used in the performance evaluation. In all, there are 140 changes related to buildings, 111 changes corresponding to parking areas, and 42 changes related to ground areas. This site is observed by the C-band Sentinel-1 sensor. 64 IW images have been selected over the period, in interferometric conditions, in ascending mode. Change detection methods have been applied to L1-SLC (Single Look Complex) images and not to GRD (Ground Range Detected) ones. We have made this choice for several reasons:
• the statistical properties of speckle are preserved in the whole SLC image and do not require taking into account the effect of multilooking;
• the average phase angles between the polarimetric channels are preserved; and,
• in future works, coherent methods could be applied and compared with the present results.
We have then computed all of the criteria maps in the SLC reference grid, before exporting them to the Lambert Conformal Conic projection coordinate system of the Ground Truth. In order to achieve this, the dense non-rigid transformation between SLC and GRD images has been computed thanks to the GeFolki algorithm [38]. Subsequently, this transformation can be applied to any product computed from the SLC time-series.
In order to take single-event targets into consideration, a second validation scenario has been considered, with 68 L-band UAVSAR images over Grizzly Bay, near San Francisco. This dataset is extracted from the SanAnd_05508 POLSAR stack.
Several boats are visible in the images. They belong to the Fifth Reserve Fleet, which is docked off the coast of Benicia. The ground truth has been established by manually segmenting the vessels on each image of the stack. We have implicitly assumed that all of the salient objects on the water in the images are boats. We have manually checked that they only appear on one date.
For some pixels, an overlap of two boats present at two different dates might occur. In the following, we will estimate the detection performance only on the parts outside these overlaps, in order to correspond precisely to the "single event" case.
Note that, in this data set, we do not strictly have a Ground Truth, since we do not use information external to our images; this manual data entry is intended to observe statistical behaviors on real signals.

Performance Evaluation on Sentinel-1
The Saclay area contains a significant proportion of cultivated fields and vegetation, in addition to urban areas. The ground truth mainly involves changes in building elements, some elements of ground modifications, and the parking areas. In this context, we will only evaluate the generic and specific detection of step signals. Besides, we compare our results with those obtained with two other reference methods:
• the reference method from the literature on change detection in SAR time series, the so-called "omnibus" method [21]; and
• a baseline change detection method based on deep learning [39], applied to Sentinel-2 optical images of the same area. This method is a fully convolutional encoder-decoder modified into a Siamese architecture, using skip connections to improve the spatial accuracy of the results. It has been trained end-to-end from scratch on Sentinel-2 images, and it surpassed the state of the art in change detection, both in accuracy and in inference speed, without the use of post-processing. In this paper, it has only been tested on the building changes for which it has been trained.
The first method consists of a statistical homogeneity test based on a likelihood-ratio test. The authors propose two versions of this method. The first version considers a test of global homogeneity of the complete temporal profile. It is possible to demonstrate that, in this case, the test amounts to thresholding a criterion proportional to the ratio between the geometric mean and the arithmetic mean of the intensity signal. When polarimetric information is available, the determinant of the polarimetric covariance matrix replaces the single-channel intensity.
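As a sketch, this single-channel global test can be implemented in a few lines (function and variable names are ours; the multiplicative constants of the likelihood-ratio statistic are omitted):

```python
import numpy as np

def omnibus_criterion(intensity_stack):
    """Global homogeneity criterion for a single-channel intensity time-series:
    the ratio of the geometric mean to the arithmetic mean of the intensities,
    computed per pixel. The ratio lies in (0, 1]; it equals 1 only for a
    constant profile (AM-GM inequality), so low values indicate a change.
    intensity_stack: array of shape (n_dates, rows, cols), intensities > 0."""
    geo_mean = np.exp(np.log(intensity_stack).mean(axis=0))
    ari_mean = intensity_stack.mean(axis=0)
    return geo_mean / ari_mean

# A profile with a strong step at one date is flagged by a low ratio:
profile = np.ones((4, 1, 1))
profile[3] = 9.0
low_ratio = omnibus_criterion(profile)[0, 0]   # well below 1
```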
In a second version of the code, several homogeneity tests are carried out iteratively on subsets of the profile: the intensity considered at date j is compared with the intensity population estimated for all previous dates without breaks. This second version of the code is more comprehensive, as it finds the location of all probable breaks. However, it is much more expensive in computing time. In this paper, the first version will be called the omnibus method and the second version the full omnibus method.
Regarding the deep learning method, the comparison must be made with much more caution. Change detection has been performed in a bi-date manner, between the start date and the end date. We took care to select images without cloud cover. However, it is likely that many changes over a limited period within the range cannot be seen in this context. On the other hand, the learning has mainly been done on changes relating to new or missing buildings and, therefore, only the performances related to this type of change are considered later. This comparison aims to demonstrate the difficulty of detecting change from optical images, even with today's most advanced deep learning algorithms, as compared to detection from radar images.
The following comparisons have been made by generating Receiver Operating Characteristic (ROC) and Precision-Recall curves. These graphical plots illustrate the ability and precision of our detection parameter, as its discrimination threshold is varied.
The ROC curve is created by plotting the Probability of Detection (PD) against the Probability of False Alarm (PFA) at various threshold settings. The Precision-Recall curve plots the Precision against the Recall or Probability of Detection (PD) at various threshold settings.
The Probability of Detection (PD) is the percentage of change-class objects that are detected. The Recall is equal to the Probability of Detection (PD). We consider an object as detected as soon as a minimum of 5% of the pixels of the entire object have been detected. We justify this choice by the very sparse nature of the radar signal: for a given building footprint, only a few points are bright points. Thus, we cannot hope to detect a break on all of the pixels of the considered footprint.
The Probability of False Alarm (PFA) is given by the percentage of pixels detected that are not lying in the change class of the ground truth.
Lastly, the Precision is the percentage of pixels declared as detected that effectively correspond to a change in the Ground Truth. Figure 8 shows the performance for the detection of changes in all classes. As specified above, we focus our analysis on low false alarm rates. Indeed, we believe that few operational scenarios can accept a 5% false alarm rate, particularly when the false alarms are densely distributed across the whole image. From this point of view, we cannot expect a detection rate higher than 70%. For general changes, the cumulative ratio of CV (f4) and the full omnibus method obtain the best performances. Nevertheless, when we restrict the change detection to the building class, Figure 9 shows that several methods achieve a much better result, such as 70% detection for a false alarm rate below 1%. This analysis indicates that the main difficulty lies in parking areas and ground changes, which cannot be robustly detected on these Sentinel-1 images.
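Under the evaluation protocol above, the three quantities can be computed from a criterion map and a labeled ground truth as follows (a sketch with our own function and variable names; object labels are assumed to be contiguous integers 1..n, with 0 for the no-change background):

```python
import numpy as np

def detection_metrics(score, threshold, object_labels):
    """Object-level PD (5% rule) plus pixel-level PFA and Precision.
    score: per-pixel change criterion map; object_labels: integer map where
    0 means "no change" and k > 0 identifies the k-th ground-truth object."""
    detected = score > threshold
    n_objects = int(object_labels.max())
    # pixels per object and detected pixels per object (index 0 = background)
    sizes = np.bincount(object_labels.ravel(), minlength=n_objects + 1)
    hits = np.bincount(object_labels.ravel(),
                       weights=detected.ravel().astype(float),
                       minlength=n_objects + 1)
    # an object counts as detected when >= 5% of its pixels exceed the threshold
    pd = float(np.mean(hits[1:] / sizes[1:] >= 0.05))
    background = object_labels == 0
    pfa = float(detected[background].mean())   # false alarms on no-change pixels
    n_detected = detected.sum()
    precision = float(detected[~background].sum() / n_detected) if n_detected else 1.0
    return pd, pfa, precision
```

Sweeping the threshold and collecting the (PFA, PD) or (PD, Precision) pairs yields the ROC and Precision-Recall curves discussed above.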
For change detection restricted to buildings in Figure 9, except for the deep learning method, all of the methods obtain similar performances at low false alarm rates, and two methods are better at higher false alarm rates: the cumulative ratio of the coefficient of variation f4 and the cumulative ratio of mean amplitude f5. When qualitatively analyzing the criteria maps, we can see that most false alarms are due to a high sensitivity to changes in agricultural areas, except for f4. It is likely that the electromagnetic mechanisms, and therefore the backscattering levels, vary significantly from one date to another, depending on weather conditions and crop growth. In this case, the simulations have shown that the cumulative ratio method f4 is less sensitive to these kinds of changes than f1 and f3. On the other hand, the main problem with f4, highlighted by our previous simulations, is the difficulty in detecting events shorter than half the duration of observation. Thus, the f4 and f5 criteria have complementary behaviors for comparable performance.
Finally, we note the significantly worse performance of the deep learning method, highlighting the difficulty of detecting changes in optical imagery.
In a second step, the performances have been calculated after removing any zone smaller than 20 pixels from the ground truth and from the change detection results. This restriction mainly affects the performance on building detection, as shown in Figure 10, where the improvement is significant, mainly for two methods: the cumulative ratio of CV (f4) and the full omnibus method. In summary, two methods show excellent performance for change detection on large enough buildings: the cumulative ratio of CV (f4) and the full omnibus method. Nevertheless, remember that the full omnibus method is time-consuming, while the other method is very fast. On the dataset composed of 64 dual-polarimetric images of 1133 × 3205 pixels, the execution time on an Intel(R) Core(TM) i7-4710HQ CPU @ 2.50 GHz is 4 min for the cumulative ratio of CV (f4) and 50 min for the full omnibus method. It is 6 s for the CV (f1) or the simple omnibus method.

Performance Evaluation on Boat Detection on UAVSAR
On this second set of data, we address a very different case from the previous one: that of one-point events, with boat detection as an example.
Change detection on water surfaces has its own specificities. However, this dataset of boats was chosen for reasons of simplicity: in this example, it was possible for us to establish a manual ground truth. On the water, each vehicle is easily identifiable and interpretable as a target. Therefore, it was possible for us to manually delineate the boats and to check, for each of them, that the time profile was indeed of the type sought.
A performance analysis conducted at the object level, as previously, would have no meaning here: firstly, because some criterion thresholds allow us to obtain a probability of detection equal to 100% for a null probability of false alarm; secondly, because we do not have a Ground Truth external to the data.
For these reasons, we analyze performances on this real data set, statistically at the pixel scale.
We have selected all of the pixels belonging to a single boat without superposition, represented in gray in Figure 11. The statistical distributions of the three criteria f1, f2, f3, and of the criterion of the omnibus method, were computed for all the pixels belonging to the boats, and then for the pixels of a sea area. We can then use these distributions to compute the ROC curve, by deriving the True Positive Rate and the False Positive Rate for different threshold values. Figure 12 presents the resulting ROC curves. The f2 and f3 criteria are the ones that lead to the best performances. Note that the y-axis starts at 80% detection. These two specific criteria are statistically better than the generic coefficient of variation criterion and the polarimetric omnibus criterion computed on the diagonal covariance matrix.
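This pixel-scale construction of the ROC curve from the two empirical score distributions can be sketched as follows (synthetic Gaussian samples stand in for the real criterion values of the boat and sea pixels):

```python
import numpy as np

def roc_from_distributions(scores_change, scores_no_change):
    """ROC curve from two empirical score distributions: each candidate
    threshold yields one (false positive rate, true positive rate) pair."""
    thresholds = np.unique(np.concatenate([scores_change, scores_no_change]))
    tpr = np.array([(scores_change >= t).mean() for t in thresholds])
    fpr = np.array([(scores_no_change >= t).mean() for t in thresholds])
    return fpr, tpr

rng = np.random.default_rng(1)
sea = rng.normal(0.5, 0.1, 5000)     # hypothetical criterion values on sea pixels
boats = rng.normal(1.5, 0.2, 500)    # hypothetical criterion values on boat pixels
fpr, tpr = roc_from_distributions(boats, sea)
pd_at_2pct = tpr[fpr <= 0.02].max()  # detection rate at a 2% false alarm rate
```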

Conclusions
In this paper, we have proposed several criteria based on the temporal coefficient of variation to detect changes in SAR time-series. The statistical properties of the coefficient of variation make it a key parameter for deploying robust and extremely fast change detection strategies. First, its mean value and variance do not depend on the scale parameter of the speckle. Second, in the presence of a deterministic component, these parameters depend only on the ratio between this component and the clutter amplitude.
The coefficient of variation itself is suitable for detecting any kind of change, and we proposed using it as a generic change detector. Subsequently, because a change can correspond to several types of events, we have proposed other detectors that are more specific to a short-period event, such as a vehicle, or to a longer change, such as a construction site.
These criteria were first analyzed using statistical simulations. These simulations proved the great generality of the coefficient of variation to detect any change, but also the specificity of the other criteria to address specific changes. Afterwards, these criteria were evaluated on real data.
For step-type changes, both proposed specific criteria lead to excellent performance, at least equal to that of the so-called full omnibus baseline method, which is much more expensive in computing time. The performances are also much better than those currently obtained using a pair of optical images on a similar scenario.
Moreover, the two proposed criteria are complementary: one is better at detecting short steps but, in return, is more sensitive to changes in agricultural areas.
For short events, we have conducted a manual statistical analysis of real high-resolution data containing a large number of ships for validation. Here again, the two specific criteria lead to the best performance. The simulations show that the criterion based on the CV ratio is the most specific to short events. This remark reinforces the use of this parameter for detecting a new event in a new acquisition.
From a quantitative perspective, the simulations reach valuable performance as soon as the contrast between a deterministic event and the clutter exceeds 8 dB. On real data, we obtain a probability of detection of constructions of around 90% for a false alarm rate of 5%. If we select change sites above 20 pixels, the probability of false alarm decreases to 2%. This result confirms that construction-site monitoring is possible from space.
The detection of short events seems to perform even better. On the UAVSAR data, 90% of the pixels belonging to the boats are detected for a probability of false alarm of 2%. The result could be further improved in high-resolution images by adding a simple spatial regularization constraint. This result means that we could map maritime traffic using Sentinel-1A or Sentinel-1B every six days. The very high computational efficiency of this type of algorithm makes it possible to derive worldwide statistics or to check the concordance of the positioning of targets with other sources. In sparsely populated areas, the detection of vehicles or of any disruptive event can also be considered. A contrast between the target and the clutter higher than 8 dB provides excellent detection performance, according to the simulations.
Future work will focus on proposing a fusion of the various criteria developed. Ideally, a complete schema could go as far as categorizing change profiles by using additional topographic or optical data. In parallel, we will analyze the contribution of polarimetric information.
Author Contributions: The formal analysis and investigation in this paper have been conducted by E.C.K. and J.-M.N. Conceptualization of this paper, project administration, validation, writing review and editing have been performed by E.C.K. All authors have read and agreed to the published version of the manuscript.