Looking for change? Roll the Dice and demand Attention

Change detection, i.e. the per-pixel identification of changes for classes of interest from a set of bi-temporal co-registered images, is a fundamental task in the field of remote sensing. It remains challenging due to unrelated forms of change that appear at different times in the input images. Here, we propose a reliable deep learning framework for the task of semantic change detection in very high-resolution aerial images. Our framework consists of a new loss function, new attention modules, new feature extraction building blocks, and a new backbone architecture that is tailored for the task of semantic change detection. Specifically, we define a new form of set similarity that is based on an iterative evaluation of a variant of the Dice coefficient. We use this similarity metric to define a new loss function as well as a new spatial and channel convolution attention layer (the FracTAL). The new attention layer, designed specifically for vision tasks, is memory efficient and thus suitable for use at all levels of deep convolutional networks. Based on these, we introduce two new efficient self-contained feature extraction convolution units. We validate the performance of these feature extraction building blocks on the CIFAR10 reference data and compare the results with standard ResNet modules. Further, we introduce a new encoder/decoder scheme, a network macro-topology, that is tailored for the task of change detection. Our network moves away from any notion of subtraction of feature layers for identifying change. We validate our approach by showing excellent performance and achieving state-of-the-art scores (F1 and Intersection over Union, hereafter IoU) on two building change detection datasets, namely the LEVIRCD (F1: 0.918, IoU: 0.848) and the WHU (F1: 0.938, IoU: 0.882) datasets.


Introduction
Change detection is one of the core applications of remote sensing. The goal of change detection is to assign binary labels ("change" or "no change") to every pixel in a study area based on at least two co-registered images taken at different times. The definition of "change" varies across applications and includes, for instance, urban expansion, flood mapping (Giustarini et al., 2012), deforestation (Morton et al., 2005), and cropland abandonment (Löw et al., 2018). Changes of multiple land-cover classes, i.e. semantic change detection, can also be addressed simultaneously. The task remains challenging due to various forms of change, owing to varying environmental conditions, that do not constitute a change for the objects of interest (Varghese et al., 2018).
A plethora of change-detection algorithms has been devised and summarised in several reviews (Lu et al., 2004; Coppin et al., 2004; Hussain et al., 2013a; Tewkesbury et al., 2015). In recent years, computer vision has further pushed the state of the art, especially in applications where the spatial context is paramount. The rise of computer vision, especially deep learning, is related to advances in and democratisation of powerful computing systems, increasing amounts of available data, and the development of innovative ways to exploit data.

1 foivos.diakogiannis@uwa.edu.au
Our starting point is the hypothesis that human intelligence identifies differences in images by looking for change in objects of interest at a higher cognitive level (Varghese et al., 2018). We understand this because the time required to identify the objects that changed between two images increases with the number of changed objects (Treisman and Gelade, 1980). That is, there is a strong correlation between processing time and the number of individual objects that changed. In other words, the higher the complexity of the changes, the more time is required to identify them. Therefore, simply subtracting extracted features from images (which is a constant-time operation) cannot account for the complexities of human perception. As a result, the deep convolutional neural networks proposed in this paper address change detection without using bespoke feature subtraction.

Figure 1: Example of the proposed framework (architecture: mantis CEECNetV1) change detection performance on the LEVIRCD test set. From left to right: input image at date 1, input image at date 2, ground truth buildings change mask, and color coded the true negative (tn), true positive (tp), false positive (fp) and false negative (fn) predictions.
In this work, we developed neural networks using attention mechanisms that emphasize areas of interest in two bi-temporal co-registered aerial images. It is the network that learns what to emphasize, and how to extract features that describe change at a higher level. To this end, we propose a dual encoder / single decoder scheme that fuses information of corresponding layers with relative attention and extracts as a final layer a segmentation mask. This mask designates change for classes of interest, and can also be used for the dual problem of class attribution of change. As in previous work, we use conditioned multi-tasking, which proves crucial for stabilizing the training process and improving performance. In summary, the main contributions of this work are:

1. We introduce a new set similarity metric that is a variant of the Dice coefficient, the fractal Tanimoto similarity measure (section 3). This similarity measure has the advantage that it can be made steeper than the standard Tanimoto metric towards optimality, thus providing a finer-grained similarity metric between layers. The level of steepness is controlled by a recursion-depth hyper-parameter. It can be used both as a "sharp" loss function when fine-tuning a model at the latest stages of training, and as a set similarity metric between feature layers in the attention mechanism.

2. Using the above set similarity as a loss function, we propose an evolving loss strategy for fine-tuning the training of neural networks (section 4). This strategy helps to avoid overfitting and improves performance.

3. We introduce the Fractal Tanimoto Attention Layer (hereafter FracTAL), tailored for vision tasks (section 5). This layer uses the fractal Tanimoto similarity to compare queries with keys inside the attention module. It is a form of combined spatial and channel attention.
4. We introduce a feature extraction building block that is based on the Residual neural network and fractal Tanimoto attention (section 5.2.1). The new FracTAL ResNet converges faster to optimality than standard residual networks and enhances performance.

5. We introduce two variants of a new feature extraction building block, the Compress-Expand / Expand-Compress unit (hereafter CEECNet unit; section 6.1). This unit exhibits enhanced performance in comparison with standard residual units and with the FracTAL ResNet unit.

6. Capitalizing on these findings, we introduce a new backbone encoder/decoder scheme, a macro-topology (the mantis) that is tailored for the task of change detection (section 6.2). The encoder part is a Siamese dual encoder, where the corresponding extracted features at each depth are fused together with FracTAL relative attention.
In this way, information exchange between features extracted from bi-temporal images is enforced, and there is no need for manual feature subtraction.

7. Given the relative fusion operation between the encoder features at different levels, our algorithm achieves state-of-the-art performance on the LEVIRCD and WHU datasets without requiring the use of contrastive loss learning during training (section 9). It is therefore easier to implement with standard deep learning libraries and tools.
Networks integrating the above-mentioned contributions yielded state-of-the-art performance for the task of building change detection on two benchmark change detection datasets: the WHU (Ji et al., 2019b) and LEVIRCD datasets.
In addition to the previously mentioned sections, the following sections complete this work. In Section 2 we present related work on attention mechanisms and change detection, specialised for the case of very high resolution (hereafter VHR) aerial images. In Section 7 we describe the setup of our experiments. In Section 8 we perform an ablation study of the proposed schemes. Finally, in Appendix C we present, in mxnet/gluon-style pseudocode, various key elements of our architecture (code available at https://github.com/feevos/ceecnet).

On attention
The attention mechanism was first introduced by Bahdanau et al. (2014) for the task of neural machine translation (hereafter NMT), i.e. language-to-language translation of sentences, e.g. English to French. This mechanism addressed the problem of translating very long sentences in encoder/decoder architectures. An encoder is a neural network that encodes a phrase to a fixed-length vector. The decoder then operates on this output and produces a translated phrase (of variable length). It was observed that these types of architectures were not performing well when the input sentences were very long. The attention mechanism provided a solution to this problem: instead of using all the elements of the encoder vector on an equal footing for the decoder, attention provided a weighted view of them. That is, it emphasized the locations of encoder features that were more important than others for the translation, or, stated another way, it emphasized some input words that were more important for the meaning of the phrase. However, in NMT, the location of the translated words is not in direct correspondence with the input phrase, because the syntax changes. Therefore, Bahdanau et al. (2014) introduced a relative alignment vector, e_ij, that was responsible for encoding the location dependencies: in language, it is not only the meaning (value) of a word that is important but also its relative location in a particular syntax. Hence, the attention mechanism that was devised compared the emphasis of inputs at location i with respect to output words at locations j. Later, Vaswani et al. (2017) developed this mechanism further and introduced the scaled dot-product self-attention mechanism as a fundamental constituent of their Transformer architecture. This allowed the dot product to be used as a similarity measure between feature layers, including feature vectors of large dimensionality.
The idea of using attention for vision tasks soon passed to the community. Hu et al. (2017) introduced channel-based attention in their squeeze-and-excitation architecture. Wang et al. (2017) used spatial attention to facilitate non-local relationships across sequences of images. Chen et al. (2016) combined both approaches by introducing joint spatial and channel-wise attention in convolutional neural networks, demonstrating improved performance on image captioning datasets. Woo et al. (2018) introduced the Convolution Block Attention Module (CBAM), which is also a form of spatial and channel attention, and showed improved performance on image classification and object detection tasks. To the best of our knowledge, the most faithful implementation of multi-head attention (Vaswani et al., 2017) for convolution layers is Bello et al. (2019) (spatial attention). Sakurada and Okatani (2015) and Alcantarilla et al. (2016) (see also Guo et al. 2018) were some of the first to introduce fully convolutional networks for the task of scene change detection in computer vision, and they both introduced street-view change detection datasets. Sakurada and Okatani (2015) extracted features from a convolutional neural network and combined them with superpixel segmentation to recover change labels in the original resolution. Alcantarilla et al. (2016) proposed an approach that chains multi-sensor fusion simultaneous localization and mapping (SLAM) with a fast 3D reconstruction pipeline that provides coarsely registered image pairs to an encoder/decoder convolutional network. The output of their algorithm is a pixel-wise change detection binary mask.

On change detection
Researchers in the field of remote sensing picked up and evolved this knowledge, and started using it for the task of land cover change detection. In the remote sensing community, a dual Siamese encoder with a single decoder is frequently adopted. The majority of approaches then differ in how the features extracted from the dual encoder are consumed (or compared) in order to produce a change detection prediction layer. In the following we focus on approaches that follow this paradigm and are most relevant to our work. For a general overview of land cover change detection in the field of remote sensing, interested readers can consult Hussain et al. (2013b) and Asokan and Anitha (2019). For a general review of AI applications of change detection in remote sensing, see Shi et al. (2020).
Caye Daudt et al. (2019) presented and evaluated various strategies for land cover change detection, establishing that their best algorithm was a joint multitasking segmentation and change detection approach. That is, their algorithm predicted simultaneously the semantic classes on each input image, as well as the binary mask of change between the two.
For the task of building change detection, Ji et al. (2019a) presented a two-stage methodology: in the first stage, they use a building extraction algorithm on single-date input images. In the second stage, the extracted binary masks are concatenated together and fed into a different network that is responsible for identifying changes between the two binary layers. In order to evaluate the impact of the quality of the building extraction networks, the authors use two different architectures: the first is Mask-RCNN, one of the most successful networks to date for instance segmentation, and the second is the MS-FCN (multi-scale fully convolutional network), which is based on the original UNet architecture (Ronneberger et al., 2015). The advantage of this approach, according to the authors, is that they could use unlimited synthetic data for training the second stage of the algorithm. Chen et al. (2021) used a dual attentive convolutional neural network, whose feature extractor was a Siamese pretrained VGG16 network. Their attention module combined spatial and channel attention and was the one introduced in Vaswani et al. (2017), albeit with a single head. Training was performed with a contrastive loss function. Chen and Shi (2020) presented STANet, which consists of a feature extractor based on ResNet18 (He et al., 2015) and two versions of spatio-temporal attention modules: the basic spatial-temporal attention module (BAM) and the pyramid spatial-temporal attention module (PAM). The authors introduced the LEVIRCD change detection dataset and demonstrated excellent performance. Their training process uses a contrastive loss applied at the feature pixel level. Their algorithm predicts binary change labels. Jiang et al. (2020) introduced the PGA-SiamNet, which uses a dual Siamese encoder that extracts features from the two input images; they used VGG16 for feature extraction.
A key ingredient of their algorithm is the co-attention module (Lu et al., 2019), which was initially developed for video object segmentation. The authors use it to fuse the extracted features of each input image from the dual VGG16 encoder.

Fractal Tanimoto similarity coefficient
In previous work we analyzed the performance of the various flavours of the Dice coefficient and introduced the Tanimoto with complement coefficient. Here, we expand our analysis further and present a new functional form for this similarity metric. We use it both as a self-similarity measure between convolution layers in a new attention module, and as a loss function for fine-tuning semantic segmentation models.
For two (fuzzy) binary vectors of equal dimension, p, l, whose elements lie in the range [0, 1], the Tanimoto similarity coefficient is defined as:

T(p, l) = (p · l) / (‖p‖² + ‖l‖² − p · l)    (1)

Interestingly, the dot product between two fuzzy binary vectors is itself a similarity measure of their agreement. This inspired us to introduce an iterative functional form of the Tanimoto:

T^{d+1}(p, l) = T^d(p, l) / (2 − T^d(p, l)),   with T^0 ≡ T    (2)

For example, expanding Eq. (2) for d = 2 yields:

T^2(p, l) = (p · l) / (4(‖p‖² + ‖l‖²) − 7 (p · l))    (3)

We can expand this for an arbitrary depth d, and we then get the following simplified version of the fractal Tanimoto similarity measure:

T^d(p, l) = (p · l) / (2^d (‖p‖² + ‖l‖²) − (2^{d+1} − 1) (p · l))    (4)

This function takes values in the range [0, 1] and becomes steeper as d increases. In the limit d → ∞ it behaves like the integral of the Dirac δ function around the point l, ∫ δ(p − l) dp.
That is, the parameter d is a form of annealing "temperature". Interestingly, although the iterative scheme was defined with d being an integer, for continuous values d ≥ 0 the coefficient T^d remains bounded in the interval [0, 1]. That is:

0 ≤ T^d(p, l) ≤ 1,   ∀ p, l ∈ [0, 1]^n, d ≥ 0

where n = dim(p) is the dimensionality of the fuzzy binary vectors p, l.
In the following we will use the functional form of the fractal Tanimoto with complement, i.e.:

FT^d(p, l) = [ T^d(p, l) + T^d(1 − p, 1 − l) ] / 2

In Fig. 2 we provide a simple example for a ground truth vector l = {0.4, 0.6} and a continuous vector of probabilities p = {p_x, p_y}. In the top panels, we construct density plots of the fractal Tanimoto function with complement, FT^d. Overplotted are the gradient field lines that point to the ground truth. In the bottom panels, we plot the corresponding 3D representations. From left to right, the first column corresponds to d = 0, the second to d = 3 and the third to d = 5. It is apparent that the effect of the d hyperparameter is to make the similarity metric steeper towards the ground truth. For all practical purposes (network architecture, evolving loss function) we use the average fractal Tanimoto loss (last column), due to its steeper gradients away from optimality:

⟨FT⟩^d(p, l) = (1/d) Σ_{i=0}^{d−1} FT^i(p, l)

This formula is valid for d ≥ 1; for d = 0 it reverts to the standard fractal Tanimoto, FT^{d=0}(p, l).
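As a concrete illustration, the closed-form fractal Tanimoto, its complemented form, and the averaged loss can be sketched in a few lines of NumPy. This is a minimal sketch of the formulas above; the function names are ours, not from the released code:

```python
import numpy as np

def tanimoto_d(p, l, d=0):
    """Closed-form fractal Tanimoto T^d; d=0 is the standard Tanimoto."""
    pl = np.sum(p * l)
    norms = np.sum(p * p) + np.sum(l * l)
    return pl / (2.0**d * norms - (2.0**(d + 1) - 1.0) * pl)

def ft_complement(p, l, d=0):
    """Fractal Tanimoto with complement: FT^d = [T^d(p,l) + T^d(1-p,1-l)] / 2."""
    return 0.5 * (tanimoto_d(p, l, d) + tanimoto_d(1 - p, 1 - l, d))

def ft_avg(p, l, d):
    """Average <FT>^d = (1/d) * sum_{i=0}^{d-1} FT^i, used for fine-tuning."""
    return sum(ft_complement(p, l, i) for i in range(d)) / d
```

Note that for p = l the coefficient equals 1 for any depth d, while for p ≠ l it decreases as d grows, which is exactly the "steepening towards optimality" effect described above.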

Evolving loss strategy
In this section, we describe a training strategy that increases the depth d of the fractal Tanimoto similarity coefficient, when used as a loss function, at each learning rate reduction. For minimization problems, the fractal Tanimoto loss is defined through: L = 1 − FT^d. In the following, when we refer to the fractal Tanimoto loss function, it should be understood that it is defined through the similarity coefficient, as described above.
During training, and until the first learning rate reduction, we use the standard Tanimoto with complement, FT^0(p, l). The reason for this is that, for a random initialization of the weights (i.e. for an initial prediction point in the space of probabilities away from optimality), the gradients of this particular loss function are steeper towards the best values (in fact, for cross entropy they are even steeper). This can be seen in the bottom row of Fig. 2: clearly, for an initial probability vector p = {p_x, p_y} away from the ground truth l = {0.4, 0.6}, the gradients are steeper for d = 0. As training evolves and the values of the weights approach optimality, the predictions approach the ground truth and the loss function flattens out. With batch gradient descent (and variants), we are not really calculating the true (global) loss function, but a noisy approximate version of it, because each batch loss evaluation does not use all of the data for the gradient evaluation. In Fig. 3 we give a graphical representation of the true landscape and a noisy version of it for a toy 2D problem. In the top row, we plot the value of the FT^0 similarity as well as the average value of the loss functions for d = 0, . . . , 9, for the ground truth vector l = {0.4, 0.6}. In the corresponding bottom row, we show the same plots with random Gaussian noise added.

Figure 3: On the top row, from left to right: FT^0(p, l) and (1/10) Σ_{d=0}^{9} FT^d(p, l). The bottom row shows the same corresponding FT^d similarity measures, with Gaussian random noise added. When the training approaches optimality with the standard Tanimoto, local noise gradients tend to dominate over the background average gradient. Increasing the slope of the background gradient at later stages of training is a remedy to this problem.
In the initial phases of training, the average gradients are greater than the local gradients due to noise. As the network approaches optimality, the average gradient towards optimality becomes smaller and smaller, and the gradients due to noise dominate the training. Once we reduce the learning rate, the step the optimizer takes is even smaller, and therefore it cannot easily escape local optima (due to noise). What we propose is to "shift gears": once training stagnates, we change the loss function to a similar but steeper one towards optimality, which can provide gradients that (on average) dominate the noise. Our choice during training is the following set of learning rates and depths of the fractal Tanimoto loss: {(lr: 10^−3, d = 0), (lr: 10^−4, d = 10), (lr: 10^−5, d = 20)}. In all evaluations of loss functions for d > 0, we use the average value over all d values.
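A minimal controller for this "shift gears" schedule might look as follows. The (lr, d) pairs are those quoted above, while the patience-based stagnation test and all names are illustrative assumptions of ours:

```python
# Each stage pairs a learning rate with a fractal depth d, as in the text.
SCHEDULE = [(1e-3, 0), (1e-4, 10), (1e-5, 20)]

class EvolvingLoss:
    """Moves to the next (lr, d) pair when the validation loss stagnates."""

    def __init__(self, patience=5):
        self.stage = 0
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return (lr, d) for the next epoch."""
        if val_loss < self.best:
            self.best, self.bad_epochs = val_loss, 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs >= self.patience and self.stage < len(SCHEDULE) - 1:
                self.stage += 1        # "shift gears": smaller lr, steeper loss
                self.bad_epochs = 0
        return SCHEDULE[self.stage]
```

The key design point is that the learning rate reduction and the loss steepening happen together, so the steeper background gradient compensates for the smaller optimizer step.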

Fractal Tanimoto Attention
Here, we present a novel convolutional attention layer based on the new similarity metric and a methodology of fusing information from the output of the attention layer to features extracted from convolutions.

Fractal Tanimoto Attention layer
In the pioneering work of Vaswani et al. (2017), the attention operator is defined through a scaled dot-product operation. For images in particular, i.e. two-dimensional features, assuming that q ∈ R^{C_q×H×W} is the query, k ∈ R^{C×H×W} the key and v ∈ R^{C×H×W} its corresponding value, the (spatial) attention is defined as (see also Zhang et al. 2020):

Attention(q, k, v) = softmax( (q ·₁ k) / √d ) ·₂ v

Here d is the dimension of the keys and the softmax operation is with respect to the first (channel) dimension. The term √d is a scaling factor that ensures the attention layer scales well even with a large number of dimensions (Vaswani et al., 2017). The operator ·₁ corresponds to an inner product with respect to the spatial dimensions, height H and width W, while ·₂ is a dot product with respect to the channel dimensions. In this formalism, each channel of the query features is compared with each of the channels of the key values. In addition, there is a one-to-one correspondence between keys and values, meaning that to each key corresponds a unique value. The point of the dot product is to emphasize the key-value pairs that are more relevant for the particular query. That is, the dot product selects the keys that are most similar to the particular query; it represents the projection of queries onto the keys space. The softmax operator provides a weighted "view" of all the values for a particular set of queries, keys and values, or else a "soft" attention mechanism. In the multi-head attention paradigm, multiple attention heads that follow the principles described above are concatenated together. One of the key disadvantages of this formulation when used in vision tasks (i.e. two-dimensional features) is the very large memory footprint that this layer exhibits. For 1D problems, such as natural language processing, this is not, in general, an issue.
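For reference, the scaled dot-product spatial attention just described can be sketched in NumPy as follows. The reshaping and the choice of H·W as the key dimensionality for the scaling are our reading of the operators above, not code from the paper:

```python
import numpy as np

def dot_product_attention(q, k, v):
    """Scaled dot-product spatial attention for 2-D features.
    q: (Cq, H, W); k, v: (C, H, W). The q .1 k inner product over the spatial
    dims materializes a (Cq, C) score matrix -- the memory-hungry part --
    softmax runs over the key axis, and .2 v mixes the values per query."""
    Cq, H, W = q.shape
    C = k.shape[0]
    scores = q.reshape(Cq, -1) @ k.reshape(C, -1).T / np.sqrt(H * W)  # q .1 k
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                                 # softmax
    return (w @ v.reshape(C, -1)).reshape(Cq, H, W)                   # .2 v
```

Note how the intermediate score matrix grows with the product of query and key channel counts, which is the memory footprint issue raised above.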
Here we follow a different approach. We develop our formalism for the case where the number of query channels, C_q, is identical to the number of key channels, C. However, if desired, our formalism can work for the general case where C_q ≠ C.
Let q ∈ R^{C×H×W} be the query features, k ∈ R^{C×H×W} the keys and v ∈ R^{C×H×W} the values. In our formalism, it is a requirement for these operators to have values in [0, 1]; this can easily be achieved by applying the sigmoid operator. Our approach is a joint spatial and channel attention mechanism. With the use of the fractal Tanimoto similarity coefficient, we define the spatial, FT^d_sp(q, k), and channel, FT^d_ch(q, k), similarity between the query, q, and key, k, features by evaluating the fractal Tanimoto coefficient with the spatial and channel products below,
where the spatial and channel products are defined as:

⟨q, k⟩_sp = Σ_c q_chw k_chw   (a similarity map over the spatial locations, of dimension H × W)
⟨q, k⟩_ch = Σ_{h,w} q_chw k_chw   (a similarity vector over the channels, of dimension C)

It is important to note that the output of these operators lies numerically within the range [0, 1], where 1 indicates identical similarity and 0 indicates no correlation between the query and key. That is, there is no need for normalization or scaling, as is the case for the traditional dot-product similarity. In our approach, the spatial and channel attention layers are defined with element-wise multiplication (denoted by the symbol ⊙):

A_sp = FT^d_sp(q, k) ⊙ v,   A_ch = FT^d_ch(q, k) ⊙ v

where the spatial similarity map and the channel similarity vector are broadcast over the missing dimensions of v. It should be stressed that these operations do not assume a one-to-one mapping between keys and values. Instead, we consider a one-to-many map, that is, a single key can correspond to a set of values. Therefore, there is no need to use a softmax activation (see also Kim et al. 2017). The overall attention is defined as the average of the sum of these two operators:

A = (A_sp + A_ch) / 2

In practice we use the averaged fractal Tanimoto similarity coefficient with complement, ⟨FT⟩^d, both for spatial and channel-wise attention.
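Putting the pieces together, a minimal NumPy sketch of the FracTAL computation reads as follows. This is our reading of the definitions above, without the multi-head/group-convolution machinery of Listing 2 and using the plain rather than the complemented coefficient:

```python
import numpy as np

def _ftanimoto(tp, tt, d):
    """Closed-form fractal Tanimoto from reduced products:
    tp = sum(q*k) and tt = sum(q*q + k*k) over the reduced axes."""
    return tp / (2.0**d * tt - (2.0**(d + 1) - 1.0) * tp)

def fractal_attention(q, k, v, d=3, eps=1e-8):
    """Sketch of the FracTAL layer: q, k, v in [0, 1], all shaped (C, H, W).
    Spatial similarity reduces over channels -> (H, W) map; channel similarity
    reduces over space -> (C,) vector; the attention is their averaged,
    broadcast, element-wise product with v."""
    sp = _ftanimoto((q * k).sum(axis=0),
                    (q * q + k * k).sum(axis=0) + eps, d)          # (H, W)
    ch = _ftanimoto((q * k).sum(axis=(1, 2)),
                    (q * q + k * k).sum(axis=(1, 2)) + eps, d)     # (C,)
    att_sp = sp[None, :, :] * v     # spatial attention, broadcast over channels
    att_ch = ch[:, None, None] * v  # channel attention, broadcast over space
    return 0.5 * (att_sp + att_ch)  # average of the two operators
```

Because the similarity is reduced to one (H, W) map and one (C,) vector, no (C × C) or (HW × HW) score matrix is ever materialized, which is the memory advantage claimed for the layer.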
As stated previously, it is possible to extend the definitions of the spatial and channel products in a way where we compare each of the channels (respectively, spatial pixels) of the query with each of the channels (respectively, spatial pixels) of the key. However, this imposes a heavy memory footprint and makes deeper models prohibitive, even for modern-day GPUs. In addition, we found that this approach did not improve performance for the case of change detection and classification. Indeed, one needs to question this for vision tasks: the initial definition of attention (Bahdanau et al., 2014) introduced a relative alignment vector, e_ij, that was necessary because, for the task of NMT, the syntax of phrases changes from one language to the other. That is, the relative emphasis with respect to location between two vectors is meaningful. When we compare two images (features) at the same depth of a network (created by two different inputs, as is the case for change detection), we anticipate that the channels (or spatial pixels) will be in correspondence. For example, the RGB (or hyperspectral) order of the inputs does not change. That is, in vision, the situation can be different from NLP, because we do not have a relative location change as happens with words in phrases.
We propose the use of the Fractal Tanimoto Attention Layer (hereafter FracTAL) for vision tasks as an improvement over the scaled dot-product attention mechanism (Vaswani et al., 2017) for the following reasons:

1. The FT similarity is automatically scaled to the region [0, 1], therefore it does not require normalization or an activation to be applied. This simplifies the design and implementation of attention layers and enables training without ad-hoc normalization operations.

2. The dot product does not have an upper or lower bound, therefore a positive value cannot be a quantified measure of similarity. In contrast, FT has a bounded range of values in [0, 1]: the lowest value indicates no correlation, and the maximum value perfect similarity. It is thus easier to interpret.

3. The iteration depth d is a form of hyperparameter, like "temperature" in annealing. Therefore, the FT can become as steep as we desire (by modification of the temperature parameter d), steeper than the dot-product similarity. This can translate to finer query and key similarity.

4. Finally, it is efficient in terms of GPU memory footprint (when one considers that it does both channel and spatial attention), thus allowing the design of more complex convolution building blocks.
The implementation of the FracTAL is given in Listing 2. Multi-head attention is achieved using group convolutions for the evaluation of the queries, keys and values.

Attention fusion
A critical part in the design of convolution building blocks enhanced with attention is the way the information from attention is passed to the convolution layers. To this aim, we propose fusion methodologies of feature layers with the FracTAL for two cases: self-attention fusion, and relative attention fusion, where information from two layers is combined.

Self attention fusion
We propose the following fusion methodology between a feature layer, L, and its corresponding FracTAL self-attention layer, A:

F = L ⊙ (1 + γ A)

Here F is the output layer produced from the fusion of L and the attention layer A, ⊙ denotes element-wise multiplication, 1 is a layer of ones with the same shape as L, and γ is a trainable parameter initialized at zero. We next describe the reasons why we propose this type of fusion. The attention output is maximal (i.e. close to 1) in areas of the features where it must "attend", and minimal (i.e. close to zero) otherwise. Multiplying the FracTAL attention layer A element-wise directly with the features L effectively lowers the values of the features in areas that are not "interesting", and does not alter the values in areas that are "interesting". This can produce loss of information in areas where A "does not attend" (i.e. does not emphasize) that would otherwise be valuable at a later stage. Indeed, areas of the image that the algorithm "does not attend" to should not be perceived as empty space (Treisman and Gelade, 1980). For this reason, the "emphasized" features, L ⊙ A, are added to the original input L. That is, L + L ⊙ A is identical to L in spatial areas where A tends to zero, and is emphasized in areas where A is maximal.
In the initial stages of training, the attention layer, A, does not contribute to L, due to the initial value of the trainable parameter γ = 0. Therefore it does not add complexity during the initial phase of training, and it allows for an annealing process of the attention contribution (Zhang et al., 2018; see also Chen et al. 2021). This property is particularly important when L is produced from a known performant recipe (e.g. residual building blocks).
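The fusion itself is a one-liner; a minimal sketch, assuming L and A are arrays of the same shape and γ is the (scalar) trainable parameter:

```python
import numpy as np

def fuse(L, A, gamma):
    """Self-attention fusion F = L * (1 + gamma * A) = L + gamma * (L * A).
    With gamma initialized at 0 the layer is initially transparent, so the
    attention contribution is annealed in gradually during training."""
    return L * (1.0 + gamma * A)
```

With gamma = 0 the output is exactly L, which is why a pretrained or well-understood building block keeps its behaviour at the start of training.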
In Fig. 4 we present this fusion mechanism for the case of a residual unit (He et al., 2015, 2016). Here the input layer, X_in, is subject to the residual-block sequence of batch normalization, convolutions, and ReLU activations, and produces the X_out layer. A separate branch uses the X_in input to produce the self-attention layer A (see Listing 2). Then we multiply element-wise the standard output of the residual unit, X_in + X_out, with the 1 + γA layer. In this way, at the beginning of training this layer behaves as a residual layer, which has the excellent convergence properties of ResNet at the initial stages, while at later stages of training the attention becomes gradually more active and allows for greater performance. (For version V1, the corresponding figure shows concatenation of features along the channel dimension; for version V2, we replace the concatenation followed by the normalized convolution layer with relative fusion attention, as described in Section 5.2.2.) A software routine of this fusion for the residual unit, in particular, can be seen in Listing 4 in the Appendix.

Relative attention fusion
Assuming we have two input layers, L_1 and L_2, we can calculate the relative attention of each with respect to the other. This is achieved by using as query the layer we want to "attend to", and as key and value the layer we want to use as information for the attention. In practical implementations, the query, key and value layers are obtained after the application of a convolution layer to some input.
The fused output is then:

F = Conv2DN( [ L_1 ⊙ (1 + γ_1 A_{1←2}), L_2 ⊙ (1 + γ_2 A_{2←1}) ] )

where A_{1←2} is the FracTAL relative attention evaluated with L_1 as query and L_2 as key and value (and vice versa for A_{2←1}), and [·, ·] denotes concatenation. Here, the γ_{1,2} parameters are initialized at zero, and the concatenation operations are performed along the channel dimension. Conv2DN is a two-dimensional convolution operation followed by a normalization layer, e.g. BatchNorm (Ioffe and Szegedy, 2015). An implementation of this process in mxnet/gluon-style pseudocode can be found in Listing 3. The relative attention fusion presented here can be used as a direct replacement of concatenation followed by a convolution layer in any network design.
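A shape-level sketch of relative attention fusion, with a plain channel-mixing matrix standing in for the 1×1 Conv2DN and precomputed attention maps as inputs; all names are illustrative and this is our reading of the text, not the released code:

```python
import numpy as np

def relative_fusion(L1, L2, att12, att21, g1=0.0, g2=0.0, W=None):
    """Fuse two feature layers (C, H, W) with their relative attentions.
    att12 attends L1 using L2 as key/value, and vice versa for att21.
    Each input is emphasised by its relative attention, the two are
    concatenated along channels, and a channel-mixing matrix W (a stand-in
    for the 1x1 normed convolution) restores the channel count."""
    e1 = L1 * (1.0 + g1 * att12)             # emphasise L1 by what L2 "says"
    e2 = L2 * (1.0 + g2 * att21)
    cat = np.concatenate([e1, e2], axis=0)   # channel concatenation
    C = L1.shape[0]
    if W is None:                            # default: uniform channel mixing
        W = np.ones((C, 2 * C)) / (2 * C)
    return np.einsum("oc,chw->ohw", W, cat)
```

With g1 = g2 = 0 the result is simply the mixed concatenation of the raw inputs, mirroring the annealing behaviour of the self-attention fusion.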

Architecture
We break down the network architecture into three components: the micro-topology of the building blocks, which represents the fundamental constituents of the architecture; the macro-topology of the network, which describes how building blocks are connected to one another to maximize performance; and the multitasking head, which is responsible for transforming the features produced by the micro and macro-topologies into the final prediction layers where change is identified. Each choice of micro and macro-topology has a different impact on the GPU memory footprint. Usually, selecting a very deep macro-topology improves performance, but this increases the overall memory footprint and does not leave enough space for using an adequate number of filters (channels) in each micro-topology. There is obviously a trade-off between the micro-topology feature extraction capacity and the overall network depth. Guided by this, we seek to maximize the feature expression capacity of the micro-topology for a given number of filters, perhaps at the expense of additional computational resources.

Micro-topology: the CEECNet unit
The basic intuition behind the construction of the CEEC building block is that it provides two different, yet complementary, views of the same input. The first view (the CE block; see Fig. 5) is a "summary understanding" operation, performed at lower resolution than the input (see also Newell et al. 2016; Liu et al. 2020 and Qin et al. 2020). The second view (the EC block) is an "analysis of detail" operation, performed at higher spatial resolution than the input. The block then exchanges information between these two views using relative attention, and finally fuses them together, emphasizing the most important parts using the FracTAL.
Our hypothesis and motivation for this approach is quite similar to scale-space analysis in computer vision (Lindeberg, 1994): viewing input features at different scales allows the algorithm to focus on different aspects of the inputs, and thus perform more efficiently. The fact that merely increasing the resolution of an image does not increase its information content is not an obstacle here: guided by the loss function, the algorithm can learn to represent at higher resolution features that would not otherwise be recoverable at lower resolutions. We know this from the successful application of convolutional networks to super-resolution problems as well as (variational) autoencoders (Tschannen et al., 2018; Kingma and Welling, 2019): in both of these paradigms, deep learning approaches manage to meaningfully increase the resolution of features that exist in lower spatial dimension layers.
In the following we define the volume V of features of dimension (C, H, W) as the product of the number of their channels (or filters), C (or n_f), with their spatial dimensions, height H and width W, i.e. V = n_f · H · W. The two branches consist of the following. The first branch, a "mini ∪-Net" operation (CE block), is responsible for summarizing information from the input features by first compressing the total volume of features into half its original size and then restoring it. The second branch, a "mini ∩-Net" operation (EC block), is responsible for analyzing the input features in higher detail: it initially doubles the volume of the input features, by halving the number of channels and doubling each spatial dimension, and subsequently compresses this expanded volume back to its original size. In both branches, the input is concatenated with the output, and a normed convolution then restores the number of channels to its original input value. Note that the mini ∩-Net is nothing more than the symmetric (or dual) operation of the mini ∪-Net.
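The volume bookkeeping of the two branches can be verified with a few lines of Python. The channel/spatial split of the EC branch is stated in the text; for the CE branch we assume the conventional choice of doubling the channels while halving each spatial dimension, which is one way to halve V.

```python
def volume(c, h, w):
    """Feature volume V = n_f * H * W."""
    return c * h * w

C, H, W = 32, 64, 64
v_in = volume(C, H, W)

# CE ("mini U-Net") branch: compress the volume to half its size, then restore it.
# Assumed realization: double the channels, halve each spatial dimension.
v_ce_mid = volume(2 * C, H // 2, W // 2)

# EC ("mini ∩-Net") branch: halve the channels, double each spatial dimension,
# which doubles the volume before it is compressed back.
v_ec_mid = volume(C // 2, 2 * H, 2 * W)

print(v_ce_mid == v_in // 2, v_ec_mid == 2 * v_in)  # True True
```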
The outputs of the EC and CE blocks are fused together with relative attention fusion (section 5.2.2). In this way, exchange of information between the layers is encouraged. The final emphasized outputs are concatenated together, thus restoring the initial number of filters, and the produced layer is passed through a normed convolution in order to bind the relative channels. The operation is concluded with a FracTAL residual operation and fusion (similar to Fig. 4), where the input is added to the final output and emphasized by the self-attention on the original input. The CEECNet building block is described schematically in Fig. 5.
The compression operation, C, is achieved by applying a normed convolution layer of stride 2 (k=3, p=1, s=2), followed by another convolution layer that is identical in every aspect except that the stride is now s=1. The purpose of the first convolution is both to resize the layer and to extract features; the purpose of the second is to extract further features. The expansion operation, E, is achieved by first resizing the spatial dimensions of the input layer using bilinear interpolation; the number of channels is then brought to the desired size by the application of a convolution layer (k=3, p=1, s=1). Another identical convolution layer is applied to extract further features. The full details of the convolution operations used in the EC and CE blocks can be found in Listing 5.

Macro-topology: dual encoder, symmetric decoder
In this section we present the macro-topology (i.e. backbone) of the architecture that uses as building blocks either the CEECNet or the FracTAL ResNet units. We start by stating the intuition behind our choices and continue with a detailed description of the macro-topology. Our architecture is heavily influenced by the ResUNet-a model. We will refer to this macro-topology as the mantis topology.
In designing this backbone, a key question we tried to address is how to facilitate the exchange of information between features extracted from images at different dates. The following two observations guided us:
1. We hypothesize that the process of change detection between two images requires a mechanism similar to human attention. We base this hypothesis on the fact that the time required to identify objects that changed in an image correlates directly with the number of changed objects: the more objects a human needs to identify between two pictures, the more time is required. This is in accordance with the feature-integration theory of attention (Treisman and Gelade, 1980). In contrast, subtracting features extracted from two different input images is a process that is constant in time, independent of the complexity of the changed features. Therefore, we avoid ad hoc feature subtraction in all parts of the network.
2. In order to identify change, a human needs to look at and compare two images multiple times, back and forth. We need to emphasize features on the image at date 1 based on information from the image at date 2 (Eq. 16), and vice versa (Eq. 17), and then combine both sources of information (Eq. 18). That is, we need to exchange information, with relative attention (section 5.2.2), between the two inputs at multiple levels. A different way of stating this as a question is: what is important on input image 1 based on information that exists on image 2, and vice versa?
Given the above, we now proceed to detail the mantis macro-topology (with CEECNetV1 building blocks, see Fig. 6). The encoder part is a series of building blocks, where the size of the features is downscaled between the application of each subsequent building block. Downscaling is achieved with a normed convolution with stride s=2, without activations. There are two encoder branches that share identical parameters in their convolution layers. The input to each branch is an image from a different date, and the role of the encoder is to extract features at different levels from each input image. During feature extraction, each of the two inputs is treated as an independent entity. At successive depths, the outputs of the corresponding building blocks are fused together with the relative attention methodology described in section 5.2.2, but they are not used until later, in the decoder part. Crucially, this fusion operation suggests to the network that the important parts of the first layer are defined by what exists on the second layer (and vice versa), but it does not dictate how exactly the network should compare the extracted features (e.g. by demanding the features to be similar for unchanged areas and maximally different for changed areas; we tried this approach and it was not successful). This is something the network has to discover in order to match its predictions with the ground truth. Finally, the last encoder layers are concatenated and inserted into the pyramid scene pooling layer (PSPPooling; Diakogiannis et al. 2020; Zhao et al. 2017).
The (single) decoder part is where the network extracts features based on the relative information that exists in the two inputs. Starting from the output of the PSPPooling layer (middle of the network), we upscale lower resolution features with bilinear interpolation and combine them with the fused outputs of the encoder using a concatenation operation followed by a normed convolution layer, in a way similar to the ResUNet-a model. The mantis CEECNetV2 model replaces all concatenation operations followed by a normed convolution with a Fusion operation as described in Listing 3.
The final features extracted from this macro-topology are the output of the last CEECNet unit, which has the same spatial dimensions as the first input layers, as well as the fused layers from the first CEECNet unit operation. Both of these layers are inserted into the segmentation HEAD.

Segmentation HEAD
The features extracted from the feature extractor (Fig. 6) are inserted into a conditioned multitasking segmentation head (Fig. 7) that produces three layers: a segmentation mask, a boundary mask and a distance transform mask. This is identical to the ResUNet-a "causal" segmentation head, which has shown strong performance in a variety of segmentation tasks (Waldner and Diakogiannis, 2020), with two modifications.
The first modification relates to the evaluation of boundaries: instead of using a standard sigmoid activation for the boundaries layer, we insert a scaling parameter, γ, that controls how sharp the transition from 0 to 1 takes place, i.e.

σ(x; γ) = 1 / (1 + exp(−x / (|γ| + ε)))

Here ε = 10^−2 is a smoothing parameter. The γ coefficient is learned during training. We inserted this scaling after noticing in initial experiments that the algorithm needed improvement close to the boundaries of objects. In other words, the algorithm had difficulty separating nearby pixels. Numerically, we anticipate that the distance between the activation values of neighbouring pixels is small, due to the patch-wise nature of convolutions. Therefore, a remedy to this problem is to make the transition boundary sharper. We initialize training with γ = 1.
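A small numerical sketch of the scaled sigmoid (our reconstruction of the formula; the exact parametrization in the paper may differ slightly) shows how shrinking γ sharpens the transition:

```python
import numpy as np

EPS = 1e-2  # smoothing parameter epsilon

def scaled_sigmoid(x, gamma=1.0):
    """Boundary activation with learnable sharpness: smaller |gamma| -> sharper 0->1 transition."""
    return 1.0 / (1.0 + np.exp(-x / (abs(gamma) + EPS)))

x = 0.1  # a small activation difference between two neighbouring pixels
soft = scaled_sigmoid(x, gamma=1.0)    # initialization: close to the standard sigmoid
sharp = scaled_sigmoid(x, gamma=0.05)  # a learned, sharper decision boundary
# The same small activation is mapped much closer to 1 by the sharper sigmoid,
# so neighbouring boundary/background pixels separate with a smaller numerical distance.
```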
The second modification to the segmentation HEAD relates to balancing the number of channels of the boundary and distance transform predictions before re-using them in the final prediction of segmentation change detection. This is achieved by passing them through a convolution layer that brings the number of channels to the desired value. Balancing the number of channels treats the input features and the intermediate predictions as equal contributions to the final output. In Fig. 7 we present schematically the conditioned multitasking head and the various dependencies between layers. Interested readers can refer to Diakogiannis et al. (2020) for details of the conditioned multitasking head.

Experimental Design
In this section, we describe the setup of our experiments for the evaluation of the proposed algorithms on the task of change detection. We start by describing the two datasets we used (LEVIRCD, Chen and Shi 2020, and WHU, Ji et al. 2019b) as well as the data augmentation methodology we followed. We then proceed to describe the metrics used for performance evaluation and the inference methodology. All models (mantis CEECNetV1, V2 and mantis FracTAL ResNet) have an initial number of filters equal to nf=32, and the depth of the encoder branches is 6. We designate these models with D6nf32.

LEVIRCD Dataset
The LEVIR-CD change detection dataset consists of 637 pairs of VHR aerial images with a resolution of 0.5 m per pixel. It covers various types of buildings, such as villa residences, small garages, apartments, and warehouses, and contains 31,333 individual building changes. The authors provide a train/validation/test split, which standardizes performance evaluation. We used a different split for training and validation; however, we used the test set the authors provide for reporting performance. For each tile from the training and validation set, we used ∼47% of the area for training and the remaining ∼53% for validation. For a rectangular area with sides of length a and b, this is achieved by using as training area the rectangle with sides a′ = 0.6838 a and b′ = 0.6838 b, i.e. training area = 0.6838² ab ≈ 0.47 ab, so that val area = total area − train area ≈ 0.53 total area. From each of these areas, we extracted chips of size 256×256, overlapping in each dimension with a stride of 256/2 = 128 pixels.
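The overlapping chip extraction described above can be sketched as follows (a minimal numpy version; function and variable names are ours):

```python
import numpy as np

def extract_chips(tile, chip=256, stride=128):
    """Extract overlapping chip×chip windows from a (H, W, C) tile with the given
    stride, so consecutive chips overlap by half the window when stride = chip // 2."""
    H, W = tile.shape[:2]
    chips = [tile[y:y + chip, x:x + chip]
             for y in range(0, H - chip + 1, stride)
             for x in range(0, W - chip + 1, stride)]
    return np.stack(chips)

# A 1024×1024 tile yields ((1024 - 256) // 128 + 1)^2 = 49 overlapping chips.
```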

WHU Building Change Detection
The WHU building change dataset (Ji et al., 2019b) consists of two aerial images (2011 and 2016) that cover an area of ∼20 km² that changed between 2011 (earthquake) and 2016. The images have a spatial resolution of 0.3 m. The dataset contains 12,796 buildings. We split the triplets of images and ground truth change labels into areas with a ratio of 70% for training and validation and 30% for testing. We further split the 70% part into ∼47% of its area for training and ∼53% for validation, in a way similar to the split we followed for each tile of the LEVIRCD dataset. The splitting can be seen in Fig. 8. Note that the training area is spatially separated from the test area (the validation area lies between the two). The reason for the rather large validation fraction is to ensure adequate spatial separation between training and test areas, thus minimizing spatial correlation effects.

Data preprocessing and augmentation
We split the original tiles into training chips of size 256² by using a sliding window methodology with stride s = F/2 = 128 pixels, where F = 256 (the chips overlap by half the size of the sliding window). This is the maximum chip size we could fit into our architecture due to the GPU memory limitations of the hardware at our disposal (NVIDIA P100, 16 GB). With this chip size, we managed to fit a batch size of 3 per GPU for each of the architectures we trained. Due to the small batch size, we used GroupNorm (Wu and He, 2018) for all normalisation layers.

Figure 8: Train-validation-test split of the WHU dataset. The yellow (dash-dot line) rectangle represents the training data. The area between the magenta rectangle (solid line) and the yellow (dash-dot) rectangle represents the validation data. Finally, the cyan rectangle (dashed) is the test data. The reasoning for our split is to include in the validation data both industrial and residential areas and to isolate (spatially) the training area from the test area in order to avoid spurious spatial correlation between training/test sites. The train/validation/test ratio split is train:val:test ≈ 33:37:30.
The data augmentation methodology we used during training was the one used for semantic segmentation tasks in Diakogiannis et al. (2020): random rotations with respect to a random center with a (random) zoom in/out operation. We also implemented random brightness changes and random polygon shadows. In order to help the algorithm explicitly with the task of change detection, we implemented time reversal (reversing the order of the input images should not affect the binary change mask) and random identity (we randomly gave as input one of the two images twice, i.e. a null change mask). These latter transformations were applied at a rate of 50%.

Metrics
In this section, we present the metrics we used for quantifying the performance of our algorithms. With the exception of the Intersection over Union (IoU) metric, all metrics were evaluated with the Python library pycm (Haghighi et al., 2018). The statistical measures we used to evaluate the performance of our modelling approach are pixel-wise precision, recall, F1 score, Matthews Correlation Coefficient (MCC) (Matthews, 1975), and IoU.

Inference
In this section, we provide a brief description of the model selection after training (i.e. which epochs will perform best on the test set) as well as the inference methodology we followed for large raster images that exceed the memory capacity of modern-day GPUs.

Inference on large rasters
Our approach is identical to the one used in Diakogiannis et al. (2020), with the difference that now we are processing two input images. Interested readers that want to know the full details can refer to Section 3.4 of Diakogiannis et al. (2020).
During inference on test images, we extract multiple overlapping windows of size 256 × 256 with a step (stride) of 256/4 = 64 pixels. The final prediction "probability", per pixel, is evaluated as the average "probability" over all inference windows that overlap the given pixel. Here, we refer to "probability" as the output of the final softmax classification layer, a continuous value in the range [0, 1]. It is not a true probability in the statistical sense; however, it does express the confidence of the algorithm in the inference result.
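The averaging of overlapping inference windows can be sketched as below (a simplified numpy version; we omit the reflect padding of the raster edges and treat the per-window predictor as a black box):

```python
import numpy as np

def sliding_inference(image, predict, chip=256, stride=64):
    """Average per-pixel 'probability' over all overlapping inference windows.
    predict: callable mapping a (chip, chip, C) window to a (chip, chip) probability map."""
    H, W = image.shape[:2]
    prob = np.zeros((H, W))
    hits = np.zeros((H, W))
    for y in range(0, H - chip + 1, stride):
        for x in range(0, W - chip + 1, stride):
            prob[y:y + chip, x:x + chip] += predict(image[y:y + chip, x:x + chip])
            hits[y:y + chip, x:x + chip] += 1.0
    return prob / np.maximum(hits, 1.0)  # average where covered
```

Because the stride is a quarter of the window size, interior pixels are averaged over many windows, which realizes the edge-to-center smoothing described in the text.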
With this overlapping approach, we make sure that the pixels that are closer to the edges and correspond to boundary areas for some inference windows, appear closer to the center area of subsequent inference windows. For the boundary pixels of the large raster, we apply reflect padding before performing inference (Ronneberger et al., 2015).

Model selection using Pareto efficiency
For monitoring the performance of our modelling approach, we usually rely on the MCC metric on the validation dataset. We observed, however, that when we simultaneously perform learning rate reduction and F T d depth increase, the MCC initially decreases (indicating a performance drop), while the F T d similarity is (initially) strictly increasing. After training starts to stabilize around some optimality region (with the standard noise oscillations), there are various cases where the MCC metric and the F T d similarity coefficient do not agree on which is the best model. To account for this effect and avoid losing good candidate solutions, we evaluate the average of the inference output of a set of best candidate models. These best candidates are the models that belong to the Pareto front of the most evolved solutions. We use all the Pareto front (Emmerich and Deutz, 2018) model weights as acceptable solutions for inference. A similar approach was followed for the selection of hyperparameters for optimal solutions in Waldner and Diakogiannis (2020).
In Fig. 9 we plot on the top panel the evolution of the MCC and F T d for d = 30. Clearly, these two performance metrics do not always agree. For example, the F T 30 is close to optimality at approximately epoch ∼250, while the MCC is clearly suboptimal. We highlight with filled circles (cyan dots) the two solutions that belong to the Pareto front. In the bottom panel we plot the correspondence of the MCC values with the F T 30 similarity metric. The two circles show the corresponding non-dominated Pareto solutions (i.e. best candidates).
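The selection of candidate models described above amounts to a standard two-objective non-dominated filter; a minimal sketch, treating both the MCC and the F T d similarity as objectives to maximize:

```python
def pareto_front(points):
    """Return the non-dominated subset of (mcc, ft) pairs, both to be maximized."""
    def dominates(q, p):
        # q dominates p if it is at least as good in both objectives and strictly better in one
        return q[0] >= p[0] and q[1] >= p[1] and (q[0] > p[0] or q[1] > p[1])
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Example: per-epoch (MCC, FT_d) snapshots; the dominated snapshot is dropped and
# the weights of the front members are averaged at inference time.
snapshots = [(0.90, 0.85), (0.92, 0.80), (0.88, 0.90), (0.85, 0.80)]
front = pareto_front(snapshots)  # the first three points survive
```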

FracTAL units and evolving loss ablation study
In this section we present the performance of the FracTAL ResNet (He et al., 2015, 2016) and CEECNet units we introduced against ResNet and CBAM (Woo et al., 2018) baselines, as well as the effect of the evolving F T d loss function on training a neural network. We also present a qualitative and quantitative analysis of the effect of the depth parameter in the FracTAL, based on the mantis FracTAL ResNet network.
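For reference, the iterated similarity that drives both the loss and the attention can be sketched as below. This is our reading of the fractal Tanimoto coefficient defined earlier in the paper: the average of iterated Tanimoto terms T^i for i = 0, …, d, where T^0 reduces to the standard Tanimoto (Dice-like) coefficient; the exact normalization in the paper may differ.

```python
import numpy as np

def fractal_tanimoto(p, l, d=0, eps=1e-8):
    """Average of iterated Tanimoto coefficients T^i, i = 0..d, where
    T^i(p, l) = <p,l> / (2^i (|p|^2 + |l|^2) - (2^(i+1) - 1) <p,l>).
    p, l: flattened prediction and label arrays with values in [0, 1]."""
    pl = float(np.sum(p * l))
    norms = float(np.sum(p * p) + np.sum(l * l))
    terms = [pl / (2.0 ** i * norms - (2.0 ** (i + 1) - 1.0) * pl + eps)
             for i in range(d + 1)]
    return sum(terms) / (d + 1)

# For a perfect match, T^i = 1 at every i, so FT_d = 1 for any depth; away from the
# optimum, larger d shrinks the value, steepening the loss surface near optimality.
```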

FracTAL building blocks performance
We construct four networks that are identical in macro-topology (backbone) but differ in micro-topology (building blocks). The first two networks are equipped with two different versions of CEECNet: the first is identical to the one presented in Fig. 5; the second is similar, but with all concatenation operations that are followed by normed convolutions replaced with Fusion operations, as described in Listing 3. The third network uses the FracTAL ResNet building blocks (Fig. 4). Finally, the fourth network uses standard residual units as described in He et al. (2015, 2016) (ResNet V2). All building blocks have the same dimensionality of input and output features; however, each type of building block has a different number of parameters. By keeping the dimensionality of the input and output layers identical across all networks, we believe the performance differences of the networks will reflect the feature expression capabilities of the building blocks we compare.
In Fig. 10 we plot the validation loss for 300 epochs of training on the CIFAR10 dataset (Krizhevsky, 2009) without learning rate reduction. We use the cross entropy loss and the Adam optimizer (Kingma and Ba, 2014). The backbone of each of the networks is described in Table A.3. It can be seen that all building blocks equipped with the FracTAL outperform standard residual units in convergence and performance. In particular, we find that the performance and convergence properties of the networks follow the ordering: ResNet < FracTAL ResNet < CEECNetV1 < CEECNetV2. The performance difference between FracTAL ResNet and CEECNetV1 will become more clearly apparent in the change detection datasets. The V2 version of CEECNet, which uses Fusion with relative attention (cyan solid line) instead of concatenation (V1, magenta dashed line) for combining layers in the Compress-Expand and Expand-Compress branches, outperforms V1. However, it is a computationally more intensive unit.

Comparing FracTAL with CBAM
Having shown the performance improvement over the residual unit, we proceed to compare the proposed FracTAL attention with a modern attention module, in particular the Convolution Block Attention Module (CBAM) (Woo et al., 2018). We construct two networks that are identical in all aspects except the attention implementation. We base our implementation on a publicly available repository that reproduces the results of Woo et al. (2018), written in PyTorch, which we translated into the mxnet framework. From this implementation we use the CBAM-resnet34 model, and we compare it with a FracTAL-resnet34 model, i.e. a model which is identical to the previous one except that we replaced the CBAM attention with the FracTAL (attention). Our results can be seen in Fig. 11, where a clear performance improvement is evident merely by changing the attention layer used. The improvement is of the order of 1%, from 83.37% (CBAM) to 84.20% (FracTAL), suggesting that the FracTAL has better feature extraction capacity than the CBAM layer.

Evolving loss
We continue by presenting experimental results on the performance of the evolving loss strategy on CIFAR10 using two networks, one with standard ResNet building blocks and one with CEECNetV1 units. The macro topology of the networks is identical to the one in Table A.3. In addition, we also demonstrate performance differences on the change detection task, by training the mantis CEECNetV1 model on the LEVIRCD dataset, with static and evolving loss strategies for FracTAL depth, d = 5.
In Fig. 12 we demonstrate the effect of this approach: we train the network on CIFAR10 with standard residual blocks (top panel; He et al., 2015, 2016) under the two different loss strategies. In both strategies, we reduce the initial learning rate by a factor of 10 at epochs 250 and 350. In the first strategy, we train the networks with F T 0 . In the second strategy, we evolve the depth of the fractal Tanimoto loss function: we start training with F T 0 and on the two subsequent learning rate reductions we use F T 15 and F T 30 . In the top panel, we plot the validation accuracy for the two strategies. The performance gain of the evolving depth loss is ∼0.25% in validation accuracy. In the bottom panel we plot the validation accuracy for the CEECNetV1-based models. Here, the evolution strategy is the same as above, with the difference that we use different depths for the F T loss (to observe potential differences), namely d ∈ {0, 10, 20}. Again, the difference in validation accuracy is ∼+0.22% in favour of the evolving loss strategy.
We should note that we observed performance degradation when training from random weights with the F T d loss for d > 1. This is evident in Fig. 15, where we train from scratch on CIFAR10 three identical models with different depths for the F T d function, d ∈ {0, 3, 6}. As the hyperparameter d increases, the validation accuracy degrades. We attribute this to the small gradient values away from optimality, which require the network to train longer to reach the same level of validation accuracy. In contrast, the greatest benefit we observed from this training strategy is that the network can avoid overfitting after learning rate reduction (provided that the slope created by the choice of depth d is significant) and has the potential to reach higher performance.
Next we perform a test of the evolving vs static loss strategy on the LEVIRCD change detection dataset, using the CEECNetV1 units, as can be seen in Table 1. The CEECNetV1 unit trained with the evolving loss strategy demonstrates a +0.856% performance increase in the Intersection over Union (IoU) and a +0.484% increase in MCC. Note that, for the same FracTAL depth, d = 5, the FracTAL ResNet network trained with the evolving loss strategy performs better than the CEECNetV1 trained with the static loss strategy, while it falls behind the CEECNetV1 trained with the evolving loss strategy. We should also note that the performance increase is larger than for the classification task on CIFAR10, reaching almost ∼1% for the IoU.

Performance dependence on FracTAL depth
In order to understand how the FracTAL layer behaves with respect to different depths, we train three identical networks, the mantis FracTAL ResNet (D6nf32), using FracTAL depths d ∈ {0, 5, 10}. The performance results on the LEVIRCD dataset can be seen in Table 1. The three networks perform similarly (they all achieve SOTA performance on the LEVIRCD dataset), with d = 10 having top performance (+0.724% IoU), followed by d = 0 (+0.332% IoU) and, lastly, the d = 5 network (baseline). We conclude that the depth d is a hyperparameter, dependent on the problem at hand, that users of our method can choose to optimize. Given that all models have competitive performance, the proposed depth d = 5 appears to be a sensible choice.
In Fig. 13 we visualize the features of the last convolution before the multitasking segmentation head for FracTAL depth d = 0 (left panel) and d = 10 (right panel). The features at the two depths appear similar, all identifying the regions of interest clearly. To the human eye, in our opinion, the features for depth d = 10 appear slightly more refined than the features for depth d = 0 (e.g. by comparing the images in the corresponding bottom rows). The entropy of the features for d = 0 (entropy: 15.9982) is negligibly higher (+0.00625%) than for d = 10 (entropy: 15.9972), suggesting that the features have the same information content for these two models. We note that, from the perspective of information compression (assuming no loss of informa-

Results
In this section, we report the quantitative and qualitative performance of the models we developed for the task of change detection on the LEVIRCD (Chen and Shi, 2020) and WHU (Ji et al., 2019b) datasets. All inference visualizations are performed with models having the proposed FracTAL depth d = 5, although this is not always the best-performing network.

Performance on LEVIRCD
For this particular dataset, a fixed test set is provided, and a comparison with the methods other authors followed is possible. Both FracTAL ResNet and CEECNet (V1, V2) outperform the baseline with respect to the F1 score by ∼5%.
In Fig. 14 we present the inference of the CEECNet V1 algorithm for various images from the test set. For each row, from left to right, we have: input image at date 1, input image at date 2, ground truth mask, inference (threshold = 0.5), and the algorithm's confidence heat map (this should not be confused with statistical confidence). It is interesting to note that the algorithm has zero doubt in areas where buildings exist in both input images. That is, it is clear our algorithm identifies change in areas covered by buildings, and not building footprints. In Table 1 we present numerical performance results of both FracTAL ResNet as well as CEECNet V1 & V2. All metrics (precision, recall, F1, MCC and IoU) are excellent. The mantis CEECNet for FracTAL depth d = 5 outperforms the mantis FracTAL ResNet by a small numerical margin, but the difference is clear; it can also be seen in the bottom panel of Fig. 17. We should also note that the numerical difference on, say, the F1 score does not translate to equal portions of quality difference in images. That is, a 1% difference in F1 score may have a significant impact on the quality of inference. We discuss this further in Section 9.4. Overall, the best model is mantis CEECNet V2 with FracTAL depth d = 5. Second best is the mantis FracTAL ResNet with FracTAL depth d = 10. Among the same set of models (mantis FracTAL ResNet), depth d = 10 performs best; however, we do not know if this generalizes to all models and datasets. We consider the FracTAL depth d a hyperparameter that needs to be fine-tuned for optimal performance, and, as we have shown, the choice d = 5 is a sensible one, as in this particular dataset it provided us with state of the art results.

Performance on WHU
In Table 2 we present the results of training the mantis network with FracTAL ResNet and CEECNetV1 building blocks. Both of our proposed architectures outperform all other modeling frameworks, although we should stress that each of the other authors followed a different data splitting strategy. With our splitting strategy, we used only 32.9% of the total area for training. This is significantly less than the majority of the other methods we report here, so we should anticipate a significant performance degradation in comparison with them. In contrast, despite the relatively smaller training set, our method outperforms the other approaches. In particular, Ji et al. (2019a) used 50% of the raster for training and the other half for testing (Fig. 10 in their manuscript). In addition, there is no spatial separation between their training and test sites, as there is in our case, and this should work in their advantage. Also, the use of a larger window for training (their extracted chips have spatial dimension 512 × 512) in principle increases performance, because it includes more context information. There is a trade-off here, though, in that using a larger window size reduces the number of available training chips, and therefore the model sees a smaller number of chips during training.

For each row, from left to right: input image at date 1, input image at date 2, ground truth, change prediction (threshold 0.5) and confidence heat map.

Table 1: Model comparison on the LEVIR building change detection dataset. We designate with bold font the best values, with underline the second best, and with square brackets, [ ], the third best model. All of our frameworks (D6nf32) use the mantis macro-topology and achieve state of the art performance. Here evo represents the evolving loss strategy, sta the static loss strategy, and the depth d refers to the F T similarity metric of the FracTAL (attention) layer.
This should improve performance, because there is a tight spatial correlation between two extracted chips that are in geospatial proximity. Cao et al. (2020) used as a test set ∼20% of the total area of the WHU dataset; however, they do not specify the splitting strategy they followed for the training and validation sets. Finally, Liu et al. (2019) used ∼10% of the total area for reporting test score performance. They also do not mention their splitting strategy.
In this table we could not include PGA-SiamNet (Jiang et al., 2020). In Fig. B.23 we plot, from left to right, the test area at date 1, the test area at date 2, the ground truth mask, and the confidence heat map of the predictions. In Fig. 18 we plot a set of examples of inference on the WHU dataset. The correspondence of the images in each row is identical to Fig. 14, with the addition that we denote with blue rectangles the locations of changed buildings (true positive predictions), and with red squares the changes missed by our model (false negatives). It can be seen that the most difficult areas are the ones that are heavily populated/heavily built up, and where the changes are buildings of small area.

The effect of scaled sigmoid on the segmentation HEAD
Starting from an initial value γ = 1 of the scaled sigmoid boundary layer, the fully trained mantis CEECNetV1 model learns parameters that control how "crisp" the boundaries should be, or else, how sharp the decision boundary should be. The deviation of these coefficients from their initial values demonstrates that the network indeed finds it useful to modify the decision boundary. In Fig. 16 we plot the standard sigmoid function (γ = 1) and the sigmoid functions recovered after training on the LEVIRCD and WHU datasets. In both cases the algorithm learns to make the decision boundary sharper. This means that for two nearby pixels, one belonging to a boundary and the other to a background class, the numerical distance between them can be smaller and still achieve class separation, in comparison with the standard sigmoid. Put differently, a small change δx is sufficient to transition between the boundary and no-boundary classes.
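As a minimal sketch of the idea, assuming the scaled sigmoid is parameterized as σ(γx) with a learnable scalar γ (the exact parameterization in our implementation may differ):

```python
import math

def scaled_sigmoid(x, gamma=1.0):
    """Sigmoid with a sharpness parameter gamma.

    gamma > 1 sharpens the decision boundary: a smaller change in x
    is enough to move the output from near 0 to near 1.
    """
    return 1.0 / (1.0 + math.exp(-gamma * x))

# With gamma = 4 the transition around x = 0 is much steeper than
# for the standard sigmoid (gamma = 1).
standard = scaled_sigmoid(0.5, gamma=1.0)  # ~0.622
sharp = scaled_sigmoid(0.5, gamma=4.0)     # ~0.881
```

In a network, γ would be registered as a trainable parameter initialized at 1, so that gradient descent is free to sharpen (or soften) the boundary as training proceeds.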

Qualitative CEECNet and FracTAL performance
In this section we base our comparison on the CEECNet V1 and FracTAL ResNet models with FracTAL depth d = 5. Although both achieve a very high MCC (Fig. 17), the superiority of CEECNet, for the same FracTAL depth d = 5, is evident in the inference maps on both the LEVIRCD (Fig. 20) and WHU (Fig. 21) datasets. This confirms their relative scores (Tables 1 and 2) and the faster convergence of CEECNet V1 (Fig. 10). Interestingly, CEECNet V1 predicts change with more confidence than FracTAL ResNet (Figures 20 and 21), even when it errs, as can be seen from the corresponding confidence heat maps. The choice between the two models should be made with respect to the relative cost of training each model, the available hardware resources, and the performance target.

Qualitative assessment of the mantis macro-topology
A key ingredient of our approach to change detection is that we emphasize the importance of avoiding the difference of features for identifying change. Instead, we propose exchanging information between features extracted from images at different dates through the concepts of relative attention (Section 6.2) and fusion (Listing 3). In this section our aim is to gain insight into the behaviour of the relative attention and fusion layers, and to compare them with the features obtained by differencing the outputs of convolution layers applied to images at different dates. We use the outputs of layers of a mantis FracTAL ResNet model trained on LEVIRCD with FracTAL depth d = 10.
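To make the data flow concrete, the following toy sketch exchanges information between the features of the two dates instead of subtracting them. It substitutes plain softmax channel attention for our FracTAL similarity, and a simple re-weighting for the learned fusion of Listing 3, so it illustrates only the structure of the computation, not the actual implementation; all names here are illustrative.

```python
import numpy as np

def relative_attention(query_feats, keyval_feats):
    """Toy relative attention between features of two dates.

    query_feats, keyval_feats: arrays of shape (C, H, W).
    Queries come from one date, keys/values from the other
    (ratt12: query = date t1, key/value = date t2).
    """
    C, H, W = query_feats.shape
    q = query_feats.reshape(C, -1)           # (C, H*W)
    kv = keyval_feats.reshape(C, -1)
    scores = q @ kv.T / np.sqrt(q.shape[1])  # (C, C) channel similarity
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return (weights @ kv).reshape(C, H, W)

def fuse(f1, f2, att12, att21):
    """Fusion in the spirit of Listing 3: features are re-weighted by the
    exchanged attentions and concatenated, instead of subtracted."""
    return np.concatenate([f1 * (1 + att21), f2 * (1 + att12)], axis=0)

rng = np.random.default_rng(0)
f1 = rng.standard_normal((4, 8, 8))
f2 = rng.standard_normal((4, 8, 8))
fused = fuse(f1, f2, relative_attention(f1, f2), relative_attention(f2, f1))
print(fused.shape)  # (8, 8, 8)
```

The point of the structure is that no feature subtraction occurs anywhere: each date's features are modulated by attention computed against the other date, and the result is fused.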
In Fig. 19 we visualize the features of the first relative attention layers (channels=32, spatial size 256 × 256), ratt12 (left panel) and ratt21 (right panel), for a set of image patches belonging to the test set (size: 3 × 256 × 256). Here, the notation ratt12 indicates that the query features come from the input image at date t1, while the key/value features are extracted from the input image at date t2; the notation for ratt21 is analogous. Starting from the top left corner we provide the input image at date t1, the input image at date t2 and the ground truth mask of change, and after that we visualize the features as single-channel images. Each feature (i.e. image per channel) is normalized to the range [−1, 1] for visualization purposes. It can be seen that, from the early layers, the algorithm emphasizes structures containing buildings and their boundaries. In particular, ratt12 (left panel) emphasizes the boundaries of buildings that exist in both images; it also seems to represent all buildings that exist in both images. The ratt21 layer (right panel) seems to emphasize more the buildings that exist on date 1 but not on date 2. In addition, in both relative attention layers, emphasis is given to roads and pavements.
In Fig. 22 we visualize the difference of the features of the first convolution layers (channels=32, spatial size 256 × 256, left panel) and the fused features (right panel) obtained using the relative attention and fusion methodology (Listing 3). A key difference between the two is that there is less variability within channels in the output of the fusion layer than in the difference of features. To quantify the information content of the features, we calculated the Shannon entropy of the features for each case and found that the fusion features have roughly half the entropy (11.027) of the difference features (20.97). A similar entropy ratio was found for all images belonging to the test set. This means that the fusion features are less "surprising" than the difference features. This may suggest that the fusion provides a better compression of information than the difference of layers, assuming both layers have the same information content. It may also mean that the fusion layers have less information content than the difference features, i.e. that they are harmful to the change detection process. However, if this were the case, our approach would fail to achieve state of the art performance on the change detection datasets. We therefore conclude that the lower entropy value translates to a better encoding of information, in comparison with the difference of layers.

Figure 19: Visualization of the relative attention units, ratt12 (left panel) and ratt21 (right panel), for the mantis FracTAL ResNet with FracTAL depth d = 10. These come from the first feature extractors (channels=32, filter spatial size 256 × 256). Here, ratt12 is the relative attention where the query is built from the input at date t1 and the key/value filters from the input at date t2. In the top left of each panel we show the input image at date t1, the input image at date t2, and the ground truth building change labels, followed by the visualization of each of the 32 channels of the features.
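The entropy comparison can be reproduced along these lines. The paper does not specify the estimator used, so the histogram-based estimate and the bin count below are assumptions, and the feature arrays here are random stand-ins rather than actual network outputs.

```python
import numpy as np

def shannon_entropy(features, bins=256):
    """Shannon entropy (in bits) of the value distribution of a feature
    tensor, estimated from a histogram over all elements."""
    hist, _ = np.histogram(features, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins; 0*log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

# Stand-ins for the two feature tensors being compared
# (in the paper: difference features vs fusion features).
rng = np.random.default_rng(42)
f1 = rng.standard_normal((32, 64, 64))
f2 = rng.standard_normal((32, 64, 64))

diff_entropy = shannon_entropy(f1 - f2)
print(diff_entropy)
```

A lower entropy for the fusion features then indicates a more concentrated, less "surprising" value distribution, which is the quantity compared in the text.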

Conclusions
In this work, we propose a new deep learning framework for the task of semantic change detection in very high resolution aerial images, presented here for the case of changes in buildings. This framework is built on top of several novel contributions that can be used independently in computer vision tasks. Our contributions are:

1. A novel set similarity coefficient, the fractal Tanimoto coefficient, derived from a variant of the Dice coefficient. This coefficient can provide finer detail of similarity, at a desired level (up to a delta function), regulated by a temperature-like hyper-parameter, d (Fig. 2).
2. A novel training loss scheme, where we use an evolving loss function that changes according to learning rate reductions. This helps avoid overfitting and allows for a small increase in performance (Figures 12 & 15). In particular, this scheme provided a ∼0.25% increase in validation accuracy on CIFAR10 tests, and increases of ∼0.9% in IoU and ∼0.5% in MCC on the LEVIRCD dataset.
3. A novel spatial and channel attention layer, the fractal Tanimoto Attention Layer (FracTAL, see Listing 2), that uses the fractal Tanimoto similarity coefficient to quantify the similarity between query and key entries. This layer is memory efficient and scales well with the size of the input features.
4. A novel building block, the FracTAL ResNet (Fig. 4), that has a small memory footprint and excellent convergence and performance properties, outperforming standard ResNet building blocks.
5. A novel building block, the Compress/Expand-Expand/Compress (CEECNet) unit (Fig. 5), that performs better than the FracTAL ResNet (Fig. 17), at a higher computational cost.
6. A corollary that follows from the introduced building blocks: a novel methodology for fusing layers and their corresponding attentions, both for self and relative attention, that improves performance (Fig. 17). This methodology can be used as a direct replacement for concatenation in convolutional neural networks.
7. A novel macro-topology (backbone) architecture, the mantis topology (Fig. 6), that combines the building blocks we developed and is able to consume images from two different dates and produce a single change detection layer. The same topology can be used in general segmentation problems where two somehow correlated input images are mapped to a single semantic map; that is, it can be used for the fusion of features coming from different inputs (e.g. Digital Surface Maps and RGB images).

Figure 22: As in Fig. 19, we plot the difference of the first feature extractor blocks (left panel) vs the first fusion feature extraction block (right panel). The entropy of the fusion features is half that of the difference channels, i.e. there is less "surprise" in the fusion filters than in the difference of filters, for the same trained network.
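For reference, the fractal Tanimoto coefficient of contribution 1 can be sketched as follows. The closed form for T^d and the plain averaging over depths are our reading of the definitions in the main text; treat the exact averaging convention as an assumption.

```python
import numpy as np

def tanimoto(p, l, d=0):
    """Fractal Tanimoto coefficient T^d between predictions p and labels l.

    For d = 0 this reduces to the standard Tanimoto coefficient
    (a variant of the Dice coefficient); increasing d sharpens the
    similarity towards a delta function.
    """
    pl = np.sum(p * l)
    norm = np.sum(p * p) + np.sum(l * l)
    return pl / (2.0**d * norm - (2.0**(d + 1) - 1.0) * pl)

def fractal_tanimoto(p, l, depth=5):
    """Average of T^i over i = 0..depth-1, the form used both in the
    loss and in the FracTAL attention layer."""
    return float(np.mean([tanimoto(p, l, i) for i in range(depth)]))

p = np.array([0.9, 0.1, 0.8])  # predicted probabilities
l = np.array([1.0, 0.0, 1.0])  # ground truth labels
print(tanimoto(p, l, 0), fractal_tanimoto(p, l, 5))
```

A quick sanity check: for a perfect match (p = l) the coefficient equals 1 at every depth d, since the denominator collapses to the inner product itself.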
Putting everything together, all of the proposed networks presented in this contribution, mantis FracTAL ResNet and mantis CEECNet V1 & V2, outperform other proposed networks and achieve state of the art results on the LEVIRCD and the WHU (Ji et al., 2019b) building change detection datasets (Tables 1 & 2). In comparison with state of the art architectures that use atrous (dilated) convolutions, the proposed architectures do not require fine-tuning of dilation rates; they are therefore simpler and easier to set up and train. In this work we did not experiment with deeper architectures, which would likely improve performance further (e.g. D7nf32 models usually perform better), or with hyper-parameter tuning.

Table A.3: CEECNetV1 vs CEECNetV2 vs FracTAL ResNet vs ResNet building block comparison. All building blocks use kernel size k=3, padding p=1 (SAME) and stride s=1. The transition convolutions that halve the size of the features use the same kernel size and padding, but stride s=2. In the following we indicate with nf the number of output channels of the convolution layers, and with nh the number of heads in the multi-head FracTAL module.

Appendix C.3. CEECNet building blocks
In this section we provide, in pseudo-code, the implementation of the CEECNetV1 unit.
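As a structural orientation before the pseudo-code, the compress/expand vs expand/compress data flow of the unit can be sketched as follows. This is only an illustration of the two branches and their merge: all learnable convolutions, normalizations, and the FracTAL attention of the real unit are omitted, and the pooling/upsampling operations stand in for strided and transposed convolutions.

```python
import numpy as np

def compress(x):
    """Halve spatial size by 2x2 average pooling
    (stand-in for a strided convolution)."""
    C, H, W = x.shape
    return x.reshape(C, H // 2, 2, W // 2, 2).mean(axis=(2, 4))

def expand(x):
    """Double spatial size by nearest-neighbour upsampling
    (stand-in for a transposed convolution)."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def ceec_v1_skeleton(x):
    """Skeleton of a CEECNetV1 unit: one branch compresses then expands,
    the other expands then compresses, and the two are merged. The real
    unit merges the branches with learned fusion, not a plain average."""
    branch_ce = expand(compress(x))   # Compress -> Expand
    branch_ec = compress(expand(x))   # Expand -> Compress
    return 0.5 * (branch_ce + branch_ec)

x = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
out = ceec_v1_skeleton(x)
print(out.shape)  # (2, 4, 4)
```

Both branches return features at the input resolution, so the unit can be dropped into the mantis topology wherever a feature extraction block is expected.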