When Deep Learning Meets Data Alignment: A Review on Deep Registration Networks (DRNs)

Registration is the process that computes the transformation that aligns sets of data. Commonly, a registration process can be divided into four main steps: target selection, feature extraction, feature matching, and transform computation for the alignment. The accuracy of the result depends on multiple factors, the most significant of which are the quantity of input data; the presence of noise, outliers and occlusions; the quality of the extracted features; real-time requirements; and the type of transformation, especially those defined by multiple parameters, such as non-rigid deformations. Recent advancements in machine learning could be a turning point in these issues, particularly with the development of deep learning (DL) techniques, which are helping to improve multiple computer vision problems through an abstract understanding of the input data. In this paper, a review of deep learning-based registration methods is presented. We classify the reviewed papers using a framework derived from the traditional registration pipeline in order to analyse the strengths of the new learning-based proposals. Deep Registration Networks (DRNs) try to solve the alignment task either by replacing part of the traditional pipeline with a network or by solving the registration problem end to end. The main conclusions extracted are: 1) learning-based registration techniques cannot always be clearly classified within the traditional pipeline; 2) these approaches allow more complex inputs, such as conceptual models, as well as the traditional 3D datasets; 3) in spite of the generality of learning, the current proposals are still ad hoc solutions; and 4) this is a young topic that still requires a large effort to reach general solutions able to cope with the problems that affect traditional approaches.


Introduction
Registration, both rigid and non-rigid, has already been widely discussed in the computer vision literature; however, it remains a problem of paramount importance in this field. The predominant registration paradigm of the last few decades, based purely on traditional computer vision techniques, is changing. This is mostly because of the growing number of available consumer-grade devices, such as RGB-D cameras and LiDAR sensors, which have increased the interest of cutting-edge applications in leveraging the multimodal data sources provided by these sensors. This has led to a significant increase in the amount of cheaply available data of different modalities, which has to be structured beforehand, either hierarchically or semantically, in order to extract high-level information. This is essential for many systems in which virtual representations of objects or scenes are used. Most of the time, the object or scene is not fully perceived in a single view, or in a single instant if the object deforms. Hence, multiple sensed data sets need to be processed as a whole. This is what we know in the computer vision and pattern recognition literature as point set registration, the process of finding a spatial transformation that aligns two point sets, and, more generally, as data set alignment. Examples where registration is used are countless, but for instance we can name animation [1]; body modeling [2] for pose analysis; medical diagnosis, for example Zeman et al. [3], who registered Computed Tomography images, or Boldea et al. [4], who employed 3D models; robot guidance, e.g. a registered multi-camera setup to guide a robot arm [5]; object classification in assembly lines [6], etc. Registration approaches, framed in the context of the learning-based paradigm, are the subject of this review.
The current context is changing the predominant paradigm in registration techniques in two ways: (1) dealing with a huge amount of raw and unstructured multidimensional data is not straightforward, especially if we have to meet real-time constraints; (2) in light of the success of Deep Learning (DL) in the computer vision field, the large amount of available data satisfies the needs of DL-based approaches, which are well known to be data hungry. This opens a promising avenue for research into registration methods using DL techniques. Currently, existing DL-based techniques for rigid and non-rigid registration, mostly in an n-dimensional space, are far from being fully accurate and reliable. Furthermore, the direct application of DL techniques to the problem of registration is not straightforward; the field's lack of maturity and its ever-changing state, driven by continuous advances, make it difficult to keep up with the latest trends and track them properly.
With the advent of new machine learning techniques, in particular Deep Learning (DL), it is possible to learn representations of data at multiple scales of abstraction [7], which has been useful in many domains. This has had an impact on how problems are addressed. For instance, a traditional computer vision pipeline is composed of different stages: preprocessing, feature extraction, and analysis. From a traditional perspective, the problem is to find the proper features from the input data for each specific problem. With these features, the classifier (or other application) manages the data in a space that may be more suitable for finding a solution. However, with learning techniques, the networks learn to extract features along with their classification, so even if they are not the optimal characteristics by themselves, they are the best suited to the analysis. This means that, in traditional methodologies, the effort is in finding the right features, whereas now the effort is in the choice of a good network architecture and its training data. This makes the last two stages of the traditional computer vision pipeline fuzzier. Hence, the challenges here for researchers are to a) design an appropriate network architecture and b) provide a large-enough training data set so that the network can learn to extract and generalize the features from the input data.
Review Scope, Organization and Terminology

This paper reviews the current state-of-the-art of learning-based approaches to registration. The ability of deep neural networks to generalize from training data and manage geometric properties has created a new subfield at the intersection between learning and registration algorithms. Although some reviews of registration have been published [8,9,10], due to the novelty of this subfield, there are no reviews addressing learning-based approaches for registration. The contributions of this paper are two-fold:
• A clarified framework of registration that encloses both traditional and learning-based approaches.
• A review of the recent works setting out a learning-based approach for registration.

Figure 1 graphically summarizes the scope of this paper for registration. The schematic contains the four main stages of the traditional pipeline: Target Selection (yellow), Feature Extraction (red), Feature Matching (green) and Pose Optimization (blue). At the right end is the final resulting transformation ([R, t]). These stages are further explained in Section 2. Vertically, the schematic is divided into the traditional stages, clearly defined at the top, and, at the bottom, the learning-based approaches reviewed here, represented by a neural network together with the possible input and output data. With this in mind, the inputs for a learning-based proposal can come in different formats, such as point clouds, voxelgrids, meshes, etc. As well as full end-to-end approaches, some methods accept as inputs the results of the Feature Extraction or Matching stages of the traditional pipeline. Something similar occurs with the outputs: the output can be data in a specific format, as well as the result of any of the stages of the traditional pipeline (feature vector, matching vector and transformation).

Figure 1: Schematic of the registration process in the traditional pipeline on the top part, and the learning-based approaches reviewed in this paper at the bottom. The learning-based proposals solve one or more of the traditional phases, and allow different types of inputs depending on the phases addressed or the outputs provided. The conceptual space is the set of parameters learned during the training process, and theoretically it could also be considered as an input for the registration process.

Nevertheless, with the new learning-based approaches, a new, conceptual kind of space appears, which contains learned properties about the object, its materials and their behaviour that can be registered with the input data (e.g. aligning actual input data of a ball with its deflated state restricted by the behaviour and material of the object, rather than aligning with a final deflated target). This conceptual space is given by a neural network and its training process and, theoretically, it could be considered as an input to the registration process, although it is not an input one might use in every registration instance, since it is an internal representation. It allows the network to encode conceptual models such as physical phenomena (e.g. force vectors), symbolic/conceptual information (such as "sporty, comfy") or mathematical rules. Besides, the neural network can perform one or more phases of the traditional pipeline (represented by the coloured rectangles shown in the network in Fig. 1).
Considering all of this, we can see that there are multiple possibilities, combining inputs from different stages and outputs from the learning-based approaches.
In the literature, registration and alignment are used interchangeably. It is also common to find the terms 'reconstruction' or 'shape completion' used as synonyms of registration. Although the result may be the same in some cases, registration aims to find the transformation that aligns input data, while reconstruction and shape completion operate at a higher level and may or may not involve registration. However, with learning approaches the boundary between them is not so clear. A detailed explanation of these terms is given in the following section.
The rest of the paper is organized as follows: Section 2 provides a definition of registration, setting out its different stages; Section 3 introduces deep learning in the context of registration, analyzing the common network architectures being employed and the evolution of the research papers focused on the intersection of those fields; Section 4 summarizes the analyzed works, classifying them according to the traditional registration flow. Finally, Section 5 summarizes the contributions and key issues that arose in the analysis of the different approaches.

Registration Framework
Each and every piece of information perceived from the environment has its own frame of reference, which is usually the location of the sensor or a location established by consensus. When the environment is affected by variations in sensor position, dimensions or time, the perceived data are also affected by those variations. In consequence, an alignment that can compensate for the variations is needed. There are different concepts that are sometimes used interchangeably but have different meanings, including registration, reconstruction and shape completion, amongst others. Registration is the process that aims to compute the alignment between sets of data. Formally, registration is the process that, given two inputs P and Q, computes the transformation that minimizes the alignment error E between P and Q. These inputs are composed of elements ω_i and ϕ_i, respectively (Equation 1):

P = {ω_1, ..., ω_n},   Q = {ϕ_1, ..., ϕ_m}    (1)
The goal of a registration algorithm is to obtain a transformation function χ that minimizes the alignment error between P and Q by evaluating the distance error between each pair of correspondences (ω_i, ϕ_j), as shown in Equation 2:

E_P = min_a Σ_(ω_i, ϕ_j) dist(χ(ω_i, a), ϕ_j)    (2)

Different error and distance functions (dist) can be used, e.g. perpendicular distance rather than Euclidean distance, Huber distance, the L_1 error, etc.

With this equation it is assumed that χ is the function that transforms each element of P according to the transformation parameters a, with the goal of minimizing the error E_P between P and Q.
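As an illustration, the alignment error of Equation 2 can be sketched in a few lines of NumPy. Here χ is assumed to be a rigid transform parameterized by a rotation R and a translation t, and dist is the Euclidean distance; both are example choices for this sketch, not the only options admitted by the formulation:

```python
import numpy as np

def chi(points, a):
    """Rigid transform chi(., a): a = (R, t), a rotation matrix and a translation."""
    R, t = a
    return points @ R.T + t

def alignment_error(P, Q, a, correspondences):
    """E_P from Equation 2: summed Euclidean distance between matched pairs
    (i, j) after transforming P with the parameters a."""
    P_t = chi(P, a)
    return sum(np.linalg.norm(P_t[i] - Q[j]) for i, j in correspondences)
```

For example, if Q is simply P shifted by one unit along z, the parameters a = (I, [0, 0, 1]) with identity correspondences drive the error to zero.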
The term reconstruction is at a higher level. It refers to obtaining a full model of the environment or object from a set of partial data subsets. It can be achieved through registration or through other techniques, such as using generic models as a reference [11] or interpolating the data using NURBS [12]. In contrast, shape completion assumes a model with some incomplete parts, together with data or models that can repair those parts.
In the literature, the terms registration and reconstruction are sometimes employed to refer to the same process. To clarify these two similar but different terms: reconstruction is at a higher level than registration. In other words, a registration process can be part of a reconstruction method, but a reconstruction method need not perform a registration. The relation between registration and shape completion has been defined above. Although these terms have been clearly differentiated in the literature, with the learning-based proposals the distinction between them becomes somewhat fuzzy; for example, in [13] a proposal designed to perform shape completion is tested on registration tasks.
According to Tam et al. [8], the registration process can be divided into three core components that define the sequence of registration algorithms: target selection, correspondences and constraints, and optimization. This sequence has often been used by registration algorithms to find the alignment of 3D data sets. These stages are shown in Fig. 1, and a more detailed classification was presented by Saval-Calvo et al. [14], including pre-processing and post-processing stages.
• Pre-processing. Sometimes the data must be pre-processed to be useful. This stage involves arranging the data to meet the requirements of the algorithms. For instance, for three-dimensional data, filters to smooth the data or remove outliers may be applied. Moreover, some algorithms only accept data in a specific format (for example, a conversion from a point cloud to voxels could be needed).
• Target Selection. The data that will be used as the reference for the registration process is selected here. The functional model indicating the type of transformation to be applied is also defined (e.g. rigid, Euclidean, affine, etc.). Typically, in a registration problem it is possible to differentiate between the data that will remain fixed in terms of coordinate reference and the data that will be moved according to the transformation that aligns both sets. In the literature, different nomenclatures can be found for these fixed/moving roles, such as model/data, anchor/moving or target/source. In any case, there is a target data set which remains intact, acting as the reference for computing the alignment of the rest of the inputs.
• Feature Extraction. A feature is a characteristic element of the data. This stage refers to the process of finding the landmarks that will be used to compute keypoints or feature descriptors. They are usually intended to be invariant to rotations and translations, so that features are recognizable from different points of view. Examples include the closest data point [15] or salient features, such as SIFT [16], 3D features, GMM-based features [17], etc.
• Feature Matching. This step refers to the process of identifying corresponding features between the target and each moving data set. A pair of matched features is called a correspondence.
• Pose Optimization. Here, the algorithm computes the transformation that minimizes a distance function between the correspondences, aligning them into a common reference space. The transformation is also applied to unmatched data. In order to calculate a suitable transformation, it is necessary to compute the correspondences between the inputs, so they must have enough overlapping information to identify the same spatial locations in each one. For this reason, the presence of noise may affect the later stages of the pipeline in Fig. 1.
• Post-processing. This step is highly dependent on the problem itself. It may include a global optimization such as loop closure in SLAM [18], data cleaning in solid mesh estimation, surface extraction, or outlier removal.
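As a concrete illustration of the Pose Optimization stage, the classical closed-form SVD solution (often called the Kabsch algorithm) computes the rigid transform that best aligns two sets of already-matched points. This is only a minimal sketch of one standard choice for this stage, not a method proposed in the reviewed works:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid alignment: returns R, t such that P @ R.T + t best
    matches Q, assuming row i of P corresponds to row i of Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # reflection guard: force det(R) = +1
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

Given perfect correspondences, this recovers the exact rotation and translation in one shot; real pipelines feed it the (noisy) output of the Feature Matching stage.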
These steps depend on how the data are acquired and formatted beforehand, so it is possible to differentiate between structured and non-structured data. Structured data refers to three-dimensional information that is stored in a discretized structure, like depth maps, meshes or voxelgrids where the neighborhood information of each point is known. On the other hand, an unstructured format stores the information without having an established order, for instance a point cloud.
As stated before, we can find two main types of transformations: rigid and non-rigid. In a rigid registration process, the computed transformation is the same for all elements in the data set. On the contrary, in non-rigid registration, transformations are computed for subsets of the data, ranging from isometric transformations, where groups of data are rigidly moved [2], to free-form transformations, where every datum is individually transformed [17].
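The contrast between the two types can be made explicit with a toy sketch (the helper functions are hypothetical, for illustration only): a rigid registration shares one transform across the whole set, while a free-form one assigns each datum its own transform, here reduced to a per-point displacement:

```python
import numpy as np

def apply_rigid(points, R, t):
    """Rigid: one (R, t) shared by every point in the set."""
    return points @ R.T + t

def apply_free_form(points, displacements):
    """Free-form: each point carries its own transform (here simplified to a
    per-point displacement vector, one row per point)."""
    return points + displacements
```

The parameter count makes the difficulty visible: a 3D rigid transform has 6 degrees of freedom regardless of data size, whereas the free-form version has 3 per point.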

Research Challenges
In spite of the large amount of research conducted over a long period in the field of registration, and the many existing approaches, both rigid and non-rigid registration in n-dimensional space (above all, non-rigid registration in 3D space) do not have a unique solution. In this subsection we identify and discuss the most important open research challenges, which are currently the target of ongoing research.

Non-rigid Transformations
Unlike rigid registration, which is based on simple transforms such as rotations, translations or isotropic scaling operations, deformable registration is much more complex, starting from affine transforms (e.g. non-uniform or anisotropic scaling, skews and shears) and proceeding to piece-wise rigid, articulated and free-form deformations. The real world is full of entities that are subject to non-rigid transformations, and these transformations are often unknown. This can lead to inaccurate correspondences between the transformations inferred using parametric and non-parametric models and the transformations that real-world entities are actually undergoing. This issue is directly reflected in a noticeable increase in the number of parameters defining the transformations, thus increasing the impact of noise. Modeling precise non-rigid transformations, while delimiting the complexity of the problem through temporal constraints, is an open challenge for many research areas.

Large Transformation
One problem present in registration algorithms is the ability to manage large transformations. When there is a considerable distance between the input data sets, achieving an accurate result is not straightforward, because most algorithms optimize an error function and may fall into a local minimum without reaching a good-enough result. This can be compounded by other factors such as limited data overlap, varying data sampling rates, ambiguous overlaps, etc. Furthermore, if the transformation is non-rigid, this problem is considerably aggravated, as most methods are based on consistency preservation, and large transforms imply important shape changes.

Real-time Processing
Real-time processing is fundamental for many real-world applications, such as self-driving cars, where decision making has to be quasi-instantaneous to avoid potentially dangerous situations. Currently, existing sensors provide increasingly more information, with higher resolution or point density when dealing with 3D data, and at a higher frequency. Coping with such a huge amount of input data is very time-consuming, especially when trying to model deformations in non-rigid objects. Since the different stages in the traditional registration pipeline are independent, these methods are bound to the total time of the different parts added up. Furthermore, the optimization part is usually iterative, which increases even more the total time needed to obtain the final result. This has paved the way for deep learning-based techniques, which are characterized by low latency (inference time) and a single data analysis pass. Since the complexity of the transformations has a direct effect on the complexity of the model, and therefore on its inference time, it is now essential to find a good balance between the desired transformation complexity and the simplicity of the deep model.

Input Data Size and Structure
Having a vast amount of data available is a double-edged sword: (1) on one hand, it enables deep learning approaches, which are well known to be data hungry; (2) on the other hand, it forces the algorithms to process all the data quickly enough to avoid a bottleneck in the system pipeline. Nowadays, there are not enough datasets with sufficient variety of data to allow generalization; hence, most proposed approaches are problem-specific. Furthermore, input data can be either structured (i.e. it comes with intrinsic relations between data elements) or unstructured. Both options come with related problems, mainly in non-structured datasets, where correspondence or neighborhood estimation becomes more complicated.

Outlier Rejection and Occlusions
The presence of artifacts, such as noise, outliers, occlusions and missing parts, complicates the registration process. On one hand, outliers are points that differ from the rest of the data and may have no correspondence in the other point set, which adds uncertainty to the matching step and, hence, to the final result. Meanwhile, occlusions or missing parts manifest as missing points, for example missing parts of a 3D point mesh. This causes a mismatch between two point sets captured from different points of view. Moreover, most non-rigid transformations, defined by a large number of parameters, suffer from these artifacts, leading to inaccurate transform parameters. Registration algorithms are highly dependent on the quality of the data, as well as on the extracted features and the matching between them. The presence of occlusions, outliers or missing data can prevent achieving a good alignment.

Deep Learning in the Context of Registration
Neural networks are a machine learning paradigm composed of a large number of neurons with activation functions dependent on the inputs, weights and biases. Typically, neurons are organized in layers, and a common neural network consists of an input layer, an output layer, and a number of hidden layers between them. Deep Learning is the subfield that studies Deep Neural Networks (DNNs), which increase the number of hidden layers (and potential layer-to-layer transformations) and allow obtaining multiple levels of abstraction, transforming the data in a non-linear fashion by learning complex functions and transformations [19].
Convolutional Neural Networks (CNNs) are a kind of network that performs convolutions over 2D or 3D data, with the ability to manage data of other dimensionalities, such as temporal sequences. They perform convolution operations and also include pooling layers, in which neighborhoods are aggregated, summarizing the presence of features in the input data. These networks were proposed by Fukushima [20] and are widely used in many tasks, such as image classification [21], semantic segmentation [22] and object manipulation [23,24], among others. An extended review of the history of deep learning and its approaches can be found in Alom et al. [25].
The recent interest in this field largely began with the work of Krizhevsky et al. [26], who proposed the AlexNet model. It attracted the attention of many researchers, as it achieved the best results in the ImageNet challenge with a deeper CNN than LeNet [7], a deep convolutional neural network proposed in 1998 but implemented in 2010.
In a similar way to humans, neural networks are able to understand the input data by extracting an abstract representation of it [19]. Bench-Capon [27] described the knowledge remaining in a learning system after the training process with the following definition: a set of syntactic and semantic conventions that makes it possible to describe things. This representation of knowledge can be understood as a conceptual model of the object. Norman [28] defined conceptual models as accurate, consistent and complete representations of knowledge, coherent with the real world and the rules of physics. There is a gap between an observed phenomenon and its mathematical model. According to Nersessian [29], mental models are located in this gap, but they can be incomplete or unscientific. A mathematical model is also a conceptual model, an external representation that facilitates the comprehension of a teaching system; it is functional and coherent with scientific knowledge [30].
This knowledge can help to solve alignment problems.
Traditional registration approaches face different challenges (as shown in Section 2.1). These challenges lead to one general limitation: the lack of generalization of these algorithms. They are usually highly dependent on the correspondences between the input datasets; if the correspondences are not good, finding an accurate transformation will be difficult. Moreover, in the optimization stage, the accuracy of the result depends on a good initialization, and the process may take several iterations to achieve good performance, or even fail to find a valid result, sinking into a local minimum due to non-convexity. For example, the ICP algorithm and some of its variants need the input data to be relatively close at the beginning of the optimization stage.
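A minimal ICP loop makes the initialization issue tangible: correspondences are re-estimated as nearest neighbours at every iteration, so a poor starting pose yields wrong matches and a local minimum. The sketch below is a bare-bones illustrative version (brute-force matching, closed-form SVD update), not a production implementation or any specific variant from the literature:

```python
import numpy as np

def best_rigid(P, Q):
    """Least-squares rigid transform (SVD) for already-paired points."""
    cP, cQ = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def icp(P, Q, iters=20):
    """Minimal ICP: match each point of P to its nearest neighbour in Q,
    solve the rigid update in closed form, and repeat."""
    R_acc, t_acc = np.eye(3), np.zeros(3)
    for _ in range(iters):
        P_t = P @ R_acc.T + t_acc
        # brute-force nearest-neighbour correspondences
        d = ((P_t[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
        matched_Q = Q[d.argmin(axis=1)]
        R, t = best_rigid(P_t, matched_Q)
        R_acc, t_acc = R @ R_acc, R @ t_acc + t
    return R_acc, t_acc
```

With a small initial misalignment the nearest-neighbour matches are mostly correct and the loop converges quickly; with a large one, the very same loop locks onto wrong matches, which is precisely the limitation discussed above.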
Learning approaches are helping to minimize the effects of these problems in registration. For instance, neural networks are successfully being used for feature extraction. Fan et al. [31] summarize and compare several methods for this task, including some learning-based proposals. The work by Zeng et al. [32] proposes 3DMatch, a descriptor for matching partial 3D data based on AlexNet by Krizhevsky et al. [26]. The descriptor enables better correspondences and, thus, a better initial registration. The 3DMatch descriptor is compared with several state-of-the-art methods, obtaining better results on registration tasks. Yumer and Mitra [33] developed a convolutional neural network that receives a mesh and transforms it to fit specific "semantic" features, modifying the properties of the model. Zhang et al. [34] trained a neural network to retrieve depth information in an active stereo system. Eitel et al. [35] developed a network for object recognition in RGB-D images. All of these learning approaches show the ability of networks to perform geometry-related learning tasks with good results.
With the development of DL, the remaining knowledge defined before as a set of syntactic and semantic conventions can be considered a conceptual model that, in the case of registration processes, can be a target to align with spatial data. Theoretically, the idea of the conceptual model makes it possible to differentiate the input data of a registration process into defined and non-defined models. The defined ones are models that represent specific spatial data (commonly 2D or 3D), while a non-defined model is a generalization of a data set produced by a learning system, e.g. the concept of a ball, the properties that make an object a ball, rather than a specific instance of a ball described by its geometry.
Conceptual models have also been applied in registration, for example in the work of Yumer and Mitra [33], in which the network learns properties of objects, being able to know what a sportier car or a more comfortable chair looks like, and modifies a 3D model to fit those properties while preserving the main features of the original data. With this approach, three combinations of input information are possible: defined model/defined model, defined model/conceptual model, and conceptual model/conceptual model. This taxonomy is shown in Figure 2. The classical registration algorithms fall into the first of these possibilities: one input is used as the target or reference set whilst the other is transformed to be aligned with the first, but always with defined data. By contrast, the use of neural networks for this purpose results in the other combinations, where conceptual models are included. Those models need not be specifically defined; e.g. they can be synthesized by a trained network with the features learned from the training data.
These features can then be used in the registration process afterwards, or even within the same network. In any case, there is no need to know the working space of the network; its internal representation is alien to human knowledge.
The combination of two conceptual models could also become possible with the growth of Imagination Machines, proposed by Mahadevan [36], which aim to provide artificial intelligence systems with flexibility and connections between the learned aspects through training processes not based on labels and classifications of the input data.
In 3D computer vision, inputs are spatial representations, usually in the form of a point cloud, mesh or voxelgrid, representing a 3D model in whole or in part. Traditionally, the inputs of most registration methods have the same dimensionality, e.g. two or more point clouds or voxelgrids in R^3. In medical imaging, it is common to apply registration between images from different modalities, but always with the same dimension, 2D or 3D. The main drawback of applying neural networks to computer vision problems is the amount of memory and resources needed, particularly with multi-dimensional data. For example, focusing on 3D data, some authors have addressed this problem by increasing the discretization level of the inputs to the network [37]. Other works have proposed specific architectures to increase the processing performance for this kind of data without discretization: Wang et al. [38] propose a graph-based module suitable for inclusion in existing network architectures to preserve local geometric features in point clouds, and Shen et al. [39] introduce a modification of PointNet [37] that makes use of the neighbourhood information in a point cloud.

Common Deep Learning-based Models Used for Registration
Before conducting an in-depth analysis of the latest deep learning-based registration approaches, it is essential to shed some light on the most common models that constitute the backbone of most existing architectures. By disentangling the building blocks of existing approaches, we can better analyze the contributions made in this field. In this subsection, we define the architectures that represent the foundations of DL-based registration models.

Convolutional Neural Network / Fully Convolutional Network
Convolutional Neural Networks (CNNs) have shown great success in a wide range of computer vision and image processing tasks, such as image recognition, semantic segmentation, object detection and future frame video prediction, among others; this is no less true in the field of registration, where learning-based models currently demonstrate a leap over traditional approaches. CNNs are primarily used when dealing with n-dimensional inputs, e.g. 2-dimensional or 3-dimensional data. The core building block of a CNN is the convolutional layer. A representation of this layer can be observed in Figure 3, where a dot product is performed between two matrices I and K, where I is the input 2D image and K is a set of learnable parameters, also known as a kernel. The kernel is convolved across the width and height of the input I, resulting in a 2-dimensional activation map, which represents a feature learned from the input data, such as edges, colors, orientation, etc. By stacking many convolutional layers we perform a hierarchical extraction of features, resulting in a Deep Convolutional Neural Network (DCNN). Networks composed only of convolutional layers are called Fully Convolutional Networks (FCNs). A generic CNN may also include other kinds of layers, such as pooling layers, fully connected layers or softmax layers.
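The sliding dot product described above can be sketched directly in NumPy. Strictly speaking, most deep learning frameworks implement cross-correlation rather than kernel-flipped convolution; this minimal 'valid'-mode sketch follows that convention:

```python
import numpy as np

def conv2d(I, K):
    """'Valid' cross-correlation of a 2D input I with kernel K: slide K over I
    and take the dot product at each position, producing an activation map."""
    h, w = K.shape
    H, W = I.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(I[i:i + h, j:j + w] * K)
    return out
```

For example, the kernel K = [[-1, 1]] responds only where the intensity changes horizontally, i.e. it acts as a vertical-edge detector, illustrating how a learned kernel comes to represent a feature such as an edge.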
Despite their numerous advantages, the capacity of generic CNNs to be spatially invariant to the input data is limited. In other words, learning representations invariant to translation, rotation and, more generally, affine transformations is a great challenge for vanilla CNNs. This limitation is a big hurdle for the registration process, specifically when dealing with 3D point clouds. To address this problem, pooling operations (e.g. max-pooling) are used to achieve some degree of translation invariance by reducing the scale of the feature maps. However, this implies losing spatial information, mainly because of the local receptive fields. These limitations have encouraged authors to propose alternatives such as Spatial Transformer Networks, or to directly use a different architecture as a starting point, e.g. Generative Adversarial Networks (GANs).

Siamese CNN
Siamese networks are neural networks containing two or more symmetrical models, i.e. models with the same architecture configuration and number of parameters, which are jointly updated and shared. These networks are gaining increasing interest in fields where two inputs must be related or their similarities found, e.g. signature verification, image recognition with one-shot learning techniques, and facial recognition, to name a few. They are also an option for registration purposes due to their ability to jointly process multiple inputs under the same weight space.
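The shared-weight principle can be sketched as follows (a toy one-layer embedding of our own design; real Siamese branches are deep networks, but the weight sharing is the same):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16)) * 0.1  # ONE weight matrix, shared by both branches

def branch(x, W):
    """A single symmetric branch: linear map followed by a ReLU embedding."""
    return np.maximum(W @ x, 0.0)

def siamese_distance(x1, x2, W):
    """Euclidean distance between the two shared-weight embeddings."""
    return np.linalg.norm(branch(x1, W) - branch(x2, W))

a = rng.normal(size=16)
b = a + 0.01 * rng.normal(size=16)   # near-duplicate of a
c = rng.normal(size=16)              # unrelated input
d_ab = siamese_distance(a, b, W)     # small: similar inputs map close together
d_ac = siamese_distance(a, c, W)     # larger: dissimilar inputs map far apart
```

Because both inputs pass through the same weights, the distance is symmetric and comparable across pairs, which is what makes the architecture useful for matching.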

Generative Adversarial Networks
Generative Adversarial Networks (GANs), introduced by Goodfellow et al. [40], are used as a building block of many learning-based reconstruction approaches. These networks consist of two models, a generator and a discriminator, trained in an adversarial fashion. In other words, the generator tries to produce new samples that resemble the real data distribution in order to fool the discriminator, while the discriminator tries to distinguish real samples from generated ones. This is mathematically formulated as the minimax game

min_G max_D V(D, G) = E_{x~p_data(x)}[log D(x)] + E_{z~p_z(z)}[log(1 - D(G(z)))]   (3)

where the discriminator D(x; θ_d) outputs the probability that x came from the real data distribution (D(x) → 1) rather than from the generator distribution (D(x) → 0), and the generator G(z; θ_g) is a differentiable function with parameters θ_g that aims to generate, from a noise input z, new samples matching the real data distribution.
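The roles in the minimax game can be illustrated numerically. The 1D discriminator and generators below are our own toy assumptions (not from [40]), chosen only to show that a generator matching the data distribution drives the value function V(D, G) down, i.e. it fools the fixed discriminator:

```python
import numpy as np

rng = np.random.default_rng(1)

def D(x):
    """Toy fixed discriminator: sigmoid score, high for samples near the real mean 3."""
    return 1.0 / (1.0 + np.exp(-(3.0 - np.abs(x - 3.0))))

def value(D, G, n=100_000):
    """Monte Carlo estimate of V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))]."""
    x = rng.normal(3.0, 0.5, n)   # "real" data distribution p_data
    z = rng.normal(0.0, 0.5, n)   # noise prior p_z
    return np.mean(np.log(D(x))) + np.mean(np.log(1.0 - D(G(z))))

v_bad_G = value(D, lambda z: z)         # generator far from the data
v_good_G = value(D, lambda z: z + 3.0)  # generator matching the data mean
```

Since the generator minimizes V while the discriminator maximizes it, the generator that matches the data yields the lower value.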

Spatial Transformer Networks
Spatial Transformer Networks (STN) represent a great alternative for making CNNs invariant to affine transformations. Jaderberg et al. [41] presented the Spatial Transformer (ST) module which, inserted into an existing deep network such as a CNN, spatially transforms the feature maps for each input image. In other words, the feature maps are conditioned on a particular input and change accordingly. The ST module regresses the parameters of a single affine transformation that is applied to the whole input image or to an intermediate feature map.
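The sampling step at the heart of an ST module can be sketched as follows. This is a simplified NumPy version of our own (real ST modules implement the same grid generation and bilinear sampling differentiably, e.g. on feature maps rather than a single image):

```python
import numpy as np

def affine_warp(img, theta):
    """Warp a 2D image by a 2x3 affine transform theta: generate a sampling
    grid in normalized [-1, 1] coordinates and bilinearly interpolate."""
    H, W = img.shape
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            # target pixel in normalized coordinates
            y = 2.0 * i / (H - 1) - 1.0
            x = 2.0 * j / (W - 1) - 1.0
            # source coordinates produced by the regressed transform
            xs = theta[0, 0] * x + theta[0, 1] * y + theta[0, 2]
            ys = theta[1, 0] * x + theta[1, 1] * y + theta[1, 2]
            # back to pixel coordinates, then bilinear interpolation
            u = (xs + 1.0) * (W - 1) / 2.0
            v = (ys + 1.0) * (H - 1) / 2.0
            u0, v0 = int(np.floor(u)), int(np.floor(v))
            du, dv = u - u0, v - v0
            for vv, uu, w in [(v0, u0, (1-du)*(1-dv)), (v0, u0+1, du*(1-dv)),
                              (v0+1, u0, (1-du)*dv), (v0+1, u0+1, du*dv)]:
                if 0 <= vv < H and 0 <= uu < W:
                    out[i, j] += w * img[vv, uu]
    return out

img = np.arange(16.0).reshape(4, 4)
identity = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
warped = affine_warp(img, identity)  # identity parameters leave the image unchanged
```

In an STN, theta is not fixed but regressed by a small localization network, so the warp is conditioned on the input.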

Shape Deformation Networks
Groueix et al. [42] propose Shape Deformation Networks (SDN), which learn to align input shapes through a template. They employ an encoder-decoder architecture that, given two inputs, extracts a template from one of them and then aligns the template with the other input. This is performed by parametrically encoding the surface in a template and extracting a global feature vector that parameterizes the transformation between the template and the input shape. This encoder-decoder architecture is trained end-to-end. The encoder is a simplified version of PointNet [37], while the decoder is a multilayer perceptron with hidden layers followed by a hyperbolic tangent activation function.
This architecture is mainly tested on human shapes; it is robust to different types of noise and generalizes to other kinds of shapes.

Autoencoders
The autoencoder (AE) concept was first introduced in the 1980s by Rumelhart et al. [43] and further extended for classification by Hinton and Zemel [44]. Currently, autoencoders are used in deep learning approaches in different scenarios, mainly to codify inputs for classification. An autoencoder is a pair of two connected networks, an encoder and a decoder. The encoder takes an input and converts it into a codified representation, which the decoder uses to reconstruct the original input. Several variants exist, such as the contractive autoencoder, the denoising autoencoder, and the well-studied variational autoencoder (VAE).
AEs can be understood as a nonlinear generalization of PCA: with linear activations, the hidden units span the same space as the first principal components. The fundamental problem with autoencoders in generation is that the latent space to which they map their inputs, and where their encoded vectors lie, may not be continuous or allow easy interpolation.
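A minimal linear autoencoder makes the encode/decode round trip concrete (a toy sketch with arbitrary sizes of our choosing; deep AEs simply replace the linear maps with nonlinear networks):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 8)) @ rng.normal(size=(8, 8))  # correlated 8-D data
We = rng.normal(size=(8, 3)) * 0.1   # encoder weights: 8 -> 3 bottleneck code
Wd = rng.normal(size=(3, 8)) * 0.1   # decoder weights: 3 -> 8 reconstruction

def loss(We, Wd):
    """Mean squared reconstruction error of the encode/decode round trip."""
    return np.mean((X @ We @ Wd - X) ** 2)

loss_before = loss(We, Wd)
lr = 1e-3
for _ in range(200):                   # plain gradient descent on the MSE
    Z = X @ We                         # codes (the bottleneck representation)
    dR = 2.0 * (Z @ Wd - X) / X.size   # gradient of the loss w.r.t. the reconstruction
    gWd = Z.T @ dR                     # gradient w.r.t. decoder weights
    gWe = X.T @ (dR @ Wd.T)            # gradient w.r.t. encoder weights
    Wd -= lr * gWd
    We -= lr * gWe
loss_after = loss(We, Wd)
```

With linear activations, as here, the learned bottleneck spans the leading principal subspace of the data, which is the PCA connection noted above.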

Developments Relevant to Registration
The number of works addressing registration with learning techniques has increased in recent years. To identify them, strategic searches have been performed in Scopus, Web of Science (WoS) and ArXiv. The results obtained from Scopus and WoS come from indexed journals, which means that they have passed a peer review process. However, most of the works in this review have been reached through ArXiv, which is a preprint repository without peer review. This is a double-edged sword: due to the lack of peer review, ArXiv receives novel works before the indexed sources, but an extra effort is required to review these works because their quality is not guaranteed. Figure 4 shows the research papers published at the intersection of deep learning and registration over the last few years in each repository. It is noticeable how the number of publications has increased over the last three years. It is also possible to observe a delay of around one year between ArXiv and the other sources. The works published in indexed sources will probably increase significantly in 2020.

Review of Learning-Based Approaches for Registration
To show the advancements of neural networks applied to registration problems, an analysis of recent works at the intersection of these fields is performed. In order to establish a comparative framework between these new approaches, a workflow has been extracted from traditional solutions, as introduced in Section 1. This workflow is divided into four stages: target, features, matching and optimization. In this section, we analyze learning approaches that affect some or all of these stages of the workflow.
The reviewed methods are shown in Table 1, classified according to the traditional phases of a registration workflow. The first columns are the application, inputs, outputs, the employed datasets and the architecture of each method. The final columns indicate the parts of the traditional registration process they affect. The target column refers to whether the method needs anchor data as a target to perform the alignment. If the method does not have a checkmark in this column, it requires that data as input; otherwise, the target is a conceptual model inside the network. This is possible when the generalization of the training data is implicit in the network knowledge, meaning that the main properties of the inputs have been learned by the network. For example, it is possible to think of a specific instance of a chair, or to think of a chair as a concept, with those properties that a chair must fulfill to be considered as such. The feature column indicates the ability of the method to find features in the data using a neural network, as in the work of Ofir et al. [45]. The next step in the workflow is the matching between features. Some proposals train a network to check the accuracy of the correspondences; for instance, Pais et al. [46] propose a network to align two sets of data given the features and the matching between them, which can determine whether the correspondences are correct and remove some of them if necessary. The last column indicates the ability of the learning approach in each method to manage the geometric properties that align sets of data, such as computing the camera pose [60] or the transformation parameters [68].
In the recent state-of-the-art methods, we find various proposals that carry out the whole registration, i.e. they cover the main parts of the traditional pipeline of registration, as well as other proposals covering certain stages of the pipeline.
This classification of the analyzed works has been made as a way to compare them in a common frame from a traditional perspective of registration methods, going into detail on the key aspects of each method according to the stage where it contributes.

Target Level
At the target level there are methods that generalise from the training process, exploring the idea of the conceptual model. This makes it possible to register data using the knowledge captured by the learning approach. For instance, the work of Yumer and Mitra [33] is an end-to-end approach for semantically deforming 3D shapes through free-form deformation using lattices. It does not use a target model to deform a mesh; the properties of the target are learned by the proposed network. The network is able to perform non-rigid deformations over 3D models to fit a given semantic property. As a result, it provides the deformed 3D model as well as the deformation flow of the data to fit that model while preserving the original details. Similarly, a key aspect of the unsupervised learning approach proposed by Ding and Feng [60] is that there is no target for the registration process. The network is capable of locating each input point cloud in a global space, solving SLAM problems in which multiple point clouds have to be registered rigidly. The employed architecture is discussed in the following sections.
Adversarial training (AT) is used in some works for this purpose. For instance, the work of Wang and Fang [52] employs an adversarial approach with CNNs to reconstruct a 3D model of an object given only a 2D image of it. The key aspect of this work is the combination of a 2D autoencoder-based network with a deconvolutional network: the first transforms the input image to the latent space, while the second transforms the latent space into 3D space, acting as a 3D generator. The result is an unsupervised generative neural network that accurately predicts 3D volumetric objects from single real-world 2D images. The network has learned multiple objects and internally performs the registration between the image and the conceptual model. In a similar way, the proposal of Hermoza and Sipiran [71] also uses a GAN for predicting the missing geometry of damaged archaeological objects, providing only the reconstructed object in a voxel grid format and a label designating its class. Its network architecture combines a completion loss with an improved Wasserstein GAN loss.
Smirnov et al. [100] propose a method to generate a 3D model from a 2D sketch. The 3D models are defined by a set of parametric patches. They employ an encoder-style architecture using convolutional layers and residual blocks that generates a series of 3D patches from the sketch; then a set of MLPs carries out the intersection between patches. In this work, registration between different spaces is performed with the provided sketch and the internal knowledge that comes from the training procedure. Similarly, the generative model of Yang et al. [101] uses a variation of an autoencoder architecture to generate 3D point clouds by modeling them as a distribution of distributions. Concretely, their method learns the distribution of shapes at the first level, and the distribution of points given a shape at the second level. As a result, the method is able to generate points forming a given shape by parameterizing the transformation of points from an initial Gaussian distribution. Moreover, variational autoencoders are being used in adversarial training frameworks, as in the work of Wang et al. [91]. In this case they are employed for predicting the structural deformation produced by forces, given a single depth image and the conditions of the input, which include properties of the material, the strength of the force, its location, etc. The generator predicts the force over a 3D model and the discriminator, used for training, must determine whether the applied force comes from the generator or from the ground truth. This approach enables the network to learn non-rigid deformations, and it can generalize the deformations to unknown objects taking material properties into account. Other proposals train a variational autoencoder with graph convolutional operations to complete missing data from partial body shapes while dealing with non-rigid deformations [13].
They identify the output space of the generator that best aligns with the partial input: partial shapes are completed by deforming a randomly generated shape to align with the partial input. This approach is robust to non-rigid deformations and has the ability to reconstruct missing data, and the encoder-decoder architecture shows an understanding of the topology.

Feature and Matching Level
Correspondence identification between two point sets is one of the main steps in the traditional registration pipeline, and it is crucial for the quality of the alignment since the optimization will ultimately rely on it. A work classified in the Feature column does not necessarily compute the features of the input data explicitly; it means that we assume that at some point the neural network performs this operation, whether or not it provides the features as an output. In this way, the feature computation can be differentiated into:
• Explicit. The features are specifically expressed at some point of the method and can be checked. For instance, they are provided at the end of the method, or as the output of one of the neural networks in works that combine several networks.
• Implicit. It is assumed that the network architecture as a whole performs the feature computation at some point, but this cannot be strictly verified.
Learning approaches have demonstrated successful results in feature extraction and matching for registration purposes. Autoencoders have been used for feature extraction. For instance, Elbaz et al. [49] use a Deep Auto-Encoder (DAE) in their point cloud registration proposal to extract low-dimensional descriptors from large-scale point clouds. The training of the DAE is unsupervised, and it is able to extract a compact representation from depth maps that captures the significant geometric properties of the input data. Groueix et al. [42] introduce Shape Deformation Networks (SDN) in an encoder-decoder architecture for matching deformable shapes, where the encoder extracts a global shape descriptor from a 3D model, while the decoder transforms the extracted descriptor into another model. The SDN learns to deform a template shape to align with targets under articulation constraints; concretely, the encoder learns the deformation parameters and degrees of freedom to deform the template. This work shows that an encoder-decoder architecture generating human shape correspondences can compete with state-of-the-art methods.
Convolutional Neural Networks have also been used for feature extraction. Hanocka et al. [69] propose ALIGNet, an unsupervised network to align either 2D or 3D shapes with an incomplete target. The network learns to extract the features to match both shapes and to compute Free Form Deformation (FFD) grids. It is trained with a shape alignment loss, comparing the overlap between the source and the target to learn the FFD parameters. Ofir et al. [45] have developed a learning-based method to register multi-spectral images (visible and near-infrared). They employ a learning approach for extracting features from both images and matching them. For that purpose, their proposal is based on an asymmetric (different weights) Siamese Convolutional Neural Network, with one branch for each spectral channel; the networks minimize the Euclidean distance between the two descriptors. With a similar network architecture, Yew and Lee [74] propose 3DFeat-Net, which finds features and descriptors as well as correspondences for a later registration. They use coarsely annotated point clouds with GPS/INS absolute pose. It is based on a three-branch Siamese architecture that uses PointNet++ [105].
Each branch takes an entire point cloud as input. The network is trained with a set of triplets containing the anchor, positive and negative point clouds. Positive point clouds are those within a distance threshold of the anchor, and negative point clouds are far away from it. Each branch has a detector and a descriptor network, both sharing the same inputs. The detector network predicts an orientation and an attention weight for each branch. Then, the descriptor network rotates the input to a canonical configuration and computes the features that will be aligned with the other branch through a triplet loss. That loss aims to minimize the difference between the anchor and the positive point cloud, and maximize the difference between the anchor and the negative point cloud.
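The triplet objective just described can be sketched on descriptor vectors (a generic margin-based formulation with made-up values; the exact loss of each cited work may differ):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet margin loss on descriptor vectors: pull the positive towards
    the anchor, push the negative at least `margin` further away."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

anchor   = np.array([0.0, 0.0])
positive = np.array([0.1, 0.0])   # descriptor of a nearby point cloud
negative = np.array([3.0, 0.0])   # descriptor of a far-away point cloud
l_easy = triplet_loss(anchor, positive, negative)   # constraint satisfied: zero loss
l_hard = triplet_loss(anchor, negative, positive)   # constraint violated: positive loss
```

Minimizing this loss over many triplets shapes the descriptor space so that matching point clouds end up close together.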
To address the problem of inaccurate correspondences, Schaffert et al. [99] employ a modified PointNet [37] architecture for weighting individual correspondences in a 2D/3D rigid registration process on X-ray images. They choose PointNet because its basic idea is to process points individually to obtain global information. The authors include a second MLP which processes correspondences containing global and local information. This modified network is able to weight individual correspondences based on their geometrical properties and similarity, as well as on global properties.
Chang and Pham [93] presented a 3D point set registration framework with two stages to address the problem of coarse-to-fine registration. Two descriptors are proposed, one for coarse and one for fine orientation extraction: the SSPD, which is a normalized voxelgrid, and the 8CBCP, which describes the orientation using an 8dx3 matrix obtained from the data in the eight subquadrants of the bounding box around the 3D points. Two consecutive CNNs use these descriptors. The first CNN receives the SSPD descriptor as input and estimates the coarse alignment. Then, the 8CBCP descriptor is computed over the output and fed into a second CNN that performs a more accurate alignment. In this proposal, the CNNs only estimate the rotation; the translation is obtained afterwards.
Convolutions used in most deep learning networks operate over a neighborhood of the data; thus, structured inputs are required. 3D point clouds are unorganized data sets that are challenging for convolution-based networks, a problem that has driven much research on this topic. Some state-of-the-art proposals tackle this problem by voxelizing the point cloud [106], but these approaches are not efficient since points are sparse, a large percentage of voxels are empty, and details can be lost. Others try to extract geometric features from point clouds, such as Xu et al. [107], who use so-called SpiderConv filters, parametrized functions of a specific radius applied over the point cloud. The ELF-Nets of Lee et al. [108] propose the Extended Laplacian Filter, a combination of a two-state filter (one state for the center point and one for its neighbors) with a scalar weighting function that represents the relative importance of the points; this approach uses fewer parameters than SpiderConv. For managing 3D data, Zeng et al. [32] employ a reduced set of voxels of TDF (truncated distance function) containing an interest point of a point cloud. The TDF is the distance from the center of the voxel to the nearest point. This is used as input to a convolutional network which extracts a 512-dimensional feature representation. The result is a geometric descriptor which the network is able to generalize to other tasks and resolutions. But according to Liu et al. [83], CNNs cannot provide good results when working with point clouds due to their irregular structure. For this reason, they employ a modified version of the PointNet++ [105] architecture, a network that learns hierarchical features. Building on it, they propose an architecture that learns to predict scene flow as translational motion vectors for each point. The proposed architecture has three modules: point feature learning, point mixture, and flow refinement.
It includes a flow embedding layer that learns to aggregate geometric similarities and spatial relations of points for motion encoding. Pais et al. [46] presented a network architecture with two main blocks: a classification block, fed with pairs of corresponding 3D points, that produces features for each correspondence using 12 ResNets [109] and removes outlier correspondences; and a registration block that takes the resulting features and produces a six-variable output for rotation and translation, obtained with a context normalization layer along with a convolutional layer and two fully connected layers. This method works on point correspondences; it is efficient and outperforms traditional approaches.
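The TDF voxel input mentioned above can be sketched as follows (a brute-force version of our own with made-up grid parameters, for clarity rather than efficiency; not the implementation of Zeng et al. [32]):

```python
import numpy as np

def tdf_grid(points, grid_min, voxel_size, dims, trunc):
    """Truncated distance function: for each voxel, the distance from its
    centre to the nearest point of the cloud, truncated at `trunc`."""
    grid = np.empty(dims)
    for idx in np.ndindex(*dims):
        centre = grid_min + (np.array(idx) + 0.5) * voxel_size
        d = np.min(np.linalg.norm(points - centre, axis=1))
        grid[idx] = min(d, trunc)  # cap the distance so far-away voxels saturate
    return grid

pts = np.array([[0.5, 0.5, 0.5]])                       # a single 3D point
g = tdf_grid(pts, np.zeros(3), 1.0, (2, 2, 2), trunc=1.0)
```

The resulting dense grid is regular, so it can be fed directly to a 3D convolutional network even though the original point cloud is unorganized.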

Transformation Level
Some works have successfully employed neural networks for learning and applying geometric transformations, several of them using GAN architectures or variants thereof. For instance, Lin et al. [79] demonstrated good results using neural networks to find realistic geometric transformations for 2D image compositing. Image compositing refers to overlapping images coming from different scenes, so achieving good realism requires a good transformation to minimize appearance and geometric differences. For this purpose, they propose a GAN architecture using STNs, named ST-GAN. According to its authors, this idea could be extended to other image alignment tasks; STNs have shown good results in resolving geometric variations, so with this architecture the network learns to perform realistic geometric warps. This idea could perhaps also be extended to 3D data. Yan et al. [90] also make use of GANs to carry out the registration of magnetic resonance (MR) and transrectal ultrasound (TRUS) images and to evaluate the provided result: the generator network provides the transformation parameters to align both inputs, whilst the discriminator performs a quality evaluation of that alignment. Mahapatra et al. [85] use GANs for deformable multimodal medical image registration in 2D; the network outputs a transformed image along with a deformation field. Similarly, in three-dimensional space, Hermoza and Sipiran [71], referenced above, perform the reconstruction of incomplete archaeological models also using GANs, in which the generator network provides a reconstructed model. Ding and Feng [60] manage multiple point cloud registration using DNNs. They approach this problem with two networks: a localization network, named L-Net, and an occupancy map network, M-Net. The L-Net estimates the sensor pose for a given point cloud, sharing some optimization parameters between the input point clouds.
The goal of this network is to estimate the sensor pose in a global frame. To do so, the L-Net is divided into a feature extraction module followed by an MLP that outputs the sensor pose. The feature extraction module depends on the input format of the point cloud: if it is organized, a CNN is employed; if not, the features are extracted using PointNet [37]. Later, the M-Net receives those location coordinates in the global space and retrieves the discrete occupancy map. Since the L-Net locates each input point cloud in a global space, there is no target for the alignment. With a similar architecture, Wang et al. [103] presented 3DN, a combination of PointNet and MLPs that deforms 3D meshes to resemble a target, given in the form of a 2D image or a point cloud, as closely as possible while preserving the properties of the source. The proposal extracts global features from both source and target inputs using a CNN/PointNet. Then, those features are used to estimate the per-vertex displacement with an 'offset decoder'. To overcome the problem of tessellation differences, an intermediate sampled point cloud is computed from both source and target. They employ a combination of four loss functions, measuring the similarity between the deformed source and the target, symmetry, local geometric details and self-intersections. This work proposes an end-to-end network architecture for mesh deformation.
Using autoencoders, Groueix et al. [42], with their SDNs introduced before, replicate the shape of a body previously encoded in a given template. For 3D medical image non-rigid registration, Kuang and Schmah [77] employ an architecture inspired by STNs, extending the works of Shan et al. [110] and Balakrishnan et al. [111]. The network takes a pair of volumes and predicts the displacement fields needed to register the source to the target. According to the authors, it improves on the results of U-net [112] and VoxelMorph [111]. This method produces deformations with fewer regions of non-invertibility, where the surface folds over itself. To achieve this, they employ an explicit anti-folding regularization to penalize foldings, which are spatial locations where the deformation is non-invertible, indicated by a negative determinant of the Jacobian matrix.
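The anti-folding idea can be sketched in 2D with finite differences (a simplified version of our own, not the regularizer of [77]): the mapping is x -> x + displacement, and voxels whose Jacobian determinant goes negative are penalized.

```python
import numpy as np

def folding_penalty(u, v):
    """Anti-folding regularizer for the 2D deformation x -> x + (u, v):
    penalize locations where det(J) of the mapping is negative, i.e. where
    the deformation folds space over itself."""
    u0, u1 = np.gradient(u)            # du/dx0, du/dx1 (finite differences)
    v0, v1 = np.gradient(v)            # dv/dx0, dv/dx1
    detJ = (1.0 + u0) * (1.0 + v1) - u1 * v0   # det of J = I + grad(displacement)
    return np.mean(np.maximum(0.0, -detJ))     # only negative determinants cost

n = 16
x0, x1 = np.meshgrid(np.arange(n, dtype=float),
                     np.arange(n, dtype=float), indexing="ij")
smooth = folding_penalty(0.1 * x0, 0.1 * x1)           # gentle expansion: no folding
folded = folding_penalty(-2.0 * x0, np.zeros((n, n)))  # reflects axis 0: folds everywhere
```

Added to the registration loss, such a term discourages the network from predicting non-invertible deformations.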
With convolutional networks, Jack et al. [95] perform 3D reconstruction from a single 2D image by learning to apply large deformations and produce compelling mesh reconstructions through inferred Free Form Deformation (FFD) parameters. They employ a lightweight CNN based on the MobileNet architecture [113] to infer the FFD parameters that deform a template into a 3D mesh of the given image. As a result, the network learns how to deform a given template to match features present in a 2D image, with finer geometry than other methods working with voxelgrids and point clouds, since no discretization is involved.
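Free Form Deformation itself can be sketched in 2D with a Bernstein tensor-product lattice (the generic textbook formulation, with control points of our choosing; not the network of Jack et al. [95], which only regresses the lattice offsets):

```python
import numpy as np
from math import comb

def ffd_2d(points, lattice):
    """Free Form Deformation of 2D points in [0,1]^2 by an (l+1)x(m+1) lattice
    of control points, using the Bernstein tensor-product basis."""
    l, m = lattice.shape[0] - 1, lattice.shape[1] - 1
    out = np.zeros_like(points)
    for k, (s, t) in enumerate(points):
        p = np.zeros(2)
        for i in range(l + 1):
            bi = comb(l, i) * s**i * (1 - s)**(l - i)      # Bernstein basis in s
            for j in range(m + 1):
                bj = comb(m, j) * t**j * (1 - t)**(m - j)  # Bernstein basis in t
                p += bi * bj * lattice[i, j]
        out[k] = p
    return out

# Control points on the regular grid give the identity deformation
grid = np.array([[[i / 2.0, j / 2.0] for j in range(3)] for i in range(3)])
pts = np.array([[0.25, 0.75], [0.5, 0.5]])
same = ffd_2d(pts, grid)                                 # unchanged points
moved = ffd_2d(pts, grid + np.array([0.1, 0.0]))         # shifted lattice moves all points
```

Because the deformation is fully determined by the small set of lattice offsets, a network only needs to regress those few parameters to deform an entire mesh smoothly.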
Guan et al. [94] propose a multi-channel CNN (MCNN) for deformable registration of CT scans with digital subtraction angiography (DSA) images of the cardiovascular system. The network is composed of several sub-networks that converge before the fully connected layers. They employ a CNN model modified from the VGG network, combined with a vascular diameter variation model, to directly regress and predict the transformation parameters. With this architecture, each channel of the MCNN processes a different phase of the vascular deformation cycle, comparing the results of each one and choosing the four optimal ones. Li and Fan [51] employ FCNs to optimize and learn spatial transformations between pairs of images to be non-rigidly registered. Their method works with medical images at the voxel level and, according to the authors, improves on the results of STNs, which cannot manage small transformations. The spatial transformation between pairs of images is obtained directly by maximizing an image-wise similarity metric, similar to traditional approaches. The use of FCNs facilitates the voxel-to-voxel prediction of deformation fields, which also makes it possible to learn small transformations.
Gundogdu et al. [68] propose a method for 3D garment fitting on bodies. To extract global features of the body model they employ a PointNet [37], but with leaky ReLUs with a slope of 0.1. A second stream, composed of six residual blocks, extracts features from the garment mesh and also takes the previous global body features as input. The features provided by both networks are then merged by four Multi-Layer Perceptron (MLP) blocks shared by all points; the final MLP block outputs a vector with the 3D translation information. With this method, the authors achieve results nearly as accurate as a Physics-Based Simulation (PBS), but less time-consuming.
Similarly, PointNetLK [92] is a method for 3D rigid registration that integrates a modification of the Lucas & Kanade (LK) [114] algorithm with PointNet. The process is mainly divided into two steps: first, the two 3D point sets are passed through a shared Multi-Layer Perceptron and a symmetric pooling function; second, the transformation is obtained and applied to the moving point cloud. The whole procedure is repeated iteratively until a minimum quality threshold is reached. According to the authors, this method exhibits remarkable generalization to unseen objects and shape variations, since the alignment process is encoded in the network architecture, which only needs to learn the PointNet representation. Wang and Fang [104] presented CPD-Net, a network architecture that performs non-rigid registration under the concept of learning a displacement vector function that estimates the geometric transformation. The pipeline is decomposed into three main components: 'Learning Shape Descriptor', a Multi-Layer Perceptron (MLP); 'Coherent PointMorph', a block of three MLPs fed with the two descriptors concatenated with the source data points; and 'Point Set Alignment', where the loss function is defined to determine the quality of the alignment. Deep Closest Point [102] registers two point clouds by first embedding them into a high-dimensional space using DGCNN [38] to extract features. Afterwards, contextual information is estimated using an attention-based module that provides a dependency term between the feature sets, i.e. one set is modified so that it is knowledgeable about the structure of the other. Finally, the alignment is obtained using a differentiable Singular Value Decomposition (SVD) layer, which seems to provide better results than an MLP.
This proposal also includes a 'pointer generation' step that provides a probabilistic approach to generate a soft map between the two feature sets, reducing the risk of falling into local minima.
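The closed-form alignment step behind such an SVD layer can be sketched as follows (a standard Kabsch-style least-squares solution assuming known correspondences; this is our own illustrative version, not the DCP code):

```python
import numpy as np

def svd_rigid_align(src, dst):
    """Closed-form least-squares rigid alignment: returns R, t such that
    R @ src_i + t ~ dst_i, given one-to-one correspondences."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance of the centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

rng = np.random.default_rng(3)
src = rng.normal(size=(20, 3))
angle = 0.7
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
dst = src @ R_true.T + t_true                # apply the ground-truth rigid motion
R_est, t_est = svd_rigid_align(src, dst)     # recovers R_true and t_true
```

In networks like DCP, this same computation is applied to softly matched feature correspondences, and the SVD is made differentiable so gradients flow back into the feature extractor.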
Going back to the work of Pais et al. [46] cited above, the component performing the alignment uses a CNN that receives as input the features extracted from the selected point correspondences at different stages of the previous ResNet-based blocks, and outputs the transformation parameters.

Summary
Throughout this section, an overview of the neural networks being proposed for alignment and registration tasks has been provided, analyzed from the perspective of the pipeline traditionally employed in registration methods. From the analyzed works (gathered in Table 1), it can be observed that the extraction and matching of features are tasks widely explored with neural networks, since they are shared with other problems such as object classification or recognition. However, it is less common to find neural networks able to manage geometric information in order to apply transformations on data to meet some requirements.
There are approaches dealing with some parts of the pipeline, as well as with the whole registration. Interestingly, the proposals that compute the transformation for the alignment also perform the matching of features; that is, no proposal performs only the calculation of the transformation. In terms of deformable alignment, there are specific proposals for learning deformations or non-rigid registration with networks, like the SDNs, but this is a topic currently under research.
Also, it is noticeable that most of the analyzed neural networks employ a GAN, a CNN or a variation of them, although only a few works use a GAN architecture to manage 3D data. This point is under active study because this kind of data requires many resources. As with 2D images, the main solution is to use discrete input data: similarly to pixels in 2D, voxel grids are the common representation in 3D. However, in some situations it is important to work at point level, e.g. for a deformation flow. Although some proposals work with point clouds as input data, most of them require an organized point cloud or a limit on the number of points.
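The discretization step mentioned above is simple in principle: an unordered cloud is quantized into a fixed occupancy grid so that a 3D CNN can consume it. A minimal numpy sketch (the resolution and the function itself are illustrative, not taken from any of the reviewed papers):

```python
import numpy as np

def voxelize(points, resolution=32):
    """Quantize an unordered point cloud of shape (N, 3) into a
    fixed-size binary occupancy grid, the 3D analogue of a pixel
    image. Memory grows as resolution**3, which is why voxel
    methods are resource-hungry."""
    mins = points.min(axis=0)
    # Scale by the largest extent so the aspect ratio is preserved.
    extent = (points.max(axis=0) - mins).max() + 1e-9
    idx = np.clip((points - mins) / extent * resolution,
                  0, resolution - 1).astype(int)
    grid = np.zeros((resolution,) * 3, dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid
```

The cubic memory growth visible in the last lines is the resource cost referred to above: doubling the resolution multiplies the grid size by eight, which is what pushes some proposals toward point-level inputs instead.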
We also tried to find relationships between the reviewed works. To do so, we clustered the proposals gathered in Table 1 using k-means and SOM after a prior categorization of the properties of each method. However, neither analysis revealed any clusters, which suggests that no common methodology can be extracted from these works. Moreover, the proposed solutions are very specific to the scope of each problem, showing the lack of shared patterns.
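The clustering check described above can be reproduced in miniature: encode each method's categorical properties as a numeric vector (one row per method) and run k-means over the rows. A plain-numpy sketch, where the encoding and data are illustrative stand-ins for Table 1 rather than the study itself:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means. X holds one row of numerically
    encoded properties (input type, architecture, ...) per method."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Squared distance from every row to every centre: (n, k).
        d = ((X[:, None] - centres[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():          # skip empty clusters
                centres[j] = X[labels == j].mean(0)
    return labels, centres
```

On well-separated data this recovers the groups; on the encoded properties of the reviewed methods, as reported above, no such separation emerged.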

Discussion
In this work, more than 600 papers have been carefully reviewed to identify the proposals contributing to the intersection between registration and deep neural networks. To this end, a selection of keywords and a design of the search strings were required. It is important to remark that a large part of the works were reached through the arXiv repository, which at present, even though it is not peer reviewed, leads the advancements in many fields such as artificial intelligence and deep learning. For this reason, an extra effort was required to select the papers; however, this approach allowed us to include the most recent works within the scope of this paper.
Registration aims to calculate a common reference for two or more sets of data. This field has been widely studied over the years, but recent machine learning techniques are being combined with registration algorithms to increase the capabilities of the proposals. These techniques include neural networks with many hidden layers, known as deep learning, whose novelty lies in the ability to learn representations of data with multiple levels of abstraction.
These possibilities enable a new paradigm for addressing registration problems, in which the design effort is mainly focused on the network and its training instead of on finding the proper features for a solution. This paradigm also makes it possible to manage higher-level understanding problems that relate to a conceptual knowledge of the scene rather than to its geometric properties, problems that could not be solved in this way until now. We use the term Deep Registration Networks (DRNs) to name the branch of artificial intelligence exploring solutions to alignment problems using DNNs.
The contribution of this work is a review of registration methods based on deep networks. To that end, the learning approaches to registration have been reviewed and classified using a perspective extracted from the traditional registration pipeline. This makes it possible to see where the current efforts lie in the intersection between registration and learning algorithms. It is also noticeable how the knowledge stored in a neural network influences the registration through its combination with the data in a feature space.
As a result, these lines provide an overall view of this new branch, setting out the different architectures and solutions being proposed by the authors. A summary of the different methods is shown in Table 1 with the inputs, outputs, architectures and datasets employed to address the problems. Besides, each method has been analyzed from the traditional perspective of the registration pipeline. This is the main contribution of this work: a framework that allows the learning methods for registration to be understood. In these new methods, the stages of the pipeline are not as clearly defined as in a traditional approach, because some processes, e.g. the extraction and matching of features, are computed directly and implicitly by the network. However, an advantage of the learning approaches is that they are suitable for real-time problems: the higher computational cost of a neural network lies in the training phase, which is performed once; after that, data processing is fast enough for real-time applications.
From our review of the learning-based registration algorithms and methodologies, it is clear that researchers are still exploring different paradigms, and no single approach is so far the preferred one. Whether the learning-based approaches will enable significant improvements over traditional registration approaches is still an open question. To help assess whether convergence is happening, we analysed the approaches using k-means and SOM networks to find clusters of methods sharing characteristics. However, no significant clusters were found, suggesting that convergence has not yet happened.
To conclude, we find that most new approaches can be analyzed using concepts from the four stages of registration identified in Figure 1, which enable the recognition, registration and reconstruction of objects. Although the four stages were already evident in the traditional algorithms, with the rise of deep learning we believe it will become possible to deal with more complex registration problems.