Article

Multi-Scale Memetic Image Registration

by Cătălina Lucia Cocianu and Cristian Răzvan Uscatu *
Department of Economic Informatics and Cybernetics, Bucharest University of Economic Studies, 10552 Bucharest, Romania
* Author to whom correspondence should be addressed.
Electronics 2022, 11(2), 278; https://doi.org/10.3390/electronics11020278
Submission received: 21 December 2021 / Revised: 12 January 2022 / Accepted: 14 January 2022 / Published: 16 January 2022
(This article belongs to the Special Issue Computer Vision Techniques: Theory and Applications)

Abstract:
Many technological applications of our time rely on images captured by multiple cameras. Such applications include the detection and recognition of objects in captured images, the tracking of objects and the analysis of their motion, and the detection of changes in appearance. The alignment of images captured at different times and/or from different angles is a key processing step in these applications. One of the most challenging tasks is to develop fast algorithms that accurately align images perturbed by various types of transformations. This paper reports a new method for registering images under geometric perturbations that include rotations, translations, and non-uniform scaling. The input images can be monochrome or colored, and they are preprocessed by a noise-insensitive edge detector to obtain binarized versions. Isotropic scaling transformations are used to compute multi-scale representations of the binarized inputs. The algorithm is of memetic type and exploits the fact that computation carried out on reduced representations usually produces promising initial solutions very fast. The proposed method combines bio-inspired and evolutionary computation techniques with clustered search and implements a procedure specially tailored to address the premature convergence issue in the various scaled representations. A long series of tests on perturbed images was performed, evidencing the efficiency of our memetic multi-scale approach. In addition, a comparative analysis proved that the proposed algorithm outperforms some well-known registration procedures in terms of both accuracy and runtime.

1. Introduction

The modern world uses a great deal of digital imagery, both still and moving, captured by a myriad of devices of all kinds, and the trend is growing rapidly. All these images must be processed, which implies understanding their content and making decisions. Most digital content is, in the end, irrelevant to the decision-making process but must nevertheless be examined in order to be categorized. While humans are still the best at understanding an image, the sheer amount of digital content to be processed is beyond their capabilities: nobody can afford to hire and retain the huge number of people that analyzing it all would require. Besides being a huge task, it is also a repetitive and, after all, menial one, which makes it prone to errors. This is where computers can step in to perform the menial, repetitive tasks that make up the bulk of the processing, leaving only the final decision-making steps to humans. Even some simpler decisions can be entrusted to computers.
The understanding of digital imagery content by computers is generally referred to as “computer vision”, an umbrella term that covers all the research conducted in this field.
The processing power of computers constantly grows, but the amount of processing needed grows at the same time, which leads to an "arms race". Time is a critical resource, and reducing processing time is the main goal. Computers need better arms in this fight, and those arms are better algorithms. Technological advances translate into images of higher resolution: more and more pixels are available for computers to process, but for many tasks, not all of those pixels are relevant. One of the "weapons" in the arsenal is the use of lower-resolution images during processing, with the results then applied to the full-scale image. The lower resolution retains the information relevant to the processing algorithm and limits the consumption of computing power to what matters. However, it cannot be said in advance which reduced scale is best suited for a given process, and finding a perfect balance between time consumed and quality of results can be difficult or impossible, taking so much time that it defeats the initial goal of reducing processing time. A way around this conundrum is the use of several reduced-scale images with various scaling factors. In the literature, this is called the "multi-scale" or "pyramid" strategy, and research indicates that it can be successfully used in multiple tasks; a toy sketch of such a representation is given below.
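As an illustration only (this sketch is ours, not part of the cited works), a multi-scale representation can be as simple as repeatedly subsampling a grayscale image; production pipelines usually low-pass filter before subsampling, as in the Laplacian pyramid [2]:

```python
import numpy as np

def pyramid(img: np.ndarray, factors=(2, 4, 8)):
    """Toy multi-scale ("pyramid") representation of a 2-D grayscale image:
    for each factor s, keep every s-th pixel (nearest-neighbor downsampling)."""
    return {s: img[::s, ::s] for s in factors}

# Example: a 320x240 image yields 160x120, 80x60, and 40x30 versions.
```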
This paper is structured as follows. Section 2 provides a brief review of state-of-the-art work on multi-scale processing techniques. Section 3 summarizes the memetic approach to image registration and the algorithm we introduced in [1]. Section 4 is the main part of this article; it describes in detail the new method that extends the bio-inspired clustered technique presented in Section 3 to a multi-scale approach. Computation is carried out in different reduced representations to obtain promising initial solutions and to identify search directions leading to the global optimum. The section also covers the computation of the search space, a justification for using multi-scale images, and the proposed methodology. The proposed algorithm was tested on various images, and the experimental results are reported in Section 5, together with a comparative analysis against some classical image registration methods, which the proposed algorithm outperforms from an accuracy point of view. In addition, the results show significant improvements in time consumption compared to the algorithm presented in Section 3. In the last section, we summarize our conclusions and future development directions.

2. Literature Review

The idea of using multi-scaling is not new; the reasons behind it emerged early in the development of computers and computer vision. Key articles at the foundation of multi-scaling [2,3,4] indicate two main reasons for using lower-resolution images in various processing stages of computer vision: first, the obvious reduction in the processing power needed; second, a lower resolution eliminates distracting pixels, so the context of the image is better understood and the result can be applied to the high-resolution image.
Multi-scale images have been successfully used for various resource-intensive tasks. The identification of elements in images is one such intensively studied task. One direction looks into the identification of objects of interest in an image (salient objects), while another seeks to identify the same object in a set of images acquired from various sources. In [5], the authors show that multi-scale images are related to the way biological vision segments and identifies objects and understands the perceived image.
The detection of salient segments using adapted graph algorithms and numerical techniques applied to multi-scale versions of an image is explored in [6]. In [7,8], Convolutional Neural Networks (CNNs) are used to create models for salient object detection, i.e., the detection of objects that attract the attention of the eye. The CNNs are trained using reduced-scale images, which provide the needed information without wasting computing time on irrelevant details. The trained network can then be used to detect objects of interest in high-resolution images for various purposes, including personal identification. Multiple scaled versions of an image are also used to identify targets of various sizes (both big and small) through the YOLO (You Only Look Once) v3 CNN in [9].
Re-identification (i.e., the identification of the same subject in multiple images) is an important task in processing images captured by various cameras. In [10], the potential of the meaningful information in reduced-scale images is harnessed for vehicle re-identification in images captured by non-overlapping security cameras. In [11,12], the same idea is used for the re-identification of persons, with various reduced-scale images providing useful explicit information for the stated goal. The identification of objects in still or moving images is also accomplished using multi-scale image processing in [13].
For a more engineering-oriented perspective, multi-scale images have been used to detect and classify specific elements of importance. In [14], reduced-scale images (scale factors 2 and 4) are used to train a CNN to identify and outline six types of possible damage in civil infrastructure, thus relieving human inspectors of a huge workload, usually beyond their physical capabilities. An improved image denoising method that uses multi-scaling in combination with a Normalized Attention Neural Network is proposed in [15]. In [16], the authors use multi-scaling to overcome the difficulties in the detection and identification of very specific objects (ships) in single-sensor images in the infrared and visible spectra.
The biomedical field also benefits from the added information that can be found in reduced-scale images. In [17], the authors show that machine learning performs very well in image recognition, thus helping diagnosis, but not so well in prognosis. The authors also discuss the integration of machine learning with multi-scale modelling to improve prognosis performance.

3. Memetic Approach to Image Registration

Hybrid and memetic algorithms are among the most commonly used metaheuristics for image alignment, as the evolutionary process is enhanced with local search techniques that reduce the risk of premature convergence and speed up computation. In the following, we present the basic method we proposed in [1]. This method will be further improved to speed up computation by extending it to multi-scale processing and by including specialized mechanisms to avoid premature convergence.
The degradation model is of geometric type, consisting of translations, rotations, and non-uniform scaling. We denote the target image by T and let S be the observed image, i.e., the perturbed version of T, where
$$S(x, y) = T\big(f_p(x, y)\big) \qquad (1)$$
and
$$f_p(x, y) = \begin{bmatrix} a \\ b \end{bmatrix} + \begin{bmatrix} s_x & 0 \\ 0 & s_y \end{bmatrix} \cdot \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \cdot \begin{bmatrix} x \\ y \end{bmatrix} \qquad (2)$$
The parameter vector defining (2) is $p = (a, b, s_x, s_y, \theta)$, where $(a, b)$ is the translation vector, $s_x$ and $s_y$ are the scale factors, $s_x, s_y > 0$, and $\theta$ is the rotation angle. Note that the rotation is relative to the upper left corner of the image. From a mathematical point of view, aligning S to T means computing $g_p$, the transformation that reverses (2), that is:
$$T(x, y) = S\big(g_p(x, y)\big) \qquad (3)$$
The inverse transformation $g_p$ is computed as:
$$g_p(x, y) = R^T \cdot S_{xy}^{-1}\left(\begin{bmatrix} x \\ y \end{bmatrix} - \begin{bmatrix} a \\ b \end{bmatrix}\right) \qquad (4)$$
where $S_{xy}^{-1} = \begin{bmatrix} 1/s_x & 0 \\ 0 & 1/s_y \end{bmatrix}$ and $R = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$.
To solve the image alignment problem using evolutionary computing, one has to define the chromosome representation, the search space, and the fitness function. In our work, the images are binarized by applying an edge detector. The chromosome space coincides with the phenotype space, and it is defined by $[a_{min}, a_{max}] \times [b_{min}, b_{max}] \times [-\pi, 0] \times (0, smax_x] \times (0, smax_y]$. The boundaries of the translation parameters are evaluated based on $smax_x$, $smax_y$, and the object pixels in T and S, respectively [1]. The quality of a chromosome corresponding to a parameter vector $p_1$ is defined in our work by the Dice similarity between the target and the result computed by (3):
$$\mathrm{fitness}(p_1) = \mathrm{DICE}\big(S(g_{p_1}), T\big) \qquad (5)$$
The Dice coefficient of the binary images A and B is defined by [18]:
$$\mathrm{DICE}(A, B) = \frac{2 \cdot |A \cap B|}{|A| + |B|} \qquad (6)$$
where $|\cdot|$ is the cardinal function.
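To make the fitness function concrete, here is a minimal NumPy sketch of the Dice similarity of two binary masks; the boolean-array layout and the function name are our assumptions, not the authors' implementation:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity (6) of two binary images given as boolean arrays."""
    a, b = a.astype(bool), b.astype(bool)
    intersection = np.logical_and(a, b).sum()  # |A ∩ B|
    total = a.sum() + b.sum()                  # |A| + |B|
    return 2.0 * intersection / total if total > 0 else 0.0
```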
The registration mechanism combines a version of the Firefly Algorithm (FA) for global optimization [19,20] with the local search implemented by the Two-Membered Evolutionary Strategy (2MES) [21], applied to clustered data. Let $c_0$ be an input individual, and denote the initial step size by $\sigma_0$. At each moment t, 2MES iteratively updates the point $c_{t-1}$ using:
$$c_t = \begin{cases} c_{t-1} + z, & \text{if } \mathrm{fitness}(c_{t-1} + z) > \mathrm{fitness}(c_{t-1}) \\ c_{t-1}, & \text{otherwise} \end{cases} \qquad (7)$$
where z is a random value drawn from $N(0, \sigma_{t-1})$. The step size parameter $\sigma_t$ is updated according to the celebrated 1/5 rule [21]:
$$\sigma_t = \begin{cases} \sigma_{t-1} / \vartheta, & sr > 0.2 \\ \sigma_{t-1} \cdot \vartheta, & sr < 0.2 \\ \sigma_{t-1}, & sr = 0.2 \end{cases} \qquad (8)$$
where $\vartheta \in [0.817, 1)$ and sr is the success rate, i.e., the displacement rate corresponding to the last $\tau$ updates.
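A minimal sketch of 2MES under stated assumptions (a maximizing fitness over real-valued parameter vectors; the names and the fixed iteration budget are illustrative):

```python
import numpy as np

def two_membered_es(fitness, c0, sigma0, vartheta=0.85, tau=20, max_iter=240,
                    rng=None):
    """(1+1) evolution strategy with the 1/5 success rule, per (7)-(8)."""
    rng = np.random.default_rng() if rng is None else rng
    c = np.asarray(c0, dtype=float)
    sigma = np.asarray(sigma0, dtype=float)
    successes = []
    for _ in range(max_iter):
        z = rng.normal(0.0, sigma)                 # z ~ N(0, sigma_{t-1})
        improved = fitness(c + z) > fitness(c)
        if improved:
            c = c + z
        successes.append(improved)
        if len(successes) >= tau:
            sr = np.mean(successes[-tau:])         # success rate of last tau steps
            if sr > 0.2:
                sigma = sigma / vartheta           # searching is easy: widen steps
            elif sr < 0.2:
                sigma = sigma * vartheta           # mostly failing: shrink steps
            # sr == 0.2 leaves sigma unchanged
    return c
```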
FA is a nature-inspired search method that simulates the behavior of fireflies in terms of bioluminescence. The position of a firefly corresponds to a candidate solution of (1), its fitness being measured by the corresponding light intensity. Each firefly j attracts the less bright fireflies i; i.e., j modifies their positions $c_i$ according to:
$$c_i(t+1) = c_i(t) + \beta_j(r) \cdot \big(c_j(t) - c_i(t)\big) + \alpha_r \cdot \varepsilon \qquad (9)$$
where $\alpha_r$ is the randomness parameter, $\varepsilon$ is a draw from $U(0, 1)$, and $\beta_j(r)$ is the attractiveness of firefly j as seen by firefly i. The attractiveness function is defined as:
$$\beta_j(r) = \beta_0 \cdot e^{-\gamma r^2} \qquad (10)$$
where r is the distance between fireflies i and j, $\beta_0$ indicates the brightness at $r = 0$, and $\gamma$ stands for the light absorption coefficient. In our work, we use the $\alpha_r$ updating rule and the border reflection mechanism introduced in [22].
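The firefly update (9)-(10) can be sketched as follows; this simplified version keeps the randomness parameter fixed and omits the $\alpha_r$ updating rule and the border reflection mechanism of [22]:

```python
import numpy as np

def firefly_iteration(pop, fit, beta0=1.0, gamma=1.0, alpha=0.1, rng=None):
    """One FA iteration per (9)-(10). pop: (n, d) array of candidate solutions,
    fit: their fitness values (light intensities)."""
    rng = np.random.default_rng() if rng is None else rng
    new_pop = pop.astype(float).copy()
    for i in range(len(pop)):
        for j in range(len(pop)):
            if fit[j] > fit[i]:                          # j is brighter: i moves toward j
                r = np.linalg.norm(pop[j] - new_pop[i])  # distance between fireflies
                beta = beta0 * np.exp(-gamma * r ** 2)   # attractiveness (10)
                eps = rng.uniform(0.0, 1.0, size=pop.shape[1])
                new_pop[i] += beta * (pop[j] - new_pop[i]) + alpha * eps
    return new_pop
```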
The registration algorithm reported in [1] is summarized below as Algorithm 1, where the FA parameters are $\beta_0$ and $\gamma$; the 2MES input arguments are $\sigma_0$, $\vartheta$, and $\tau$; and MAX is the size of the sequence of individuals.
Algorithm 1 Cluster-Based Memetic Algorithm
1. Input: the binarized versions of S and T, the 2MES parameters, the FA parameters, the number of iterations NMax, K (K < NMax), and the fitness value threshold τ_stop
2. Compute the boundaries of the search space
3. Compute the initial population: randomly generate the candidate solutions and apply the 2MES procedure to locally improve a small number of individuals
4. Evaluate the initial population and find the best individual; time = 0
5. while time < NMax and the highest fitness value < τ_stop do
6.   Execute one FA iteration
7.   Compute the best individual
8.   if the best fitness value has not been improved
9.     if the best fitness value has not been improved during the last K iterations, apply the premature convergence avoidance mechanism:
10.      Increase the step size of 2MES
11.      Increase the number of clusters
12.      Replace a small number of individuals with randomly generated and locally improved ones
13.    end if
14.    Apply k-means to split the population into clusters and locally improve the best individual from each class using 2MES
15.    Keep the best individual in the current population
16.  end if
17.  time = time + 1
18. end while
19. Output: the best individual, corresponding to the perturbation parameter vector
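A compact Python skeleton of this loop may help fix ideas. It is our illustrative reading of Algorithm 1, not the authors' code: `fa_step` and `es_improve` stand for the FA iteration and 2MES local search sketched above, clustering uses scikit-learn's k-means, and the premature convergence mechanism is indicated by a comment only.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_memetic(fitness, pop, fa_step, es_improve,
                    n_clusters, n_max, tau_stop):
    """Skeleton of the cluster-based memetic registration loop.
    pop: (n, 5) float array of chromosomes (a, b, s_x, s_y, theta)."""
    fit = np.array([fitness(c) for c in pop])
    best = pop[fit.argmax()].copy()
    for _ in range(n_max):
        if fitness(best) >= tau_stop:
            break
        pop = fa_step(pop, fit)                       # one FA iteration
        fit = np.array([fitness(c) for c in pop])
        if fit.max() <= fitness(best):                # no improvement this round
            # after K stalled iterations: widen the 2MES step size, add clusters,
            # and inject random, locally improved individuals (omitted here)
            labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(pop)
            for k in range(n_clusters):
                idx = np.flatnonzero(labels == k)
                j = idx[fit[idx].argmax()]            # best chromosome of cluster k
                pop[j] = es_improve(pop[j])           # 2MES local improvement
                fit[j] = fitness(pop[j])
            worst = int(fit.argmin())                 # elitism: keep the best-so-far
            pop[worst], fit[worst] = best, fitness(best)
        if fit.max() > fitness(best):
            best = pop[fit.argmax()].copy()
    return best
```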

4. The New Multi-Scale Methodology

The algorithm aligns an observed image S to the target T by extending the bio-inspired cluster-based technique described in Section 3 to a multi-scale approach that includes mechanisms especially tailored to avoid premature convergence. The idea underlying the proposed methodology is that computation carried out in different reduced representations can produce both promising initial solutions and individuals able to redirect the search toward the global optimum. Furthermore, multi-scale processing may significantly improve registration accuracy and lead to faster algorithms.

4.1. The Geometric Degradation Model and the Search Space

The proposed approach aims to register images perturbed by translations, rotations, and non-uniform scaling according to (1). The translation domain $[a_{min}, a_{max}] \times [b_{min}, b_{max}]$ can be narrowed down by considering the following transformation instead of (2):
$$f_p(x, y) = \begin{bmatrix} m \\ n \end{bmatrix} + \begin{bmatrix} a \\ b \end{bmatrix} + \begin{bmatrix} s_x & 0 \\ 0 & s_y \end{bmatrix} \cdot \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \cdot \begin{bmatrix} x - m \\ y - n \end{bmatrix} \qquad (11)$$
where $p = (a, b, s_x, s_y, \theta)$ and the rotation is relative to the center of the image, $(m, n)$. In this case, the inverse transformation $g_p$ is given by:
$$g_p(x, y) = \begin{bmatrix} m \\ n \end{bmatrix} + R^T \cdot S_{xy}^{-1}\left(\begin{bmatrix} x \\ y \end{bmatrix} - \begin{bmatrix} a + m \\ b + n \end{bmatrix}\right) \qquad (12)$$
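Under the sign convention used for R above, the centered transformation (11) and its inverse (12) can be written as the following NumPy sketch (function names are ours); the commented round-trip check illustrates that g_p exactly reverses f_p:

```python
import numpy as np

def forward_map(p, xy, center):
    """Apply f_p of (11) to an (n, 2) array of (x, y) coordinates.
    p = (a, b, s_x, s_y, theta); center = (m, n)."""
    a, b, sx, sy, theta = p
    m, n = center
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    D = np.diag([sx, sy])
    return np.array([m, n]) + np.array([a, b]) + (xy - [m, n]) @ (D @ R).T

def inverse_map(p, xy, center):
    """Apply g_p of (12), the exact inverse of forward_map."""
    a, b, sx, sy, theta = p
    m, n = center
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    D_inv = np.diag([1.0 / sx, 1.0 / sy])
    return np.array([m, n]) + (xy - [a + m, b + n]) @ (R.T @ D_inv).T

# Round-trip check on random points:
# p = (8.0, -4.0, 1.2, 0.9, -0.6); pts = np.random.rand(5, 2) * 100
# c = (160, 120)
# assert np.allclose(inverse_map(p, forward_map(p, pts, c), c), pts)
```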
The search space boundaries are set as follows. We assume that $(\theta, s_x, s_y) \in [-\pi/2, 0] \times (0, smax_x] \times (0, smax_y]$. The alignment is performed using binarized versions of S and T computed by the Canny edge detector [23]; therefore, the input images are represented by sets of contour pixels. We denote by $B(T) = \{(x_T, y_T) : minx_T \le x_T \le maxx_T,\; miny_T \le y_T \le maxy_T\}$ and $B(S) = \{(x_S, y_S) : minx_S \le x_S \le maxx_S,\; miny_S \le y_S \le maxy_S\}$ the binarized versions of T and S, respectively. Since each element of B(S) is mapped by (11) onto an element of B(T), we obtain, for every $(x, y) \in B(S)$:
$$minx_T \le m + a + s_x \cdot \big((x - m)\cos\theta - (y - n)\sin\theta\big) \le maxx_T$$
and
$$miny_T \le n + b + s_y \cdot \big((x - m)\sin\theta + (y - n)\cos\theta\big) \le maxy_T.$$
By straightforward computation, we obtain that $(a, b) \in [a_{min}, a_{max}] \times [b_{min}, b_{max}]$, where:
$$a_{min} = minx_T - m + \min_{(x, y, \theta, s_x) \in D_a} a\_lim(x, y, \theta, s_x), \qquad (13)$$
$$a_{max} = maxx_T - m + \max_{(x, y, \theta, s_x) \in D_a} a\_lim(x, y, \theta, s_x), \qquad (14)$$
$$b_{min} = miny_T - n + \min_{(x, y, \theta, s_y) \in D_b} b\_lim(x, y, \theta, s_y), \qquad (15)$$
$$b_{max} = maxy_T - n + \max_{(x, y, \theta, s_y) \in D_b} b\_lim(x, y, \theta, s_y), \qquad (16)$$
where $a\_lim : D_a \to \mathbb{R}$,
$$D_a = [-\pi/2, 0] \times (0, smax_x] \times [minx_S, maxx_S] \times [miny_S, maxy_S],$$
$$a\_lim(x, y, \theta, s_x) = s_x \cdot \big((y - n)\sin\theta - (x - m)\cos\theta\big) \qquad (17)$$
and $b\_lim : D_b \to \mathbb{R}$,
$$D_b = [-\pi/2, 0] \times (0, smax_y] \times [minx_S, maxx_S] \times [miny_S, maxy_S],$$
$$b\_lim(x, y, \theta, s_y) = -s_y \cdot \big((x - m)\sin\theta + (y - n)\cos\theta\big). \qquad (18)$$
Note that the functions $a\_lim$ and $b\_lim$ are bounded and attain their extrema. From a practical point of view, the extreme values of $a\_lim$ and $b\_lim$ can be computed in many ways; in our work, we used the pattern search method implemented by the MATLAB function patternsearch [24].
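Any global optimizer over the box D_a works here. As an illustration only (the paper uses MATLAB's patternsearch), the extrema of a_lim in (17) can be approximated by a dense grid evaluation:

```python
import numpy as np

def translation_bounds_a(minx_T, maxx_T, m, n, smax_x, bounds_S, grid=32):
    """Approximate a_min and a_max of (13)-(14) by grid search over D_a.
    bounds_S = (minx_S, maxx_S, miny_S, maxy_S); `grid` points per axis."""
    minx_S, maxx_S, miny_S, maxy_S = bounds_S
    theta = np.linspace(-np.pi / 2, 0.0, grid)
    s_x = np.linspace(1e-3, smax_x, grid)       # (0, smax_x], avoid s_x = 0
    x = np.linspace(minx_S, maxx_S, grid)
    y = np.linspace(miny_S, maxy_S, grid)
    TH, SX, X, Y = np.meshgrid(theta, s_x, x, y, indexing="ij", sparse=True)
    a_lim = SX * ((Y - n) * np.sin(TH) - (X - m) * np.cos(TH))   # eq. (17)
    return minx_T - m + a_lim.min(), maxx_T - m + a_lim.max()
```

The bounds for b follow analogously from (15)-(16) and (18).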

4.2. The Multi-Scale Representation of Images

A long series of research works involving multi-scale image processing using various mathematical tools has been reported in the literature [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17]. In our approach, scaling refers to the uniform change of object sizes in the processed images and corresponds to the standard geometric transformation (11) defined by the parameter vector $p_s = (0, 0, s, s, 0)$, $s > 1$. Note that the dimensions of the input images remain unchanged, while the sizes of the binary representations $B(T)$ and $B(S)$ decrease proportionally to s.
Let $s > 1$ be a stretching factor, T the target image, S the version of T perturbed by (11) with $p = (a, b, s_x, s_y, \theta)$, and $[a_{min}, a_{max}] \times [b_{min}, b_{max}] \times [-\pi/2, 0] \times (0, smax_x] \times (0, smax_y]$ the search space corresponding to the inputs S and T. The representation of S and T at scale s, denoted by $S_s$ and $T_s$, leads to the narrowed search space $[a_{min}/s, a_{max}/s] \times [b_{min}/s, b_{max}/s] \times [-\pi/2, 0] \times (0, smax_x] \times (0, smax_y]$. Indeed, denoting $p' = (a/s, b/s, s_x, s_y, \theta)$, we obtain:
$$S_s(x, y) = S\big(f_{p_s}(x, y)\big) = T\big(f_p(f_{p_s}(x, y))\big), \qquad f_p\big(f_{p_s}(x, y)\big) = f_{p_s}\big(f_{p'}(x, y)\big) \qquad (19)$$
Since $T_s(x, y) = T\big(f_{p_s}(x, y)\big)$, the following relation holds:
$$S_s(x, y) = T_s\big(f_{p'}(x, y)\big) \qquad (20)$$
Consequently, aligning S to T can be reduced to registering $S_s$ and $T_s$, that is, to computing $p'$.
From an implementation point of view, digital image scaling using large scale factors s involves information loss due to object shrinkage. Consequently, the sizes of the binary representations $B(T_s)$ and $B(S_s)$ are smaller than those of $B(T)$ and $B(S)$, yielding faster registration algorithms.
The multi-scale memetic algorithm introduced in the next section involves stages that use images represented at different scale factors. The algorithm evolves from one stage to another by changing the chromosome values and the boundaries of the translation parameters according to (20).
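The conversion between scales reduces to rescaling the translation genes, as the following sketch shows (the function name is assumed):

```python
def to_scale(p, s):
    """Rescale a chromosome per (19)-(20): only the translations change,
    p' = (a/s, b/s, s_x, s_y, theta)."""
    a, b, sx, sy, theta = p
    return (a / s, b / s, sx, sy, theta)

# Converting a genotype found at scale s2 into the algorithm's scale s1
# uses s = s1 / s2 < 1, which enlarges the translations accordingly:
# p_s1 = to_scale(p_s2, s1 / s2)
```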

4.3. The Proposed Multi-Scale Registration Method

The extension of the cluster-based memetic algorithm of Section 3 registers pairs of monochrome images (S, T) under the perturbation model defined by (11). If S and T are colored images, their monochrome representations can be used instead to obtain the inverse transformation (12).
The proposed algorithm involves a pre-processing stage, as follows. First, the binary representations B(S) and B(T) are computed using a noise-insensitive edge detector, and the search space boundaries are evaluated according to (13)–(16). Then, the data are represented at two scales, 1 < s1 < s2. The scale s1 is used by the main registration procedure (Algorithm 2), while s2 is used to generate the initial population (Algorithm 3) and to avoid getting stuck in a local optimum (Algorithm 4). The genotypes computed in the search space defined by s2 are converted to the algorithm's scale s1 using transformation (20), where s = s1/s2. One can generalize the multi-scale approach by using multiple scale parameters in Algorithms 3 and 4, respectively. The fitness function that controls the evolution of the memetic algorithm is defined by (6), and the current population of individuals is clustered using k-means. Note that the fitness values lie in [0, 1].
First, the population is randomly instantiated; then, a few chromosomes computed by Algorithm 1 at scale s2 are added to it. The memetic registration is an iterative process: each step applies an FA iteration to compute the new generation, followed by 2MES-based local improvement of the best chromosome of each cluster. The data are grouped into k clusters, where k varies depending on the population quality:
$$k = c\_number + ct_1 \cdot bvalFA, \qquad (21)$$
where $ct_1 > 1$ is a constant that controls the maximum number of clusters, $c\_number$ stands for the initial number of clusters, and bvalFA is the best fitness value in the population produced by the current FA iteration.
To avoid premature convergence, we apply the following procedure. If the best fitness value bval has not changed during the last it iterations, the step size of 2MES is increased by multiplying it by $ct_2 / bval^2$, where $ct_2 > 1$ is a constant tuning the perturbation size z in (7). If bval has not changed during the last $it' > it$ iterations, $k_0$ new locally improved individuals and the result computed by Algorithm 1 with $s_2 = 15$ replace $k_0 + 1$ individuals belonging to the current population.
The proposed method is summarized as Algorithm 2. We denote by C_Pop_t = {c1^t, c2^t, ..., cn^t} the population at moment t. The input arguments are grouped depending on their use, as follows:
  • general parameters: the input images, S and T; the maximum number of iterations, NMax; the population size, n; the threshold fitness value, τ_stop; the scales s1 and s2; the constants c_number, ct1, and ct2; the 2MES inputs σ0, ϑ, τ_ES, υ, and MAX; the FA parameters β0 and γ;
  • Algorithm 3 parameters, corresponding to Algorithm 1 arguments: the 2MES parameters σ0′, ϑ′, τ_ES′, υ′, and MAX′; the FA parameters (same as those in the list of general parameters); NMax1, K1, and the threshold value τ_stop1;
  • Algorithm 4 parameters: the number of non-effective successive iterations (i.e., iterations without improvement of the highest fitness), cp; it, it′, and k0, as described above; the 2MES parameters (same as those in the list of general parameters); the Algorithm 1 parameters (same as those in the list of Algorithm 3 parameters); the population C_Pop.
Algorithm 2 Multi-Scale Memetic Algorithm
1. Input: S, T, NMax, n, τ_stop, s1, s2, σ0, ϑ, τ_ES, υ, MAX, β0, γ, σ0′, ϑ′, τ_ES′, υ′, MAX′, NMax1, K1, τ_stop1
2. Compute B(T_s1), B(T_s2), B(S_s1), B(S_s2), and the corresponding boundaries of each search space
3. t = 0; cp = 0
4. Compute C_Pop_t using Algorithm 3 with the input arguments n, B(T_s2), B(S_s2), σ0′, ϑ′, τ_ES′, υ′, MAX′, β0, γ, NMax1, K1, and τ_stop1
5. Obtain the representation of C_Pop_t in the s1-scale search space
6. Evaluate C_Pop_t and compute bval = max_{c ∈ C_Pop_t} fitness(c)
7. while t < NMax and bval < τ_stop do
8.   Apply an FA iteration and get C_Pop_{t+1}
9.   Compute the best individual bindFA and its fitness bvalFA = max_{c ∈ C_Pop_{t+1}} fitness(c)
10.  if bvalFA ≤ bval
11.    Apply the premature convergence avoiding mechanism (Algorithm 4) with the input arguments n, B(T_s2), B(S_s2), cp, it, it′, k0, σ0, ϑ, τ_ES, MAX, σ0′, ϑ′, τ_ES′, υ′, MAX′, β0, γ, NMax1, K1, τ_stop1, and C_Pop_{t+1} represented in the s2-scale search space
12.    Get the representation of C_Pop_{t+1} in the s1-scale search space
13.    Get the clusters C1, ..., Ck using k-means
14.    for i = 1 to k
15.      Compute the best candidate solution c ∈ Ci
16.      Apply 2MES with the arguments σ0, ϑ, τ_ES, υ, MAX to locally improve c
17.    end for
18.    Compute bvalnew = max_{c ∈ C_Pop_{t+1}} fitness(c)
19.    if bvalnew > bval
20.      bval = bvalnew; cp = 0
21.    else
22.      Keep the best individual in C_Pop_{t+1}
23.      cp = cp + 1
24.    end if
25.  end if
26.  t = t + 1
27. end while
28. Output: the best parameter vector in the final population
Algorithm 3 Population at t = 0
1. Input: n, S, T, σ0′, ϑ′, τ_ES′, υ′, MAX′, β0, γ, NMax1, K1, and τ_stop1
2. Randomly generate a set of n − nr individuals {c1^0, c2^0, ..., c_{n−nr}^0}
3. for i = 1 to nr
4.   Apply Algorithm 1 to compute c_{n−nr+i}^0
5. end for
6. Output: C_Pop_0 = {c1^0, c2^0, ..., cn^0}
Algorithm 4 Premature Convergence Avoiding Mechanism
1. Input: n, S, T, cp, it, it′, k0, σ0, ϑ, τ_ES, MAX, σ0′, ϑ′, τ_ES′, υ′, MAX′, β0, γ, NMax1, K1, τ_stop1, and C_Pop
2. if cp = it
3.   σ0 = σ0 · ct2/bval^2
4. end if
5. if cp = it′
6.   for i = 1 to k0
7.     Randomly generate an individual ci
8.     Apply 2MES with the arguments σ0, ϑ, τ_ES, MAX to locally improve ci
9.   end for
10.  Apply Algorithm 1 to compute c_{k0+1}
11.  Apply 2MES with the arguments σ0, ϑ, τ_ES, MAX to locally improve c_{k0+1}
12.  Replace k0 + 1 individuals in C_Pop by {c1, ..., c_{k0+1}}
13. end if
14. Output: σ0, C_Pop

5. Experimental Results and Discussion

A long series of tests on binary, monochrome, and colored images was performed to assess the performance of the new registration algorithm. The computer used for testing has the following configuration: Intel Core i7-10870H, 16 GB DDR4 RAM, 512 GB SSD, NVIDIA GeForce GTX 1650 Ti 4 GB GDDR6.
The algorithm's performance was measured in terms of runtime and registration accuracy. The accuracy was evaluated by a series of measures that reflect the effectiveness of Algorithm 2 in a comprehensive manner. The main indicator is the mean success rate recorded over NR runs of Algorithm 2, where a successful run is one that produces an individual whose quality exceeds a certain limit. This indicator measures the capability of Algorithm 2 to compute approximations of the global optimum of the fitness function. The success rate of the algorithm that aligns the image S to the target T is computed by:
$$SR(T, S) = \frac{NS}{NR} \cdot 100\%, \qquad (22)$$
where NS represents the number of attempts with correct registration, and S and T are of the same size, $M \times N$.
We also evaluated the accuracy of Algorithm 2 through similarity indicators computed between the images T and $\tilde{T}$, where $\tilde{T}$ is the image obtained by aligning S using the result of Algorithm 2. Denoting the density function by $p(x)$, the similarity measures are the following:
  • Signal-to-Noise Ratio (SNR):
$$SNR(T, \tilde{T}) = 10 \log_{10}\left[\frac{\sum_{x=1}^{M} \sum_{y=1}^{N} T(x, y)^2}{\sum_{x=1}^{M} \sum_{y=1}^{N} \big(T(x, y) - \tilde{T}(x, y)\big)^2}\right] \qquad (23)$$
  • Shannon normalized mutual information [25,26]:
$$NMI_S(T, \tilde{T}) = \frac{2 \cdot MI_S(T, \tilde{T})}{H_S(T) + H_S(\tilde{T})}, \qquad (24)$$
where
$$MI_S(T, \tilde{T}) = H_S(T) + H_S(\tilde{T}) - H_S(T, \tilde{T}), \qquad (25)$$
$$H_S(X) = -\sum_{x=0}^{L-1} p(x) \cdot \log_2 p(x). \qquad (26)$$
  • Tsallis normalized mutual information of order $\alpha$ [27,28]:
$$NMI_\alpha^T(T, \tilde{T}) = \frac{MI_\alpha^T(T, \tilde{T})}{H_\alpha^T(T, \tilde{T})}, \qquad (27)$$
where
$$MI_\alpha^T(T, \tilde{T}) = H_\alpha^T(T) + H_\alpha^T(\tilde{T}) - H_\alpha^T(T, \tilde{T}), \qquad (28)$$
$$H_\alpha^T(X) = \frac{1}{\alpha - 1} \cdot \left(1 - \sum_{x=0}^{L-1} p(x)^\alpha\right). \qquad (29)$$
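A sketch of the three similarity measures for 8-bit monochrome images follows; estimating the densities from (joint) histograms with L = 256 bins is our implementation assumption:

```python
import numpy as np

def snr(T, T_hat):
    """Signal-to-Noise Ratio (23), in dB."""
    T, T_hat = T.astype(float), T_hat.astype(float)
    return 10.0 * np.log10((T ** 2).sum() / ((T - T_hat) ** 2).sum())

def _densities(T, T_hat, L=256):
    """Joint and marginal gray-level densities estimated by histogramming."""
    joint, _, _ = np.histogram2d(T.ravel(), T_hat.ravel(),
                                 bins=L, range=[[0, L], [0, L]])
    joint /= joint.sum()
    return joint.ravel(), joint.sum(axis=1), joint.sum(axis=0)

def shannon_nmi(T, T_hat, L=256):
    """Shannon normalized mutual information (24)-(26)."""
    pj, p1, p2 = _densities(T, T_hat, L)
    H = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))   # eq. (26)
    mi = H(p1) + H(p2) - H(pj)                            # eq. (25)
    return 2.0 * mi / (H(p1) + H(p2))                     # eq. (24)

def tsallis_nmi(T, T_hat, alpha=1.2, L=256):
    """Tsallis normalized mutual information of order alpha (27)-(29)."""
    pj, p1, p2 = _densities(T, T_hat, L)
    H = lambda p: (1.0 - np.sum(p ** alpha)) / (alpha - 1.0)  # eq. (29)
    return (H(p1) + H(p2) - H(pj)) / H(pj)                    # eqs. (27)-(28)
```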
A good approximation $\tilde{T}$ is one for which $SNR(T, \tilde{T})$ is very large (infinite for $\tilde{T} = T$), while $NMI_S(T, \tilde{T})$ and $NMI_\alpha^T(T, \tilde{T})$ are both near 1. Note that, in the case of significant perturbations, the information residing in the observed image S is not enough to completely reconstruct T; that is, reversing the exact geometric transformation leads to an image $T'$ possibly different from T. For this reason, the correct way to measure the quality of the registration is to evaluate the ratio:
$$RSIM(T, \tilde{T}) = \frac{SIM(\tilde{T}, T)}{SIM(T', T)}, \qquad (30)$$
where $SIM \in \{SNR, NMI_S, NMI_\alpha^T\}$; the theoretical maximum value is 1.
If T can be completely reconstructed using a geometric transformation, the fitness threshold value is usually set above 0.8. In the case of significant perturbations, the threshold value $\tau_{stop}$ is set in [0.5, 0.6].
Since the proposed method is of stochastic type, the above-mentioned measures are applied NR times to each pair of images, and the recorded result is the corresponding mean value. Consequently, if we denote by $\tilde{T}_1, \ldots, \tilde{T}_{NR}$ the images obtained when S is aligned using the geometric transformations computed by Algorithm 2, the accuracy measures are defined by:
$$MeanRSIM(T, S) = \frac{\sum_{i=1}^{NR} RSIM(T, \tilde{T}_i)}{NR}. \qquad (31)$$
The computational cost is assessed by:
$$MeanRT(T, S) = \frac{\sum_{i=1}^{NR} t_i}{NR}, \qquad (32)$$
where $t_1, \ldots, t_{NR}$ are the corresponding runtimes.
We used various parameter settings and uniform scaling factors to implement the proposed multi-scale memetic approach and to optimize the alignment accuracy and the execution times.
Below, we provide a summary of the registration results obtained for images belonging to the Yale Face Database [29]. The database consists of 165 monochrome face images of 15 persons, 11 samples each. The spatial resolution of all images is 320 × 243 pixels. Table 1, Table 2, Table 3 and Table 4 present results for 30 test images (two per person). Figure 1, Figure 2, Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7 show a selection of images for two persons that includes target, perturbed, and aligned pictures, as well as the binarized and scaled versions used during computation.
The perturbation model is given by (11), with the parameters $\theta \in [-\pi/2, 0]$, $s_x, s_y \in [0.5, 1.5]$, $a \in [-20, 80]$, and $b \in [-40, 80]$. The sensed images shown in Figure 1 and Figure 2 are perturbed by $(a, b, \theta, s_x, s_y) = (80, 40, -\pi/2.2, 1.5, 0.8)$ and $(10, 10, -\pi/2.1, 1.1, 1.35)$, respectively.
The proposed alignment procedure computes an approximation of the perturbation parameter vector in the search space narrowed down by the stretching factor $s_1 = 4$, while significantly larger scaling values $s_2$ are used to generate the initial population and to prevent the search from becoming stuck in a local optimum. In our work, the scaling parameter $s_2$ was between 11 and 15.
The results of applying Algorithm 2 are summarized below. Figure 1 and Figure 2 show recorded, sensed, and registered images for two test samples, subjects 10 and 7. The corresponding numerical results are presented in rows 10 and 7 of Table 1, Table 2, Table 3 and Table 4.
Figure 3 and Figure 4 represent scaled and binarized variants of the images from Figure 1 and Figure 2, computed by Algorithm 2.
Algorithm 2 parameters were set to: $\tau_{stop} \in [0.5, 0.6]$, $n = 20$, $NMax = 250$, $nr = 6$, $ind = 4$, $\beta_0 = \gamma = 1$, $\sigma_0 = [12, 12, 0.3, 0.5]$, $\vartheta = \vartheta' = 0.85$, $\tau_{ES} = 20$, $\upsilon = 0.3$, $MAX = 800$, $\sigma'_0 = [7, 7, 0.03, 0.05]$, $\tau'_{ES} = 15$, $MAX' = 240$, $it = 3$, and $it' = 7$. Each time the algorithm is caught by a local optimum, $k_0 = 7$ new locally improved individuals and the result computed by Algorithm 1 with $s_2 = 15$ replace $k_0 + 1$ individuals belonging to the current population. Note that the parameter values used in the 2MES and FA algorithms are set in line with values widely used in various reported works [19,20,21], which constitute a de facto standard.
Figure 5 presents the alignment results of Algorithm 2.
The proposed algorithm yielded a perfect success rate, correctly aligning all test image pairs. Note that Algorithm 1 also correctly aligns the considered images, but its recorded runtimes are substantially larger than those obtained by the proposed method. The numeric results reported below refer to the mean value and the standard deviation of the runtimes computed for Algorithms 2 and 1, respectively. The data in Table 1 prove that Algorithm 2 is significantly faster than Algorithm 1.
The mean values and the standard deviation values computed for the accuracy measures are displayed in Table 2, Table 3 and Table 4. Note that we used $\alpha = 1.2$ to compute the Tsallis mutual information. The maximum value of the functions defined by (31) is 1, but due to rounding and computation errors, slightly larger values may be obtained.
Additionally, in order to derive comprehensive conclusions regarding the performance of Algorithm 2, we tested it against two classical methods for monomodal image registration: the regular step gradient descent optimization (RS-GD) based on the mean squares (MS) image similarity metric [30,31], and the Principal Axes Transform (PAT) [18]. RS-GD-based registration adjusts the geometric transformation parameters so that the considered metric evolves toward its extremum. PAT is an image registration technique based on features automatically extracted from images, where the image features are defined by the corresponding sets of principal axes.
Figure 6 presents the results of applying the PAT method, and Figure 7 presents the results of applying the RS-GD algorithm.
The accuracy results of all tested methods are reported in Table 2, Table 3 and Table 4. Table 2 provides the mean and standard deviation values of RSNR for Algorithm 2, together with the RSNR values recorded for PAT and RS-GD. In addition, Table 2 shows the success ratios of Algorithm 2 and whether the classical methods managed to correctly align the tested pairs of images. Note that, in the case of severely perturbed sensed images, both classical methods may misregister the inputs. The resulting success rate of PAT is only 26.7%, while that of RS-GD is 53.3%. Algorithm 2 achieved 100% accuracy, correctly registering all tested images in all runs.
The mean and standard deviation values of $RNMI_S$ and $RNMI_\alpha^T$ computed for Algorithm 2 are displayed in Table 3 and Table 4, respectively. The tables also present the $RNMI_S$ and $RNMI_\alpha^T$ values corresponding to the PAT and RS-GD methods.
The numerical results indicate that Algorithm 2 produces more accurate results than PAT and RS-GD in light of all the informational and quantitative indicators used. In addition, the new method is considerably faster than the method reported in [1].

6. Conclusions

The aim of the paper was to propose a new comprehensive multi-scale method that extends the approach reported in [1] to obtain accurate and efficient registration algorithms. The input images were pre-processed by a noise-insensitive edge detector to obtain binarized versions, i.e., the sets containing contour pixels. Isotropic scaling transformations were used to compute multi-scale representations of the binarized inputs. The registration was then carried out in different reduced representations to obtain promising initial solutions and to identify search directions leading to the global optimum. The process combined bio-inspired and evolutionary computation techniques with clustered search and implemented a procedure specially tailored to address the premature convergence issue.
A long series of tests involving monochrome images was conducted to draw meaningful conclusions regarding the registration capabilities of the proposed method. The experiments involved accuracy and efficiency measures expressed in terms of SNR, Shannon mutual information, Tsallis entropy, and runtime. We compared Algorithm 2 against the basic method introduced in [1] and two of the most commonly used alignment procedures for monomodal images, namely the regular step gradient descent optimization based on the MS image similarity metric and PAT registration. In terms of accuracy, Algorithm 2 is similar to Algorithm 1, with a success rate of 100%, which means that it always managed to correctly align the input images. In contrast, both RS-GD registration and PAT alignment failed to solve the problem for severely perturbed sensed images, their corresponding success rates being far less than 100%. In terms of efficiency, there were significant improvements over Algorithm 1, with the proposed method being at least two times faster.
The experimentally established results validate the proposed method and open the path for further developments and extensions to more complex transformations. Metaheuristics involving other promising bio-inspired techniques, such as the flower pollination algorithm, cuckoo search, and the bat algorithm, will be considered for the population-based optimization component. In addition, an experimental study on the influence of parameter values on the performance of the proposed method is in progress.

Author Contributions

Conceptualization, C.L.C. and C.R.U.; formal analysis, C.L.C. and C.R.U.; methodology, C.L.C.; software, C.L.C. and C.R.U.; supervision, C.L.C.; validation, C.L.C. and C.R.U.; writing—original draft, C.L.C.; writing—review and editing, C.L.C. and C.R.U. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cocianu, C.-L.; Uscatu, C.R. Cluster-Based Memetic Approach of Image Alignment. Electronics 2021, 10, 2606. [Google Scholar] [CrossRef]
  2. Burt, P.J.; Adelson, E.H. The Laplacian Pyramid as a Compact Image Code, Readings in Computer Vision; Elsevier: Amsterdam, The Netherlands, 1987; pp. 671–679. [Google Scholar] [CrossRef]
  3. Koenderink, J.J. The structure of images. Biol. Cybern. 1984, 50, 363–370. [Google Scholar] [CrossRef] [PubMed]
  4. Rosenfeld, A.; Thurston, M. Edge and Curve Detection for Visual Scene Analysis. IEEE Trans. Comput. 1971, C-20, 562–569. [Google Scholar] [CrossRef]
  5. Roelfsema, P.R.; de Lange, F.P. Early Visual Cortex as a Multiscale Cognitive Blackboard. Annu. Rev. Vis. Sci. 2016, 2, 131–151. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Sharon, E.; Brandt, A.; Basri, R. Fast multiscale image segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. CVPR 2000 (Cat. No.PR00662), Hilton Head, SC, USA, 15 June 2000; Volume 1, pp. 70–77. [Google Scholar] [CrossRef] [Green Version]
  7. Li, G.; Yu, Y. Visual Saliency Detection Based on Multiscale Deep CNN Features. IEEE Trans. Image Process. 2016, 25, 5012–5024. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  8. Ji, Y.; Zhang, H.; Wu, Q.J. Salient object detection via multi-scale attention CNN. Neurocomputing 2018, 322, 130–140. [Google Scholar] [CrossRef]
  9. Ju, M.; Luo, H.; Wang, Z.; Hui, B.; Chang, Z. The Application of Improved YOLO V3 in Multi-Scale Target Detection. Appl. Sci. 2019, 9, 3775. [Google Scholar] [CrossRef] [Green Version]
  10. Zheng, A.; Lin, X.; Dong, J.; Wang, W.; Tang, J.; Luo, B. Multi-scale attention vehicle re-identification. Neural Comput. Appl. 2020, 32, 17489–17503. [Google Scholar] [CrossRef]
  11. Qian, X.; Fu, Y.; Jiang, Y.; Xiang, T.; Xue, X. Multi-scale deep learning architectures for person re-identification. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 5409–5418. [Google Scholar] [CrossRef] [Green Version]
  12. Chen, Y.; Zhu, X.; Gong, S. Person Re-identification by Deep Learning Multi-scale Representations. In Proceedings of the 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), Venice, Italy, 22–29 October 2017; pp. 2590–2600. [Google Scholar] [CrossRef]
  13. Fan, H.; Xiong, B.; Mangalam, K.; Li, Y.; Yan, Z.; Malik, J.; Feichtenhofer, C. Multiscale vision transformers. arXiv 2021, arXiv:2104.11227. [Google Scholar]
  14. Hoskere, V.; Narazaki, Y.; Hoang, T.; Spencer, B., Jr. Vision-based structural inspection using multiscale deep convolutional neural networks. arXiv 2018, arXiv:1805.01055. [Google Scholar]
  15. Wang, Y.; Song, X.; Gong, G.; Li, N. A Multi-Scale Feature Extraction-Based Normalized Attention Neural Network for Image Denoising. Electronics 2021, 10, 319. [Google Scholar] [CrossRef]
  16. Ren, Y.; Yang, J.; Guo, Z.; Zhang, Q.; Cao, H. Ship Classification Based on Attention Mechanism and Multi-Scale Convolutional Neural Network for Visible and Infrared Images. Electronics 2020, 9, 2022. [Google Scholar] [CrossRef]
  17. Peng, G.C.Y.; Alber, M.; Tepole, A.B.; Cannon, W.R.; De, S.; Dura-Bernal, S.; Garikipati, K.; Karniadakis, G.; Lytton, W.W.; Perdikaris, P.; et al. Multiscale Modeling Meets Machine Learning: What Can We Learn? Arch. Comput. Methods Eng. 2020, 28, 1017–1037. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Goshtasby, A.A. Theory and Applications of Image Registration; John Wiley & Sons: Hoboken, NJ, USA, 2017. [Google Scholar]
  19. Yang, X.S. Nature-Inspired Metaheuristic Algorithms; Luniver Press: Frome, UK, 2008. [Google Scholar]
  20. Yang, X.S. (Ed.) Nature-Inspired Algorithms and Applied Optimization; Springer: Berlin/Heidelberg, Germany, 2018. [Google Scholar]
  21. Eiben, A.; Smith, J. Introduction to Evolutionary Computing; Springer: Berlin, Germany, 2015. [Google Scholar] [CrossRef]
  22. Cocianu, C.L.; Stan, A.D.; Avramescu, M. Firefly-Based Approaches of Image Recognition. Symmetry 2020, 12, 881. [Google Scholar] [CrossRef]
  23. Satamraju, K.P. Canny Edge Detection. MATLAB Central File Exchange. 2021. Available online: https://www.mathworks.com/matlabcentral/fileexchange/29014-canny-edge-detection (accessed on 20 December 2021).
  24. Kolda, T.; Lewis, R.M.; Torczon, V. Optimization by Direct Search: New Perspectives on Some Classical and Modern Methods. SIAM Rev. 2003, 45, 385–482. [Google Scholar] [CrossRef]
  25. Viola, P.; Wells, W. Alignment by maximization of mutual information. Int. J. Comput. Vision 1997, 24, 137–154. [Google Scholar] [CrossRef]
  26. Feutrill, A.; Roughan, M. A Review of Shannon and Differential Entropy Rate Estimation. Entropy 2021, 23, 1046. [Google Scholar] [CrossRef] [PubMed]
  27. Vila, M.; Bardera, A.; Feixas, M.; Sbert, M. Tsallis Mutual Information for Document Classification. Entropy 2011, 13, 1694–1707. [Google Scholar] [CrossRef]
  28. Zhang, Y.; Wu, L. Optimal Multi-Level Thresholding Based on Maximum Tsallis Entropy via an Artificial Bee Colony Approach. Entropy 2011, 13, 841–859. [Google Scholar] [CrossRef] [Green Version]
  29. Yale. Yale Face Database. Available online: http://cvc.yale.edu/projects/yalefaces/yalefaces.html (accessed on 14 December 2020).
  30. Conejo, B. Optimization techniques for image registration applied to remote sensing. Ph.D. Thesis, University Paris-East, Créteil, France, 2018. [Google Scholar]
  31. El-tanany, A.S.; Hussein, K.; Mousa, A.; Amein, A.S. Evaluation of Gradient Descent Optimization method for SAR Images Co-registration. In Proceedings of the 2020 12th International Conference on Electrical Engineering (ICEENG), Cairo, Egypt, 7–9 July 2020; pp. 288–292. [Google Scholar] [CrossRef]
Figure 1. Images of subject 10: (a) target image, (b) sensed image, and (c) correctly restored image.
Figure 2. Images of subject 7: (a) target image, (b) sensed image, and (c) correctly restored image.
Figure 3. Images for subject 10, scaled and binarized, s1 = 4: (a) target; (b) sensed.
Figure 4. Images for subject 7, scaled and binarized, s1 = 4: (a) target; (b) sensed.
Figure 5. Images registered by Algorithm 2: (a) subject 10; (b) subject 7.
Figure 6. Images registered by PAT: (a) subject 10; (b) subject 7.
Figure 7. Images registered by RS-GD: (a) subject 10; (b) subject 7.
Table 1. The recorded runtimes for Algorithms 1 and 2.

Picture Number | MeanRT Algorithm 2 | Standard Deviation Algorithm 2 | MeanRT Algorithm 1 | Standard Deviation Algorithm 1
1 | 16.683 | 13.840 | 39.226 | 51.040
2 | 7.032 | 4.448 | 18.024 | 12.859
3 | 8.693 | 9.794 | 21.973 | 34.752
4 | 9.279 | 6.536 | 19.962 | 14.004
5 | 10.875 | 9.985 | 26.616 | 34.338
6 | 27.259 | 25.174 | 81.579 | 80.575
7 | 6.592 | 6.219 | 17.904 | 14.108
8 | 9.898 | 8.019 | 19.449 | 11.861
9 | 9.992 | 8.588 | 14.837 | 14.519
10 | 45.863 | 35.704 | 73.085 | 95.596
11 | 17.577 | 11.032 | 32.336 | 17.693
12 | 3.713 | 2.134 | 7.921 | 3.138
13 | 7.026 | 4.049 | 14.868 | 8.589
14 | 12.279 | 8.817 | 29.684 | 20.397
15 | 22.550 | 21.484 | 78.269 | 79.963
16 | 16.881 | 13.385 | 40.910 | 33.674
17 | 10.417 | 8.510 | 25.807 | 44.410
18 | 6.941 | 4.229 | 19.458 | 14.295
19 | 9.059 | 7.981 | 14.484 | 8.955
20 | 6.998 | 4.559 | 12.114 | 8.604
21 | 5.218 | 2.388 | 10.452 | 6.977
22 | 15.622 | 11.741 | 36.905 | 50.612
23 | 10.943 | 12.642 | 23.850 | 24.620
24 | 7.110 | 6.141 | 17.508 | 13.665
25 | 6.595 | 4.438 | 17.884 | 15.519
26 | 3.500 | 2.740 | 8.466 | 4.721
27 | 10.334 | 8.059 | 27.193 | 24.778
28 | 8.165 | 7.235 | 24.559 | 23.945
29 | 6.027 | 4.516 | 20.026 | 23.594
30 | 4.437 | 3.095 | 15.116 | 11.412
Mean value | 11.452 | 9.249 | 27.015 | 26.774
Table 2. RSNR values and success rates for Algorithm 2, PAT, and RS-GD.

Picture Number | MeanRSNR Algorithm 2 | Standard Deviation Algorithm 2 | Correct Alignment Algorithm 2 | RSNR PAT | Correct Alignment PAT | RSNR RS-GD | Correct Alignment RS-GD
1 | 0.911 | 0.022 | 1 | 0.401 | 0 | 0.538 | 0
2 | 0.808 | 0.033 | 1 | 0.366 | 0 | 0.874 | 1
3 | 0.738 | 0.051 | 1 | 0.317 | 0 | 0.877 | 1
4 | 0.872 | 0.042 | 1 | 0.354 | 0 | 0.887 | 1
5 | 0.887 | 0.046 | 1 | 0.442 | 0 | 0.438 | 0
6 | 0.775 | 0.064 | 1 | 0.275 | 0 | 0.421 | 1
7 | 0.959 | 0.019 | 1 | 0.822 | 1 | 0.765 | 1
8 | 0.978 | 0.009 | 1 | 0.874 | 1 | 0.803 | 1
9 | 0.820 | 0.036 | 1 | 0.407 | 0 | 0.395 | 0
10 | 0.876 | 0.039 | 1 | 0.424 | 0 | 0.482 | 0
11 | 0.843 | 0.050 | 1 | 0.325 | 0 | 0.921 | 1
12 | 0.884 | 0.039 | 1 | 0.400 | 0 | 0.940 | 1
13 | 0.817 | 0.015 | 1 | 0.770 | 1 | 0.876 | 1
14 | 0.885 | 0.037 | 1 | 0.854 | 1 | 0.377 | 0
15 | 0.826 | 0.038 | 1 | 0.321 | 0 | 0.428 | 0
16 | 0.808 | 0.016 | 1 | 0.375 | 0 | 0.892 | 1
17 | 0.842 | 0.045 | 1 | 0.322 | 0 | 0.928 | 1
18 | 0.937 | 0.022 | 1 | 0.777 | 1 | 0.977 | 1
19 | 0.761 | 0.048 | 1 | 0.306 | 0 | 0.826 | 1
20 | 0.853 | 0.041 | 1 | 0.443 | 0 | 0.515 | 0
21 | 0.945 | 0.029 | 1 | 0.251 | 0 | 0.360 | 0
22 | 0.730 | 0.048 | 1 | 0.358 | 0 | 0.440 | 0
23 | 0.870 | 0.049 | 1 | 0.364 | 0 | 0.405 | 0
24 | 0.750 | 0.021 | 1 | 0.344 | 0 | 0.894 | 1
25 | 0.943 | 0.029 | 1 | 0.733 | 1 | 0.984 | 1
26 | 0.768 | 0.038 | 1 | 0.366 | 0 | 0.441 | 0
27 | 0.928 | 0.027 | 1 | 0.584 | 1 | 0.471 | 0
28 | 0.883 | 0.037 | 1 | 0.767 | 1 | 0.925 | 1
29 | 0.774 | 0.034 | 1 | 0.434 | 0 | 0.392 | 0
30 | 0.784 | 0.035 | 1 | 0.347 | 0 | 0.408 | 0
Mean value | 0.848 | 0.035 | 1 | 0.471 | 0.267 | 0.663 | 0.533
Table 3. The RNMI_S values for Algorithm 2, PAT, and RS-GD.

Picture Number | MeanRNMI_S Algorithm 2 | Standard Deviation Algorithm 2 | RNMI_S PAT | RNMI_S RS-GD
1 | 0.830 | 0.032 | 0.317 | 0.437
2 | 0.862 | 0.040 | 0.419 | 1.012
3 | 0.802 | 0.038 | 0.362 | 0.997
4 | 0.929 | 0.025 | 0.390 | 1.015
5 | 0.923 | 0.031 | 0.426 | 0.443
6 | 0.864 | 0.041 | 0.369 | 0.976
7 | 0.873 | 0.042 | 0.580 | 1.025
8 | 0.875 | 0.027 | 0.620 | 1.027
9 | 0.853 | 0.035 | 0.453 | 0.504
10 | 0.845 | 0.036 | 0.400 | 0.442
11 | 0.866 | 0.041 | 0.379 | 1.025
12 | 0.897 | 0.035 | 0.408 | 1.024
13 | 0.903 | 0.021 | 0.835 | 1.009
14 | 0.913 | 0.033 | 0.801 | 0.342
15 | 0.881 | 0.030 | 0.340 | 0.458
16 | 0.797 | 0.021 | 0.383 | 1.017
17 | 0.866 | 0.035 | 0.393 | 1.018
18 | 0.872 | 0.032 | 0.712 | 1.037
19 | 0.876 | 0.032 | 0.357 | 0.994
20 | 0.888 | 0.044 | 0.404 | 0.446
21 | 0.872 | 0.022 | 0.356 | 0.364
22 | 0.820 | 0.040 | 0.395 | 0.470
23 | 0.871 | 0.050 | 0.421 | 0.465
24 | 0.809 | 0.019 | 0.383 | 1.014
25 | 0.904 | 0.039 | 0.616 | 1.040
26 | 0.825 | 0.032 | 0.396 | 0.488
27 | 0.876 | 0.031 | 0.522 | 0.427
28 | 0.886 | 0.037 | 0.729 | 1.010
29 | 0.859 | 0.034 | 0.486 | 0.424
30 | 0.858 | 0.033 | 0.429 | 0.520
Mean value | 0.866 | 0.034 | 0.469 | 0.749
Table 4. The RNMI_α^T values for Algorithm 2, PAT, and RS-GD.

Picture Number | MeanRNMI_α^T Algorithm 2 | Standard Deviation Algorithm 2 | RNMI_α^T PAT | RNMI_α^T RS-GD
1 | 0.935 | 0.013 | 0.658 | 0.796
2 | 0.947 | 0.015 | 0.705 | 0.978
3 | 0.911 | 0.020 | 0.655 | 0.973
4 | 0.972 | 0.010 | 0.672 | 0.977
5 | 0.970 | 0.012 | 0.741 | 0.765
6 | 0.944 | 0.018 | 0.694 | 0.940
7 | 0.949 | 0.018 | 0.830 | 0.980
8 | 0.965 | 0.009 | 0.856 | 0.982
9 | 0.942 | 0.015 | 0.712 | 0.748
10 | 0.938 | 0.015 | 0.713 | 0.756
11 | 0.948 | 0.016 | 0.718 | 0.995
12 | 0.965 | 0.014 | 0.698 | 0.989
13 | 0.956 | 0.008 | 0.898 | 0.969
14 | 0.965 | 0.014 | 0.850 | 0.597
15 | 0.944 | 0.013 | 0.621 | 0.745
16 | 0.922 | 0.009 | 0.680 | 0.977
17 | 0.946 | 0.016 | 0.655 | 0.978
18 | 0.955 | 0.012 | 0.882 | 0.996
19 | 0.944 | 0.016 | 0.624 | 0.973
20 | 0.952 | 0.019 | 0.668 | 0.695
21 | 0.951 | 0.008 | 0.734 | 0.709
22 | 0.923 | 0.017 | 0.655 | 0.751
23 | 0.947 | 0.021 | 0.695 | 0.744
24 | 0.913 | 0.008 | 0.632 | 0.976
25 | 0.959 | 0.017 | 0.845 | 0.986
26 | 0.935 | 0.014 | 0.707 | 0.765
27 | 0.955 | 0.013 | 0.823 | 0.716
28 | 0.952 | 0.015 | 0.856 | 0.969
29 | 0.925 | 0.019 | 0.785 | 0.715
30 | 0.933 | 0.016 | 0.677 | 0.805
Mean value | 0.945 | 0.014 | 0.731 | 0.865
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
