Article

New Model of Heteroassociative Min Memory Robust to Acquisition Noise

by Julio César Salgado-Ramírez 1,*, Jean Marie Vianney Kinani 2,*, Eduardo Antonio Cendejas-Castro 3, Alberto Jorge Rosales-Silva 4, Eduardo Ramos-Díaz 5 and Juan Luis Díaz-de-Léon-Santiago 6,*
1 Ingeniería Biomédica, Universidad Politécnica de Pachuca (UPP), Zempoala 43830, Mexico
2 Ingeniería Mecatrónica, Instituto Politécnico Nacional-UPIIH, Pachuca 07738, Mexico
3 Escuela de Ingeniería y Ciencias, Tecnologico de Monterrey, Pachuca 42083, Mexico
4 Sección de Estudios de Posgrado e Investigación, Instituto Politécnico Nacional-ESIME Zacatenco, Mexico City 07738, Mexico
5 Ingeniería en Sistemas Electrónicos y de Telecomunicaciones, Universidad Autónoma de la Ciudad de México, Mexico City 09790, Mexico
6 Centro de Investigación en Computación, Instituto Politécnico Nacional, Mexico City 07700, Mexico
* Authors to whom correspondence should be addressed.
Mathematics 2022, 10(1), 148; https://doi.org/10.3390/math10010148
Submission received: 6 December 2021 / Revised: 27 December 2021 / Accepted: 2 January 2022 / Published: 4 January 2022
(This article belongs to the Special Issue Theory and Applications of Neural Networks)

Abstract: Associative memories in min and max algebra are of great interest for pattern recognition. One of their properties is that they are one-shot, that is, they converge to the solution in a single attempt, without iterating. These memories have proven to be very efficient, but they show weakness against mixed noise. If an appropriate kernel is not used, that is, a subset of the pattern to be recalled that is not affected by noise, the memories fail noticeably. A major obstacle to building kernels with sufficient conditions, using binary and gray-scale images, is not knowing how the noise is registered in these images. A solution to this problem is presented by analyzing the behavior of the acquisition noise. What is new about this analysis is that noise can be mapped to a distance obtained by a distance transform. Furthermore, this analysis provides the basis for a new model of min heteroassociative memory that is robust to acquisition/mixed noise. The proposed model is novel because min associative memories are typically inoperative under mixed noise. The new model of heteroassociative memory obtains very interesting results with this type of noise.

1. Introduction

The human brain is quite intriguing when it comes to learning and remembering its environment. By just hearing a sound, smelling an aroma, seeing an image, or touching a texture, our brain, through its neurons, can associate external information with something it has learned and consequently indicate how to act on that information. The most surprising thing is that very little of the information acquired by our senses is enough for the brain to fully remember what it has learned. For example, the brain learns the characteristics of a person's face in broad daylight and is able to remember those same characteristics on a poorly lit night; this means that the brain needs minimal but sufficient information to remember. It should be noted that, if the brain does not have this minimal information, it is unable to remember; consequently, if the minimal information the brain needs is corrupted by other information, such as noise, the learned content cannot be recalled even though it was already learned. This fact has inspired researchers in the area of pattern recognition to propose models that simulate the behavior of the brain. One of those models is associative memories [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]. The development of associative memory algorithms pushes towards greater learning capacity, better performance and, above all, robustness to different types of noise [13,16,17,18,19,20,21]. Thanks to their versatility, associative memories have been applied to pattern recognition problems in different areas such as robotics [22,23,24] and medicine [25,26,27], among others [20,28,29,30,31,32,33,34,35,36,37,38]. It should be pointed out that the first associative memory reported in the state of the art, the Lernmatrix [1], is the basis of the associative classifier $LM_{\tau}$ and has proven competitive with the most widely used machine learning algorithms [39,40]. Owing to their versatility, associative memories have continued to be a subject of research over the last decade [20,22,23,24,25,26,27,28,29,30,31,32,33,35,36,37,41,42,43,44,45,46,47,48,49,50].
Generally, an associative memory is a process that aims to recover or fully remember patterns from input patterns that may be altered by some type of noise. An associative memory can be pictured as a black box that receives a pattern x as input, processes it, and generates a pattern y as a result, as shown in Figure 1.
The term “full recall” means that the resulting pattern is identical to the pattern previously learned by the associative memory. The relationship between the input pattern x and the output pattern y is defined as an ordered pair $(x, y)$, where both are column vectors. Hence, the associative memory must be able to learn a set of ordered pairs of patterns and to retrieve the output patterns from the input patterns; this set is defined as:
$$\{(x^{\omega}, y^{\omega}) \mid \omega \in \{1, 2, \ldots, p\}\} \qquad (1)$$
where p indicates the cardinality of the set. The finite set of patterns in expression (1) is called the fundamental set of patterns, and its elements are the fundamental patterns, which can be either input or output patterns.
To refer to an element of an x pattern or of a y pattern, the notation $x_j^{\omega}$ or $y_j^{\omega}$ will be adopted, where j is the index of the element position within the pattern and ω is the index of the ordered pair.
According to Figure 1, M is the learning matrix, or associative memory; after learning, it contains the encoded information of the fundamental set. It is then operated, in a certain way, with a previously learned pattern x, which may be altered by some kind of noise, and as a result the corresponding output pattern y is expected.
Associative memories consist of two phases: learning and recall. The learning phase consists of finding the operator(s) needed to encode, in some way, the relationship between the input and output patterns; through this encoding the learning matrix M is generated. The recall phase consists of finding the operator(s), together with sufficient conditions, needed to generate an output pattern; that is, once the matrix M has been formed, a previously learned input pattern x is presented, and M is operated with the required operator(s) under certain conditions together with the pattern x, thereby generating the output pattern y.
In this paper, an input pattern altered by noise will be represented as $\tilde{x}$. For example, the expression $\tilde{x}^{\omega}$ represents the input pattern $x^{\omega}$ altered by noise.
Typically, associative memories are classified into auto- and heteroassociative memories. An associative memory is said to be autoassociative if $x^{\mu} = y^{\mu}\ \forall \mu \in \{1, 2, \ldots, p\}$, and heteroassociative if $\exists \mu \in \{1, 2, \ldots, p\}$ for which $x^{\mu} \neq y^{\mu}$.
Associative memories differ in the form of algebra they use; for instance, min and max-memories, as their names imply, work in the so-called minimax algebra, that is, minimal and maximal algebra [16,17,19,21,34,35,45], whereas other types of memories work in real algebra [1,2,3,4,5,6,7,8,9,10,11,12,13,14].
In this article, we will focus on associative memories based on minimax algebra such as morphological and α β memories, which are memories that meet the cited conditions [16,17,19,21,34,35,45].

1.1. Morphological Associative Memories

There are two types of morphological associative memories: the max (⋁) memories, represented by M, and the min (⋀) memories, represented by W. Both memories work in the heteroassociative and autoassociative modes. The fundamental set for morphological associative memories is:
$$\{(x^{\mu}, y^{\mu}) \mid \mu = 1, 2, \ldots, p\}$$
with $A \subseteq \mathbb{R}$, $x^{\mu} = (x_1^{\mu}, x_2^{\mu}, \ldots, x_n^{\mu})^t \in A^n$ and $y^{\mu} = (y_1^{\mu}, y_2^{\mu}, \ldots, y_m^{\mu})^t \in A^m$.
Two new operations between arrays are defined in terms of the +, ⋁ and ⋀ operations [16,17] in order to express the learning and recall phases of the morphological associative memories.
Let $D = [d_{ij}]_{m \times r}$ and $H = [h_{ij}]_{r \times n}$ be matrices whose entries are integer numbers.
Definition 1.
The maximum product of D and H, denoted by $C = D \boxtimes_{\vee} H$, is the matrix $[c_{ij}]_{m \times n}$ whose ij-th component $c_{ij}$ is defined as follows:
$$c_{ij} = \bigvee_{k=1}^{r} \left( d_{ik} + h_{kj} \right)$$
Definition 2.
The minimum product of D and H, denoted by $C = D \boxtimes_{\wedge} H$, is the matrix $[c_{ij}]_{m \times n}$ whose ij-th component $c_{ij}$ is defined as follows:
$$c_{ij} = \bigwedge_{k=1}^{r} \left( d_{ik} + h_{kj} \right)$$
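As a quick numerical illustration (an example added here for clarity, not taken from the original), let $D = \begin{pmatrix} 1 & 4 \\ 0 & 2 \end{pmatrix}$ and $H = \begin{pmatrix} 3 & 0 \\ 1 & 5 \end{pmatrix}$. Then the (1,1) entries of the two products are
$$\left( D \boxtimes_{\vee} H \right)_{11} = \bigvee_{k=1}^{2} (d_{1k} + h_{k1}) = (1+3) \vee (4+1) = 5, \qquad \left( D \boxtimes_{\wedge} H \right)_{11} = (1+3) \wedge (4+1) = 4,$$
so the two products differ only in whether the maximum or the minimum of the sums $d_{ik} + h_{kj}$ is kept.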
The learning phase of the morphological ⋁ memories consists of two stages:
1.
For each of the p associations $(x^{\mu}, y^{\mu})$, the maximum product of Definition 1 is applied to build the matrix $y^{\mu} \boxtimes_{\vee} (-x^{\mu})^t$ of dimension $m \times n$, where the negated transpose of the input pattern $x^{\mu}$ is defined as $(-x^{\mu})^t = (-x_1^{\mu}, -x_2^{\mu}, \ldots, -x_n^{\mu})$. This expression may be elaborated as follows:
$$y^{\mu} \boxtimes_{\vee} (-x^{\mu})^t = \begin{pmatrix} y_1^{\mu} \\ y_2^{\mu} \\ \vdots \\ y_m^{\mu} \end{pmatrix} \boxtimes_{\vee} \left( -x_1^{\mu}, -x_2^{\mu}, \ldots, -x_n^{\mu} \right)$$
$$y^{\mu} \boxtimes_{\vee} (-x^{\mu})^t = \begin{pmatrix} y_1^{\mu}-x_1^{\mu} & y_1^{\mu}-x_2^{\mu} & \cdots & y_1^{\mu}-x_j^{\mu} & \cdots & y_1^{\mu}-x_n^{\mu} \\ y_2^{\mu}-x_1^{\mu} & y_2^{\mu}-x_2^{\mu} & \cdots & y_2^{\mu}-x_j^{\mu} & \cdots & y_2^{\mu}-x_n^{\mu} \\ \vdots & \vdots & & \vdots & & \vdots \\ y_i^{\mu}-x_1^{\mu} & y_i^{\mu}-x_2^{\mu} & \cdots & y_i^{\mu}-x_j^{\mu} & \cdots & y_i^{\mu}-x_n^{\mu} \\ \vdots & \vdots & & \vdots & & \vdots \\ y_m^{\mu}-x_1^{\mu} & y_m^{\mu}-x_2^{\mu} & \cdots & y_m^{\mu}-x_j^{\mu} & \cdots & y_m^{\mu}-x_n^{\mu} \end{pmatrix}$$
2.
The ⋁ operator is applied to the p matrices obtained in the previous stage to form the morphological memory M:
$$M = \bigvee_{\mu=1}^{p} \left[ y^{\mu} \boxtimes_{\vee} (-x^{\mu})^t \right] = [m_{ij}]_{m \times n}$$
$$m_{ij} = \bigvee_{\mu=1}^{p} \left( y_i^{\mu} - x_j^{\mu} \right)$$
The recall phase consists of applying the minimum product of Definition 2 between the memory M and an input pattern $x^{\omega}$, with $\omega \in \{1, 2, \ldots, p\}$, in order to obtain a column vector of dimension m:
$$y = M \boxtimes_{\wedge} x^{\omega}$$
Note that the i-th component of the vector y is:
$$y_i = \bigwedge_{j=1}^{n} \left( m_{ij} + x_j^{\omega} \right)$$
The learning phase of the morphological ⋀ memories consists of two stages:
1.
For each of the p associations $(x^{\mu}, y^{\mu})$, the minimum product of Definition 2 is applied to build the matrix $y^{\mu} \boxtimes_{\wedge} (-x^{\mu})^t$ of dimension $m \times n$, where the negated transpose of the input pattern $x^{\mu}$ is defined as $(-x^{\mu})^t = (-x_1^{\mu}, -x_2^{\mu}, \ldots, -x_n^{\mu})$. This expression may be expanded as:
$$y^{\mu} \boxtimes_{\wedge} (-x^{\mu})^t = \begin{pmatrix} y_1^{\mu} \\ y_2^{\mu} \\ \vdots \\ y_m^{\mu} \end{pmatrix} \boxtimes_{\wedge} \left( -x_1^{\mu}, -x_2^{\mu}, \ldots, -x_n^{\mu} \right)$$
$$y^{\mu} \boxtimes_{\wedge} (-x^{\mu})^t = \begin{pmatrix} y_1^{\mu}-x_1^{\mu} & \cdots & y_1^{\mu}-x_n^{\mu} \\ \vdots & & \vdots \\ y_m^{\mu}-x_1^{\mu} & \cdots & y_m^{\mu}-x_n^{\mu} \end{pmatrix}$$
2.
The ⋀ operator is applied to the p matrices obtained in the previous stage to form the morphological memory W:
$$W = \bigwedge_{\mu=1}^{p} \left[ y^{\mu} \boxtimes_{\wedge} (-x^{\mu})^t \right] = [w_{ij}]_{m \times n}$$
$$w_{ij} = \bigwedge_{\mu=1}^{p} \left( y_i^{\mu} - x_j^{\mu} \right)$$
The recall phase consists of applying the maximum product of Definition 1 between the memory W and an input pattern $x^{\omega}$, with $\omega \in \{1, 2, \ldots, p\}$, so as to obtain a column vector of dimension m:
$$y = W \boxtimes_{\vee} x^{\omega}$$
Note that the i-th component of the vector y is:
$$y_i = \bigvee_{j=1}^{n} \left( w_{ij} + x_j^{\omega} \right)$$
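To make the learning and recall rules above concrete, the following is a minimal NumPy sketch (illustrative code added by the editor, not the authors' implementation), assuming the fundamental patterns are stored as the rows of arrays X (p × n) and Y (p × m) with integer or real entries; the function names are hypothetical.

```python
import numpy as np

def max_product(D, H):
    """Maximum product: c_ij = max_k (d_ik + h_kj)."""
    return np.max(D[:, :, None] + H[None, :, :], axis=1)

def min_product(D, H):
    """Minimum product: c_ij = min_k (d_ik + h_kj)."""
    return np.min(D[:, :, None] + H[None, :, :], axis=1)

def learn_M(X, Y):
    """Max memory M: m_ij = max over mu of (y_i^mu - x_j^mu)."""
    return np.max(Y[:, :, None] - X[:, None, :], axis=0)

def learn_W(X, Y):
    """Min memory W: w_ij = min over mu of (y_i^mu - x_j^mu)."""
    return np.min(Y[:, :, None] - X[:, None, :], axis=0)

def recall_M(M, x):
    """Recall with M: y_i = min_j (m_ij + x_j), the minimum product with the input."""
    return np.min(M + x[None, :], axis=1)

def recall_W(W, x):
    """Recall with W: y_i = max_j (w_ij + x_j), the maximum product with the input."""
    return np.max(W + x[None, :], axis=1)
```

For noise-free inputs of the fundamental set, both recall rules return the associated output patterns whenever the usual perfect-recall conditions of morphological memories hold.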

1.2. α β Associative Memories

The αβ associative memories are fundamentally based on maximum and minimum order relationships between the patterns. Two binary operators are defined: the α operator, used in the learning phase, and the β operator, used in the recall phase. These memories work in both the autoassociative and heteroassociative modes [34,35,45].
αβ memories operate on two sets, defined as $A = \{0, 1\}$ and $B = \{0, 1, 2\}$.
The binary operation $\alpha : A \times A \to B$ is defined by Table 1.
The binary operation $\beta : B \times A \to A$ is defined by Table 2.
The learning phase of the αβ ⋁ memories consists of two stages:
1.
For each of the p associations $(x^{\mu}, y^{\mu})$, the α operator of Table 1 is applied to build the matrix $y^{\mu} \boxtimes_{\alpha} (x^{\mu})^t$ of dimension $m \times n$, where the transpose of the input pattern $x^{\mu}$ is defined as $(x^{\mu})^t = (x_1^{\mu}, x_2^{\mu}, \ldots, x_n^{\mu})$. This expression develops as shown below:
$$y^{\mu} \boxtimes_{\alpha} (x^{\mu})^t = \begin{pmatrix} y_1^{\mu} \\ y_2^{\mu} \\ \vdots \\ y_m^{\mu} \end{pmatrix} \boxtimes_{\alpha} \left( x_1^{\mu}, x_2^{\mu}, \ldots, x_n^{\mu} \right)$$
$$y^{\mu} \boxtimes_{\alpha} (x^{\mu})^t = \begin{pmatrix} \alpha(y_1^{\mu}, x_1^{\mu}) & \alpha(y_1^{\mu}, x_2^{\mu}) & \cdots & \alpha(y_1^{\mu}, x_n^{\mu}) \\ \alpha(y_2^{\mu}, x_1^{\mu}) & \alpha(y_2^{\mu}, x_2^{\mu}) & \cdots & \alpha(y_2^{\mu}, x_n^{\mu}) \\ \vdots & \vdots & & \vdots \\ \alpha(y_m^{\mu}, x_1^{\mu}) & \alpha(y_m^{\mu}, x_2^{\mu}) & \cdots & \alpha(y_m^{\mu}, x_n^{\mu}) \end{pmatrix}$$
2.
The ⋁ operator is applied to the p matrices obtained from expression (13) to create the memory V:
$$V = \bigvee_{\mu=1}^{p} \left[ y^{\mu} \boxtimes_{\alpha} (x^{\mu})^t \right]$$
with the ij-th component of V being:
$$v_{ij} = \bigvee_{\mu=1}^{p} \alpha\!\left( y_i^{\mu}, x_j^{\mu} \right)$$
According to (14), since $\alpha : A \times A \to B$, we observe that $v_{ij} \in B$, $\forall i \in \{1, 2, \ldots, m\}$, $\forall j \in \{1, 2, \ldots, n\}$.
The recall phase consists of performing the operation $V \Delta_{\beta}\, x^{\omega}$, where $x^{\omega}$, with $\omega \in \{1, 2, \ldots, p\}$, is the input pattern to be recalled and V is the matrix obtained in (14). As a result, a column vector of dimension m is generated, whose i-th component is obtained according to expression (15):
$$\left( V \Delta_{\beta}\, x^{\omega} \right)_i = \bigwedge_{j=1}^{n} \beta\!\left( \bigvee_{\mu=1}^{p} \alpha(y_i^{\mu}, x_j^{\mu}),\ x_j^{\omega} \right)$$
$$\left( V \Delta_{\beta}\, x^{\omega} \right)_i = \bigwedge_{j=1}^{n} \beta\!\left( v_{ij}, x_j^{\omega} \right)$$
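The sketch below is an editorial illustration of how the αβ ⋁ memory V can be built and used for recall in Python. Since the document's Tables 1 and 2 are not reproduced in this text, the usual αβ definitions of α and β are assumed; the dictionaries ALPHA and BETA and the function names are assumptions.

```python
import numpy as np

# Usual alpha: A x A -> B and beta: B x A -> A tables (assumed), with A = {0, 1} and B = {0, 1, 2}.
ALPHA = {(0, 0): 1, (0, 1): 0, (1, 0): 2, (1, 1): 1}
BETA = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1, (2, 0): 1, (2, 1): 1}

def learn_alphabeta_max(X, Y):
    """V memory: v_ij = max over mu of alpha(y_i^mu, x_j^mu); X is p x n, Y is p x m, both binary."""
    p, n = X.shape
    m = Y.shape[1]
    outers = np.array([[[ALPHA[(Y[mu, i], X[mu, j])] for j in range(n)]
                        for i in range(m)] for mu in range(p)])
    return outers.max(axis=0)

def recall_alphabeta_max(V, x):
    """(V Delta_beta x)_i = min over j of beta(v_ij, x_j)."""
    m, n = V.shape
    return np.array([min(BETA[(V[i, j], x[j])] for j in range(n)) for i in range(m)])
```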
Likewise, the learning phase of the αβ ⋀ memories consists of two stages:
1.
For each of the p associations $(x^{\mu}, y^{\mu})$, the α operator of Table 1 is applied to build the matrix $y^{\mu} \boxtimes_{\alpha} (x^{\mu})^t$ of dimension $m \times n$, where the transpose of the input pattern $x^{\mu}$ is defined as $(x^{\mu})^t = (x_1^{\mu}, x_2^{\mu}, \ldots, x_n^{\mu})$. This expression develops as shown below:
$$y^{\mu} \boxtimes_{\alpha} (x^{\mu})^t = \begin{pmatrix} y_1^{\mu} \\ y_2^{\mu} \\ \vdots \\ y_m^{\mu} \end{pmatrix} \boxtimes_{\alpha} \left( x_1^{\mu}, x_2^{\mu}, \ldots, x_n^{\mu} \right)$$
$$y^{\mu} \boxtimes_{\alpha} (x^{\mu})^t = \begin{pmatrix} \alpha(y_1^{\mu}, x_1^{\mu}) & \cdots & \alpha(y_1^{\mu}, x_n^{\mu}) \\ \vdots & & \vdots \\ \alpha(y_m^{\mu}, x_1^{\mu}) & \cdots & \alpha(y_m^{\mu}, x_n^{\mu}) \end{pmatrix}$$
2.
The ⋀ operator is applied to the p matrices obtained from expression (16) so as to create the memory Λ:
$$\Lambda = \bigwedge_{\mu=1}^{p} \left[ y^{\mu} \boxtimes_{\alpha} (x^{\mu})^t \right]$$
with the ij-th component of Λ being:
$$\lambda_{ij} = \bigwedge_{\mu=1}^{p} \alpha\!\left( y_i^{\mu}, x_j^{\mu} \right)$$
The recall phase consists of performing the operation $\Lambda \nabla_{\beta}\, x^{\omega}$, where $x^{\omega}$, with $\omega \in \{1, 2, \ldots, p\}$, is the input pattern to be recalled and Λ is the matrix obtained in (17). As a result, a column vector of dimension m is generated, whose i-th component is obtained according to expression (18):
$$\left( \Lambda \nabla_{\beta}\, x^{\omega} \right)_i = \bigvee_{j=1}^{n} \beta\!\left( \bigwedge_{\mu=1}^{p} \alpha(y_i^{\mu}, x_j^{\mu}),\ x_j^{\omega} \right)$$
$$\left( \Lambda \nabla_{\beta}\, x^{\omega} \right)_i = \bigvee_{j=1}^{n} \beta\!\left( \lambda_{ij}, x_j^{\omega} \right)$$

1.3. Noise

Noise has played a crucial role in the statistical analysis of data, and associative memories have been no exception [16,17,19,34,35,45]. In the case of image processing, for instance, image quality depends on various factors such as noise, temperature, and light. Given that noise is always present in images, it is necessary to reduce or eliminate it so as to ensure reliable analysis [51]. Noise usually arises from the image acquisition and transmission processes [52], and its removal has always been a field of interest in image processing. There are recent and very important advances in filtering different types of noise for digital image processing [22,51,52,53,54,55], and these have been modeled in order to study them meticulously and thus control their effects [56].
Noise is typically modeled with a Gaussian distribution. Image acquisition and transmission, which are the dominant sources of noise, are continuous processes involving physical phenomena such as thermal agitation and the discrete nature of light, which are responsible for the random nature of noise; according to the Central Limit Theorem, the sum of a large number of random variables of any type, essentially noise, tends toward a Gaussian distribution [51,52,54]. As mentioned earlier, noise alters information and makes it difficult to process, hence the importance of knowing its behavior. Regardless of its type, noise can be classified as additive, subtractive, or mixed (salt and pepper). Figure 2 intuitively illustrates these types of noise.
In the case of associative memories based on minmax algebra, noise behavior is an important factor. For example, min memories are inoperative with additive noise while max memories are inoperative with subtractive noise; neither memory operates with mixed noise. Therefore, noise is a central topic for minmax algebra-based memories. The objective is to characterize the behavior of the mixed noise, because with this information one may build a kernel model that recovers the original pattern [16,17,21]. The kernel model consists of finding, in some way, a kernel Z that complies with $Z \subseteq X$, where X is the input pattern, under the condition that Z must not contain any kind of noise. For this reason, the authors of the morphological memories state that the choice of the kernel is an open problem [16,21]. Note that the kernel model is functional for αβ associative memories as well.
Now, the learning and recall phases of the kernel model for associative memories in min and max algebra will be presented.
1.
Learning phase: The diagram in Figure 3 shows the learning phase of the kernel model. As seen in the figure, the input pattern X enters a process that obtains $Z \subseteq X$; then, Z is learned autoassociatively with the memory $M_{ZZ}$, and Z is also learned heteroassociatively with the output pattern Y, this time with the memory $W_{ZY}$.
2.
Recall phase: Figure 4 shows the process followed in the recall phase of the kernel model. Given $\tilde{X}$, the mixed noise-distorted version of the learned pattern X, $\tilde{X}$ is presented to the memory $M_{ZZ}$ and Z is recalled; immediately afterwards, Z is presented to the memory $W_{ZY}$ and, as a result, the output pattern Y is recalled.

1.4. Fast Distance Transform (FDT)

A Distance Transform (DT) measures the distance, in pixels, from a pixel to the edge of the region of interest, regardless of the considered direction [57,58,59,60,61,62,63]. The DT allows useful geometric information to be extracted from images; with this information, regions of interest can be found in many applications, including medicine [57,60]. A widely used metric for the DT is the Euclidean distance [59,61,62]; in fact, the Euclidean distance transform algorithm has been parallelized so that it can be computed even faster [59]. As one can see, the DT is a very useful tool in image processing for data extraction. An even faster variant is the Fast Distance Transform (FDT), which indeed produces interesting results [63]. We now turn our attention to the FDT because it will be used for noise modeling.
The FDT algorithm consists of 2 steps [63], namely:
1.
Read each pixel in the binary image from top to bottom and from left to right; then, each pixel $c \in R$, where R is the region of interest, is assigned the value shown in Equation (19). Algorithm 1 gives the pseudocode of Equation (19).
$$\delta(c) = 1 + \min\{\, \delta(p_j) : p_j \in E \,\} \qquad (19)$$
E is one of the following sets shown in Figure 5. Only the points assigned in E are used in the first part of the transformation.
2.
Read the binary image from bottom to top and from right to left; then, each pixel $c \in R$, where R is the region of interest, is assigned the value shown in Equation (20). Algorithm 2 gives the pseudocode of Equation (20).
$$\delta(c) = \min\{\, \delta(c),\ 1 + \min\{\, \delta(p_i) : p_i \in D \,\} \,\} \qquad (20)$$
D is one of the sets shown in Figure 6. Note that only the points assigned in D are used in the second part of the transformation.
Figure 7 illustrates the result of the two steps of the FDT.
Algorithm 1. First step of the FDT algorithm with the $d_8$ metric (pseudocode figure not reproduced here).
Algorithm 2. Second step of the FDT algorithm with the $d_8$ metric (pseudocode figure not reproduced here).
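The two passes can be summarized in the following Python sketch (an editorial illustration, not the authors' pseudocode), assuming the binary image is a NumPy array whose nonzero pixels form the region of interest R, that pixels outside the image border count as background, and that E and D are the usual causal and anti-causal 8-neighbourhoods of the $d_8$ metric; the function name fdt_d8 is hypothetical.

```python
import numpy as np

def fdt_d8(binary):
    """Two-pass Fast Distance Transform with the d8 (chessboard) metric.
    Returns, for every pixel of the region R (binary > 0), its distance in
    pixels to the edge of R; background pixels keep distance 0."""
    h, w = binary.shape
    big = h + w  # larger than any possible d8 distance inside the image
    dist = np.where(binary > 0, big, 0).astype(int)

    def scan(order, neighbours, first_pass):
        for i, j in order:
            if binary[i, j]:
                best = big
                for di, dj in neighbours:  # already-visited neighbours (set E or D)
                    ni, nj = i + di, j + dj
                    nb = dist[ni, nj] if (0 <= ni < h and 0 <= nj < w) else 0
                    best = min(best, nb)
                dist[i, j] = 1 + best if first_pass else min(dist[i, j], 1 + best)

    # First pass (Equation (19)): top to bottom, left to right, neighbour set E.
    scan([(i, j) for i in range(h) for j in range(w)],
         ((-1, -1), (-1, 0), (-1, 1), (0, -1)), first_pass=True)
    # Second pass (Equation (20)): bottom to top, right to left, neighbour set D.
    scan([(i, j) for i in range(h - 1, -1, -1) for j in range(w - 1, -1, -1)],
         ((1, -1), (1, 0), (1, 1), (0, 1)), first_pass=False)
    return dist
```

For the signed distances $\delta_1$ and $\delta_2$ used later in the noise analysis, the same transform can be applied to the complement of the region and the result negated.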

2. Materials and Methods

This section details the procedure followed to generate a kernel suitable for heteroassociative memories based on minmax algebra, which are in turn used to recall patterns altered by mixed noise. It also presents the modeling of the noise by means of the FDT.

2.1. Noise

When working with pattern recognition from an image perspective, it is assumed that the noise is distributed in the domain of the function; when working from a signal perspective, it is assumed that the noise is distributed in the range of the function, as evidenced by the signal-to-noise ratio [51,55]. In fact, it makes more sense to talk about the signal-to-noise ratio than about the noise distribution in the domain, because where there is noise there must be a signal that carries it. In this work we assume that noise exists and is distributed both in the range and in the domain of the function; thus, the following hypothesis is presented: the noise is concentrated where the information exists, and its distribution is proportional to the signal amplitude.
Since the noise is distributed where there is signal, the proposed model is based on the distribution of the signal along its equipotential lines in the domain, that is, on the distance transform.
It is assumed that each data acquisition device has its own probability distribution; consequently, a kernel based on the Fast Distance Transform can be created to minimize the effect of the data acquisition process.
To determine the noise distribution generated by acquisition devices in binary images, the following process was performed:
1.
Print the binary image on paper.
2.
Scan the image obtained from step 1, generating a new digital image.
3.
Compare the new digital image with the original one and store the percentage difference.
4.
Print the new digital image obtained in step 2.
5.
Repeat steps 2 to 4, 15 times, with 80 different images (40 binary images and 40 gray-scale images).
The characteristics of the binary images used to determine the noise distribution in the acquisition devices are as follows:
  • 420 × 420 pixels with 64 dpi resolution.
  • 600 × 800 pixels with 180 dpi resolution.
  • 542 × 700 pixels with 96 dpi resolution.
To determine the percentage of variation between the scanned images and their originals, a pixel-by-pixel comparison was carried out. If the pixels of the two images had different values at a coordinate (x, y), a variation existed and was counted; then, using a rule of three, the percentage of variation in the scanned image was calculated, which corresponds to the acquired noise.
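A minimal sketch of that pixel-by-pixel comparison is shown below (editorial illustration; the function name and the NumPy representation of the images are assumptions); the rule of three reduces to dividing the number of differing pixels by the total number of pixels.

```python
import numpy as np

def variation_percentage(original, scanned):
    """Percentage of pixels whose values differ between the original image and
    its printed-and-scanned version; this percentage is taken as the acquired noise."""
    assert original.shape == scanned.shape, "images must have the same size"
    differing = np.count_nonzero(original != scanned)
    return 100.0 * differing / original.size
```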
Figure 8 shows four of the seven scanned images of a circle and highlights the similarity between them; however, there are very marked variations at the edges of the images. Note how the frame and the contour of the circles are affected; this is clearly seen in Figure 8d, where the image is compared with both the original image and scanned image number 7, and the differences between images are colored green. This result is significant because it allows us to conclude that the noise affecting a binary image obtained from an electronic acquisition device is distributed along the edges; that is, the acquisition noise in a binary image arises, grows and is distributed in a structured manner where there are significant gradient changes, besides being mixed noise.
The images in Figure 8 reveal that, when performing the scanning process, the images obtained are subject to slight changes in scale (due to the dots-per-inch configuration of the scanner) and in rotation (due to the placement of the printed image on the scanner bed); however, despite these technical drawbacks, the aforementioned process guarantees that the further away a pixel is from the edge, the less likely it is to be affected, and the closer it is, the higher the probability of being affected.
We now describe the algorithms that allow the acquisition noise distribution to be simulated. Knowing the noise distribution allows us to determine the probability that specific parts of a pattern are affected, and hence which parts of a binary image may serve as a kernel. Algorithm 3 shows the process used to obtain the acquisition noise probability distribution for binary images.
Algorithm 3. Noise probability distribution algorithm for binary images (pseudocode figure not reproduced here).
$\delta_1$ represents the positive distances and $\delta_2$ the negative distances; there is no distance 0. In the plots presented in the following sections, the x-axis shows the $\delta_1$ and $\delta_2$ distances and the y-axis their corresponding histogram frequencies. Note that the noise is concentrated at the edges.
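Since the pseudocode of Algorithm 3 is not reproduced in this text, the following hypothetical sketch illustrates the idea under the definitions above: every pixel changed by the acquisition process is mapped to its signed FDT distance ($\delta_1$ for the region, $\delta_2$ for the complement) and the relative frequencies per distance are accumulated. It reuses fdt_d8 from the sketch in Section 1.4 and assumes binary images with values 0 and 1.

```python
import numpy as np

def noise_distribution(original, scanned):
    """Relative frequency of changed pixels per signed FDT distance.
    delta1 > 0: distances inside the region; delta2 < 0: distances in the complement."""
    delta1 = fdt_d8(original)        # distances inside the region
    delta2 = -fdt_d8(1 - original)   # negated distances in the complement
    signed = np.where(original > 0, delta1, delta2)

    changed = original != scanned    # pixels affected by the acquisition noise
    dists, counts = np.unique(signed[changed], return_counts=True)
    rel = counts / counts.sum()      # relative frequency = probability per distance
    return dict(zip(dists.tolist(), rel.tolist()))
```

Averaging the distributions obtained from the 40 binary images yields a distribution like the one reported in Table 3.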
Since the noise distribution, as a function of the FDT, has already been obtained, it is possible to simulate the noise by applying Algorithm 4.
We now detail how the acquisition noise in gray-scale images is obtained and simulated. The process used to obtain the distribution of mixed noise in gray-scale images is described in Algorithm 5.
Since the distribution of the acquisition noise in gray-scale images has already been obtained, it is now possible to simulate it and Algorithm 6 shows how this is carried out.
So far the probability distribution of the acquisition noise has been obtained by means of the proposed image scanning process. Now, we will proceed to formally define noise.
Definition 3.
Let f be a function from P to A, that is, $f : P \to A$; the function affected by noise is expressed as:
$$f^{*} = f + r = \pi(t) + \psi(\tau(f)) + \kappa(P)$$
where:
  • $\pi(t)$ is a time-dependent random function of t, independent of f.
  • $\psi(\tau(f))$ is a random function depending on a measure τ taken from the acquired data.
  • $\kappa(P)$ is a random function depending on the points $p \in P$, where P is the domain of the noisy information.
π ( t ) represents the transmission noise and is independent from the transmitted information. ψ ( τ ( f ) ) is the acquisition noise that is based on a measure τ . κ ( P ) is known as geometric noise.
Noise can be depicted as shown in Figure 9. Since $\pi(t)$ has been extensively treated in pattern recognition, including for associative memories in minmax algebra, it will be left out of this paper. Moreover, $\kappa(P)$, the geometric noise introduced by the acquisition devices, will not be considered either. It is assumed that both $\pi(t)$ and $\kappa(P)$ are 0; therefore, the noise to be considered is $\psi(\tau(f))$.
Algorithm 4. Mixed noise simulation algorithm for binary images (pseudocode figure not reproduced here).
Algorithm 5. Algorithm that obtains the probability distribution of the acquisition noise in gray-scale images (pseudocode figure not reproduced here).
Algorithm 6. Algorithm that produces an image with simulated mixed noise (pseudocode figure not reproduced here).
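As the pseudocode of Algorithms 4 and 6 is likewise not reproduced, the following hypothetical sketch shows the binary case: each pixel is flipped with the probability that the measured distribution assigns to its signed FDT distance, which produces mixed (additive and subtractive) noise concentrated at the edges. It reuses fdt_d8 and the dictionary returned by the noise_distribution sketch above; the grayscale version would perturb gray levels instead of flipping bits.

```python
import numpy as np

def simulate_mixed_noise_binary(image, dist_prob, rng=None):
    """Flip each pixel of a binary image with the probability associated with
    its signed FDT distance (dist_prob maps distance -> probability)."""
    rng = np.random.default_rng() if rng is None else rng
    signed = np.where(image > 0, fdt_d8(image), -fdt_d8(1 - image))
    flip = np.zeros(image.shape, dtype=bool)
    for d, p in dist_prob.items():
        flip |= (signed == d) & (rng.random(image.shape) < p)
    return np.where(flip, 1 - image, image)  # mixed noise: 0 -> 1 is additive, 1 -> 0 is subtractive
```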
Definition 4.
The probability that a point $p \in P$ is affected by noise r, given that its distance measure is $\tau(p) = i$, is expressed as:
$$P_r(p \mid \tau(p) = i)$$
where $\tau(p)$ represents a particular distance, taken from the FDT, affected by the noise obtained from $\psi(\tau(f))$.
Lemma 1.
$P_r\big( (p \mid \tau(p) = d_1) \cap (p \mid \tau(p) = d_2) \big) = 0$ if $d_1 \neq d_2$.
Proof. 
By contradiction: suppose that $P_r\big( (p \mid \tau(p) = d_1) \cap (p \mid \tau(p) = d_2) \big) \neq 0$; then there is a noise event at p with $\tau(p) = d_1$ and $\tau(p) = d_2$, but τ is a measure, therefore it is a mapping and cannot take two different values. □
Corollary 1.
$P_r(p \mid \tau(p) = d_1)$ and $P_r(p \mid \tau(p) = d_2)$ are independent events.
Proof. 
Direct consequence of Lemma 1. Since τ is a distance measure, the probability that a noise event at p affects this distance is unique; the only way to affect a different distance is through another noise event. Therefore, $P_r(p \mid \tau(p) = d_1)$ is independent of $P_r(p \mid \tau(p) = d_2)$. □
Corollary 2.
$P_r\big( (p \mid \tau(p) = d_1) \cup (p \mid \tau(p) = d_2) \big) = P_r(p \mid \tau(p) = d_1) + P_r(p \mid \tau(p) = d_2)$.
Proof. 
Corollary 1 showed that $(p \mid \tau(p) = d_1)$ is an event independent of $(p \mid \tau(p) = d_2)$; that is, the probabilities that a noise event at p affects two different distances at different times are different. This indicates that the probability of the union of the two events is the sum of both probabilities; therefore,
$$P_r\big( (p \mid \tau(p) = d_1) \cup (p \mid \tau(p) = d_2) \big) = P_r(p \mid \tau(p) = d_1) + P_r(p \mid \tau(p) = d_2). \qquad \square$$
Lemma 2.
$P_r\!\left( \bigcup_{d=d_1}^{d_2} (p \mid \tau(p) = d) \right) = \sum_{d=d_1}^{d_2} P_r(p \mid \tau(p) = d)$.
Proof. 
By Lemma 1 and Corollary 2 we have:
$$P_r\!\left( \bigcup_{d=d_1}^{d_2} (p \mid \tau(p) = d) \right) = \sum_{d=d_1}^{d_2} P_r(p \mid \tau(p) = d). \qquad \square$$
Theorem 1.
$P_r(p \mid d_1 \le \tau(p) \le d_2) = \sum_{d=d_1}^{d_2} P_r(p \mid \tau(p) = d)$.
Proof. 
$P_r(p \mid d_1 \le \tau(p) \le d_2) = P_r\!\left( \bigcup_{d=d_1}^{d_2} (p \mid \tau(p) = d) \right)$.
Then, by Lemma 2, we have: $P_r(p \mid d_1 \le \tau(p) \le d_2) = \sum_{d=d_1}^{d_2} P_r(p \mid \tau(p) = d)$. □
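As a concrete instance (using the figures the authors report in Section 3.2 for the binary-image distribution of Table 3, roughly 30% of the noise at distance 1 and 7% at distance 2), Theorem 1 gives
$$P_r(p \mid 1 \le \tau(p) \le 2) = P_r(p \mid \tau(p) = 1) + P_r(p \mid \tau(p) = 2) \approx 0.30 + 0.07 = 0.37,$$
which is precisely the 37% figure used later to bound the expected recall rate.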
Corollary 3.
$P_r(p \mid d_1 \le \tau(p) \le d_1) = \sum_{d=d_1}^{d_1} P_r(p \mid \tau(p) = d) = P_r(p \mid \tau(p) = d_1)$
Proof. 
Direct consequence of Theorem 1 with $d_2 = d_1$. □
Corollary 4.
$\overline{P_r}(p \mid d_1 \le \tau(p) \le d_2) = 1 - \sum_{d=d_1}^{d_2} P_r(p \mid \tau(p) = d)$, where $\overline{P_r}$ refers to the probability complementary to $P_r$.
Proof. 
$$1 = \sum_{d=-\infty}^{\infty} P_r(p \mid \tau(p) = d) = \sum_{d=-\infty}^{d_1 - 1} P_r(p \mid \tau(p) = d) + \sum_{d=d_1}^{d_2} P_r(p \mid \tau(p) = d) + \sum_{d=d_2+1}^{\infty} P_r(p \mid \tau(p) = d)$$
$$1 = P_r(p \mid \tau(p) < d_1 \,\vee\, \tau(p) > d_2) + P_r(p \mid d_1 \le \tau(p) \le d_2)$$
therefore: $\overline{P_r}(p \mid d_1 \le \tau(p) \le d_2) = 1 - \sum_{d=d_1}^{d_2} P_r(p \mid \tau(p) = d)$. □
Lemma 3.
$P_r(p \mid r \text{ is additive}) = \sum_{d=-\infty}^{-1} P_r(p \mid \tau(p) = d)$
Proof. 
By definition, additive noise exists in the complement of the region; therefore $\tau(p) < 0$ and
$$P_r(p \mid r \text{ is additive}) = P_r\!\left( \bigcup_{d=-\infty}^{-1} (p \mid \tau(p) = d) \right) = \sum_{d=-\infty}^{-1} P_r(p \mid \tau(p) = d). \qquad \square$$
Lemma 4.
$P_r(p \mid r \text{ is subtractive}) = \sum_{d=1}^{\infty} P_r(p \mid \tau(p) = d)$.
Proof. 
By definition, subtractive noise exists in the region; therefore $\tau(p) > 0$ and
$$P_r(p \mid r \text{ is subtractive}) = P_r\!\left( \bigcup_{d=1}^{\infty} (p \mid \tau(p) = d) \right) = \sum_{d=1}^{\infty} P_r(p \mid \tau(p) = d). \qquad \square$$

2.2. Optimal Kernel Based on FDT

Given a function ψ(τ) that represents the acquisition noise distribution, and assuming the noise is distributed from the edges towards their surroundings, Theorem 1 and Corollary 3 show that it is possible to find a range of distances, from $d_1$ to $d_2$, of that distribution over which the probability that the noise affects this region is high. The kernel is constructed from the hypothesis that noise is distributed along the edges, and it is enough to perform erosions that eliminate the range between the distances $d_1$ and $d_2$ obtained from Theorem 1 while preserving the remaining distances. After performing the erosions, the probable remaining noise can be seen as additive noise (if the kernel is built with the distances of the complement of the region), and for this type of noise the max memories are robust; if the kernel distances left over after erosion are those of the complement, then the noise is subtractive, and for this type of noise the min memories are the robust ones. It is assumed that, when dealing with the complement of the region that will make up the kernel, singular features are preserved so that it remains distinguishable from other patterns, thus reducing the risk that one or more patterns become a subset or superset of others.
Remark 1.
The term erosion used in this paper does not refer specifically to the erosion operation defined in mathematical morphology; it also refers to the arithmetic subtraction performed between two gray levels of an image.
Definition 5.
Given a function ψ(τ) and the distances $d_1$ (the distance likely to be affected by noise in the complement of the region) and $d_2$ (the distance likely to be affected by noise in the region) that satisfy $P_r(p \mid d_1 \le \tau(p) \le d_2)$, the optimal binary kernel is built as follows:
1. 
Erode up to distance d 2 of δ 1 .
2. 
Binarize the eroded δ 1 .
3. 
Obtain the complement of the eroded image from step 2.
Definition 6.
Given a function ψ(τ) and the distances $d_1$ and $d_2$ chosen to satisfy $P_r(p \mid d_1 \le \tau(p) \le d_2)$, the optimal grayscale kernel is built as follows (a sketch of both constructions is given after this definition):
1. 
Erode the image.
2. 
Obtain the complement of the eroded image.
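The following hypothetical sketch renders Definitions 5 and 6 in Python, reusing fdt_d8 from Section 1.4 and assuming binary images in {0, 1} and grayscale images in [0, 255]; in the grayscale case, the 'erosion' of Remark 1 is interpreted here as subtracting $d_2$ gray levels, which is an assumption rather than the authors' exact procedure.

```python
import numpy as np

def binary_kernel(image, d2):
    """Definition 5: erode the region up to distance d2 (keep only pixels whose
    FDT distance exceeds d2), binarize, and take the complement."""
    delta1 = fdt_d8(image)              # positive distances inside the region
    eroded = (delta1 > d2).astype(int)  # steps 1-2: erode up to d2 and binarize
    return 1 - eroded                   # step 3: complement of the eroded image

def grayscale_kernel(image, d2):
    """Definition 6 under the stated assumption: gray-level 'erosion' by d2,
    clipped at 0, followed by the complement."""
    eroded = np.clip(image.astype(int) - d2, 0, 255)
    return 255 - eroded
```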
Based on Theorem 1, Corollaries 1–4 and Lemmas 1–4, together with the acquisition noise distribution function ψ(τ), it is possible to propose a generic model of heteroassociative memory in minmax algebra that is robust to mixed noise.
Remark 2.
Since morphological and αβ memories use minimum and maximum operators in their learning and recall phases, morphological memories will be taken as the basis for the demonstrations needed to create the generic model of min heteroassociative memory robust to mixed noise.
The new generic model is defined as follows:
Let A be a matrix $[a_{ij}]_{m \times r}$ and B a matrix $[b_{ij}]_{r \times n}$ whose entries are integers.
Definition 7.
The maximum product of A and B, denoted by $C = A \boxtimes_{\vee} B$, is the matrix $[c_{ij}]_{m \times n}$ whose ij-th component $c_{ij}$ is defined as:
$$c_{ij} = \bigvee_{k=1}^{r} \left( a_{ik} + b_{kj} \right)$$

2.2.1. Learning Phase

The heteroassociative memory in min algebra used in the learning phase is constructed as follows:
$$W = \bigwedge_{\varrho=1}^{p} \left[ y^{\varrho} \boxtimes_{\wedge} (-x^{\varrho})^t \right] = [w_{ij}]_{m \times n}$$
$$w_{ij} = \bigwedge_{\varrho=1}^{p} \left( y_i^{\varrho} - x_j^{\varrho} \right)$$

2.2.2. Recall Phase

The recall phase consists of applying the maximum product of Definition 7 between the min memory W and an input pattern $x^{\vartheta}$, with $\vartheta \in \{1, 2, \ldots, p\}$, in order to obtain a column vector of dimension m:
$$y = W \boxtimes_{\vee} x^{\vartheta}$$
where the i-th component of the vector y is:
$$y_i = \bigvee_{j=1}^{n} \left( w_{ij} + x_j^{\vartheta} \right)$$
Remark 3.
Typically, min memories are robust to subtractive noise while max memories are robust to additive noise, but both are inoperative when it comes to mixed noise; for this reason, the kernel model arose as an effort to solve the mixed noise problem. The demonstration of this fact is omitted here since it is well detailed in [16,45]. We focus only on demonstrating that, through the generation of a kernel based on the noise distribution, one can create a min heteroassociative memory that is robust to mixed noise, which is the original contribution of this paper.
Theorem 2.
Let $\tilde{x}^{\vartheta}$, $\vartheta = 1, \ldots, k$, be the distorted version of the pattern $x^{\vartheta}$. Then $W \boxtimes_{\vee} \tilde{x}^{\vartheta} = y^{\vartheta}$ holds if and only if
$$\tilde{x}_j^{\vartheta} \le x_j^{\vartheta} \qquad \forall j = 1, \ldots, n \qquad (27)$$
and, for each row index $i \in \{1, \ldots, m\}$, there exists a column index $j_i \in \{1, \ldots, n\}$ such that:
$$\tilde{x}_{j_i}^{\vartheta} = x_{j_i}^{\vartheta} \vee \Big( \bigvee_{\varrho \neq \vartheta} \big[\, y_i^{\vartheta} - y_i^{\varrho} + x_{j_i}^{\varrho} \,\big] \Big) \qquad (28)$$
Proof. 
Suppose that $\tilde{x}^{\vartheta}$ denotes the distorted version of $x^{\vartheta}$, $\vartheta = 1, \ldots, k$, and that $W \boxtimes_{\vee} \tilde{x}^{\vartheta} = y^{\vartheta}$; then
$$y_i^{\vartheta} = \left( W \boxtimes_{\vee} \tilde{x}^{\vartheta} \right)_i = \bigvee_{l=1}^{n} \left( w_{il} + \tilde{x}_l^{\vartheta} \right) \ge w_{ij} + \tilde{x}_j^{\vartheta} \qquad \forall i = 1, \ldots, m \text{ and } \forall j = 1, \ldots, n;$$
thus,
$$\begin{aligned}
\tilde{x}_j^{\vartheta} &\le y_i^{\vartheta} - w_{ij} \qquad \forall i = 1, \ldots, m \text{ and } \forall j = 1, \ldots, n \\
\Rightarrow\ \tilde{x}_j^{\vartheta} &\le \bigwedge_{i=1}^{m} \left( y_i^{\vartheta} - w_{ij} \right) \qquad \forall j = 1, \ldots, n \\
\Rightarrow\ \tilde{x}_j^{\vartheta} &\le \bigwedge_{i=1}^{m} \Big[\, y_i^{\vartheta} - \bigwedge_{\varrho=1}^{k} \left( y_i^{\varrho} - x_j^{\varrho} \right) \Big] \qquad \forall j = 1, \ldots, n \\
\Rightarrow\ \tilde{x}_j^{\vartheta} &\le \bigwedge_{i=1}^{m} \Big[\, y_i^{\vartheta} + \bigvee_{\varrho=1}^{k} \left( x_j^{\varrho} - y_i^{\varrho} \right) \Big] \qquad \forall j = 1, \ldots, n \\
\Rightarrow\ \tilde{x}_j^{\vartheta} &\le \bigwedge_{i=1}^{m} \Big[\, y_i^{\vartheta} + \Big( \bigvee_{\varrho \neq \vartheta} \left( x_j^{\varrho} - y_i^{\varrho} \right) \Big) \vee \left( x_j^{\vartheta} - y_i^{\vartheta} \right) \Big] \qquad \forall j = 1, \ldots, n \\
\Rightarrow\ \tilde{x}_j^{\vartheta} &\le \bigwedge_{i=1}^{m} \Big[ \Big( \bigvee_{\varrho \neq \vartheta} \left( y_i^{\vartheta} - y_i^{\varrho} + x_j^{\varrho} \right) \Big) \vee x_j^{\vartheta} \Big] \qquad \forall j = 1, \ldots, n
\end{aligned}$$
This shows that the inequality in (27) is sufficient for $\tilde{x}_j^{\vartheta}$ to be recovered; then,
$$\tilde{x}_j^{\vartheta} \le x_j^{\vartheta} \vee \Big[ \bigvee_{\varrho \neq \vartheta} \left( y_i^{\vartheta} - y_i^{\varrho} + x_j^{\varrho} \right) \Big] \qquad \forall j = 1, \ldots, n \text{ and } \forall i = 1, \ldots, m.$$
Now, suppose that the expression above does not attain equality for any column index when $i = 1, \ldots, m$; that is, assume there exist row indices $i \in \{1, \ldots, m\}$ such that:
$$\tilde{x}_j^{\vartheta} < x_j^{\vartheta} \vee \bigvee_{\varrho \neq \vartheta} \left( y_i^{\vartheta} - y_i^{\varrho} + x_j^{\varrho} \right) \qquad \forall j = 1, \ldots, n;$$
then
$$\left( W \boxtimes_{\vee} \tilde{x}^{\vartheta} \right)_i = \bigvee_{j=1}^{n} \left( w_{ij} + \tilde{x}_j^{\vartheta} \right) < \bigvee_{j=1}^{n} \left( w_{ij} + x_j^{\vartheta} \vee \bigvee_{\varrho \neq \vartheta} \left[ y_i^{\vartheta} - y_i^{\varrho} + x_j^{\varrho} \right] \right) = \bigvee_{j=1}^{n} \left( w_{ij} + \bigvee_{\varrho=1}^{k} \left[ y_i^{\vartheta} - y_i^{\varrho} + x_j^{\varrho} \right] \right) = \bigvee_{j=1}^{n} \left[ w_{ij} + y_i^{\vartheta} - \bigwedge_{\varrho=1}^{k} \left( y_i^{\varrho} - x_j^{\varrho} \right) \right] = \bigvee_{j=1}^{n} \left[ w_{ij} + y_i^{\vartheta} - w_{ij} \right] = y_i^{\vartheta}.$$
Thus, $\left( W \boxtimes_{\vee} \tilde{x}^{\vartheta} \right)_i < y_i^{\vartheta}$, which contradicts the hypothesis that $W \boxtimes_{\vee} \tilde{x}^{\vartheta} = y^{\vartheta}$. This indicates that for each row index there must be a column index $j_i$ that satisfies (28).
The converse will now be proved. Suppose that
$$\tilde{x}_j^{\vartheta} \le x_j^{\vartheta} \vee \bigwedge_{i=1}^{m} \bigvee_{\varrho \neq \vartheta} \left[ y_i^{\vartheta} - y_i^{\varrho} + x_j^{\varrho} \right] \qquad \forall j = 1, \ldots, n;$$
by the first part of the proof, this inequality holds if and only if
$$\tilde{x}_j^{\vartheta} \le y_i^{\vartheta} - w_{ij} \qquad \forall i = 1, \ldots, m \text{ and } \forall j = 1, \ldots, n,$$
or, equivalently, if and only if
$$w_{ij} + \tilde{x}_j^{\vartheta} \le y_i^{\vartheta} \qquad \forall i = 1, \ldots, m \text{ and } \forall j = 1, \ldots, n,$$
$$\bigvee_{j=1}^{n} \left( w_{ij} + \tilde{x}_j^{\vartheta} \right) \le y_i^{\vartheta} \quad \forall i = 1, \ldots, m \quad\Longrightarrow\quad \left( W_{XY} \boxtimes_{\vee} \tilde{x}^{\vartheta} \right)_i \le y_i^{\vartheta} \quad \forall i = 1, \ldots, m;$$
this implies that $W_{XY} \boxtimes_{\vee} \tilde{x}^{\vartheta} \le y^{\vartheta}\ \forall \vartheta = 1, \ldots, k$. Therefore, if it is proven that $W_{XY} \boxtimes_{\vee} \tilde{x}^{\vartheta} \ge y^{\vartheta}\ \forall \vartheta = 1, \ldots, k$, then $W_{XY} \boxtimes_{\vee} \tilde{x}^{\vartheta} = y^{\vartheta}\ \forall \vartheta$. Now, let $\vartheta \in \{1, \ldots, k\}$ and $i \in \{1, \ldots, m\}$ be arbitrarily chosen; then
$$\left( W_{XY} \boxtimes_{\vee} \tilde{x}^{\vartheta} \right)_i = \bigvee_{j=1}^{n} \left( w_{ij} + \tilde{x}_j^{\vartheta} \right) \ge w_{i j_i} + \tilde{x}_{j_i}^{\vartheta} = w_{i j_i} + x_{j_i}^{\vartheta} \vee \bigvee_{\varrho \neq \vartheta} \left[ y_i^{\vartheta} - y_i^{\varrho} + x_{j_i}^{\varrho} \right] = w_{i j_i} + \bigvee_{\varrho=1}^{k} \left[ y_i^{\vartheta} - y_i^{\varrho} + x_{j_i}^{\varrho} \right] = w_{i j_i} + y_i^{\vartheta} - \bigwedge_{\varrho=1}^{k} \left( y_i^{\varrho} - x_{j_i}^{\varrho} \right) = w_{i j_i} + y_i^{\vartheta} - w_{i j_i} = y_i^{\vartheta}.$$
This shows that $W_{XY} \boxtimes_{\vee} \tilde{x}^{\vartheta} \ge y^{\vartheta}$. □
Remark 4.
Expression (27) shows that the new min heteroassociative memory model is robust to mixed noise and is directly related to the acquisition noise.
Theorem 3.
The new min heteroassociative memory model is robust to mixed noise, parameterized by d within ψ(τ), and it holds that $E(d) \ge 1 - \sum_{i=d_1}^{d} P_r(p \mid \tau(p) = i)$ for $d > d_1$, where E(d) is the probability of success in the complete recall of patterns altered with mixed noise.
Proof. 
Lemma 4 shows that the subtractive noise is located on the positive side of the ψ(τ) curve, which is expressed as $P_r(p \mid r \text{ is subtractive}) = \sum_{d=1}^{\infty} P_r(p \mid \tau(p) = d)$. It has been determined that the noise is distributed across the edges. By performing the erosion of the pattern and taking the complement, the model becomes robust to mixed noise from a distance $d_1$, where $d_1 < 0$. Thus, the probability of success in the recall of patterns affected by mixed noise in this new model of associative memory W is expressed as $1 - \sum_{i=d_1}^{d} P_r(p \mid \tau(p) = i)$, where $d_1 < 0$. On the other hand, Theorem 2 showed that a sufficient condition for pattern recovery is that $\tilde{x}_j^{\vartheta} \le x_j^{\vartheta}\ \forall j = 1, \ldots, n$, and expression (28) guarantees that for each row index i there is a column index $j_i$ such that complete recall occurs; this implies that the heteroassociative memory model W is robust to high percentages of mixed noise. Therefore,
$$E(d) \ge 1 - \sum_{i=d_1}^{d} P_r(p \mid \tau(p) = i) \qquad \text{for } d > d_1. \qquad \square$$
Corollary 5.
The probability of full recall of the new heteroassociative memory W is 0 if and only if, when parameterizing by d, $\sum_{i=d_1}^{d} P_r(p \mid \tau(p) = i) = 1$ holds.
Proof. 
Direct consequence of Theorem 3: given that $E(d) \ge 1 - \sum_{i=d_1}^{d} P_r(p \mid \tau(p) = i)$, the complement of E(d) is expressed as $\sum_{i=d_1}^{d} P_r(p \mid \tau(p) = i)$; if this sum equals 1, then there is a 100% probability that the memory will fail. □
Corollary 6.
The new heteroassociative memory model W under mixed noise may fail in full pattern recall if the noise is sufficient to turn the pattern $X^{\vartheta}$ into a subset of another pattern $X^{\gamma}$, where $\vartheta \neq \gamma$.
Proof. 
Direct consequence of not complying with expression (28). □
Corollary 6 is of utmost importance, since it is enough that one row index i of the memory W lacks a column index $j_i$ satisfying (28) for the recall to be incomplete. This implies that, in case $\tilde{y}^{\varrho} \subseteq X^{\vartheta} \subseteq X^{\gamma}$, where $\vartheta \neq \gamma$, is not fulfilled, the pattern shall contain subtractive noise.

2.2.3. New Generic Model of Min Heteroassociative Memories Robust to Mixed Noise

Given an acquisition noise distribution function ψ(τ), in which the noise is concentrated at distances close to 0, i.e., around the edges, and taking Theorem 3 as a reference, we now propose the new model of min heteroassociative memory that is robust to mixed noise.
Learning phase.
1.
Obtain $Z \subseteq X$ by means of ψ(τ) and Theorem 3.
2.
Obtain the Z complement ($Z^c$).
3.
Obtain the Y complement ($Y^c$).
4.
Perform the learning process with $W_{Z^c Y^c}$.
Figure 10 shows the learning process of the new model of min heteroassociative memory.
Recall phase.
1.
Obtain the $\tilde{X}$ complement ($\tilde{X}^c$).
2.
Perform the recall process with the memory $W_{Z^c Y^c}$.
3.
Obtain the $Y^c$ complement.
Figure 11 graphically shows the recall process of the new min heteroassociative memory model that is robust to mixed noise.
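Putting the pieces together, the following hypothetical end-to-end sketch implements the learning and recall phases above for binary patterns flattened to vectors; it reuses binary_kernel from the sketch after Definition 6 and assumes X_images and Y_images are lists of binary NumPy arrays forming the fundamental set.

```python
import numpy as np

def learn_new_model(X_images, Y_images, d2):
    """Learning phase: build the kernel from each X via the FDT (Definition 5 output,
    used here as the Z complement), complement Y, and learn the min memory W_{Z^c Y^c}."""
    Zc = np.stack([binary_kernel(x, d2).ravel() for x in X_images])  # Z complements, p x n
    Yc = np.stack([1 - y.ravel() for y in Y_images])                 # Y complements, p x m
    return np.min(Yc[:, :, None] - Zc[:, None, :], axis=0)          # w_ij = min_rho (y_i - z_j)

def recall_new_model(W, x_noisy):
    """Recall phase: complement the noisy input, apply the maximum product with
    W_{Z^c Y^c}, and complement the result to obtain the output pattern Y."""
    xc = 1 - x_noisy.ravel()
    yc = np.max(W + xc[None, :], axis=1)
    return 1 - yc
```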

3. Results

In this section we show the results of the acquisition noise distribution, the acquisition noise simulation algorithms for binary and grayscale images, and finally the behavior of the new min heteroassociative memory model for morphological and αβ memories.

3.1. Acquisition Noise Distribution

3.1.1. Acquisition Noise Distribution in Binary Images

The final acquisition noise distribution function is obtained by averaging the 40 noise distributions obtained from the process described in Section 2.1. Table 3 shows the distribution of acquisition noise in binary images.
The negative distances shown in Figure 12 represent the subtractive noise while the positive distances make up the additive noise. This indicates that additive noise occurs with greater intensity than the subtractive noise introduced by acquisition devices. Furthermore, it can be observed that the noise occurs at the edges and that the greater the distance, the lower the noise effect. The highest noise concentration is found at distances 1 and −1, and approximately 50% of the noise is concentrated at the edges.
Table 3 has three columns. The Distances column indicates that the noise is distributed from distance −20 to distance 20. The Absolute frequency column indicates the number of pixels affected at each distance in the range −20 to 20. The Relative frequency column indicates the probability that noise affects that distance. The Relative frequency column sums to 1, accounting for 100% of the noise affecting the binary image.
According to Table 3 and Figure 12, one can observe that the acquisition noise arises and is distributed along the edge and, moving away from the edge, it diminishes in a structured way.
Using the per-distance noise probabilities of Table 3, it is possible to simulate the acquisition noise in binary images; the result is shown in Figure 13. One can see that the noise is distributed at the edges.
Acquisition noise grows and is distributed proportionally to the size of the image; in scaled images, noise appears and is distributed in the same way but at a different scale, preserving the relationship between growth and distribution. To confirm this, we experimented with 120 × 120 versions of the images used in this article, which showed the same noise distribution but affected fewer distances.

3.1.2. Acquisition Noise Distribution in Grayscale Images

By applying Algorithm 5, the acquisition noise distribution in grayscale images was generated. This distribution is presented numerically in Table A1 and Table A2 and graphically in Figure 14. Table A1 and Table A2 have three columns: the Distances column, which represents the distances affected by the acquisition noise; the Frequency column, which represents the number of pixels per distance affected by the noise; and the Probability column, which represents the probability that noise occurs at the corresponding distance and whose sum is 1, accounting for 100% of the noise affecting the grayscale image.
The final noise distribution in grayscale images is the result of averaging the 40 noise distributions generated by the process described in Section 2.1. One can see that the noise is distributed from distance −199 to distance 199 and that there is no distance 0.
Remark 5.
Additive noise for binary and grayscale images is located in negative distances of graphs and tables while subtractive noise is located in positive distances.
Table A1 and Table A2 define the distribution function of acquisition noise in grayscale images and by using these two tables in Algorithm 6, one can simulate this type of noise.
Figure 15 compares the noise distribution of the scanned image against that of the image with simulated noise. One can see that the distributions are very similar. The difference between these two images is that the image with simulated noise was generated having the same distribution as the one proposed in this article while the scanned image is just an image that was scanned once. Another difference is that the scanned image has geometric noise, that is, the noise in the form of a texture that was added by the acquisition device during the image scanning. It should be noted that each acquisition device has its own geometric noise which is different from other acquisition devices. However, the simulated acquisition noise distribution allows us to ensure that the acquisition noise is very close to the original.
Given these acquisition noise simulation results, the images used as patterns with the new min heteroassociative memory model will be altered with the acquisition noise simulation algorithms proposed in this article.

3.2. New Model of Min Heteroassociative Memory

This section presents the results of experiments conducted in order to show the effectiveness of the new min heteroassociative memory model. The experiments were carried out on a Dell XPS 8700 with Intel Core i7 processor and 16 Gigabytes of RAM. The images used for memory learning and retrieval were both binary and grayscale. The size of the images was 50 × 50 , 80 × 80 and 120 × 120 .
The process of learning and recalling patterns consisted of first learning the whole fundamental set with a kernel formed from distance i; once the set was learned, each pattern was chosen in turn and recalled 1000 times, yielding the percentage of effectiveness in complete pattern recall.
The construction of a kernel with sufficient conditions for pattern recall is based on the probability that noise is distributed around the edges, that is, the more we move away from distance 0, be it towards the left or right, the better the kernel. As mentioned earlier in Section 3.1.1, the acquisition noise grows and distributes proportionally according to the size of the image; and in scaled images, noise is generally presented and distributed in the same way, but at different scales, keeping the relationship of growth against distribution.
The main feature of the original kernel model is that the kernel must be free from noise so as to have a full recall. For this reason, our kernel construction was based upon the acquisition noise distribution proposed in this article.
Acquisition noise grows or decreases depending on the size of the image; to corroborate this, binary images were scaled from 420 × 420 to 50 × 50, that is, a reduction of 88%. When performing the process described in Section 2.1, it was observed that the distances affected by noise in the 50 × 50 images reached, on average, up to distance 2. Based on the above, to calculate the distances affected by the simulated noise in the images used in this experiment, the operation $d_f = \frac{t_2}{t_1} \times d_{max}$ was defined, where $d_f$ is the final distance that could be affected by noise, $t_1$ is the size of the original image, $t_2$ is the size of the scaled image, and $d_{max}$ is the maximum distance in the noise distribution table (for binary images it corresponds to the maximum distance of Table 3, while for grayscale images it is the maximum distance of Table A2). To illustrate this, $d_f = \frac{50}{420} \times 20 = 0.12 \times 20 = 2.4$; therefore, one can conclude that, in this image, noise will have no effect from distance 3 onward. Taking $d_f$ as a reference, the images used in this experiment will probably no longer be affected by acquisition noise from the distances shown in Table 4.
Experiment characteristics:
  • 6 fundamental sets, 3 with binary images and 3 with grayscale images. Figure 16 shows the appearance of the fundamental sets.
  • The images of fundamental set 1 and 2 are of size 50 × 50 , those of set 3 and 4 are 80 × 80 , while those of set 5 and 6 are 120 × 120 .
  • Table 4 shows how far away the kernel will be created.
  • 1000 recall processes per fundamental set.
The processes of building the kernels are shown in Figure 17 and Figure 18. The morphological erosion used in the kernel construction is shown in Figure 18.
Applying the new model of min heteroassociative memory described in Section 2.2.3 to the 6 fundamental sets illustrated above yields the results shown in Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10.
Table 5 shows the performance of fundamental set 1 of Figure 16. Fundamental set 1 is composed of binary images of size 50 × 50 . It is noted that, if the patterns do not contain noise, both the original kernel model and the proposed new model have a 100% recall, that is, the two models are efficient for noiseless patterns. The original kernel model is efficient from distance 3 and this makes sense because by removing the first distances from the FDT, the kernel ends up being built outside the region that affects the acquisition noise, and for this same reason the new model proves to be efficient as well. However, the proposed new model has full recalls from distance 1 even though the kernel may be affected by acquisition noise. The question is why, given the fact that the kernel is supposed to be noiseless. The answer is as follows:
It was determined that the distances affected by noise in 50 × 50 binary images are 1 and 2, and Table 3 shows that the sum of the probabilities of noise affecting these two distances is 37%. The new model proposes that the min heteroassociative memory is able to recover patterns affected by mixed noise based on a noise distribution function; therefore, for 50 × 50 image patterns, one should expect at least a 63% effectiveness in full pattern recall. The percentages in Table 5 represent how many times each pattern was fully recalled out of 1000 attempts. For the new model at distance 1, the recovery percentage per pattern is greater than 63%, so one can ensure that the new model recalls patterns where the original model could not. In the case of distance 2, since distance 1 was eliminated in the kernel building and, according to Table 3, the probability of acquisition noise affecting distance 2 is 7%, the new model expects the memory to recall patterns at least 93% of the time, and Table 5 proves this to be true.
Table 6 shows the performance of fundamental set 2 of Figure 16. Fundamental set 2 is composed of grayscale images of size 50 × 50. According to Table A2, the sum of the probabilities that the noise affects up to distance 20 is about 14%; thus, the probability that noise has no effect is about 86%. These probability percentages are significant because they corroborate that the proposed new model works as expected, as proven in Table 6. The proposed model's performance in pattern recall is greater than 86% from distance 18 and reaches 100% from distance 22. The original kernel model recalls from distance 24 onward. This shows that the proposed new model is more efficient than the original model.
Table 7 shows the performance of fundamental set 3 of Figure 16. Fundamental set 3 is composed of binary images of size 80 × 80. One can see in Table 7 that the original kernel model has a performance of 100% from distance 4, and a performance of 0% before this distance. The proposed model has a performance greater than 69.90% in pattern recall from distance 1 and 100% recall from distance 3. These results show that the new model is consistent, that is, it is still more efficient than the original model.
Table 8 shows the performance of fundamental set 4 of Figure 16. Fundamental set 4 is composed of grayscale images of size 80 × 80 . The images of fundamental set 4 are bigger than those of fundamental set 2, so the noise is higher and it affects more distances. For this reason, the original model has 100 % performance from distance 38 onward. Obviously, if the kernel is noiseless, both models have 100 % efficiency. The new model has complete recall percentages greater than 75 % from distance 10, and from distance 30, its complete recall is 100 % .
Table 9 shows the performance of fundamental set 5 of Figure 16. Fundamental set 5 is composed of binary images of size 120 × 120 . Table 9 shows the same behavior in patterns formed with binary images, and the bigger the size of the images, the greater the noise. As for the original model, it can recall 100 % of the pattern if the kernel has no noise and this happens from distance 6. The new model has a performance greater than 77 % from distance 1, and it can fully recall all learned patterns starting from distance 3.
Table 10 shows the performance of fundamental set 6 of Figure 16. Fundamental set 6 is composed of grayscale images of size 120 × 120. Table 10 confirms what was observed in Table 9, in this case for fundamental set 6, which is made up of grayscale images. The original model recalls completely from distance 56, while the new model recalls from distance 10 with a performance greater than 77%. Again, this proves that the new model performs better than the original model.
We must provide high certainty that the proposed kernel is a subset of one and only one set for the recall phase to be complete. If the proposed kernel is made up of few elements, then the probability that the kernel is a subset of other sets is high, and as a result, the associative memory will fail. The more patterns the associative memory learns, the greater the probability that the kernel is a subset of another pattern. That is why the proposed new model is so important. The proposed model uses elements from those patterns that are certainly affected by noise, and it does not delete them as the original model does, however, there are chances of failing some pattern recoveries. The results show that the new model preserves distances deleted by the original model, with an efficiency greater than 70% in those distances where the original model is totally inoperative.

4. Discussion

For an associative memory to work as designed, it must be ensured that noise does not affect its operation. Knowing how the acquisition noise presents itself allows associative memories to treat it appropriately. Associative memories in minmax algebra guarantee that, if the conditions of their design are met, they have an infinite capacity to learn and recall.
This article showed that the acquisition noise has a Gaussian distribution where the maximum point is near distance 0, and these distances were obtained through a distance transform. Moreover, the probability that noise can affect these distances was obtained. These results allowed kernels to be generated with sufficient conditions for the original and new model to completely recall patterns affected by mixed noise.
A new model of min heteroassociative memory that is robust to mixed noise was also proposed. We affirm that the proposed new model is better than the original kernel model for two reasons. First, as shown in Figure 3 and Figure 4, the original kernel model involves a min memory and a max memory for the learning and recall phases, and it does not show how to obtain the kernel Z. Figure 10 shows that the new min heteroassociative memory model consists of only one memory, the min memory, associated with a function that determines the distance from which Z is formed, thus ensuring full pattern recalls. One can conclude that the proposed new model is faster in its execution and proposes conditions that also satisfy the original model. Second, the original kernel model is deterministic, hence it fixes where it will fail and where it will be efficient. The proposed new model is probabilistic, which gives it the advantage of completely recalling patterns where the original model cannot; and in those cases where the original model recalls completely, the new model also guarantees 100% complete recall. As one can see, the new model surpasses the original model.

5. Conclusions

In this paper, it was shown that where there is information there is acquisition noise, and that this noise grows and is distributed in a structured way. It was possible to associate the distribution of the acquisition noise with a distance transform so as to form kernels that satisfy sufficient and necessary conditions for the associative memories in minmax algebra to fully recall the learned patterns. Taking the above as a reference, the bases were laid for a new model of min associative memory that is robust to mixed noise, i.e., robust to acquisition noise, and this model proved to surpass the original model.

Author Contributions

Conceptualization, J.C.S.-R. and J.L.D.-d.-L.-S.; validation, E.A.C.-C., A.J.R.-S. and E.R.-D.; formal analysis, J.C.S.-R., J.M.V.K. and J.L.D.-d.-L.-S.; writing—original draft preparation, J.C.S.-R.; writing—review and editing, A.J.R.-S., E.R.-D., J.M.V.K., E.A.C.-C. and J.C.S.-R.; visualization, E.A.C.-C., A.J.R.-S. and E.R.-D.; supervision, J.C.S.-R. and J.M.V.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We thank the Universidad Politécnica de Pachuca, Tecnológico de Monterrey and CONACYT for the support provided.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Acquisition Noise Distribution Table in Grayscale Images

Table A1. Acquisition noise distribution table in grayscale images.
Distance   Frequency   Probability   Distance   Frequency   Probability
−18930.00000173−942310.001332764
−18830.00000173−932420.001396229
−18720.00000115−922570.001482772
−18630.00000173−912630.001517389
−18540.00000231−902760.001592393
−18450.00000288−893050.00175971
−18360.00000346−883140.001811636
−182120.00000692−873530.002036648
−18160.00000346−863970.002290508
−180160.00000923−853830.002209734
−17990.00000519−844420.002550137
−178210.00012116−835000.002884771
−177180.000103852−824800.00276938
−176200.000115391−814490.002590524
−175300.000173086−805240.00302324
−174200.000115391−795050.002913618
−173230.000132699−785810.003352104
−172230.000132699−775680.0032771
−171250.000144239−766610.003813667
−170180.000103852−756540.00377328
−169170.00000981−746270.003617502
−168220.00012693−737520.004338695
−167150.00000865−727110.004102144
−166140.00000808−717670.004425238
−165190.000109621−707880.004546399
−164180.000103852−698800.005077196
−163190.000109621−688780.005065657
−162170.00000981−678980.005181048
−161210.00012116−669180.005296439
−160130.00000750−659980.005758002
−159100.00000577−649990.005763772
−158130.00000750−6310600.006115714
−15780.00000462−6210710.006179179
−156120.00000692−6110910.00629457
−155150.00000865−6011300.006519582
−154180.000103852−5911980.006911911
−153110.00000635−5812300.007096536
−152190.000109621−5712840.007408091
−151180.000103852−5612650.00729847
−150100.00000577−5512780.007373474
−14970.00000404−5412280.007084997
−148220.00012693−5313930.008036971
−147160.00000923−5213530.00780619
−146200.000115391−5113340.007696568
−145170.00000981−5013810.007967737
−144160.00000923−4913890.008013893
−143240.000138469−4814040.008100436
−142160.00000923−4714630.008440839
−141190.000109621−4614460.008342757
−140230.000132699−4514830.00855623
−139220.00012693−4415270.00881009
−138130.00000750−4315400.008885094
−137280.000161547−4215400.008885094
−136200.000115391−4115440.008908172
−135390.000225012−4015340.008850477
−134360.000207703−3915960.009208188
−133420.000242321−3815830.009133184
−132390.000225012−3715180.008758164
−131430.00024809−3615110.008717777
−130390.000225012−3515480.00893125
−129540.000311555−3416340.009427431
−128400.000230782−3315540.008965867
−127490.000282708−3216120.009300501
−126480.000276938−3115850.009144723
−125540.000311555−3016710.009640904
−124590.000340403−2916730.009652443
−123450.000259629−2817190.009917842
−122540.000311555−2716320.009415892
−121470.000271168−2616080.009277423
−120690.000398098−2516440.009485126
−119510.000294247−2416470.009502435
−118640.000369251−2316550.009548591
−117530.000305786−2215550.008971637
−116590.000340403−2116590.009571669
−115390.000225012−2016000.009231266
−114510.000294247−1916350.0094332
−113600.000346172−1815860.009150493
−112540.000311555−1715600.009000485
−111730.000421177−1616030.009248575
−110850.000490411−1515520.008954328
−109750.000432716−1415110.008717777
−108820.000473102−1314290.008244675
−107900.000519259−1215780.009104336
−1061010.000582724−1115050.00868316
−105910.000525028−1014790.008533152
−1041250.000721193−915040.00867739
−1031080.00062311−814720.008492765
−1021170.000675036−715020.008665851
−1011330.000767349−613660.007881194
−1001500.000865431−513490.007783111
−991510.000871201−413740.00792735
−981550.000894279−314090.008129284
−971840.001061596−212320.007108075
−961830.001055826−114080.008123514
−952280.001315455000
Table A2. Acquisition noise distribution table in grayscale images (continued).
Distance   Frequency   Probability   Distance   Frequency   Probability
112450.007183079951350.000778888
212560.007246544961200.000692345
312740.007350396971370.000790427
412390.007148462981130.000651958
512850.00741386199960.000553876
611970.0069061411001370.000790427
712000.006923451011020.000588493
812090.0069753761021070.000617341
911980.0069119111031160.000669267
1011660.006727285104940.000542337
1111390.006571508105820.000473102
1211030.0063638041061040.000600032
1311260.0064965041071040.000600032
1410640.006138792108830.000478872
1510860.006265722109810.000467333
1610160.005861854110870.00050195
1710230.005902241111810.000467333
189790.005648381112670.000386559
1910050.005798389113820.000473102
2010030.00578685114690.000398098
219110.005256052115640.000369251
229330.005382982116620.000357712
239140.005273361117730.000421177
249120.005261822118690.000398098
258970.005175279119610.000351942
269330.005382982120640.000369251
278670.005002192121560.000323094
288070.00465602122530.000305786
298210.004736794123600.000346172
308220.004742563124520.000300016
317970.004598325125590.000340403
327340.004234843126480.000276938
337270.004194457127380.000219243
347370.004252152128350.000201934
357100.004096374129290.000167317
366890.003975214130390.000225012
376790.003917519131360.000207703
386260.003611733132430.00024809
396250.003605963133290.000167317
406670.003848284134350.000201934
415960.003438647135260.000150008
426080.003507881136270.000155778
435420.003127091137200.000115391
445910.003409799138290.000167317
455520.003184787139200.000115391
465320.003069396140230.000132699
474860.002803997141190.000109621
485260.003034779142180.000103852
494740.002734763143180.000103852
504590.00264822144300.000173086
514700.002711684145260.000150008
524490.002590524146220.00012693
534030.002325125147180.000103852
544130.002382821148160.00000923
554150.0023943614960.00000346
563880.002238582150120.00000692
574140.00238859151100.00000577
583970.002290508152130.00000750
593600.002077035153100.00000577
603460.00199626115480.00000462
613410.001967414155130.00000750
623460.00199626115660.00000346
633470.00200203115790.00000519
643270.0018866415880.00000462
653200.00184625315940.00000231
662710.00156354616070.00000404
672920.00168470616160.00000346
683100.00178855816230.00000173
692660.00153469816340.00000231
702790.00160970216450.00000288
712630.00151738916520.00000115
722900.00167316716620.00000115
732400.0013846916750.00000288
742460.00141930716850.00000288
752230.00128660816910.00000577
762150.00124045117020.00000115
772110.00121737317110.000000577
782150.00124045117230.00000173
791810.00104428717320.00000115
801790.00103274817410.000000577
812220.00128083817520.00000115
821610.00092889617600
831670.00096351317700
841830.00105582617820.00000115
851640.00094620517920.00000115
861540.00088850918000
871610.00092889618110.000000577
881680.00096928318200
891660.00095774418310.000000577
901470.00084812318400
911270.00073273218500
921530.0008827418600
931490.00085966218710.000000577
941440.000830814

Figure 1. Associative memory as black box.
Figure 2. Additive noise, subtractive noise, and mixed noise, respectively.
Figure 3. Kernel model learning phase.
Figure 4. Kernel model recall phase.
Figure 5. d 4 and d 8 metrics for the first step.
Figure 6. d 4 and d 8 metrics for the second step.
Figure 7. Result of the two steps of the FDT.
Figure 8. Appearance of the 7-scan process images.
Figure 9. Noise scheme.
Figure 10. Learning process of the new model of min heteroassociative memories.
Figure 11. Recall process of the new model of min heteroassociative memories.
Figure 12. Absolute and relative frequency distributions of noise acquisition in binary images.
Figure 13. Binary image with simulated acquisition noise.
Figure 14. Process generating the noise distribution function.
Figure 15. Scanned image vs. simulated image noise distributions.
Figure 16. Fundamental sets.
Figure 17. Operations to build binary kernels.
Figure 18. Operations to build grayscale kernels.
Table 1. The binary α operation.
x   y   α(x, y)
0   0   1
0   1   0
1   0   2
1   1   1
Table 2. The binary β operation.
x   y   β(x, y)
0   0   0
0   1   0
1   0   0
1   1   1
2   0   1
2   1   1
Table 3. Acquisition noise distribution table in binary images.
Distance   Frequency   Probability   Distance   Frequency   Probability
−2000.0192920.2084483
−1900.0238260.08582901
−1800.0323010.051618546
−1730.00000729928416490.03699217
−16100.00002243309358300.018619467
−15180.00004037956866190.013886085
−14420.00009421899475350.012001705
−13790.001772214484450.009982727
−121230.002759270493910.008771339
−111620.003634161103820.008569442
−101940.00435202113380.0075823856
−92380.0053390763122880.006460731
−83970.008905938132750.0061691008
−75120.011485743142700.006056935
−65950.013347691152660.0059672026
−58230.018462436161970.0044193193
−411720.02629158517950.0021311438
−317120.03840545618560.0012562532
−232120.07205509418560.0012562532
−113,2040.2962065619240.000053839426
0002020.0000044866185
Table 4. Probable distances where the acquisition noise does not affect.
Binary Image                            Grayscale Image
Original Size   New Size    df          Original Size   New Size    df
420 × 240       50 × 50     3           420 × 240       50 × 50     24
420 × 240       80 × 80     4           420 × 240       80 × 80     38
420 × 240       120 × 120   6           420 × 240       120 × 120   56
Table 5. Performance of the proposed model vs. original model in 50 × 50 binary images.
         Original Model   Morphological         α β
Pattern  d = 3            d = 1     d = 2       d = 1     d = 2
A        100%             77.00%    100%        76.90%    100%
B        100%             70.10%    100%        70.60%    100%
C        100%             93.00%    100%        93.00%    100%
D        100%             74.20%    100%        74.30%    100%
E        100%             78.00%    100%        77.90%    100%
F        100%             79.10%    100%        79.00%    100%
Q        100%             71.70%    100%        71.90%    100%
T        100%             79.00%    100%        79.00%    100%
W        100%             71.20%    100%        71.20%    100%
X        100%             70.00%    100%        70.20%    100%
Y        100%             70.10%    100%        70.00%    100%
Z        100%             69.80%    100%        69.50%    100%
Table 6. Performance of the proposed model vs. original model in 50 × 50 grayscale images.
         Original Model Kernel     New Model
Pattern  No noise   d = 24         No noise   d = 18   d = 22
A        100%       100%           100%       86.10    100%
B        100%       100%           100%       87.00    100%
C        100%       100%           100%       92.20    100%
D        100%       100%           100%       90.10    100%
E        100%       100%           100%       86.30    100%
F        100%       100%           100%       87.20    100%
Q        100%       100%           100%       86.50    100%
T        100%       100%           100%       88.10    100%
W        100%       100%           100%       86.20    100%
X        100%       100%           100%       89.50    100%
Y        100%       100%           100%       91.30    100%
Z        100%       100%           100%       86.00    100%
Table 7. Performance of the proposed model vs. original model in 80 × 80 binary images.
         Original Model   Morphological                   α β
Pattern  d = 4            d = 1     d = 2     d = 3       d = 1     d = 2     d = 3
1        100%             78.10%    97.10%    100%        78.00%    97.00     100%
2        100%             75.20%    95.60%    100%        75.10%    95.60     100%
3        100%             77.10%    97.10%    100%        77.20%    97.20     100%
4        100%             78.40%    97.30%    100%        78.40%    97.20     100%
5        100%             69.90%    98.20%    100%        70.00%    98.40     100%
6        100%             80.00%    99.00%    100%        80.00%    99.00     100%
Table 8. Performance of the proposed model vs. original model in 80 × 80 grayscale images.
         Original Model Kernel     New Model
Pattern  No noise   d = 38         No noise   d = 10   d = 20   d = 30
1        100%       100%           100%       77.60    89.00    100%
2        100%       100%           100%       79.10    89.20    100%
3        100%       100%           100%       75.20    87.10    100%
4        100%       100%           100%       80.00    91.00    100%
5        100%       100%           100%       78.10    90.10    100%
6        100%       100%           100%       82.90    92.20    100%
Table 9. Performance of the proposed model vs. original model in 120 × 120 binary images.
         Original Model   Morphological                   α β
Pattern  d = 6            d = 1     d = 2     d = 3       d = 1     d = 2     d = 3
1        100%             80.00%    96.70%    100%        80.10%    96.60     100%
2        100%             77.80%    96.50%    100%        77.80%    95.80     100%
3        100%             81.70%    98.90%    100%        82.90%    98.20     100%
Table 10. Performance of the proposed model vs. original model in 120 × 120 grayscale images.
         Original Model Kernel     New Model
Pattern  No noise   d = 56         No noise   d = 10   d = 30   d = 35
1        100%       100%           100%       78.10    95.90    100%
2        100%       100%           100%       77.20    96.10    100%
3        100%       100%           100%       78.30    91.10    100%