Article

Wiener–Granger Causality Theory Supported by a Genetic Algorithm to Characterize Natural Scenery

by César Benavides-Álvarez 1,*,†, Juan Villegas-Cortez 2,†, Graciela Román-Alonso 1,† and Carlos Avilés-Cruz 2,†

1 Electrical Engineering Department, Autonomous Metropolitan University Iztapalapa, Av. San Rafael Atlixco 186, Leyes de Reforma 1ra Secc, Mexico City 09340, Mexico
2 Electronics Department, Autonomous Metropolitan University Azcapotzalco, Av. San Pablo 180, Col. Reynosa, C.P., Mexico City 02200, Mexico
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Electronics 2019, 8(7), 726; https://doi.org/10.3390/electronics8070726
Submission received: 9 May 2019 / Revised: 17 June 2019 / Accepted: 21 June 2019 / Published: 26 June 2019

Abstract: Image recognition and classification are widely studied problems in computer vision systems. This paper implements a new strategy based on Wiener–Granger Causality theory for classifying natural scenery images. The strategy builds on self-content image features extracted with a Content-Based Image Retrieval (CBIR) methodology (yielding different texture features); a Genetic Algorithm (GA) is then implemented to select the most relevant natural elements from the images that share similar causality patterns. The proposed method comprises a sequential feature-extraction stage, a time-series conformation task, a causality-estimation phase, and causality feature selection through the GA implementation (with the classification process embedded in the fitness function). A classification stage was implemented, and 700 natural scenery images were used to validate the results. Using the distributed implementation, the technical efficiency of the developed system is 100% and 96% for the resubstitution and cross-validation methodologies, respectively. This proposal could help with recognizing natural scenery in the navigation of an autonomous car, or possibly a drone, being an important element in the safety of autonomous vehicle navigation.

1. Introduction

One of the challenges researchers face today is developing an artificial authentication system that has acquisition and processing capabilities similar to those possessed by humans [1]. Artificial vision is defined as the capacity of a machine to see the world that surrounds it in a 3-Dimensional form starting from a group of 2-Dimensional images [2]. Since there is no effective algorithm that can fully recognize any object one can imagine in the entire environment, computer vision is considered an open problem. A computer vision system is composed of different stages that work together for solving a particular problem [3].
Automatic image recognition is among the problems that might be solved using computer vision systems. Researchers are eager to develop these systems and different techniques have been implemented for their improvement, such as machine learning, pattern recognition and evolutionary algorithms.
One of the tasks of an automated image recognition system is to successfully classify and identify natural scenery images (a scene is said to be natural if the image shows no intervention or alteration by human hands). Currently, thousands of images are generated daily from many different kinds of sources, and the constant growth of the Internet has influenced human life.
More than half of the information on the Internet is images, 85% of which were taken with mobile devices with a final estimation of 5 trillion images reported so far [4].
In order to use this information efficiently, an image recovery system based on Content-Based Image Retrieval (CBIR) is necessary. It helps users find relevant images based on their self-content features, or those which are "seen" to be related to them from our visual perception, even when there is no previous knowledge of the database, such as manual labeling of the images.
Our previous work successfully applied the CBIR technique to the face recognition problem [5,6]. The multiple textures, objects in unknown positions and their different compositions in natural scenery images motivate proposals that combine different techniques to obtain better natural scenery classification performance. In this work, we use CBIR feature extraction as the input of a texture causality engine to characterize 5 scenery types, manually defining a base dictionary conformed by 4 textures. In future work, we plan to build this dictionary dynamically, considering more base textures and scenery types to improve classification performance.
In this work, an image retrieval system of natural scenery images is developed by applying the Wiener-Granger Causality (WGC) theory [7] as a tool for analyzing images throughout self-content information. The causal relationships between local textures contained in an image were identified, leading to characterization of a descriptive pattern of a set of scenes inside an image dataset. The selection of causality relationships was carried out using genetic algorithm (GA) implementation as an evolutionary process.
The major stages involved in the developed system are the following (See Figure 1):
  • Scenery reading: First, images are read from the dataset, and then a change of color space is applied, from Red-Green-Blue (RGB) to Hue-Saturation-Intensity (HSI).
  • Feature extraction: The statistical CBIR feature extraction is generated within a neighborhood in a grid.
  • Time series conformation: The texture features are organized as a time series for each image.
  • Causality analysis: The WGC analysis is applied to calculate the causal relationship matrix among different textures.
  • Genetic Algorithm (GA) implementation: A GA is executed to find the characterization of causality relationships that performs best for retrieving natural elements from images sharing similar causality texture patterns for a particular scene.
The paper proposes a causality analysis of the natural scenery classes based on a pre-established texture dictionary and the WGC analysis from the CBIR methodology [5,8] in order to provide a whole dataset characterization.
This approach leverages the optimization process of evolutionary algorithms. In this case, since the GA [9] offers a simple and fast implementation, it was employed to select the relationships of the local-texture statistical features handled as time series.
Finally, an improvement in the classification accuracy obtained by our proposed strategy is reported, getting 100% on re-substitution and up to 96% for cross-validation methodologies. This approach was implemented using the computer power of a 19-processor cluster and the MPI parallel programming tool.
The current methodology was tested with two databases of natural scenery:
  • Vogel and Schiele (V_S) [10], with 700 images classified as: 144 coast, 103 forest, 179 mountain, 131 prairie, 111 river/lake and 32 sky/cloud.
  • Oliva and Torralba (O_T) [11], with 1472 images classified as: 360 coast, 328 forest, 374 mountain and 410 prairie.
Looking ahead to the future implementation of an autonomous natural scene recognition system mounted on a car, managed by our proposal [12,13,14], recognizing natural scenery in the navigation of an autonomous car, or possibly a drone, with 100% certainty would make this proposed system an important element in the safety of autonomous vehicles.
The rest of the paper is organized as follows: Section 2 presents the state of the art of the problem of image analysis from the CBIR criterion and the WGC theory used in our project, as well as the theoretical support of the WGC model to be applied; in Section 3, the proposed methodology for applying the WGC theory in the natural scenery image characterization is presented; in Section 4 our GA implementation approach to optimize the selection of texture causality relationships is explained; the parallel implementation of our proposal to get good efficiency when processing a large number of images is provided in Section 5; finally, the results and conclusions are presented in Section 6 and Section 8, respectively.

2. State of the Art

The problem of image classification and recognition has been studied with different approaches for supporting visual search for different purposes.
Several techniques have been applied successfully to the face recognition problem [6,15,16,17,18]. The solutions are favored by controlling the way in which the images are obtained by determining the amount of light, the orientation, the distance, and so forth, in order to obtain ideal face images. In addition, the points to be identified on a face image are well known. The multiple textures, objects in unknown positions and their different compositions make it quite difficult to recognize and identify natural scenery in an image or group of images.
One of the most recent solutions for the classification of natural scenery is the use of the deep learning technique [19], which consists of a set of neural networks connected with each other in successive layers, where each layer network performs a convolution operation on the information of the previous layer, as we can see in Reference [20]. This methodology has the disadvantage of requiring high-end computational resources (memory and CPU) for the training task, unlike the CBIR technique which can be implemented in systems with few resources.
When using CBIR for scenery image classification, significant descriptors are determined considering the image self texture attributes to have an important and effective recovery. In this system, a user presents an image query and the system returns similar images from the database. In Figure 2, the general diagram of a CBIR-based classification system of natural scenery images is shown.
One of the first papers that uses the CBIR methodology for natural scenery classification is that by J. Vogel [21]; this work defines a regular 10 × 10 grid on the image, and from each grid coordinate an analysis window is opened. Local information is extracted from each window texture and compared with a base texture dictionary; then, the author defines a classification system for natural scenery. A point to be improved is the definition of the base texture dictionary, which is set manually and includes only typical textures perceived by the researchers. This approach obtains up to a 75% average in the cross-validation classification test.
Unlike a grid, in Reference [8], random points are scattered over the image and a window is opened around each point; from each window, local statistical texture information is extracted and grouped, dynamically conforming a base texture dictionary. Testing with the generated dictionary obtains an 85% classification average on natural scenery databases.
In Reference [22], the CBIR approach is presented to classify natural scenery images through the composition of relevant features related to texture (as in Reference [23]), shape, and the distribution of luminosity.
CBIR, being an unsupervised learning technique, still has some disadvantages, since the extracted information is treated only as a histogram representing the composition of textures in a scenery. This way of characterizing scenery has not been able to exceed 85% classification, which is why new proposals using hybrid methodologies to give CBIR greater robustness have arisen [24].
In References [24,25], the authors combine the CBIR information with certain semantic content introducing high-level concept objects, trying to link content-based images to objects extracted inside them. This work obtained a percentage of natural scenery classification not greater than that of References [21] and [8].
In this work, a hybrid method of three components is presented. The basic component is CBIR, which generates the information regarding the local texture features of the image. Unlike performing only statistical management of the obtained features (using histograms), in this proposal the second component is responsible for applying a causality technique based on the Wiener Granger causality theory to identify the causal relationships that exist within the basic textures of a type of scenery. Since the causality component generates different configurations of causal relationships, the third component consists of a GA that allows the selection of the configuration that obtains the best classification percentage for each scenery.
Evolutionary Computational Vision (ECV) is a research area currently growing within artificial intelligence through two fields of work: computational vision and evolutionary computation. From a practical point of view, ECV seeks to design the software and hardware solutions necessary to solve hard computer vision problems [1]. Bio-inspired computation within computational vision comprises a set of techniques frequently applied to hard optimization problems. Its chief objective is to generate solutions formulated in a synthetic way, and the artificial evolutionary process based on the evolutionary theory developed by Charles Darwin is the one most frequently applied [9].

2.1. Theoretical Fundamentals of WGC

The causal inference paradigm has been used in different fields of science; for example, in neurology the WGC theory [26] is used to examine areas of the brain and the causal relationships among them. WGC analysis was first carried out using sensors [27,28] and, lately, on MRI images [29,30,31], where WGC theory is used to study causal relationships among brain areas. Other fields where WGC theory has been applied are video processing for indexing and retrieval [32], video processing for massive people and vehicle identification [33,34,35], and complex scenery analysis [36]. In this proposal, WGC theory is applied for the first time to natural element and natural scene retrieval.
In this section, the theoretical framework of the WGC is established. For simplicity, and to avoid an overly long mathematical treatment, the theory is presented for only three random processes, being extendable to n processes. In our approach, a random process corresponds to a signal reading associated with one type of texture within a natural scenery; thus, for the present analysis, each texture reading corresponds to one stochastic process represented by $T_i$, where i denotes the i-th texture, which has a stochastic behavior within a scenery.

2.2. Stochastic Autoregressive Model

We assume that each texture can be represented by an autoregressive model over time series. In the current analysis, we only work with three signals, $\{T_1, T_2, T_3\}$, the treatment being easily extendable to n signals/textures. Let $T_1$, $T_2$, and $T_3$ be three stochastic processes, individually and jointly stationary. Each stationary process can be represented by an autoregressive model in the following way:
$$T_1(t) = \sum_{k=1}^{\infty} C_{T_1}^{1}(k)\,T_1(t-k) + \eta_{T_1}^{1}, \quad \text{with } \Sigma_{T_1}^{1} = \operatorname{var}(\eta_{T_1}^{1}), \tag{1}$$
$$T_2(t) = \sum_{k=1}^{\infty} C_{T_2}^{1}(k)\,T_2(t-k) + \eta_{T_2}^{1}, \quad \text{with } \Sigma_{T_2}^{1} = \operatorname{var}(\eta_{T_2}^{1}), \tag{2}$$
$$T_3(t) = \sum_{k=1}^{\infty} C_{T_3}^{1}(k)\,T_3(t-k) + \eta_{T_3}^{1}, \quad \text{with } \Sigma_{T_3}^{1} = \operatorname{var}(\eta_{T_3}^{1}), \tag{3}$$
where $\eta_{T_1}^{1}$, $\eta_{T_2}^{1}$ and $\eta_{T_3}^{1}$ are random Gaussian noise with zero mean and unit standard deviation; $C_{T_1}^{1}(k)$, $C_{T_2}^{1}(k)$ and $C_{T_3}^{1}(k)$ are the coefficients of the regression model for textures $T_1$, $T_2$ and $T_3$, respectively.
The joint autoregressive model for the three textures is defined by the equations:
$$T_1(t) = \sum_{k=1}^{\infty} C_{T_1}^{1,1}(k)\,T_1(t-k) + \sum_{k=1}^{\infty} C_{T_2}^{1,2}(k)\,T_2(t-k) + \sum_{k=1}^{\infty} C_{T_3}^{1,3}(k)\,T_3(t-k) + \eta_{T_1}^{2}, \quad \text{with } \Sigma_{T_1}^{2} = \operatorname{var}(\eta_{T_1}^{2}), \tag{4}$$
$$T_2(t) = \sum_{k=1}^{\infty} C_{T_1}^{2,1}(k)\,T_1(t-k) + \sum_{k=1}^{\infty} C_{T_2}^{2,2}(k)\,T_2(t-k) + \sum_{k=1}^{\infty} C_{T_3}^{2,3}(k)\,T_3(t-k) + \eta_{T_2}^{2}, \quad \text{with } \Sigma_{T_2}^{2} = \operatorname{var}(\eta_{T_2}^{2}), \tag{5}$$
$$T_3(t) = \sum_{k=1}^{\infty} C_{T_1}^{3,1}(k)\,T_1(t-k) + \sum_{k=1}^{\infty} C_{T_2}^{3,2}(k)\,T_2(t-k) + \sum_{k=1}^{\infty} C_{T_3}^{3,3}(k)\,T_3(t-k) + \eta_{T_3}^{2}, \quad \text{with } \Sigma_{T_3}^{2} = \operatorname{var}(\eta_{T_3}^{2}), \tag{6}$$
where $\Sigma_{T_1}^{2}$, $\Sigma_{T_2}^{2}$ and $\Sigma_{T_3}^{2}$ are the variances of the residual terms $\eta_{T_1}^{2}$, $\eta_{T_2}^{2}$ and $\eta_{T_3}^{2}$, respectively. On the other hand, the terms $C_{T_l}^{i,j}(k)$, $i,j,l \in \{1,2,3\}$, are the regression coefficients for textures $T_1(t)$, $T_2(t)$ and $T_3(t)$, respectively.
Now let us analyze the variances/covariances of the residual terms $\eta_{T_i}^{2}$ by means of the matrix $\Sigma$ in Equation (7):
$$\Sigma = \begin{pmatrix} \Sigma_{T_1}^{2} & \Upsilon_{1,2} & \Upsilon_{1,3} \\ \Upsilon_{2,1} & \Sigma_{T_2}^{2} & \Upsilon_{2,3} \\ \Upsilon_{3,1} & \Upsilon_{3,2} & \Sigma_{T_3}^{2} \end{pmatrix} \tag{7}$$
where $\Upsilon_{1,2}$ is the covariance between $\eta_{T_1}^{2}$ and $\eta_{T_2}^{2}$ (defined as $\Upsilon_{1,2} = \operatorname{cov}(\eta_{T_1}^{2}, \eta_{T_2}^{2})$); $\Upsilon_{1,3}$ is the covariance between $\eta_{T_1}^{2}$ and $\eta_{T_3}^{2}$ (defined as $\Upsilon_{1,3} = \operatorname{cov}(\eta_{T_1}^{2}, \eta_{T_3}^{2})$), and so on.
Based on the earlier conditions and using the concept of statistical independence between two random processes at the same time (in pairs), causality can be defined in time. An example of the causality between T 1 and T 2 is as in the following expression:
$$F_{T_2,T_1} = \ln \frac{\Sigma_{T_1}^{1} \times \Sigma_{T_2}^{1}}{\Sigma_{T_1}^{2} \times \Sigma_{T_2}^{2}} \tag{8}$$
Equation (8) is commonly known as causality in the time domain. From this equation, if the random processes $T_1(t)$ and $T_2(t)$ are statistically independent, then $F_{T_1,T_2} = 0$; otherwise, there is causality from one to the other.
In Equation (1), $\Sigma_{T_1}^{1}$ measures the precision of the autoregressive model in predicting $T_1(t)$ from its own past samples.
In turn, $\Sigma_{T_1}^{2}$ in Equation (4) measures the precision in predicting $T_1(t)$ based on the previous values of $T_1(t)$, $T_2(t)$ and $T_3(t)$ jointly. Returning to the case of taking only two textures at a time, $T_1(t)$ and $T_2(t)$, and according to References [37] and [7], if $\Sigma_{T_1}^{2} < \Sigma_{T_1}^{1}$, then $T_2(t)$ is said to have a causal influence on $T_1(t)$. The causality is defined by the following equation:
$$F_{T_2 \to T_1} = \ln \frac{\Sigma_{T_1}^{1}}{\Sigma_{T_1}^{2}} \tag{9}$$
It is relatively easy to see that if $F_{T_2 \to T_1} = 0$, then there is no causal influence from $T_2(t)$ towards $T_1(t)$; for any other value, there is. On the other hand, the causal influence of $T_1(t)$ towards $T_2(t)$ is established using the following equation:
$$F_{T_1 \to T_2} = \ln \frac{\Sigma_{T_2}^{1}}{\Sigma_{T_2}^{2}} \tag{10}$$
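To make Equations (1), (4) and (9) concrete, the following minimal Python sketch estimates the pairwise time-domain causality $F_{T_2 \to T_1}$ from two discrete signals via least-squares AR fits. It is an illustrative toy, not the paper's implementation (which relies on the MVGC toolbox [38]); the fixed model order and the function names are our assumptions.

```python
import numpy as np

def ar_residual_var(y, regressors, order):
    """Least-squares fit of y(t) on `order` lags of each regressor series;
    returns the residual variance (the Sigma terms in Eqs. (1) and (4))."""
    T = len(y)
    rows = [np.concatenate([x[t - order:t][::-1] for x in regressors])
            for t in range(order, T)]
    A, b = np.asarray(rows), y[order:]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.var(b - A @ coef)

def granger_f(t1, t2, order=5):
    """Time-domain causality F_{T2 -> T1} = ln(Sigma^1_{T1} / Sigma^2_{T1}),
    Equation (9): how much adding T2's past improves the prediction of T1."""
    sigma1 = ar_residual_var(t1, [t1], order)       # univariate model, Eq. (1)
    sigma2 = ar_residual_var(t1, [t1, t2], order)   # joint model, Eq. (4)
    return np.log(sigma1 / sigma2)
```

As a sanity check, `granger_f` evaluated on a deterministic signal and independent noise should return a value near zero, since the noise's past adds essentially nothing to the prediction.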

3. Methodology

In the current section, we describe the methodology developed for the WGC technique with a GA support applied to natural scenery.
For the use of CBIR, different determining factors must be taken into account while extracting information from the images, such as luminosity, orientation, scale, homogeneity, and so forth. The main characteristic in our proposed patterns is texture: we first create a base dictionary, then build the time series by reading the images and comparing them against the dictionary, and finally apply the WGC theory to those series.
For the development of the dictionary, a set of k textures is manually selected from the images to be studied, which we call reference textures. The k generated textures represent parts of objects such as the sky, clouds, grass, rock, and so forth, aiming at a manual segmentation of the scenery as shown in Figure 3. In Section 6, the k = 4 texture test is shown for six scenery classes.
Once the set of k reference textures has been obtained, the values in the HSI color space of each of them are examined to create a range of maximum and minimum values that represent them; these values define the comparison thresholds for the test textures of a query image.
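A minimal sketch of how such a threshold dictionary could be built and queried is shown below; the per-patch feature format and the function names are our assumptions, while the mean plus/minus twice the standard deviation rule follows the description given in Section 6.

```python
import numpy as np

def texture_range(samples):
    """Build the [min, max] HSI thresholds of one reference texture from a
    stack of manually selected sample features (N x 3: one mean HSI triple
    per sample patch), using mean +/- 2*std per channel."""
    mu, sigma = samples.mean(axis=0), samples.std(axis=0)
    return mu - 2.0 * sigma, mu + 2.0 * sigma

def matches(texture_range_pair, hsi_feature):
    """Return 1 if a query HSI feature falls inside the texture's range on
    every channel, 0 otherwise (the binary comparison used later for the
    time series)."""
    lo, hi = texture_range_pair
    return int(np.all((hsi_feature >= lo) & (hsi_feature <= hi)))

# dictionary = [texture_range(s) for s in per_texture_sample_stacks]  # k entries
```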
The proposed methodology for the identification and classification of scenery by WGC is shown in Figure 4. The blocks of the architecture are described below.
  • Natural Scenery Database (NSDB). Represents the set of images to be analyzed; it contains the natural scenery images.
  • Reading the images. This block is responsible for obtaining the images from the database, which are processed in (Red-Green-Blue) RGB color format.
  • Pre-processing. The images are pre-processed to remove noise before the next step.
  • Change to the (Hue-Saturation-Intensity) HSI color space. The RGB color space does not give us the necessary information for feature extraction; therefore, the images are converted to the HSI color space, which provides the information related to texture.
  • Feature extraction. This block consists of three important stages:
    • Grid image. The work in Reference [21] is taken as a reference, where a regular grid of 10 × 10 windows is considered for the CBIR texture analysis; in our proposal, we use a grid of r × c windows with the property r ≠ c, where c := number of windows in the horizontal direction (columns) and r := number of windows in the vertical direction (rows).
    • Neighborhood construction. In each of the resulting frames of the grid, a neighborhood of p × p pixels is extracted, starting from the top-left corner of each window, as shown in Figure 5, such that p < r and p < c.
    • CBIR feature extraction. The image is read from the neighborhoods in the following way: reading starts in the top-left corner of the image and moves in descending vertical order through the neighborhoods, processing each of them. Once the last row is reached, the reading moves one step to the right neighborhood and goes up to the first row; on reaching the first row again, it moves to the right column and goes down again (like a snake moving). This reading is repeated until the last neighborhood of the image is reached, as shown in Figure 6 (a sketch of this traversal appears after this list). Each neighborhood produces a pattern of size 1 × 3, that is, one feature per channel (HSI) of the image. After the features of all the neighborhoods have been extracted in the established order, a matrix $M_s^i$ of size w × 3 is created, where w = r·c is the number of neighborhoods analyzed for the i-th image of class $C_s$.
  • Generation of time series. For each $M_s^i$ from the previous step, each matrix entry is compared to the k textures of the dictionary to construct a discrete signal as a time series $TS^i$, defined as a matrix of size k × w.
    In the comparison, the value 1 is assigned if the neighborhood feature approaches the dictionary texture and 0 if not, according to the threshold values which characterize each texture, as presented previously. After processing the entries of all $M_s^i$, the set of signals for each scenery is stored in $FM_s$, the time-series matrix corresponding to a class s that contains $Img_s$ images.
  • Wiener–Granger causality analysis. Each $FM_s$ matrix created in the previous step is fed to the WGC analysis to obtain the causal relationships among the base textures. A matrix of causality relationships, $\eta_s$, related to the training images is generated, as shown in Figure 7; therein, darker colors represent stronger relationships, and these can be depicted through a state diagram where continuous lines represent only the stronger ones. The causality analysis was computed with the MVGC causality toolbox [38], invoked as an external system call.
    Once the causality analysis has been made for each of the $C_s$ scenery classes, we get a causality relationship matrix $\eta_s$ of size k × k containing all the causal relationships $F_{T_i,T_j}$ from texture $T_i \to T_j$ (as given in Equation (11)); a value $F_{T_i,T_j} = 0$ means that there is no causal relationship from texture i to texture j, and the larger the value with respect to the other entries of $\eta_s$, the more significant the causal relationship with respect to the others.
    $$\eta_s = \begin{pmatrix} F_{T_1,T_1} & F_{T_1,T_2} & \cdots & F_{T_1,T_k} \\ F_{T_2,T_1} & F_{T_2,T_2} & \cdots & F_{T_2,T_k} \\ \vdots & \vdots & \ddots & \vdots \\ F_{T_k,T_1} & F_{T_k,T_2} & \cdots & F_{T_k,T_k} \end{pmatrix} \tag{11}$$
    The causality matrices $\eta_s$ are normalized by the total sum of their values, $N_s = \sum_{i,j=1}^{k} F_{T_i,T_j}$, such that $\eta_s^N$ is the normalized matrix of the s-th scenery, for $s = 1, \ldots, C_s$, with $C_s$ the number of scenery types considered, as given in Equation (12). The values of the main diagonal of this resulting matrix are not taken into account, because they add no strength to the causality relationship; as observed in the theory, there is no causal relationship of a variable with itself.
    Finally, for the $C_s$ classes or scenery types, the total concentration of the matrices, $\Gamma$, is defined as in Equation (13).
    $$\eta_s^N = \begin{pmatrix} F_{T_1,T_1} & F_{T_1,T_2} & \cdots & F_{T_1,T_k} \\ F_{T_2,T_1} & F_{T_2,T_2} & \cdots & F_{T_2,T_k} \\ \vdots & \vdots & \ddots & \vdots \\ F_{T_k,T_1} & F_{T_k,T_2} & \cdots & F_{T_k,T_k} \end{pmatrix} \frac{1}{N_s} \tag{12}$$
    $$\Gamma = \bigcup_{l=1}^{C_s} \eta_l^N = \{\eta_1^N, \eta_2^N, \ldots, \eta_{C_s}^N\} \tag{13}$$
    The matrices contained in $\Gamma$ serve as a descriptive pattern for each scenery or class in the database.
  • Selection of causal relationships by means of a Genetic Algorithm. The causal relationships among different variables that are most important or relevant for each scene must be identified. This can be accomplished in a simple way by eliminating the relationships whose numerical value falls below a previously established threshold.
    However, one disadvantage of this method is choosing the threshold, because there is no a priori knowledge of its optimal value; in addition, the complexity increases with the number of textures in the dictionary, along with the number of classes and images to be examined. Another drawback of this solution is that some of the weak relationships could also be important for characterizing a scenery. There is therefore a need for an automatic selection that discriminates the relevant relationships as a combinatorial optimization process. Genetic algorithms (GAs) have been used successfully in several computer vision problems, together with digital image processing [39] and classification [1,9,40,41,42]. In this work, the GA is also the right solution for the required optimization.
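As referenced in the feature-extraction step above, the following sketch illustrates the snake-order traversal, the w × 3 feature matrix, and the binary k × w time series. The mean-HSI feature per neighborhood is our simplification of the CBIR statistics, and matches() is the hypothetical helper from the dictionary sketch earlier in this section.

```python
import numpy as np

def snake_order(r, c):
    """Yield (row, col) indices column by column, alternating the vertical
    direction, reproducing the 'snake' reading of Figure 6."""
    for col in range(c):
        rows = range(r) if col % 2 == 0 else range(r - 1, -1, -1)
        for row in rows:
            yield row, col

def extract_features(hsi_img, r, c, p):
    """Build the w x 3 matrix M (one mean HSI triple per p x p neighborhood
    anchored at each window's top-left corner), read in snake order."""
    h_step, w_step = hsi_img.shape[0] // r, hsi_img.shape[1] // c
    feats = [hsi_img[row * h_step:row * h_step + p,
                     col * w_step:col * w_step + p].reshape(-1, 3).mean(axis=0)
             for row, col in snake_order(r, c)]
    return np.asarray(feats)                      # shape (r*c, 3)

def to_time_series(M, dictionary):
    """Compare every neighborhood feature with the k dictionary textures to
    obtain the binary k x w time-series matrix TS."""
    return np.array([[matches(rng, f) for f in M] for rng in dictionary])
```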

4. Genetic Algorithm Proposal

To analyze the $\Gamma$ matrices generated by the WGC and find the significant causality relationships for one scenery, we propose treating each matrix with a GA implementation. In this section, we describe the GA proposal in detail.
In this approach, each matrix $\eta_s^N \in \Gamma$ is expressed using a vector representation; see Figure 8, parts (a) and (b). This is achieved by concatenating the rows of the matrix $\eta_s^N$; then the diagonal entries are eliminated, as shown in Figure 8, part (c). In Figure 8, part (d), the values are reallocated after this elimination, providing a vector with continuous indices of size 1 × (k² − k) for each s-th row, one per scenery. Following this process, the matrix $\tau$ is finally created, containing the linear conformation of each matrix $\eta_s^N$, with s = {1, 2, …, $C_s$}, in different rows, as shown in Figure 8, part (e).
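A short sketch of this flattening (Figure 8, parts (a)–(e)) under the stated conventions:

```python
import numpy as np

def flatten_without_diagonal(eta_n):
    """Concatenate the rows of a k x k normalized causality matrix and drop
    the diagonal, yielding the 1 x (k^2 - k) vector of Figure 8 (d)."""
    k = eta_n.shape[0]
    return eta_n[~np.eye(k, dtype=bool)]          # row-major, diagonal removed

# tau stacks one flattened vector per scenery (Figure 8 (e)):
# tau = np.vstack([flatten_without_diagonal(m) for m in Gamma])
```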

4.1. Individual Codification

A binary representation of an individual for one scenery, $\tau[i]$, is conformed as a filter-type array of size 1 × (k² − k) of zeros and ones: if a causal relationship is selected in that array, the value 1 is used, and 0 otherwise. There are $C_s$ rows, one per scenery; each row of the filter matrix is intended to differ from the others, with the purpose of characterizing each type of scenery in a unique way.
An automatic process must then determine which values of the matrix $\tau$ are relevant features for distinguishing the causal relationships of each scenery and, based on this result, which values are to be removed for the preset number of textures. With the most relevant causal values selected, classification is sought by means of a distance classifier against the matrix $\tau$ for each query image.

4.2. Fitness Function

The fitness evaluation of each individual proceeds in several parts. First, Equation (14) is applied to the individual $G_x$, representing a texture-relationship selection for the scenery s in question, using the matrix $\tau$ in Figure 9.
$$\rho_s^{G_x} = \frac{\sum_{l=1}^{k^2-k} G_x(l)\,\tau_{s,l}}{\sum_{m=1}^{C_s} \sum_{l=1}^{k^2-k} G_x(l)\,\tau_{m,l}}, \quad \text{such that } G_x(l) \neq 0 \tag{14}$$
Here, $G_x(l)\,\tau_{s,l}$ refers to the product of the $\tau$ entry located at scenery s (row s) and column l, specifying a causal relationship, provided $G_x(l)$ is a valid non-zero entry of the genome. Thus $\rho_s^{G_x}$ is the total probability for the individual $G_x$ applied to all scenery.
Based on these data, by means of probability theory, the individual $G_x$ is required to meet the condition $\rho_s^{G_x} > \rho_j^{G_x}$, such that $s \in \{1, 2, \ldots, C_s\}$, $s \neq j$, and $1 \leq j \leq C_s$.
That is, $C_s$ probabilities, one per scenery, are obtained by evaluating the individual $G_x$ through the calculation of $\rho_j^{G_x}$. Equation (15) gives the first step of the optimization process, considering the maximum probability related to the causal relationships which best characterize scenery s versus the others.
$$\wp_s^{G_x} = \begin{cases} \rho_s^{G_x} & \text{if } \rho_s^{G_x} = \max\{\rho_j^{G_x}\}_{j=1,\ldots,C_s} \\ 0 & \text{if } \rho_s^{G_x} \neq \max\{\rho_j^{G_x}\}_{j=1,\ldots,C_s} \end{cases} \tag{15}$$
Then, the fitness function $f_s(G_x)$ is determined as in Equation (16),
$$f_s(G_x) = \begin{cases} CP_s & \text{if } \wp_s^{G_x} > 0 \\ 0 & \text{if } \wp_s^{G_x} = 0 \end{cases} \tag{16}$$
where $CP_s$ is the classification percentage for scenery s. To compute it, the images contained in scenery s are queried using the re-substitution test; each image query returns the scenery to which the image is assigned, filling a confusion matrix from which the classification percentage is calculated. The query process for a single image is described in the following subsection. Later, in Section 4.3, the global fitness is used for the population evolution in the GA loop.
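A compact sketch of Equations (14)–(16), assuming the classification percentage $CP_s$ has already been computed by the re-substitution queries (the variable and function names are ours):

```python
import numpy as np

def rho(G, tau, s):
    """Equation (14): fraction of the selected causal mass lying in row s;
    assumes G selects at least one relationship (non-zero denominator)."""
    sel = G.astype(bool)
    return tau[s, sel].sum() / tau[:, sel].sum()

def fitness(G, tau, s, cp_s):
    """Equations (15)-(16): fitness equals CP_s only when row s attains the
    maximum probability among all scenery rows, and 0 otherwise."""
    probs = [rho(G, tau, j) for j in range(tau.shape[0])]
    return cp_s if probs[s] == max(probs) else 0.0
```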

4.2.1. Creating a Query from a Single Image

In order to classify an s-scenery image considering the relationships specified in a G x individual, a related causal relationship matrix needs to be constructed.
The first step creates a set of M synthetic images, $L_1, L_2, \ldots, L_M$, from a single image L. This is produced by manipulating the first reading of the image, making a circular shift of d positions for each new synthetic image, in order to create several samples of the same image, as shown in Figure 10. In this way, the respective query matrix of size k × w × M (k := number of textures in the dictionary, w := number of neighborhoods, and M := number of synthetic images) is generated to feed the WGC analysis process and to obtain the resulting normalized causal relationship matrix $\eta_L^N$ of size k × k. These steps are carried out following Equations (11) and (12).
Then $\eta_L^N$ is manipulated as shown in the stage presented in Figure 8 to obtain the linear representation of the matrix. The last query step applies the k-NN classifier (with k := 1, i.e., nearest neighbor) to determine which $\tau$ scenery (row) is closest to the linear relationship representation of image L, considering only the relationships indicated by the non-zero values of $G_x$.
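Putting the query steps together, a minimal sketch could look as follows; wgc_normalized() stands in for the external MVGC call plus the normalization of Equations (11) and (12), and flatten_without_diagonal() is the hypothetical helper from the sketch in the previous section.

```python
import numpy as np

def query_image(ts, G, tau, M=8, d=5):
    """Classify one image from its k x w time-series matrix `ts`: build M
    circularly shifted synthetic copies (Figure 10), obtain the normalized
    k x k causality matrix, flatten it, and return the 1-NN scenery over
    the relationships selected by the individual G."""
    stack = [np.roll(ts, shift=i * d, axis=1) for i in range(M)]
    eta_ln = wgc_normalized(stack)               # assumed MVGC wrapper
    q = flatten_without_diagonal(eta_ln)         # 1 x (k^2 - k)
    sel = G.astype(bool)
    dists = np.linalg.norm(tau[:, sel] - q[sel], axis=1)
    return int(np.argmin(dists))                 # closest scenery row
```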

4.3. GA Implementation

A genetic algorithm is applied for each τ line to automatically select the most representative causal relationship of each scenery. Figure 11a shows the general algorithm flowchart of this approach.
An initial population, $P_G$, of sizeP individuals is randomly generated, where sizeP is an odd number and each individual has size k² − k, the number of columns of the matrix $\tau$.
Then the $P_G$ individuals are evaluated with the fitness function, Equation (16), for a particular scenery s, considering the total set of images that conform it, as shown in Figure 11b. Each individual's fitness is stored in a fitness array $\mathfrak{F}$, as in Equation (17). The array $\mathfrak{F}$ is then sorted from highest to lowest to find the individual with the highest fitness:
$$\mathfrak{F} = \{ f_1(G_x), f_2(G_x), \ldots, f_{sizeP}(G_x) \}, \tag{17}$$
such that $f_p(G_x) \geq 0$ for $1 \leq p \leq sizeP$. For this proposal, the population size is sizeP = 21, the genome length is 12, and the number of iterations was maxGen = 100 generations.
To generate the new population, (sizeP − 1)/2 triplets of random numbers are generated, e.g., {1,5,1} or {2,4,0}, where the first two numbers identify the selected parent individuals and the third is one of the two possible operations to be executed: crossover ("1") or mutation ("0").
From each triplet, two new individuals are generated: by the crossover operator when indicated, or, in the mutation case, by altering the two selected individuals separately, yielding two new elements for the new generation (one mutated individual from each single parent).
The crossover operator is applied at one uniformly random point of the two participating chromosomes, and the mutation operator is performed over 10% of the elements of a chromosome, as shown in Figure 12.
The genetic operations of mutation and crossover are applied to 30% and 70% of the population, respectively, favoring the selection of the highest-fitness individuals for reproduction. The individual with the best fitness passes to the next generation by elitism. In this way, the population evolves towards a selection of causal relationships relevant enough to characterize each scenery.
The GA stops when a 100% classification percentage is reached or when the maximum number of generations is attained.
After the GA is applied $C_s$ times, the individuals containing the most relevant relationships for each scenery are found. The $\tau$ matrix is then updated: its entries are set to zero wherever the corresponding individual entries are zero, and they keep their value otherwise.
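The genetic operators described above admit a short sketch (the exact operator details in the paper may differ; this is our reading of Figure 12):

```python
import numpy as np

rng = np.random.default_rng()

def crossover(a, b):
    """One-point crossover at a uniformly random cut of two parent genomes."""
    cut = rng.integers(1, len(a))
    return (np.concatenate([a[:cut], b[cut:]]),
            np.concatenate([b[:cut], a[cut:]]))

def mutate(g, rate=0.10):
    """Flip roughly 10% of the bits of one genome (binary integer array)."""
    g = g.copy()
    idx = rng.choice(len(g), size=max(1, int(rate * len(g))), replace=False)
    g[idx] ^= 1
    return g
```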

5. Parallel Approach

In this section, a parallel algorithm to speed up the proposed WGC methodology is presented. The parallel approach works on a distributed-memory architecture using the MPI library; that is, a set of processes without shared memory works in parallel, communicating through message exchange to determine the relevant causal relationships of all the scenery classes. Each process can access the NSDB to extract and work with its corresponding set of images. The complexity of the algorithm in this proposal is given by Equation (18),
$$O(N_{class} \times k \times C_{IP} \times r \times c \times nImg \times t_{comp} \times WGC_{tb}) \tag{18}$$
where $N_{class}$ := number of classes, k := number of textures, $C_{IP}$ := a per-image processing constant, r := number of rows in the grid, c := number of columns in the grid, nImg := total number of images, $t_{comp}$ := comparison time against the base textures, and $WGC_{tb}$ := causality analysis time (e.g., for an image of 640 × 480 pixels, r = 640/20 = 32, c = 480/30 = 16, and p is the neighborhood size, such that a 10 × 10 neighborhood implies p = 10). That is, as the grid dimensions r and c grow, as the number of images nImg in the NSDB grows, and as the number of base textures k in the dictionary and the number of classes $N_{class}$ grow, the computational cost increases. A parallel architecture is therefore necessary to solve this problem for the large image counts associated with Big Data problems.
The Algorithm 1 shows the procedure which is executed simultaneously by each process and the general process of this parallel proposal is depicted in Figure 13.
At the beginning (line 2 of Algorithm 1; Figure 13, tag (1)), each process determines the set of images it must read (ImgBlock), taking into consideration the total number of images (Total_IMGs), the total number of processes (Total_procs) and the process identifier (rank). A single process can work with images belonging to different scenery classes (e.g., Total_IMGs = 700, Total_procs = 70, ImgBlock = 700/70 = 10 for the NSDB).
Algorithm 1 Parallel algorithm for the causality matrix construction.
1: procedure CausalityMatrixConstruction(rank)
2:     initialization(ImgBlock, Total_IMGs, Total_procs, rank)
3:     for every i in ImgBlock do
4:         Img_RGB ← read(image, s, i)
5:         Img_preprocessing(image)
6:         RGB_to_HSI(image)
7:         M_s^i ← Feature_extraction(image)
8:         F_s^i ← time_series_construction(M_s^i, Texture_Dictionary)
9:         Save_TimeSeries(F_s^i)
10:    end for
11:    Barrier_synchronization()
12:    if rank in {Scenery coordinator ranks} then
13:        F_s ← Load_all_time_series(s)
14:        η_s ← System_call(MVGC(F_s))
15:        τ_s ← Fitting(η_s)
16:        Genetic_Algorithm(τ_s)
17:        Send(τ_s, s, General_coordinator_rank)
18:    end if
19:    if rank == General_coordinator_rank then
20:        for every id in {Scenery coordinator ranks} do
21:            Recv(τ_s, s, id)
22:            τ(s) ← τ_s
23:        end for
24:    end if
25: end procedure
Each process works simultaneously on the section of the NSDB, ImgBlock, assigned to it, performing the following steps (lines 3–10). The i-th image is first read in RGB space, and the scenery s to which it belongs is also obtained (line 4). Then the image preprocessing and the conversion from the RGB to the HSI domain are carried out (lines 5 and 6, respectively). In line 7, the statistical features are calculated, including the construction of the image grid and neighborhoods; the CBIR features per neighborhood generate the $M_s^i$ matrix (Figure 13, tag (2), represents the execution of lines 4 to 7 of Algorithm 1). Then, $M_s^i$ and the texture dictionary are used to construct the respective time series, $F_s^i$, which is stored in a file (lines 8 and 9 of the algorithm; Figure 13, tag (3)).
Up to this point, all processes work independently; however, to ensure that every process has fully accomplished its task, a parallel barrier synchronization (line 11 of the algorithm; Figure 13, tag (4)) is introduced before continuing with the next step. Here (line 12), only the processes identified as scenery coordinators (one process per scenery) continue with the construction of the corresponding $F_s$ matrix (line 13; Figure 13, tag (5)), by loading the respective set of $F_s^i$ matrices (one per scenery image) generated previously. Then, a system call is performed (line 14; Figure 13, tag (6)) to run the MVGC toolbox and obtain the causality relationship matrix, $\eta_s$, from the WGC analysis.
The Fitting($\eta_s$) function in line 15 (Figure 13, tag (7)) is in charge of the normalization and vector representation of the causality relationship matrix $\eta_s$. The respective $\tau_s$ is thus generated, corresponding to the s-th row (scenery) of the $\tau$ matrix. Line 16 of Algorithm 1 (Figure 13, tag (8)) shows the GA call executed by each of the $C_s$ scenery coordinator processes, with $C_s$ being the number of scenery classes. After identifying the most relevant causal relationships by means of the GA, $\tau_s$ is updated and sent to the general coordinator (line 17) through a message.
Finally, in lines 19–24, the general coordinator process receives, by means of several messages, the results generated by the scenery coordinator processes (Figure 13, tag (9)). When all message receptions are achieved, the matrix τ is successfully constructed.
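For illustration only, a compact mpi4py sketch of Algorithm 1's control flow is given below; the paper's implementation uses the MPI library directly, and the stub helpers merely stand in for the real pipeline stages (image processing, the MVGC system call, fitting, and the GA).

```python
from mpi4py import MPI
import numpy as np

# --- stub helpers standing in for the real pipeline stages ---
def process_image(i):            return np.zeros((4, 512))     # k x w series
def save_time_series(i, ts):     pass
def load_all_time_series(s):     return np.zeros((4, 512))
def run_mvgc(F):                 return np.random.rand(4, 4)   # MVGC call
def fit(eta):                    return eta[~np.eye(4, dtype=bool)]
def genetic_algorithm(tau_s):    return tau_s

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()
TOTAL_IMGS, GENERAL = 700, 0
COORDINATORS = [1, 2, 3, 4, 5, 6]                # one rank per scenery

for i in range(rank, TOTAL_IMGS, nprocs):        # lines 3-10
    save_time_series(i, process_image(i))

comm.Barrier()                                   # line 11

if rank in COORDINATORS:                         # lines 12-18
    s = COORDINATORS.index(rank)
    tau_s = genetic_algorithm(fit(run_mvgc(load_all_time_series(s))))
    comm.send((s, tau_s), dest=GENERAL)

if rank == GENERAL:                              # lines 19-24
    tau = dict(comm.recv() for _ in COORDINATORS)
```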

6. Experimental Results

The proposal evaluation was performed using the computing power of a 19-processor dual-core cluster. Each processor is an Intel® Xeon® CPU E5-2670 v3 at 2.30 GHz with 74 GB RAM.
Four image textures (k = 4) were selected to conform the base dictionary, as shown in Table 1. The threshold values for each texture were obtained manually from the database images: a set of 20 texture samples was taken from 5 images per class, and for each texture the average of the H layer plus/minus twice the standard deviation was computed, generating the maximum and minimum threshold values for that texture.
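For instance, with hypothetical sample statistics $\mu_H = 0.30$ and $\sigma_H = 0.04$ for one texture's H layer, the acceptance interval would be $[\mu_H - 2\sigma_H,\ \mu_H + 2\sigma_H] = [0.22,\ 0.38]$; a query neighborhood whose H feature falls inside this interval counts as a match for that texture.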
The N S D B used for the evaluation consists of the following data:
  • Vogel and Schiele (V_S) [10], including 6 scenery classes with 700 images classified as: 144 coast, 103 forest, 179 mountain, 131 prairie, 111 river/lake, and 32 sky/cloud.
  • Oliva and Torralba (O_T) [11], including 4 scenery classes with 1472 images classified as: 360 coast, 328 forest, 374 mountain and 410 prairie.
The images were adapted so that some typical classification challenges were considered. The whole set of images was tested in its normal state and with Gaussian noise (GN) and salt-and-pepper noise (S&P) at 1%, 3%, and 10% levels, respectively, as shown in Figure 14. A rotation transformation was also applied to each image at 0°, 45°, 90°, 135°, and 180°, as shown in Figure 15. Image queries were performed following the procedure described in Section 4.2.1.
The results in this section are organized as follows. The image classification performance obtained when applying the WGC theory is first presented. Then the execution times of the proposed parallel methodology are shown.

6.1. Classification Results

To show that the proposed GA implementation was a good solution to select some relevant texture relationships describing a scenery, we compared our proposal (GA version) to the manual strategy (Manual version) introduced in Reference [36]; under the manual strategy only the highest relationship values were selected, establishing a specific threshold. In both versions, the methodology presented in Section 3 for the construction of τ matrix, was executed. Table 2 shows the resulting τ values.
Because there were no a priori criteria for determining a threshold value, in the Manual version the 25% least significant causal relationships per scenery were deleted. Table 3 shows the updated τ matrix after the manual selection.
When executing the GA version for the selection of τ matrix relationships, a larger space of solutions was explored trying to look for the causal relationships that best represent one scenery. The obtained individuals are presented in Table 4, and the updated τ matrix is shown in Table 5.
Both the GA and manual versions were tested using 300 images, 50 per scenery. The manual version obtained only a 12.53% general classification percentage. The confusion matrix showing the image association per scenery can be seen in Table 6; we observe that most of the images were associated with the coast scenery, and as a result the manual selection test gave a poor classification percentage.
With the information in Table 5, the most representative relationships of each natural scenery are rendered as a visual representation: graphs representing the intensity of the causal relations between the k = 4 base textures of the dictionary. These graphs show how textures are related within the corresponding scenery, yielding the pattern that represents each of them, as can be appreciated in Figure 16.
Given these first results, it can be observed that the relationships with the highest values were not necessarily the best ones to select.
To measure the technical efficiency of our proposal using the GA, the Recall (managed as classification percentage), Precision, Accuracy and F1 Score were estimated from the confusion matrices of every test.
The classification results in Figure 17 show that when rotating the images by 45°, 90° and 315°, the classification performance decays significantly. The noise (GN and S&P) also significantly degrades the classification, which is expected for natural scenery images, since texture is representative of the image type, and noise alterations can push it towards another possible meaning. In normal conditions, without noise and rotations, the classification performance reaches 100%.
Additionally, Figure 18 shows the estimated precision (Figure 18a), recall (Figure 18b), accuracy (Figure 18c), and F1-score (Figure 18d) averages for the classes contained in the NSDB. In general, the classification of ideal images (0°, without rotations or noise) reaches 100%; however, when rotations and noise are added, this percentage decreases, with the sky class being the most affected.
The average results of the GA evaluation are depicted in Figure 19. Figure 19a shows the fitness evolution within a single run of 100 generations. Figure 19b shows that, while in some classes the highest fitness is achieved in the first iteration, in others 100 generations are not enough to achieve the best fitness. Figure 19c shows the best fitness obtained over 200 runs, with 100 generations per run and a population size of 21 individuals; all runs converge near the expected value of 100% classification.

6.2. Parallel Methodology Performance

The execution times taken by the proposed parallel causality methodology applied to the identification and classification of natural scenery, for a total of 700 images and 6 different scenery classes, varying the number of processes, are shown in Figure 20. These values are averages over 200 executions. The execution time decreases rapidly as the number of processes increases, reaching the best execution time when 125 concurrent processes are defined. With this configuration, each process works with 5 or 6 images, favoring an internal scheduling that uses the computing resources more efficiently, so that the sequential execution time decreases by 88.9%.

7. Discussion

To find the texture causality relationships that characterize a natural scenery, we found it necessary to implement a GA. With this solution, the automatic discrimination process for selecting the causal relationships that are important or relevant for classifying the proposed scenery classes was successfully achieved. Compared to some of the articles in the literature, the ability of our approach to perform the selection automatically allows the scenery classification problem to be studied with more parameters. In this way, a larger number of scenery classes or base textures could be considered in future implementations for several texture-classification purposes, given the efficient methodology and evolutionary algorithm proposed in this work. This proposal could help in recognizing natural scenery during the navigation of an autonomous car, or possibly a drone, being an important element in the safety of autonomous vehicle navigation.
As can be seen in Figure 17, the classification percentage obtained with the features selected by the evolutionary process surpasses that obtained with the manual selection version. This result is important because the representative causal relationships of a scenery are selected in a way that escapes the manual, purely numerical perspective: in the manual strategy a threshold value is specified and any value below it is set to zero, but the evolutionary strategy shows that some of these weak causal relationships are relevant for classifying the scenery, marking differences with other similar scenery classes. From the causality theory applied to the image reading sequences, we try to infer the order of appearance of the textures typified in the base dictionary, seeking to represent them as the temporal visual reading that we perceive as a type of natural scenery.

8. Conclusions

In this paper, a novel use of the Wiener–Granger causality theory supported by a genetic algorithm was presented, along with CBIR self-content analysis, applied to the identification and classification of six natural scenery classes: coast, forest, mountain, prairie, river/lake, and sky/cloud. With this new formulation, it was possible to find a set of descriptors from the causality matrices to represent a scenery class from a base set of reference textures, proposing a characterization of images based on the continuous appearance of textures within them; the base dictionary in this approach included the textures cloud, sky, rock, and forest. Unlike other approaches, our methodology deals with rotation and image noise, and the results show excellent classification percentages.
Under this approach, we obtained 100% image classification for the whole dataset; the methodology also provided the next best classification rate for the 180° rotation, showed sensitivity to intermediate rotation levels (45°, 90°), and had good results under salt-and-pepper image noise.
Regarding the proposal's performance, a parallel computing algorithm was designed. A reduction in execution times was achieved using a 19-processor dual-core cluster server and the MPI tool, reaching an 88.9% decrease with respect to the sequential version when 125 processes were launched.
Future work includes studying the performance of this proposal on other parallel architectures (e.g., GPU technology could perform the image feature-extraction stage efficiently), as well as implementing other evolutionary algorithms, such as Genetic Programming, to analyze all the image textures together, seeking to characterize the whole scenery and its associations with the paradigm of visual comprehension.

Author Contributions

Writing–review and editing, C.B.-A.; investigation, C.B.-A., C.A.-C. and J.V.-C.; resources, J.V.-C. and C.A.-C.; writing–original draft preparation, C.B.-A; validation, G.R.-A., C.A.-C. and J.V.-C.; conceptualization G.R.-A, C.B.-A., C.A.-C.; formal analysis, C.A.-C., G.R.-A., J.V.-C.; methodology, C.B.-A., J.V.-C. and G.R.-A; C.B.-A. supervised the overall research work. All authors contributed to the discussion and conclusion of this research.

Funding

This research received no external funding.

Acknowledgments

Cesar Benavides thanks the CONACyT for the scholarship support. This work has been supported by Fundación Carolina, Spain, under the scholarship program 2016–2017. This work has been done under project EL006-18, granted by the Metropolitan Autonomous University, Unidad Azcapotzalco, Mexico.

Conflicts of Interest

The authors declare no conflict of interest regarding the publication of this paper.

References

  1. Gustavo, O. Evolutionary Computer Vision: The First Footprints, 1st ed.; Natural Computing Series; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
  2. Nalwa, V.S. A Guided Tour of Computer Vision. Volume 1 of TA1632; Addison Wesley: Boston, MA, USA, 1993. [Google Scholar]
  3. Gonzalez, R.C.; Woods, R.E. Digital Image Processing; Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA, 1992. [Google Scholar]
  4. Tyagi, V. Content-Based Image Retrieval: Ideas, Influences, and Current Trends, 1st ed.; Springer: Singapore, 2017. [Google Scholar]
  5. Benavides, C.; Villegas-Cortez, J.; Roman, G.; Aviles-Cruz, C. Reconocimiento de rostros a partir de la propia imagen usando técnica cbir. In Proceedings of the X Congreso Español sobre Metaheurísticas, Algoritmos Evolutivos y Bioinspirados (MAEB 2015), Merida Extremadura, Spain, 4–6 February 2015; pp. 733–740. [Google Scholar]
  6. Benavides, C.; Villegas-Cortez, J.; Roman, G.; Aviles-Cruz, C. Face classification by local texture analysis through cbir and surf points. IEEE Latin Am. Trans. 2016, 14, 2418–2424. [Google Scholar] [CrossRef]
  7. Granger, C.W. Investigating causal relations by econometric models and cross-spectral methods. Econometrica 1969, 37, 424–438. [Google Scholar] [CrossRef]
  8. Serrano-Talamantes, J.F.; Aviles-Cruz, C.; Villegas-Cortez, J.; Sossa-Azuela, J.H. Self organizing natural scene image retrieval. Expert Syst. Appl. Int. J. 2012, 40, 2398–2409. [Google Scholar] [CrossRef]
  9. Deb, K.; Kalyanmoy, D. Multi-Objective Optimization Using Evolutionary Algorithms; John Wiley & Sons, Inc.: New York, NY, USA, 2001. [Google Scholar]
  10. Vogel, J.; Schiele, B. Performance evaluation and optimization for content-based image retrieval. Pattern Recognit. 2006, 39, 897–909. [Google Scholar] [CrossRef]
  11. Oliva, A.; Torralba, A. Modeling the shape of the scene: A holistic representation of the spatial envelope. Int. J. Comput. Vis. 2001, 42, 145–175. [Google Scholar] [CrossRef]
  12. Begum, R.; Halse, S.V. The smart car parking system using gsm and labview. J. Comput. Math. Sci. 2018, 9, 135–142. [Google Scholar] [CrossRef]
  13. Blaifi, S.; Moulahoum, S.; Colak, I.; Merrouche, W. Monitoring and enhanced dynamic modeling of battery by genetic algorithm using labview applied in photovoltaic system. Electr. Eng. 2017, 100, 1–18. [Google Scholar] [CrossRef]
  14. Alam, A.; Jaffery, Z. A Vision-Based System for Traffic Light Detection. In Proceedings of SIGMA 2018; 2019; Volume 1, pp. 333–343. [Google Scholar]
  15. Baltruŝaitis, T.; Robinson, P.; Morency, L. Openface: An open source facial behavior analysis toolkit. In Proceedings of the 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Placid, NY, USA, 7–10 March 2016; pp. 1–10. [Google Scholar]
  16. Sultana, M.G. A Content Based Feature Combination Method for Face Recognition; Springer: Heidelberg, Germany, 2013; Volume 226, pp. 197–206. [Google Scholar]
  17. Madhavi, D.; Patnaik, R. Genetic Algorithm-Based Optimized Gabor Filters for Content-Based Image Retrieval; Springer: Singapore, 2018; pp. 157–164. [Google Scholar]
  18. Desai, R.; Sonawane, B. Gist, Hog, and Dwt-Based Content-Based Image Retrieval for Facial Images; Springer: Singapore, 2017; Volume 468, pp. 297–307. [Google Scholar]
  19. Gao, J.; Yang, J.; Zhang, J.; Li, M. Natural scene recognition based on convolutional neural networks and deep boltzmannn machines. In Proceedings of the 2015 IEEE International Conference on Mechatronics and Automation (ICMA), Beijing, China, 2–5 August 2015; pp. 2369–2374. [Google Scholar]
  20. Meng, F.; Wang, X.; Shao, F.; Wang, D.; Hua, X. Energy-efficient gabor kernels in neural networks with genetic algorithm training method. Electronics 2019, 8, 105. [Google Scholar] [CrossRef]
  21. Vogel, J.; Schiele, B. Semantic modeling of natural scenes for content-based image retrieval. Int. J. Comput. Vis. 2007, 72, 133–157. [Google Scholar] [CrossRef]
  22. Traina, A.J.M.; Balan, A.G.R.; Bortolotti, L.M.; Traina, C. Content-based image retrieval using approximate shape of objects. In Proceedings of the 17th IEEE Symposium on Computer-Based Medical Systems, Bethesda, MD, USA, 25 June 2004; pp. 91–96. [Google Scholar]
  23. Dabbiru, L.; Aanstoos, J.V.; Ball, J.E.; Younan, N.H. Screening mississippi river levees using texture-based and polarimetric-based features from synthetic aperture radar data. Electronics 2017, 6, 29. [Google Scholar] [CrossRef]
  24. Liu, Y.; Zhang, D.; Lu, G.; Ma, W.Y. A survey of content-based image retrieval with high-level semantics. Pattern Recognit. 2007, 40, 262–282. [Google Scholar] [CrossRef]
  25. Zeng, P.; Li, Z.; Zhang, C. Scene Classification Using Spatial and Color Features. In Proceedings of the 8th International Conference on Intelligent Information Processing (IIP), Hangzhou, China, 17–20 October 2014; Volume AICT-432 of Intelligent Information Processing VII. Shi, Z., Wu, Z., Leake, D., Sattler, U., Eds.; Springer: Berlin/Heidelberg, Germany, 2014; pp. 259–268. [Google Scholar]
  26. Bressler, S.L.; Seth, A.K. Wiener-granger causality: A well established methodology. NeuroImage 2011, 58, 323–329. [Google Scholar] [CrossRef] [PubMed]
  27. Matias, F.S.; Gollo, L.L.; Carelli, P.V.; Bressler, S.L.; Copelli, M.; Mirasso, C.R. Modeling positive granger causality and negative phase lag between cortical areas. NeuroImage 2014, 99, 411–418. [Google Scholar] [CrossRef] [PubMed]
  28. Mannino, M.; Bressler, S.L. Foundational perspectives on causality in large-scale brain networks. Phys. Life Rev. 2015, 15, 107–123. [Google Scholar] [CrossRef] [PubMed]
29. Zhang, H.; Li, X. Effective connectivity of facial expression network by using Granger causality analysis. Parallel Process. Images Optim. Med. Imaging Process. 2013, 8920, 89200K. [Google Scholar]
  30. Friston, K. Causal modelling and brain connectivity in functional magnetic resonance imaging. PLoS Biol. 2009, 7, e1000033. [Google Scholar] [CrossRef] [PubMed]
31. Kim, E.; Kim, D.S.; Ahmad, F.; Park, H. Pattern-based Granger causality mapping in fMRI. Brain Connect. 2013, 3, 569–577. [Google Scholar] [CrossRef] [PubMed]
32. Fablet, R.; Bouthemy, P.; Pérez, P. Nonparametric motion characterization using causal probabilistic models for video indexing and retrieval. IEEE Trans. Image Process. 2002, 11, 393–407. [Google Scholar] [CrossRef] [PubMed]
  33. Kular, D.; Ribeiro, E. Analyzing Activities in Videos Using Latent Dirichlet Allocation and Granger Causality; Springer International Publishing: Cham, Switzerland, 2015; pp. 647–656. [Google Scholar]
  34. Prabhakar, K.; Oh, S.; Wang, P.; Abowd, G.D.; Rehg, J.M. Temporal causality for the analysis of visual events. In Proceedings of the 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, 13–18 June 2010; pp. 1967–1974. [Google Scholar]
35. Zhang, C.; Yang, X.; Lin, W.; Zhu, J. Recognizing human group behaviors with multi-group causalities. In Proceedings of the 2012 IEEE/WIC/ACM International Joint Conferences on Web Intelligence and Intelligent Agent Technology—Volume 03, WI-IAT ’12, Macau, China, 4–7 December 2012; pp. 44–48. [Google Scholar]
36. Fan, Y.; Yang, H.; Zheng, S.; Su, H.; Wu, S. Video sensor-based complex scene analysis with Granger causality. Sensors 2013, 13, 13685–13707. [Google Scholar] [CrossRef] [PubMed]
37. Wiener, N. The Theory of Prediction; McGraw-Hill: New York, NY, USA, 1958; Chapter 8. [Google Scholar]
38. Barnett, L.; Seth, A.K. The MVGC multivariate Granger causality toolbox: A new approach to Granger-causal inference. J. Neurosci. Methods 2014, 223, 50–68. [Google Scholar] [CrossRef] [PubMed]
39. Nag, S. Vector quantization using the improved differential evolution algorithm for image compression. Genet. Program. Evol. Mach. 2019, 20, 187–212. [Google Scholar] [CrossRef] [Green Version]
  40. Shirali, A.; Kordestani, J.K.; Meybodi, M.R. Self-adaptive multi-population genetic algorithms for dynamic resource allocation in shared hosting platforms. Genet. Program. Evol. Mach. 2018, 19, 505–534. [Google Scholar] [CrossRef]
  41. Karpov, P.; Squillero, G.; Tonda, A. Valis: An evolutionary classification algorithm. Genet. Program. Evol. Mach. 2018, 19, 453–471. [Google Scholar] [CrossRef]
  42. Martínez, Y.; Trujillo, L.; Legrand, P.; Galvan-Lopez, E. Prediction of expected performance for a genetic programming classifier. Genet. Program. Evol. Mach. 2016, 17, 409–449. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Proposed general methodology applied for image recognition.
Figure 2. Classical methodology of image classification.
Figure 3. Example of a segmented texture zone in a natural scene.
Figure 4. Learning and testing architecture of the classification system.
Figure 5. A 10 × 10 grid partition image example; each cell is a window of 10 × 10 pixels.
Figure 6. Example of reading an image along the grid neighborhood.
Figure 7. Generation of a texture-based causality relationship matrix, η_s, using the WGC analysis.
Figure 8. The τ matrix generation process, for every s-th row ∈ τ.
Figure 9. The G_x genome construction.
Figure 10. Representation of the construction of N samples from a query image.
Figure 11. Flowchart of the GA implementation: (a) the proposed general GA flowchart, and (b) the GA implementation for a particular scenery s.
Figure 12. Application of the genetic operators.
Figure 13. The proposed parallel algorithm structure.
Figure 14. Example of images with Gaussian, salt, and pepper noise at 1%, 3%, and 10%, respectively.
Figure 15. Rotation angles applied to the image set.
Figure 16. Evolutionary texture causal relationship graphs obtained for each scenery.
Figure 17. Classification results of our proposal using the GA, considering different noise types and rotation configurations.
Figure 18. Technical efficiency measures of the best GA individuals for each scenery: (a) precision, (b) recall, (c) accuracy, and (d) F1 score.
Figure 19. Performance of the GA for the natural scenery contained in the NSDB.
Figure 20. Performance of the parallel methodology algorithm.
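To make the steps of Figures 5–8 easier to reproduce, the following is a minimal Python sketch (not the authors' code) of the first of them: partitioning an image into fixed-size windows and flattening each window into a 1-D pixel sequence, the raw material later fed to the WGC analysis. The 10 × 10 window size follows Figure 5; the raster reading order and grayscale input are assumptions.

```python
import numpy as np

def grid_time_series(image, win=10):
    """Partition a grayscale image into win x win windows (Figure 5) and
    flatten each window, row by row, into a 1-D pixel sequence (Figure 6).
    Returns an array of shape (n_windows, win * win)."""
    h, w = image.shape
    series = []
    for r in range(0, h - win + 1, win):
        for c in range(0, w - win + 1, win):
            series.append(image[r:r + win, c:c + win].ravel())
    return np.asarray(series, dtype=float)

# A 100 x 100 image yields the 10 x 10 grid of windows shown in Figure 5.
img = np.random.default_rng(0).integers(0, 256, size=(100, 100))
print(grid_time_series(img).shape)  # (100, 100)
```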
Table 1. HSI ranges of the base texture dictionary.
Texture | H-max | H-min | S-max | S-min | I-max | I-min
Cloud | 180 | 0 | 25 | 0 | 255 | 61
Sky | 113 | 93 | 255 | 25 | 61 | 255
Rock | 246 | 2 | 55 | 20 | 190 | 30
Forest | 102 | 28 | 255 | 10 | 229 | 3
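To illustrate how ranges such as those in Table 1 could be applied, the sketch below builds a Boolean mask of the pixels whose H, S, and I components fall inside a texture's interval. The two dictionary entries are placeholders on a 0–255 scale, not the paper's calibrated values.

```python
import numpy as np

# Hypothetical entries in the spirit of Table 1:
# texture -> (H_min, H_max, S_min, S_max, I_min, I_max), 0-255 scale.
DICTIONARY = {
    "cloud":  (0, 180, 0, 25, 61, 255),
    "forest": (28, 102, 10, 255, 3, 229),
}

def texture_mask(h, s, i, texture):
    """Pixels whose HSI components fall inside the texture's ranges."""
    h_lo, h_hi, s_lo, s_hi, i_lo, i_hi = DICTIONARY[texture]
    return ((h >= h_lo) & (h <= h_hi) &
            (s >= s_lo) & (s <= s_hi) &
            (i >= i_lo) & (i <= i_hi))

rng = np.random.default_rng(0)
h, s, i = (rng.integers(0, 256, size=(4, 4)) for _ in range(3))
print(texture_mask(h, s, i, "cloud"))
```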
Table 2. The obtained τ matrix values.
Scene/F | F_{T1,T2} | F_{T1,T3} | F_{T1,T4} | F_{T2,T1} | F_{T2,T3} | F_{T2,T4} | F_{T3,T1} | F_{T3,T2} | F_{T3,T4} | F_{T4,T1} | F_{T4,T2} | F_{T4,T3}
Forest | 0.370 | 0.289 | 0.0051 | 0.0269 | 0.037 | 0.0013 | 0.108 | 0.075 | 0.0849 | 0.00332 | 1.611 × 10⁻⁵ | 0.00062
Sky & cloud | 0.0441 | 0.0481 | 0.0321 | 0.0269 | 0.343 | 0.197 | 0.0044 | 0.0385 | 0.211 | 0.011 | 0.0070 | 0.0371
Coast | 0.0099 | 0.233 | 0.188 | 0.021 | 0.0363 | 0.0503 | 0.0542 | 0.0014 | 0.0646 | 0.1558 | 0.0772 | 0.109
Mountain | 0.085 | 0.405 | 0.0132 | 0.0497 | 0.0799 | 0.0698 | 0.1115 | 0.0102 | 0.0162 | 0.1076 | 0.0392 | 0.0133
Prader | 0.2401 | 0.1143 | 0.2400 | 0.0045 | 0.0140 | 0.0268 | 0.2151 | 0.1146 | 0.0006 | 0.0161 | 0.0140 | 0.0002
River | 0.1619 | 0.4053 | 0.1112 | 0.0061 | 0.0046 | 0.0035 | 0.0377 | 0.0322 | 0.1794 | 0.0066 | 0.0084 | 0.0432
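Each entry F_{Ti,Tj} of Table 2 is the magnitude of the causal influence of texture series Ti on Tj. The paper computes these with the MVGC formulation [38]; as a hedged stand-in, the sketch below estimates one such F statistic with the Granger test shipped in Python's statsmodels, at an assumed fixed lag order.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

def granger_f(cause, effect, lag=2):
    """F statistic of 'cause Granger-causes effect' at a fixed lag.
    statsmodels expects a two-column array and tests whether the second
    column helps predict the first."""
    data = np.column_stack([effect, cause])
    result = grangercausalitytests(data, maxlag=lag, verbose=False)
    return result[lag][0]["ssr_ftest"][0]

# Toy check: y is a lagged, noisy copy of x, so F should be large.
rng = np.random.default_rng(1)
x = rng.normal(size=300)
y = np.roll(x, 1) + 0.5 * rng.normal(size=300)
print(granger_f(cause=x, effect=y))
```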
Table 3. The τ matrix resulting from the manual selection of the most significant causal relationships.
Scene/F | F_{T1,T2} | F_{T1,T3} | F_{T1,T4} | F_{T2,T1} | F_{T2,T3} | F_{T2,T4} | F_{T3,T1} | F_{T3,T2} | F_{T3,T4} | F_{T4,T1} | F_{T4,T2} | F_{T4,T3}
Forest | 0.370 | 0.289 | 0.0051 | 0.0269 | 0.037 | 0 | 0.108 | 0.075 | 0.0849 | 0.00332 | 0 | 0
Sky & cloud | 0.0441 | 0.0481 | 0.0321 | 0.0269 | 0.343 | 0.197 | 0 | 0.0385 | 0.211 | 0 | 0 | 0.0371
Coast | 0 | 0.233 | 0.188 | 0 | 0.0363 | 0.0503 | 0.0542 | 0 | 0.0646 | 0.1558 | 0.0772 | 0.109
Mountain | 0.085 | 0.405 | 0 | 0.0497 | 0.0799 | 0.0698 | 0.1115 | 0 | 0 | 0.1076 | 0.0392 | 0
Prader | 0.2401 | 0.1143 | 0.2400 | 0 | 0.0140 | 0.0268 | 0.2151 | 0.1146 | 0 | 0.0161 | 0.0140 | 0
River | 0.1619 | 0.4053 | 0.1112 | 0 | 0 | 0 | 0.0377 | 0.0322 | 0.1794 | 0 | 0 | 0.0432
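Table 3 is Table 2 with the weakest relationships zeroed out. The selection is manual and made per scenery; the one-liner below reproduces the Forest row under the assumption of a simple cut-off value (the authors' actual criterion may differ).

```python
import numpy as np

forest = np.array([0.370, 0.289, 0.0051, 0.0269, 0.037, 0.0013,
                   0.108, 0.075, 0.0849, 0.00332, 1.611e-5, 0.00062])  # Table 2
# Zero everything below an assumed per-scenery threshold.
print(np.where(forest >= 0.0033, forest, 0.0))  # matches the Forest row of Table 3
```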
Table 4. The best individuals resulting from the evaluation of the GA per scenery.
Scene/F | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12
Forest | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0
Sky | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 1
Coast | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1
Mountain | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1
Prader | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0
River | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1
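Each row of Table 4 is a 12-bit genome whose k-th bit keeps (1) or discards (0) the k-th causal relationship of Table 2. As a companion to Figure 12, here is a minimal sketch of one-point crossover and bit-flip mutation on such genomes; the operator variants and the mutation rate are assumptions, not the paper's reported GA settings.

```python
import random

random.seed(42)

def crossover(a, b):
    """One-point crossover of two 12-bit genomes (cf. Figure 12)."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(genome, rate=1 / 12):
    """Independent bit-flip mutation at an assumed rate."""
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

forest = [1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0]  # Forest row of Table 4
sky    = [0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1]  # Sky row of Table 4
child, _ = crossover(forest, sky)
print(mutate(child))
```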
Table 5. Final τ values after applying the best individuals of the GA.
Scene/F | F_{T1,T2} | F_{T1,T3} | F_{T1,T4} | F_{T2,T1} | F_{T2,T3} | F_{T2,T4} | F_{T3,T1} | F_{T3,T2} | F_{T3,T4} | F_{T4,T1} | F_{T4,T2} | F_{T4,T3}
Forest | 0.370 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0849 | 0 | 0 | 0
Sky & cloud | 0 | 0.0481 | 0.0321 | 0 | 0.343 | 0.197 | 0.0044 | 0.0385 | 0 | 0 | 0 | 0.0371
Coast | 0 | 0 | 0.188 | 0.021 | 0 | 0 | 0 | 0.0014 | 0 | 0 | 0.0772 | 0.109
Mountain | 0.085 | 0 | 0 | 0.0497 | 0 | 0 | 0.1115 | 0.0102 | 0 | 0 | 0 | 0.0133
Prader | 0.2401 | 0 | 0.2400 | 0 | 0.0140 | 0 | 0.2151 | 0.1146 | 0.0006 | 0 | 0.0140 | 0
River | 0 | 0 | 0.1112 | 0.0061 | 0 | 0 | 0.0377 | 0.0322 | 0 | 0 | 0 | 0.0432
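Table 5 follows mechanically from Tables 2 and 4: the best individual of each scenery acts as a binary mask on the corresponding row of the raw τ matrix. The sketch below reproduces the Forest row of Table 5.

```python
import numpy as np

tau_forest = np.array([0.370, 0.289, 0.0051, 0.0269, 0.037, 0.0013,
                       0.108, 0.075, 0.0849, 0.00332, 1.611e-5, 0.00062])  # Table 2
best_forest = np.array([1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0])               # Table 4
print(tau_forest * best_forest)  # only F_{T1,T2} and F_{T3,T4} survive (Table 5)
```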
Table 6. Confusion matrix for a test with 50 images per scenery, using a manual selection of causal relationships.
Scene_i \ Scene_j | Forest | Sky | Coast | Mount | Prad | Riv
Forest | 0 | 0 | 31 | 19 | 0 | 0
Sky | 1 | 0 | 39 | 10 | 0 | 0
Coast | 3 | 0 | 36 | 11 | 0 | 0
Mount | 2 | 0 | 47 | 1 | 0 | 0
Prad | 5 | 0 | 40 | 5 | 0 | 0
Riv | 6 | 0 | 32 | 11 | 1 | 0
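The measures of Figure 18 can be read directly off a confusion matrix like Table 6, with rows as true sceneries and columns as predictions. A short sketch, using the counts as reconstructed in Table 6 above:

```python
import numpy as np

# Rows: true scenery; columns: predicted scenery (Table 6, manual selection).
cm = np.array([[0, 0, 31, 19, 0, 0],   # Forest
               [1, 0, 39, 10, 0, 0],   # Sky
               [3, 0, 36, 11, 0, 0],   # Coast
               [2, 0, 47,  1, 0, 0],   # Mountain
               [5, 0, 40,  5, 0, 0],   # Prader
               [6, 0, 32, 11, 1, 0]])  # River

tp = np.diag(cm).astype(float)
precision = tp / np.maximum(cm.sum(axis=0), 1)  # guard empty predicted classes
recall = tp / cm.sum(axis=1)
accuracy = tp.sum() / cm.sum()
f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
print(round(accuracy, 3))  # ~0.123: manual selection alone classifies poorly
```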
