Article

Gaze Information Channel in Van Gogh’s Paintings

Qiaohong Hao, Lijing Ma, Mateu Sbert, Miquel Feixas and Jiawan Zhang
1 College of Intelligence and Computing, Tianjin University, Yaguan Road 135, Tianjin 300350, China
2 Institute of Informatics and Applications, University of Girona, 17003 Girona, Spain
* Authors to whom correspondence should be addressed.
Entropy 2020, 22(5), 540; https://doi.org/10.3390/e22050540
Submission received: 4 April 2020 / Revised: 2 May 2020 / Accepted: 4 May 2020 / Published: 12 May 2020

Abstract

This paper uses quantitative eye tracking indicators to analyze the relationship between images of paintings and human viewing. First, we build the eye tracking fixation sequences through areas of interest (AOIs) into an information channel, the gaze channel. Although this channel can be interpreted as a generalization of a first-order Markov chain, we show that the gaze channel is fully independent of this interpretation, and stands even when first-order Markov chain modeling would no longer fit. The entropy of the equilibrium distribution and the conditional entropy of a Markov chain are extended with additional information-theoretic measures, such as joint entropy, mutual information, and conditional entropy of each area of interest. Then, the gaze information channel is applied to analyze a subset of Van Gogh paintings. Van Gogh artworks, classified by art critics into several periods, have been studied under computational aesthetics measures, which include the use of Kolmogorov complexity and permutation entropy. The gaze information channel paradigm allows the information-theoretic measures to analyze both individual gaze behavior and clustered behavior from observers and paintings. Finally, we show that there is a clear correlation between the gaze information channel quantities that come from direct human observation, and the computational aesthetics measures that do not rely on any human observation at all.

1. Introduction

The eye is one of the most important organs for human beings to perceive external objects and receive information. An eye tracking system can track the trajectory of the eye, thereby obtaining eye movement indicators such as fixation position, number of fixations, and fixation duration. By analyzing eye movement data, subjective viewing behavior can be quantified, so that we can expect to improve our ability to measure an individual’s understanding of an image or a scene.
As more and more researchers adopt eye tracking technology as a research tool, eye tracking has become a promising method in academic and industrial research. It has the potential to provide insights into many issues in the visual and cognitive fields: education [1,2,3], medicine [4,5,6,7], assistive technology for people with a variety of debilitating conditions [8,9,10], better interface design [11,12,13], marketing and media [14,15,16], and human–computer interaction for decision making [17,18,19]. Furthermore, eye movement provides a new perspective and experimental method for cognitive research [20,21,22].
Thus, there is an increasingly urgent need for quantitative comparison of eye movement indicators [23]. The scanpath map [24,25,26,27], the heat map [28,29], and the transition matrix [30] are several important methods for analyzing fixation sequences. The scanpath map represents the fixations as an ordered sequence, and vector- and character-based editing methods have been applied to calculate the similarity and difference of scanpaths. The heat map represents the eye movement data as a Gaussian mixture model, but because this method loses the sequence information of the fixations, indices based on the heat map can only reflect the similarity of different regions of the observed image, and ignore the order of fixation.
Compared with the heat map, modeling the gaze transitions as a first-order Markov chain transition matrix between areas of interest (AOIs) preserves the gaze switch information. Thus, quantitative analysis based on the transition matrix, of which gaze entropy (the entropy of the Markov chain) is one of the most important measures, has been used in recent years. Gaze entropy was first applied to flight simulation in [31], although it is only in recent years that it has gained growing interest from researchers. Shiferaw et al. have recently reviewed and discussed gaze entropy in [32].
As a first-order Markov chain can be interpreted as an information channel, we proposed for the first time the gaze information channel in [33], and applied it to study the artwork of Van Gogh. In addition to incorporating the stationary entropy and the gaze transition entropy, the gaze information channel paradigm allows for additional information-theoretic measures to analyze gaze behavior. The new informational measures include joint entropy, mutual information, and normalized mutual information. The gaze channel was further explored in [34], where the cognition of scientific posters was studied from the perspective of the gaze channel.
In this paper, we expand our previous work [33,34] along several lines:
  • Unlike in [33,34], the gaze channel is introduced without relying on the gaze sequences being interpreted as a first-order Markov chain.
  • We study images (artworks from Van Gogh) as in [33], versus posters containing text plus images in [34].
  • We study 12 Van Gogh artworks versus only three artworks, and 10 observers versus three observers in [33].
  • We use nine AOIs, versus only three in [33] and up to six in [34].
  • We use a regular grid division into AOIs, versus the predetermined, content-based AOIs in [34].
  • We compare vertical versus horizontal division, which allows an intuitive explanation of mutual information.
  • We present and interpret the evolution of gaze channel quantities with observation time.
  • We compare and relate our results with informational aesthetics measures described in the literature.
The rest of the paper is organized as follows. In Section 2, we present previous work on eye tracking data analysis based on transition matrices; in Section 3, we model the gaze sequences between AOIs as an information channel; in Section 4 and Section 5, we present the experimental design and the analysis of results; and conclusions and future work are presented in Section 6.

2. Background

Vandeberg et al. [35] used a multi-level Markov modeling approach to analyse gaze switch patterns. After modeling the individuals’ gaze as Markov chains, Krejtz et al. [36,37] calculated the entropy of the stationary distribution $H_s$ and the transition or conditional entropy $H_t$ to interpret the overall distribution of attention over AOIs, as the Markov chain transition probability matrix has a dual interpretation as a conditional probability matrix. Raptis et al. [38] asked participants to complete recognition tasks of various complexities and then used $H_s$ and $H_t$ for eye tracking analysis; the results revealed quantitative differences in visual search patterns among individuals. Raptis et al. [38] stated that eye gaze features, including gaze entropies, fixation duration, and fixation count, can reflect personal differences in cognitive styles.
Zhong et al. [39] modeled the relationship between image features and saliency as a Markov chain, and, in order to predict the transition probabilities of the Markov chain, they trained a support vector regression (SVR) on real eye tracking data. Finally, given the stationary distribution of this chain, a saliency map predicting the user’s attention can be obtained.
Huang [40] used gaze data of female observers browsing apparel retailers’ web pages to study how their attention was influenced by visual content composition and slot position in personalized banner ads. Gu et al. [30] used heatmap entropy (visual attention entropy, VAE) and its improved version, relative VAE (rVAE), to analyze eye tracking data of web page observation; the results showed that VAE and rVAE correlate with perceived aesthetics. Hwang et al. [41] stated that it is important to note that scenes consist of objects representing not only low-level visual information but also higher-level semantic data, and they presented a transitional semantic guidance computation to estimate gaze transitions.
Ma et al. [33] introduced the gaze information channel using Van Gogh paintings and, based on preliminary results, observed that the channel quantities admit a coherent interpretation that allows classifying both the observers and the artworks. Hao et al. [34] tracked observers’ eye movements while reading scientific posters, which contain both text and data, and modeled the eye tracking fixation sequences between AOIs as a Markov chain, and subsequently as an information channel, to find quantitative links between eye movements and cognitive comprehension. The AOIs were determined by the design of the poster.

3. Methodology

3.1. Gaze Information Channel

Given an image $I$ divided into $s$ AOIs, where the set of AOIs is $S = \{1, 2, \ldots, s\}$, let us build a matrix $C$ of successively visited AOIs. Element $ij$ of matrix $C$, $c_{ij}$, corresponds to how many times AOI $j$ has been visited immediately after AOI $i$, that is, how many times there has been a direct transition from $i$ to $j$. This information is extracted from the recorded gaze sequences. Observe that $\sum_j c_{ji}$ gives the total number of times AOI $i$ was visited. Observe also that if we consider an additional fictional AOI, say AOI “0”, that represents both the initial state before our gaze lands on the painting and the final state when our gaze leaves the painting, then the number of exits and the number of entries of any state have to be the same, that is, $\sum_j c_{ij} = \sum_j c_{ji}$ for all $i$. If the trajectories are not short, $\sum_j c_{ij} \approx \sum_j c_{ji}$, so for practical purposes we can consider them equal and ignore AOI “0”.

Observe that matrix $C$ can be considered as the realization of a joint occurrence of random variables $X$ and $Y$, $(X, Y)$, where each pair $(x, y)$ represents the occurrence of the gaze entering AOI $x$ and leaving for AOI $y$. Let $N = \sum_{ij} c_{ij}$, $N_i = \sum_j c_{ij}$, and $N_j = \sum_i c_{ij}$; then the joint probabilities can be constructed as $p(i, j) = c_{ij}/N$, the conditional probability matrix $P$ as $p_{ij} = p(j|i) = c_{ij}/N_i$, and the marginal probabilities $p(X) = p(Y)$ as $p_i = N_i/N$. Observe that by construction $p(X)\, P = p(Y)$. We have thus built an information channel [42] between the set $S$ of areas of interest and itself. Observe that this information channel can also be considered as a first-order Markov chain with equilibrium distribution $\pi = p(X) = p(Y)$ and transition matrix $P$.

In our previous work [33,34], we introduced the gaze information channel starting from the first-order Markov chain, while here we introduce the information channel first. The difference is not trivial: when directly introducing the information channel, we do not mind whether the gaze sequences follow a first-order or a higher-order Markov chain. However, even if the gaze follows a Markov chain of order higher than one, it is still possible, by the construction above, to model the gaze sequences as a first-order Markov chain. In that case, the transition probabilities between states should be understood as average ones: given AOI $i$, $p_{ij}$ would then give the average transition probability to AOI $j$, as the actual transition probabilities would depend on the given instant of the total observation time, and might change from the first seconds of observation to later seconds. Previous work has considered the gaze transitions as a first-order Markov chain [32].
Regarding the strategy for dividing an image into AOIs, there are mainly two options: content-dependent AOIs and grid AOIs. In this paper, grid AOIs are used.
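As an illustration, the following Python sketch builds the count matrix $C$, the joint and conditional probabilities, and the marginal distribution from a recorded AOI fixation sequence. It is a minimal sketch under the construction described above, not the code used in our experiments; the three-AOI toy sequence and all names in it are hypothetical.

```python
# Minimal sketch (not the authors' code): building the gaze channel of
# Section 3.1 from a recorded sequence of visited AOIs.
import numpy as np

def gaze_channel(aoi_sequence, n_aois):
    """Return count matrix C, joint p(i,j), marginal pi, and conditional P."""
    C = np.zeros((n_aois, n_aois))
    for i, j in zip(aoi_sequence[:-1], aoi_sequence[1:]):
        C[i, j] += 1                       # c_ij: direct transitions i -> j
    N = C.sum()
    joint = C / N                          # p(i, j) = c_ij / N
    pi = C.sum(axis=1) / N                 # marginal p(X) ~ p(Y)
    row_sums = C.sum(axis=1, keepdims=True)
    P = np.divide(C, row_sums, out=np.zeros_like(C), where=row_sums > 0)
    return C, joint, pi, P                 # P is the row-stochastic p(j|i)

# hypothetical gaze sequence over 3 AOIs (labeled 0, 1, 2)
seq = [0, 0, 1, 2, 1, 1, 0, 2, 2, 1]
C, joint, pi, P = gaze_channel(seq, 3)
print(pi)   # equilibrium-like distribution over the AOIs
print(P)    # transition/conditional probability matrix
```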

3.2. Gaze Information Channel Measures

In this section, Shannon’s information measures [42] for the gaze information channel are introduced. In addition to the gaze stationary entropy $H_s$ and the gaze transition entropy $H_t$ used in previous work [36], the gaze information channel makes it possible to introduce more informational measures to study the eye movement data. In the information channel, the stationary entropy $H_s$ is defined as
$$H_s = H(X) = H(Y) = -\sum_{i=1}^{s} \pi_i \log \pi_i,$$
and gives the uncertainty of the distribution of the gaze between the AOIs.

The entropy of the $i$-th row, $H(Y|i)$, is defined as
$$H(Y|i) = -\sum_{j=1}^{s} p_{ij} \log p_{ij},$$
and gives the uncertainty about the next AOI when the current gaze location is the $i$-th AOI.

The conditional entropy $H_t$ of the information channel is given by the weighted average of the values $H(Y|i)$,
$$H_t = H(Y|X) = \sum_{i=1}^{s} \pi_i H(Y|i) = -\sum_{i=1}^{s} \pi_i \sum_{j=1}^{s} p_{ij} \log p_{ij},$$
and represents the randomness or uncertainty of the next gaze transition over all AOIs.

The joint entropy $H(X,Y)$ of the information channel is the entropy of the joint distribution of $X$ and $Y$,
$$H(X,Y) = H(X) + H(Y|X) = H_s + H_t = -\sum_{i=1}^{s} \sum_{j=1}^{s} \pi_i p_{ij} \log (\pi_i p_{ij}),$$
and measures the total uncertainty of the information channel. Observe that, since for the gaze information channel $p(X) = p(Y) = \pi$, we have $H(X) = H(Y)$, and as $H(X,Y) = H(Y,X)$, then $H(Y|X) = H(X|Y)$.

The mutual information $I(X;Y)$, given by
$$I(X;Y) = H(X) + H(Y) - H(X,Y) = \sum_{i=1}^{s} \sum_{j=1}^{s} \pi_i p_{ij} \log \frac{p_{ij}}{\pi_j},$$
indicates the total correlation, or information shared, between the AOIs.
The relationship between information measures can be illustrated by a Venn diagram, as shown in Figure 1. The diagram represents the relationship between Shannon’s information measures.
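The following minimal sketch computes the above channel measures from an aggregated count matrix $C$. It assumes the counts have already been collected as in Section 3.1, uses base-2 logarithms, and works on a hypothetical toy matrix; it is an illustration of the definitions, not our experimental code.

```python
# Minimal sketch of the gaze channel measures of Section 3.2.
import numpy as np

def channel_measures(C, base=2):
    C = np.asarray(C, dtype=float)
    joint = C / C.sum()                         # p(i, j)
    pi = joint.sum(axis=1)                      # p(X); equals p(Y) up to boundary effects
    pj = joint.sum(axis=0)                      # p(Y)
    log = lambda x: np.log(x) / np.log(base)
    nz = joint > 0
    H_joint = -np.sum(joint[nz] * log(joint[nz]))          # H(X, Y)
    H_s = -np.sum(pi[pi > 0] * log(pi[pi > 0]))            # H_s = H(X)
    H_y = -np.sum(pj[pj > 0] * log(pj[pj > 0]))            # H(Y)
    H_t = H_joint - H_s                                    # H_t = H(Y|X)
    MI = H_s + H_y - H_joint                               # I(X; Y)
    return dict(H_s=H_s, H_t=H_t, H_joint=H_joint, MI=MI,
                MI_normalized=MI / H_s)

# toy (hypothetical) 3x3 count matrix
C = [[10, 3, 1], [2, 8, 4], [1, 5, 9]]
print(channel_measures(C))
```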

3.3. Informational Aesthetics Measures

To study the evolution of Van Gogh’s style, Rigau et al. [43,44,45,46,47] used a quantitative approach based on aesthetic measures, including the palette-based relative redundancy $M_b$, the Kolmogorov complexity-based redundancy $M_k$, and the region-based measure $M_s$, related to the number of regions needed to reach a given ratio of mutual information.
Given a color image of $N$ pixels, where $C$ represents the palette distribution ($X_{rgb}$ with $256^3 = 2^{24}$ colors or $X_l$ with $256 = 2^8$ luminance values), the palette entropy $H(C)$ stands for the uncertainty of a pixel, and the maximum entropy $H_{max}$ is 24 ($X_{rgb}$) or 8 ($X_l$), respectively. The relative redundancy $M_b$ is defined as
$$M_b = \frac{H_{max} - H(C)}{H_{max}},$$
where $M_b$ ranges in $[0, 1]$ and represents the reduction of pixel uncertainty due to the choice of a palette with a given color probability distribution instead of a uniform distribution. Observe that $M_b$ is similar to the redundancy per character of a natural language [48] and corresponds to Bense’s information-theoretic interpretation [49] of Birkhoff’s aesthetic measure [50].
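As an illustration, the sketch below computes $M_b$ for the luminance palette $X_l$. It is a minimal sketch that assumes the painting is loaded with Pillow from a hypothetical file path; it is not the authors’ original implementation.

```python
# Minimal sketch of the palette redundancy M_b for the luminance palette X_l.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("painting.png").convert("L"))    # hypothetical path, 8-bit luminance
counts = np.bincount(img.ravel(), minlength=256)
p = counts / counts.sum()
H_C = -np.sum(p[p > 0] * np.log2(p[p > 0]))   # palette entropy H(C), bits per pixel
H_max = 8.0                                   # log2(256) for X_l
M_b = (H_max - H_C) / H_max
print(M_b)
```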
From the perspective of Kolmogorov complexity, an image’s order or regularity can be measured by the difference between the image size $N \times H_{max}$ and its Kolmogorov complexity $K(I)$. The normalization of this order gives us the aesthetic measure
$$M_k = \frac{N \times H_{max} - K(I)}{N \times H_{max}},$$
where $M_k$ ranges in $[0, 1]$ and represents the degree of order of the image without any prior knowledge of the palette. Note that the higher the order of the image, the higher the compression ratio.
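A minimal sketch of $M_k$ follows. As an assumption, the Kolmogorov complexity $K(I)$ is approximated by the size in bits of a zlib-compressed copy of the raw pixel data; the compressor actually used in [44,47] may differ, and the image path is hypothetical.

```python
# Minimal sketch of M_k with K(I) approximated by a lossless compressed size.
import zlib
import numpy as np
from PIL import Image

img = np.asarray(Image.open("painting.png").convert("RGB"))  # hypothetical path
N = img.shape[0] * img.shape[1]            # number of pixels
H_max = 24.0                               # bits per pixel for the RGB palette
K_I = 8 * len(zlib.compress(img.tobytes(), 9))   # compressed size in bits (upper bound on K(I))
M_k = (N * H_max - K_I) / (N * H_max)
print(M_k)
```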
We can segment an image into regions. The coarsest segmentation is to consider the whole image as a single region, and the finest segmentation is to consider as many regions as pixels in the image. Given a segmentation, we represent by $R$ the normalized areas of the regions. A given region can contain pixels of different colors from the palette $C$. Thus, an information channel between colors $C$ and regions $R$ can be established. The mutual information between $C$ and $R$ is given by
$$I(C;R) = \sum_{c \in C} \sum_{r \in R} p(c, r) \log \frac{p(c, r)}{p(c)\, p(r)}.$$
For a decomposition of an image into $n$ regions, the ratio of mutual information is defined by
$$M_s(n) = \frac{I(C;R)}{H(C)},$$
and ranges from 0 to 1. When we have a single region the mutual information is 0, and thus the ratio is 0. When we have as many regions as pixels we have captured the whole correlation of the image, $I(C;R) = H(C)$, and the ratio is 1. We are interested in how many regions $n$ we need to divide the image into to arrive at a given percentage of mutual information. This is given by the inverse function
$$M_s^{-1}\!\left(\frac{I(C;R)}{H(C)}\right) = n,$$
and is interpreted as a measure of image compositional complexity.
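The sketch below illustrates the color–region channel and the ratio $M_s(n)$ for a fixed regular-grid segmentation into $n = 16$ regions. This is only an illustration under that assumption: the measure in [46,53] instead grows the partition so as to maximize the mutual information gain, and the image path is hypothetical.

```python
# Minimal sketch of the ratio of mutual information M_s(n) for a fixed grid
# segmentation (the original measure uses an MI-maximizing partitioning).
import numpy as np
from PIL import Image

img = np.asarray(Image.open("painting.png").convert("L"))   # hypothetical path
rows, cols = 4, 4                                           # n = 16 regions
h, w = img.shape
# region index of every pixel for a rows x cols grid
region = (np.arange(h)[:, None] * rows // h) * cols + (np.arange(w)[None, :] * cols // w)

joint = np.zeros((256, rows * cols))                        # p(c, r) before normalization
np.add.at(joint, (img.ravel(), region.ravel()), 1)
joint /= joint.sum()
pc, pr = joint.sum(1), joint.sum(0)                         # p(c), p(r)
nz = joint > 0
I_CR = np.sum(joint[nz] * np.log2(joint[nz] / (pc[:, None] * pr[None, :])[nz]))
H_C = -np.sum(pc[pc > 0] * np.log2(pc[pc > 0]))
print(I_CR / H_C)                                           # M_s(n) for this segmentation
```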
In addition to the above measures, Sigaki et al. [51] presented a quantitative analysis of art by estimating the permutation entropy and the statistical complexity of a painting, considered, as in the above measures, as an array of pixel values. Given an $N_x \times N_y$ image as a two-dimensional array, subarrays of size $d_x \times d_y$ are considered as single sequences of $d_x \times d_y$ components, and the ordering of the values of each sequence is classified into one of the $n = (d_x d_y)!$ possible orderings. For instance, for $d_x = d_y = 2$ we have $4! = 24$ possible orderings. All possible, overlapping, $d_x \times d_y$ subarrays are considered, and finally, after normalization, we obtain a distribution $P = \{p_i;\, i = 1, \ldots, n\}$ that represents the ordering of neighboring pixel values in the image. The only parameters of the method are the values $d_x$, $d_y$, also called embedding dimensions (for a more formal description, please refer to the work in [51,52]).
Then, the normalized permutation entropy $PE$ is calculated by dividing the Shannon entropy $S(P)$ of the distribution $P$,
$$S(P) = -\sum_{i=1}^{n} p_i \log p_i,$$
by its maximum possible value $\log(n)$:
$$PE(P) = \frac{S(P)}{\log(n)}.$$
Sigaki et al. [51] argue that although the value of $PE$ is a good measure of randomness, it cannot fully capture the degree of structural complexity present in the image matrix. Therefore, they further calculate the so-called statistical complexity $C(P)$,
$$C(P) = \frac{Q(P, U)\, PE(P)}{Q_{max}},$$
where $Q(P, U)$ is a relative entropic measure (the Jensen–Shannon divergence) between $P = \{p_i;\, i = 1, \ldots, n\}$ and the uniform distribution $U = \{u_i = 1/n;\, i = 1, \ldots, n\}$, computed as
$$Q(P, U) = S\!\left(\frac{P + U}{2}\right) - \frac{S(P)}{2} - \frac{S(U)}{2},$$
where $\frac{P + U}{2} = \left\{\frac{p_i + 1/n}{2};\, i = 1, \ldots, n\right\}$, and
$$Q_{max} = -\frac{1}{2}\left[\frac{n + 1}{n}\log(n + 1) + \log(n) - 2\log(2n)\right]$$
is a normalization constant corresponding to the maximum possible value of $Q(P, U)$.
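For illustration, the following sketch estimates $PE$ and $C$ for a greyscale image with embedding dimensions $d_x = d_y = 2$ and natural logarithms. It is a minimal re-implementation under these assumptions, not the code of [51,52]; the image path is hypothetical and the loop over patterns is unoptimized.

```python
# Minimal sketch of permutation entropy PE and statistical complexity C
# (2x2 embedding, natural logarithm).
import numpy as np
from PIL import Image
from itertools import permutations

img = np.asarray(Image.open("painting.png").convert("L"), dtype=float)  # hypothetical path
# all overlapping 2x2 subarrays, flattened row-wise to length-4 sequences
blocks = np.stack([img[:-1, :-1], img[:-1, 1:], img[1:, :-1], img[1:, 1:]], axis=-1)
blocks = blocks.reshape(-1, 4)
# ordinal pattern of each block: the permutation that sorts its 4 values
patterns = np.argsort(blocks, axis=1, kind="stable")
lookup = {perm: k for k, perm in enumerate(permutations(range(4)))}
codes = np.array([lookup[tuple(p)] for p in patterns])      # unoptimized, but clear

n = 24                                                      # (2*2)! possible orderings
P = np.bincount(codes, minlength=n) / len(codes)
S = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))          # Shannon entropy
PE = S(P) / np.log(n)                                       # normalized permutation entropy
U = np.full(n, 1.0 / n)
Q = S((P + U) / 2) - S(P) / 2 - S(U) / 2                    # Jensen-Shannon divergence
Q_max = -0.5 * ((n + 1) / n * np.log(n + 1) + np.log(n) - 2 * np.log(2 * n))
C_stat = Q * PE / Q_max                                     # statistical complexity
print(PE, C_stat)
```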

4. Experimental Design

4.1. Participants

Twelve Master’s students from Tianjin University were selected to take part in the experiment. All participants had normal or corrected-to-normal vision. For the twenty minutes before the start of the experiment, participants were asked not to use mobile phones or perform reading activities that may cause visual fatigue, and instead to perform eye exercises and similar activities that relax the eyes, mind, and body. The data from two participants had to be excluded because their eye tracking rate was below 98%. Finally, eye movement data of 10 students (6 females, 4 males, average age 24.8) were available for the study.

4.2. Stimuli

The stimuli are 12 paintings by Vincent Van Gogh in digital format. Van Gogh’s paintings are classified into six chronologically ordered periods: Earliest Paintings (1881–1883), Nuenen/Antwerp (1883–1886), Paris (1886–1888), Arles (1888–1889), Saint-Rémy (1889–1890), and Auvers-sur-Oise (1890). The paintings are divided into two groups (a and b), as shown in Figure 2, and each group includes one representative painting of each period (periods numbered from 1 to 6). We have considered the two groups of paintings used by Feixas et al. [53], which gives the values of the measures ($M_b$, $M_k$, $M_s$) for the 12 paintings. The 12 paintings were downloaded from The Vincent Van Gogh Gallery by David Brooks, http://www.Vggallery.com, one of the most thorough and comprehensive Van Gogh resources on the Web.

4.3. Apparatus

The experiment used the mobile eye tracking device SMI ETG 2w, produced by the German company SMI. The eye tracker’s two non-contact infrared cameras (60/120 Hz) capture images of the observer’s eyes and compute eye movements in real time based on the pupil and corneal reflection principles. Another camera on the eye tracker records the scene image that the observer is viewing. In addition, the eye tracker is equipped with a USB cable to transfer the data collected by the cameras to the eye tracking control system.
The eye tracking control system is a high-performance workstation running the iView X software. The video data collected by the eye tracker are transferred to the workstation for image data analysis after MPEG coding. The eye movement data acquisition software iView X performs the fixation point calibration before the formal observation; in our work, we adopted three-point calibration for higher accuracy. After the data collection is completed, the BeGaze software is used to extract the fixation positions.

4.4. Procedure

The calibration picture and the 12 Van Gogh paintings used in the formal experiment were presented on a computer monitor (1920 × 1080 resolution; 23.8-inch LCD). Each participant was invited to sit in a chair in front of the monitor, with their eyes about 60 to 80 cm away from the screen and their chin resting on a fixed bracket. The staff then used the iView X software to perform the 3-point calibration for each participant. After calibration, the Van Gogh paintings were displayed in full screen, in random order. The observation time for each painting was 45 s, with a 10 s rest after each painting, and the viewing mode was free-viewing, that is, no viewing task was assigned to the observer. Before the observation, the researchers did not disclose any information about the painting to be observed, in order to reduce the influence of top-down factors and facilitate the analysis of the relationship between human eye movement behavior and the painting content itself.

5. Result Analysis

5.1. Channel Measures Analysis with 9 AOIs

Each painting was divided into nine AOIs (as shown in Figure 3). This number of AOIs is a compromise between the level of detail we seek in the analysis and the sparseness of the transition matrices. In order to demonstrate the differences when observing each painting, for each AOI we add the fixations of the 10 observers together, and then we use the gaze information channel (as shown in Figure 4) to compute the clustered entropy and MI for each painting. The clustered values for all observers are shown in Appendix A, in Table A1–Table A12. We have built the equilibrium distribution $\pi = p(X) \approx p(Y)$ by normalizing the row totals. Table 1 shows the values of entropy, MI, normalized MI, and the aesthetics measures from Section 3.3 for the 12 paintings. The validity of the clustering strategy was shown in [34]. From Figure 5 (left), we can observe that there is little variation in $H(X)$, while there is an important variation in the $H(X,Y)$ values, mainly due to the variation of $H(X|Y)$ (as $H(X,Y) = H(X) + H(X|Y)$). The values of $H(X|Y)$ have a tendency to decrease from left to right, according to the chronological order of the paintings. From Figure 5 (right), we observe an increase in mutual information, attenuated in the case of the normalized one. Recalling that the paintings are ordered according to the evolution in time of Van Gogh’s style, and the interpretation of $H(X|Y)$ as randomness and of $I(X;Y)$ as correlation between the AOIs of the painting, Van Gogh’s evolution towards his mature style, with richer compositions, is reflected in an increase of mutual information in the gaze channel.
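A minimal sketch of the clustering strategy follows: the per-observer transition-count matrices of one painting are summed before the channel measures are computed. The counts below are random placeholders, not the experimental data of Appendix A.

```python
# Minimal sketch of the clustering of Section 5.1: sum the 9x9 transition-count
# matrices of the 10 observers for one painting, then feed the summed matrix to
# the gaze channel of Section 3.
import numpy as np

rng = np.random.default_rng(0)
per_observer_counts = [rng.integers(0, 20, size=(9, 9)) for _ in range(10)]  # placeholders

C_clustered = np.sum(per_observer_counts, axis=0)            # clustered counts
pi = C_clustered.sum(axis=1) / C_clustered.sum()             # equilibrium distribution
print(pi)                                                    # normalized row totals
```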
Next, in order to study the individual differences between observers, the clustered entropy and MI for each observer are computed: for each AOI, we add the fixations over the 12 paintings together, and then use the gaze information channel to compute the clustered entropy and MI for each observer. Table 2 shows the clustered entropy and MI for the 10 observers, and Figure 6 shows the clustered entropies and MI.
Comparing Figure 6 with Figure 5, it can be inferred that there is not as much difference between observers as there is between paintings. In Table 2, similar to Table 1, the standard deviation of $H(X)$ is the lowest among the $H(X)$, $H(X|Y)$, and $H(X,Y)$ values; thus, the differences happen more in the gaze switches among AOIs (given by $H(X|Y)$) than in the attention distribution among AOIs (given by $H(X)$). Moreover, from Figure 6 (left), it can be observed that $H(X)$, $H(X|Y)$, and $H(X,Y)$ present close values for the different observers. Figure 6 (right) shows that the main differences between observers can be found in the values of mutual information, with basically two kinds of observers: one with lower MI, around 1.2, and the other with higher MI, around 1.4. These differences are smoothed out when considering the normalized MI.

5.2. Comparison of Horizontal with Vertical Division

When reading text, human eye movement behavior is greatly affected by the direction of the text layout. For example, if the text is arranged, as usual, in horizontal lines, our eyes will move in the horizontal direction during reading. However, the direction of eye movement is more unpredictable when viewing images or paintings. Because of the differences in the content of the paintings, the observer’s attention distribution over different areas will be different. Therefore, in order to study the characteristics of the observer’s gaze switches and attention distribution according to different divisions into AOIs, we compared the gaze entropy and mutual information values obtained with horizontal and vertical divisions into three AOIs; see Figure 7.
As done for nine AOIs, the gaze sequences of the 12 paintings are integrated, and gaze channel measures of each observer are calculated under horizontal and vertical divisions, respectively, to analyze the characteristics of eye movement behavior of each observer. Similarly, to obtain the gaze measures for each painting, the gaze sequences of the 10 observers are firstly integrated, and then processed with the gaze information channel based on horizontal and vertical division AOIs.
Figure 8 gives the clustered gaze information measures $H(X)$, $H(X|Y)$, $H(X,Y)$, and $I(X;Y)$ for all observers under the horizontal and vertical divisions. For the vertical division, the entropies ($H(X)$, $H(X|Y)$, and $H(X,Y)$) are higher than for the horizontal division, while the mutual information $I(X;Y)$ is in general lower under the vertical division, except for observer 1. The larger mutual information represents a stronger correlation of the gaze between the horizontally divided areas, so it can be concluded that gaze shifts are more likely to occur in the horizontal direction.
Figure 9 presents the clustered gaze information measures $H(X)$, $H(X|Y)$, $H(X,Y)$, and $I(X;Y)$ of the 12 paintings under horizontal and vertical division. Similar to Figure 8, for most paintings the gaze measures $H(X)$, $H(X|Y)$, and $H(X,Y)$ generated by the vertical division are higher than for the horizontal division, while the mutual information is lower than for the horizontal division. However, painting a1 and painting b2 do not follow this rule: the $H(X)$ and $H(X,Y)$ of painting a1 are higher in the horizontal than in the vertical division, and the difference in $H(X|Y)$ between horizontal and vertical division for painting a1 is the smallest of all paintings. On the other hand, for painting b2, all four measures ($H(X)$, $H(X|Y)$, $H(X,Y)$, and $I(X;Y)$) show the opposite behavior to the other paintings.
For the difference in gaze measures caused by the two division types in Figure 8 and Figure 9, we can put forward the following explanation. On the one hand, the difference between the results of horizontal and vertical division is related to people’s inherent reading mode: when the areas of interest are divided horizontally, the number of gaze switches between different AOIs is relatively small, that is, the $H(X|Y)$ value is low. On the other hand, the painting content also has an important impact on the eye movement pattern. For paintings a2 and a4 (as shown in Figure 2), the horizontal division splits coherently the sky and the field, yielding higher mutual information, while the vertical lines cut off the continuous scene, resulting in higher entropy measures and lower mutual information. For paintings a1 and b2 (as shown in Figure 2), the main body of the picture is a person. In the process of observation, people tend to observe coherent content continuously, so the gaze shifts occur more in the vertical direction.

5.3. Comparison with Varying Observation Time

Figure 10 shows line charts of the entropies and MI of the 12 paintings for different observation times. It can be seen that $H(X)$, $H(X,Y)$, and $I(X;Y)$ basically present an increasing trend until a stable value is reached. First, the global scanning of the image over time is illustrated by the change of $H(X)$, which gradually increases and tends to stabilize, indicating that the fixation points become more evenly distributed among the different AOIs. This increase in $H(X)$ pushes the increase of $H(X,Y)$. On the other hand, the increase in $I(X;Y)$ tends to correspond to a decrease in $H(X|Y)$. The more we explore the image, the more correlation, or mutual information, we can discover, and the less the uncertainty in the exploration, given by $H(X|Y)$. We could thus divide the observation behavior into two stages: in a first stage, the observer scans the image globally, without a specific aim or plan, and after that, the observer focuses on details and on correlations within the image. This fits with the observations by Locher et al. [54,55].

5.4. Comparison with Aesthetic Measures

In this section, we study the relationship of the gaze channel measures with the informational aesthetics measures from Section 3.3. In Table 1, the values of entropy, MI, $M_k$, $M_b$, and $M_s^{-1}$ are given for the 12 paintings.

5.4.1. Comparison with $M_b$

Figure 11 shows the line charts of $M_b$ together with the entropies and the normalized MI for the 12 paintings. We see that the behavior of $M_b$ is rather opposite to the behavior of the entropies, and presents some similarity with the normalized mutual information. This can be interpreted as the measure $M_b$ representing correlation or redundancy in the scene. In fact, from its definition, $M_b$ is the normalized difference between the maximum entropy of the color histogram and the entropy of the color histogram used in the painting, and thus represents the redundancy existing in the palette used, giving a certain measure of correlation. However, $M_b$ does not take into account any spatial order, thus we cannot expect an accurate correlation.

5.4.2. Comparison with $M_k$

From an information theory perspective, the positive correlation shown in Figure 12d between the normalized MI and $M_k$ can be explained by the theoretical correspondence, or similarity, between the entropy rate expressed by $H(Y|X)$ and the Kolmogorov complexity $K(I)$ approximated by the file length of the compressed image. Let us remember that $M_k$ is given by $1 - \frac{K(I)}{N \times H_{max}}$ and the normalized MI by $\frac{I(X;Y)}{H(X)} = 1 - \frac{H(Y|X)}{H(X)}$. Thus, instead of analyzing the correlation between the normalized MI and $M_k$, we can equivalently analyze the relationship between the entropy rate and the Kolmogorov complexity.
Both measures express, from two different perspectives, the notion of compression. On the one hand, the entropy rate $H(Y|X)$ of a communication process quantifies the irreducible randomness in sequences produced by a source and also measures the size, in bits per symbol, of the optimal binary compression of the source [56]. Thus, a highly random process is difficult to compress. On the other hand, as mentioned above, the Kolmogorov complexity represents the difficulty in compressing an image, expressed by the set of bits which describes both its regularities and its random part. In our case, these measures are normalized in order to carry out a comparative study.
We also visualize in Figure 13 the change of the normalized mutual information, compared with $M_k$, as the viewing time increases from 5 s to 15 s and 45 s; the value of the gaze mutual information changes with the observation time.

5.4.3. Relationship with $M_s^{-1}$

We can find an indirect relationship of the gaze information channel measures with $M_s^{-1}$. If we look in [53] at the spatial division triggered by the color-to-region channel, we can see that the first divisions, which give the maximum increase in mutual information, are triggered along horizontal lines. This is fully in concordance with the findings in Section 5.2.

5.4.4. Comparison with $PE$ and $C$

Table 3 shows the values of MI, $M_k$, permutation entropy $PE$, and complexity $C$ for the 12 Van Gogh paintings considered. These values are displayed in Figure 14. We can observe that the MI, $M_k$, and complexity $C$ have similar curve patterns, but the behavior of the permutation entropy $PE$ is different. In fact, we can observe that $C \approx 1 - PE$; see Figure 15. This can be explained as follows. The behavior of the normalized Jensen–Shannon distance $Q(P, U)$ is similar to $1 - PE$, and thus $C$, as defined in Section 3.3, can be approximated by $C \approx (1 - PE)\, PE$, and for values of $PE$ near 1, as in our case, $C \approx 1 - PE$. For instance, for painting a1 in Table 3, $1 - PE = 0.0223$ and $C = 0.0279$, while for painting b4, $1 - PE = 0.0027$ and $C = 0.0036$. This is illustrated in Figure 15. We display in Figure 16 the normalized $H(X|Y)$ (normalized $H(X|Y)$ + normalized $I(X;Y) = 1$), $1 - M_k$, and $PE$. The correspondence between the normalized $H(X|Y)$ and $1 - M_k$ is the same as between the normalized $I(X;Y)$ and $M_k$. However, observe now the correspondence with $PE$ too. A correspondence between the Kolmogorov complexity measured by compressibility, $1 - M_k$, and the normalized permutation entropy $PE$ was somewhat to be expected, as $PE$ measures [51] the degree of disorder in the pixel arrangement of an image, and the more disorder, the less compressibility and the bigger the compressed file size, and vice versa.

6. Conclusions and Future Work

This paper uses quantitative indicators based on gaze information channel to study the relationship between Van Gogh artworks and human viewing. The eye tracking fixation sequences through areas of interest (AOIs) are modeled as an information channel, which extends the Markov chain modeling of those sequences.
For our study, we have used 12 Van Gogh paintings, two from each of the six periods in which critics classify Van Gogh art work. We have first shown that, with nine AOIs, the measures discriminate better between the different paintings than between the different observers. Then we have compared the values obtained with horizontal and vertical division into three AOIs, and found that in general the mutual information is higher for horizontal division. This can be put in correspondence with the semantic content of the painting.
Finally, we have compared previously defined computational measures used to study artworks with the measures derived from the information channel paradigm. We have shown the relationship between the computational measures, which are independent of any observer, and the information channel measures, which come from the eye trajectories of human observers. In particular, we have found a striking visual correlation between the measure $M_k$, which is related to compressibility, and the normalized mutual information MI, and, inversely, between the normalized conditional entropy of the channel, $1 - M_k$, and the permutation entropy, which has recently been used to classify artworks.
Although promising, this work has some limitations, such as the small number of participants and paintings. With more data, we could study the correlations quantitatively in addition to visually. Moreover, a larger variety of observers (e.g., laypersons vs. experts as in [57]) or painting styles (e.g., abstract vs. representational as in [58]) could be considered, as well as the aesthetic evaluation of the paintings by the observers. In the future, we will continue to explore the unique significance of human visual search patterns, which need to be paired with behavioral or cognitive metrics.

Author Contributions

Q.H. wrote the original draft, and performed visualization, formal analysis and investigation. L.M. performed the experiments and an initial data analysis. M.S. and M.F. provided the conceptualization and edited the draft. J.Z. provided supervision and funding acquisition. All authors have read and approved the final manuscript.

Funding

This work is supported by the National Natural Science Foundation of China under grant No. 61702359, and by grant TIN2016-75866-C3-3-R from the Spanish Government.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. The (unnormalized) transition matrices ( 9 × 9 ) of painting a1.
a1AOI1AOI2AOI3AOI4AOI5AOI6AOI7AOI8AOI9Total
AOI1116250000024
AOI2711485302120169
AOI3281203700032
AOI433164209411106
AOI5127016134151230217
AOI608921186035124
AOI70006112814050
AOI81206181141177166
AOI9000104271327
Total25168321052171255016726915
Table A2. The (unnormalized) transition matrices ( 9 × 9 ) of painting a2.
a1AOI1AOI2AOI3AOI4AOI5AOI6AOI7AOI8AOI9Total
AOI1171191110031
AOI27121201001033
AOI308800020018
AOI44016010904088
AOI5110934108256
AOI6161105736125119
AOI70051010220341
AOI80202482401169
AOI9111122864264
Total3131189456114417163519
Table A3. The (unnormalized) transition matrices ( 9 × 9 ) of painting a3.
a1AOI1AOI2AOI3AOI4AOI5AOI6AOI7AOI8AOI9Total
AOI14580143000070
AOI28257116210060
AOI3062000400030
AOI410401181611251167
AOI5315123186303141276
AOI61101331680210216
AOI72007615016183
AOI8110216317904134
AOI9002008062238
Total70603016627621783133391074
Table A4. The (unnormalized) transition matrices ( 9 × 9 ) of painting a4.
a1AOI1AOI2AOI3AOI4AOI5AOI6AOI7AOI8AOI9Total
AOI11152221720401163
AOI224120173152003184
AOI3022160345102197
AOI414114419172291
AOI53165157417232137
AOI60273134113676
AOI72123114112265
AOI80012528291562
AOI90011472134876
Total15818419691137766662811051
Table A5. The (unnormalized) transition matrices ( 9 × 9 ) of painting a5.
a1AOI1AOI2AOI3AOI4AOI5AOI6AOI7AOI8AOI9Total
AOI12210063023349
AOI2726718010151
AOI3173402600050
AOI41001101150800135
AOI52302113918458200
AOI61150192060120253
AOI702044187131112
AOI800207111601495
AOI9500032301573119
Total484949133200255113971201064
Table A6. The (unnormalized) transition matrices ( 9 × 9 ) of painting a6.
a1AOI1AOI2AOI3AOI4AOI5AOI6AOI7AOI8AOI9Total
AOI17374011300098
AOI2108522092001129
AOI312914896133313225
AOI401151091700210163
AOI5162251110001146
AOI61321301991402144
AOI7012111681013115
AOI8002190007112104
AOI9001700116891133
Total981312251631461441141031331257
Table A7. The (unnormalized) transition matrices ( 9 × 9 ) of painting b1.
a1AOI1AOI2AOI3AOI4AOI5AOI6AOI7AOI8AOI9Total
AOI1888181931203142
AOI254013271501184
AOI31615611123001109
AOI4222367601530118
AOI54910754805097
AOI60841628121464
AOI710013023516370
AOI801056214271065
AOI91100383113461
Total137841091159767706566810
Table A8. The (unnormalized) transition matrices ( 9 × 9 ) of painting b2.
a1AOI1AOI2AOI3AOI4AOI5AOI6AOI7AOI8AOI9Total
AOI155110132010082
AOI21212284192001168
AOI301150011201176
AOI413311232211661186
AOI5218026248102393348
AOI600151121090218157
AOI70101352544079
AOI80006362615515220
AOI9002031901583122
Total8216676186348157792221221438
Table A9. The (unnormalized) transition matrices ( 9 × 9 ) of painting b3.
a1AOI1AOI2AOI3AOI4AOI5AOI6AOI7AOI8AOI9Total
AOI13814481100066
AOI211511825400091
AOI361645021100080
AOI49205124060092
AOI513122212300233295
AOI60312028121197181
AOI70007003214356
AOI80003223147015127
AOI901001103122754
Total6590809329518056128551042
Table A10. The (unnormalized) transition matrices ( 9 × 9 ) of painting b4.
a1AOI1AOI2AOI3AOI4AOI5AOI6AOI7AOI8AOI9Total
AOI1810144023000122
AOI21326300014359
AOI379533202111106
AOI4472817915178177282
AOI5010218432757148
AOI623021321402100201
AOI7200532178411142
AOI8290226029914154
AOI90112900173262
Total12059104281148202143154651276
Table A11. The (unnormalized) transition matrices ( 9 × 9 ) of painting b5.
a1AOI1AOI2AOI3AOI4AOI5AOI6AOI7AOI8AOI9Total
AOI11132432031300167
AOI22787216134100159
AOI33221490216001193
AOI41731712721800139
AOI562041699162174184
AOI60313117920414144
AOI7000217087150130
AOI810151431910518166
AOI9010029124114151
Total1671601921401841431311651511433
Table A12. The (unnormalized) transition matrices ( 9 × 9 ) of painting b6.
a1AOI1AOI2AOI3AOI4AOI5AOI6AOI7AOI8AOI9Total
AOI162515212200199
AOI247817032090113
AOI313151052414021156
AOI4412751512031122
AOI58052110315121156
AOI641951513212223203
AOI70101221567523125
AOI8090321778618142
AOI9013304181574118
Total951111561231562021251441221234

References

  1. Was, C.; Sansosti, F.; Morris, B. Eye-Tracking Technology Applications in Educational Research; IGI Global: Hershey, PA, USA, 2016. [Google Scholar]
  2. Prieto, L.P.; Sharma, K.; Wen, Y.; Dillenbourg, P. The Burden of Facilitating Collaboration: Towards Estimation of Teacher Orchestration Load Using Eye-tracking Measures; International Society of the Learning Sciences, Inc. (ISLS): Albuquerque, NM, USA, 2015. [Google Scholar]
  3. Ellis, E.M.; Borovsky, A.; Elman, J.L.; Evans, J.L. Novel Word Learning: An Eye-tracking Study. Are 18-month-old Late Talkers Really Different From Their Typical Peers? J. Commun. Disord. 2015, 58, 143–157. [Google Scholar] [CrossRef] [PubMed]
  4. Fox, S.E.; Faulkner-Jones, B.E. Eye-Tracking in the Study of Visual Expertise: Methodology and Approaches in Medicine. Frontline Learn. Res. 2017, 5, 29–40. [Google Scholar] [CrossRef]
  5. Jarodzka, H.; Boshuizen, H.P. Unboxing the Black Box of Visual Expertise in Medicine. Frontline Learn. Res. 2017, 5, 167–183. [Google Scholar] [CrossRef] [Green Version]
  6. Fong, A.; Hoffman, D.J.; Zachary Hettinger, A.; Fairbanks, R.J.; Bisantz, A.M. Identifying Visual Search Patterns in Eye Gaze Data; Gaining Insights into Physician Visual Workflow. J. Am. Med. Inform. Assoc. 2016, 23, 1180–1184. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. McLaughlin, L.; Bond, R.; Hughes, C.; McConnell, J.; McFadden, S. Computing Eye Gaze Metrics for the Automatic Assessment of Radiographer Performance During X-ray Image Interpretation. Int. J. Med. Inform. 2017, 105, 11–21. [Google Scholar] [CrossRef]
  8. Holzman, P.S.; Proctor, L.R.; Hughes, D.W. Eye-tracking Patterns in Schizophrenia. Science 1973, 181, 179–181. [Google Scholar] [CrossRef]
  9. Pavlidis, G.T. Eye Movements in Dyslexia: Their Diagnostic Significance. J. Learn. Disabil. 1985, 18, 42–50. [Google Scholar] [CrossRef]
  10. Zhang, L.; Wade, J.; Bian, D.; Fan, J.; Swanson, A.; Weitlauf, A.; Warren, A.; Sarkar, N. Cognitive Load Measurement in A Virtual Reality-based Driving System for Autism Intervention. IEEE Trans. Affect. Comput. 2017, 8, 176–189. [Google Scholar] [CrossRef]
  11. Vidal, M.; Bulling, A.; Gellersen, H. Pursuits: Spontaneous Eye-based Interaction for Dynamic Interfaces. GetMobile Mob. Comput. Commun. 2015, 18, 8–10. [Google Scholar] [CrossRef]
  12. Strandvall, T. Eye Tracking in Human-computer Interaction and Usability Research. In Human-Computer Interaction—INTERACT 2009, Proceedings of the 12th IFIP TC 13 International Conference, Uppsala, Sweden, 24–28 August 2009; Springer: Berlin/Heidelberg, Germany, 2010; pp. 936–937. [Google Scholar]
  13. Wang, Q.; Yang, S.; Liu, M.; Cao, Z.; Ma, Q. An Eye-tracking Study of Website Complexity from Cognitive Load Perspective. Decis. Support Syst. 2014, 62, 1–10. [Google Scholar] [CrossRef]
  14. Schiessl, M.; Duda, S.; Tholke, A.; Fischer, R. Eye tracking and Its Application in Usability and Media Research. MMI-Interakt. J. 2003, 6, 41–50. [Google Scholar]
  15. Steiner, G.A. The People Look at Commercials: A Study of Audience Behavior. J. Bus. 1966, 39, 272–304. [Google Scholar] [CrossRef]
  16. Lunn, D.; Harper, S. Providing Assistance to Older Users of Dynamic Web Content. Comput. Hum. Behav. 2011, 27, 2098–2107. [Google Scholar] [CrossRef]
  17. Stuijfzand, B.G.; Van der Schaaf, M.F.; Kirschner, F.C.; Ravesloot, C.J.; Van der Gijp, A.; Vincken, K.L. Medical Students’ Cognitive Load in Volumetric Image Interpretation: Insights from Human-computer Interaction and Eye Movements. Comput. Hum. Behav. 2016, 62, 394–403. [Google Scholar] [CrossRef] [Green Version]
  18. Ju, U.; Kang, J.; Wallraven, C. Personality Differences Predict Decision-making in An Accident Situation in Virtual Driving. In Proceedings of the 2016 IEEE Virtual Reality, Greenville, SC, USA, 19–23 March 2016; pp. 77–82. [Google Scholar]
  19. Chen, X.; Starke, S.D.; Baber, C.; Howes, A. A Cognitive Model of How People Make Decisions through Interaction with Visual Displays. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; pp. 1205–1216. [Google Scholar]
  20. Van Gog, T.; Scheiter, K. Eye Tracking as A Tool to Study and Enhance Multimedia Learning; Elsevier: Amsterdam, The Netherlands, 2010. [Google Scholar]
  21. Navarro, O.; Molina, A.I.; Lacruz, M.; Ortega, M. Evaluation of Multimedia Educational Materials Using Eye Tracking. Procedia-Soc. Behav. Sci. 2015, 197, 2236–2243. [Google Scholar] [CrossRef] [Green Version]
  22. Van Wermeskerken, M.; Van Gog, T. Seeing the Instructor’s Face and Gaze in Demonstration Video Examples Affects Attention Allocation but not Learning. Comput. Educ. 2017, 113, 98–107. [Google Scholar] [CrossRef]
  23. Duchowski, A.T.; Driver, J.; Jolaoso, S.; Tan, W.; Ramey, B.N.; Robbins, A. Scanpath Comparison Revisited. In Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications, Austin, TX, USA, 22–24 March 2010; pp. 219–226. [Google Scholar]
  24. De Bruin, J.A.; Malan, K.M.; Eloff, J.H.P. Saccade Deviation Indicators for Automated Eye Tracking Analysis. In Proceedings of the 2013 Conference on Eye Tracking South Africa, Cape Town, South Africa, 29–31 August 2013; pp. 47–54. [Google Scholar]
  25. Peysakhovich, V.; Hurter, C. Scanpath visualization and comparison using visual aggregation techniques. J. Eye Mov. Res. 2018, 10, 1–14. [Google Scholar]
  26. Mishra, A.; Kanojia, D.; Nagar, S.; Dey, K.; Bhattacharyya, P. Scanpath Complexity: Modeling Reading Effort Using Gaze Information. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017. [Google Scholar]
  27. Li, A.; Zhang, Y.; Chen, Z. Scanpath Mining of Eye Movement Trajectories for Visual Attention Analysis. In Proceedings of the 2017 IEEE International Conference on Multimedia and Expo (ICME), Hong Kong, China, 10–14 July 2017; pp. 535–540. [Google Scholar]
  28. Grindinger, T.; Duchowski, A.T.; Sawyer, M. Group-wise Similarity and Classification of Aggregate Scanpaths. In Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications, Austin, TX, USA, 22–24 March 2010; pp. 101–104. [Google Scholar]
  29. Isokoski, P.; Kangas, J.; Majaranta, P. Useful Approaches to Exploratory Analysis of Gaze Data: Enhanced Heatmaps, cluster Maps, and Transition Maps. In Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications, Warsaw, Poland, 14–17 June 2018; p. 68. [Google Scholar]
  30. Gu, Z.; Jin, C.; Dong, Z.; Chang, D. Predicting Webpage Aesthetics with Heatmap Entropy. arXiv 2018, arXiv:1803.01537. [Google Scholar] [CrossRef] [Green Version]
  31. Ellis, S.R.; Stark, L. Statistical Dependency in Visual Scanning. Hum. Factors 1986, 28, 421–438. [Google Scholar] [CrossRef]
  32. Shiferaw, B.; Downey, L.; Crewther, D. A review of gaze entropy as a measure of visual scanning efficiency. Neurosci. Biobehav. Rev. 2019, 96, 353–366. [Google Scholar] [CrossRef]
  33. Ma, L.J.; Sbert, M.; Xu, Q.; Feixas, M. Gaze Information Channel. In Pacific Rim Conference on Multimedia; Springer: Cham, Switzerland, 2018; pp. 575–585. [Google Scholar]
  34. Hao, Q.; Sbert, M.; Ma, L. Gaze Information Channel in Cognitive Comprehension of Poster Reading. Entropy 2019, 21, 444. [Google Scholar] [CrossRef] [Green Version]
  35. Vandeberg, L.; Bouwmeester, S.; Bocanegra, B.R.; Zwaan, R.A. Detecting cognitive interactions through eye movement transitions. J. Mem. Lang. 2013, 69, 445–460. [Google Scholar] [CrossRef]
  36. Krejtz, K.; Duchowski, A.; Szmidt, T.; Krejtz, I.; Gonzalez Perilli, F.; Pires, A.; Vilaro, A.; Villalobos, N. Gaze Transition Entropy. ACM TAP 2015, 13, 4. [Google Scholar] [CrossRef]
  37. Krejtz, K.; Szmidt, T.; Duchowski, A.; Krejtz, I.; Perilli, F.G.; Pires, A.; Vilaro, A.; Villalobos, N. Entropy-based Statistical Analysis of Eye Movement Transitions. In Proceedings of the 2014 Symposium on Eye Tracking Research and Applications, Safety Harbor, FL, USA, 26–28 March 2014; pp. 159–166. [Google Scholar]
  38. Raptis, G.E.; Fidas, C.A.; Avouris, N.M. On Implicit Elicitation of Cognitive Strategies using Gaze Transition Entropies in Pattern Recognition Tasks. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; pp. 1993–2000. [Google Scholar]
  39. Zhong, M.; Zhao, X.; Zou, X.C.; Wang, J.Z.; Wang, W. Markov chain based computational visual attention model that learns from eye tracking data. Pattern Recognit. Lett. 2014, 49, 1–10. [Google Scholar] [CrossRef]
  40. Huang, Y.T. The female gaze: Content composition and slot position in personalized banner ads, and how they influence visual attention in online shoppers. Comput. Hum. Behav. 2018, 82, 1–15. [Google Scholar] [CrossRef]
  41. Hwang, A.D.; Wang, H.C.; Pomplun, M. Semantic Guidance of Eye Movements in Real-world Scenes. Vis. Res. 2011, 51, 1192–1205. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  42. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley and Sons: Hoboken, NJ, USA, 1991; pp. 33–36. [Google Scholar]
  43. Wallraven, C.; Cunningham, D.W.; Rigau, J.; Feixas, M.; Sbert, M. Aesthetic appraisal of art: From eye movements to computers. In Computational Aesthetics 2009: Eurographics Workshop on Computational Aesthetics in Graphics, Visualization and Imaging; Eurographics: Goslar, Germany, 2009; pp. 137–144. [Google Scholar]
  44. Rigau, J.; Feixas, M.; Sbert, M. Conceptualizing Birkhoff’s Aesthetic Measure Using Shannon Entropy and Kolmogorov Complexity. In Computational Aesthetics; Eurographics: Goslar, Germany, 2007; pp. 105–112. [Google Scholar]
  45. Rigau, J.; Feixas, M.; Sbert, M. Informational dialogue with Van Gogh’s paintings. In Computational Aesthetics’08: Proceedings of the Fourth Eurographics Conference on Computational Aesthetics in Graphics, Visualization and Imaging; Eurographics: Goslar, Germany, 2008; pp. 115–122. [Google Scholar]
  46. Rigau, J.; Feixas, M.; Sbert, M. Informational aesthetics measures. IEEE Comput. Graph. Appl. 2008, 28, 24–34. [Google Scholar] [CrossRef] [Green Version]
  47. Rigau, J.; Feixas, M.; Sbert, M.; Wallraven, C. Toward Auvers Period: Evolution of Van Gogh’s Style. In Computational Aesthetics; Eurographics: Goslar, Germany, 2010; pp. 99–106. [Google Scholar]
  48. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar]
  49. Bense, M. Einführung in die informationstheoretische Ästhetik. Grundlegung und Anwendung in der Texttheorie (Introduction to the Information-theoretical Aesthetics. Foundation and Application in the Text Theory); Rowohlt Taschenbuch Verlag GmbH: Hamburg, Germany, 1969. [Google Scholar]
  50. Birkhoff, G.D. Aesthetic Measure; Harvard University Press: Cambridge, MA, USA, 1933. [Google Scholar]
  51. Sigaki, H.Y.; Perc, M.; Ribeiro, H.V. History of art paintings through the lens of entropy and complexity. Proc. Natl. Acad. Sci. USA 2018, 115, E8585–E8594. [Google Scholar] [CrossRef] [Green Version]
  52. Ribeiro, H.V.; Zunino, L.; Lenzi, E.K.; Santoro, P.A.; Mendes, R.S. Complexity-entropy causality plane as a complexity measure for two-dimensional patterns. PLoS ONE 2012, 7, e40689. [Google Scholar] [CrossRef] [Green Version]
  53. Feixas, M.; Bardera, A.; Rigau, J.; Xu, Q.; Sbert, M. Information theory tools for image processing. Synth. Lect. Comput. Graph. Animat. 2014, 6, 1–164. [Google Scholar] [CrossRef]
  54. Locher, P.; Krupinski, E.A.; Mello-Thoms, C.; Nodine, C.F. Visual interest in pictorial art during an aesthetic experience. Spat. Vis. 2007, 21, 55–77. [Google Scholar] [CrossRef] [PubMed]
  55. Locher, P.J. The usefulness of eye movement recordings to subject an aesthetic episode with visual art to empirical scrutiny. Psychol. Sci. 2006, 48, 106. [Google Scholar]
  56. Crutchfield, J.; Feldman, D. Regularities Unseen, Randomness Observed: Levels of Entropy Convergence. Chaos Interdiscip. J. Nonlinear Sci. 2003, 13, 25–54. [Google Scholar] [CrossRef]
  57. Vogt, S.; Magnussen, S. Expertise in pictorial perception: Eye-movement patterns and visual memory in artists and laymen. Perception 2007, 36, 91–100. [Google Scholar] [CrossRef]
  58. Pihko, E.; Virtanen, A.; Saarinen, V.-M.; Pannasch, S.; Hirvenkari, L.; Tossavainen, T.; Haapala, A.; Hari, R. Experiencing art: The influence of expertise and painting abstraction level. Front. Hum. Neurosci. 2011, 5, 94. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The information diagram represents the relationship between information channel measures.
Figure 2. Groups a and b of representative paintings of each period are shown (chronologically ordered from period 1 (a1 & b1) to period 6 (a6 & b6); Copyright 1996–2010 David Brooks). The values of $M_b$, $M_k$, and $M_s^{-1}(0.25)$ are labeled for each painting. (a1) Fisherman’s Wife on the Beach, 1882 (0.418, 0.759, 1264). (b1) Two Women in the Woods, 1882 (0.310, 0.650, 2020). (a2) Shepherd with a Flock of Sheep, 1884 (0.463, 0.739, 875). (b2) The Potato Eaters, 1885 (0.575, 0.850, 1417). (a3) The Seine with the Pont de la Grande Jette, 1887 (0.385, 0.718, 1396). (b3) Self-Portrait with Straw Hat, 1887 (0.295, 0.726, 1272). (a4) Sunset: Wheat Fields Near Arles, 1888 (0.345, 0.697, 1648). (b4) Vase with Fifteen Sunflowers, 1888 (0.349, 0.581, 2736). (a5) Olive Grove: Pale Blue Sky, 1889 (0.339, 0.593, 2456). (b5) Starry Night, 1889 (0.322, 0.594, 1758). (a6) Daubigny’s Garden, 1890 (0.315, 0.714, 2375). (b6) Thatched Cottages at Cordeville, 1890 (0.312, 0.592, 2095).
Figure 3. An example painting divided in the 9 AOIs.
Figure 4. The gaze information channel for painting b5 with 9 AOIs, between the AOIs, with equilibrium distribution (a,c) and information channel $X \to Y$ (b). Observe that the input (a) and output (c) distributions are the same.
Figure 5. The clustered entropies (left) and clustered MI values (right) of 12 paintings.
Figure 6. The clustered entropies (left) and clustered MI values (right) for 10 observers.
Figure 7. An example of horizontal division (left) and vertical division (right) with 3 AOIs.
Figure 8. The clustered gaze measures for all observers under horizontal and vertical division: (a) $H(X)$, (b) $H(X|Y)$, (c) $H(X,Y)$, (d) $I(X;Y)$.
Figure 9. The clustered gaze measures for all paintings under horizontal and vertical division: (a) $H(X)$, (b) $H(X|Y)$, (c) $H(X,Y)$, (d) $I(X;Y)$.
Figure 10. The line charts of the entropies and MI of the 12 paintings for different observation times: (a) $H(X)$, (b) $H(X|Y)$, (c) $H(X,Y)$, (d) $I(X;Y)$.
Figure 11. Comparing $M_b$ with the entropies and the normalized mutual information of the paintings: (a) $M_b$ and $H(X)$, (b) $M_b$ and $H(X|Y)$, (c) $M_b$ and $H(X,Y)$, (d) $M_b$ and normalized MI.
Figure 12. Comparing $M_k$ with the entropies and the normalized mutual information of the paintings: (a) $M_k$ and $H(X)$, (b) $M_k$ and $H(X|Y)$, (c) $M_k$ and $H(X,Y)$, (d) $M_k$ and normalized MI.
Figure 13. Comparison of $M_k$ and the normalized MI (by $H(X)$) at different observation times.
Figure 14. Comparison of the normalized MI (by $H(X)$), $M_k$, the permutation entropy $PE$, and the complexity $C$.
Figure 15. Illustration of the dependency of $C$ vs. $PE$ for the values corresponding to the 12 paintings.
Figure 16. Comparing the normalized $H(X|Y)$ (by $H(X)$), $1 - M_k$, and the permutation entropy $PE$ of the paintings.
Table 1. The clustered entropies, MI, $M_k$, $M_b$, and $M_s^{-1}$ for the 12 paintings.

ID | $H(X)$ | $H(X|Y)$ | $H(X,Y)$ | $I(X;Y)$ | Normalized MI by $H(X)$ | $M_k$ | $M_b$ | $M_s^{-1}$
a1 | 2.8246 | 1.6747 | 4.4993 | 1.1499 | 0.4071 | 0.759 | 0.418 | 1264
b1 | 3.1072 | 1.9764 | 5.0835 | 1.1417 | 0.3674 | 0.650 | 0.310 | 2020
a2 | 2.9821 | 1.7874 | 4.7695 | 1.1930 | 0.4000 | 0.739 | 0.463 | 875
b2 | 2.9958 | 1.4709 | 4.4667 | 1.5244 | 0.5088 | 0.850 | 0.575 | 1417
a3 | 2.8518 | 1.5188 | 4.3706 | 1.3336 | 0.4676 | 0.718 | 0.385 | 1396
b3 | 2.9251 | 1.6460 | 4.5711 | 1.2799 | 0.4375 | 0.726 | 0.295 | 1272
a4 | 3.0384 | 1.6874 | 4.7259 | 1.3591 | 0.4473 | 0.697 | 0.345 | 1648
b4 | 3.0211 | 1.7187 | 4.7399 | 1.3039 | 0.4316 | 0.581 | 0.349 | 2736
a5 | 2.9510 | 1.4157 | 4.3667 | 1.5296 | 0.5183 | 0.593 | 0.339 | 2456
b5 | 3.1591 | 1.6422 | 4.8013 | 1.5174 | 0.4803 | 0.594 | 0.322 | 1758
a6 | 3.1250 | 1.4451 | 4.5702 | 1.6797 | 0.5375 | 0.714 | 0.315 | 2375
b6 | 3.1379 | 1.7465 | 4.8844 | 1.3906 | 0.4432 | 0.592 | 0.312 | 2095
Average Value | 3.0099 | 1.6441 | 4.6541 | 1.3669 | 0.4539 | 0.684 | 0.369 | 1776
Standard Deviation | 0.1054 | 0.1547 | 0.2084 | 0.1615 | 0.0488 | 0.080 | 0.078 | 542
Table 2. The clustered entropies and MI for the 10 observers.

ID | $H(X)$ | $H(X|Y)$ | $H(X,Y)$ | $I(X;Y)$ | Normalized MI by $H(X)$
observer1 | 3.0243 | 1.7807 | 4.8049 | 1.2444 | 0.4115
observer2 | 3.0720 | 1.8207 | 4.8926 | 1.2531 | 0.4079
observer3 | 3.1014 | 1.7515 | 4.8529 | 1.3514 | 0.4357
observer4 | 3.0953 | 1.8891 | 4.9844 | 1.2048 | 0.3892
observer5 | 3.0950 | 1.7306 | 4.8257 | 1.3639 | 0.4407
observer6 | 3.1359 | 1.8256 | 4.9615 | 1.3090 | 0.4174
observer7 | 3.1335 | 1.9187 | 5.0522 | 1.2155 | 0.3879
observer8 | 3.1296 | 1.7296 | 4.8592 | 1.3986 | 0.4469
observer9 | 3.1534 | 1.7098 | 4.8631 | 1.4440 | 0.4579
observer10 | 3.1200 | 1.6647 | 4.7847 | 1.4557 | 0.4666
Average Value | 3.1060 | 1.7821 | 4.8881 | 1.3240 | 0.4262
Standard Deviation | 0.0357 | 0.0767 | 0.0811 | 0.0879 | 0.0261
Table 3. The MI, $M_k$, permutation entropy $PE$, and complexity $C$ for the 12 paintings.

ID | Normalized MI by $H(X)$ | $M_k$ | $PE$ | $C$
a1 | 0.4071 | 0.759 | 0.9777 | 0.0279
b1 | 0.3674 | 0.650 | 0.9924 | 0.0097
a2 | 0.4000 | 0.739 | 0.9621 | 0.0493
b2 | 0.5088 | 0.850 | 0.9154 | 0.1044
a3 | 0.4676 | 0.718 | 0.9518 | 0.0601
b3 | 0.4375 | 0.726 | 0.9231 | 0.0981
a4 | 0.4473 | 0.697 | 0.9140 | 0.1051
b4 | 0.4316 | 0.581 | 0.9973 | 0.0036
a5 | 0.5183 | 0.593 | 0.9940 | 0.0078
b5 | 0.4803 | 0.594 | 0.9760 | 0.0316
a6 | 0.5375 | 0.714 | 0.9260 | 0.0919
b6 | 0.4432 | 0.592 | 0.9867 | 0.0172
Average Value | 0.4539 | 0.6844 | 0.9597 | 0.0506
Standard Deviation | 0.0488 | 0.0800 | 0.0311 | 0.0383
