Search Results (17)

Search Parameters:
Authors = Mateu Sbert

22 pages, 8291 KiB  
Article
Viewpoint Selection for 3D-Games with f-Divergences
by Micaela Y. Martin, Mateu Sbert and Miguel Chover
Entropy 2024, 26(6), 464; https://doi.org/10.3390/e26060464 - 29 May 2024
Cited by 1 | Viewed by 1094
Abstract
In this paper, we present a novel approach to optimal camera selection in video games. The approach explores the use of information-theoretic metrics, the f-divergences, to measure the correlation between the objects as viewed in the camera frustum and the ideal or target view. The f-divergences considered are the Kullback–Leibler divergence (relative entropy), the total variation, and the χ2 divergence. Shannon entropy is also used for comparison purposes. Visibility is measured using the differential form factors from the camera to the objects and is computed by casting rays with importance-sampled Monte Carlo. Our method allows a very fast dynamic selection of the best viewpoints, which can take into account changes in the scene, in the ideal or target view, and in the objectives of the game. Our prototype is implemented in the Unity engine, and our results show an efficient selection of the camera and improved visual quality. The most discriminating results are obtained with the Kullback–Leibler divergence.
(This article belongs to the Section Information Theory, Probability and Statistics)
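For reference, the three f-divergences named in the abstract have these standard forms for discrete distributions p and q (standard notation, not quoted from the paper; here p would play the role of the observed visibility distribution and q that of the ideal or target view):

$$D_{\mathrm{KL}}(p\,\|\,q)=\sum_i p_i\log\frac{p_i}{q_i},\qquad D_{\mathrm{TV}}(p,q)=\frac{1}{2}\sum_i\left|p_i-q_i\right|,\qquad \chi^2(p\,\|\,q)=\sum_i\frac{(p_i-q_i)^2}{q_i}.$$

All three are instances of the f-divergence $D_f(p\,\|\,q)=\sum_i q_i\,f\!\left(p_i/q_i\right)$ for a convex $f$ with $f(1)=0$.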

15 pages, 2899 KiB  
Article
Robust Multiple Importance Sampling with Tsallis φ-Divergences
by Mateu Sbert and László Szirmay-Kalos
Entropy 2022, 24(9), 1240; https://doi.org/10.3390/e24091240 - 3 Sep 2022
Cited by 4 | Viewed by 2511
Abstract
Multiple Importance Sampling (MIS) combines the probability density functions (pdfs) of several sampling techniques. The combination weights depend on the proportion of samples used for the particular techniques. Weights can be found by optimizing the variance, but this approach is costly and numerically unstable. We show in this paper that MIS can be represented as a divergence problem between the integrand and the pdf, which leads to simpler computations and more robust solutions. The proposed idea is validated with 1D numerical examples and with the illumination problem of computer graphics.
(This article belongs to the Special Issue Information Theory in Signal Processing and Image Processing)
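For context, the classical MIS combination alluded to in the abstract is the balance heuristic, in which each technique's weight is proportional to its sample share (standard formulation, not quoted from the paper):

$$\hat F=\sum_{i=1}^{m}\frac{1}{n_i}\sum_{j=1}^{n_i}w_i(x_{i,j})\,\frac{f(x_{i,j})}{p_i(x_{i,j})},\qquad w_i(x)=\frac{n_i\,p_i(x)}{\sum_{k=1}^{m}n_k\,p_k(x)},$$

where $n_i$ samples are drawn from pdf $p_i$. The divergence view then measures how far the effective mixture pdf $\sum_k n_k p_k/N$ is from the integrand, instead of optimizing the variance directly.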

37 pages, 6795 KiB  
Article
A Bounded Measure for Estimating the Benefit of Visualization (Part II): Case Studies and Empirical Evaluation
by Min Chen, Alfie Abdul-Rahman, Deborah Silver and Mateu Sbert
Entropy 2022, 24(2), 282; https://doi.org/10.3390/e24020282 - 16 Feb 2022
Cited by 3 | Viewed by 3197
Abstract
Many visual representations, such as volume-rendered images and metro maps, feature a noticeable amount of information loss due to a variety of many-to-one mappings. At a glance, there seem to be numerous opportunities for viewers to misinterpret the data being visualized, thus undermining the benefits of these visual representations. In practice, there is little doubt that these visual representations are useful. The recently proposed information-theoretic measure for analyzing the cost–benefit ratio of visualization processes can explain the usefulness experienced in practice and postulates that viewers' knowledge can reduce the potential distortion (e.g., misinterpretation) due to information loss. This suggests that viewers' knowledge can be estimated by comparing the potential distortion without any knowledge and the actual distortion with some knowledge. However, the existing cost–benefit measure contains an unbounded divergence term, making the numerical measurements difficult to interpret. This is the second part of a two-part paper, which aims to improve the existing cost–benefit measure. Part I provided a theoretical discourse on the problem of unboundedness, reported a conceptual analysis of nine candidate divergence measures for resolving the problem, and eliminated three from further consideration. In this Part II, we describe two groups of case studies for evaluating the remaining six candidate measures empirically. In particular, we obtained instance data for (i) supporting the evaluation of the remaining candidate measures and (ii) demonstrating their applicability in practical scenarios for estimating the cost–benefit of visualization processes as well as the impact of human knowledge in those processes. The real-world data about visualization provide practical evidence for evaluating the usability and intuitiveness of the candidate measures. The combination of the conceptual analysis in Part I and the empirical evaluation in this part allows us to select the most appropriate bounded divergence measure for improving the existing cost–benefit measure.

25 pages, 2528 KiB  
Article
A Bounded Measure for Estimating the Benefit of Visualization (Part I): Theoretical Discourse and Conceptual Evaluation
by Min Chen and Mateu Sbert
Entropy 2022, 24(2), 228; https://doi.org/10.3390/e24020228 - 31 Jan 2022
Cited by 5 | Viewed by 2677
Abstract
Information theory can be used to analyze the cost–benefit of visualization processes. However, the current measure of benefit contains an unbounded term that is neither easy to estimate nor intuitive to interpret. In this work, we propose to revise the existing cost–benefit measure by replacing the unbounded term with a bounded one. We examine a number of bounded measures that include the Jensen–Shannon divergence, its square root, and a new divergence measure formulated as part of this work. We describe the rationale for proposing the new divergence measure. In this first part of the paper, we focus on a conceptual analysis of the mathematical properties of these candidate measures. We use visualization to support the multi-criteria comparison, narrowing the search down to several options with better mathematical properties. The theoretical discourse and conceptual evaluation in this part provide the basis for the further data-driven evaluation based on synthetic and experimental case studies that is reported in the second part of the paper.
(This article belongs to the Section Multidisciplinary Applications)
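For reference, the Jensen–Shannon divergence mentioned in the abstract is bounded, which is precisely what makes it a candidate replacement (standard definition, not quoted from the paper):

$$\mathrm{JS}(p\,\|\,q)=\tfrac{1}{2}D_{\mathrm{KL}}(p\,\|\,m)+\tfrac{1}{2}D_{\mathrm{KL}}(q\,\|\,m),\qquad m=\tfrac{1}{2}(p+q),\qquad 0\le\mathrm{JS}(p\,\|\,q)\le\log 2,$$

and its square root is a metric, unlike the unbounded Kullback–Leibler term it would replace.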

26 pages, 409 KiB  
Article
Generalizing the Balance Heuristic Estimator in Multiple Importance Sampling
by Mateu Sbert and Víctor Elvira
Entropy 2022, 24(2), 191; https://doi.org/10.3390/e24020191 - 27 Jan 2022
Cited by 11 | Viewed by 3730
Abstract
In this paper, we propose a novel and generic family of multiple importance sampling estimators. We first revisit the celebrated balance heuristic estimator, a widely used Monte Carlo technique for the approximation of intractable integrals. Then, we establish a generalized framework for the combination of samples simulated from multiple proposals. Our approach is based on treating as free parameters both the sampling rates and the combination coefficients, which coincide in the balance heuristic estimator. Thus, our novel framework contains the balance heuristic as a particular case. We study the optimal choice of the free parameters such that the variance of the resulting estimator is minimized. A theoretical variance study shows that the optimal solution is always better than the balance heuristic estimator (except in degenerate cases where both are the same). We also give sufficient conditions on the parameter values for the new generalized estimator to be better than the balance heuristic estimator, and one necessary and sufficient condition related to the χ2 divergence. Using five numerical examples, we first show the efficiency gap between the new and the classical balance heuristic estimators, for equal sampling and for several state-of-the-art sampling rates. Then, for these five examples, we report the variances for some notable selections of parameters, showing that, for the important case of equal sample counts, our new estimator with an optimal selection of parameters outperforms the classical balance heuristic. Finally, new heuristics are introduced that exploit the theoretical findings.
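A sketch of the decoupling described in the abstract, in our own notation (the paper's exact parameterization may differ): with $n_i$ samples drawn from each pdf $p_i$ and free combination coefficients $\alpha_i>0$, one can form

$$\hat F=\sum_{i=1}^{m}\frac{\alpha_i}{n_i}\sum_{j=1}^{n_i}\frac{f(x_{i,j})}{\sum_{k=1}^{m}\alpha_k\,p_k(x_{i,j})},$$

which is unbiased because $\sum_i\alpha_i p_i(x)/\sum_k\alpha_k p_k(x)=1$ for every $x$. Choosing $\alpha_i\propto n_i$ recovers the balance heuristic, so optimizing the $\alpha_i$ and the $n_i$ separately can only reduce the variance.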
19 pages, 366 KiB  
Article
Stochastic Order and Generalized Weighted Mean Invariance
by Mateu Sbert, Jordi Poch, Shuning Chen and Víctor Elvira
Entropy 2021, 23(6), 662; https://doi.org/10.3390/e23060662 - 25 May 2021
Viewed by 2497
Abstract
In this paper, we present theoretical order-invariance results for weighted quasi-arithmetic means of a monotonic series of numbers. The quasi-arithmetic mean, or Kolmogorov–Nagumo mean, generalizes the classical mean and appears in many disciplines, from information theory to physics, from economics to traffic flow. Stochastic orders are defined on weights (or, equivalently, discrete probability distributions). They were introduced to study risk in economics and decision theory, and have recently found utility in Monte Carlo techniques and in image processing. We show in this paper that, if two distributions of weights are ordered under first stochastic order, then for any monotonic series of numbers their weighted quasi-arithmetic means share the same order. This means, for instance, that the arithmetic and harmonic means for two different distributions of weights always have to be aligned if the weights are stochastically ordered; that is, either both means increase or both decrease. We explore the invariance properties when convex (concave) functions define both the quasi-arithmetic mean and the series of numbers, show their relationship with increasing concave order and increasing convex order, and observe the important role played by a newly defined mirror property of stochastic orders. We also give some applications to entropy and cross-entropy and present an example of the multiple importance sampling Monte Carlo technique that illustrates the usefulness and transversality of our approach. Invariance theorems are useful when a system is represented by a set of quasi-arithmetic means and we want to change the distribution of weights so that all means evolve in the same direction.
(This article belongs to the Special Issue Measures of Information)
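For reference, the weighted quasi-arithmetic (Kolmogorov–Nagumo) mean of $x_1\le\dots\le x_n$ with weights $w_i\ge 0$, $\sum_i w_i=1$, is (standard definition, not quoted from the paper)

$$M_f(\mathbf{w},\mathbf{x})=f^{-1}\!\left(\sum_{i=1}^{n}w_i\,f(x_i)\right),$$

for a continuous, strictly monotonic $f$: $f(x)=x$ gives the arithmetic mean, $f(x)=\log x$ the geometric mean, and $f(x)=1/x$ the harmonic mean. The invariance result stated in the abstract then reads: if $\mathbf{w}'$ dominates $\mathbf{w}$ in first stochastic order, then $M_f(\mathbf{w}',\mathbf{x})\ge M_f(\mathbf{w},\mathbf{x})$ for every such $f$, so all quasi-arithmetic means move in the same direction.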
28 pages, 1528 KiB  
Article
Interpreting Social Accounting Matrix (SAM) as an Information Channel
by Mateu Sbert, Shuning Chen, Miquel Feixas, Marius Vila and Amos Golan
Entropy 2020, 22(12), 1346; https://doi.org/10.3390/e22121346 - 28 Nov 2020
Cited by 2 | Viewed by 3756
Abstract
Information theory, through the concept of an information channel, allows us to calculate the mutual information between the source (input) and the receiver (output), both represented by probability distributions over their possible states. In this paper, we use the theory behind the information channel to provide an enhanced interpretation of a Social Accounting Matrix (SAM), a square matrix whose columns and rows present the expenditure and receipt accounts of economic actors. Under our interpretation, the SAM's coefficients, which can conceptually be viewed as a Markov chain, can be interpreted as an information channel, allowing us to optimize the desired level of aggregation within the SAM. In addition, the developed information measures can accurately describe the evolution of a SAM over time. Interpreting the SAM as an ergodic chain can show the effect of a shock on the economy after several periods or economic cycles. Under our new framework, finding the power limit of the matrix allows one to check (and confirm) whether the matrix is well constructed (irreducible and aperiodic), and to obtain new optimization functions for balancing the SAM. In addition to the theory, we also provide two empirical examples that support our channel concept and help to understand the associated measures.
(This article belongs to the Special Issue Entropy: The Scientific Tool of the 21st Century)
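A minimal sketch of the channel reading of a SAM, assuming the joint input–output distribution is the globally normalized matrix (the paper's construction may normalize differently); the 3×3 matrix below is invented for illustration:

```python
import numpy as np

def sam_mutual_information(sam: np.ndarray) -> float:
    """Read a SAM as an information channel and return I(X; Y) in bits.

    Rows are taken here as input (receipt) accounts and columns as
    output (expenditure) accounts; the joint distribution is the
    matrix normalized by its grand total.
    """
    joint = sam / sam.sum()                  # joint distribution p(x, y)
    px = joint.sum(axis=1, keepdims=True)    # input marginal p(x)
    py = joint.sum(axis=0, keepdims=True)    # output marginal p(y)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(joint > 0, joint / (px * py), 1.0)
    return float(np.sum(joint * np.log2(ratio)))  # zero cells contribute 0

# Toy 3-account economy (hypothetical figures, not from the paper)
sam = np.array([[4.0, 1.0, 0.0],
                [1.0, 3.0, 2.0],
                [0.0, 2.0, 5.0]])
print(sam_mutual_information(sam))
```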

23 pages, 3116 KiB  
Article
Gaze Information Channel in Van Gogh’s Paintings
by Qiaohong Hao, Lijing Ma, Mateu Sbert, Miquel Feixas and Jiawan Zhang
Entropy 2020, 22(5), 540; https://doi.org/10.3390/e22050540 - 12 May 2020
Cited by 3 | Viewed by 4219
Abstract
This paper uses quantitative eye tracking indicators to analyze the relationship between images of paintings and human viewing. First, we turn the eye tracking fixation sequences through areas of interest (AOIs) into an information channel, the gaze channel. Although this channel can be interpreted as a generalization of a first-order Markov chain, we show that the gaze channel is fully independent of this interpretation, and stands even when first-order Markov chain modeling would no longer fit. The entropy of the equilibrium distribution and the conditional entropy of a Markov chain are extended with additional information-theoretic measures, such as the joint entropy, the mutual information, and the conditional entropy of each area of interest. Then, the gaze information channel is applied to analyze a subset of Van Gogh paintings. Van Gogh's artworks, classified by art critics into several periods, have previously been studied under computational aesthetics measures, including Kolmogorov complexity and permutation entropy. The gaze information channel paradigm allows the information-theoretic measures to analyze both individual gaze behavior and clustered behavior from observers and paintings. Finally, we show that there is a clear correlation between the gaze information channel quantities that come from direct human observation and the computational aesthetics measures that do not rely on any human observation at all.
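A minimal sketch of the first-order construction, assuming transitions are counted between consecutive fixations; the AOI labels and the scanpath below are invented, and every AOI is assumed to occur at least once as a transition source:

```python
import numpy as np

def gaze_channel(fixations, aois):
    """Build the gaze channel from a fixation sequence: input
    distribution pi over source AOIs and row-stochastic transitions P."""
    idx = {a: i for i, a in enumerate(aois)}
    counts = np.zeros((len(aois), len(aois)))
    for a, b in zip(fixations, fixations[1:]):
        counts[idx[a], idx[b]] += 1.0
    pi = counts.sum(axis=1) / counts.sum()           # how often each AOI is a source
    P = counts / counts.sum(axis=1, keepdims=True)   # p(next AOI | current AOI)
    return pi, P

def conditional_entropy(pi, P):
    """H(Y|X) of the channel in bits; lower means more predictable gaze."""
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(P > 0, P * np.log2(P), 0.0)
    return float(-(pi @ terms.sum(axis=1)))

# Hypothetical scanpath over three areas of interest
pi, P = gaze_channel(list("AABACCABCC"), ["A", "B", "C"])
print(conditional_entropy(pi, P))
```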

24 pages, 11005 KiB  
Article
Gaze Information Channel in Cognitive Comprehension of Poster Reading
by Qiaohong Hao, Mateu Sbert and Lijing Ma
Entropy 2019, 21(5), 444; https://doi.org/10.3390/e21050444 - 28 Apr 2019
Cited by 13 | Viewed by 5441
Abstract
Today, eye trackers are extensively used in studying human cognition. However, it is hard to analyze and interpret eye movement data from the perspective of the cognitive comprehension of poster reading. To find quantitative links between eye movements and cognitive comprehension, we tracked observers' eye movements while they read scientific posters. We model eye tracking fixation sequences between content-dependent Areas of Interest (AOIs) as a Markov chain. Furthermore, we use the fact that a Markov chain is a special case of an information or communication channel: the gaze transitions can be modeled as a discrete information channel, the gaze information channel. Next, some traditional eye tracking metrics, together with the gaze entropy and the mutual information of the gaze information channel, are calculated to quantify cognitive comprehension for every participant. The analysis of the results demonstrates that the gaze entropy and mutual information from an individual gaze information channel are related to participants' individual differences. This is the first study in which eye tracking technology has been used to assess the cognitive comprehension of poster reading. The present work provides insights into human cognitive comprehension through the novel gaze information channel methodology.
(This article belongs to the Special Issue Information Theory Application in Visualization)

21 pages, 4328 KiB  
Article
Local Parallel Cross Pattern: A Color Texture Descriptor for Image Retrieval
by Qinghe Feng, Qiaohong Hao, Mateu Sbert, Yugen Yi, Ying Wei and Jiangyan Dai
Sensors 2019, 19(2), 315; https://doi.org/10.3390/s19020315 - 14 Jan 2019
Cited by 7 | Viewed by 3894
Abstract
Riding the wave of visual sensor equipment (e.g., personal smartphones, home security cameras, vehicle cameras, and camcorders), image retrieval (IR) technology has received increasing attention due to its potential applications in e-commerce, visual surveillance, and intelligent traffic. However, designing an effective feature descriptor has proven to be the main bottleneck in retrieving a set of images of interest. In this paper, we first construct a six-layer color quantizer to extract a color map. Then, motivated by the human visual system, we design a local parallel cross pattern (LPCP) in which the local binary pattern (LBP) map is amalgamated with the color map in "parallel" and "cross" manners. Finally, to reduce the computational complexity and improve robustness to image rotation, the LPCP is extended to the uniform local parallel cross pattern (ULPCP) and the rotation-invariant local parallel cross pattern (RILPCP). Extensive experiments are performed on eight benchmark datasets. The experimental results validate the effectiveness, efficiency, robustness, and computational complexity of the proposed descriptors in an in-depth comparison against eight state-of-the-art color texture descriptors. Additionally, compared with a series of Convolutional Neural Network (CNN)-based models, the proposed descriptors still achieve competitive results.
(This article belongs to the Special Issue Visual Sensors)
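The LBP map that LPCP fuses with the color map can be sketched as follows; this is the textbook 8-neighbor local binary pattern (the paper's exact variant, and the "parallel"/"cross" fusion itself, are not reproduced here):

```python
import numpy as np

def lbp_map(gray: np.ndarray) -> np.ndarray:
    """Basic 8-neighbor LBP over the interior pixels of a gray image.

    Each pixel receives an 8-bit code, one bit per neighbor, set when
    that neighbor is >= the center pixel.
    """
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = gray.shape
    center = gray[1:-1, 1:-1]
    code = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neighbor >= center).astype(np.uint8) << bit
    return code

# Hypothetical 6x6 gray image; a retrieval pipeline would histogram the codes
gray = np.random.default_rng(0).integers(0, 256, (6, 6))
print(lbp_map(gray))
```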

10 pages, 861 KiB  
Article
Some Order Preserving Inequalities for Cross Entropy and Kullback–Leibler Divergence
by Mateu Sbert, Min Chen, Jordi Poch and Anton Bardera
Entropy 2018, 20(12), 959; https://doi.org/10.3390/e20120959 - 12 Dec 2018
Cited by 5 | Viewed by 3836
Abstract
Cross entropy and Kullback–Leibler (K-L) divergence are fundamental quantities of information theory, and they are widely used in many fields. Since cross entropy is the negated logarithm of likelihood, minimizing cross entropy is equivalent to maximizing likelihood, and thus, cross entropy is applied for optimization in machine learning. K-L divergence also stands independently as a commonly used metric for measuring the difference between two distributions. In this paper, we introduce new inequalities regarding cross entropy and K-L divergence by using the fact that cross entropy is the negated logarithm of the weighted geometric mean. We first apply the well-known rearrangement inequality, followed by a recent theorem on weighted Kolmogorov means, and, finally, we introduce a new theorem that directly applies to inequalities between K-L divergences. To illustrate our results, we show numerical examples of distributions.
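The identity the inequalities build on is the one stated in the abstract: for distributions p and q,

$$H(p,q)=-\sum_i p_i\log q_i=-\log\prod_i q_i^{\,p_i},\qquad D_{\mathrm{KL}}(p\,\|\,q)=H(p,q)-H(p),$$

so the cross entropy is the negated logarithm of the weighted geometric mean of the $q_i$ with weights $p_i$, and order results for weighted means transfer directly to cross entropies and, via the second identity, to K-L divergences.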

22 pages, 30998 KiB  
Article
A Survey of Viewpoint Selection Methods for Polygonal Models
by Xavier Bonaventura, Miquel Feixas, Mateu Sbert, Lewis Chuang and Christian Wallraven
Entropy 2018, 20(5), 370; https://doi.org/10.3390/e20050370 - 16 May 2018
Cited by 37 | Viewed by 8070
Abstract
Viewpoint selection has been an emerging area in computer graphics for some years, and it is now reaching maturity with applications in fields such as scene navigation, scientific visualization, object recognition, mesh simplification, and camera placement. In this survey, we review and compare twenty-two measures for selecting good views of a polygonal 3D model, classify them using an extension of the categories defined by Secord et al., and evaluate them against the Dutagaci et al. benchmark. Eleven of these measures have not been reviewed in previous surveys. Three of the five short-listed best viewpoint measures are directly related to information theory. We also survey the fields in which the different viewpoint measures have been applied. Finally, we provide a publicly available framework in which all the viewpoint selection measures are implemented and can be compared against each other.
(This article belongs to the Special Issue Information Theory Application in Visualization)

15 pages, 1641 KiB  
Article
IBVis: Interactive Visual Analytics for Information Bottleneck Based Trajectory Clustering
by Yuejun Guo, Qing Xu and Mateu Sbert
Entropy 2018, 20(3), 159; https://doi.org/10.3390/e20030159 - 2 Mar 2018
Cited by 2 | Viewed by 5348
Abstract
Analyzing trajectory data plays an important role in practical applications, and clustering is one of the most widely used techniques for this task. The clustering approach based on the information bottleneck (IB) principle has shown its effectiveness for trajectory data; it requires neither a predefined number of clusters nor an explicit distance measure between trajectories. However, directly presenting the final results of IB clustering gives no clear picture of either the trajectory data or the clustering process. Visual analytics provides a powerful methodology to address this issue. In this paper, we present an interactive visual analytics prototype called IBVis to support an expressive investigation of IB-based trajectory clustering. IBVis provides various views that graphically present the key components of IB and the current clustering results. Rich user interactions make the different views work together, so as to monitor and steer the clustering procedure and to refine the results. In this way, users can gain insights into how to make better use of IB for trajectory data with different features, leading to better analysis and understanding of trajectory data. The applicability of IBVis has been demonstrated in usage scenarios. In addition, the conducted user study shows that IBVis is well designed and helpful for users.
(This article belongs to the Special Issue Information Theory Application in Visualization)

22 pages, 8680 KiB  
Article
Trajectory Shape Analysis and Anomaly Detection Utilizing Information Theory Tools
by Yuejun Guo, Qing Xu, Peng Li, Mateu Sbert and Yu Yang
Entropy 2017, 19(7), 323; https://doi.org/10.3390/e19070323 - 30 Jun 2017
Cited by 15 | Viewed by 6982
Abstract
In this paper, we propose to improve trajectory shape analysis by explicitly considering the speed attribute of trajectory data, and to achieve anomaly detection. The shape of an object's motion trajectory is modeled using Kernel Density Estimation (KDE), making use of both the angle attribute of the trajectory and the speed of the moving object. An unsupervised clustering algorithm, based on the Information Bottleneck (IB) method, is employed for trajectory learning to obtain an adaptive number of trajectory clusters by maximizing the Mutual Information (MI) between the clustering result and a feature set of the trajectory data. Furthermore, we propose to enhance the performance of IB by taking into account the clustering quality in each iteration of the clustering procedure. Trajectories are classified as either abnormal (infrequently observed) or normal by a measure based on Shannon entropy. Extensive tests on real-world and synthetic data show that the proposed technique behaves very well and outperforms state-of-the-art methods.
(This article belongs to the Section Information Theory, Probability and Statistics)
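A minimal sketch of the feature-modeling step, assuming a Gaussian KDE over per-step (angle, speed) pairs; the trajectory below is synthetic, and the paper's kernel, bandwidth, and feature encoding may differ:

```python
import numpy as np
from scipy.stats import gaussian_kde

def trajectory_density(angles: np.ndarray, speeds: np.ndarray) -> gaussian_kde:
    """Fit a 2D KDE over the (angle, speed) features of one trajectory."""
    return gaussian_kde(np.vstack([angles, speeds]))  # dataset shape (2, n)

# Synthetic trajectory: per-step headings (radians) and speeds
xy = np.cumsum(np.random.default_rng(1).normal(size=(50, 2)), axis=0)
steps = np.diff(xy, axis=0)
angles = np.arctan2(steps[:, 1], steps[:, 0])
speeds = np.linalg.norm(steps, axis=1)

kde = trajectory_density(angles, speeds)
print(kde([[0.0], [1.0]]))  # density at heading 0 rad, speed 1
```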

25 pages, 6193 KiB  
Article
Selecting Video Key Frames Based on Relative Entropy and the Extreme Studentized Deviate Test
by Yuejun Guo, Qing Xu, Shihua Sun, Xiaoxiao Luo and Mateu Sbert
Entropy 2016, 18(3), 73; https://doi.org/10.3390/e18030073 - 9 Mar 2016
Cited by 13 | Viewed by 10043
Abstract
This paper studies the relative entropy and its square root as distance measures between neighboring video frames for video key frame extraction. We develop a novel approach that handles both common and wavelet video sequences, in which the extreme Studentized deviate test is exploited to identify shot boundaries for segmenting a video sequence into shots. Video shots can then be divided into different sub-shots, according to whether the video content change is large or not, and key frames are extracted from the sub-shots. The proposed technique is general, effective, and efficient in dealing with video sequences of any kind. Our new approach can offer optional additional multiscale summarizations of video data, achieving a balance between retaining more detail and maintaining less redundancy. Extensive experimental results show that the new scheme obtains very encouraging results in video key frame extraction, in terms of both objective evaluation metrics and subjective visual perception.
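A minimal sketch of the frame-distance step, assuming gray-level histograms and a small epsilon for numerical safety (the bin counts are hypothetical, and the shot segmentation via the extreme Studentized deviate test is omitted):

```python
import numpy as np

def frame_divergence(hist_a: np.ndarray, hist_b: np.ndarray,
                     eps: float = 1e-12) -> float:
    """Relative entropy D(a || b) in bits between two frame histograms;
    the paper also considers its square root as the frame distance."""
    a = hist_a / hist_a.sum()
    b = hist_b / hist_b.sum()
    return float(np.sum(a * np.log2((a + eps) / (b + eps))))

# Hypothetical 8-bin histograms of two neighboring frames
f1 = np.array([120, 80, 40, 30, 20, 6, 3, 1], dtype=float)
f2 = np.array([100, 90, 45, 28, 22, 9, 4, 2], dtype=float)
print(frame_divergence(f1, f2))
```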
