A Framework for Designing the Architectures of Deep Convolutional Neural Networks
*Entropy* **2017**, *19*(6), 242; doi:10.3390/e19060242 - 24 May 2017
**Abstract**

Recent advances in Convolutional Neural Networks (CNNs) have obtained promising results in difficult deep learning tasks. However, the success of a CNN depends on finding an architecture to fit a given problem. Hand-crafting an architecture is a challenging, time-consuming process that requires expert knowledge and effort, owing to the large number of architectural design choices. In this article, we present an efficient framework that automatically designs a high-performing CNN architecture for a given problem. In this framework, we introduce a new optimization objective function that combines the error rate and the information learnt by a set of feature maps using deconvolutional networks (deconvnet). The new objective function allows the hyperparameters of the CNN architecture to be optimized in a way that enhances the performance by guiding the CNN through better visualization of learnt features via deconvnet. The actual optimization of the objective function is carried out via the Nelder-Mead Method (NMM). Further, our new objective function results in much faster convergence towards a better architecture. The proposed framework has the ability to explore a CNN architecture’s numerous design choices in an efficient way and also allows effective, distributed execution and synchronization via web services. Empirically, we demonstrate that the CNN architecture designed with our approach outperforms several existing approaches in terms of its error rate. Our results are also competitive with state-of-the-art results on the MNIST dataset and perform reasonably against the state-of-the-art results on the CIFAR-10 and CIFAR-100 datasets. Our approach plays a significant role in increasing the depth, reducing the size of strides, and constraining some convolutional layers not to be followed by pooling layers in order to find a CNN architecture that produces high recognition performance.
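The optimization step can be illustrated with SciPy's implementation of the Nelder-Mead method. The two-variable quadratic surrogate below merely stands in for the paper's real objective (training a CNN and scoring its error rate plus a deconvnet-based term); the hyperparameter names and the surrogate function itself are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the paper's objective: a function of two continuous
# hyperparameters (e.g. log learning rate, log weight decay) whose minimum
# plays the role of the best-performing architecture. In the real framework
# this would train a CNN and combine its error rate with a deconvnet term.
def objective(theta):
    lr, wd = theta
    return (lr - 0.3) ** 2 + (wd + 1.2) ** 2 + 0.05  # hypothetical surrogate

# Nelder-Mead is derivative-free, which is why it suits objectives that are
# only available through (noisy, expensive) training runs.
result = minimize(objective, x0=np.array([0.0, 0.0]), method="Nelder-Mead",
                  options={"xatol": 1e-6, "fatol": 1e-6})
print(result.x)  # converges near the surrogate optimum (0.3, -1.2)
```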

On the Energy-Distortion Tradeoff of Gaussian Broadcast Channels with Feedback
*Entropy* **2017**, *19*(6), 243; doi:10.3390/e19060243 - 24 May 2017
**Abstract**

This work studies the relationship between the energy allocated for transmitting a pair of correlated Gaussian sources over a two-user Gaussian broadcast channel with noiseless channel output feedback (GBCF) and the resulting distortion at the receivers. Our goal is to characterize the minimum transmission energy required for broadcasting a pair of source samples, such that each source can be reconstructed at its respective receiver to within a target distortion, when the source-channel bandwidth ratio is not restricted. This minimum transmission energy is defined as the energy-distortion tradeoff (EDT). We derive a lower bound and three upper bounds on the optimal EDT. For the upper bounds, we analyze the EDT of three transmission schemes: two schemes are based on separate source-channel coding and apply encoding over multiple samples of source pairs, and the third scheme is a joint source-channel coding scheme that applies uncoded linear transmission on a single source-sample pair and is obtained by extending the Ozarow–Leung (OL) scheme. Numerical simulations show that the EDT of the OL-based scheme is close to that of the better of the two separation-based schemes, which makes the OL scheme attractive for energy-efficient, low-latency and low-complexity source transmission over GBCFs.

The Tale of Two Financial Crises: An Entropic Perspective
*Entropy* **2017**, *19*(6), 244; doi:10.3390/e19060244 - 24 May 2017
**Abstract**

This paper provides a comparative analysis of stock market dynamics of the 1987 and 2008 financial crises and discusses the extent to which risk management measures based on entropy can be successful in predicting aggregate market expectations. We find that the Tsallis entropy is more appropriate for the short and sudden market crash of 1987, while the approximate entropy is the dominant predictor of the prolonged, fundamental crisis of 2008. We conclude by suggesting the use of entropy as a market sentiment indicator in technical analysis.
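As a minimal illustration of the entropy measure involved, the sketch below computes the Tsallis entropy $S_q = (1 - \sum_i p_i^q)/(q - 1)$ of a discrete distribution and checks that it approaches the Shannon entropy as $q \to 1$; the example distribution is arbitrary and unrelated to the market data studied in the paper.

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1), for q != 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def shannon_entropy(p):
    """Shannon entropy in nats, the q -> 1 limit of the Tsallis entropy."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Uniform distribution over n outcomes: S_q = (1 - n^(1-q)) / (q - 1)
p = np.full(4, 0.25)
print(tsallis_entropy(p, q=2.0))   # (1 - 4 * 0.25^2) / 1 = 0.75
print(shannon_entropy(p))          # log 4 ≈ 1.386
```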

Entropy Analysis of Monetary Unions
*Entropy* **2017**, *19*(6), 245; doi:10.3390/e19060245 - 24 May 2017
**Abstract**

This paper is an exercise in quantitative dynamic analysis applied to bilateral relationships among partners within two monetary union case studies: the Portuguese *escudo* zone monetary union (EZMU) and the European *euro* zone monetary union (EMU). Real-world data are tackled and measures that are usual in complex system analysis, such as entropy, mutual information, Canberra distance, and Jensen–Shannon divergence, are adopted. The emerging relationships are visualized by means of the multidimensional scaling and hierarchical clustering computational techniques. Results provide evidence of long-run stochastic dynamics that lead to asymmetric indebtedness mechanisms among the partners of a monetary union and to sustainability difficulties. The consequences of unsustainability and disruption of monetary unions are highly relevant to the discussion of optimal currency areas from a geopolitical perspective.
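Two of the measures named above are straightforward to compute. The sketch below implements the Jensen–Shannon divergence (in bits) and the Canberra distance for discrete vectors; the inputs are toy vectors, not the monetary-union series analyzed in the paper.

```python
import numpy as np

def shannon(p):
    """Shannon entropy in bits; zero-probability entries contribute nothing."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def jensen_shannon(p, q):
    """JSD(P, Q) = H(M) - (H(P) + H(Q)) / 2 with M = (P + Q) / 2, in bits."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    return shannon(m) - 0.5 * (shannon(p) + shannon(q))

def canberra(x, y):
    """Canberra distance: sum_i |x_i - y_i| / (|x_i| + |y_i|)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    num, den = np.abs(x - y), np.abs(x) + np.abs(y)
    mask = den > 0          # skip coordinates where both entries are zero
    return np.sum(num[mask] / den[mask])

p = np.array([0.5, 0.5, 0.0])
q = np.array([0.0, 0.5, 0.5])
print(jensen_shannon(p, q))  # 0.5 bits
print(canberra(p, q))        # 1 + 0 + 1 = 2.0
```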

Maxwell’s Demon—A Historical Review
*Entropy* **2017**, *19*(6), 240; doi:10.3390/e19060240 - 23 May 2017
**Abstract**

For more than 140 years Maxwell’s demon has intrigued, enlightened, mystified, frustrated, and challenged physicists in unique and interesting ways. Maxwell’s original conception was brilliant and insightful, but over the years numerous different versions of Maxwell’s demon have been presented. Most versions have been answered with reasonable physical arguments, with each of these answers (apparently) keeping the second law of thermodynamics intact. Though the laws of physics did not change in this process of questioning and answering, we have learned a lot along the way about statistical mechanics and thermodynamics. This paper will review a selected history and discuss some of the interesting historical characters who have participated.

Axiomatic Characterization of the Quantum Relative Entropy and Free Energy
*Entropy* **2017**, *19*(6), 241; doi:10.3390/e19060241 - 23 May 2017
**Abstract**

Building upon work by Matsumoto, we show that the quantum relative entropy with full-rank second argument is determined by four simple axioms: (i) Continuity in the first argument; (ii) the validity of the data-processing inequality; (iii) additivity under tensor products; and (iv) super-additivity. This observation has immediate implications for quantum thermodynamics, which we discuss. Specifically, we demonstrate that, under reasonable restrictions, the free energy is singled out as a measure of athermality. In particular, we consider an extended class of Gibbs-preserving maps as free operations in a resource-theoretic framework, in which a catalyst is allowed to build up correlations with the system at hand. The free energy is the only extensive and continuous function that is monotonic under such free operations.
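For reference, the quantity being axiomatized is the quantum relative entropy $S(\rho \| \sigma) = \mathrm{Tr}[\rho(\log \rho - \log \sigma)]$. A minimal numerical sketch, using diagonal qubit states as a toy example (the eigendecomposition route shown here is a generic numerical recipe, not a method from the paper):

```python
import numpy as np

def logm_psd(rho, eps=1e-12):
    """Matrix logarithm of a Hermitian positive-definite matrix via eigh."""
    w, v = np.linalg.eigh(rho)
    w = np.clip(w, eps, None)          # guard against tiny negative round-off
    return (v * np.log(w)) @ v.conj().T

def quantum_relative_entropy(rho, sigma):
    """S(rho || sigma) = Tr[rho (log rho - log sigma)], natural log."""
    return float(np.real(np.trace(rho @ (logm_psd(rho) - logm_psd(sigma)))))

# Two full-rank qubit states (diagonal, so the result reduces to classical KL)
rho = np.diag([0.9, 0.1])
sigma = np.diag([0.5, 0.5])
print(quantum_relative_entropy(rho, rho))    # 0
print(quantum_relative_entropy(rho, sigma))  # ≈ 0.368
```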

Entropy in Investigation of Vasovagal Syndrome in Passive Head Up Tilt Test
*Entropy* **2017**, *19*(5), 236; doi:10.3390/e19050236 - 20 May 2017
**Abstract**

This paper presents an application of Approximate Entropy (ApEn) and Sample Entropy (SampEn) in the analysis of heart rhythm, blood pressure and stroke volume for the diagnosis of vasovagal syndrome. The analyzed biosignals were recorded during positive passive tilt tests—HUTT(+). Signal changes and their entropy were compared in the three main phases of the test: supine position, tilt, and pre-syncope, with special focus on the latter, which was analyzed in a sliding window of each signal. In some cases, ApEn and SampEn were equally useful for the assessment of signal complexity (*p* < 0.05 in corresponding calculations). The complexity of the signals was found to decrease in the pre-syncope phase (SampEn(RRI): 1.20 to 0.34; SampEn(sBP): 1.29 to 0.57; SampEn(dBP): 1.19 to 0.48; SampEn(SV): 1.62 to 0.91). The pattern of the SampEn(SV) decrease differs from that of the SampEn(sBP), SampEn(dBP) and SampEn(RRI) decreases. For all signals, the lowest entropy values in the pre-syncope phase were observed at the moment when loss of consciousness occurred.
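A minimal SampEn implementation conveys the idea: SampEn(m, r) = −ln(A/B), where B counts pairs of length-m templates within tolerance r (Chebyshev distance, self-matches excluded) and A counts the corresponding length-(m+1) matches. This sketch uses a simple O(n²) counting scheme, and the signals are synthetic, not the HUTT recordings.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) = -ln(A / B): B counts template matches of length m,
    A those of length m + 1; self-matches excluded, Chebyshev distance."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)            # common default tolerance
    n = len(x)

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(d <= r) - 1   # subtract the self-match
        return count

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 400))  # highly regular signal
noisy = rng.standard_normal(400)                   # white noise
se_regular, se_noisy = sample_entropy(regular), sample_entropy(noisy)
print(se_regular, se_noisy)
# the regular signal yields a markedly lower SampEn than white noise
```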

Can a Robot Have Free Will?
*Entropy* **2017**, *19*(5), 237; doi:10.3390/e19050237 - 20 May 2017
**Abstract**

Using insights from cybernetics and an information-based understanding of biological systems, a precise, scientifically inspired definition of free will is offered and the essential requirements for an agent to possess it in principle are set out. These are: (a) there must be a self to self-determine; (b) there must be a non-zero probability of more than one option being enacted; (c) there must be an internal means of choosing among options (which is not merely random, since randomness is not a choice). For (a) to be fulfilled, the agent of self-determination must be organisationally closed (a “Kantian whole”). For (c) to be fulfilled: (d) options must be generated from an internal model of the self which can calculate future states contingent on possible responses; (e) choosing among these options requires their evaluation using an internally generated goal defined on an objective function representing the overall “master function” of the agent; and (f) for “deep free will”, at least two nested levels of choice and goal (d–e) must be enacted by the agent. The agent must also be able to enact its choice in physical reality. The only systems known to meet all these criteria are living organisms: not just humans, but a wide range of organisms. The main impediment to free will in present-day artificial robots is that they are not Kantian wholes. Consciousness does not seem to be a requirement, and the minimum complexity for a free-will system may be quite low, including relatively simple life-forms that are at least able to learn.

Lyapunov Spectra of Coulombic and Gravitational Periodic Systems
*Entropy* **2017**, *19*(5), 238; doi:10.3390/e19050238 - 20 May 2017
**Abstract**

An open question in nonlinear dynamics is the relation between the Kolmogorov entropy and the largest Lyapunov exponent of a given orbit. Both have been shown to have diagnostic capability for phase transitions in thermodynamic systems. For systems with long-range interactions, the choice of boundary plays a critical role and appropriate boundary conditions must be invoked. In this work, we compute Lyapunov spectra for Coulombic and gravitational versions of the one-dimensional systems of parallel sheets with periodic boundary conditions. Exact expressions for the time evolution of the tangent-space vectors are derived and are utilized to compute Lyapunov characteristic exponents using an event-driven algorithm. The results indicate that the energy dependence of the largest Lyapunov exponent emulates that of the Kolmogorov entropy for each system for a given system size. Our approach forms an effective and approximation-free instrument for studying the dynamical properties exhibited by the Coulombic and gravitational systems and finds applications in investigating indications of thermodynamic transitions in small as well as large versions of the spatially periodic systems. When a phase transition exists, we find that the largest Lyapunov exponent serves as a precursor of the transition that becomes more pronounced as the system size increases.

On Linear Coding over Finite Rings and Applications to Computing
*Entropy* **2017**, *19*(5), 233; doi:10.3390/e19050233 - 20 May 2017
**Abstract**

This paper presents a coding theorem for linear coding over finite rings, in the setting of the Slepian–Wolf source coding problem. This theorem covers corresponding achievability theorems of Elias (*IRE Conv. Rec.* 1955, 3, 37–46) and Csiszár (*IEEE Trans. Inf. Theory* 1982, 28, 585–592) for linear coding over finite fields as special cases. In addition, it is shown that, for any set of finite correlated discrete memoryless sources, there always exists a sequence of linear encoders over some finite non-field rings which achieves the data compression limit, the Slepian–Wolf region. Hence, the optimality problem regarding linear coding over finite non-field rings for data compression is closed with positive confirmation with respect to existence. For application, we address the problem of source coding for computing, where the decoder is interested in recovering a discrete function of the data generated and independently encoded by several correlated i.i.d. random sources. We propose linear coding over finite rings as an alternative solution to this problem. Results in Körner–Marton (*IEEE Trans. Inf. Theory* 1979, 25, 219–221) and Ahlswede–Han (*IEEE Trans. Inf. Theory* 1983, 29, 396–411, Theorem 10) are generalized to cases for encoding (pseudo) nomographic functions (over rings). Since a discrete function with a finite domain always admits a nomographic presentation, we conclude that both generalizations universally apply for encoding all discrete functions of finite domains. Based on these, we demonstrate that linear coding over finite rings strictly outperforms its field counterpart in terms of achieving better coding rates and reducing the required alphabet sizes of the encoders for encoding infinitely many discrete functions.

The Particle as a Statistical Ensemble of Events in Stueckelberg–Horwitz–Piron Electrodynamics
*Entropy* **2017**, *19*(5), 234; doi:10.3390/e19050234 - 19 May 2017
**Abstract**

In classical Maxwell electrodynamics, charged particles following deterministic trajectories are described by currents that induce fields, mediating interactions with other particles. Statistical methods are used when needed to treat complex particle and/or field configurations. In Stueckelberg–Horwitz–Piron (SHP) electrodynamics, the classical trajectories are traced out dynamically, through the evolution of a 4D spacetime event $x^{\mu}(\tau)$ as $\tau$ grows monotonically. Stueckelberg proposed to formalize the distinction between coordinate time $x^{0} = ct$ (measured by laboratory clocks) and chronology $\tau$ (the temporal ordering of event occurrence) in order to describe antiparticles and resolve problems of irreversibility such as grandfather paradoxes. Consequently, in SHP theory, the elementary object is not a particle (a 4D curve in spacetime) but rather an event (a single point along the dynamically evolving curve). Following standard deterministic methods in classical relativistic field theory, one is led to Maxwell-like field equations that are $\tau$-dependent and sourced by a current that represents a statistical ensemble of instantaneous events distributed along the trajectory. The width $\lambda$ of this distribution defines a correlation time for the interactions and a mass spectrum for the photons emitted by particles. As $\lambda$ becomes very large, the photon mass goes to zero and the field equations become $\tau$-independent Maxwell’s equations. Maxwell theory thus emerges as an equilibrium limit of SHP, in which $\lambda$ is larger than any other relevant time scale. Thus, statistical mechanics is a fundamental ingredient in SHP electrodynamics, and its insights are required to give meaning to the concept of a particle.

A Kullback–Leibler View of Maximum Entropy and Maximum Log-Probability Methods
*Entropy* **2017**, *19*(5), 232; doi:10.3390/e19050232 - 19 May 2017
**Abstract**

Entropy methods enable a convenient general approach to constructing a probability distribution from partial information. The minimum cross-entropy principle selects the distribution that minimizes the Kullback–Leibler divergence subject to the given constraints. This general principle encompasses a wide variety of distributions, and generalizes other methods that have been proposed independently. There remains, however, some confusion about the breadth of entropy methods in the literature. In particular, the asymmetry of the Kullback–Leibler divergence provides two important special cases when the target distribution is uniform: the maximum entropy method and the maximum log-probability method. This paper compares the performance of both methods under a variety of conditions. We also examine a generalized maximum log-probability method as a further demonstration of the generality of the entropy approach.
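The two special cases can be verified numerically. With a uniform target $u$ over $n$ outcomes, $D(p \| u) = \log n - H(p)$, so minimizing it maximizes entropy, while $D(u \| p) = -\log n - \frac{1}{n}\sum_i \log p_i$, so minimizing it maximizes average log-probability. A sketch with an arbitrary example distribution:

```python
import numpy as np

def kl(p, q):
    """Kullback–Leibler divergence D(p || q) in nats."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

n = 4
u = np.full(n, 1.0 / n)               # uniform target distribution
p = np.array([0.4, 0.3, 0.2, 0.1])    # arbitrary example distribution

H = -np.sum(p * np.log(p))            # Shannon entropy of p
mean_logp = np.mean(np.log(p))        # average log-probability under p

# D(p || u) = log n - H(p): minimizing it maximizes entropy.
print(kl(p, u), np.log(n) - H)
# D(u || p) = -log n - mean log p_i: minimizing it maximizes log-probability.
print(kl(u, p), -np.log(n) - mean_logp)
```

Note that the two divergences differ, which is exactly the asymmetry that separates the two methods.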

Ion Hopping and Constrained Li Diffusion Pathways in the Superionic State of Antifluorite Li₂O
*Entropy* **2017**, *19*(5), 227; doi:10.3390/e19050227 - 18 May 2017
**Abstract**

Li₂O belongs to the family of antifluorites that show superionic behavior at high temperatures. While some of the superionic characteristics of Li₂O are well-known, the mechanistic details of its ionic conduction processes are somewhat nebulous. In this work, we first establish an onset of superionic conduction that is emblematic of a gradual disordering process among the Li ions at a characteristic temperature *T*_α (~1000 K) using reported neutron diffraction data and atomistic simulations. In the superionic state, the Li ions are observed to exhibit dynamic disorder by hopping between the tetrahedral lattice sites. We then show that string-like ionic diffusion pathways are established among the Li ions in the superionic state. The diffusivity of these dynamical string-like structures, which have a finite lifetime, shows a remarkable correlation to the bulk diffusivity of the system.

Specific and Complete Local Integration of Patterns in Bayesian Networks
*Entropy* **2017**, *19*(5), 230; doi:10.3390/e19050230 - 18 May 2017
**Abstract**

We present a first formal analysis of specific and complete local integration. Complete local integration was previously proposed as a criterion for detecting entities or wholes in distributed dynamical systems. Such entities in turn were conceived to form the basis of a theory of emergence of agents within dynamical systems. Here, we give a more thorough account of the underlying formal measures. The main contribution is the disintegration theorem, which reveals a special role of completely locally integrated patterns (what we call *ι-entities*) within the trajectories they occur in. Apart from proving this theorem, we introduce the disintegration hierarchy and its refinement-free version as a way to structure the patterns in a trajectory. Furthermore, we construct the least upper bound and provide a candidate for the greatest lower bound of specific local integration. Finally, we calculate the $\iota$-entities in small example systems as a first sanity check and find that $\iota$-entities largely fulfil simple expectations.

A Novel Faults Diagnosis Method for Rolling Element Bearings Based on EWT and Ambiguity Correlation Classifiers
*Entropy* **2017**, *19*(5), 231; doi:10.3390/e19050231 - 18 May 2017
**Abstract**

Given the non-stationary characteristics of the acoustic emission signals of rolling element bearings, a novel fault diagnosis method based on the empirical wavelet transform (EWT) and ambiguity correlation classification (ACC) is proposed. In the proposed method, the acoustic emission signal acquired from a one-channel sensor is first decomposed using the EWT method; the mutual information between the decomposed components and the original signal is then computed and used to extract the noiseless components and obtain the reconstructed signal. Afterwards, an ambiguity correlation classifier is applied, which combines the advantages of ambiguity functions for processing non-stationary signals with those of correlation coefficients. Finally, multiple datasets of reconstructed signals for different operative conditions are fed to the ambiguity correlation classifier for training and testing. The proposed method was verified by experiments, and the experimental results show that it can effectively diagnose three different operative conditions of rolling element bearings with higher detection rates than support vector machine and back-propagation (BP) neural network algorithms.

Investigation of the Intra- and Inter-Limb Muscle Coordination of Hands-and-Knees Crawling in Human Adults by Means of Muscle Synergy Analysis
*Entropy* **2017**, *19*(5), 229; doi:10.3390/e19050229 - 17 May 2017
**Abstract**

To investigate the intra- and inter-limb muscle coordination mechanism of human hands-and-knees crawling by means of muscle synergy analysis, surface electromyographic (sEMG) signals of 20 human adults were collected bilaterally from 32 limb-related muscles during crawling with hands and knees at different speeds. The nonnegative matrix factorization (NMF) algorithm was applied to each limb to extract muscle synergies. The results showed that intra-limb coordination was relatively stable during human hands-and-knees crawling. Two synergies, one relating to the stance phase and the other relating to the swing phase, could be extracted from each limb during a crawling cycle. Synergy structures remained highly consistent across speeds, but the recruitment levels, durations, and phases of muscle synergies were adjusted to adapt to changes in crawling speed. Furthermore, the ipsilateral phase lag (IPL) value, which was used to depict inter-limb coordination, changed with crawling speed for most subjects, and subjects using the no-limb-pairing mode at low speed tended to adopt the trot-like mode or pace-like mode at high speed. The research results can be well explained by the two-level central pattern generator (CPG) model consisting of a half-center rhythm generator (RG) and a pattern formation (PF) circuit. This study sheds light on the underlying control mechanism of human crawling.
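The NMF step can be sketched with plain Lee–Seung multiplicative updates. The synthetic "sEMG envelope" matrix below is hypothetical, standing in for the rectified, smoothed muscle activity used in such analyses; rows of H play the role of muscle synergies and columns of W their activation profiles.

```python
import numpy as np

def nmf(V, k, iters=500, seed=0):
    """Lee–Seung multiplicative-update NMF: V ≈ W @ H with all entries >= 0,
    minimizing the Frobenius reconstruction error."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)  # update synergies
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)  # update activations
    return W, H

# Hypothetical "sEMG envelope" matrix: 100 time samples x 8 muscles,
# generated from 2 underlying synergies plus a little noise.
rng = np.random.default_rng(1)
W_true = rng.random((100, 2))
H_true = rng.random((2, 8))
V = W_true @ H_true + 0.01 * rng.random((100, 8))

W, H = nmf(V, k=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(err)  # small relative reconstruction error at the correct rank
```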

Face Verification with Multi-Task and Multi-Scale Feature Fusion
*Entropy* **2017**, *19*(5), 228; doi:10.3390/e19050228 - 17 May 2017
**Abstract**

Face verification for unrestricted faces in the wild is a challenging task. This paper proposes a method based on two deep convolutional neural networks (CNNs) for face verification. In this work, we explore using identification signals to supervise one CNN and a combination of semi-verification and identification signals to train the other. In order to estimate the semi-verification loss at a low computational cost, a circle composed of all faces is used for selecting face pairs from pairwise samples. In the process of face normalization, we propose using different landmarks of faces to solve the problems caused by poses. In addition, the final face representation is formed by concatenating the features of each deep CNN after principal component analysis (PCA) reduction. Furthermore, each feature is a combination of multi-scale representations obtained through auxiliary classifiers. For the final verification, we adopt the face representation of only one region and one resolution of a face, combined with a Joint Bayesian classifier. Experiments show that our method can extract effective face representations with a small training dataset and that our algorithm achieves 99.71% verification accuracy on the Labeled Faces in the Wild (LFW) dataset.

Information Entropy and Measures of Market Risk
*Entropy* **2017**, *19*(5), 226; doi:10.3390/e19050226 - 16 May 2017
**Abstract**

In this paper we investigate the relationship between the information entropy of the distribution of intraday returns and intraday and daily measures of market risk. Using data on the EUR/JPY exchange rate, we find a negative relationship between entropy and intraday Value-at-Risk, and also between entropy and intraday Expected Shortfall. This relationship is then used to forecast daily Value-at-Risk, using the entropy of the distribution of intraday returns as a predictor.
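A toy version of the two quantities is easy to set up: histogram-based Shannon entropy of a return sample and empirical Value-at-Risk as a quantile. The simulated calm and stressed regimes below are illustrative only, not the EUR/JPY data; the shared bin grid makes the entropies comparable across regimes.

```python
import numpy as np

def histogram_entropy(returns, bins):
    """Shannon entropy (nats) of the empirical return distribution."""
    counts, _ = np.histogram(returns, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def value_at_risk(returns, alpha=0.01):
    """Empirical VaR at level alpha: the alpha-quantile of returns
    (a loss threshold; more negative means riskier)."""
    return np.quantile(returns, alpha)

rng = np.random.default_rng(42)
calm = 0.001 * rng.standard_normal(5000)     # low-volatility regime
stressed = 0.02 * rng.standard_normal(5000)  # high-volatility regime
grid = np.linspace(-0.1, 0.1, 101)           # common bin grid for both

ent_calm, var_calm = histogram_entropy(calm, grid), value_at_risk(calm)
ent_str, var_str = histogram_entropy(stressed, grid), value_at_risk(stressed)
print(ent_calm, var_calm)
print(ent_str, var_str)
# the riskier regime shows higher entropy and a more negative VaR
```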

Classification of Fractal Signals Using Two-Parameter Non-Extensive Wavelet Entropy
*Entropy* **2017**, *19*(5), 224; doi:10.3390/e19050224 - 15 May 2017
**Abstract**

This article proposes a methodology for the classification of fractal signals as stationary or nonstationary. The methodology is based on the theoretical behavior of two-parameter wavelet entropy of fractal signals. The wavelet $(q, q')$-entropy is a wavelet-based extension of the $(q, q')$-entropy of Borges and is based on the entropy planes for various $q$ and $q'$; it is theoretically shown that it constitutes an efficient and effective technique for fractal signal classification. Moreover, the second parameter $q'$ provides further analysis flexibility and robustness in the sense that different $(q, q')$ pairs can analyze the same phenomena and increase the range of dispersion of entropies. A comparison study against the standard signal summation conversion technique shows that the proposed methodology is not only comparable in accuracy but also more computationally efficient. The application of the proposed methodology to physiological and financial time series is also presented along with the classification of these as stationary or nonstationary.

Entropy Information of Cardiorespiratory Dynamics in Neonates during Sleep
*Entropy* **2017**, *19*(5), 225; doi:10.3390/e19050225 - 15 May 2017
**Abstract**

Sleep is a central activity in human adults and characterizes most of the newborn infant life. During sleep, autonomic control acts to modulate heart rate variability (HRV) and respiration. Mechanisms underlying cardiorespiratory interactions in different sleep states have been studied but are not yet fully understood. Signal processing approaches have focused on cardiorespiratory analysis to elucidate this co-regulation. This manuscript proposes to analyze heart rate (HR), respiratory variability and their interrelationship in newborn infants to characterize cardiorespiratory interactions in different sleep states (active vs. quiet). We are searching for indices that could detect regulation alteration or malfunction, potentially leading to infant distress. We have analyzed inter-beat (RR) interval series and respiration in a population of 151 newborns, and followed up with 33 at 1 month of age. RR interval series were obtained by recognizing peaks of the QRS complex in the electrocardiogram (ECG), corresponding to the ventricles depolarization. Univariate time domain, frequency domain and entropy measures were applied. In addition, Transfer Entropy was considered as a bivariate approach able to quantify the bidirectional information flow from one signal (respiration) to another (RR series). Results confirm the validity of the proposed approach. Overall, HRV is higher in active sleep, while high frequency (HF) power characterizes more quiet sleep. Entropy analysis provides higher indices for SampEn and Quadratic Sample entropy (QSE) in quiet sleep. Transfer Entropy values were higher in quiet sleep and point to a major influence of respiration on the RR series. At 1 month of age, time domain parameters show an increase in HR and a decrease in variability. No entropy differences were found across ages. The parameters employed in this study help to quantify the potential for infants to adapt their cardiorespiratory responses as they mature. Thus, they could be useful as early markers of risk for infant cardiorespiratory vulnerabilities.
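Transfer entropy itself can be illustrated on synthetic binary series with history length 1 (the cardiorespiratory analysis above uses continuous signals and more elaborate estimators). In the sketch below, y copies x with a one-step lag, so information flows only from x to y:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in TE_{X -> Y} (bits) for discrete series, history length 1:
    sum over (y_next, y, x) of p(y_next, y, x) *
    log2[ p(y_next | y, x) / p(y_next | y) ]."""
    n = len(y) - 1
    trip = Counter(zip(y[1:], y[:-1], x[:-1]))     # (y_next, y_prev, x_prev)
    pair_yx = Counter(zip(y[:-1], x[:-1]))         # (y_prev, x_prev)
    pair_yy = Counter(zip(y[1:], y[:-1]))          # (y_next, y_prev)
    marg_y = Counter(y[:-1])                       # y_prev
    te = 0.0
    for (yn, yp, xp), c in trip.items():
        p_joint = c / n
        p_cond_full = c / pair_yx[(yp, xp)]        # p(y_next | y_prev, x_prev)
        p_cond_self = pair_yy[(yn, yp)] / marg_y[yp]  # p(y_next | y_prev)
        te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 10000)   # fair random bits
y = np.empty_like(x)
y[0] = 0
y[1:] = x[:-1]                  # y deterministically copies x with lag 1

te_xy = transfer_entropy(x, y)
te_yx = transfer_entropy(y, x)
print(te_xy)  # ≈ 1 bit: x fully determines y's next value
print(te_yx)  # ≈ 0 bits: y adds no information about x's next value
```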