Table of Contents

Entropy, Volume 20, Issue 11 (November 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: From physics to the social sciences, information is now seen as a fundamental component of reality. [...]
Displaying articles 1-84
Open Access Article: Mechanical Properties and Microstructure of a NiCrFeCoMn High-Entropy Alloy Deformed at High Strain Rates
Entropy 2018, 20(11), 892; https://doi.org/10.3390/e20110892
Received: 1 October 2018 / Revised: 4 November 2018 / Accepted: 17 November 2018 / Published: 21 November 2018
Viewed by 279 | PDF Full-text (4513 KB) | HTML Full-text | XML Full-text
Abstract
The equiatomic NiCrFeCoMn high-entropy alloy prepared by arc melting has a single crystallographic structure. Mechanical properties and microstructure of the NiCrFeCoMn high-entropy alloy deformed at high strain rates (900 s−1 to 4600 s−1) were investigated. The yield strength of the NiCrFeCoMn high-entropy alloy is sensitive to changes in the strain rate over this range. Serration behaviors were also observed on the flow stress curves of the alloy deformed at strain rates ranging from 900 s−1 to 4600 s−1. The Zerilli–Armstrong constitutive equation can be used to predict the flow stress curves of the NiCrFeCoMn high-entropy alloy. Large numbers of deformation bands led to the pronounced serration behavior of the NiCrFeCoMn high-entropy alloy under dynamic loading. Full article
(This article belongs to the Special Issue New Advances in High-Entropy Alloys)
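For reference, the Zerilli–Armstrong constitutive equation invoked in this abstract is commonly written, in its FCC form, as follows (a sketch of the standard 1987 formulation; the constants the authors actually fitted, and any modified variant they used, are in the paper itself):

```latex
\sigma = \Delta\sigma_G' + C_2\,\varepsilon^{1/2}
         \exp\!\left(-C_3 T + C_4 T \ln\dot{\varepsilon}\right) + k\,d^{-1/2}
```

where \(\varepsilon\) is the plastic strain, \(\dot{\varepsilon}\) the strain rate, \(T\) the absolute temperature, \(d\) the grain size, and \(\Delta\sigma_G'\), \(C_2\), \(C_3\), \(C_4\), \(k\) are material constants; the \(T\ln\dot{\varepsilon}\) coupling is what makes the predicted flow stress rate-sensitive.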

Open Access Article: Improving Entropy Estimates of Complex Network Topology for the Characterization of Coupling in Dynamical Systems
Entropy 2018, 20(11), 891; https://doi.org/10.3390/e20110891
Received: 24 October 2018 / Revised: 13 November 2018 / Accepted: 19 November 2018 / Published: 20 November 2018
Viewed by 293 | PDF Full-text (1866 KB) | HTML Full-text | XML Full-text
Abstract
A new measure for characterizing the coupling of interconnected dynamical systems is proposed. The method is based on the representation of time series as weighted cross-visibility networks. The weights are introduced as the metric distance between connected nodes. The structure of the networks, which depends on the coupling strength, is quantified via the entropy of the weighted adjacency matrix. The method has been tested on several coupled model systems with different individual properties. The results show that the proposed measure is able to distinguish the degree of coupling of the studied dynamical systems. The original use of the geodesic distance on Gaussian manifolds as a metric distance, which is able to take into account the noise inherently superimposed on the experimental data, provides significantly better results in the calculation of the entropy, improving the reliability of the coupling estimates. The application to the interaction between the El Niño Southern Oscillation (ENSO) and the Indian Ocean Dipole and to the influence of ENSO on influenza pandemic occurrence illustrates the potential of the method for real-life problems. Full article
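The pipeline sketched in this abstract (time series, then a visibility network with metric edge weights, then the entropy of the weighted adjacency matrix) can be illustrated for a single series. This toy version uses the ordinary natural-visibility criterion and Euclidean edge weights in place of the paper's cross-visibility construction and geodesic distances on Gaussian manifolds, so it is an assumption-laden sketch rather than the authors' method:

```python
import numpy as np

def visibility_entropy(x):
    """Natural visibility graph of time series x with metric edge weights,
    summarized by the Shannon entropy of the normalized weighted
    adjacency matrix."""
    n = len(x)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # visibility criterion: every intermediate sample lies strictly
            # below the line joining (i, x[i]) and (j, x[j])
            if all(x[k] < x[i] + (x[j] - x[i]) * (k - i) / (j - i)
                   for k in range(i + 1, j)):
                # edge weight = Euclidean distance between the two samples
                W[i, j] = W[j, i] = np.hypot(j - i, x[j] - x[i])
    p = W[W > 0] / W.sum()          # normalized weight distribution
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
print(visibility_entropy(rng.standard_normal(200)))
```

A stronger coupling between two series would, in the paper's cross-visibility variant, reshape this weight distribution and hence the entropy.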

Open Access Review: Liquid Phase Separation in High-Entropy Alloys—A Review
Entropy 2018, 20(11), 890; https://doi.org/10.3390/e20110890
Received: 24 October 2018 / Revised: 15 November 2018 / Accepted: 16 November 2018 / Published: 20 November 2018
Viewed by 376 | PDF Full-text (4884 KB) | HTML Full-text | XML Full-text
Abstract
It has been 14 years since the discovery of the high-entropy alloys (HEAs), an alloying concept that has reinvigorated materials scientists to explore unconventional alloy compositions and multicomponent alloy systems. Many authors have referred to these alloys as multi-principal element alloys (MPEAs) or complex concentrated alloys (CCAs) in order to place fewer restrictions on what constitutes an HEA. Regardless of classification, the research is rooted in the exploration of structure-properties and processing relations in these multicomponent alloys with the aim to surpass the physical properties of conventional materials. More recent studies show that some of these alloys undergo liquid phase separation, a phenomenon largely dictated by a low entropy of mixing and a positive mixing enthalpy. Studies posit that the positive mixing enthalpy of the binary and ternary components contributes substantially to the formation of liquid miscibility gaps. The objective of this review is to bring forth and summarize the findings of the experiments which detail liquid phase separation (LPS) in HEAs, MPEAs, and CCAs and to draw parallels between HEAs and the conventional alloy systems which undergo liquid-liquid separation. Positive mixing enthalpy, if not compensated by the entropy of mixing, will lead to liquid phase separation. It appears that Co, Ni, and Ti promote miscibility in HEAs/CCAs/MPEAs, while Cr, V, and Nb raise the miscibility gap temperature and increase LPS. Moreover, addition of appropriate amounts of Ni to CoCrCu eliminates immiscibility, as in the cases of dendritically solidifying CoCrCuNi, CoCrCuFeNi, and CoCrCuMnNi. Full article
(This article belongs to the Special Issue New Advances in High-Entropy Alloys)

Open Access Article: Small-Scale Plastic Deformation of Nanocrystalline High Entropy Alloy
Entropy 2018, 20(11), 889; https://doi.org/10.3390/e20110889
Received: 22 October 2018 / Revised: 16 November 2018 / Accepted: 16 November 2018 / Published: 20 November 2018
Viewed by 364 | PDF Full-text (3463 KB) | HTML Full-text | XML Full-text
Abstract
High entropy alloys (HEAs) have attracted widespread interest due to their unique properties at many different length-scales. Here, we report the fabrication of nanocrystalline (NC) Al0.1CoCrFeNi high entropy alloy and subsequent small-scale plastic deformation behavior via nano-pillar compression tests. Exceptional strength was realized for the NC HEA compared to pure Ni of similar grain sizes. Grain boundary mediated deformation mechanisms led to high strain rate sensitivity of flow stress in the nanocrystalline HEA. Full article
(This article belongs to the Special Issue New Advances in High-Entropy Alloys)

Open Access Article: Magnetocaloric Effect in an Antidot: The Effect of the Aharonov-Bohm Flux and Antidot Radius
Entropy 2018, 20(11), 888; https://doi.org/10.3390/e20110888
Received: 11 October 2018 / Revised: 13 November 2018 / Accepted: 17 November 2018 / Published: 19 November 2018
Cited by 1 | Viewed by 311 | PDF Full-text (33920 KB) | HTML Full-text | XML Full-text
Abstract
In this work, we report the magnetocaloric effect (MCE) for an electron interacting with an antidot, under the effect of an Aharonov-Bohm flux (AB-flux) subjected to a parabolic confinement potential. We use the Bogachek and Landman model, which additionally allows the study of quantum dots with Fock-Darwin energy levels for vanishing antidot radius and AB-flux. We find that AB-flux strongly controls the oscillatory behaviour of the MCE, thus acting as a control parameter for the cooling or heating of the magnetocaloric effect. We propose a way to detect AB-flux by measuring temperature differences. Full article
(This article belongs to the Section Quantum Information)

Open Access Article: A Technology-Based Classification of Firms: Can We Learn Something Looking Beyond Industry Classifications?
Entropy 2018, 20(11), 887; https://doi.org/10.3390/e20110887
Received: 2 August 2018 / Revised: 30 October 2018 / Accepted: 9 November 2018 / Published: 18 November 2018
Viewed by 428 | PDF Full-text (2461 KB) | HTML Full-text | XML Full-text
Abstract
In this work we use clustering techniques to identify groups of firms competing in similar technological markets. Our clustering properly highlights technological similarities, grouping together firms normally classified in different industrial sectors. Technological development leads to a continuously changing structure of industries and firms. For this reason, we propose a data-driven approach to classifying firms that allows for fast adaptation of the classification to the changing technological landscape. In this respect we differ from previous taxonomic exercises of industries and innovation, which are based on more general common features. In our empirical application, we use patent data as a proxy for the firms’ capabilities of developing new solutions in different technological fields. On this basis, we extract what we define as a Technologically Driven Classification (TDC). In order to validate the result of our exercise, we use information theory to look at the amount of information explained by our clustering and the amount of information shared with an industrial classification. All in all, our approach provides a good grouping of firms on the basis of their technological capabilities and represents an attractive option to compare firms in the technological space and better characterise competition in technological markets. Full article
(This article belongs to the Special Issue Economic Fitness and Complexity)

Open Access Article: Advanced Statistical Testing of Quantum Random Number Generators
Entropy 2018, 20(11), 886; https://doi.org/10.3390/e20110886
Received: 20 October 2018 / Revised: 12 November 2018 / Accepted: 14 November 2018 / Published: 17 November 2018
Cited by 1 | Viewed by 333 | PDF Full-text (615 KB) | HTML Full-text | XML Full-text
Abstract
Pseudo-random number generators are widely used in many branches of science, mainly in applications related to Monte Carlo methods, although they are deterministic in design and, therefore, unsuitable for tackling fundamental problems in security and cryptography. The natural laws of the microscopic realm provide a fairly simple method to generate non-deterministic sequences of random numbers, based on measurements of quantum states. In practice, however, the experimental devices on which quantum random number generators are based are often unable to pass some tests of randomness. In this review, we briefly discuss two such tests, point out the challenges that we have encountered in experimental implementations and finally present a fairly simple method that successfully generates non-deterministic maximally random sequences. Full article
(This article belongs to the Special Issue Quantum Probability and Randomness)
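As an example of the kind of randomness test this review surveys, the frequency (monobit) test from the NIST SP 800-22 suite checks whether ones and zeros are balanced; a minimal sketch:

```python
import math

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test: map bits to +/-1, sum,
    and compare the normalized sum against a standard normal.
    Small p-values indicate a biased source."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * n))

# a balanced but obviously non-random sequence still passes this test,
# which is why a whole battery of complementary tests is needed
print(monobit_p_value([0, 1] * 500))   # → 1.0
print(monobit_p_value([1] * 1000))     # ≈ 0 (heavily biased)
```

Passing one such test is necessary but far from sufficient, which is precisely the challenge for experimental quantum devices that the abstract describes.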

Open Access Article: Blind Image Quality Assessment of Natural Scenes Based on Entropy Differences in the DCT Domain
Entropy 2018, 20(11), 885; https://doi.org/10.3390/e20110885
Received: 29 October 2018 / Revised: 15 November 2018 / Accepted: 16 November 2018 / Published: 17 November 2018
Viewed by 354 | PDF Full-text (1683 KB) | HTML Full-text | XML Full-text
Abstract
Blind/no-reference image quality assessment is performed to accurately evaluate the perceptual quality of a distorted image without prior information from a reference image. In this paper, an effective blind image quality assessment approach based on entropy differences in the discrete cosine transform domain for natural images is proposed. Information entropy is an effective measure of the amount of information in an image. We find that the discrete cosine transform coefficient distribution of distorted natural images shows a pulse-shape phenomenon, which directly affects the differences of entropy. A Weibull model is then used to fit the distributions of natural and distorted images, because the Weibull model sufficiently approximates the pulse-shape phenomenon as well as the sharp-peak and heavy-tail phenomena of natural scene statistics rules. Four features related to entropy differences and the human visual system are extracted from the Weibull model for images at three scales. Image quality is assessed by the support vector regression method based on the extracted features. This blind Weibull statistics algorithm is thoroughly evaluated using three widely used databases: LIVE, TID2008, and CSIQ. The experimental results show that the performance of the proposed blind Weibull statistics method is highly consistent with human visual perception and superior to that of state-of-the-art blind and full-reference image quality assessment methods in most cases. Full article
(This article belongs to the Special Issue Entropy in Image Analysis)
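The Weibull-fitting step can be sketched with SciPy. The synthetic image, the single scale, and the omission of the entropy-difference features and the SVR stage are all simplifications of this illustration, not the paper's procedure:

```python
import numpy as np
from scipy.fft import dctn
from scipy.stats import weibull_min

# synthetic smooth "image" standing in for a natural photograph
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64)).cumsum(axis=0).cumsum(axis=1)

# magnitudes of the 2-D DCT coefficients, excluding the DC term
coeffs = np.abs(dctn(img, norm='ortho')).ravel()[1:]

# maximum-likelihood Weibull fit with the location fixed at zero
shape, loc, scale = weibull_min.fit(coeffs, floc=0)
print(f"shape={shape:.3f}  scale={scale:.3f}")
```

Distortions change the fitted shape and scale, and entropy-like features derived from such fits are what the paper feeds to its regressor.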

Open Access Article: Hybrid Integration Approach of Entropy with Logistic Regression and Support Vector Machine for Landslide Susceptibility Modeling
Entropy 2018, 20(11), 884; https://doi.org/10.3390/e20110884
Received: 7 October 2018 / Revised: 7 November 2018 / Accepted: 7 November 2018 / Published: 17 November 2018
Cited by 1 | Viewed by 358 | PDF Full-text (2169 KB) | HTML Full-text | XML Full-text
Abstract
The main purpose of the present study is to apply three classification models, namely, the index of entropy (IOE) model, the logistic regression (LR) model, and the support vector machine (SVM) model with a radial basis function (RBF) kernel, to produce landslide susceptibility maps for the Fugu County of Shaanxi Province, China. Firstly, landslide locations were extracted from field investigation and aerial photographs, and a total of 194 landslide polygons were transformed into points to produce a landslide inventory map. Secondly, the landslide points were randomly split into two groups (70/30) for training and validation purposes, respectively. Then, 10 landslide explanatory variables, such as slope aspect, slope angle, altitude, lithology, mean annual precipitation, distance to roads, distance to rivers, distance to faults, land use, and normalized difference vegetation index (NDVI), were selected, and potential multicollinearity problems between these factors were detected by the Pearson Correlation Coefficient (PCC), the variance inflation factor (VIF), and tolerance (TOL). Subsequently, the landslide susceptibility maps for the study region were obtained using the IOE model, the LR–IOE model, and the SVM–IOE model. Finally, the performance of these three models was verified and compared using the receiver operating characteristics (ROC) curve. The success rate results showed that the LR–IOE model has the highest accuracy (90.11%), followed by the IOE model (87.43%) and the SVM–IOE model (86.53%). The AUC values showed a similar ranking for prediction accuracy, with the LR–IOE model having the highest accuracy (81.84%), followed by the IOE model (76.86%) and the SVM–IOE model (76.61%). Thus, the landslide susceptibility map (LSM) for the study region can provide an effective reference for the Fugu County government to properly address land planning and mitigate landslide risk. Full article
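The AUC comparison used above can be computed for any model's susceptibility scores with the rank-sum identity; the scores below are invented toy values, not the study's data:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank-sum identity:
    the probability that a randomly chosen positive outscores a randomly
    chosen negative (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# perfect separation of landslide (1) and non-landslide (0) points
print(auc([0.9, 0.8, 0.4, 0.3], [1, 1, 0, 0]))  # → 1.0
```

An AUC of 0.5 would mean the scores are no better than chance, which is the baseline against which values like 81.84% are judged.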

Open Access Article: The Role of Complex Analysis in Modelling Economic Growth
Entropy 2018, 20(11), 883; https://doi.org/10.3390/e20110883
Received: 31 July 2018 / Revised: 2 November 2018 / Accepted: 5 November 2018 / Published: 16 November 2018
Viewed by 390 | PDF Full-text (4065 KB) | HTML Full-text | XML Full-text
Abstract
Development and growth are complex and tumultuous processes. Modern economic growth theories identify some key determinants of economic growth. However, the relative importance of the determinants remains unknown, and additional variables may help clarify the directions and dimensions of the interactions. The novel stream of literature on economic complexity goes beyond aggregate measures of productive inputs and considers instead a more granular and structural view of the productive possibilities of countries, i.e., their capabilities. Different endowments of capabilities are crucial ingredients in explaining differences in economic performances. In this paper we employ economic fitness, a measure of productive capabilities obtained through complex network techniques. Focusing on the combined roles of fitness and some more traditional drivers of growth—GDP per capita, capital intensity, employment ratio, life expectancy, human capital and total factor productivity—we build a bridge between economic growth theories and the economic complexity literature. Our findings show that fitness plays a crucial role in fostering economic growth and, when it is included in the analysis, can be either complementary to traditional drivers of growth or can completely overshadow them. Notably, for the most complex countries, which have the most diversified export baskets and the largest endowments of capabilities, fitness is complementary to the chosen growth determinants in enhancing economic growth. The empirical findings are in agreement with neoclassical and endogenous growth theories. By contrast, for countries with intermediate and low capability levels, fitness emerges as the key growth driver. This suggests that economic models should account for capabilities; in fact, describing the technological possibilities of countries solely in terms of their production functions may lead to a misinterpretation of the roles of factors. Full article
(This article belongs to the Special Issue Economic Fitness and Complexity)

Open Access Article: Study in Natural Time of Geoelectric Field and Seismicity Changes Preceding the Mw6.8 Earthquake on 25 October 2018 in Greece
Entropy 2018, 20(11), 882; https://doi.org/10.3390/e20110882
Received: 5 November 2018 / Revised: 13 November 2018 / Accepted: 14 November 2018 / Published: 16 November 2018
Viewed by 343 | PDF Full-text (844 KB) | HTML Full-text | XML Full-text
Abstract
A strong earthquake of magnitude Mw 6.8 struck Western Greece on 25 October 2018 with an epicenter at 37.515 N 20.564 E. It was preceded by an anomalous geoelectric signal that was recorded on 2 October 2018 at a measuring station 70 km away from the epicenter. Upon analyzing this signal in natural time, we find that it conforms to the conditions suggested for its identification as precursory Seismic Electric Signal (SES) activity. Notably, the observed lead time of 23 days lies within the range of values that has been very recently identified as being statistically significant for the precursory variations of the electric field of the Earth. Moreover, the analysis in natural time of the seismicity subsequent to the SES activity in the area candidate to suffer this strong earthquake reveals that the criticality conditions were obeyed early in the morning of 18 October 2018, i.e., almost a week before the strong earthquake occurrence, in agreement with earlier findings. Finally, when employing the recent method of nowcasting earthquakes, which is based on natural time, we find an earthquake potential score of around 80%. Full article

Open Access Article: Between Waves and Diffusion: Paradoxical Entropy Production in an Exceptional Regime
Entropy 2018, 20(11), 881; https://doi.org/10.3390/e20110881
Received: 30 October 2018 / Revised: 8 November 2018 / Accepted: 9 November 2018 / Published: 16 November 2018
Viewed by 268 | PDF Full-text (1851 KB) | HTML Full-text | XML Full-text
Abstract
The entropy production rate is a well established measure for the extent of irreversibility in a process. For irreversible processes, one thus usually expects that the entropy production rate approaches zero in the reversible limit. Fractional diffusion equations provide a fascinating testbed for that intuition in that they build a bridge connecting the fully irreversible diffusion equation with the fully reversible wave equation by a one-parameter family of processes. The entropy production paradox describes the very non-intuitive increase of the entropy production rate as that bridge is passed from irreversible diffusion to reversible waves. This paradox has been established for time- and space-fractional diffusion equations on one-dimensional continuous space and for the Shannon, Tsallis and Renyi entropies. After a brief review of the known results, we generalize it to time-fractional diffusion on a finite chain of points described by a fractional master equation. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
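The one-parameter bridge described above is often written as a time-fractional diffusion-wave equation with a Caputo derivative of order \(\gamma\) (one common convention; the paper's master-equation formulation on a finite chain differs in detail):

```latex
\frac{\partial^{\gamma} u}{\partial t^{\gamma}} = D\,\frac{\partial^{2} u}{\partial x^{2}},
\qquad 0 < \gamma \le 2
```

Here \(\gamma = 1\) recovers the fully irreversible diffusion equation and \(\gamma = 2\) the fully reversible wave equation; the paradox is that the entropy production rate grows as \(\gamma\) approaches 2.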

Open Access Article: Merging of Numerical Intervals in Entropy-Based Discretization
Entropy 2018, 20(11), 880; https://doi.org/10.3390/e20110880
Received: 25 September 2018 / Revised: 8 November 2018 / Accepted: 13 November 2018 / Published: 16 November 2018
Viewed by 240 | PDF Full-text (2045 KB) | HTML Full-text | XML Full-text
Abstract
As previous research indicates, a multiple-scanning methodology for discretization of numerical datasets, based on entropy, is very competitive. Discretization is a process of converting numerical values of the data records into discrete values associated with numerical intervals defined over the domains of the data records. In multiple-scanning discretization, the last step is the merging of neighboring intervals in discretized datasets as a kind of postprocessing. Our objective is to check how the error rate, measured by tenfold cross validation within the C4.5 system, is affected by such merging. We conducted experiments on 17 numerical datasets, using the same setup of multiple scanning, with three different options for merging: no merging at all, merging based on the smallest entropy, and merging based on the biggest entropy. As a result of the Friedman rank sum test (5% significance level) we concluded that the differences between all three approaches are statistically insignificant. There is no universally best approach. Then, we repeated all experiments 30 times, recording averages and standard deviations. The test of the difference between averages shows that, for a comparison of no merging with merging based on the smallest entropy, there are statistically highly significant differences (with a 1% significance level). In some cases, the smaller error rate is associated with no merging, in some cases the smaller error rate is associated with merging based on the smallest entropy. A comparison of no merging with merging based on the biggest entropy showed similar results. So, our final conclusion was that there are highly significant differences between no merging and merging, depending on the dataset. The best approach should be chosen by trying all three approaches. Full article
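One post-processing step of the kind compared in this article, merging the pair of neighboring intervals whose union has the smallest class entropy, can be sketched as follows (the interval representation and the toy data are illustrative assumptions, not the authors' implementation):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy (bits) of a multiset of class labels."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def merge_smallest_entropy(intervals):
    """Merge the neighboring pair whose union has the smallest class
    entropy. `intervals` is a list of ((lo, hi), labels) pairs sorted
    by cut point."""
    best = min(range(len(intervals) - 1),
               key=lambda i: entropy(intervals[i][1] + intervals[i + 1][1]))
    (lo, _), a = intervals[best]
    (_, hi), b = intervals[best + 1]
    return intervals[:best] + [((lo, hi), a + b)] + intervals[best + 2:]

ivs = [((0, 2), ['y', 'y']), ((2, 4), ['y', 'y']), ((4, 6), ['n', 'n'])]
print(merge_smallest_entropy(ivs))
# the two pure-'y' intervals merge; the 'n' interval is left alone
```

Merging on the biggest entropy is the same step with `min` replaced by `max`, which is exactly the design axis the experiments vary.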

Open Access Article: Assessing the Relevance of Specific Response Features in the Neural Code
Entropy 2018, 20(11), 879; https://doi.org/10.3390/e20110879
Received: 1 October 2018 / Revised: 12 November 2018 / Accepted: 13 November 2018 / Published: 15 November 2018
Cited by 1 | Viewed by 301 | PDF Full-text (960 KB) | HTML Full-text | XML Full-text
Abstract
The study of the neural code aims at deciphering how the nervous system maps external stimuli into neural activity—the encoding phase—and subsequently transforms such activity into adequate responses to the original stimuli—the decoding phase. Several information-theoretical methods have been proposed to assess the relevance of individual response features, such as the spike count of a given neuron, or the amount of correlation in the activity of two cells. These methods work under the premise that the relevance of a feature is reflected in the information loss that is induced by eliminating the feature from the response. The alternative methods differ in the procedure by which the tested feature is removed, and the algorithm with which the lost information is calculated. Here we compare these methods, and show that more often than not, each method assigns a different relevance to the tested feature. We demonstrate that the differences are both quantitative and qualitative, and connect them with the method employed to remove the tested feature, as well as the procedure to calculate the lost information. By studying a collection of carefully designed examples, and working on analytic derivations, we identify the conditions under which the relevance of features diagnosed by different methods can be ranked, or sometimes even equated. The condition for equality involves both the amount and the type of information contributed by the tested feature. We conclude that the quest for relevant response features is more delicate than previously thought, and may yield multiple answers depending on methodological subtleties. Full article
(This article belongs to the Special Issue Information Theory in Neuroscience)
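The shared premise of the methods compared here, that a feature's relevance is the information lost when it is eliminated, can be illustrated on a toy joint distribution. The table below and the marginalization used to "remove" the second feature are invented for illustration only:

```python
import numpy as np

def mutual_info(joint):
    """Mutual information (bits) from a joint probability table p(s, r)."""
    ps = joint.sum(axis=1, keepdims=True)   # p(stimulus)
    pr = joint.sum(axis=0, keepdims=True)   # p(response)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])).sum())

# rows = 2 stimuli; columns = responses as (feature1, feature2) pairs,
# ordered (0,0), (0,1), (1,0), (1,1)
joint = np.array([[0.25, 0.10, 0.10, 0.05],
                  [0.05, 0.10, 0.10, 0.25]])

# "removing" feature2 = marginalizing it out of the response
reduced = joint.reshape(2, 2, 2).sum(axis=2)

loss = mutual_info(joint) - mutual_info(reduced)
print(f"information contributed by feature2: {loss:.3f} bits")
```

The data processing inequality guarantees the loss is non-negative; the article's point is that different removal procedures and estimators turn this single idea into diverging relevance scores.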

Open Access Review: Coherent Precipitation and Strengthening in Compositionally Complex Alloys: A Review
Entropy 2018, 20(11), 878; https://doi.org/10.3390/e20110878
Received: 29 October 2018 / Revised: 12 November 2018 / Accepted: 14 November 2018 / Published: 15 November 2018
Viewed by 467 | PDF Full-text (4448 KB) | HTML Full-text | XML Full-text
Abstract
High-performance conventional engineering materials (including Al alloys, Mg alloys, Cu alloys, stainless steels, Ni superalloys, etc.) and newly-developed high entropy alloys are all compositionally-complex alloys (CCAs). In these CCA systems, second-phase particles are generally precipitated in the solid-solution matrix; the precipitates are diverse and can result in different strengthening effects. The present work aims at comprehensively generalizing the precipitation behavior and precipitation strengthening in CCAs. First, the morphology evolution of second-phase particles and precipitation strengthening mechanisms are introduced. Then, the precipitation behaviors in diverse CCA systems are illustrated, especially coherent precipitation. The relationship between particle morphology and strengthening effectiveness is discussed. We emphasize that the challenge for the future is to design stable coherent microstructures in different solid-solution matrices, which will be the most effective approach for enhancing alloy strength. Full article
(This article belongs to the Special Issue New Advances in High-Entropy Alloys)

Open Access Article: Closing the Door on Quantum Nonlocality
Entropy 2018, 20(11), 877; https://doi.org/10.3390/e20110877
Received: 11 September 2018 / Revised: 1 November 2018 / Accepted: 8 November 2018 / Published: 15 November 2018
Viewed by 334 | PDF Full-text (351 KB) | HTML Full-text | XML Full-text
Abstract
Bell-type inequalities are proven using oversimplified probabilistic models and/or counterfactual definiteness (CFD). If setting-dependent variables describing measuring instruments are correctly introduced, none of these inequalities can be proven. In spite of this, a belief in a mysterious quantum nonlocality is not fading. Computer simulations of Bell tests allow researchers to study the different ways in which the experimental data might have been created. They also allow for the generation of various counterfactual experiments’ outcomes, such as repeated or simultaneous measurements performed in different settings on the same “photon-pair”, and so forth. They allow for the reinforcing or relaxing of CFD compliance and/or for studying the impact of various “photon identification procedures”, mimicking those used in real experiments. Data samples consistent with quantum predictions may be generated by using a specific setting-dependent identification procedure. It reflects the active role of instruments during the measurement process. Each of the setting-dependent data samples is consistent with a specific setting-dependent probabilistic model which may not be deduced using non-contextual local realistic or stochastic hidden variables. In this paper, we will be discussing the results of these simulations. Since the data samples are generated in a locally causal way, these simulations provide additional strong arguments for closing the door on quantum nonlocality. Full article
(This article belongs to the Special Issue Towards Ultimate Quantum Theory (UQT))
Open AccessArticle A Fractional Single-Phase-Lag Model of Heat Conduction for Describing Propagation of the Maximum Temperature in a Finite Medium
Entropy 2018, 20(11), 876; https://doi.org/10.3390/e20110876
Received: 23 October 2018 / Revised: 9 November 2018 / Accepted: 10 November 2018 / Published: 15 November 2018
Viewed by 270 | PDF Full-text (1393 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, an investigation of the maximum temperature propagation in a finite medium is presented. The heat conduction in the medium was modelled by using a single-phase-lag equation with fractional Caputo derivatives. The formulation and solution of the problem concern the heat conduction in a slab, a hollow cylinder, and a hollow sphere, which are subjected to a heat source represented by the Robotnov function and a harmonically varying ambient temperature. The problem with time-dependent Robin and homogenous Neumann boundary conditions has been solved by using an eigenfunction expansion method and the Laplace transform technique. The solution of the heat conduction problem was used for determination of the maximum temperature trajectories. The trajectories and propagation speeds of the temperature maxima in the medium depend on the order of fractional derivatives occurring in the heat conduction model. These dependencies for the heat conduction in the hollow cylinder have been numerically investigated. Full article
(This article belongs to the Section Thermodynamics)
Open AccessArticle Efficiency of Harmonic Quantum Otto Engines at Maximal Power
Entropy 2018, 20(11), 875; https://doi.org/10.3390/e20110875
Received: 24 October 2018 / Revised: 10 November 2018 / Accepted: 13 November 2018 / Published: 15 November 2018
Cited by 1 | Viewed by 456 | PDF Full-text (1134 KB) | HTML Full-text | XML Full-text
Abstract
Recent experimental breakthroughs produced the first nano heat engines that have the potential to harness quantum resources. An instrumental question is how their performance measures up against the efficiency of classical engines. For single ion engines undergoing quantum Otto cycles it has been found that the efficiency at maximal power is given by the Curzon–Ahlborn efficiency. This is rather remarkable as the Curzon–Ahlborn efficiency was originally derived for endoreversible Carnot cycles. Here, we analyze two examples of endoreversible Otto engines within the same conceptual framework as Curzon and Ahlborn’s original treatment. We find that for endoreversible Otto cycles in classical harmonic oscillators the efficiency at maximal power is, indeed, given by the Curzon–Ahlborn efficiency. However, we also find that the efficiency of Otto engines made of quantum harmonic oscillators is significantly larger. Full article
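The Curzon–Ahlborn efficiency referred to above has a simple closed form, η_CA = 1 − √(Tc/Th), which always lies below the Carnot bound η_C = 1 − Tc/Th. A minimal numerical sketch (the temperatures are illustrative values, not taken from the paper):

```python
import math

def eta_carnot(t_cold, t_hot):
    # Reversible Carnot efficiency: 1 - Tc/Th
    return 1.0 - t_cold / t_hot

def eta_curzon_ahlborn(t_cold, t_hot):
    # Efficiency at maximal power of an endoreversible cycle: 1 - sqrt(Tc/Th)
    return 1.0 - math.sqrt(t_cold / t_hot)

tc, th = 300.0, 600.0
print(eta_carnot(tc, th))          # 0.5
print(eta_curzon_ahlborn(tc, th))  # ~0.293, strictly below the Carnot bound
```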
Open AccessArticle Information Dynamics in Urban Crime
Entropy 2018, 20(11), 874; https://doi.org/10.3390/e20110874
Received: 28 September 2018 / Revised: 1 November 2018 / Accepted: 6 November 2018 / Published: 14 November 2018
Viewed by 548 | PDF Full-text (3039 KB) | HTML Full-text | XML Full-text
Abstract
Information production in both space and time has been highlighted as one of the elements that shapes the footprint of complexity in natural and socio-technical systems. However, information production in urban crime has barely been studied. This work addresses this problem by using multifractal analysis to characterize the spatial information scaling in urban crime reports and nonlinear processing tools to study the temporal behavior of this scaling. Our results suggest that information scaling in urban crime exhibits dynamics that evolve in low-dimensional chaotic attractors, and this can be observed in several spatio-temporal scales, although some of them are more favorable than others. This evidence has practical implications in terms of defining the characteristic scales to approach urban crime from available data and supporting theoretical perspectives about the complexity of urban crime. Full article
(This article belongs to the Special Issue Information Theory in Complex Systems)
Open AccessArticle Early Fault Detection Method for Rotating Machinery Based on Harmonic-Assisted Multivariate Empirical Mode Decomposition and Transfer Entropy
Entropy 2018, 20(11), 873; https://doi.org/10.3390/e20110873
Received: 24 September 2018 / Revised: 28 October 2018 / Accepted: 1 November 2018 / Published: 13 November 2018
Viewed by 312 | PDF Full-text (13188 KB) | HTML Full-text | XML Full-text
Abstract
It is a difficult task to analyze the coupling characteristics of rotating machinery fault signals under the influence of complex and nonlinear interference signals. The difficulty stems from the strong noise background of rotating machinery fault feature extraction and from weaknesses, such as mode mixing, in the existing Ensemble Empirical Mode Decomposition (EEMD) time–frequency analysis methods. To quantitatively study the nonlinear synchronous coupling and information transfer characteristics of rotating machinery fault signals between different frequency scales under such interference, a new nonlinear signal processing method, the harmonic-assisted multivariate empirical mode decomposition method (HA-MEMD), is proposed in this paper. By adding high-frequency harmonic channels as assisted channels and removing them after decomposition, the decomposition precision of the Intrinsic Mode Functions (IMFs) can be effectively improved and mode aliasing mitigated. Analysis of simulated signals proves the effectiveness of this method. By combining HA-MEMD with the transfer entropy algorithm, a fault detection method for rotating machinery based on high-frequency harmonic-assisted multivariate empirical mode decomposition and transfer entropy (HA-MEMD-TE) was established. The main features of the mechanical transmission system were extracted by HA-MEMD, and the denoised signal was used for the transfer entropy calculation. An HA-MEMD-TE-based evaluation index of the rotating machinery state was established to quantitatively describe the degree of nonlinear coupling between signals and thereby evaluate and diagnose the operating state of the mechanical system. By adding noise at different signal-to-noise ratios, the fault detection ability of the HA-MEMD-TE method against a strong noise background was investigated, demonstrating that the method is reliable and robust. Transfer entropy is thus applied to the fault diagnosis of rotating machinery, providing a new, effective method for early fault diagnosis and performance degradation-state recognition. Full article
Open AccessArticle Magnetic Properties and Microstructure of FeCoNi(CuAl)0.8Snx (0 ≤ x ≤ 0.10) High-Entropy Alloys
Entropy 2018, 20(11), 872; https://doi.org/10.3390/e20110872
Received: 9 October 2018 / Revised: 8 November 2018 / Accepted: 8 November 2018 / Published: 13 November 2018
Cited by 1 | Viewed by 400 | PDF Full-text (4308 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
The present work examines the effects of Sn addition on the magnetic properties and microstructure of FeCoNi(CuAl)0.8Snx (0 ≤ x ≤ 0.10) high-entropy alloys (HEAs). The results show that all the samples consist of a mixed structure of a face-centered-cubic (FCC) phase and a body-centered-cubic (BCC) phase. The addition of Sn promotes the formation of the BCC phase and also affects the shape of the Cu-rich nano-precipitates in the BCC matrix. The Curie temperature (Tc) of the FCC phase and the saturation magnetization (Ms) of the FeCoNi(CuAl)0.8Snx (0 ≤ x ≤ 0.10) HEAs increase greatly, while the remanence (Br) decreases, after the addition of Sn into the FeCoNi(CuAl)0.8 HEA. The thermomagnetic curves indicate that the phases of these HEAs transform from FCC with low Tc to BCC with high Tc at temperatures of 600–700 K. This work suggests the potential of FeCoNi(CuAl)0.8Snx (0 ≤ x ≤ 0.10) HEAs as soft magnets for high-temperature applications. Full article
(This article belongs to the Special Issue New Advances in High-Entropy Alloys)
Open AccessArticle Characterization of Artifact Influence on the Classification of Glucose Time Series Using Sample Entropy Statistics
Entropy 2018, 20(11), 871; https://doi.org/10.3390/e20110871
Received: 8 October 2018 / Revised: 7 November 2018 / Accepted: 9 November 2018 / Published: 12 November 2018
Viewed by 339 | PDF Full-text (1131 KB) | HTML Full-text | XML Full-text
Abstract
This paper analyses the performance of SampEn and one of its derivatives, Fuzzy Entropy (FuzzyEn), in the context of artifacted blood glucose time series classification. This is a difficult and practically unexplored framework, where the availability of more sensitive and reliable measures could be of great clinical impact. Although the advent of new blood glucose monitoring technologies may reduce the incidence of such artifacts, incorrect device or sensor manipulation, patient adherence, sensor detachment, time constraints, adoption barriers or affordability can still result in relatively short and artifacted records, as the ones analyzed in this paper or in other similar works. This study is aimed at characterizing the changes induced by such artifacts, enabling the arrangement of countermeasures in advance when possible. Despite the presence of these disturbances, results demonstrate that SampEn and FuzzyEn are sufficiently robust to achieve a significant classification performance, using records obtained from patients with duodenal-jejunal exclusion. The classification results, in terms of an area under the ROC curve (AUC) of up to 0.9, with several tests yielding AUC values also greater than 0.8, and in terms of a leave-one-out average classification accuracy of 80%, confirm the potential of these measures in this context despite the presence of artifacts, with SampEn having slightly better performance than FuzzyEn. Full article
(This article belongs to the Special Issue The 20th Anniversary of Entropy - Approximate and Sample Entropy)
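For readers unfamiliar with the measure, SampEn(m, r) is the negative logarithm of the conditional probability that two sequences matching for m points (within tolerance r, excluding self-matches) also match for m + 1 points. A minimal, unoptimized sketch; the parameter defaults (m = 2, r = 0.2·SD) are conventional choices, not taken from the paper:

```python
import math

def sample_entropy(x, m=2, r=None):
    """Simplified SampEn(m, r): -ln(A/B), where B counts template pairs of
    length m and A counts pairs of length m + 1 within Chebyshev distance r."""
    n = len(x)
    if r is None:
        mean = sum(x) / n
        r = 0.2 * math.sqrt(sum((v - mean) ** 2 for v in x) / n)

    def count_matches(length):
        templates = [x[i:i + length] for i in range(n - length + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count

    b = count_matches(m)      # matches of length m
    a = count_matches(m + 1)  # matches of length m + 1
    if a == 0 or b == 0:
        return float("inf")   # undefined for too-short or too-irregular records
    return -math.log(a / b)
```

A perfectly periodic series yields a value near zero, since almost every length-m match extends to a length-(m + 1) match.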
Open AccessArticle Robust Signaling for Bursty Interference
Entropy 2018, 20(11), 870; https://doi.org/10.3390/e20110870
Received: 3 September 2018 / Revised: 25 October 2018 / Accepted: 6 November 2018 / Published: 12 November 2018
Viewed by 325 | PDF Full-text (1359 KB) | HTML Full-text | XML Full-text
Abstract
This paper studies a bursty interference channel, where the presence/absence of interference is modeled by a block-i.i.d. Bernoulli process that stays constant for a duration of T symbols (referred to as coherence block) and then changes independently to a new state. We consider both a quasi-static setup, where the interference state remains constant during the whole transmission of the codeword, and an ergodic setup, where a codeword spans several coherence blocks. For the quasi-static setup, we study the largest rate of a coding strategy that provides reliable communication at a basic rate and allows an increased (opportunistic) rate when there is no interference. For the ergodic setup, we study the largest achievable rate. We study how non-causal knowledge of the interference state, referred to as channel-state information (CSI), affects the achievable rates. We derive converse and achievability bounds for (i) local CSI at the receiver side only; (ii) local CSI at the transmitter and receiver side; and (iii) global CSI at all nodes. Our bounds allow us to identify when interference burstiness is beneficial and in which scenarios global CSI outperforms local CSI. The joint treatment of the quasi-static and ergodic setup further allows for a thorough comparison of these two setups. Full article
(This article belongs to the Special Issue Multiuser Information Theory II)
Open AccessArticle Short-Time Propagators and the Born–Jordan Quantization Rule
Entropy 2018, 20(11), 869; https://doi.org/10.3390/e20110869
Received: 14 October 2018 / Revised: 6 November 2018 / Accepted: 8 November 2018 / Published: 10 November 2018
Viewed by 359 | PDF Full-text (259 KB) | HTML Full-text | XML Full-text
Abstract
We have shown in previous work that the equivalence of the Heisenberg and Schrödinger pictures of quantum mechanics requires the use of the Born and Jordan quantization rules. In the present work we give further evidence that the Born–Jordan rule is the correct quantization scheme for quantum mechanics. For this purpose we use correct short-time approximations to the action functional, initially due to Makri and Miller, and show that these lead to the desired quantization of the classical Hamiltonian. Full article
Open AccessArticle Quantitative Assessment of Landslide Susceptibility Comparing Statistical Index, Index of Entropy, and Weights of Evidence in the Shangnan Area, China
Entropy 2018, 20(11), 868; https://doi.org/10.3390/e20110868
Received: 12 October 2018 / Revised: 6 November 2018 / Accepted: 8 November 2018 / Published: 10 November 2018
Viewed by 333 | PDF Full-text (4759 KB) | HTML Full-text | XML Full-text
Abstract
In this study, a comparative analysis of the statistical index (SI), index of entropy (IOE) and weights of evidence (WOE) models was introduced to landslide susceptibility mapping, and the performance of the three models was validated and systematically compared. As one of the most landslide-prone areas in Shaanxi Province, China, Shangnan County was selected as the study area. Firstly, a series of reports, remote sensing images and geological maps were collected, and field surveys were carried out to prepare a landslide inventory map. A total of 348 landslides were identified in the study area, and they were randomly split into a training dataset (70% = 244 landslides) and a testing dataset (30% = 104 landslides). Thirteen conditioning factors were then employed. Corresponding thematic data layers and landslide susceptibility maps were generated based on ArcGIS software. Finally, the area under the curve (AUC) values were calculated for the training dataset and the testing dataset in order to validate and compare the performance of the three models. For the training dataset, the AUC plots showed that the WOE model had the highest accuracy rate of 76.05%, followed by the SI model (74.67%) and the IOE model (71.12%). In the case of the testing dataset, the prediction accuracy rates for the SI, IOE and WOE models were 73.75%, 63.89%, and 75.10%, respectively. It can be concluded that the WOE model had the best prediction capacity for landslide susceptibility mapping in Shangnan County. The landslide susceptibility map produced by the WOE model has profound geological and engineering significance in terms of landslide hazard prevention and control in the study area and other similar areas. Full article
(This article belongs to the Special Issue Applications of Information Theory in the Geosciences II)
Open AccessArticle Double Quantum Image Encryption Based on Arnold Transform and Qubit Random Rotation
Entropy 2018, 20(11), 867; https://doi.org/10.3390/e20110867
Received: 8 October 2018 / Revised: 1 November 2018 / Accepted: 8 November 2018 / Published: 10 November 2018
Viewed by 342 | PDF Full-text (4672 KB) | HTML Full-text | XML Full-text
Abstract
Quantum image encryption offers major advantages over its classical counterpart in terms of key space, computational complexity, and so on. A novel double quantum image encryption approach based on the quantum Arnold transform (QAT) and qubit random rotation is proposed in this paper, in which QAT is used to scramble pixel positions and the gray information is changed by random qubit rotation. The independent random qubit rotations are applied once each in the spatial and frequency domains, with the help of the quantum Fourier transform (QFT). The encryption process accomplishes pixel confusion and diffusion, and finally the noise-like cipher image is obtained. Numerical simulation and theoretical analysis verify that the method is valid and it shows superior performance in security and computational complexity. Full article
(This article belongs to the collection Quantum Information)
Open AccessArticle An Entropy-Guided Monte Carlo Tree Search Approach for Generating Optimal Container Loading Layouts
Entropy 2018, 20(11), 866; https://doi.org/10.3390/e20110866
Received: 4 October 2018 / Revised: 7 November 2018 / Accepted: 7 November 2018 / Published: 9 November 2018
Viewed by 357 | PDF Full-text (872 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a novel approach to the container loading problem using a spatial entropy measure to bias a Monte Carlo Tree Search is proposed. The proposed algorithm generates layouts that achieve the goals of both fitting a constrained space and also having “consistency” or neatness that enables forklift truck drivers to apply them easily to real shipping containers loaded from one end. Three algorithms are analysed. The first is a basic Monte Carlo Tree Search, driven only by the principle of minimising the length of container that is occupied. The second is an algorithm that uses the proposed entropy measure to drive an otherwise random process. The third algorithm combines these two principles and produces superior results to either. These algorithms are then compared to a classical deterministic algorithm. It is shown that where the classical algorithm fails, the entropy-driven algorithms are still capable of providing good results in a short computational time. Full article
Open AccessArticle Optimization and Stability of Heat Engines: The Role of Entropy Evolution
Entropy 2018, 20(11), 865; https://doi.org/10.3390/e20110865
Received: 23 October 2018 / Revised: 5 November 2018 / Accepted: 7 November 2018 / Published: 9 November 2018
Viewed by 443 | PDF Full-text (2042 KB) | HTML Full-text | XML Full-text
Abstract
The local stability and dynamic evolution of the maximum power and maximum compromise (Omega) operation regimes of a low-dissipation heat engine are analyzed. The thermodynamic behavior of trajectories to the stationary state, after perturbing the operation regime, displays a trade-off between stability, entropy production, efficiency and power output. This allows considering stability and optimization as connected pieces of a single phenomenon. Trajectories inside the basin of attraction display the smallest entropy drops. Additionally, it was found that time constraints, related to irreversible and endoreversible behaviors, influence the thermodynamic evolution of relaxation trajectories. The behavior of the evolution was analyzed in terms of the symmetries of the model and the applied thermal gradients. Full article
(This article belongs to the Special Issue Entropy Generation and Heat Transfer)
Open AccessArticle Modeling and Fusing the Uncertainty of FMEA Experts Using an Entropy-Like Measure with an Application in Fault Evaluation of Aircraft Turbine Rotor Blades
Entropy 2018, 20(11), 864; https://doi.org/10.3390/e20110864
Received: 14 October 2018 / Revised: 3 November 2018 / Accepted: 7 November 2018 / Published: 9 November 2018
Viewed by 360 | PDF Full-text (354 KB) | HTML Full-text | XML Full-text
Abstract
As a typical tool of risk analysis in practical engineering, failure mode and effects analysis (FMEA) theory is a well known method for risk prediction and prevention. However, how to quantify the uncertainty of the subjective assessments from FMEA experts and aggregate the corresponding uncertainty into the classical FMEA approach still needs further study. In this paper, we argue that the subjective assessments of FMEA experts can be adopted to model the weight of each FMEA expert, which can be regarded as a data-driven method for ambiguity information modeling in the FMEA method. Based on this new perspective, a modified FMEA approach is proposed, where the subjective uncertainty of FMEA experts is handled in the framework of Dempster–Shafer evidence theory (DST). In the improved FMEA approach, the ambiguity measure (AM), which is an entropy-like uncertainty measure in the DST framework, is applied to quantify the uncertainty degree of each FMEA expert. Then, the classical risk priority number (RPN) model is improved by aggregating an AM-based weight factor into the RPN function. A case study applying the new RPN model to aircraft turbine rotor blades verifies the applicability and usefulness of the proposed FMEA approach. Full article
(This article belongs to the Special Issue Entropy-Based Fault Diagnosis)
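In the DST literature, the ambiguity measure (AM) of a basic probability assignment is commonly defined as the Shannon entropy of its pignistic probability, BetP(θ) = Σ_{A∋θ} m(A)/|A|. A minimal sketch under that reading; the frame and mass assignments below are illustrative, not taken from the case study:

```python
import math

def pignistic(masses, frame):
    """BetP(theta): each focal set A spreads its mass m(A) evenly over its elements."""
    betp = {theta: 0.0 for theta in frame}
    for focal, m in masses.items():
        for theta in focal:
            betp[theta] += m / len(focal)
    return betp

def ambiguity_measure(masses, frame):
    """AM(m): Shannon entropy (bits) of the pignistic probability distribution."""
    betp = pignistic(masses, frame)
    return -sum(p * math.log2(p) for p in betp.values() if p > 0)

# Totally ignorant expert: all mass on the whole frame -> maximal ambiguity
vacuous = {frozenset({"minor", "severe"}): 1.0}
print(ambiguity_measure(vacuous, {"minor", "severe"}))  # 1.0 bit
```

A fully confident assessment (all mass on a singleton) gives AM = 0, so an AM-based weight can down-weight the most ambiguous experts, in the spirit of the RPN modification described above.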
Open AccessArticle Sample Entropy of sEMG Signals at Different Stages of Rectal Cancer Treatment
Entropy 2018, 20(11), 863; https://doi.org/10.3390/e20110863
Received: 11 October 2018 / Revised: 5 November 2018 / Accepted: 7 November 2018 / Published: 9 November 2018
Cited by 1 | Viewed by 392 | PDF Full-text (424 KB) | HTML Full-text | XML Full-text
Abstract
Information theory provides a spectrum of nonlinear methods capable of grasping the internal structure of a signal together with an insight into its complex nature. In this work, we discuss the usefulness of selected entropy techniques for describing the information carried by surface electromyography signals during colorectal cancer treatment. The electrical activity of the external anal sphincter can serve as a potential source of knowledge of the actual state of the patient who underwent a common surgery for rectal cancer in the form of anterior or lower anterior resection. The calculation of Sample Entropy parameters has been extended to multiple time scales in terms of the Multiscale Sample Entropy. The specific values of the entropy measures and their dependence on the time scales were analyzed with regard to the time elapsed since the operation, the type of surgical treatment and also the different depths of the rectal canal. The Mann–Whitney U test and the Friedman ANOVA statistics indicate statistically significant differences among all stages of treatment and for all consecutive depths of the rectal area for the estimated Sample Entropy. Further analysis at multiple time scales reveals substantial differences among the compared stages of treatment in the group of patients who underwent the lower anterior resection. Full article
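The multiscale extension mentioned above rests on coarse-graining: the scale-s series averages s consecutive samples without overlap, and Sample Entropy is then computed on each coarse-grained series. A sketch of the coarse-graining step alone (the entropy parameters used in the study are not reproduced here):

```python
def coarse_grain(x, scale):
    """Scale-s coarse-graining used by Multiscale Sample Entropy: each output
    point is the mean of `scale` consecutive, non-overlapping samples; any
    trailing samples that do not fill a full window are dropped."""
    return [sum(x[i:i + scale]) / scale
            for i in range(0, len(x) - scale + 1, scale)]
```

At scale 1 the series is unchanged, so the multiscale curve always starts from the ordinary Sample Entropy value.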
Entropy EISSN 1099-4300 Published by MDPI AG, Basel, Switzerland