Table of Contents

Entropy, Volume 21, Issue 4 (April 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: In contrast to perfect rationality, bounded rationality takes into account information processing [...]
Displaying articles 1-108
Open Access Article Assessing the Performance of Hierarchical Forecasting Methods on the Retail Sector
Entropy 2019, 21(4), 436; https://doi.org/10.3390/e21040436
Received: 18 March 2019 / Revised: 16 April 2019 / Accepted: 22 April 2019 / Published: 24 April 2019
Abstract
Retailers need demand forecasts at different levels of aggregation to support a variety of decisions along the supply chain. To ensure aligned decision-making across the hierarchy, it is essential that forecasts at the most disaggregated level add up to forecasts at the aggregate levels above. It is not clear whether these aggregate forecasts should be generated independently or by using a hierarchical forecasting method, which ensures coherent decision-making at the different levels but does not guarantee the same accuracy. To give guidelines on this issue, our empirical study investigates the relative performance of independent and reconciled forecasting approaches, using real data from a Portuguese retailer. We consider two alternative forecasting model families for generating the base forecasts, namely state space models and ARIMA. Appropriate models from both families are chosen for each time series by minimising the bias-corrected Akaike information criterion. The results show significant improvements in forecast accuracy, providing valuable information to support management decisions. Reconciled forecasts using the Minimum Trace Shrinkage estimator (MinT-Shrink) generally improve on the accuracy of the ARIMA base forecasts at all levels and for the complete hierarchy, across all forecast horizons. The accuracy gains generally increase with the horizon, varying between 1.7% and 3.7% for the complete hierarchy. The gains are more substantial at the higher levels of aggregation, which means that information about the individual dynamics of the series, lost through aggregation, is recovered from the lower levels by the reconciliation process, substantially improving forecast accuracy over the base forecasts.
(This article belongs to the Special Issue Entropy Application for Forecasting)
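The reconciliation step described above can be sketched in a few lines. This is a minimal sketch, assuming a toy two-level hierarchy (total = A + B) and an identity error covariance, which reduces the trace-minimization projection to OLS reconciliation rather than the paper's MinT-Shrink estimator:

```python
import numpy as np

# Toy hierarchy: total = A + B. The summing matrix S maps the
# bottom-level series (A, B) to all levels (total, A, B).
S = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])

def reconcile(base, W):
    """Trace-minimization reconciliation: project incoherent base
    forecasts onto the coherent subspace spanned by S."""
    Winv = np.linalg.inv(W)
    G = np.linalg.inv(S.T @ Winv @ S) @ S.T @ Winv
    return S @ G @ base

# Incoherent base forecasts: the total does not equal A + B.
base = np.array([10.0, 6.0, 5.0])
coherent = reconcile(base, np.eye(3))  # W = I: OLS reconciliation
print(coherent)
assert np.isclose(coherent[0], coherent[1] + coherent[2])
```

With `W` set to a shrinkage estimate of the base-forecast error covariance instead of the identity, the same projection becomes the MinT-Shrink reconciliation used in the paper.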
Open Access Article Canonical Divergence for Measuring Classical and Quantum Complexity
Entropy 2019, 21(4), 435; https://doi.org/10.3390/e21040435
Received: 25 March 2019 / Revised: 15 April 2019 / Accepted: 18 April 2019 / Published: 24 April 2019
Abstract
A new canonical divergence is put forward for generalizing an information-geometric measure of complexity to both classical and quantum systems. On the simplex of probability measures, the new divergence is proved to coincide with the Kullback–Leibler divergence, which is used to quantify how much a probability measure deviates from the non-interacting states modeled by exponential families of probabilities. On the space of positive density operators, we prove that the same divergence reduces to the quantum relative entropy, which quantifies the many-party correlations of a quantum state relative to a Gibbs family.
(This article belongs to the Special Issue Quantum Entropies and Complexity)
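The classical side of this statement is easy to illustrate: the KL divergence of a joint distribution from the product of its marginals quantifies exactly the deviation from a non-interacting state. A minimal sketch:

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) for distributions on the
    probability simplex (natural log; 0 * log 0 taken as 0)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Distance of a correlated joint distribution from the nearest
# product (non-interacting) state: the product of its marginals.
p = np.array([0.4, 0.1, 0.1, 0.4])          # joint over two bits
px = np.array([p[0] + p[1], p[2] + p[3]])   # marginal of bit 1
py = np.array([p[0] + p[2], p[1] + p[3]])   # marginal of bit 2
q = np.outer(px, py).ravel()                # product distribution
print(kl_divergence(p, q))                  # > 0: the bits are correlated
assert kl_divergence(p, p) == 0.0
```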
Open Access Article Evolution Model of Spatial Interaction Network in Online Social Networking Services
Entropy 2019, 21(4), 434; https://doi.org/10.3390/e21040434
Received: 22 March 2019 / Revised: 15 April 2019 / Accepted: 23 April 2019 / Published: 24 April 2019
Abstract
The development of online social networking services provides a rich source of social network data that includes geospatial information. A growing body of research has shown that geographical space is an important factor in the interactions of users in social networks. In this paper, we construct a spatial interaction network at the city level, called the city interaction network, and study the evolution mechanism of this network as it forms during information dissemination in social networks. A network evolution model for interactions among cities is established, consisting of two core processes: edge arrival and preferential attachment of edges. The edge arrival model arranges the arrival time of each edge; the preferential attachment model determines the source node and the target node of each arriving edge. Six preferential attachment models (Random-Random, Random-Degree, Degree-Random, Geographical distance, Degree-Degree, Degree-Degree-Geographical distance) are built and compared using the maximum likelihood approach. We find that node degree and the geographic distance spanned by an edge are the key factors affecting the evolution of the city interaction network. Finally, evolution experiments using the optimal model, DDG, are conducted, and the results are compared with the real city interaction network extracted from the information dissemination data of WeChat web pages. The results indicate that the model captures not only the attributes of the real city interaction network but also the actual characteristics of the interactions among cities.
(This article belongs to the Special Issue Computation in Complex Networks)
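A hypothetical sketch of a degree-and-distance attachment kernel follows. The abstract does not give the exact functional form of the DDG model, so proportionality to degree/distance is an illustrative assumption, as are the city names and numbers:

```python
import random

# Hypothetical DDG-style kernel: a new edge picks its target city with
# probability proportional to degree(c) / distance(source, c).
degrees = {"A": 3, "B": 1, "C": 2}        # current degrees (illustrative)
dist = {"A": 10.0, "B": 2.0, "C": 5.0}    # distance from the source city

def pick_target(degrees, dist):
    """Roulette-wheel selection with weight degree / distance."""
    weights = {c: degrees[c] / dist[c] for c in degrees}
    r = random.random() * sum(weights.values())
    for c, w in weights.items():
        r -= w
        if r <= 0:
            return c
    return c  # numerical fallback

random.seed(0)
counts = {c: 0 for c in degrees}
for _ in range(10000):
    counts[pick_target(degrees, dist)] += 1
print(counts)  # "B" dominates: its small distance outweighs its low degree
```

The point of the sketch is the trade-off the paper's model comparison is about: a low-degree city can still attract edges if it is geographically close.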
Open Access Article Robust Baseband Compression Against Congestion in Packet-Based Fronthaul Networks Using Multiple Description Coding
Entropy 2019, 21(4), 433; https://doi.org/10.3390/e21040433
Received: 4 February 2019 / Revised: 18 April 2019 / Accepted: 22 April 2019 / Published: 24 April 2019
Abstract
In modern implementations of the Cloud Radio Access Network (C-RAN), the fronthaul transport network will often be packet-based, with a multi-hop architecture built from general-purpose switches using network function virtualization (NFV) and software-defined networking (SDN). This paper studies the joint design of uplink radio and fronthaul transmission strategies for a C-RAN with a packet-based fronthaul network. To make efficient use of the multiple routes that carry fronthaul packets from the remote radio heads (RRHs) to the cloud, a multiple description coding (MDC) strategy is introduced that operates directly at the level of baseband signals, as an alternative to more conventional packet-based multi-route reception or coding. MDC ensures an improved quality of the signal received at the cloud under low network congestion, i.e., when more fronthaul packets are received within the tolerated deadline. The advantages of the proposed MDC approach over the traditional path diversity scheme are validated via extensive numerical results.
(This article belongs to the Special Issue Information Theory for Data Communications and Processing)
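The MDC idea can be illustrated with the simplest possible two-description scheme, assuming even/odd sample splitting and linear interpolation on loss; the paper's actual quantizer-based design is more elaborate, so this is only a sketch of the principle:

```python
import numpy as np

# Two-description MDC over baseband samples: even and odd samples travel
# on different fronthaul routes; if one packet is dropped, the missing
# samples are linearly interpolated from the surviving description.
signal = np.sin(np.linspace(0, np.pi, 32))
desc_even, desc_odd = signal[0::2], signal[1::2]

def reconstruct(even, odd):
    """Both descriptions received: interleave losslessly."""
    out = np.empty(len(even) + len(odd))
    out[0::2], out[1::2] = even, odd
    return out

def reconstruct_even_only(even, n):
    """Odd description lost: interpolate odd samples from neighbors."""
    out = np.empty(n)
    out[0::2] = even
    out[1:-1:2] = 0.5 * (even[:-1] + even[1:])
    out[-1] = even[-1]  # no right neighbor for the last odd sample
    return out

assert np.allclose(reconstruct(desc_even, desc_odd), signal)
err = np.max(np.abs(reconstruct_even_only(desc_even, 32) - signal))
print(f"max error with one description lost: {err:.4f}")  # small but nonzero
```

This exhibits the MDC property the abstract relies on: each description alone yields a usable (degraded) signal, and receiving both within the deadline restores full quality.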
Open Access Article Examining the Limits of Predictability of Human Mobility
Entropy 2019, 21(4), 432; https://doi.org/10.3390/e21040432
Received: 8 March 2019 / Revised: 12 April 2019 / Accepted: 18 April 2019 / Published: 24 April 2019
Abstract
We challenge the upper bound of human-mobility predictability that is widely used to corroborate the accuracy of mobility prediction models. We observe that extensions of recurrent neural network architectures achieve significantly higher prediction accuracy, surpassing this upper bound. Given this discrepancy, the central objective of our work is to show that the methodology behind the estimation of the predictability upper bound is erroneous, and to identify the reasons behind this discrepancy. To explain this anomaly, we shed light on several underlying assumptions that have contributed to this bias. In particular, we highlight the consequences of the assumed Markovian nature of human mobility for the derivation of this upper bound on mobility predictability. Using several statistical tests on three real-world mobility datasets, we show that human mobility exhibits scale-invariant long-distance dependencies, contradicting the initial Markovian assumption. We show that this assumption of exponential decay of information in mobility trajectories, coupled with the inadequate use of encoding techniques, results in entropy inflation, consequently lowering the upper bound on predictability. We highlight that the current upper-bound computation methodology based on Fano's inequality tends to overlook the long-range structural correlations inherent in mobility behaviors, and we demonstrate their significance using an alternative encoding scheme. We further show the consequences of not accounting for these dependencies by probing the mutual-information decay in mobility trajectories. We expose the systematic bias that culminates in an inaccurate upper bound, and explain why recurrent neural architectures, designed to handle long-range structural correlations, surpass this upper limit on human-mobility predictability.
(This article belongs to the Section Information Theory, Probability and Statistics)
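The upper bound under discussion is the classical one obtained by solving Fano's inequality for the maximum predictability given an entropy-rate estimate S and N distinct locations; the entropy inflation the paper describes lowers this bound. A sketch of the computation (the numeric inputs below are illustrative, not taken from the paper's datasets):

```python
from math import log2

def predictability_upper_bound(S, N, tol=1e-10):
    """Solve Fano's relation S = H(p) + (1 - p) * log2(N - 1) for the
    maximum predictability p by bisection.
    S: entropy rate in bits/symbol, N: number of distinct locations."""
    def H(p):  # binary entropy
        if p in (0.0, 1.0):
            return 0.0
        return -p * log2(p) - (1 - p) * log2(1 - p)
    lo, hi = 1.0 / N, 1.0  # the left side is decreasing in p on this range
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if H(mid) + (1 - mid) * log2(N - 1) > S:
            lo = mid
        else:
            hi = mid
    return lo

# A lower entropy estimate yields a higher predictability bound, which
# is why entropy inflation artificially depresses the bound.
print(predictability_upper_bound(3.0, 64))  # ≈ 0.65
assert predictability_upper_bound(2.0, 64) > predictability_upper_bound(3.0, 64)
```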
Open Access Review Welding of High Entropy Alloys—A Review
Entropy 2019, 21(4), 431; https://doi.org/10.3390/e21040431
Received: 22 February 2019 / Revised: 10 April 2019 / Accepted: 15 April 2019 / Published: 24 April 2019
Abstract
High-entropy alloys (HEAs) offer great flexibility in materials design, with 3–5 principal elements and a range of unique advantages such as good microstructure stability, mechanical strength over a broad range of temperatures, and corrosion resistance. Welding of high-entropy alloys, as a key joining method, is an important emerging area with significant potential impact on future application-oriented research and technological developments in HEAs. The selection of feasible welding processes with optimized parameters is essential to broaden the applications of HEAs. However, the structure of the welded joints varies with material system, welding method and parameters. A systematic understanding of the structures and properties of the weldment is directly relevant to the application of HEAs, as well as to managing effects of welding, such as corrosion, that are known to limit the service life of welded structures in environments such as the marine environment. In this paper, key recent work on the welding of HEAs is reviewed in detail, focusing on research into the main HEA systems under different welding techniques. The experimental details, including sample preparation, sample size (thickness) and welding conditions reflecting energy input, are summarized and key issues are highlighted. The microstructures and properties of the different welding zones, in particular the fusion zone (FZ) and the heat-affected zone (HAZ), formed with different welding methods are compared in detail, and the structure-property relationships are discussed. The work shows that the weldability of HEAs varies with the HEA composition group and the welding method employed. Arc and laser welding of AlCoCrFeNi HEAs results in lower hardness in the FZ and HAZ and reduced overall strength. Friction stir welding results in higher hardness in the FZ and achieves comparable or higher strength of the welded joints in tensile tests. The welded HEAs are capable of maintaining a reasonable proportion of their ductility. The key structural changes, including element distribution, the volume fractions of the face-centered cubic (FCC) and body-centered cubic (BCC) phases, and reported changes in lattice constants, are summarized and analyzed. The mechanisms governing the mechanical properties, including the grain size-property/hardness relationship in the form of the Hall–Petch (H–P) effect, are compared for both bulk and welded HEA structures. Finally, future challenges and main research areas are highlighted.
(This article belongs to the Special Issue New Advances in High-Entropy Alloys)
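The Hall–Petch relation mentioned in the review, sigma_y = sigma_0 + k / sqrt(d), is easy to sketch numerically. The constants below are illustrative placeholders, not fitted values for any particular HEA or weld zone:

```python
from math import sqrt

def hall_petch(d_um, sigma0_mpa=125.0, k_mpa_um=494.0):
    """Hall-Petch yield strength (MPa) for grain size d in micrometres.
    sigma0 (friction stress) and k (H-P slope) are placeholder values."""
    return sigma0_mpa + k_mpa_um / sqrt(d_um)

# Grain refinement (e.g. in a friction-stir-welded stir zone) raises
# the predicted yield strength.
for d in (100.0, 10.0, 1.0):
    print(f"d = {d:6.1f} um -> sigma_y = {hall_petch(d):7.1f} MPa")
assert hall_petch(1.0) > hall_petch(100.0)  # finer grains, higher strength
```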
Open Access Article A q-Extension of Sigmoid Functions and the Application for Enhancement of Ultrasound Images
Entropy 2019, 21(4), 430; https://doi.org/10.3390/e21040430
Received: 14 March 2019 / Revised: 14 April 2019 / Accepted: 17 April 2019 / Published: 23 April 2019
Abstract
This paper proposes the q-sigmoid functions, variations of the sigmoid expressions, and analyzes their application to enhancing regions of interest in digital images. These new functions are based on non-extensive Tsallis statistics, which arise in statistical mechanics through the use of q-exponential functions. The potential of q-sigmoids for image processing is demonstrated in tasks of region enhancement in ultrasound images, which are highly affected by speckle noise. Before demonstrating the results on real images, we study the asymptotic behavior of these functions and the effect of the obtained expressions when processing synthetic images. In both experiments, the q-sigmoids outperformed the original sigmoid functions, as well as two other well-known methods for the enhancement of regions of interest: slicing and histogram equalization. These results show that q-sigmoids can be used as a preprocessing step in pipelines that include segmentation, as demonstrated for the Otsu algorithm, and in deep learning approaches for further feature extraction and analysis.
(This article belongs to the Special Issue Entropy in Image Analysis)
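The building block here is the Tsallis q-exponential. One plausible q-deformation of the logistic sigmoid built on it is sketched below for q < 1; the paper's exact parameterization may differ, so treat this as an illustration of the construction, not the authors' definition:

```python
import numpy as np

def q_exp(x, q):
    """Tsallis q-exponential (q < 1 here); recovers exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)  # cutoff where the base < 0
    return base ** (1.0 / (1.0 - q))

def q_sigmoid(x, q):
    """A q-deformed logistic sigmoid: 1 / (1 + exp_q(-x))."""
    return 1.0 / (1.0 + q_exp(-x, q))

x = np.linspace(-5.0, 5.0, 11)
assert np.allclose(q_sigmoid(x, 1.0), 1.0 / (1.0 + np.exp(-x)))
print(q_sigmoid(x, 0.5))  # saturates exactly at 1 beyond the cutoff
```

The hard saturation for q < 1 is what makes such functions attractive for region enhancement: intensities past the cutoff are mapped flat, while the transition band is stretched.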
Open Access Article A Data-Weighted Prior Estimator for Forecast Combination
Entropy 2019, 21(4), 429; https://doi.org/10.3390/e21040429
Received: 7 February 2019 / Revised: 11 April 2019 / Accepted: 18 April 2019 / Published: 23 April 2019
Abstract
Forecast combination methods reduce the information in a vector of forecasts to a single combined forecast by using a set of combination weights. Although there are several methods, a typical strategy is to use the simple arithmetic mean to obtain the combined forecast. A priori, the use of this mean could be justified when all the forecasters have performed equally well in the past or when they do not have enough information. In this paper, we explore the possibility of using entropy econometrics as a procedure for combining forecasts that can discriminate between bad and good forecasters, even with little information. For this purpose, the data-weighted prior (DWP) estimator proposed by Golan (2001) is used for forecaster selection and simultaneous parameter estimation in linear statistical models. In particular, we examine the ability of the DWP estimator to effectively select relevant forecasts from among all forecasts. We test the accuracy of the proposed model with a simulation exercise and compare its ex ante forecasting performance with other methods used to combine forecasts. The results suggest that the proposed method dominates other combining methods, such as equal-weight averages and ordinary least squares, among others.
(This article belongs to the Special Issue Entropy Application for Forecasting)
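The baselines being compared can be sketched directly. The DWP estimator itself is not reproduced here, but the gap it aims to exploit, equal weights versus weights that discount an uninformative forecaster, already shows up with ordinary least squares on synthetic data (all numbers below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
y = rng.normal(size=n).cumsum()  # target series (random walk)
# Three forecasters: two informative with different noise levels,
# one pure noise that a good combination scheme should discount.
F = np.column_stack([
    y + rng.normal(scale=0.5, size=n),
    y + rng.normal(scale=1.0, size=n),
    rng.normal(scale=3.0, size=n),
])

def mse(w):
    return float(np.mean((F @ w - y) ** 2))

w_equal = np.full(3, 1.0 / 3.0)
w_ols, *_ = np.linalg.lstsq(F, y, rcond=None)  # least-squares weights

print(f"equal-weight MSE: {mse(w_equal):.3f}, OLS MSE: {mse(w_ols):.3f}")
assert mse(w_ols) < mse(w_equal)  # OLS discounts the noise forecaster
```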
Open Access Article Thermodynamic Investigation of an Integrated Solar Combined Cycle with an ORC System
Entropy 2019, 21(4), 428; https://doi.org/10.3390/e21040428
Received: 11 February 2019 / Revised: 11 April 2019 / Accepted: 18 April 2019 / Published: 22 April 2019
Abstract
An integrated solar combined cycle (ISCC) with a low-temperature waste-heat recovery system is proposed in this paper. The combined system consists of a conventional natural gas combined cycle, an organic Rankine cycle and solar fields. The performance of the organic Rankine cycle subsystem, as well as of the overall proposed ISCC system, is analyzed using organic working fluids. Parameters including the pump discharge pressure, exhaust gas temperature, thermal and exergy efficiencies, unit cost of exergy for the product, and annual CO2 savings were considered. Results indicate that, at 800 W/m2 in the proposed ISCC system, RC318 yields the highest exhaust gas temperature, 71.2 °C, while R113 shows the lowest, 65.89 °C. The overall plant thermal efficiency increases rapidly with solar radiation, while the exergy efficiency shows a downward trend. R227ea had both the largest thermal efficiency, 58.33%, and the largest exergy efficiency, 48.09%, at 800 W/m2. In addition, for the organic Rankine cycle, the exergy destruction of the evaporator, turbine and condenser decreased with increasing solar radiation. The evaporator contributed the largest exergy destruction, followed by the turbine, condenser and pump. Finally, according to the economic analysis, R227ea had the lowest production cost, 19.3 $/GJ.
(This article belongs to the Section Thermodynamics)
Open Access Article Secure Transmission in mmWave Wiretap Channels: On Sector Guard Zone and Blockages
Entropy 2019, 21(4), 427; https://doi.org/10.3390/e21040427
Received: 2 January 2019 / Revised: 14 April 2019 / Accepted: 17 April 2019 / Published: 22 April 2019
Abstract
Millimeter-wave (mmWave) communication is one of the key enabling technologies for fifth generation (5G) mobile networks. In this paper, we study the problem of secure communication in a mmWave wiretap network, taking directional beamforming and link blockages into account. For secure transmission in the presence of spatially random eavesdroppers, an adaptive transmission scheme is adopted, in which a sector secrecy guard zone and artificial noise (AN) are employed to enhance secrecy performance. When no eavesdropper is present within the sector secrecy guard zone, the transmitter sends only the information-bearing signal; otherwise, AN is transmitted along with the information-bearing signal. Closed-form expressions for the secrecy outage probability (SOP), connection outage probability (COP) and secrecy throughput are derived using stochastic geometry. We then evaluate the effect of the sector secrecy guard zone and AN on secrecy performance. Our results reveal that the sector secrecy guard zone and AN can significantly improve the security of the system, and that blockages can also be exploited to improve secrecy performance. A simple choice of transmit power and power allocation factor is provided for achieving higher secrecy throughput. Furthermore, increasing the density of eavesdroppers does not always degrade secrecy performance, thanks to the sector secrecy guard zone and AN.
(This article belongs to the Section Information Theory, Probability and Statistics)
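The secrecy outage event itself can be sketched with a toy Monte Carlo, assuming Rayleigh fading and illustrative mean SNRs; the paper derives closed-form expressions under stochastic geometry, and none of the parameters below come from it:

```python
import numpy as np

# Secrecy outage: the secrecy capacity log2(1+SNR_B) - log2(1+SNR_E)
# falls below a target secrecy rate Rs. Exponential SNRs model
# Rayleigh fading on both links.
rng = np.random.default_rng(7)
trials, Rs = 200_000, 1.0                      # Rs in bits/s/Hz
snr_b = 10.0 * rng.exponential(size=trials)    # legitimate link, mean SNR 10
snr_e = 1.0 * rng.exponential(size=trials)     # eavesdropper link, mean SNR 1

cs = np.log2(1 + snr_b) - np.log2(1 + snr_e)   # instantaneous secrecy capacity
sop = float(np.mean(cs < Rs))
print(f"SOP ≈ {sop:.3f}")
assert 0.0 < sop < 1.0
```

Adding a guard zone or AN in such a simulation amounts to conditioning on the eavesdropper geometry or degrading `snr_e`, which is the mechanism the paper exploits analytically.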
Open Access Article An Entropy-Based Car Failure Detection Method Based on Data Acquisition Pipeline
Entropy 2019, 21(4), 426; https://doi.org/10.3390/e21040426
Received: 11 March 2019 / Revised: 18 April 2019 / Accepted: 18 April 2019 / Published: 22 April 2019
Abstract
Modern cars are equipped with many electronic devices called Electronic Control Units (ECUs). ECUs collect diagnostic data from a car's components, such as the engine and brakes; these data are then processed, and the appropriate information is communicated to the driver. From the point of view of the safety of the driver and passengers, information about car faults is vital. Despite the development of on-board computers, only a small amount of this information is passed on to the driver. With a data mining approach, it is possible to obtain much more information from the data than is provided by standard car equipment. This paper describes the environment built by the authors for collecting data from ECUs. The collected data have been processed using parameterized entropies and data mining algorithms. Finally, we built a classifier able to detect a malfunctioning thermostat even when the car's own equipment does not indicate it.
(This article belongs to the Special Issue Entropy-Based Fault Diagnosis)
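The parameterized entropies used as features can be illustrated with the Rényi family, which recovers Shannon entropy as alpha approaches 1. The signal values and binning below are hypothetical, not taken from the paper's ECU data:

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Parameterized (Renyi) entropy in bits; Shannon entropy at alpha = 1."""
    p = np.asarray(p, float)
    p = p[p > 0]
    if abs(alpha - 1.0) < 1e-12:
        return float(-np.sum(p * np.log2(p)))
    return float(np.log2(np.sum(p ** alpha)) / (1.0 - alpha))

# Hypothetical use: histogram an ECU signal (e.g. coolant temperature)
# and feed entropies at several alpha values to a classifier as features.
signal = np.array([88, 89, 90, 88, 87, 90, 91, 89, 88, 90], float)
hist, _ = np.histogram(signal, bins=5)
p = hist / hist.sum()
features = [renyi_entropy(p, a) for a in (0.5, 1.0, 2.0)]
print(features)  # Renyi entropy is non-increasing in alpha
assert features[0] >= features[1] >= features[2]
```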
Open Access Article Analysis of Weak Fault in Hydraulic System Based on Multi-scale Permutation Entropy of Fault-Sensitive Intrinsic Mode Function and Deep Belief Network
Entropy 2019, 21(4), 425; https://doi.org/10.3390/e21040425
Received: 15 March 2019 / Revised: 11 April 2019 / Accepted: 17 April 2019 / Published: 22 April 2019
Abstract
Aiming at the automatic recognition of weak faults in hydraulic systems, this paper proposes an identification method based on multi-scale permutation entropy feature extraction from fault-sensitive intrinsic mode functions (IMFs) and a deep belief network (DBN). In this method, the leakage fault signal is first decomposed by empirical mode decomposition (EMD), and fault-sensitive IMF components are screened by correlation analysis. The multi-scale permutation entropy features of each screened IMF are then extracted, yielding features closely related to the weak fault information. Finally, a DBN is used for fault identification. Experimental results show that this method achieves good recognition performance: it can accurately judge whether there is a leakage fault, determine the severity of the fault, and diagnose and analyze weak hydraulic faults in general.
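The feature-extraction step, multi-scale permutation entropy, can be sketched as coarse-graining followed by Bandt–Pompe permutation entropy; the EMD screening and DBN stages are omitted, and the test signals below are synthetic stand-ins for IMF components:

```python
import numpy as np
from math import log, factorial

def permutation_entropy(x, m=3, delay=1):
    """Normalized Bandt-Pompe permutation entropy of order m."""
    counts = {}
    for i in range(len(x) - (m - 1) * delay):
        pattern = tuple(np.argsort(x[i:i + m * delay:delay]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum(c / total * log(c / total) for c in counts.values())
    return h / log(factorial(m))  # normalize by the maximum, log(m!)

def multiscale_pe(x, m=3, max_scale=4):
    """Coarse-grain the series at each scale, then take its PE."""
    feats = []
    for s in range(1, max_scale + 1):
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)  # block averages
        feats.append(permutation_entropy(coarse, m))
    return feats

rng = np.random.default_rng(0)
noise = rng.normal(size=2000)                    # irregular signal
trend = np.sin(np.linspace(0, 8 * np.pi, 2000))  # ordered signal
print(multiscale_pe(noise))  # near 1.0 at every scale
print(multiscale_pe(trend))  # much lower values
```

The resulting vector of per-scale entropies is the kind of feature a DBN classifier could then consume.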
Open Access Article Soft Randomized Machine Learning Procedure for Modeling Dynamic Interaction of Regional Systems
Entropy 2019, 21(4), 424; https://doi.org/10.3390/e21040424
Received: 19 March 2019 / Revised: 10 April 2019 / Accepted: 11 April 2019 / Published: 20 April 2019
Abstract
The paper suggests a randomized model for the dynamic migratory interaction of regional systems. The locally stationary states of migration flows in the basic and immigration systems are described by corresponding entropy operators. A soft randomization procedure that defines the optimal probability density functions of system parameters and measurement noises is developed. The advantages of soft randomization with approximate empirical data balance conditions are demonstrated; it considerably reduces algorithmic complexity and computational resource demands. An example of migratory interaction modeling and testing is given.
(This article belongs to the Special Issue Entropy Application for Forecasting)
Open Access Article Optimizing Deep CNN Architectures for Face Liveness Detection
Entropy 2019, 21(4), 423; https://doi.org/10.3390/e21040423
Received: 28 March 2019 / Revised: 17 April 2019 / Accepted: 18 April 2019 / Published: 20 April 2019
Abstract
Face recognition is a popular and efficient form of biometric authentication used in many software applications. One drawback of this technique is that it is prone to face spoofing attacks, in which an impostor can gain access to the system by presenting a photograph of a valid user to the sensor. Thus, face liveness detection is a necessary step before granting authentication to the user. In this paper, we develop deep architectures for face liveness detection that use a combination of texture analysis and a convolutional neural network (CNN) to classify the captured image as real or fake. Our approach greatly improves upon a recent method that applies nonlinear diffusion, based on an additive operator splitting scheme and a tridiagonal matrix block-solver algorithm, to the image, enhancing the edges and surface texture of real images. We then feed the diffused image to a deep CNN to identify the complex and deep features for classification. We obtained 100% accuracy on the NUAA Photograph Impostor dataset for face liveness detection using one of our enhanced architectures. Further, we gained insight into the design of face liveness detection architectures by evaluating three different deep architectures: a deep CNN, a residual network, and Inception network version 4. We evaluated the performance of each of these architectures on the NUAA dataset and present experimental results showing under which conditions each architecture is better suited for face liveness detection. While the residual network gave competitive results, Inception version 4 produced the optimal accuracy of 100% in liveness detection (with nonlinear anisotropic diffused images with a smoothness parameter of 15). Our approach outperformed all current state-of-the-art methods.
Open Access Article The Einstein–Podolsky–Rosen Steering and Its Certification
Entropy 2019, 21(4), 422; https://doi.org/10.3390/e21040422
Received: 17 January 2019 / Revised: 3 April 2019 / Accepted: 16 April 2019 / Published: 20 April 2019
Abstract
Einstein–Podolsky–Rosen (EPR) steering is a subtle intermediate correlation between entanglement and Bell nonlocality. It not only theoretically completes the picture of non-local effects but also practically inspires novel quantum protocols in specific scenarios. However, verifying EPR steering is still challenging due to the difficulty of bounding unsteerable correlations. In this survey, the basic framework for studying bipartite EPR steering is discussed, and general techniques to certify EPR steering correlations are reviewed.
(This article belongs to the Special Issue Quantum Nonlocality)
Open Access Letter Nucleation and Cascade Features of Earthquake Mainshock Statistically Explored from Foreshock Seismicity
Entropy 2019, 21(4), 421; https://doi.org/10.3390/e21040421
Received: 16 January 2019 / Revised: 10 April 2019 / Accepted: 11 April 2019 / Published: 19 April 2019
Viewed by 219 | PDF Full-text (3052 KB) | HTML Full-text | XML Full-text
Abstract
The relation between the size of an earthquake mainshock preparation zone and the magnitude of the forthcoming mainshock is different between nucleation and domino-like cascade models. The former model indicates that magnitude is predictable before an earthquake’s mainshock because the preparation zone is [...] Read more.
The relation between the size of an earthquake mainshock's preparation zone and the magnitude of the forthcoming mainshock differs between the nucleation and domino-like cascade models. The former model implies that magnitude is predictable before a mainshock because the preparation zone is related to the rupture area. In contrast, the latter implies that magnitude is essentially unpredictable because it is practically impossible to predict the size of the final rupture, which likely consists of a sequence of smaller earthquakes. As this question is still controversial, we examine both models statistically by comparing the spatial occurrence rates of foreshocks and aftershocks. Using earthquake catalogs from three regions, California, Japan, and Taiwan, we show that the spatial occurrence rates of foreshocks and aftershocks display similar behavior, and this feature does not vary among the regions. An interpretation of this result, based on statistical analyses, indicates that the nucleation model is dominant. Full article
Open AccessArticle Numerical Investigation on the Thermal Performance of Nanofluid-Based Cooling System for Synchronous Generators
Entropy 2019, 21(4), 420; https://doi.org/10.3390/e21040420
Received: 7 March 2019 / Revised: 12 April 2019 / Accepted: 17 April 2019 / Published: 19 April 2019
Viewed by 181 | PDF Full-text (4974 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a nanofluid-based cooling method for a brushless synchronous generator (BLSG) by using Al2O3 lubricating oil. In order to demonstrate the superiority of the nanofluid-based cooling method, analysis of the thermal performance and efficiency of the nanofluid-based cooling [...] Read more.
This paper presents a nanofluid-based cooling method for a brushless synchronous generator (BLSG) that uses Al2O3 lubricating oil. To demonstrate the superiority of the nanofluid-based cooling method, the thermal performance and efficiency of the nanofluid-based cooling system (NBCS) for the BLSG are analyzed through modeling and a set of simulation cases arranged for the NBCS. Compared with the results obtained under the base-fluid cooling condition, the results show that the nanofluid-based cooling method reduces the steady-state temperature and power losses in the BLSG and decreases the temperature settling time and changing ratio, demonstrating that both the steady-state and transient thermal performance of the NBCS improve as the nanoparticle volume fraction (NVF) in the nanofluid increases. Moreover, although the input power of the cycling pumps in the NBCS increases by ~30% when the NVF is 10%, the efficiency of the NBCS increases slightly because the 4.1% reduction in the power loss of the BLSG outweighs the total increase in the input power of the cycling pumps. These results illustrate the superiority of the nanofluid-based cooling method and indicate that the proposed method has broad application prospects in the thermal control of onboard synchronous generators with high power density. Full article
Open AccessArticle Entropic Statistical Description of Big Data Quality in Hotel Customer Relationship Management
Entropy 2019, 21(4), 419; https://doi.org/10.3390/e21040419
Received: 23 February 2019 / Revised: 7 April 2019 / Accepted: 17 April 2019 / Published: 19 April 2019
Viewed by 189 | PDF Full-text (1493 KB) | HTML Full-text | XML Full-text
Abstract
Customer Relationship Management (CRM) is a fundamental tool in the hospitality industry nowadays, which can be seen as a big-data scenario due to the large amount of recordings which are annually handled by managers. Data quality is crucial for the success of these [...] Read more.
Customer Relationship Management (CRM) is a fundamental tool in the hospitality industry nowadays, and it can be seen as a big-data scenario due to the large number of records handled annually by managers. Data quality is crucial for the success of these systems, and one of the main issues to be solved by businesses in general, and by hospitality businesses in particular, is the identification of duplicated customers, which has not received much attention in the recent literature, partly because it is not an easy problem to state in statistical terms. In the present work, we formulate duplicated-customer identification as a large-scale data-analysis problem, and we propose and benchmark a general-purpose solution for it. Our system consists of four basic elements: (a) a generic feature representation for the customer fields in a simple table-shaped database; (b) an efficient distance for comparing feature values, namely the Levenshtein distance computed with the Wagner-Fischer algorithm; (c) a big-data implementation using basic map-reduce techniques to readily support the comparison of strategies; and (d) an X-from-M criterion to identify the possible neighbors of a duplicated-customer candidate. We analyze the mass density function of the distances in the CRM text-based fields and characterize their behavior and consistency in terms of the entropy and the mutual information of these fields. Our experiments on a large CRM from a multinational hospitality chain show that the distance distributions are statistically consistent for each feature, and that neighbourhood thresholds are automatically adjusted by the system in a first step and can subsequently be tuned more finely according to manager experience.
The entropy distributions for the different variables, as well as the mutual information between pairs, are characterized by multimodal profiles, in which a wide gap between close and far fields is often present. This motivates the proposal of the so-called X-from-M strategy, which is shown to be computationally affordable and provides the expert with a reduced number of duplicated candidates to supervise, low X values being enough to warrant the sensitivity required at the automatic detection stage. The proposed system supports the benefits of big-data technologies in CRM scenarios for hotel chains and, rather than relying on ad-hoc heuristic rules, promotes the research and development of theoretically principled approaches. Full article
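Element (b) of the system, the Levenshtein distance computed with the Wagner-Fischer dynamic program, can be sketched in a few lines. This is a minimal single-pair version for illustration; the paper's feature representation and map-reduce deployment are not reproduced, and the function name is ours:

```python
def levenshtein(a: str, b: str) -> int:
    """Wagner-Fischer dynamic programming over a (len(a)+1) x (len(b)+1)
    grid, kept as two rolling rows so memory stays O(len(b))."""
    prev = list(range(len(b) + 1))  # distances from "" to prefixes of b
    for i, ca in enumerate(a, start=1):
        curr = [i]  # distance from a[:i] to ""
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution (or match)
        prev = curr
    return prev[-1]
```

For example, `levenshtein("kitten", "sitting")` returns 3 (two substitutions and one insertion); thresholding such per-field distances is what the X-from-M neighbourhood criterion then operates on.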
(This article belongs to the Section Signal and Data Analysis)
Open AccessArticle Role of Quantum Entropy and Establishment of H-Theorems in the Presence of Graviton Sinks for Manifestly-Covariant Quantum Gravity
Entropy 2019, 21(4), 418; https://doi.org/10.3390/e21040418
Received: 1 April 2019 / Revised: 11 April 2019 / Accepted: 17 April 2019 / Published: 19 April 2019
Viewed by 204 | PDF Full-text (332 KB) | HTML Full-text | XML Full-text
Abstract
Based on the introduction of a suitable quantum functional, identified here with the Boltzmann–Shannon entropy, entropic properties of the quantum gravitational field are investigated in the framework of manifestly-covariant quantum gravity theory. In particular, focus is given to gravitational quantum states in a [...] Read more.
Based on the introduction of a suitable quantum functional, identified here with the Boltzmann–Shannon entropy, entropic properties of the quantum gravitational field are investigated in the framework of manifestly-covariant quantum gravity theory. In particular, focus is given to gravitational quantum states in a background de Sitter space-time, with the addition of possible quantum non-unitarity effects modeled in terms of an effective quantum graviton sink localized near the de Sitter event horizon. The theory of manifestly-covariant quantum gravity developed accordingly is shown to retain its emergent-gravity features, which are recovered when the generalized-Lagrangian-path formalism is adopted, yielding a stochastic trajectory-based representation of the quantum wave equation. This permits the analytic determination of the quantum probability density function associated with the quantum gravity state, represented in terms of a generally dynamically-evolving shifted Gaussian function. As an application, the study of the entropic properties of quantum gravity is developed and the conditions for the existence of a local H-theorem or, alternatively, of a constant H-theorem are established. Full article
(This article belongs to the Special Issue Entropy in Covariant Quantum Gravity)
Open AccessArticle Entropy Rate Superpixel Classification for Automatic Red Lesion Detection in Fundus Images
Entropy 2019, 21(4), 417; https://doi.org/10.3390/e21040417
Received: 28 February 2019 / Revised: 17 April 2019 / Accepted: 17 April 2019 / Published: 19 April 2019
Viewed by 218 | PDF Full-text (24876 KB) | HTML Full-text | XML Full-text
Abstract
Diabetic retinopathy (DR) is the main cause of blindness in the working-age population in developed countries. Digital color fundus images can be analyzed to detect lesions for large-scale screening. Thereby, automated systems can be helpful in the diagnosis of this disease. The aim [...] Read more.
Diabetic retinopathy (DR) is the main cause of blindness in the working-age population of developed countries. Digital color fundus images can be analyzed to detect lesions for large-scale screening, so automated systems can be helpful in the diagnosis of this disease. The aim of this study was to develop a method to automatically detect red lesions (RLs), including hemorrhages and microaneurysms, in retinal images; these signs are the earliest indicators of DR. Firstly, we performed a novel preprocessing stage to normalize the inter-image and intra-image appearance and enhance the retinal structures. Secondly, the Entropy Rate Superpixel method was used to segment the potential RL candidates. Then, we reduced the number of superpixel candidates by merging inaccurately fragmented regions within structures. Finally, we classified the superpixels using a multilayer perceptron neural network. The database used contained 564 fundus images and was randomly divided into a training set and a test set. Results on the test set were measured using two different criteria. With a pixel-based criterion, we obtained a sensitivity of 81.43% and a positive predictive value of 86.59%. Using an image-based criterion, we reached 84.04% sensitivity, 85.00% specificity and 84.45% accuracy. The algorithm was also evaluated on the DiaretDB1 database. The proposed method could help specialists detect RLs in diabetic patients. Full article
Open AccessArticle Exploring Entropy Measurements to Identify Multi-Occupancy in Activities of Daily Living
Entropy 2019, 21(4), 416; https://doi.org/10.3390/e21040416
Received: 20 January 2019 / Revised: 15 April 2019 / Accepted: 17 April 2019 / Published: 19 April 2019
Viewed by 186 | PDF Full-text (1043 KB) | HTML Full-text | XML Full-text
Abstract
Human Activity Recognition (HAR) is the process of automatically detecting human actions from the data collected from different types of sensors. Research related to HAR has devoted particular attention to monitoring and recognizing the human activities of a single occupant in a home [...] Read more.
Human Activity Recognition (HAR) is the process of automatically detecting human actions from data collected by different types of sensors. Research related to HAR has devoted particular attention to monitoring and recognizing the activities of a single occupant in a home environment, in which it is assumed that only one person is present at any given time. Recognition of the activities is then used to identify any abnormalities within the routine activities of daily living. Despite this assumption in the published literature, living environments are commonly occupied by more than one person and/or pet animals. In this paper, a novel method based on different entropy measures, including Approximate Entropy (ApEn), Sample Entropy (SampEn), and Fuzzy Entropy (FuzzyEn), is explored to detect and identify a visitor in a home environment. The research focuses mainly on periods when another individual visits the main occupant, during which it is not otherwise possible to distinguish between their movement activities. The goal of this research is to assess whether entropy measures can be used to detect and identify the visitor in a home environment. Once the presence of the main occupant is distinguished from that of others, existing activity recognition and abnormality detection processes can be applied to the main occupant. The proposed method is tested and validated using two different datasets. The results obtained from the experiments show that the proposed method can detect and identify a visitor in a home environment with a high degree of accuracy based on the data collected from occupancy sensors. Full article
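Of the entropy measures listed above, Sample Entropy has the most compact definition: SampEn(m, r) = -ln(A/B), where B counts pairs of length-m templates within Chebyshev tolerance r (excluding self-matches) and A counts the same for length m + 1. A brute-force sketch follows, with illustrative defaults rather than the authors' settings; conventions for handling the final template vary between implementations:

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) = -ln(A/B): B counts template pairs of length m whose
    Chebyshev distance is <= r, A does the same for length m+1.
    Self-matches (i == j) are excluded."""
    n = len(x)

    def matching_pairs(mm):
        templates = [x[i:i + mm] for i in range(n - mm + 1)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(u - v) for u, v in zip(templates[i], templates[j])) <= r:
                    hits += 1
        return hits

    b, a = matching_pairs(m), matching_pairs(m + 1)
    # Undefined (no matches) is conventionally reported as infinite irregularity
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

Lower values indicate a more regular signal; a perfectly alternating series, for instance, yields a small SampEn, while sensor streams mixing two occupants' movements become less self-similar and the value rises.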
(This article belongs to the Section Multidisciplinary Applications)
Open AccessArticle Unstable Limit Cycles and Singular Attractors in a Two-Dimensional Memristor-Based Dynamic System
Entropy 2019, 21(4), 415; https://doi.org/10.3390/e21040415
Received: 28 March 2019 / Revised: 16 April 2019 / Accepted: 16 April 2019 / Published: 19 April 2019
Viewed by 159 | PDF Full-text (5960 KB) | HTML Full-text | XML Full-text
Abstract
This paper reports the finding of unstable limit cycles and singular attractors in a two-dimensional dynamical system consisting of an inductor and a bistable bi-local active memristor. Inspired by the idea of nested intervals theorem, a new programmable scheme for finding unstable limit [...] Read more.
This paper reports the finding of unstable limit cycles and singular attractors in a two-dimensional dynamical system consisting of an inductor and a bistable bi-local active memristor. Inspired by the idea of the nested intervals theorem, a new programmable scheme for finding unstable limit cycles is proposed, and its feasibility is verified by numerical simulations. The unstable limit cycles and their evolution laws in the memristor-based dynamic system are found in two subcritical Hopf bifurcation domains, which are subdomains of the twin local activity domains of the memristor. Coexisting singular attractors are discovered in the twin local activity domains, apart from the two corresponding subcritical Hopf bifurcation domains. Of particular interest is the coexistence of a singular attractor and a period-2 or period-3 attractor, observed in numerical simulations. Full article
(This article belongs to the Section Complexity)
Open AccessArticle Action Recognition Using Single-Pixel Time-of-Flight Detection
Entropy 2019, 21(4), 414; https://doi.org/10.3390/e21040414
Received: 14 January 2019 / Revised: 15 April 2019 / Accepted: 15 April 2019 / Published: 18 April 2019
Viewed by 197 | PDF Full-text (1303 KB) | HTML Full-text | XML Full-text
Abstract
Action recognition is a challenging task that plays an important role in many robotic systems, which highly depend on visual input feeds. However, due to privacy concerns, it is important to find a method which can recognise actions without using visual feed. In [...] Read more.
Action recognition is a challenging task that plays an important role in many robotic systems, which depend highly on visual input feeds. However, due to privacy concerns, it is important to find methods that can recognise actions without using a visual feed. In this paper, we propose a concept for detecting actions while preserving the test subject's privacy. Our proposed method relies only on recording the temporal evolution of light pulses scattered back from the scene. The data trace recording one action contains a sequence of one-dimensional arrays of voltage values acquired by a single-pixel detector at a 1 GHz repetition rate. Information about both the distance to the object and its shape is embedded in the traces. We apply machine learning in the form of recurrent neural networks for data analysis and demonstrate successful action recognition. The experimental results show that our proposed method achieves an average accuracy of 96.47% on the actions walking forward, walking backwards, sitting down, standing up and waving a hand, using a recurrent neural network. Full article
(This article belongs to the Special Issue Statistical Machine Learning for Human Behaviour Analysis)
Open AccessArticle Acknowledging Uncertainty in Economic Forecasting. Some Insight from Confidence and Industrial Trend Surveys
Entropy 2019, 21(4), 413; https://doi.org/10.3390/e21040413
Received: 18 March 2019 / Revised: 11 April 2019 / Accepted: 12 April 2019 / Published: 18 April 2019
Viewed by 234 | PDF Full-text (954 KB) | HTML Full-text | XML Full-text
Abstract
The role of uncertainty has become increasingly important in economic forecasting, due to both theoretical and empirical reasons. Although the traditional practice consisted of reporting point predictions without specifying the attached probabilities, uncertainty about the prospects deserves increasing attention, and recent literature has [...] Read more.
The role of uncertainty has become increasingly important in economic forecasting, for both theoretical and empirical reasons. Although the traditional practice consisted of reporting point predictions without specifying the attached probabilities, uncertainty about the prospects deserves increasing attention, and the recent literature has tried to quantify the level of uncertainty perceived by different economic agents, also examining its effects and determinants. In this context, the present paper analyzes uncertainty in economic forecasting, paying attention to qualitative perceptions from confidence and industrial trend surveys and making use of the related ex-ante probabilities. To this end, two entropy-based measures (Shannon's and quadratic entropy) are computed, providing significant evidence about the perceived level of uncertainty. Our empirical findings show that survey respondents are able to distinguish between current and prospective uncertainty and between general and personal uncertainty. Furthermore, we find that uncertainty negatively affects economic growth. Full article
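Both measures used above have simple closed forms over the ex-ante probabilities p_i attached to the survey answer categories: Shannon entropy H = -sum p_i ln p_i and quadratic (Gini-Simpson) entropy Q = 1 - sum p_i^2. A minimal sketch, with function names of our choosing and the natural logarithm assumed (the paper may use another base):

```python
import math

def shannon_entropy(p):
    """Shannon entropy H = -sum(p_i * ln p_i); zero-probability
    categories contribute nothing (0 * ln 0 -> 0 by convention)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def quadratic_entropy(p):
    """Quadratic (Gini-Simpson) entropy Q = 1 - sum(p_i^2)."""
    return 1 - sum(pi * pi for pi in p)
```

Both are maximized by the uniform distribution (for three answer categories, H = ln 3 and Q = 2/3) and vanish when one answer is certain, so higher values signal greater perceived uncertainty among respondents.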
(This article belongs to the Special Issue Entropy Application for Forecasting)
Open AccessArticle Geosystemics View of Earthquakes
Entropy 2019, 21(4), 412; https://doi.org/10.3390/e21040412
Received: 20 March 2019 / Revised: 11 April 2019 / Accepted: 12 April 2019 / Published: 18 April 2019
Viewed by 212 | PDF Full-text (3212 KB) | HTML Full-text | XML Full-text
Abstract
Earthquakes are the most energetic phenomena in the lithosphere: their study and comprehension are greatly worth doing because of the obvious importance for society. Geosystemics intends to study the Earth system as a whole, looking at the possible couplings among the different geo-layers, [...] Read more.
Earthquakes are the most energetic phenomena in the lithosphere: their study and comprehension are well worth pursuing because of their obvious importance for society. Geosystemics intends to study the Earth system as a whole, looking at the possible couplings among the different geo-layers, i.e., from the Earth's interior to the atmosphere above. It uses specific universal tools to integrate different methods that can be applied to multi-parameter data, often taken on different platforms (e.g., ground, marine or satellite observations). Its main objective is to understand the particular phenomenon of interest from a holistic point of view. Central is the use of entropy, together with other physical quantities introduced case by case. In this paper, we deal with earthquakes as the final part of a long-term chain of processes involving not only the interaction between different components of the Earth's interior but also the coupling of the solid earth with the overlying neutral or ionized atmosphere, finally culminating in the main rupture along the fault of concern. Particular emphasis is given to some Italian seismic sequences. Full article
Open AccessArticle Aging Modulates the Resting Brain after a Memory Task: A Validation Study from Multivariate Models
Entropy 2019, 21(4), 411; https://doi.org/10.3390/e21040411
Received: 28 February 2019 / Revised: 12 April 2019 / Accepted: 16 April 2019 / Published: 17 April 2019
Viewed by 201 | PDF Full-text (2793 KB) | HTML Full-text | XML Full-text
Abstract
Recent work has demonstrated that aging modulates the resting brain. However, the study of these modulations after cognitive practice, resulting from a memory task, has been scarce. This work aims at examining age-related changes in the functional reorganization of the resting brain after [...] Read more.
Recent work has demonstrated that aging modulates the resting brain. However, studies of these modulations after cognitive practice resulting from a memory task have been scarce. This work examines age-related changes in the functional reorganization of the resting brain after cognitive training, namely neuroplasticity, by means of the most innovative tools for data analysis. To this end, electroencephalographic activity was recorded in 34 young and 38 older participants. Different methods of data analysis, including frequency, time-frequency and machine-learning-based prediction models, were applied. Results showed reductions in Alpha power in older compared to young adults at electrodes placed over posterior and anterior areas of the brain. Moreover, young participants showed Alpha power increases after task performance, while their older counterparts exhibited a more invariant pattern of results. These results were significant in the 140–160 s time window at electrodes placed over anterior regions of the brain. Machine learning analyses were able to accurately classify participants by age, but failed to predict whether resting state scans took place before or after the memory task. These findings contribute to the development of multivariate tools for electroencephalogram (EEG) data analysis and improve our understanding of age-related changes in the functional reorganization of the resting brain. Full article
Open AccessArticle Exponential Strong Converse for Successive Refinement with Causal Decoder Side Information
Entropy 2019, 21(4), 410; https://doi.org/10.3390/e21040410
Received: 27 February 2019 / Revised: 9 April 2019 / Accepted: 13 April 2019 / Published: 17 April 2019
Viewed by 206 | PDF Full-text (441 KB) | HTML Full-text | XML Full-text
Abstract
We consider the k-user successive refinement problem with causal decoder side information and derive an exponential strong converse theorem. The rate-distortion region for the problem can be derived as a straightforward extension of the two-user case by Maor and Merhav (2008). We [...] Read more.
We consider the k-user successive refinement problem with causal decoder side information and derive an exponential strong converse theorem. The rate-distortion region for the problem can be derived as a straightforward extension of the two-user case by Maor and Merhav (2008). We show that for any rate-distortion tuple outside the rate-distortion region of the k-user successive refinement problem with causal decoder side information, the joint excess-distortion probability approaches one exponentially fast. Our proof follows by judiciously adapting the recently proposed strong converse technique by Oohama using the information spectrum method, the variational form of the rate-distortion region and Hölder’s inequality. The lossy source coding problem with causal decoder side information considered by El Gamal and Weissman is a special case ( k = 1 ) of the current problem. Therefore, the exponential strong converse theorem for the El Gamal and Weissman problem follows as a corollary of our result. Full article
(This article belongs to the Special Issue Multiuser Information Theory II)
Open AccessReview A Review of Early Fault Diagnosis Approaches and Their Applications in Rotating Machinery
Entropy 2019, 21(4), 409; https://doi.org/10.3390/e21040409
Received: 14 February 2019 / Revised: 12 March 2019 / Accepted: 12 April 2019 / Published: 17 April 2019
Viewed by 183 | PDF Full-text (2004 KB) | HTML Full-text | XML Full-text
Abstract
Rotating machinery is widely applied in various types of industrial applications. As a promising field for reliability of modern industrial systems, early fault diagnosis (EFD) techniques have attracted increasing attention from both academia and industry. EFD is critical to provide appropriate information for [...] Read more.
Rotating machinery is widely applied in various types of industrial applications. As a promising field for the reliability of modern industrial systems, early fault diagnosis (EFD) techniques have attracted increasing attention from both academia and industry. EFD is critical for providing appropriate information for taking necessary maintenance actions, thereby preventing severe failures and reducing financial losses. A massive amount of research has been conducted over the last two decades to develop EFD techniques. This paper reviews and summarizes research on the EFD of gears, rotors, and bearings, with the main purpose of serving as a guide map for researchers in the field of early fault diagnosis. After a brief introduction to early fault diagnosis techniques, applications of EFD to rotating machinery are reviewed in two respects: fault-frequency-based methods and artificial-intelligence-based methods. Finally, a summary and some new research prospects are discussed. Full article
(This article belongs to the Special Issue Entropy-Based Fault Diagnosis)
Open AccessArticle Computer Model of Synapse Loss During an Alzheimer’s Disease-Like Pathology in Hippocampal Subregions DG, CA3 and CA1—The Way to Chaos and Information Transfer
Entropy 2019, 21(4), 408; https://doi.org/10.3390/e21040408
Received: 4 February 2019 / Revised: 14 April 2019 / Accepted: 16 April 2019 / Published: 17 April 2019
Viewed by 209 | PDF Full-text (2144 KB) | HTML Full-text | XML Full-text
Abstract
The aim of the study was to compare the computer model of synaptic breakdown in an Alzheimer’s disease-like pathology in the dentate gyrus (DG), CA3 and CA1 regions of the hippocampus with a control model using neuronal parameters and methods describing the complexity [...] Read more.
The aim of the study was to compare a computer model of synaptic breakdown in an Alzheimer's disease-like pathology in the dentate gyrus (DG), CA3 and CA1 regions of the hippocampus with a control model, using neuronal parameters and measures of system complexity such as the correlation dimension, Shannon entropy and the maximal Lyapunov exponent. Synaptic breakdown (from 13% to 50%) modeling the dynamics of an Alzheimer's disease-like pathology in the hippocampus was simulated by turning off, one after another, the EC2 connections and the connections from the dentate gyrus onto the CA3 pyramidal neurons. The pathological model of synaptic disintegration was compared to the control. Larger synaptic breakdown was associated with a statistically significant decrease in the number of spikes (R = −0.79, P < 0.001), spikes per burst (R = −0.76, P < 0.001) and burst duration (R = −0.83, P < 0.001), and an increase in the inter-burst interval (R = 0.85, P < 0.001) in DG-CA3-CA1. The maximal Lyapunov exponent was negative in the control model but positive in the pathological model of DG-CA3-CA1. A statistically significant decrease of Shannon entropy along the direction of information flow DG->CA3->CA1 (R = −0.79, P < 0.001) in the pathological model, and a statistically significant increase with greater synaptic breakdown (R = 0.24, P < 0.05) in the CA3-CA1 region, were obtained. At a synaptic breakdown level of 35%, entropy transfer for DG->CA3 was reduced by 35% compared with the control, while entropy transfer for CA3->CA1 increased to 95% relative to the control. In contrast to the control, the synaptic breakdown model of an Alzheimer's disease-like pathology in DG-CA3-CA1 exhibits chaotic features. Synaptic breakdown in which an increase of Shannon entropy is observed indicates an irreversible process of Alzheimer's disease.
The increase in synapse loss resulted in decreased information flow and entropy transfer in DG->CA3, and at the same time a strong increase in CA3->CA1. Full article
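A positive maximal Lyapunov exponent is the chaos marker used above. As a self-contained illustration of the diagnostic (not the authors' hippocampal network model), the exponent of the one-dimensional logistic map can be estimated by averaging ln |f'(x)| along an orbit; the function name and parameter defaults are ours:

```python
import math

def lyapunov_logistic(r, x0=0.3, n=50000, discard=500):
    """Estimate the maximal Lyapunov exponent of the logistic map
    x -> r*x*(1-x) as the orbit average of ln |f'(x)|, with
    f'(x) = r*(1 - 2x). Transients are discarded first."""
    x = x0
    for _ in range(discard):          # let the orbit settle
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n
```

For r = 4 the estimate converges to ln 2 (positive: chaotic), while for r = 3.2 the orbit settles onto a period-2 cycle and the exponent is negative, mirroring the pathological-versus-control contrast reported for the DG-CA3-CA1 model.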
Open AccessArticle Complex Modified Projective Synchronization of Fractional-Order Complex-Variable Chaotic System with Unknown Complex Parameters
Entropy 2019, 21(4), 407; https://doi.org/10.3390/e21040407
Received: 17 March 2019 / Revised: 14 April 2019 / Accepted: 15 April 2019 / Published: 17 April 2019
Viewed by 174 | PDF Full-text (1644 KB) | HTML Full-text | XML Full-text
Abstract
This paper investigates the problem of complex modified projective synchronization (CMPS) of fractional-order complex-variable chaotic systems (FOCCS) with unknown complex parameters. By a complex-variable inequality and a stability theory for fractional-order nonlinear systems, a new scheme is presented for constructing CMPS of FOCCS [...] Read more.
This paper investigates the problem of complex modified projective synchronization (CMPS) of fractional-order complex-variable chaotic systems (FOCCS) with unknown complex parameters. By a complex-variable inequality and a stability theory for fractional-order nonlinear systems, a new scheme is presented for constructing CMPS of FOCCS with unknown complex parameters. The proposed scheme not only provides a new method to analyze fractional-order complex-valued systems but also significantly reduces the complexity of computation and analysis. Theoretical proof and simulation results substantiate the effectiveness of the presented synchronization scheme. Full article
(This article belongs to the Section Complexity)
Entropy EISSN 1099-4300 Published by MDPI AG, Basel, Switzerland