Entropy doi: 10.3390/e21040430

Authors: Paulo Sergio Rodrigues Guilherme Wachs-Lopes Ricardo Morello Santos Eduardo Coltri Gilson Antonio Giraldi

This paper proposes the q-sigmoid functions, variations of the sigmoid expressions, and analyzes their application to the process of enhancing regions of interest in digital images. These new functions are based on non-extensive Tsallis statistics, which arose in the field of statistical mechanics through the use of q-exponential functions. The potential of q-sigmoids for image processing is demonstrated in region-enhancement tasks on ultrasound images, which are highly affected by speckle noise. Before demonstrating the results on real images, we study the asymptotic behavior of these functions and the effect of the obtained expressions when processing synthetic images. In both experiments, the q-sigmoids outperformed the original sigmoid functions, as well as two other well-known methods for the enhancement of regions of interest: slicing and histogram equalization. These results show that q-sigmoids can be used as a preprocessing step in pipelines that include segmentation, as demonstrated for the Otsu algorithm, and deep learning approaches for further feature extraction and analysis.
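
The Tsallis q-exponential underlying these functions, and a logistic-style q-sigmoid built from it, can be sketched as follows (a minimal illustration of the q-deformation idea; the paper's exact parameterization of the q-sigmoids is not reproduced here, and `beta` is an illustrative gain parameter):

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential; reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    # outside the support of the q-exponential, the function is cut off at 0
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

def q_sigmoid(x, q, beta=1.0):
    """Logistic-style q-sigmoid: exp(-x) in the ordinary sigmoid is
    replaced by the q-exponential of -beta*x (illustrative form)."""
    return 1.0 / (1.0 + q_exp(-beta * x, q))
```

Varying q changes how fast the transition saturates, which is the lever such functions offer for region enhancement.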

Entropy doi: 10.3390/e21040429

Authors: Esteban Fernández-Vázquez Blanca Moreno Geoffrey J.D. Hewings

Forecast combination methods reduce the information in a vector of forecasts to a single combined forecast by using a set of combination weights. Although several methods exist, a typical strategy is to use the simple arithmetic mean to obtain the combined forecast. A priori, the use of this mean could be justified when all the forecasters have performed equally well in the past or when they do not have enough information. In this paper, we explore the possibility of using entropy econometrics as a procedure for combining forecasts that makes it possible to discriminate between bad and good forecasters, even in situations of little information. For this purpose, the data-weighted prior (DWP) estimator proposed by Golan (2001) is used for forecaster selection and simultaneous parameter estimation in linear statistical models. In particular, we examine the ability of the DWP estimator to effectively select relevant forecasts among all forecasts. We test the accuracy of the proposed model with a simulation exercise and compare its ex ante forecasting performance with that of other methods used to combine forecasts. The results obtained suggest that the proposed method dominates other combining methods, such as equal-weight averages or ordinary least squares methods, among others.
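
The baseline strategies compared here can be sketched numerically: an equal-weight average versus weights that discriminate by past performance (inverse mean-squared-error weights, a simple stand-in for the discrimination the DWP estimator formalizes, not the estimator itself; all numbers are hypothetical):

```python
import numpy as np

# Hypothetical forecasts from three forecasters over five periods,
# and the realized values (illustrative numbers only).
forecasts = np.array([[1.0, 2.1, 2.9, 4.2, 5.1],
                      [0.8, 1.9, 3.2, 3.8, 4.9],
                      [3.0, 0.5, 6.0, 1.0, 9.0]])   # a "bad" forecaster
realized = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

# Equal-weight combination: the simple arithmetic mean baseline.
equal_weight = forecasts.mean(axis=0)

# Performance-based combination: weights inversely proportional to
# each forecaster's past mean squared error, normalized to sum to 1.
mse = ((forecasts - realized) ** 2).mean(axis=1)
weights = (1.0 / mse) / (1.0 / mse).sum()
weighted = weights @ forecasts
```

Because the third forecaster's past errors are large, its weight shrinks toward zero, and the weighted combination tracks the realized series more closely than the plain mean.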

Entropy doi: 10.3390/e21040428

Authors: Wang Fu

An integrated solar combined cycle (ISCC) with a low-temperature waste heat recovery system is proposed in this paper. The combined system consists of a conventional natural gas combined cycle, an organic Rankine cycle and solar fields. The performance of the organic Rankine cycle subsystem as well as the overall proposed ISCC system is analyzed using organic working fluids. In addition, parameters including the pump discharge pressure, exhaust gas temperature, thermal and exergy efficiencies, unit cost of exergy for the product and annual CO2 savings were considered. Results indicate that Rc318 yields the highest exhaust gas temperature of 71.2 °C, while R113 shows the lowest exhaust gas temperature of 65.89 °C at 800 W/m2, in the proposed ISCC system. The overall plant thermal efficiency increases rapidly with solar radiation, while the exergy efficiency shows a downward trend. R227ea had both the largest thermal efficiency of 58.33% and exergy efficiency of 48.09% at 800 W/m2. In addition, for the organic Rankine cycle, the exergy destructions of the evaporator, turbine and condenser decreased with increasing solar radiation. The evaporator contributed the largest exergy destruction, followed by the turbine, condenser and pump. According to the economic analysis, R227ea also had the lowest production cost of 19.3 $/GJ.

Entropy doi: 10.3390/e21040427

Authors: Yi Song Weiwei Yang Zhongwu Xiang Yiliang Liu Yueming Cai

Millimeter-wave (mmWave) communication is one of the key enabling technologies for fifth-generation (5G) mobile networks. In this paper, we study the problem of secure communication in a mmWave wiretap network, where directional beamforming and link blockages are taken into account. For secure transmission in the presence of spatially random eavesdroppers, an adaptive transmission scheme is adopted, in which a sector secrecy guard zone and artificial noise (AN) are employed to enhance secrecy performance. When no eavesdropper exists within the sector secrecy guard zone, the transmitter transmits only the information-bearing signal; otherwise, AN is transmitted along with the information-bearing signal. Closed-form expressions for the secrecy outage probability (SOP), connection outage probability (COP) and secrecy throughput are derived using stochastic geometry. Then, we evaluate the effect of the sector secrecy guard zone and AN on the secrecy performance. Our results reveal that applying the sector secrecy guard zone and AN can significantly improve the security of the system, and that blockages can also be exploited to improve secrecy performance. An easy choice of transmit power and power allocation factor is provided for achieving higher secrecy throughput. Furthermore, increasing the density of eavesdroppers does not always deteriorate the secrecy performance, owing to the use of the sector secrecy guard zone and AN.

Entropy doi: 10.3390/e21040426

Authors: Bartosz Kowalik Marcin Szpyrka

Modern cars are equipped with plenty of electronic devices called Electronic Control Units (ECUs). ECUs collect diagnostic data from a car's components, such as the engine, brakes, etc. These data are then processed, and the appropriate information is communicated to the driver. From the point of view of the safety of the driver and the passengers, information about car faults is vital. Regardless of the development of on-board computers, only a small amount of information is passed on to the driver. With a data mining approach, it is possible to obtain much more information from the data than is provided by standard car equipment. This paper describes the environment built by the authors for data collection from ECUs. The collected data have been processed using parameterized entropies and data mining algorithms. Finally, we built a classifier able to detect a malfunctioning thermostat even when the car equipment does not indicate it.
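
As an illustration of the kind of parameterized entropy such a pipeline can use as a feature, the Rényi family is one common one-parameter generalization of Shannon entropy (the abstract does not specify which family the authors use, so this is only a sketch; the distribution below is hypothetical):

```python
import math

def renyi_entropy(p, alpha):
    """Rényi entropy H_alpha = log(sum p_i^alpha) / (1 - alpha).
    Reduces to the Shannon entropy as alpha -> 1."""
    if abs(alpha - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return math.log(sum(pi ** alpha for pi in p)) / (1.0 - alpha)

# Hypothetical distribution of a discretized diagnostic signal,
# e.g. coolant-temperature readings binned into four levels.
p = [0.7, 0.2, 0.05, 0.05]
# Sweeping alpha yields a small feature vector for a classifier.
features = [renyi_entropy(p, a) for a in (0.5, 1.0, 2.0)]
```

Low alpha emphasizes rare readings and high alpha the dominant ones, which is why a sweep over the parameter can expose anomalies a single entropy value would miss.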

Entropy doi: 10.3390/e21040425

Authors: Jie Huang Xinqing Wang Dong Wang Zhiwei Wang Xia Hua

With the aim of automatic recognition of weak faults in hydraulic systems, this paper proposes an identification method based on multi-scale permutation entropy feature extraction from fault-sensitive intrinsic mode functions (IMFs) and a deep belief network (DBN). In this method, the leakage fault signal is first decomposed by empirical mode decomposition (EMD), and fault-sensitive IMF components are screened by correlation analysis. The multi-scale entropy features of each screened IMF are then extracted, yielding features closely related to the weak fault information. Finally, a DBN is used for fault identification. Experimental results show that this identification method achieves a good recognition effect: it can accurately judge whether there is a leakage fault, determine the severity of the fault, and diagnose and analyze weak hydraulic faults in general.
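
Permutation entropy and its multi-scale extension can be sketched as follows (a common construction based on counting ordinal patterns and coarse-graining by non-overlapping averaging; the paper's exact variant and parameter choices may differ):

```python
import math
from collections import Counter

def permutation_entropy(signal, order=3, delay=1):
    """Normalized permutation entropy of a 1-D signal: the Shannon
    entropy of the distribution of ordinal patterns of length
    `order`, scaled to [0, 1]."""
    patterns = Counter()
    n = len(signal) - (order - 1) * delay
    for i in range(n):
        window = signal[i:i + order * delay:delay]
        # rank pattern of the window, e.g. (0, 2, 1)
        patterns[tuple(sorted(range(order), key=window.__getitem__))] += 1
    probs = [c / n for c in patterns.values()]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(math.factorial(order))

def multiscale_pe(signal, scales=(1, 2, 3), order=3):
    """Multi-scale PE: coarse-grain the signal by non-overlapping
    averaging at each scale, then compute PE of each coarse series."""
    feats = []
    for s in scales:
        coarse = [sum(signal[i:i + s]) / s
                  for i in range(0, len(signal) - s + 1, s)]
        feats.append(permutation_entropy(coarse, order=order))
    return feats
```

The vector returned by `multiscale_pe` for each screened IMF would play the role of the feature vector fed to the DBN.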

Entropy doi: 10.3390/e21040424

Authors: Yuri S. Popkov

The paper suggests a randomized model for dynamic migratory interaction of regional systems. The locally stationary states of migration flows in the basic and immigration systems are described by corresponding entropy operators. A soft randomization procedure that defines the optimal probability density functions of system parameters and measurement noises is developed. The advantages of soft randomization with approximate empirical data balance conditions are demonstrated, which considerably reduces algorithmic complexity and computational resources demand. An example of migratory interaction modeling and testing is given.

Entropy doi: 10.3390/e21040423

Authors: Ranjana Koshy Ausif Mahmood

Face recognition is a popular and efficient form of biometric authentication used in many software applications. One drawback of this technique is that it is prone to face spoofing attacks, where an impostor can gain access to the system by presenting a photograph of a valid user to the sensor. Thus, face liveness detection is a necessary step before granting authentication to the user. In this paper, we have developed deep architectures for face liveness detection that use a combination of texture analysis and a convolutional neural network (CNN) to classify the captured image as real or fake. Our development greatly improved upon a recent approach that applies nonlinear diffusion based on an additive operator splitting scheme and a tridiagonal matrix block-solver algorithm to the image, which enhances the edges and surface texture in the real image. We then fed the diffused image to a deep CNN to identify the complex and deep features for classification. We obtained 100% accuracy on the NUAA Photograph Impostor dataset for face liveness detection using one of our enhanced architectures. Further, we gained insight into the enhancement of the face liveness detection architecture by evaluating three different deep architectures, which included deep CNN, residual network, and the inception network version 4. We evaluated the performance of each of these architectures on the NUAA dataset and present here the experimental results showing under what conditions an architecture would be better suited for face liveness detection. While the residual network gave us competitive results, the inception network version 4 produced the optimal accuracy of 100% in liveness detection (with nonlinear anisotropic diffused images with a smoothness parameter of 15). Our approach outperformed all current state-of-the-art methods.

Entropy doi: 10.3390/e21040422

Authors: Yi-Zheng Zhen Xin-Yu Xu Li Li Nai-Le Liu Kai Chen

The Einstein&ndash;Podolsky&ndash;Rosen (EPR) steering is a subtle intermediate correlation between entanglement and Bell nonlocality. It not only theoretically completes the whole picture of non-local effects but also practically inspires novel quantum protocols in specific scenarios. However, a verification of EPR steering is still challenging due to difficulties in bounding unsteerable correlations. In this survey, the basic framework to study the bipartite EPR steering is discussed, and general techniques to certify EPR steering correlations are reviewed.

Entropy doi: 10.3390/e21040421

Authors: Masashi Kamogawa Kazuyoshi Z. Nanjo Jun Izutsu Yoshiaki Orihara Toshiyasu Nagao Seiya Uyeda

The relation between the size of an earthquake mainshock preparation zone and the magnitude of the forthcoming mainshock differs between the nucleation and domino-like cascade models. The former model implies that magnitude is predictable before an earthquake's mainshock, because the preparation zone is related to the rupture area. In contrast, the latter implies that magnitude is substantially unpredictable, because it is practically impossible to predict the size of the final rupture, which likely consists of a sequence of smaller earthquakes. As this proposal is still controversial, we discuss both models statistically, comparing the spatial occurrence rates of foreshocks and aftershocks. Using earthquake catalogs from three regions, California, Japan, and Taiwan, we showed that the spatial occurrence rates of foreshocks and aftershocks displayed a similar behavior, and this feature did not vary between the regions. An interpretation of this result, based on statistical analyses, indicates that the nucleation model is dominant.

Entropy doi: 10.3390/e21040420

Authors: Kai Xiong Yunhua Li Yun-Ze Li Ji-Xiang Wang Yufeng Mao

This paper presents a nanofluid-based cooling method for a brushless synchronous generator (BLSG) using an Al2O3/lubricating-oil nanofluid. To demonstrate the superiority of the nanofluid-based cooling method, the thermal performance and efficiency of the nanofluid-based cooling system (NBCS) for the BLSG are analyzed, along with modeling and simulation cases arranged for the NBCS. Compared with the results obtained under base-fluid cooling, the results show that the nanofluid-based cooling method can reduce the steady-state temperature and power losses in the BLSG and decrease the temperature settling time and changing ratio, demonstrating that both the steady-state and transient thermal performance of the NBCS improve as the nanoparticle volume fraction (NVF) in the nanofluid increases. Moreover, although the input power of the cycling pumps in the NBCS increases by ~30% when the NVF is 10%, the efficiency of the NBCS increases slightly, because the 4.1% reduction in the power loss of the BLSG is larger than the total increase in the input power of the cycling pumps. The results illustrate the superiority of the nanofluid-based cooling method and indicate that the proposed method has broad application prospects in the thermal control of onboard synchronous generators with high power density.

Entropy doi: 10.3390/e21040419

Authors: Lydia González-Serrano Pilar Talón-Ballestero Sergio Muñoz-Romero Cristina Soguero-Ruiz José Luis Rojo-Álvarez

Customer Relationship Management (CRM) is a fundamental tool in the hospitality industry nowadays, and it can be seen as a big-data scenario due to the large number of records handled annually by managers. Data quality is crucial for the success of these systems, and one of the main issues to be solved by businesses in general, and by hospitality businesses in particular, is the identification of duplicated customers, which has not received much attention in the recent literature, probably in part because it is not an easy-to-state problem in statistical terms. In the present work, we state the problem of duplicated-customer identification as a large-scale data analysis, and we propose and benchmark a general-purpose solution for it. Our system consists of four basic elements: (a) a generic feature representation for the customer fields in a simple table-shaped database; (b) an efficient distance for comparison among feature values, in terms of the Wagner-Fischer algorithm to calculate the Levenshtein distance; (c) a big-data implementation using basic map-reduce techniques to readily support the comparison of strategies; (d) an X-from-M criterion to identify the possible neighbors of a duplicated-customer candidate. We analyze the probability mass function of the distances in the CRM text-based fields and characterize their behavior and consistency in terms of the entropy of, and the mutual information between, these fields. Our experiments on a large CRM from a multinational hospitality chain show that the distance distributions are statistically consistent for each feature, and that neighborhood thresholds are automatically adjusted by the system in a first step and can subsequently be fine-tuned according to the manager's experience.
The entropy distributions for the different variables, as well as the mutual information between pairs, are characterized by multimodal profiles, where a wide gap between close and far fields is often present. This motivates the proposal of the so-called X-from-M strategy, which is shown to be computationally affordable and can provide the expert with a reduced number of duplicated candidates to supervise, with low X values being enough to warrant the sensitivity required at the automatic detection stage. The proposed system confirms the benefits of big-data technologies in CRM scenarios for hotel chains and, rather than relying on ad hoc heuristic rules, promotes the research and development of theoretically principled approaches.
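
The Levenshtein distance in element (b), computed with the Wagner-Fischer dynamic-programming recurrence, can be sketched as follows (a standard rolling-row implementation, not the authors' map-reduce version):

```python
def levenshtein(a: str, b: str) -> int:
    """Wagner-Fischer dynamic programming for the Levenshtein (edit)
    distance, keeping only one previous row of the DP table."""
    if len(a) < len(b):
        a, b = b, a  # iterate over the longer string, store the shorter row
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]
```

Applied to pairs of customer-field strings, these integer distances are what the entropy and mutual-information analyses above are computed over.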

Entropy doi: 10.3390/e21040418

Authors: Massimo Tessarotto Claudio Cremaschini

Based on the introduction of a suitable quantum functional, identified here with the Boltzmann&ndash;Shannon entropy, entropic properties of the quantum gravitational field are investigated in the framework of manifestly-covariant quantum gravity theory. In particular, focus is given to gravitational quantum states in a background de Sitter space-time, with the addition of possible quantum non-unitarity effects modeled in terms of an effective quantum graviton sink localized near the de Sitter event horizon. The theory of manifestly-covariant quantum gravity developed accordingly is shown to retain its emergent-gravity features, which are recovered when the generalized-Lagrangian-path formalism is adopted, yielding a stochastic trajectory-based representation of the quantum wave equation. This permits the analytic determination of the quantum probability density function associated with the quantum gravity state, represented in terms of a generally dynamically-evolving shifted Gaussian function. As an application, the study of the entropic properties of quantum gravity is developed and the conditions for the existence of a local H-theorem or, alternatively, of a constant H-theorem are established.

Entropy doi: 10.3390/e21040417

Authors: Roberto Romero-Oraá Jorge Jiménez-García María García María I. López-Gálvez Javier Oraá-Pérez Roberto Hornero

Diabetic retinopathy (DR) is the main cause of blindness in the working-age population of developed countries. Digital color fundus images can be analyzed to detect lesions for large-scale screening, so automated systems can be helpful in the diagnosis of this disease. The aim of this study was to develop a method to automatically detect red lesions (RLs) in retinal images, including hemorrhages and microaneurysms, which are the earliest signs of DR. Firstly, we performed a novel preprocessing stage to normalize the inter-image and intra-image appearance and enhance the retinal structures. Secondly, the Entropy Rate Superpixel method was used to segment the potential RL candidates. Then, we reduced the superpixel candidates by combining inaccurately fragmented regions within structures. Finally, we classified the superpixels using a multilayer perceptron neural network. The database used contained 564 fundus images and was randomly divided into a training set and a test set. Results on the test set were measured using two different criteria. With a pixel-based criterion, we obtained a sensitivity of 81.43% and a positive predictive value of 86.59%. Using an image-based criterion, we reached 84.04% sensitivity, 85.00% specificity and 84.45% accuracy. The algorithm was also evaluated on the DiaretDB1 database. The proposed method could help specialists detect RLs in diabetic patients.

Entropy doi: 10.3390/e21040416

Authors: Aadel Howedi Ahmad Lotfi Amir Pourabdollah

Human Activity Recognition (HAR) is the process of automatically detecting human actions from data collected from different types of sensors. Research related to HAR has devoted particular attention to monitoring and recognizing the activities of a single occupant in a home environment, in which it is assumed that only one person is present at any given time. Recognition of the activities is then used to identify any abnormalities within the routine activities of daily living. Despite this assumption in the published literature, living environments are commonly occupied by more than one person and/or accompanied by pet animals. In this paper, a novel method based on different entropy measures, including Approximate Entropy (ApEn), Sample Entropy (SampEn), and Fuzzy Entropy (FuzzyEn), is explored to detect and identify a visitor in a home environment. The research focuses mainly on situations in which another individual visits the main occupier, so that it is not otherwise possible to distinguish between their movement activities. The goal of this research is to assess whether entropy measures can be used to detect and identify the visitor in a home environment. Once the presence of the main occupier is distinguished from others, existing activity recognition and abnormality detection processes can be applied to the main occupier. The proposed method is tested and validated using two different datasets. The results obtained from the experiments show that the proposed method can detect and identify a visitor in a home environment with a high degree of accuracy from the data collected by the occupancy sensors.
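
Sample Entropy, one of the measures listed above, can be sketched as follows (a compact illustration of the SampEn(m, r) definition; production implementations restrict the two template sets slightly differently so that the counts are directly comparable):

```python
import math

def sample_entropy(series, m=2, r=0.2):
    """SampEn(m, r): negative log of the conditional probability that
    sequences matching for m points (within tolerance r, Chebyshev
    distance) also match for m + 1 points. Unlike ApEn, self-matches
    are excluded from the counts."""
    def count_matches(length):
        templates = [series[i:i + length]
                     for i in range(len(series) - length + 1)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(x - y)
                       for x, y in zip(templates[i], templates[j])) <= r:
                    hits += 1
        return hits
    b, a = count_matches(m), count_matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

A perfectly regular occupancy trace yields a SampEn near zero, while the extra irregularity introduced by a visitor's movements pushes it up, which is the signal the paper exploits.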

Entropy doi: 10.3390/e21040415

Authors: Hui Chang Qinghai Song Yuxia Li Zhen Wang Guanrong Chen

This paper reports the finding of unstable limit cycles and singular attractors in a two-dimensional dynamical system consisting of an inductor and a bistable bi-local active memristor. Inspired by the nested intervals theorem, a new programmable scheme for finding unstable limit cycles is proposed, and its feasibility is verified by numerical simulations. The unstable limit cycles and their evolution laws in the memristor-based dynamical system are found in two subcritical Hopf bifurcation domains, which are subdomains of the twin local activity domains of the memristor. Coexisting singular attractors are discovered in the twin local activity domains, apart from the two corresponding subcritical Hopf bifurcation domains. Of particular interest is the coexistence of a singular attractor and a period-2 or period-3 attractor, observed in numerical simulations.

Entropy doi: 10.3390/e21040414

Authors: Ikechukwu Ofodile Ahmed Helmi Albert Clapés Egils Avots Kerttu Maria Peensoo Sandhra-Mirella Valdma Andreas Valdmann Heli Valtna-Lukner Sergey Omelkov Sergio Escalera Cagri Ozcinar Gholamreza Anbarjafari

Action recognition is a challenging task that plays an important role in many robotic systems, which depend strongly on visual input feeds. However, due to privacy concerns, it is important to find a method that can recognise actions without using a visual feed. In this paper, we propose a concept for detecting actions while preserving the test subject's privacy. Our proposed method relies only on recording the temporal evolution of light pulses scattered back from the scene. Such a data trace, recording one action, contains a sequence of one-dimensional arrays of voltage values acquired by a single-pixel detector at a 1 GHz repetition rate. Information about both the distance to the object and its shape is embedded in the traces. We apply machine learning in the form of recurrent neural networks for data analysis and demonstrate successful action recognition. The experimental results show that our proposed method achieves on average 96.47% accuracy on the actions walking forward, walking backwards, sitting down, standing up and waving a hand, using a recurrent neural network.

Entropy doi: 10.3390/e21040413

Authors: Ana Jesús López-Menéndez Rigoberto Pérez-Suárez

The role of uncertainty has become increasingly important in economic forecasting, for both theoretical and empirical reasons. Although the traditional practice consisted of reporting point predictions without specifying the attached probabilities, uncertainty about the prospects deserves increasing attention, and recent literature has tried to quantify the level of uncertainty perceived by different economic agents, also examining its effects and determinants. In this context, the present paper aims to analyze the uncertainty in economic forecasting, paying attention to qualitative perceptions from confidence and industrial trend surveys and making use of the related ex-ante probabilities. With this objective, two entropy-based measures (Shannon's and quadratic entropy) are computed, providing significant evidence about the perceived level of uncertainty. Our empirical findings show that survey respondents are able to distinguish between current and prospective uncertainty and between general and personal uncertainty. Furthermore, we find that uncertainty negatively affects economic growth.
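
The two entropy measures named above have simple closed forms; for a survey question with ex-ante probabilities over its answer categories, they can be computed as follows (the response distributions shown are illustrative only, not from the paper's surveys):

```python
import math

def shannon_entropy(p):
    """Shannon entropy H = -sum p_i * log(p_i), natural log."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def quadratic_entropy(p):
    """Quadratic entropy Q = sum p_i * (1 - p_i) = 1 - sum p_i^2."""
    return 1.0 - sum(pi * pi for pi in p)

# Hypothetical ex-ante probabilities over "rise / stay / fall"
# answers to a trend-survey question.
confident = [0.8, 0.15, 0.05]   # low perceived uncertainty
uncertain = [1/3, 1/3, 1/3]     # maximal uncertainty over 3 options
```

Both measures are maximized at the uniform distribution, so a respondent who spreads probability evenly across the options registers the highest perceived uncertainty under either index.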

Entropy doi: 10.3390/e21040412

Authors: Angelo De Santis Cristoforo Abbattista Lucilla Alfonsi Leonardo Amoruso Saioa A. Campuzano Marianna Carbone Claudio Cesaroni Gianfranco Cianchini Giorgiana De Franceschi Anna De Santis Rita Di Giovambattista Dedalo Marchetti Luca Martino Loredana Perrone Alessandro Piscini Mario Luigi Rainone Maurizio Soldani Luca Spogli Francesca Santoro

Earthquakes are the most energetic phenomena in the lithosphere: their study and comprehension are greatly worthwhile because of their obvious importance to society. Geosystemics intends to study the Earth system as a whole, looking at the possible couplings among the different geo-layers, i.e., from the Earth's interior to the atmosphere above. It uses specific universal tools to integrate different methods that can be applied to multi-parameter data, often taken on different platforms (e.g., ground, marine or satellite observations). Its main objective is to understand the particular phenomenon of interest from a holistic point of view. Central is the use of entropy, together with other physical quantities that will be introduced case by case. In this paper, we will deal with earthquakes as the final part of a long-term chain of processes involving not only the interaction between different components of the Earth's interior but also the coupling of the solid earth with the above neutral or ionized atmosphere, finally culminating in the main rupture along the fault of concern. Particular emphasis will be given to some Italian seismic sequences.

Entropy doi: 10.3390/e21040411

Authors: Garazi Artola Erik Isusquiza Ane Errarte Maitane Barrenechea Ane Alberdi María Hernández-Lorca Elena Solesio-Jofre

Recent work has demonstrated that aging modulates the resting brain. However, the study of these modulations after cognitive practice, resulting from a memory task, has been scarce. This work aims at examining age-related changes in the functional reorganization of the resting brain after cognitive training, namely, neuroplasticity, by means of the most innovative tools for data analysis. To this end, electroencephalographic activity was recorded in 34 young and 38 older participants. Different methods for data analyses, including frequency, time-frequency and machine learning-based prediction models were conducted. Results showed reductions in Alpha power in old compared to young adults in electrodes placed over posterior and anterior areas of the brain. Moreover, young participants showed Alpha power increases after task performance, while their older counterparts exhibited a more invariant pattern of results. These results were significant in the 140–160 s time window in electrodes placed over anterior regions of the brain. Machine learning analyses were able to accurately classify participants by age, but failed to predict whether resting state scans took place before or after the memory task. These findings greatly contribute to the development of multivariate tools for electroencephalogram (EEG) data analysis and improve our understanding of age-related changes in the functional reorganization of the resting brain.

Entropy doi: 10.3390/e21040410

Authors: Lin Zhou Alfred Hero

We consider the k-user successive refinement problem with causal decoder side information and derive an exponential strong converse theorem. The rate-distortion region for the problem can be derived as a straightforward extension of the two-user case by Maor and Merhav (2008). We show that for any rate-distortion tuple outside the rate-distortion region of the k-user successive refinement problem with causal decoder side information, the joint excess-distortion probability approaches one exponentially fast. Our proof follows by judiciously adapting the recently proposed strong converse technique by Oohama using the information spectrum method, the variational form of the rate-distortion region and Hölder's inequality. The lossy source coding problem with causal decoder side information considered by El Gamal and Weissman is a special case (k = 1) of the current problem. Therefore, the exponential strong converse theorem for the El Gamal and Weissman problem follows as a corollary of our result.

Entropy doi: 10.3390/e21040409

Authors: Wei Li Xu Huang

Rotating machinery is widely applied in various types of industrial applications. As a promising field for the reliability of modern industrial systems, early fault diagnosis (EFD) techniques have attracted increasing attention from both academia and industry. EFD is critical for providing appropriate information for taking necessary maintenance actions, thereby preventing severe failures and reducing financial losses. A massive amount of research has been conducted in the last two decades to develop EFD techniques. This paper reviews and summarizes the research on EFD of gears, rotors, and bearings, and its main purpose is to serve as a guide for researchers in the field of early fault diagnosis. After a brief introduction to early fault diagnosis techniques, the applications of EFD to rotating machines are reviewed in two aspects: fault frequency-based methods and artificial intelligence-based methods. Finally, a summary and some new research prospects are discussed.

Entropy doi: 10.3390/e21040408

Authors: Świetlik Białowąs Moryś Kusiak

The aim of the study was to compare a computer model of synaptic breakdown in an Alzheimer's disease-like pathology in the dentate gyrus (DG), CA3 and CA1 regions of the hippocampus with a control model, using neuronal parameters and methods describing the complexity of the system, such as the correlative dimension, Shannon entropy and the maximal Lyapunov exponent. Synaptic breakdown (from 13% to 50%) in the hippocampus, modeling the dynamics of an Alzheimer's disease-like pathology, was simulated. The modeling consisted of turning off, one after another, the EC2 connections and the connections from the dentate gyrus onto the CA3 pyramidal neurons. The pathological model of synaptic disintegration was compared to a control. Larger synaptic breakdown was associated with a statistically significant decrease in the number of spikes (R = -0.79, P < 0.001), spikes per burst (R = -0.76, P < 0.001) and burst duration (R = -0.83, P < 0.001), and an increase in the inter-burst interval (R = 0.85, P < 0.001) in DG-CA3-CA1. The maximal Lyapunov exponent was negative in the control model but positive in the pathological model of DG-CA3-CA1. A statistically significant decrease of Shannon entropy along the direction of information flow DG->CA3->CA1 (R = -0.79, P < 0.001) was obtained in the pathological model, together with a statistically significant increase with greater synaptic breakdown (R = 0.24, P < 0.05) in the CA3-CA1 region. The reduction of entropy transfer for DG->CA3 at a synaptic breakdown level of 35% was 35% compared with the control, while the entropy transfer for CA3->CA1 at the same breakdown level increased to 95% relative to the control. The synaptic breakdown model in an Alzheimer's disease-like pathology in DG-CA3-CA1 exhibits chaotic features, as opposed to the control.
Synaptic breakdown in which an increase of Shannon entropy is observed indicates an irreversible process of Alzheimer's disease. The increase in synapse loss resulted in decreased information flow and entropy transfer in DG->CA3 and, at the same time, a strong increase in CA3->CA1.

]]>Entropy doi: 10.3390/e21040407

Authors: Zhang Feng Yang

This paper investigates the problem of complex modified projective synchronization (CMPS) of fractional-order complex-variable chaotic systems (FOCCS) with unknown complex parameters. Using a complex-variable inequality and a stability theory for fractional-order nonlinear systems, a new scheme is presented for constructing the CMPS of FOCCS with unknown complex parameters. The proposed scheme not only provides a new method for analyzing fractional-order complex-valued systems but also significantly reduces the complexity of computation and analysis. Theoretical proof and simulation results substantiate the effectiveness of the presented synchronization scheme.

]]>Entropy doi: 10.3390/e21040406

Authors: Arturo Tozzi James F. Peters

We describe cosmic expansion as correlated with the standpoints of local observers&rsquo; co-moving horizons. In keeping with relational quantum mechanics, which claims that quantum systems are only meaningful in the context of measurements, we suggest that information gets ergodically &ldquo;diluted&rdquo; in our isotropic and homogeneous expanding Universe, so that an observer detects just a limited amount of the total cosmic bits. The reduced bit perception is due to the decreased density of information inside the expanding cosmic volume in which the observer resides. Further, we show that the second law of thermodynamics can be correlated with cosmic expansion through a relational mechanism, because the decrease in information detected by a local observer in an expanding Universe is concomitant with an increase in perceived cosmic thermodynamic entropy, via the Bekenstein bound and the Landauer principle. Reversing the classical scheme from thermodynamic entropy to information, we suggest that the cosmological constant of the quantum vacuum, which is believed to provoke the current cosmic expansion, could be one of the sources of the perceived increases in thermodynamic entropy. We conclude that entropies, including the entangled entropy of the recently developed framework of quantum computational spacetime, might not describe independent properties, but rather relations among systems and observers.

]]>Entropy doi: 10.3390/e21040405

Authors: Kyumin Moon

Integrated information theory (IIT) asserts that both the level and the quality of consciousness can be explained by the ability of physical systems to integrate information. Although the scientific content and empirical prospects of IIT have attracted interest, this paper focuses on another aspect of IIT, its unique theoretical structure, which relates the phenomenological axioms with the ontological postulates. In particular, the relationship between the exclusion axiom and the exclusion postulate is unclear. Moreover, the exclusion postulate leads to a serious problem in IIT: the qualia underdetermination problem. Therefore, in this paper, I will explore answers to the following three questions: (1) How does the exclusion axiom lead to the exclusion postulate? (2) How does the exclusion postulate cause the qualia underdetermination problem? (3) Is there a solution to this problem? I will provide proposals and arguments for each question. If successful, IIT can be confirmed with respect to not only its theoretical foundation but also its practical application.

]]>Entropy doi: 10.3390/e21040404

Authors: Wenlong Fu Jiawen Tan Yanhe Xu Kai Wang Tie Chen

Rolling bearings are a vital and widely used component in modern industry, affecting the production efficiency and remaining life of a device. An effective and robust fault diagnosis method for rolling bearings can reduce the downtime caused by unexpected failures. Thus, a novel fault diagnosis method for rolling bearings based on fine-sorted dispersion entropy and a support vector machine (SVM) optimized by a mutation sine cosine algorithm and particle swarm optimization (SCA-PSO) is presented to diagnose faults of various sizes, locations, and motor loads. Vibration signals collected from different types of faults are first decomposed by variational mode decomposition (VMD) into sets of intrinsic mode functions (IMFs), where the decomposing mode number K is determined by the central frequency observation method, thereby weakening the non-stationarity of the original signals. Then, the improved fine-sorted dispersion entropy (FSDE) is proposed to enhance the perception of relationship information between neighboring elements and is employed to construct the feature vectors of different fault samples. Afterward, a hybrid optimization strategy combining the advantages of a mutation operator, the sine cosine algorithm, and particle swarm optimization (MSCAPSO) is proposed to optimize the SVM model. The optimal SVM model is subsequently applied to pattern recognition for the different fault samples. The superiority of the proposed method is assessed through multiple comparative experiments. The analysis indicates that the proposed method achieves better precision and stability than several relevant methods, and it is therefore promising in the field of fault diagnosis for rolling bearings.
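The fine-sorted variant is this paper's contribution, but the standard dispersion entropy it refines is simple to state. The sketch below is a minimal plain-Python illustration of that base technique (normal-CDF mapping into c classes, counting patterns of embedding dimension m and delay tau); the parameter defaults are common conventions, not values from this study, and the fine-sorting step itself is not reproduced.

```python
from math import erf, log, sqrt

def dispersion_entropy(x, c=4, m=2, tau=1):
    """Normalized dispersion entropy of a 1-D series x.

    Values are mapped through the normal CDF into c classes, dispersion
    patterns of m classes (delay tau) are counted, and the Shannon entropy
    of the pattern distribution is normalized by log(c**m).
    """
    n = len(x)
    mu = sum(x) / n
    sd = (sum((v - mu) ** 2 for v in x) / n) ** 0.5 or 1.0  # guard constant series
    y = [0.5 * (1 + erf((v - mu) / (sd * sqrt(2)))) for v in x]  # normal CDF -> (0, 1)
    z = [min(int(c * yi) + 1, c) for yi in y]                    # classes 1..c
    num = n - (m - 1) * tau
    counts = {}
    for i in range(num):
        pat = tuple(z[i + j * tau] for j in range(m))
        counts[pat] = counts.get(pat, 0) + 1
    h = -sum((k / num) * log(k / num) for k in counts.values())
    return h / log(c ** m)
```

A constant signal yields entropy 0, while an irregular signal approaches 1, which is what makes the measure usable as a fault-sensitive feature.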

]]>Entropy doi: 10.3390/e21040403

Authors: Changying Guo Biqin Song Yingjie Wang Hong Chen Huijuan Xiong

Model-free variable selection has attracted increasing interest recently due to its flexibility in algorithmic design and outstanding performance in real-world applications. However, most existing statistical methods are formulated under the mean square error (MSE) criterion and are susceptible to non-Gaussian noise and outliers. As the MSE criterion requires the data to satisfy a Gaussian noise condition, it potentially hampers the effectiveness of model-free methods in complex circumstances. To circumvent this issue, we present a new model-free variable selection algorithm that integrates kernel modal regression and gradient-based variable identification. The derived modal regression estimator is closely related to information theoretic learning under the maximum correntropy criterion, and assures algorithmic robustness to complex noise by replacing learning of the conditional mean with the conditional mode. The gradient information of the estimator offers a model-free metric for screening the key variables. In theory, we investigate the foundations of our new model in terms of its generalization bound and variable selection consistency. In applications, the effectiveness of the proposed method is verified by data experiments.

]]>Entropy doi: 10.3390/e21040402

Authors: Bing Liu Dong-Xiao Li Xiao-Qiang Shao

A scheme is proposed to generate maximally entangled states of two &Lambda;-type atoms trapped in separate overdamped optical cavities using quantum-jump-based feedback. This proposal can stabilize not only the singlet state but also the other three triplet states by alternating the detuning parameter and relative phase of the classical fields. Meanwhile, the scheme makes it convenient to manipulate the atoms and is much more robust against atomic spontaneous emission. The parameters related to a potential experiment are analyzed comprehensively, confirming that quantum feedback technology is a significant tool for entanglement production with high fidelity.

]]>Entropy doi: 10.3390/e21040401

Authors: Izlian Y. Orea-Flores Francisco J. Gallegos-Funes Alfonso Arellano-Reynoso

In this paper, we propose a filtering method based on local complexity estimation in the wavelet domain for MRI (magnetic resonance imaging) denoising. A threshold selection methodology is proposed in which the edge and detail preservation properties for each pixel are determined by the local complexity of the input image. In the proposed filtering method, the current wavelet kernel is compared with a threshold to identify the signal- or noise-dominant pixels at a given scale, providing good visual quality and avoiding blurred and over-smoothed processed images. We present a comparative performance analysis with different wavelets to find the optimal wavelet for MRI denoising. Numerical experiments and visual results on simulated MR images degraded with Rician noise demonstrate that the proposed algorithm consistently outperforms other denoising methods by balancing the tradeoff between noise suppression and fine detail preservation. The proposed algorithm can enhance the contrast between regions, allowing the delineation of regions of interest across different textures or tissues in the processed images. The proposed approach also produces satisfactory results for real MRI denoising by balancing detail preservation and noise removal and by enhancing the contrast between regions of the image. Additionally, the proposed algorithm is compared with other approaches in the case of Additive White Gaussian Noise (AWGN) using standard images to demonstrate that it does not need to be adapted specifically to Rician or AWGN noise, which is an advantage over other methods. Finally, the proposed scheme is simple, efficient, and feasible for MRI denoising.

]]>Entropy doi: 10.3390/e21040400

Authors: Jie Zhou Xiaoming Guo Zhijian Wang Wenhua Du Junyuan Wang Xiaofeng Han Jingtai Wang Gaofeng He Huihui He Huiling Xue Yanfei Kou

In recent years, a new fault diagnosis method named variational mode decomposition (VMD) has been widely used in industrial production. However, the decomposition accuracy of VMD is determined by two parameters, the decomposition layer number k and the penalty factor &alpha;; if these parameters are not properly selected, over-decomposition or under-decomposition will occur. In order to determine the parameters adaptively, a method to optimize VMD using the immune fruit fly optimization algorithm (IFOA) is proposed in this paper. In this method, permutation entropy is used as the fitness function. First, the IFOA searches iteratively for the best combination of k and &alpha;; then, this combination of parameters is used to perform VMD; finally, the center frequency is determined through frequency spectrum analysis. The proposed method is applied to fault extraction from a simulated signal and from a measured signal of a wind turbine gearbox, and the fault frequency is successfully extracted. Comparisons with ensemble empirical mode decomposition (EEMD) and singular spectrum decomposition (SSD) validate the feasibility of the proposed method.

]]>Entropy doi: 10.3390/e21040398

Authors: Suhang Song Heming Jia Jun Ma

Multilevel thresholding segmentation of color images is an important technology in various applications and has received increasing attention in recent years. The process of determining the optimal threshold values with traditional methods is time-consuming. In order to mitigate this problem, meta-heuristic algorithms have been employed in this field in the past few years to search for the optima. In this paper, an effective technique based on the Electromagnetic Field Optimization (EFO) algorithm and a fuzzy entropy criterion is proposed; in addition, a novel chaotic strategy is embedded into EFO to develop a new algorithm named CEFO. To evaluate the robustness of the proposed algorithm, competitive algorithms such as the Artificial Bee Colony (ABC), Bat Algorithm (BA), Wind Driven Optimization (WDO), and Bird Swarm Algorithm (BSA) are compared using fuzzy entropy as the fitness function. Furthermore, the proposed segmentation method is also compared with the widely used approaches of Otsu&rsquo;s variance and Kapur&rsquo;s entropy to verify its segmentation accuracy and efficiency. Experiments are conducted on ten Berkeley benchmark images, and the results are presented in terms of peak signal-to-noise ratio (PSNR), mean structural similarity (MSSIM), feature similarity (FSIM), and computational time (CPU Time) at threshold levels of 4, 6, 8, and 10 for each test image. The series of experiments clearly demonstrates the superior performance of the proposed technique, which handles multilevel-thresholding color image segmentation excellently.
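Of the reported evaluation metrics, peak signal-to-noise ratio (PSNR) is the most straightforward to reproduce. The sketch below computes standard PSNR over flattened 8-bit pixel sequences; it illustrates only the metric, not the segmentation pipeline of the paper.

```python
from math import log10

def psnr(original, processed, max_val=255):
    """Peak signal-to-noise ratio (dB) between two equal-length pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(original, processed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * log10(max_val ** 2 / mse)
```

Higher PSNR means the segmented image is closer to the reference; identical images give infinite PSNR since the mean-squared error vanishes.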

]]>Entropy doi: 10.3390/e21040399

Authors: Rahman Ahmad Kano Mustafa

Steam methane reforming (SMR) is a dominant technology for hydrogen production. For highly energy-efficient operation, robust energy analysis is crucial. In particular, exergy analysis has received the attention of researchers due to its advantages over conventional energy analysis. In this work, a computational fluid dynamics (CFD)-based exergy analysis was applied to a monolith microreactor for SMR. Initially, a CFD model of SMR was developed using literature data. Then, the design and operating conditions of the microreactor were optimized based on the developed CFD model to achieve higher conversion efficiency and a shorter length. Exergy analysis of the optimized microreactor was performed using a custom field function (CFF) integrated with the CFD environment. The optimized catalytic monolith microreactor for SMR achieved higher conversion efficiency with lower consumption of energy, catalyst, and construction material than the reactor reported in the literature. The exergy analysis algorithm helped in evaluating length-wise profiles of all three types of exergy, namely physical exergy, chemical exergy, and mixing exergy, in the microreactor.

]]>Entropy doi: 10.3390/e21040397

Authors: Mostafa M. A. Khater Raghda A. M. Attia Dianchen Lu

This study investigates the solitary wave solutions of the nonlinear fractional Jimbo&ndash;Miwa (JM) equation by using the conformable fractional derivative and several other distinct analytical techniques. The JM equation describes certain interesting (3+1)-dimensional waves in physics. Moreover, it is considered the second equation of the famous Painlev&eacute; hierarchy of integrable systems. The properties of the conformable fractional derivative were employed to convert the equation into an ordinary differential equation of integer order, and many novel exact solutions of this model were obtained. The conformable fractional derivative is equivalent to the ordinary derivative for functions that have continuous derivatives up to some desired order over some domain (smooth functions). The solutions obtained with each technique were characterized and compared to illustrate the similarities and differences between them. The techniques proved to be powerful, easy to apply, and effective for this nonlinear partial differential equation.

]]>Entropy doi: 10.3390/e21040396

Authors: Qiuwei Xing Haijiang Wang Mingbiao Chen Zhaoyun Chen Rongbin Li Peipeng Jin Yong Zhang

In this study, we designed and fabricated NbTiAlSiZrNx high-entropy alloy (HEA) films. The parameters of the radio frequency (RF) pulse magnetron sputtering process were fixed while the N2 flux ratio (RN) was varied over 0%, 10%, 20%, 30%, 40%, and 50%. NbTiAlSiZrNx HEA films were then deposited on the 304 stainless steel (SS) substrate. Among the increasing N2 flow rates, the film deposited at an RN of 50% had the highest hardness (12.4 GPa), the highest modulus (169 GPa), a small roughness, and a beautiful color. The thicknesses of the films gradually decreased from 298.8 nm to 200 nm, and all the thin films had an amorphous structure. The electrochemical corrosion resistance of the films in a 0.5 mol/L H2SO4 solution at room temperature was studied, and changes in their corrosion characteristics were observed. The HEA films prepared at N2 flow rates of 10% and 30% were more prone to corrosion than 304 SS, but their corrosion rate was lower than that of 304 SS. NbTiAlSiZrNx HEA films prepared at N2 flow rates of 20%, 40%, and 50% were more corrosion-resistant than 304 SS. In addition, the passivation stability of the NbTiAlSiZrNx HEA was worse than that of 304 SS. Altogether, these results show that pitting corrosion occurred on NbTiAlSiZrNx HEA films.

]]>Entropy doi: 10.3390/e21040395

Authors: Koivisto Zevenhoven

Mineral carbonation routes have been extensively studied for almost two decades at &Aring;bo Akademi University, focusing on the extraction of magnesium from magnesium silicates using ammonium sulfate (AS) and/or ammonium bisulfate (ABS) flux salt followed by carbonation. There is, however, a need for proper recovery and recirculation of the chemicals involved. This study focused on the separation of AS, ABS and aqueous ammonia using different setups of bipolar membrane electrodialysis with both synthetic and rock-derived solutions. Bipolar membranes offer the possibility to split water, which in turn makes it possible to regenerate chemicals like the acids and bases needed in mineral carbonation without excess gas formation. Tests were run in batch, continuous, and recirculating mode, and the exergy (electricity) input during the tests was calculated. The results show that separation of ions was achieved, even if the solutions obtained were still too weak for use in the downstream process to control pH. The energy demand for separating 1 kg of NH4+ was 1.7, 3.4, 302, or 340 MJ/kg NH4+, depending on the setup chosen. More work must hence be done to make the separation more efficient, such as narrowing the cell width.

]]>Entropy doi: 10.3390/e21040393

Authors: Haifeng Bao Weining Fang Beiyuan Guo Peng Wang

With the improvement in automation technology, humans have now become supervisors of complicated control systems, monitoring informative human&ndash;machine interfaces. Analyzing the visual attention allocation behaviors of supervisors is essential for the design and evaluation of such interfaces. Supervisors tend to pay attention to visual sections containing information with more fuzziness, which gives them a higher mental entropy; they also tend to focus on the important information in the interface. In this paper, the fuzziness tendency is described by the probability of correct evaluation of the visual sections using hybrid entropy. The importance tendency is defined by the proposed value priority function, which is based on the definition of the amount of information using the membership degrees of importance. By combining these two cognitive tendencies, the informative top-down visual attention allocation mechanism was revealed, and the supervisors&rsquo; visual attention allocation model was built. The Building Automation System (BAS), used to monitor the environmental equipment in a subway, is a typical informative human&ndash;machine interface. An experiment using a BAS simulator was conducted to verify the model. The results showed that the supervisors&rsquo; attention behavior was in good agreement with the proposed model. The effectiveness of the model and its comparison with current models are also discussed. The proposed attention allocation model is effective and reasonable, and is promising for use in behavior analysis, cognitive optimization, and industrial design.

]]>Entropy doi: 10.3390/e21040394

Authors: Andrea Murari Emmanuele Peluso Francesco Cianfrani Pasquale Gaudio Michele Lungaroni

The most widely used forms of model selection criteria, the Bayesian Information Criterion (BIC) and the Akaike Information Criterion (AIC), are expressed in terms of synthetic indicators of the residual distribution: the variance and the mean-squared error of the residuals, respectively. In many applications in science, the noise affecting the data can be expected to have a Gaussian distribution. Therefore, at the same level of variance and mean-squared error, models whose residuals are more uniformly distributed should be favoured. The degree of uniformity of the residuals can be quantified by the Shannon entropy. Including the Shannon entropy in the BIC and AIC expressions significantly improves these criteria. The improved performance has been demonstrated empirically with a series of simulations for various classes of functions and for different levels and statistics of the noise. In the presence of outliers, a better treatment of the errors, using the Geodesic Distance, has proved essential.
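As a rough illustration of the ingredients involved, standard AIC and BIC can be computed from the residual mean-squared error under a Gaussian likelihood, and the uniformity of the residuals quantified by the Shannon entropy of a binned histogram. The sketch below shows only these two building blocks; how the entropy term is weighted into the modified criteria follows the paper and is not reproduced here, and the bin count is an arbitrary assumption.

```python
from math import log

def aic_bic(residuals, k):
    """Gaussian-likelihood AIC and BIC (up to an additive constant)
    for a model with k free parameters."""
    n = len(residuals)
    mse = sum(e * e for e in residuals) / n
    return n * log(mse) + 2 * k, n * log(mse) + k * log(n)

def residual_entropy(residuals, bins=10):
    """Shannon entropy of the binned residual distribution.
    Higher values mean more uniformly distributed residuals."""
    lo, hi = min(residuals), max(residuals)
    width = (hi - lo) / bins or 1.0  # guard against constant residuals
    counts = [0] * bins
    for e in residuals:
        counts[min(int((e - lo) / width), bins - 1)] += 1
    n = len(residuals)
    return -sum(c / n * log(c / n) for c in counts if c)
```

Two candidate models with equal MSE then get identical AIC/BIC, while the residual entropy breaks the tie in favour of the one with more uniform residuals.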

]]>Entropy doi: 10.3390/e21040392

Authors: Christoph Kawan

This is an editorial article summarizing the scope and contents of the Special Issue Entropy in Networked Control.

]]>Entropy doi: 10.3390/e21040391

Authors: Leandro Pardo

In the last decades the interest in statistical methods based on information measures and particularly in pseudodistances or divergences has grown substantially [...]

]]>Entropy doi: 10.3390/e21040390

Authors: Demei Li Huilin Lai Baochang Shi

In this work, we develop a mesoscopic lattice Boltzmann Bhatnagar-Gross-Krook (BGK) model to solve the (2 + 1)-dimensional wave equation with nonlinear damping and source terms. Through the Chapman-Enskog multiscale expansion, the macroscopic governing evolution equation can be recovered accurately by choosing appropriate local equilibrium distribution functions. We validate the present mesoscopic model on several related problems for which the exact solution is known. The numerical solution is in very good agreement with the exact one, which shows that the present mesoscopic model is valid and can be used to solve similar nonlinear wave equations with nonlinear damping and source terms, and to predict and enrich the understanding of the internal mechanisms of nonlinearity and complexity in nonlinear dynamic phenomena.

]]>Entropy doi: 10.3390/e21040389

Authors: Hanwen Zhang Peizhi Liu Jinxiong Hou Junwei Qiao Yucheng Wu

The mechanical behavior of a partially recrystallized fcc-CoCrFeNiTi0.2 high-entropy alloy (HEA) is investigated. Temporal evolutions of the morphology, size, and volume fraction of the nanoscale L12-(Ni,Co)3Ti precipitates at 800 &deg;C over various aging times were quantitatively evaluated. The ultimate tensile strength was greatly improved to ~1200 MPa, accompanied by a tensile elongation of ~20% after precipitation. The temporal exponents for the average size and number density of the precipitates reasonably conform to the predictions of the PV model. A composite model was proposed to describe the plastic strain of the current HEA. As a consequence, the tensile strength and tensile elongation are well predicted, in accord with the experimental results. The present experiment provides a theoretical reference for the strengthening of partially recrystallized single-phase HEAs in the future.

]]>Entropy doi: 10.3390/e21040388

Authors: Primož Poredoš Andrej Kitanovski Alojz Poredoš

This paper presents an exergy-efficiency analysis of low-temperature district heating systems (DHSs) with different sanitary hot-water (SHW) boosters. The required temperature of the SHW was set to 50 &deg;C. The main objective of this study was to compare the exergy efficiencies of a DHS without a booster to DHSs with three different types of boosters, i.e., electric-, gas-boiler- and heat-pump-based, during the winter and summer seasons. To achieve this, we developed a generalized model for the calculation of the exergy efficiency of a DHS with or without the booster. The results show that during the winter season, for a very low relative share of SHW production, the DHS without the booster exhibits favorable exergy efficiencies compared to the DHSs with boosters. By increasing this share, an intersection point above 45 &deg;C for the supply temperatures, at which the higher exergy efficiency of a DHS with a booster prevails, can be identified. In the summer season the results show that a DHS without a booster at a supply temperature above 70 &deg;C achieves lower exergy efficiencies compared to DHSs with boosters at supply temperatures above 40 &deg;C. The results also show that ultra-low supply and return temperatures should be avoided for the DHSs with boosters, due to higher rates of entropy generation.
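The dependence of exergy efficiency on supply temperature follows from the textbook exergy-of-heat relation. The sketch below is a generic illustration of that relation with assumed temperatures, not the paper's generalized DHS model.

```python
def heat_exergy(q, t_supply, t0=273.15):
    """Exergy of a heat flow q (kW) delivered at temperature t_supply (K),
    relative to a reference (ambient) temperature t0 (K):
    Ex = Q * (1 - T0 / T), i.e. the Carnot factor times the heat."""
    return q * (1 - t0 / t_supply)
```

For the same heat demand, lowering the supply temperature from 70 &deg;C to 40 &deg;C (with a 0 &deg;C reference) cuts the Carnot factor by more than a third, which is why the choice of SHW booster dominates the exergy efficiency at low supply temperatures.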

]]>Entropy doi: 10.3390/e21040387

Authors: Tom Vergoossen Robert Bedington James A. Grieve Alexander Ling

An application of quantum communications is the transmission of qubits to create shared symmetric encryption keys in a process called quantum key distribution (QKD). Contrary to public-private key encryption, symmetric encryption is considered safe from (quantum) computing attacks, i.e., it provides forward security and is thus attractive for secure communications. In this paper we argue that for free-space quantum communications, especially with satellites, if one assumes that man-in-the-middle attacks can be detected by classical channel monitoring techniques, simplified quantum communications protocols and hardware systems can be implemented that offer improved key rates. We term these protocols photon key distribution (PKD) to differentiate them from the standard QKD protocols. We identify three types of photon sources and calculate asymptotic secret key rates for PKD protocols and compare them to their QKD counterparts. PKD protocols use only one measurement basis, which we show roughly doubles the key rates. Furthermore, with the relaxed security assumptions one can establish keys at very high losses, in contrast to QKD where at the same losses privacy amplification would make key generation impossible.

]]>Entropy doi: 10.3390/e21040386

Authors: Lin Lin Bin Wang Jiajin Qi Da Wang Nantian Huang

To improve the accuracy of the recognition of complicated mechanical faults in bearings, a large number of features containing fault information need to be extracted. In most studies regarding bearing fault diagnosis, the influence of the limitation of fault training samples has not been considered. Furthermore, commonly used multi-classifiers could misidentify the type or severity of faults without using normal samples as training samples. Therefore, a novel bearing fault diagnosis method based on the one-class classification concept and random forest is proposed for reducing the impact of limited fault training samples. First, the bearing vibration signals are decomposed into numerous intrinsic mode functions using empirical wavelet transform. Then, 284 features, including multiple entropies, are extracted from the original signal and the intrinsic mode functions to construct the initial feature set. Lastly, a hybrid classifier, based on a one-class support vector machine trained by normal samples and a random forest trained by imbalanced fault data without some specific severities, is set up to accurately identify the mechanical state and specific fault type of the bearings. The experimental results show that the proposed method can significantly improve the classification accuracy compared with traditional methods for different diagnostic targets.

]]>Entropy doi: 10.3390/e21040385

Authors: David Cuesta-Frau Juan Pablo Murillo-Escobar Diana Alexandra Orrego Edilson Delgado-Trejos

Permutation Entropy (PE) is a time series complexity measure commonly used in a variety of contexts, with medicine being the prime example. In its general form, it requires three input parameters for its calculation: time series length N, embedded dimension m, and embedded delay &tau;. Inappropriate choices of these parameters may potentially lead to incorrect interpretations. However, there are no specific guidelines for an optimal selection of N, m, or &tau;, only general recommendations such as N &gt;&gt; m!, &tau; = 1, or m = 3, &hellip;, 7. This paper deals specifically with the study of the practical implications of N &gt;&gt; m!, since long time series are often not available, or non-stationary, and other preliminary results suggest that low N values do not necessarily invalidate PE usefulness. Our study analyses the PE variation as a function of the series length N and embedded dimension m in the context of a diverse experimental set, both synthetic (random, spikes, or logistic model time series) and real-world (climatology, seismic, financial, or biomedical time series), and the classification performance achieved with varying N and m. The results seem to indicate that shorter lengths than those suggested by N &gt;&gt; m! are sufficient for a stable PE calculation, and even very short time series can be robustly classified based on PE measurements before the stability point is reached. This may be due to the fact that there are forbidden patterns in chaotic time series, not all the patterns are equally informative, and differences among classes are already apparent at very short lengths.
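For reference, the Bandt-Pompe permutation entropy with the three parameters discussed above can be sketched in a few lines. This is the standard definition; normalization by log(m!) is assumed here so that values lie in [0, 1].

```python
from math import factorial, log

def permutation_entropy(x, m=3, tau=1):
    """Normalized permutation entropy of series x with embedded dimension m
    and embedded delay tau, based on Bandt-Pompe ordinal patterns."""
    n = len(x) - (m - 1) * tau  # number of available windows
    counts = {}
    for i in range(n):
        window = [x[i + j * tau] for j in range(m)]
        # ordinal pattern: indices of the window sorted by value
        pattern = tuple(sorted(range(m), key=window.__getitem__))
        counts[pattern] = counts.get(pattern, 0) + 1
    h = -sum((c / n) * log(c / n) for c in counts.values())
    return h / log(factorial(m))
```

Only n = N &minus; (m &minus; 1)&tau; windows are available, which is why the N &gt;&gt; m! condition matters: with m = 7 there are 5040 possible patterns, and a short series cannot populate them all.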

]]>Entropy doi: 10.3390/e21040384

Authors: Vicente Aboites David Liceaga Rider Jaimes-Reátegui Juan Hugo García-López

In this paper, we propose using paraxial matrix optics to describe a ring phase-conjugated resonator that includes an intracavity chaos-generating element; this allows the system to behave in phase space as a Bogdanov map. Explicit expressions for the intracavity chaos-generating matrix elements were obtained. Furthermore, computer calculations for several parameter configurations were made; rich dynamic behavior, ranging from periodic orbits and orbits of high periodicity to chaos, was observed through bifurcation diagrams. These results confirm the direct dependence of the dynamics on the parameters present in the intracavity chaos-generating element.

]]>Entropy doi: 10.3390/e21040383

Authors: Licai Liu Chuanhong Du Xiefu Zhang Jian Li Shuaishuai Shi

Compared with fractional-order chaotic systems with a large number of dimensions, three-dimensional or integer-order chaotic systems exhibit low complexity. In this paper, two novel four-dimensional, continuous, fractional-order, autonomous, and dissipative chaotic system models with higher complexity are proposed. Numerical simulation of the two systems was used to verify that the two new fractional-order chaotic systems exhibit very rich dynamic behavior. Moreover, the synchronization of fractional-order chaotic systems is also an issue that demands attention. In order to apply the Lyapunov stability theory, it is often necessary to design complicated functions to achieve the synchronization of fractional-order systems. Based on the fractional Mittag&ndash;Leffler stability theory, an adaptive, large-scale, and asymptotic synchronization control method is studied in this paper. The proposed scheme realizes the synchronization of two different fractional-order chaotic systems under the conditions of determined parameters and uncertain parameters. The synchronization theory and its proof are given in this paper. Finally, the model simulation results prove that the designed adaptive controller has good reliability, which contributes to the theoretical research into, and practical engineering applications of, chaos.

]]>Entropy doi: 10.3390/e21040382

Authors: Luis Abrego Alexey Zaikin

Intercellular communication and its coordination allow cells to exhibit multistability as a form of adaptation. This conveys information processing from intracellular signaling networks, enabling self-organization with other cells, typically involving mechanisms associated with cognitive systems. How information is integrated in a functional manner, and how it relates to the different cell fates, is still unclear. In parallel, integrated information, an approach drawn originally from studies in neuroscience, proposes to quantify the balance between integration and differentiation in the causal dynamics among the elements of any interacting system. In this work, such an approach is used to study the dynamical complexity in a genetic network of repressilators coupled by quorum sensing. Several attractors under different conditions are identified and related to proposed measures of integrated information to gain insight into the collective interaction and functional differentiation of cells. This research particularly addresses the open question of coding and information transmission in genetic systems.

]]>Entropy doi: 10.3390/e21040381

Authors: Daniel Álvarez Ana Sánchez-Fernández Ana M. Andrés-Blanco Gonzalo C. Gutiérrez-Tobal Fernando Vaquerizo-Villar Verónica Barroso-García Roberto Hornero Félix del Campo

Chronic obstructive pulmonary disease (COPD) is one of the most prevalent lung diseases worldwide. COPD patients show major dysfunction in cardiac autonomic modulation due to sustained hypoxaemia, which has been significantly related to a higher risk of cardiovascular disease. Obstructive sleep apnoea syndrome (OSAS) is a frequent comorbidity in COPD patients. It has been found that patients suffering from both COPD and OSAS simultaneously, the so-called overlap syndrome, have notably higher morbidity and mortality. Heart rate variability (HRV) has been demonstrated to be useful for assessing changes in autonomic functioning in different clinical conditions. However, there is still little scientific evidence on the magnitude of changes in cardiovascular dynamics elicited by the combined effect of both respiratory diseases, particularly during sleep, when apnoeic events occur. In this regard, we hypothesised that a non-linear analysis is able to provide further insight into the long-term dynamics of overnight cardiovascular modulation. Accordingly, this study is aimed at assessing the usefulness of sample entropy (SampEn) to distinguish changes in overnight pulse rate variability (PRV) recordings among three patient groups while sleeping: COPD, moderate-to-severe OSAS, and overlap syndrome. In order to achieve this goal, a population composed of 297 patients was studied: 22 with COPD alone, 213 showing moderate-to-severe OSAS, and 62 with COPD and moderate-to-severe OSAS simultaneously (COPD + OSAS). Cardiovascular dynamics were analysed using pulse rate (PR) recordings from unattended pulse oximetry carried out at patients&rsquo; homes. Conventional time- and frequency-domain analyses were performed to characterise sympathetic and parasympathetic activation of the nervous system, while SampEn was applied to quantify long-term changes in irregularity.
Our analyses revealed that overnight PRV recordings from COPD + OSAS patients were significantly more irregular (higher SampEn) than those from patients with COPD alone (0.267 [0.210&ndash;0.407] vs. 0.212 [0.151&ndash;0.267]; p &lt; 0.05) due to recurrent apnoeic events during the night. Similarly, COPD + OSAS patients also showed significantly higher irregularity in PRV during the night than subjects with OSAS alone (0.267 [0.210&ndash;0.407] vs. 0.241 [0.189&ndash;0.325]; p = 0.05), which suggests that the cumulative effect of both diseases increases the disorganization of pulse rate while sleeping. On the other hand, no statistically significant differences were found between COPD and COPD + OSAS patients when traditional frequency bands (LF and HF) were analysed. We conclude that SampEn is able to properly quantify changes in the overnight cardiovascular dynamics of patients with overlap syndrome, which could be useful to assess cardiovascular impairment in COPD patients due to the presence of concomitant OSAS.
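
As a rough illustration of the irregularity metric used above, sample entropy can be sketched in a few lines of generic Python (a textbook implementation, not the authors' code; the tolerance r is taken as a fraction of the series' standard deviation, and the toy series below are illustrative, not clinical data):

```python
import math

def sample_entropy(series, m=2, r=0.2):
    """SampEn(m, r): negative log of the conditional probability that
    sequences matching for m points also match for m + 1 points.
    r is a tolerance given as a fraction of the standard deviation."""
    n = len(series)
    mean = sum(series) / n
    sd = (sum((x - mean) ** 2 for x in series) / n) ** 0.5
    tol = r * sd

    def matches(length):
        # Both lengths use n - m templates so the counts are comparable.
        templates = [series[i:i + length] for i in range(n - m)]
        return sum(
            max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= tol
            for i in range(len(templates))
            for j in range(i + 1, len(templates))
        )

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a and b else float("inf")
```

A strictly periodic series yields a low SampEn, while a noisy series yields a high one, mirroring the regular-vs-irregular PRV contrast exploited in the study.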

]]>Entropy doi: 10.3390/e21040380

Authors: Naomichi Hatano Gonzalo Ordonez

It is one of the most important and long-standing issues of physics to derive irreversibility out of a time-reversal symmetric equation of motion. The present paper considers the breaking of time-reversal symmetry in open quantum systems and the emergence of an arrow of time. We claim that the time-reversal symmetric Schr&ouml;dinger equation can have eigenstates that break the time-reversal symmetry if the system is open in the sense that it has at least a countably infinite number of states. Such eigenstates, namely the resonant and anti-resonant states, have complex eigenvalues. We show that, although these states are often called &ldquo;unphysical&rdquo;, they obey probability conservation in a particular way. We also comment that the seemingly Hermitian Hamiltonian is non-Hermitian in the functional space of the resonant and anti-resonant states, and hence there is no contradiction in the fact that it has complex eigenvalues. We finally show how the existence of the states that break the time-reversal symmetry affects the quantum dynamics. The dynamics that starts from a time-reversal symmetric initial state is dominated by the resonant states for t &gt; 0; this explains the phenomenon of the arrow of time, in which decay prevails over growth. The time-reversal symmetry holds in that the dynamics ending at a time-reversal symmetric final state is dominated by the anti-resonant states for t &lt; 0.

]]>Entropy doi: 10.3390/e21040379

Authors: Victor Fernando Gomez Comendador Rosa Maria Arnaldo Valdés Manuel Villegas Diaz Eva Puntero Parla Danlin Zheng

Demand &amp; Capacity Management solutions are key SESAR (Single European Sky ATM Research) research projects to adapt future airspace to the expected high air traffic growth in a Trajectory Based Operations (TBO) environment. These solutions rely on processes, methods and metrics for the complexity assessment of traffic flows. However, current complexity methodologies and metrics do not properly take into account the impact of trajectory uncertainty on the quality of complexity predictions of air traffic demand. This paper proposes the development of several Bayesian network (BN) models to identify the impacts of TBO uncertainties on the quality of the predictions of the complexity of air traffic demand for two particular Demand Capacity Balance (DCB) solutions developed by SESAR 2020, i.e., Dynamic Airspace Configuration (DAC) and Flight Centric Air Traffic Control (FCA). In total, seven BN models are elicited, covering each concept at different time horizons. The models allow evaluating the influence of the &ldquo;complexity generators&rdquo; on the &ldquo;complexity metrics&rdquo;. Moreover, when the required level for the uncertainty of complexity is set, the networks allow identifying by how much the uncertainty of the input variables should improve.

]]>Entropy doi: 10.3390/e21040378

Authors: Imanol Granada Pedro M. Crespo Javier Garcia-Frías

In this paper, we look at the problem of implementing high-throughput Joint Source-Channel (JSC) coding schemes for the transmission of binary sources with memory over AWGN channels. The sources are modeled either by a Markov chain (MC) or a hidden Markov model (HMM). We propose a coding scheme based on the Burrows-Wheeler Transform (BWT) and the parallel concatenation of Rate-Compatible Modulation and Low-Density Generator Matrix (RCM-LDGM) codes. The proposed scheme uses the BWT to convert the original source with memory into a set of independent non-uniform binary Discrete Memoryless Sources (DMS), which are then separately encoded, with optimal rates, using RCM-LDGM codes.
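
The BWT stage can be illustrated with a minimal, generic sketch (not the paper's implementation): sorting all cyclic rotations of the input groups symbols that share a similar context, so each region of the transformed output behaves approximately like a memoryless source.

```python
def bwt(s, sentinel="$"):
    """Burrows-Wheeler Transform: append a unique sentinel, sort all
    cyclic rotations, and read off the last column.  Symbols preceded
    by similar contexts end up adjacent, which is what allows the
    output to be treated as a set of near-memoryless subsources."""
    assert sentinel not in s
    s += sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rotation[-1] for rotation in rotations)
```

For example, `bwt("banana")` returns the classic `"annb$aa"`, with the repeated context "an" clustering the three a's together.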

]]>Entropy doi: 10.3390/e21040377

Authors: Meng You Yiyong Xiao Siyue Zhang Shenghan Zhou Pei Yang Xing Pan

In this study, we investigated the time-varying capacitated lot-sizing problem under a fast-changing production environment, where production factors such as setup costs, inventory-holding costs, production capacities, or even material prices may be subject to continuous change over the entire planning horizon. Traditional lot-sizing theorems and algorithms, which often assume a constant production environment, are no longer fit for this situation. We analyzed the time-varying environment of today&rsquo;s agile enterprises and modeled the time-varying setup costs and the time-varying production capacities. Based on these, we presented two mixed-integer linear programming models, for the time-varying capacitated single-level lot-sizing problem and the time-varying capacitated multi-level lot-sizing problem, respectively, considering the impact of time-varying environments and dynamic capacity constraints. New properties of these models were analyzed with respect to the solution&rsquo;s feasibility and optimality. The solution quality was evaluated in terms of entropy, which indicated that the optimized production system had a lower value than the unoptimized one. A number of computational experiments were conducted on well-known benchmark problem instances using AMPL/CPLEX to verify the proposed models and to test their computational effectiveness and efficiency, which showed that the new models are applicable to the time-varying environment. Two of the benchmark problems were updated with new best-known solutions in the experiments.
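
For orientation, the classical Wagner-Whitin dynamic program solves the *uncapacitated* single-level special case with period-dependent (i.e., time-varying) costs; the MILP models above generalize it by adding capacity constraints. A minimal sketch, not taken from the paper:

```python
def wagner_whitin(demand, setup, hold):
    """Minimum total cost for uncapacitated single-level lot-sizing.
    demand[t], setup[t] and hold[t] may all vary with t, which is the
    time-varying cost structure discussed above (capacities are not
    modeled here).  best[t] = cheapest plan covering periods 0..t-1."""
    n = len(demand)
    best = [0.0] + [float("inf")] * n
    for t in range(1, n + 1):
        for s in range(t):  # last setup placed in period s covers s..t-1
            cost = best[s] + setup[s]
            for k in range(s, t):
                # demand of period k is carried in stock from s to k
                cost += demand[k] * sum(hold[s:k])
            best[t] = min(best[t], cost)
    return best[n]
```

With demands [10, 20], setups [50, 50] and unit holding costs [1, 1], producing everything in period 0 costs 50 + 20 = 70, beating two setups at 100.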

]]>Entropy doi: 10.3390/e21040376

Authors: Yaohao Peng Pedro Henrique Melo Albuquerque Igor Ferreira do Nascimento João Victor Freitas Machado

This paper discusses the effects of introducing nonlinear interactions and noise-filtering to the covariance matrix used in Markowitz&rsquo;s portfolio allocation model, evaluating the technique&rsquo;s performance on daily data from seven financial markets between January 2000 and August 2018. We estimated the covariance matrix by applying Kernel functions and applied filtering following the theoretical distribution of the eigenvalues based on Random Matrix Theory. The results were compared with the traditional linear Pearson estimator and robust estimation methods for covariance matrices. The results showed that noise-filtering yielded portfolios with significantly larger risk-adjusted profitability than their non-filtered counterparts for almost half of the tested cases. Moreover, we analyzed the improvements and setbacks of the nonlinear approaches over linear ones, discussing in which circumstances the additional complexity of nonlinear features seemed to predominantly add more noise or more predictive performance.

]]>Entropy doi: 10.3390/e21040375

Authors: Gottwald Braun

In its most basic form, decision-making can be viewed as a computational process that progressively eliminates alternatives, thereby reducing uncertainty. Such processes are generally costly, meaning that the amount of uncertainty that can be reduced is limited by the amount of available computational resources. Here, we introduce the notion of elementary computation based on a fundamental principle for probability transfers that reduce uncertainty. Elementary computations can be considered as the inverse of Pigou&ndash;Dalton transfers applied to probability distributions, closely related to the concepts of majorization, T-transforms, and generalized entropies that induce a preorder on the space of probability distributions. Consequently, we can define resource cost functions that are order-preserving and therefore monotonic with respect to the uncertainty reduction. This leads to a comprehensive notion of decision-making processes with limited resources. Along the way, we prove several new results on majorization theory, as well as on entropy and divergence measures.
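
A hypothetical numerical sketch of an "elementary computation" in this sense: the inverse of a Pigou&ndash;Dalton transfer moves probability mass from a smaller to a larger entry, and any Schur-concave uncertainty measure such as Shannon entropy can only decrease under it.

```python
import math

def shannon_entropy(p):
    """Shannon entropy (in nats) of a probability vector."""
    return -sum(x * math.log(x) for x in p if x > 0)

def inverse_pigou_dalton(p, i, j, eps):
    """Elementary computation: transfer mass eps from the smaller entry
    p[j] to the larger entry p[i].  This is the inverse of a
    Pigou-Dalton (equalizing) transfer, so it reduces uncertainty."""
    assert p[i] >= p[j] and 0 <= eps <= p[j]
    q = list(p)
    q[i] += eps
    q[j] -= eps
    return q
```

Applying it to [0.4, 0.3, 0.2, 0.1] with a transfer of 0.05 from the last to the first entry yields a distribution that still sums to one but has strictly lower entropy.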

]]>Entropy doi: 10.3390/e21040374

Authors: Xin Zhu Xin Xu Nan Mu

A key issue in saliency detection in foggy images in the wild for human tracking is how to effectively define the less obvious salient objects, the leading cause being that contrast and resolution are reduced by light scattering through fog particles. In this paper, to suppress the interference of the fog and acquire the boundaries of salient objects more precisely, we present a novel saliency detection method for human tracking in the wild. In our method, a combination of object contour detection and salient object detection is introduced. The proposed model can not only maintain the object edge more precisely via object contour detection, but also ensure the integrity of salient objects, and finally obtain accurate saliency maps of objects. Firstly, the input image is transformed into HSV color space, and the amplitude spectrum (AS) of each color channel is adjusted to obtain the frequency domain (FD) saliency map. Then, the local-global superpixel contrast is calculated, and the spatial domain (SD) saliency map is obtained. We use the Discrete Stationary Wavelet Transform (DSWT) to fuse the cues of the FD and SD. Finally, a fully convolutional encoder&ndash;decoder model is utilized to refine the contours of the salient objects. Experimental results demonstrate that the presented model can remove the influence of fog efficiently and outperforms 16 state-of-the-art saliency models.

]]>Entropy doi: 10.3390/e21040373

Authors: Menglu Chen Shaowei Ning Yi Cui Juliang Jin Yuliang Zhou Chengguo Wu

Assessment and diagnosis of regional agricultural drought resilience (RADR) is important groundwork for accurately identifying the shortcomings of regional agriculture in resisting drought disasters. In order to quantitatively assess the capacity of a regional agricultural system to reduce losses from drought disasters under complex conditions and to identify vulnerability indexes, an assessment and diagnosis model for RADR was established. Firstly, this model used the improved fuzzy analytic hierarchy process to determine the index weights; it then proposed an assessment method based on the connection number and an improved connection entropy. Furthermore, the set pair potential based on subtraction was used to diagnose the vulnerability indexes. In addition, a practical application was carried out in the region of the Huaibei Plain in Anhui Province. The evaluation results showed that the RADR in this area from 2005 to 2014 as a whole was relatively weak. However, the average grade values decreased from 3.144 to 2.790 over these 10 years, and the RADR showed an enhancing tendency. Moreover, the possibility of RADR enhancement for the six cities in this region decreased from east to west, and the drought emergency condition was the weak link of the RADR in the Huaibei Plain.

]]>Entropy doi: 10.3390/e21040372

Authors: Zhongjun Ma Shengwu Qin Chen Cao Jiangfeng Lv Guangjie Li Shuangshuang Qiao Xiuyu Hu

Landslides are among the most frequent geomorphic hazards, and they often result in the loss of property and human life in the Changbai Mountain area (CMA), Northeast China. The objective of this study was to produce and compare landslide susceptibility maps for the CMA using an information content model (ICM) with three knowledge-driven methods (the analytic hierarchy process with the ICM (AHP-ICM), the entropy weight method with the ICM (EWM-ICM), and the rough set with the ICM (RS-ICM)) and to explore the influence of the different knowledge-driven methods for a series of parameters on the accuracy of landslide susceptibility mapping (LSM). In this research, the landslide inventory data (145 landslides) were randomly divided into a training dataset (70%, 81 landslides, used for training the models) and a validation dataset (30%, 35 landslides). In addition, 13 layers of landslide conditioning factors, namely, altitude, slope gradient, slope aspect, lithology, distance to faults, distance to roads, distance to rivers, annual precipitation, land type, normalized difference vegetation index (NDVI), topographic wetness index (TWI), plan curvature, and profile curvature, were taken as independent, causal predictors. Landslide susceptibility maps were developed using the ICM, RS-ICM, AHP-ICM, and EWM-ICM, in which weights were assigned to every conditioning factor. The resulting susceptibility was validated using the area under the ROC curve (AUC) method. The success accuracies of the landslide susceptibility maps produced by the ICM, RS-ICM, AHP-ICM, and EWM-ICM methods were 0.931, 0.939, 0.912, and 0.883, respectively, with prediction accuracy rates of 0.926, 0.927, 0.917, and 0.878, respectively. Hence, it can be concluded that the four models used in this study gave close results, with the RS-ICM exhibiting the best performance in landslide susceptibility mapping.
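
The AUC used for validation above can be computed directly as the Mann-Whitney rank statistic: the probability that a randomly chosen positive (landslide) location receives a higher susceptibility score than a randomly chosen negative one. A generic sketch, not the authors' implementation:

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    fraction of (positive, negative) score pairs ranked correctly,
    counting ties as half a win."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))
```

A perfectly separating model scores 1.0, a random one about 0.5, which is the scale on which the 0.878&ndash;0.939 figures above should be read.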

]]>Entropy doi: 10.3390/e21040371

Authors: Hamid A. Jalab Thamarai Subramaniam Rabha W. Ibrahim Hasan Kahtan Nurul F. Mohd Noor

Forgery in digital images has been made considerably easier by the improvement of image manipulation tools. Image forgery can be classified as image splicing or copy-move on the basis of the manipulation type. Image splicing involves creating a new tampered image by merging components of one or more images. Moreover, image splicing disrupts the content and causes abnormalities in the features of the tampered image. Most of the proposed algorithms are incapable of accurately classifying high-dimension feature vectors. Thus, the current study focuses on improving the accuracy of image splicing detection with low-dimension feature vectors. This study also proposes an approximated Machado fractional entropy (AMFE) of the discrete wavelet transform (DWT) to effectively capture splicing artifacts inside an image. AMFE is used as a new fractional texture descriptor, while the DWT is applied to decompose the input image into a number of sub-images with different frequency bands. The standard image dataset CASIA v2 was used to evaluate the proposed approach. Superior detection accuracy and true positive and false positive rates were achieved compared with other state-of-the-art approaches, with a low dimension of feature vectors.

]]>Entropy doi: 10.3390/e21040370

Authors: Christos K. Volos Sajad Jafari Jacques Kengne Jesus M. Munoz-Pacheco Karthikeyan Rajagopal

In the last few years, entropy has been a fundamental and essential concept in information theory [...]

]]>Entropy doi: 10.3390/e21040369

Authors: Jing Bai Mengjie Wang Dexin Kong

Sketch-based 3D model retrieval has become an important research topic in many applications, such as computer graphics and computer-aided design. Although sketches and 3D models have huge interdomain visual perception discrepancies, and sketches of the same object have remarkable intradomain visual perception diversity, the 3D models and sketches of the same class share common semantic content. Motivated by these findings, we propose a novel approach for sketch-based 3D model retrieval that constructs a deep common semantic space embedding using a triplet network. First, a common data space is constructed by representing every 3D model as a group of views. Second, a common modality space is generated by translating views to sketches according to a cross-entropy evaluation. Third, a common semantic space embedding for the two domains is learned based on a triplet network. Finally, based on the learned features of sketches and 3D models, four kinds of distance metrics between sketches and 3D models are designed, and sketch-based 3D model retrieval results are obtained. Experimental results using the Shape Retrieval Contest (SHREC) 2013 and SHREC 2014 datasets reveal the superiority of our proposed method over state-of-the-art methods.
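
The objective at the heart of such a triplet network can be sketched generically (the margin and the toy 2-D embeddings below are illustrative assumptions, not values from the paper): an anchor sketch embedding is pulled toward a 3D-model embedding of the same class and pushed away from one of a different class.

```python
import math

def l2(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard hinge triplet loss: zero once the positive is closer to
    the anchor than the negative is, by at least `margin`."""
    return max(0.0, l2(anchor, positive) - l2(anchor, negative) + margin)
```

During training the loss is summed over mined triplets and backpropagated through the shared embedding network; at retrieval time, ranking reduces to nearest-neighbour search in the common semantic space.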

]]>Entropy doi: 10.3390/e21040368

Authors: Zhanna Reznikova Jan Levenets Sofia Panteleeva Anna Novikovskaya Boris Ryabko Natalia Feoktistova Anna Gureeva Alexey Surov

Using the data-compression method, we revealed a similarity between the hunting behaviors of the common shrew, which is insectivorous, and several rodent species with different types of diet. The seven rodent species studied displayed succinct, highly predictable hunting stereotypes, in which it was easy for the data compressor to find regularities. The generalist Norway rat, with its changeable manipulation of prey and less predictable transitions between stereotype elements, differs significantly from the other species. The levels of complexity of the hunting stereotypes in young and adult rats are similar, and both groups had no prior experience with the prey, so one can assume that it is not learning, but rather the specificity of the organization of the stereotype, that is responsible for the nature of hunting behavior in rats. We speculate that rodents possess different types of hunting behaviors, one of which is based on a succinct insectivorous standard, and another type, perhaps characteristic of generalists, which is less ordered and is characterized by poorly predictable transitions between elements. We suggest that the data-compression method may well be more broadly applicable to behavioral analysis.

]]>Entropy doi: 10.3390/e21040366

Authors: Masengo Ilunga

This study essentially evaluates mean annual runoff (MAR) information gain/loss for tertiary catchments (TCs) in the Middle Vaal basin. Data sets from the surface water resources (WR) of South Africa 1990 (WR90), 2005 (WR2005) and 2012 (WR2012), referred to in this study as hydrological phases, are used in this evaluation. The spatial complexity level, or information redundancy, associated with the MAR of TCs is derived, as well as the relative change in entropy of TCs between hydrological phases. Redundancy and relative change in entropy are shown to coincide under specific conditions. Finally, the spatial distributions of MAR iso-information transmission (i.e., gain or loss) and MAR iso-information redundancy are established for the Middle Vaal basin.

]]>Entropy doi: 10.3390/e21040367

Authors: Paulo L. dos Santos Noé Wiener

This paper is motivated by a distinctive appreciation of the difficulties posed by quantitative observational inquiry into complex social and economic systems. It develops ordinary and piecewise indices of joint and incremental informational association that enable robust approaches to a common problem in social inquiry: grappling with associations between a quantity of interest and two distinct sets of co-variates taking values over large numbers of individuals. The distinct analytical usefulness of these indices is illustrated with their application to inquiry into the systemic economic effects of patterns of discrimination by social identity in the U.S. economy.

]]>Entropy doi: 10.3390/e21040365

Authors: Daya Shankar Gupta Andreas Bahmer

Perception and motor interaction with physical surroundings can be analyzed through the changes in probability laws governing the two possible outcomes of neuronal activity, namely the presence or absence of spikes (binary states). Perception and motor interaction with the physical environment are partly accounted for by a reduction in entropy within the probability distributions of the binary states of neurons in distributed neural circuits, given the knowledge about the characteristics of stimuli in the physical surroundings. This reduction in the total entropy of multiple pairs of circuits in networks, by an amount equal to the increase in mutual information, occurs as sensory information is processed successively from lower to higher cortical areas or between different areas at the same hierarchical level but belonging to different networks. The increase in mutual information is partly accounted for by temporal coupling as well as by synaptic connections, as proposed by Bahmer and Gupta (Front. Neurosci. 2018). We propose that robust increases in mutual information, measuring the association between the characteristics of sensory inputs and the connectivity patterns of neural circuits, are partly responsible for perception and successful motor interactions with physical surroundings. The increase in mutual information, given the knowledge about environmental sensory stimuli and the type of motor response produced, is responsible for the coupling between action and perception. In addition, the processing of sensory inputs within neural circuits, with no prior knowledge of the occurrence of a sensory stimulus, increases Shannon information. Consequently, the increase in surprise serves to increase the evidence for the sensory model of the physical surroundings.
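
As a concrete, generic illustration of the central quantity above, mutual information between two discrete spike-state variables can be estimated from joint counts (a simple plug-in estimator; the circuit-level measures discussed in the text are more elaborate):

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Plug-in estimate of I(X; Y) in bits from observed (x, y) pairs,
    e.g. simultaneous binary spike states of two circuits."""
    n = len(pairs)
    pxy = Counter(pairs)                  # joint counts
    px = Counter(x for x, _ in pairs)     # marginal counts of X
    py = Counter(y for _, y in pairs)     # marginal counts of Y
    return sum(
        (c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )
```

Perfectly coupled binary states yield 1 bit; statistically independent states yield 0, matching the intuition that coupling between circuits is what carries the information.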

]]>Entropy doi: 10.3390/e21040364

Authors: Mo Li Hao Sun Vijay P. Singh Yan Zhou Mingwei Ma

The allocation and management of agricultural water resources is an emerging concern due to diminishing water supplies and increasing water demands. To achieve economic, social, and environmental goals in a specific irrigation district, decisions should be made subject to the changing water supply and water demand&mdash;the two critical random parameters in agricultural water resources management. This paper presents the foundations of a systematic framework for agricultural water resources management, including the determination of distribution functions, the joint probability of water supply and water demand, the optimal allocation of agricultural water resources, and the evaluation of various schemes according to the agricultural water resources carrying capacity. The maximum entropy method is used to estimate the parameters of the probability distributions of water supply and demand, which is the basis for the other parts of the framework. The entropy-weight-based TOPSIS method is applied to evaluate agricultural water resources allocation schemes, because it avoids the subjectivity of weight determination and reflects the dynamic changing trend of the agricultural water resources carrying capacity. A case study of an irrigation district in Northeast China is used to demonstrate the feasibility and applicability of the framework. It is found that the framework works effectively to balance multiple objectives and provides alternative schemes, considering the combinatorial variety of water supply and water demand, which is conducive to agricultural water resources planning.
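
The entropy-weighting step can be sketched generically: criteria whose values barely vary across the candidate allocation schemes carry little discriminating information and receive low weight (illustrative code, not the authors'; a full TOPSIS ranking would then score schemes by their distances to the ideal and anti-ideal points).

```python
import math

def entropy_weights(matrix):
    """Objective criterion weights from a decision matrix (rows are
    alternatives/schemes, columns are criteria, values positive).
    A column's normalized entropy measures how uninformative it is;
    its weight is the normalized complement (degree of divergence)."""
    m, n = len(matrix), len(matrix[0])
    divergence = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [x / total for x in col]
        e = -sum(x * math.log(x) for x in p if x > 0) / math.log(m)
        divergence.append(1.0 - e)
    s = sum(divergence)
    return [d / s for d in divergence]
```

A criterion that is identical for every scheme ends up with weight zero, which is exactly the objectivity argument made above.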

]]>Entropy doi: 10.3390/e21040363

Authors: Igor Moravcik Jan Cizek Larissa de Almeida Gouvea Jan Cupera Ivan Guban Ivo Dlouhy

The present work is focused on the synthesis of CoCrFeMnNi high entropy alloy (HEA) interstitially alloyed with nitrogen via powder metallurgy routes. Using a simple method, nitrogen was introduced into the HEA from the protective N2 gas atmosphere during mechanical alloying (MA) processing. The lattice parameter and the amount of nitrogen in the HEA were observed to be linearly proportional to the milling duration. The limited solubility of nitrogen in the main face-centered cubic (FCC) phase resulted in the in-situ formation of nitrides and, accordingly, a significant increase in hardness values. It has been shown that the fabrication of such nitrogen-doped HEA bulk materials can be conveniently achieved by a simple combination of MA + spark plasma sintering processes, without the need for adding nitrogen from other sources.

]]>Entropy doi: 10.3390/e21040362

Authors: Denis Butusov Artur Karimov Aleksandra Tutueva Dmitry Kaplun Erivelton G. Nepomuceno

In this paper, we consider nonlinear integration techniques, based on the direct Pad&eacute; approximation of the differential equation solution, and their application to conservative chaotic initial value problems. The properties of discrete maps obtained by nonlinear integration are studied, including phase space volume dynamics, bifurcation diagrams, spectral entropy, and the Lyapunov spectrum. We also plot 2D dynamical maps to highlight the features introduced by the nonlinear integration techniques. A comparative study of classical integration methods and Pad&eacute; approximation methods is given. It is shown that nonlinear integration techniques significantly change the behavior of discrete models of nonlinear systems, increasing the values of the Lyapunov exponents and the spectral entropy. This property reduces the applicability of numerical methods based on Pad&eacute; approximation to chaotic system simulation, but is still useful for the construction of pseudo-random number generators that are resistant to chaos degradation, or of discrete maps with highly nonlinear properties.
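
Spectral entropy, one of the properties studied, can be sketched with a naive DFT (a generic illustration, O(n&sup2;) and fine only for short test signals): it is near 0 when the power spectrum is concentrated at a few frequencies (regular dynamics) and near 1 when the spectrum is flat (chaotic or noise-like dynamics).

```python
import cmath
import math

def spectral_entropy(x):
    """Normalized Shannon entropy of the power spectrum (naive DFT over
    positive frequencies, DC bin skipped).  Regular signals concentrate
    power in few bins (value near 0); broadband signals spread it out
    (value near 1)."""
    n = len(x)
    power = []
    for k in range(1, n // 2):
        s = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        power.append(abs(s) ** 2)
    total = sum(power)
    p = [v / total for v in power]
    h = -sum(v * math.log(v) for v in p if v > 0)
    return h / math.log(len(p))
```

A pure sinusoid therefore scores near zero, while a random signal scores much higher, which is the direction of change reported for the Pad&eacute;-based maps.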

]]>Entropy doi: 10.3390/e21040361

Authors: Giedrė Streckienė Vytautas Martinaitis Juozas Bielskus

The continuous energy transformation processes in the heating, ventilation, and air conditioning systems of buildings are responsible for 36% of global final energy consumption. Tighter thermal insulation requirements for buildings have significantly reduced heat transfer losses. Unfortunately, this has little effect on the energy demand for ventilation. On the basis of the First and Second Laws of Thermodynamics, the concepts of entropy and exergy are applied in this paper to the analysis of a ventilation air handling unit (AHU) with a heat pump. This study aims to develop a consistent approach for this purpose, taking into account the variations of the reference temperature and the temperatures of the working fluids. An analytical investigation of entropy generation and an exergy analysis are used, where exergy is determined by calculating coenthalpies and evaluating exergy flows and their directions. The results show that each component of the AHU has its own individual character of generated entropy, destroyed exergy, and exergy efficiency variation. However, the evaporator of the heat pump and the fans have unabated quantities of exergy destruction. The exergy efficiency of the AHU decreases from 45&ndash;55% to 12&ndash;15% as the outdoor air temperature varies within the range of &minus;30 to +10 &deg;C, respectively. This helps to determine the conditions for, and the components capable of, improving the exergy efficiency of the AHU under variable real-world local climate conditions. The presented methodological approach could be used in dynamic modelling software and contribute to a wider application of the Second Law of Thermodynamics in practice.

]]>Entropy doi: 10.3390/e21040360

Authors: Serafín Moral-García Javier G. Castellano Carlos J. Mantas Alfonso Montella Joaquín Abellán

Presently, there is a critical need to analyze traffic accidents in order to mitigate their terrible economic and human impact. Most accidents occur in urban areas. Furthermore, driving experience has an important effect on accident analysis, since inexperienced drivers are more likely to suffer fatal injuries. This work studies the injury severity produced by accidents that involve inexperienced drivers in urban areas. The analysis was based on data provided by the Spanish General Traffic Directorate. The information root node variation (IRNV) method (based on decision trees) was used to obtain a rule set that provides useful information about the most probable causes of fatalities in accidents involving inexperienced drivers in urban areas. This may prove useful knowledge for preventing this kind of accident and/or mitigating its consequences.

]]>Entropy doi: 10.3390/e21040359

Authors: Arshad Khan Faizan ul Karim Ilyas Khan Tawfeeq Abdullah Alkanhal Farhad Ali Dolat Khan Kottakkaran Sooppy Nisar

The current work describes the entropy generation in an unsteady magnetohydrodynamic (MHD) flow under the combined influence of mass and heat transfer through a porous medium. It considers the flow in the XY plane and a plate with isothermal and ramped wall temperature. The wall shear stress is also considered. The influences of different pertinent parameters on the velocity, the Bejan number and the total entropy generation number are reported graphically. Entropy generation in the fluid is controlled and reduced on the boundary by using the wall shear stress. It is observed in this paper that, by taking suitable values of the pertinent parameters, the energy losses in the system can be minimized. These parameters are the Schmidt number, mass diffusion parameter, Prandtl number, Grashof number, magnetic parameter and modified Grashof number. These parameters play an important role in the uncertainty of heat flow and must, therefore, be controlled and managed effectively.

]]>Entropy doi: 10.3390/e21040358

Authors: Song Wang Xu Xie Rusheng Ju

Traffic conditions can be estimated more accurately using data assimilation techniques, since these methods combine an imperfect traffic simulation model with (partial) noisy measurement data. In this paper, we propose a data assimilation framework for vehicle density estimation on urban traffic networks. To compromise between computational efficiency and estimation accuracy, a mesoscopic traffic simulation model (we choose the platoon-based model) is employed in this framework. Vehicle passages from loop detectors are considered as the measurement data, which contain errors such as missed and false detections. Due to the nonlinear and non-Gaussian nature of the problem, particle filters are adopted to carry out the state estimation, since this method does not impose any restrictions on the model dynamics or error assumptions. Simulation experiments are carried out to test the proposed data assimilation framework, and the results show that the proposed framework can provide good vehicle density estimation on relatively large urban traffic networks under moderate sensor quality. A sensitivity analysis proves that the proposed framework is robust to errors both in the model and in the measurements.
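
A bootstrap particle filter can be sketched on a toy problem: a 1-D random-walk "density" observed in Gaussian noise. This is a stand-in only; the framework above couples the filter to a platoon-based mesoscopic model and loop-detector passage counts instead, and all parameter values here are illustrative.

```python
import math
import random

def particle_filter(measurements, n_particles=400, q=0.5, r=2.0, seed=0):
    """Bootstrap particle filter for a 1-D random-walk state observed in
    Gaussian noise.  q is the process-noise std (the stand-in for the
    traffic model's dynamics), r the measurement-noise std."""
    rng = random.Random(seed)
    particles = [rng.gauss(measurements[0], r) for _ in range(n_particles)]
    estimates = []
    for z in measurements:
        # predict: propagate every particle through the motion model
        particles = [x + rng.gauss(0.0, q) for x in particles]
        # update: weight particles by the Gaussian measurement likelihood
        weights = [math.exp(-0.5 * ((z - x) / r) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # estimate: weighted posterior mean of the state
        estimates.append(sum(x * w for x, w in zip(particles, weights)))
        # resample to avoid weight degeneracy (multinomial resampling)
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates
```

Because prediction, weighting, and resampling make no linearity or Gaussianity assumptions about the model, the same loop accommodates a nonlinear mesoscopic simulator and discrete detector-count likelihoods.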

]]>Entropy doi: 10.3390/e21040357

Authors: Liang Gao Xu Lan Haibo Mi Dawei Feng Kele Xu Yuxing Peng

Recently, deep learning has achieved state-of-the-art performance in more aspects than traditional machine-learning methods based on shallow architectures. However, in order to achieve higher accuracy, it is usually necessary to extend the network depth or to ensemble the results of different neural networks, both of which increase the demand for memory and computing resources. This leads to difficulties in deploying deep-learning models in resource-constrained scenarios such as drones, mobile phones, and autonomous driving. Improving network performance without expanding the network scale has therefore become a hot research topic. In this paper, we propose a cross-architecture online-distillation approach to solve this problem by transmitting supplementary information between different networks. We use the ensemble method to aggregate networks of different structures, thus forming better teachers than traditional distillation methods. In addition, discontinuous distillation with progressively enhanced constraints is used to replace fixed distillation in order to reduce the loss of information diversity in the distillation process. Our training method improves the distillation effect and achieves strong network-performance improvements. We used some popular models to validate the results. On the CIFAR100 dataset, AlexNet&rsquo;s accuracy was improved by 5.94%, VGG by 2.88%, ResNet by 5.07%, and DenseNet by 1.28%. Extensive experiments were conducted to demonstrate the effectiveness of the proposed method: on the CIFAR10, CIFAR100, and ImageNet datasets, we observed significant improvements over traditional knowledge distillation.
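The distillation constraint mentioned above is conventionally implemented as a temperature-softened KL divergence between teacher and student outputs, scaled by T&sup2;. A minimal sketch in plain NumPy on generic logits (this illustrates standard knowledge distillation, not the paper's cross-architecture ensemble or progressive constraints):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax; higher T flattens the distribution."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions,
    multiplied by T^2 as in standard knowledge distillation."""
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(np.mean(kl)) * T * T
```

In training, this term is added to the ordinary cross-entropy on hard labels; the T&sup2; factor keeps its gradient magnitude comparable across temperatures.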

]]>Entropy doi: 10.3390/e21040356

Authors: Gabriel García Adrián Colomer Valery Naranjo

Analysis of histopathological images is the most reliable procedure for identifying prostate cancer. Most studies try to develop computer-aided systems to address the Gleason grading problem. By contrast, we delve into the discrimination between healthy and cancerous tissue at its earliest stage, focusing only on the information contained in the automatically segmented gland candidates. We propose a hand-driven learning approach, in which we perform an exhaustive hand-crafted feature extraction stage combining, in a novel way, descriptors of morphology, texture, fractals and contextual information of the candidates under study. Then, we carry out an in-depth statistical analysis to select the most relevant features that constitute the inputs to the optimised machine-learning classifiers. Additionally, we apply, for the first time on prostate segmented glands, deep-learning algorithms modifying the popular VGG19 neural network. We fine-tuned the last convolutional block of the architecture to provide the model with specific knowledge about the gland images. The hand-driven learning approach, using a nonlinear Support Vector Machine, slightly outperforms the rest of the experiments, with a final multi-class accuracy of 0.876 &plusmn; 0.026 in the discrimination between false glands (artefacts), benign glands and Gleason grade 3 glands.

]]>Entropy doi: 10.3390/e21040355

Authors: Milad Taleby Ahvanooey Qianmu Li Jun Hou Ahmed Raza Rajput Yini Chen

Modern text hiding is an intelligent programming technique which embeds a secret message/watermark into a cover text message/file in a hidden way to protect confidential information. Recently, text hiding in the form of watermarking and steganography has found broad applications in, for instance, covert communication, copyright protection, and content authentication. In contrast to text hiding, text steganalysis is the process and science of identifying whether a given carrier text file/message has hidden information in it and, if possible, extracting/detecting the embedded hidden information. This paper presents an overview of the state of the art in text hiding and provides a comparative analysis of recent techniques, especially those focused on marking structural characteristics of a digital text message/file to hide secret bits. We also discuss different types of attacks and their effects to highlight the pros and cons of the recently introduced approaches. Finally, we recommend some directions and guidelines for future work.

]]>Entropy doi: 10.3390/e21040354

Authors: Shuting Wan Bo Peng

To address the problem that weak faults of rolling bearings are difficult to recognize accurately, an approach based on swarm decomposition (SWD), morphology envelope dispersion entropy (MEDE) and random forest (RF) is proposed to realize effective detection and intelligent recognition of weak faults in rolling bearings. The proposed approach follows the idea of signal denoising, feature extraction and pattern classification. Firstly, the raw signal is divided into a group of oscillatory components by the SWD algorithm; the first component carries the richest fault information and is regarded as the principal oscillatory component (POC). Secondly, the MEDE value of the POC is calculated and used to describe the characteristics of the signal. Ultimately, the obtained MEDE values of the various states are used as feature vectors to train the RF classifier and achieve automatic identification of rolling bearing faults under different operating states. In experiments on the Case Western Reserve University dataset, the proposed approach achieves a recognition accuracy of 100%. In summary, the proposed approach is efficient and robust, and can be used as a supplement to existing rolling bearing fault diagnosis methods.

]]>Entropy doi: 10.3390/e21040353

Authors: Chunxiao Han Xiaozhou Sun Yaru Yang Yanqiu Che Yingmei Qin

Fatigued driving is one of the major causes of traffic accidents. Frequent repetition of driving behavior for a long time may lead to driver fatigue, which is closely related to the central nervous system. In the present work, we designed a fatigue driving simulation experiment and collected the electroencephalogram (EEG) signals. Complex network theory was introduced to study the evolution of brain dynamics under different rhythms of EEG signals during several periods of the simulated driving. The results show that as the fatigue degree deepened, the functional connectivity and the clustering coefficients increased while the average shortest path length decreased for the delta rhythm. In addition, there was a significant increase of the degree centrality in partial channels on the right side of the brain for the delta rhythm. Therefore, it can be concluded that driving fatigue can cause brain complex network characteristics to change significantly for certain brain regions and certain rhythms. This exploration may provide a theoretical basis for further finding objective and effective indicators to evaluate the degree of driving fatigue and to help avoid fatigue driving.

]]>Entropy doi: 10.3390/e21040352

Authors: Zhan-Yun Wang Yi-Tao Gou Jin-Xing Hou Li-Ke Cao Xiao-Hui Wang

We explicitly present a generalized quantum teleportation protocol for a two-qubit entangled state, which uses two pairs of partially entangled particles as the quantum channel. We verify that the optimal probability of successful teleportation is determined by the smallest superposition coefficient of these partially entangled particles. However, the two-qubit entangled state to be teleported will be destroyed if teleportation fails. To solve this problem, we present a more sophisticated probabilistic resumable quantum teleportation scheme for a two-qubit entangled state, where the state to be teleported can be recovered by the sender when teleportation fails; thus, the information of the unknown state is retained during the process. Accordingly, we can repeat the teleportation process as many times as quantum channels are available. Therefore, quantum channels with weak entanglement can also be used to teleport unknown two-qubit entangled states successfully given a high number of repetitions, while for channels with strong entanglement only a small number of repetitions is required to guarantee successful teleportation.

]]>Entropy doi: 10.3390/e21040351

Authors: Piotr Bania

A Bayesian design of the input signal for linear dynamical model discrimination has been proposed. The discrimination task is formulated as an estimation problem, where the estimated parameter indexes particular models. As the mutual information between the parameter and model output is difficult to calculate, its lower bound has been used as a utility function. The lower bound is then maximized under the signal energy constraint. Selection between two models and the small energy limit are analyzed first. The solution of these tasks is given by the eigenvector of a certain Hermitian matrix. Next, the large energy limit is discussed. It is proved that almost all (in the sense of the Lebesgue measure) high energy signals generate the maximum available information, provided that the impulse responses of the models are different. The first illustrative example shows that the optimal signal can significantly reduce error probability, compared to the commonly-used step or square signals. In the second example, Bayesian design is compared with classical average D-optimal design. It is shown that the Bayesian design is superior to D-optimal design, at least in this example. Some extensions of the method beyond linear and Gaussian models are briefly discussed.

]]>Entropy doi: 10.3390/e21040350

Authors: Giovanni Pezzulo Stefano Nolfi

How do living organisms decide and act with limited and uncertain information? Here, we discuss two computational approaches to solving these challenging problems: a &ldquo;cognitive&rdquo; and a &ldquo;sensorimotor&rdquo; enrichment of stimuli, respectively. In both approaches, the key notion is that agents can strategically modulate their behavior in informative ways, e.g., to disambiguate amongst alternative hypotheses or to favor the perception of stimuli providing the information necessary to later act appropriately. We discuss how, despite their differences, both approaches appeal to the notion that actions must obey both epistemic (i.e., information-gathering or uncertainty-reducing) and pragmatic (i.e., goal- or reward-maximizing) imperatives and balance them. Our computationally-guided analysis reveals that epistemic behavior is fundamental to understanding several facets of cognitive processing, including perception, decision making, and social interaction.

]]>Entropy doi: 10.3390/e21040349

Authors: Isaac J. Sledge José C. Príncipe

In this paper, we propose an approach to obtaining reduced-order models of Markov chains. Our approach is composed of two information-theoretic processes. The first is a means of comparing pairs of stationary chains on different state spaces, which is done via the negative, modified Kullback&ndash;Leibler divergence defined on a model joint space. Model reduction is achieved by solving a value-of-information criterion with respect to this divergence. Optimizing the criterion leads to a probabilistic partitioning of the states in the high-order Markov chain. A single free parameter that emerges through the optimization process dictates both the partition uncertainty and the number of state groups. We provide a data-driven means of choosing the &lsquo;optimal&rsquo; value of this free parameter, which sidesteps the need to know a priori the number of state groups in an arbitrary chain.
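As a building block, the (unmodified) discrete Kullback&ndash;Leibler divergence underlying such criteria can be sketched as follows; the paper's negative, modified variant on a model joint space is more involved than this generic form:

```python
import numpy as np

def kl_divergence(p, q):
    """Discrete Kullback-Leibler divergence D(p || q) in nats.
    Assumes q > 0 wherever p > 0 (absolute continuity); terms with
    p = 0 contribute zero by the convention 0 * log 0 = 0."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))
```

Note that the divergence is asymmetric, D(p || q) &ne; D(q || p) in general, which is why the direction of comparison between the full chain and the reduced chain matters.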

]]>Entropy doi: 10.3390/e21040348

Authors: Lei Li Anand N. Vidyashankar Guoqing Diao Ejaz Ahmed

Big data and streaming data are encountered in a variety of contemporary applications in business and industry. In such cases, it is common to use random projections to reduce the dimension of the data, yielding compressed data. These data, however, possess various anomalies, such as heterogeneity, outliers, and round-off errors, which are hard to detect due to volume and processing challenges. This paper describes a new robust and efficient methodology, using the Hellinger distance, to analyze the compressed data. Using large sample methods and numerical experiments, it is demonstrated that routine use of the robust estimation procedure is feasible. The role of double limits in understanding efficiency and robustness is brought out, which is of independent interest.

]]>Entropy doi: 10.3390/e21040347

Authors: Katarzyna Buszko Agnieszka Piątkowska Edward Koźluk Tomasz Fabiszak Grzegorz Opolski

The paper presents an application of Transfer Entropy (TE) to the analysis of information transfer between biosignals (heart rate expressed as R-R intervals (RRI), blood pressure (sBP, dBP) and stroke volume (SV)) measured during head-up tilt testing (HUTT) in patients with suspected vasovagal syndrome. The study group comprised 80 patients who were divided into two groups: the HUTT(+) group, consisting of 57 patients who developed syncope during the passive phase of the test, and the HUTT(&minus;) group, consisting of 23 patients who had a negative result in the passive phase and experienced syncope after provocation with nitroglycerin. In both groups the information transfer depends on the phase of the tilt test. In the supine position the highest transfer occurred between the driver RRI and the other components. In the upright position it is the driver sBP that plays the crucial role. The pre-syncope phase features the highest information transfer from the driver SV to the blood pressure components. In each group, comparisons of TE between different phases of the HUTT showed significant differences for RRI and SV as drivers.
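Transfer entropy for lag-1 discrete series can be sketched with a plug-in estimator: TE quantifies how much knowing the driver's past reduces uncertainty about the target's next value beyond the target's own past. This is a generic illustration on symbolic series; the study's continuous biosignals would first have to be discretised, and the paper's exact embedding parameters are not assumed here:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, base=2):
    """Plug-in transfer entropy TE_{X->Y} with lag 1 for discrete series:
    TE = sum over (y1, y0, x0) of
         p(y1, y0, x0) * log[ p(y1 | y0, x0) / p(y1 | y0) ]."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))         # (y_t, x_t)
    pairs_yy = Counter(zip(y[1:], y[:-1]))          # (y_{t+1}, y_t)
    singles_y = Counter(y[:-1])                     # y_t
    n = len(x) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]
        p_cond_hist = pairs_yy[(y1, y0)] / singles_y[y0]
        te += p_joint * np.log(p_cond_full / p_cond_hist)
    return te / np.log(base)
```

The measure is directional: TE from X to Y generally differs from TE from Y to X, which is what allows the analysis above to identify which signal acts as the driver in each test phase.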

]]>Entropy doi: 10.3390/e21040346

Authors: Gabriel Martin Bellino Luciano Schiaffino Marisa Battisti Juan Guerrero Alfredo Rosado-Muñoz

Deep Brain Stimulation (DBS) of the Subthalamic Nuclei (STN) is the surgical treatment most used to improve motor skills in patients with Parkinson&rsquo;s Disease (PD) who do not respond adequately to pharmacological treatment, or who have related side effects. During surgery for the implantation of a DBS system, signals are obtained through microelectrode recordings (MER) at different depths of the brain. These signals are analyzed by neurophysiologists to detect the entry and exit of the STN region, as well as the optimal depth for electrode implantation. In the present work, a classification model based on the K-nearest neighbour (KNN) algorithm is developed, automatically trained on 18 temporal features of the MER records of 14 patients with PD, in order to provide a clinical support tool during DBS surgery. We investigate the effect of different standardizations of the generated database, the optimal definition of the KNN configuration parameters, and the selection of features that maximize KNN performance. The results indicated that KNN trained with data standardized per cerebral hemisphere and per patient presented the best performance, achieving an accuracy of 94.35% (p &lt; 0.001). By using feature selection algorithms, it was possible to achieve 93.5% accuracy with a subset of six features, improving the computation time for real-time processing.
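The core of the classifier — feature standardisation followed by a K-nearest-neighbour majority vote — can be sketched as follows. This is a generic NumPy illustration with hypothetical toy data, not the paper's per-hemisphere/per-patient pipeline or its 18 MER features:

```python
import numpy as np
from collections import Counter

def standardize(X):
    """Z-score each feature column. (The paper standardizes per cerebral
    hemisphere and per patient; a single global standardization is shown
    here for simplicity.)"""
    mu, sd = X.mean(axis=0), X.std(axis=0)
    sd = np.where(sd == 0, 1.0, sd)  # guard against constant features
    return (X - mu) / sd, mu, sd

def knn_predict(X_train, y_train, X_test, k=5):
    """Plain K-nearest-neighbour majority vote with Euclidean distance."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(d)[:k]]
        preds.append(Counter(nearest).most_common(1)[0][0])
    return np.array(preds)
```

Standardising before the distance computation matters because KNN is scale-sensitive: an unscaled feature with a large numeric range would otherwise dominate the neighbourhood structure, which is why the choice of standardisation scheme affected accuracy in the study.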

]]>Entropy doi: 10.3390/e21040345

Authors: Hang Shen Lingli Li Tianjing Wang Guangwei Bai

Compressed sensing based in-network compression methods which minimize data redundancy are critical to cognitive video sensor networks. However, most existing methods require a large number of sensors for each measurement, resulting in significant performance degradation in energy efficiency and quality-of-service satisfaction. In this paper, a cluster-based distributed compressed sensing scheme working together with a quality-of-service aware routing framework is proposed to deliver visual information in cognitive video sensor networks efficiently. First, the correlation among adjacent video sensors determines the member nodes that participate in a cluster. On this basis, a sequential compressed sensing approach is applied to determine whether enough measurements are obtained to limit the reconstruction error between decoded signals and original signals under a specified reconstruction threshold. The goal is to maximize the removal of unnecessary traffic without sacrificing video quality. Lastly, the compressed data is transmitted via a distributed spectrum-aware quality-of-service routing scheme, with an objective of minimizing energy consumption subject to delay and reliability constraints. Simulation results demonstrate that the proposed approach can achieve energy-efficient data delivery and reconstruction accuracy of visual information compared with existing quality-of-service routing schemes.

]]>Entropy doi: 10.3390/e21040344

Authors: Yiming Xiang Weifeng Pan Haibo Jiang Yunfang Zhu Hao Li

Modularity has been regarded as one of the most important properties of a successful software design. It has significant impact on many external quality attributes such as reusability, maintainability, and understandability. Thus, proposing metrics to measure software modularity can be very useful. Although several metrics have been proposed to characterize some modularity-related attributes, they fail to characterize software modularity as a whole. Complex network theory uses network models to abstract the internal structure of complex systems, providing a general way to analyze complex systems as a whole. In this paper, we introduce complex network theory into software engineering and employ modularity, a metric widely used in the field of community detection in complex network research, to measure software modularity as a whole. First, a specific piece of software is represented by a software network, the feature coupling network (FCN), where methods and attributes are nodes, couplings between methods and attributes are edges, and the weight on the edges denotes the coupling strength. Then, modularity is applied to the FCN to measure software modularity. We apply Weyuker&rsquo;s criteria, which are widely used in the field of software metrics, to validate modularity as a software metric theoretically, and we also perform an empirical evaluation using open-source Java software systems to show its effectiveness as a metric of software modularity.
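The modularity measure applied to the FCN is Newman's Q = &Sigma;<sub>c</sub> [L<sub>c</sub>/m &minus; (d<sub>c</sub>/2m)&sup2;], where L<sub>c</sub> is the intra-community edge weight, d<sub>c</sub> the total degree of community c, and m the total edge weight. A minimal sketch for a weighted undirected adjacency matrix (function and argument names are illustrative, not from the paper):

```python
import numpy as np

def modularity(adj, communities):
    """Newman modularity Q = sum_c [ L_c/m - (d_c / 2m)^2 ] for an
    undirected, optionally weighted, adjacency matrix `adj`;
    `communities` is an iterable of node-index sets."""
    adj = np.asarray(adj, dtype=float)
    two_m = adj.sum()             # equals 2m for an undirected graph
    degrees = adj.sum(axis=1)
    q = 0.0
    for c in communities:
        idx = np.array(sorted(c))
        l_c = adj[np.ix_(idx, idx)].sum() / 2.0  # intra-community weight
        d_c = degrees[idx].sum()
        q += l_c / (two_m / 2.0) - (d_c / two_m) ** 2
    return q
```

Q compares the fraction of edge weight inside communities with what a random graph of the same degree sequence would give, so a well-modularised FCN — methods coupled mostly to attributes within their own module — scores high.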

]]>Entropy doi: 10.3390/e21040343

Authors: Hui Liu Bo Zhao Linquan Huang

The paper proposes a lossless quantum image encryption scheme based on substitution-table (S-box) scrambling, a mutation operation and the general Arnold transform with keys. First, the key generator builds on a SHA-256 hash of the plain-image and a random sequence; its output value is used to yield the initial conditions and parameters of the proposed image encryption scheme. Second, the permutation and gray-level encryption architecture is built from the discrete Arnold map and a quantum chaotic map. Before the permutation of the Arnold transform, the pixel values are modified by a quantum chaos sequence. In order to obtain high scrambling and randomness, the S-box and the mutation operation are exploited in the gray-level encryption stage. The combination of linear and nonlinear transformations ensures the complexity of the proposed scheme and avoids harmful periodicity. The simulation shows the cipher-image has a fairly uniform histogram, low correlation coefficients close to 0, and high information entropy close to 8. The proposed cryptosystem provides a 2^256 key space and fast computation (speed = 11.920875 Mbit/s). Theoretical analyses and experimental results prove that the proposed scheme has strong resistance to various existing attacks and a high level of security.
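The generalized Arnold transform used in the permutation stage is an area-preserving modular map on pixel coordinates. A minimal sketch of one forward and one inverse iteration (the keys a, b are shown as plain parameters; the paper's chaotic key schedule and quantum representation are not reproduced here):

```python
import numpy as np

def arnold(img, a=1, b=1):
    """One iteration of the generalized Arnold map on an N x N image:
    (x, y) -> (x + b*y mod N, a*x + (a*b + 1)*y mod N)."""
    n = img.shape[0]
    out = np.empty_like(img)
    for x in range(n):
        for y in range(n):
            out[(x + b * y) % n, (a * x + (a * b + 1) * y) % n] = img[x, y]
    return out

def arnold_inverse(img, a=1, b=1):
    """Exact inverse: the matrix [[1, b], [a, a*b + 1]] has determinant 1,
    so the map is a bijection on the pixel grid and fully reversible."""
    n = img.shape[0]
    out = np.empty_like(img)
    for x in range(n):
        for y in range(n):
            out[x, y] = img[(x + b * y) % n, (a * x + (a * b + 1) * y) % n]
    return out
```

Because the map is a permutation of pixel positions, iterating it eventually returns the original image; combining it with the nonlinear S-box and mutation stages is what breaks this harmful periodicity in the full scheme.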

]]>Entropy doi: 10.3390/e21040342

Authors: Heng Chen Yunyun Wu Jidong Xu Gang Xu Yongping Yang Wenyi Liu Gangye Shi

High back-pressure (HBP) heating technology has been identified as an effective approach to improve the efficiency of combined heat and power (CHP). In this study, the novel concept of an HBP heating system with energy cascade utilization is developed and its practicability examined. In the reformative design, the extracted heating steam from the intermediate-pressure turbine (IPT) is first drawn to an additional turbine where its excess pressure can be converted into electricity; steam at a lower pressure can then be employed to heat the supply water. As a consequence, the exergy destruction in the supply water heating process can be reduced and the efficiency of the cogeneration unit raised. A detailed thermodynamic investigation was performed based on a typical coal-fired HBP&ndash;CHP unit incorporating the proposed configuration. The results show that the artificial thermal efficiency (ATE) promotion was as much as 2.01 percentage points, with an additional net power output of 8.4 MW compared to the reference unit. This was attributed to a 14.65 percentage-point increase in the exergy efficiency of the supply water heating process caused by the suggested retrofitting. The influences of the unit power output, unit heat output, supply water and return water temperatures and turbine back pressure on the thermal performance of the modified system are discussed as well. In addition, the economic performance of the new design is assessed, indicating that the proposed concept is financially feasible.

]]>Entropy doi: 10.3390/e21040341

Authors: Bin Zhao Shuangmei Zhou Jianmei Feng Xueyuan Peng Xiaohan Jia

Boil-off gas (BOG) compressors are among the most critical devices in transportation and receiving systems for liquefied natural gas (LNG) because they are used to pump out excess BOG from LNG storage tanks to ensure safety. Because of the ultralow suction temperature, the influence of heat transfer between the cold gas and the compressor parts on the in-cylinder thermodynamic process cannot be ignored. This paper reports the effects of suction temperature on the thermodynamic process and performance of a BOG compressor with consideration of gas pulsation. A computational fluid dynamics (CFD) model with dynamic and sliding meshes was established, in which user-defined functions (UDFs) were used to calculate the real-time valve lift to realize coupling between the thermodynamic process and the gas pulsation, and a performance test rig was constructed to verify the proposed numerical model. The simulated results agreed well with the experimental ones. The results show that as the suction temperature decreased from 30 &deg;C to &minus;150 &deg;C, the first-stage volumetric efficiency decreased to 0.69, and the preheating increased to 45.8 &deg;C. These results should provide academic guidance and an experimental basis for the design and optimization of BOG compressors.

]]>Entropy doi: 10.3390/e21040340

Authors: Wioletta Podgórska

The influence of the impeller type on the drop size distribution (DSD) in turbulent liquid-liquid dispersion is considered in this paper. The effects of applying two impellers are compared: a high power number, high shear impeller (six-blade Rushton turbine, RT) and a three-blade, low power number, high efficiency impeller (HE3). Large-scale and fine-scale inhomogeneity are taken into account. The flow field and the properties of the turbulence (energy dissipation rate and integral scale of turbulence) in the agitated vessel are determined using the k-&epsilon; model. The intermittency of turbulence is taken into account in the droplet breakage and coalescence models by using multifractal formalism. Solutions of the population balance equation for lean dispersions (when only breakage takes place) with a dispersed phase of low viscosity (a pure system or a system containing surfactant), as well as of high viscosity, show that at the same power input per unit mass the HE3 impeller produces much smaller droplets. In the case of fast coalescence (low dispersed phase viscosity, no surfactant), the model predicts similar droplets for both impellers. In the case of a dispersed phase of high viscosity, when the mobility of the drop surface is reduced, HE3 produces slightly smaller droplets.

]]>Entropy doi: 10.3390/e21040339

Authors: Mustapha Muhammad Lixia Liu

In this paper, we introduce a new three-parameter probability model called the Poisson generalized half logistic (PoiGHL). The new model possesses increasing, decreasing, unimodal and bathtub failure rates depending on the parameters. The relationship of PoiGHL with the exponentiated Weibull Poisson (EWP), Poisson exponentiated Erlang-truncated exponential (PEETE), and Poisson generalized Gompertz (PGG) models is discussed. We also characterize the PoiGHL sub-model, i.e., the half logistic Poisson (HLP), based on certain functions of a random variable by truncated moments. Several mathematical and statistical properties of the PoiGHL are investigated, such as moments, mean deviations, Bonferroni and Lorenz curves, order statistics, Shannon and R&eacute;nyi entropy, Kullback-Leibler divergence, moments of residual life, and probability weighted moments. Estimation of the model parameters was achieved by the maximum likelihood technique and assessed by simulation studies. The stress-strength analysis is discussed in detail based on maximum likelihood estimation (MLE); we derive the asymptotic confidence interval of R = P(X1 &lt; X2) based on the MLEs and examine it by simulation studies. In three applications to real data sets, PoiGHL provided a better fit and outperformed some other popular distributions. In stress-strength parameter estimation, the PoiGHL model proved a reliable choice in reliability analysis, as shown using two real data sets.

]]>Entropy doi: 10.3390/e21040338

Authors: Franko Hržić Ivan Štajduhar Sebastian Tschauner Erich Sorantin Jonatan Lerga

The paper proposes a segmentation and classification technique for fracture detection in X-ray images. This novel rotation-invariant method introduces the concept of local entropy for de-noising and removing tissue from the analysed X-ray images, followed by an improved procedure for image segmentation and the detection of regions of interest. The proposed local Shannon entropy was calculated for each image pixel using a sliding 2D window. An initial image segmentation was performed on the entropy representation of the original image. Next, a graph theory-based technique was implemented for the purpose of removing false bone contours and improving the edge detection of long bones. Finally, the paper introduces a classification and localisation procedure for fracture detection by tracking the difference between the extracted contour and the estimation of an ideal healthy one. The proposed hybrid method excels at detecting small fractures (which are hard to detect visually by a radiologist) in the ulna and radius bones&mdash;common injuries in children. Therefore, it is imperative that a radiologist inspecting the X-ray image receives a warning from the computerised X-ray analysis system, in order to prevent false-negative diagnoses. The proposed method was applied to a data-set containing 860 X-ray images of child radius and ulna bones (642 fracture-free images and 218 images containing fractures). The obtained results showed the efficiency and robustness of the proposed approach, in terms of segmentation quality and classification accuracy and precision (up to 91.16% and 86.22%, respectively).
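The local Shannon entropy map described above can be sketched with a sliding window: for every pixel, the entropy of the grey levels inside a small neighbourhood is computed, so low-texture tissue regions (low entropy) separate from structured bone edges. This is a simple loop-based illustration; the window size and edge-padding strategy are assumptions, as the abstract does not specify them:

```python
import numpy as np

def local_entropy(img, win=3):
    """Shannon entropy (bits) of grey levels in a sliding win x win
    window around each pixel; borders are handled by edge padding."""
    pad = win // 2
    padded = np.pad(img, pad, mode='edge')
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + win, j:j + win].ravel()
            _, counts = np.unique(patch, return_counts=True)
            p = counts / counts.sum()
            out[i, j] = -np.sum(p * np.log2(p))
    return out
```

A uniform region yields entropy 0 while a highly textured one approaches log2 of the number of distinct grey levels in the window, which is what makes thresholding this map useful as an initial segmentation.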

]]>Entropy doi: 10.3390/e21040337

Authors: Wenjia Wu Hongrui Zhao Qifan Tan Peichao Gao

Urban scaling laws describe powerful universalities of the scaling relationships between urban attributes and city size across different countries and times. There are still challenges in the precise statistical estimation of the scaling exponent, and the properties of the variance require further study. In this paper, a statistical regression method based on maximum likelihood estimation considering the lower-bound constraints and the heterogeneous variance of the error structure, termed CHVR, is proposed for urban scaling estimation. In the CHVR method, the heterogeneous properties of the variance are explored and modeled in the form of a power-of-the-mean variance model. The maximum likelihood fitting method is supplemented to satisfy the lower-bound constraints in empirical data. The CHVR method has been applied to estimating the scaling exponents of six urban attributes covering three scaling regimes in China and compared with two traditional methods. Method evaluations based on three different criteria validate that, compared with both classical methods, the CHVR method is more effective and robust. Moreover, a statistical test and long-term variations of the parameter in the variance function demonstrate that the proposed heterogeneous variance function can not only describe the heterogeneity in empirical data adequately but also provide more meaningful urban information. Therefore, the CHVR method shows great potential to provide a valuable tool for effective urban scaling studies across the world and to be applied to scaling-law estimation in other complex systems in the future.
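For contrast with the proposed CHVR estimator, the classical baseline it improves on — ordinary least squares on log-transformed data for the power law Y = c&middot;N^&beta; — can be sketched as follows (a generic illustration; CHVR additionally models heteroscedastic variance and lower-bound constraints, which this baseline ignores):

```python
import numpy as np

def scaling_exponent_ols(population, attribute):
    """Baseline estimate of beta and c in Y = c * N^beta via OLS on
    the log-log transformed data: log Y = log c + beta * log N."""
    logN = np.log(np.asarray(population, dtype=float))
    logY = np.log(np.asarray(attribute, dtype=float))
    beta, log_c = np.polyfit(logN, logY, 1)  # slope, intercept
    return beta, np.exp(log_c)
```

The log transform implicitly assumes multiplicative, homoscedastic errors; when the empirical variance instead grows as a power of the mean, this baseline can bias the exponent, which is the failure mode CHVR is designed to address.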

]]>Entropy doi: 10.3390/e21040336

Authors: Feng-Lin Chen Zhi-Hua Wang Yong-Mo Hu

The blind signature is widely used in cryptography applications because it can prevent the signer from gaining the original message. Owing to its unconditional security, the quantum blind signature is more advantageous than the classical one. In this paper, we propose a new provably secure quantum blind signature scheme with nonorthogonal single-photon BB84 states and provide a new method to encode classical messages into quantum signature states. The message owner injects a randomizing factor into the original message and then strips the blind factor from the quantum blind signature signed by the blind signer. The verifier can validate the quantum signature and announce it publicly. Finally, the analytical results show that the proposed scheme satisfies all of the security requirements of a blind signature: blindness, unforgeability, non-repudiation, unlinkability, and traceability. Since no quantum entangled states are used, the feasibility and practicability of the scheme are clearly better than those of previous ones.

]]>Entropy doi: 10.3390/e21040335

Authors: Rasool Shah Hassan Khan Muhammad Arif Poom Kumam

In the present article, we derive the analytical solution of fractional-order dispersive partial differential equations using the Laplace&ndash;Adomian decomposition method. The Caputo operator is used to define the fractional-order derivative. Laplace&ndash;Adomian decomposition method solutions for both fractional and integer orders are obtained in series form, showing the higher convergence of the proposed method. Illustrative examples are considered to confirm the validity of the present method. The convergence of the fractional-order solutions to the integer-order solution is also investigated.

]]>Entropy doi: 10.3390/e21040334

Authors: Georgios Feretzakis Dimitris Kalles Vassilios S. Verykios

The sharing of data among organizations has become an increasingly common procedure in several areas like banking, electronic commerce, advertising, marketing, health, and insurance sectors. However, any organization will most likely try to keep some patterns hidden once it shares its datasets with others. This article focuses on preserving the privacy of sensitive patterns when inducing decision trees. We propose a heuristic approach that can be used to hide a certain rule which can be inferred from the derivation of a binary decision tree. This hiding method is preferred over other heuristic solutions like output perturbation or cryptographic techniques&mdash;which limit the usability of the data&mdash;since the raw data itself is readily available for public use. This method can be used to hide decision tree rules with a minimum impact on all other rules derived.
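Decision-tree induction selects splits by information gain, and hiding a sensitive rule amounts to perturbing a minimal number of values so that the split producing that rule no longer wins this comparison. A minimal sketch of the gain computation itself (generic ID3-style entropy gain, not the paper's specific hiding heuristic):

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a vector of class labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def information_gain(feature, labels):
    """Gain of splitting `labels` on a categorical `feature`:
    H(labels) minus the weighted entropy of each split branch."""
    feature = np.asarray(feature)
    labels = np.asarray(labels)
    remainder = 0.0
    for v in np.unique(feature):
        mask = feature == v
        remainder += mask.mean() * entropy(labels[mask])
    return entropy(labels) - remainder
```

Because the induced tree is fully determined by these gain comparisons, small targeted changes to the training records can demote a sensitive split below a competing one while leaving the raw data usable, which is the property the hiding approach exploits.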

]]>Entropy doi: 10.3390/e21040332

Authors: Hao Wu Yongqiang Cheng Hongqiang Wang

Information geometry is the study of the intrinsic geometric properties of manifolds of probability distributions and provides a deeper understanding of statistical inference. Based on this discipline, this letter reports on the influence of signal processing on the geometric structure of the statistical manifold in terms of estimation issues. The letter defines the intrinsic parameter submanifold, which reflects the essential geometric characteristics of the estimation problem, and proves that this submanifold becomes tighter after signal processing. In addition, the necessary and sufficient condition for signal processing that leaves the geometric structure invariant, i.e., isometric signal processing, is given. Specifically, for processing of linear form, a construction method for linear isometric signal processing is proposed and its properties are presented.
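In the standard information-geometry setting, the geometric structure of a parametric family $p(x;\theta)$ referred to here is presumably the Fisher information metric,

```latex
g_{ij}(\theta) \;=\; \mathbb{E}_{p(x;\theta)}\!\left[ \frac{\partial \log p(x;\theta)}{\partial \theta^{i}} \, \frac{\partial \log p(x;\theta)}{\partial \theta^{j}} \right],
```

under which "isometric" signal processing means the induced map between statistical manifolds preserves this metric, and hence the attainable estimation accuracy via the Cramér–Rao bound.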

]]>Entropy doi: 10.3390/e21040333

Authors: Xiaodong Wu Yijun Wang Qin Liao Hai Zhong Ying Guo

We propose a simultaneous classical communication and quantum key distribution (SCCQ) protocol based on a plug-and-play configuration with an optical amplifier. Such a protocol could be attractive in practice since a single plug-and-play system serves multiple purposes. The plug-and-play scheme waives the necessity of using two independent frequency-locked laser sources to perform coherent detection; thus, the phase noise in our protocol is small enough to be tolerated by the SCCQ protocol. To further improve its capabilities, we place an optical amplifier inside Alice's apparatus. Simulation results show that the modified protocol improves the secret key rate over the original protocol in both the asymptotic limit and the finite-size regime.

]]>Entropy doi: 10.3390/e21040331

Authors: Zhuorui Li Jun Ma Xiaodong Wang Jiande Wu

In order to extract fault features of rolling bearings and characterize their operating state effectively, an improved method based on modified variational mode decomposition (MVMD) and multipoint optimal minimum entropy deconvolution adjusted (MOMEDA) is proposed. Firstly, the MVMD method decomposes the vibration signal into intrinsic mode functions (IMFs), and the energy ratio of each IMF component is calculated. IMF components are selected as effective components in descending order of energy ratio until the cumulative energy proportion satisfies E_sum(t) ≥ 90%; the effective components are then reconstructed to obtain the signal x_new(t) for subsequent analysis. Secondly, the MOMEDA method is applied to x_new(t) to extract the fault-period impulse component x_cov(t), which is submerged in noise, and x_cov(t) is demodulated by the Teager energy operator (TEO) to calculate the Teager energy spectrum. Thirdly, by matching the dominant frequency in the spectrum with the fault characteristic frequency of the rolling bearing, the fault feature extraction is completed. Finally, experiments compare MVMD-MOMEDA-TEO with MVMD-TEO and MOMEDA-TEO on two different data sets to verify the superiority of the proposed method. The experimental results show that the MVMD-MOMEDA-TEO method outperforms the other two and provides a new solution for condition monitoring and fault diagnosis of rolling bearings.
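Two of the steps above, the energy-ratio selection of IMFs and the Teager energy operator, are simple enough to sketch directly. The following is a minimal illustration on synthetic "IMFs" (the MVMD and MOMEDA stages themselves are omitted); the function names and the 50/120 Hz toy components are this sketch's own, not the paper's.

```python
import numpy as np

def select_imfs_by_energy(imfs, threshold=0.90):
    """Select IMFs in descending energy-ratio order until their
    cumulative energy share reaches the threshold (E_sum >= 90%)."""
    energies = np.array([np.sum(imf ** 2) for imf in imfs])
    ratios = energies / energies.sum()
    order = np.argsort(ratios)[::-1]          # highest energy ratio first
    cumulative, selected = 0.0, []
    for idx in order:
        selected.append(int(idx))
        cumulative += ratios[idx]
        if cumulative >= threshold:
            break
    return sorted(selected)

def reconstruct(imfs, indices):
    """Sum the selected effective IMFs into the analysis signal x_new(t)."""
    return np.sum([imfs[i] for i in indices], axis=0)

def teager_energy(x):
    """Discrete Teager energy operator: psi[x(n)] = x(n)^2 - x(n-1)*x(n+1)."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# Toy demonstration: three synthetic components standing in for IMFs.
t = np.linspace(0, 1, 1000, endpoint=False)
imfs = [2.0 * np.sin(2 * np.pi * 50 * t),    # dominant component
        0.5 * np.sin(2 * np.pi * 120 * t),   # weaker component
        0.05 * np.random.randn(1000)]        # low-energy noise
keep = select_imfs_by_energy(imfs)           # dominant component alone exceeds 90%
x_new = reconstruct(imfs, keep)
psi = teager_energy(x_new)
```

For a pure sinusoid A·sin(Ωn), the discrete TEO output is the constant A²·sin²(Ω), which is why a near-flat Teager energy trace indicates a single clean component while periodic impulses show up as spikes in its spectrum.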

]]>