Displaying articles 1-23

Editorial
p. 5068-5077
Received: 22 June 2014 / Accepted: 20 August 2014 / Published: 23 September 2014

Abstract: This special issue collects contributions from the participants of the “Information in Dynamical Systems and Complex Systems” workshop, which cover a wide range of important problems and new approaches that lie at the intersection of information theory and dynamical systems. The contributions include theoretical characterization and understanding of the different types of information flow and causality in general stochastic processes, inference and identification of coupling structure and parameters of system dynamics, rigorous coarse-grained modeling of network dynamical systems, and exact statistical testing of fundamental information-theoretic quantities such as the mutual information. The collective efforts reported herein reflect a modern perspective on the intimate connection between dynamical systems and information flow, leading to the promise of better understanding and modeling of natural complex systems and better or optimal design of engineering systems.

Research
p. 4713-4748
Received: 17 March 2014 / Revised: 3 August 2014 / Accepted: 19 August 2014 / Published: 25 August 2014

Abstract: A stochastic nonlinear dynamical system generates information, as measured by its entropy rate. Some—the ephemeral information—is dissipated and some—the bound information—is actively stored and so affects future behavior. We derive analytic expressions for the ephemeral and bound information in the limit of infinitesimal time discretization for two classical systems that exhibit dynamical equilibria: first-order Langevin equations (i) where the drift is the gradient of an analytic potential function and the diffusion matrix is invertible and (ii) with a linear drift term (Ornstein–Uhlenbeck), but a noninvertible diffusion matrix. In both cases, the bound information is sensitive to the drift and diffusion, while the ephemeral information is sensitive only to the diffusion matrix and not to the drift. Notably, this information anatomy changes discontinuously as any of the diffusion coefficients vanishes, indicating that it is very sensitive to the noise structure. We then calculate the information anatomy of the stochastic cusp catastrophe and of particles diffusing in a heat bath in the overdamped limit, both examples of stochastic gradient descent on a potential landscape. Finally, we use our methods to calculate and compare approximations for the time-local predictive information for adaptive agents.
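The decomposition described in this abstract can be written compactly. In the notation commonly used for information anatomy (these symbols are the standard convention, not quoted from the paper itself), the entropy rate splits into the ephemeral and bound components:

```latex
h_\mu = r_\mu + b_\mu
```

Here \(h_\mu\) is the entropy rate of the process, \(r_\mu\) the ephemeral information (generated and then dissipated), and \(b_\mu\) the bound information (generated and stored, affecting future behavior).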

p. 4769-4787
Received: 3 July 2014 / Revised: 26 July 2014 / Accepted: 19 August 2014 / Published: 27 August 2014

Abstract: One way to increase the thermal efficiency of vehicle diesel engines is to recover waste heat with an organic Rankine cycle (ORC) system. Tests were conducted to study the running performance of diesel engines over the whole operating range, and the variation of the exhaust energy rate under various engine operating conditions was analyzed. A diesel engine-ORC combined system was designed, and relevant evaluation indexes were proposed. The variation of the running performance of the combined system under various engine operating conditions was investigated. R245fa and R152a were selected as the components of the mixed working fluid, and six mixed working fluids with different compositions were then prepared. The effects of these mixed working fluids on the running performance of the combined system were revealed. Results show that the running performance of the combined system can be improved effectively when the mass fraction of R152a in the mixed working fluid is high and the engine operates at high power. For the mixed working fluid M1 (R245fa/R152a, 0.1/0.9, by mass fraction), the net power output of the combined system reaches a maximum of 34.61 kW. The output energy density of working fluid (OEDWF), waste heat recovery efficiency (WHRE), and engine thermal efficiency increasing ratio (ETEIR) reach maximum values of 42.7 kJ/kg, 10.90%, and 11.29%, respectively.

p. 4788-4800
Received: 6 July 2014 / Revised: 19 August 2014 / Accepted: 25 August 2014 / Published: 29 August 2014

Abstract: In medicine, artificial neural networks (ANN) have been extensively applied in many fields to model the nonlinear relationships in multivariate data. Due to the difficulty of selecting input variables, attribute reduction techniques have been widely used to reduce data to a smaller set of attributes. However, to compute reductions from heterogeneous data, a discretizing algorithm is often introduced in dimensionality reduction methods, which may cause information loss. In this study, we developed an integrated method for estimating the medical care costs associated with myocardial infarction, based on 798 cases. The subset of attributes used as the input variables of the ANN was selected with an entropy-based information measure, fuzzy information entropy, which can deal with both categorical and numerical attributes without discretization. Then, we applied the corrected Akaike information criterion (AICc) to compare the networks. The results revealed that fuzzy information entropy is capable of selecting input variables from heterogeneous data for ANN, and the proposed procedure provides a reasonable estimation of medical care costs, which can be adopted in other fields of medical science.
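The small-sample correction to the Akaike criterion mentioned here is a standard formula; a minimal sketch (the function name and the example values are illustrative, not taken from the paper):

```python
def aic_c(n, k, log_likelihood):
    """Corrected Akaike information criterion (AICc).

    n: number of observations, k: number of fitted parameters,
    log_likelihood: maximized log-likelihood of the model.
    The correction term 2k(k+1)/(n-k-1) penalizes small samples.
    """
    aic = 2 * k - 2 * log_likelihood
    return aic + 2 * k * (k + 1) / (n - k - 1)

# Smaller AICc indicates the preferred model; hypothetical numbers:
print(aic_c(798, 10, -1500.0))  # ≈ 3020.28
```

When comparing candidate networks, the model with the lowest AICc is retained.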

p. 4801-4817
Received: 8 May 2014 / Revised: 10 August 2014 / Accepted: 29 August 2014 / Published: 3 September 2014

Abstract: In high-pressure dynamic thermodynamic processes, the pressure is much higher than the critical pressure of air, and the temperature can deviate significantly from the Boyle temperature. In such situations, the thermo-physical properties and pneumatic performance cannot be described accurately by the ideal gas law. This paper proposes an approach to evaluate the pneumatic performance of a high-pressure air catapult launch system, in which residual functions are used to compensate for the thermo-physical property uncertainties caused by real-gas effects. Compared against the Nelson-Obert generalized compressibility charts, the improved virial equation of state is more precise than the Soave-Redlich-Kwong (S-R-K) and Peng-Robinson (P-R) equations for high-pressure air. In this paper, the improved virial equation of state is further used to establish a compressibility-factor database, which is applied to evaluate real-gas effects. The specific residual thermodynamic energy and specific residual enthalpy of the high-pressure air are also derived using the modified corresponding-state equation and the improved virial equation of state truncated at the third virial coefficient. The pneumatic equations are established on the basis of the derived residual functions. The comparison of the numerical results shows that the real-gas effects are strong, and the pneumatic performance analysis indicates that the real dynamic thermodynamic process differs markedly from the ideal one.
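The virial equation of state truncated at the third virial coefficient, as referenced in this abstract, has the standard form (with \(V_m\) the molar volume and \(B\), \(C\) the temperature-dependent second and third virial coefficients; the paper's "improved" coefficients are not reproduced here):

```latex
Z = \frac{p\,V_m}{R\,T} = 1 + \frac{B(T)}{V_m} + \frac{C(T)}{V_m^{2}}
```

The compressibility factor \(Z\) quantifies the departure from ideal-gas behavior (\(Z = 1\)).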

p. 4818-4838
Received: 26 February 2014 / Revised: 21 May 2014 / Accepted: 14 July 2014 / Published: 3 September 2014

Abstract: As the early design decision-making structure, a software architecture plays a key role in the quality of the final software product and of the whole project. In the software design and development process, an effective evaluation of the trustworthiness of a software architecture can help in making sound and reasonable decisions about the architecture, which are necessary for the construction of highly trustworthy software. Given the lack of trustworthiness evaluation and measurement studies for software architecture, this paper provides a trustworthy-attribute model of software architecture. Based on this model, the paper proposes using the Principle of Maximum Entropy (POME) and the Grey Decision-making Method (GDMM) as the trustworthiness evaluation method for a software architecture, demonstrates the soundness and rationality of this method, and verifies its feasibility through case analysis.

p. 4839-4854
Received: 11 July 2014 / Revised: 20 August 2014 / Accepted: 1 September 2014 / Published: 5 September 2014

Abstract: Entropy-based complexity of cardiovascular variability at short time scales is largely dependent on the noise and/or action of neural circuits operating at high frequencies. This study proposes a technique for canceling fast variations from cardiovascular variability, thus limiting the effect of these overwhelming influences on entropy-based complexity. The low-pass filtering approach is based on the computation of the fastest intrinsic mode function via empirical mode decomposition (EMD) and its subtraction from the original variability. Sample entropy was exploited to estimate complexity. The procedure was applied to heart period (HP) and QT (interval from Q-wave onset to T-wave end) variability derived from 24-hour Holter recordings in 14 non-mutation carriers (NMCs) and 34 mutation carriers (MCs) subdivided into 11 asymptomatic MCs (AMCs) and 23 symptomatic MCs (SMCs). All individuals belonged to the same family developing long QT syndrome type 1 (LQT1) via KCNQ1 -A341V mutation. We found that complexity indexes computed over EMD-filtered QT variability differentiated AMCs from NMCs and detected the effect of beta-blocker therapy, while complexity indexes calculated over EMD-filtered HP variability separated AMCs from SMCs. The EMD-based filtering method enhanced features of the cardiovascular control that otherwise would have remained hidden by the dominant presence of noise and/or fast physiological variations, thus improving classification in LQT1.
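Sample entropy, the complexity estimator used in this study, can be sketched as follows. This is a textbook variant, not the authors' implementation; the tolerance is the common choice of 0.2 times the series' standard deviation:

```python
import math

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r): negative log of the conditional probability that
    sequences matching for m points also match for m + 1 points."""
    n = len(x)
    mean = sum(x) / n
    r = r_factor * (sum((v - mean) ** 2 for v in x) / n) ** 0.5
    def matches(length):
        # Count template pairs whose Chebyshev distance is within r.
        templates = [x[i:i + length] for i in range(n - length + 1)]
        return sum(
            1
            for i in range(len(templates))
            for j in range(i + 1, len(templates))
            if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r
        )
    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a and b else float("inf")
```

Regular series (such as a filtered, slowly varying rhythm) yield low values; irregular series yield higher ones, which is what makes the index usable as a complexity marker.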

p. 4855-4873
Received: 4 July 2014 / Revised: 6 August 2014 / Accepted: 18 August 2014 / Published: 5 September 2014

Abstract: Besides its importance in statistical physics and information theory, the Boltzmann-Shannon entropy S has become one of the most widely used, and misused, summary measures of various attributes (characteristics) in diverse fields of study. It has also been the subject of extensive and perhaps excessive generalizations. This paper introduces the concept of, and criteria for, value validity as a means of determining whether an entropy takes on values that reasonably reflect the attribute being measured and that permit different types of comparisons to be made across probability distributions. While neither S nor its relative-entropy equivalent S* meets the value-validity conditions, certain power functions of S and S* do to a considerable extent. No parametric generalization offers any advantage over S in this regard. A measure based on Euclidean distances between probability distributions is introduced as a potential entropy that complies fully with the value-validity requirements, and its statistical inference procedure is discussed.

p. 4892-4910
Received: 23 July 2014 / Revised: 18 August 2014 / Accepted: 28 August 2014 / Published: 10 September 2014

Abstract: In the information theory community, the following “historical” statements are generally well accepted: (1) Hartley put forth his rule twenty years before Shannon; (2) Shannon’s formula, as a fundamental tradeoff between transmission rate, bandwidth, and signal-to-noise ratio, came as a surprise in 1948; (3) Hartley’s rule is inexact, while Shannon’s formula is characteristic of the additive white Gaussian noise channel; (4) Hartley’s rule is an imprecise relation that is not an appropriate formula for the capacity of a communication channel. We show that each of these four statements is, to some extent, wrong. In fact, a careful calculation shows that “Hartley’s rule” coincides with Shannon’s formula. We explain this mathematical coincidence by deriving the necessary and sufficient conditions on an additive noise channel such that its capacity is given by Shannon’s formula, and we construct a sequence of such channels that makes the link between the uniform (Hartley) and Gaussian (Shannon) channels.
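In common notation (with \(A\) the signal amplitude and \(\Delta\) the noise amplitude for Hartley, and \(P/N\) the signal-to-noise power ratio for Shannon; the symbols are assumed here, not quoted from the paper), the two expressions at issue are, per channel use:

```latex
C_{\text{Hartley}} = \log_2\!\left(1 + \frac{A}{\Delta}\right),
\qquad
C_{\text{Shannon}} = \frac{1}{2}\,\log_2\!\left(1 + \frac{P}{N}\right)
```

The paper's point is that, despite their different appearance, the first expression coincides with the second for a suitably constructed additive noise channel.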

p. 4911-4922
Received: 10 July 2014 / Revised: 24 August 2014 / Accepted: 10 September 2014 / Published: 12 September 2014

Abstract: The present work focuses on entropy solutions for the fractional Cauchy problem of nonsymmetric systems. We impose sufficient conditions on the parameters to obtain bounded solutions in L^{∞}. The solutions obtained are unique. The results are established by utilizing the maximum principle for certain generalized time- and space-fractional diffusion equations. The fractional differential operator is treated in the sense of the Riemann–Liouville derivative. Fractional entropy inequalities are imposed.

p. 4923-4936
Received: 26 May 2014 / Revised: 21 July 2014 / Accepted: 25 August 2014 / Published: 15 September 2014

Abstract: An entropy-controlled bending mechanism is presented to study the nanomechanics of microcantilever-based single-stranded DNA (ssDNA) sensors. First, the conformational free energy of the ssDNA layer is given by an improved scaling theory of thermal blobs that accounts for the curvature effect, and the mechanical energy of the non-biological layer is described by Zhang’s two-variable method for laminated beams. Then, an analytical model for static deflections of ssDNA microcantilevers is formulated using the principle of minimum energy. Deflections predicted by the proposed model, Utz–Begley’s model and Hagan’s model are also compared. Numerical results show that the conformational entropy effect on microcantilever deflections cannot be ignored, especially under conditions of high packing density or long-chain systems, and that the deflection predicted by the proposed analytical model not only accords qualitatively with that observed in the related experiments, but is also quantitatively closer to the experimental values than that of the preexisting models. To improve the sensitivity of static-mode biosensors, the substrate stiffness should be made as small as possible.

p. 4937-4959
Received: 21 May 2014 / Accepted: 29 August 2014 / Published: 15 September 2014

Abstract: In a previous study we provided analytical and experimental evidence that some materials are able to store entropy flow, in which heat conduction behaves as standing waves in a bounded region small enough in practice. In this paper we continue to develop distributed control of heat conduction in these thermal-inductive materials. The control objective is to achieve a subtle temperature distribution in space while simultaneously suppressing its transient overshoots in time. This technology concerns safe and accurate heating/cooling treatments in medical operations, polymer processing, and other prevailing modern-day practices. Serving for distributed feedback, spatiotemporal H_{∞}/μ control is developed by extending conventional 1D H_{∞}/μ control to a 2D version. Therein, a 2D geometrical isomorphism is constructed with the Laplace–Galerkin transform, which extends the small-gain theorem into the mode-frequency domain, wherein 2D transfer-function controllers are synthesized with graphical methods. Finally, 2D digital signal processing is programmed to implement the 2D transfer-function controllers, possibly of spatial fractional orders, on DSP-engine embedded microcontrollers.

p. 4960-4973
Received: 23 June 2014 / Revised: 20 August 2014 / Accepted: 12 September 2014 / Published: 17 September 2014

Abstract: In this study, the Rayleigh–Bénard convection model was established, and a great number of Bénard cells with different numbers of vortexes were acquired by numerical simulation. Additionally, the Bénard cell with two vortexes, which appeared in the steady Bénard fluid over a range of Rayleigh numbers (abbreviated Ra), was found to display the primary characteristics of the system’s entropy production. It was found that the two entropy productions, calculated using linear theory and classical thermodynamic theory respectively, are basically consistent when the system can form a steady Bénard flow in the proper range of Rayleigh-number parameters. Furthermore, in a steady Bénard flow, the entropy production of the system increases with the Ra parameter. It was also found that the difference between the two entropy productions is the driving force that drives the system to a steady state. Moreover, the distribution of the local entropy production of the Bénard cell shows that the two vortexes are located where the local entropy production is at a minimum, while the borders around the cell are areas of larger local entropy production.

p. 4974-4991
Received: 23 April 2014 / Revised: 26 June 2014 / Accepted: 12 September 2014 / Published: 17 September 2014

Abstract: In this paper, we continue our efforts to show how maximum relative entropy (MrE) can be used as a universal updating algorithm. Here, our purpose is to tackle a joint state and parameter estimation problem where our system is nonlinear and in a non-equilibrium state, i.e., perturbed by varying external forces. Traditional parameter estimation can be performed by using filters, such as the extended Kalman filter (EKF). However, as shown with a toy example of a system with first-order non-homogeneous ordinary differential equations, assumptions made by the EKF algorithm (such as the Markov assumption) may not be valid. The problem can be solved with exponential smoothing, e.g., the exponentially weighted moving average (EWMA). Although this has been shown to produce acceptable filtering results in real exponential systems, it still cannot simultaneously estimate both the state and its parameters, and it has its own assumptions that are not always valid, for example when jump discontinuities exist. We show that by applying MrE as a filter, we can not only develop the closed form solutions, but we can also infer the parameters of the differential equation simultaneously with the means. This is useful in real, physical systems, where we want to not only filter the noise from our measurements, but also simultaneously infer the parameters of the dynamics of a nonlinear and non-equilibrium system. Although many assumptions were made throughout the paper to illustrate that EKF and exponential smoothing are special cases of MrE, we are not “constrained” by these assumptions. In other words, MrE is completely general and can be used in broader ways.
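The exponential smoothing the authors compare against is the standard EWMA recursion s_t = alpha * x_t + (1 - alpha) * s_{t-1}; a minimal sketch (the initialization to the first sample is one common convention):

```python
def ewma(samples, alpha=0.3):
    """Exponentially weighted moving average of a sequence.

    alpha in (0, 1]: larger values track the signal more closely,
    smaller values smooth more aggressively.
    """
    smoothed = []
    s = samples[0]  # initialize the state with the first observation
    for x in samples:
        s = alpha * x + (1 - alpha) * s
        smoothed.append(s)
    return smoothed
```

Note that, as the abstract points out, this filter smooths the state but has no mechanism for inferring the parameters of the underlying dynamics.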

p. 4992-5019
Received: 25 August 2014 / Revised: 12 September 2014 / Accepted: 12 September 2014 / Published: 19 September 2014

Abstract: The complex magnetic and structural properties of Co-doped Ni-Mn-Ga Heusler alloys have been investigated by using a combination of first-principles calculations and classical Monte Carlo simulations. We have restricted the investigations to systems with 0, 5 and 9 at% Co. Ab initio calculations show the presence of the ferrimagnetic order of austenite and martensite depending on the composition, where the excess Mn atoms on Ga sites show reversed spin configurations. Stable ferrimagnetic martensite is found for systems with 0 (5) at% Co and a c/a ratio of 1.31 (1.28), respectively, leading to a strong competition of ferro- and antiferro-magnetic exchange interactions between nearest neighbor Mn atoms. The Monte Carlo simulations with ab initio exchange coupling constants as input parameters allow one to discuss the behavior at finite temperatures and to determine magnetic transition temperatures. The Curie temperature of austenite is found to increase with Co, while the Curie temperature of martensite decreases with increasing Co content. This behavior can be attributed to the stronger Co-Mn, Mn-Mn and Mn-Ni exchange coupling constants in austenite compared to the corresponding ones in martensite. The crossover from a direct to an inverse magnetocaloric effect in Ni-Mn-Ga due to the substitution of Ni by Co leads to the appearance of a “paramagnetic gap” in the martensitic phase. Doping with In increases the magnetic jump at the martensitic transition temperature. The simulated magnetic and magnetocaloric properties of Co- and In-doped Ni-Mn-Ga alloys are in good qualitative agreement with the available experimental data.

p. 5020-5031
Received: 10 August 2014 / Revised: 13 September 2014 / Accepted: 15 September 2014 / Published: 22 September 2014

Abstract: This paper deals with the leader-following consensus of multi-agent systems with matched nonlinear dynamics. Compared with previous works, the major difficulty here is caused by the simultaneous existence of nonidentical agent dynamics and unknown system parameters, which are more practical in real-world applications. To tackle this difficulty, a distributed adaptive control law for each follower is proposed based on algebraic graph theory and algebraic Riccati equation. By a Lyapunov function method, we show that the designed control law guarantees that each follower asymptotically converges to the leader under connected communication graphs. A simulation example demonstrates the effectiveness of the proposed scheme.
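The paper's control law is adaptive and Riccati-based; as a much simpler illustration of the leader-following idea on a connected graph, here is a plain linear protocol (not the authors' adaptive one) for single-integrator followers driven toward a static leader. All names and gains are assumptions for illustration:

```python
def leader_following(adjacency, leader_links, x0, x_leader, gain=0.2, steps=200):
    """Discrete-time leader-following consensus for single-integrator agents.

    adjacency[i][j] = 1 if follower i receives follower j's state;
    leader_links[i] = 1 if follower i receives the (static) leader state.
    Each follower moves toward the average of the neighbors it hears.
    """
    x = list(x0)
    n = len(x)
    for _ in range(steps):
        nxt = []
        for i in range(n):
            u = sum(adjacency[i][j] * (x[j] - x[i]) for j in range(n))
            u += leader_links[i] * (x_leader - x[i])
            nxt.append(x[i] + gain * u)
        x = nxt
    return x
```

As long as the communication graph plus the leader link is connected and the gain is small enough, every follower's state converges to the leader's, which is the qualitative behavior the paper establishes (there via a Lyapunov argument for nonlinear, nonidentical agents).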

p. 5032-5067
Received: 21 April 2014 / Revised: 7 July 2014 / Accepted: 25 August 2014 / Published: 22 September 2014

Abstract: Projects are an important part of our activities and, regardless of their magnitude, scheduling is at the very core of every project. In an ideal world, makespan minimization, which is the most commonly sought objective, would give us an advantage. However, every time we execute a project we have to deal with uncertainty, part of it coming from known sources and part remaining unknown until it affects us. For this reason, it is much more practical to focus on making our schedules robust, capable of handling uncertainty, and even to determine a range in which the project could be completed. In this paper we focus on an approach to determine such a range for the Multi-mode Resource Constrained Project Scheduling Problem (MRCPSP), a widely researched, NP-complete problem, without adding any subjective considerations to its estimation. We do this by using entropy, a concept well known in the domain of thermodynamics, and a three-stage approach. First, we use Artificial Bee Colony (ABC), an effective and powerful meta-heuristic, to determine a schedule with minimized makespan, which serves as a lower bound. The second stage defines buffer times and creates an upper-bound makespan using an entropy function, with the advantage over other methods that it only considers elements inherent to the schedule itself and does not introduce any subjectivity into the buffer-time generation. In the last stage, we use the ABC algorithm with an objective function that seeks to maximize robustness while staying within the makespan boundaries defined previously, and in some cases even below the lower boundary. We evaluate our approach with two different benchmark sets: when using the PSPLIB MRCPSP benchmark set, the computational results indicate that it is possible to generate robust schedules which generally result in an increase of less than 10% over the best known solutions while increasing robustness by at least 20% for practically every benchmark set. In an attempt to solve larger instances with 50 or 100 activities, we also used the MRCPSP/max benchmark sets, where the makespan increases by approximately 35% with respect to the best known solutions, together with a 20% increase in robustness.
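The abstract does not spell out the entropy function used to size the buffers. Purely as an illustration of the idea, here is a Shannon entropy over the mix of activity durations driving a hypothetical buffer; the function names and the scaling factor are assumptions, not the authors' formulation:

```python
import math

def shannon_entropy(weights):
    """Shannon entropy (in bits) of nonnegative weights,
    normalized to a probability distribution."""
    total = sum(weights)
    probs = [w / total for w in weights if w > 0]
    return -sum(p * math.log2(p) for p in probs)

def buffer_time(durations, scale=0.1):
    """Hypothetical buffer: entropy of the duration mix times total duration.
    A more even mix of durations (higher entropy) yields a larger buffer,
    using only quantities inherent to the schedule itself."""
    return scale * shannon_entropy(durations) * sum(durations)
```

The appeal of such a construction, echoed in the abstract, is that it introduces no subjective estimates: everything is computed from the schedule's own data.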

p. 5078-5101
Received: 13 June 2014 / Revised: 29 August 2014 / Accepted: 16 September 2014 / Published: 23 September 2014

Abstract: Good prediction of the behavior of wind around buildings improves designs for natural ventilation in warm climates. However, wind modeling is complex, and predictions are often inaccurate due to the large uncertainties in parameter values. The goal of this work is to enhance wind prediction around buildings using measurements through implementing a multiple-model system-identification approach. The success of system-identification approaches depends directly upon the location and number of sensors. Therefore, this research proposes a methodology for optimal sensor configuration based on hierarchical sensor placement involving calculations of prediction-value joint entropy. Computational Fluid Dynamics (CFD) models are generated to create a discrete population of possible wind-flow predictions, which are then used to identify optimal sensor locations. Optimal sensor configurations are revealed using the proposed methodology and considering the effect of systematic and spatially distributed modeling errors, as well as the common information between sensor locations. The methodology is applied to a full-scale case study and optimum configurations are evaluated for their ability to falsify models and improve predictions at locations where no measurements have been taken. It is concluded that a sensor placement strategy using joint entropy is able to lead to predictions of wind characteristics around buildings and capture short-term wind variability more effectively than sequential strategies, which maximize entropy.
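The hierarchical placement idea (repeatedly add the sensor location that most increases the joint entropy of the population of model predictions) can be sketched as follows. The data layout and names are assumptions for illustration, not the paper's implementation:

```python
import math
from collections import Counter

def joint_entropy(predictions, locations):
    """Shannon joint entropy (bits) of binned model predictions at the chosen
    locations; `predictions` is a list of model scenarios, each a dict mapping
    location name -> binned predicted value."""
    outcomes = Counter(tuple(p[loc] for loc in locations) for p in predictions)
    n = len(predictions)
    return -sum((c / n) * math.log2(c / n) for c in outcomes.values())

def greedy_placement(predictions, candidates, n_sensors):
    """Hierarchically add the candidate sensor that most increases the
    joint entropy of the already-chosen configuration."""
    chosen = []
    for _ in range(n_sensors):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: joint_entropy(predictions, chosen + [c]))
        chosen.append(best)
    return chosen
```

Because joint entropy accounts for the information shared between locations, a location that merely duplicates an already-chosen sensor adds little and is skipped, which is the advantage over maximizing entropy at each sensor independently.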

p. 5102-5121
Received: 8 July 2014 / Revised: 18 September 2014 / Accepted: 18 September 2014 / Published: 24 September 2014

Abstract: Many of the issues we face as a society are made more problematic by the rapidly changing context in which important decisions are made. For example, buying a petrol-powered car is most advantageous when there are many petrol pumps providing cheap petrol, whereas buying an electric car is most advantageous when there are many electrical recharge points or high-capacity batteries available. Such collective decision-making is often studied using economic game theory, where the focus is on how individuals might reach an agreement regarding the supply and demand for the different energy types. But even if the two parties find a mutually agreeable strategy, as technology and costs change over time, for example through cheaper and more efficient batteries and a more accurate pricing of the total cost of oil consumption, so too do the incentives for the choices buyers and sellers make; the result can be the stranding of an industry, or even a whole economy, on an island of inefficient outcomes. In this article we consider how changes in the underlying incentives can move us from an optimal economy to a sub-optimal economy, while at the same time making it impossible to collectively navigate our way to a better strategy without passing through a socially undesirable “tipping point”. We show that different perturbations to underlying incentives result in the creation or destruction of “strategic islands” isolated by disruptive transitions between strategies. The significant result of this work is the illustration that an economy that remains strategically stationary can, over time, become stranded in a suboptimal outcome from which there is no easy way to set the economy on a path to better outcomes without going through an economic tipping point.

p. 5122-5143
Received: 17 June 2014 / Revised: 16 August 2014 / Accepted: 11 September 2014 / Published: 25 September 2014

Abstract: The entropy of a closure operator has been recently proposed for the study of network coding and secret sharing. In this paper, we study closure operators in relation to their entropy. We first introduce four different kinds of rank functions for a given closure operator, which determine bounds on the entropy of that operator. This yields new axioms for matroids based on their closure operators. We also determine necessary conditions for a large class of closure operators to be solvable. We then define the Shannon entropy of a closure operator and use it to prove that the set of closure entropies is dense. Finally, we justify why we focus on the solvability of closure operators only.
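A closure operator, the object studied here, is a map cl on subsets of a ground set that is extensive, monotone, and idempotent. A brute-force checker of these three axioms (purely illustrative; the paper's rank functions and entropy are not reproduced):

```python
from itertools import combinations

def powerset(ground):
    """All subsets of the ground set, as frozensets."""
    return [frozenset(c) for r in range(len(ground) + 1)
            for c in combinations(ground, r)]

def is_closure_operator(cl, ground):
    """Check the closure axioms on every subset of `ground`:
    extensivity  S <= cl(S),
    idempotence  cl(cl(S)) == cl(S),
    monotonicity S <= T implies cl(S) <= cl(T)."""
    subsets = powerset(ground)
    for s in subsets:
        if not s <= cl(s):          # extensive
            return False
        if cl(cl(s)) != cl(s):      # idempotent
            return False
        for t in subsets:
            if s <= t and not cl(s) <= cl(t):   # monotone
                return False
    return True
```

The closure operator of a matroid is the standard example; the identity map and the constant map onto the whole ground set are the two trivial ones.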

p. 5144-5158
Received: 2 September 2014 / Accepted: 19 September 2014 / Published: 25 September 2014

Abstract: In order to construct the border solutions for nonsupersingular elliptic curve equations, some commonly used models need to be adapted from linearly treated cases for use in particular nonlinear cases. Several approaches lead to these solutions. Optimization in this area means finding the majority of points on the elliptic curve and minimizing the time needed to compute the solution, in contrast with the time needed to compute the inverse solution. We can compute the positive solution of a PDE (partial differential equation), such as oscillations of f(s)/s around the principal eigenvalue λ_{1} of -Δ in ${H}_{0}^{1}\left(\Omega \right)$. Translating mathematics into cryptographic applications is relevant in everyday life, where there are situations in which two communicating parties need a third party to confirm the process. For example, if two persons want to agree on something, they need an impartial person, such as a notary, to confirm this agreement. This third party does not influence the communication process in any way; it is just a witness to the agreement. We present a system where the communicating parties do not authenticate one another. Instead, each party authenticates itself to a third party, who also sends the keys for the encryption/decryption process. Another advantage of such a system is that if a sender wants to transmit messages to more than one receiver, he needs only one authentication, unlike classic systems where he would need to authenticate himself to each receiver. We propose an authentication method based on zero-knowledge proofs and elliptic curves.

Review
p. 4749-4768
Received: 4 February 2014 / Revised: 31 July 2014 / Accepted: 12 August 2014 / Published: 26 August 2014

Abstract: This paper describes some underlying principles of multicomponent and high entropy alloys, and gives some examples of these materials. Different types of multicomponent alloy and different methods of accessing multicomponent phase space are discussed. The alloys were manufactured by conventional and high speed solidification techniques, and their macroscopic, microscopic and nanoscale structures were studied by optical, X-ray and electron microscope methods. They exhibit a variety of amorphous, quasicrystalline, dendritic and eutectic structures.

p. 4874-4891
Received: 7 July 2014 / Revised: 31 July 2014 / Accepted: 3 September 2014 / Published: 10 September 2014

Abstract: This paper reviews the quantum electrodynamics theory of water put forward by Del Giudice and colleagues and how it may provide a useful foundation for a new science of water for life. The interaction of light with liquid water generates quantum coherent domains in which the water molecules oscillate between the ground state and an excited state close to the ionizing potential of water. This produces a plasma of almost free electrons favouring redox reactions, the basis of energy metabolism in living organisms. Coherent domains stabilized by surfaces, such as membranes and macromolecules, provide the excited interfacial water that enables photosynthesis to take place, on which most of life on Earth depends. Excited water is the source of superconducting protons for rapid intercommunication within the body that may be associated with the acupuncture meridians. Coherent domains can also trap electromagnetic frequencies from the environment to orchestrate and activate specific biochemical reactions through resonance, a mechanism for the most precise regulation of gene function.
