**Abstract: **The paper studies the “Lagrangian temperature” defined through the entropy maximization in the canonical ensemble, which is the negative inverse of the Lagrangian multiplier corresponding to the constraint of internal energy. The Lagrangian temperature is derived for systems out of thermal equilibrium described by kappa distributions, such as space plasmas. The physical meaning of temperature is manifested by the equivalency of two different definitions, namely, through Maxwell’s kinetic theory and Clausius’ thermodynamics. The equivalency of the two definitions holds both for systems at thermal equilibrium described by Maxwell distributions and for systems out of thermal equilibrium described by kappa distributions, and it gives the meaning of the actual temperature, that is, the real or measured temperature. However, the third definition, that of the Lagrangian temperature, coincides with the primary two definitions only at thermal equilibrium; thus, in the general case of systems out of thermal equilibrium, it does not represent the actual temperature but is rather a function of it. The paper derives and examines the exact expression and physical meaning of the Lagrangian temperature, showing that it has essentially different content from what is commonly thought. This is achieved by: (i) maximizing the entropy in the continuous description of energy within the general framework of non-extensive statistical mechanics, (ii) using the concept of the “*N*-particle” kappa distribution, which is governed by a special kappa index that is invariant with respect to the degrees of freedom and the number of particles, and (iii) determining the appropriate scales of length and speed involved in the phase-space microstates. Finally, the paper demonstrates the behavior of the Lagrangian temperature against the actual temperature in various datasets of space plasmas.
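
For orientation, the construction behind the Lagrangian temperature can be sketched in the familiar Boltzmann–Gibbs case (the functional Φ and the multipliers λ₀, λ_U below are our notation; the paper carries out the analogous maximization for the kappa/non-extensive entropy, where T_L no longer coincides with the actual temperature):

```latex
% Minimal sketch in the Boltzmann--Gibbs case (notation ours):
% maximize the entropy subject to normalization and fixed internal energy U.
\Phi[p] = S[p]
  + \lambda_0\!\left(\int p(\varepsilon)\,d\varepsilon - 1\right)
  + \lambda_U\!\left(\int \varepsilon\, p(\varepsilon)\,d\varepsilon - U\right),
\qquad
S[p] = -\,k_{\mathrm B}\!\int p(\varepsilon)\ln p(\varepsilon)\,d\varepsilon .

% Stationarity recovers the canonical exponential and identifies the
% Lagrangian temperature as the negative inverse of the energy multiplier:
\frac{\delta\Phi}{\delta p} = 0
\;\Longrightarrow\;
p(\varepsilon) \propto e^{\lambda_U\varepsilon/k_{\mathrm B}}
\;\Longrightarrow\;
T_{\mathrm L} \equiv -\,\frac{1}{\lambda_U}.
```

At equilibrium this reproduces the canonical distribution and T_L equals the kinetic/thermodynamic temperature; for kappa distributions the same multiplier instead yields a function of the actual temperature, as the abstract states.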

**Abstract: **We discuss the use of the Newton method in the computation of max_*p* E_*p*[f], where *p* belongs to a statistical exponential family on a finite state space. In a number of papers, the authors have applied first-order search methods based on information geometry. Second-order methods have been widely used in optimization on manifolds, e.g., matrix manifolds, but appear to be new in statistical manifolds. These methods require the computation of the Riemannian Hessian in a statistical manifold. We use a non-parametric formulation of information geometry in view of further applications in the continuous state space cases, where the construction of a proper Riemannian structure is still an open problem.
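
As a rough illustration of the optimization problem (not the paper's Riemannian construction), the sketch below performs a regularized Newton-type ascent of θ ↦ E_{p_θ}[f] for an exponential family on a small finite state space; the plain Hessian in the natural parameters is used in place of the Riemannian Hessian, and all names and data are ours:

```python
import numpy as np

# Hypothetical sketch: regularized Newton-type ascent of theta -> E_{p_theta}[f]
# for an exponential family p_theta(x) ∝ exp(theta · T(x)) on a finite state space.
# (The paper develops the proper Riemannian Hessian; here we use the Euclidean
# Hessian in natural parameters, shifted so that each step is an ascent direction.)

rng = np.random.default_rng(0)
n_states, d = 8, 3
T = rng.normal(size=(n_states, d))      # sufficient statistics T(x), one row per state
f = rng.normal(size=n_states)           # objective f(x)

def prob(theta):
    logits = T @ theta
    w = np.exp(logits - logits.max())   # numerically stable softmax
    return w / w.sum()

theta = np.zeros(d)
for _ in range(30):
    p = prob(theta)
    Tc = T - T.T @ p                              # statistics centered by E[T]
    grad = Tc.T @ (p * f)                         # dE[f]/dtheta_i = Cov(T_i, f)
    fisher = Tc.T @ (Tc * p[:, None])             # Fisher information Cov(T_i, T_j)
    hess = Tc.T @ (Tc * (p * f)[:, None]) - (f @ p) * fisher
    shift = max(np.linalg.eigvalsh(hess).max(), 0.0) + 1e-2
    hess_nd = hess - shift * np.eye(d)            # force negative definiteness
    theta = theta - np.linalg.solve(hess_nd, grad)

print("E_p[f] =", float(f @ prob(theta)), "   max_x f(x) =", float(f.max()))
```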

**Abstract: **Three alternatives for integrating a solar field with the bottoming cycle of a combined cycle plant are modeled: parabolic troughs with oil at intermediate and low cycle pressures, and Fresnel linear collectors at low cycle pressure. It is assumed that the plant always operates at nominal conditions, using post-combustion during the hours without solar resource. A thermoeconomic study of the operation of the plant throughout a year has been carried out. The energy and exergy efficiencies of the plant working in fuel-only and hybrid modes are compared. The energy efficiencies obtained are very similar, slightly better for the fuel-only mode. The exergy efficiencies are slightly better for hybrid operation than for fuel-only operation, due to the high exergy destruction associated with post-combustion. The values for solar electric efficiency are in line with those of similar studies. The economic study shows that the Fresnel hybridization alternative offers similar performance to the others at a significantly lower cost.
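
For reference, the efficiency figures compared above are commonly defined as follows (textbook-style definitions assumed here for orientation; the paper's exact control volumes and definitions may differ):

```latex
% Common definitions (assumed; subscripts are ours):
\eta_{\mathrm{energy}} = \frac{W_{\mathrm{net}}}{\dot m_{\mathrm{fuel}}\,\mathrm{LHV} + \dot Q_{\mathrm{solar}}},
\qquad
\eta_{\mathrm{exergy}} = \frac{W_{\mathrm{net}}}{\dot{Ex}_{\mathrm{fuel}} + \dot{Ex}_{\mathrm{solar}}},
\qquad
\eta_{\mathrm{solar\,electric}} = \frac{W_{\mathrm{hybrid}} - W_{\mathrm{fuel\ only}}}{\dot Q_{\mathrm{solar}}}.
```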

**Abstract: **In the last few decades, computer simulations have become a fundamental tool in the field of soft matter science, allowing researchers to investigate the properties of a large variety of systems. Nonetheless, even the most powerful computational resources presently available are, in general, sufficient to simulate complex biomolecules only for a few nanoseconds. This limitation is often circumvented by using coarse-grained models, in which only a subset of the system’s degrees of freedom is retained; for an effective and insightful use of these simplified models, however, an appropriate parametrization of the interactions is of fundamental importance. Additionally, in many cases the removal of fine-grained details in a specific, small region of the system would destroy relevant features; such cases can be treated using dual-resolution simulation methods, where a subregion of the system is described with high resolution, and a coarse-grained representation is employed in the rest of the simulation domain. In this review we discuss the basic notions of coarse-graining theory, presenting the most common methodologies employed to build low-resolution descriptions of a system and putting particular emphasis on their similarities and differences. The AdResS and H-AdResS adaptive resolution simulation schemes are reported as examples of dual-resolution approaches, with particular focus on their theoretical background.
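
As a minimal, purely illustrative example of the first ingredient shared by the coarse-graining methodologies discussed above, the sketch below implements a mass-weighted mapping from atomistic coordinates to coarse-grained bead coordinates (function and variable names are ours, not taken from any specific package):

```python
import numpy as np

# Illustrative sketch: the mapping step of coarse-graining, projecting
# fine-grained (atomistic) positions onto coarse-grained "bead" positions
# via a mass-weighted (center-of-mass) average per bead.

def cg_map(positions, masses, groups):
    """Map atomistic positions (N, 3) to CG bead positions, one bead per group."""
    beads = []
    for idx in groups:                           # idx: atom indices forming one bead
        m = masses[idx]
        beads.append(m @ positions[idx] / m.sum())
    return np.array(beads)

# Toy example: 6 atoms mapped onto 2 beads of 3 atoms each.
pos = np.random.default_rng(1).normal(size=(6, 3))
mass = np.ones(6)
groups = [np.arange(0, 3), np.arange(3, 6)]
print(cg_map(pos, mass, groups))                 # shape (2, 3)
```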

**Abstract: **In a cloud computing environment, user data is encrypted and stored using a large number of distributed servers. Global Internet service companies such as Google and Yahoo have recognized the importance of Internet service platforms and conducted their own research and development to utilize large cluster-based cloud computing platform technologies based on low-cost commercial off-the-shelf nodes. Accordingly, as various data services are now provided over distributed computing environments, distributed management of big data has become a major issue. On the other hand, security vulnerabilities and privacy infringement due to malicious attackers or internal users can occur through the various usage types of big data. In particular, various security vulnerabilities can occur in the block access token, which is used for the permission control of data blocks in Hadoop. To address these problems, this paper proposes a weight-applied XOR-based efficient distribution storage and recovery scheme, together with a secret sharing-based block access token management scheme that overcomes such security vulnerabilities.
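
To illustrate the flavor of the XOR-based splitting referred to above (a simplified sketch, not the paper's weight-applied scheme), an (n, n) XOR secret sharing of a token can be written as follows; all names are ours:

```python
import os
from functools import reduce

# Simplified sketch: (n, n) XOR secret sharing of a block access token.
# Each share is stored on a different node; all n shares are needed to recover.

def split(token: bytes, n: int) -> list[bytes]:
    shares = [os.urandom(len(token)) for _ in range(n - 1)]      # random pads
    last = bytes(reduce(lambda a, b: a ^ b, col)                 # XOR of token and pads
                 for col in zip(token, *shares))
    return shares + [last]

def recover(shares: list[bytes]) -> bytes:
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*shares))

token = b"block-access-token-0042"   # hypothetical token value
shares = split(token, 4)
assert recover(shares) == token
```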

**Abstract: **The minimum expected number of bits needed to describe a random variable is its entropy, assuming knowledge of the distribution of the random variable. On the other hand, universal compression describes data supposing that the underlying distribution is unknown, but that it belongs to a known set P of distributions. However, since universal descriptions are not matched exactly to the underlying distribution, the number of bits they use on average is higher, and the excess over the entropy is the redundancy. In this paper, we study the redundancy incurred by the universal description of strings of positive integers (Z+), the strings being generated independently and identically distributed (*i.i.d.*) according to an unknown distribution over Z+ in a known collection P. We first show that if describing a single symbol incurs finite redundancy, then P is tight, but that the converse does not always hold. If a single symbol can be described with finite worst-case regret (a more stringent formulation than the redundancy above), then it is known that describing length-*n* *i.i.d.* strings incurs only vanishing (to zero) redundancy per symbol as *n* increases. In contrast, we show that it is possible for the description of a single symbol from an unknown distribution in P to incur finite redundancy, yet for the description of length-*n* *i.i.d.* strings to incur a constant (> 0) redundancy per symbol encoded. We then show a sufficient condition on the single-letter marginals such that length-*n* *i.i.d.* samples incur vanishing redundancy per symbol encoded.
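
For reference, the two quantities contrasted above are standardly defined as follows (our notation, with q the universal coding distribution and p^n the i.i.d. product measure); "vanishing per symbol" means the first quantity divided by n tends to zero:

```latex
% Expected (average-case) redundancy and worst-case regret of q against the class P:
\mathrm{Red}_n(q,\mathcal P)
  = \sup_{p\in\mathcal P}\ \mathbb E_{p^n}\!\left[\log\frac{p^n(X^n)}{q(X^n)}\right],
\qquad
\mathrm{Reg}_n(q,\mathcal P)
  = \sup_{p\in\mathcal P}\ \sup_{x^n}\ \log\frac{p^n(x^n)}{q(x^n)} .
```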